Convert natural language text into tokens. The tokenizers have a consistent interface and handle Unicode correctly because they are built on the 'stringi' package. Includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, and lines, as well as tokenization by regular expression.
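Each tokenizer follows the same pattern: a tokenize_* function takes a character vector and returns a list of character vectors, one element per input document. A minimal sketch of that interface (function names are from the package's documented API; exact defaults may vary by version):

    library(tokenizers)

    text <- "A sentence to tokenize. And a second sentence."

    tokenize_words(text)           # word tokens (lowercased by default)
    tokenize_sentences(text)       # sentence tokens
    tokenize_ngrams(text, n = 2)   # shingled bigrams
    tokenize_characters(text)      # individual characters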
Version: 0.1.4
Depends: R (≥ 3.1.3)
Imports: stringi (≥ 1.0.1), Rcpp (≥ 0.12.3), SnowballC (≥ 0.5.1)
LinkingTo: Rcpp
Suggests: testthat, covr, knitr, rmarkdown
Published: 2016-08-29
Author: Lincoln Mullen [aut, cre], Dmitriy Selivanov [ctb]
Maintainer: Lincoln Mullen <lincoln at lincolnmullen.com>
BugReports: https://github.com/ropensci/tokenizers/issues
License: MIT + file LICENSE
URL: https://github.com/ropensci/tokenizers
NeedsCompilation: yes
Materials: README, NEWS
In views: NaturalLanguageProcessing
CRAN checks: tokenizers results
Reference manual: tokenizers.pdf
Vignettes: Introduction to the tokenizers Package
Package source: tokenizers_0.1.4.tar.gz
Windows binaries: r-devel: tokenizers_0.1.4.zip, r-release: tokenizers_0.1.4.zip, r-oldrel: tokenizers_0.1.4.zip
OS X El Capitan binaries: r-release: tokenizers_0.1.4.tgz
OS X Mavericks binaries: r-oldrel: tokenizers_0.1.4.tgz
Old sources: tokenizers archive
Reverse imports: covfefe, ptstem, tidytext
Reverse suggests: cleanNLP
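Installation uses the standard CRAN tooling; the GitHub path below is inferred from the URL field above and assumes the devtools package is available:

    # Released version from CRAN
    install.packages("tokenizers")

    # Development version from GitHub (assumes devtools is installed)
    devtools::install_github("ropensci/tokenizers")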
Please use the canonical form https://CRAN.R-project.org/package=tokenizers to link to this page.