Crandore Hub

tokenizers

Fast, Consistent Tokenization of Natural Language Text

Convert natural language text into tokens. Includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, Penn Treebank, and regular expressions, along with functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The tokenizers share a consistent interface, and the package is built on the 'stringi' and 'Rcpp' packages for fast yet correct tokenization in 'UTF-8'.
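To make the less familiar terms in the description concrete, here is a language-agnostic sketch (in Python, not the package's R interface) of what shingled n-grams, skip n-grams, and equal-size document chunking produce; all function names here are hypothetical illustrations, not part of the package.

```python
def shingle_ngrams(words, n_min=1, n_max=3):
    """All contiguous n-grams ('shingles') of every length from n_min to n_max."""
    out = []
    for n in range(n_min, n_max + 1):
        for i in range(len(words) - n + 1):
            out.append(" ".join(words[i:i + n]))
    return out

def skip_bigrams(words, k=1):
    """Word pairs allowing up to k skipped words between the two members."""
    out = []
    for i in range(len(words)):
        for gap in range(k + 1):
            j = i + 1 + gap
            if j < len(words):
                out.append((words[i], words[j]))
    return out

def chunk_words(words, chunk_size):
    """Split a word list into consecutive documents of chunk_size words each."""
    return [words[i:i + chunk_size] for i in range(0, len(words), chunk_size)]

words = "the quick brown fox".split()
print(shingle_ngrams(words, 2, 3))  # bigram and trigram shingles
print(skip_bigrams(words, k=1))     # adjacent pairs plus one-word skips
print(chunk_words(words, 2))        # two-word chunks
```

For the four-word input, the shingles are every contiguous two- and three-word window, the skip bigrams add pairs like ("the", "brown") that jump over one word, and chunking yields two two-word documents.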

Versions across snapshots

Version  Snapshot    Repository             File                  Size
0.3.0    2026-04-09  windows/windows R-4.5  tokenizers_0.3.0.zip  954.2 KiB

Dependencies (latest)

Imports

LinkingTo

Suggests