sentencepiece
Text Tokenization using Byte Pair Encoding and Unigram Modelling
Unsupervised text tokenizer for performing byte pair encoding and unigram modelling. Wraps the 'sentencepiece' library <https://github.com/google/sentencepiece>, which provides a language-independent tokenizer to split text into words and smaller subword units. The techniques are explained in the paper "SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing" by Taku Kudo and John Richardson (2018) <doi:10.18653/v1/D18-2012>. Also provides straightforward access to pretrained byte pair encoding models and subword embeddings trained on Wikipedia using 'word2vec', as described in "BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages" by Benjamin Heinzerling and Michael Strube (2018) <http://www.lrec-conf.org/proceedings/lrec2018/pdf/1049.pdf>.
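A minimal sketch of training a byte pair encoding model and tokenizing text with this package. The function names (`sentencepiece()`, `sentencepiece_encode()`, `sentencepiece_decode()`) follow the package's documented exports; the toy corpus, file paths, and parameter values are illustrative only:

```r
library(sentencepiece)

## Write a small toy corpus to a text file (one sentence per line);
## training normally uses a much larger corpus
corpus <- file.path(tempdir(), "corpus.txt")
writeLines(c("The quick brown fox jumps over the lazy dog.",
             "SentencePiece learns subword units from raw text."),
           corpus)

## Train a byte pair encoding model on the corpus
## (vocab_size is illustrative and must suit the corpus size;
##  type = "unigram" would train a unigram model instead)
model <- sentencepiece(corpus, type = "bpe", vocab_size = 50,
                       model_dir = tempdir())

## Tokenize new text into subword pieces, then restore the text
pieces <- sentencepiece_encode(model, x = "The lazy dog sleeps.")
sentencepiece_decode(model, pieces)
```

For the pretrained Wikipedia models mentioned above, the package also documents download helpers (e.g. `sentencepiece_download_model()`), which fetch a BPEmb model for a given language and vocabulary size before loading it the same way.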
Versions across snapshots
| Version | Repository | File | Size |
|---|---|---|---|
| 0.2.5 | rolling linux/jammy R-4.5 | sentencepiece_0.2.5.tar.gz | 2.0 MiB |
| 0.2.5 | rolling linux/noble R-4.5 | sentencepiece_0.2.5.tar.gz | 2.0 MiB |
| 0.2.5 | rolling source/ R- | sentencepiece_0.2.5.tar.gz | 2.9 MiB |
| 0.2.5 | latest linux/jammy R-4.5 | sentencepiece_0.2.5.tar.gz | 2.0 MiB |
| 0.2.5 | latest linux/noble R-4.5 | sentencepiece_0.2.5.tar.gz | 2.0 MiB |
| 0.2.5 | latest source/ R- | sentencepiece_0.2.5.tar.gz | 2.9 MiB |
| 0.2.5 | 2026-04-26 source/ R- | sentencepiece_0.2.5.tar.gz | 2.9 MiB |
| 0.2.5 | 2026-04-23 source/ R- | sentencepiece_0.2.5.tar.gz | 2.9 MiB |
| 0.2.5 | 2026-04-09 windows/windows R-4.5 | sentencepiece_0.2.5.zip | 2.3 MiB |
| 0.2.3 | 2025-04-20 source/ R- | sentencepiece_0.2.3.tar.gz | 2.9 MiB |