edgemodelr
Local Large Language Model Inference Engine
Enables R users to run large language models locally using 'GGUF' model files and the 'llama.cpp' inference engine. Provides a complete R interface for loading models, generating text completions, and streaming responses in real time. Supports local inference without requiring cloud APIs or internet connectivity, ensuring complete data privacy and control. Based on the 'llama.cpp' project by Georgi Gerganov (2023) <https://github.com/ggml-org/llama.cpp>.
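A minimal usage sketch of the workflow described above (load a GGUF model, run a completion, stream tokens). The function names `edge_load_model()`, `edge_completion()`, `edge_stream_completion()`, and `edge_free_model()`, along with the parameters `n_ctx`, `n_predict`, and `callback` and the model path, are assumptions for illustration and are not confirmed by this listing; consult the package documentation for the actual API.

```r
library(edgemodelr)

# Path to a locally downloaded GGUF model file (placeholder path).
model_path <- "models/llama-3.2-1b-instruct-q4_k_m.gguf"

# Load the model into memory via the llama.cpp backend
# (hypothetical function and context-size argument).
ctx <- edge_load_model(model_path, n_ctx = 2048)

# One-shot text completion (hypothetical function and arguments).
out <- edge_completion(ctx, prompt = "Explain GGUF in one sentence.",
                       n_predict = 64)
cat(out)

# Streaming: print each token as it is generated
# (hypothetical function; callback receives one token at a time).
edge_stream_completion(ctx, prompt = "Write a haiku about local inference.",
                       callback = function(token) cat(token))

# Release the model's memory when done (hypothetical function).
edge_free_model(ctx)
```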
Versions across snapshots
| Version | Repository | File | Size |
|---|---|---|---|
| 0.2.0 | rolling source/ R- | edgemodelr_0.2.0.tar.gz | 811.1 KiB |
| 0.2.0 | latest source/ R- | edgemodelr_0.2.0.tar.gz | 811.1 KiB |
| 0.2.0 | 2026-04-09 windows/windows R-4.5 | edgemodelr_0.2.0.zip | 1.9 MiB |