Crandore Hub

localLLM

Running Local LLMs with 'llama.cpp' Backend

Provides R bindings to the 'llama.cpp' library for running large language models. The package uses a lightweight architecture in which the C++ backend library is downloaded at runtime rather than bundled with the package. Features include text generation, reproducible generation, and parallel inference.
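Based on the features named in the description (runtime backend download, text generation, reproducible generation), a typical workflow might look like the sketch below. The function names `install_backend()`, `model_load()`, and `generate()` are illustrative assumptions, not the package's documented API; consult the package reference manual for the actual exported functions.

```r
# Hypothetical sketch only -- function names are assumptions,
# not the documented localLLM API.
library(localLLM)

# The C++ llama.cpp backend is fetched at runtime rather than
# bundled, so a one-time setup step is typically needed first:
# install_backend()

# Load a local GGUF model file, then generate text; passing a
# fixed seed would make generation reproducible, per the
# features listed above:
# model <- model_load("path/to/model.gguf")
# generate(model, prompt = "Hello", seed = 42L)
```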

Versions across snapshots

Version  Repository  File  Size
1.2.1 rolling linux/jammy R-4.5 localLLM_1.2.1.tar.gz 396.9 KiB
1.2.1 rolling linux/noble R-4.5 localLLM_1.2.1.tar.gz 398.7 KiB
1.2.1 rolling source/ R- localLLM_1.2.1.tar.gz 173.1 KiB
1.2.1 latest linux/jammy R-4.5 localLLM_1.2.1.tar.gz 396.9 KiB
1.2.1 latest linux/noble R-4.5 localLLM_1.2.1.tar.gz 398.7 KiB
1.2.1 latest source/ R- localLLM_1.2.1.tar.gz 173.1 KiB
1.2.1 2026-04-26 source/ R- localLLM_1.2.1.tar.gz 173.1 KiB
1.2.1 2026-04-23 source/ R- localLLM_1.2.1.tar.gz 173.1 KiB
1.2.1 2026-04-09 windows/windows R-4.5 localLLM_1.2.1.zip 735.1 KiB

Dependencies (latest)

Imports

LinkingTo

Suggests