pairwiseLLM

Pairwise Comparison Tools for Large Language Model-Based Writing Evaluation

Provides a unified framework for generating, submitting, and analyzing pairwise comparisons of writing quality using large language models (LLMs). The package supports both live and batch evaluation workflows across multiple providers ('OpenAI', 'Anthropic', 'Google Gemini', 'Together AI', and locally hosted 'Ollama' models), includes bias-tested prompt templates and a flexible template registry, and offers tools for constructing forward and reversed comparison sets to analyze consistency and positional bias. Results can be modeled using Bradley–Terry (1952) <doi:10.2307/2334029> or Elo rating methods to derive writing quality scores. For information on the method of pairwise comparisons, see Thurstone (1927) <doi:10.1037/h0070288> and Heldsinger & Humphry (2010) <doi:10.1007/BF03216919>. For information on Elo ratings, see Clark et al. (2018) <doi:10.1371/journal.pone.0190393>.
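To illustrate the two scoring methods named above, here is a minimal, language-agnostic sketch in Python of an Elo update and a Bradley–Terry fit via the MM algorithm (Hunter 2004). This is not the package's R API; the function names and parameters (`elo_update`, `bradley_terry`, `k=32`) are illustrative assumptions.

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    """One Elo update after a single pairwise comparison.

    score_a is 1.0 if sample A was judged better, 0.0 if B was,
    and 0.5 for a tie. k controls the update step size.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta


def bradley_terry(wins, n_items, iters=200):
    """Fit Bradley-Terry strengths p_i with the MM algorithm.

    wins[i][j] = number of times item i was preferred over item j.
    Under the model, P(i beats j) = p_i / (p_i + p_j).
    """
    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            w_i = sum(wins[i])  # total wins for item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n_items) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        total = sum(new_p)
        p = [x * n_items / total for x in new_p]  # normalize for identifiability
    return p
```

For example, two equally rated samples (1500 each) where A is preferred gives A a +16 Elo gain at `k=32`, and a 3-to-1 win record between two samples yields Bradley–Terry strengths in a 3:1 ratio.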

Versions across snapshots

Version  Snapshot    Repository              File                      Size
1.1.0    rolling     linux/jammy R-4.5       pairwiseLLM_1.1.0.tar.gz  556.9 KiB
1.1.0    rolling     linux/noble R-4.5       pairwiseLLM_1.1.0.tar.gz  556.6 KiB
1.1.0    rolling     source/ R-              pairwiseLLM_1.1.0.tar.gz  262.3 KiB
1.1.0    latest      linux/jammy R-4.5       pairwiseLLM_1.1.0.tar.gz  556.9 KiB
1.1.0    latest      linux/noble R-4.5       pairwiseLLM_1.1.0.tar.gz  556.6 KiB
1.1.0    latest      source/ R-              pairwiseLLM_1.1.0.tar.gz  262.3 KiB
1.1.0    2026-04-26  source/ R-              pairwiseLLM_1.1.0.tar.gz  262.3 KiB
1.1.0    2026-04-23  source/ R-              pairwiseLLM_1.1.0.tar.gz  262.3 KiB
1.1.0    2026-04-09  windows/windows R-4.5   pairwiseLLM_1.1.0.zip     570.5 KiB

Dependencies (latest)

Imports

Suggests