Crandore Hub

oolong: Create Validation Tests for Automated Content Analysis

oolong creates standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based analysis. The package offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion and topic intrusion tests (Chang et al. 2009, <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>) as well as the word set intrusion test (Ying et al. 2021) <doi:10.1017/pan.2021.33>. It also provides functions for generating gold-standard data, which is useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020) <doi:10.1080/10584609.2020.1723752>.
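The prepare/administer/evaluate workflow described above can be sketched as follows. This is a minimal, hedged example based on the package documentation: `lda_model` and `corpus_text` are placeholder objects you must supply yourself (e.g. a fitted topic model and the character vector of documents it was trained on).

```r
# Hedged sketch of a typical oolong workflow for validating a topic model.
# `lda_model` and `corpus_text` are hypothetical placeholders, not objects
# shipped with the package.
library(oolong)

# Word intrusion test; defaults follow Chang et al. (2009)
wi_test <- create_oolong(input_model = lda_model)
wi_test$do_word_intrusion_test()   # launches the interactive coding interface
wi_test$lock()                     # freeze the answers once coding is done

# Topic intrusion test additionally requires the original corpus
ti_test <- create_oolong(input_model = lda_model, input_corpus = corpus_text)
ti_test$do_topic_intrusion_test()
ti_test$lock()

# Evaluate: summarize results across the locked tests
summarize_oolong(wi_test, ti_test)
```

In practice each coder works on a cloned copy of the test object so that multiple coders' answers can be compared in `summarize_oolong()`.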

Versions across snapshots

Version | Snapshot   | Repository      | R     | File                | Size
0.6.1   | rolling    | linux/jammy     | R-4.5 | oolong_0.6.1.tar.gz | 3.1 MiB
0.6.1   | rolling    | linux/noble     | R-4.5 | oolong_0.6.1.tar.gz | 3.1 MiB
0.6.1   | rolling    | source/         | R-    | oolong_0.6.1.tar.gz | 3.4 MiB
0.6.1   | latest     | linux/jammy     | R-4.5 | oolong_0.6.1.tar.gz | 3.1 MiB
0.6.1   | latest     | linux/noble     | R-4.5 | oolong_0.6.1.tar.gz | 3.1 MiB
0.6.1   | latest     | source/         | R-    | oolong_0.6.1.tar.gz | 3.4 MiB
0.6.1   | 2026-04-26 | source/         | R-    | oolong_0.6.1.tar.gz | 3.4 MiB
0.6.1   | 2026-04-23 | source/         | R-    | oolong_0.6.1.tar.gz | 3.4 MiB
0.6.1   | 2026-04-09 | windows/windows | R-4.5 | oolong_0.6.1.zip    | 3.1 MiB
0.6.1   | 2025-04-20 | source/         | R-    | oolong_0.6.1.tar.gz | 3.4 MiB

Dependencies (latest)

Imports

Suggests