sffdr: Surrogate Functional False Discovery Rates for Genome-Wide Association Studies

Pleiotropy-informed significance analysis of genome-wide association studies with surrogate functional false discovery rates (sfFDR). The sfFDR framework adapts the functional FDR (fFDR) to leverage informative data from multiple sets of GWAS summary statistics, increasing power in the study of interest while accommodating linkage disequilibrium. sfFDR provides estimates of key FDR quantities in a significance analysis, such as the functional local FDR and $q$-value, and uses these estimates to derive a functional $p$-value for type I error rate control and a functional local Bayes' factor for post-GWAS analyses (e.g., fine mapping and colocalization).


# sffdr package

<!-- badges: start -->

[![R-CMD-check](https://github.com/ajbass/sffdr/actions/workflows/R-CMD-check.yml/badge.svg)](https://github.com/ajbass/sffdr/actions/workflows/R-CMD-check.yml)
[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/sffdr)](https://cran.r-project.org/package=sffdr)

## Introduction

<img src="inst/figures/sffdr.png" align="right" height="150" />

The sffdr package implements the surrogate functional false discovery
rate (sfFDR) procedure. This methodology integrates GWAS summary
statistics from related traits (i.e., pleiotropy) to increase
statistical power within the functional FDR framework. The inputs to
`sffdr` are a set of p-values from the GWAS of interest and a set of
p-values from one or more informative GWAS.

The significance quantities estimated by `sffdr` can be used for a
variety of analyses:

- Functional p-value (fp): Controls the type I error rate and can be
  used for standard significance analyses.
- Functional q-value (fq): A measure of significance in terms of the
  positive FDR (closely related to FDR).
- Functional local FDR (flfdr): A posterior error probability that is
  useful for functional fine-mapping and for assessing the significance
  of an individual SNP.
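
For intuition, these quantities follow the fFDR framework that sfFDR builds on (a sketch of the definitions, not the package's exact estimators). With functional null proportion $\pi_{0}(z)$ and p-value density $f(p \mid z)$ given the informative covariates $z$, the functional local FDR is

$$\mathrm{flfdr}(p, z) = \Pr(\text{null} \mid P = p, Z = z) = \frac{\pi_{0}(z)\, f_{0}(p)}{f(p \mid z)},$$

where $f_{0}(p) = 1$ because null p-values are uniform on $[0, 1]$; the functional q-value at a SNP is then the smallest FDR at which that SNP would be called significant.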

## Citing this package

The methods implemented in this package are described in:

> Bass AJ, Wallace C. Exploiting pleiotropy to enhance variant discovery
> with functional false discovery rates. [Nature Computational
> Science](https://doi.org/10.1038/s43588-025-00852-3); 2025.

Note that this work is an extension of the functional FDR methodology
and the software builds on some of the functions in the `fFDR` package
found at <https://github.com/StoreyLab/fFDR>.

## Getting help

Please report any bugs or usage issues on GitHub at
<https://github.com/ajbass/sffdr>.

## Installation

You can install the released version of `sffdr` from CRAN, or the
development version from GitHub:

``` r
# install from CRAN
install.packages("sffdr")

# or install the development version from GitHub
install.packages("devtools")
devtools::install_github("ajbass/sffdr")
```

## Quick start guide

To demonstrate the package, we will use a sample dataset containing 10,000 SNPs for body mass index (BMI) as the primary trait of interest, with body fat percentage (BFP), cholesterol, and triglycerides as informative traits.

``` r
library(sffdr)
data(bmi)

# Define primary p-values and informative p-values
p <- sumstats$bmi
z <- as.matrix(sumstats[,-1])

head(sumstats)
```

### 1. Modeling the functional proportion of null tests $\pi_{0}(z)$

The first step is to model the relationship between the functional
proportion of null tests and the informative traits. We use `pi0_model`
to create a model and `fpi0est` to estimate $\pi_{0}(z)$.

``` r
# Create model: adaptively chooses knots at small quantiles where signal is expected
mpi0 <- pi0_model(z)

# Estimation of functional pi0
fpi0 <- fpi0est(p, mpi0)
```

**Note on knots**: Ideally, knots should cover the distribution of
non-null p-values in your informative studies. Our algorithm uses an FDR
threshold to choose both the location and the number of knots. Depending
on the study, you may want to change these values (see `?pi0_model`).
Note that this toy example contains only 10,000 p-values, so the knot
locations are not optimal: the function was designed for GWAS-scale data.

### 2. Estimating the functional significance quantities

With the functional $\pi_{0}$ estimated, we can now calculate the FDR
quantities and functional p-values.

``` r
# Apply sfFDR
sffdr_out <- sffdr(p, fpi0 = fpi0$fpi0)   

# Extract results
results <- data.frame(
  p_value = p,
  fp_value = sffdr_out$fpvalues,
  fq_value = sffdr_out$fqvalues,
  flfdr = sffdr_out$flfdr
)

# Plot significance results (focusing on high-significance SNPs)
plot(sffdr_out)
```

### 3. Incorporating linkage disequilibrium (LD)

In standard GWAS, SNPs are often correlated due to LD. To account for
this, you should provide a vector indicating which SNPs are
LD-independent (e.g., via pruning). `sffdr` uses these independent SNPs
to fit the model, then computes the significance quantities for the
entire dataset.

``` r
# logical vector: TRUE if SNP is LD-independent
# (In this example, all SNPs are already independent)
indep_snps <- rep(TRUE, length(p))

# Fit model using LD-independent subset
mpi0 <- pi0_model(z = z, indep_snps = indep_snps)

fpi0 <- fpi0est(p, mpi0, indep_snps = indep_snps)

# Estimate quantities for all SNPs
sffdr_out <- sffdr(p, fpi0 = fpi0$fpi0)
```

**Note on choosing independent SNPs**: There are two strategies: (1)
prune using plink (e.g., `--indep-pairwise 200kb 1 0.3`) and use the
pruned set of SNPs as the independent set, or (2) use informed sampling
of the informative traits (e.g., clumping or based on LD blocks). The
latter is likely better when the primary study has low power.
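
As a concrete sketch of strategy (1), the `indep_snps` flag can be built from plink's `*.prune.in` output (one retained SNP ID per line). The SNP IDs and file name below are hypothetical, and `snp_ids` must be in the same order as `p`:

``` r
# Hypothetical sketch: build the indep_snps flag from a plink pruning run.
# plink --indep-pairwise 200kb 1 0.3 --out pruned writes "pruned.prune.in";
# here a toy ID list stands in for readLines("pruned.prune.in").
snp_ids    <- c("rs1", "rs2", "rs3", "rs4")  # SNP IDs, same order as p
pruned_in  <- c("rs1", "rs3")                # IDs retained after pruning
indep_snps <- snp_ids %in% pruned_in
indep_snps
#> [1]  TRUE FALSE  TRUE FALSE
```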


## Advanced usage

### Enforcing monotonicity

The kernel density estimator is adaptive. However, artifacts such as LD can induce spurious "bumps" in the null region that distort the density estimate.

You can enforce a monotone constraint within surrogate bins (targeting ~1,000 independent SNPs per bin) to stabilize the joint density:

``` r
sffdr_out <- sffdr(p, fpi0 = fpi0$fpi0, monotone = TRUE)
```

While this provides robustness in the null region, it may cost some power in specific cases. We are considering making this the default behavior in a future release, but for now it is optional.

### Incorporating LD scores

If LD scores ($l_{j}$) are available (e.g., from LD score regression (LDSC)), they can be used as inverse weights ($w_{j} = 1/l_{j}$). This allows the estimators to account for LD directly while using the full dataset, so pre-filtering for independent SNPs (i.e., pruning) is unnecessary.

Below we show how to incorporate weights into sfFDR:
``` r
# Create an artificial LD block
LD_block <- 500:1000
p[LD_block] <- runif(length(LD_block), min = 0.78, max = 0.82)

# Define weights (inverse LD)
w <- rep(1.0, length(p))
w[LD_block] <- 0.05

# Apply sfFDR with weights
mpi0 <- pi0_model(z, weights = w)
fpi0 <- fpi0est(p, mpi0, weights = w)
sffdr_out <- sffdr(p, fpi0 = fpi0$fpi0, weights = w)
```

**Note on incorporating LD scores**: LD scores for common reference panels (e.g., 1000 Genomes) are readily available for download and can be used in sfFDR; LD scores calculated from your own data are even better. However, using LD scores from a reference panel that is not matched to your study population is problematic. In that case, the pruning (or informed sampling) strategy discussed above is the best option.
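
As a minimal sketch, assuming an LD score table with hypothetical column names (`SNP`, `L2`), inverse-LD weights can be computed as follows; the floor at 1 is our own guard against scores below 1, not part of the package API:

``` r
# Hypothetical sketch: convert LD scores (e.g., from an LDSC reference
# panel) into inverse-LD weights w_j = 1 / l_j for sffdr.
ldsc <- data.frame(SNP = c("rs1", "rs2"), L2 = c(2.5, 40.0))
w <- 1 / pmax(ldsc$L2, 1)  # floor scores at 1 so weights stay <= 1
w
#> [1] 0.400 0.025
```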

### Manual knot selection and other informative variables

The selection of knots in the B-spline basis is an important tuning parameter in sffdr. If your surrogate GWAS has extremely high power (very small p-values), we recommend placing more knots at the lower end of the p-value distribution to capture the local density of signals.

``` r
# Specify the model manually; other variables (e.g., MAF) or more
# complex terms can be included
fmod <- "~ns(bfp, knots = c(0.01, 0.025, 0.05, 0.1))"
fpi0_mod <- fpi0est(p = p,
                    z = mpi0$zt,
                    pi0_model = fmod)
``` 

**Note on incorporating additional variables**: Other informative covariates (e.g., MAF or LD score) can be included in the model. While the LD score annotations can be used directly (e.g., the baselineLD model), we have found that using the expected $\chi^{2}$ given the annotations performs better in practice and simplifies model fitting.

## Versions across snapshots

| Version | Repository | File | Size |
|---------|------------|------|------|
| 1.1.2 | rolling linux/jammy R-4.5 | sffdr_1.1.2.tar.gz | 382.8 KiB |
| 1.1.2 | rolling linux/noble R-4.5 | sffdr_1.1.2.tar.gz | 382.8 KiB |
| 1.1.2 | rolling source/ R- | sffdr_1.1.2.tar.gz | 382.8 KiB |
| 1.1.2 | latest linux/jammy R-4.5 | sffdr_1.1.2.tar.gz | 382.8 KiB |
| 1.1.2 | latest linux/noble R-4.5 | sffdr_1.1.2.tar.gz | 382.8 KiB |
| 1.1.2 | latest source/ R- | sffdr_1.1.2.tar.gz | 382.8 KiB |
| 1.1.2 | 2026-04-26 source/ R- | sffdr_1.1.2.tar.gz | 382.8 KiB |
| 1.1.2 | 2026-04-23 source/ R- | sffdr_1.1.2.tar.gz | 382.8 KiB |
| 1.0.0 | 2025-04-20 source/ R- | sffdr_1.0.0.tar.gz | 358.0 KiB |
