Cellbender remove background
Eliminating technical artifacts from high-throughput single-cell RNA sequencing data.
Info
ID: cellbender_remove_background
Namespace: correction
This module removes counts due to ambient RNA molecules and random barcode swapping from (raw) UMI-based scRNA-seq count matrices. At the moment, only count matrices produced by the Cell Ranger count pipeline are supported; support for additional tools and protocols will be added in the future. A quick start tutorial is available in the CellBender documentation.
Reference: Fleming et al. 2022, bioRxiv.
Example commands
You can run the pipeline using nextflow run.
View help
You can use --help as a parameter to get an overview of the possible parameters.
nextflow run openpipelines-bio/openpipeline \
-r 1.0.2 -latest \
-main-script target/nextflow/correction/cellbender_remove_background/main.nf \
--help
Run command
Example of params.yaml
# Inputs
input: # please fill in - example: "input.h5mu"
modality: "rna"
# Outputs
# output: "$id.$key.output.h5mu"
# output_compression: "gzip"
layer_output: "cellbender_corrected"
obs_background_fraction: "cellbender_background_fraction"
obs_cell_probability: "cellbender_cell_probability"
obs_cell_size: "cellbender_cell_size"
obs_droplet_efficiency: "cellbender_droplet_efficiency"
obs_latent_scale: "cellbender_latent_scale"
var_ambient_expression: "cellbender_ambient_expression"
obsm_gene_expression_encoding: "cellbender_gene_expression_encoding"
# Arguments
expected_cells_from_qc: false
# expected_cells: 1000
# total_droplets_included: 25000
# force_cell_umi_prior: 123
# force_empty_umi_prior: 123
model: "full"
epochs: 150
low_count_threshold: 5
z_dim: 64
z_layers: [512]
training_fraction: 0.9
empty_drop_training_fraction: 0.2
# ignore_features: [123]
fpr: [0.01]
# exclude_feature_types: ["foo"]
projected_ambient_count_threshold: 0.1
learning_rate: 1.0E-4
# final_elbo_fail_fraction: 123.0
# epoch_elbo_fail_fraction: 123.0
num_training_tries: 1
learning_rate_retry_mult: 0.2
posterior_batch_size: 128
# posterior_regulation: "foo"
# alpha: 123.0
# q: 123.0
estimator: "mckp"
estimator_multiple_cpu: false
# constant_learning_rate: true
debug: false
cuda: false
# Nextflow input-output arguments
publish_dir: # please fill in - example: "output/"
# param_list: "my_params.yaml"
nextflow run openpipelines-bio/openpipeline \
-r 1.0.2 -latest \
-profile docker \
-main-script target/nextflow/correction/cellbender_remove_background/main.nf \
-params-file params.yaml
Note
Replace -profile docker with -profile podman or -profile singularity depending on the desired backend.
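The commented param_list option in the params.yaml example above can instead point to a file that lists one parameter set per sample, letting the same workflow run over several inputs in one go. The layout below is only a sketch based on common OpenPipelines usage; the file name and sample entries are placeholders, and shared settings (such as modality) can stay in params.yaml.

```yaml
# my_params.yaml -- hypothetical file referenced by the commented param_list entry above.
# One entry per sample: an id plus any per-sample overrides of the arguments documented below.
- id: sample_one
  input: raw/sample_one.h5mu
- id: sample_two
  input: raw/sample_two.h5mu
```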
Argument groups
Inputs
Name | Description | Attributes |
---|---|---|
--input | Input h5mu file. Data file on which to run the tool. Data must be unfiltered: it should include empty droplets. | file, required, example: "input.h5mu" |
--modality | Modality to process. | string, default: "rna" |
Outputs
Name | Description | Attributes |
---|---|---|
--output | Full count matrix as an h5mu file, with background RNA removed. This file contains all the original droplet barcodes. | file, required, example: "output.h5mu" |
--output_compression | | string, example: "gzip" |
--layer_output | Output layer | string, default: "cellbender_corrected" |
--obs_background_fraction | | string, default: "cellbender_background_fraction" |
--obs_cell_probability | | string, default: "cellbender_cell_probability" |
--obs_cell_size | | string, default: "cellbender_cell_size" |
--obs_droplet_efficiency | | string, default: "cellbender_droplet_efficiency" |
--obs_latent_scale | | string, default: "cellbender_latent_scale" |
--var_ambient_expression | | string, default: "cellbender_ambient_expression" |
--obsm_gene_expression_encoding | | string, default: "cellbender_gene_expression_encoding" |
Arguments
Name | Description | Attributes |
---|---|---|
--expected_cells_from_qc | Will use the Cell Ranger QC to determine the estimated number of cells. | boolean, default: false |
--expected_cells | Number of cells expected in the dataset (a rough estimate within a factor of 2 is sufficient). | integer, example: 1000 |
--total_droplets_included | The number of droplets from the rank-ordered UMI plot that will have their cell probabilities inferred as an output. Include the droplets which might contain cells. Droplets beyond TOTAL_DROPLETS_INCLUDED should be ‘surely empty’ droplets. | integer, example: 25000 |
--force_cell_umi_prior | Ignore CellBender’s heuristic prior estimation, and use this prior for UMI counts in cells. | integer |
--force_empty_umi_prior | Ignore CellBender’s heuristic prior estimation, and use this prior for UMI counts in empty droplets. | integer |
--model | Which model is being used for count data. * ‘naive’ subtracts the estimated ambient profile. * ‘simple’ does not model either ambient RNA or random barcode swapping (for debugging purposes; not recommended). * ‘ambient’ assumes background RNA is incorporated into droplets. * ‘swapping’ assumes background RNA comes from random barcode swapping (via PCR chimeras). * ‘full’ uses a combined ambient and swapping model. | string, default: "full" |
--epochs | Number of epochs to train. | integer, default: 150 |
--low_count_threshold | Droplets with UMI counts below this number are completely excluded from the analysis. This can help identify the correct prior for empty droplet counts in the rare case where empty counts are extremely high (over 200). | integer, default: 5 |
--z_dim | Dimension of latent variable z. | integer, default: 64 |
--z_layers | Dimension of hidden layers in the encoder for z. | List of integer, default: 512, multiple_sep: ";" |
--training_fraction | Training detail: the fraction of the data used for training. The rest is never seen by the inference algorithm. Speeds up learning. | double, default: 0.9 |
--empty_drop_training_fraction | Training detail: the fraction of the training data each epoch that is drawn (randomly sampled) from surely empty droplets. | double, default: 0.2 |
--ignore_features | Integer indices of features to ignore entirely. In the output count matrix, the counts for these features will be unchanged. | List of integer, multiple_sep: ";" |
--fpr | Target ‘delta’ false positive rate in [0, 1). Use 0 for a cohort of samples which will be jointly analyzed for differential expression. A false positive is a true signal count that is erroneously removed. More background removal is accompanied by more signal removal at high values of FPR. You can specify multiple values, which will create multiple output files (see the example below this table). | List of double, default: 0.01, multiple_sep: ";" |
--exclude_feature_types | Feature types to ignore during the analysis. These features will be left unchanged in the output file. | List of string, multiple_sep: ";" |
--projected_ambient_count_threshold | Controls how many features are included in the analysis, which can lead to a large speedup. If a feature is expected to have less than PROJECTED_AMBIENT_COUNT_THRESHOLD counts total in all cells (summed), then that gene is excluded, and it will be unchanged in the output count matrix. For example, PROJECTED_AMBIENT_COUNT_THRESHOLD = 0 will include all features which have even a single count in any empty droplet. | double, default: 0.1 |
--learning_rate | Training detail: lower learning rate for inference. A OneCycle learning rate schedule is used, where the upper learning rate is ten times this value. (For this value, probably do not exceed 1e-3.) | double, default: 1e-04 |
--final_elbo_fail_fraction | Training is considered to have failed if (best_test_ELBO - final_test_ELBO)/(best_test_ELBO - initial_test_ELBO) > FINAL_ELBO_FAIL_FRACTION. Training will automatically re-run if --num-training-tries > 1. By default, training will not fail based on final_training_ELBO. | double |
--epoch_elbo_fail_fraction | Training is considered to have failed if (previous_epoch_test_ELBO - current_epoch_test_ELBO)/(previous_epoch_test_ELBO - initial_train_ELBO) > EPOCH_ELBO_FAIL_FRACTION. Training will automatically re-run if --num-training-tries > 1. By default, training will not fail based on epoch_training_ELBO. | double |
--num_training_tries | Number of times to attempt to train the model. At each subsequent attempt, the learning rate is multiplied by LEARNING_RATE_RETRY_MULT. | integer, default: 1 |
--learning_rate_retry_mult | Learning rate is multiplied by this amount each time a new training attempt is made. (This parameter is only used if training fails based on EPOCH_ELBO_FAIL_FRACTION or FINAL_ELBO_FAIL_FRACTION and NUM_TRAINING_TRIES is > 1.) | double, default: 0.2 |
--posterior_batch_size | Training detail: size of batches when creating the posterior. Reduce this to avoid running out of GPU memory when creating the posterior (will be slower). | integer, default: 128 |
--posterior_regulation | Posterior regularization method. (For experts: not required for normal usage, see documentation.) * PRq is approximate quantile-targeting. * PRmu is approximate mean-targeting aggregated over genes (behavior of v0.2.0). * PRmu_gene is approximate mean-targeting per gene. | string |
--alpha | Tunable parameter alpha for the PRq posterior regularization method (not normally used: see documentation). | double |
--q | Tunable parameter q for the CDF threshold estimation method (not normally used: see documentation). | double |
--estimator | Output denoised count estimation method. (For experts: not required for normal usage, see documentation.) | string, default: "mckp" |
--estimator_multiple_cpu | Including the flag --estimator-multiple-cpu will use more than one CPU to compute the MCKP output count estimator in parallel (does nothing for other estimators). | boolean_true |
--constant_learning_rate | Including the flag --constant-learning-rate will use the ClippedAdam optimizer instead of the OneCycleLR learning rate schedule, which is the default. Learning is faster with the OneCycleLR schedule. However, training can easily be continued from a checkpoint for more epochs than the initial command specified when using ClippedAdam. On the other hand, if using the OneCycleLR schedule with 150 epochs specified, it is not possible to pick up from that final checkpoint and continue training until 250 epochs. | boolean |
--debug | Including the flag --debug will log extra messages useful for debugging. | boolean_true |
--cuda | Including the flag --cuda will run the inference on a GPU. | boolean_true |
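To illustrate how the list-valued and flag arguments above are written in a params file, here is a hedged params.yaml fragment; the numbers are placeholders rather than recommendations and should be tuned to your dataset.

```yaml
# Illustrative values only -- adjust to your experiment.
expected_cells: 5000               # rough estimate; within a factor of 2 is sufficient
total_droplets_included: 25000     # droplets beyond this rank are treated as surely empty
fpr: [0.0, 0.01, 0.05]             # multiple FPR targets create multiple output files
epochs: 150
cuda: true                         # run inference on a GPU
```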