The clone_filter program is designed to identify PCR clones. This can be done in two
different ways. In the simplest case, if you have a set of paired-end RAD data that is randomly sheared
(single-digest RAD or similar), you can identify clones by comparing the single-end and paired-end reads to find identical
sequences. Any set of more than one identically matching single-end/paired-end read pair will be considered a set of clones, and only
one representative of that set will be output. This method will likely overestimate the number of actual clones in a
library, since independent molecules can, by chance, shear at identical positions.
A second method to identify PCR clones is to include a short random sequence (an 'oligo') with each molecule during
molecular library construction. After sequencing, we can then compare oligo sequences and identify PCR clones as those
sequences with identical oligos. An oligo sequence can be part of an inline barcode ligated onto each molecule (the
program assumes the oligo is at the 5'-most end of the read, while an inline barcode will come after the oligo in
the sequenced read). An oligo sequence can also be added as either the i5 or the i7 index barcode of the Illumina
TruSeq kits. The clone_filter program can work with any combination of these types of data
and will reduce each set of identical oligos to a single representative in the output.
The clone_filter program is designed to work with the process_radtags
or process_shortreads programs. Depending on how unique your oligos are (they might be unique
to an entire library or only unique to a single individual) you can first demultiplex your data and then run
clone_filter, or vice versa.
The clone_filter program can also be run multiple times with different subsets of the data.
This allows you to filter for clones in increments if your computer lacks the memory to process the full dataset at once.
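As a sketch, a typical invocation on gzipped paired-end reads might look like the following. The file and directory names are placeholders, and option spellings vary between Stacks versions (older releases use underscores, newer ones hyphens), so check clone_filter --help for your installation:

```shell
# Identify clones by comparing raw single-end/paired-end sequence
# (randomly sheared, single-digest RAD data):
clone_filter -1 raw/sample_01.1.fq.gz -2 raw/sample_01.2.fq.gz \
    -i gzfastq -o ./filtered/

# If a random oligo was included during library construction instead,
# tell the program where to find it -- e.g. an 8 bp oligo at the 5'
# end of the single-end read (option names as in recent 1.x releases):
clone_filter -1 raw/sample_01.1.fq.gz -2 raw/sample_01.2.fq.gz \
    -i gzfastq -o ./filtered/ --inline_null --oligo_len_1 8
```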
Other Pipeline Programs
The process_shortreads program performs the same task as process_radtags
for fast cleaning of randomly sheared genomic or transcriptomic data. This program will trim reads that are below the
quality threshold instead of discarding them, making it useful for genomic assembly or other analyses.
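A sketch of a typical run; the flags mirror those of process_radtags, and the barcode file and directory names here are placeholders:

```shell
# Clean and demultiplex randomly sheared reads:
# -r rescue mutated barcodes, -c remove reads with uncalled bases,
# -q discard (or, here, trim) low-quality reads.
process_shortreads -p ./raw/ -b ./barcodes.tsv -o ./samples/ -r -c -q
```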
The clone_filter program will take a set of reads and reduce them according to PCR
clones. This is done by matching raw sequence or by referencing a set of random oligos that have been included in the sequence.
The kmer_filter program allows paired-end or single-end reads to be filtered according to the
number of rare or abundant k-mers they contain. It is useful for RAD datasets as well as randomly sheared genomic or
transcriptomic data.
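A minimal sketch of an invocation (the input path is a placeholder; see kmer_filter --help for the k-mer length and threshold options in your version):

```shell
# Discard reads containing rare k-mers (likely sequencing errors) and
# reads dominated by abundant k-mers (likely repetitive sequence):
kmer_filter -f ./samples/sample_01.fq -o ./kfiltered/ --rare --abundant
```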
The ustacks program will take as input a set of short-read sequences and align them into
exactly-matching stacks. By comparing these stacks, it will form a set of loci and detect SNPs at each locus using a
maximum-likelihood framework.
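A sketch of a typical per-sample run (paths and parameter values are placeholders):

```shell
# Assemble exactly-matching stacks and call SNPs for one sample:
# -i unique sample ID, -m minimum depth to form a stack,
# -M distance allowed between stacks when merging them into loci.
ustacks -t fastq -f ./samples/sample_01.fq -o ./stacks/ -i 1 -m 3 -M 2 -p 8
```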
The pstacks program will extract stacks that have been aligned to a reference genome by a
program such as Bowtie or GSnap and identify SNPs at each locus using a maximum likelihood framework.
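A sketch of a typical run on one aligned sample (the BAM path is a placeholder):

```shell
# Extract stacks from a reference alignment and call SNPs:
# -t input alignment format, -i unique sample ID,
# -m minimum depth required to report a locus.
pstacks -t bam -f ./aligned/sample_01.bam -o ./stacks/ -i 1 -m 3 -p 8
```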
The cstacks program builds a catalog from any set of samples processed
by the ustacks program. It will create a set of consensus loci, merging alleles together. In the case
of a genetic cross, a catalog would be constructed from the parents of the cross to create a set of
all possible alleles expected in the progeny of the cross.
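For a genetic cross, a catalog run over the two parents might look like this sketch (sample prefixes are placeholders; they name the output files ustacks wrote for each sample):

```shell
# Build a catalog from the two parents of a cross:
# -b batch ID, -s sample prefix (repeat once per sample),
# -n number of mismatches allowed between loci when merging.
cstacks -b 1 -o ./stacks/ -s ./stacks/parent_01 -s ./stacks/parent_02 -n 1 -p 8
```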
Sets of stacks constructed by the ustacks
or pstacks programs can be matched against a catalog produced by the cstacks program using the sstacks program. In the case of a
genetic map, stacks from the progeny would be matched against the catalog to determine which progeny
contain which parental alleles.
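A sketch of matching one progeny sample against the catalog (prefixes are placeholders; repeat for each progeny):

```shell
# Match a progeny sample against the catalog:
# -c catalog prefix (as written by cstacks), -s sample prefix.
sstacks -b 1 -c ./stacks/batch_1 -s ./stacks/progeny_01 -o ./stacks/ -p 8
```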
The populations program will compute population-level summary statistics such
as π, FIS, and FST. It can output site-level SNP calls in VCF format and
can also output SNPs for analysis in STRUCTURE, or in Phylip format for phylogenetic analysis.
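A sketch of a typical run over a finished Stacks directory (the population map file is a placeholder):

```shell
# Compute summary statistics and export SNPs in several formats:
# -M population map (sample -> population), -r minimum fraction of
# individuals in a population required to process a locus.
populations -b 1 -P ./stacks/ -M ./popmap.tsv -r 0.75 --vcf --structure --phylip
```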
The genotypes program exports Stacks data as a set of observed haplotypes, or with
the haplotypes encoded into genotypes, in either a generic format or for a particular linkage mapper such as
JoinMap, OneMap, or R/QTL. It also provides methods for making automated corrections to certain types of loci.
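As a rough sketch, an export for an outbred (CP-type) cross in JoinMap format might look like the following; the flag meanings here follow the 1.x manual as best recalled, so verify them against genotypes --help before use:

```shell
# Export haplotypes as genotypes for a CP cross in JoinMap format:
# -r minimum number of progeny required, -t cross/map type,
# -o output format, -c enable automated corrections.
genotypes -b 1 -P ./stacks/ -r 3 -t CP -o joinmap -c
```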
The rxstacks program makes corrections to individual genotypes and haplotypes based
on data from a population of samples.
The denovo_map.pl program executes each of the Stacks components to create a genetic
linkage map, or to identify the alleles in a set of populations. It also handles uploading data to the database.
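A sketch of a full de novo run for a genetic cross (file names, database name, and parameter values are placeholders):

```shell
# Run the complete de novo pipeline:
# -p parent files, -r progeny files (repeat per file),
# -B database to load results into, -b batch ID, -T threads;
# -m/-M/-n are passed through to ustacks and cstacks.
denovo_map.pl -m 3 -M 2 -n 1 -T 8 -b 1 -B radtags_db -o ./stacks/ \
    -p ./samples/parent_01.fq -p ./samples/parent_02.fq \
    -r ./samples/progeny_01.fq -r ./samples/progeny_02.fq
```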
The ref_map.pl program takes reference-aligned input data and executes each of the Stacks
components, using the reference alignment to form stacks, and identifies alleles. It can be used to build a genetic map
or to analyze a set of populations. It also handles uploading data to the database.
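A sketch of the reference-aligned equivalent; inputs are alignment files rather than raw reads (paths, database name, and alignment format are placeholders -- older releases accepted SAM, later ones BAM):

```shell
# Run the reference-aligned pipeline for a cross:
ref_map.pl -m 3 -n 1 -T 8 -b 1 -B radtags_db -o ./stacks/ \
    -p ./aligned/parent_01.bam -p ./aligned/parent_02.bam \
    -r ./aligned/progeny_01.bam
```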
The load_radtags.pl program takes a set of data produced by either the denovo_map.pl or
ref_map.pl programs (or produced by hand) and loads it into the database. This allows the data to be generated on
one computer but loaded from another, or allows a database to be regenerated without re-executing the pipeline.
The index_radtags.pl program indexes the database to speed execution in the web interface
and enable web-based filtering after all of the data has been loaded. It will be run by
denovo_map.pl or ref_map.pl at the end of execution.
The export_sql.pl program provides for the export of all of the Stacks data in a compact form.
For each locus, the script reports the consensus sequence, the number of parents and progeny that Stacks could find the
locus in, the number of SNPs found at that locus and a listing of the individual SNPs and the observed alleles, followed
by the particular allele observed in each individual. The script allows you to specify a number of filters for the data,
the same filters as available through the web interface.
The sort_read_pairs.pl program collates the paired-end reads associated with each catalog locus, creating
one FASTA file of paired reads per catalog locus. These FASTA files can then be assembled to make paired-end contigs.
The exec_velvet.pl program will execute Velvet on the collated sets of reads generated by sort_read_pairs.pl.