Metagenome Sequencing

The unbiased picture of your
entire community

Metagenomics reads all the DNA in a sample simultaneously — bacteria, fungi, viruses, and more — without culturing anything first. You get a complete, unbiased picture of who is there and what they're capable of doing.

Platform DNBSEQ
Read Length 2 × 150 bp paired-end
Spike-ins optional
Library prep Automated, dual-indexed
Overview

The unbiased picture

Shotgun metagenomics sequences all DNA extracted from a sample at once — no targeted amplification, no marker-gene bias. Every organism present contributes reads in proportion to its abundance, giving you a simultaneous picture of who is there, what genes they carry, and — depending on depth — what functional capacity they encode.

Where 16S and ITS amplicon methods tell you which organisms are present, metagenomics tells you what those organisms are doing: the metabolic pathways encoded in their genomes, the resistance genes they carry, the virulence factors that distinguish strains. For projects where community composition is just the starting point, this is the right tool.

We handle extraction from your sample type, automated library preparation, sequencing, and quality-controlled FASTQ delivery with full documentation. We also offer add-on taxonomic and functional profiling pipelines through our expert partners for labs that prefer not to handle the bioinformatics themselves.

A note on sequencing depth

How much sequencing you need depends on how complex your sample is. A fermentation community with a handful of species may be fully characterised at 2–5 Gb. A human gut sample with hundreds of species, including rare ones, typically needs 10–20 Gb or more. Undersequencing is one of the most common and expensive mistakes in metagenomics — it’s harder to fix after the fact than to get right the first time. We’ll always work through this with you before agreeing a quote.
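As a back-of-envelope illustration of why complexity drives depth, the sketch below estimates the total sequencing needed so that the rarest organism you care about is still covered adequately. All numbers (genome size, abundance, target coverage) are illustrative assumptions, not our quoting formula.

```python
# Back-of-envelope estimate of sequencing depth for metagenomics.
# Illustrative only: real depth planning also weighs evenness,
# host DNA fraction, and the statistical design of the cohort.

def required_depth_gb(genome_size_bp: float,
                      min_rel_abundance: float,
                      target_coverage: float = 10.0) -> float:
    """Total sequencing (Gb) so an organism at `min_rel_abundance`
    still reaches `target_coverage`x over its genome."""
    total_bp = genome_size_bp * target_coverage / min_rel_abundance
    return total_bp / 1e9

# A 5 Mb genome at 0.1% relative abundance, targeting 10x coverage:
print(round(required_depth_gb(5e6, 0.001, 10.0), 1))  # 50.0 Gb
```

Note how quickly rare organisms dominate the budget: the same genome at 1% abundance needs only a tenth of that depth.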

~150× More genes in the gut microbiome than in the entire human genome — most of them still functionally uncharacterised
99% Of environmental microorganisms cannot be cultured in a laboratory — invisible to traditional methods, fully captured by metagenomics
2 FDA-approved microbiome-based therapies (Rebyota, 2022; Vowst, 2023) — built on metagenomics datasets like the ones we generate
When to use it

Metagenomics works well
here

Not every question needs the full shotgun approach. We'll tell you directly when 16S amplicon sequencing would serve you equally well at a fraction of the cost — and we mean it.

Not sure which service fits?

Our free consultation is designed exactly for this. Bring your research question — not a sequencing method — and we'll work backwards from there.

Get in touch →
01

Gut microbiome research where functional insight matters

If your research question goes beyond "who is there" — if you care about metabolic pathways, butyrate production, bile acid metabolism, gene expression potential, or antibiotic resistance carriage — metagenomics is the right tool. Recent landmark studies linking the gut microbiome to metabolic syndrome, colorectal cancer risk (via Fusobacterium nucleatum enrichment), and immunotherapy response have all been built on shotgun metagenomics datasets. Amplicon sequencing would not have revealed the functional layer underpinning those findings.

02

AMR surveillance — when you need the full resistome

Resistance genes don't respect species boundaries. They transfer horizontally on mobile genetic elements between organisms that would never be cultured together. Metagenomics captures this entire resistome — every known and putative resistance gene in every organism in the sample — in a single experiment, without needing to first decide which genes to look for. Stored raw data can also be reanalysed as new resistance genes are discovered and catalogued. This is increasingly the method of choice for hospital environmental surveillance, wastewater monitoring, and clinical AMR profiling.

03

Environmental and ecological surveys

Soil, water, sediment, air filters, marine samples — any environment where you want a complete, unbiased community profile without the amplification bias that can distort 16S results. Metagenomics is particularly powerful for detecting previously unknown organisms (and has found entirely novel phyla in soil samples that would be invisible to any marker-gene approach) and for comparing communities across geographies or time points in a reproducible, standardised way.

04

Culture-negative clinical samples

Bronchoalveolar lavage, cerebrospinal fluid, tissue biopsies, joint fluid — samples where conventional culture fails or is impractical. Metagenomics can identify pathogens directly from clinical material, including organisms that are slow-growing, fastidious, or simply not routinely cultured. Validated clinical metagenomics workflows are now in active use for CNS infection diagnosis (seven-year performance data published in Nature Medicine, 2024) and respiratory pathogen detection, including in cases where standard PCR panels are negative.

05

Bioprocess and fermentation monitoring

Industrial fermentation, food and beverage production, bioreactor processes — complex microbial communities where community composition directly affects yield, product quality, and process stability. Metagenomics provides a complete process fingerprint at each timepoint, capturing not just which organisms are present but what metabolic activities they're running. More useful than 16S alone when process optimisation requires understanding functional shifts across fermentation stages.

Specifications

What you need to know
before getting started

All specifications for standard projects. Contact us about non-standard inputs, large cohorts, or urgent timelines.

Technical Specification

Platform DNBSEQ-T7 / G400 / G99
Typical Depth 5–20 Gb per sample
Read config 2 × 150 bp paired-end
Controls Per batch and per process

Sample requirements

What you receive

Workflow

What happens after you
reach out

Five stages from first contact to data in your hands. You only need to be actively involved at stages 1 and 3 — everything else runs in our lab with full documentation.

1

Free project consultation

We talk through your research question, sample types, cohort size, and expected community complexity. We'll recommend sequencing depth, confirm whether host depletion is needed, suggest appropriate controls, and identify any collection protocol issues before they become expensive problems. If amplicon sequencing would genuinely serve you better, we'll say so.

Always free. We'd rather lose a job than give you a dataset that doesn't answer your question.

2

Detailed quote and project agreement

A full written proposal covering method, sequencing depth, controls, expected deliverables, add-on analysis options, turnaround timeline, and pricing. Nothing proceeds without your written sign-off on the scope — including any add-on bioinformatics, which we scope upfront so there are no surprises at delivery.

We include all add-on options here so you can plan your analysis budget in advance.

3

Sample shipment and receipt QC

You ship following our packaging guidelines (domestic and international shipping both supported). We confirm receipt and send an initial QC report within two business days — including whether each sample meets minimum quality thresholds. Failed samples are flagged before they enter the pipeline, not after you've already paid for sequencing.

International shipments welcome. Contact us before shipping — customs documentation requirements vary significantly by country and sample type.
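The receipt-QC gate described above can be pictured as a simple threshold check per sample. The field names and threshold values below are illustrative assumptions, not our actual acceptance criteria.

```python
# Minimal sketch of the receipt-QC flagging step: each incoming sample
# is checked against minimum thresholds before entering the pipeline.
# Thresholds here are hypothetical placeholders.

MIN_DNA_NG = 100.0   # assumed minimum total DNA input
MIN_PURITY = 1.7     # assumed minimum A260/280 ratio

def receipt_qc(samples: list[dict]) -> dict[str, list[str]]:
    """Split incoming samples into passed / flagged before sequencing."""
    report = {"passed": [], "flagged": []}
    for s in samples:
        ok = s["dna_ng"] >= MIN_DNA_NG and s["a260_280"] >= MIN_PURITY
        report["passed" if ok else "flagged"].append(s["id"])
    return report

batch = [
    {"id": "S01", "dna_ng": 450.0, "a260_280": 1.85},
    {"id": "S02", "dna_ng": 60.0,  "a260_280": 1.90},  # too little DNA
]
print(receipt_qc(batch))  # {'passed': ['S01'], 'flagged': ['S02']}
```

The point of flagging at this stage is exactly what the text says: a failed sample surfaces before sequencing costs are incurred, not after.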

4

Extraction, library preparation, and sequencing

Automated DNA extraction with proprietary cell lysis, followed by DNBSEQ sequencing with unique dual indexes. Controls processed alongside every batch. All kit lots, instrument run IDs, and protocol deviations documented.

5

Quality control and data delivery

Per-sample QC review, adapter trimming, quality filtering, MultiQC report generation, and delivery of FASTQ files via authenticated secure link. Any optional add-on analyses (taxonomic profiles, functional annotation, AMR screening) delivered alongside. Raw data retained 90 days post-delivery on encrypted servers in Sweden. Full wet-lab protocol documentation — useful for methods sections, grant applications, and institutional audits.
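To make the per-sample QC review concrete, here is a toy version of the kind of metrics it produces: read count, GC content, and mean Phred quality computed directly from FASTQ records. This is illustrative only; production QC uses dedicated tools, not this script.

```python
# Compute basic per-sample stats from a FASTQ stream (4 lines per record:
# header, sequence, '+' separator, quality string in Phred+33 encoding).

from io import StringIO

def fastq_stats(handle) -> dict:
    n_reads, n_bases, gc, qual_sum = 0, 0, 0, 0
    while True:
        header = handle.readline()
        if not header:
            break
        seq = handle.readline().strip()
        handle.readline()                            # '+' separator line
        qual = handle.readline().strip()
        n_reads += 1
        n_bases += len(seq)
        gc += sum(seq.count(b) for b in "GC")
        qual_sum += sum(ord(c) - 33 for c in qual)   # Phred+33 decoding
    return {"reads": n_reads,
            "gc_pct": round(100 * gc / n_bases, 1),
            "mean_q": round(qual_sum / n_bases, 1)}

demo = StringIO("@r1\nACGT\n+\nIIII\n@r2\nGGCC\n+\nIIII\n")
print(fastq_stats(demo))  # {'reads': 2, 'gc_pct': 75.0, 'mean_q': 40.0}
```

Metrics like these, aggregated across all samples, are what a MultiQC report summarises in one view.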

Your raw FASTQ files are yours to reanalyse as bioinformatics tools improve. Storing raw data rather than only processed outputs is almost always the right long-term decision.

FAQs

Common questions
about metagenomics

What is the difference between amplicon (16S/ITS) sequencing and shotgun metagenomics?

Both approaches tell you about the microorganisms in a sample, but they do it very differently. Amplicon sequencing (16S rRNA for bacteria, ITS for fungi) targets a single genetic marker — a short stretch of DNA that most bacteria share in slightly different forms. By reading that marker across thousands of organisms, you can identify which genera are present and roughly how abundant each one is. It’s cost-efficient and scales well to large cohorts. The limitations are real, though: genus-level resolution at best (rarely species, almost never strain), no functional information, and potential bias if your primers don’t amplify all relevant organisms equally.

Shotgun metagenomics avoids those limitations. It sequences everything — every organism, every gene, in one experiment. You get species- and strain-level resolution, a complete functional gene inventory (including resistance genes, metabolic pathways, virulence factors), and no primer amplification bias. The trade-off is cost: metagenomics typically costs 5–10× more per sample. For large cohorts with a composition-only question, amplicon sequencing often remains the right choice. For mechanistic questions, clinical applications, or any project where functional data matters, metagenomics is worth the investment.

How much sequencing depth do I need?

This is one of the most consequential decisions in study design, and there’s no universal answer. Depth requirements depend on how many species your community contains, how evenly they’re distributed (rare organisms require more total reads to detect reliably), whether you need functional annotation (which requires more depth than taxonomy alone), and what statistical power you need across your cohort.

As rough orientation: low-complexity communities (fermentation, simple environmental samples) — 2–5 Gb. Standard human gut metagenomics — 10–20 Gb. Deep functional profiling or detection of rare organisms — 20–50 Gb. These are starting points, not rules.

Undersequencing is one of the most common and costly errors in metagenomic studies — it’s much harder to fix after the fact than to address in study design. We work through depth requirements with every project before finalising quotes, and we’ll always flag if your planned depth risks leaving your biological question unanswered.

Can you work with low-biomass samples?

Yes, but low-biomass samples require a different approach and more careful planning. The two main challenges are: (1) reagent contamination — when there are only a few hundred bacteria in a sample, DNA introduced by extraction reagents and consumables can outnumber your actual sample signal; and (2) host DNA dominance — in samples like BAL or tissue biopsies, host DNA can make up 99%+ of total DNA, leaving very little microbial DNA to sequence unless host depletion is applied.

We address both by processing strict negative controls alongside every low-biomass batch (using the same reagents and workflow), using host-depletion methods validated for the specific sample type, and — where inputs are very low — recommending low-input-optimised library preparation protocols.

The collection protocol matters enormously for low-biomass samples: contact us before you collect, not after. The choice of collection tube, stabilisation method, and storage conditions can make the difference between a viable study and an unusable dataset.
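A deliberately simplified version of the negative-control check mentioned above: a taxon seen at comparable or higher mean relative abundance in the negative controls than in the samples is flagged as a likely reagent contaminant. Real analyses use dedicated statistical tools (for example the decontam R package); this sketch, with made-up abundances, only illustrates the logic.

```python
# Flag likely reagent contaminants by comparing mean relative abundance
# in negative controls against the study samples. Simplified heuristic,
# not a validated decontamination method.

def flag_contaminants(samples: dict[str, dict[str, float]],
                      controls: dict[str, dict[str, float]]) -> set[str]:
    """Flag taxa whose mean abundance in controls >= mean in samples."""
    taxa = {t for prof in (*samples.values(), *controls.values()) for t in prof}
    def mean(profiles, taxon):
        return sum(p.get(taxon, 0.0) for p in profiles.values()) / len(profiles)
    return {t for t in taxa if mean(controls, t) >= mean(samples, t)}

samples  = {"S1": {"E_coli": 0.40, "Cutibacterium": 0.01},
            "S2": {"E_coli": 0.35, "Cutibacterium": 0.02}}
controls = {"NEG1": {"Cutibacterium": 0.30}}   # typical skin/reagent taxon
print(flag_contaminants(samples, controls))  # {'Cutibacterium'}
```

This is also why the controls must run through the same reagents and workflow as the batch: the comparison is only meaningful if the contamination source is shared.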

Should I send raw samples or pre-extracted DNA?

Both are fine. We accept raw biological material — stool, swabs, tissue, water samples, soil — and perform extraction using our validated automated workflows. We also accept pre-extracted DNA if you have an established extraction protocol you want to maintain for consistency with existing data. If you’re sending pre-extracted DNA, tell us which extraction kit and protocol were used so we can document them appropriately in your methods report.

For cohort studies, we generally recommend letting us handle extraction across all samples — even small protocol differences between operators or kit lots can introduce batch effects that are difficult to correct in downstream analysis. Consistency matters more than which specific protocol you use.