How to Conduct a Systematic Literature Review (SLR): A Complete Step-by-Step Practical Guide

A systematic literature review (SLR) is one of the most powerful research tools available: it synthesizes all relevant evidence on a clearly defined question using a transparent, reproducible, and auditable method. Whether you’re a student preparing a dissertation, an early-career researcher building expertise, or an experienced scholar seeking a rigorous overview of a field, this guide walks you through each step in practical, actionable detail.

This is an original, step-by-step manual — written to help you design, execute, analyze, and report an SLR. I explain what to do, why it matters, and how to avoid common pitfalls. Where useful, I include templates, examples, and sample forms you can copy and adapt.

1. What is a systematic literature review, and when to use it

A systematic literature review is a structured and reproducible method for locating, evaluating, and synthesizing research evidence to answer a specific, predefined question. Unlike narrative reviews (which can be selective and unsystematic), an SLR aims to reduce bias through explicit methods, transparent records, and pre-specified analyses.

Use an SLR when:

  • You need a comprehensive, defensible answer to a focused question (e.g., “Does X therapy reduce Y outcome in Z population?”).
  • You want to quantify effect sizes across studies (meta-analysis).
  • You aim to identify gaps in knowledge, guide policy, or inform practice.
  • You need to synthesize heterogeneous evidence qualitatively (thematic synthesis) or quantitatively.

2. Core principles of a high-quality SLR

Keep these principles front-and-center:

  • Transparency: Every decision (inclusion criteria, search strings, coding rules) should be recorded.
  • Reproducibility: Another team should be able to repeat your search and arrive at the same pool of studies.
  • Comprehensiveness: Aim to find all relevant studies through multiple databases and grey literature.
  • Minimization of bias: Use independent screening, double data extraction, and risk-of-bias assessment.
  • Pre-specification: Write and (ideally) register a protocol before running the full review.

3. Step 1 — Narrow and frame your research question

3.1 Choose a clear, answerable question

The clearer your question, the more efficient your entire review. The most useful SLR questions are specific about:

  • Population (who?)
  • Intervention/exposure/phenomenon (what?)
  • Comparator (if relevant)
  • Outcome(s) (which outcomes matter?)
  • Study design(s) to include

Common acronyms: PICO (Population, Intervention, Comparator, Outcome) — useful for clinical questions. PICOS adds Study design. SPIDER or PEO may suit qualitative or social science work.

Example clinical: “In adults with chronic insomnia (P), does cognitive behavioural therapy for insomnia (I), compared with pharmacotherapy (C), reduce sleep latency and improve sleep quality (O)?”

Example qualitative: “How do first-generation college students (P) describe barriers to seeking mental health support (E — experience) in university settings (S)?”

3.2 Define scope and limits

Decide and document:

  • Timeframe (e.g., 2000–present) — only if justified
  • Languages included (e.g., English only or multiple languages)
  • Geographic limits (if any)
  • Types of evidence (quantitative RCTs, observational studies, qualitative studies, mixed-methods)
    Avoid being either too broad (unmanageable) or too narrow (missing important context).

3.3 Translate the question into objectives and eligibility criteria

Create a short objective statement and a table of inclusion/exclusion criteria. This will later feed into your search strategy and screening.


4. Step 2 — Write a protocol and register it

A protocol defines your plan; registering it increases credibility and prevents post-hoc changes.

4.1 Essential components of a protocol

  • Background and rationale
  • Clear research question(s) and objectives
  • Eligibility criteria (inclusion/exclusion)
  • Information sources and databases to search
  • Full search strategy (example strings)
  • Study selection process (how many reviewers, conflict resolution)
  • Data extraction items and forms
  • Risk-of-bias tools and procedures
  • Planned synthesis methods (narrative/meta-analysis)
  • Subgroup and sensitivity analyses
  • Plans for reporting and dissemination
  • Timeline and responsibilities

4.2 Registering the protocol

Register before starting the main review. This reduces the chance of bias from selective outcome reporting. If registration isn’t possible, deposit the protocol in a timestamped repository or as a preprint.


5. Step 3 — Design search strategies and pick information sources

A search strategy is the most important technical part of an SLR — it determines what studies you can find.

5.1 Choose databases and information sources

At a minimum, search multiple complementary databases. Typical selections by discipline:

  • Health / Medicine: PubMed/MEDLINE, Embase, Cochrane Library, CINAHL
  • Social Sciences: PsycINFO, Web of Science, Scopus
  • Multidisciplinary: Scopus, Web of Science, Google Scholar (as a supplement only)

Grey Literature

Grey literature sources reduce publication bias, because studies with null or negative results are less likely to appear in journals. Examples include ProQuest Dissertations & Theses, ClinicalTrials.gov, OpenGrey, conference proceedings, and preprint servers.

Other Methods: Citation Tracking & Expert Contact

  • Backward citation tracking — reviewing the reference lists of included articles.
  • Forward citation tracking — using citation indexes (e.g., Web of Science, Google Scholar) to find later studies that cite your included articles.
  • Contacting experts — reaching out directly to authors in your field for unpublished or ongoing research.

5.2 Build your search strategy

Components:

  • Keywords: synonyms, spelling variants, acronyms.
  • Controlled vocabulary: e.g., MeSH in PubMed, Emtree in Embase.
  • Boolean operators: AND, OR, NOT.
  • Field tags: title/abstract vs. whole record.
  • Truncation & wildcards: e.g., therap* returns therapy, therapies, and therapeutic.
  • Phrase searching: quotes for exact phrases.

Start by listing core concepts (e.g., population, intervention, outcome) and brainstorm synonyms. Combine synonyms with OR, then combine concepts with AND.

Sample conceptual layout (pseudo-search):

(Population synonyms) AND (Intervention synonyms) AND (Outcome synonyms)

Example (PubMed-style simplified):

("insomnia"[MeSH Terms] OR insomnia[tiab] OR "sleep initiation disorders"[tiab]) 
AND ("cognitive therapy"[MeSH Terms] OR "CBT-I"[tiab] OR "cognitive behavioural therapy"[tiab]) 
AND ("sleep quality"[MeSH Terms] OR "sleep latency"[tiab] OR "sleep efficiency"[tiab])

5.3 Adapt searches for each database

Each database has its own syntax. Save and document every version you run. Export search histories or take screenshots.

5.4 Pilot and refine

Run preliminary searches to check if known key papers appear. If not, revise keywords and strategy.


6. Step 4 — Run searches and manage records

6.1 Run and save searches

Run all searches and save results in a consistent format (RIS, BibTeX, CSV). Save search histories and the date each search was run. Document the database and any limits applied.

6.2 Reference management and deduplication

Use a reference manager (EndNote, Zotero, Mendeley, or citation management in systematic review software). Import all records, then deduplicate carefully — but don’t delete duplicates without documenting how many were removed and why.
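
As a rough illustration of what a scripted deduplication pass can look like, here is a minimal Python sketch that flags likely duplicates by normalized title plus publication year and counts how many were removed. The records.csv file and its title/year columns are assumptions for the example; reference managers and review platforms use more robust fuzzy matching.

```python
import csv

# Minimal sketch: deduplicate exported records by normalized title + year.
# Assumes a hypothetical "records.csv" with "title" and "year" columns.
# Always log how many duplicates were removed for the PRISMA flow diagram.

def normalize(title: str) -> str:
    """Lowercase and strip non-alphanumeric characters so minor formatting
    differences do not hide duplicates."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

seen, unique, removed = set(), [], 0
with open("records.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        key = (normalize(row["title"]), row["year"])
        if key in seen:
            removed += 1  # count removals for reporting
        else:
            seen.add(key)
            unique.append(row)

print(f"{len(unique)} unique records kept, {removed} duplicates removed")
```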

6.3 Create a search log

Record database, date, search string, number of hits, and exported records. This supports reproducibility and PRISMA flow reporting.


7. Step 5 — Screening: title/abstract, then full-text

A reproducible screening process reduces selection bias.

7.1 Prepare screening forms and pilot them

Create an inclusion/exclusion checklist; pilot it on a sample of records to ensure clarity and consistency. Resolve ambiguities, then finalize the form.

7.2 Two-stage screening

  1. Title/abstract screening — fast, liberal inclusion to avoid false exclusions.
  2. Full-text screening — apply strict eligibility criteria.

Always have at least two independent reviewers per record at both stages. Resolve disagreements by consensus or arbitration by a third reviewer.

7.3 Track screening decisions

Use screening software (Rayyan, Covidence, EPPI-Reviewer) or a spreadsheet. Record reasons for exclusion at full-text stage — these reasons become part of the PRISMA flow.

7.4 Measure agreement (optional but recommended)

Calculate inter-rater agreement (e.g., Cohen’s kappa) during pilot screening to quantify consistency and refine criteria if needed.
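
To make the calculation concrete, here is a small Python sketch that computes Cohen's kappa from two reviewers' pilot include/exclude decisions. The decision lists are made-up illustrative data; in practice you would export them from your screening tool.

```python
from collections import Counter

# Sketch: Cohen's kappa for two reviewers' pilot screening decisions.
# The decision lists below are made-up illustrative data.
reviewer_a = ["include", "exclude", "exclude", "include", "exclude", "include", "exclude", "exclude"]
reviewer_b = ["include", "exclude", "include", "include", "exclude", "include", "exclude", "exclude"]

n = len(reviewer_a)
observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n  # observed agreement p_o

# Expected agreement p_e from each reviewer's marginal proportions
counts_a, counts_b = Counter(reviewer_a), Counter(reviewer_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```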


8. Step 6 — Data extraction

Accurate, consistent extraction is vital. Extract only what you need for your objectives.

8.1 Build a data extraction form

Typical extraction items:

  • Bibliographic info (author, year, journal)
  • Study design and setting
  • Population characteristics (age, sex, sample size, inclusion/exclusion)
  • Intervention/exposure details (dose, duration)
  • Comparator details
  • Outcomes and how measured (time points)
  • Numerical results (means, SDs, event counts)
  • Funding and conflicts of interest
  • Notes on missing data or peculiarities

Use spreadsheets, specialized tools, or systematic review platforms. Ensure consistent formats (e.g., units, currencies).
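
If your team prefers a scripted workflow, the extraction form can also be captured as a structured template, as in the minimal Python sketch below. The field names, example values, and output file name are hypothetical (loosely mirroring the sample table later in this section); adapt them to your own protocol.

```python
import csv
from dataclasses import dataclass, asdict, fields

# Sketch: a structured extraction record mirroring the sample table in this guide.
# Field names, example values, and the output file name are hypothetical.
@dataclass
class ExtractionRecord:
    study_id: str
    design: str
    n_intervention: int
    n_control: int
    age_mean_sd: str
    outcome_measure: str
    effect_estimate: str
    timepoint: str
    notes: str

record = ExtractionRecord(
    study_id="Smith 2018", design="RCT", n_intervention=50, n_control=48,
    age_mean_sd="42.1 ± 9.3", outcome_measure="Sleep latency (min)",
    effect_estimate="MD = -12.3 (95% CI -18.0, -6.6)", timepoint="8 weeks",
    notes="ITT analysis",
)

# Write one row to a CSV extraction sheet with a header derived from the fields.
with open("extraction.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ExtractionRecord)])
    writer.writeheader()
    writer.writerow(asdict(record))
```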

8.2 Pilot extraction

Extract from a subset of included studies to ensure everyone understands the fields. Modify the form if necessary.

8.3 Double extraction

Best practice: two reviewers independently extract data and reconcile differences. If resources are limited, one extracts and another checks.

8.4 Handling missing or ambiguous data

Document missing items. Try contacting the study authors for clarification. If no response, report assumptions explicitly and conduct sensitivity analyses to evaluate the impact.

Sample extraction table (short)

Study ID | Design | N (interv) | N (control) | Age (mean ± SD) | Outcome measure | Effect (interv vs control) | Timepoint | Notes
Smith 2018 | RCT | 50 | 48 | 42.1 ± 9.3 | Sleep latency (min) | Mean difference = -12.3 (95% CI -18.0, -6.6) | 8 weeks | ITT analysis

9. Step 7 — Assess study quality/risk of bias

Choose risk-of-bias tools matched to the study design.

9.1 Types of tools

  • Randomized trials: domain-based tools that look at randomization, deviations from interventions, missing data, outcome measurement, and reporting bias.
  • Non-randomized studies: tools that examine confounding, selection, misclassification, and outcome measurement.
  • Observational studies: check for selection bias, measurement bias, and confounding.
  • Qualitative studies: critical appraisal checklists examine sampling, reflexivity, and credibility.

9.2 How to assess

  • Two independent reviewers assess each study.
  • Use domain-level judgments (low, some concerns, high) rather than a single composite score (composites hide which domains are problematic).
  • Document the rationale for each judgment.

9.3 Presenting risk-of-bias

  • Tables: domain-by-study matrix with judgments and brief justifications.
  • Visuals: traffic-light plots or summary bar charts by domain.

9.4 Use risk-of-bias judgments in synthesis

  • Sensitivity analyses: exclude high-risk studies to see the impact.
  • Weight studies in interpretation (e.g., moderate evidence only if most studies have low risk).
  • Downgrade certainty in narrative or structured evidence grading (e.g., GRADE approach).

10. Step 8 — Synthesize the evidence

Decide whether a quantitative meta-analysis is appropriate or whether a qualitative/narrative synthesis suffices.

10.1 When to meta-analyze

Meta-analysis is appropriate when included studies are sufficiently similar in PICO and outcome measurement such that combining effect sizes is meaningful. If heterogeneity is extreme or studies are conceptually different, opt for narrative or thematic synthesis.

10.2 Preparing for meta-analysis

  • Select an effect measure: risk ratio (RR), odds ratio (OR), mean difference (MD), standardized mean difference (SMD).
  • Ensure numerical data are compatible (convert when necessary).
  • Decide on the model: fixed-effect vs. random-effects. A random-effects model is usually preferred when studies differ in populations or methods.

10.3 Key meta-analysis steps (practical)

  1. Compute study-level effect sizes and variances.
  2. Choose the effect model and compute the pooled effect.
  3. Assess heterogeneity: Cochran’s Q and the I² statistic, where I² = max(0, (Q - df)/Q) × 100%, Q is the weighted sum of squared deviations of the study effects from the pooled effect, and df = k - 1 for k studies (a worked sketch covering steps 1 to 3 follows this list).
  4. Explore heterogeneity: subgroup analysis or meta-regression.
  5. Check for publication bias: funnel plot asymmetry, and statistical tests (with caution).
  6. Sensitivity analyses: exclude outliers, high-risk studies, or use alternative effect measures.
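
To make steps 1 to 3 concrete, here is a minimal Python/NumPy sketch of inverse-variance pooling with the DerSimonian-Laird random-effects estimator, Cochran's Q, and I². The effect sizes and standard errors are made-up illustrative numbers; for a real review, use a dedicated, validated package (e.g., metafor or meta in R, or RevMan).

```python
import numpy as np

# Sketch: inverse-variance random-effects meta-analysis (DerSimonian-Laird).
# yi = study effect sizes (e.g., mean differences), sei = their standard errors.
# These numbers are made-up illustrative data.
yi = np.array([-12.3, -8.1, -15.0, -5.4])
sei = np.array([2.9, 3.5, 4.1, 2.2])
vi = sei ** 2

# Fixed-effect weights, pooled estimate, and Cochran's Q
w = 1 / vi
pooled_fe = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - pooled_fe) ** 2)
df = len(yi) - 1

# Between-study variance (tau^2) and I^2
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)
I2 = max(0.0, (Q - df) / Q) * 100

# Random-effects weights, pooled estimate, and 95% CI
w_re = 1 / (vi + tau2)
pooled_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci_low, ci_high = pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re

print(f"Pooled effect (random-effects): {pooled_re:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
print(f"Q = {Q:.2f}, df = {df}, I^2 = {I2:.1f}%")
```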

10.4 Narrative and thematic synthesis

When meta-analysis isn’t possible:

  • Use structured narrative synthesis: group studies by key characteristics, summarize patterns, and highlight consistencies/inconsistencies.
  • Thematic synthesis (qualitative): code findings, develop descriptive and analytical themes, and synthesize participants’ voices across studies.

10.5 Presenting results

  • Forest plots for meta-analysis (show each study’s effect and the overall pooled effect; a plotting sketch follows this list).
  • Tables summarizing study characteristics, outcomes, and risk-of-bias.
  • Summary of findings table with effect estimates and confidence intervals, plus the certainty of evidence.
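
For illustration, the Python sketch below draws a basic forest plot with matplotlib, using the same made-up study effects as the pooling sketch above; the pooled row shows roughly the estimate that sketch produces. Dedicated tools (RevMan, R's metafor) generate publication-quality forest plots.

```python
import matplotlib.pyplot as plt
import numpy as np

# Sketch: a basic forest plot. Study effects and CIs are the made-up numbers from
# the pooling sketch above; the pooled row is roughly that sketch's output.
labels = ["Smith 2018", "Study B", "Study C", "Study D", "Pooled (random-effects)"]
effects = np.array([-12.3, -8.1, -15.0, -5.4, -9.6])
ci_low = np.array([-18.0, -15.0, -23.0, -9.7, -13.9])
ci_high = np.array([-6.6, -1.2, -7.0, -1.1, -5.3])

y = np.arange(len(labels))[::-1]  # plot the first study at the top
plt.errorbar(effects, y, xerr=[effects - ci_low, ci_high - effects],
             fmt="s", color="black", capsize=3)
plt.axvline(0, linestyle="--", color="grey")  # line of no effect
plt.yticks(y, labels)
plt.xlabel("Mean difference in sleep latency (minutes)")
plt.tight_layout()
plt.savefig("forest_plot.png", dpi=200)
```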

11. Step 9 — Report the review (PRISMA principles and manuscript structure)

Good reporting makes your SLR useful and citable.

11.1 Use a reporting guideline

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is the standard for SLRs. Ensure you address checklist items: title, abstract, rationale, objectives, eligibility, information sources, search, selection, data items, risk-of-bias, results, summary of evidence, limitations, and funding.

11.2 Manuscript structure (recommended)

  • Title: Include “systematic review” (and “meta-analysis” if performed).
  • Abstract: A structured abstract with objectives, methods, results, and conclusions.
  • Introduction: Background, rationale, and objectives.
  • Methods: Detailed eligibility, search strategy (including full strings in the appendix), data extraction, risk-of-bias methods, and synthesis plan.
  • Results: PRISMA flow diagram (numbers at each stage), tables summarizing included studies, results of risk-of-bias, synthesis (forest plots, tables), and sensitivity/subgroup analyses.
  • Discussion: Interpretation, strengths/limitations, implications, and future research.
  • Conclusions: Key takeaways.
  • Declarations: Funding, conflicts, data availability, and protocol registration.

11.3 PRISMA flow diagram

Report the number of records identified, screened, excluded (with reasons), and included. This diagram is core evidence of your transparent process.


12. Step 10 — Share data, reproducible code, and supplementary materials

Transparency matters. Share:

  • Search strings for each database
  • Full list of excluded studies and reasons at full-text screening
  • Data extraction tables (raw numeric data used for synthesis)
  • Risk-of-bias judgments and justifications
  • Analysis scripts (R, Stata, RevMan files)
  • Protocol and any amendments
    Deposit materials in a public repository or as journal supplementary files. This enables reuse and verification.

13. Common pitfalls and troubleshooting

  • Poorly defined question: leads to unfocused searches and unmanageable study sets.
  • Weak search strategy: missing synonyms or controlled terms leads to missing important studies.
  • Single reviewer screening/extraction: increases the risk of errors and bias.
  • Inadequate documentation makes the review non-reproducible.
  • Over-aggregation in meta-analysis: pooling apples and oranges yields misleading pooled effects.
  • Ignoring risk-of-bias: gives equal weight to poor and high-quality studies.
  • Publication bias: negative results are less likely to appear in published literature — look for grey literature.
  • Selective reporting: Be cautious of studies reporting only favorable outcomes.

Fixes: pilot forms, double roles (screening/extraction), pilot and refine your search strategy, include grey literature, and be explicit about limitations.


14. Practical checklists, templates, and example tables

14.1 Quick SLR checklist (for your internal use)

  • Define focused question (PICO/SPIDER)
  • Draft and register protocol
  • Build a search strategy and pilot it
  • Select databases and grey literature sources
  • Run searches and export records
  • Deduplicate and import to the screening tool
  • Pilot and perform title/abstract screening (two reviewers)
  • Pilot and perform full-text screening (two reviewers)
  • Extract data (double extraction or check)
  • Assess risk-of-bias (two reviewers)
  • Decide on the synthesis method and run analyses
  • Prepare PRISMA flow and tables
  • Draft manuscript and appendices; share data and code

14.2 Example inclusion/exclusion table (short)

Criterion | Include | Exclude
Population | Adults ≥18 with diagnosed insomnia | Children/adolescents
Intervention | CBT-I delivered face-to-face or online | Single-session sleep education only
Outcomes | Sleep latency, sleep efficiency, patient-reported sleep quality | Biomarkers only with no patient-reported outcomes
Design | RCTs, quasi-RCTs | Case reports, editorials

14.3 Example flow of screening record (text to report)

  • Records identified through database searching: 4,212
  • Additional records from other sources: 124
  • After deduplication: 2,976
  • Titles/abstracts screened: 2,976
  • Full-text articles assessed: 312
  • Studies included in qualitative synthesis: 38
  • Studies included in quantitative synthesis (meta-analysis): 28

15. Final tips: collaboration, software, and ethics

15.1 Collaboration and team roles

An effective SLR team often includes:

  • Lead researcher (protocol and oversight)
  • Content experts (topic knowledge)
  • Methodologist/statistician (synthesis and meta-analysis)
  • Information specialist or librarian (search design)
  • Reviewers for screening and extraction
  • Data manager/analyst

Clearly define roles and authorship expectations at the outset.

15.2 Recommended software tools (examples)

  • Reference management: Zotero, EndNote, Mendeley
  • Screening and extraction: Rayyan, Covidence, EPPI-Reviewer, Excel/Google Sheets
  • Meta-analysis: R (metafor, meta), RevMan, Stata
  • Visuals: R (forestplot), RevMan

15.3 Ethical considerations

  • Accurately represent findings — do not overclaim.
  • Declare funding sources and conflicts of interest.
  • If handling unpublished or sensitive data (e.g., individual participant data), ensure compliance with data sharing agreements and ethics approvals.

Closing thoughts

A systematic literature review is both a craft and a science. It requires meticulous planning, clear documentation, and critical thinking at every stage. Do not rush the search and screening phases — they are the foundation. Be transparent about choices and limitations. Finally, an excellent SLR not only summarizes evidence but adds value: it clarifies gaps, proposes next steps, and guides decision-making in policy, practice, or research.

FAQ on Systematic Literature Review (SLR)

1. What is a Systematic Literature Review (SLR)?

A systematic literature review is a structured, transparent, and reproducible method of identifying, evaluating, and synthesizing research evidence to answer a focused research question. Unlike narrative reviews, SLRs minimize bias by following predefined steps such as protocol registration, comprehensive searches, screening, and quality assessment.

2. Why should I conduct a Systematic Literature Review?

An SLR helps:

  • Summarize existing evidence on a topic.
  • Identify research gaps.
  • Provide evidence-based recommendations.
  • Support academic projects like theses, dissertations, or grant proposals.
  • Build a reliable foundation for future research or policy-making.

3. What are the main steps in conducting an SLR?

The essential steps include:

  1. Defining a clear research question (using PICO, SPIDER, or similar frameworks).
  2. Writing and registering a review protocol.
  3. Designing and executing a comprehensive search strategy.
  4. Screening studies based on inclusion and exclusion criteria.
  5. Extracting relevant data systematically.
  6. Assessing the quality or risk of bias in included studies.
  7. Synthesizing findings (narrative, thematic, or meta-analysis).
  8. Reporting results according to PRISMA guidelines.

4. How is an SLR different from a traditional literature review?

  • Systematic Review: Structured, protocol-driven, transparent, reproducible, and includes risk-of-bias assessment.
  • Traditional Review: Narrative, flexible, may lack transparency and may not cover all available evidence.

5. Which databases should I search for an SLR?

It depends on your field:

  • Health/Medicine: PubMed/MEDLINE, Embase, Cochrane Library, CINAHL.
  • Social Sciences: PsycINFO, Web of Science, Scopus.
  • Multidisciplinary: Scopus, Web of Science, Google Scholar (supplementary).
  • Grey Literature: ProQuest Dissertations, ClinicalTrials.gov, OpenGrey.

6. What is a protocol, and why is it important?

A protocol is a detailed plan describing how the SLR will be conducted (objectives, eligibility criteria, search strategies, methods of analysis). Registering a protocol (e.g., PROSPERO) ensures transparency, prevents duplication, and reduces bias from post-hoc changes.

7. How do I choose the right inclusion and exclusion criteria?

Base them on your research question. Consider:

  • Population or participants.
  • Intervention, exposure, or phenomenon of interest.
  • Comparator (if relevant).
  • Outcomes.
  • Study designs.
  • Time period, language, or geographical restrictions (if justified).

8. What tools or software can help in managing an SLR?

  • Reference managers: Zotero, EndNote, Mendeley.
  • Screening & collaboration: Rayyan, Covidence, EPPI-Reviewer.
  • Meta-analysis: RevMan, R (metafor/meta packages), Stata.

9. How do I assess the quality of included studies?

Use standardized risk-of-bias tools depending on study design:

  • RCTs → Cochrane RoB 2 tool.
  • Observational studies → ROBINS-I or Newcastle–Ottawa Scale.
  • Qualitative studies → CASP checklist.

10. What if studies report conflicting findings?

Conflicting results are common. You should:

  • Check the risk of bias in each study.
  • Explore heterogeneity (subgroup analysis or sensitivity analysis).
  • Present balanced interpretations, highlighting both consistencies and inconsistencies.

11. What is PRISMA and why is it important?

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is a reporting guideline that ensures your review is transparent and complete. The PRISMA flow diagram is used to show how studies were identified, screened, excluded, and included.


12. Can I do a meta-analysis as part of my SLR?

Yes, if your included studies are sufficiently similar in design, population, intervention, and outcomes. If heterogeneity is high, consider a narrative or thematic synthesis instead of pooling results statistically.


13. How long does it take to complete an SLR?

It depends on the scope, team size, and resources. A small student-level SLR may take 3–6 months, while large collaborative reviews can take 12–18 months or more.


14. What are the most common mistakes to avoid in an SLR?

  • Poorly defined research question.
  • Inadequate or biased search strategy.
  • Using only one reviewer for screening or extraction.
  • Ignoring risk-of-bias assessment.
  • Selective reporting of favorable results.

15. Can an individual researcher conduct an SLR alone?

Technically yes, but not recommended. Independent screening and data extraction by at least two reviewers minimizes bias and error. If working solo, be transparent about limitations.
