Part 2, "Epidemiology, evidence-based medicine and public health", covers systematic reviews and meta-analysis, health economics, public health, infectious disease epidemiology and surveillance, health improvement, health care targets, global health, and related topics.
In this chapter you will learn to:
✓ define a systematic review, and explain why it provides more reliable evidence than a traditional narrative review;
✓ succinctly describe the steps in conducting a systematic review;
✓ understand the concept of meta-analysis and other means of synthesising results;
✓ explain what is meant by heterogeneity;
✓ critically appraise the conduct of a systematic review.
What are systematic reviews and why do we need them?
Systematic reviews are studies of studies that offer a systematic approach to reviewing and summarising evidence. They follow a defined structure to identify, evaluate and summarise all available evidence addressing a particular research question. Systematic reviews should use and report clearly defined methods, in order to avoid the biases associated with, and subjective nature of, traditional narrative reviews. Key characteristics of a systematic review include a set of objectives with pre-defined inclusion criteria, explicit and reproducible methodology, comprehensive searches that aim to identify all relevant studies, assessment of the quality of included studies, and a standardised presentation and synthesis of the characteristics and findings of the included studies.

Systematic reviews are an essential tool to allow individuals and policy makers to make evidence-based decisions and to inform the development of clinical guidelines. Systematic reviews fulfil the following key roles: (1) allow researchers to keep up to date with the constantly expanding number of primary studies; (2) critically appraise primary studies addressing the same research question, and investigate possible reasons for conflicting results among them; (3) provide more precise and reliable effect estimates than is
possible from individual studies, which are often underpowered; and (4) identify gaps in the evidence base.
How do we conduct a systematic review?
It is essential to first produce a detailed protocol which clearly states the review question and the proposed methods and criteria for identifying and selecting relevant studies, extracting data, assessing study quality, and analysing results. To minimise bias and errors in the review process, the reference screening, inclusion assessment, data extraction and quality assessment should involve at least two independent reviewers. If it is not practical for all tasks to be conducted in duplicate, it can be acceptable for one reviewer to conduct each stage of the review while a second reviewer checks their decisions. The steps involved in a systematic review are similar to those of any other research project.
[Flowchart of review steps: screen titles and abstracts → retrieve full-text papers → apply inclusion criteria → extract data and assess study quality.]
Define the review question and inclusion criteria
A detailed review question supported by clearly defined inclusion criteria is an essential component of any review. For a review of an intervention the inclusion criteria should be defined in terms of patients, intervention, comparator interventions, outcomes (PICO) and study design. Other types of review (for example, reviews of diagnostic test accuracy studies) will use different criteria.
Example: We will use a review by Lawlor and Hopker (2001) on the effectiveness of exercise as an intervention for depression to illustrate the steps in a systematic review. This review aimed 'to determine the effectiveness of exercise as an intervention in the management of depression'.

Inclusion criteria were defined as follows:

Patients: Adults (age > 18 years) with a diagnosis of depression (any measure and any severity)
Intervention: Exercise
Comparator: Established treatment of depression. Studies with an exercise control group were excluded
Outcomes: Studies reporting only anxiety or other disorders were excluded
Study design: Randomised controlled trials
Identify relevant studies
A comprehensive search should be undertaken to locate all relevant published and unpublished studies. Electronic databases such as MEDLINE and EMBASE form the main source of published studies. These bibliographic databases index articles published in a wide range of journals and can be searched online. Other available databases have specific focuses: the exact databases, and number of databases, that should be searched is dependent upon the review question. The Cochrane CENTRAL register of controlled trials, which includes over 640,000 records, is the best single source for identifying reports of controlled trials (both published and unpublished). A detailed search strategy, using synonyms for the type of patients and interventions of interest, combined using logical AND and OR operators, should be used to help identify relevant studies.
There is a trade-off between maximising the number of relevant studies identified by the searches whilst limiting the number of ineligible studies, in order that the search retrieves a manageable number of references to screen. It is common to have to screen several thousands of references. Searches of bibliographic databases alone tend to miss relevant studies, especially unpublished studies, and so additional steps should be taken to ensure that all relevant studies are included in the review. For example, these could include searching relevant conference proceedings, grey literature databases, internet websites, hand-searching journals, contacting experts in the field, screening the bibliographies of review articles and included studies, and searches for citations to key papers in the field. Online trial registers are of increasing importance in helping identify studies that have not, or not yet, been published. Search results should be stored in a single place, ideally using bibliographic software (such as Reference Manager or EndNote).
Selecting studies for inclusion is a two-stage process. First, the search results, which generally include titles and abstracts, are screened to identify potentially relevant studies. The full text of these studies is then obtained (downloaded online, ordered from a library, or a copy requested from the authors) and assessed for inclusion against the pre-specified criteria.
Example: The Lawlor and Hopker (2001) review conducted a comprehensive search including Medline, Embase, Sports Discus, PsycLIT, Cochrane CENTRAL, and the Cochrane Database of Systematic Reviews. Search terms included 'exercise, physical activity, physical fitness, walking, jogging, running, cycling, swimming, depression, depressive disorder, and dysthymia'. Additional steps to locate relevant studies included screening bibliographies, contacting experts in the field, and handsearching issues of relevant journals for studies published in 1999. No language or publication restrictions were applied. Three reviewers independently reviewed titles and available abstracts to retrieve potentially relevant studies; studies needed to be identified by only one person to be retrieved.
Extract relevant data
Data should be extracted using a standardised form designed specifically for the review, in order to ensure that data are extracted consistently across different studies. Data extraction forms should be piloted, and revised if necessary. Electronic data collection forms and web-based forms have a number of advantages, including the combination of data extraction and data entry in one step, more structured data extraction and increased speed, and the automatic detection of inconsistencies between data recorded by different observers.
Example: For the Lawlor and Hopker (2001) review two reviewers independently extracted data on participant details, intervention details, trial quality, outcome measures, baseline and post-intervention results and main conclusions. Discrepancies were resolved by referring to the original papers and through discussion.
Assess the quality of the included studies
Assessment of study quality is an important component of a systematic review. It is useful to distinguish between the risk of bias (internal validity) and the applicability (external validity, or generalisability) of the included studies to the review question. Bias occurs if the results of a study are distorted by flaws in its design or conduct (see Chapter 3), while applicability may be limited by differences between included patients' demographic or clinical features, or in how the intervention was applied, compared to the patients or intervention that are specified in the review question. Biases can vary in magnitude: from small compared with the estimated intervention effect to substantial, so that an apparent finding may be entirely due to bias. The effect of a particular source of bias may vary in direction between trials: for example, lack of blinding may lead to underestimation of the intervention effect in one study but overestimation in another study.

The approach that should be used to assess study quality within a review depends on the design of the included studies – a large number of different scales and checklists are available. Commonly used tools include the Cochrane Risk of Bias tool for RCTs and the QUADAS-2 tool for diagnostic accuracy studies. Authors often wish to use summary 'quality scores', based on adding points assigned to a number of aspects of study design and conduct, to provide a single summary indicator of study quality. However, empirical evidence and theoretical considerations
suggest that summary quality scores should not be used to assess the quality of trials in systematic reviews. Rather, the relevant methodological aspects should be identified in the study protocol and assessed individually.

At a minimum, a narrative summary of the results of the quality assessment should be presented, ideally supported by a tabular or graphical display. Ideally, the results of the quality assessment should be incorporated into the review, for example by stratifying analyses according to summary risk of bias, or by restricting inclusion in the review or primary analysis to studies judged to be at low risk of bias for all or specified criteria. Associations of individual items or summary assessments of risk of bias with intervention effect estimates can be examined using meta-regression analyses (a statistical method to estimate associations of study characteristics ('moderator variables') with intervention effect estimates), but these are often limited by low power. Studies with a rating of high or unclear risk of bias, or concerns regarding applicability, may be omitted in sensitivity analyses.
Example: The Lawlor and Hopker (2001) review assessed trial quality by noting whether allocation was concealed, whether there was blinding, and whether an intention-to-treat analysis was reported. They conducted meta-regression analyses (see 'Heterogeneity between study results', below) to investigate the influence of these quality items on summary estimates of treatment effect.
How do we synthesise findings across studies?
Where possible, results from individual studies should be presented in a standardised format, to allow comparison between them. If the endpoint is binary (for example, disease versus no disease, or dead versus alive) then risk ratios, odds ratios or risk differences may be calculated. Empirical evidence shows that, in systematic reviews of randomised controlled trials, results presented as risk ratios or odds ratios are more consistent than those expressed as risk differences.
If the outcome is continuous and measurements are made on the same scale (for example, blood pressure measured in mm Hg) then the intervention effect is quantified as the mean difference between the intervention and control groups. If different studies measured outcomes in different ways (for example, using different scales for measuring depression in primary care) it is necessary to standardise the measurements on a common scale to allow their inclusion in meta-analysis. This is usually done by calculating the standardised mean difference for each study (the mean difference divided by the pooled standard deviation of the measurements).
Example: In the Lawlor and Hopker (2001) review, the primary outcome of interest, depression score, was a continuous measure assessed using different scales. Standardised mean differences were therefore calculated for each study.
Meta-analysis
A meta-analysis is a statistical analysis that aims to produce a single summary estimate by combining the estimates reported in the included studies. This is done by calculating a weighted average of the effect estimates from the individual studies (for example, estimates of the effect of the intervention from randomised clinical trials, or estimates of the magnitude of association from epidemiological studies). Ratio measures should be log-transformed before they are meta-analysed: they are then back-transformed for presentation of estimates and confidence intervals. For example, denoting the odds ratio in study i by OR_i and the weight in study i by w_i, the weighted average log odds ratio is

Σ w_i log OR_i / Σ w_i

The choice of weight depends on the choice of meta-analysis model. The fixed effect model assumes the true effect to be the same in each study, so that the differences between effect estimates
in the different studies are exclusively due to random (sampling) variation. Random-effects meta-analysis models allow for variability between the true effects in the different studies. Such variability is known as heterogeneity, and is discussed in more detail below.
In fixed-effect meta-analyses, the weights are based on the inverse variance of the effect in each study:

w_i = 1 / v_i

where the variance v_i is the square of the standard error of the effect estimate in study i. Because large studies estimate the effect precisely (so that the standard error and variance of the effect estimate are small), this approach gives more weight to the studies that provide most information. Other methods for fixed-effect meta-analysis, such as the Mantel-Haenszel method or the Peto method, are based on different formulae but give similar results in most circumstances.
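A minimal sketch of inverse-variance fixed-effect pooling, using hypothetical odds ratios and standard errors (of the log odds ratio) from three trials; it applies the weights described above to the log-transformed odds ratios and back-transforms the pooled result for presentation.

```python
import math

def fixed_effect_pooled_or(odds_ratios, standard_errors):
    """Inverse-variance fixed-effect meta-analysis of odds ratios (on the log scale)."""
    log_ors = [math.log(o) for o in odds_ratios]
    weights = [1 / se ** 2 for se in standard_errors]        # w_i = 1 / v_i
    pooled_log = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    lower, upper = pooled_log - 1.96 * pooled_se, pooled_log + 1.96 * pooled_se
    # Back-transform to the odds ratio scale for presentation
    return math.exp(pooled_log), (math.exp(lower), math.exp(upper))

# Hypothetical results from three randomised trials
print(fixed_effect_pooled_or([0.80, 0.65, 0.90], [0.20, 0.25, 0.15]))
```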
In a random-effects meta-analysis, the weights are modified to account for the variability in true effects between the studies. This modification makes the weights (a) smaller and (b) relatively more similar to each other. Thus, random-effects meta-analyses give relatively more weight to smaller studies. The most commonly used method for random-effects meta-analysis was proposed by DerSimonian and Laird. The summary effect estimate from a random-effects meta-analysis corresponds to the mean effect, about which the effects in different studies are assumed to vary. It should thus be interpreted differently from the results from a fixed-effect meta-analysis.
Example: The Lawlor and Hopker review used a fixed effect inverse variance weighted meta-analysis when heterogeneity could be ruled out; otherwise a DerSimonian and Laird random effects model was used.
Forest plots
The results of a systematic review and meta-analysis should be displayed in a forest plot. Such plots display a square centred on the effect estimate from each individual study and a horizontal line showing the corresponding 95% confidence interval. The area of the square is proportional to its weight in the meta-analysis, so that studies that contribute more weight are represented by larger squares. A solid vertical line is usually drawn to represent no effect (risk/odds ratio of 1 or mean difference of 0). The result of the meta-analysis is displayed by a diamond at the bottom of the graph: the centre of the diamond corresponds to the summary effect estimate, while its width corresponds to the corresponding 95% confidence interval. A dashed vertical line corresponding to the summary effect estimate is included to allow visual assessment of the variability of the individual study effect estimates around the summary estimate. Even if a meta-analysis is not conducted, it is often still helpful to include a forest plot without a summary estimate, in which case the symbols used to display the individual study effect estimates will all be the same size.
with-Example: Figure 12.2 shows a forest plot, based on
results from the Lawler and Hopker (2001) review,
of the effect of exercise compared to no treatment
on change in depressive symptoms, measured ing standardised mean differences The summaryintervention effect estimate suggests that exercise
us-is associated with an improvement in symptoms,compared to no treatment
Heterogeneity between study results
Before pooling studies in a meta-analysis it is important to consider whether it is appropriate to do so. If studies differ substantially from one another in terms of population, intervention, comparator group, methodological quality or study design then it may not be appropriate to combine their results. It is also possible that, even when the studies appear sufficiently similar to justify a meta-analysis, estimates of intervention effect differ to such an extent that a summary estimate is not appropriate or should accommodate these differences. Differences between intervention effect estimates greater than those expected because of sampling variation (chance) are known as 'statistical heterogeneity'. As part of the process of conducting a meta-analysis, the presence of heterogeneity should be formally assessed. The first step is visual inspection of the results displayed in the forest plot. On average, in the absence of heterogeneity, 95% of the confidence intervals around the individual study estimates will include the fixed-effect summary effect estimate. The second step is to report a measure of heterogeneity, and a p-value from a test for heterogeneity.
[Figure 12.2: Forest plot of standardised mean differences in effect size for depression, by study (with number of weeks of intervention). The diamond shows the summary intervention effect estimate across studies: the centre of the diamond is the intervention effect estimate, and the tips of the diamond indicate the upper and lower confidence limits.]
Heterogeneity can be quantified using the τ² or I² statistics. The τ² statistic represents the between-study variance in the true intervention effect, and is used to derive the weights in a random-effects meta-analysis. A disadvantage is that it is hard to interpret, although it can be converted to provide a range within which we expect the true treatment effect to fall (for example a 90% range for the mean difference). The I² statistic quantifies the percentage of total variation across studies that is due to heterogeneity rather than chance. I² lies between 0% and 100%; a value of 0% indicates no observed heterogeneity, and larger values show increasing heterogeneity. When I² = 0 then τ² = 0, and vice versa.
A statistical test for heterogeneity is a test of the null hypothesis that there is no heterogeneity, i.e. that the true intervention effect is the same in all studies (the assumption underlying a fixed-effect meta-analysis). A test for heterogeneity proceeds by deriving a Q-statistic, whose value is not in itself of interest but which can be compared with the χ² distribution in order to derive a p-value. As usual, the smaller the p-value the stronger is the evidence against the null hypothesis. Hence, a small p-value from a test for heterogeneity suggests that the true intervention effect varies between the studies. Tests for heterogeneity should be interpreted with caution, because they typically have low power.
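These quantities can be illustrated with a short sketch that applies the standard formulae (Cochran's Q, I², the DerSimonian and Laird estimate of τ², and the resulting random-effects pooled estimate) to hypothetical effect estimates and standard errors; it is not the analysis code of any particular review.

```python
import math

def heterogeneity_and_random_effects(estimates, standard_errors):
    """Q, I^2, DerSimonian-Laird tau^2 and a random-effects pooled estimate."""
    w = [1 / se ** 2 for se in standard_errors]               # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    k = len(estimates)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # percentage of variation
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                        # between-study variance
    w_re = [1 / (se ** 2 + tau2) for se in standard_errors]   # random-effects weights
    pooled_re = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    return q, i2, tau2, pooled_re, se_re

# Hypothetical standardised mean differences and standard errors from five trials
print(heterogeneity_and_random_effects([-1.2, -0.3, -0.9, -0.1, -1.5],
                                       [0.30, 0.25, 0.35, 0.28, 0.40]))
```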
If heterogeneity is present then a small number of (ideally pre-specified) subgroup and/or sensitivity analyses can be conducted to investigate whether the treatment effect differs across subgroups of studies (for example, those using high versus low dose of the intervention, or those assessed as at high compared to low risk of bias). However, typical meta-analyses contain fewer than 10 component studies, which severely limits the potential for these additional analyses to provide definitive explanations for heterogeneity. If heterogeneity remains unexplained but pooling is still considered appropriate, a random effects analysis can be used to accommodate heterogeneity, though its results should be interpreted in the light of the underlying assumption that the true intervention effect varies between the studies. Alternatively, it may be appropriate to present a narrative synthesis of findings across studies, without combining the results into a single summary estimate.
Example: There was substantial variability between the results of the studies of exercise compared with no treatment for depression that were located by Lawlor and Hopker (2001) (Figure 12.2). Four of the 10 confidence intervals around the study effect estimates did not include the summary effect estimate. This visual impression was confirmed by strong evidence of heterogeneity (Q = 35.0, P < 0.001). The estimated value of the between-study variance was τ² = 0.41. Lawlor and Hopker reported results from a random-effects meta-analysis, and used meta-regression analyses to investigate heterogeneity due to quality features (allocation concealment, use of intention-to-treat analysis, blinding), setting, baseline depression severity, type of exercise, and type of publication. As shown in Figure 12.2, intervention effect estimates were greater in two studies that were published only as conference abstracts than in the studies published as full papers.
Reporting biases
The dissemination of research findings is a continuum, ranging from the sharing of draft papers among colleagues, presentations at meetings and publication of abstracts, to the availability of full papers in journals that are indexed in the major bibliographic databases. Not all studies are published in full in an indexed journal and therefore easily identifiable for systematic review. Reports of large, externally funded studies with statistically significant results are more likely to be published, published quickly, published in an English-language journal, published in more than one place, and cited in subsequent publications, and so their results are more accessible and easy to locate. Reporting biases are introduced when the publication of research findings is influenced by the strength and direction of results. Publication bias refers to the nonpublication of whole studies, while language bias can occur if a review is restricted to studies reported in specific languages. For example, investigators working in a non-English-speaking country may be more likely to publish positive findings in international, English-language journals, while sending less interesting negative or null findings to local-language journals. It follows that restricting a review to English-language publications has the potential to introduce bias. Even when a study is published, selective reporting of outcomes has the potential to lead to serious bias in systematic reviews.
Reporting biases may lead to an association between study size and effect estimates. Such an association will lead to an asymmetrical appearance of a funnel plot – a scatter plot of a measure of study size against effect estimate (the lighter circles in the upper panel of Figure 12.3 are the results of unpublished studies that will be missing in the funnel plot). Therefore funnel plots (Figure 12.3), and statistical tests for funnel plot asymmetry, can be used to investigate evidence of reporting biases. However, it is important to realise that funnel plot asymmetry can have causes other than reporting biases: for example, that poor methodological quality leads to spuriously inflated effects in smaller studies, or that effect size differs according to study size because of differences in the intensity of interventions.

[Figure 12.3: Symmetrical funnel plot (no small-study effect) and asymmetrical funnel plot (evidence of a small-study effect).]
Presenting the results of the review
A systematic review should present overviews of the characteristics, quality and results of the included studies. Tabular summaries are very helpful for providing a clear overview. Types of data that may be summarised include details of the study population (setting, demographic features, presenting condition details), intervention (e.g. dose, method of administration), comparator interventions, study design, outcomes evaluated and results. Depending on the amount of data to be summarised, it can be helpful to include separate tables for baseline information, study quality, and study results. The narrative discussion should
consider the strength of the evidence for a treatment effect and whether there is unexplained variation in the treatment effect across individual studies, and should incorporate a discussion of the risk of bias and applicability of the included studies. If meta-analysis is not possible, for example because outcomes assessed in the included studies were too different to pool, then the narrative discussion is the main synthesis of results across studies. It is important to provide some synthesis of results across studies, even if this is not statistical, rather than simply describing the results of each included study.
Example: Table 12.1 shows an extract from the study details table reported in the Lawlor and Hopker (2001) review. This table allows the reader to quickly scan both the characteristics of individual studies (rows) and the pattern of a characteristic across the whole review (columns).
Critical appraisal of systematic reviews
When reading a report of a systematic review the following criteria should be considered:

(1) Is the search strategy comprehensive, or could some studies have been missed?
(2) Were at least two reviewers involved in all stages of the review process (reference screening, inclusion assessment, data extraction and quality assessment)?
(3) Was study quality assessed using appropriate criteria?
(4) Were the methods of analysis appropriate?
(5) Is there heterogeneity in the treatment effect across individual studies? Is this investigated?
(6) Could results have been affected by reporting biases or small study effects?
If a systematic review does not report sufficient detail to make a judgement on one or more of these items then conclusions drawn from the review should be cautious. The PRISMA statement is a 27-item checklist that provides guidance to systematic review authors on what they should report in journal articles. It is not a critical appraisal checklist, but reports following PRISMA should give enough information to permit a comprehensive critical appraisal of the review.
KEY LEARNING POINTS
• Systematic reviews are 'studies of studies' that follow a defined structure to identify, evaluate and summarise all available evidence addressing a particular research question.
• Key characteristics of a systematic review include a set of objectives with pre-defined inclusion criteria, explicit and reproducible methodology, comprehensive searches that aim to identify all relevant studies, assessment of the quality of included studies, and a standardised presentation and synthesis of the characteristics and findings of the included studies.
• Meta-analysis is a statistical analysis that aims to produce a single summary estimate, with associated confidence interval, based on a weighted average of the effect size estimates from individual studies.
• Heterogeneity is variability between the true effects in the different studies.
Acknowledgements
We thank Chris Metcalfe and Matthias Egger for sharing lecture materials that contributed to this chapter.
FURTHER READING

Centre for Reviews and Dissemination (2009) Systematic Reviews: CRD's Guidance for Undertaking Reviews in Health Care. York: CRD, University of York.
Higgins JPT, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, et al. (2011) The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 343: d5928.
Higgins JPT, Green S (2011) Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0. The Cochrane Collaboration.
Higgins JPT, Thompson SG, Deeks JJ, Altman DG (2003) Measuring inconsistency in meta-analyses. BMJ 327: 557–60.
Sterne JA, Sutton AJ, Ioannidis JP, Terrin N, Jones DR, Lau J, et al. (2011) Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 343: d4002.
In this chapter you will learn:
✓ to explain basic concepts of economics and how they relate to health;
✓ to distinguish the main types of economic evaluation;
✓ to understand the key steps in costing health care;
✓ to understand the Quality Adjusted Life Year (QALY) and its limitations;
✓ to interpret the results of an economic evaluation.
What is economic evaluation?
Economic evaluation is the comparison of the costs and outcomes of two or more alternative courses of action. If you bought this book, you have already conducted an informal economic evaluation. This involved comparing the cost of this book and the expected benefits of the information it contains against the cost and expected benefits of alternative books on the topic. In health, economic evaluation commonly compares the cost and outcomes of different methods of prevention, diagnosis and treatment.

In many health systems, care is free or heavily subsidised at the time of use. We never know its cost, and we do not consider whether it is public money well spent.
[Figure 13.1: Health expenditure across OECD countries in 2010 (when 2010 data were unavailable, data from previous years were used, as indicated in parentheses). Source: Based on data from OECD (2012) Total expenditure on health, Health: Key Tables from OECD, No. 1, http://dx.doi.org/10.1787/hlthxp-total-table-2012-1-en, and OECD (2012) Public expenditure on health, Health: Key Tables from OECD, No. 3, http://dx.doi.org/10.1787/hlthxp-pub-table-2012-1-en.]
Health care use is often initiated by a patient deciding to see a doctor. In a system with 'free' care, this decision can be based on medical and not financial considerations. This creates more equitable access; however, it may lead to overuse of health services for trivial reasons, sometimes referred to as moral hazard. During the medical consultation, treatment decisions are often taken by the doctor with some patient input. Decisions should be based on sound evidence about treatment effectiveness for the patient (evidence-based medicine) and affordability for the population. In practice they may also be adversely influenced by incomplete evidence, commercial marketing, and even financial incentives if doctors are paid per procedure (sometimes referred to as supplier induced demand). By providing high-quality evidence on the costs and outcomes of alternative ways of providing health care, economic evaluation aims to improve the health of the population for any fixed level of public expenditure.
The design of an economic evaluation
Key elements of study design discussed in previous chapters also apply to economic studies. For example, a specification of the Patient group, Intervention, Comparator(s) and Outcome (PICO – see Chapter 8) is essential. In economic evaluation the outcome of interest is frequently expressed as a ratio, such as the additional cost per life year gained.

An economic evaluation conducted alongside a randomised controlled trial (RCT) would, typically, provide stronger evidence than an evaluation based on a cohort study. Regrettably, many RCTs do not include an economic evaluation, although regulators are increasingly demanding proof of efficiency before approval of new drugs and devices. In the absence of relevant information from RCTs, policy-makers rely on economic evidence generated by decision analysis models. These models define the possible clinical pathways resulting from alternative interventions (Figure 13.2) and then use literature reviews to draw together the best available evidence on the probability of each pathway, the expected costs and the impact on patient health. Clearly these models are only as valid as the studies upon which they are based.

[Figure 13.2: Decision tree for patients with pain, comparing analgesic A and analgesic B. Each drug branch splits into ≥50% pain relief (probability Prob_A or Prob_B) and <50% pain relief, with costs Cost_drugA or Cost_drugB plus Other_Costs_Success or Other_Costs_Fail. The probability of successful pain relief with drug A (Prob_A) and drug B (Prob_B) can be estimated from RCTs or the best available observational data. If economic data from an RCT are unavailable, the costs of prescribing drugs A and B (Cost_drugA and Cost_drugB) and the other costs of treating patients with successful (Other_Costs_Success) and unsuccessful (Other_Costs_Fail) pain relief can be estimated from observational studies. These six parameters allow estimation of the additional cost per patient with substantial pain relief of drug A versus drug B. For example, if Prob_A = 0.75, Prob_B = 0.50, Cost_drugA = £100, Cost_drugB = £50, Other_Costs_Success = £20 and Other_Costs_Fail = £40, then the cost-effectiveness of drug A versus drug B is [(£100 + 0.75×£20 + (1 − 0.75)×£40) − (£50 + 0.50×£20 + (1 − 0.50)×£40)] / (0.75 − 0.50), which equates to £180 for every additional patient with substantial pain relief from drug A.]
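The worked example in Figure 13.2 can be reproduced with a few lines of code: each branch's expected cost is the drug cost plus the probability-weighted follow-on costs, and the difference in expected cost is divided by the difference in the probability of substantial pain relief.

```python
def expected_cost(p_success, cost_drug, cost_success, cost_fail):
    """Expected cost per patient for one branch of the decision tree."""
    return cost_drug + p_success * cost_success + (1 - p_success) * cost_fail

# Parameter values from the Figure 13.2 caption
prob_a, prob_b = 0.75, 0.50
cost_a = expected_cost(prob_a, 100, 20, 40)   # expected cost per patient on drug A
cost_b = expected_cost(prob_b, 50, 20, 40)    # expected cost per patient on drug B

# Extra cost per additional patient achieving >= 50% pain relief
print((cost_a - cost_b) / (prob_a - prob_b))  # 180.0, i.e. £180, as in the figure
```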
Efficiency is in the eye of the beholder
It is essential to consider the boundaries of the economic evaluation. A programme to prevent obesity in children is unlikely to appear cost-effective during the first few years, but may prove a wise investment over subsequent decades as the cohort develops fewer weight-related diseases. Therefore, for chronic diseases the appropriate time horizon for the economic evaluation is often the lifetime of the patient group. This has important implications for expensive new treatments where effectiveness can be proven relatively quickly by an RCT, but efficiency may not become apparent until long after the end of the RCT follow-up.

A natural starting point for an integrated health system is to ask whether the money it spends on a health technology is justified by the improvement it achieves in patient health. However, this health-system perspective may inadvertently lead to blinkered decision making, whereby costs are shifted onto other elements of society. For example, centralisation of health care into larger clinics or hospitals might save the health system money at the expense of patients, carers and society through greater travel costs and more time off work. Given this, a strong argument can be made that, in making public spending decisions, we should take an all-encompassing (societal) viewpoint.

In everyday life, we are accustomed to thinking about costs in terms of monetary values. However, money is just an imperfect indicator of the value of the resources used. For example, doctor-led, clinic-based routine follow-up of women with breast cancer could be replaced with a nurse-led, telephone-based approach. The financial cost of the doctor-led clinics may be no higher than the nurse-led telephone follow-up if the clinics are of short duration and conducted by low-salaried junior doctors. However, the true opportunity cost of the doctor-led clinics may be much higher if these routine follow-up visits are preventing other women with incident breast cancer receiving prompt treatment at the clinic. The concept of opportunity cost acknowledges that the true cost of using a scarce resource in one way is its unavailability to provide alternative services.
How much does it cost?
The costing process involves identification of resource items affected by the intervention, measurement of patient use of these items, and valuation to assign costs to resources used. Identification is governed by the chosen perspective of the analysis. From a health system perspective, an evaluation of a new drug for multiple sclerosis would go no further than tracking patient use of community, primary and secondary care health services. A broader societal perspective would require additional information on lost productivity due to ill health, care provided by friends, family and social services, and patient expenses related to the illness (e.g. travel to hospital, purchase of mobility equipment).

The introduction of electronic records has greatly increased the potential to use routinely collected data to measure resources (e.g. tests, prescriptions, procedures) used in hospitals and primary care. However, there are drawbacks. Records are often fragmented across different health system sectors and difficult to access. Records are usually established for clinical and/or payment purposes rather than research, and therefore may not contain sufficient information for accurate costing. Therefore, patient self-report in the form of questionnaires or diaries is often used, but may be affected by loss to follow-up and recall bias. The degree of detail required for costing will vary. A study evaluating electronic prescribing would require direct observation of the prescription process; in other studies such minute detail on the duration of a clinic visit would be unnecessary.

Many health systems publish the unit costs of health care, for example the average cost of an MRI scan of the spine, which can be used to value the resources used by patients. However, in an RCT comparing rapid versus conventional MRI of the spine an average cost would not be sufficient and a unit cost must be calculated from scratch. This would include allocating the purchase cost of the imaging equipment across its lifetime (annuitisation) and apportioning salaries, maintenance, estate and other costs to every minute of machine use. It is particularly difficult to generalise the valuation of resource use between nations. General practitioners in the United States, United Kingdom and the Netherlands are paid up to twice as much as their counterparts in Belgium and Sweden, even after adjusting for the cost of living.
Is it worth it?
The typical goal of an intervention is to use resources to optimise health, measured by clinical outcomes such as mortality or bone density, or patient-reported outcomes such as pain or quality of life (known as technical efficiency). If one outcome is of overriding importance then a Cost Effectiveness Analysis (CEA) (Table 13.1) could be used to summarise whether any additional costs of the intervention are justified by gains in health. For example, an evaluation of acupuncture versus conventional care for patients with pain could calculate the extra cost per additional patient who has a 50% reduction in pain score at 3 months. If more than one aspect of health, for instance pain and function, are considered important outcomes of treatment, analysts can choose to simply tabulate the costs and all outcomes in a Cost Consequences Study (CCS). In a CCS, the reader is left to weigh up the potentially conflicting evidence on disparate costs and outcomes to reach a conclusion about the most efficient method of care.

Table 13.1 Types of economic evaluation.
Cost-effectiveness analysis – Outcome: a primary physical measure, e.g. 50% reduction in pain score. Result: extra cost per extra unit of the primary outcome measure(a).
Cost consequences study – Outcomes: more than one important outcome measure, e.g. 50% reduction in pain score, 50% increase in mobility score and patient satisfaction score. Result: costs and outcomes are presented in tabular form with no aggregation.
(a) if no intervention is dominant.
Less frequently, analysts use Cost Benefit Analysis (CBA) to place a monetary value on treatment programmes. This is simplest in areas where citizens are familiar with paying for care. For example, people who might benefit from a new type of In Vitro Fertilisation (IVF) could be asked how much they would be willing to pay (WTP) for a cycle of this therapy, based on evidence that it increases the chances of birth from 20% to 30%. If the WTP of those who might benefit is greater than the additional costs of this new type of IVF, then this provides evidence that it is an efficient use of health care resources.

Policy-makers aim to create a health care system that is both technically and allocatively efficient. This means that money spent on each sector of care (e.g. oncology, orthopaedics or mental health) would not result in more health benefits if reallocated elsewhere in the health system. These allocative comparisons would be aided by a universal outcome measure. This measure needs to be flexible enough to be applicable in trials with outcomes as diverse as mortality, depression, and vision. Quality Adjusted Life Years (QALYs), used in Cost-Utility Analysis (CUA), aim to provide such a universal measure (see Table 13.1, which compares the four different types of analysis).
What is a QALY?
QALYs measure health outcomes by weighting years of life by a factor (Q) that represents the patient's health-related quality of life. Q is anchored at 1 (perfect health) and 0 (a health state considered to be as bad as death) and is estimated for all health states between these extremes and a small number of health states that might be considered worse than death. A QALY is simply the number of years that a patient spends in each health state multiplied by the quality of life weight, Q, of that state. For example, a patient who spends 2 years in an imperfect health state, where Q = 0.75, would achieve 1.5 QALYs (0.75 × 2). Q is generally estimated indirectly via a questionnaire such as the EQ-5D. The questionnaire asks the patient to categorise current health in various dimensions – for example, mobility, pain, and mental health. Every possible combination of questionnaire responses is given a quality weight, Q. These weights are derived from surveys of the public's valuations for the health states described by the questionnaire.

There are concerns that, in the attempt to measure and value a very broad range of dimensions of health, QALY questionnaires such as the EQ-5D have sacrificed responsiveness to small but important changes within an individual dimension. Additionally, there is disagreement about the appropriate group to use in the valuation survey. Should it be the general population, who can take a dispassionate, but perhaps ill-informed, approach to valuing ill health? Or should it be patient groups who have experienced the health state? Perhaps the most persistent question about QALYs is whether they result in fair interpersonal comparisons of treatment effectiveness. The CUA methodology typically does not differentiate between a QALY resulting from treatment of a congenital condition in a child and a QALY resulting from palliative care in an elderly patient with a terminal illness. It is debatable whether this neutral stance reflects public opinion. For these and other reasons, QALYs remain controversial; in the UK they currently play an important role in national health care decision making, whereas in Germany their role is less prominent.
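A minimal sketch of the QALY calculation described above: the years spent in each health state are multiplied by that state's quality weight Q and summed. The second patient profile is hypothetical.

```python
def qalys(health_states):
    """QALYs = sum over health states of (years in state x quality weight Q)."""
    return sum(years * q for years, q in health_states)

print(qalys([(2, 0.75)]))                      # 1.5 QALYs, the worked example in the text
print(round(qalys([(3, 0.9), (2, 0.6)]), 2))   # 3.9 QALYs for a hypothetical patient profile
```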
What are the results of an economic evaluation?
In essence, there are only four possible results from an economic evaluation of a new intervention versus current care (Cost Effectiveness Plane (CEP); Figure 13.3). Many new drugs are in the North East (NE) quadrant; they are more expensive, but more effective, than existing treatment options. But that need not be the case. 'Breakthrough' drugs (e.g. penicillin) can be both effective and cost saving (i.e. dominant, in the South East quadrant) if the initial cost of the drug is recouped through future health care avoided. When the most effective intervention is simply not affordable, policy makers may opt for an intervention in the South West quadrant which is slightly less effective but will not bankrupt the health system. Sadly, the history of medicine also has a number of examples of new technologies (e.g. thalidomide for morning sickness) that fall into the North West quadrant, more costly and eventually seen to be harmful (i.e. dominated). Most controversy and headlines in high-income countries concern interventions in the NE quadrant. Can public funds afford to pay for all health care that is effective, no matter how expensive or marginally effective it is? Assuming that the answer is no, then one solution for differentiating between more efficient and less efficient innovations would be to define a cost-effectiveness threshold. For example, the UK Government has indicated that it is unwilling to fund interventions that yield less than one QALY per £30,000 spent (i.e. anything above and to the left of the dashed line in Figure 13.3).

The key finding of an economic evaluation is often summarised in an Incremental Cost Effectiveness Ratio (ICER). This is simply the difference in cost between the intervention and the comparator (Ci – Cc) divided by the difference in effectiveness (Ei – Ec). A worked example, based on a UK evaluation of a new drug for advanced liver cancer, is provided in Table 13.2. In that example, the drug was effective, but the large additional cost resulted in a high ICER, suggesting that it might not be an efficient use of public money.
In countries such as the UK where there is a relatively established threshold, the ICER is commonly converted into a Net Monetary Benefit (NMB) statistic (Table 13.2). The NMB is attractive because it simplifies interpretation – a new treatment with a negative NMB is not cost-effective – and because it enables straightforward calculation of confidence intervals.
Table 13.2 Worked example of calculating the ICER and NMB.
ICER = (£28,359 − £9,739) / (1.08 − 0.72) ≈ £51,722 per QALY gained
NMB(30,000) = (1.08 − 0.72) × £30,000 − (£28,359 − £9,739) = −£7,820
ICER = incremental cost-effectiveness ratio.
NMB(30,000) = net monetary benefit statistic (at a £30,000 per QALY threshold).
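The calculations in Table 13.2 follow directly from the formulae in the text; a short sketch, using the table's values (costs of £28,359 and £9,739, 1.08 and 0.72 QALYs, and a £30,000 per QALY threshold):

```python
def icer(cost_i, cost_c, effect_i, effect_c):
    """Incremental cost-effectiveness ratio: (Ci - Cc) / (Ei - Ec)."""
    return (cost_i - cost_c) / (effect_i - effect_c)

def net_monetary_benefit(cost_i, cost_c, effect_i, effect_c, threshold):
    """NMB = (Ei - Ec) * threshold - (Ci - Cc); negative means not cost-effective."""
    return (effect_i - effect_c) * threshold - (cost_i - cost_c)

print(round(icer(28_359, 9_739, 1.08, 0.72)))                              # about £51,722 per QALY
print(round(net_monetary_benefit(28_359, 9_739, 1.08, 0.72, 30_000), 2))   # -7820.0, i.e. -£7,820
```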
Trang 17Cost per QALY threshold
Cost-effectiveness acceptability curve – drug A
Cost per QALY threshold
Cost-effectiveness acceptability curve – drug B
0.00
Note: The probability that a drug is cost-effective can be estimated by plotting a line up from a chosen threshold on thehorizontal axis (e.g.£30,000 per QALY) to the curve and then across to read off the probability from the vertical axis Anapproximate lower (and upper) 95% confidence limit can be estimated by plotting a line across from 0.025 (0.975) on thevertical axis to the curve and then down to read off the cost per QALY limit from the horizontal axis
Interpreting the result
The cost effectiveness acceptability curve (CEAC) (Figure 13.4) is becoming a popular way of presenting the degree of certainty about the result of an economic evaluation. These graphs can be interpreted by scanning across the horizontal axis to a conventional cost-effectiveness threshold (£30,000 per QALY in the UK) and reading off the associated probability of cost-effectiveness from the vertical axis. In Figure 13.4, both drugs A and B are probably not cost-effective at the recommended threshold (p < 0.50). However, while drug A is almost certainly not cost-effective (approximate 95% confidence interval of £31,000 to £72,000 per QALY), the case is far from proven for drug B (approximate 95% confidence interval of £12,000 to £91,000 per QALY). A larger RCT with longer follow-up might provide a more definitive answer. Uncertainty can also be addressed through sensitivity analysis, where key assumptions of the analysis, for example the drug or device cost, are varied to determine the robustness of conclusions.
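The idea behind a CEAC can be sketched as follows: given simulated (or bootstrapped) incremental costs and incremental QALYs, the curve shows, for each threshold, the proportion of simulations in which the net monetary benefit is positive. The simulated values below are hypothetical and are not the data behind Figure 13.4.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated incremental costs (GBP) and incremental QALYs,
# e.g. from a probabilistic sensitivity analysis or a bootstrap of trial data
delta_cost = rng.normal(loc=18_620, scale=4_000, size=5_000)
delta_qaly = rng.normal(loc=0.36, scale=0.15, size=5_000)

# Full curve: probability of a positive net monetary benefit at each threshold
thresholds = np.arange(0, 100_001, 5_000)
ceac = [float(np.mean(t * delta_qaly - delta_cost > 0)) for t in thresholds]

# Read off the probability of cost-effectiveness at a few chosen thresholds
for t in (20_000, 30_000, 50_000):
    prob = float(np.mean(t * delta_qaly - delta_cost > 0))
    print(f"P(cost-effective) at £{t:,} per QALY: {prob:.2f}")
```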
What happens next?
Even if the benefits of an intervention have been clearly shown to justify the costs, these results form just one part of the decision-making process. Political objectives such as promotion of equality, and budgetary considerations (i.e. what will we stop doing in order to afford this new treatment?), will also be taken into account before the intervention is recommended.
Summary
Economic evaluation is a key component of evidence-based medicine. It represents a shift in thinking away from 'what is the most effective way of improving this patient's health?' and towards 'what is the most efficient way of using a health-care budget to optimise the health and wellbeing of the population?'
KEY LEARNING POINTS
• Economic evaluations allow one to make rational choices between treatments.
• Costs and benefits are commonly calculated from a health system or societal perspective.
• Costing requires identification, measurement and valuation of resources.
• There are four main types of economic evaluation: cost-effectiveness, cost-consequence, cost-benefit and cost-utility analysis.
• QALYs combine health-related quality of life and survival, enabling comparison of treatments across different domains of health care with a common metric.
• An economic evaluation may indicate that a new intervention is dominant (effective and cost-saving), dominated (ineffective and costly), or effective but more expensive.
• The trade-off between the costs and effectiveness of therapies can be summarised by the Incremental Cost Effectiveness Ratio and Net Monetary Benefit statistic.
• Statistical uncertainty can be quantified using a confidence interval or Cost Effectiveness Acceptability Curve.
• Sensitivity analyses are usually undertaken to see if the conclusions are robust to various assumptions.
FURTHER READING
Drummond MF, Sculpher MJ, Torrance GW, O'Brien BJ, Stoddart GL (2005) Methods for the Economic Evaluation of Health Care Programmes. Oxford: Oxford University Press.
Drummond MF, Richardson WS, O'Brien BJ, Levine M, Heyland D (1997) Users' guides to the medical literature. XIII. How to use an article on economic analysis of clinical practice. A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA 277: 1552–7.
O'Brien BJ, Heyland D, Richardson WS, Levine M, Drummond MF (1997) Users' guides to the medical literature. XIII. How to use an article on economic analysis of clinical practice. B. What are the results and will they help me in caring for my patients? Evidence-Based Medicine Working Group. JAMA 277: 1802–6.
Ramsey S, Willke R, Briggs A, et al. (2005) Good research practices for cost-effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA Task Force report. Value Health 8: 521–33.
In this chapter you will learn:
✓ how to describe the process around the audit cycle;
✓ what the general ethical principles are around research;
✓ what the role of the research ethics committee is;
✓ what special issues relate to interventional and observational studies;
✓ the principles around research with children and incapacitated
adults.
How do we know we are doing a good job?
It is common for health care professionals to review the management of patients when something goes very wrong, such as an unexpected death or serious complication post-surgery (critical incident analysis). However, problems with more minor events, e.g. wound infection rates, or mortality in high-risk patients, may not be detected without some sort of formal audit procedure which is intended to detect 'outliers'. These can be both positive (better than expected) or negative (worse than expected) rates of events, and the unit of analysis could be at the level of an individual clinician, a specialty within a hospital, or a hospital as a whole. For example, the Bristol Royal Infirmary inquiry investigated an excess number of children under the age of one dying from open heart surgery between 1991 and 1995 (between 30 and 35 additional deaths). It concluded:

'There was no systematic mechanism for monitoring the clinical performance of healthcare professionals or of hospitals. For the future there must be effective systems within hospitals to ensure that clinical performance is monitored. There must also be a system of independent external surveillance to review patterns of performance over time and to identify good and failing performance.' (www.bristol-inquiry.org.uk/)
The audit cycle
Audit is a form of quality improvement that aims to improve clinical care by critically examining existing practice and identifying any areas for concern. The necessary steps involve:

(1) choosing a topic for the audit;
(2) predefining acceptable standards or using the variation in the distribution of outcomes to identify outliers (see Figure 14.1);
(3) collecting relevant data to address the topic, including information on case mix or clinical severity;
(4) analysing the data so that performance is compared to expected outcomes;
(5) implementing any necessary recommendations;
(6) repeating the audit after a sufficient time period to enable any improvement to occur.
What's the difference between audit, service evaluation and research?
Unlike research, audit by definition is not designed to obtain new evidence but rather compares actual performance with some agreed level of quality standards. The findings may be unique to the individual hospital or health care system and not generalisable to other situations. Its aim is to improve health care delivery rather than identify new risk factors or new interventions that work. It is concerned with the appropriate implementation of evidence- or consensus-based guidelines rather than their development. It usually uses existing data rather than collecting new data, though the process of extracting those data may be similar to that used in research. Service evaluation can be considered even one stage earlier than audit, as its primary purpose is simply to measure what and how services are actually delivered, without reference to any specific quality standard as in audit. Both audit and research, however, may have ethical implications (see below), though usually audit and service evaluation do not require formal ethical review by a research ethics committee. Appendix 14.1 highlights the differences between research, audit and service evaluation.
[Figure 14.1 (image not available in this digital edition): Outcomes of renal replacement therapy across different renal units in the United Kingdom. The x-axis indicates whether the unit is large or small, and the graph shows different confidence intervals so one can infer the probability that the result may have occurred by chance. There are four high-performing units and one low-performing unit outside the 99.9% confidence limits. Source: Hodsman A, Ben-Shlomo Y, Roderick P, et al. (2011) The 'centre effect' in nephrology: what do differences between nephrology centres tell us about clinical performance in patient management? Nephron Clin Pract 119: c10–c17. Reproduced with permission from S. Karger AG, Basel.]

Ethical issues

Research ethics can be defined as the sustained analysis of motives of, procedures for and social effects of biomedical research (Murphy, 2004, p. 1). Any clinical, biomedical, epidemiological or
social-science research which involves direct contact with NHS patients or healthy participants should be undertaken in accordance with commonly agreed standards of good ethical practice. The Declaration of Helsinki, first written in 1963 by the World Medical Association, lays down a set of ethical principles for medical research. The fundamental and widely accepted ethical principles can be broadly classified as:

• Beneficence (to do good)
• Nonmaleficence (first, do no harm)
• Autonomy (the individual's right to choose)
• Justice (fairness and equality)
• Truthfulness (informed consent, confidentiality)
Historical events, such as the Nuremberg trials (Nazi doctors experimented on prisoners under the pretext of medical research) and the Tuskegee syphilis study (where African-American men with syphilis were never asked for consent and had penicillin knowingly withheld after its introduction, so that doctors could study the natural history of the disease), led to the need for a statement of ethical issues in research, such as the Declaration of Helsinki, and for arrangements for the ethical review of proposed research in order to protect research participants and promote high-quality research.
For research involving patients of the United Kingdom National Health Service (NHS), their tissue or their data, ethical review and a favourable ethical opinion is sought prospectively from an NHS Research Ethics Committee. Research undertaken by academic staff or students involving participants outside of the NHS should be reviewed by ethical committees within the host Higher Education Institution. Ethical review must occur before any research-related activity takes place. Other developed countries have different but equivalent bodies, such as Institutional Review Boards (IRBs) in the United States or Independent Ethics Committees. Ethics committees must not only consider key ethical aspects of the research but also its validity; poor quality research can be unethical because it may have no benefit in terms of new knowledge whilst having some risk for the participants. It may also put future participants at harm if the research is misleading (for example, the scare concerning MMR vaccination and risk of autism, which led to a decline in population vaccination rates).
Ethical issues in Randomised Controlled Trials (RCTs)
All research studies raise ethical issues, such as participant confidentiality. However, RCTs involve more difficult issues than observational studies, because they mean that the choice of treatment is not made by patients and clinicians but is instead devolved to a process of random allocation. This means that a patient in an RCT may receive a new untested treatment, or may not be able to choose a new active treatment if allocated to the placebo group.

Before one can undertake an RCT, the health professionals treating the patients must be uncertain about whether the treatments being evaluated are better, worse or the same as any existing treatment or a placebo. This is called clinical equipoise. If there is existing evidence that a new treatment is superior then clinicians should not participate. However, in reality, most clinicians will have some preference or 'hunch' that one treatment is better than another, but they will need to suspend these views to conduct an RCT to provide clear evidence. Often, RCT results are different from clinicians' hunches. For example, a recent large RCT of a drug that inhibits the cholesteryl ester transfer protein (CETP) and raises HDL-cholesterol, which is associated with a reduced risk of heart disease, actually found an increased risk of cardiovascular events. Despite improving HDL-cholesterol, it was unclear why patients on active treatment had a higher mortality rate, though the drug did unexpectedly raise the participants' blood pressure (Barter et al., 2007).
As described in more detail below, patients must give informed consent to participate in an RCT and must understand that the treatment they receive will be determined by chance through randomisation. If one of the treatments is a placebo then the patients must know this. They should not be coerced to take part or given financial incentives other than any expenses that arise from participation. Even if they consent to participate, they are entitled to withdraw from the study at any time, and this should in no way compromise their future treatment. For informed consent to be ethically valid the investigator must disclose all risks and benefits and the participant must be competent to understand this. An independent research ethics committee must review and approve studies before they are undertaken.
One special aspect of RCTs is the use of 'sham' procedures to maintain blinding. In a drug trial it is usually straightforward to create an identical-looking placebo so that participants cannot tell whether they are taking the active or placebo medication. This is more complex for nonmedical interventions, especially surgical interventions. In this case a sham procedure may be used, though this may carry risk in itself. For example, an RCT of foetal nigral transplantation for Parkinson's disease randomised patients to the insertion of aborted material using stereotactic surgery. The placebo group underwent the same procedure and had partial burr holes made in the skull, but no needle or foetal material was inserted (Olanow et al., 2003).
Ethical issues in observational studies
Observational studies are usually less problematic and of lower risk, as the researchers simply measure characteristics of the participants using questionnaires, tissue, imaging or physiological measures. One issue that may arise in such studies is the opportunistic identification of clinical abnormalities, and it is good practice to have an explicit protocol for how these will be handled, as well as obtaining consent from the participants as to whether they would wish to have this information fed back to them and/or their general practitioners. For example, many epidemiological studies will measure blood pressure, and there are clear evidence-based guidelines on what constitutes a level worthy of treatment if it is sustained over several readings or over a 24-hour period. However, studies of MRI brain imaging in the elderly will find a high prevalence of asymptomatic brain infarcts (around 18% in subjects between 75 and 97 years in the Rotterdam study). In this case it is less clear that feeding back abnormal results is helpful, as it may cause participant anxiety without necessarily any improvement in health care (Vernooij et al., 2007).
Informed consent
Informed consent is at the heart of ethical research. Most studies involving individuals must have appropriate arrangements for obtaining consent from potential research participants. Informed consent must be:
• voluntary and freely given;

Obtaining informed consent should be seen as a process of communication and discussion between researcher and participant. The researcher has a duty to ensure the participant truly understands what is being asked of them, and that they are willing to voluntarily give full, informed consent. Researchers should be very careful not to coerce the participant or to emphasise the potential benefits, nor attempt to minimise the risks or disadvantages of participation. Coercion may be implicit rather than explicit if the recruiting clinician has a long-standing relationship with the patient, who may find it hard to refuse the invitation. Participants have the right to ask questions of the researcher, and to be given reasonable time to consider their decision to participate before confirming their willingness to participate both verbally and in writing. All participants must have given informed consent before any aspect of the research starts.
Vulnerable groups (children and incapacitated adults)
Children
Informed consent must be obtained from the child's parent (or legal guardian) as appropriate. When parental consent is obtained, the assent (voluntary agreement) of the child should also be sought by researchers, as appropriate to the child's age and level of understanding. A full explanation of the research must be given to the parent (or legal guardian) of the child, in accordance with the principles described earlier, including the provision of written information and the opportunity for questions and time for consideration. The parent (or legal guardian) may then give informed consent for the child to participate in the study.
The child should also be given information about the research. This will be age-appropriate and offered according to the child's level of understanding. Often the use of visual aids or cartoons can explain basic information for young children. Verbal assent should be sought from the child, and recorded in the research notes, as well as in the child's medical record (for clinical trials). Older children may wish to sign a consent form. For children over the age of 16 this would constitute legally valid consent.
Written information provided to children should be written in age-appropriate language that the child can understand. Different versions of the research information should therefore be produced for different age ranges, e.g. under 5s, 6–12 year olds, 13–15 year olds and over 16s.
Incapacitated adults
Incapacitated adults do not have the mental capacity to make decisions for themselves. This may be because of unconsciousness, mental illness or other causes, to the extent that the person does not have sufficient understanding or ability to make or communicate responsible decisions. Special arrangements exist to ensure that the interests of incapacitated adults recruited into research studies are protected. For investigational medicinal product (drug) trials, or trials of medical devices, in England, Wales and Northern Ireland the provisions for inclusion of incapacitated adults are laid down in the Medicines for Human Use (Clinical Trials) Regulations 2004, as amended. In Scotland, these regulations and also the Adults with Incapacity (Scotland) Act 2000 (regulations 4 to 16 and Parts 3 and 5 of Schedule 1) will also apply. Such requirements are considered suitable for other types of clinical research.
When considering a patient who is unable to consent for themselves for suitability for a trial, the decision on whether to consent to, or refuse, participation in the trial will be taken by a legal representative who is independent of the research team and should act on the basis of the person's presumed wishes. The type and hierarchy of legal representative who should be approached to give informed consent on behalf of an incapacitated adult prior to inclusion of the subject in the trial is given in Table 14.1 (note that arrangements for Scotland are slightly different).
Table 14.1 Type and hierarchy of legal representative who can give informed consent on behalf of an incapacitated adult prior to inclusion of the subject in the trial.
1. Personal legal representative
England, Wales and Northern Ireland: a person not connected with the conduct of the trial who is (a) suitable to act as the legal representative by virtue of their relationship with the adult, and (b) available and willing to do so.
Scotland: 1A – any guardian or welfare attorney who has power to consent to the adult's participation in research; 1B – if there is no such person, the adult's nearest relative as defined in section 87(1) of the Adults with Incapacity (Scotland) Act 2000.

2. Professional legal representative
England, Wales and Northern Ireland: a person not connected with the conduct of the trial who is (a) the doctor primarily responsible for the adult's medical treatment, or (b) a person nominated by the relevant health care provider (e.g. an acute NHS Trust or Health Board). A professional legal representative may be approached if no suitable personal legal representative is available.
Scotland: a person not connected with the conduct of the trial who is (a) the doctor primarily responsible for the adult's medical treatment, or (b) a person nominated by the relevant health care provider. A professional legal representative may be approached if it is not reasonably practicable to contact either 1A or 1B before the decision to enter the adult into the trial is made. Informed consent must be given before the subject is entered into the trial.
The appropriate legal representative should be provided with an approved Legal Representative Information Sheet and Legal Representative Informed Consent Form to document the consent process.
The consent given by the legal representative remains valid in law even if the patient recovers capacity. However, at this point the patient should be informed about the trial and asked to decide whether or not they should continue in the trial, and consent to continue should be sought.
Research governance
Research governance can be defined as the broad range of regulations, principles and standards of good practice that exist to achieve, and continuously improve, research quality across all aspects of health care in the UK and worldwide. In the UK, the Department of Health published the first Research Governance Framework for Health and Social Care in 2001; this was updated in 2005 and sets out to:
• safeguard participants in research;
• protect researchers/investigators (by providing a clear framework to work within);
• enhance ethical and scientific quality;
• minimise risk;
• monitor practice and performance;
• promote good practice and ensure lessons are learned.
Research governance includes research that is concerned with the protection and promotion of public health, undertaken in or by the Department of Health, its non-Departmental Public Bodies and the NHS, or within social care agencies. It includes clinical and nonclinical research, and any research undertaken by industry, charities, research councils and universities within the health and social care systems. Everyone who undertakes health-care research (research involving individuals, their tissue or their data) therefore has responsibilities for research governance. This includes lead researchers, research nurses and students undertaking research, as well as NHS organisations where research takes place and universities who may employ or supervise researchers or act as sponsor organisations.
Research governance should be considered at all stages of the research, from the initial development and design of the research project, through its set-up, conduct, analysis and reporting. Researchers need to ensure that:
• day-to-day responsibility for elements of each research project is clearly stated;
• research follows the agreed protocol;
• research participants receive the appropriate care while participating in the research;
• data protection, and the integrity and confidentiality of all records, are maintained;
• adverse incidents or suspected misconduct are reported.
Research governance approval is required from any NHS Trust before the research can take place on their premises, or access their patients, their tissue or their data. All research documents, such as the research protocol, participant information sheets and informed consent forms, details of NHS Research Ethics Committee approval and the researcher's CV, are submitted for governance checks. Current systems for multi-centre research review the research governance compliance at a nominated lead NHS Trust, and local information only is submitted to the local NHS Trusts. The Integrated Research Application System (www.myresearchproject.org.uk) is used for submission of research information to NHS Research Ethics Committees as well as for NHS research governance approval.
KEY LEARNING POINTS
• Audit is a process to ensure that delivery of health care meets accepted standards of care, and it can identify exemplars of both very good and very poor practice.
• To complete the audit cycle, one must demonstrate that any identified deficiencies have been acted upon and that there has been improvement.
• All research has ethical implications, but these tend to be more serious with RCTs than with observational studies, especially around the issue of clinical equipoise. RCTs may also use sham procedures to maintain blinding.
• In general terms, it is essential to avoid any unnecessary harm to participants, ensure they are fully informed prior to consent, and maintain participant confidentiality.
• Studies of children need to seek child assent as well as parental consent.
• Special rules apply to research with incapacitated adults, where it needs to be shown that the research could not be done in any other way and is in the participants' best interest.
• Research ethics committees must approve research studies before they commence, and there are often governance procedures that ensure that the research is undertaken to the highest level.
REFERENCES
Barter PJ, Caulfield M, Eriksson M, et al. (2007) Effects of torcetrapib in patients at high risk for coronary events. N Engl J Med 357: 2109–22.
Hodsman A, Ben-Shlomo Y, Roderick P, et al. (2011) The 'centre effect' in nephrology: what do differences between nephrology centres tell us about clinical performance in patient management? Nephron Clin Pract 119: c10–c17.
Olanow CW, Goetz CG, Kordower JH, et al. (2003) A double-blind controlled trial of bilateral fetal nigral transplantation in Parkinson's disease. Ann Neurol 54: 403–14.
Vernooij MW, Ikram MA, Tanghe HL, et al. (2007) Incidental findings on brain MRI in the general population. N Engl J Med 357: 1821–8.
FURTHER READING
Bristol Royal Infirmary Inquiry (2001) Learning from Bristol: the report of the public inquiry into children's heart surgery at the Bristol Royal Infirmary 1984–1995. Norwich: Stationery Office (CM 5207). Available at www.bristol-inquiry.org.uk/
Campbell A, Jones G, Gillett G (2001) Medical Ethics, 3rd edn. Oxford: Oxford University Press.
Hope T, Savulescu J, Hendrick J (2008) Medical Ethics and Law: The Core Curriculum, 2nd revised edn. London: Churchill Livingstone.
Murphy T (2004) Case Studies in Biomedical Research Ethics. Cambridge, MA: MIT Press.
UK National Research Ethics Service website: http://www.nres.npsa.nhs.uk/
Appendix 14.1 Differentiating research, service evaluation and clinical audit
Research
• The attempt to derive generalisable new knowledge, including studies that aim to generate hypotheses as well as studies that aim to test them.
• Quantitative research – designed to test a hypothesis; qualitative research – identifies/explores themes following established methodology.
• Quantitative research – may involve evaluating or comparing interventions, particularly new ones; qualitative research – usually involves studying how interventions and relationships are experienced.
• Usually involves collecting data that are additional to those for routine care, but may include data collected routinely; may involve treatments, samples or investigations additional to routine care.
• Quantitative research – study design may involve allocating patients to intervention groups; qualitative research – uses a clearly defined sampling framework underpinned by conceptual or theoretical justifications.
• Normally requires REC review; refer to www.nres.npsa.nhs.uk/applications/apply/ for more information.

Service evaluation∗
• Designed and conducted solely to define or judge current care.
• Involves an intervention in use only; the choice of treatment is that of the clinician and patient, according to guidance, professional standards and/or patient preference.
• Usually involves analysis of existing data, but may include administration of an interview or questionnaire.
• No allocation to intervention: the health professional and patient have chosen the intervention before the service evaluation.
• Does not require REC review.

Clinical audit
• Designed and conducted to produce information to inform delivery of best care.
• Measures against a standard.
• Involves an intervention in use only; the choice of treatment is that of the clinician and patient, according to guidance, professional standards and/or patient preference.
• Usually involves analysis of existing data, but may include administration of a simple interview or questionnaire.
• No allocation to intervention: the health professional and patient have chosen the intervention before the audit.
• Does not require REC review.

∗Service development and quality improvement may fall into this category.
Source: adapted from Defining Research: NRES guidance to help you decide if your project requires review by a Research Ethics Committee, NHS National Patient Safety Agency, 2010.
Self-assessment questions – Part 2: Evidence-based medicine
Q1 Do tired doctors make more medical errors?
(modified from Landrigan et al., N Engl J Med
2004)
Newly qualified doctors (PRHOs) typically work the greatest number of hours per week, which may mean that they are especially prone to fatigue-related errors. Whilst there is evidence that sleep deprivation impairs neurobehavioural performance, it is still unclear whether there is an increased risk of medical errors.
The Intern Sleep and Patient Safety Study was conducted in the medical intensive care unit and coronary care unit of a large academic hospital in Boston. PRHOs between July 2002 and June 2003 were allocated to work either the traditional schedule (a work week averaging 77 to 81 hours, with up to 34 continuous hours of scheduled work when clinic occurred after they were on call) or the intervention schedule (maximal scheduled hours 60–63 per week, with consecutive hours of work limited to approximately 16 hours). The allocation was done by the research team using random numbers, and none of the team had any prior knowledge of the PRHOs' past academic record or clinical performance. During any week, all PRHOs on each unit were working the same schedule. The aim of the intervention schedule was to improve opportunities for sleep while minimising errors. The primary outcome of the study was the number of serious medical errors in which PRHOs were directly involved; this was recorded through direct observation by physicians. Blinding of these physician observers was not possible as they had to undertake the same work patterns.

During the study there were 634 admissions in total. The patients' and PRHOs' characteristics were very similar within the two schedules. All the PRHOs who were randomised completed the study, except for one doctor allocated to the traditional schedule who dropped out due to ill health. Table QB.1 presents the rates of serious errors and preventable adverse events per 1000 patient-days within each schedule.
admis-(a) What is the null hypothesis?
(b) Does the table provide any evidence of an association between the type of schedule and the rate of 'serious medical errors'? Comment on the rate ratio, confidence interval and P value.
(c) What type of errors does the intervention schedule most effectively reduce? Justify your answer.
(d) Why was it important that the researcher who decided whether a PRHO was allocated to either the intervention or traditional schedule had no knowledge about the PRHO? What do we call the design procedure employed in this type of study to prevent this potential bias? If this does not occur, how could the results have been biased?
(e) What is meant by 'blinding' and did it occur in this study? If a study is unblinded, what might occur? How could this have affected the results in this study?
(f) What other type of bias may affect this type of study design? Did it occur in this specific study?
Table QB.1 Incidence of serious medical errors.
Columns: rate per 1000 patient-days in the intervention schedule and in the traditional schedule; rate ratio∗; 95% confidence interval; P value. (Data rows not available in this digital edition.)
∗Rate ratio can be interpreted similarly to the risk ratio.
#Injury due to a non-intercepted serious error in medical management.
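The rate ratio and confidence interval asked about in question (b) are derived from the number of events and the person-time in each arm. The sketch below shows the standard calculation; the counts are invented for illustration, since the trial's actual figures are not reproduced in the table above.

```python
from math import exp, log, sqrt

# Invented counts: serious errors and patient-days in each arm (not the trial's data).
events_intervention, days_intervention = 100, 1000
events_traditional, days_traditional = 130, 1000

rate_ratio = (events_intervention / days_intervention) / (events_traditional / days_traditional)

# Approximate 95% CI, built on the log scale.
se_log_rr = sqrt(1 / events_intervention + 1 / events_traditional)
lower = exp(log(rate_ratio) - 1.96 * se_log_rr)
upper = exp(log(rate_ratio) + 1.96 * se_log_rr)

print(f"Rate ratio = {rate_ratio:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

A confidence interval that excludes 1 corresponds to a P value below 0.05 for the null hypothesis of no difference between the schedules.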
(g) In addition to the potential biases mentioned in questions d–f, what other explanations should be considered before concluding that the intervention schedule reduces the rate of serious errors? How likely is each of these?
(h) Assuming the association between work schedule and serious errors is true, why is it important to look at the association between work schedule and preventable adverse events before any policy changes are made? What else should also be considered?
(i) Does the intervention schedule lead to fewer preventable adverse events? Please comment on the rate ratio, confidence interval and P value. How precise is the estimate of the rate ratio?
(j) How generalisable are the findings? Would you suggest implementation of the intervention schedule in the UK?
Q2 Blinding in RCTs
For each of the following trial designs, state whether it is possible to blind patients and/or researchers to the allocation of treatments.
(a) A drug trial of a new antidepressant compared to placebo, where the outcome measure is a patient-rated depression scale.
(b) Group versus individual speech therapy sessions for patients who have a language difficulty (dysphasia) after a stroke, where the outcome measure is a recorded conversation reading a specific piece of text.
(c) A hypertension trial where patients are given a drug (beta-blocker) that also slows down heart rate, or an identical-looking placebo, and a nurse measures the blood pressure.
(d) An RCT of two different joint prostheses for osteoarthritis of the hip, with the outcome measures being hip flexion without pain assessed by the researcher and a patient-administered quality of life score.
(e) A trial of acupuncture for arthritic pain using a patient-completed pain index and participants who have never previously had acupuncture.
Q3 Economic evaluation
The economic evaluation was conducted as part of a randomised controlled trial which examined the effectiveness, acceptability and accessibility of a general practitioner with special interest (GPSI) dermatology service compared with routine hospital outpatient care. Patients who were referred to a hospital outpatient dermatology clinic and were deemed suitable to be managed by a general practitioner with special interest were randomised to either usual care (i.e. hospital outpatient care) or the GPSI service. Resources were measured for the patients for nine months following randomisation. Most of the NHS resources were measured through computerised systems. Resources used by patients and their companions, and information on time off work (lost production), were measured through patient self-completed postal questionnaires.
Two types of evaluation using differing perspectives were conducted:
(I) A primary outcome measure, the dermatology life quality index score, and costs were evaluated from an NHS perspective.
(II) This outcome measure and others (an access score, patient satisfaction with the consultation, satisfaction with facilities, attendance rates and waiting times) and costs were evaluated from several perspectives (NHS, patient and companion, societal lost production).
A sensitivity analysis was conducted which extended the collection of NHS resources for an additional 3 months (12 months in total).
(Based on Coast J, Noble S, Noble A, Horrocks S, Asim O, Peters TJ, Salisbury C (2005) BMJ 331(7530): 1444–8.)
(a) Identify the types of resource use that you would collect if the evaluation was from:
    i. an NHS/health service provider perspective;
    ii. a societal perspective.
(b) What are the potential problems with using self-completed questionnaires to measure patient costs and time off work?
(c) A cost-effectiveness analysis could be used for the first type of evaluation. What is the PICO for this analysis?
(d) What type of economic evaluation might you use for the second type of evaluation?
(e) The cost-effectiveness analysis will give evidence as to which is the most technically efficient provision of care. If this evaluation was to aid the creation of an allocatively efficient health care system, what outcome measure(s) should be used?
(f) Comparing the new GPSI service with routine outpatient care, the results from the first type of evaluation were in the North East quadrant of the cost-effectiveness plane. What does this mean?
(g) How should uncertainty in relation to the cost-effectiveness analysis be represented?
(h) Why do you think the sensitivity analysis was conducted?
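Question (f) refers to the cost-effectiveness plane. The sketch below shows how an incremental cost-effectiveness ratio (ICER) is calculated and which quadrant the result falls in; the costs and outcome gains are hypothetical and are not taken from the GPSI trial.

```python
# Hypothetical mean cost (pounds) and mean outcome gain per patient in each arm;
# these values are illustrative only, not results from the GPSI trial.
cost_new, effect_new = 220.0, 4.0        # new GPSI-style service
cost_usual, effect_usual = 180.0, 3.5    # routine outpatient care

delta_cost = cost_new - cost_usual        # incremental cost
delta_effect = effect_new - effect_usual  # incremental effect

if delta_cost > 0 and delta_effect > 0:
    quadrant = "North East: more effective but more costly"
elif delta_cost <= 0 and delta_effect > 0:
    quadrant = "South East: more effective and no more costly (new service dominates)"
elif delta_cost > 0 and delta_effect <= 0:
    quadrant = "North West: more costly and no more effective (usual care dominates)"
else:
    quadrant = "South West: cheaper but less effective"

print(quadrant)
if delta_effect != 0:
    print(f"ICER = {delta_cost / delta_effect:.1f} extra cost per unit of outcome gained")
```

In the North East quadrant the decision turns on whether the ICER is below the maximum that the decision maker is willing to pay for a unit of health gain.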
Q4 Diagnostic tests
(a) The abstract of a recent publication has reported the following performance measures for a new diagnostic test: sensitivity 50%, specificity 98%, positive predictive value 86%, negative predictive value 89%.
    i. What percentage of patients with the disease are correctly identified by the test?
    ii. Given a positive test result, what is the risk of having the disease?
    iii. What is the false positive rate of this test?
    iv. What percentage of patients with a negative test will still have the disease?
    v. Is this test more useful for ruling out or ruling in the diagnosis?
(b) When assessing a new diagnostic test:
    i. It should have a high specificity (>90%) if it is important not to miss new cases of disease. True or false?
    ii. If it is more expensive than existing tests it should not be introduced. True or false?
    iii. If subsequent investigation of people who test positive is invasive and risky, then the test should have a high positive predictive value. True or false?
    iv. Cohort studies are the best study design to evaluate whether a new diagnostic test improves health. True or false?
    v. The best diagnostic test has 100% sensitivity and 100% specificity. True or false?
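All of the measures quoted in (a) come from a 2 × 2 table of test result against true disease status. The sketch below shows the standard calculations; the counts are hypothetical (they assume a disease prevalence of 20%), although with these numbers they happen to reproduce approximately the figures quoted in the question.

```python
# Hypothetical 2 x 2 table: 500 people tested, disease prevalence 20%.
tp = 50    # diseased and test positive
fn = 50    # diseased but test negative
fp = 8     # not diseased but test positive
tn = 392   # not diseased and test negative

sensitivity = tp / (tp + fn)        # proportion of diseased people correctly identified
specificity = tn / (tn + fp)        # proportion of non-diseased people correctly identified
ppv = tp / (tp + fp)                # probability of disease given a positive test
npv = tn / (tn + fn)                # probability of no disease given a negative test
false_positive_rate = 1 - specificity

print(f"Sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}, false positive rate {false_positive_rate:.0%}")
```

Note that PPV and NPV depend on the prevalence of disease in the tested population, whereas sensitivity and specificity do not.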
Q5 Prognosis
State which of the following questions are about prognosis (T) and which are not (F):
(a) A 2-year-old girl has glue ear. Is she likely to have long-term hearing impairment?
(b) A 60-year-old man has just been diagnosed with lung cancer. What is his chance of surviving 10 years?
(c) A 9-year-old boy has symptoms of dystonia (a neurological movement disorder). What is the most likely cause?
(d) An injecting drug user (female, 30 years old) has just been diagnosed with hepatitis C. How likely is she to develop liver cancer?
(e) A nursery-age child is diagnosed with measles. Is he likely to pass this on to unvaccinated family members through household contact?
Q6 Systematic reviews
The figure shows the results from a Cochrane systematic review of randomised controlled trials of exercise-based rehabilitation for people with coronary heart disease. The majority of patients randomised had suffered an acute myocardial infarction and were middle-aged men. The following questions refer to this systematic review.
Figure (forest plot not reproduced in this digital edition). Review: Exercise-based rehabilitation for coronary heart disease. Comparison 01: exercise only versus usual care; outcome 01: total mortality. For each trial the plot shows the number of deaths/number randomised (n/N) in the treatment and control arms, the relative risk (fixed effect) with its 95% CI, and the weight (%), together with the pooled estimate. Test for heterogeneity: chi-square = 10.50, df = 11, p = 0.4858. Test for overall effect = –2.08, p = 0.04.
Which of the following statements are true/false?
(a) The results are presented as a field plot.
(b) Whether treatment increased or decreased mortality is shown in the figure.
(c) The pooled effect shows that exercise-based cardiac rehabilitation is better than usual care.
(d) The trials with the longest bars are the biggest.
(e) The size of the square for each trial represents the precision of the estimate.
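A fixed-effect meta-analysis of the kind summarised above pools the trial-specific relative risks on the log scale using inverse-variance weights, which is why larger, more precise trials carry more weight. The sketch below illustrates the mechanics with three invented trial results; it is not a re-analysis of the Cochrane review.

```python
from math import exp, log, sqrt

# Invented relative risks with 95% confidence intervals for three trials.
trials = [
    (0.80, 0.55, 1.15),
    (0.60, 0.30, 1.20),
    (1.30, 0.70, 2.40),
]

weights, weighted_log_rrs = [], []
for rr, lo, hi in trials:
    se = (log(hi) - log(lo)) / (2 * 1.96)   # SE of log RR recovered from the CI width
    weight = 1 / se ** 2                     # inverse-variance (fixed-effect) weight
    weights.append(weight)
    weighted_log_rrs.append(weight * log(rr))

pooled_log_rr = sum(weighted_log_rrs) / sum(weights)
pooled_se = sqrt(1 / sum(weights))
pooled_rr = exp(pooled_log_rr)
lower, upper = exp(pooled_log_rr - 1.96 * pooled_se), exp(pooled_log_rr + 1.96 * pooled_se)
print(f"Pooled RR = {pooled_rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```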
Part 3
Public health
In this chapter you will learn:
✓ to define public health and distinguish between public health and individual health care;
✓ to identify how public health is measured or diagnosed;
✓ to identify types of public health intervention.
Introduction
Public health has been defined as 'the science and art of preventing disease, prolonging life, and promoting health through organised efforts of society'. Public health focuses on improving the health of entire populations rather than on individual patients. The population is the patient. Tools for improving population health range from the development of new clinical services for treating disease, screening programmes to detect disease at an early (treatable) stage, and immunisation to prevent the transmission of infectious diseases, to legislation to prohibit actions or behaviours, or health improvement in schools and workplaces. Public health aims to encompass the whole clinical iceberg (see Figure 15.1).

Public health seeks to target all ill health, comprising the population that are asymptomatic or prodromal (unaware of illness) and the population that have not yet presented to medical services, as well as the population being managed by health care services.
Public health practice
Public health is an interdisciplinary practice – with public health specialists operating locally, nationally and internationally; within healthcare services, local authorities, the voluntary sector and other government bodies – and drawing on a multitude of skills. As we write this chapter, UK public health along with the NHS is being reorganised. This is not new – and however public health is organised, the type of problems specialists tackle and the approach they take will be similar. What is common and critical is the population approach, i.e. that health needs are identified and assessed, and interventions delivered and evaluated, at a population level.
Figure 15.1 The clinical iceberg (not reproduced in this digital edition). Layers shown: well; diseased but not yet aware of illness; aware of illness but not sought advice; known to medical services.
The three domains of public health in the UK are health improvement, improving services and health protection. (The accompanying figure, not reproduced in this digital edition, gives examples within these domains, including inequalities, family/community issues, clinical effectiveness, clinical governance, evaluation, surveillance and monitoring of specific diseases and risk factors, infectious diseases, poisons, environmental health hazards and emergency response.)
Many of these are covered elsewhere in the book – including infectious disease epidemiology and health protection, and evidence-based medicine, which underpins improving services. The issues in health improvement emphasise that public health aims to reduce inequalities as well as improve population health, and that public health interventions can operate at multiple levels. As well as the domains above, public health is concerned with Health Impact Assessment (HIA) – UK and international examples of which are collated at the HIA Gateway. HIA is defined by WHO as 'A combination of procedures, methods and tools by which a policy, programme or project may be judged as to its potential effects on the health of a population, and the distribution of those effects within the population'.
The United States Centers for Disease Control (CDC) has identified ten public health achievements of the twentieth century (Figure 15.2).
It is estimated that the average lifespan in the US increased by >30 years during the twentieth century, and that 25 years (∼3/4) of the gain is attributable to public health. Similar gains have occurred in other industrial/developed countries. The contributions of public health interventions include: eradication of smallpox, elimination of polio and control of many other infections through 'vaccination'; reductions in motor vehicle accidents due to improvements in driving and 'motor-vehicle safety'; reductions in environmental exposure and occupational injury through ensuring 'safer work places'; improved sanitation, availability of clean water and antimicrobials leading to better 'control of infectious diseases'; reductions in risk behaviours, such as smoking, together with control of blood pressure and early detection, contributing to a decline in 'deaths from coronary heart disease and stroke'; better hygiene and nutrition of mothers and babies contributing to reductions in neonatal and maternal mortality; and the many anti-smoking campaigns and policies, which have led to changes in public perceptions of the dangers and acceptability of smoking and to substantial reductions in smoking and environmental exposure to tobacco in the population. Morbidity and socioeconomic circumstances have also improved through 'safer and healthier foods' reducing the frequency of diseases associated with nutritional deficiency (such as rickets and pellagra); family planning/contraceptive services reducing family size and the transmission of sexually transmitted infections; and fluoridation, which has made substantial reductions in tooth decay and loss.
Figure 15.2 Ten public health achievements of the twentieth century in the United States:
1. Vaccination
2. Motor-vehicle safety
3. Safer workplaces
4. Control of infectious diseases
5. Decline in deaths from coronary heart disease and stroke
6. Safer and healthier foods
7. Healthier mothers and babies
8. Family planning
9. Fluoridation of drinking water
10. Recognition of tobacco use as a health hazard
Source: http://www.cdc.gov.
These represent a range of different interventions:
• primary prevention: such as vaccination and health promotion campaigns; legislation and enforcement to promote safer driving and safer workplaces;
• environmental and social changes: including improved nutrition and the availability of clean water and fluoridated water;
• medical advances: such as hygiene during childbirth and other surgical interventions, and more aggressive identification and treatment of early signs of heart disease.
The health problems affecting low and middle income countries differ from those of industrialised nations – and some of the public health achievements of the twentieth century identified above are as yet unresolved in developing countries. For example, there were approximately 6.5 million deaths in children under five in African and Southeast Asian countries in 2008 (Black et al., 2010). The top six causes were: pneumonia (18%); diarrhoea (15%); neonatal birth complications (12%); neonatal asphyxia (9%) or sepsis (6%); and malaria (8%). Key interventions to prevent these deaths include: clean/sterile delivery, nutrition and correction of nutritional deficiencies (e.g. vitamin A and zinc), antibiotics, water sanitation, vaccination (Hib and measles), oral rehydration, and mosquito (insecticide-treated) nets. Key public health interventions in developed and developing countries can be medical, educational, social or legal.
Public health diagnosis
The steps to improving public health are analogous to clinical medicine, but with slightly different tools. We need to 'diagnose' the problem. The tools at our disposal are information on the population, routine data on mortality and morbidity, hospitalisation, public health surveillance data, population health surveys and other epidemiological studies. Some of the key sources used in the UK are listed below. These allow us to measure rates of disease in the population, and to consider variations in health and disease in different populations and over time:
(6) notification data: infectious disease (e.g. meningitis, mumps), congenital malformations, maternal deaths;
(7) laboratory reporting statistics (e.g. sentinel surveillance and other routine information on laboratory tests and diagnoses);
(8) morbidity surveys: National Psychiatric Morbidity Survey; Health Survey for England (to give prevalence of e.g. CVD, asthma);
(9) lifestyle surveys: General Health Survey, Health Survey for England (prevalence of smoking, obesity, alcohol, exercise, etc.);
(10) qualitative and quantitative research studies;
(11) national confidential enquiries – e.g. maternal mortality, peri-operative deaths, and suicide and homicide by people in contact with mental health services.
In England, analysis of these data sources is routinely undertaken at a local level and published as a Joint Strategic Needs Assessment, to inform commissioning of NHS and local authority services, and in an annual independent report of the Director of Public Health. Importantly, these tools also allow public health specialists to address specific questions such as:
• Is the CHD death rate in a local population higher than the regional or national average, and are the rates of revascularisation higher or lower than expected?
• Do increases in cannabis exposure cause increases in schizophrenia?
• Is breast cancer survival in Britain improving compared to other European countries?
• What causes Sudden Infant Death Syndrome (cot death) and how can we prevent it?
• Is childhood obesity increasing?
• Has Chlamydia screening reduced the incidence of pelvic inflammatory disease?
• What strategies could be used to prevent suicide?
and many more.
These questions can be reformulated as a PICO, as discussed earlier – the difference is that we are measuring the intervention and the outcome at a population level.
Public health interventions
After making our diagnosis we need to prescribe an intervention or select a management approach that can address the health problem. This will involve a critical appraisal of the evidence on the effectiveness of alternative interventions, in the same way as evidence-based medicine is recommended for clinical practice. For example, from 1995 to 2005 the proportion of children (aged 2–15) classified as obese increased from 11% to 18% (establishing the need for public health action). A Cochrane review of primary prevention studies suggested that the majority did not demonstrate strong evidence and that many studies were limited in design, duration or analysis, but that comprehensive strategies which address diet and physical activity, change the environment, and involve psychosocial support perhaps have the best chance of preventing obesity (Summerbell, 2005). Strategies adopted by local governments and health trusts, therefore, have tended to be multifaceted.
Unlike clinical medicine, the implementation or delivery of the intervention may require the development of health strategies, mobilisation of resources, the introduction of new services and persuasion of other government partners to adapt or change their policies (as highlighted below).
Improving population health can involve restricting individual freedom. Since public health interventions and strategies operate at a population level, they may create a tension between individual choices and the public health. For example, local residents in southern England launched a legal challenge, through a judicial review, of the decision by South Central Strategic Health Authority to add fluoride to the local water supply.
Cigarette smoking, heavy drinking and obesity are related to multiple causes of death and cause substantial premature mortality in the UK. For instance:
• Smoking is related to over 40 causes of death and morbidity; approximately 100,000 deaths per year are due to smoking, which kills around a third of persistent smokers.
• Alcohol abuse and misuse is associated with over 50 causes of death and morbidity, with estimates of direct and indirect causes of mortality ranging from 20,000 to 70,000 deaths per year; population levels of drinking in the UK have increased in the last 20 years, consistent with increases in the number of people dying from liver disease (one of the few major causes of death which is increasing, and for which the mean age of diagnosis and death is decreasing).
• By 2008 approximately 1 in 4 men and women were obese, a twofold and 150% increase respectively. Obesity is associated with Type 2 diabetes, osteoarthritis, coronary heart disease and some cancers, and will add substantially to the risk of death and disability if present with smoking and alcohol misuse.
Interventions to reduce these behaviours have adopted a range of methods and styles. The Nuffield Council on Bioethics classified a Public Health Intervention Ladder according to the degree of social control on individual choice (see Table 15.1).
Table 15.1 Public Health Intervention Ladder
Eliminate choice: introduce laws that entirely eliminate choice. Examples: seatbelt legislation; drink-drive laws; bans on alcohol sales to those under 18.
Restrict choice: introduce laws that restrict the options available to people. Examples: banning smoking in public places; banning transfatty acids as an ingredient of processed food in restaurants∗; banning 'happy hours' and reducing drinking hours∗.
Guide choice through disincentives: introduce financial or other disincentives to influence people's behaviour. Examples: increasing taxes on cigarettes; introducing a minimum price per unit of alcohol∗.
Guide choices through incentives: introduce financial or other incentives to influence people's behaviours. Examples: offering tax breaks on buying bicycles for travelling to work; subsidising gym membership; contingency management (interventions in which substance-misusing people receive tangible, positive reinforcers for objective evidence of behaviour change) as part of the treatment of people with alcohol, smoking or weight problems.
Guide choices through changing the default policy. Examples: changing the standard side dish in school meals from chips to a healthier alternative; changing the nature of drinking environments so that alcohol is served with food∗.
Enable choice: help individuals to change their behaviours. Examples: providing free 'smoking cessation' programmes; exercise prescription; providing brief interventions for alcohol or smoking in general practice, A&E and other health and social settings.
Provide information: inform and educate the public. Examples: campaigns to encourage people to walk more or eat five portions of fruit and vegetables a day; food labelling to identify healthy and unhealthy foods∗; smoking warnings on cigarette packs; information on recommended drinking levels and units of alcohol.
Do nothing or simply monitor the current situation. Examples: monitoring trends in alcohol harm, such as emergency and inpatient admissions and assaults (http://www.nwph.net/alcohol).
Note: several of the interventions suggested above, marked with an asterisk (∗), have yet to be introduced.
Public health action
We examine two further examples of public health analysis and intervention: the prevention of suicide and of sudden infant death syndrome (SIDS).
Suicide Prevention
Suicide is an important cause of premature mortality, especially among young people, and is the most severe outcome of mental illness. For example, each year during the 1990s in the UK there were approximately:
• 4,500 deaths from suicide;
• >200,000 hospital attendances for self-harm;
• 1 million individuals experiencing suicidal thoughts;
• 2 million people prescribed an antidepressant;
• 5 million adults with neurotic symptoms;
• 150 million working days lost through mental illness.
Suicide is a complex public health problem (as shown in Figure 15.3).
There are multiple potential influences on and causes of suicide. Another way of looking at suicide and injury prevention is to construct a Haddon Matrix (Haddon, 1999), which describes risk and protective factors in terms of the person, agent or event, and environment, and whether they occur before, during or after the injury. We will focus on one potential source of prevention – the method of suicide – and show how the 'restriction' or 'elimination' of choice can reduce the overall number of suicides. The figures show that for men and women the rate of suicide by domestic gas fell as coal gas, which had a high carbon monoxide (CO) content and was highly toxic, was gradually replaced by natural gas (low CO), until by the 1970s there was no domestic supply based on coal gas and no suicides from this method.
Figure 15.3 (not reproduced in this digital edition): a diagram of the influences on suicide. Elements shown include environmental influences on neurodevelopment (e.g. childhood abuse, loss of a parent, low birthweight); mental illness (depression, schizophrenia); impulsive behaviour in response to life events; suicidal thoughts and attempted suicide; choice of method/method availability (media, culture); availability of effective treatments/antidotes; protective factors (motherhood, social support, help seeking, religious sanctions against suicide); and personal and cultural acceptability (which may influence some, but not all, domains in the figure). Source: after Gunnell D, Lewis G (2005) Studying suicide from the lifecourse perspective: implications for prevention. Br J Psychiatry 187: 206–8. With permission from The Royal College of Psychiatrists.
Figure 15.4 (not reproduced in this digital edition): suicide rates in the United Kingdom, 1955–1971, for men and women, showing deaths by domestic gas, all other methods and the total. Source: after Kreitman N (1976) The coal gas story: United Kingdom suicide rates, 1960–71. Br J Prev Soc Med 30: 86–93, with permission from BMJ Publishing Group Ltd.
What is remarkable – and critical to suicide prevention – is that the reduction in deaths from domestic gas has not been replaced by other methods. In the 1950s and early 1960s coal gas was the commonest method of suicide; its withdrawal and eventual removal led to approximately 9,000 fewer suicides (Figure 15.4).
This observation has led to interest in controlling other methods of suicide. For example, the introduction of legislation restricting the quantity of paracetamol that could be bought over the counter is associated with a reduction in suicide deaths due to paracetamol, as well as declines in liver transplants. More recent bans on co-proxamol are also estimated to have prevented around 150–250 poisoning deaths per year in the UK. These interventions and evaluations of impact are derived from the examination of routine data sets and an assessment of the natural history of, and causal influences on, disease. Worldwide, the commonest method of suicide is pesticide self-poisoning – accounting for over 250,000 deaths per year – and bans on the most toxic pesticides may have a profound impact on the incidence of suicide worldwide.
SIDS prevention
Sudden Infant Death Syndrome (SIDS), or 'cot death', was responsible for approximately 900 deaths per year (1 in 500 children in the first year of life) in the UK in the 1970s/1980s, with marked seasonal variation peaking in the winter months. Some researchers thought it was possible that infants' sleeping position (prone – lying on the stomach – versus supine – lying on the back) might influence risk. Advice on the sleeping position of babies was largely unchanged since the 1940s, when the most popular child-rearing book suggested that babies should be encouraged to sleep on their stomach, i.e. a prone sleeping position – as illustrated in Figure 15.5.
sea-A case control study examining potential causes
of SIDS with 72 cases and 144 controls found the
following (Fleming et al., 1990, p 85):
r Odds Ratio (OR) of prone sleeping: 8.8 (95% CI7.0–11.0; p< 0.001);