Risk Assessment and Indoor Air Quality - Chapter 4



CHAPTER 4

Dose–Response Assessment — Quantitative

Methods for the Investigation

I Introduction

II Measures of Disease Frequency and Measures of Effect

III Confounding

IV Empirical Statistical Methods

A Relative Risk Regression Models

B Poisson Regression

V Biologically Based Models

VI Multistage Models

A The Armitage–Doll Multistage Model

B The Two-Mutation Clonal Expansion Model

1 Likelihood Construction and Maximization

2 Examples

VII Other Quantitative Methods

A Physiologically Based Pharmacokinetic (PBPK) Models

VIII Prospects for the Future

Bibliography

I INTRODUCTION

The estimation of exposure–response or dose–response relationships is a prerequisite for a rational approach to the setting of standards for human exposures to potentially toxic substances. In many instances when human epidemiologic data are not available, standards are based on assessment of toxic responses in experimental data followed by extrapolation of risks to humans. Additionally, experiments in animals are often carried out at high exposure levels so that the experiments have the requisite statistical power. The resultant issues of interspecies and low-dose extrapolation are among the most contentious scientific issues of the day.

In the past few decades, a vast biostatistical literature has appeared on exposure–response and dose–response analyses. Summarizing this literature in a single chapter is a formidable task. Although many of the same methods can be used for experimental and epidemiologic studies, this chapter focuses on statistical methods that have been developed for analyses of epidemiologic studies, and on biologically based mathematical models for analyses of data in which the end point of interest is cancer. Physiologically based pharmacokinetic (PBPK) models, developed to investigate the relationship of exposure to dose by consideration of the uptake, distribution, and disposal of agents of interest, will be discussed only briefly. This is not because these models are considered to be unimportant, but because this subject is outside the author's area of expertise. Interspecies differences in response to exposure to environmental agents can often be explained, at least partially, in terms of differences in uptake and distribution of the agent. Thus, PBPK models have broadly advanced our understanding of differential species toxicology and can be considered important tools in risk assessment.

For risk assessment, epidemiologic studies offer two obvious advantages over experimental studies. Firstly, since the studies are done in the species of ultimate interest, the human, the difficult problem of interspecies extrapolation is finessed. Secondly, most epidemiologic studies are done at levels of exposure that are much closer to typical exposures in free-living human populations than is possible with experimental studies. It is true that epidemiologic studies are often conducted in industrial cohorts, which are typically exposed to higher levels of the agent of interest than the general population. Nonetheless, the levels of exposure, even in industrial cohorts, are much closer to those in the general population than the exposures used in experimental studies. Some of what epidemiologic studies gain in the way of relevance over experimental studies is given up in precision, however. It is generally true that both exposures and disease outcomes are measured with less precision in epidemiologic studies than in laboratory studies, leading, possibly, to bias in the estimate of risk and the shape of the dose–response curve. Exposure measurement error is now widely regarded as an important issue in analyses of epidemiologic data. Another potential problem for risk assessment arises from the fact that human populations, especially industrial cohorts, are rarely exposed to single agents. When exposure to multiple agents is involved, the effect of the single agent of interest is often difficult to investigate. This fact is of particular relevance for air pollution because air pollution is generally a complex mixture of toxic agents. The role of any single component of the mixture can be difficult to study.

Epidemiologic studies that can be used to investigate dose–response relationships are classified into three broad categories. The cohort study is, at least conceptually, close to the traditional experimental study in that groups of exposed and unexposed individuals are followed in time and the occurrence of disease in the two groups compared. In the case-control study, relative risks are estimated from cases of the disease under investigation and suitably chosen controls. In these two types of study, information on exposures and disease is available on an individual basis for all subjects enrolled in the study. In a third type of study, the ecological study, information is available only on a group basis. Ecological studies have generally been looked upon with disfavor by epidemiologists for reasons that have been extensively discussed elsewhere (Greenland and Morgenstern 1989; Greenland and Robins 1994). Nonetheless, they can provide useful information and, particularly in air pollution epidemiology, they have played a central role in recent times. Another type of study, in which disease outcome and some confounders are known on an individual basis and others, together with exposure to air pollutants, are known only on a group basis, has recently played an important role in air pollution epidemiology. There is currently no generally accepted term for such studies, which share attributes of the cohort study and the ecological study. Such studies are called here hybrid studies.

Because epidemiologic studies are observational (i.e., groups of subjects cannot randomly be assigned to one exposure group or another), careful attention must be paid to controlling factors that may bias estimates of risk. Thus, controlling for what epidemiologists call "confounding" is of paramount importance both in the design and analyses of epidemiologic studies.

Within the last two decades sophisticated statistical tools have been developed for the analyses of epidemiologic data. Many of these methods fall under the rubric of the so-called relative risk regression models. Additionally, recent research in air pollution epidemiology has exploited regression methods for analyses of time-series of counts. Both parametric and semiparametric Poisson regression models have been developed for analyses of these data. Special methods are required when multiple observations are made on the same individual, as is done in panel studies, or in the same geographic location, as is done with Poisson regression analyses of time-series of counts. Account must then be taken of serial correlations in the observations. Various statistical methods are used to address this issue. Finally, when the health effect of interest is cancer, stochastic models based on biological considerations can be used for data analyses. These models provide a useful complement to the more empirical statistical approaches to data analyses. Each of these approaches will be discussed briefly in this chapter.

II MEASURES OF DISEASE FREQUENCY AND MEASURES OF EFFECT

When discussing dose– or exposure–response relationships it is important to define clearly what response one is talking about. Often the term dose– or exposure–response is used with no indication of what response means. In order to define response precisely it is important to have a clear idea of the various commonly used measures of disease frequency and of effect. Perhaps the most fundamental measure of disease frequency is the incidence rate, also called the hazard rate in the statistical literature. The incidence or hazard rate measures the rate (per person per unit time) at which new cases of a disease appear in the population under study. Because the incidence rates of many chronic diseases, including cancer, vary strongly with age, a commonly used measure of frequency is the age-specific incidence rate, usually reported in five-year age categories. For example, the age-specific incidence rate per year in the five-year age group 35–39 may be estimated as the ratio of the number of new cases of cancer occurring in that age group in a single year to the number of individuals in that age group who are cancer free at the beginning of the year. Strictly speaking, the denominator should be not the total number of individuals who are cancer free at the beginning of the year but the person-years at risk during the year. This is because some individuals contribute less than a full year of experience to the denominator, either because they enter the relevant population after the year has begun (for example, an individual may reach age 35 sometime during the year) or because they may leave the population before the year is over (for example, an individual may reach age 40, die, or migrate during the year). Mathematically, the concept of incidence rate is an instantaneous concept, and is most precisely defined in terms of the differential calculus. A precise definition of the concept is given in the next section, and the reader is referred to texts on survival analysis (e.g., Kalbfleisch and Prentice 1980; Cox and Oakes 1984) for further details.
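The person-years bookkeeping described above can be sketched in a few lines. The cohort below is entirely hypothetical: subjects contribute full or partial years at risk in the 35–39 age band.

```python
from dataclasses import dataclass

@dataclass
class Subject:
    years_at_risk: float  # person-years contributed during the year
    new_case: bool        # diagnosed during the year?

def incidence_rate(subjects):
    """Age-specific incidence rate: new cases per person-year at risk."""
    cases = sum(s.new_case for s in subjects)
    person_years = sum(s.years_at_risk for s in subjects)
    return cases / person_years

# Hypothetical subjects in the 35-39 age band for one calendar year:
cohort = [
    Subject(1.0, False),   # at risk for the full year
    Subject(1.0, False),
    Subject(0.5, False),   # turned 35 halfway through the year
    Subject(0.25, True),   # diagnosed a quarter of the way into the year
]
print(incidence_rate(cohort))  # 1 case / 2.75 person-years, ~0.36 per person-year
```

Note that using the head count at the start of the year (4 people) instead of 2.75 person-years would understate the rate, which is the point made in the text.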

Another commonly used measure of disease frequency is the probability that an individual will develop disease in a specified period of time. For risk assessment, interest is most often focused on the lifetime probability, often called lifetime risk, of developing disease. Here, lifetime is arbitrarily defined, in the U.S. usually as 70 years. The incidence (or hazard) rate and the probability of developing disease are related by a simple formula. This relationship is expressed by the following equation:

P(t) = 1 − exp( −∫₀ᵗ I(s) ds )

where P(t) is the probability of developing the disease of interest by age t, and I(s) is the incidence or hazard rate at age s. Note that although the probability of disease, P(t), is called cumulative incidence in some epidemiology textbooks (Rothman 1986), the integral

∫₀ᵗ I(s) ds

is actually the cumulative incidence. When the incidence rate is small, as is true for most chronic diseases, the probability of disease by time t, P(t), is approximately equal to the cumulative incidence.
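The closeness of P(t) and the cumulative incidence for rare diseases can be checked numerically. The sketch below assumes a hypothetical constant incidence rate of 10⁻⁴ per person-year.

```python
import math

def prob_disease(rate, t):
    """P(t) = 1 - exp(-integral of I(s) ds from 0 to t), for a constant rate."""
    return 1.0 - math.exp(-rate * t)

def cumulative_incidence(rate, t):
    """Integral of I(s) ds from 0 to t, for a constant rate."""
    return rate * t

# Hypothetical rare disease: I = 1e-4 per person-year, followed to age 70.
rate, t = 1e-4, 70.0
print(prob_disease(rate, t))          # ~0.006976
print(cumulative_incidence(rate, t))  # ~0.007
```

The two quantities differ here only in the fourth decimal place; P(t) is always slightly smaller, since 1 − exp(−x) < x for x > 0.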


The impact of an environmental agent on the risk of disease can be measured on either the absolute or the relative scale. The last two decades have seen an explosion of statistical literature on relative measures of risk, which can be estimated in both case-control and cohort studies. Let Ie be the incidence rate in the exposed population and Iu be the incidence rate in the unexposed population. Then the relative incidence (relative risk) is defined by

RR = Ie/Iu

A closely related measure is the excess relative risk, which is defined as

ERR = (Ie − Iu)/Iu = RR − 1

Yet another measure of risk is the attributable or etiologic fraction, which is defined as

AF = (Ie − Iu)/Ie = (RR − 1)/RR

The AF is the fraction of incident cases in the exposed population that would not have occurred in the absence of exposure, and "can be interpreted as the proportion of exposed cases for whom the disease is attributable to the exposure" (Rothman 1986). In most regression analyses of epidemiologic data, RR is modeled either as a "multiplicative" or an "additive" function of the covariates of interest. Since RR is readily estimated from both case-control and cohort studies, the various measures of effect discussed above which are functions of RR alone can be estimated.

On the absolute scale, the impact of an agent can be measured simply by the difference of incidence rates (or probabilities) among exposed and nonexposed subjects. Absolute measures of risk cannot be estimated from case-control studies without ancillary information (Rothman and Greenland 1998).

The impact of an environmental agent on the risk of disease in a population will depend not only on the strength of its effect in the exposed subpopulation, but also on how large this subpopulation is. Even if the agent is a very potent carcinogen, its impact on the cancer burden of the entire population will be small if only a small fraction of the population is exposed. On the other hand, if exposure to a weak carcinogen is widespread, the population impact could be substantial. A measure of risk that attempts to quantify the population burden of disease due to a specific exposure is the population attributable fraction, PAF, which is defined as the fraction of all cases in the population that can be attributed to the exposure, and is given by the expression

PAF = (IT − Iu)/IT

where IT is the incidence in the total population. In addition to the RR, estimation of the PAF requires information on the fraction of the population exposed to the agent of interest (see Rothman 1986). The PAF can be estimated directly from case-control data only if the controls are a random sample from the population (Rothman and Greenland 1998). When the RR associated with exposure to an agent is high and the exposure is widespread, a major fraction of disease in the population can be attributed to the agent. For example, it has been estimated that approximately 84% of all lung cancers and 43% of all bladder cancers in Australian men in 1992 could be attributed to cigarette smoking (English et al. 1995).
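The measures defined above (RR, ERR, AF, PAF) can be computed together from a pair of incidence rates. The rates and exposure fraction below are hypothetical, and the total incidence is taken, as the definitions imply, to be the exposure-weighted mixture of the exposed and unexposed rates.

```python
def risk_measures(I_e, I_u, p_exposed):
    """Relative and attributable measures from exposed/unexposed incidence rates.

    I_e, I_u  : incidence rates in the exposed and unexposed subpopulations
    p_exposed : fraction of the population exposed (needed only for the PAF)
    """
    rr = I_e / I_u
    err = rr - 1.0
    af = (rr - 1.0) / rr
    # Total incidence as the exposure-weighted mixture of the two rates:
    I_T = p_exposed * I_e + (1.0 - p_exposed) * I_u
    paf = (I_T - I_u) / I_T
    return rr, err, af, paf

# Hypothetical example: exposure doubles incidence; 30% of the population exposed.
rr, err, af, paf = risk_measures(I_e=2e-4, I_u=1e-4, p_exposed=0.3)
print(rr, err, af)    # 2.0 1.0 0.5
print(round(paf, 4))  # 0.2308
```

This illustrates the point in the text: a doubling of risk among the exposed (AF = 0.5) translates into a much smaller population burden (PAF ≈ 0.23) when only 30% of the population is exposed.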

The calculation of the PAF can be extended to situations where there are multiple levels of exposure (by considering each level in turn and adding up the PAFs) or where the exposure is a continuous variable rather than a categorical one (by creating discrete categories such as quartiles or quintiles of exposure, or by using regression models). Joint effects of several exposures may be considered similarly. In the case of two or more exposures, the separate PAFs may be calculated for each exposure while ignoring the other exposures, or a combined PAF may be calculated by considering all possible combinations of exposures, calculating the PAF for each, and adding them up. When two or more exposures are involved, the sum of the separate PAFs will frequently exceed the combined PAF calculated in this way and may actually exceed 100%. The reason for this is clear: cases that occur in the joint exposure categories are counted multiple times when PAFs for single exposures are computed, once for each exposure in the joint exposure category. Attribution of causation in the case of joint exposures is best done by considering all possible combinations of exposures. For example, with two exposures, attribution of causation may be summarized by subdividing the cases into those that can be considered as being caused by the combination of the two agents, each agent exclusively, or neither agent (Enterline 1983). For a more advanced treatment of PAFs, see Bruzzi et al. (1985), Wahrendorf (1987), Benichou (1991), and Greenland and Drescher (1993).
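The over-counting described above can be demonstrated with a hypothetical 2×2 joint-exposure table; all population fractions and incidence rates below are invented for illustration.

```python
# Hypothetical 2x2 joint-exposure table for exposures A and B:
# (exposed to A, exposed to B) -> (population fraction, incidence rate)
cells = {
    (True, True):   (0.10, 6e-4),
    (True, False):  (0.20, 3e-4),
    (False, True):  (0.20, 2e-4),
    (False, False): (0.50, 1e-4),
}

I_T = sum(f * i for f, i in cells.values())  # incidence in the total population

def incidence_among(keep):
    """Average incidence in the subpopulation whose exposure key satisfies keep."""
    sub = [(f, i) for key, (f, i) in cells.items() if keep(key)]
    mass = sum(f for f, _ in sub)
    return sum(f * i for f, i in sub) / mass

# Separate PAFs, each computed while ignoring the other exposure:
paf_A = (I_T - incidence_among(lambda k: not k[0])) / I_T
paf_B = (I_T - incidence_among(lambda k: not k[1])) / I_T
# Combined PAF, using the doubly unexposed group as the reference:
paf_combined = (I_T - incidence_among(lambda k: k == (False, False))) / I_T

print(round(paf_A + paf_B, 3), ">", round(paf_combined, 3))  # 0.639 > 0.524
```

Cases in the jointly exposed cell are counted once in paf_A and again in paf_B, so the sum of the separate PAFs exceeds the combined PAF, exactly as the text explains.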

III CONFOUNDING

A detailed discussion of confounding, a concept of central importance in epidemiology, is outside the scope of this chapter. Confounding arises in epidemiologic studies as a consequence of the fact that these are observational (not randomized). Suppose one is interested in alcohol as a possible cause of oral cancer. Suppose that an epidemiologic study shows an association between alcohol consumption and oral cancer. That is, suppose the incidence of oral cancer in the subpopulation of individuals that imbibes alcohol is higher than the incidence of oral cancer in the subpopulation of teetotalers. The crucial question then is the following: Could the association between alcohol consumption and oral cancer be "spurious" in the sense that it is due to another agent that is itself a cause of oral cancer, and more likely to be found in the subpopulation of alcohol imbibers than in the subpopulation of teetotalers? One example of such an agent is tobacco smoke. Individuals who imbibe alcohol are more likely than teetotalers to be smokers. Moreover, smoking is a strong risk factor for oral cancer. Thus the observed association between alcohol consumption and oral cancer may actually be due to the association between smoking and alcohol consumption. In a study of oral cancer and alcohol, tobacco smoke is a confounder.


As shown by this example, confounding is the distortion of the effect of the agent of interest by an extraneous factor. To be a confounder, a factor must satisfy two conditions. First, the putative confounder must be a risk factor for the disease in the absence of the agent of interest. Second, the putative confounder must be associated with the exposure of interest in the population in which the study is conducted. Sometimes a third condition (Rothman and Greenland 1998) is added: the putative confounder must not be an intermediate step in the pathway between exposure and disease. While these three criteria define a confounder for most epidemiologists, other definitions which are close but not identical to the definition given here have been given by biostatisticians. These are usually couched in terms of collapsibility of contingency tables. For a more detailed discussion, the reader is directed to Greenland and Robins (1986) and Rothman and Greenland (1998).

Confounding in epidemiologic studies can be addressed in one of two ways: it can be prevented by appropriate study design or controlled by appropriate analyses. The specific methods used depend upon the type of epidemiologic study. The reader is referred to recent texts (Rothman and Greenland 1998) for details.

The main statistical tools for exposure– and dose–response analyses of epidemiologic data will now be discussed briefly. Many of these methods can be used for analyses of experimental data as well.

IV EMPIRICAL STATISTICAL METHODS

Because most epidemiologic studies are observational, issues of sampling and data analysis are particularly important to assure appropriate interpretation of results in the presence of possible confounding. Some of the main statistical tools developed over the last few decades to address these issues are discussed below.

A Relative Risk Regression Models

The development here will follow that in the paper by Prentice et al. (1986). Although this paper was written over a decade ago, it lays out the basic framework for these models. The concept of the hazard function was introduced above as the appropriate statistical concept that captures the epidemiologic idea of an incidence rate. A more precise definition of this concept follows. Consider a large, conceptually infinite, population that is being followed forward in time, and about which one wishes to draw inferences regarding the occurrence of some health-related event, generically referred to as a "failure." Typically one is interested in relating the failure to preceding levels of one or more risk factors, such as genetic and lifestyle factors and exposure to external agents, collectively referred to as covariates. Let z(t) denote the vector of covariates for an individual at time t. Time may be the age of the individual, or, in some settings, it may be more natural to consider other specifications, such as time from a certain calendar date, or duration of employment in a specific occupation. Let T denote the time of failure for a subject, and suppose that


Z(t) represents the covariate history up to time t. Then the population frequency of failure, which may be thought of as the probability of failure, in a time interval t to t + ∆ with covariate history Z(t), will be denoted by P[t + ∆ | Z(t)]. The hazard or incidence function (which, if failure refers to death, is often called the force of mortality) is then defined by

h[t; Z(t)] = lim∆→0 P[t + ∆ | Z(t); T ≥ t]/∆ = P′[t | Z(t)]/(1 − P[t | Z(t)])

In order to simplify notation, the dependence of h, P, etc. on the covariate history Z(t) will be suppressed when it is clear from the context. Thus, for example, h[t; Z(t)] will be written as h(t). An intuitive interpretation of the hazard is that it is the rate of failure at time t among those who have not failed up to that time. Now suppose that one is interested in the incidence of failures among individuals with a specific covariate history, Z(t). For example, one may be interested in the incidence among individuals who are exposed to certain environmental agents thought to be associated with the disease under investigation. Let Z0(t) represent some standard covariate history; for example, Z0(t) could be thought of as the covariate history among those not exposed to the agents of interest. One can then write the relative risk as the ratio of the two hazards,

RR[t; Z(t)] = h[t; Z(t)]/h[t; Z0(t)]

The multiplicative relative risk model is given by

RR[t; Z(t)] = exp(β1z1 + β2z2 + … + βnzn)

and the additive model by

RR[t; Z(t)] = 1 + β1z1 + β2z2 + … + βnzn

where z1 through zn are the covariates of interest and the β's are parameters to be estimated from the data. Note that the additive model posits that the relative risk is a linear function of the exposures of interest and that the effect of joint exposures is additive. The multiplicative model posits that the logarithm of the relative risk is a linear function of the exposures and that the effect of joint exposures is multiplicative. Quite often the relative risk cannot be adequately described by either a multiplicative or an additive model. For example, the relative risk associated with joint exposure to radon and cigarette smoke is greater than additive but less than multiplicative (BEIR IV 1988). Various mixture models have been proposed (Thomas 1981; Breslow and Storer 1985; Guerrero and Johnson 1982) to address such situations.
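A minimal sketch of the two model forms, with hypothetical coefficients chosen so that each single binary exposure alone carries a relative risk of 3 under its model:

```python
import math

def rr_multiplicative(z, beta):
    """RR = exp(beta . z): log RR linear in covariates; joint effects multiply."""
    return math.exp(sum(b * x for b, x in zip(beta, z)))

def rr_additive(z, beta):
    """RR = 1 + beta . z: RR linear in covariates; joint effects add."""
    return 1.0 + sum(b * x for b, x in zip(beta, z))

# Hypothetical coefficients: each single (binary) exposure alone gives RR = 3.
beta_mult = [math.log(3.0), math.log(3.0)]
beta_add = [2.0, 2.0]
print(round(rr_multiplicative([1, 1], beta_mult), 6))  # 9.0  (3 x 3)
print(rr_additive([1, 1], beta_add))                   # 5.0  (1 + 2 + 2)
```

With both exposures present, the multiplicative model gives RR = 9 while the additive model gives RR = 5; an observed joint RR between these two values, as for radon and smoking, is what motivates the mixture models cited above.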



The use of these models presents special statistical problems (Moolgavkar andVenzon 1987; Venzon and Moolgavkar 1988).

There is a vast biostatistical literature on the application of relative risk regression models to the analyses of various study designs encountered in epidemiology. It is outside the scope of this chapter to review this literature. The interested reader is referred to the appropriate publications (Breslow and Day 1980; Breslow and Day 1987).

In the field of air pollution epidemiology, relative risk regression models were used for analyses of two important studies of the long-term effects of air pollution on health. These are the Harvard Six Cities Study (Dockery et al. 1993) and the ACS II study (Pope et al. 1995). In these studies, cohorts of individuals were assembled from cities with different pollution profiles, and information was collected on certain lifestyle factors, such as cigarette smoking. These individuals were then followed and their mortality experience recorded. The authors of these studies refer to them as cohort studies. There is, however, an important element of the ecological design to these studies. The exposure of interest, namely air pollution, is measured not on the individual level, but on the level of the city. That is, because information on concentrations of pollutants is available only from central monitoring stations, exposure to air pollution is assumed to be identical for all study subjects in a city. Because these studies combine elements of the cohort design with the ecological design, the term hybrid studies has been coined by this author for designs of this type. This study design can pose formidable problems in the interpretation of the results of analyses (Moolgavkar and Luebeck 1996).

B Poisson Regression

Quite often information is available not on individual members of a study cohort, but on subgroups that are reasonably homogeneous with respect to important characteristics, including exposure, that determine disease incidence. As a concrete example, consider the well-known British doctors' study of tobacco smoking and lung cancer. For the cohort of individuals in this study, information on the number of lung cancer deaths is cross-tabulated by daily level of smoking (reported in fairly narrow ranges) and five-year age categories. Another well-known example is provided by the incidence and mortality data among the cohort of atomic bomb survivors, for which the numbers of cancer cases are reported in cross-tabulated form by (ranges of) age at exposure, total dose received (in narrow ranges), and five-year attained age categories. When data are presented in this way the method of Poisson regression is often used for analyses. Only a very brief outline of the method is given here. For more details the reader is referred to the standard text by McCullagh and Nelder (1989).

For Poisson regression, the number of events of the outcome of interest (deaths or cases of disease) in each cell of the cross-tabulated data is assumed to be distributed as a Poisson random variable with expectation (mean) that is a function of the covariates of interest. The numbers of events in distinct cells of the cross-tabulated data are assumed to be independent. Suppose that the data are presented in I distinct cross-tabulated cells, and let Ei be the expectation of the number of events in cell i. Suppose that the observed number of events in cell i is Oi. Then, under the assumption that the number of events is Poisson distributed, the likelihood of the data is

L = ∏i exp(−Ei) Ei^Oi / Oi!

In the simplest form of Poisson regression, the logarithm of the expectation Ei is a linear function of the covariates of interest. Alternatively, the expectation may be derived from the hazard functions of biologically based carcinogenesis models (Moolgavkar et al. 1989). Whatever the model form for the expectation, the parameters are estimated by maximizing the likelihood function.
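Likelihood construction and maximization for a log-linear Poisson model can be sketched as follows. The cross-tabulated counts are hypothetical, generated to follow E_i = exp(a + b·x_i), and the crude grid search stands in for the Newton-type iterations used by standard GLM software.

```python
import math

# Hypothetical cross-tabulated data: (covariate level x_i, observed count O_i),
# generated to follow E_i = exp(a + b*x_i) with a = log 50, b = 0.5.
data = [(0.0, 50), (1.0, 82), (2.0, 136), (3.0, 224)]

def log_likelihood(a, b):
    """Poisson log-likelihood: sum_i [O_i*log(E_i) - E_i - log(O_i!)]."""
    ll = 0.0
    for x, o in data:
        e = math.exp(a + b * x)
        ll += o * (a + b * x) - e - math.lgamma(o + 1)
    return ll

# Crude maximization by grid search over (a, b) in steps of 0.01:
best = max(
    ((a / 100, b / 100) for a in range(350, 450) for b in range(30, 70)),
    key=lambda ab: log_likelihood(*ab),
)
print(best)  # close to (log 50 ~ 3.91, 0.5)
```

The maximizing pair recovers the generating parameters to within the grid resolution; the log(O_i!) term does not depend on the parameters and could be dropped without changing the maximizer.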

Poisson regression models have played a prominent role in recent analyses of associations between indices of air quality in various urban areas and health outcomes such as mortality (Schwartz and Dockery 1992; Schwartz 1993) and hospital admissions (Burnett et al. 1994; Moolgavkar et al. 1997) for specific causes (respiratory disease, heart disease). These studies purport to investigate the acute effects of air pollution, in contrast to the hybrid studies referred to above, which investigated the long-term effects of air pollution. In these analyses, daily counts of events (deaths or hospital admissions) in a defined geographical area are regressed against levels of air pollution as measured at monitoring stations in that area. Explicitly, the number of events on any given day is assumed to be a Poisson random variable, the expectation of which depends upon indices of air quality and weather on the same or previous days. In this type of study, inferences regarding the association of air pollution with the health events of interest depend upon relating fluctuations in daily counts of events to levels of air pollution on the same or previous days. As indicated above, in the simplest form of Poisson regression the logarithm of the expectation is a linear function of the covariates. This restriction on the shape of the exposure–response function may not be appropriate, and recently more flexible methods that make no assumptions regarding the shape of this relationship have been introduced for analyses of these data (Health Effects Institute 1995). An important difference between Poisson regression analyses of air pollution data and the other examples given above (e.g., analysis of the atomic bomb survivor data) is that in the air pollution data information on exposure is available only from central monitors of air quality. It is not possible to form strata of individuals with like exposures within a narrow range. It is not possible, therefore, to investigate the number of deaths or hospital admissions among individuals similarly exposed. This fact makes this type of study of air pollution an ecological study in that exposures and outcomes are known only on the group level, and it is not clear that the number of events is related to the level of exposure.


V BIOLOGICALLY BASED MODELS

Biologically based models for the process of carcinogenesis have been in use for analyses of epidemiologic and experimental data for the past four decades. When the response of interest is cancer, these models provide a useful complement to the empirical statistical methods briefly described above for the analyses of data. Because the parameters of the model have direct interpretation in biological terms, analyses using these models may lead to testable hypotheses. These models also provide a framework within which the process of carcinogenesis can be viewed, and help break up a complex problem into simpler component pieces. The models are particularly useful for analyses of data with complicated patterns of exposure to environmental agents, as typically occurs with exposures in occupational cohorts where workers may switch jobs often. The linearized multistage procedure, which has been used as a default by the United States Environmental Protection Agency (EPA), is based on an early stochastic model, the Armitage–Doll multistage model.

VI MULTISTAGE MODELS

Current understanding of carcinogenesis as a complex multistage process is based on observations from histopathological, epidemiologic, and molecular biological studies. Disruption of normal cell proliferation is the sine qua non of the malignant state. Moreover, there is accumulating evidence that the kinetics of cell division, cell differentiation (or death), and apoptosis of normal and premalignant cells are important in the carcinogenic process (Cohen and Ellwein 1990). Increases in cell division rates may lead to increases in the rates of critical mutational events, and an increase in cell division without a compensatory increase in differentiation or apoptosis leads to an increase in the size of critical target cell populations. These observations indicate that carcinogenesis involves successive genomic changes, some of which may result in disruption of normal cellular kinetics and facilitate the acquisition of further mutations. The number of genomic changes required for malignant transformation is not known with certainty for any tumor, although it is thought to be at least two.

The following fundamental assumptions underlie the models considered here:

(1) cancers are clonal (i.e., malignant tumors arise from a single malignant progenitor cell);

(2) each susceptible (stem) cell in a tissue is as likely to become malignant as any other;

(3) the process of malignant transformation in a cell is independent of that in any other cell; and

(4) once a malignant cell is generated, it gives rise to a detectable tumor with probability 1 after a constant lag time.

The last two assumptions are clearly false, and are made for mathematical convenience. Methods for relaxing these assumptions are currently being investigated (Yang and Chen 1991; Luebeck and Moolgavkar 1994). A mathematical review of some carcinogenesis models can be found in the book by Tan (1991).

A The Armitage–Doll Multistage Model

The Armitage–Doll model, which has been used extensively over the last four decades, was first proposed to explain the observation that, in many human carcinomas, the age-specific incidence rates increase roughly with a power of age. The Armitage–Doll model postulates that a malignant tumor arises in a tissue when a single susceptible cell in that tissue undergoes malignant transformation via a finite sequence of intermediate stages, the waiting time between any stage and the subsequent one being exponentially distributed. Schematically, the model may be represented as follows:

E0 → E1 → … → En−1 → En

Here E0 represents the normal cell, and En the malignant cell. Suppose that a cell moves from stage Ej to stage Ej+1 with transition rate λj. Precisely, this means that the waiting time distribution for a cell to move from stage Ej to stage Ej+1 is exponential with parameter λj. Let pj(t) represent the probability that a given cell is in stage Ej by time t. Then, pn(t) = p(t) is the probability that the cell is malignantly transformed by time t, and the expression for the hazard, h(t), is given by h(t) = Np′(t)/(1 − p(t)), where N is the number of susceptible cells in the tissue. In the usual treatment of the multistage model, two approximations are made at this point. First, at the level of the single cell, malignancy is a very rare phenomenon. Thus, for any cell, p(t) is very close to zero during the life span of an individual, and h(t) is approximately equal to Np′(t). An explicit expression for Np′(t) in terms of the transition rates λj is given in Moolgavkar (1978, 1991). Expanding p′(t) in a Taylor series, one obtains

h(t) ≈ Np′(t) = Nλ0λ1 … λn−1 t^(n−1) {1 − mean(λ)t + f(λ, t)}/(n − 1)!

where mean(λ) is the mean of the transition rates and f(λ, t) involves second and higher order moments of the transition rates. Retention of only the first nonzero term (this is the second approximation) in this series expansion leads to the Armitage–Doll expression, namely

h(t) ≈ Nλ0λ1 … λn−1 t^(n−1)/(n − 1)!

Thus, with the two approximations made, this model predicts an age-specific incidence curve that increases with a power of age that is one less than the number of distinct stages involved in malignant transformation.
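The power-of-age prediction can be verified numerically. The stage number, transition rate, and cell count below are hypothetical, with all transition rates taken equal for simplicity.

```python
import math

def armitage_doll_hazard(t, n, lam, N=1e7):
    """First-term Armitage-Doll approximation h(t) = N * lam^n * t^(n-1) / (n-1)!
    for n stages with equal transition rates lam."""
    return N * lam**n * t ** (n - 1) / math.factorial(n - 1)

# Hypothetical values: n = 5 stages, lam = 1e-3 per cell per year.
# On a log-log plot, incidence vs. age should be a line of slope n - 1 = 4.
n, lam = 5, 1e-3
slope = (math.log(armitage_doll_hazard(80, n, lam))
         - math.log(armitage_doll_hazard(40, n, lam))) / (math.log(80) - math.log(40))
print(round(slope, 6))  # 4.0
```

The log-log slope of the hazard is n − 1 regardless of the values of N and the λj, which is why fitted slopes of age-specific incidence curves were historically read as estimates of the number of stages minus one.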

It is immediately obvious from the model that, given sufficient time, any susceptible cell eventually becomes malignant. Further, since the waiting time distribution to malignant transformation is the sum of n exponential waiting time
