ECOTOXICOLOGY: A Comprehensive Treatment - Chapter 9




… as acute if it is a relatively brief and intense one to a poison. Standard durations are espoused for conducting acute lethality tests. For example, Sprague (1969) argued for 96 h after observing that “For 211 of 375 toxicity tests reviewed, acute lethal action apparently ceased with 4 days, although this tabulation may have been biased ….” This kind of correlative analysis and the convenience of fitting a test within the workweek motivated the initial codification of a 96-h test.

It is important to note that Sprague stated in his 1969 monograph that his intentions were to describe “profitable bioassay methods” about which there was ample “room for healthy disagreement.” Along the vein of healthy disagreement, one could conclude from these same data that a 96-h duration was insufficient for characterizing acute lethality in more than 4 out of 10 tests (Figure 9.1). Further, Sprague notes that the tests considered in making his recommendation included many static tests¹ in which toxicant concentrations probably decreased substantially during the exposures, and that those results from continuous flow tests that had much less chance of substantial toxicant concentration decrease during the tests generally indicated a longer duration was needed than did the static tests. Given the urgency in the 1960s for standard tools for dealing with pervasive pollution, the assumption that mortality by 96 h accurately reflected that occurring during any acute exposure duration is an understandable regulatory stance. However, it is scientifically indefensible and insufficient for today’s needs. Consequently, many thoughtful ecotoxicologists now generate lethal effect metrics several times during acute toxicity tests.² And, as we will see, alternative approaches exist that avoid this issue altogether.

A similar blend of science and pragmatism contributed to the current selection of test durations for chronic exposures. By recent convention, chronic exposure occurs if exposure duration exceeds 10% of an organism’s lifetime (Suter 1993); however, this has not always been the convention and 10% is an arbitrary cut-off point. Consequently, other durations are specified in some standard chronic test protocols and associated results are reported throughout the peer-reviewed literature. Test protocols have emerged for exposures differing relative to the medium containing the toxicant(s) as well as exposure duration. For example, test protocols for acute (e.g., EPA 2002a) and chronic (e.g., EPA 2002b) water exposures quantify lethality under these two general categories of exposure duration. Exposures occur by oral, dermal, and respiratory routes, and accordingly, testing techniques have emerged that accommodate these routes (e.g., EPA (2002) for sediments).

1 Generally, the toxicant is introduced into the test tanks at the beginning of a static aquatic toxicity test and not renewed for the test duration. Such tests are often characterized by substantial decreases in toxicant concentrations as the toxicant degrades, volatilizes, adsorbs to solids, or otherwise leaves solution. Such dosing problems in early, static tests have been reduced in current techniques by either periodic renewal of toxicant solutions or supplying a continuous flow of toxicant solution into exposure tanks (see Buikema et al. 1982 for more detail).

2 Sprague (1969) recommended this strategy to increase the information drawn from acute lethality tests.


FIGURE 9.1 The number of early toxicity tests tabulated by Sprague (1969) in which acute mortality appeared to be completely expressed in exposed individuals by the specified exposure duration. Sprague noted that this data set included results from many static exposure tests in which the toxicant solutions were not changed and, as a consequence, the toxicant concentrations likely decreased substantially during testing. The tests are categorized here based on the time interval thought to be adequate for full expression of acute mortality, for example, “<1” = complete acute lethality expressed in 1 day or shorter.

Unfortunately, standard methods incorporating predictions of mortality from pulsed exposures are yet to be codified, but ecological risk assessors increasingly see methods for dealing with these exposure scenarios as necessary. Those accommodating simultaneous exposure to several sources are also less common than warranted.

Approaches for characterizing or predicting lethal effects of single toxicant exposures are well established, although some potentially useful approaches have yet to be explored sufficiently. This being the case, conventional and emerging approaches will be described in this chapter after discussion of some examples of lethality as manifested at the whole organism level of biological organization.

9.1.1 DISTINCT DYNAMICS ARISING FROM UNDERLYING MECHANISMS AND MODES OF ACTION

Molecular, cellular, anatomical, and physiological alterations that contribute to somatic death were sketched out in preceding chapters. Here, organismal consequences of such processes as narcosis, uncoupling of oxidative phosphorylation, and general stress will be explored. Hopefully, these examples demonstrate that all lethal responses to poisonings are not identical and that understanding the suborganismal processes resulting from exposure is extremely helpful for predicting consequences to individuals and populations.

Narcosis is often described as a reversible, chemically induced decrease in general nervous system functioning. The decrease in nervous system function results from disruption of nerve cell membrane functioning in higher animals as explained earlier (Chapter 3, Section 3.10); however, narcotic effects due to pervasive membrane dysfunction also manifest as a general depression of biological activity in organisms lacking nervous systems. Narcosis of sufficient intensity and duration lowers biological activities of any organism below those essential to maintaining the soma, resulting in death. But, because narcosis is reversible, postexposure mortality may be low relative to that resulting from damage which requires more time to repair. For example, grass shrimp (Palaemonetes pugio) acutely exposed for 48 or 60 h to polycyclic aromatic hydrocarbons (1-ethylnaphthalene, 2,6-dimethylnaphthalene, and phenanthrene) showed minimal postexposure mortality (Unger et al. 2007). In contrast, mortality experienced by amphipods (Hyalella azteca) after exposure to dissolved copper was quite high (Figure 9.2) because, as discussed in previous chapters, metals cause extensive biochemical, cellular, and tissue damage that takes considerable time to repair (Zhao and Newman 2004).

FIGURE 9.2 Cumulative mortality, including postexposure mortality, of amphipods (Hyalella azteca) exposed to four concentrations of dissolved copper. (Modified panel from Figure 1 of Zhao and Newman (2004).) Note that substantial mortality occurred after copper exposure ended.

Another specific mechanism that can produce mortality is oxidative phosphorylation uncoupling. Such disruption of this essential mitochondrial process is typical of many substituted phenols (see Chapter 3, Section 3.9). At the organismal level, consequences range from elevated blood pH to disruption of normal respiratory processes to somatic death. Like the narcosis-related mortality just described, there can be minimal postexposure death in an exposed population. For example, amphipods acutely exposed to sodium pentachlorophenol showed minimal postexposure mortality (Zhao and Newman 2004). The pentachlorophenol is quickly eliminated from this amphipod and effects are reversible (Nuutinen et al. 2003). Mosquitofish (Gambusia holbrooki) acutely exposed to pentachlorophenol showed similar minimal postexposure mortality for the same reasons (Newman and McCloskey 2000).

In contrast to the lethal dynamics of such poisons, some toxicants cause pervasive changes or damage that requires considerable time for recovery. The copper damage that resulted in the postexposure mortality shown in Figure 9.2 is one example. The tissue damage resulting from metal exposure took considerable time to repair and, consequently, mortality continued well beyond termination of exposure. Similarly, mosquitofish (G. holbrooki) acutely exposed to high concentrations of sodium chloride showed prolonged and high postexposure mortality (Newman and McCloskey 2000). The cellular and tissue damage caused by the associated osmotic and ionic conditions takes time to repair. Fish succumbing after exposure ended did not have enough time or energy reserves to recuperate.

The nature of the lethal response can vary in other important ways. Some toxicants will display a concentration or dose threshold below which no lethal consequences are apparent. Mosquitofish exposure to high concentrations of sodium chloride is one obvious example in which death will not occur as long as the individual is able to osmo- and ionoregulate sufficiently at the particular sodium chloride concentration. However, the energetic burden imposed on the individual might result in decreased fitness in other aspects of the individual’s life cycle. In addition, some, but not all, toxicants are characterized by a minimum time to die: the individual simply cannot die faster than this threshold time regardless of the exposure concentration or dose (Gaddum 1953). The presence and magnitude of a threshold time depend on the toxicant’s bioaccumulation kinetics and the suborganismal nature of its effect upon any particular species or individual.

Complete freedom from stress is death

(Selye 1973)

The somatic deaths described above involved specific modes of action, but some somatic deaths involve the general stress process. Like the inappropriate toxicant-induced apoptosis described in Chapter 4 (Section 4.2.1) or the adverse consequences of inflammation described in Chapter 4 (Section 4.2.3), inappropriate or inadequate expression of the body’s general reaction to stressors can lead to death of individuals. Such somatic death is said to result from what Selye (1984) described as a disease of adaptation. Regardless of the stressor, the body invokes a general suite of reactions that, because of their universal presence and integrative nature, merit detailed discussion at the level […] to resist changes associated with a stressor by using less energy than changes associated with the alarm phase, and also to maintain homeostasis. Examples of changes are adrenal gland enlargement to produce glucocorticoids that modify metabolism and also shifts in the immune system so that the body generally has reduced ability to express an inflammation response.⁴ Selye refers to this state of artificially increased homeostasis as heterostasis. If stress continues and eventually exceeds the individual’s finite adaptive energy, the exhaustion phase is entered in which the individual gradually loses its ability to maintain any semblance of essential stasis in the presence of the stressor.

3 A reasonable argument could be made that this issue, because of the essential role played by hormones, should have been discussed in Chapter 6. However, the associated processes involve the integration of many biochemicals, organs, tissues, and organ systems within the individual, so it is more appropriate to discuss it here. The fact that it could be covered in either chapter attests to the soundness of the central theme of this book, that making linkages among levels of biological organization is important and possible in ecotoxicology.

4 The body’s response to a local stressor is called a Local Adaptation Syndrome (LAS) and will be coordinated within the general adaptation syndrome (GAS). An example of such coordination is the influence of the GAS on the degree to which the body expresses inflammation locally in a damaged tissue.

Trang 5

Box 9.1 The Pharmacologist of Dirt

As a University of Prague medical student in 1925, Selye noticed a consistent syndrome in patients suffering from different, but intense, demands on their bodies (Selye 1973, 1984). A decade later, as a young researcher studying sex hormones, he saw the same syndrome manifest in laboratory rats after injection with ovarian extracts. Rats showed a distinct syndrome in which the adrenal cortex enlarged, lymphatic structures (thymus, spleen, and lymph nodes) shrunk, and stomach ulcers appeared. He later found that injection of extracts from other tissues and even formalin elicited this same syndrome.

Because his original intent had been to identify novel sex hormones by injecting ovarian extracts into rats, his findings were extremely disheartening. That tissues other than ovaries elicited the same response might be an acceptable finding because tissues other than gonads were known at that time to produce sex hormones. But the appearance of the syndrome after formalin injection was inexplicable by any mechanism involving a sex hormone. After performing several more permutations of his experiments, he reluctantly came to the conclusion that the syndrome was not one specific to an extracted hormone, but a general defense response to demands placed on the soma by a stressor. But his mood gradually changed from despair to fascination. He had found a general adaptive response, yet medical convention at that time focused solely on telltale effects produced by specific disease agents. Contrary to convention, he had discovered a nonspecific, defensive response. He shared his excitement about this novel vantage with a valued mentor who, after failing to dissuade him from further work along this theme, exclaimed, “But, Selye, try to realize what you are doing before it is too late. You have now decided to spend your entire life studying the pharmacology of dirt.” After recovering from the sting of this comment, Selye spent his career studying what later became known as the theory of stress. Along the way, he published 1500 articles and 30 books that established a completely new discipline. Fortunately, the label “dirt pharmacology” never caught on.

What is the point? To use Selye’s own thoughts about his experience, “My advice to a novice scientist is to look for the mere outlines of the big things with his fresh, untrained, but still unprejudiced mind” (Selye 1984). Respect, but do not be confined by, the current thinking in your field (see also Chapter 36).

Many changes that appeared during the alarm stage and abated during the resistance phase can reappear during the exhaustion phase (Selye 1950). Death occurs at the end of the exhaustion phase.

What is the significance of the GAS-associated shifts relative to coping with an infectious or noninfectious stressor? Selye breaks these changes down into responses facilitated by syntoxic and catatoxic hormones. Syntoxic hormones facilitate an individual’s ability to coexist with the stressor during the period of challenge (e.g., those modulating the inflammation response during a general infection). Specific examples include cortisone and cortisol inhibition of inflammation as well as their altering of glucose metabolism. The catatoxic hormones are designed to enhance stressor destruction, “mostly through the induction of poison-metabolizing enzymes in the liver” (Selye 1984). Dysfunctions of these responses are called diseases of adaptation because they reflect health-enhancing processes gone awry. Human diseases of this sort include hypertension, some heart and kidney diseases, and rheumatoid arthritis. The activation of chemicals by liver enzymes discussed in previous chapters also fits into this category of diseases. Regardless, the reader will probably recognize at this point that the syntoxic and catatoxic hormones are pivotal to integrating, at the organismal level, the diverse defense mechanisms described in earlier chapters.

Not only can stress cause direct mortality of exposed individuals, it can also, as suggested by the immunological changes described above, modify an individual’s risk of death from toxicants or infectious agents. Friedman and Lawrence (2002) describe such exacerbation by stress of environmentally induced human maladies. Contaminants can also modify the stress response of exposed species. Hontela (1998) reported that low, chronic toxicant field exposures of fish appeared to reduce plasma corticosteroid levels, suggesting a compromised ability to respond to other stressors. Amphibians (Necturus maculosus) exposed in the field to polychlorinated biphenyls and organochlorine pesticides also demonstrated reduced ability to produce corticosterone when stressed (Gendron et al. 1997). As a final example, Benguira and Hontela (2000) documented reduced ability of rainbow trout (Oncorhynchus mykiss) interrenal tissue to secrete cortisol with adrenocorticotropic hormone stimulation after in vitro exposure to o,p′-dichlorodiphenyldichloroethane (DDD).

So, toxicant-induced death can result from specific and nonspecific effects to, or responses of, individuals. This conclusion should create in the reader an anticipation that a diversity of mortality dynamics exists within groups of exposed individuals. In the next section, the focus will shift to the nature of these differences among lethally exposed individuals.

9.1.2 LETHALITY DIFFERENCES AMONG INDIVIDUALS

It has been recognized that in bioassays, the least and most resistant individuals in a group show much greater variability in response than individuals near the median. A good deal of accuracy may therefore be gained by measuring some average response rather than a minimum or maximum response. …

(Sprague 1969)

Not surprisingly, toxicologists see variability in the resistance of individuals to lethal agents. Several factors contribute to this variability including allometric scaling, sex, age, genetics, and random chance. Even in the earliest publications quantifying lethal effects (e.g., Gaddum 1933), the influences of these factors were known. Except for random chance, which will be discussed in Sections 9.1.2.1 and 9.1.2.2, these factors will be described briefly here.

Scaling is simply the influence of organism size on structural and functional characteristics (Schmidt-Nielsen 1986). Many relevant processes such as those determining bioaccumulation (Anderson and Spear 1980), structures such as gill exchange surface area (Hughes 1966), and states such as metal body burden (Newman and Heagler 1991) are subject to scaling, so it is no surprise that the risk of death can be influenced by organism size. In fact, allometry, the science of scaling, is used to quantitatively predict differences in mortality for individuals differing in size (see Newman (1995) for details). Bliss (1936) developed a general power model that, in its various forms, currently enjoys widespread use for scaling lethal effects. As an important example, Anderson and Weber (1975) extended Bliss’s approach to predict the mortality expected in a toxicity test if tested fish differed in size:

Probit(P) = a + b log(M/W^h),    (9.1)

where Probit(P) = the probit transform⁵ of the proportion of exposed fish dying, M = the toxicant concentration, W = the weight of the exposed fish for which prediction was being made, h = an exponent adjusting mortality predictions for fish weight, and a and b = fitted regression constants. Hedtke et al. (1982) used Equation 9.1 successfully to quantify the influence of Coho salmon (Oncorhynchus kisutch) size on the lethal effects of copper, zinc, nickel, and pentachlorophenol. Anderson and Weber (1975) advocated that this relationship be applied generally; however, some studies such as Lamanna and Hart (1968) show that not all data sets fit this relationship. As will be discussed later in this chapter, scaling effects on mortality can also be easily accommodated using survival time modeling, as implemented by many statistical programs.
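As a rough illustration, a model of this kind can be fit by maximum likelihood to grouped mortality counts. The Python sketch below assumes the size adjustment enters as M/W^h, as written above; the concentrations, weights, counts, and starting values are hypothetical, not data from Hedtke et al. (1982) or any other cited study.

```python
# Minimal sketch (not the authors' code): maximum likelihood fit of a size-adjusted
# probit model, Probit(P) = a + b*log10(M / W**h), to grouped mortality counts.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, norm

M = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 0.5, 1.0, 2.0, 4.0, 8.0])  # concentrations (mg/L), hypothetical
W = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 2.0, 2.0, 2.0, 2.0, 2.0])  # mean fish weights (g), hypothetical
n = np.full(10, 20)                                               # number exposed per treatment
k = np.array([1, 4, 9, 15, 19, 0, 2, 6, 11, 17])                  # number dead per treatment

def neg_log_likelihood(params):
    a, b, h = params
    probit = a + b * np.log10(M / W**h)
    p = norm.cdf(probit - 5.0)            # probit = NED + 5, so subtract 5 before the normal cdf
    p = np.clip(p, 1e-9, 1 - 1e-9)        # guard against log(0)
    return -binom.logpmf(k, n, p).sum()   # binomial log-likelihood of the observed deaths

fit = minimize(neg_log_likelihood, x0=[5.0, 2.0, 0.5], method="Nelder-Mead")
a, b, h = fit.x

# Probit = 5 (P = 0.5) gives a weight-specific LC50: LC50 = 10**((5 - a)/b) * W0**h.
W0 = 1.0
lc50 = 10 ** ((5.0 - a) / b) * W0 ** h
print(f"a = {a:.2f}, b = {b:.2f}, h = {h:.2f}, LC50 for a {W0} g fish = {lc50:.2f} mg/L")
```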

5 See Section 9.2.2 for details about the probit transformation.


Sex and age can influence the risk of dying during toxicant exposure. Several studies have shown differences in sensitivity between the sexes, including Kostial et al. (1974) and Newman et al. (1989). Age is commonly an important factor determining sensitivity to toxicants (e.g., Hogan et al. 1987), although its influence is often confounded by its positive correlation with size. A cursory review of the previous chapters should reveal important biochemical, physiological, and anatomical differences that could give rise to sex- and age-dependent sensitivities. Some of these differences can produce unexpected results in combination. As an example, Williamson (1979) found that age and size of the land snail (Cepaea hortensis) had opposite effects on cadmium accumulation and probably on the adverse effects of this toxic metal.

As a quick glance ahead to Chapters 16 through 18 will confirm, many opportunities exist for genetic qualities to contribute to tolerance differences.⁶ There is no need to discuss genetic tolerance further at this point except to point out that one example described in Box 18.1 can be linked to the GAS. In that example, mosquitofish differed in the genetically determined form of a glycolytic enzyme (glucosephosphate isomerase) that is pivotal in the processing of glucose through metabolic pathways. Glucosephosphate isomerase-2 genotypes differed in their survival probabilities under stress and these differences were correlated with those in changes in glycolytic flux under general stress. Downward in the biological hierarchy, explanation for these response differences could notionally be linked to syntoxic hormone (glucocorticoid) responses in which blood glucose increases under stress. As done in Chapter 16, the glucosephosphate isomerase genotype differences during stress can also be projected upward in the biological hierarchy as one mechanism contributing to phenotypic plasticity and associated changes in life history strategies.

9.1.2.1 Individual Effective Dose Hypothesis

On this theory, the dosage-mortality curve is primarily descriptive of the variation in susceptibility between individuals of a population … the susceptibility of each individual may be represented by a smallest dose which is sufficient to kill it, the individual lethal dose.

(Bliss 1935)

The distributions of the individual effective doses and the results of the tests are in most cases “lognormal” …

(Gaddum 1953)

In modeling lethal effects, the variation in response among tested individuals is most often explained in the context of the individual effective dose or lethal tolerance hypothesis. The two quotes above present the essential features of this hypothesis. There is a minimum dose (or concentration) that is characteristic of each individual in a population, at or above which it will die and below which it will survive under the specified exposure conditions. For most populations, the distribution of such tolerances is believed to be described best by a log normal distribution with some individuals being very tolerant (Figure 9.3). Early toxicologists conjectured mechanisms for differences based on the then-popular Weber–Fechner Law⁷ or conventional adsorption laws such as the Langmuir isotherm model. The context from which these conjectures emerged was conventional laboratory toxicity testing in which most variables such as animal age, sex, and size were controlled, so the tolerance differences being explained were inherent—perhaps genetic—qualities. However, because conventional ecotoxicity test data are generated for diverse inbred laboratory lines or field-collected individuals, it is difficult to imagine a genetic mechanism that consistently produced a log normal distribution of tolerances for most populations and toxicants. Mono- and multigenetic differences in tolerance (see Chapters 17 and 18) could produce a variety of distributions from ecotoxicity testing. Moreover, some conventional tests use metazoan clones (e.g., Daphnia magna or Lemna minor) or unicellular algal or bacterial cultures. It is difficult to invoke a genetic mechanism that produces a log normal distribution of tolerances for these diverse clones, laboratory strains, and field-caught individuals. It is more plausible that phenotypic plasticity (see Chapter 16) might generate variability in many of these cases, but there does not seem to be a clear mechanism associated with phenotypic plasticity that would consistently produce a log normal distribution of tolerances. Regardless, this concept of a log normal distribution of inherent tolerance differences in all test populations was the first, and remains the dominant, explanation presented in the current ecotoxicology literature.

6 See Mulvey and Diamond (1991) for a general review.

7 A field called psycho-physics emerged during the first half of the nineteenth century in an attempt to quantify the intensity of human sensation resulting from a stimulus of a specified magnitude. The Weber–Fechner Law of psycho-physics states that the magnitude of the sensation (expressed on an arithmetic scale) increases in proportion to the logarithm of the stimulation. Extending this law, early toxicologists related the magnitude of toxic response to the logarithm of the dose or exposure concentration.

FIGURE 9.3 The upper panel shows the typical sigmoid concentration- (or dose-) mortality curve. The logarithm of the exposure concentration is plotted on the x-axis against the proportion of individuals dying during the exposure (P). This sigmoid curve can be described as a cumulative density function (cdf, upper panel) in which P = 0.16, 0.50, and 0.84 correspond approximately to 1 standard deviation below the mean, the mean, and 1 standard deviation above the mean. The antilogarithm of the x-value associated with P = 0.50 is an estimate of the median lethal concentration (LC50) or dose (LD50). The bottom panel shows the same data expressed as a probability density function, that is, as the conventional normal “bell curve.” The cumulative area to the left of the mean is 0.50, corresponding to P = 0.50 in the cdf above.


… the hypothesis may be capable of corroboration by independent experiments. If on the other hand the [lognormal] formulation is only that of a “mathematical model” then it would be [better] not to create any hypothetical tolerances …

(Berkson 1951)

This quote by Berkson precedes his counterargument that it is better to apply a log logistic model than a log normal one to toxicity data. But, more generally, it is an eminently reasonable point that remains inadequately addressed more than half a century later (see Box 12.2 in Chapter 12). Disinterest in the underlying mechanism by the founders of modern toxicology arises from pragmatism, as is evident in the following quote from Finney’s seminal book (1947):

The validity and appropriateness of the logarithmic transformation in the analysis of experimental data are not dependent on the truth or falsity of any hypotheses relating to adsorption; use of the log concentration requires no more justification than it introduces a simplification into the analyses.

In his arguments, Berkson (1951) related one experiment involving human tolerances to high altitude conditions that did not support the individual tolerance hypothesis, suggesting instead that differences in individual tolerances during testing were mostly random. Such a conclusion gives rise to an alternate explanation (the probabilistic or stochasticity hypothesis) that most of the variation among similar individuals results from a random process (or processes) that is best modeled with a log normal or a similar skewed distribution. Which specific individual dies within a treatment is a matter of chance. Nearly half a century later, Newman and McCloskey (2000) tested these two hypotheses, rejecting the customary assumption that the individual tolerance hypothesis was the sole explanation for observed differences in response of lethally exposed individuals. The stochasticity hypothesis was supported in two cases and the individual tolerance hypothesis in another. Neither hypothesis alone was adequate to explain the observed differences. Similar conclusions were recently made by Zhao and Newman (2007) for amphipods (H. azteca) exposed to copper or sodium pentachlorophenol.

FIGURE 9.4 Conventional sigmoid and sigmoid models with spontaneous (natural) mortality or a dose/concentration threshold. The inset illustrates hormesis at sublethal concentrations.

Two questions may have occurred to the critical reader at this point. First, why was the underlying mechanism for a foundation approach in classic toxicology left undefined for so long? Second, why is an understanding of the underlying mechanism important to the practicing ecotoxicologist?

An inkling of an answer to the first question emerges from statements of prominent toxicologists of the time such as that of Finney above. Originally, the log normal model was applied to quantify relative poison toxicity or drug potency, so it did not matter what the underlying mechanism was. Within the context of the laboratory bioassay, one chemical was or was not more potent than another. Classic toxicology could progress just fine without knowing the reason that data seemed to fit a skewed distribution. Precipitate explanation was presented without much scrutiny and the methods were broadly applied in studies of poisons and drugs. Unfortunately, because many ecotoxicologists tend to feel that anything good for mammalian toxicologists is good enough for them, it has been erroneously supposed that the underlying mechanism is also an esoteric issue in ecotoxicology, the science concerned with effects ranging from those to individuals to those to the biosphere. The error in this supposition can be shown in several ways but we will illustrate it here using only population consequences under repeated toxicant exposures. Suppose that a population was exposed for exactly 96 h to a toxicant concentration that kills half of the exposed individuals. Only the most tolerant individuals remain alive according to the individual tolerance theory, but the stochasticity hypothesis would predict that, after recovery, the tolerances of the survivors will be the same as those of the original population. During a second exposure, the concentration-response curve could be very different (individual tolerance theory) or the same (stochasticity theory) as that for the original population during the first exposure. Indeed, during a sequence of such exposures, the survivors would drop in numbers by 50% during the first exposure and then remain at that number under the individual tolerance hypothesis, but would drop down 50% with each exposure under the stochasticity hypothesis. The likelihood of local population extinction is quite different depending on which hypothesis is most appropriate or if both manifest in combination. Knowing which hypothesis is correct should be important to the ecotoxicologist attempting to predict population and associated community changes resulting from multiple exposures.
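The contrast can be made concrete with a short simulation (a hypothetical Python sketch, not an analysis from the text): a population of 1000 individuals is exposed repeatedly to a concentration that kills 50% the first time, either because the less tolerant half always dies (individual tolerance) or because each survivor independently has a 0.5 chance of dying in every exposure (stochasticity).

```python
# Hypothetical illustration: repeated identical exposures under the individual
# tolerance hypothesis versus the stochasticity hypothesis.
import numpy as np

rng = np.random.default_rng(1)
n0 = 1000          # starting population size (hypothetical)
n_exposures = 5    # number of sequential 96-h exposures

# Individual tolerance: each individual has a fixed lognormal tolerance; an exposure
# at the median tolerance kills exactly those at or below the median, every time.
tolerances = rng.lognormal(mean=0.0, sigma=1.0, size=n0)
exposure_conc = np.median(tolerances)
alive_tolerance = tolerances.copy()

# Stochasticity: every exposure kills each surviving individual with probability 0.5,
# independent of its history.
alive_stochastic = n0

for i in range(1, n_exposures + 1):
    alive_tolerance = alive_tolerance[alive_tolerance > exposure_conc]
    alive_stochastic = rng.binomial(alive_stochastic, 0.5)
    print(f"after exposure {i}: tolerance model = {alive_tolerance.size}, "
          f"stochastic model = {alive_stochastic}")
```

Under the tolerance model the count drops to about 500 and stays there; under the stochastic model it halves with every exposure, which is why the two hypotheses imply very different local extinction risks.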

9.1.3 SPONTANEOUS AND THRESHOLD RESPONSES

The model shown in Figure 9.3 can have an additional feature in some cases. If the test involves a prolonged exposure relative to the longevity of the test organism or tested life stage of the organism, there can be a certain level of spontaneous (natural) mortality. Unfortunately, in still other cases in which the husbandry of the test species is imperfect, there may be background mortality associated with the general stress placed on the test organisms. In these cases, the mortality curve will take on an additional feature as shown in Figure 9.4.

Another change to Figure 9.3 is required if a threshold concentration or dose is characteristic of a chemical agent (Cox 1987). Like the minimum time-to-death described in Section 9.1.1, some toxicity test data appear to have a minimum concentration or dose that must be exceeded before any deaths occur in the test treatments (Figure 9.4).

9.1.4 HORMESIS

The nature of toxicologically-based dose-response relationships has a long history that is rooted in the development and interpretation of the bioassay. While the general features of the bioassay were clearly established in the 19th century, the application of statistical principles and techniques to the bioassay is credited to Trevan and the subsequent contributions of Bliss and Gaddum [which] described the nature of the S-shaped dose-response relationship and the distribution of susceptibility within the context of the normal curve. Despite this long history of the S-shaped dose-response relationship, a substantial number of toxicologically-based publications from the 1880s to the present indicate that biologically relevant activity may occur below the NOAEL …⁸

(Calabrese and Baldwin 1998)

8 The NOAEL (no observed adverse effect level) is a statistically derived measure often used to imply a threshold concentration or dose below which no effect will be observed. See Chapter 10, Section 10.3 for more detail.


As described in the above quote, the sigmoid model that emerged out of a long history of bioassay research has gained a well-deserved place in the mammalian and ecological toxicology literatures. However, its prominence comes at the expense of some important features. One such example has already been discussed (i.e., the weak foundation for the oft-assumed individual tolerance theory). Another is associated with the lower end of the dose/concentration–(lethal or sublethal) effect model. Hormesis is the apparent stimulatory effect of a toxicant at subinhibitory concentrations or doses. With hormesis, the sigmoid curve is not monotonic and, instead, dips down at very low doses or concentrations (Figure 9.4). Superficially, hormesis might seem counterintuitive. How can a small amount of a poison be “good” for an exposed individual? However, as we have seen, a stressor can evoke the GAS or some other process, creating the potential for overcompensation at low levels. To use Selye’s terms, it can produce a state of heterostasis in which one aspect of fitness is conditionally enhanced. In Chapter 16, related shifts in phenotypes, such as those associated with life history strategies under harsh environmental conditions, also provide a rationale for such “stimulation” under subinhibitory doses or concentrations.

Hormesis has been recognized for some time, being established at various periods under the labels of the Arndt-Schulz law or Hueppe’s rule; however, it is only recently being discussed as a general phenomenon rather than a surprising oddity. Further discussion of hormesis and associated models can be found in Calabrese et al. (1987), Calabrese and Baldwin (1998, 2001), and Sagan (1987).

9.1.5 TOXICANT INTERACTIONS

To this point, the lethal effects of single toxicants have been emphasized, but many exposures involve simultaneous exposure to several toxicants that can interact. There are two traditional vantages for discussing the joint action of toxicants: mode of action based and additivity based.

Relative to mode of action, toxicants are said to have similar joint action if they act through the same mechanism. The joint lethal effects of two similarly acting toxicants can be predicted by knowing the dose or concentration of each toxicant and adjusting these concentrations for the relative potencies of each (Finney 1947). If toxicants have independent joint action, they have different modes of action and prediction of mixture effects is not as straightforward. In instances of potentiation, one chemical that is not toxic under the exposure conditions being considered can worsen—potentiate—the effect of another chemical. Synergistic action is the final joint action mode, for which prediction is possible only after one has a sound understanding of the means by which one toxicant synergizes (increases) or antagonizes (decreases) the action of the other. Antagonism between chemical agents can result from a variety of mechanisms. A functional antagonism occurs if the two chemicals counterbalance one another by affecting the same process in opposite directions. Two chemicals combine to form a less potent product with chemical antagonism. Dispositional antagonism involves chemicals that influence the uptake, movement or deposition within the body, or elimination of each other in a way that lessens their joint effect. Finally, receptor antagonism occurs if one chemical blocks the other from a receptor involved in its action and, in doing so, lowers its ability to adversely affect the exposed organism.
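For similarly acting toxicants, the relative-potency adjustment mentioned above is often carried out as a toxic unit calculation under concentration addition. The short sketch below shows only the arithmetic; the chemical names, LC50 values, and mixture concentrations are hypothetical, and this is one common convention rather than a procedure prescribed by the text.

```python
# Hypothetical toxic-unit sketch for two similarly acting toxicants under
# concentration addition: each concentration is scaled by that chemical's own LC50
# (its relative potency) and the scaled contributions are summed.
lc50 = {"chemical_A": 120.0, "chemical_B": 30.0}    # hypothetical LC50s (ug/L)
mixture = {"chemical_A": 40.0, "chemical_B": 12.0}  # hypothetical mixture concentrations (ug/L)

toxic_units = {name: mixture[name] / lc50[name] for name in mixture}
total_tu = sum(toxic_units.values())

print(toxic_units)                      # contribution of each chemical in toxic units
print(f"total toxic units = {total_tu:.2f}")
# Under strict concentration addition, a mixture totaling 1.0 toxic unit is expected
# to act like an LC50-level exposure to a single, similarly acting toxicant.
```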

Mixture treatment in terms of additivity is based on deviations from simple addition of two or more toxicant effects. Two or more chemicals are said to be (effect) additive if their combined effect in mixture is simply the sum of the effects expected for each if each were administered separately. If their effects together are less than additive, they are said to be acting antagonistically. If their effects together are greater than additive, they are said to be acting synergistically. This approach will not be described in further detail because it provides less potential for linkage between suborganismal and organismal population-level effects than the vantage based on mode of action.


9.2 QUANTIFYING LETHALITY

9.2.1 GENERAL

In 1927 Trevan drew attention to the fact that the threshold dose [of a drug] varies enormously even when the animals are as uniform as possible, and proposed that toxicity testing should be based on the median lethal dose, which kills 50 per cent of the animals.

9.2.2 DOSE OR CONCENTRATION–RESPONSE MODELS


A well-established approach exists for quantifying lethal effects from data sets of concentration versus proportion of exposed individuals dying. The most common is the log normal model discussed above, which involves log transformation of the concentration and then fitting of the data to the following model (Finney 1947):

P = ∫_{−∞}^{log x0} (1/(σ√(2π))) exp[−(x − µ)²/(2σ²)] dx,    (9.2)

where P = the proportion expected to die, x0 = the concentration for which predictions are being made, µ = the mean, and σ = the standard deviation.

Early in the formulation of quantitative methods for dealing with concentration-effect data, there was a need to transform data into terms that could easily be dealt with using simple logarithm tables and mechanical adding machines. The model above was transformed accordingly by expressing the proportions responding in units of standard deviations from the mean of the normal distribution.¹⁰ The name given to this transformed proportion was the normal equivalent deviation (NED). This transformation still resulted in some computational inconvenience at that time because NED values for proportions below 0.5 were negative numbers. Simply to avoid negative numbers in computations, five was added to the NED to produce the probit transformation: Probit(P) = NED(P) + 5. A plot of NED or probit versus log concentration should produce a straight line if the log normal model was appropriate for a data set. Now, using some method such as maximum likelihood estimation, these types of data could be fit to a model such as the following:

Probit(P) = a + b log C,    (9.3)

where C = the exposure concentration, a = an estimated regression intercept, and b = an estimated regression parameter accounting for the influence of exposure concentration. Because no advantage exists for using the probit transform after the advent of modern computers, models also are formulated using the NED instead of the probit. Nonlinear fitting can also be done with standard software without any computational difficulty.
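As an illustration of the transformation just described (a minimal sketch with hypothetical concentrations and proportions, using a crude least-squares line rather than the maximum likelihood fit a standard package would apply):

```python
# Hypothetical sketch of the classic graphical check: the NED (or probit = NED + 5)
# of the observed proportion dying should be roughly linear in log10(concentration)
# if the log normal model is appropriate.
import numpy as np
from scipy.stats import norm

conc = np.array([1.0, 1.8, 3.2, 5.6, 10.0])       # hypothetical exposure concentrations
p_obs = np.array([0.05, 0.20, 0.45, 0.75, 0.95])  # hypothetical proportions dying

ned = norm.ppf(p_obs)            # normal equivalent deviation
probit = ned + 5.0               # historical probit transform

b, a = np.polyfit(np.log10(conc), ned, deg=1)     # crude least-squares line (not MLE)
lc50 = 10 ** (-a / b)            # NED = 0 corresponds to P = 0.5
print(f"slope = {b:.2f}, intercept = {a:.2f}, LC50 ~ {lc50:.2f}")
```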

The simple generalized model can be specified based on the cumulative normal function (denoted Φ here),¹¹

P = Φ(a + b log C).    (9.4)

Spontaneous mortality (PS = the proportion of unexposed individuals dying) can be included in this model. If P ≥ PS,

P = PS + (1 − PS) Φ(a + b log C),    (9.5)

and P = PS at C = 0. A lethal threshold can also be included in Equation 9.4 for concentrations (C) greater than the threshold concentration (CT),

P = Φ(a + b log(C − CT)).    (9.6)

The P approaches 0 if C ≤ CT for this model. This model can be modified further to include natural mortality (e.g., Equation 9.5), in which case P = PS if C ≤ CT. Including hormesis in these models is more involved but can be done as demonstrated by Bailer and Oris (1994).
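These variants can be written compactly as functions. The sketch below follows the general forms given above (cumulative normal model, with optional spontaneous mortality PS and threshold CT); the parameter values are hypothetical.

```python
# Sketch of the generalized concentration-mortality models described above:
# a basic cumulative-normal model, a version with spontaneous (natural) mortality PS,
# and a version with a lethal threshold concentration CT.
import numpy as np
from scipy.stats import norm

def p_basic(conc, a, b):
    """P = Phi(a + b*log10(C))."""
    return norm.cdf(a + b * np.log10(conc))

def p_spontaneous(conc, a, b, ps):
    """Adds spontaneous mortality: P = PS + (1 - PS)*Phi(a + b*log10(C))."""
    return ps + (1.0 - ps) * p_basic(conc, a, b)

def p_threshold(conc, a, b, ct):
    """Adds a lethal threshold CT: P is ~0 for C <= CT."""
    conc = np.asarray(conc, dtype=float)
    p = np.zeros_like(conc)
    above = conc > ct
    p[above] = norm.cdf(a + b * np.log10(conc[above] - ct))
    return p

c = np.array([0.5, 1.0, 2.0, 4.0, 8.0])             # hypothetical concentrations
print(p_basic(c, a=-1.0, b=2.0))
print(p_spontaneous(c, a=-1.0, b=2.0, ps=0.05))
print(p_threshold(c, a=-1.0, b=2.0, ct=0.8))
```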

Several other functions are commonly fit to these kinds of data. Those associated with the log logistic (or logit) model are the most common alternatives to the log normal functions just described. Conventionally, the log odds or logit transformation is applied,

Logit(P) = ln[P/(1 − P)],    (9.7)

with a transformed logit (Logit(P)/2 + 5) sometimes used to place values on a scale similar to that of the probit. […] a Gompertz model to ecotoxicity data. All of these models can be applied to concentration-lethal response data after appropriate substitutions into Equations 9.4 through 9.6.

11 To illustrate the ease with which these calculations can now be done: invoking the Excel™ function NORMINV(Probability, Mean, Standard Deviation), where Probability = the proportion for which the calculation is to be done, Mean = the distribution mean, and Standard Deviation = the distribution standard deviation, calculates the NED if mean = 0 and standard deviation = 1, that is, for the unit normal curve, N(0,1). As an example, NORMINV(.84134474,0,1) will return 1. The following function would return the probit: NORMINV(.84134474,0,1) + 5.
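Outside Excel, the same values can be obtained from any inverse standard normal function; for example, a SciPy equivalent of the footnote's calculation is:

```python
# NED and probit via the inverse of the standard normal cdf (equivalent to the
# Excel NORMINV(p, 0, 1) calculation described in footnote 11).
from scipy.stats import norm

p = 0.84134474
ned = norm.ppf(p)        # normal equivalent deviation; ~1.0 for this p
probit = ned + 5.0       # historical probit
print(ned, probit)       # approximately 1.0 and 6.0
```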


FIGURE 9.5 Methods for estimating the LC50 and associated confidence limits from dose/concentration versus proportion dying data sets. Although not shown in the diagram, these summary statistics can be eked out of data sets in which all of the treatments had either complete or no mortality at all; a binomial method can provide an estimate for such data sets with no partial kills.

Parametric and nonparametric methods exist for analyzing data from concentration-lethal response tests. Many can also be applied for nonlethal effects. Each (Figure 9.5) carries advantages and disadvantages. The best methods can be applied if one assumes an explicit model. The presence of spontaneous or threshold mortality, or hormesis, requires a model incorporating these features. Such more complicated models are available that use maximum likelihood methods to estimate the associated model parameters and lethality metrics such as the LC50. Most concentration-lethal effect data are analyzed using simpler methods that assume a specific model. If there is no a priori reason to select one model over another, for example, log normal over the log logistic, Gompertz, or Weibull, the data can be fit to all of the candidate models and then the results compared. Comparison usually involves plotting the actual data and model predictions, and also calculating a goodness-of-fit statistic such as the χ²-statistic. The model providing the best fit is selected for estimating model parameters and predicting metrics such as the LC50 and its 95% fiducial (confidence) limits. Nonparametric methods can be used to estimate the LC50 and 95% fiducial limits if an acceptable model was not apparent. The Spearman–Karber method, with or without trimming, is the most commonly applied nonparametric approach. Most applications of the Spearman–Karber approach conform to recommendations of Hamilton et al. (1977), especially those about trimming rules. In some applications of toxicity testing, there are no partial kills: each treatment in the test has either no mortality or complete mortality. Stephan (1977) suggested that an LC50 and associated confidence limit could be estimated from such data using a binomial method. Essentially, the LC50 can be estimated from the highest concentration treatment with no mortality (CNO) and the lowest concentration treatment with complete mortality (CALL),

LC50 = (CNO × CALL)^(1/2),

that is, the geometric mean of these two concentrations.
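For comparison, the core of the untrimmed Spearman–Karber calculation is also short. The sketch below uses hypothetical data and omits trimming and confidence limits, so it is not the full procedure recommended by Hamilton et al. (1977).

```python
# Bare-bones untrimmed Spearman-Karber LC50 estimate (hypothetical data; no trimming
# and no confidence limits).
import numpy as np

conc = np.array([1.0, 1.8, 3.2, 5.6, 10.0])   # exposure concentrations (hypothetical)
p = np.array([0.0, 0.15, 0.40, 0.80, 1.0])    # proportion dead; must rise monotonically from 0 to 1

x = np.log10(conc)
log_lc50 = np.sum(np.diff(p) * (x[:-1] + x[1:]) / 2.0)  # Spearman-Karber mean of the log tolerance
print(f"LC50 estimate: {10 ** log_lc50:.2f}")
```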
