Environmental Toxicology


Environmental toxicology

1.1. Environmental toxicology

🧭 Overview

🧠 One-sentence thesis

Environmental toxicology studies how chemicals interact with organisms and the environment to produce hazardous effects, integrating knowledge from environmental chemistry (fate and exposure), toxicology (effects on individuals), and ecology (ecosystem-level implications).

📌 Key points (3–5)

  • What it studies: the fate and effects of potentially hazardous chemicals in the environment, including humans.
  • Three pillars: environmental chemistry (chemical fate), toxicology (effects on organisms), and ecology (higher-level biological organization).
  • Common confusion: environmental toxicology vs. ecotoxicology—environmental toxicology includes human health endpoints, while ecotoxicology focuses only on ecological endpoints.
  • Why it emerged: growing awareness in the late 20th century that emitted chemicals can trigger hazardous effects in organisms, including humans.
  • Key challenge: translating individual-level effects to population and ecosystem levels, where subtle effects may be highly relevant or irrelevant depending on ecological context.

🔬 The three-pillar framework

🧪 Environmental chemistry: fate and exposure

Environmental chemistry studies the fate of chemicals in the environment, including their distribution over different environmental compartments and how this distribution is influenced by physicochemical properties and environmental characteristics.

  • What it covers: pathways and processes after emission—advection, deposition, (bio)degradation.
  • Ultimate aim: reliable assessment of organism exposure, complicated by environmental heterogeneity.
  • Tools used:
    • Analytical measurements (emissions, concentrations, specific processes like biodegradation).
    • Mathematical modeling to predict fate and exposure, reducing the need for expensive measurements once validated.
  • Example: measuring how a chemical spreads through soil, water, and air, then building a model to predict exposure levels for organisms in those compartments.
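To make the measurement-to-model step concrete, here is a minimal sketch of a fate calculation, assuming a hypothetical chemical that degrades with first-order kinetics and partitions between water and soil according to a fixed distribution coefficient; all compartment sizes, rate constants, and the equilibrium-partitioning assumption are illustrative, not taken from the excerpt.

```python
# Minimal fate sketch: first-order degradation plus equilibrium soil-water
# partitioning for a hypothetical chemical. All numbers are illustrative.
import math

emission_kg = 100.0          # mass emitted to a small catchment (hypothetical)
half_life_days = 60.0        # overall degradation half-life (hypothetical)
k_deg = math.log(2) / half_life_days   # first-order rate constant [1/day]

K_d = 10.0                   # soil-water distribution coefficient [L/kg] (hypothetical)
water_volume_L = 1.0e6
soil_mass_kg = 2.0e5

def mass_remaining(t_days: float) -> float:
    """Mass left after t days, assuming simple first-order degradation."""
    return emission_kg * math.exp(-k_deg * t_days)

def water_concentration(total_mass_kg: float) -> float:
    """Split the remaining mass over water and soil at equilibrium (C_soil = K_d * C_water)."""
    total_mg = total_mass_kg * 1e6
    return total_mg / (water_volume_L + K_d * soil_mass_kg)   # [mg/L]

for t in (0, 30, 90):
    m = mass_remaining(t)
    print(f"day {t:3d}: {m:6.1f} kg left, C_water ≈ {water_concentration(m):.4f} mg/L")
```

In practice, multimedia fate models add more compartments (air, sediment, biota) and more processes (advection, deposition), but the structure is the same: emission and property data in, predicted exposure concentrations out.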

🧬 Toxicology: effects on organisms

Toxicology studies the effects of chemicals on organisms, often at the individual level, aiming to understand mechanisms of toxicity and identify safe exposure levels.

  • Fundamental principle: the dose concept (Paracelsus, 16th century)—likelihood of adverse effects depends on the dose organisms are exposed to.
  • Two key process types:
    • Toxicokinetics (ADME): absorption, distribution, metabolism, excretion—"what the body does to the substance"; determines internal dose at the site of toxic action.
    • Toxicodynamics: evolution of adverse effects from the molecular initiating event (MIE) through the adverse outcome pathway (AOP)—"what the substance does to the body."
  • Shift in methods: moving from whole-organism "black box" testing to in vitro testing and in silico predictive approaches (e.g., physiologically-based toxicokinetic models, genomics, proteomics) to reduce animal testing.
  • Example: a chemical is absorbed into an organism, metabolized in the liver, and the metabolite binds to a receptor (MIE), triggering a cascade leading to an adverse outcome.
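The ADME idea above can be sketched numerically with a one-compartment toxicokinetic model, assuming first-order uptake from a constant external concentration and first-order elimination; the rate constants and exposure level below are hypothetical.

```python
# One-compartment toxicokinetic sketch: first-order uptake from a constant
# external concentration and first-order elimination. Parameters are hypothetical.
import math

C_external = 2.0   # exposure concentration, e.g. mg/L in water (hypothetical)
k_uptake = 0.5     # uptake rate constant [1/day] (hypothetical)
k_elim = 0.1       # elimination rate constant [1/day] (hypothetical)

def internal_concentration(t_days: float) -> float:
    """C_int(t) for dC_int/dt = k_uptake*C_external - k_elim*C_int, starting at zero."""
    c_steady = k_uptake * C_external / k_elim        # steady-state internal level
    return c_steady * (1.0 - math.exp(-k_elim * t_days))

for t in (1, 7, 30):
    print(f"day {t:2d}: internal concentration ≈ {internal_concentration(t):.2f} mg/kg")
```

In this simple model the steady-state ratio k_uptake/k_elim acts as a bioconcentration factor; the internal concentration it predicts is the "dose at the site of toxic action" that toxicodynamics then links to effects.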

🌿 Ecology: ecosystem-level implications

Ecology studies the interactions between organisms and their environment, providing knowledge to translate individual-level effects to population and ecosystem levels.

  • Why it matters: effects relevant at the individual level (e.g., tumor risk) may be irrelevant at higher levels; subtle individual effects may be highly relevant at the ecosystem level.
  • Key ecological knowledge needed:
    • Life cycles, population regulation factors, genetic variability, spatial distribution patterns.
    • Organism roles in nutrient cycling and decomposition.
  • Recent focus: landscape configuration, distribution patterns, and timing of exposure events as important determinants of ecosystem effects.
  • Example: behavioral changes in fish after exposure to antidepressants may seem subtle individually but can affect population dynamics and ecosystem processes.
  • Don't confuse: an effect that looks serious in a lab test on one organism may not matter at the population level if it doesn't affect reproduction or survival rates in the wild.

🔄 Interactions and integration

🔺 The triangle model

The excerpt describes environmental toxicology as a triangle with three vertices: chemicals, the environment, and organisms.

  • What it illustrates: fate and hazardous effects are determined by interactions among all three components.
  • How the pillars fit:
    • Environmental chemistry → chemicals and environment.
    • Toxicology → chemicals and organisms.
    • Ecology → organisms and environment (and higher-level organization).
  • This framework emphasizes that no single discipline alone can address environmental toxicology problems; integration is essential.

🧩 From measurements to models to risk assessment

  • Environmental chemists measure emissions and concentrations, discover patterns (e.g., between substance properties and environmental characteristics), and integrate these into mathematical models.
  • Validated models allow risk assessors to assess exposure without expensive measurements.
  • Toxicologists determine dose-response relationships and mechanisms, increasingly using in vitro and in silico methods.
  • Ecologists provide the context to interpret whether individual-level effects matter at population or ecosystem scales.

🆚 Environmental toxicology vs. ecotoxicology

| Term | Scope | Key distinction |
| --- | --- | --- |
| Environmental toxicology | Fate and effects of chemicals in the environment, including human health endpoints | Includes humans as assessment endpoints |
| Ecotoxicology | Fate and effects of chemicals in the environment, restricted to ecological endpoints | Excludes human health; focuses only on ecosystems |

  • The excerpt states that because the book includes human health as an assessment endpoint, "environmental toxicology" is preferred over "ecotoxicology."
  • Don't confuse: the two terms are often used interchangeably, but the inclusion or exclusion of human health is the main distinction.

🌍 Historical and practical context

📅 Why it emerged

  • Environmental toxicology emerged in response to growing awareness in the second half of the 20th century that chemicals emitted to the environment can trigger hazardous effects in organisms, including humans.
  • The excerpt notes that Section 1.3 (not included here) provides a brief history.

🎯 Multidisciplinary nature

Environmental toxicology is a multidisciplinary field assimilating and building upon knowledge, concepts, and techniques from other disciplines, such as toxicology, analytical chemistry, biochemistry, genetics, ecology, and pathology.

  • It is not a standalone discipline but integrates tools and concepts from many fields.
  • This integration is necessary to address the complexity of chemical fate and effects in heterogeneous environments.

DPSIR Framework

1.2. DPSIR

🧭 Overview

🧠 One-sentence thesis

The DPSIR framework structures environmental pollution problems into five categories—Drivers, Pressures, State, Impacts, and Responses—to facilitate communication among scientists, policymakers, and stakeholders and to clarify the role of environmental toxicology in analyzing and solving these problems.

📌 Key points (3–5)

  • What DPSIR is: a communication tool that organizes environmental issues into five categories (Drivers, Pressures, State, Impacts, Responses) to prevent miscommunication.
  • Why it matters: different people use terms like "cause," "source," and "effects" differently; DPSIR provides a common frame of reference.
  • Where environmental toxicology fits: mainly in the Pressures, State, and Impacts blocks—chemical use/emissions, fate in the environment, and adverse effects.
  • Common confusion: labeling processes (e.g., is "agriculture" a driver or a pressure?) varies by perspective and detail level; the framework must be applied flexibly, not rigidly.
  • Limitations: DPSIR emphasizes physical cause-and-effect chains and underrepresents the societal dimension (governance, awareness, resources, knowledge).

🧩 The five DPSIR categories

🚗 Drivers

Drivers: the human needs underlying the human activities that ultimately result in adverse effects.

  • These are fundamental human needs, not the activities themselves.
  • Example: the human need for food drives the use of pesticides like neonicotinoids.
  • Don't confuse: drivers are needs, not the actions taken to meet those needs.

🏭 Pressures

Pressures: human activities initiated to fulfill human needs and resulting in changes in the physical environment that ultimately lead to—often unforeseen—adverse consequences for the environment or certain groups of society that are perceived as problematic, either now or in the future.

  • These are the actual activities (e.g., using neonicotinoids in agriculture).
  • They cause changes in the physical environment.
  • The adverse consequences may be unforeseen and may be perceived as problematic now or later.

🌊 State (variables)

State: refers to the status of the physical environment.

  • Often quantified by observable changes in environmental parameters.
  • Example: the concentration of neonicotinoids in water, air, soil, and biota.
  • Consecutive changes can be labeled as 1st, 2nd, and 3rd order state variables (e.g., rising CO₂ levels → temperature increase → species abundance shift).

⚠️ Impacts

Impacts: any changes in the physical environment or society that are a consequence of the environmental pressures and that are perceived as problematic by society or some groups in society.

  • These are changes perceived as problematic.
  • Example: increasing bee mortality attributed to neonicotinoids, or human health effects of pesticides.
  • Closely related to protection goals in risk assessment: if society agrees an impact should be prevented, it becomes a protection goal.
  • Don't confuse: a change in species abundance can be labeled as a 3rd order state variable or as an impact, depending on whether it is perceived as problematic.

🛠️ Responses

Responses: all initiatives developed by society to address the issue.

  • Range from gathering knowledge to developing policy plans and taking mitigation measures.
  • Examples: risk-based admission procedures for neonicotinoids, more efficient spraying techniques, environmentally friendly pest control.

🗣️ Why DPSIR is a communication tool

🗣️ Preventing miscommunication

  • People use terms like "cause," "source," and "effects" differently.
  • Example confusion: some may say a farmer is the main cause of pesticide pollution; others may say it's the manufacturer or the growing world population.
  • Example confusion: some may see pesticide concentration in water as an effect of use; others may refer to species extinction as the effect.
  • DPSIR provides a common vocabulary to avoid these misunderstandings.

🔄 Flexibility is key

  • The framework is flexible and should be adapted to the problem at hand.
  • It is not a rigid mold that fits all environmental issues.
  • Labeling of processes (driver vs. pressure, state vs. impact) will differ between people because categories are broadly defined and detail levels vary.
  • Example: some may call "agriculture" a driver, others a pressure; some add a new category "human activities" between drivers and pressures.
  • Main strength: it stimulates communication and supports development of a common understanding.

🧪 Environmental toxicology within DPSIR

🧪 Focus on Pressures, State, and Impacts

  • Pressures block: the use of chemicals by society (e.g., in agriculture or consumer products) and their emission to the environment.
  • State block: the fate of chemicals in the environment and their accumulation in organisms.
  • Impacts block: the adverse effects triggered in ecosystems and humans.

🎯 Risk assessment and DPSIR

  • An important step in risk assessment is deriving safe exposure levels (e.g., Predicted No Effect Concentration (PNEC) for ecosystems, Acceptable Daily Intake (ADI) for humans).
  • In DPSIR terms: defining an acceptable impact level (e.g., zero effect or 1-in-a-million tumor risk) and translating it into a corresponding state parameter (e.g., chemical concentration in air or water).
  • Fate modelling predicts state parameters (soil, water, air, organism concentrations) based on pressure parameters (emission data).
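A minimal numerical sketch of this reasoning, assuming a hypothetical ecotoxicity dataset, an illustrative assessment factor, and a simple dilution-based exposure estimate (none of these values come from the excerpt):

```python
# Sketch of deriving a safe exposure level (PNEC-style) and comparing it with a
# predicted state parameter. All values and the assessment factor are illustrative.

toxicity_data_mg_per_L = {          # hypothetical ecotoxicity test results
    "algae_EC50": 1.2,
    "daphnia_EC50": 0.8,
    "fish_LC50": 2.5,
}
assessment_factor = 1000            # illustrative safety factor on the lowest value

pnec = min(toxicity_data_mg_per_L.values()) / assessment_factor   # acceptable impact as a concentration

emission_kg_per_day = 1.5           # pressure parameter (hypothetical)
river_flow_L_per_day = 5.0e8        # dilution volume (hypothetical)
pec = emission_kg_per_day * 1e6 / river_flow_L_per_day            # predicted state parameter [mg/L]

print(f"PNEC ≈ {pnec:.4f} mg/L, PEC ≈ {pec:.4f} mg/L, risk quotient PEC/PNEC ≈ {pec/pnec:.2f}")
```

A risk quotient above 1 would signal that the predicted state parameter exceeds the level derived from the acceptable impact.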

⚖️ Advantages and limitations

✅ Advantages

  • Provides a common frame of reference for scientists, policymakers, and stakeholders.
  • Shows why knowledge on fate and impact of chemicals is needed to address pollution issues.
  • Highlights that use of this knowledge is always subject to valuation (i.e., how society values the adverse effects).
  • Widely used by national and international institutes (European Environment Agency, US-EPA, OECD).
  • Sometimes used as a first step in modelling: identify relevant processes, then describe them quantitatively to predict environmental concentrations or ecological effects.

❌ Limitations and criticisms

  • Tries to capture all processes in cause-and-effect relationships, resulting in a bias toward the physical dimension (human activities, emissions, physical effects, mitigation measures).
  • The societal dimension is less easily captured (knowledge generation, governance structures, resources needed for measures, awareness, societal framing).
  • Although DPSIR can be adapted (e.g., extended DPSIR in Figure 2 emphasizes governance, awareness, resources, and knowledge), it has limitations.
  • Several alternative frameworks better capture the societal dimension.
  • Don't confuse: DPSIR is not a panacea; it is one tool among others.

📊 Summary table

| DPSIR category | Definition | Example (neonicotinoids) |
| --- | --- | --- |
| Drivers | Human needs underlying activities | Need for food |
| Pressures | Human activities causing environmental changes | Use of neonicotinoids in agriculture |
| State | Status of the physical environment | Concentration of neonicotinoids in water, air, soil, biota |
| Impacts | Changes perceived as problematic | Increasing bee mortality, human health effects |
| Responses | Societal initiatives to address the issue | Risk-based admission procedures, efficient spraying techniques, environmentally friendly pest control |

Short History of Environmental Toxicology

1.3. Short history

🧭 Overview

🧠 One-sentence thesis

Environmental toxicology evolved from ancient empirical knowledge of natural poisons to a modern problem-oriented discipline, driven by increasing awareness of how human activities—especially large-scale cultivation and industrial work—can turn toxic exposures into widespread environmental and health disasters.

📌 Key points (3–5)

  • Ancient roots: People have known about poisonous plants and animals since earliest times; ancient Egyptians, Greeks, and Romans documented toxic and curative properties of natural products.
  • From individual to population-scale harm: Historical poisoning events (e.g., ergotism epidemics killing thousands) showed that large-scale human activities like grain cultivation introduced new risks that were initially not understood.
  • Occupational exposure recognition: Centuries ago, observers noted that miners exposed to metals and elements developed specific diseases, marking early awareness of industrial toxic hazards.
  • Common confusion: Natural poisons vs. human-caused exposure—poisons have always existed in nature, but the scale and context of exposure changed dramatically with agriculture and industry.
  • Growing awareness over time: The excerpt traces a progression from localized, often mysterious poisonings to documented, large-scale environmental and occupational health risks.

🌿 Poisons in nature and early human knowledge

🌿 Natural toxins and empirical knowledge

  • Poisonous substances are common in nature—certain plants and animals have always posed risks.
  • People living in close contact with nature developed extensive empirical knowledge of these poisons.
  • Uses included practical applications: catching fish, poisoning arrowheads, magic rituals, and medicines.
  • Example: Communities used natural poisons for hunting and healing, showing that toxicity was recognized and harnessed long before formal science.

📜 Ancient documentation

  • Egyptian knowledge (1550 BC): The Ebers Papyrus demonstrates that ancient Egyptians had extensive knowledge of both toxic and curative properties of natural products.
  • Greek and Roman interest: Greeks and Romans were very interested in poisons; they used them for executions (e.g., Socrates executed with hemlock extract from Conium maculatum) and to murder political opponents.
  • Why poisons were effective for murder: It was usually impossible to establish the cause of death by examining the victim, since advanced chemical analysis was not available at that time.

📖 Early European writings

  • European literature includes many writings on toxins, such as herbals.
  • Example: The Dutch "Herbarium of Kruidtboeck" by Petrus Nylandt (1673) documented plant toxins.

⚠️ From individual poisoning to environmental disasters

⚠️ Ergotism: a historical epidemic

Ergotism: a condition caused by the fungus Claviceps purpurea, which occurs as a parasite in grain (particularly rye, known as spurred rye).

  • Scale of disaster: In the past, ergotism epidemics killed thousands of people who ingested the fungus with their bread.
  • Example from 992 AD: An estimated 40,000 people died of ergotism in France and Spain in a single year.
  • Lack of awareness: People were not aware that death was caused by eating contaminated bread; it was not until much later that it came to be understood that large-scale cultivation of grain involved this kind of risk.

🔍 Key shift in understanding

  • The excerpt highlights a transition: what was once a mysterious mass death event was eventually understood as a consequence of agricultural practices.
  • Don't confuse: The fungus itself is natural, but the large-scale risk emerged from human activity (grain cultivation at scale).
  • This illustrates how human activities can amplify natural hazards into environmental disasters.

🏭 Occupational and industrial toxic exposures

🏭 Mining industry and metal exposure

  • Centuries ago, it was pointed out that workers in the mining industry came into contact with a variety of metals and other elements and tended to develop specific diseases.
  • Observed patterns: Symptoms were regularly observed as a result of contact with arsenic and mercury in the mining industry.
  • This marks early recognition that industrial work environments could cause specific, reproducible health effects.

🧪 Paracelsus (mentioned but not elaborated)

  • The excerpt mentions Paracelsus in the context of historical figures who contributed to understanding toxicity, but does not provide details about his contributions in this passage.
  • (The text cuts off after mentioning him, so no further information is available from the excerpt.)

📈 Evolution of awareness

📈 Increasing recognition over time

  • The excerpt traces a historical arc:
    1. Earliest times: Empirical knowledge of natural poisons, used for practical purposes.
    2. Ancient civilizations: Documented knowledge (Egyptians, Greeks, Romans) and deliberate use (executions, murder).
    3. Medieval/early modern period: Large-scale disasters (ergotism) that were initially mysterious but later understood as linked to human activities.
    4. Industrial era: Recognition of occupational diseases from mining and metal exposure.
  • Key theme: Awareness of environmental and health risks has grown as human activities scaled up and as people began to connect specific exposures to specific harms.

🔄 From mystery to understanding

  • Early poisoning events were often unexplained; cause of death could not be established without advanced chemical analysis.
  • Over time, patterns emerged (e.g., miners getting sick, bread causing epidemics) that allowed people to link exposures to outcomes.
  • This growing awareness laid the groundwork for the modern discipline of environmental toxicology, which is problem-oriented and aims to prevent and address pollution problems in society (as noted in earlier sections).

Historical Development of Environmental Toxicology

2.1. Introduction

🧭 Overview

🧠 One-sentence thesis

Environmental toxicology emerged from centuries of observations about poisons and occupational diseases, gaining formal recognition in the mid-20th century when industrial chemical production and major disasters revealed the widespread environmental and health impacts of synthetic chemicals.

📌 Key points (3–5)

  • Paracelsus's dose principle: "Everything is a poison; only the dose makes it not a poison"—the toxic effect depends on the amount of exposure, a principle still valid but often neglected in public understanding.
  • Early occupational toxicology: Specific diseases in miners (16th century) and scrotal cancer in chimney sweepers (1775) showed that workplace exposure to substances like metals, arsenic, mercury, and soot caused health problems.
  • Silent Spring and DDT: Rachel Carson's 1962 book raised public awareness that persistent pesticides like DDT bioaccumulate through food chains, causing reproductive failure in birds and prompting regulatory bans by the 1980s.
  • Major disasters drove policy: The 1984 Bhopal pesticide leak (killing ~4,000 immediately) and the 1986 Sandoz agrochemical spill (massive Rhine river damage) triggered stricter environmental standards, permitting, and quality controls.
  • Common confusion: Environmental toxicology vs. toxicology—environmental toxicology expanded from studying toxic impacts on humans to studying impacts on the entire environment (ecosystems, wildlife, etc.).

📜 Ancient and early knowledge of poisons

🌿 Traditional and ancient use

  • Indigenous peoples and ancient civilizations (Egyptians, Greeks, Romans) had extensive empirical knowledge of poisonous animals and plants.
  • Uses included fishing, poisoning arrowheads, magic rituals, medicines, and executions (e.g., Socrates executed with hemlock extract).
  • Poisons were also used for political murder because chemical analysis to detect cause of death did not exist at the time.

⚠️ Historical poisoning epidemics

  • Ergotism: caused by the fungus Claviceps purpurea parasitizing grain (especially rye).
  • People unknowingly ate contaminated bread, leading to mass deaths (e.g., ~40,000 deaths in France and Spain in 992).
  • Large-scale grain cultivation increased this risk, but the connection was understood only much later.

🔬 Renaissance and the dose principle

🧪 Paracelsus (1493–1541)

Paracelsus's principle: "All things are poison … it is only the dose that makes it not a poison."

  • Paracelsus, a Swiss physician, described diseases in miners exposed to arsenic, mercury, and other metals in his 1567 treatise.
  • He emphasized dose-dependency: the toxic effect of a substance depends on the amount of exposure.
  • This principle remains valid today but is one of the most neglected in public understanding of toxicology.

🛡️ Early preventive measures

  • Georgius Agricola's 1556 work De Re Metallica addressed health aspects of working with metals.
  • Agricola advised preventive measures: wearing protective clothing (masks) and using ventilation.

🏭 Occupational toxicology and cancer

🧹 Percival Pott and chimney sweepers (1775)

  • Percival Pott was the first to describe occupational cancer: high frequency of scrotal cancer among British chimney sweepers due to soot exposure.
  • Soot contains polycyclic aromatic hydrocarbons (PAHs) and carcinogens like cadmium and chromium.
  • Of 1,487 scrotal cancer cases reported, 6.9% occurred in chimney sweepers; similar cases were reported in other countries.
  • Pott described the harsh conditions: children were "bruised, burned, and almost suffocated" in narrow, hot chimneys, then developed "a most noisome, painful, and fatal disease" at puberty.

🦋 Peppered moth and industrial pollution

  • The peppered moth (Biston betularia) is normally light-colored with black speckles, camouflaged against lichen-covered tree trunks.
  • During the Industrial Revolution, coal smoke darkened trees in the UK, and the dark (melanic) form of the moth became dominant in polluted areas.
  • British biologists Cyril Clarke and Philip Sheppard showed that dark moths had a survival advantage on dark trunks (natural selection).
  • When air pollution decreased, the melanic form became less abundant again.

🌍 Post-WWII awareness and Silent Spring

🧪 Synthetic chemicals and limited awareness (1950s–1960s)

  • After WWII, synthetic chemical production became widespread, but awareness of environmental and health risks was limited.
  • In the 1950s, environmental toxicology emerged as concern grew about toxic chemicals' impact on the environment, expanding toxicology from human health to environmental impacts.

📖 Rachel Carson's Silent Spring (1962)

  • Rachel Carson's book warned of the dangers of chemical pesticides, especially DDT, triggering widespread public concern about improper pesticide use.
  • DDT characteristics:
    • Very persistent and concentrates (bioaccumulates) when moving through the food chain.
    • High levels in organisms high in the food chain (e.g., birds) caused eggshell thinning and reproductive failure.
  • Regulatory actions began in the late 1950s–1960s in the U.S.; by the 1980s, DDT was banned in most Western countries.

💥 Major disasters and policy response

🏭 Bhopal disaster (1984, India)

  • More than 40 tons of highly toxic methyl isocyanate (MIC) gas leaked from a pesticide plant.
  • Almost 4,000 people killed immediately; 500,000 exposed, causing many additional deaths from gas-related diseases.
  • The plant was producing MIC on a large scale (originally only allowed to import it), and safety procedures were far below international standards.
  • The disaster highlighted the urgent need for improved environmental safety standards to prevent large-scale industrial disasters.

🌊 Sandoz agrochemical spill (1986, Switzerland)

  • A fire in a storehouse near Basel caused a large pesticide emission into the Rhine river.
  • Severe ecological damage: massive mortality of benthic organisms and fish (especially eels and salmonids).
  • At the time, environmental standards for chemicals were largely lacking.

📊 Policy response

  • These incidents triggered more research on adverse environmental impacts of chemicals.
  • Public pressure increased, and policymakers introduced instruments to control pollution:
    • Environmental permitting
    • Discharge limits
    • Environmental quality standards

🌱 Sustainable development and endocrine disruption

🌍 Brundtland Report (1987)

Sustainable development: "a development that meets the needs of the present without compromising the ability of future generations to meet their own needs."

  • The World Commission on Environment and Development released Our Common Future (Brundtland Report) in 1987.
  • It placed environmental issues firmly on the political agenda.

🧬 Our Stolen Future (1996)

  • Written by Theo Colborn and colleagues.
  • Raised awareness of endocrine-disrupting effects of chemicals released into the environment.
  • Emphasized that these effects threaten reproduction not only in fish and wildlife but also in humans (feminization and reproductive impacts).

🤝 Professional societies and maturation

🌐 SETAC (Society of Environmental Toxicology and Chemistry)

  • Before the 1980s, no forum existed for interdisciplinary communication among environmental scientists (biologists, chemists, toxicologists, managers).
  • SETAC was founded in North America in 1979; European branch started in 1991; later branches in South America, Africa, and South-East Asia.
  • SETAC publishes two journals: Environmental Toxicology and Chemistry (ET&C) and Integrated Environmental Assessment and Management (IEAM).
  • SETAC provides training (e.g., online courses) and certification programs for toxicologists.

🧪 Other societies

  • EUROTOX (Europe) and Society of Toxicology (SOT) (North America) focus on toxicology broadly.
  • Many national toxicological and ecotoxicological societies became active since the 1970s.
  • Growth in membership, meeting attendance, and publications shows that environmental toxicology has become a mature field of science.

Pollutants with specific properties

2.2. Pollutants with specific properties

🧭 Overview

🧠 One-sentence thesis

Pollutants can be classified by specific environmental properties—such as persistence, mobility, bioaccumulation, and toxicity—that determine their behavior, regulatory treatment, and risk to ecosystems and human health.

📌 Key points (3–5)

  • Property-based classification matters: Grouping pollutants by persistence (P), bioaccumulation (B), toxicity (T), and mobility helps predict environmental fate and guide regulation.
  • Metals vs. metalloids: Metals are classified by binding affinity (Class A, B, or borderline), which determines their interaction with biological molecules and their toxicity mechanisms.
  • POPs are the "dirty dozen" expanded: Persistent Organic Pollutants combine persistence, bioaccumulation, toxicity, and long-range transport; the Stockholm Convention regulates them globally.
  • PMOCs threaten water supplies: Persistent Mobile Organic Chemicals can pass through natural and engineered barriers, reaching drinking water sources.
  • Common confusion: Not all persistent chemicals bioaccumulate—mobile polar chemicals may be persistent yet water-soluble, behaving very differently from lipophilic POPs.

🧪 Metals and metalloids

🔬 What defines metals and metalloids

Metals: Elements that are solid at room temperature (except mercury), good electrical and thermal conductors, with high luster and malleability.

Metalloids: Elements with both metallic and non-metallic properties, or nonmetallic elements that can combine with metals to form alloys.

  • The periodic table shows most elements are metals; a subset are "heavy metals" (density > 5 g/cm³).
  • Don't confuse: Heavy metal classification by density is not biologically meaningful—rare earth elements (REEs) have high density but different chemical behavior.
  • Essential elements include major nutrients (Ca, P, K, Mg, Na, Cl, S) and trace elements (Fe, Cu, Zn, Mn, Co, Mo, Se, Cr, Ni, V, Si, As, B).

⚛️ Nieboer-Richardson classification by binding affinity

The excerpt emphasizes a classification based on equilibrium constants for metal-complex formation:

| Class | Lewis acid type | Affinity | Examples | Biological relevance |
| --- | --- | --- | --- | --- |
| Class A | Hard | Oxygen-containing groups (carboxyl, alcohol) | Al, Ba, Be, Ca, K, Li, Mg, Na, Sr | Determines membrane transport, storage, protein binding |
| Class B | Soft | Nitrogen- and sulfur-containing groups (amino, sulfhydryl) | Ag, Au, Bi, Hg, Pd, Pt, Tl | High affinity for biological thiols; often more toxic |
| Borderline | Intermediate | Less pronounced A or B characteristics | As, Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, Sb, Sn, Ti, V, Zn | Variable behavior |

  • Why it matters: This classification predicts how metals cross cell membranes, bind to proteins, induce metal-binding proteins, and behave in the environment.
  • Example: Class B metals like mercury bind strongly to sulfhydryl groups in enzymes, disrupting function even at low concentrations.

🌍 Sources of metal pollution

Natural sources:

  • Weathering of rock formations and ores releases metals into biogeochemical cycles.
  • Volcanic emissions (largest natural source, but usually diluted).
  • Special case: Arsenic in groundwater (e.g., Bangladesh, West Bengal) mobilized by oxygen and organic carbon inflow during irrigation pumping.

Anthropogenic sources (often 1–3 orders of magnitude higher than natural fluxes):

  • Metal mining and smelting (physical ecosystem destruction + emissions).
  • Domestic and industrial products and waste discharge.
  • Metal-containing pesticides (e.g., copper sulfate "Bordeaux Mixture" fungicide, organo-tin compounds).
  • REEs in microelectronics.
  • Coal and oil combustion producing metal-containing fly ash.
  • Corrosion of electrical infrastructure.
  • Non-metal industries (leather uses chromium; cement produces thallium).
  • Historical use of tetraethyl lead (TEL) in gasoline; catalytic converters release platinum and palladium.

☢️ Radioactive compounds

🔄 Radioactive decay fundamentals

Radioactivity: The spontaneous disintegration of unstable atomic nuclei to form more stable ones, emitting particles and/or radiation.

  • Decay is stochastic (unpredictable for individual atoms) but follows exponential kinetics at the population level.
  • Decay constant λ [s⁻¹] describes the probability per unit time that a nucleus will decay.
  • Half-life (t₁/₂): Time required for half the nuclei to decay; ranges from fractions of seconds to billions of years.

Key decay types:

  • Alpha decay: Emission of α-particle (⁴He nucleus); atomic number drops by 2, mass number by 4.
  • Beta decay: Neutron → proton + electron (β⁻) + antineutrino, or proton → neutron + positron (β⁺) + neutrino.
  • Gamma decay: Emission of high-energy photons; nucleus remains the same but loses excess energy.
  • Fission: Heavy nucleus splits into two lighter nuclei plus neutrons.

📊 Activity and measurement

Activity (A): The rate of decay in a sample, measured in Becquerels [Bq] = 1 disintegration/second.

  • Activity = λ × N(t), where N(t) is the number of radioactive nuclei at time t.
  • Activity decreases exponentially following the same curve as the number of nuclei.
  • Older unit: Curie (Ci) = 3.7 × 10¹⁰ Bq.
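A short worked example ties these relations together; the ¹³⁷Cs half-life of 30.17 years appears later in the excerpt, while the sample size is hypothetical.

```python
# Worked example of the decay relations above, using the 137Cs half-life (30.17 y)
# quoted later in the excerpt; the number of nuclei in the sample is hypothetical.
import math

half_life_s = 30.17 * 365.25 * 24 * 3600   # half-life of 137Cs in seconds
lam = math.log(2) / half_life_s            # decay constant λ [1/s]
N0 = 1.0e18                                # initial number of nuclei (hypothetical sample)

def nuclei(t_s: float) -> float:
    """N(t) = N0 * exp(-λt)."""
    return N0 * math.exp(-lam * t_s)

def activity_Bq(t_s: float) -> float:
    """A(t) = λ * N(t), in becquerels."""
    return lam * nuclei(t_s)

one_year = 365.25 * 24 * 3600
print(f"λ ≈ {lam:.3e} 1/s, initial activity ≈ {activity_Bq(0):.3e} Bq "
      f"({activity_Bq(0)/3.7e10:.2f} Ci)")
print(f"after 30.17 years: {nuclei(30.17*one_year)/N0:.2f} of the nuclei remain")
```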

🌐 Natural vs. artificial radionuclides

Naturally occurring:

  • Primordial radionuclides: Created before Earth's formation with billion-year half-lives (e.g., ²³⁸U, ²³²Th, ⁴⁰K).
  • Decay chains: ²³⁸U, ²³²Th, and ²³⁵U decay through multiple steps until reaching stable Pb isotopes; intermediate products include ²²⁶Ra, ²¹⁰Pb, ²¹⁰Po.
  • Radon risk: ²²²Rn and ²²⁰Rn (noble gases in decay chains) migrate through soil pores into the atmosphere; when inhaled, decay products cause internal lung irradiation.
  • Cosmogenic: Continuously formed in the atmosphere (e.g., ¹⁴C from ¹⁴N + thermal neutrons).

Artificial radionuclides:

  • Generated in nuclear reactors, particle accelerators, and radionuclide generators.
  • Released through nuclear weapon testing (largest contributor), accidents (Chernobyl, Fukushima), and improper waste management.
  • Examples: ³H, ¹⁴C, ⁹⁰Sr, ⁹⁹Tc, ¹²⁹I, ¹³⁷Cs, ²³⁷Np, ²⁴¹Am, U and Pu isotopes.
  • ¹³⁷Cs (half-life 30.17 y) remains the most important long-term contaminant after Chernobyl and Fukushima because shorter-lived isotopes have decayed.

⚡ Radiation interaction with matter

Directly ionizing (charged particles):

  • Alpha particles: High mass, high energy (~5 MeV), high ionizing potential; stopped by paper or a few cm of air; straight, short path with dense ionization.
    • Low external hazard (cannot penetrate skin) but high internal hazard when ingested/inhaled.
    • Used in targeted cancer therapy due to localized energy deposition.
  • Beta particles: Low mass, high speed electrons/positrons; lower ionizing potential than alpha but higher penetration (decimeters in air, millimeters in tissue); irregular path due to deflection.
    • External and internal hazard.
    • Can produce Bremsstrahlung (electromagnetic radiation) when deflected; use low-atomic-number shielding (Plexiglas, aluminum) to minimize this.

Indirectly ionizing (uncharged):

  • Gamma radiation: High-energy photons; minimal interaction along path but high penetration potential.
    • When interaction occurs, energy transfers to charged particles (e.g., electrons) that then cause ionizations.
    • External and internal hazard; requires lead shielding.
    • Basis for medical/industrial X-ray imaging due to differential tissue density interaction.

🏭 Industrial chemicals and REACH

📜 REACH regulation framework

REACH: Registration, Evaluation, Authorisation and Restriction of Chemicals—EU Regulation (EC) No 1907/2006, which entered into force on June 1, 2007.

Key principle: Reverses burden of proof—companies that manufacture, import, or use chemicals must identify and manage risks, not governments.

Registration timeline (for existing chemicals):

  • 2010: >1000 tonnes/year + most hazardous (CMRs >1 tonne/year; very toxic aquatics >100 tonnes/year).
  • 2013: 100–1000 tonnes/year.
  • 2018: 1–100 tonnes/year.
  • New chemicals: must register before manufacture/import.

By 2018: 21,787 substances registered by 14,262 companies; 48% of registrations in Germany; 70% were "old chemicals" without prior registration.

🔍 What REACH covers and exempts

Covered: All chemical substances including metals, organic chemicals, nanoparticles, polymers.

Partly exempt (covered by other EU legislation):

  • Pesticides (plant protection products): Separate assessment by institutes other than ECHA.
  • Biocides: Strict laws balancing hazard with benefits (disinfectants, pest control).
  • Food/feed additives: Regulation (EC) No 1331/2008 ensures no harmful effects.
  • Medicinal products: Directive 2001/83/EC guarantees quality and safety.
  • Waste: Not covered, but products recovered from waste are.

💡 REACH benefits and challenges

Benefits:

  • Public transparency: Most registration data is publicly available (searchable at ECHA website).
  • Substance of Very High Concern (SVHC) classification enables substitution with safer alternatives.
  • Emphasis on animal-friendly testing: read-across, QSAR, weight-of-evidence, in vitro methods reduce animal use (though 9,287 in vivo studies still performed vs. 5,795 in vitro by 2017).

Challenges:

  • Many new animal tests still required (prenatal, repeated-dose, reproductive toxicity).
  • Complex structures (e.g., antibiotic erythromycin C₃₇H₆₇NO₁₃) make property estimation uncertain.
  • Example from excerpt: Loratadine (antihistamine) registered as industrial intermediate with incomplete dossier; regulated as medicinal product elsewhere.

🌍 Persistent Organic Pollutants (POPs)

📋 Stockholm Convention criteria

POPs: Xenobiotic chemicals that are persistent, bioaccumulative, toxic (PBT), and transported over long distances.

The UN Stockholm Convention (adopted 2001, in force 2004) defines POPs by four criteria:

| Property | Criterion (simplified from Table 1) |
| --- | --- |
| Persistence | Half-life >2 months (water), >6 months (soil/sediment), or otherwise sufficiently persistent |
| Bioaccumulation | BCF or BAF >5000 in aquatic species, or log K_OW >5, or monitoring data showing high bioaccumulation |
| Long-range transport | Measured in locations distant from sources; monitoring/modeling shows air/water/migratory-species transport; air half-life >2 days |
| Toxicity | Evidence of adverse effects to health/environment, or toxicity/ecotoxicity data indicating damage potential |

Objective (Article 1): "To protect human health and the environment from the harmful impacts of persistent organic pollutants" using the precautionary approach.
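As a rough illustration of how the numerical criteria in the table above could be applied, the sketch below screens a hypothetical substance; a real nomination also weighs monitoring data, toxicity evidence, and expert judgement.

```python
# Screening sketch against the numerical Stockholm Convention criteria tabulated
# above (persistence, bioaccumulation, air half-life). Property values are invented.

candidate = {
    "half_life_water_months": 4.0,
    "half_life_soil_months": 8.0,
    "log_Kow": 6.2,
    "bcf": 12000,
    "half_life_air_days": 5.0,
}

persistent = (candidate["half_life_water_months"] > 2
              or candidate["half_life_soil_months"] > 6)
bioaccumulative = candidate["bcf"] > 5000 or candidate["log_Kow"] > 5
long_range = candidate["half_life_air_days"] > 2

print(f"persistent: {persistent}, bioaccumulative: {bioaccumulative}, "
      f"long-range transport potential: {long_range}")
# Toxicity evidence and field monitoring data would still be needed for a nomination.
```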

🧪 The "Dirty Dozen" and beyond

Initial 12 POPs (2001): Aldrin, chlordane, dieldrin, DDT, endrin, heptachlor, hexachlorobenzene (HCB), mirex, PCBs, PCDDs, PCDFs, toxaphene.

Later additions (total 29 by excerpt date): Chlordecone, lindane, pentachlorobenzene, endosulfan, chlorinated naphthalenes, hexachlorobutadiene, PBDEs (tetra-, penta-, deca-), and others.

  • All POPs contain carbon and halogen atoms (chlorine, bromine, or fluorine).
  • Unintentional POPs: PCDDs and PCDFs formed during thermal processes (e.g., waste incineration) when chlorine (from PVC) combines with organic matter at high temperatures.
    • Discovered after the Seveso disaster (1976), when an explosion at a trichlorophenol factory released dioxins.

🔗 Why POPs persist and bioaccumulate

Strong carbon-halogen bond: Decreases in strength from C–F > C–Cl > C–Br > C–I, but all resist degradation.

Lipophilicity: POPs partition into lipids (fats) in organisms.

Biomagnification: POPs enter food chains (often via fish) and concentrate at higher trophic levels; highest levels found in marine mammals (seals, whales, polar bears) and humans.

  • Women transfer POPs to children, especially firstborns, through pregnancy and breastfeeding.
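A minimal sketch of biomagnification, assuming a constant biomagnification factor per trophic step; the starting concentration, the factor of 5, and the food chain are hypothetical.

```python
# Sketch of biomagnification up a short food chain, assuming a constant
# biomagnification factor (BMF) per trophic step. All numbers are hypothetical.
base_conc_ng_per_g = 0.01       # hypothetical level at the bottom of the chain
bmf_per_step = 5.0              # hypothetical concentration increase per trophic step
food_chain = ["plankton", "small fish", "large fish", "seal"]

conc = base_conc_ng_per_g
for organism in food_chain:
    print(f"{organism:11s}: ≈ {conc:8.2f} ng/g lipid")
    conc *= bmf_per_step
```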

🌬️ Long-range transport: the grasshopper effect

  • POPs evaporate in warm areas, travel in the atmosphere, condense in cooler areas, re-evaporate, and repeat in "hops."
  • This carries them thousands of kilometers within days.
  • Result: Arctic and other cold regions receive disproportionately high POP loads despite no local sources.

🔄 Alternatives and ongoing challenges

Second-generation pesticides (replacing organochlorines):

  • Organophosphates (e.g., parathion, diazinon): Less persistent but highly neurotoxic; parathion banned/restricted in 23 countries.
  • Carbamates (e.g., carbaryl): Toxic to insects and bees; rapidly detoxified in vertebrates; third-most-used insecticide in U.S.
  • Neonicotinoids (e.g., imidacloprid): Most widely used insecticide worldwide; banned in EU (2018) along with clothianidin and thiamethoxam.

Brominated flame retardants (PBDEs, HBCD): Added to electronics, furniture, building materials; now banned but other brominated alternatives still produced in growing volumes.

Perfluorinated alkyl substances (PFASs): Used in Teflon, fire-fighting foams, water/dirt repellents; both lipophilic and hydrophilic due to polar groups; behave differently from classic POPs.

DDT exception: Still allowed for limited indoor spraying against malaria in Africa where no suitable alternatives exist.

💧 Persistent Mobile Organic Chemicals (PMOCs)

🌊 Why mobility matters for water safety

  • Ecosystems and humans are protected by wastewater treatment and drinking-water purification.
  • Conventional technologies remove substances by degradation or sorption (binding to particles).
  • Problem: Chemicals that are persistent AND mobile can pass through soil layers, water catchments, riverbanks, and treatment barriers, eventually reaching tap water.

⚗️ Polarity determines mobility

Polarity: Uneven distribution of electrons creates positive and negative regions (electric dipoles) in a molecule.

  • Polar molecules: Asymmetric charge distribution; interact with water.
  • Apolar molecules: Even charge distribution; neutral; prefer lipid environments.
  • Ionogenic molecules: Permanent charge (cations = positive; anions = negative); highly water-soluble.

Key relationship:

  • Lipophilic (high K_OW) POPs: Persistent + bioaccumulative but LOW mobility (sorb to soil/sediment).
  • Polar/ionogenic PMOCs: Persistent + HIGH mobility (remain in water phase) but often LOW bioaccumulation.

🔀 Partition coefficients and mobility

The excerpt introduces (but does not fully explain) key coefficients:

  • K_OW (octanol-water partition coefficient): Measures lipophilicity; high K_OW = prefers organic phase over water.
  • D_OW: Distribution coefficient accounting for ionization at a given pH.
  • K_D: Soil-water distribution coefficient; describes sorption to soil particles.

Inference from excerpt:

  • Low K_OW or low K_D → chemical stays in water → high mobility.
  • High persistence + high mobility → reaches drinking water sources.
  • Don't confuse: POPs (persistent + bioaccumulative + low mobility in water) vs. PMOCs (persistent + mobile + often low bioaccumulation).
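The link between ionization and mobility can be sketched with the standard approximation for a monoprotic acid, D_OW = K_OW / (1 + 10^(pH − pKa)); the log K_OW and pKa values below are hypothetical.

```python
# How ionization lowers the effective lipophilicity (and raises mobility) of a
# monoprotic acid, using D_OW = K_OW / (1 + 10^(pH - pKa)). Values are hypothetical.
import math

log_Kow = 3.0      # lipophilicity of the neutral form (hypothetical)
pKa = 4.5          # acid dissociation constant (hypothetical)

def log_Dow(pH: float) -> float:
    """Effective octanol-water distribution of the acid at a given pH."""
    return log_Kow - math.log10(1 + 10 ** (pH - pKa))

for pH in (3.0, 7.0, 9.0):
    print(f"pH {pH}: log D_OW ≈ {log_Dow(pH):.2f}")
# At environmental pH (~7) the acid is mostly ionized: low log D_OW, so it stays
# in the water phase (high mobility) instead of sorbing to soil or lipids.
```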

⚠️ Human exposure risk

  • PMOCs' combination of persistence and mobility means they can travel long distances through aquatic systems.
  • Conventional drinking-water treatment may not remove them.
  • Example implication: A persistent, polar chemical released upstream can reach drinking-water intakes downstream largely intact.

Note: The excerpt ends mid-sentence in the PMOC section and does not provide complete information on partition coefficients, specific PMOC examples, or detailed regulatory frameworks for PMOCs. The notes reflect only what is explicitly stated or clearly inferable from the provided text.


Pollutants with specific use

2.3. Pollutants with specific use

🧭 Overview

🧠 One-sentence thesis

Organic pollutants can be classified by their specific applications—such as pesticides, pharmaceuticals, industrial chemicals, and cosmetics—and these usage-based groupings often correspond to distinct regulatory frameworks.

📌 Key points (3–5)

  • Usage-based classification: grouping pollutants by application (pesticides, pharmaceuticals, industrial chemicals, etc.) helps organize the enormous variety of organic contaminants.
  • Pesticides subdivide by target: herbicides kill plants, insecticides kill insects, fungicides kill fungi, rodenticides kill rodents, and biocides are broadly toxic.
  • Pharmaceuticals and drugs: many are ionogenic bases with specific bioactivity and often unknown side effects; drugs of abuse include opioids and synthetic designer drugs.
  • Industrial and consumer products: include detergents, cosmetics, food additives, fuel products, and refrigerants, each with distinct chemical properties tailored to their function.
  • Common confusion: the same chemical may appear in multiple classification schemes (structure, property, or use); usage-based grouping (Table 1C) is particularly relevant for regulation.

🧪 Pesticides and biocides

🌿 What pesticides are

Pesticides: toxic to pests; subdivided by target organism.

  • The excerpt lists five main subcategories:
    • Herbicides: toxic to plants (examples: atrazine, glyphosate).
    • Insecticides: toxic to insects (examples: chlorpyrifos, parathion).
    • Fungicides: toxic to fungi (example: phenyl mercury acetate).
    • Rodenticides: toxic to rodents (example: hydrogen cyanide).
    • Biocides: toxic to many species (example: benzalkonium).
  • Each subgroup targets a specific type of organism, but all share the common feature of being intentionally toxic.

⚠️ Don't confuse pesticides with other usage groups

  • Pesticides are defined by their intended toxic effect on pests, not by their chemical structure or environmental persistence.
  • Some pesticides (e.g., DDT) also appear in property-based classifications like POPs (persistent organic pollutants), but the usage-based grouping emphasizes regulatory and application context.

💊 Pharmaceuticals and drugs

💊 Pharmaceuticals

Pharmaceuticals: specifically bioactive chemicals with often (un)known side effects; many are bases.

  • Designed to have targeted biological activity, but side effects are common and sometimes unknown.
  • Examples include:
    • Diclofenac (pain killer)
    • Iodixanol (radio-contrast agent)
    • Carbamazepine, Prozac (psychiatric drugs)
  • Many pharmaceuticals are ionogenic bases, meaning they can gain or lose protons depending on pH, affecting their environmental behavior.

🧪 Drugs of abuse

Drugs of abuse: often opioid-based or synthetic designer drugs with similar activity; many are ionogenic bases.

  • Include cannabinoids, opioids, amphetamine, and LSD.
  • Like pharmaceuticals, many are ionogenic bases, which influences their solubility and mobility in the environment.

🐄 Veterinary pharmaceuticals

Veterinary pharmaceuticals: can include relatively complex (ionogenic) structures.

  • Used in animal medicine; examples include antibiotics, antifungals, steroids, and non-steroidal anti-inflammatories.
  • These can enter the environment through animal waste and runoff.

🏭 Industrial and consumer chemicals

🏭 Industrial chemicals

Industrial chemicals: produced in large volumes by chemical industry for a wide array of products and processes.

  • Example: phenol.
  • Characterized by high production volumes and diverse applications, leading to widespread environmental presence.

⛽ Fuel products

Fuel products: flammable chemicals.

  • Example: kerosene.
  • Derived from fossil fuels; their combustion and spillage contribute to environmental contamination.

❄️ Refrigerants and propellants

Refrigerants and propellants: small chemicals with specific boiling points.

  • Example: freon-22.
  • Designed for specific physical properties (e.g., volatility) that enable their use in cooling and aerosol applications.

🧴 Cosmetics and personal care products

Cosmetics/personal care products: wide varieties of specific ingredients that render specific properties of a product.

  • Examples: sunscreen, parabens.
  • Formulated to achieve desired effects (UV protection, preservation), but ingredients can enter wastewater and the environment.

🧼 Detergents and surfactants

Detergents and surfactants: long hydrophobic hydrocarbon tails and polar/ionic headgroups.

  • Examples: sodium lauryl sulfate (SLS), benzalkonium.
  • Their amphiphilic structure (both water-loving and water-repelling parts) allows them to reduce surface tension and solubilize oils, but also to accumulate at environmental interfaces.

🍞 Food and feed additives

Food and feed additives: to preserve flavor or enhance taste, appearance, or other qualities.

  • Examples: "E-numbers," acetic acid (E260 in EU, additive 260 elsewhere).
  • Regulated for human and animal consumption, but can enter the environment through waste streams.

📋 How usage-based classification fits into the bigger picture

📋 Three complementary classification schemes

The excerpt presents three tables for grouping organic contaminants:

| Table | Basis | Examples of groups |
| --- | --- | --- |
| 1A | Specific chemical structures | Hydrocarbons, PAHs, halogenated hydrocarbons, dioxins, organometallics |
| 1B | Specific properties | POPs, PMOCs, ionogenic chemicals, plastics, nanoparticles |
| 1C | Specific usage | Pesticides, pharmaceuticals, industrial chemicals, cosmetics, detergents, food additives |

  • All three schemes are interrelated: chemical structure determines properties, and properties influence both environmental behavior and suitability for specific uses.
  • Usage-based classification (Table 1C) is particularly important for regulation, as laws often target chemicals by application (e.g., pesticide regulations, pharmaceutical approval).

🔗 Why usage matters for environmental toxicology

  • Chemicals with the same use often share regulatory frameworks and exposure pathways.
  • Example: pharmaceuticals enter the environment primarily through wastewater, while pesticides are applied directly to fields.
  • Understanding the intended use helps predict where and how a chemical will appear in the environment, and which organisms are most likely to be exposed.

Metals and Metalloids in Environmental Toxicology

3.1. Environmental compartments

🧭 Overview

🧠 One-sentence thesis

Metals and metalloids are a large, heterogeneous group of elements whose toxicity and environmental behavior depend on their chemical bonding properties (especially their affinity for different biological molecules), not simply on their density.

📌 Key points (3–5)

  • Why "heavy metal" is misleading: density (< or > 5 g/cm³) does not reflect the diverse chemical and biological properties of metals; rare earth elements have high density but behave very differently.
  • Speciation determines behavior: the chemical form of a metal (oxidized, free ion, or complexed) controls its transport and interaction in the environment.
  • Classification by binding affinity: the Nieboer–Richardson system groups metals by their affinity for oxygen, nitrogen, or sulfur groups in macromolecules—this affects membrane transport, storage, and toxicity.
  • Common confusion: not all dense elements are "heavy metals" (e.g., rare earth elements); not all metals are toxic—some are essential for life.
  • Anthropogenic emissions dominate: human activities release metals at rates one to three orders of magnitude higher than natural sources.

🧪 What metals and metalloids are

🧪 Metals vs metalloids vs rare earth elements

  • Metals: the majority of the periodic table; most are solid at room temperature (except mercury), good conductors, and readily lose electrons.
  • Metalloids: elements with both metallic and non-metallic properties, or nonmetals that can form alloys with metals.
  • Rare earth elements (REEs): lanthanides and actinides; high density but chemically distinct from typical "heavy metals."

Heavy metals: traditionally defined as metals with a density greater than 5 g/cm³ (i.e., more than five times the density of water).

  • The excerpt emphasizes that this density-based distinction is not meaningful for such a heterogeneous group with different biological and chemical properties.
  • Example: REEs have high density but are usually not considered heavy metals because of their different chemical behavior.

🌱 Essential vs non-essential elements

  • Some metals are essential to life: Ca, P, K, Mg, Na, Cl, S (major); Fe, I, Cu, Mn, Zn, Co, Mo, Se, Cr, Ni, V, Si, As, B (trace).
  • Others may support physiological functions at ultra-trace levels: Li, Al, F, Sn.
  • Don't confuse: there is no relation between metal concentrations in the Earth's crust and the elemental requirements of organisms.

🔗 Chemical properties that matter

🔗 Speciation and chemical form

Speciation: the chemical form in which an element occurs (e.g., oxidized, free ion, or complexed to inorganic or organic molecules).

  • Speciation determines transport and interaction in the environment.
  • Pure elemental metals are rare in the environment; metals exist as compounds, complexes, and ions at low concentrations.
  • Chemical bonding is determined by outer orbital electron behavior; metals tend to lose electrons when reacting with nonmetals.
  • In biological reactions, metals act as cofactors in coenzymes (e.g., in vitamins) and as electron acceptors/donors in oxidation–reduction reactions.

⚛️ Nieboer–Richardson classification

The excerpt describes a classification based on the equilibrium constant for metal complex formation:

| Class | Lewis acid type | Affinity | Examples |
| --- | --- | --- | --- |
| Class A | Hard Lewis acids | High affinity for oxygen-containing groups (carboxyl, alcohol) | Al, Ba, Be, Ca, K, Li, Mg, Na, Sr |
| Class B | Soft Lewis acids | High affinity for nitrogen and sulfur-containing groups (amino, sulphydryl) | Ag, Au, Bi, Hg, Pd, Pt, Tl |
| Borderline | Intermediate | Type A or B characteristics less pronounced | As, Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, Sb, Sn, Ti, V, Zn |

  • Why it matters: this classification is highly relevant for:
    • Transport across cell membranes
    • Intracellular storage in granules
    • Induction of metal-binding proteins
    • General environmental behavior
  • Example: a Class B metal with high affinity for sulfur groups will bind strongly to sulphydryl groups in proteins, affecting its toxicity and bioaccumulation.

🌍 Occurrence and distribution

🌍 Natural background concentrations

  • Metals and rare earth elements are diffusely distributed over the Earth but highly concentrated in metal ores.
  • Background concentrations in soils vary widely depending on rock or sediment type and origin.
  • Volcanic rock (e.g., basalt) contains high metal levels; sedimentary rock (e.g., limestone) contains low levels.
  • Don't confuse: natural abundance does not correlate with biological essentiality.

🌋 Natural emissions

  • Weathering: upon weathering of stone formations and ores, elements are released and enter biogeochemical cycles.
  • Depending on water solubility, soil properties, and vegetation, metals may be transported and deposited near or far from their source.
  • Volcanoes: the largest natural input of metals to the environment; concentrations rarely reach toxic levels due to massive atmospheric dilution.
  • Permanently active volcanoes may be an important local pollution source.

⚠️ Special case: arsenic

  • Arsenic may occur naturally in soils at fairly high levels, particularly in groundwater.
  • High-arsenic groundwater areas: Argentina, Chile, Mexico, China, Hungary, Bangladesh, India (West Bengal), Cambodia, Laos, Vietnam.
  • In the Bengal Basin, millions of wells were dug for safe drinking water; irrigation pumping introduces oxygen and organic carbon, mobilizing arsenic normally bound to ferric oxyhydroxides.
  • Result: many wells exceed the WHO guideline value of 10 μg/L for drinking water.

🏭 Anthropogenic sources

🏭 Major emission sources

The excerpt lists important anthropogenic sources:

  • Metal mining: also causes enormous physical disturbance and ecosystem destruction.
  • Metal smelting.
  • Domestic and industrial products: discharge of domestic waste and sewage.
  • Metal-containing pesticides: e.g., Bordeaux Mixture (copper sulfate with lime) used as a fungicide in viniculture, hop-growing, and fruit culture; organo-tin fungicides.
  • Microelectronics: use of metals and especially rare earth elements.
  • Energy-producing industries: burning coal and oil, producing metal-containing fly ash.
  • Energy transport and traffic: corrosion of electric wires and pylons.
  • Non-metal industries: e.g., leather (chromium), cement production (thallium).
  • Automotive: tetraethyl lead (TEL) as an anti-knock agent in petrol (now banned in most countries); catalytic converters in cars (platinum, palladium).

📊 Scale of anthropogenic emissions

  • Anthropogenic releases of many metals (Pb, Zn, Cd, Cu) are estimated to be one to three orders of magnitude higher than natural fluxes.
  • Example: up to 50,000 tonnes of mercury are released naturally per year from degassing of the Earth's crust, but human activities account for even larger emissions.

Sources of chemicals

3.2. Sources of chemicals

🧭 Overview

🧠 One-sentence thesis

Emission assessment—characterizing how chemicals are released into the environment from different sources throughout their life cycle—is essential for evaluating environmental exposure and risk at both local and large scales.

📌 Key points (3–5)

  • Three origins of pollutants: natural chemicals (released naturally or by humans), synthetic chemicals (intentionally produced), and unintentional byproducts of human activities.
  • Life cycle phases: chemicals can be emitted during production, use, and waste phases, each offering different pathways into the environment.
  • Point vs diffuse sources: point sources are few in number but emit large quantities (e.g. smoke stacks), while diffuse sources are many but emit small amounts each (e.g. car exhaust).
  • Common confusion: the same chemical (e.g. PAHs) can be both natural and a pollutant—the key is whether human activities are involved in its production or release.
  • Quantification approaches: emissions can be measured directly (monitoring) or estimated using emission factors, proxies, and extrapolation from sample data.

🌍 Origin and types of pollutants

🌍 Three categories of chemicals in the environment

The excerpt distinguishes three types based on origin:

| Type | Definition | Examples |
| --- | --- | --- |
| Natural chemicals | Naturally present; released by natural or human-induced processes | Metals, natural toxins, volcanic emissions, resource extraction |
| Synthetic chemicals | Intentionally produced and used by society | Pharmaceuticals, pesticides, plastics |
| Unintentional byproducts | Formed as side products of human activities | Dioxins, disinfection byproducts |
  • The first and third categories can overlap: reaction products of natural processes (e.g. combustion) may also be considered natural chemicals.
  • Example: Polycyclic aromatic hydrocarbons (PAHs) can come from a forest fire (natural) or a power plant (human-induced).

🔑 What defines an environmental pollutant

Environmental pollutant: a chemical is considered an environmental pollutant when human activities are involved in either its production or its release into the environment.

  • It is not the chemical itself but the emission process that determines pollutant status.
  • Don't confuse: a naturally occurring chemical becomes a pollutant when humans cause its release (e.g. metals from mining).
  • Some synthetic chemicals are also naturally present, as organic chemistry includes synthesis of natural products.

🔄 Life cycle and emission phases

🔄 The three main life cycle phases

The excerpt divides the life cycle of synthetic chemicals into:

  1. Production: manufacturing the chemical; emissions can occur from production plant effluent.
  2. Use: application of the chemical or product; emissions occur during consumption or application.
  3. Waste: disposal after use; emissions from landfills, flushing, or dumping.
  • Between production and use, there may be intermediate steps (e.g. formulation, incorporation into products).
  • After use, chemicals or products may be recycled back into production, formulation, or use phases.

💊 Pharmaceutical example

The excerpt illustrates the life cycle with pharmaceuticals:

  • Production: effluent from a production plant discharged into a nearby river.
  • Use: excretion of the parent compound via urine and feces into the sewer system and then the environment.
  • Waste: unused pharmaceuticals flushed through the toilet or dumped in a dustbin, ending up in landfills.

🛠️ Product and service life cycles

  • Environmental assessments often focus on the life cycle of products and services rather than individual chemicals.
  • This approach includes an extra phase: resource extraction.
  • It is useful for comparing alternatives (e.g. glass vs carton milk containers) and includes broader impacts: non-renewable resource use, land use, greenhouse gases, noise, odor.
  • Quantifying chemical emissions is an important step in life cycle assessment, material flow analysis, input/output analysis, and environmental impact assessment.

🎯 Why emission assessment matters

🎯 Purposes of emission assessment

Emission assessment: the process of characterizing the emission of a chemical into the environment.

The excerpt identifies two main purposes:

  1. Local scale: assess exposure and risks near an emission source, typically for environmental permits (e.g. discharge permits for surface water, smoke stack emissions).
  2. Higher scale (national/global): map all sources of a compound throughout its life cycle to understand total environmental release.
  • For synthetic chemicals, higher-scale assessment requires mapping emissions across all three life cycle phases.

🏭 Characteristics of emission sources

🏭 Point sources vs diffuse sources

| Source type | Number | Quantity per source | Examples |
| --- | --- | --- | --- |
| Point sources | Few | Large quantities | Smoke stacks of power plants, discharge pipes of WWTPs |
| Diffuse sources | Many | Small quantities each | Car exhaust, volatilization from paints |
  • The distinction can be arbitrary and is mainly relevant in regulatory contexts.
  • Point sources are generally easier to control than diffuse sources.

🌊 Compartment and matrix

  • Compartment: the environmental medium receiving the emission (air, water, soil).
  • Matrix: the carrier in which chemicals are contained (wastewater, hot air, suspended matter, soil particles).

Common entry pathways:

  • Chemicals in wastewater discharged into surface waters.
  • Chemicals in hot air released through smoke stacks.
  • Spraying of pesticides (emission into air, soil, and water).
  • Application of manure containing veterinary medicines.
  • Dumping of polluted soils.
  • Dispersal of polluted sediments.
  • Leaching of chemicals from products.

🌬️ Dispersal speed

  • Chemicals emitted to air (and to a lesser extent water) disperse faster than those emitted to soils.
  • Whether the chemical is dissolved or bound to a phase (organic matter, suspended matter, soil particles) influences dispersal.

⏰ Temporal patterns

The excerpt distinguishes emission sources by time:

| Pattern | Description | Examples |
| --- | --- | --- |
| Continuous | Ongoing release | WWTPs, power plants |
| Intermittent | Periodic release | Pesticide application |

Strength variation over time:

  1. Constant emission: steady release rate.
  2. Regularly fluctuating: predictable pattern (e.g. WWTP effluent shows diurnal/nocturnal patterns reflecting human activity).
  3. Irregularly fluctuating: unpredictable pattern (e.g. pesticide emissions vary with season and pest emergence).

⚡ Peak emissions

Peak emissions: the release of relatively large amounts within a relatively short time frame.

  • Typical examples: industrial accidents, intense rain events, pesticide runoff after drought, combined sewer overflows (CSOs).
  • Production plants operating only during the day show a block pattern.

📊 Quantifying emissions

📊 Measurement vs estimation

The excerpt distinguishes two main approaches:

Measurement:

  • Often involves determining two dimensions separately:
    1. Concentration of the chemical in the matrix being emitted.
    2. Flow of the matrix into the environment (e.g. volume of wastewater or air per unit of time).
  • Emission load (mass of chemical released per unit of time) = concentration × flow.
  • Monitoring: continuous measurement of an emission source.
  • Drawback: costly and time-consuming.
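
A minimal sketch of the load calculation in Python (the concentration and flow values are illustrative, not taken from the excerpt):

```python
# Emission load = concentration in the matrix x flow of the matrix.
# Illustrative numbers: a WWTP effluent with 2.5 ug/L of a chemical
# discharging 20,000 m3 of water per day.

concentration_ug_per_L = 2.5        # chemical concentration in the effluent
flow_m3_per_day = 20_000            # effluent flow into the river

# 1 m3 = 1000 L, and 1e9 ug = 1 kg
load_ug_per_day = concentration_ug_per_L * flow_m3_per_day * 1000
load_kg_per_day = load_ug_per_day / 1e9

print(f"Emission load: {load_kg_per_day:.3f} kg/day")   # 0.050 kg/day
```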

Estimation:

  • Often based on measurement data that are generalized or extrapolated.
  • Example: exhaust emissions measured from a few cars can be extrapolated to an entire country if you know the total number of cars.

🔢 Emission factors

Emission factor: quantifies the fraction of a chemical being used that ultimately reaches the environment.

  • Often a conservative value based on worst-case interpretation of available measurement data or process data.
  • Widely used for coarse emission estimation.

🧮 Proxy-based estimation

  • A more detailed approach than emission factors.
  • Estimate emissions using proxies: amount produced, sold, or used, combined with specific data on the release process.
  • Example for pharmaceuticals:
    • Know the amount sold in a country → calculate average per capita use.
    • Estimate the amount discharged by a particular WWTP if you know: (1) number of people connected to the WWTP; (2) the fraction [excerpt ends here].
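
Although the excerpt breaks off here, a hedged sketch of the proxy calculation could look like this; all numbers, and the "fraction reaching the sewer" factor in particular, are hypothetical placeholders rather than values from the excerpt:

```python
# Proxy-based estimate of a pharmaceutical load entering one WWTP.
# All numbers are hypothetical placeholders; the final factor stands in
# for whatever fraction the excerpt cuts off at.

amount_sold_kg_per_year = 12_000     # national sales (proxy for use)
national_population = 17_000_000
people_connected_to_wwtp = 150_000   # inhabitants served by this WWTP
fraction_reaching_sewer = 0.6        # hypothetical placeholder fraction

per_capita_use = amount_sold_kg_per_year / national_population   # kg/person/year
wwtp_load = per_capita_use * people_connected_to_wwtp * fraction_reaching_sewer

print(f"Per capita use: {per_capita_use * 1e6:.1f} mg/person/year")
print(f"Estimated load at this WWTP: {wwtp_load:.1f} kg/year")
```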

Pathways and processes determining chemical fate

3.3. Pathways and processes determining chemical fate

🧭 Overview

🧠 One-sentence thesis

Environmental fate modeling captures how chemicals move, transform, and degrade after emission by applying first-order kinetics and mass balance equations to transport, transfer, and degradation processes.

📌 Key points (3–5)

  • Core concept: "Fate" is the net result of transport, transfer, and degradation processes acting on a chemical after emission, linking releases to exposure.
  • First-order kinetics dominance: Most environmental fate processes obey first-order kinetics (loss rate proportional to mass present), which means constant half-lives independent of concentration.
  • Common confusion—reaction order: True second-order reactions (rate depends on two reactants) often become pseudo first-order in the environment when one reactant (e.g., oxygen, water) is in excess; zero-order kinetics (constant rate) occurs when enzymes saturate or catalysts limit the reaction.
  • Three process categories: Degradation (abiotic/biotic chemical reactions), transport (advection and dispersion by wind/water), and transfer (partitioning across interfaces like water-air or water-sediment).
  • Why it matters: Quantitative knowledge of process kinetics enables mathematical modeling to predict exposure of ecosystems and organisms, including humans.

🔬 Understanding chemical fate

🔬 What "fate" means

The 'fate' of a chemical in the environment: the net result of a suite of transport, transfer, and degradation processes that start to act on the chemical directly after its emission and during subsequent environmental distribution.

  • Fate is not just "where the chemical goes," but the full sequence of processes from release to final exposure.
  • Exposure assessment science seeks to analyze and quantitatively describe the pathways linking releases to exposure.
  • Example: A pharmaceutical released into wastewater undergoes degradation in the treatment plant, advective transport in the river, volatilization to air, and partitioning into sediment—all these together determine its fate.

📐 Why first-order kinetics is central

  • Environmental fate processes generally obey first-order kinetics: the loss rate is proportional to the first power of the mass present.
  • This allows characterization by a single parameter: the first-order rate constant k.
  • Mass balance equation (eq. 1): rate of change of mass = input rate − k × mass.
  • Don't confuse: first-order means the rate depends on the chemical's own concentration, not on other reactants (that would be second-order).

⚗️ Degradation processes

⚗️ Abiotic reactions (chemical transformations)

  • True first-order: Rare; occurs only when substances degrade spontaneously without interacting with other chemicals (e.g., radioactive decay).
  • Second-order: Most chemical reactions between two substances follow second-order kinetics (eq. 5 and 6): rate proportional to concentrations of both reactants.
    • As both reactants decrease, the reaction rate decreases more rapidly at high initial concentrations.
    • Half-life is not constant; it increases as the reaction proceeds and concentrations drop.

🧪 Pseudo first-order kinetics

  • When the second reactant (transforming agent) is in excess, its concentration remains nearly unaffected.
  • Examples: oxidation (reaction with oxygen) and hydrolysis (reaction with water).
  • The rate decreases only with the decreasing concentration of the first chemical (eq. 7), so kinetics become practically first-order.
  • Pseudo first-order kinetics is very common in the environment.

🦠 Biotic reactions (enzyme-catalyzed)

  • Reactions catalyzed by enzymes follow the Michaelis-Menten model for single-substrate reactions.
  • At low concentrations: No enzyme saturation → pseudo first-order kinetics.
  • At high concentrations: Enzyme saturates → zero-order kinetics (eq. 8): rate is constant, independent of reactant concentration.
    • Rate depends only on catalyst availability (e.g., reactive surface area).
    • Half-life is longer for greater initial concentrations.
  • Example: Alcohol (ethanol) transformation in the liver proceeds at a constant rate regardless of the amount consumed (zero-order).

🦠 Microbial degradation (biodegradation)

  • A special case of biotic transformation: enzymatically catalyzed by microbial cells.
  • Can be viewed as the encounter of chemical molecules with microbial cells → apparent second-order kinetics (eq. 9): first order with respect to microbial cell mass and first order with respect to chemical mass.
  • In practice, if microbial biomass is relatively constant, this simplifies to pseudo first-order with respect to the chemical.

🌊 Transport processes

🌊 Advection and dispersion

Advection: transport along the current axis (e.g., by wind or water flow).
Dispersion: turbulent mixing in all directions.

  • Driven by external forces (wind, water velocity, gravity, rainfall, soil leaching).
  • Example of first-order advective loss (eq. 10): outflow of a chemical from a lake.
    • Loss rate = (flow rate Q / lake volume V) × mass.
    • Q/V is the renewal rate constant k_adv of the transport medium.
  • Most exposure models use simplified descriptions (e.g., dispersive air plume model); more sophisticated models require detailed spatial/temporal resolution and higher computing effort.
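
A minimal sketch of the lake-outflow example (eq. 10), with an illustrative flow rate and volume:

```python
# First-order advective loss of a chemical flushed out of a lake:
# the renewal rate constant k_adv = Q / V.
import math

Q = 50_000.0      # outflow, m3 per day (illustrative)
V = 5_000_000.0   # lake volume, m3 (illustrative)
m0 = 10.0         # chemical mass present in the lake, kg

k_adv = Q / V                       # per day
half_life = math.log(2) / k_adv     # days

mass_after_30_days = m0 * math.exp(-k_adv * 30)
print(f"k_adv = {k_adv:.3f} 1/day, half-life = {half_life:.1f} days")
print(f"Mass left after 30 days (no new input): {mass_after_30_days:.2f} kg")
```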

🔄 Transfer and partitioning

🔄 Intermedia transfer

Transfer through an interface between two media (e.g., water-air, water-sediment): rate proportional to the concentration difference in the two media (Fick's first law).

  • Examples: volatilization from water to air, gas absorption from air to water/soil, adsorption from water to sediments/suspended solids/biota, desorption from solid surfaces.
  • Transfer can occur in two directions (e.g., volatilization and gas absorption), each with its own first-order rate constant.
  • Each direction's rate is proportional to the concentration in the medium of origin.

⚖️ Equilibrium and partition coefficients

  • Intermedia partitioning proceeds spontaneously until thermodynamic equilibrium is reached.
  • At equilibrium (eq. 12): forward and backward rates are equal; total Gibbs free energy is at a minimum; the system has come to rest.
  • The ratio of concentrations in the two media at equilibrium is the equilibrium constant or partition coefficient.
  • Don't confuse: partitioning is a dynamic process (forward and reverse transfer) that reaches a steady state, not a one-way movement.

📊 Characterizing fate with rate constants and half-lives

📊 First-order rate constant and half-life relationship

  • First-order loss processes result in exponential decrease of mass (eq. 3): m(t) = m₀ × exp(−k × t).
  • Half-life t₁/₂ (time for 50% disappearance) is inversely proportional to the rate constant k (eq. 4): t₁/₂ = ln(2) / k ≈ 0.693 / k.
  • Half-life is constant and independent of concentration for first-order processes.
  • The disappearance time DT50 is often used in regulation but is identical to half-life only if the process is first-order.
  • Don't confuse: assuming constant half-life silently assumes first-order kinetics; for second-order or zero-order processes, half-life changes with concentration.
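
A short numerical illustration of the exponential decrease and the constant half-life (eq. 3 and 4); the rate constant is chosen arbitrarily:

```python
# First-order loss: m(t) = m0 * exp(-k t), with t_half = ln(2) / k.
import math

k = 0.05            # first-order rate constant, 1/day (illustrative)
m0 = 100.0          # initial mass, arbitrary units

t_half = math.log(2) / k
for t in (0, t_half, 2 * t_half, 3 * t_half):
    m = m0 * math.exp(-k * t)
    print(f"t = {t:5.1f} d  ->  m = {m:6.2f}")   # halves every t_half, regardless of m0
```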

📈 Comparing reaction orders

| Reaction order | Rate equation | Half-life behavior | Common examples |
| --- | --- | --- | --- |
| Zero-order | Rate = constant (eq. 8) | Longer at higher initial concentrations | Enzyme-saturated reactions, catalyst-limited reactions (e.g., alcohol in liver) |
| First-order | Rate ∝ m (eq. 2) | Constant, independent of concentration | Radioactive decay, pseudo first-order environmental transformations (oxidation, hydrolysis) |
| Pseudo first-order | Rate ∝ m (eq. 7) | Constant (one reactant in excess) | Oxidation, hydrolysis, microbial degradation (when biomass is constant) |
| Second-order | Rate ∝ m₁ × m₂ (eq. 5, 6) | Increases as reaction proceeds and concentrations drop | Most true chemical reactions between two substances |

🧮 Mass balance modeling

  • Environmental fate modeling implements degradation, transfer, and transport processes in mathematical models that simulate fate.
  • Uses mass balance equations (eq. 1): rate of change = input − loss.
  • Borrows mathematical expressions from thermodynamic laws and chemical reaction kinetics.
  • Goal: predict exposure of ecosystems, populations, and organisms (including humans) from emission data.
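
A minimal one-box sketch of such a mass balance (eq. 1), using a simple Euler integration with illustrative input and loss values:

```python
# One-box mass balance: dm/dt = input - k * m.
# Explicit Euler integration; the analytical steady state is input / k.

input_rate = 2.0   # kg/day entering the compartment (illustrative)
k = 0.1            # total first-order loss rate constant, 1/day
dt = 0.1           # time step, days
m = 0.0            # start with a clean compartment

for step in range(int(365 / dt)):          # simulate one year
    m += (input_rate - k * m) * dt

print(f"Mass after 1 year: {m:.2f} kg")
print(f"Analytical steady state: {input_rate / k:.2f} kg")   # 20 kg
```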

Partitioning and partitioning constants

3.4. Partitioning and partitioning constants

🧭 Overview

🧠 One-sentence thesis

Partitioning constants quantify how chemicals distribute between environmental media (water, air, sediment, biota) at equilibrium, and these constants—driven by hydrophobicity, volatility, and ionization—determine a chemical's fate, transport, and bioavailability in the environment.

📌 Key points (3–5)

  • Partitioning is driven by thermodynamics: chemicals spontaneously transfer between media (e.g., water ↔ air, water ↔ sediment) until equilibrium is reached, at which point forward and reverse rates are equal and the concentration ratio becomes constant.
  • Three core properties govern partitioning: hydrophobicity (tendency to escape water), volatility (tendency to vaporize), and degree of ionization (for acids and bases).
  • Hydrophobicity depends on molecular size and polarity: larger molecules and non-polar structures increase hydrophobicity; polar groups (–OH, –NH₂) that form hydrogen bonds with water decrease it.
  • Common confusion—Kₒw vs. Kₒc: the octanol-water partition coefficient (Kₒw) measures a chemical's hydrophobicity in a standard two-phase system, while the organic-carbon-normalized sorption coefficient (Kₒc) describes sorption to soil/sediment organic matter; both relate to hydrophobicity but apply to different environmental compartments.
  • Ionization drastically changes partitioning: for acids and bases, the ionized form is much more water-soluble and sorbs far less to sediment than the neutral form; the fraction ionized depends on pH and pKa.

🌊 Intermedia transfer and equilibrium

🔄 How transfer works (Fick's law)

Transfer rate through an interface between two media is proportional to the concentration difference of the chemical in the two media.

  • As long as concentration in one medium is higher, molecules are more likely to pass through the interface.
  • Examples: volatilization (water → air), gas absorption (air → water), adsorption (water → sediment), desorption (sediment → water).
  • Transfer can occur in both directions simultaneously, each with a first-order rate constant.

⚖️ Thermodynamic equilibrium

  • Partitioning proceeds spontaneously until equilibrium is reached.
  • At equilibrium:
    • Forward and backward transfer rates become equal.
    • Total Gibbs free energy reaches a minimum (system at rest).
    • The concentration ratio becomes constant—this is the partition coefficient or equilibrium constant.
  • Example: for water ↔ air, volatilization rate from water equals gas absorption rate from air.

🚫 Don't confuse: rate vs. equilibrium

  • Rate constants describe how fast transfer occurs.
  • Equilibrium constants (partition coefficients) describe the final concentration ratio when the system is at rest.
  • A chemical can transfer quickly but have low equilibrium partitioning into one phase, or vice versa.

💧 Hydrophobicity

🧪 What hydrophobicity means

Hydrophobicity: the tendency of a substance to escape the aqueous phase (literally "fear of water").

  • A hydrophobic chemical does not like to dissolve in water.
  • Water molecules are tightly bound via hydrogen bonds; dissolving a chemical requires forming a cavity in water, which costs energy.
  • The more energy required to create the cavity, the more hydrophobic the chemical.

📏 Two factors that control hydrophobicity

| Factor | Effect on hydrophobicity | Mechanism |
| --- | --- | --- |
| Molecular size | Larger size → more hydrophobic | Larger cavity needed; more energy cost |
| Polarity / H-bonding ability | More polar groups → less hydrophobic | Polar groups (–OH, –NH₂) form hydrogen bonds with water, favoring dissolution |
  • Example: adding chlorine or bromine atoms increases size and hydrophobicity.
  • Example: adding –OH or –NH₂ groups decreases hydrophobicity because these groups hydrogen-bond with water.
  • Most hydrophobic chemicals: non-polar organic micropollutants (PCBs, PAHs, chlorinated hydrocarbons).

🧴 Octanol-water partition coefficient (Kₒw)

Kₒw: the ratio of concentrations of a chemical in n-octanol and in water at equilibrium.

  • Octanol mimics tissue lipids and membrane phospholipids (has a long alkyl chain plus one –OH group).
  • Kₒw is a simple, standardized measure of hydrophobicity.
  • Water solubility is negatively correlated with Kₒw (higher Kₒw → lower solubility).
  • Expressed as log Kₒw; typical range: 0 to 8+.

🔬 Three methods to determine Kₒw

| Method | How it works | Suitable range |
| --- | --- | --- |
| Equilibration (shake-flask, slow-stirring, generator column) | Measure distribution experimentally between octanol and water | log Kₒw up to ~6–7 |
| Chromatography (HPLC) | Derive from retention time; calibrate with standards of known Kₒw | log Kₒw 2–8 |
| Calculation (fragment method) | Sum contributions of molecular fragments (fₙ) plus correction factors (Fₚ) | Wide range; software such as KOWWIN, EPISUITE |
  • Fragment method equation: log Kₒw = Σfₙ + ΣFₚ
  • Each fragment (e.g., –CH₃, aromatic carbon, –OH) has a known contribution.
  • Correction factors account for intramolecular interactions (e.g., steric hindrance).
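
A toy sketch of the fragment-addition idea; the fragment and correction values below are illustrative placeholders, not the calibrated KOWWIN constants:

```python
# Fragment-addition estimate: log Kow = sum(f_n) + sum(F_p).
# Fragment values are hypothetical, for illustration only.

fragment_contributions = {
    "aromatic C (x6)": 6 * 0.29,   # hypothetical per-atom contribution
    "-CH3": 0.55,                  # hypothetical
    "-OH": -1.40,                  # hypothetical; polar groups lower Kow
}
correction_factors = {
    "intramolecular interaction": 0.0,  # no correction in this toy case
}

log_kow = sum(fragment_contributions.values()) + sum(correction_factors.values())
print(f"Estimated log Kow = {log_kow:.2f}")
```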

⚠️ Limitation for ionized chemicals

  • The calculations and standard Kₒw values apply to non-ionized chemicals.
  • Ionization drastically changes hydrophobicity (see Ionization section below).

🌬️ Volatility

💨 What volatility measures

Volatility from water to air is expressed via Henry's law constant (Kₕ, units: Pa·m³/mol).

  • Kₕ = Pᵢ / Cₐq, where:
    • Pᵢ = partial pressure of the chemical in air (Pa)
    • Cₐq = aqueous concentration (mol/m³)
  • Partial pressure is a measure of gas-phase concentration (but in pressure units, not mol/m³).

🔀 Two opposing forces

Kₕ can be estimated from:

  • Kₕ ≈ Vₚ / Sᵥ, where:
    • Vₚ = vapor pressure of the pure chemical (Pa) → high Vₚ means more volatile
    • Sᵥ = water solubility (mol/m³) → high Sᵥ means less volatile from water

Example (from Table 2 in excerpt):

  • Benzene and ethanol have similar vapor pressure, but benzene is much more volatile from water because its water solubility is ~500× lower than ethanol's.

🌡️ Air-water partition coefficient (Kₐᵢᵣ₋wₐₜₑᵣ)

  • To compare concentrations in the same units (mol/m³ in both phases), convert Kₕ:
  • Kₐᵢᵣ₋wₐₜₑᵣ = Kₕ / (R·T)
  • R = gas constant (8.314 m³·Pa·K⁻¹·mol⁻¹); T = temperature in Kelvin.
  • At 25°C, R·T ≈ 2477 Pa·m³·mol⁻¹.
  • Kₐᵢᵣ₋wₐₜₑᵣ is "dimensionless" (L/L or m³/m³).

📊 Interpreting the numbers

  • Even for volatile chemicals, the equilibrium concentration in air is typically lower than in water (Kₐᵢᵣ₋wₐₜₑᵣ < 1).
  • Example: benzene has Kₐᵢᵣ₋wₐₜₑᵣ ≈ 0.225, meaning air concentration is ~22.5% of the dissolved concentration at equilibrium.
  • Larger, less polar molecules tend to be slightly more volatile from water (but still prefer water over air in absolute terms).
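
A small check of the conversion in Python, using a Henry's law constant for benzene (~557 Pa·m³/mol) consistent with the Kₐᵢᵣ₋wₐₜₑᵣ quoted above:

```python
# Converting a Henry's law constant to the dimensionless air-water
# partition coefficient: K_aw = K_H / (R * T).

R = 8.314      # Pa.m3/(mol.K)
T = 298.15     # K (25 degrees C)
K_H = 557.0    # Pa.m3/mol, approximate value for benzene

K_aw = K_H / (R * T)
print(f"R*T = {R * T:.0f} Pa.m3/mol")      # ~2478
print(f"K_aw = {K_aw:.3f}")                # ~0.225: air holds ~22.5% of the aqueous conc.
```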

🧊 Don't confuse: vapor pressure vs. volatility from water

  • Vapor pressure (Vₚ) = pressure above the pure liquid chemical.
  • Volatility from water depends on both Vₚ and water solubility.
  • A chemical can have high Vₚ but low volatility from water if it is very water-soluble.

⚡ Degree of ionization

🔋 Acids and bases in equilibrium

  • Acids: neutral form (HA) ↔ anionic form (A⁻)
  • Bases: neutral form (B) ↔ cationic form (BH⁺)
  • The degree of ionization depends on pH and pKa (the acid dissociation constant).

🧮 Calculating fraction ionized

| Chemical type | Formula for % ionized |
| --- | --- |
| Acid | % ionized = 100 / [1 + 10^(pKa − pH)] |
| Base | % ionized = 100 / [1 + 10^(pH − pKa)] |
  • When pH = pKa, the chemical is 50% ionized.
  • For acids: higher pH (more basic) → more ionized.
  • For bases: lower pH (more acidic) → more ionized.

🌊 How ionization affects fate

  • Ionized forms are much more water-soluble than neutral forms.
  • Sorption to sediment/soil is typically about 100× lower for ionized forms (especially anions, because sediment particles are negatively charged and repel anions).
  • If >99% is ionized (pH more than 2 units above pKa for acids), sorption of the anion may contribute significantly; at lower degrees of ionization, sorption is dominated by the neutral fraction.

Example (from Table 4):

  • Pentachlorophenol (pKa = 4.60): at pH 7.0, 99.6% ionized → very low sorption.
  • Phenol (pKa = 9.98): at pH 7.0, only 0.1% ionized → sorption dominated by neutral form.
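
A minimal sketch that reproduces these Table 4 figures from the acid formula above:

```python
# Fraction ionized of an acid as a function of pH and pKa:
# % ionized = 100 / (1 + 10**(pKa - pH)).

def percent_ionized_acid(pka: float, ph: float) -> float:
    return 100.0 / (1.0 + 10.0 ** (pka - ph))

for name, pka in [("pentachlorophenol", 4.60), ("phenol", 9.98)]:
    print(f"{name}: {percent_ionized_acid(pka, 7.0):.1f}% ionized at pH 7.0")
# pentachlorophenol: 99.6% ionized; phenol: 0.1% ionized
```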

⚠️ Special case: organic cations (bases)

  • Cationic forms (BH⁺) are also more water-soluble, but they are electrostatically attracted to negatively charged sediment surfaces.
  • Result: sorption affinity of cations can be stronger than that of the neutral base.
  • Examples: many pharmaceuticals (amphetamine, antidepressants, beta-blockers).

🧪 Correcting sorption coefficients for ionization

  • For weakly ionizing acids where pH is not too high:
    • Kd = Kd(neutral) × α, where α = fraction non-ionized.
  • This approximation assumes the ionized form does not sorb; valid when <1% is ionized.

🪨 Sorption processes and phases

🧩 Two major sorption mechanisms

| Process | Definition | Isotherm shape | Example |
| --- | --- | --- | --- |
| Absorption | Partitioning ("dissolution") into a 3-D sorbent matrix; concentration homogeneous throughout | Linear | Partitioning into organic matter, lipids |
| Adsorption | Binding to a 2-D surface; limited number of sites | Non-linear (saturates at high concentration) | Binding to clay surfaces, soot |

📈 Sorption isotherms

A sorption isotherm: the relationship between concentration in the sorbent (Cs) and concentration in the aqueous phase (Caq).

  • Linear isotherm (absorption): Cs = Kp · Caq, where Kp is the sorption coefficient (units: L/kg).
  • Langmuir isotherm (adsorption): Cs = (b·Cmax·Caq) / (1 + b·Caq)
    • Cmax = maximum sorption capacity (all sites occupied).
    • b = adsorption site energy term.
    • At low Caq, behaves linearly; at high Caq, saturates.
  • Freundlich isotherm (empirical, non-linear): Cs = KF · Caq^n
    • n = Freundlich exponent (if n = 1, linear; if n < 1, saturation; if n > 1, cooperative sorption).
    • Log Cs = n·log Caq + log KF (linear on log-log plot).
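
A compact sketch evaluating the three isotherms side by side (parameter values are illustrative):

```python
# Three sorption isotherms relating sorbed (Cs) and aqueous (Caq) concentrations.

def linear(caq, kp=100.0):                    # absorption: Cs = Kp * Caq
    return kp * caq

def langmuir(caq, b=2.0, cmax=50.0):          # adsorption: saturates at Cmax
    return (b * cmax * caq) / (1.0 + b * caq)

def freundlich(caq, kf=80.0, n=0.7):          # empirical: Cs = KF * Caq**n
    return kf * caq ** n

for caq in (0.01, 0.1, 1.0, 10.0):            # mg/L
    print(f"Caq={caq:5.2f}  linear={linear(caq):8.1f}  "
          f"langmuir={langmuir(caq):6.1f}  freundlich={freundlich(caq):7.1f}")
```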

🌍 Major sorption phases in sediment/soil

| Phase | Characteristics | Sorption mechanism |
| --- | --- | --- |
| Organic matter (SOM) | Humic acids, fulvic acids, detritus; ~58% organic carbon | Absorption (partitioning); linear isotherm |
| Clay minerals | Negatively charged surfaces; large surface area | Adsorption; important for cations |
| Soot particles | Combustion residue; very high affinity for hydrophobic chemicals | Adsorption; can dominate for PAHs, PCBs |
| CaCO₃, sand, silt | Inert or low sorption | Minor role for organic chemicals |

🧪 Organic-carbon-normalized sorption coefficient (Kₒc)

  • Hydrophobic organic chemicals sorb mainly to organic matter.
  • Sorption coefficient Kp depends on the fraction of organic carbon (fₒc) in the sediment/soil:
    • Kₒc = Kp / fₒc (units: L/kg)
  • Kₒc is more intrinsic (less dependent on sediment type) than Kp.
  • Rule of thumb: organic matter contains 58% organic carbon, so fₒc = 0.58 × fₒₘ.
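
A minimal sketch of the normalization (the Kp value and organic matter content are illustrative):

```python
# Normalizing a measured sorption coefficient to organic carbon:
# Koc = Kp / foc, with foc = 0.58 * fom (organic matter is ~58% carbon).

fom = 0.05                 # 5% organic matter in the sediment (illustrative)
Kp = 120.0                 # measured sorption coefficient, L/kg (illustrative)

foc = 0.58 * fom
Koc = Kp / foc
print(f"foc = {foc:.3f}, Koc = {Koc:.0f} L/kg")   # ~4138 L/kg
```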

📊 Kₒc correlates with Kₒw

  • For neutral, non-polar hydrophobic chemicals (PCBs, PAHs, chlorinated hydrocarbons), Kₒc is linearly related to Kₒw.
  • This makes sense: both reflect hydrophobicity.
  • Does not apply to polar or ionized chemicals, or metals.

⚠️ Don't confuse: Kp, Kₒc, and Kₒw

  • Kp = sorption coefficient for a specific sediment/soil (depends on organic matter content).
  • Kₒc = normalized to organic carbon content (more general).
  • Kₒw = octanol-water partition coefficient (standard hydrophobicity measure).
  • All three are related for hydrophobic chemicals, but Kₒw is a pure chemical property, while Kp and Kₒc describe environmental sorption.

🔥 Special case: soot

  • Soot has extremely high affinity for hydrophobic chemicals (much higher than organic matter).
  • If sediment contains soot, Kp values are often higher than predicted from fₒc alone.
  • Sorption to soot is adsorption (surface binding), not absorption.

🧬 Quantitative Structure-Property Relationships (QSPRs)

🎯 What QSPRs do

QSPR: a model that relates an environmental parameter (Y-variable, e.g., sorption coefficient, toxicity) to chemical properties (X-variables, e.g., Kₒw, molecular size, polarity).

  • Goal: predict fate or effect parameters for chemicals without experimental testing.
  • QSAR = Quantitative Structure-Activity Relationship (often used for toxicity).
  • QSPR = Quantitative Structure-Property Relationship (often used for physical-chemical properties, fate).

🧩 Elements of a QSPR

| Element | Description |
| --- | --- |
| Y-variable | The parameter to predict (e.g., Kₒc, BCF, biodegradation rate, toxicity) |
| X-variable(s) | Chemical properties or descriptors (e.g., Kₒw, molecular volume, H-bond acidity) |
| Model | Mathematical relationship (linear equation, multivariate regression, etc.) |
| Training set | Chemicals used to develop the model |
| Validation set | Independent chemicals used to test the model's predictive power |

📊 Common X-variables (chemical descriptors)

| Category | Examples |
| --- | --- |
| Hydrophobicity | Aqueous solubility, Kₒw, hydrophobic fragment constant (π) |
| Electronic | Atomic charges, dipole moment, H-bond acidity (A), H-bond basicity (B), Hammett constant (σ) |
| Steric | Total surface area, molecular volume (V), Taft constant (Es) |
  • Some models use hundreds of descriptors derived from molecular graphs (e.g., CODESSA Pro: 494 molecular + 944 fragment descriptors).

🔧 Modeling techniques

  • Graphical: simple visual correlation.
  • Linear regression: Y = a₁X₁ + a₂X₂ + ... + b
    • a₁, a₂ = regression coefficients; b = intercept.
    • Quality: correlation coefficient (r, closer to 1.0 = better fit), standard error (s).
  • Hansch approach (classical QSAR): log(1/C) = c·π + c'·σ + c''·Es + c'''
    • π = hydrophobic substituent constant; σ = electronic; Es = steric.
  • Multivariate techniques: Principal Component Analysis (PCA), Partial Least Squares (PLS).
    • Useful when many X-variables are involved or when X-variables are correlated.

🧪 Polyparameter Linear Free Energy Relationships (pp-LFER)

pp-LFER: a mechanistic QSPR approach that includes parameters for different types of molecular interactions.

  • General form: log K = c + e·E + s·S + a·A + b·B + v·V
    • E = excess molar refraction (related to π-electrons)
    • S = dipolarity/polarizability
    • A = H-bond acidity (H-bond donor strength)
    • B = H-bond basicity (H-bond acceptor strength)
    • V = molar volume (size)
    • Lowercase (e, s, a, b, v) = system-specific coefficients; uppercase (E, S, A, B, V) = compound-specific descriptors.

🔬 Types of molecular interactions (pp-LFER basis)

| Compound type | Interactions | Examples |
| --- | --- | --- |
| Apolar | Only van der Waals | Alkanes, PCBs, chlorobenzenes |
| Monopolar (H-acceptor) | van der Waals + H-acceptor | Ethers, ketones, esters, alkenes |
| Monopolar (H-donor) | van der Waals + H-donor | CHCl₃, CH₂Cl₂ |
| Bipolar | van der Waals + H-donor + H-acceptor | Amines, alcohols, carboxylic acids |
  • Van der Waals: attractive forces between all molecules; strength depends on contact area (size).
  • Hydrogen bond: electrostatic attraction between H (covalently bound to N, O, F) and an electronegative atom with a lone pair.

🐟 Example: QSPR for bioconcentration factor (BCF)

  • Kₒw-based model (Veith et al., 1979): log BCF = 0.85·log Kₒw − 0.70
    • Works well for neutral hydrophobic chemicals (chlorobenzenes, PCBs).
    • Explanation: octanol mimics fish lipids.
  • Deviations:
    • Metabolized chemicals: BCF lower than predicted (chemical is broken down).
    • Very high Kₒw (log Kₒw > 7): BCF decreases again (molecules too large to cross membranes, or overestimation of aqueous concentration due to particle binding).
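
A minimal sketch applying the Veith et al. (1979) relationship to a few illustrative log Kₒw values:

```python
# Kow-based QSPR for the bioconcentration factor (Veith et al., 1979):
# log BCF = 0.85 * log Kow - 0.70 (neutral hydrophobic chemicals).

def log_bcf(log_kow: float) -> float:
    return 0.85 * log_kow - 0.70

for log_kow in (3.0, 5.0, 6.5):               # illustrative chemicals
    print(f"log Kow = {log_kow}: predicted BCF ~ {10 ** log_bcf(log_kow):,.0f} L/kg")
# Note: predictions overestimate BCF for metabolized chemicals and for log Kow > ~7.
```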

🧬 Example: pp-LFER for lipid partitioning

  • Storage lipids (triglycerides): log KSL-W has large negative coefficient for A (−1.72) and B (−4.14).
    • Storage lipids are non-polar; H-bonding chemicals partition less into them.
  • Membrane lipids (phospholipids/liposomes): log KML-W has positive coefficient for A (+0.29).
    • Phosphate group in phospholipids is a strong H-bond acceptor; H-bond donors (high A) partition more into membranes.
  • This difference in the "a" coefficient makes mechanistic sense and shows the strength of pp-LFER.

🌱 Example: QSPR for soil sorption (Kₒc)

  • For neutral, non-polar hydrophobic chemicals: log Kₒc correlates linearly with log Kₒw.
  • Not valid for:
    • Polar chemicals (need pp-LFER or other models).
    • Ionic chemicals (cations sorb to clay via cation exchange; anions sorb much less).
    • Metals.

⚖️ Reliability and limitations

  • Domain of applicability: each model applies only to certain chemical classes; must check if a new chemical falls within the training set's chemical space.
  • Validation: model quality should be tested on an independent validation set, not just the training set.
  • OECD principles: guidance exists for validating QSAR/QSPR models.
  • Statistical quality: high r² on training set does not guarantee good predictions; need external validation.

🚫 Don't confuse: training set vs. validation set

  • Training set: chemicals used to develop (calibrate) the model.
  • Validation set: independent chemicals used to test the model's predictive power.
  • A model can fit the training set perfectly but fail on new chemicals if it is overfitted or extrapolated beyond its domain.

Metal speciation

3.5. Metal speciation

🧭 Overview

🧠 One-sentence thesis

Metal speciation—the distribution of metals over different chemical and physical forms—determines their mobility and bioavailability in the environment, and is controlled by dynamic chemical reactions and environmental conditions rather than static equilibria.

📌 Key points (3–5)

  • What speciation means: the distribution of a metal over different forms (free ions, complexes, minerals, adsorbed states) in the environment.
  • Four main reactions: adsorption/desorption, ion exchange/dissolution, precipitation/co-precipitation, and complexation with inorganic and organic ligands.
  • Environmental controls: pH, redox potential, and ligand concentrations (including dissolved organic matter) drive speciation reactions.
  • Common confusion: models assume equilibrium, but real environments are dynamic—land use changes, bioturbation, weather, and biological uptake continuously alter speciation.
  • Why it matters: speciation determines mobility (how far metals move) and bioavailability (how easily organisms take them up), thus controlling bioaccumulation and toxicity.

🧪 What metal speciation is

🧪 Definition and forms

Metal speciation: the distribution of a metal over different physical and chemical forms in the environment.

Metals can exist as:

  • The pure element (very rare in the environment)
  • Components of minerals
  • Free cations dissolved in water (e.g. Cd²⁺)
  • Bound to inorganic or organic molecules in solid or dissolved phases (e.g. CH₃Hg⁺ or AgCl₂⁻)

🔗 Physical vs. chemical processes

  • Chemical speciation (strict sense): reactions that change the chemical form of the metal (complexation, precipitation, redox).
  • Physical processes (broader sense): electrostatic attraction of metal cations to negatively charged mineral surfaces—affects mobility and bioavailability but is not always called "speciation" in the strict sense.
  • Both are discussed together because both control where metals go and whether organisms can take them up.

⚗️ The four main speciation reactions

⚗️ Adsorption and desorption

  • Metals bind to reactive components in soils, sediments, and (to a lesser extent) water.
  • Reactive components include:
    • Clay minerals
    • Hydroxides (e.g. Fe(OH)₃) and carbonates (e.g. CaCO₃)
    • Organic matter
  • Binding is often reversible (adsorption ↔ desorption).

🔄 Ion exchange and dissolution

  • Cationic metals (e.g. Cd²⁺) bind reversibly to negatively charged clay minerals via cation-exchange.
  • Cation-exchange capacity (CEC): the density of negatively charged sites per mass of soil or sediment (units: cmol_c/kg soil); measures how many cations can be retained.
  • Competition: multiple cations compete for the same binding sites; the winner depends on binding affinity and concentration of each species.
  • pH effect: increasing pH increases CEC (more functional groups release H⁺); decreasing pH lowers CEC and increases competition from H⁺ ions.

🧊 Precipitation and co-precipitation

  • Metals can form solid phases (precipitates) under certain conditions.
  • Co-precipitation: metals incorporate into other minerals as they form.

🔗 Complexation to ligands

  • Ligands: molecules that form complexes with metals, ranging from simple anions (sulfate, organic acid anions) to macromolecules (proteins, biomolecules).
  • Metals form complexes with functional groups (mainly carboxylic and phenolic) in organic matter.
  • In aquatic systems, dissolved organic matter (DOM) or dissolved organic carbon (DOC)—operationally defined as the fraction passing a 0.45 μm filter, often fulvic and humic acids—plays a major role.
  • Adsorption to oxide/hydroxide surfaces and organic matter functional groups (oxygen- or nitrogen-containing) is also called complexation when covalent bonds form.
  • pH is critical because many metal-binding groups are acidic or basic.

Don't confuse: Complexation in solution (dissolved ligands) vs. complexation to solid surfaces (adsorption)—both involve ligand binding, but one keeps the metal mobile, the other immobilizes it.

🧮 Modelling metal speciation

🧮 The equilibrium approach

  • Speciation can be modelled if we know the main reactions and environmental conditions.
  • Knowledge is expressed as equilibrium constants for complexation and redox reactions.

🔢 Complexation equilibria

For a general complexation reaction:

  • a M^(m+) (aq) + b L^(n−) (aq) ↔ M_a L_b^(q+) (aq), where q = am − bn

The equilibrium constant K_f relates the activities (or concentrations) of free and complexed metal ions.

  • If K_f is known (from experiments or estimation), we can calculate relative concentrations.
  • Actual concentrations require either direct measurement of one species and the ligand, or measurement of total metal concentration plus a mass balance model.
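
A minimal sketch of the mass-action relation for a 1:1 complex; the formation constant and ligand concentration are hypothetical:

```python
# Mass-action ratio for a 1:1 metal-ligand complex, M + L <-> ML:
# [ML] / [M] = Kf * [L]. Values are hypothetical, for illustration only.

Kf = 1.0e6          # formation constant, L/mol (hypothetical)
L_free = 1.0e-5     # free ligand concentration, mol/L (hypothetical)

ratio = Kf * L_free                     # complexed-to-free metal ratio
fraction_complexed = ratio / (1 + ratio)
print(f"[ML]/[M] = {ratio:.1f}; fraction of metal complexed = {fraction_complexed:.1%}")
# ~90.9% of the metal is complexed under these conditions
```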

Example: Copper speciation in salt water without DOC depends on pH (Blust et al., 1991):

  • At pH 7.5, most Cu is bound to CO₃²⁻.
  • At pH 6, Cu is mainly present as free Cu²⁺ and complexes with chloride (CuCl⁺) and sulfate (CuSO₄).

⚡ Redox equilibria

  • For redox reactions, the Nernst equation describes the equilibrium between reduced and oxidized states:
    • E_h (redox potential) depends on E_h⁰ (standard potential), temperature, number of electrons transferred, and the activity ratio of reduced to oxidized species.
  • Many redox reactions involve H⁺ transfer, so the ratio depends on pH.
  • Redox potential is also expressed as pe (negative logarithm of electron activity).
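
A minimal sketch of the Nernst relation for a one-electron couple (the Fe³⁺/Fe²⁺ standard potential of ~0.77 V is used; the activity ratio is illustrative):

```python
# Nernst equation for a redox couple ox + n e- <-> red:
# Eh = Eh0 + (R*T / (n*F)) * ln(a_ox / a_red).
import math

R, F, T = 8.314, 96485.0, 298.15     # J/(mol K), C/mol, K
Eh0 = 0.77                            # standard potential of Fe3+/Fe2+, V (approx.)
n = 1                                 # electrons transferred
ratio_ox_red = 0.01                   # activity ratio Fe3+/Fe2+ (illustrative)

Eh = Eh0 + (R * T / (n * F)) * math.log(ratio_ox_red)
pe = Eh * F / (2.303 * R * T)         # pe expresses the same potential as electron activity
print(f"Eh = {Eh:.3f} V, pe = {pe:.1f}")
```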

📊 Pourbaix (pe-pH) diagrams

  • By combining equilibrium equations for all relevant reactions, models describe speciation as a function of ligand concentrations, pH, and redox potential.
  • Pourbaix diagram: a pe-pH plot showing which species dominates under different conditions.
  • Boundary lines mark conditions where the activity ratio of two species equals one.
  • Fields between lines are labelled with the dominant species.

Example: For iron in water, the diagram shows Fe³⁺, Fe²⁺, Fe(OH)₃(s), and Fe(OH)₂(s) dominance regions depending on pe and pH.

Don't confuse: The diagram assumes equilibrium and a fixed total metal concentration—real environments are not at equilibrium.

🌍 Environmental controls and dynamics

🌍 Key environmental parameters

| Parameter | Effect on speciation |
| --- | --- |
| pH | Controls protonation of functional groups, CEC, and competition with H⁺; affects redox equilibria involving H⁺ |
| Redox potential (E_h or pe) | Determines oxidation state of metals (e.g. Fe²⁺ vs. Fe³⁺) |
| Ligand concentrations | More ligands (e.g. DOC, sulfate, chloride) shift equilibria toward complexed forms |
| Dissolved organic matter (DOC) | Increases complexation in water; operationally defined as the <0.45 μm fraction |

🌊 Large-scale dynamic changes

Real environments are not at equilibrium—speciation and fate are highly dynamic.

Land use changes:

  • Agricultural land → nature: soil organic matter increases, pH decreases (no more liming, organic matter accumulates and decomposes).
  • Result: DOC increases, metal mobility increases (e.g. Cu²⁺ more mobile than CuCO₃).
  • "Chemical time bomb" effect: historical metal pollution suddenly becomes available.

Infrastructure changes:

  • River reconstruction or deep soil digging alters environmental conditions.
  • Example: Drilling wells in Bangladesh introduced oxygen and organic matter into groundwater, changing arsenic speciation and increasing its solubility and human exposure.

🐛 Small-scale biotic and abiotic dynamics

Abiotic factors:

  • Rain and flooding events
  • Weather conditions
  • Redox status changes

Biotic factors:

  • Bioturbation: sediment-dwelling organisms re-suspend particles; earthworms aerate soil and excrete mucus that stimulates microbial activity.
  • Root exudates: plants produce acidic compounds that alter pH and speciation.
  • Metal uptake: organisms preferentially take up ionic metal forms, shifting partitioning among species.

Example: Chironomid (midge) larvae burrowing in sediment alters environmental conditions (oxygen, pH, redox) and thus metal speciation.

Don't confuse: Equilibrium models (useful for understanding reactions) vs. real environments (constantly changing due to biological and physical processes).

🔬 Why speciation matters for risk assessment

🔬 Mobility and bioavailability

  • Mobility: how far metals move in soil, sediment, or water—depends on whether they are dissolved or bound to solids.
  • Bioavailability: how easily organisms take up metals—free ionic forms (e.g. Cd²⁺) are generally most bioavailable.
  • Speciation determines both, and thus controls potential bioaccumulation and toxicity.

🧪 Implications for ecological risk

  • Assessing ecological risks of metals requires considering speciation, not just total metal concentration.
  • The same total concentration can have very different effects depending on pH, DOC, redox conditions, and competing ions.

Example: In low-pH soil, more metal is present as free ions → higher bioavailability and toxicity. In high-pH soil with high organic matter, more metal is complexed → lower bioavailability.


Availability and bioavailability

3.6. Availability and bioavailability

🧭 Overview

🧠 One-sentence thesis

Bioavailability is a dynamic, process-oriented concept that describes how chemicals move from the environment through organisms to sites of toxic action, and accounting for it improves risk assessment by distinguishing between total concentrations and the fractions that actually cause exposure and effects.

📌 Key points (3–5)

  • Three-process framework: bioavailability consists of chemical availability (external), actual/potential uptake (toxicokinetics), and internal distribution to the site of action (toxicodynamics).
  • Why total concentration misleads: not all chemical present in soil or water contributes to exposure; binding to organic matter, clay, and other factors reduces the fraction available for uptake.
  • Time-dependent dynamics: bioavailability operates across timescales from seconds to hundreds of years; some pollutant fractions may never reach organisms during their lifespan.
  • Common confusion: "available" vs. "bioavailable"—chemical availability is the external fraction contributing to exposure, while bioavailability includes uptake and internal processes leading to effects.
  • Regulatory application: bioavailability-based models (e.g., Biotic Ligand Models, Equilibrium Partitioning) allow site-specific risk limits that account for water chemistry and soil properties, improving prioritization of cleanup efforts.

🔬 The three-process bioavailability framework

🔬 Process 1: Chemical availability

Chemical availability: the fraction of the total concentration of chemicals present in an environmental compartment that contributes to the exposure of an organism.

  • The total concentration in soil or sediment is not the same as the exposure concentration.
  • A smaller or larger fraction may be bound to:
    • Organic matter
    • Clay particles
    • Influenced by cations and pH
  • These binding processes reduce the fraction that can interact with organisms.
  • Example: high organic carbon in soil binds hydrophobic chemicals strongly, lowering the available fraction.

🔬 Process 2: Toxicokinetics (uptake)

  • Describes the actual or potential uptake of the substance.
  • Reflects how the concentration on and in the organism develops over time.
  • This is the movement across cell membranes and into tissues.
  • Don't confuse: this is not yet about effects—it's about how much gets inside.

🔬 Process 3: Toxicodynamics (internal distribution and action)

  • Describes the internal distribution leading to interaction at the cellular site of toxicity.
  • Sometimes called "toxico-availability."
  • Includes biochemical and physiological processes resulting from the chemical at the site of action.
  • This is where effects actually occur.

⏱️ Time dynamics and kinetics

⏱️ Variable timescales

  • Kinetics are involved in all three processes.
  • Timeframes range from very brief (less than seconds) to very long (hundreds of years).
  • Some fractions in soil or sediment may never contribute to transport during an organism's lifespan.

⏱️ Why bioavailability is dynamic

  • Different fractions have different desorption kinetics.
  • Fast-desorbing fractions become available quickly (hours to days).
  • Slow-desorbing fractions remain bound for months or years.
  • The relevant bioavailability metric depends on the exposure duration and organism lifespan.
  • Example: a pollutant tightly bound to sediment particles may pose little risk to short-lived invertebrates but could affect long-lived fish if it slowly releases over years.

🧪 Assessing bioavailability of organic chemicals

🧪 Freely dissolved concentration (C_free)

Freely dissolved concentration: the equilibrium aqueous concentration representing what organisms "see" under equilibrium exposure.

  • Most hydrophobic chemicals remain sorbed to solids; only a small fraction is in the water phase.
  • C_free is the "tip of the iceberg"—a small visible fraction connected to a large sorbed pool.
  • Determined by the equilibrium between sorption and desorption.
  • Connected to the sorbed concentration through a partitioning coefficient.

🧪 Passive sampling methods

  • Use polymer-coated fibers or sheets (e.g., polydimethylsiloxane, polyethylene).
  • The sampler establishes sorption equilibrium with the aqueous phase.
  • The pollutant enriches in the sampler, allowing indirect determination of very low aqueous concentrations.
  • Key requirement: non-depletive—the sampler must not alter the solid-water equilibrium.
  • Equilibrium may take days or weeks.
  • C_free is calculated from the concentration in the polymer and the polymer-to-water partitioning coefficient.
  • Well-suited for contaminated sediments; already used in regulatory assessments.
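
A minimal sketch of the back-calculation from sampler to water; the polymer concentration and partition coefficient are illustrative:

```python
# Freely dissolved concentration from an equilibrium passive sampler:
# C_free = C_polymer / K_polymer_water.

C_polymer = 250.0          # ug of chemical per L of polymer at equilibrium (illustrative)
K_polymer_water = 5.0e4    # polymer-to-water partition coefficient, L/L (illustrative)

C_free = C_polymer / K_polymer_water
print(f"C_free = {C_free * 1000:.1f} ng/L")    # 5.0 ng/L
```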

🧪 Fast-desorbing fraction (F_fast)

  • Relevant when biological uptake is rapid and equilibrium is never achieved.
  • Represents the bioaccessible fraction—the portion that can desorb quickly (hours to days).
  • Determined using "infinite sink" methods that trap desorbed chemicals, maintaining near-zero aqueous concentration.
  • Common materials: Tenax (sorptive resin) or cyclodextrin (solubilizing agent).
  • A two-compartment kinetic model describes fast and slow desorption fractions and their rate constants.
  • A 20-hour extraction is often used as a practical approximation, though the time needed varies by chemical and soil type.
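
A minimal sketch of the two-compartment desorption model; the fractions and rate constants are illustrative:

```python
# Two-compartment desorption model often fitted to Tenax/cyclodextrin data:
# fraction still sorbed S(t)/S0 = F_fast * exp(-k_fast t) + F_slow * exp(-k_slow t).
import math

F_fast, k_fast = 0.40, 0.5     # fast fraction and its rate constant (1/h), illustrative
F_slow, k_slow = 0.60, 0.005   # slow fraction and its rate constant (1/h), illustrative

for t in (0, 5, 20, 100):      # hours
    remaining = F_fast * math.exp(-k_fast * t) + F_slow * math.exp(-k_slow * t)
    print(f"t = {t:4d} h: {100 * (1 - remaining):5.1f}% desorbed")
# By ~20 h the fast fraction has essentially desorbed, which is why a 20-h
# extraction is used as a practical proxy for F_fast.
```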

🧪 Assessing bioavailability of metals

🧪 Why total metal concentration is insufficient

  • Fate, mobility, uptake, and toxicity are highly determined by metal speciation.
  • Speciation describes partitioning among various chemical forms.
  • Chemical extraction methods are indicative but ignore dynamics and biological processes.
  • Nevertheless, they are preferred over total concentrations for risk prediction.

🧪 Extraction methods for metals

| Method | What it measures | Interpretation |
| --- | --- | --- |
| Porewater extraction | Readily available fraction | Best approximates what organisms experience directly |
| Water extraction | Immediately available (soil solution) | Dilutes porewater, may affect equilibria |
| Diluted salts (e.g., 0.01 M CaCl₂) | Easily available or exchangeable fraction | Good predictor of uptake in plants and invertebrates |
| Chelating agents (EDTA, DTPA) | Plant-available fraction | Simulates root exudates; varies by plant species |
| Diluted acids (e.g., 0.43 M HNO₃) | Potentially available (long-term) | Estimates geochemically active pool; correlates with oral bioaccessibility |
| Sequential extraction (Tessier) | Binding to different soil components | Five fractions from exchangeable to residual, indicating increasing binding strength |
| Passive sampling (DGT) | Diffusion-available fraction | Metals diffuse through gel to resin; requires moist soil |

🧪 Porewater and dilute salt extractions

  • Porewater extraction: centrifuge and filter (0.45 or 0.22 μm) to obtain soil solution.
    • Filtration does not remove all complexes.
    • Correlates well with metal uptake and toxicity when corrected for pH.
  • 0.01 M CaCl₂ extraction: widely accepted in soil ecotoxicology.
    • Unbuffered, so does not interfere with soil pH.
    • Example: lead toxicity to enchytraeid worms showed much better correlation when expressed as CaCl₂-extractable concentration than as total concentration.

🧪 Sequential extraction (Tessier method)

  • Uses a series of five extraction solutions to determine binding strength and location:
    1. Exchangeable: weakly bound, most available
    2. Carbonate-bound: released under acidic conditions
    3. Fe/Mn oxide-bound: released under reducing conditions
    4. Organic matter-bound: released by oxidation
    5. Residual: tightly bound in mineral lattice, least available
  • Indicates both where metals are bound and how strongly.
  • Also adapted for human bioaccessibility (gut passage simulation).

🧪 DGT (Diffusive Gradients in Thin films)

  • A resin (Chelex) with high metal affinity is covered by a diffusive gel and membrane.
  • Placed in contact with moist soil.
  • Metals diffuse from porewater through the gel to the resin.
  • Porewater concentration is calculated from accumulated metal, gel thickness, and contact time.
  • Limitation: requires sufficiently moist soil for effective diffusion.
  • Better suited for assessing availability to plants than to invertebrates not in continuous contact with soil solution.
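
A minimal sketch of the standard DGT back-calculation, C = M·Δg/(D·A·t), with illustrative deployment values:

```python
# Back-calculating the porewater concentration from a DGT device:
# C_DGT = M * dg / (D * A * t). Numbers are illustrative.

M = 0.05e-6        # mass of metal accumulated on the resin, g
dg = 0.094         # diffusive gel + membrane thickness, cm
D = 6.0e-6         # diffusion coefficient of the metal in the gel, cm2/s
A = 3.14           # exposure window area, cm2
t = 24 * 3600      # deployment time, s (24 hours)

C = M * dg / (D * A * t)               # g/cm3
print(f"C_DGT = {C * 1e9:.2f} ug/L")   # 1 g/cm3 = 1e9 ug/L; ~2.9 ug/L here
```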

🎯 Application in risk assessment

🎯 Bioavailability-corrected risk limits

  • Generic quality standards based on total concentrations may be overly strict or insufficiently protective.
  • Bioavailability models account for site-specific chemistry (pH, dissolved organic carbon, etc.).
  • Example: copper in different water types (Dutch case study):
    • Generic standard: 1.5 μg/L total dissolved copper.
    • Bioavailability-corrected HC5 (hazardous concentration for 5% of species) varied widely:
      • Sandy springs and large rivers: ~7–10 μg/L (low DOC, more bioavailable)
      • Streams/brooks: ~74 μg/L (high DOC, less bioavailable)
      • Canals/lakes and ditches: intermediate values
  • High DOC and neutral-to-basic pH increase binding to organic matter, reducing bioavailability.

🎯 Prioritizing cleanup

  • Water-type-specific or soil-type-specific risk limits help identify priority sites.
  • Sites with the same total concentration may have very different risks depending on chemistry.
  • Example: a stream with high copper but also high DOC may pose less risk than a river with lower copper but low DOC.
  • Don't confuse: even with bioavailability correction, extreme conditions (drought, heavy rain) can shift chemistry and increase risk, so generic standards remain as a safety margin.

🎯 Regulatory tools

  • Biotic Ligand Models (BLMs): for inorganic contaminants; account for competition at uptake sites and complexation in solution.
  • Equilibrium Partitioning (EqP): for organic chemicals; assumes equilibrium between phases.
  • ISO standards exist for several extraction methods (e.g., leaching tests, ammonium nitrate extraction, dilute nitric acid extraction).

💡 Practical illustration: Iron deficiency in humans

💡 Exposure vs. uptake

  • Iron deficiency occurs when the body lacks sufficient iron for hemoglobin and cytochrome P450 enzymes.
  • Doctors prescribe iron supplements and iron-rich foods (red meat, spinach).
  • Higher intake does not guarantee higher uptake—bioavailability matters.

💡 Factors reducing iron bioavailability

  • Calcium ions (in milk): compete with iron for the same uptake sites in the intestine.
  • Carbonates and caffeine: bind iron strongly, reducing availability for absorption.
  • Phytate (in vegetables): also binds iron, lowering uptake.
  • Advice: avoid milk or caffeinated drinks when consuming iron-rich products.
  • This illustrates that the chemical form and presence of competing or binding agents control bioavailability, not just the amount present.

Degradation

3.7. Degradation

🧭 Overview

🧠 One-sentence thesis

Chemical, photochemical, and biological degradation processes together determine how quickly organic pollutants are removed from the environment, with biodegradation often being the dominant removal mechanism but highly dependent on both chemical structure and environmental conditions.

📌 Key points (3–5)

  • Three main degradation pathways: abiotic chemical reactions (hydrolysis, redox), photochemical reactions (direct and indirect), and biodegradation by microorganisms—each pathway dominates in different environmental compartments.
  • Structure determines degradability: branched, aromatic, and halogenated structures degrade more slowly; linear, aliphatic structures and certain functional groups (esters, carbamates) degrade faster.
  • Biodegradation vs biotransformation distinction: microorganisms mineralize chemicals to CO₂ as part of nutrient/energy cycles, whereas higher organisms biotransform chemicals for detoxification and excretion without using them as nutrients.
  • Common confusion—intrinsic vs environmental rates: there is no single "intrinsic biodegradation rate" for a chemical; rates vary widely with microbial community, exposure history (adaptation), and environmental conditions (aerobic vs anaerobic, pH, redox).
  • Testing strategy: standardized tiered tests identify "readily biodegradable" chemicals (rapid removal under stringent conditions) versus persistent chemicals requiring further simulation tests under environmentally relevant conditions.

🧪 Chemical and photochemical degradation processes

💧 Hydrolysis reactions

Hydrolysis: reactions with water to break a bond, producing an acid and either an alcohol or amine as products.

  • The name means using water (hydro-) to break (-lysis) a bond.
  • Particularly important for chemicals containing acid derivatives as functional groups: organophosphate and carbamate pesticides (parathion, diazinon, aldicarb, carbaryl), organophosphate flame retardants.
  • pH dependence: hydrolysis can be catalyzed by either OH⁻ or H⁺ ions, so rates vary strongly with pH.
  • Halogenated organic molecules may also hydrolyze to form alcohols (releasing halide ions), but rates are generally too slow except for tertiary organohalogens and secondary organohalogens with Br and I.

Example: An ester pesticide in neutral water hydrolyzes slowly, but the same compound hydrolyzes much faster in alkaline conditions (high pH) due to OH⁻ catalysis.
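
To make the pH dependence concrete: the observed hydrolysis rate is commonly written as a pseudo-first-order sum of acid-catalyzed, neutral, and base-catalyzed terms, k_obs = k_A·[H⁺] + k_N + k_B·[OH⁻]. The sketch below applies that standard relationship; the rate constants and the "ester-like" compound are illustrative assumptions, not values from the text.

```python
import math

def hydrolysis_half_life(pH, k_acid, k_neutral, k_base):
    """Pseudo-first-order hydrolysis half-life (days) at a given pH.

    k_obs = k_acid*[H+] + k_neutral + k_base*[OH-]
    k_acid, k_base in 1/(M*day); k_neutral in 1/day (illustrative values).
    """
    h = 10.0 ** (-pH)          # proton concentration (M)
    oh = 1.0e-14 / h           # hydroxide concentration (M), Kw at 25 degC
    k_obs = k_acid * h + k_neutral + k_base * oh
    return math.log(2) / k_obs

# Hypothetical ester-like compound: base catalysis dominates at high pH
for pH in (4, 7, 9):
    t_half = hydrolysis_half_life(pH, k_acid=5.0, k_neutral=1e-3, k_base=5e6)
    print(f"pH {pH}: half-life ~ {t_half:.2f} days")
```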

⚡ Redox reactions

Redox reactions: reduction and oxidation reactions that transfer electrons; important for contaminant transformation in both aerobic and anaerobic environments.

Oxidation reactions:

  • Thermodynamically favorable in the presence of oxygen, but occur at insignificant rates unless oxygen is activated (as radicals/peroxides) or the reaction is catalyzed by transition metals or enzymes.

Reduction reactions:

  • Important in anaerobic environments (sediment, groundwater aquifers).
  • Affect chemicals with reducible functional groups: carboxylic acids, nitro groups.
  • Reductive dehalogenation: organohalogens undergo reduction where halogen substituents are replaced by hydrogen; the electrons can come from inorganic donors (e.g., oxidation of Fe(II) minerals) or biochemical donors (oxidation of organic chemicals).
  • Natural organic matter often acts as a catalyst enhancing electron transfer.

Example: Hexachlorobenzene in anaerobic sediment undergoes reductive dehalogenation, progressively losing chlorine atoms to form less chlorinated, less hydrophobic products.

Don't confuse: Biological processes are indirectly involved because microbial activity determines the environmental redox conditions that control which redox reactions can occur.

☀️ Photodegradation mechanisms

Photodegradation: chemical reactions initiated by sunlight energy, particularly important in the atmosphere and surface waters.

Where it occurs:

  • Atmosphere: most important; well-known examples include CFC reactions damaging the ozone layer and hydrocarbon oxidations generating smog.
  • Aquatic surface: at the water surface or in the top layer of clear water; light penetration is reduced by particles (scattering) and dissolved organic matter (absorption).
  • Soil and ice: some evidence for pesticides and chemicals in sewage sludge amendments, and for chemicals in polar ice, but significance is unclear.

Direct photodegradation:

  • Aromatic compounds and chemicals with unsaturated bonds absorb sunlight and become excited (energized).
  • Leads to bond cleavage (especially C-halogen bonds) producing radical species.
  • Radicals are highly reactive: they abstract H or OH from water to form C-H or C-OH bonds, or combine with each other to form larger molecules.

Indirect photodegradation:

  • Organic chemicals react with photochemically produced radicals, especially reactive oxygen species (OH radicals, ozone, singlet oxygen).
  • These reactive species exist at very low concentrations but are so reactive they contribute significantly to removal.
  • OH radicals are the most important: they abstract hydrogen atoms from organic molecules or react with unsaturated bonds (alkenes, aromatics) to produce hydroxylated products.
  • In water, natural organic matter absorbs light and participates in indirect reactions; nitrogen oxides and iron complexes may also be involved.

Example: Triclosan in the North Sea is removed by photolysis; oil spills are degraded by photodegradation, which favors longer-chain alkanes compared to biodegradation (which preferentially attacks linear and small alkanes).

🦠 Biodegradation processes

🔄 Biodegradation vs biotransformation

| Term | Who does it | Purpose | Outcome |
|---|---|---|---|
| Biodegradation | Microorganisms | Obtain nutrients and energy | Potentially complete mineralization to CO₂ (inorganic end products) |
| Biotransformation | Higher organisms | Detoxification; facilitate excretion | Conversion to more polar/ionizable products for excretion; costs energy (ATP, NADH, NADPH) |

  • Microorganisms have broad degradative capacity because they play a role in natural biogeochemical cycles, degrading most (perhaps all) naturally occurring organic chemicals in organic matter.
  • This capacity extends to many anthropogenic chemicals.
  • Higher organisms do not take up pollutants as nutrients; biotransformation attaches polar/ionizable units to make compounds more water-soluble and excretable via kidneys/urine, usually rendering them less toxic.
  • "Biotransformation" is sometimes also used for microbial conversions that produce a new product without complete mineralization.

🧬 Enzymatic reactions and kinetics

Enzyme catalysis:

  • Biodegradation is enzymatically catalyzed, so rates should theoretically follow Michaelis-Menten kinetics (or Monod kinetics if microbial growth is considered).
  • In practice, first-order kinetics are often used because environmental concentrations of chemicals are much lower than the half-saturation concentrations of degrading enzymes—this simplification is justified and more convenient.
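
A minimal sketch of why the first-order simplification holds: at concentrations far below the half-saturation constant K_m, the Michaelis-Menten rate reduces to k·C with k = v_max/K_m. All parameter values below are illustrative assumptions.

```python
def michaelis_menten_rate(C, v_max, K_m):
    """Enzymatic degradation rate (e.g., mg/L/day) from Michaelis-Menten kinetics."""
    return v_max * C / (K_m + C)

def first_order_rate(C, k):
    """First-order approximation: rate = k * C, valid when C << K_m."""
    return k * C

v_max, K_m = 2.0, 5.0              # illustrative enzyme parameters
k = v_max / K_m                    # equivalent first-order rate constant at low C

for C in (0.01, 0.1, 1.0, 10.0):   # concentrations in mg/L
    mm = michaelis_menten_rate(C, v_max, K_m)
    fo = first_order_rate(C, k)
    print(f"C = {C:5.2f} mg/L: Michaelis-Menten = {mm:.4f}, first-order = {fo:.4f}")
```

The two rates agree closely at the low concentrations typical of the environment and only diverge once C approaches K_m.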

Measuring biodegradation:

  • Either follow the concentration of the chemical (requires analytical methods, not always available in routine labs) or follow conversion to end products (O₂ consumption or CO₂ production).
  • Measuring CO₂ production is straightforward but must account for CO₂ from other sources (soil or dissolved organic matter); using ¹⁴C-labeled chemicals solves this but requires special facilities.
  • Advantage of CO₂ measurement: quantitative conversion to CO₂ means no concern about toxic metabolite accumulation.

🏗️ Structural effects on biodegradability

General patterns (aerobic environment):

| More biodegradable | Less biodegradable |
|---|---|
| Linear alkanes < C₁₂ | Linear alkanes > C₁₂ |
| Linear chain | Branched chain |
| Aliphatic | Aromatic |
| -C-C-C- linkage | -C-O-C- linkage |
| Substituents: -OH, -CO₂H, -NH₂ | Substituents: -F, -Cl, -NO₂, -OCH₃, -CF₃ |
| Cl more than 6 carbons from terminal C | Cl less than 6 carbons from terminal C |

  • Branched hydrocarbon structures degrade more slowly than linear structures.
  • Cyclic and especially aromatic chemicals degrade more slowly than aliphatic (non-aromatic) chemicals.
  • Halogens and other electron-withdrawing substituents have strongly negative effects.
  • Not surprising: the list of persistent organic pollutants (POPs) is dominated by organohalogen compounds, particularly those with aromatic or alicyclic structures.

Don't confuse: These are general trends; actual rates depend heavily on environmental conditions and microbial community composition—there is no intrinsic biodegradation rate for a chemical.

🔄 Adaptation and acclimation

Adaptation (acclimation): the phenomenon where biodegradation rates increase over time following long-term exposure of microbial communities to a chemical.

  • Often observed following repeated pesticide application at the same location.
  • Example: Atrazine degradation rates increase with longer exposure; L-GLDA (a builder in cleaning products) shows shorter lag phases in wastewater treatment plants from regions where the product has been available longer.

Mechanisms:

  • Shifts in composition or abundances of species in a bacterial community.
  • Mutations within single populations.
  • Horizontal transfer of DNA.
  • Genetic recombination events.
  • Or combinations of these.

Example: Activated sludge from regions where L-GLDA was not yet on the market required long lag times before degradation started, whereas sludge from regions with several months of L-GLDA exposure required shorter lag phases.

⚙️ Biodegradation reactions and pathways

🔥 Aerobic oxidation reactions

Oxygenases—activation step:

  • Conversion of an organic chemical to CO₂ is an overall oxidation reaction, so oxidation reactions involving molecular oxygen are the most important.
  • Oxygenation is often the first essential step, converting relatively stable molecules to more reactive intermediates.
  • Particularly important for aromatic chemicals: oxygenation is required to make aromatic rings susceptible to ring cleavage and further degradation.

Two classes of oxygenases:

| Enzyme type | Reaction | Where found | Example pathway |
|---|---|---|---|
| Monooxygenases | One oxygen atom of O₂ reacts with the organic molecule → hydroxylated product | All organisms (e.g., cytochrome P450 family) | Alkane oxidation to carboxylic acids; beta-oxidation shortens linear alkanoic acids in C₂ steps |
| Dioxygenases | Both oxygen atoms of O₂ react with the organic molecule | Unique to microorganisms (bacteria) | Benzene/toluene ring hydroxylation and cleavage; degradation of PAHs and halogenated aromatics |

Example: Dodecanoic acid (a 12-carbon alkanoic acid) is shortened to decanoic acid (10-carbon) via beta-oxidation, involving Coenzyme-A.

🌑 Anaerobic degradation reactions

Oxidation without oxygen:

  • Other oxidants (nitrate, sulfate, Fe(III)) present in sufficiently high concentrations can act as terminal electron acceptors supporting microbial growth.
  • Activation relies on different reactions: carboxylation or addition of fumarate are the most important.

Example: Naphthalene degradation under sulfate-reducing conditions in sediment involves two initial mechanisms: (a) carboxylation to naphthoic acid, or (b) methylation to 2-methylnaphthalene followed by fumarate addition, both leading to 2-naphthoic acid, which is then oxidized to CO₂.

Reductive reactions:

  • Important in anaerobic ecosystems (sediments, groundwater plumes).
  • Affect functional groups: reduction of acids to aldehydes to alcohols; nitro groups to amino groups.
  • Reductive dehalogenation: substitution of halogens by hydrogen—particularly important for highly chlorinated chemicals resistant to oxidative biodegradation.
  • Converts highly chlorinated chemicals to less chlorinated products more amenable to aerobic biodegradation.
  • Examples: tetrachloroethene-contaminated groundwater (from dry-cleaning), PCB-contaminated sediment.
  • Some microorganisms harvest energy from these exothermic reactions to support growth—a form of respiration called chlororespiration.

🧪 Hydrolysis in biodegradation

  • Hydrolyses are important reactions in biodegradation pathways, particularly for chemicals that are derivatives of organic acids: carbamate, ester, and organophosphate pesticides.
  • Often the first step in biodegradation.
  • Similar to abiotic hydrolysis reactions described earlier.

📋 Degradation test methods

🎯 Strategy and purpose of standardized testing

Why standardized tests:

  • Approaches range from highly controlled laboratory experiments to environmental monitoring studies, but standardized protocols have clear advantages: they can be run in many laboratories, enjoy broad scientific and regulatory acceptance, and allow comparison across chemicals.
  • OECD test guidelines are the most important set; cover physical-chemical properties, bioaccumulation, toxicity, and environmental fate.
  • Developed internationally, extensively validated in different laboratories, ensuring wide acceptance.

Three distinct regulatory purposes (EU example):

  • Classification and labeling.
  • Hazard/persistence assessment.
  • Environmental risk assessment.

🧴 Chemical degradation tests

OECD Test 111: Hydrolysis as a Function of pH

  • Measures hydrolytic transformations in aquatic systems at environmental pH values (pH 4–9).
  • Sterile aqueous buffer solutions at different pH (4, 7, 9) containing test substance (radio-labeled or unlabeled, below saturation) incubated in the dark at constant temperature.
  • First tier (preliminary): 5 days at 50°C.
  • Second tier: studies unstable substances and identifies hydrolysis products; may extend to 30 days.

OECD Test 316: Phototransformation of Chemicals in Water - Direct Photolysis

  • Measures direct photolysis rate constants using xenon arc lamp (simulating natural sunlight, 290–800 nm) or natural sunlight.
  • Extrapolated to natural water.
  • If estimated losses ≥ 20%, transformation pathway and identities/concentrations/rates of major transformation products are identified.

Note: Sterilized controls may be used in biodegradability tests to determine the contribution of chemical degradation.

🦠 Biodegradability testing—tiered approach

Overall strategy:

  • All chemicals subjected to screening tests (ready biodegradability tests) to identify chemicals that are "readily biodegradable" (removed rapidly from WWTPs and the environment).
  • Chemicals failing screening tests may undergo higher tier tests (inherent biodegradability or simulation tests) under more favorable or environmentally representative conditions.

Ready biodegradability tests (Tier 1):

| Test | Parameter measured | Pass criterion example |
|---|---|---|
| OECD 301A: DOC Die-away | Dissolved organic carbon (DOC) | |
| OECD 301B: CO₂ evolution | CO₂ production | |
| OECD 301C: Modified MITI(I) | O₂ consumption | |
| OECD 301D: Closed bottle | O₂ consumption | 60% of theoretical O₂ consumption within 28 days |
| OECD 301E: Modified OECD screening | DOC | |
| OECD 301F: Manometric respirometry | O₂ consumption | |
| OECD 306: Seawater | DOC | |
| OECD 310: Headspace test | CO₂ in sealed vessels | |

Design features (stringent conditions):

  • Originally developed for surfactants.
  • Use activated sludge from WWTPs as microbial source (since wastewater treatment is a major conduit of chemical emissions).
  • Low bacterial concentrations and test chemical as the only carbon/energy source at high concentrations.
  • Assumption: chemicals showing rapid biodegradation under these unfavorable conditions will always degrade rapidly under environmental conditions.
  • Biodegradation is determined as conversion to CO₂ (mineralization), the most desirable outcome, by directly measuring CO₂ production, O₂ consumption, or DOC removal.

Inherent biodegradability tests (Tier 2):

  • Performed under more favorable conditions for chemicals that fail ready biodegradability tests but may still degrade slowly.
  • Examples: OECD 302A (Modified SCAS), 302B (Zahn-Wellens), 302C (Modified MITI(II)).

Simulation tests (Tier 3):

  • Designed to represent environmental conditions in specific compartments: redox potential, pH, temperature, microbial community, test substance concentration, other substrates.
  • Examples: OECD 303A/B (activated sludge/biofilms), 304A (soil), 307 (aerobic/anaerobic soil transformation), 308 (aquatic sediment), 309 (surface water), 311 (anaerobic digested sludge), 314 (wastewater simulation).

⚠️ Issues and limitations

Practical difficulties:

  • Volatile or poorly soluble chemicals are difficult to test.

Variability:

  • For some chemicals, results are highly variable—usually attributed to the source of microorganisms (activated sludge from different WWTPs).
  • Wide variability in degradation speed likely results from different exposure concentrations, exposure periods, and dependence on small populations of degrading microorganisms that may not always be present in sludge samples.
  • Not dealt with systematically in testing protocols.

Suggestions for improvement:

  • Preliminary exposure period to allow sludge to adapt.
  • Use higher, more environmentally relevant concentrations of activated sludge as inoculum.

Interpreting failure:

  • Failure to pass ready biodegradability tests does not necessarily mean the chemical is persistent; slow biodegradation may still occur.
  • Higher tier tests determine whether biodegradation contributes significantly to removal.

📊 Classification criteria

PBT and vPvB classification:

| Property | PBT criteria | vPvB criteria |
|---|---|---|
| Persistence | T₁/₂ > 60 days (marine water), > 40 days (fresh/estuarine water), > 180 days (marine sediment), > 120 days (fresh/estuarine sediment), or > 120 days (soil) | T₁/₂ > 60 days (marine/fresh/estuarine water), > 180 days (marine/fresh/estuarine sediment), or > 180 days (soil) |
| Bioaccumulation | BCF > 2000 L/kg | BCF > 5000 L/kg |
| Toxicity | NOEC < 0.01 mg/L (aquatic organisms), or classified as carcinogenic/mutagenic/toxic for reproduction, or other evidence of chronic toxicity | (no toxicity criterion) |
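
As a rough illustration of how the criteria in the table can be applied, the sketch below screens a chemical against a simplified subset (freshwater half-life only); a real assessment checks every compartment and the CMR/chronic-toxicity evidence mentioned above. The function and its inputs are hypothetical.

```python
def classify_pbt(half_life_fw_days, bcf_l_per_kg, noec_mg_per_l):
    """Toy PBT/vPvB screen using freshwater half-life, BCF, and aquatic NOEC only."""
    P  = half_life_fw_days > 40     # persistent (fresh/estuarine water criterion)
    vP = half_life_fw_days > 60     # very persistent (water criterion)
    B  = bcf_l_per_kg > 2000
    vB = bcf_l_per_kg > 5000
    T  = noec_mg_per_l < 0.01
    if vP and vB:
        return "vPvB"
    if P and B and T:
        return "PBT"
    return "not PBT/vPvB on these criteria"

print(classify_pbt(half_life_fw_days=90, bcf_l_per_kg=6000, noec_mg_per_l=0.5))    # vPvB
print(classify_pbt(half_life_fw_days=50, bcf_l_per_kg=3000, noec_mg_per_l=0.005))  # PBT
```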

Weight of evidence approach:

  • As well as standardized test results, other data can be considered: environmental monitoring data, microbiology studies.

Use in fate modeling:

  • Results sometimes used to derive input data for environmental fate models, but not always straightforward—need to account for other processes (e.g., partitioning) when transferring data from multi-compartment test systems to individual compartment degradation rates.
14

Modelling exposure

3.8. Modelling exposure

🧭 Overview

🧠 One-sentence thesis

Multicompartment mass balance modeling predicts chemical concentrations in the environment by applying the universal conservation principle—that the rate of change equals inputs minus outputs—to environmental compartments, enabling regulatory risk assessment and understanding of chemical fate.

📌 Key points (3–5)

  • Core principle: Mass balance modeling uses the universal conservation equation (rate of change = inputs − outputs) applied to environmental compartments (air, water, soil, sediment, biota).
  • Steady state vs equilibrium: Steady state means concentrations stop changing (inputs = outputs); thermodynamic equilibrium means chemical potential is equal across all compartments (no driving force for flow).
  • Complexity levels: Mackay's approach uses four levels—Level I (equilibrium, no flows), Level II (steady state + equilibrium), Level III (steady state, unequal fugacities, flow resistance), Level IV (time-dependent, transient solutions).
  • Common confusion: Equilibrium ≠ steady state; a system can be at steady state with unequal chemical potentials (Level III) or at equilibrium with equal potentials (Level I/II).
  • Why it matters: These models are essential regulatory tools (e.g., REACH in Europe) and explain phenomena like long-range transport of persistent pollutants to the Arctic.

🧮 The mass balance equation

🧮 Universal conservation principle

The governing principle is that the rate of change (of any entity, in any system) equals the difference between the sum of all inputs (of that entity) to the system and the sum of all outputs from it.

  • Expressed mathematically: dm/dt = input − output, where dm/dt is the rate of change of mass over time.
  • This principle applies universally, not just to chemicals—the excerpt uses a "leaking bucket" analogy: water height in a bucket depends on inflow and outflow rates.
  • Example: A lake receiving constant chemical emissions will accumulate mass until degradation and outflow balance the input.

🧪 One-compartment case

  • For a single compartment (e.g., a lake), the equation becomes: dm/dt = I − k·m
    • I = constant emission rate (kg/s)
    • k = first-order rate constant for loss (degradation, outflow)
    • m = mass of chemical in the lake (kg)
  • At steady state (dm/dt = 0), the mass reaches a predictable maximum: m_steady = I / k
  • Intuitive result: Doubling emissions doubles steady-state mass; faster degradation (higher k) lowers steady-state mass.
  • The general solution shows mass increases exponentially from initial value m₀ to the steady-state level.
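
A minimal sketch of this one-compartment solution, with illustrative emission and loss rates; it shows the exponential approach from the initial mass m₀ toward the steady-state mass I/k.

```python
import math

def lake_mass(t, I, k, m0=0.0):
    """Analytical solution of dm/dt = I - k*m.

    I : constant emission rate (kg/day); k : first-order loss rate constant (1/day).
    """
    m_ss = I / k                                   # steady-state mass (kg)
    return m_ss + (m0 - m_ss) * math.exp(-k * t)

I, k = 10.0, 0.05                                  # illustrative values
print(f"steady-state mass: {I / k:.0f} kg")
for t in (0, 10, 30, 60, 120):
    print(f"t = {t:3d} d: m = {lake_mass(t, I, k):6.1f} kg")
```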

🔗 Multi-compartment systems

  • "Multi" means more than one compartment—typically air, water, soil, sediment, biota; advanced models may use hundreds.
  • Each compartment has its own mass balance equation; chemical can flow between compartments (one compartment's output becomes another's input).
  • All flows are characterized by first-order rate constants k_i,j (flow from compartment i to j).
  • At steady state, all compartments balance: inputs = outputs for each compartment.

🧰 Solving multi-compartment models

🧰 Algebraic approach

  • A system of n compartments yields n linear equations with n unknowns (the steady-state masses).
  • Manual solution becomes tedious beyond two equations (though one PhD student famously solved 14 by hand!).
  • Linear algebra offers an easier method: rewrite as matrix equation M̅ = K⁻¹ · I̅
    • M̅ = vector of steady-state masses
    • K = model matrix of rate constants
    • I̅ = vector of emission rates
    • K⁻¹ = inverse of the model matrix
  • Spreadsheet software (Excel, LibreOffice Calc, Google Sheets) has built-in functions for matrix inversion and multiplication.
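
A small sketch of the matrix approach for a hypothetical two-compartment (air-water) system; the transfer and degradation rate constants are illustrative assumptions, and numpy.linalg.solve is used rather than inverting K explicitly.

```python
import numpy as np

# Inter-compartment transfer and degradation rate constants (1/day), illustrative only
k_aw, k_wa = 0.02, 0.005          # air -> water, water -> air
k_deg_a, k_deg_w = 0.10, 0.01     # degradation in air, in water

# Steady state: K * m = I, with total losses on the diagonal and gains off-diagonal
K = np.array([
    [k_deg_a + k_aw, -k_wa],
    [-k_aw,          k_deg_w + k_wa],
])
I = np.array([100.0, 0.0])        # emission into air only (kg/day)

m_steady = np.linalg.solve(K, I)  # same result as K^-1 . I
print("steady-state masses (kg): air = %.1f, water = %.1f" % tuple(m_steady))
```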

🧊 Unit World modeling concept

  • Pioneered by US EPA researchers in the late 1970s; refined by Mackay and co-workers.
  • Views the environment as a set of well-mixed chemical reactors (compartments), each representing one medium.
  • Chemical flows between compartments driven by "departure from equilibrium"—differences in chemical potential or fugacity.
  • Mackay's "fugacity approach" became widely known, though most models still use mass as the state variable.
  • Hydraulic analogy: Chemical flow is like water flowing between tanks at different heights; steady state is reached when levels stabilize (though not necessarily equal).

🎚️ Complexity levels (Mackay's framework)

🎚️ Level I: Thermodynamic equilibrium

  • Assumption: No inputs, no outputs; chemical flows freely between compartments without restriction.
  • The system reaches thermodynamic equilibrium: equal chemical potential and fugacity in all media.
  • In the water-tank analogy: water levels are equal in all tanks.
  • Simplest level: Requires only thermodynamic equilibrium constants derived from basic physical properties.
  • Limitation: Unrealistic for real environments with emissions and degradation.

🎚️ Level II: Equilibrium + steady state

  • Assumption: Inputs (emissions) balance outputs (degradation, advection); chemical flows freely between compartments.
  • Steady state develops with thermodynamic equilibrium maintained at all times.
  • Does not require knowledge of mass transfer resistances (assumes resistances are negligible).
  • More complex than Level I (includes degradation/outflow rates) but simpler than Level III.
  • Surprisingly realistic results in many situations despite simplifications.

🎚️ Level III: Steady state with flow resistance

  • Assumption: Flow between compartments experiences resistance; steady state reached with permanent "departure from equilibrium."
  • Fugacities (chemical potentials) are unequal across compartments at steady state.
  • In the water-tank analogy: water rests at different heights in different tanks.
  • Requires detailed knowledge: which compartments receive emissions, degradation rates in each compartment, transfer resistances between compartments.
  • More realistic than Level II; reward for complexity is better accuracy.

🎚️ Level IV: Transient (time-dependent)

  • Assumption: Simulations start with zero chemical (empty tanks) and track how compartments fill over time.
  • Produces time-dependent solutions until steady state is reached.
  • Most realistic representation but requires most detailed knowledge of flows and resistances.
  • Key insight: Indicates time to steady state—how long to clear the environment of persistent chemicals no longer in use.
  • Time-varying states are harder to interpret and not always most informative.

🔄 Don't confuse: Equilibrium vs steady state

| Concept | Definition | Example |
|---|---|---|
| Thermodynamic equilibrium | Equal chemical potential/fugacity in all compartments; no driving force for flow | Level I: water at equal height in all tanks |
| Steady state | Rate of change = 0; inputs = outputs; concentrations constant over time | Level III: water at different heights but not changing |
| Both | Steady state + equilibrium | Level II: equal heights, constant over time |

  • A system can be at steady state without equilibrium (Level III).
  • Equilibrium implies no net flow; steady state allows flows as long as they balance.

🌍 Applications of multimedia models

🌍 Regulatory use

  • Essential tools in regulatory environmental decision-making about chemical substances.
  • Europe (REACH): Chemicals can be registered for marketing only when demonstrated safe; SimpleBox and SimpleTreat models play key roles.
  • National models: ChemCAN (Canada), CalTOX (California), SimpleBox (Netherlands), HAZCHEM (ECETOC), ELPOS (Germany).
  • All serve the same purpose: standardized platforms for evaluating environmental risks from chemical use.

🌍 Scientific research applications

  • GloboPOP (Wania, late 1990s): Global model exploring "cold condensation effect"—why persistent organic pollutants accumulate in the Arctic where they were never used.
  • CliMoChem (Scheringer): Investigated long-range transport of persistent chemicals into Alpine regions.
  • BETR World (MacLeod): Studied global transport of pollutants.
  • Later models became larger, spatially and temporally explicit, used for in-depth fate analysis.

🌍 What models reveal

  • Predict exposure concentrations from emission rates.
  • Explain differences in environmental behavior and fate of chemicals.
  • Assess time scales: how long until steady state? How long to clear persistent chemicals?
  • Example: Understanding why chemicals with no local sources appear in remote regions.

Metal speciation models

🧭 Overview

🧠 One-sentence thesis

Metal speciation models calculate the distribution of chemical species in solution from total concentrations and thermodynamic equilibrium data, enabling prediction of bioavailability, solubility, and chemical behavior without direct measurement.

📌 Key points (3–5)

  • What speciation models do: Calculate species concentrations from total concentrations using thermodynamic equilibrium constants, rather than measuring speciation directly.
  • How they work: Set up mass balance equations for each element; iteratively adjust free ion concentrations until calculated totals match known totals (typically using Newton-Raphson method).
  • Key corrections needed: Equilibrium constants depend on temperature and ionic strength; models must convert constants from standard conditions (25°C, zero ionic strength) to actual conditions.
  • Organic matter complexity: Dissolved Organic Carbon (DOC) is heterogeneous with a continuous range of binding strengths; models like WHAM use discrete binding sites plus diffuse double-layer accumulation.
  • Common confusion: Models calculate equilibrium, but nature is dynamic and never truly at equilibrium; models work best when systems are close to equilibrium.

🧬 Core concepts

🧬 What speciation means

  • Speciation: The distribution of an element among different chemical forms (species) in solution.
  • Example: Total copper may exist as free Cu²⁺, CuOH⁺, Cu(OH)₂, CuCl⁺, CuCl₂, etc.
  • Different species have different properties: solubility, bioavailability, toxicity, mobility.
  • Models allow calculation rather than measurement (chemical analysis or bioassays).

🧬 Input and output

  • Input: Total concentrations of elements, pH, temperature, ionic strength.
  • Output: Concentrations of individual species (free ions, complexes, precipitates).
  • The term "constant" is misleading—equilibrium constants depend on temperature and ionic strength.

⚖️ How speciation models work

⚖️ Equilibrium constants

  • For each reaction, an equilibrium constant can be defined.
  • Example: Cu²⁺ + 4Cl⁻ ⇌ CuCl₄²⁻ has constant β
    • [CuCl₄²⁻] = β × [Cu²⁺] × [Cl⁻]⁴
  • If free ion concentrations are known, complex concentrations are easily calculated.
  • Problem: Free concentrations are usually unknown; only total concentrations are known.

⚖️ Mass balance approach

  • Set up mass balance equations for each element.
  • Example for copper: [total Cu] = [free Cu²⁺] + [CuOH⁺] + [Cu(OH)₂] + [CuCl⁺] + [CuCl₂] + ...
  • Each complex concentration is a function of free ion concentrations.
  • If free ion concentrations are known → can calculate all complexes → can calculate totals.

⚖️ Iterative solution

  • Mass balance equations are non-linear and cannot be solved by rearrangement.
  • Model repeatedly estimates free ion concentrations, adjusting them each loop so calculated totals more closely match known totals.
  • When calculated and known totals agree within defined precision, speciation is solved.
  • Critical part: adjusting free ion concentrations efficiently (usually Newton-Raphson method).
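
A toy version of this loop for a single complex (M + L ⇌ ML) is sketched below. Real speciation programs handle many simultaneous equilibria and usually apply Newton-Raphson updates, but the idea of adjusting free concentrations until calculated totals match the known totals is the same; all numbers are illustrative.

```python
def speciate(M_total, L_total, K, tol=1e-12, max_iter=200):
    """Toy speciation solver for M + L <=> ML with stability constant K.

    Repeatedly re-estimates the free concentrations from the mass balances
    M_T = [M](1 + K[L]) and L_T = [L](1 + K[M]) until they stop changing.
    """
    M_free, L_free = M_total, L_total              # initial guess: all free
    for _ in range(max_iter):
        M_new = M_total / (1.0 + K * L_free)
        L_new = L_total / (1.0 + K * M_new)
        if abs(M_new - M_free) < tol and abs(L_new - L_free) < tol:
            break
        M_free, L_free = M_new, L_new
    ML = K * M_free * L_free
    return M_free, L_free, ML

# Illustrative totals and constant (not a specific real system)
M_free, L_free, ML = speciate(M_total=1e-6, L_total=1e-4, K=1e6)
print(f"free M: {M_free:.2e} M, free L: {L_free:.2e} M, complex ML: {ML:.2e} M")
```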

🌡️ Temperature and ionic strength corrections

🌡️ Temperature effects

  • Equilibrium constants are valid at specific temperatures (standard: 25°C).
  • Conversion to other temperatures requires reaction enthalpy (ΔH) data, which are often unavailable.
  • Van 't Hoff equation: relates constants K₁ and K₂ at temperatures T₁ and T₂ using enthalpy ΔH and gas constant R.
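
A minimal sketch of the van 't Hoff correction, assuming ΔH is constant over the temperature range; the logK and ΔH values are illustrative.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def vant_hoff(logK_ref, delta_H, T_ref=298.15, T=283.15):
    """Correct logK from T_ref to T: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)."""
    lnK = logK_ref * math.log(10) - (delta_H / R) * (1.0 / T - 1.0 / T_ref)
    return lnK / math.log(10)

# Hypothetical exothermic complexation: logK = 6.0 at 25 degC, dH = -20 kJ/mol
print(f"logK at 10 degC: {vant_hoff(6.0, -20e3):.2f}")   # slightly larger than 6.0
```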

🌡️ Ionic strength effects

  • Equilibrium constants are valid at specific ionic strength (standard: zero).
  • Ionic strength (I): Calculated from concentrations (C) and charges (Z) of ions: I = 0.5 × Σ(C × Z²)
  • At non-zero ionic strength, activity (a) deviates from concentration (c): a = γ × c
    • γ = activity coefficient (dimensionless)
  • Many semi-empirical methods exist to calculate γ; none is perfect.

🌡️ Activity coefficient methods

  • Debye-Hückel theory (1923): Assumes ions are point charges; good up to ~0.01 M for 1:1 electrolytes, only ~0.001 M for 2:2 electrolytes.
  • Extensions: Extended Debye-Hückel, Güntelberg, Davies, Bromley, Pitzer, Specific Ion Interaction Theory (SIT).
  • Davies equation (commonly used): log γ = −A × z² × [√I / (1 + √I) − 0.3 × I]
    • z = charge of species
    • Sometimes 0.2 instead of 0.3
  • All extensions start with Debye-Hückel and add empirical correction terms for higher ionic strength.
  • Limitation: Mainly empirical, lacking solid theoretical basis (though Simonin 2017 proposed a theoretically sound method with limited data availability).
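
The ionic strength definition and the Davies equation can be combined in a few lines, as sketched below; the example solution (0.01 M CaCl₂) is an assumption, and A ≈ 0.509 is the Debye-Hückel constant for water at 25°C.

```python
import math

def ionic_strength(ions):
    """I = 0.5 * sum(C_i * z_i^2) over (concentration in M, charge) pairs."""
    return 0.5 * sum(c * z ** 2 for c, z in ions)

def davies_log_gamma(z, I, A=0.509, b=0.3):
    """Davies equation: log10(gamma) = -A * z^2 * (sqrt(I)/(1+sqrt(I)) - b*I)."""
    s = math.sqrt(I)
    return -A * z ** 2 * (s / (1 + s) - b * I)

# Illustrative solution: 0.01 M CaCl2 -> 0.01 M Ca2+, 0.02 M Cl-
I = ionic_strength([(0.01, +2), (0.02, -1)])       # = 0.03 M
gamma_Ca = 10 ** davies_log_gamma(2, I)
print(f"I = {I:.3f} M, activity coefficient of Ca2+ ~ {gamma_Ca:.2f}")
```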

💎 Solubility predictions

💎 Solubility product principle

  • Most salts have limited solubility, important under environmental conditions.
  • Example: CaCO₃ solubility product = 10⁻⁸·⁴⁸
    • When [Ca²⁺] × [CO₃²⁻] > 10⁻⁸·⁴⁸ → CaCO₃ precipitates until product = 10⁻⁸·⁴⁸
    • When [Ca²⁺] × [CO₃²⁻] < 10⁻⁸·⁴⁸ and solid CaCO₃ present → CaCO₃ dissolves until product = 10⁻⁸·⁴⁸
  • Note: Ca and CO₃ refer to free ions, not total concentrations.
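
A minimal check of this precipitation rule, using the CaCO₃ solubility product quoted above and hypothetical free-ion concentrations:

```python
def precipitates(free_ca, free_co3, log_ksp=-8.48):
    """True if the ion product of the free ions (M) exceeds the CaCO3 solubility product."""
    return free_ca * free_co3 > 10 ** log_ksp

# Hypothetical hard water: 2 mM free Ca2+, 10 uM free CO3^2-
print(precipitates(2e-3, 1e-5))   # True: ion product 2e-8 > Ksp ~ 3.3e-9
```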

💎 Dissolved vs free concentrations

  • Example: 10⁻¹³ M solution of Ag₂S leads to precipitation.
    • Free Ag: 6.5×10⁻¹⁵ M; free S: 1.8×10⁻²² M (matches solubility product 10⁻⁵⁰·¹²)
    • Dissolved Ag: 7.1×10⁻¹⁵ M; dissolved S: 3.6×10⁻¹⁵ M
    • Dissolved S is seven orders of magnitude higher than free S due to complexation with protons (HS⁻, H₂S) and Ag.

💎 Amphoteric metals

  • Many metals (e.g., Al) have minimum solubility at moderate pH, dissolving more at both higher and lower pH.
  • Example: Aluminum solubility is minimum around pH 6.2; much higher at both lower and higher pH values.
  • Important for predicting metal mobility and bioavailability across pH gradients.

🌿 Complexation by organic matter

🌿 Why DOC is different

  • DOC (Dissolved Organic Carbon) complexation differs from inorganic or well-defined ligands (acetate, NTA).
  • Reasons:
    • DOC is very heterogeneous; varies by site and isolation method.
    • Shows continuous range of equilibrium constants due to chemical and steric differences.
    • Increased cation binding and ionic strength change electrostatic interactions among functional groups, influencing constants.
    • Electrostatic changes may cause conformational changes in molecules.

🌿 WHAM models (Tipping)

  • Model V (1992), VI (1998), VII (2011): Also known as WHAM; most popular for organic complexation.
  • Assume two binding types:
    1. Specific binding: Chemical bond formation between ion and functional group(s).
    2. Diffuse double layer accumulation: Ions of opposite charge accumulate near molecule without chemical bond (usually cations near negatively charged molecule).

🌿 WHAM structure

  • Distinguishes fulvic acids (FA) and humic acids (HA), treated separately (FA typically most abundant in surface freshwaters).
  • Each class has eight discrete binding sites with a range of acid-base properties.
  • Metals bind monodentate (one site), bidentate (two sites), or tridentate (three sites; Model VI onward).
  • Model VI onward: Each bidentate/tridentate group has three sub-groups, further increasing range of binding strengths.

🌿 Conditional constants

  • Binding constants depend on ionic strength and electrostatic interactions.
  • Conditional constant K_cond = K_int × exp(−z × w)
    • z = charge on organic acid (moles per gram)
    • w = P × √I / (1 + √I), where P is a constant (different for FA/HA and each model), I is ionic strength
  • Conditional constant depends on charge on organic acids and ionic strength.

🌿 Diffuse double layer calculations

  • Usually negatively charged, populated by cations for electric neutrality.
  • Volume calculated separately for each acid type: V_DDL = (N_Av / M) × (4π/3) × [(r + 1/κ)³ − r³]
    • N_Av = Avogadro's number
    • M = molecular weight
    • r = radius (0.8 nm for FA, 1.72 nm for HA)
    • κ = Debye-Hückel parameter (depends on ionic strength)
  • "Tricks" limit DDL volume to 25% of total to avoid artifacts at low ionic strength and high organic content.
  • Concentration in DDL depends on bulk concentration and charge; calculated iteratively to ensure electrical neutrality.
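
A sketch of the V_DDL formula above; the fulvic-acid radius comes from the text, while the molecular weight and the Debye-Hückel parameter κ ≈ 3.29×10⁹·√I m⁻¹ (water, 25°C) are assumptions added for illustration.

```python
import math

N_AV = 6.022e23   # Avogadro's number (1/mol)

def ddl_volume(M, r_nm, I):
    """Diffuse double layer volume (L per gram of humic material).

    V_DDL = (N_Av / M) * (4*pi/3) * ((r + 1/kappa)^3 - r^3), with kappa the
    Debye-Hueckel parameter; here kappa = 3.29e9 * sqrt(I) 1/m (water, 25 degC).
    """
    r = r_nm * 1e-9                              # radius in m
    kappa = 3.29e9 * math.sqrt(I)                # 1/m
    shell = (4 * math.pi / 3) * ((r + 1 / kappa) ** 3 - r ** 3)   # m3 per molecule
    return (N_AV / M) * shell * 1e3              # m3/g -> L/g

# Fulvic acid: r = 0.8 nm (from the text), assumed M ~ 1500 g/mol, I = 0.01 M
print(f"V_DDL ~ {ddl_volume(M=1500, r_nm=0.8, I=0.01):.3f} L/g")
```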

🔬 Applications

🔬 Laboratory situations

  • Understanding chemical behavior: Predict pH changes when adding chelators (e.g., EDTA salts produce different pH values).
  • Redox predictions: Use measured redox potential to predict Fe(II) vs Fe(III), Cu oxidation states, etc.—important because different oxidation states have different bioavailability and chemical behavior.
  • Phase reactions: Predict dissolution (e.g., carbonate dissolution due to CO₂), precipitation (e.g., Dutch Standard Water oversaturated with CaCO₃—small precipitate causes pH shift of 0.22).
  • Explaining biological differences: Compare speciation changes across pH ranges to identify likely causes of organism responses.
    • Example: Boron speciation changes only between pH 8–10.5, so unlikely to cause differences between pH 7–8.
    • Example: Copper speciation does change between pH 7–8, more likely cause of biological differences.

🔬 Field situations

  • Much more complex than laboratory: decomposition produces huge variety of organic compounds (fulvic acids, humic acids, proteins, amino acids, carbohydrates).
  • Many compounds interact strongly with cations; some with anions or neutral molecules.
  • Metals adsorb to clay and sand particles everywhere in nature.
  • Suspended matter can contain high organic content capable of binding cations.
  • WHAM 7 (Tipping, Lofts & Sonke, 2011): Predicts cation complexation by FA and HA over wide range of conditions despite compositional variation; incorporated in several speciation programs.

🔬 Suspended matter challenges

  • Inorganic suspended matter: (Hydr)oxides of Mn, Fe, Al, Si, Ti; clay minerals.
    • Proportions vary dramatically by source; chemical properties vary hugely.
    • Modeling requires measuring some properties of suspended matter.
  • Organic suspended matter: Also large variation; modeling is challenging.

🔬 Bioavailability assessment

  • Understand bioavailability in test media: e.g., EDTA keeps metals in solution but EDTA-complexes are generally not bioavailable.
  • Forgotten aspect: CO₂ exchange with atmosphere influences pH and carbonate solubility.
  • Field situations: Models help understand element bioavailability; DOC influence can be assessed well in many situations; suspended matter remains difficult.
  • Models deliver insights in seconds that would otherwise require great difficulty to obtain.

🖥️ Available models and limitations

🖥️ How programs work

  • Many speciation programs available; several are free.
  • Input: Total concentrations, pH, redox, organic carbon concentration, etc.
  • Output: Calculated speciation.
  • Iterative procedure required (equations cannot be solved analytically).
  • Most programs construct non-linear mass balance equations and solve them numerically, by simple iteration or more advanced methods.
  • Complication: Equilibrium constants depend on ionic strength, which can only be calculated when speciation is known; same for precipitation of solids.
  • Typical flow: Input totals → estimate free ions → calculate complexes → calculate totals → compare with input → adjust free ions → repeat until convergence.

🖥️ Limitations

| Limitation | Description | Impact |
|---|---|---|
| Missing data | Thermodynamic data not available for all equilibria | Hampers usefulness |
| Data uncertainty | Large variations in literature values (a factor of 10 is not uncommon) | Influences reliability |
| Temperature | Data often only at 25°C; correction data unavailable | Cannot assess other temperatures accurately |
| Ionic strength | Many correction methods, mostly semi-empirical | Uncertainty in corrections |
| Equilibrium assumption | Programs calculate equilibrium; nature is dynamic | More dynamic systems → less reliable results |

🖥️ Fundamental caveat

  • Models calculate equilibrium situation; some reactions are very slow.
  • Nature is dynamic and never truly in equilibrium.
  • If system is close to equilibrium, models can make good assessments.
  • The more dynamic a system, the more care needed in interpreting results.
  • Chemical systems move toward equilibrium; organisms may move them away:
    • Phototrophs move systems away from equilibrium.
    • Decomposers and heterotrophs move systems toward equilibrium.
15

Toxicokinetics

4.1. Toxicokinetics

🧭 Overview

🧠 One-sentence thesis

Toxicokinetics describes how chemicals move into, through, and out of organisms over time, and understanding these processes—including uptake rates, internal distribution, metabolism, and elimination—is essential for predicting when and where toxic effects will occur.

📌 Key points (3–5)

  • What toxicokinetics covers: All processes related to uptake, internal transport, accumulation, metabolism, and excretion of chemicals in organisms, distinct from toxicodynamics (receptor interaction and toxic effects).
  • Bioaccumulation varies by chemical and organism: Lipophilic chemicals accumulate in fat; metals bind to specific proteins or granules; organism traits (size, lipid content, sex, metabolic capacity) strongly influence internal concentrations.
  • Kinetic models predict time-dependent concentrations: One-compartment models with first-order kinetics describe uptake and elimination rates; steady-state bioconcentration factors (BCF) equal the ratio of uptake to elimination rate constants.
  • Common confusion—steady state vs. dynamic exposure: Bioaccumulation factors (BCF, BAF, BMF) assume equilibrium, but real environments are dynamic; kinetic models are needed to predict concentrations over time.
  • Critical body concentration (CBC) links kinetics to toxicity: Toxicity occurs when internal concentrations exceed a threshold, regardless of external exposure level or duration; understanding kinetics explains why small organisms show effects faster than large ones.

🧬 Bioaccumulation fundamentals

🧬 What bioaccumulation means

Bioaccumulation: the transfer and accumulation of a chemical from the environment into an organism.

  • It is not simply "presence in the body"—it emphasizes the concentration ratio between organism and environment.
  • Hydrophobic chemicals (low water solubility, high octanol-water partition coefficient) preferentially partition into lipid-rich tissues.
  • Example: Hexachlorobenzene concentrations in fish can be >10,000 times higher than in water.

📊 Bioaccumulation metrics

| Metric | Definition | Uptake route | Units |
|---|---|---|---|
| BCF (Bioconcentration Factor) | C_org / C_aq at steady state | Water only | L/kg |
| BAF (Bioaccumulation Factor) | C_org / C_aq at steady state | Water + sediment/soil ingestion | L/kg |
| BMF (Biomagnification Factor) | C_org / C_food at steady state | Food (predator-prey) | dimensionless |
| BSAF (Biota-Sediment Accumulation Factor) | C_org / C_sediment at steady state | Sediment/soil + porewater | kg_sed/kg_org |

  • All assume steady state: uptake rate equals elimination rate, so internal concentration is constant.
  • Don't confuse: BCF (water exposure only, lab) vs. BAF (field, multiple routes) vs. BMF (trophic transfer).

🐟 Organism traits affecting accumulation

Lipid content

  • Lipophilic chemicals dissolve in fat; higher lipid content → higher whole-body concentration.
  • Example: PCBs in high-fat eel vs. low-fat fish.
  • However, lipid-normalized concentrations can still vary between species due to differences in lipid composition (storage vs. membrane lipids).

Sex and reproduction

  • Females can transfer accumulated chemicals to offspring via eggs or lactation (e.g., milk fat in marine mammals).
  • Result: mature females often have lower concentrations than males of the same age because they have an additional excretion route.
  • Example: Organochlorines in male dolphins increase with age; female concentrations decrease after maturity due to transfer to calves.

Body size (surface-to-volume ratio)

  • Smaller organisms have larger surface area relative to volume → faster exchange with environment → reach steady state sooner.
  • Elimination and uptake rate constants scale with body mass to the power of approximately -0.25 (see allometric relationships).

Metabolic capacity

  • Organisms that can biotransform (metabolize) chemicals to more water-soluble forms excrete them faster.
  • Example: Fish metabolize PAHs efficiently (low accumulation); shrimp have limited PAH metabolism (high accumulation despite similar exposure).

⏱️ Toxicokinetic models

⏱️ One-compartment kinetic model assumptions

  • First-order kinetics: Rate of change is proportional to concentration (dC/dt ∝ C).
  • Single compartment: The organism is treated as homogeneous; chemical distributes uniformly.
  • Constant rate constants: Uptake (k_w) and elimination (k_e) rates do not change with internal concentration (valid below critical thresholds).

📈 Core equations

Uptake phase (constant exposure)

  • Change in organism concentration over time: dC_org/dt = k_w × C_aq - k_e × C_org
  • Integrated form: C_org(t) = (k_w / k_e) × C_aq × (1 - e^(-k_e × t))
  • At steady state (t → ∞): BCF = C_org / C_aq = k_w / k_e

Elimination phase (clean environment)

  • After transfer to clean medium: dC_org/dt = -k_e × C_org
  • Exponential decay: C_org(t) = C_org(0) × e^(-k_e × t)
  • Plotting ln(C_org) vs. time yields a straight line with slope = -k_e
  • Half-life: T_1/2 = ln(2) / k_e ≈ 0.693 / k_e

Units matter

  • k_w (uptake rate constant): L/(kg·day) or kg_soil/(kg_org·day)
  • k_e (elimination rate constant): 1/day (or day^-1)
  • BCF: L/kg or dimensionless (depending on units)
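
A minimal sketch of the uptake and elimination equations with illustrative rate constants; it reproduces the BCF = k_w/k_e and half-life relationships given above.

```python
import math

def c_org(t, C_aq, k_w, k_e, C0=0.0):
    """One-compartment toxicokinetics under constant exposure.

    dC_org/dt = k_w*C_aq - k_e*C_org
    k_w in L/(kg*day), k_e in 1/day, C_aq in mg/L, C_org in mg/kg.
    """
    C_ss = (k_w / k_e) * C_aq                      # steady state = BCF * C_aq
    return C_ss + (C0 - C_ss) * math.exp(-k_e * t)

k_w, k_e, C_aq = 200.0, 0.1, 0.05                  # illustrative values
print(f"BCF = {k_w / k_e:.0f} L/kg, half-life = {math.log(2) / k_e:.1f} days")
for t in (1, 7, 21, 35):
    print(f"day {t:2d}: C_org = {c_org(t, C_aq, k_w, k_e):5.1f} mg/kg")
```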

🔄 Multi-compartment models

  • Used when elimination does not follow a single exponential (non-linear on log scale).
  • Two-phase elimination: fast phase (well-perfused tissues like liver) + slow phase (fat tissue with low blood flow).
  • Equation: C_org(t) = F(I) × e^(-k_e(I) × t) + F(II) × e^(-k_e(II) × t)
  • Example compartments: blood vs. liver, liver vs. fat tissue.

🕐 Time to steady state

  • Smaller organisms equilibrate faster due to higher surface-to-volume ratio.
  • Chemicals with higher BCF (lower k_e) take longer to reach steady state.
  • Example: Chemical with BCF = 10,000 takes 10× longer to equilibrate than chemical with BCF = 1,000 (if k_w is the same).
  • Rule of thumb: ~95% of steady state reached after 3 half-lives; ~99% after 5 half-lives.

🧪 Biological processes: metals vs. organics

🧪 Metal accumulation mechanisms

Inorganic binding (granules): four main types occur in invertebrate gut/digestive glands:

  1. Calcium-pyrophosphate granules: bind Mg, Mn, Zn, Cd, Pb, Fe
  2. Sulfur granules: bind Cu, sometimes Cd
  3. Iron granules: exclusively Fe
  4. Calcium carbonate granules: mostly Ca
  • Metals stored in granules are detoxified (not in active pool causing toxicity).
  • Specialized cells (e.g., S cells in isopod hepatopancreas) accumulate large amounts of Cu for hemocyanin synthesis.

Organic binding (peptides and proteins)

  • Phytochelatin (PC): oligomer of glutathione (γ-glu-cys)_n-gly; n = 2–11
    • Induced by free metal ions activating PC synthase
    • Cysteine thiol groups bind metals
    • PC-metal complexes transported to vacuoles (plants) or lysosomes (animals)
  • Metallothionein (MT): low-molecular-weight protein rich in cysteine
    • Gene (Mt) activated by Metal Transcription Factor-1 (MTF-1) binding to Metal-Responsive Elements (MRE) in promoter
    • Cd displaces Zn from ligands → free Zn activates MTF-1 → Mt transcription
    • Binds multiple metal ions per molecule (e.g., 4–5 ions in 9–10 cysteines per cluster)

Don't confuse: PC (enzymatically extended peptide, immediate response) vs. MT (de novo protein synthesis, slower but sustained response).

🔬 Xenobiotic metabolism (organic chemicals)

Phase I: Activation (oxidation)

  • Cytochrome P450 (CYP): membrane-bound enzyme in smooth endoplasmic reticulum
  • Introduces oxygen into substrate: RH + O₂ + NADPH + H⁺ → ROH + H₂O + NADP⁺
  • Multiple isoforms (CYP1, CYP2, CYP3 families in vertebrates; 57 CYP genes in humans)
  • Located mainly in liver (vertebrates), Malpighian tubules/fat body (insects), hepatopancreas (mollusks/crustaceans)

Phase II: Conjugation

  • Adds hydrophilic groups to increase water solubility
  • Enzymes include:
    • Glutathione-S-transferase (GST): adds glutathione
    • UDP-glucuronyl transferase: adds glucuronic acid
    • Sulfotransferase: adds sulfate
    • Methyltransferase: adds methyl (decreases reactivity but increases apolarity)
  • In humans, glutathione conjugates often processed to mercapturic acids (cysteine + acetyl) excreted in urine

Phase III: Excretion

  • ATP-binding cassette (ABC) transporters actively pump conjugates out of cells
  • Includes multidrug resistance proteins (P-glycoproteins)
  • Excretion routes: bile → feces (high MW, hydrophobic); kidney → urine (low MW, hydrophilic); skin/breath (volatile)

🔥 Induction of biotransformation

  • CYP and Phase II enzymes are highly inducible: activity increases 10–100× in presence of xenobiotics.
  • Arylhydrocarbon receptor (AhR) pathway (3-MC-type induction):
    1. Xenobiotic binds AhR in cytoplasm, releasing it from heat-shock protein 90
    2. AhR + ARNT (AhR nuclear translocator) → nucleus
    3. Complex binds xenobiotic-responsive elements (XRE) in CYP1 gene promoter
    4. Enhanced transcription → more CYP protein → proliferation of smooth ER
  • Constitutive androstane receptor (CAR): mediates PB-type induction (CYP2, CYP3 genes)
  • Stereospecificity: molecular "fit" in receptor determines induction strength (e.g., some PCB congeners are strong inducers, others are not despite same number of Cl atoms)

⚠️ Metabolic activation (toxification)

  • Phase I can create more reactive intermediates than parent compound.
  • Polycyclic aromatic hydrocarbons (PAHs) example:
    • Benzo(a)pyrene (B[a]P) → CYP1A1 adds epoxide at 7,8 position → epoxide hydrolase → dihydrodiol → CYP adds second epoxide at 9,10 → diol-epoxide
    • Diol-epoxide binds DNA (especially guanine) → DNA adduct → mutation → cancer
    • "Bay-region" PAHs (notch in structure) are stronger carcinogens
  • Chronic oxidative stress:
    • Recalcitrant inducers (e.g., TCDD, some PCBs) strongly induce CYP but are not degraded
    • Continuous CYP activity generates reactive oxygen species (ROS) → oxidative damage
  • Don't confuse: detoxification (usual outcome) vs. metabolic activation (reactive intermediates cause toxicity)

📏 Allometric scaling

📏 Why body size matters

  • Many biological rates scale to body mass (m) as power functions: Y = a × m^b
  • Plotted as log(Y) vs. log(m) → straight line with slope b
  • Kleiber's law: metabolic rate scales as m^0.75 (not m^0.67 as geometric surface-area theory predicts)

🔢 Common scaling exponents

| Variable | Exponent (b) | Example |
|---|---|---|
| Rates (consumption, growth, reproduction) | +0.75 | A 10,000 kg elephant eats (10⁴)^0.75 = 1,000× more per day than a 1 kg rabbit |
| Rate constants (specific rates) | −0.25 | Per-kg consumption in the elephant is (10⁴)^−0.25 = 0.1× that of the rabbit |
| Time (lifespan, generation time) | +0.25 | The elephant lives (10⁴)^0.25 = 10× longer than the rabbit |
| Abundance (individuals/area) | −0.75 | Fewer large animals per hectare |
| Areas (gill surface, home range) | +0.75 | Larger animals have larger gills, territories |
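
The elephant/rabbit comparisons in the table follow directly from Y = a·m^b; a short sketch (with a set to 1 purely for illustration, since only the ratios matter here):

```python
def allometric(m_kg, a=1.0, b=0.75):
    """Allometric relationship Y = a * m^b (a is group-specific; 1.0 here for ratios)."""
    return a * m_kg ** b

m_rabbit, m_elephant = 1.0, 10_000.0

# Rates scale with +0.75: total consumption ratio elephant/rabbit
print(allometric(m_elephant) / allometric(m_rabbit))                      # 1000.0

# Rate constants scale with -0.25: per-kg rates slow down with size
print(allometric(m_elephant, b=-0.25) / allometric(m_rabbit, b=-0.25))    # 0.1
```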

🧬 Toxicokinetic scaling

  • Uptake and elimination rate constants decrease with body mass: k ∝ m^-0.25
  • Smaller organisms:
    • Exchange chemicals with environment faster
    • Reach steady state sooner
    • Show toxic effects earlier at same external concentration
  • BCF is size-independent: BCF = k_w / k_e; both scale with same exponent, so ratio cancels out
  • Example: Daphnids appear "sensitive" partly because they are small and accumulate quickly during short-term tests, not necessarily because they have lower critical body concentrations

🧮 Predicting toxicity from size

  • Lethal concentrations (LC₅₀) in water scale with body mass and K_ow
  • Oral LD₅₀ (mg/animal) scales approximately as m^1.0 (isometric: larger animals tolerate proportionally larger absolute doses)
  • Caution: Scaling describes general trends; individual species can deviate significantly due to specific physiology, metabolism, or life history

🍽️ Food-chain transfer and biomagnification

🍽️ Biomagnification defined

Biomagnification: increasing concentrations of persistent, bioaccumulative chemicals at successively higher trophic levels in a food web.

  • Occurs when:
    1. Chemical is persistent (not degraded/metabolized)
    2. Chemical has high affinity for organism (high log K_ow for organics)
    3. Uptake from food > elimination
  • Classic example: DDT in aquatic food webs (water → algae → zooplankton → fish → osprey); top predators had highest concentrations, causing eggshell thinning

🔗 Role of metabolism

  • Metabolizable chemicals (e.g., PAHs) do not biomagnify in species with metabolic capacity (fish).
  • Non-metabolizable chemicals (e.g., organochlorine pesticides, PCBs, PBDEs) accumulate at all trophic levels.
  • Species differences matter:
    • Fish metabolize PAHs → low BSAF for PAHs
    • Shrimp have limited PAH metabolism → high BSAF for PAHs (similar to persistent OCPs)

🦅 Diet composition drives exposure

  • "You accumulate what you eat"—internal concentrations reflect diet, not just habitat.
  • Example: Bank voles (omnivorous, eat earthworms with high Cd) vs. common voles (herbivorous, eat plants with low Cd) in same area → bank voles have higher kidney Cd.
  • Stable isotope ratios (δ¹³C, δ¹⁵N) can trace diet and predict contaminant levels.

🐋 Case study: Orcas and PCBs

  • Transient orcas (feed on marine mammals) have higher PCB levels than resident orcas (feed on fish), despite living in same region.
  • Females have lower PCBs than males due to maternal transfer via lipid-rich milk during lactation (offloading to calves).
  • PCB levels increase with age in males (persistent accumulation over lifespan).
  • Decades after PCB ban, orca populations still exceed toxic thresholds, threatening viability.

⚠️ Implications for risk assessment

  • Biomagnification means dilution is not a solution for persistent, bioaccumulative toxicants.
  • Top predators (long-lived, high trophic level, high lipid content) are sentinel species for assessing ecosystem contamination.
  • Food-web structure and species-specific traits (diet, metabolism, lipid content, reproductive strategy) must be considered in exposure models.

🎯 Critical body concentration (CBC)

🎯 The CBC concept

Critical Body Concentration (CBC): the internal concentration of a chemical in an organism that causes a defined effect (e.g., 50% mortality, 50% reduction in reproduction), independent of exposure concentration or duration.

  • Toxicity is determined by internal dose, not external exposure.
  • Mortality (or other effect) occurs when internal concentration exceeds CBC, regardless of whether exposure was high for short time or low for long time.
  • Integrates bioavailability, uptake kinetics, and toxicodynamics into a single threshold.

📉 Linking CBC to kinetics

  • Fast uptake kinetics → CBC reached sooner → effects appear earlier.
  • LC₅₀ decreases over time until it reaches a constant value (ultimate LC₅₀, or LC₅₀∞).
  • At steady state: CBC = LC₅₀∞ × BCF = LC₅₀∞ × (k_w / k_e)
  • Time to reach ultimate LC₅₀ depends on elimination rate constant (k_e) and organism size.
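
Under one-compartment kinetics, the CBC concept gives a simple expression for the time-dependent LC₅₀, which declines toward the ultimate LC₅₀ = CBC/BCF; the sketch below uses illustrative parameter values.

```python
import math

def lc50_at_time(t, CBC, k_w, k_e):
    """Time-dependent LC50 under the critical body concentration concept.

    Death occurs when C_org(t) = BCF * C_aq * (1 - exp(-k_e*t)) reaches the CBC,
    so LC50(t) = CBC / (BCF * (1 - exp(-k_e*t))) -> CBC/BCF as t -> infinity.
    """
    BCF = k_w / k_e
    return CBC / (BCF * (1.0 - math.exp(-k_e * t)))

CBC, k_w, k_e = 100.0, 200.0, 0.1        # mg/kg, L/(kg*day), 1/day (illustrative)
print(f"ultimate LC50: {CBC / (k_w / k_e):.3f} mg/L")
for t in (1, 2, 4, 14, 28):
    print(f"day {t:2d}: LC50 = {lc50_at_time(t, CBC, k_w, k_e):.3f} mg/L")
```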

✅ Evidence supporting CBC

  • Narcotic organic chemicals: For compounds with log K_ow from 1 to 6, CBC for lethality is remarkably constant at ~1–10 mmol/kg body mass across diverse species and test media.
  • Lipid normalization reduces variability: expressing CBC per kg lipid (rather than whole body) accounts for differences in lipid content between species.
  • Example: As K_ow increases, LC₅₀ in water decreases (more toxic), but BCF increases proportionally, so CBC (= LC₅₀ × BCF) remains constant.

❌ When CBC fails

  • Metals: Organisms can sequester metals in inert forms (granules, metallothionein in lysosomes) that do not contribute to toxicity.
    • No monotonic relationship between total body burden and effects.
    • "Biologically reactive" fraction, not total concentration, determines toxicity.
    • Adaptation/tolerance further complicates the relationship.
  • Chemicals binding to proteins (e.g., perfluorinated substances): lipid normalization does not apply.
  • Mixtures with different modes of action: unclear how to sum CBCs for chemicals acting via different mechanisms.
  • Metabolically activated toxicants: parent compound may have low toxicity; reactive metabolite (not measured) causes effects.

🧭 Advantages of CBC approach

  • Removes uncertainty about bioavailability from different media (soil, sediment, water).
  • Accounts for time-varying exposure: internal concentration integrates exposure history.
  • Enables cross-species extrapolation: if CBC is similar across species, differences in sensitivity reflect differences in uptake/elimination kinetics, not intrinsic susceptibility.
  • Useful for field risk assessment: measure tissue residues in wild organisms and compare to lab-derived CBC to assess risk without knowing environmental concentrations.

Don't confuse:

  • Steady-state metrics (BCF, BAF) vs. kinetic models (time-dependent concentrations)
  • Detoxification (usual metabolism outcome) vs. metabolic activation (reactive intermediates)
  • Total body burden vs. biologically reactive fraction (especially for metals)
  • External exposure concentration vs. internal critical concentration (CBC)
16

Toxicodynamics & Molecular Interactions

4.2. Toxicodynamics & Molecular Interactions

🧭 Overview

🧠 One-sentence thesis

Toxic responses require molecular interactions between compounds and specific biomolecular targets (proteins, DNA, membranes, or small molecules), with the consequences depending on the target's biological role and the nature of the interaction.

📌 Key points (3–5)

  • What toxicodynamics describes: the dynamic interactions between a compound and its biological target that ultimately lead to adverse effects
  • Range of biomolecular targets: proteins (receptors, enzymes, transporters, structural), DNA/RNA, phospholipid membranes, and small regulatory molecules
  • Protein interaction consequences vary by role: receptor proteins trigger cellular responses; enzymes lose catalytic activity; transporters fail to move ligands
  • Common confusion—baseline vs. specific toxicity: all chemicals cause narcosis (baseline toxicity) by partitioning into membranes at similar lipid concentrations, but specific mechanisms depend on the compound's structure and the species' biochemistry
  • Why it matters: understanding molecular mechanisms enables prediction of toxic effects, development of biomarkers, and design of safer chemicals

🎯 Biomolecular targets of toxicity

🧬 Proteins as primary targets

Ligands: both endogenous and xenobiotic compounds that bind to proteins

  • Proteins serve three main toxicologically relevant functions: receptors, enzymes, and transporters
  • Binding can be covalent (irreversible, permanent damage) or non-covalent (reversible, temporary inhibition)
  • The consequence depends entirely on what the protein normally does in the cell

Don't confuse: binding to a protein vs. damaging a protein—some bindings activate receptors (intended function), while toxic binding disrupts normal function

🧬 DNA and RNA molecules

  • Especially vulnerable at guanine bases, which electrophilic compounds can bind covalently
  • DNA adducts → copy errors during replication → point mutations
  • This pathway is central to genotoxicity and carcinogenesis
  • Example: reactive metabolites from bioactivation form DNA adducts

🧬 Phospholipid bilayer membranes

  • Compounds partition into lipid bilayers, disrupting membrane integrity
  • Loss of integrity → leakage of electrolytes, loss of membrane potential
  • Particularly affects outer cell membrane and mitochondrial membranes
  • This is the mechanism of narcosis (baseline toxicity)

🧬 Small regulatory molecules

  • Includes molecules that maintain cellular homeostasis
  • Glutathione, calcium ions, phosphate groups can all be targets
  • Disruption cascades into broader cellular dysfunction

🔧 Protein interaction mechanisms

🔧 Receptor proteins

Receptor proteins: specifically bind and respond to endogenous signaling ligands (hormones, prostaglandins, growth factors, neurotransmitters) by causing a typical cellular response

  • Located in cell membrane, cytosol, or nucleus
  • Agonistic ligands activate the receptor (mimic natural ligand)
  • Antagonistic ligands inactivate the receptor and block natural agonists
  • Four major receptor types affected:
    • Ion channels
    • G-protein coupled receptors (GPCRs)
    • Enzyme-linked receptors
    • Nuclear receptors

Example: A xenobiotic agonist binds to an estrogen receptor → activates genes normally controlled by estradiol → endocrine disruption

Don't confuse: agonist vs. antagonist—agonists turn the receptor "on," antagonists prevent it from turning on (they don't turn it "off" if it's already inactive)

⚙️ Enzyme inhibition

  • Binding usually decreases conversion rate of substrate to product
  • Results in substrate surplus and/or product deficit
  • Reversible inhibition: non-covalent binding; three subtypes:
    • Competitive: chemical competes with substrate for active site (resembles substrate structure)
    • Non-competitive: binds allosteric site, changes active site shape
    • Uncompetitive: only binds when substrate is already bound
  • Irreversible inhibition: covalent binding or metal substitution during synthesis

Example: Organophosphate insecticides covalently bind serine in acetylcholinesterase active site → acetylcholine accumulates in synapse → overstimulation → convulsions, muscle weakness
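
A minimal sketch of how the reversible inhibition types above change enzyme kinetics, using the standard Michaelis-Menten rate equation; all parameter values are hypothetical illustrations, not data from the source.

```python
# Sketch (hypothetical parameters): effect of reversible inhibition on the
# Michaelis-Menten rate v = Vmax * S / (Km + S).

def rate_uninhibited(S, Vmax, Km):
    return Vmax * S / (Km + S)

def rate_competitive(S, I, Vmax, Km, Ki):
    # Inhibitor competes with the substrate for the active site: apparent Km increases.
    return Vmax * S / (Km * (1 + I / Ki) + S)

def rate_noncompetitive(S, I, Vmax, Km, Ki):
    # Inhibitor binds an allosteric site: apparent Vmax decreases.
    return (Vmax / (1 + I / Ki)) * S / (Km + S)

S, I = 50.0, 10.0                 # substrate and inhibitor concentrations (µM), hypothetical
Vmax, Km, Ki = 100.0, 20.0, 5.0   # hypothetical kinetic constants

print(rate_uninhibited(S, Vmax, Km))           # ~71.4 (no inhibitor)
print(rate_competitive(S, I, Vmax, Km, Ki))    # ~45.5 (substrate surplus builds up)
print(rate_noncompetitive(S, I, Vmax, Km, Ki)) # ~23.8
```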

🚚 Transporter protein inhibition

  • Xenobiotics compete with natural ligand for binding
  • Blocks transport of endogenous molecules across membranes or through blood

Example: Halogenated phenols (OH-PCBs, OH-PBDEs) compete with thyroid hormone (T4) for transthyretin (TTR) binding → more unbound T4 in blood → increased hepatic uptake and excretion → decreased blood T4 levels

Don't confuse: transport inhibition vs. receptor antagonism—both block a ligand's action, but transporters move molecules while receptors trigger responses

💥 Narcosis and membrane damage

💊 Baseline toxicity (narcosis)

  • Partitioning into lipid bilayer is non-specific
  • All chemicals exert this effect at sufficient concentration
  • Concentration in target membrane causing 50% mortality ≈ 50 mmol/kg lipid (constant across species and compounds)
  • External exposure levels differ because lipid-water partition coefficients vary by compound
  • This is considered the minimum toxicity any chemical will have
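
A minimal sketch of the reasoning above: the internal membrane burden at 50% mortality is roughly constant, so external LC50s differ only through the compound-specific lipid-water partition coefficient. The partition coefficients below are hypothetical.

```python
# Sketch: constant critical membrane burden, compound-specific external LC50.
CRITICAL_MEMBRANE_CONC = 50.0  # mmol/kg lipid (value from the text)

# Hypothetical lipid-water partition coefficients (L water per kg lipid)
k_lipw = {"compound_A": 1e2, "compound_B": 1e4, "compound_C": 1e6}

for name, K in k_lipw.items():
    lc50_water = CRITICAL_MEMBRANE_CONC / K  # mmol/L in water giving 50% mortality
    print(f"{name}: predicted external LC50 ≈ {lc50_water:.1e} mmol/L")
# The more lipophilic the compound (higher K_lipw), the lower the external
# concentration needed to reach the same internal membrane burden.
```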

⚡ Other membrane disruption mechanisms

  • Ionophores: dissolve in membrane and shuttle ions across, disrupting electrolyte gradients
  • Ion channel modulators: open or close protein channels (different mechanism than ionophores)

Don't confuse: ionophores vs. ion channel modulators - ionophores are themselves the transport vehicle dissolved in the membrane; ion channel modulators act on existing protein gates

🔥 Oxidative stress pathways

🔥 Reactive oxygen species (ROS) generation

  • Some compounds directly increase ROS formation via:
    • Redox cycling
    • Interfering with electron transport chain
  • Others indirectly increase ROS by:
    • Interfering with ROS-scavenging antioxidants (glutathione, catalase, superoxide dismutase)

🔥 Indirect toxicity via ROS

  • The compound itself doesn't bind the target
  • ROS produced by the compound bind covalently to DNA, proteins, and lipids
  • This is a secondary mechanism but can be highly damaging

Example: A compound undergoes redox cycling → generates superoxide radicals → radicals attack membrane lipids → lipid peroxidation → membrane damage


Key takeaway: Toxicodynamics is fundamentally about which molecule meets which target. The same compound can have different effects depending on what proteins, receptors, or membranes are present in a given species or tissue. Understanding these molecular interactions is the foundation for predicting toxicity, designing biomarkers, and developing safer chemicals.


Toxicity Testing

Section 4.3. Toxicity testing

🧭 Overview

🧠 One-sentence thesis

Toxicity testing provides systematic methods to assess the bioaccumulation potential and hazardous effects of chemicals on organisms through standardized protocols that measure concentration-response relationships and various endpoints, while increasingly incorporating alternative methods to reduce animal testing.

📌 Key points (3–5)

  • Purpose of toxicity testing: Laboratory tests assess bioaccumulation potential and establish concentration-response relationships to derive toxicity parameters (LC₅₀, EC₅₀) for hazard assessment
  • Two main endpoint categories: Whole-organism endpoints (survival, growth, reproduction, behavior) and molecular/biochemical endpoints (gene expression, enzyme activity, metabolic changes)
  • Quality control requirements: Tests must meet validity criteria including minimum control survival, adequate replication, proper test design, and use of negative/positive controls to ensure reliable results
  • Standardization importance: International bodies (OECD, ISO) develop standardized guidelines to reduce variation and enable regulatory acceptance through mutual acceptance of data (MAD)
  • Common confusion - static vs. dynamic tests: Static bioaccumulation tests measure concentrations at one time point (may underestimate if equilibrium not reached), while dynamic tests measure uptake and elimination kinetics over time to calculate rate constants

🧪 Core testing approaches

🔬 Bioaccumulation testing methods

Bioaccumulation: The uptake of chemicals in organisms from the environment, quantified by bioconcentration factor (BCF) for water exposure or biota-to-soil/sediment accumulation factor (BSAF) for soil/sediment exposure.

Static exposure systems:

  • Medium dosed once with test chemical
  • Organisms and medium analyzed after exposure period
  • BCF/BSAF calculated from measured concentrations
  • Challenge: Exposure concentrations may decrease during test due to biodegradation, volatilization, sorption, or organism uptake
  • Solution: Measure concentrations at multiple time points and use time-weighted-average (TWA) values
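
A minimal sketch of a TWA calculation, assuming hypothetical sampling times and measured concentrations; the TWA is the area under the concentration-time curve (trapezoidal rule) divided by the test duration.

```python
import numpy as np

# Hypothetical sampling times (days) and measured exposure concentrations (µg/L)
t = np.array([0, 2, 7, 14, 21, 28], dtype=float)
c = np.array([10.0, 8.5, 6.0, 4.2, 3.1, 2.4])

# Trapezoidal area under the concentration-time curve, divided by duration
auc = np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t))
twa = auc / (t[-1] - t[0])
print(f"TWA concentration: {twa:.2f} µg/L")  # lies between the initial and final values
```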

Dynamic (toxicokinetic) tests:

  • Uptake phase: Organisms exposed in spiked medium, sampled at multiple time points
  • Elimination phase: Organisms transferred to clean medium, sampled over time
  • First-order one-compartment model fitted to data
  • BCF/BSAF = uptake rate constant (k₁) / elimination rate constant (k₂)
  • Advantage: Overcomes uncertainty about whether equilibrium was reached
  • Sampling design: Typically 5-6 time points each for uptake and elimination, with 3-4 replicates per time point

Example: In a molybdenum uptake/elimination study with earthworms (Eisenia andrei), researchers calculated a BSAF of approximately 1.0 from the ratio of uptake and elimination rate constants, suggesting low bioaccumulation potential.
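
A minimal sketch of how such a fit could be set up, assuming a constant exposure concentration and purely hypothetical body-residue data (not the molybdenum study above); scipy's curve_fit estimates k₁ and k₂, and the BCF follows as their ratio.

```python
import numpy as np
from scipy.optimize import curve_fit

C_W = 10.0   # exposure concentration in the medium (µg/L), assumed constant
T_UP = 14.0  # duration of the uptake phase (days)

def one_compartment(t, k1, k2):
    """First-order one-compartment model: uptake phase, then elimination in clean medium."""
    uptake = (k1 / k2) * C_W * (1.0 - np.exp(-k2 * t))
    c_at_switch = (k1 / k2) * C_W * (1.0 - np.exp(-k2 * T_UP))
    elimination = c_at_switch * np.exp(-k2 * (t - T_UP))
    return np.where(t <= T_UP, uptake, elimination)

# Hypothetical sampling times (days) and measured body residues (µg/g)
t_obs = np.array([1, 3, 7, 10, 14, 15, 17, 21, 24, 28], dtype=float)
c_obs = np.array([7.0, 17.5, 29.0, 34.0, 38.0, 33.0, 25.0, 14.0, 9.0, 5.0])

(k1, k2), _ = curve_fit(one_compartment, t_obs, c_obs, p0=[1.0, 0.1])
print(f"k1 = {k1:.2f} L/kg/d, k2 = {k2:.3f} 1/d, BCF = k1/k2 = {k1 / k2:.1f} L/kg")
```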

📊 Concentration-response relationships

Key paradigm: "The dose makes the poison" (Paracelsus) - any chemical can be toxic, but the dose determines the severity of effect.

Essential components:

  • Organisms exposed to range of concentrations plus control group
  • Response plotted against exposure concentration
  • Concentration-response curves fitted to derive toxicity measures

Measures of toxicity:

  • ECₓ/EDₓ: Effective concentration/dose causing x% effect (must specify endpoint)
  • LCₓ/LDₓ: Lethal concentration/dose causing x% mortality
  • EC₅₀/LC₅₀: Median effect/lethal concentration (most common estimate)
  • NOEC: No-Observed Effect Concentration (highest concentration with no significant difference from control)
  • LOEC: Lowest Observed Effect Concentration (lowest concentration significantly different from control)

Quantal vs. continuous data:

  • Quantal: Yes/no responses (e.g., survival, avoidance) - population-level
  • Continuous: Measurable parameters (e.g., growth rate, reproduction number) - can be individual-level

Curve parameters:

  • Minimum response (often set to control level or zero)
  • Maximum response (often 100% relative to control)
  • Slope (steepness; determines distance between EC₅₀ and EC₁₀)
  • Position (where curve sits on x-axis; may equal EC₅₀)

Don't confuse: Statistical significance vs. toxicological/biological significance - both must be evaluated separately.
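
A minimal sketch of concentration-response curve fitting, assuming hypothetical continuous response data scaled to the control and a two-parameter log-logistic model; EC₅₀ and slope are fitted, and any ECₓ follows from them.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, ec50, slope):
    """Response as a fraction of the control (1 = control level, 0 = full effect)."""
    return 1.0 / (1.0 + (c / ec50) ** slope)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # mg/L, hypothetical
resp = np.array([1.00, 0.97, 0.85, 0.55, 0.20, 0.05])  # fraction of control, hypothetical

(ec50, slope), _ = curve_fit(log_logistic, conc, resp, p0=[3.0, 1.0])

def ec_x(x_percent):
    """Concentration causing x% effect, derived from the fitted parameters."""
    p = x_percent / 100.0
    return ec50 * (p / (1.0 - p)) ** (1.0 / slope)

print(f"EC50 = {ec50:.2f} mg/L, slope = {slope:.2f}, EC10 = {ec_x(10):.2f} mg/L")
```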

🎯 Why ECₓ values are preferred over NOEC

Disadvantages of NOEC:

  • Obtained by hypothesis testing rather than regression analysis
  • Equals one of the test concentrations (doesn't use all data)
  • Sensitive to number of replicates and variation between replicates
  • Depends on statistical test chosen and variance
  • No confidence intervals (cannot assess reliability)
  • Difficult to compare between laboratories and species
  • May sometimes equal or exceed EC₅₀ due to variation

Advantages of ECₓ (e.g., EC₁₀, EC₂₀):

  • Uses all data from the test via curve fitting
  • Has 95% confidence intervals indicating reliability
  • Allows statistical comparison between studies
  • More reproducible across laboratories

🦠 Test organisms and endpoints

🐛 Selection criteria for test organisms

Practical requirements:

  1. Easy to culture and maintain in laboratory
  2. Sensitive to different stressors
  3. Ecologically and/or economically relevant
  4. Available standardized test protocols

Common tension: Easy-to-culture species (often generalists) tend to be less sensitive, while sensitive species (often specialists) are harder to culture.

Representative test battery should include:

  • Different life histories and functional groups
  • Different taxonomic groups
  • Different routes of exposure
  • Organisms from relevant environmental compartments

Standard test organisms by compartment:

  • Water: Daphnia magna (crustacean), Danio rerio (fish), algae/cyanobacteria
  • Sediment: Chironomus riparius (midge), Lumbriculus variegatus (oligochaete)
  • Soil: Eisenia fetida/andrei (earthworm), Folsomia candida (collembolan), Enchytraeus species
  • Terrestrial: Apis mellifera (honeybee), various bird species, crop/non-crop plants

Don't confuse: Standard vs. non-standard organisms - non-standard species may be more ecologically relevant for specific ecosystems (e.g., riverine insects for streams vs. Daphnia for stagnant water) but lack standardized protocols.

📈 Endpoints measured in toxicity tests

Whole-organism endpoints:

Mortality/survival:

  • Assessed in both acute and chronic tests
  • Can be scored at intervals throughout test
  • Expressed as percentage of initial number or percentage of control
  • Used to derive LC₅₀ values

Growth:

  • Measured as increase in length or weight
  • Best practice: Measure at start and end of exposure
  • Express as percentage of initial measurement
  • More informative than measuring final size alone

Reproduction:

  • Day of first reproduction (ecologically relevant for population growth)
  • Number of offspring (eggs, seeds, neonates, juveniles)
  • Quality of offspring (physiological status, size, survival to adulthood)
  • Integrated parameter incorporating many aspects

Behavioral endpoints (acute tests):

  • Avoidance behavior (can be 100-1000× more sensitive than survival)
  • Feeding inhibition
  • Swimming behavior changes
  • Ventilation behavior
  • Advantage: Rapid response, sensitive, ecologically relevant (affects trophic interactions)

Example: Copper effects on caddisfly ventilation behavior occurred at ~150× lower concentrations than lethal effects on larvae.

Photosynthesis (plants/algae):

  • Measured using pulse amplitude modulation (PAM) fluorometry
  • Effective photosystem II efficiency (ΦPSII) in light-adapted cells
  • Rapid, sensitive endpoint for herbicide effects
  • Most herbicides directly or indirectly affect PSII

Sub-organismal endpoints:

  • Enzyme activity (e.g., acetylcholinesterase inhibition)
  • Biochemical markers (hormone levels, oxidative stress markers)
  • Gene expression changes (transcriptomics)
  • Metabolic changes (metabolomics)
  • Epigenetic modifications

🌱 Primary producers in testing

Groups tested:

  • Microalgae and cyanobacteria (unicellular)
  • Macrophytes (multicellular aquatic plants)
  • Terrestrial plants (dicots and monocots)

Exposure routes:

  • Aquatic organisms: Water phase (all), sediment (rooting plants)
  • Terrestrial plants: Air, soil, and water (soil moisture/rain)
  • Emergent/floating plants: Multiple compartments

Key endpoints:

  • Bioaccumulation (incorporation into tissues)
  • Photosynthesis inhibition (acute effects)
  • Growth inhibition (biomass, cell counts, size increase)
  • Seedling emergence (germination and early development)
  • Root morphology and metabolism

Challenge: Soil and sediment exposure introduces variability from redox conditions and organic matter content affecting chemical behavior.

🦠 Microorganisms in testing

Importance:

  • Base of all ecosystems
  • Vital for nutrient cycling (carbon, nitrogen)
  • Perform essential ecosystem services
  • Highly diverse metabolically

Protection goals:

  • Biodiversity: Protecting species diversity
  • Ecosystem services: Protecting functional processes (e.g., nitrogen transformation, water purification)

Test types:

Single species tests:

  • Example: Ames test (Salmonella typhimurium for mutagenicity)
  • Example: Freshwater algae/cyanobacteria growth inhibition (OECD 201)
  • Advantage: Reproducible, standardized
  • Limitation: Ecological relevance debatable; assumes similar sensitivity to relevant species

Community tests:

  • Test whole communities exposed together
  • Pollution-induced community tolerance (PICT) method
  • Detects shift from sensitive to tolerant species
  • Challenge: Difficult to attribute changes solely to toxic effects vs. species interactions

Process tests:

  • Measure microbial processes (e.g., nitrogen transformation, carbon transformation)
  • Example: OECD 216 - Soil nitrogen transformation test
  • Clover meal → ammonia → nitrite → nitrate
  • Key insight: Process may be maintained even if some species are intoxicated, due to growth of tolerant species
  • Generally less sensitive than single species tests

Don't confuse: Short-term vs. long-term microbial tests - shorter tests often MORE sensitive because growth during longer tests allows resistant mutants to develop and take over.

🐦 Specialized testing: Birds

🦅 Why birds are important models

Physiological uniqueness:

  • Oviparous (egg-laying with hard shells)
  • Concentrated exposure to maternally transferred material in egg
  • Embryos regulate own hormones early (no maternal physiological interference)
  • High body temperature (40.6°C) and metabolic rate
  • Rapid growth rate (especially altricial species)

Ecological importance:

  • Diverse, abundant, widespread
  • Inhabit human-altered habitats (agriculture)
  • Essential ecosystem roles (seed dispersal, biological control, scavenger function)
  • Often iconic species with public appeal

Exposure routes in agricultural settings:

  1. Feeding on treated crop
  2. Feeding on weeds or treated weed seeds
  3. Feeding on ground-dwelling or foliar invertebrates
  4. Feeding on earthworms in treated soil
  5. Drinking contaminated water
  6. Feeding on fish in contaminated streams

🧪 Avian toxicity tests

Acute oral toxicity:

  • Gavage or capsule dosing at test start
  • OECD 223: Sequential design, average 26 birds, can stop when accuracy sufficient
  • USEPA: 60 birds (10 per dose × 5 doses + 10 controls)
  • Derives LD₅₀ (mg/kg body weight/day)
  • Species: Bobwhite quail, Japanese quail, mallard duck, zebra finch, budgerigar

Reproduction testing:

  • One-generation tests in bobwhite quail and/or mallard
  • Substance mixed into diet for 10 weeks before egg-laying
  • Egg-laying period: at least 10 weeks
  • Endpoints: Adult body weight, food consumption, reproductive parameters, 14-day old surviving chicks/ducklings

Avoidance testing:

  • Assesses whether birds avoid contaminated food (especially seed treatments)
  • Can reduce exposure risk but confounds dietary toxicity estimates
  • Often conducted as pen (semi-field) studies

Endocrine disruptor testing:

  • Two-generation test using Japanese quail
  • Why Japanese quail: Precocial species (sexual differentiation occurs in egg), young mature and breed within 12 months
  • Debate: Need for altricial species test (e.g., zebra finch) to capture different developmental patterns

Field studies:

  • Test effects on multiple species simultaneously under actual exposure conditions
  • Methods: Corpse searches, censusing, radiotracking
  • Challenge: Defining relevant species for other locations

🧬 Alternative and molecular methods

🔬 In vitro toxicity testing

In vitro: Testing using tissues, cells, or proteins (literally "in glass"), now typically in plastic microtiter well-plates.

Advantages:

  • Small test volumes
  • Short test durations (15 minutes to 48 hours)
  • Medium to high throughput
  • Mechanism-specific responses
  • Reduces animal use (3Rs: Reduce, Refine, Replace)

Protein-based assays:

Ligand binding assays:

  • Purified protein incubated with test substance
  • Determines if substance binds protein and inhibits natural ligand binding
  • Uses radiolabeled or fluorescently labeled ligands
  • Example: Estrogen receptor binding assay with radiolabeled estradiol

Enzyme inhibition assays:

  • Measures decrease in substrate conversion rate
  • Quantified by spectrophotometry or fluorescence
  • Example: Acetylcholinesterase inhibition assay

🧫 Cell-based assays

Cell culture types:

  • Primary culture: cells directly isolated from a donor. Advantage: closely resembles in vivo physiology. Disadvantages: slow division, specific culture conditions, finite lifespan, genetic variation between donors.
  • Finite cell line: subcultures derived from a primary culture. Advantage: same as primary culture. Disadvantage: limited number of passages (20-60 divisions) before senescence.
  • Continuous cell line: immortalized cells with an indefinite lifespan. Advantages: quick proliferation, easy to culture and manipulate. Disadvantages: different genotype/phenotype than healthy cells, behave like cancer cells, have lost some capacities.

Stem cell differentiation models:

Potency levels:

  • Totipotent: Can differentiate into all cell types including extraembryonic (morula stage)
  • Pluripotent: Can differentiate into all cell types except extraembryonic (inner cell mass)
  • Multipotent: Can differentiate into restricted number of cell types (germ layers)
  • Unipotent: Committed to single cell type

Embryonic stem cells (ESCs):

  • Isolated from embryos at various stages
  • Can divide indefinitely while undifferentiated
  • Differentiated by manipulating culture conditions (growth factors, hormones, etc.)
  • Ethical concern: Requires embryo destruction for human ESCs

Induced pluripotent stem cells (iPSCs):

  • Differentiated cells reprogrammed to pluripotent state
  • Exposed to reprogramming factors (transcription factors)
  • Can then differentiate into any cell type
  • Advantage: Avoids embryo destruction
  • Alternative: Transdifferentiation (direct conversion between differentiated cell types without pluripotent stage)

Cell-based endpoints:

  • Cell viability (mitochondrial function, membrane damage, metabolism)
  • Cell growth
  • Cell kinetics (absorption, elimination, biotransformation)
  • Transcriptome, proteome, metabolome changes
  • Cell-type specific functioning
  • Effects on differentiation process

Reporter gene bioassays:

  • Genetically modified cells/bacteria with reporter protein gene
  • Expression triggered by receptor-toxicant interaction
  • Measured as color, fluorescence, or luminescence change
  • Used to screen for receptor activation/inactivation

🔮 Future developments

3D cell culturing:

  • More realistic than 2D monolayers
  • Includes cell-cell interactions, polarization, extracellular matrix, diffusion gradients
  • Epithelial cells can be grown at air-liquid interface

Cell co-culturing:

  • Different cell types cultured together
  • Example: Differentiated cell + liver cell (for metabolism)
  • Mimics organ-level interactions

Organ-on-a-chip:

  • Different cell types in miniaturized channels
  • Microfluidic system mimics blood flow
  • Can expose to toxicants through "circulation"

Human body-on-a-chip:

  • Multiple organ compartments interconnected
  • Microfluidic circulatory system
  • Reflects complex ADME processes
  • Current status: In development, some impracticalities remain

Don't confuse: In vitro advantages for mechanism studies vs. limitations for in vivo extrapolation - cell cultures lack toxicokinetic processes (absorption, distribution, elimination), repair mechanisms, feedback loops, and tissue/organ interactions present in whole organisms.

🔍 Human toxicity testing

📋 General principles

Two main aims:

  1. Identify potential adverse effects on humans (hazard identification)
  2. Establish dose/concentration-response relationships for safe exposure levels

International harmonization:

  • WHO and OECD develop testing guidelines
  • OECD Mutual Acceptance of Data (MAD) system
  • Built on OECD Test Guidelines and Good Laboratory Practice (GLP) principles
  • Data generated in one member country accepted in all member countries

Alternative methods (3Rs):

  • (Quantitative) Structure-Activity Relationships ((Q)SARs)
  • In vitro tests (preferably human-origin cells/tissues)
  • Read-across (using data from structurally related chemicals)
  • Integrated Approaches to Testing and Assessment (IATA)
  • Intelligent Testing Strategies (ITS)

Core test elements:

Test substance characterization:

  • Chemical structure, composition, purity
  • Impurities (nature and quantity)
  • Stability
  • Physicochemical properties (lipophilicity, density, vapor pressure)

Route of administration:

  • Oral, dermal, or inhalation
  • Based on physical-chemical properties and predominant human exposure route

Dose selection:

  • Typically ≥3 dose levels (low, mid, high) plus control
  • Increments between doses: factors of 2-10
  • High dose: Produces toxicity without severe suffering or >10% mortality
  • Mid dose: Produces slight toxicity
  • Low dose: No toxicity
  • Informed by toxicokinetic data and range-finding studies

Animal species:

  • Usually small laboratory rodents (rats) of both sexes
  • Economic and logistic reasons
  • Sufficient numbers for statistical analysis
  • Specialized tests may use guinea pigs, rabbits, dogs, non-human primates

Test duration:

  • Acute: Single dose, effects within 14 days
  • Subacute: 28 days (rats)
  • Semi-chronic/sub-chronic: 90 days (rats)
  • Chronic: about 2 years, roughly a rat's lifetime

Parameters studied:

  • Biochemical organ function
  • Physiological measurements
  • Metabolic and hematological information
  • Extensive histopathological examination
  • More parameters in longer, more expensive tests

Quality requirements:

  • Good Laboratory Practice (GLP) compliance
  • Qualified personnel at every level
  • Detailed reporting for regulatory evaluation
  • Statistical analysis (significance vs. biological relevance)
  • Derive NOAEL, LOAEL, or benchmark doses (BMDs)

🧪 In vitro human toxicity tests

Cytotoxicity assays:

Trypan Blue Exclusion (TBE):

  • Live cells exclude dye (clear cytoplasm)
  • Dead cells take up dye (blue cytoplasm)
  • Count viable/dead cells with hemacytometer
  • Advantage: Simple, inexpensive, indicates membrane integrity
  • Disadvantage: ~10% counting errors

Neutral Red Uptake (NRU):

  • Viable cells incorporate neutral red into lysosomes
  • After washing, dye released under acidified conditions
  • Measured by spectrophotometry
  • Based on universal cell functions (membrane integrity, energy, transport)

MTT assay:

  • Mitochondrial enzymes reduce yellow MTT to purple formazan crystals
  • Reflects number of viable cells
  • Solubilize with DMSO, measure absorbance
  • Advantage: Easy, safe, highly reproducible
  • Disadvantage: Requires DMSO to solubilize insoluble product

Skin toxicity tests:

Skin corrosion/irritation:

  • 3D human skin model (Episkin)
  • Topical application of test substance
  • Cell viability assessed by MTT
  • Classified as corrosive/irritant if cell viability falls below a defined threshold
  • Replaces rabbit Draize test

Phototoxicity (3T3 NRU PT):

  • Mouse fibroblast cell line (Balb/c 3T3)
  • Compare cytotoxicity with/without simulated solar light
  • Neutral red uptake measured 24h after treatment
  • Light exposure may alter cell surface, reducing dye uptake

Skin sensitization:

  • Tests address key biological events in sensitization process:
    1. Haptenation (chemical binding to skin proteins)
    2. Keratinocyte signaling (cytokine release, cytoprotective pathways)
    3. Dendritic cell maturation and mobilization
    4. T-cell proliferation in lymph nodes

Available non-animal methods:

  • Direct Peptide Reactivity Assay (DPRA)
  • KeratinoSens (ARE-Nrf2 luciferase test)
  • h-CLAT (Human Cell Line Activation Test)
  • U-SENS (U937 cell line activation)
  • IL-8 Luc assay (Interleukin-8 reporter gene)

Carcinogenicity assays:

Genotoxic (GTX) carcinogens:

  • Directly interact with DNA
  • Cause DNA damage or chromosomal aberrations
  • Tests: Ames test, E. coli reverse mutation, chromosome aberration assay, gene mutation test, micronucleus test
  • Can use in vitro or in vivo approaches

Non-genotoxic (NGTX) carcinogens:

  • Don't cause direct DNA damage
  • Affect gene expression, signal transduction, cell structures, cell cycle
  • Challenge: Large number of potential pathways makes identification difficult
  • Tests: Two-year rodent assay, cell transformation assay
  • Fewer in vitro alternatives available

📊 Epidemiology and molecular markers

📈 Environmental epidemiology basics

Epidemiology: The study of distribution and determinants of health-related states or events in specified populations, and application to prevention and control of health problems.

Key terms:

  • Cohort: Group of people followed over time
  • Determinant/risk factor: Factor (causally) related to health outcome
  • Outcome: Disease (morbidity) or death (mortality)
  • Target population: People of interest
  • Study population/sample: Representative subset actually studied

Study designs:

Cross-sectional:

  • Determinant and outcome measured at same time
  • Quick and cheap
  • Limitation: Cannot conclude causality (lacks temporality)
  • Hypothesis-generating

Case-control:

  • Sample selected based on outcome
  • Determinant measured retrospectively
  • Cases matched with controls (same disease risk)
  • Advantages: Suitable for low incidence diseases, long latency periods
  • Limitations: Recall bias, weak evidence for causality
  • Calculates odds ratios (OR)

Cohort (prospective):

  • Determinant measured at start
  • Incidence measured after follow-up
  • Starts with people at risk but not yet affected
  • Advantages: Can conclude causal relationship (temporality), multiple outcomes
  • Limitations: Not suitable for low incidence or long latency, attrition (loss to follow-up)
  • Calculates relative risk (RR)
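
A minimal sketch of the two effect measures, using hypothetical 2×2 counts; the odds ratio applies to case-control designs and the relative risk to cohort designs.

```python
# Hypothetical 2x2 table:
#                 exposed  unexposed
# diseased           a         b
# not diseased       c         d
a, b, c, d = 40, 20, 60, 80

odds_ratio = (a / c) / (b / d)                 # case-control: OR = (a*d)/(b*c)
relative_risk = (a / (a + c)) / (b / (b + d))  # cohort: risk in exposed / risk in unexposed

print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.2f}")
# OR = (40*80)/(20*60) = 2.67; RR = (40/100)/(20/100) = 2.00
```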

Nested case-control:

  • Case-control study within cohort study
  • Useful when few cases in cohort

Ecological:

  • Uses aggregated data (not individual)
  • Geographical or temporal comparisons
  • Advantages: Uses published statistics, cheap, fast
  • Limitations: Groups may differ in unmeasured ways, can't link individual exposure to individual outcome
  • Hypothesis-generating

Experimental (RCT):

  • Participants randomly assigned to intervention or control
  • Variations: Cluster-randomized, crossover, waiting list designs
  • Strongest evidence for causality

Confounding and effect modification:

  • Confounder: Third factor influences both outcome and determinant
  • Effect modifier: Association between exposure and outcome differs for certain groups
  • Solution for both: Stratification (analyze groups separately)

🧬 Human biomonitoring

Human biomonitoring (HBM): Assessment of human exposure to chemicals by quantitative analysis of compounds, metabolites, or reaction products in human samples.

Sample types:

  • Blood, urine, feces, saliva, breast milk, sweat
  • Hair, nails, teeth

Purpose:

  • Obtain insight into population exposure (internal dose)
  • Integrate with health data for impact assessment
  • Often focuses on specific age groups (neonates, children, adolescents, adults, elderly)

Major HBM programs:

  • German Environment Survey (GerES)
  • US National Health and Nutrition Examination Survey (NHANES)
  • Canadian Health Measures Survey
  • Flemish Environment and Health Study
  • Japan Environment and Children's Study

Cohort studies:

  • Cross-sectional: Exposure and health data at one time point
  • Longitudinal: Follow-up at intervals to track changes and time trends
  • Often 100,000+ participants for statistical power
  • Collect exposure data, health measures, questionnaires (diet, lifestyle, socioeconomic status)

Ethics requirements:

  • Medical Ethical Approval Committee approval mandatory
  • Documentation needed:
    1. Study protocol
    2. Privacy safeguarding statement
    3. Information letter for volunteers (informed consent)

Chemical distribution in body:

  • Depends on physicochemical properties (lipophilicity, persistence)
  • Phase I and II metabolism
  • Lipophilic compounds: Stored in fat tissue
  • Hydrophilic compounds: Excreted after metabolism or unchanged
  • Matrix choice based on compound properties (blood for lipophilic, urine for metabolites)

Analytical procedure:

  1. Sample pretreatment (remove particles)
  2. Extraction (concentrate compounds, remove interfering matrix)
  3. Chromatographic separation (LC or GC)
  4. Mass spectrometry detection (MS)
  5. Quantification

Analytical challenges:

  • Very low concentrations (pg/L in cord blood)
  • Small sample volumes
  • Background contamination (e.g., phthalates in environment)
  • Solution for contamination: Measure metabolites instead of parent compound
  • Need for high accuracy, high throughput
  • Long-term storage requirements (-20°C or -80°C)

Don't confuse: Parent compound vs. metabolite measurement - measuring metabolites (e.g., DEHP metabolites instead of DEHP itself) ensures the chemical passed through the body and underwent metabolism, avoiding false positives from environmental contamination.

🌍 The exposome concept

Exposome: Measure of all human life-long exposures and how these relate to health (Wild, 2005).

Three domains:

1. Internal exposome:

  • Metabolism
  • Endogenous hormones
  • Body morphology
  • Physical activity
  • Gut microbiota
  • Inflammation
  • Aging

2. Specific external exposome:

  • Radiation
  • Infections
  • Chemical contaminants and pollutants
  • Diet
  • Lifestyle factors (tobacco, alcohol)
  • Medical interventions

3. General external exposome:

  • Social capital
  • Education
  • Financial status
  • Psychological stress
  • Urban-rural environment
  • Climate

Tools for exposome assessment:

  • Wearables for monitoring
  • Exposure modeling
  • Internal biological measurements
  • Statistical and data science frameworks
  • Machine learning algorithms

🔬 Meet-in-the-middle model

Purpose: Identify causal relationships linking exposures to disease.

Three-step approach:

  1. Association between exposure and biomarkers of exposure
  2. Relationship between exposure and intermediate omics biomarkers (early effects)
  3. Relation between disease outcome and intermediate omics biomarkers

Principle: Causal association reinforced if found in all three steps.

Molecular markers studied:

Gene expression (transcriptomics):

  • Changes at mRNA level
  • Candidate approach (specific mRNAs) or genome-wide (microarray, NGS)
  • Examples: Transcriptomic profiles for dioxin exposure, diesel exhaust, smoking, prenatal exposures

Epigenetics:

  • Heritable changes not affecting DNA sequence
  • DNA methylation: Most widely studied
    • Methyl groups added to DNA
    • Alters expression without changing sequence
    • Can have transgenerational effects
    • Example: Dutch Hunger Winter study - prenatal famine exposure affected IGF2 methylation 60 years later
  • Histone modifications: Post-translational modifications, induced by oxidative stress
  • microRNAs: Small noncoding RNAs regulating gene expression

Metabolomics:

  • Study of all small molecule metabolic products
  • Includes self-made metabolites, nutrients, pollutants, microbial products
  • ~2900 known human metabolites (vs. ~30,000 genes)
  • Strong statistical power due to lower number of features
  • Can characterize biochemical changes from xenobiotic metabolism

Challenges:

  • Difficult to obtain samples
  • Need large study populations
  • Complex statistical methods
  • Tissue-specific effects (e.g., DNA methylation correlation varies by CpG site)
  • Correlation between levels often poor (transcript ≠ protein ≠ metabolite)

⚙️ Quality control and standardization

✅ Validity criteria

Purpose: Control quality of toxicity test outcomes.

Typical criteria:

  • Minimum % survival of control organisms
  • Minimum growth rate or offspring production in controls
  • Limited variation (<30%) in replicate control data
  • If criteria not met: Results prone to doubt, may not be accepted

Control types:

Negative control:

  • Non-exposed organisms
  • Used to check validity criteria
  • Monitor condition of test organisms

Solvent control:

  • When test chemical added using solvent
  • If response differs significantly from negative control: Use as control for analysis
  • If no significant difference: Pool with negative control

Positive control:

  • Chemical with known toxicity
  • Tested frequently
  • Checks if long-term culturing changes organism sensitivity

📏 Standardization importance

Organized by:

  • Organization for Economic Co-operation and Development (OECD)
  • International Standardization Organization (ISO)
  • ASTM International

Aims:

  • Reduce variation in test outcomes
  • Carefully describe methods for:
    • Culturing and handling organisms
    • Test procedures
    • Test media properties and composition
    • Exposure conditions
    • Data analysis

Process:

  • Based on extensive round-robin testing (multiple laboratories)
  • Regulatory bodies require standardized tests for chemical registration
  • Example: REACH in Europe requires OECD guidelines

What is standardized:

  • Test organism selection and care
  • Exposure media
  • Test conditions and duration
  • Endpoints
  • Environmental variables (caging, diet, temperature, humidity)
  • Personnel requirements
  • Animal welfare considerations

🔢 Replication and design

Biological replication:

  • Determined by number of independent samples/isolations
  • Not by technical replication later in procedure
  • Sufficient replication needed to cope with biological variation

Test design considerations:

  • Careful dose level selection and spacing
  • Adequate to fulfill regulatory requirements
  • Enable proper toxicity data estimates
  • Minimize variation while maximizing information

Good Laboratory Practice (GLP):

  • Quality control: Minimize errors, maximize accuracy and validity
  • Quality assurance: Ensure procedures followed according to regulations
  • Qualified personnel at every level
  • Detailed documentation and reporting
  • Electronic data processing systems for accuracy

Don't confuse: Number of dose levels vs. number of replicates - typically ≥3 dose levels plus control for dose-response, with multiple biological replicates at each level for statistical power.


This section provides a comprehensive overview of toxicity testing methods, from traditional whole-organism assays to cutting-edge molecular approaches, emphasizing the importance of standardization, quality control, and the ongoing shift toward alternative methods that reduce animal use while maintaining scientific rigor.


Increasing ecological realism in toxicity testing

Section 4.4. Increasing ecological realism in toxicity testing

🧭 Overview

🧠 One-sentence thesis

Single-species toxicity tests must move beyond acute mortality endpoints toward chronic, mixture, multistress, and multigeneration exposures to reflect the actual conditions organisms face at contaminated sites.

📌 Key points (3–5)

  • The gap: Most tests use acute, single-chemical, mortality-only protocols, but real contaminated sites expose organisms to low-level mixtures under suboptimal conditions for their entire life span.
  • Consecutive steps to realism: sublethal endpoints → chronic exposure → multigeneration → mixture toxicity → multistress (abiotic + biotic).
  • Common confusion: "Chronic" is relative—4 days is acute for fish but chronic (multiple generations) for algae; the term must be scaled to the organism's life cycle.
  • Mixture toxicity: Concentration Addition (same mode of action) vs Response Addition (different modes); synergism/antagonism describe deviations from these models.
  • Why it matters: Current single-generation, single-chemical tests likely underestimate long-term population risks; multigeneration studies show effects can worsen or thresholds can emerge.

🧩 Why ecological realism is needed

🧩 The mismatch between lab and field

The vast majority of single-species toxicity tests reported in the literature concerns acute or short-term exposures to individual chemicals, in which mortality is often the only endpoint.

  • Real contaminated sites: organisms exposed to relatively low levels of mixtures of contaminants under suboptimal conditions for their entire life span.
  • Laboratory tests: short, high-dose, single-chemical, mortality-focused.
  • Example: A test organism in the lab may be exposed for 48 hours to one chemical at a lethal dose, while in the field it faces years of low-level exposure to dozens of chemicals plus temperature stress and food shortage.

🎯 Mortality is crude and irrelevant

  • Mortality represents response to relatively high and therefore often environmentally irrelevant toxicant concentrations.
  • At much lower, environmentally relevant concentrations, organisms suffer a wide variety of sublethal effects (growth, reproduction, behavior, physiology).
  • Don't confuse: high-dose mortality in the lab ≠ realistic risk at field concentrations.

🪜 Consecutive steps to increase realism

🪜 Step 1: Sublethal endpoints

  • First step: address sublethal endpoints instead of, or in addition to, mortality.
  • Challenge: given the short exposure time in acute tests, it is difficult to assess endpoints other than mortality.
  • Elegant solutions: photosynthesis (plants) and behavior (animals) are sensitive and rapidly responding endpoints that can be incorporated into short-term tests.

⏱️ Step 2: Chronic exposure

  • Next step: increase exposure time by performing chronic experiments.
  • Rationale: organisms are often exposed to relatively low levels of contaminants for their entire life span.
  • In chronic tests, a wide variety of sublethal endpoints can be assessed in addition to mortality; the most common are growth and reproduction.
  • Don't confuse: "chronic" is not an absolute duration—it is relative to the organism's life cycle (see key point above).

🔁 Step 3: Multigeneration effects

  • For invertebrates and unicellular organisms (bacteria, algae) with relatively short life cycles, it is relevant to prolong exposure even further.
  • Expose organisms for their entire life span (from egg or juvenile phase till adulthood including reproductive performance) or for several generations.
  • Example: A springtail population could be exposed continuously for 10 generations to assess whether effects worsen, stabilize, or lead to adaptation.

🧪 Step 4: Mixture toxicity

  • In contaminated environments, organisms are generally exposed to a wide variety of toxicants.
  • To gain ecological realism, mixture toxicity scenarios should be considered.
  • See detailed section below.

🌡️ Step 5: Multistress

  • Organisms are exposed to toxicants under variable and sub-optimal conditions (temperature, pH, drought, food shortage, predation, competition).
  • To further gain ecological realism, multistress scenarios (chemical-abiotic and chemical-biotic interactions) should be considered.
  • See detailed sections below.

🏆 The ultimate goal

  • The highest ecological relevance may be achieved by addressing all issues together: chronic mixture toxicity tests assessing sublethal endpoints.
  • Yet, even nowadays such studies remain scarce.
  • Another way: move toward multispecies test systems that assess impacts on species interactions within communities.

🧪 Mixture toxicity concepts

🧪 Four classes of joint effects

Hewlett and Plackett (1959) classified joint effects into four classes:

  • Simple similar action: no interaction; same mode of action and target; model: Concentration Addition
  • Complex similar action: interaction; same mode of action, but the compounds interact; model: deviation from Concentration Addition
  • Independent action: no interaction; different modes of action; model: Response Addition / Independent Action
  • Dependent action: interaction; different modes of action and the compounds interact; model: deviation from Response Addition

🔢 Concentration Addition (CA)

Concentration Addition is taken as the starting point for compounds that share the same mode of action and do not interact.

  • Applies when compounds act on the same biological pathway, affecting strictly the same molecular target.
  • The only difference is the relative potency of the compounds.
  • Uses the Toxic Unit (TU) approach:
    • TU = c / EC_x (where c = concentration in the mixture, EC_x = concentration causing x% effect for that compound alone).
    • Mixture toxicity = sum of TUs of individual compounds.
  • Equitoxicity concept: to compose mixtures where compounds are represented at equal toxic strength, e.g., 1 equitoxic TU = 0.5 TU_A + 0.5 TU_B.
  • Example: Deneer et al. (1988) tested a mixture of 50 narcotic compounds and observed perfect concentration addition, even when individual compounds were present at only 0.0025 TU (0.25% of their EC50). This showed that narcotic compounds present at concentrations way below their no effect level still contribute to the joint toxicity of a mixture.
  • Alarming implication: environmental legislation is still based on a compound-by-compound approach, ignoring additive mixture effects.
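
A minimal sketch of the Toxic Unit bookkeeping under Concentration Addition, using hypothetical EC50s and mixture concentrations.

```python
# Hypothetical single-compound EC50s (mg/L) and mixture concentrations (mg/L)
ec50 = {"A": 2.0, "B": 10.0}
mixture = {"A": 0.8, "B": 3.0}

# TU_i = c_i / EC50_i; under CA the mixture causes 50% effect when sum(TU) = 1
toxic_units = {k: mixture[k] / ec50[k] for k in ec50}
total_tu = sum(toxic_units.values())
print(toxic_units)                            # {'A': 0.4, 'B': 0.3}
print(f"Sum of toxic units = {total_tu:.2f}") # 0.70 TU: below the 50% effect level

# Equitoxic mixture at 1 TU: each compound contributes 0.5 TU,
# i.e. c_A = 0.5 * EC50_A = 1.0 mg/L and c_B = 0.5 * EC50_B = 5.0 mg/L
```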

🔢 Response Addition (RA) / Independent Action

When chemicals have a different mode of action, act on different targets, but still contribute to the same biological endpoint, the mixture is expected to behave according to Response Addition.

  • Example: one compound inhibits photosynthesis, another inhibits DNA replication, but both inhibit algal growth.
  • Formula: E(mix) = E(A) + E(B) - E(A)·E(B)
    • E(mix) = fraction affected by the mixture (scaled 0 to 1).
    • The equation sums the fractions affected by A and B, then corrects for the fact that the fraction already affected by A cannot be affected again by B (or vice versa).
  • Rewritten: 1 - E(mix) = (1 - E_A)·(1 - E_B)
    • The probability of not being affected by the mixture is the product of the probabilities of not being affected by A and by B.
  • At the EC50 of a mixture of two compounds acting according to Independent Action, both compounds should be present at a concentration equalling their EC29 (since (1 - 0.29)² ≈ 0.5).
  • Illustration (Berenbaum 1981): A handful of nails breaks 5 eggs; a handful of pebbles could break 4 eggs, but 1 was already broken by the nails, so pebbles break 3 additional eggs. Total broken = 5 + 3 = 8, not 5 + 4 = 9.
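
A minimal sketch of the Response Addition calculation, including a check of the EC29 statement above; the input effect fractions are hypothetical.

```python
def response_addition(e_a, e_b):
    """Fraction affected by a mixture of two independently acting compounds (inputs 0-1)."""
    # Probability of being affected = 1 - P(unaffected by A) * P(unaffected by B)
    return 1.0 - (1.0 - e_a) * (1.0 - e_b)

print(response_addition(0.3, 0.2))    # 0.44, not 0.50: the overlap is discounted
print(response_addition(0.29, 0.29))  # ~0.50: each compound at its EC29 gives the mixture EC50
```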

⚙️ Interactions and deviations

  • Both CA and RA assume compounds do not interact.
  • Interactions can occur at four steps:
    1. Chemical/physicochemical: affecting each other's bioavailability (e.g., Zn causes Cd to be more available in soil solution).
    2. Uptake (toxicokinetics): competition for uptake sites at cell membrane.
    3. Internal processing (toxicokinetics): effects on biotransformation or detoxification.
    4. Target site (toxicodynamics): interactions during intoxication.
  • Whole-organism responses integrate the last three types, resulting in deviations from CA and RA predictions.

📊 Synergism and antagonism

  • Antagonism (less than additive): EC50 of mixture > 1 TU (and lower 95% CI also > 1 TU); compounds reduce each other's toxicity.
  • Synergism (more than additive): EC50 of mixture < 1 TU (and upper 95% CI also < 1 TU); compounds enhance each other's toxicity.
  • Two possible reasons for deviation from CA:
    1. Compounds have the same mode of action but do interact (complex similar action).
    2. Compounds have different modes of action (so they actually follow Response Addition, not Concentration Addition).
  • Important: One can only conclude on synergism/antagonism if the experimental observations are higher/lower than the predictions by both CA and RA. Otherwise, "antagonism" relative to CA could simply mean the compounds follow RA (different modes of action), not that they interact antagonistically.

🗺️ Isoboles

  • Isoboles show the toxicity of mixtures in a two-dimensional plane (one axis per chemical).
  • They are cross sections through a dose-response surface.
  • The isobole for the 50% effect level shows:
    • A straight line connecting the single-compound EC50s: Concentration Addition (no interaction).
    • A curve bowing outward, above that line: antagonism (less than additive).
    • A curve bowing inward, toward the origin: synergism (more than additive).
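
A minimal sketch generating points on the Concentration Addition isobole for two compounds with hypothetical EC50s; observed mixture EC50s lying toward the origin relative to this line would suggest synergism, further away antagonism.

```python
import numpy as np

ec50_a, ec50_b = 2.0, 10.0           # single-compound EC50s (mg/L), hypothetical
c_a = np.linspace(0.0, ec50_a, 6)
c_b = ec50_b * (1.0 - c_a / ec50_a)  # CA isobole: c_a/EC50_A + c_b/EC50_B = 1

for x, y in zip(c_a, c_b):
    print(f"c_A = {x:.1f} mg/L, c_B = {y:.1f} mg/L")
```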

🌡️ Multistress: abiotic factors

🌡️ Stress and the ecological niche

Stress is defined as an environmental change that affects the fitness and ecological functioning of species (growth, reproduction, behavior), ultimately leading to changes in community structure and ecosystem functioning.

Multistress is a situation in which an organism is exposed both to a toxicant and to stressful environmental conditions.

  • Stress is not absolute; it is defined with reference to the normal range of ecological functioning (the organism's ecological niche or ecological amplitude).
  • Stress arises when an environmental factor pushes the organism near or over the edges of its ecological niche (Van Straalen, 2003).
  • Example: An organism sitting well within its niche can be pushed outside the niche by an increase in an environmental factor. It cannot grow and reproduce outside the niche, but may survive temporarily if it can return in time. If the niche borders are extended through adaptation, the same conditions no longer cause stress.

🌡️ Temperature

  • One of the predominant abiotic factors altering toxic effects.
  • For poikilothermic (cold-blooded) organisms, increases in temperature lead to increased activity, which may affect both uptake and effects of chemicals.
  • Generally, toxic effects increased with increasing temperature (Heugens et al., 2001).
  • Differences in toxic effects between laboratory and relevant field temperatures ranged from a factor of 2 to 130.
  • Freezing temperatures may also interfere: synergistic interaction between metals and temperatures below zero, with membrane damage as an explanation (Holmstrup et al., 2010).

🍽️ Food availability

  • Food availability may have a strong effect on sensitivity to chemicals.
  • Generally, decreasing food or nutrient levels increased toxicity, resulting in differences ranging from a factor of 1.2 to 10 (Heugens et al., 2001).
  • Extreme example: Daphnids in outdoor mesocosm ditches under low to ambient nutrient concentrations showed toxicity (LOEC for growth and reproduction) at thiacloprid concentrations 2500-fold lower than laboratory-derived LOEC values (Barmentlo et al., submitted).
  • Explanation: increased primary production under nutrient-enriched conditions allowed for compensatory feeding and perhaps also reduced bioavailability of the insecticide.
  • Similar results for damselflies: those feeding on natural resources were significantly more affected than those offered high-quality artificial food.

🧂 Salinity

  • Influence of salinity on toxicity is less clear (Heugens et al., 2001).
  • If salinity pushes the organism toward its niche boundaries, it will worsen toxic effects.
  • If salinity fits within the ecological niche, processes affecting exposure will predominantly determine stress.
  • Example: Metal toxicity decreases with increasing salinity (due to competition of ions).
  • Organophosphate insecticide toxicity increases with increasing salinity.
  • For other chemicals, no clear relationship was observed.
  • A salinity increase from freshwater to marine water decreased toxicity by a factor of 2.1 (but less extreme changes are more relevant under field conditions, so the change is probably much smaller).

🧪 pH

  • Many organisms have a species-specific range of pH levels at which they function optimally.
  • At pH values outside the optimal range, organisms may show reduced reproduction and growth, in extreme cases even reduced survival.
  • pH may also have an indirect impact on exposure: metal speciation and the form in which ionizable chemicals occur are highly dependent on pH.
  • Example: Springtail Folsomia candida showed reduced control reproduction but also the lowest cadmium toxicity at soil pH_KCl 7.0 compared to pH_KCl 3.1–5.7 (Crommentuijn et al., 1997).

💧 Drought

  • In soil, moisture content is an important factor; drought often limits the suitability of the soil as a habitat.
  • Chemicals interfering with the drought tolerance of soil organisms (e.g., by affecting membrane functioning or accumulation of sugars) may exacerbate the effects of drought (Holmstrup et al., 2010).
  • Earthworms breathe through the skin and can only survive in moist soils; springtail eggs can only survive at relative air humidity close to 100%, making these organisms especially sensitive to drought.
  • Drought sensitivity may be enhanced by exposure to chemicals like metals, polycyclic aromatic hydrocarbons, or surfactants.

🧮 Multistress in risk assessment

  • In environmental risk assessment, differences between laboratory (standardized, optimal, single toxicant) and field (multiple stressors) are taken into account by applying an uncertainty factor.
  • Yet, the choice for uncertainty factors is based on little ecological evidence.
  • It remains a challenge to predict toxicant-induced effects on species and communities while accounting for variable and suboptimal environmental conditions.

🐛 Multistress: biotic factors

🐛 Definition and examples

Biotic stress: stress caused by living organisms, including predation, competition, population density, food availability, pathogens, and parasitism.

  • Biotic stressors can have direct and indirect effects.
  • Example: Predators consume prey (direct) and also induce energetically costly defense mechanisms, decrease foraging activity, and even induce morphological changes (e.g., Daphnia pulex can develop neck spines when subject to predation).
  • Parasites can alter host behavior or induce morphological changes (e.g., coloration), compromise the immune system, and alter the energy budget of the host.
  • High population density affects energy budgets and intra/interspecific competition for space, status, or resources, leading to changes in growth and size at maturity.
  • Pathogens (viruses, bacteria, fungi) can lower fitness and fecundity.
  • Don't strictly separate: effects of different biotic stressors overlap (e.g., pathogens spread more rapidly at high population densities; predation limits competition).

🧪 Effects on bioavailability and toxicokinetics

  • Biotic stressors can alter the bioavailability of chemicals.
  • Example: In aquatic environments, food level may determine the availability of chemicals to filter feeders, as chemicals may adsorb to particulate organic matter (e.g., algae).
  • The exposure route (waterborne or via food) can influence subsequent toxicokinetic processes, changing the chemicals' toxic effects.

🏃 Effects on behavior

  • Biotic stressors cause behavioral effects that could change the toxic effects of chemicals.
  • Example: Presence of a predator reduces prey (foraging) activity to avoid detection, decreasing chemical uptake via food. But the condition of the prey decreases due to lower food consumption, meaning less energy is available for other physiological processes.
  • Chemicals can also disrupt essential behaviors by reducing olfactory receptor sensitivity, inhibiting cholinesterase, altering brain neurotransmitter levels, and impairing gonadal or thyroid hormone levels.
  • This could lead to disruptive effects on communication, feeding rates, and reproduction (e.g., inability to find mating partners, worsened by low population density).
  • Chemicals can alter predator-prey relationships, resulting in trophic cascades:
    • Top-down effects: predator or grazer is more sensitive to the contaminant than its prey.
    • Bottom-up effects: susceptibility of a prey species to predation is increased.
  • Example: Cu exposure of fish and crustaceans can decrease their response to olfactory cues, making them unresponsive to predator stress and increasing the risk of being detected and consumed (Van Ginneken et al., 2018).

🫁 Physiology and energy trade-offs

  • Biotic stressors can cause elevated respiration rates, leading (in aquatic organisms) to higher toxicant uptake through diffusion.
  • Or they can decrease respiration (e.g., low food levels decrease metabolic activity and thus respiration).
  • A reduced metabolic rate could decrease the toxicity of chemicals that are metabolically activated.
  • Certain chemicals (e.g., metals) can cause higher or lower oxygen consumption, which might counteract or reinforce the effects of biotic stressors.
  • Both biotic and chemical stressors can induce physiological damage: predator stress and pesticides cause oxidative stress, leading to synergistic effects on the induction of antioxidant enzymes (catalase, superoxide dismutase) (Janssens and Stoks, 2013).
  • Organisms can eliminate or detoxify internal toxicant concentrations (e.g., transformation via Mixed Function Oxidation enzymes, sequestration by binding to metallothioneins or storage in inert tissues like granules). These defensive mechanisms are energetically costly, leading to energy trade-offs: less energy for growth, locomotion, or reproduction.
  • Food availability and lipid reserves play an important role: well-fed organisms exposed to toxicants can more easily pay the energy costs than food-deprived organisms.

⚖️ Interactive effects

  • Possible interactions (antagonism, synergism, additivity) between effects of stressors are difficult to predict and can differ depending on the stressor combination, chemical concentration, endpoint, and species.
  • Example: For Ceriodaphnia dubia, predator cues interacted antagonistically with bifenthrin and thiacloprid, but synergistically with fipronil (Qin et al., 2011).
  • Interactive effects in nature might be weaker than in the laboratory (stress levels fluctuate more rapidly; animals can move away from high-risk areas).
  • Or they might be more important in nature (generally more than two stressors are present, which could interact in an additive or synergistic way).
  • Understanding interactions among multiple stressors is essential to estimate the actual impact of chemicals in nature.
  • Example: Relyea (2003) found that apparently safe concentrations of carbaryl can become deadly to some amphibian species when combined with predator cues.

⏱️ Chronic toxicity testing

⏱️ Acute vs chronic: relative to life cycle

  • The terms acute and chronic must be considered in relation to the length of the life cycle of the organism.
  • A short-term exposure of four days is acute for fish, but chronic for algae (already four generations).
  • Generally, chronic tests aim to include at least one reproductive event or the completion of an entire life cycle.

⏱️ Challenges in chronic testing

  • Higher costs due to much longer duration.
  • Feeding: During prolonged exposure, organisms have to be fed. This will influence the partitioning and bioavailability of the test compound. Lipophilic compounds will strongly bind to food, making toxicant uptake via food more important than for hydrophilic compounds, causing compound-specific changes in exposure routes.
  • Oxygen: For chronic aquatic toxicity tests, especially sediment testing, it may be challenging to maintain sufficiently high oxygen concentrations throughout the entire experiment.
  • Validity criteria: To ensure at least one reproductive event, test guidelines set validity criteria (e.g., mean number of living offspring per control parent daphnid should be above 60; 85% of adult control chironomid midges should emerge between 12 and 23 days; mean number of juveniles produced by 10 control collembolans should be at least 100).

📉 Acute-to-Chronic Ratio (ACR)

ACR = LC50 from an acute test / NOEC or EC10 from the chronic test (or LC50_acute / LC50_chronic).

  • Generally, toxicity increases with increasing exposure time.
  • If compounds exhibit a strong direct lethal effect, the ACR will be low.
  • For compounds that slowly build up lethal body burdens, the ACR can be very high.
  • There is a relationship between the mode of action of a compound and the ACR.
  • If chronic toxicity has to be extrapolated from acute toxicity data and the mode of action is unknown, a default ACR of 10 is generally applied (but this number is chosen rather arbitrarily and can lead to under- or overestimation of chronic toxicity).
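
To make the arithmetic concrete, here is a minimal Python sketch that computes an ACR from hypothetical acute and chronic values and applies the default factor of 10; the concentrations and function names are illustrative, not taken from a guideline.

```python
# Minimal sketch: computing an acute-to-chronic ratio (ACR) and applying the
# default factor of 10 when chronic data are missing. All concentrations are
# hypothetical illustration values (mg/L), not guideline numbers.

def acute_to_chronic_ratio(acute_lc50, chronic_noec):
    """ACR = acute LC50 / chronic NOEC (or EC10)."""
    return acute_lc50 / chronic_noec

def estimated_chronic_value(acute_lc50, default_acr=10.0):
    """Extrapolate a chronic value from acute data when only acute data exist."""
    return acute_lc50 / default_acr

acute_lc50 = 4.0     # hypothetical acute LC50
chronic_noec = 0.05  # hypothetical chronic NOEC

print(acute_to_chronic_ratio(acute_lc50, chronic_noec))  # 80.0: toxicity builds up slowly, high ACR
print(estimated_chronic_value(acute_lc50))               # 0.4: here the default ACR of 10 would be under-protective
```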

🌱 Sublethal endpoints in chronic tests

  • Chronic tests allow an array of sublethal endpoints to be assessed: growth, reproduction, species-specific endpoints like emergence (time) of chironomids.
  • Compounds with different modes of action may cause very diverse sublethal effects during chronic exposure.
  • Example: The PAC phenanthrene did not affect the completion of the midge life cycle, but above a certain concentration the larvae died and no emergence was observed at all (suggesting non-specific mode of action / narcosis). In contrast, the PAC acridone caused no mortality but delayed adult emergence significantly over a wide range of test concentrations (suggesting specific mode of action affecting life cycle parameters) (Leon Paumen et al., 2008).
  • This demonstrates that specific effects on life cycle parameters need time to become expressed.

📈 Population growth rate (r)

  • If effects on all relevant life-cycle parameters are assessed, these can be integrated into effects on population growth rate (r).
  • For the 21-day daphnid test, this is achieved by integrating age-specific data on probability of survival and fecundity.
  • Population growth rates calculated from chronic toxicity data are not related to natural population growth rates in the field, but they allow construction of dose-response relationships for the effects of toxicants on r, the ultimate endpoint in chronic toxicity testing.
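
As a minimal sketch of this integration, the following Python snippet computes r from a hypothetical age-specific survival and fecundity schedule using the standard demographic (Euler-Lotka) relation, which is discussed further in the population models section below; the life-table values are invented for illustration, not an actual 21-day daphnid data set.

```python
# Minimal sketch: integrating age-specific survival l(x) and fecundity m(x)
# into a population growth rate r via the Euler-Lotka relation
#   sum_x exp(-r * x) * l(x) * m(x) = 1,
# solved here by bisection. The life table below is hypothetical.
import math

ages      = [7, 10, 13, 16, 19]               # days at which broods are released
survival  = [0.95, 0.90, 0.85, 0.80, 0.75]    # l(x): probability of surviving to age x
fecundity = [6, 8, 10, 10, 9]                 # m(x): neonates per female at age x

def euler_lotka(r):
    return sum(math.exp(-r * x) * l * m
               for x, l, m in zip(ages, survival, fecundity)) - 1.0

lo, hi = 0.0, 2.0                              # euler_lotka() decreases with r, so bisect
for _ in range(60):
    mid = (lo + hi) / 2.0
    if euler_lotka(mid) > 0:
        lo = mid
    else:
        hi = mid

print(f"population growth rate r ≈ {lo:.3f} per day")
```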

🧪 Chronic testing in practice

  • Several protocols for standardized chronic toxicity tests are available (though less numerous than for acute tests).
  • Water: 21-day Daphnia reproduction test (OECD 2012).
  • Sediment: 28-day test guidelines for midge Chironomus riparius (OECD 2010) and worm Lumbriculus variegatus (OECD 2007).
  • Terrestrial soil: springtail Folsomia candida (OECD 2016a), earthworm Eisenia fetida (OECD 2016b), enchytraeids (OECD 2016c).

🔁 Multigeneration toxicity testing

🔁 Why multigeneration?

  • At contaminated sites, organisms may be exposed during multiple generations, and the shorter the life cycle, the more realistic this scenario becomes.
  • It is generally assumed that chronic life cycle toxicity tests are indicative of the actual risk that populations suffer from long-term exposure, but there are only few multigeneration studies performed, due to obvious time and cost constraints.

🔁 Experimental challenges

  • Since aquatic and terrestrial life cycle tests generally last for 28 days, multigeneration testing will take approximately one month per generation.
  • The test compound often affects the life cycle in a dose-dependent manner: the control population could already be in the 9th generation, while an exposed population could still be in the 8th generation due to chemical-related delay in growth/development.
  • Multigeneration experiments are extremely error prone (the chance that an experiment fails increases with increasing exposure time).
  • Design choices:
    • How many generations? Most frequently, but completely arbitrarily, set at approximately 10.
    • Test concentrations: mostly based on chronic life cycle EC50 and EC10 values, but it cannot be anticipated if, and to what extent, toxicity increases (or decreases) during multigeneration exposure.
    • Risk: testing only one or two exposure concentrations increases the risk that observed effects are not dose-related but simply due to stochasticity. If concentrations are too high, many treatments may go extinct after a few generations; if too low, no effects may be observed at all (Marinkovic et al., 2012, had to increase exposure concentrations during the experiment).
    • Replication: since a single experimental treatment often consists of an entire population, treatment replication is challenging.
  • Transition from generation to generation:
    • If a replicate is maintained in a single jar, generations may overlap and exposure concentrations may decrease with time.
    • Most often, a new generation is started by exposing offspring from the previous exposed parental generation in a freshly spiked experimental unit.
    • If the aim is to determine how a population recovers when the concentration decreases with time, exposure to a single spiked medium is also an option (seems most applicable to soils).
    • To assess recovery after several generations of (continuous) exposure, offspring from previously exposed generations may be maintained under control conditions.
  • Endpoints: survival, larval development time, emergence, emergence time, adult life span, reproduction (aquatic insects); survival, growth, reproduction (terrestrial invertebrates). Only a very limited number of studies evaluated actual population endpoints like population growth rate (Postma and Davids, 1995).

🔁 To persist or to perish

  • If organisms are exposed for multiple generations, the effects tend to worsen, ultimately leading to extinction, first of the population exposed to the highest concentration, followed by populations exposed to lower concentrations in later generations (Leon Paumen et al., 2008).
  • Yet, it cannot be excluded that extinction occurs due to the relatively small population sizes in multigeneration experiments, while larger populations may pass a bottleneck and recover during later generations.
  • Thresholds have also been reported: below certain exposure concentrations, the exposed populations perform equally well as the controls, generation after generation. Hence, these concentrations may be considered as the 'infinite no effect concentration'.
  • Mechanistic explanation: the metabolic machinery of the organism is capable of detoxifying or excreting the toxicants, and this takes so little energy that there is no trade-off regarding growth and reproduction.
  • Conclusion: The frequently reported worsening of effects during multigeneration toxicant exposure raises concerns about the use of single-generation studies in risk assessment to tackle long-term population effects of environmental toxicants.

🧬 Adaptation

  • If populations exposed for multiple generations do not go extinct but persist, they may have developed resistance or adaptation.
  • Regular sensitivity testing can be included in multigeneration experiments (e.g., after 3, 6, and 9 generations, test the sensitivity of the organisms to the test compound).
  • Yet, it is still under debate whether this lower sensitivity is due to genetic adaptation, epigenetics, or phenotypic plasticity (Marinkovic et al., 2012).
  • Example: Postma & Davids (1995) observed extinction at a relatively high exposure concentration and adaptation at a relatively low exposure concentration during a multigeneration experiment with the non-biting midge Chironomus riparius.

🌴 Tropical ecotoxicology

🌴 Distinctive features of tropical ecosystems

  • The tropics cover approximately 40% of the Earth's surface, lying between the Tropic of Cancer (23½° N) and the Tropic of Capricorn (23½° S).
  • Characterized by, on average, higher temperatures and sunlight levels than temperate regions.
  • Based on precipitation patterns, three main tropical climates: tropical rainforest, monsoon, and savanna.
  • Tropical areas harbor the highest biodiversity in the world and generate nearly 60% of global primary production.

🌴 Climate-related factors

  • Three basic climate factors are essential for pesticide risks when comparing temperate and tropical aquatic agroecosystems: rainfall, temperature, and sunlight.
  • Example: High tropical temperatures have been associated with higher microbial activities and hence enhanced microbial pesticide degradation, resulting in lower exposure levels.
  • On the other hand, toxicity of pesticides to aquatic biota may be higher with increasing temperature.
  • For terrestrial ecosystems, other important abiotic factors: soil humidity, pH, clay and organic carbon content, and ion exchange capacity.
  • Although several differences in climatic factors may be distinguished, these do not lead to consistent greater or lesser pesticide risk.

🌴 Species sensitivities

  • Higher species richness in tropical areas dictates that the possible occurrence of more sensitive species cannot be ignored.
  • However, studies comparing the sensitivity of species from the same taxonomic group did not demonstrate a consistent higher or lower sensitivity of tropical organisms compared to temperate organisms.
  • Certain taxonomic groups may be more represented and/or ecologically or economically more important in tropical areas, such as freshwater shrimps and (terrestrial) termites.
  • The development of test procedures for such species and their incorporation in risk assessment procedures seems imperative.

🌴 Testing methods

  • Given the vast differences in environmental conditions between tropical and temperate regions, the use of test procedures developed under temperate environments to assess pesticide risks in tropical areas has often been disputed.
  • Methods developed under temperate conditions need to be adapted to tropical environmental conditions, e.g., by using tropical test substrates and by testing at higher temperatures (Niva et al., 2016).

🌴 Agricultural practices and legislation

  • Agricultural practices in tropical countries are likely to lead to higher pesticide exposure and hence higher risks.
  • Main reasons:
    1. Unnecessary applications and overuse.
    2. Use of cheaper but more hazardous pesticides.
    3. Dangerous transportation and storage conditions.
    • All often a result of a lack in training of pesticide applicators in the tropics.
  • Countries in tropical regions usually do not have strict laws and risk assessment regulations in place regarding the registration and use of pesticides.
  • Pesticides banned in temperate regions for environmental reasons are often regularly available and used in tropical countries (e.g., Brazil).

Linking population, community and ecosystem responses

5.1. Introduction: Linking population, community and ecosystem responses

🧭 Overview

🧠 One-sentence thesis

The excerpt does not contain substantive content for this section; it consists only of chapter navigation elements and a partial heading.

📌 Key points (3–5)

  • The excerpt shows only a chapter title and section heading without explanatory text.
  • No definitions, mechanisms, or conceptual content are provided.
  • The material appears to be a transition point in the textbook structure.
  • Actual content linking population, community, and ecosystem responses is not present in this excerpt.

📄 Content status

📄 What the excerpt contains

The provided text includes:

  • A chapter title: "Chapter 5: Population, Community and Ecosystem Ecotoxicology"
  • A partial section heading: "5.1. Introduction: Linking population, community"
  • Navigation metadata (page numbers, textbook title)
  • Questions from a previous section (4.4.7) about tropical vs temperate ecotoxicology

❌ What is missing

  • No introductory paragraph explaining how population, community, and ecosystem levels connect
  • No definitions of population-level, community-level, or ecosystem-level responses
  • No conceptual framework or thesis about linking these ecological scales
  • No mechanisms, examples, or explanations of cross-scale relationships

🔍 Note for review

🔍 Implication for study

This section heading suggests the chapter will address how toxicological effects cascade or connect across different ecological organization levels (population → community → ecosystem), but the actual explanatory content is not included in the excerpt provided.

To study this topic, the full section text would be needed.


Population ecotoxicology in laboratory settings

5.2. Population ecotoxicology in laboratory settings

🧭 Overview

🧠 One-sentence thesis

Population-level ecotoxicology is essential because environmental protection targets populations rather than individuals, and population-specific phenomena like age-structure and density create unique responses to toxicants that cannot be predicted from individual-level tests alone.

📌 Key points (3–5)

  • Why study populations: Protection targets are populations, communities, and ecosystems, not just individuals; several phenomena (age-specific sensitivity, individual interactions) are unique to this level.
  • Age and stage sensitivity: Young individuals (neonates, first instars) can be up to three orders of magnitude more sensitive than adults; instar-specific sensitivity can vary as much as species-specific sensitivity.
  • Density effects: High-density populations suffering from crowding and starvation are markedly more sensitive to toxicants (up to 100× more), and can show unexpected indirect effects.
  • Common confusion: Population ecotoxicity tests using cohorts are actually extensions of chronic toxicity tests, not true population tests with natural heterogeneous composition.
  • Key challenge: Increased uncertainty and complexity due to individual variability, feedback loops, and weaker dose-response relationships compared to individual-level studies.

🎯 Why population-level ecotoxicology matters

🛡️ Protection targets

  • Environmental protection goals focus on populations, communities, and ecosystems—not isolated individuals.
  • Effects observed at the individual level do not necessarily predict what happens to entire populations.
  • The excerpt emphasizes that studying populations is "highly important" to obtain data and insights into mechanisms at this level.

🔗 Unique phenomena at population level

Several processes only emerge when studying groups of organisms:

  • Age-specific sensitivity to chemicals
  • Interactions between individuals (competition, crowding)
  • Social structure and genetic composition
  • Density and age structure dynamics

Don't confuse: Population-level effects are not simply the sum of individual effects; feedback loops and interactions create emergent properties.

🧬 Population-specific properties

📊 Four unique properties

The excerpt identifies properties unique to the population level of biological organization:

| Property | Description |
| --- | --- |
| Social structure | Organization and interactions among individuals |
| Genetic composition | Variation in genes across the population |
| Density | Number of individuals per unit area/volume |
| Age structure | Distribution of individuals across different ages/stages |

🎂 Age and developmental stage sensitivity

For almost all species, young individuals like neonates or first instars are markedly more sensitive than adults or late instar larvae.

  • Sensitivity differences can reach three orders of magnitude (1000×).
  • Instar-specific sensitivities may vary as much as species-specific sensitivities (Figure 1 in excerpt shows this for the insecticide diazinon).
  • Timing matters: When exposure occurs relative to critical life stages seriously affects the extent of adverse effects, especially in seasonally synchronized populations.

Example: If a toxicant pulse occurs when most individuals are in their most sensitive early stage, population impact will be far greater than if it occurs when most are adults.

📈 Population developmental stage effects

  • Exponentially growing daphnid populations recovered much faster from insecticide exposure than populations at carrying capacity.
  • Population state (growing vs. stable) influences resilience to toxicants.

🔄 Density-dependent effects

🏙️ High density increases sensitivity

When populations suffer from starvation and crowding due to high densities and intraspecific competition, they are markedly more sensitive to toxicants, sometimes even up to a factor of 100.

  • Crowding and food limitation make organisms more vulnerable to chemical stress.
  • This represents a synergistic effect: density stress + toxicant stress together are worse than either alone.

🔀 Unexpected indirect effects

The excerpt describes a counterintuitive finding with chironomid populations and cadmium:

  • High-density populations: Population growth rate actually increased at moderate Cd exposure because Cd-induced mortality reduced food shortage for survivors. Only at the highest Cd levels did growth rate decrease.
  • Low-density populations: Showed the expected decrease in growth rate with increasing Cd.
  • At all Cd levels, low-density populations had markedly higher growth rates than high-density populations.

Don't confuse: A toxicant causing mortality does not always reduce population growth rate if density-dependent factors (like food competition) are relieved by that mortality.

🧪 Population ecotoxicity testing approaches

🔬 Test design differences

The excerpt distinguishes two approaches:

| Approach | Characteristics | Purpose |
| --- | --- | --- |
| Chronic ecotoxicity studies | Use cohorts of same size and age; minimize variation | Individual-level chronic effects |
| Population ecotoxicology | Natural heterogeneous population composition | True population-level dynamics |

🦠 Laboratory test species

To circumvent long life spans of higher organisms, researchers select species with short life cycles:

  • Algae: A 3–4 day test can be a multigeneration experiment.
  • Bacteria: Rapid reproduction allows population studies.
  • Zooplankton (e.g., daphnids): Female daphnids may release up to three clutches of neonates during a 21-day test.

📉 Population growth rate (r)

These population ecotoxicity tests offer the unique possibility to calculate the ultimate population parameter, the population growth rate (r).

  • r is a demographic parameter integrating survival, maturity time, and reproduction.
  • It provides a comprehensive measure of population-level effects.

Important limitation: The excerpt notes that chronic experiments are "typically performed with cohorts and not with natural populations, making these experiments rather an extension of chronic toxicity tests than true population ecotoxicity tests."

⚠️ Challenges and complexity

🌀 Increased uncertainty

Research at the population level is characterized by:

  • Less direct link between chemical exposure and observed effects (compared to individual level).
  • Individual variability within populations.
  • Feedback loops that loosen dose-response relationships.
  • Increasing uncertainty if these processes are not properly addressed.

⏱️ Time and effort

  • Population studies require more time and effort than individual-level studies.
  • Effects at the population level are "understudied" as a result.
  • Even more understudied: meta-populations, communities, and ecosystems.

🧩 Population stability definition

A challenging question involved in population ecotoxicology is when a population is considered to be stable or in steady state.

  • The excerpt shows that populations with various types of oscillation can all be considered stable (Figure 2).
  • "One could even argue that any population that does not go extinct can be considered stable."
  • A single population can vary considerably in density over time, potentially strongly affecting the impact of toxicant exposure.

Don't confuse: "Stable" does not mean constant density; populations can fluctuate widely and still be considered stable if they persist.


Wildlife population ecotoxicology

5.3. Wildlife population ecotoxicology

🧭 Overview

🧠 One-sentence thesis

The catastrophic decline of Asian vulture populations by over 90–99% in the mid-1990s to early 2000s required forensic ecotoxicological investigation to identify the causative factor, demonstrating the critical role of retrospective monitoring when prospective risk assessment fails.

📌 Key points (3–5)

  • Scale of the crisis: vulture populations in India, Pakistan, and Nepal declined by 90–99% from the mid-1990s to early 2000s, with total losses estimated in the tens of millions.
  • Geographic spread: similar declines across multiple countries indicated the causative factor was not restricted to a specific area.
  • Initial hypotheses ruled out: infectious diseases were suspected first, but no known or new diseases could explain the mortality rates, and vultures have highly developed immune responses due to their scavenging diet.
  • Forensic approach needed: interdisciplinary ecological studies were launched to understand background mortality and identify the unknown cause through retrospective investigation.
  • Common confusion: prospective risk assessment vs retrospective monitoring—the vulture case shows that new chemicals may cause unforeseen population-level effects that only become apparent through forensic investigation after the damage has occurred.

🦅 The vulture population crash

📉 Magnitude and timeline

  • Historically, vulture populations in India, Pakistan, and Nepal were "too numerous to be effectively counted."
  • In the mid-1990s, numbers in northern India began to decline catastrophically, first documented in Keoladeo National Park (Prakash 1999).
  • By the early 2000s, three species experienced unprecedented declines:
    • Oriental White-backed vultures (Gyps bengalensis)
    • Long-billed vultures (Gyps indicus)
    • Slender-billed vultures (Gyps tenuirostris)
  • Decline rate: over 90–99% population loss within approximately 5–10 years.
  • Total impact: tens of millions of vultures lost.

🌍 Geographic pattern

  • The decline was not isolated to one location.
  • Similar patterns observed across India, Pakistan, and Nepal in the following years after the initial detection.
  • Key implication: the causative factor operated across national borders and diverse habitats, suggesting a widespread environmental or anthropogenic cause rather than a localized event.

🔍 Forensic investigation approach

🧪 What forensic ecotoxicology means

Forensic approaches in ecotoxicology: retrospective investigation methods used to identify the cause of observed population declines or mortality events after they have occurred.

  • Unlike prospective risk assessment (predicting harm before a chemical is released), forensic ecotoxicology works backward from observed damage.
  • The excerpt emphasizes this approach is necessary when population crashes occur without an obvious known cause.
  • Example: When vulture populations collapsed, researchers had to investigate retrospectively because no existing risk assessment had predicted the problem.

🦠 Ruling out disease hypotheses

Initial hypotheses focused on infectious diseases because:

  • Large-scale mortality events in wildlife often involve pathogens.
  • The possibility of new diseases to which vultures had no prior exposure was considered.

Why diseases were ruled out:

  • No identified diseases had shown similar mortality rates in other bird species.
  • Vultures possess a highly developed immune response, an adaptation to their diet of scavenging dead and often decaying animals.
  • The immune system of vultures is expected to handle a wide range of pathogens, making a disease-based explanation less plausible.

Don't confuse: high immune capacity does not mean vultures are immune to all threats—it specifically relates to pathogen resistance, not necessarily to chemical toxicants.

🔬 Interdisciplinary ecological studies

  • To understand the crisis, researchers initiated studies to establish background mortality in the affected species.
  • These studies began in large colonies in Pakistan.
  • The goal was to provide a baseline understanding of normal mortality patterns before identifying abnormal causes.
  • The excerpt indicates this was the starting point for a broader forensic investigation (the text cuts off before revealing the ultimate cause).

📊 Implications for risk assessment

⚠️ Uncertainty in prospective assessment

The vulture case study is presented as a learning objective to "critically reflect on the uncertainty of prospective risk assessment of new chemicals."

What this means:

  • Prospective risk assessment attempts to predict harm before chemicals are widely used.
  • The vulture crash demonstrates that such assessments can fail to anticipate real-world population-level effects.
  • Key lesson: even with risk assessment procedures in place, unforeseen interactions between chemicals and wildlife populations can occur.

🔄 Retrospective vs prospective monitoring

| Approach | Timing | Purpose | Vulture case example |
| --- | --- | --- | --- |
| Prospective risk assessment | Before chemical release | Predict potential harm | Failed to prevent the vulture decline |
| Retrospective monitoring (forensic) | After observed damage | Identify cause of harm | Required to investigate the population crash |

  • The excerpt's keywords include "pharmaceuticals" and "uncertainty," suggesting the causative factor may have been a pharmaceutical compound not anticipated to harm vultures.
  • Common confusion: retrospective investigation is not a failure of science—it is a necessary tool when prospective methods have inherent limitations or when novel exposure pathways emerge.

🎯 Population-level focus

The excerpt places this case study within the context of "wildlife population ecotoxicology," emphasizing:

  • Effects manifest at the population level (90–99% declines), not just individual toxicity.
  • Population parameters (survival, reproduction, age structure) are the relevant endpoints.
  • Why it matters: individual-level toxicity tests may not predict population crashes, especially when factors like age-specific sensitivity, population structure, or ecological context play a role.

Radioactive Decay and Ionising Radiation

Radioactive Decay and Ionising Radiation

🧭 Overview

🧠 One-sentence thesis

Radioactive decay is a spontaneous, stochastic process governed by exponential laws that transforms unstable nuclei into stable ones through particle or photon emission, and the resulting ionising radiation interacts with matter in ways that depend on particle type, energy, and mass.

📌 Key points (3–5)

  • What radioactive decay is: spontaneous, irreversible disintegration of unstable nuclei into more stable forms, emitting particles (alpha, beta) or photons (gamma rays).
  • How decay is quantified: the decay constant λ and half-life t₁/₂ describe the probability and time scale of decay; activity (Becquerel) measures disintegrations per second.
  • Types of decay mechanisms: alpha decay (emits helium nucleus), beta decay (converts neutron↔proton, emits electron/positron), gamma decay (emits photons), and fission (splits into lighter nuclei).
  • Common confusion—external vs internal hazard: alpha particles have high ionising potential but low penetration (stopped by paper/skin), so they are dangerous only when inside the body; beta particles penetrate deeper but ionise less densely.
  • Natural vs artificial radionuclides: primordial radionuclides (e.g. ²³⁸U, ²³²Th, ⁴⁰K) exist naturally with billion-year half-lives; artificial ones (e.g. ¹³⁷Cs, ⁹⁰Sr) come from nuclear activities and accidents.

⚛️ The physics of radioactive decay

⚛️ What decay is and why it happens

Radioactivity: the phenomenon of spontaneous disintegration or decay of unstable atomic nuclei to form energetically more stable ones.

  • Decay is irreversible: after one or more transformations, a stable, non-radioactive atom is formed.
  • Particles (protons, neutrons) and/or radiation (photons) can be emitted during the process.
  • It is a stochastic phenomenon: impossible to predict when any given atom will disintegrate, but the probability per unit time is constant (the decay constant λ).

📉 Exponential decay law

  • The number of remaining nuclei N(t) at time t follows an exponential function:
    N(t) = N₀ × e^(−λt), where N₀ is the initial number and λ is the decay constant [s⁻¹].
  • The half-life t₁/₂ is the time needed for half the nuclei to decay:
    t₁/₂ = ln(2) / λ ≈ 0.693 / λ.
  • Half-lives vary enormously: from fractions of a second to billions of years.

Example: ²³⁸U has a half-life of 4.468 × 10⁹ years; ¹³⁷Cs has 30.17 years; ¹³¹I has only 8 days.

📊 Activity: measuring decay rate

Activity A: the rate at which decay occurs in a sample, determined by the number of radioactive nuclei present and the decay constant λ.

  • Formula: A(t) = λ × N(t).
  • Activity decreases exponentially over time, following the same curve as N(t).
  • SI unit: Becquerel [Bq] = one disintegration per second.
  • Older unit: Curie [Ci] = 3.7 × 10¹⁰ Bq.

Example: A sample with 100 Bq activity has 100 radioactive disintegrations every second.
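
As a minimal sketch tying these formulas together, the following Python snippet evaluates the decay constant, N(t) and the activity for ¹³⁷Cs using the half-life quoted above; the initial number of nuclei is an arbitrary illustration value.

```python
# Minimal sketch: exponential decay and activity for Cs-137
# (half-life 30.17 years, as given above). The initial number of nuclei is
# an arbitrary illustration value.
import math

T_HALF_Y = 30.17
SECONDS_PER_YEAR = 365.25 * 24 * 3600

lam = math.log(2) / (T_HALF_Y * SECONDS_PER_YEAR)   # decay constant λ [s^-1]
N0 = 1.0e15                                         # initial number of Cs-137 nuclei (hypothetical)

def nuclei_remaining(t_years):
    """N(t) = N0 * exp(-λ t)."""
    return N0 * math.exp(-lam * t_years * SECONDS_PER_YEAR)

def activity_bq(t_years):
    """A(t) = λ * N(t), in Becquerel (disintegrations per second)."""
    return lam * nuclei_remaining(t_years)

print(f"λ = {lam:.3e} s^-1")
print(f"initial activity      : {activity_bq(0):.3e} Bq")
print(f"after one half-life   : {nuclei_remaining(T_HALF_Y) / N0:.2f} of the nuclei remain")
print(f"after three half-lives: {activity_bq(3 * T_HALF_Y):.3e} Bq (1/8 of the initial activity)")
```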

🔬 Mechanisms of radioactive decay

🔬 Alpha decay

  • An α-particle (a helium-4 nucleus: 2 protons + 2 neutrons) is emitted.
  • The new nucleus has:
    • Atomic number Z reduced by 2 (lost 2 protons).
    • Mass number A reduced by 4 (lost 2 protons + 2 neutrons).
  • Alpha decay is common in heavy nuclei.

🔬 Beta decay

Beta decay corrects an imbalance between neutrons and protons by converting one nucleon into the other.

| Type | What happens | Particle emitted | Charge change |
| --- | --- | --- | --- |
| β⁻ decay | Neutron → proton | Electron (β⁻) + antineutrino | Z increases by 1 |
| β⁺ decay | Proton → neutron | Positron (β⁺) + neutrino | Z decreases by 1 |
| Electron capture | Proton captures inner electron → neutron | Neutrino | Z decreases by 1 |

  • Each mode involves an electron or positron (emitted or captured) so that electric charge is conserved.

🔬 Gamma decay

  • The nucleus emits energy as highly energetic electromagnetic radiation (photons).
  • The original nucleus is maintained (no change in Z or A).
  • Often a secondary process after alpha or beta decay, when the nucleus is left in an excited state.
  • Gamma ray energies are unique to each radionuclide, so gamma spectra can identify radionuclides in a sample.

🔬 Fission

  • Some heavy nuclei with excess neutrons naturally decay by splitting into two lighter nuclei and a few neutrons.
  • The new nuclei usually also further decay.
  • Fission is usually artificially induced (e.g. in nuclear reactors), but can occur naturally.

🗺️ The nuclide chart

  • A two-dimensional representation of all known atoms:
    • X-axis: number of neutrons.
    • Y-axis: number of protons (atomic number Z).
  • Contains information on half-lives, mass numbers, decay modes, energies of emitted radiation, etc.
  • Different colours represent stable nuclei and specific decay modes (alpha, beta, electron capture).
  • More than 4000 radionuclides are known.
  • An interactive nuclide chart is available from the International Atomic Energy Agency (IAEA).

Don't confuse with the periodic table: the periodic table arranges elements by atomic number and chemical properties; the nuclide chart arranges nuclides by neutron and proton count.

🌍 Sources of radionuclides in the environment

🌍 Naturally occurring radionuclides

Naturally occurring radionuclides: radionuclides that are omnipresent in the environment, often found in geological materials such as igneous rocks and ores.

Examples: ²³⁸U, ²³²Th, ²²⁶Ra, ⁴⁰K.

  • Activity concentrations in common rock types:
    • ²³⁸U: 7–60 Bq/kg.
    • ⁴⁰K: 70–1500 Bq/kg.

🌍 Primordial radionuclides

  • Primordial radionuclides: created before Earth's formation, with half-lives of billions of years.
  • Some exist alone (e.g. ⁴⁰K).
  • Others are the head of nuclear decay chains (e.g. ²³⁸U, ²³²Th, ²³⁵U):
    • Through successive alpha and beta decays, they decay until a stable lead (Pb) isotope is formed.
    • Decay chain members include ²³⁸U, ²³²Th, ²²⁶Ra, ²¹⁰Pb, ²¹⁰Po, each with specific chemical and radiological properties.
  • The three radioactive decay chains and ⁴⁰K contribute most to external background radiation humans are exposed to.

🌍 Radon and its decay products

  • Within the ²³⁸U and ²³²Th decay chains, two isotopes of the noble gas radon (²²²Rn and ²²⁰Rn) are formed.
  • Unlike other decay products, radon can migrate through rock pores to the soil surface and be released into the atmosphere.
  • Average atmospheric activity concentration: 1–10 Bq/m³ (highly dependent on soil type and composition).
  • Although radon itself is inert, it decays to alpha and beta emitters that can attach to tissues.
  • When inhaled, radon decay products cause internal lung irradiation.

🌍 Industrial activities and enhanced concentrations

  • Industries (metal mining, phosphate industry, oil and gas) exploit natural resources containing naturally occurring radionuclides.
  • These activities result in enhanced concentrations in products, by-products, and residues.
  • Can lead to elevated or more bioavailable radionuclide levels in the environment, posing risks to human and ecosystem health.

🌍 Cosmogenic radionuclides

  • Some radionuclides are continuously formed in the atmosphere through interaction with cosmic radiation.

Example: ¹⁴C is produced by thermal neutrons interacting with nitrogen (¹⁴N(n,p)¹⁴C).

🏭 Artificial radionuclides

Artificial radionuclides: radionuclides that are artificially generated, for example in nuclear power plants, particle accelerators, and radionuclide generators.

  • Generated for energy production, medical applications, and research.
  • In the last century, sources of environmental contamination include:
    • Nuclear weapon production and testing.
    • Improper waste management.
    • Nuclear energy production and related accidents.
  • Examples: ³H, ¹⁴C, ⁹⁰Sr, ⁹⁹Tc, ¹²⁹I, ¹³⁷Cs, ²³⁷Np, ²⁴¹Am, and several U and Pu isotopes.

🏭 Nuclear accidents and long-term contamination

  • During the Chernobyl and Fukushima accidents, a wide range of radionuclides were released.
  • Most had half-lives of hours, days, or weeks, resulting in rapid decline.
  • After the initial release, ¹³⁷Cs remained the most important radionuclide causing enhanced long-term exposure risk for humans and biota, due to its relatively long half-life (30.17 years) and high release rate.
  • Compared to nuclear weapon production and testing, nuclear accidents contribute only a small fraction to environmental contamination.

☢️ Interaction of ionising radiation with matter

☢️ What ionising radiation is

  • Ionising radiation has the potential to react with atoms and molecules in matter, causing ionisations, excitations, and radicals, which result in damage to organisms.
  • Sources:
    • Radioactive decay.
    • Artificially generated (e.g. X-rays).
    • Cosmic radiation.

☢️ Directly ionising radiation

Directly ionising radiation: charged particles (such as alpha or beta particles) with sufficient kinetic energy to cause ionisations.

  • When colliding with electrons, these particles transfer part of their energy, resulting in ionisations.

🔴 Alpha particles: high ionisation, low penetration

🔴 Properties of alpha particles

  • Usually high energy, typically around 5 MeV.
  • Relatively high mass, high kinetic energy, and charge → high ionising potential.
  • Due to high mass, follow a relatively straight and short path through matter.
  • During each interaction, a small amount of energy is transferred until the particle is stopped.
  • Result: a lot of damage in a small area (high ionising potential).

🔴 Penetration and shielding

  • Low penetration depth: stopped by a few centimetres of air or a sheet of paper.
  • Cannot penetrate the skin → low hazard in case of external irradiation.

🔴 Internal contamination hazard

  • When present inside the body (internal contamination), alpha particles cause much more damage due to:
    • High ionising capacity.
    • Lack of a shielding barrier.

🔴 Medical applications

  • Alpha emitters are suited for local treatment of tumour cells.
  • A targeting biomolecule chemically bound to an alpha emitter (or a radionuclide that decays into one) can be injected intravenously.
  • It spreads through the body, accumulates in specific tissues or cells, and locally irradiates tumour metastases.

Example: The property to deposit all energy in a very small area makes alpha emitters ideal for targeted cancer therapy.

🔵 Beta particles: lower ionisation, higher penetration

🔵 Properties of beta particles

  • High-speed electrons or positrons emitted during radioactive decay.
  • Low mass, usually high kinetic energy, and charge → lower ionising potential compared to alpha radiation but higher penetration depth.

🔵 Interaction pattern

  • Do not follow a linear path when interacting with matter.
  • When colliding with other electrons, beta particles can change direction, resulting in a very irregular interaction pattern.

🔵 Penetration depth

  • In air: several decimetres up to a few metres.
  • In denser materials: reduced to centimetres.

Don't confuse alpha and beta hazards:

  • Alpha: high damage, low penetration → dangerous only internally.
  • Beta: lower damage per interaction, higher penetration → can penetrate skin but less densely ionising.

Population models

5.5. Population models

🧭 Overview

🧠 One-sentence thesis

Population models—especially exponential and logistic growth frameworks—provide a way to predict how toxicants affect population viability beyond individual-level effects, which is essential for ecological risk assessment.

📌 Key points (3–5)

  • Why population models matter: Risk assessment usually focuses on individuals, but protecting viable populations in ecological contexts requires understanding population-level dynamics.
  • Exponential growth and the intrinsic rate of increase (r): When resources are unlimited, populations grow exponentially at rate r, which can be calculated from life-history data and serves as a measure of population performance under toxic stress.
  • Logistic growth and carrying capacity (K): Density-dependence limits growth; the logistic model includes carrying capacity K, but K is hard to measure in natural or contaminated conditions.
  • Common confusion: Exponential vs logistic—exponential assumes unlimited resources (early colonization, lab cohorts), while logistic includes density-dependence (resource scarcity, competition); both can be affected by toxicants but in different ways.
  • Mechanistic effect models (MEMs): Adding ecological details (dispersal, predators, individual physiology) makes models more realistic and useful for regulatory risk assessment.

📊 Why population models are needed in ecotoxicology

🎯 Limitations of individual-level risk assessment

Ecological risk assessment of toxicants usually focuses on the risks run by individuals, by comparing exposures with no-effect levels.

  • Traditional approaches do not account for quantitative dynamics of populations and communities.
  • Yet in many cases the goal is not protecting individual plants or animals but protecting a viable population in an ecological context.

🔍 Three reasons population models are urgently needed

  1. Extrapolation from lab to population: We need to know whether quality standards derived from toxicity tests are sufficiently (but not overly) protective at the population level.
  2. Lab vs field differences: Responses of isolated, homogenous cohorts in the laboratory may differ from those of interacting, heterogeneous populations in the field.
  3. Prioritizing management: To set the right priorities, we need to know the relative and cumulative effect of chemicals in relation to other environmental pressures.
  • Don't confuse: Individual toxicity endpoints (LC₅₀, NOEC) vs population-level endpoints (growth rate, carrying capacity)—the former measure immediate harm to individuals, the latter measure long-term viability of populations.

🌱 Exponential growth and the intrinsic rate of increase

📐 The exponential growth model

If resources are unlimited, and the per capita birth (b) and death rates (d) are constant in a population closed to migration, the number of individuals added to the population per time unit (dN/dt) can be written as: dN/dt = (b − d)N = rN, where r is called the intrinsic rate of increase.

  • What it describes: Population size N grows exponentially over time when resources are unlimited.
  • When it applies: Early in the growing season, when populations colonize a new environment, or during most of human population history (recognized by Malthus in 1798 and used by Darwin in his theory of natural selection).
  • The solution is: N(t) = N₀ × e^(rt), where N₀ is the initial population size.

🧬 How toxicants affect exponential growth

  • Since toxicants affect either reproduction or survival (or both), they also affect the exponential growth rate r.
  • This suggests r can be considered a measure of population performance under toxic stress.
  • Example: A toxicant that reduces birth rate or increases death rate will lower r, slowing or reversing population growth (Figure 1a in the excerpt shows blue = no toxicant, red = with toxicant).
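
A minimal sketch of this comparison, using hypothetical growth rates for the unexposed and exposed populations (mirroring the blue and red curves of Figure 1a):

```python
# Minimal sketch: exponential growth with and without a toxicant-reduced r.
# The growth rates and the size of the reduction are hypothetical.
import math

N0 = 10           # initial population size
r_control = 0.20  # intrinsic rate of increase without toxicant [day^-1]
r_exposed = 0.14  # r reduced by toxicant effects on birth and/or death rates

def n_exponential(t, r, n0=N0):
    """N(t) = N0 * exp(r t) under unlimited resources."""
    return n0 * math.exp(r * t)

for t in (0, 10, 20, 30):
    print(f"day {t:2d}: control {n_exponential(t, r_control):8.1f}  "
          f"exposed {n_exponential(t, r_exposed):8.1f}")
```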

🧮 Calculating r from life-history data

The intrinsic rate of increase can be derived from age-specific survival and fertility rates using the so-called Euler-Lotka equation.

  • The Euler-Lotka equation integrates survivorship l(x) and offspring production m(x) over all ages x.
  • Problem: The equation does not allow simple derivation of r; r must be obtained by iteration.
  • Approximate shortcut: For many animals (especially those with high reproductive output and low juvenile survivorship), a reasonably good estimate is:
    r ≈ (ln(S × m)) / α
    where α = age at first reproduction, S = survival to first reproduction, m = reproductive output.
  • Why this works: Age at first reproduction is often the dominant variable determining population growth.
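
Example (hypothetical values): for α = 10 days, S = 0.8 and m = 20 offspring, r ≈ ln(0.8 × 20)/10 = ln(16)/10 ≈ 0.28 day⁻¹.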

🧪 Life-table response experiments

  • Investigators observe effects of toxicants on age-specific survival and fertility, then calculate r for each exposure concentration.
  • These tests are called life-table response experiments.
  • Example (Figure 2): Water flea Daphnia magna exposed to cadmium and ethyl-parathion—offspring production, age at first reproduction, and calculated r all respond to toxicant concentration.
  • Conclusion from Forbes and Calow (1999): Using r adds ecological relevance, but it does not necessarily provide a more or less sensitive endpoint than the underlying vital rates.

📉 Relationship between r and toxicant concentration

  • Hendriks et al. (2005) postulated that r should show a near-linear decrease with toxicant concentration, scaled to the LC₅₀.
  • This was confirmed in a meta-analysis of 200 laboratory experiments, mostly invertebrates (Figure 3).
  • Interpretation: As exposure concentration increases relative to LC₅₀, population growth rate declines predictably.
  • Field cases with large vertebrates also show pollution limiting population development.

🔄 Discrete-time alternative: Leslie matrix

  • Instead of continuous time (calculus), population events can be modeled at equidistant moments using a Leslie matrix.
  • The Leslie matrix is a table of survival and fertility scores for each age class.
  • When multiplied by the current age distribution, it gives the age distribution at the next time step.
  • If the matrix is time-invariant, the population grows each time step by a factor λ, where ln λ = r (λ = 1 corresponds to r = 0).
  • Advantage: λ (the dominant eigenvalue) can be more easily decomposed into component life-history traits affected by toxicants.
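
A minimal numerical sketch (with an invented three-age-class matrix) showing how λ and r are obtained from a Leslie matrix:

```python
# Minimal sketch: a hypothetical 3-age-class Leslie matrix.
# Top row = age-specific fertility; sub-diagonal = survival to the next class.
import numpy as np

L = np.array([
    [0.0, 4.0, 3.0],   # fertility of age classes 1..3
    [0.5, 0.0, 0.0],   # survival from class 1 to class 2
    [0.0, 0.7, 0.0],   # survival from class 2 to class 3
])

eigenvalues = np.linalg.eigvals(L)
lam = max(eigenvalues.real)   # dominant eigenvalue λ
r = np.log(lam)               # ln λ = r

print(f"λ = {lam:.3f}, r = {r:.3f} per time step")

# Projecting the current age distribution one time step ahead:
n_now = np.array([100, 40, 20])
n_next = L @ n_now
print("next age distribution:", n_next)
```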

🌾 Logistic growth and carrying capacity

📐 The logistic growth model

One common approach is to assume that the birth rate b decreases with density due to increasing scarcity of resources.

  • The simplest assumption is a linear decrease with N:
    dN/dt = rN(1 − N/K)
  • New parameter K: the carrying capacity, the density at which dN/dt becomes zero and the population reaches equilibrium (N(∞) = K).
  • Also called the Verhulst-Pearl equation (described by Pierre-François Verhulst in 1844, rediscovered by Raymond Pearl in 1920).
  • The solution is an S-shaped curve: population starts at N₀ and increases asymptotically toward K (Figure 1b).
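
A minimal sketch of the resulting S-shaped curve, using the closed-form logistic solution with hypothetical parameter values:

```python
# Minimal sketch: the logistic (Verhulst-Pearl) growth curve
#   N(t) = K / (1 + ((K - N0) / N0) * exp(-r t))
# r, K and N0 below are hypothetical illustration values.
import math

r, K, N0 = 0.2, 1000.0, 10.0

def n_logistic(t):
    return K / (1 + ((K - N0) / N0) * math.exp(-r * t))

for t in (0, 10, 25, 50, 100):
    print(f"day {t:3d}: N = {n_logistic(t):7.1f}")   # approaches the carrying capacity K
```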

🧪 Can logistic parameters measure population performance?

  • Problem: Carrying capacity K is difficult to measure under natural and contaminated conditions.
  • Many field populations (e.g., arthropods) fluctuate widely due to predator-prey dynamics and hardly ever reach K within a growing season.
  • Example study (Noël et al., 2006): Zinc in the diet of springtail Folsomia candida did not affect K, but there were interactions below K (including hormesis—growth stimulation by low doses—and Allee effects—loss of growth potential at low density due to lower encounter rate).

🛡️ Density-dependence as a buffering mechanism

  • Density-dependence is expected to buffer population-level effects because toxicity-induced population decline diminishes competition.
  • However, effects depend on the details of population regulation.
  • Example (Schipper et al., 2013): Peregrine falcon exposed to DDE and PBDEs—equilibrium population size declined, probability of finding a territory increased, but the number of non-breeding birds shifting to breeding became limiting, resulting in a strong decrease in the equilibrium number of breeders.
  • Don't confuse: Buffering (competition relief) vs regulation details (breeding stage transitions, territory availability)—the net effect depends on which demographic process is limiting.

🔬 Mechanistic effect models (MEMs)

🧩 What MEMs add

To enhance the potential for application of population models in risk assessment, more ecological details of the species under consideration must be included, e.g. effects of dispersal, abiotic factors, predators and parasites, landscape structure and many more.

  • By including such details, a model becomes more realistic and can make more precise predictions on the effects of toxic exposures.
  • A further step: track the physiology and ecology of each individual in the population.
  • Example approach: Dynamic energy budget (DEB) modelling (Kooijman et al., 2009).

🎯 MEMs in regulatory risk assessment

  • MEMs allow a causal link between:
    • The protection goal (e.g., viable population),
    • A scenario of exposure to toxicants, and
    • The adverse population effects generated by model output.
  • The European Food Safety Authority (EFSA) in 2014 issued detailed guidelines on developing such models and adjusting them for risk assessment of plant protection products.
  • Until recently, population models were rarely used in regulatory risk assessment due to a lack of connection between model output and risk assessment needs (Schmolke et al., 2010).

| Model type | Key features | Limitations | Use in risk assessment |
| --- | --- | --- | --- |
| Exponential | Unlimited resources; constant r | Rare in nature; applies early in colonization | r as measure of population performance; life-table response experiments |
| Logistic | Density-dependence; carrying capacity K | K hard to measure in field; fluctuations obscure equilibrium | Limited practical application; buffering depends on regulation details |
| Mechanistic (MEMs) | Ecological details (dispersal, predators, individual physiology) | More complex; requires more data | Causal link to protection goals; EFSA guidelines for plant protection products |

Metapopulations

5.6. Metapopulations

🧭 Overview

🧠 One-sentence thesis

Metapopulation dynamics can amplify chemical risks beyond contaminated sites by turning local populations into sinks that drain neighboring populations, or can mitigate local impacts through immigration, depending on migration patterns and the extent of chemical stress.

📌 Key points (3–5)

  • What metapopulations are: spatially separated populations of the same species that interact through migration between habitat patches.
  • How chemicals affect metapopulations: contaminated sites can act as "sinks" that draw organisms from neighboring populations, spreading impacts to non-contaminated areas, or migration can help "recover" affected populations.
  • What determines migration: distances between populations, habitat quality between patches (including "stepping stones"), and the species' dispersal potential.
  • Common confusion: metapopulation effects can work both ways—chemicals may harm populations beyond the contaminated site or migration may buffer local impacts, depending on stress duration and immigration capacity.
  • Why it matters: chemicals can reduce the total carrying capacity and growth rate of the entire metapopulation, even when affecting only some patches or the areas between them.

🗺️ What metapopulations are

🗺️ Definition and structure

Metapopulation: a set of spatially separated populations which interact to a certain extent.

  • A population is a group of organisms from the same species living in a specific geographical area, interacting and breeding with each other.
  • A metapopulation is one level higher: multiple populations separated in space but connected by migration.
  • Individual populations occupy habitat patches that are more or less favorable, separated by less favorable areas.
  • Some good habitats may be unoccupied (populations not yet established) or have gone locally extinct.

🧭 What controls migration between populations

Three factors determine exchange between populations:

| Factor | How it affects migration |
| --- | --- |
| Distances | Closer populations exchange more organisms |
| Habitat quality between patches | "Stepping stones" (small areas where organisms survive temporarily but cannot support a local population) facilitate movement |
| Dispersal potential | Species-specific ability to move between patches |

🔄 How chemicals affect metapopulations

🪤 Contaminated sites as sinks

When a chemical affects survival in a local population:

  • Local population densities decline.
  • This increases immigration from neighboring populations (more space available).
  • Emigration from the contaminated site decreases (fewer organisms to leave).
  • Net result: organisms flow into the contaminated site.

Key condition: This happens when organisms do not sense the contaminants or when contaminants do not alter perceived habitat quality.

Consequence: If the immigration rate is too high for the source population to replace leaving organisms, population densities in neighboring non-contaminated populations may decline.

Contaminated sites act as a sink for other populations within the metapopulation, so chemicals may have a much broader impact than just local.

Example: A contaminated patch continuously draws organisms from a nearby healthy population; the healthy population cannot reproduce fast enough to replace emigrants, so it also declines even though it is not contaminated.

🩹 Migration as recovery mechanism

Metapopulation dynamics may also mitigate local chemical stress:

  • When the local population is relatively small, or
  • When chemical stress is not chronic (temporary exposure),
  • Influx of organisms from neighboring populations can recover population densities to pre-stress levels.

What determines recovery success:

  • The extent and duration of the chemical impact on the local population.
  • The capacity of other populations to replenish the loss.

Don't confuse: Migration can either spread harm (sink effect) or buffer harm (recovery effect), depending on the balance between local stress and immigration capacity.

📉 Effects on metapopulation carrying capacity and growth

📉 The Levins model framework

The excerpt describes a modeling approach (Levins, late 1960s) to illustrate how chemicals affect the entire metapopulation.

Key variables:

  • F: fraction of patches occupied by local populations.
  • (1 - F): fraction of patches not occupied.
  • e: average chance of extinction per day (day⁻¹).
  • c: chance per day that a non-occupied patch becomes populated from occupied patches (day⁻¹).

Daily change in occupied patches:

  • New occupations: c × F × (1 - F) (occupied patches colonizing empty patches).
  • Extinctions: e × F (fraction of patches going extinct).

Carrying capacity (CC) and growth rate (GR) of the metapopulation:

  • CC depends on the ratio e/c: if extinction risk (e) increases or colonization chance (c) decreases, CC decreases (because e/c increases).
  • GR also decreases and may even go below zero.
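
A minimal sketch of this model, simulating the occupied fraction F over time and showing how a chemical-induced increase in the extinction rate e lowers (or eliminates) the equilibrium occupancy 1 − e/c; all rates are hypothetical illustration values:

```python
# Minimal sketch of the Levins metapopulation model:
#   dF/dt = c * F * (1 - F) - e * F
# The equilibrium fraction of occupied patches is F* = 1 - e/c.

def simulate_levins(c, e, f0=0.5, days=2000, dt=0.1):
    """Forward-Euler simulation of the occupied fraction F."""
    f = f0
    for _ in range(int(days / dt)):
        f += (c * f * (1 - f) - e * f) * dt
    return f

baseline = simulate_levins(c=0.05, e=0.01)   # F* = 1 - 0.01/0.05 = 0.8
stressed = simulate_levins(c=0.05, e=0.03)   # chemical raises extinction risk: F* = 0.4
collapse = simulate_levins(c=0.05, e=0.06)   # e > c: the metapopulation goes extinct

print(f"baseline occupancy : {baseline:.2f}")
print(f"with higher e      : {stressed:.2f}")
print(f"e exceeds c        : {collapse:.2f}")
```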

🧪 How chemicals change these parameters

Chemicals can:

| Chemical effect | Model parameter | Result |
| --- | --- | --- |
| Increase extinction risk | ↑ e | ↓ CC, ↓ GR |
| Decrease establishment in new patches | ↓ c | ↓ CC, ↓ GR |

Limitation of the basic model: It uses average coefficients, not site-specific parameters. More complex recent models allow local-population-specific parameters and include stochasticity (randomness), increasing environmental relevance.

🌉 Chemicals in areas between patches

Chemicals may also affect the habitat between patches (not just the patches themselves):

  • This affects the potential of organisms to migrate between patches.
  • Migration potential decreases → chances of repopulating non-occupied patches decrease (i.e., c decreases).
  • Result: both CC and GR decline.

In a metapopulation setting, chemicals even in a non-preferred habitat may affect long-term metapopulation dynamics.

Example: A pesticide sprayed in a corridor between two habitat patches makes it harder for organisms to move between them, reducing colonization rates and lowering the overall metapopulation carrying capacity.

🔗 Relevance for environmental risk assessment

🔗 Why metapopulation dynamics matter

  • Chemicals may affect species at levels higher than the local population, including at non-contaminated sites.
  • Risk assessment must consider:
    • Whether contaminated sites act as sinks, spreading impacts.
    • Whether migration can buffer local impacts.
    • Effects on total metapopulation carrying capacity and growth rate.
    • Impacts on migration corridors and stepping stones, not just habitat patches.

🧪 Connection to mechanistic effect models

The excerpt mentions that these types of models are called mechanistic effect models (MEMs):

  • They link the protection goal, exposure scenarios, and adverse population effects.
  • The European Food Safety Authority (EFSA) issued guidelines in 2014 on developing such models for risk assessment of plant protection products.

Community ecotoxicology

5.7. Community ecotoxicology

🧭 Overview

🧠 One-sentence thesis

Community ecotoxicology reveals that most effects of toxicants on natural ecosystems are indirect—mediated through species interactions like predation and competition—rather than direct toxic effects, making higher-level testing essential for realistic environmental protection.

📌 Key points (3–5)

  • Why study communities: Environmental protection targets populations, communities, and ecosystems, not individual organisms, so community-level research has higher ecological and societal relevance despite greater complexity and cost.
  • What makes community ecotoxicology unique: Toxicants affect species interactions (predator-prey, competition, etc.), causing indirect effects that dominate over direct toxic effects—estimated to be the majority of effects at this level.
  • Arch-shaped dose-response paradox: At low toxicant concentrations, less-sensitive species may increase in abundance due to indirect benefits (e.g., predator removal), but decline at higher concentrations when direct toxicity overrides indirect benefits.
  • Common confusion—direct vs. indirect effects: A species declining after toxicant application shows a direct effect; a species increasing after application (then declining later) shows an indirect effect driven by changes in predators, competitors, or food availability.
  • Analysis challenges: Community experiments generate overwhelming data from dozens to hundreds of species, requiring multivariate analysis, ecological indices, or effect-class categorization rather than simple univariate approaches.

🎯 Why community-level ecotoxicology matters

🎯 Scaling up increases ecological relevance

  • Moving from molecular → cells → organs → individuals → populations → communities → ecosystems increases ecological and societal relevance of data (Figure 1 in excerpt).
  • Trade-off: Higher levels bring greater complexity, lower reproducibility, longer time requirements, and higher costs.
  • The gap: Community and ecosystem effects are understudied despite being the actual targets of environmental protection.

🔍 Field effects are hard to trace

  • When effects are observed in the field, linking them to specific chemicals and identifying drivers is difficult.
  • Community-level experiments bridge the gap between controlled lab studies and real-world complexity.

🧬 Core concept: indirect effects dominate

🧬 Definition of community ecotoxicology

Community ecotoxicology: the study of the effects of toxicants on patterns of species abundance, diversity, community composition, and species interactions.

  • Builds on community ecology, which studies assemblages of interacting populations within a particular area or habitat.
  • Key difference from lower levels: Species interactions are unique to community and ecosystem levels and cause most observed effects.

🔗 Six pathways for indirect effects

Indirect effects are exerted via:

  • Predator-prey relationships
  • Consumer-producer relationships
  • Competition between species
  • Parasite-host relationships
  • Symbiosis
  • Biotic environment

📈 Example: Culicidae (mosquito larvae) increase after fungicide

  • Study: Roessink et al. (2006) applied triphenyltin acetate (TPT) fungicide to outdoor mesocosms.
  • Direct effects: Several species showed dose-related declines after application, then gradual recovery as concentrations decreased.
  • Indirect effects (Culicidae): Abundance increased shortly after application, then gradually declined.
  • Mechanism: Predators and competitors of Culicidae were more sensitive to TPT → diminished predation and competition + higher food availability → temporary population boom → as predators/competitors recovered over time, Culicidae declined again.

Don't confuse: The same toxicant causes opposite patterns depending on whether the species is directly affected or indirectly benefiting from effects on other species.

🏔️ The arch-shaped response phenomenon

🏔️ When indirect benefits meet direct toxicity

  • At low concentrations: Indirect positive effects dominate (e.g., predator removal) → abundance increases.
  • At intermediate concentrations: Positive indirect effects balance adverse direct effects → stable abundance.
  • At high concentrations: Direct toxic effects overrule indirect benefits → abundance declines.
  • Result: An "arch-shaped" relationship between abundance and toxicant concentration.

🦐 Example: Daphnia and lambda-cyhalothrin insecticide

  • Study: Roessink et al. (2005) tested lambda-cyhalothrin in mesocosms.
  • Species: Daphnia (prey) and Chaoborus (phantom midge, predator; more sensitive).
  • Pattern: Low concentrations → Chaoborus declines → Daphnia released from predation → Daphnia increase. Higher concentrations → direct toxicity to Daphnia → Daphnia decline.
  • Key insight: These combined dose-dependent direct and indirect effects are inherent to community-level experiments but are compound- and species-interaction-specific.

🔬 Studying communities: methods and challenges

🔬 Experimental systems (cosms)

  • Mesocosms: Artificial ponds, ditches, streams (often outdoor).
  • Scale: Experiments must be scaled up to accommodate multiple interacting species.
  • Sampling: Requires meticulous schemes, artificial substrates, emergence traps for aquatic invertebrates with terrestrial adult stages.
  • Alternative: Scale down communities (e.g., algae and bacteria on coin-sized substrates) so the experimental unit is an entire community.

📊 Three approaches to data analysis

| Approach | Description | When appropriate |
| --- | --- | --- |
| Univariate analysis | Focus on individual species responses | Single-species experiments; falls short for communities with 100+ species |
| Multivariate analysis | Analyze patterns across many species simultaneously | Most appropriate for community data; similar to field-study approaches |
| Ecological indices & effect classes | Use species richness, diversity indices, or categorize responses into effect classes (Figure 4) | Simplifies complex data; useful for communication |

📊 Species sensitivity distributions (SSDs)

  • Compare sensitivity under semi-field conditions vs. laboratory conditions.
  • Helps determine if lab-derived toxicity data are representative of field responses.

⚠️ Challenge: natural variability vs. toxic signal

  • Each replicate cosm develops independently, even controls.
  • By experiment end (often several months), control replicates may differ from each other and from treatments.
  • The challenge: Separate the toxic signal from natural variability in community development.

⚠️ Challenge: defining recovery

  • Communities may recover but develop in a different direction than controls.
  • Question: Is this true recovery?
  • Factors affecting recovery:
    • Dispersal capacity of species
    • Distance to nearby populations within a metapopulation
    • Habitat heterogeneity (affects toxicant distribution and provides shelter)
    • Community state and timing of exposure

⏱️ Timing matters: lifecycle stage and community dynamics

  • Communities exhibit temporal dynamics in species composition and ecosystem processes.
  • Species sensitivity varies by life stage: Young individuals can be orders of magnitude more sensitive than adults or late-instar larvae.
  • Population growth phase matters: Exponentially growing populations recover much faster than populations at carrying capacity.
  • Implication: Timing of toxicant exposure seriously affects both the extent of adverse effects and recovery potential.

🌍 From communities to ecosystems and landscapes

🌍 Ecosystem-level characteristics

When scaling up from community to ecosystem level, unique characteristics emerge:

  • Structural: Biodiversity
  • Functional (ecosystem processes): Primary production, ecosystem respiration, nutrient cycling, decomposition

⚖️ Structure vs. function bias

  • Good environmental quality depends on both ecosystem structure and functioning.
  • Current bias: Science and policy focus more on ecosystem structure than function.

🗺️ Beyond ecosystems

  • Landscape ecotoxicology: Covers levels of biological organization higher than ecosystems.
  • Ecosystem services: A practical concept for assessing broader-scale impacts.

🧪 Community ecotoxicology in practice

🧪 The biological hierarchy trade-off

| Level | Speed of response | Ecological realism | Causality/tractability | Typical setting |
| --- | --- | --- | --- | --- |
| Lower (subcellular → individual) | Faster | Lower | Higher (easier to link response to dose) | Standard laboratory |
| Higher (population → ecosystem) | Slower | Higher (species interactions, realistic exposure) | Lower (harder to trace causality) | Cosms, field |

🎯 Fitness implications by level

  • Gene to individual affected: Warning for ecosystem health
  • Individual to population affected (e.g., reproductive output): Incident
  • Community structure/function affected: Disaster for ecosystem health

🏞️ What are cosm studies?

  • Bridge between laboratory and natural world.
  • Microcosms: 10⁻³ to 10 m³
  • Mesocosms: 10 to 10⁴ m³ or larger (approaching whole natural systems)
  • Key features: Combine ecological realism (basic ecosystem components) with controlled, replicable treatments and accessible measurement of physicochemical, biological, and toxicological parameters.

🏞️ Advantages of cosms

  • Establish food webs
  • Assess direct and indirect effects simultaneously
  • Evaluate contamination effects on multiple trophic and taxonomic levels in ecologically relevant context
  • Study the parts (individuals, populations, communities) and the whole (ecosystems) at the same time
  • Allow manipulation of multiple environmental factors with replication

📏 Size selection guidance (OECD, 2004)

  • Smaller systems: Suitable for short-term studies (3–6 months) and smaller organisms (e.g., plankton)
  • Larger systems: Appropriate for long-term studies (≥6 months) and larger organisms

🌊 Historical example: Experimental Lakes Area (ELA)

  • Located in Ontario, Canada
  • 46 natural, relatively undisturbed lakes designated for ecosystem-level research
  • Significant contributions since early 1970s
  • Studies on nutrients, synthetic estrogens, pesticides, etc.

📋 Typical study parameters (OECD, 2004)

Treatment regime:

  • Dosing regime, duration, frequency, loading rates
  • Meteorological records (outdoor cosms)
  • Physicochemical water parameters (temperature, oxygen, pH, etc.)

Biological levels (sampling and taxonomic identification):

  • Phytoplankton: chlorophyll-a, cell density, dominant taxa, richness, biomass
  • Periphyton: chlorophyll-a, cell density, species, richness, biomass
  • Zooplankton: density, dominant orders (Cladocera, Rotifera, Copepoda), species, richness, biomass
  • Macrophytes: biomass, species composition, % surface covering
  • Emergent insects: emergence rate, dominant taxa, richness, biomass, life stages
  • Benthic macroinvertebrates: density, richness, dominant species, life stages
  • Fish: biomass, weights, lengths, condition index, behavior, pathology, fecundity

🔍 Two real-world cosm examples

🔍 Example 1: Joint interactions (Barmentlo et al., 2018)

  • System: 65 L outdoor mesocosm ponds, 35-day exposure
  • Stressors: Three agrochemicals (imidacloprid, terbuthylazine, NPK fertilizers) at environmentally relevant concentrations
  • Design: Full factorial (all combinations tested)
  • Food chains: Detritivorous and algal-driven chains inoculated

Key findings:

  • Binary mixtures: Species responses predictable by concentration addition
  • Trinary mixtures: Much more variable and counterintuitive
  • Mayfly (Cloeon dipterum): Extremely low recovery (3.6% of control) in some mixtures, yet recovery in the trinary mixture did not deviate from the control, i.e., it was higher than expected
  • Zooplankton (Daphnia magna, Cyclops sp.): Abundance positively affected by nutrients as expected, but pesticide addition did not lower recovery (unexpected)
  • Conclusion: Unexpected results only detectable when testing multiple species and multiple stressors; cannot be found in single-species lab tests

🔍 Example 2: Indirect cascading effects (Van den Brink et al., 2009)

  • System: Indoor freshwater plankton-dominated microcosms
  • Stressors: Chronic applications of atrazine (herbicide) + lindane (insecticide) mixture
  • Mechanisms affected: Both top-down and bottom-up regulation

Observed cascade:

  • Lindane → decline of sensitive detritivorous and herbivorous arthropods → allowed insensitive competitors (worms, rotifers, snails) to increase
  • Atrazine → inhibited algal growth (photosynthesis) → affected herbivores → lower dissolved oxygen and pH, increased alkalinity, nitrogen, and electrical conductivity

Example: The ecological effect chain shows how one toxicant affects primary producers (bottom-up) while another affects consumers (top-down), creating complex cascading effects through the food web.

⚖️ Realism vs. replicability

⚖️ The conceptual conflict

  • Replicability may require simplification of the system
  • Realism requires complexity approaching natural ecosystems
  • Resolution: The crucial point is not to maximize realism, but to ensure ecologically relevant information can be obtained

🔑 Key to ecological representativeness

  • Preserve key features at both structural and functional levels within cosms
  • These features ensure ecological representativeness even in simplified systems

🌐 Extrapolation considerations

  • Extrapolation from small experimental systems to the real world is generally more problematic than from larger systems
  • Larger systems allow study of more complex interactions experimentally
  • Caquet et al. (2000) claim: Mesocosms refine classical ecotoxicological risk assessment by providing conditions for better understanding of environmentally relevant effects of chemicals
26

Structure versus function incl. ecosystem services

5.8. Structure versus function incl. ecosystem services

🧭 Overview

🧠 One-sentence thesis

Ecosystem functioning (processes) is generally less sensitive to contaminants than ecosystem structure (species composition) because multiple species can perform the same function, a phenomenon called functional redundancy.

📌 Key points (3–5)

  • Two sides of biodiversity: structural diversity (which species are present) and functional diversity (which processes those species perform, such as decomposition or predation).
  • Functional redundancy: when a sensitive species disappears due to contamination, less-sensitive species can take over its role, so the process continues even though species composition has changed.
  • Structure is more sensitive than function: effects on species (structural diversity) appear at lower contaminant concentrations and sooner than effects on ecosystem processes (functional diversity).
  • Common confusion: redundancy does not mean all species are equally efficient—redundant species often perform the same function less effectively, so total ecosystem performance may still decline.
  • Why it matters: understanding the structure–function relationship helps interpret whether contamination affects only species composition or also critical ecosystem services.

🌿 Biodiversity at three levels

🌿 Genetic, species, and landscape diversity

In ecology, biodiversity describes the richness of natural life at three levels: genetic diversity, species diversity (the most well-known), and landscape diversity.

  • Species diversity is the most commonly studied level in environmental toxicology.
  • Genetic diversity matters when assessing subspecies or local populations that are more or less sensitive (e.g., populations in mining areas with persistent pollution).
  • Landscape diversity is a newer focus, looking at total contaminant load across a landscape (e.g., pesticides in agricultural areas) or interactions between ecosystems (e.g., a lake surrounded by forest and grassland).

📊 Shannon–Wiener index

  • The most commonly used biodiversity index.
  • Formula (in words): the sum of (proportion of individuals of species i times the natural logarithm of that proportion), with a negative sign in front (see the symbolic form after this list).
  • Higher index = more species and/or more even distribution of individuals across species.
  • Lower index = a few very dominant species.
  • Environmental pollution tends to increase dominance: a few species are favored, many become rare.
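
The index written out symbolically (standard notation; the symbols are spelled out here because the excerpt gives the formula only in words):

```latex
H' = -\sum_{i=1}^{S} p_i \,\ln p_i
% p_i : proportion of individuals belonging to species i
% S   : total number of species in the sample
```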

🔗 Structural versus functional diversity

🔗 What structural diversity means

  • Structural diversity = the variety of species types present in an ecosystem.
  • Traditional biodiversity indices (like Shannon–Wiener) focus on structure: how many species and how many individuals per species.
  • The excerpt notes that interactions between species (food webs, food chains) are not captured by indices like Shannon–Wiener.

⚙️ What functional diversity means

Biodiversity not only has a structural side: the various types of species, but also a functional one: which species are involved in the execution of which process.

  • Functional diversity = the variety of processes or roles that species perform.
  • Examples of functions: photosynthesis, decomposition, predation, grazing.
  • Functions are often related to:
    • Nutritional ecology (what species eat).
    • Dealings with specific nutrients (carbon, nitrogen).
    • Behavior (e.g., litter decomposition).
  • Example: In a soil food web (Figure 1), nematodes are classified by feeding mode—phytophagous (feeding on plants), fungivorous (eating fungi), predaceous (eating other nematodes).

🌐 Food webs and ecosystem services

  • Detailed food web descriptions link trophic groups and describe energy flows, which determine the intensity and stability of interactions.
  • The concept of ecosystem services has emerged to recognize functional aspects at the ecosystem level (though it does not trace back to individual species).
  • Trait-based approaches attempt to group species by traits linked to exposure, sensitivity, and functioning, potentially bridging structural and functional biodiversity.

🔄 Functional redundancy

🔄 Core concept

Functional redundancy postulates that not all species that can perform a specific process are always active, and thus necessary, in a specific situation. Consequently some of them are "superfluous" or redundant.

  • When a sensitive species disappears due to contamination, a less-sensitive (redundant) species may take over its role.
  • Result: the process (function) continues even though species composition (structure) has changed.
  • Key implication: effects on structural diversity appear at lower contaminant concentrations than effects on functional diversity.

⚠️ Limitations and caveats

  • Redundancy in situation A does not guarantee redundancy in situation B with different environmental conditions and species composition.
  • Redundant species are often less efficient at performing the function than the original species.
  • When functional biodiversity is affected, structural biodiversity is already severely damaged: most likely several important species have gone extinct or are strongly inhibited.

📈 Measuring redundancy

  • The degree of functional redundancy is not easily measured.
  • One approach: plot ecosystem process rate against species diversity (a small curve-fitting sketch follows after this list).
    • A curve that levels off toward a ceiling indicates clear redundancy (top left graph in Figure 2).
    • Other possible shapes: linear, discontinuous (if keystone species are lost), or other forms.
  • The relationship can take many forms depending on the ecosystem and the role of keystone species.
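
A minimal sketch of that plotting approach, assuming hypothetical data and a Michaelis-Menten-type saturating curve; a fit that levels off well below the observed species richness points to redundancy, while a roughly linear fit does not.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: ecosystem process rate measured at increasing species richness
richness = np.array([1, 2, 4, 6, 8, 12, 16, 24, 32])
process_rate = np.array([2.1, 3.8, 6.0, 7.1, 7.8, 8.3, 8.5, 8.6, 8.7])

def saturating(s, vmax, k):
    """Michaelis-Menten-type curve: the rate levels off toward vmax as richness s grows."""
    return vmax * s / (k + s)

(vmax, k), _ = curve_fit(saturating, richness, process_rate, p0=[10.0, 5.0])

# If the curve is already close to vmax at a fraction of the observed richness,
# the remaining species add little to the process rate: a sign of functional redundancy.
print(f"Plateau rate ~{vmax:.1f}; half of it is reached at ~{k:.1f} species")
```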

🧪 Examples of structure–function relationships

🧪 Copper contamination and soil fungi (Tyler 1984; Rühling et al. 1984)

  • Field observation: Near a copper manufacturing plant in Gusum, Sweden, specific enzyme functions and general processes (like mineralization) decreased faster than total fungal biomass with increasing copper contamination.
  • Experimental explanation: Various microfungi species showed different sensitivities to copper. Most species declined to zero abundance at high copper levels, but two species increased in abundance, so total biomass remained roughly constant (Figure 3a).
  • Interpretation: The redundant species maintained biomass but were less efficient at performing breakdown processes, so functional diversity declined more slowly than structural diversity but still declined.

🪱 Earthworm functional groups

Three ecological types of earthworms, classified by behavior and role:

| Type | Behavior | Function |
| --- | --- | --- |
| Anecics | Deep burrowing, move between deep layers and surface | Consume leaf litter, transport organic matter downward |
| Endogeics | Active in deeper mineral and humus layers | Consume fragmented litter and humus, form humus |
| Epigeics | Active in upper litter layer | Consume leaf litter, fragment it |
  • Effects of contamination:
    • Loss of anecics → litter accumulates at the soil surface.
    • Loss of epigeics → reduced litter fragmentation.
    • Loss of endogeics → reduced humus formation.
    • General earthworm community loss → reduced soil aeration, soil compaction.
  • Sensitivity varies: Different earthworm groups have different sensitivities to different pesticides; no general ranking exists.
  • Don't confuse: There is no universal relationship between a species' function (e.g., surface-active) and its sensitivity to all contaminants.

🦠 Heavy metal resistance and microbial degradation (Doelman et al. 1994)

  • Researchers isolated fungi and bacteria from contaminated and clean areas, tested sensitivity to zinc and cadmium, and divided them into sensitive and resistant groups.
  • They then measured each group's ability to degrade and mineralize a series of organic compounds.
  • Result: The sensitive group was much more effective at degrading a variety of organic compounds; the resistant group was far less effective (Figure 4).
  • Interpretation: Although functional redundancy may buffer some contaminant effects, the overall performance of the community generally decreases upon exposure.
  • This example also shows that genetic diversity (numbers of sensitive vs. resistant species) plays a role in the functional stability and sustainability of microbial degradation processes.

🌍 Implications for ecosystem services

🌍 Why study ecosystem services in relation to contamination

  • The excerpt concludes that ecosystem services are worth studying in relation to contamination.
  • Understanding the structure–function relationship helps assess whether contamination affects only species composition or also critical processes that ecosystems provide.
  • Even when functional redundancy buffers immediate process loss, the long-term performance and stability of the ecosystem may decline because redundant species are less efficient.
27

Landscape ecotoxicology

5.9. Landscape ecotoxicology

🧭 Overview

🧠 One-sentence thesis

This section is currently in preparation and does not yet contain substantive content.

📌 Key points (3–5)

  • The excerpt consists only of a heading and a note stating "In preparation."
  • No definitions, concepts, mechanisms, or conclusions are provided.
  • The section appears between a discussion of functional diversity and a chapter on risk assessment and regulation.

📭 Content status

📭 Section under development

The excerpt explicitly states:

In preparation

  • No explanatory text, examples, or references are included.
  • The section title suggests it would cover ecotoxicology at the landscape scale, but no details are available.
  • Readers should refer to other completed sections or wait for future updates.
28

Introduction: The Essence of Risk Assessment

6.1. Introduction: the essence of risk assessment

🧭 Overview

🧠 One-sentence thesis

Chemical risk assessment systematically estimates the probability of adverse effects from chemical exposure through a four-step process that combines exposure and effect data to inform management decisions.

📌 Key points (3–5)

  • Risk vs. hazard distinction: Risk depends on actual exposure levels, while hazard is the inherent capacity to cause harm regardless of exposure.
  • Four-step process: Problem definition → exposure assessment → effect assessment → risk characterization, often repeated through multiple tiers.
  • Tiering principle: Start with simple, conservative assessments; gather more data only for chemicals showing potential risk in early tiers.
  • Common confusion: Risk assessment and risk management are often depicted as sequential, but they overlap—management decisions (e.g., defining protection goals) are needed before assessment begins.
  • Risk indicator interpretation: Comparing actual exposure to a reference level yields a risk indicator; values above 1.0 suggest potential risk requiring either more data or mitigation.

🔍 Core terminology and distinctions

🧪 What is risk?

Risk: "the probability of an adverse effect after exposure to a chemical"

  • This definition focuses on quantifiable, "objective" aspects measurable by natural scientists and engineers.
  • It deliberately excludes subjective dimensions like public perception and knowledge gaps (studied by social scientists).
  • Why this matters: Risk managers may act on perceived risks even when scientific assessments show negligible health risks.

⚠️ Risk vs. hazard

| Concept | Definition | Depends on exposure? | Example |
| --- | --- | --- | --- |
| Hazard | Inherent capacity to cause adverse effects | No | Labeling a substance "carcinogenic" based on lab tests |
| Risk | Probability of adverse effect after exposure | Yes | Actual cancer risk depends on how much exposure occurs |
  • Don't confuse: A highly hazardous substance poses low risk if exposure is negligible; a less hazardous substance can pose high risk if exposure is high.

👥 Risk assessment vs. risk management

  • Risk assessment: Performed by natural scientists and engineers ("risk assessors"); describes risks using scientific methods.
  • Risk management: Performed by policy makers ("risk managers"); decides whether to accept or reduce risks, considering socio-economic implications.
  • Key insight: These are not strictly sequential—management decisions (e.g., what to protect, at what level) must be made before assessment can proceed.

🎯 Solution-focused risk assessment

  • Instead of assessing current risks first, then seeking solutions, this approach maps potential solutions and assesses their associated risks concurrently.
  • The scenario with maximum feasible risk reduction becomes the preferred management option.

🪜 The four-step process and tiering

📋 Step 1: Problem definition

Questions answered in this phase:

  • What is the problem and which chemical(s) are involved?
  • What should be protected (general population, sensitive groups, ecosystems, specific species) and at what level?
  • What information already exists?
  • What resources and timeframe are available?
  • What exposure routes and timeframes (acute vs. chronic) will be considered?
  • What risk metric will express the risk?
  • How will uncertainties be addressed?

Who participates: Not just risk assessors—should involve risk managers and stakeholders collaboratively.

Important transparency issue: Stakeholder concerns may be broad or poorly articulated; risk assessors can only assess aspects for which methods exist. Being transparent about what will and won't be assessed avoids later disappointment.

Output: A risk assessment plan detailing how risks will be assessed given constraints.

📊 Step 2: Exposure assessment

Exposure scenario: Describes the situation being assessed (e.g., soil organisms at a contaminated site, or a hypothetical future use scenario for a new pesticide).

  • Scenarios are often conservative, meaning exposure estimates will be higher than expected average exposure.

Exposure metrics vary by protection target:

  • Ecosystems: Medium concentration (water, sediment, soil, or air concentration).
  • Humans: Route-specific metrics—air concentration for inhalation, average daily intake for oral exposure, skin uptake for dermal exposure; can be combined into internal dose metrics like Area Under the Curve (AUC) in blood.
  • Wildlife/farm animals: Similar to human metrics.

Complexity note: Route-specific exposure (especially oral) is more complex than simple medium concentration because it requires quantifying both substance concentration in contact medium and contact intensity (e.g., how much water consumed per day).

🧬 Step 3: Effect assessment

Goal: Estimate a reference exposure level—an exposure level expected to cause no or very limited adverse effects.

Common reference levels:

  • Ecological: Predicted No Effect Concentration (PNEC)—the concentration at which no adverse ecosystem-level effects are expected.
  • Human: Acceptable Daily Intake (ADI), Reference Dose (RfD), Derived No Effect Level (DNEL), Point of Departure (PoD), Virtually Safe Dose (VSD)—each used in specific contexts (exposure route, regulatory domain, substance type, assessment method).

🎯 Protection goals: from abstract to operational

  1. Abstract protection goals (set by politicians): e.g., "protect the entire ecosystem and all individuals of the human population."
  2. Problem: Abstract goals don't always match assessment methods. Example: 100% protection from genotoxic carcinogens would require banning them entirely (reference level = 0).
  3. Operationalization (dialogue between experts and risk managers): Translate abstract goals into practical terms matching assessment methods.
    • Example for genotoxic carcinogens: "one in a million lifetime risk estimated with a conservative dose-response model."
    • Example for ecosystems: "concentration at which only 5% of species exceed their No Observed Effect Concentration (NOEC)."

🔬 Deriving reference levels

Process:

  1. Use (eco)toxicity test data (lab animals for human levels; primary consumers, invertebrates, vertebrates for ecological levels).
  2. Plot exposure level (x-axis) vs. effect/response level (y-axis).
  3. Fit a mathematical dose-response relationship to the data.
  4. Derive an exposure level corresponding to a predefined effect level.
  5. Extrapolate to the ultimate protection goal by dividing by assessment/safety factors that account for:
    • Differences in sensitivity between lab and field conditions
    • Differences between tested species and species to be protected
    • Variability in sensitivity within human population or ecosystem
    • Uncertainties in the assessment

Important note: Assessment factors are not purely scientific—they also account for uncertainties and ensure the reference level is conservative.
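
A minimal numeric sketch of steps 2 to 5 above, assuming a two-parameter log-logistic dose-response model, an EC10 as the predefined effect level, and an illustrative assessment factor of 100; the test data are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical chronic test: exposure concentration (mg/L) vs. fraction of organisms affected
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
effect = np.array([0.00, 0.02, 0.05, 0.20, 0.55, 0.85, 0.98])

def log_logistic(c, ec50, slope):
    """Two-parameter log-logistic dose-response curve."""
    return 1.0 / (1.0 + (ec50 / c) ** slope)

(ec50, slope), _ = curve_fit(log_logistic, conc, effect, p0=[1.0, 1.5])

# Exposure level for a predefined effect level (here 10%, i.e. the EC10)
ec10 = ec50 * (0.10 / 0.90) ** (1.0 / slope)

# Extrapolate to the protection goal by dividing by an assessment factor (illustrative value)
assessment_factor = 100.0
reference_level = ec10 / assessment_factor
print(f"EC50 = {ec50:.2f} mg/L, EC10 = {ec10:.2f} mg/L, reference level = {reference_level:.4f} mg/L")
```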

📈 Step 4: Risk characterization

Goal: Produce a risk estimate with associated uncertainties.

Risk indicator calculation:

  • Compare actual exposure level to reference level.
  • If reference level = maximum safe exposure, risk indicator should be < 1.0.
  • Risk indicator > 1.0 indicates potential risk (not certain risk, because conservative assumptions may have been made).

Two management actions when risk indicator > 1.0:

  1. If resources allow and assessment was conservative: gather additional data, perform higher-tier assessment.
  2. Consider mitigation options to reduce risk.

Margin-of-safety approach (alternative):

  • Reference level has not yet been extrapolated (no assessment factors applied).
  • Risk indicator reflects "margin of safety" between actual exposure and non-extrapolated reference level.
  • Margin-of-safety typically should be ≥100.
  • Key difference: Timing of addressing uncertainties in effect assessment.

🔄 The tiering principle

Tiering: Repetition of the four assessment steps, starting simple and conservative, then adding more data in subsequent tiers to reduce conservatism.

How it works:

  1. Tier 1: Simple, conservative assessment with limited data.
  2. If risk indicator > 1.0: Move to Tier 2 with more data and less conservative assumptions.
  3. Continue until acceptable risk is demonstrated or mitigation is needed.

Purpose: Focus time and resources on chemicals with potential risk; gather detailed data only when needed.
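
A minimal sketch of how the risk indicator and the tiering principle fit together, using hypothetical exposure and reference values; each higher tier stands in for "more data, less conservative assumptions".

```python
def risk_indicator(exposure, reference_level):
    """Risk indicator: actual or predicted exposure divided by the reference level."""
    return exposure / reference_level

# Hypothetical tiers: refined exposure estimates and reference levels at each step
tiers = [
    {"name": "Tier 1 (conservative screening)", "exposure": 0.8, "reference": 0.2},
    {"name": "Tier 2 (refined exposure)",       "exposure": 0.3, "reference": 0.2},
    {"name": "Tier 3 (refined effects)",        "exposure": 0.3, "reference": 0.4},
]

for tier in tiers:
    ri = risk_indicator(tier["exposure"], tier["reference"])
    print(f"{tier['name']}: risk indicator = {ri:.2f}")
    if ri < 1.0:
        print("  Acceptable risk demonstrated; no further tiers needed.")
        break
    print("  Potential risk: refine the assessment in the next tier or consider mitigation.")
```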

Debate on step order: Should exposure or effect assessment come first?

  • Effect-first argument: Effect information is independent of scenario and can guide exposure determination (e.g., toxicokinetics inform exposure duration).
  • Exposure-first argument: Assessing effects is expensive and unnecessary if exposure is negligible.
  • Current consensus: Determine order case-by-case; prefer parallel assessment with information exchange.

🌐 Retrospective vs. prospective assessment

🔙 Retrospective (diagnostic) risk assessment

  • Environment is already polluted.
  • Uses measured exposure levels.
  • Goal: Determine if current risk is acceptable and which substances contribute to it.

🔮 Prospective risk assessment

  • Environment is not yet polluted.
  • Uses predicted exposure levels (estimated emissions + dispersion models).
  • Goal: Determine if a projected activity will result in unacceptable risks.

Note: Even in polluted environments, predicted exposure may be preferred over measurements if measurements are too expensive and pollution sources are well-characterized.

🧩 The DPSIR framework and risk assessment

The excerpt illustrates risk assessment using the DPSIR chain (Drivers-Pressures-State-Impact-Response):

  • Reference levels are derived from protection goals (maximum acceptable impact level).
  • Actual exposure is measured or predicted from emissions and dispersion.
  • Risk indicator compares actual exposure to reference level.

⚠️ Criticisms and limitations

📉 Single-point limitation

Current practice: Only one point of the dose-response relationship is used (the reference level).

Criticism: This is suboptimal and wasteful because:

  • A risk indicator of 2.0 means exposure is twice the reference level, but doesn't indicate how many individuals/species are affected or effect intensity.
  • Using the full dose-response relationship would yield a better-informed risk estimate.

🧪 Substance-by-substance approach

Problem: Risk assessment is often performed for individual substances, but real-world exposure involves mixtures.

Challenge: Each mixture has unique composition and concentration ratios, making it difficult to determine reference levels.

Progress: Mixture toxicology is advancing with:

  • Whole mixture methods
  • Compound-based approaches
  • Effect-based methods: Assess risk based on toxicity measured in environmental samples rather than chemical concentration (assess impacts rather than state/pressures in DPSIR terms).
29

6.2. Ecosystem services and protection goals

6.2. Ecosystem services and protection goals

🧭 Overview

🧠 One-sentence thesis

The excerpt does not provide substantive content on ecosystem services and protection goals; it only states that this section is "in preparation."

📌 Key points (3–5)

  • The section titled "6.2. Ecosystem services and protection goals" is marked as "In preparation."
  • No definitions, explanations, or conceptual content are provided in the excerpt.
  • The surrounding context includes questions on risk assessment and a subsequent section on predictive risk assessment approaches.
  • The excerpt does not contain information about what ecosystem services are or how protection goals relate to them.

📭 Content status

📭 Section availability

The excerpt explicitly states:

6.2. Ecosystem services and protection goals
In preparation

  • This indicates the section has not yet been written or completed.
  • No substantive material is available for review or study at this time.

🔗 Contextual placement

  • The section appears between:
    • Questions on risk assessment (6.1. Question 1–4), which cover hazard-based vs. risk-based interventions, roles of risk assessors and managers, retrospective vs. prospective assessments, and prioritization of substances.
    • A section on predictive risk assessment approaches (6.3), which discusses environmental realistic scenarios for human and ecological exposure.
  • This placement suggests the missing section would likely bridge risk assessment concepts with practical exposure scenario modeling.

⚠️ Note for learners

⚠️ What is missing

  • Ecosystem services: No definition or examples provided in the excerpt.
  • Protection goals: No explanation of what they are, how they are set, or how they relate to ecosystem services.
  • Connection to risk assessment: The excerpt does not explain how protection goals inform or are informed by risk assessment processes discussed in surrounding sections.

Learners should refer to completed sections of the textbook or other resources for information on this topic.

30

Predictive risk assessment approaches and tools

6.3. Predictive risk assessment approaches and tools

🧭 Overview

🧠 One-sentence thesis

Predictive risk assessment uses exposure scenarios, reference levels (like PNECs), statistical models (like SSDs), and structure-based methods (like QSARs) to estimate chemical risks before or without extensive testing, enabling regulatory decisions for thousands of chemicals.

📌 Key points (3–5)

  • Exposure scenarios define the combination of environmental and application parameters needed to model realistic worst-case exposures (e.g., 90th percentile concentrations).
  • PNECs (Predicted No Effect Concentrations) are reference levels derived by dividing the lowest toxicity test result by an assessment factor to account for extrapolation uncertainty from lab to ecosystem.
  • Species Sensitivity Distributions (SSDs) use the statistical distribution of species sensitivities to derive protective benchmarks (HC₅) or estimate the fraction of species affected (PAF) at a given concentration.
  • QSARs (Quantitative Structure-Activity Relationships) predict toxicity from chemical properties (especially log K_ow for baseline toxicants) and require mode-of-action classification to avoid misestimating toxicity.
  • Common confusion: A PNEC is not a "no effect" level in the field—it is a calculated threshold below which unacceptable effects are not expected, derived by applying assessment factors to lab data; similarly, SSDs describe variation in sensitivity but do not explain why species differ.

🌊 Exposure scenarios for environmental risk assessment

🎯 What exposure scenarios are

An exposure scenario describes the combination of circumstances needed to estimate exposure by means of models.

  • For pesticides, scenarios combine abiotic parameters (soil type, water depth, climate) and agronomic parameters (crop, application method) to represent a realistic worst-case.
  • Example: a ditch with 30 cm minimum water depth alongside a clay-soil crop field, with pesticide exposure via spray drift and drainpipe leaching, modeled over 20 years of weather data.
  • The scenario feeds into models that output time-series of exposure concentrations.

📐 Exposure assessment goals

The excerpt emphasizes that "realistic worst-case" is too vague; exposure assessment goals must specify:

| Element | What it defines | Example |
| --- | --- | --- |
| E1: Ecotoxicologically Relevant Concentration (ERC) | Type of concentration | Freely dissolved pesticide in water |
| E2: Temporal dimension | Time window | Annual peak or time-weighted average |
| E3: Type of water body | Spatial category | Ditch, stream, or pond |
| E4: Spatial dimension | Physical constraint | Minimum water depth 30 cm |
| E5: Multi-year temporal population | Time series for one body | 20 peak concentrations over 20 years |
| E6: Spatial population | Set of water bodies | All ditches ≥30 cm alongside treated fields |
| E7: Percentile combination | Statistical target | 90th percentile from the spatial-temporal population |
  • The 90th percentile approach means that 90% of the concentrations in this population are lower than the derived value and 10% are higher (a small numeric sketch follows below).
  • Specification involves political choices (e.g., 30 cm vs. 10 cm water depth changes peak concentration threefold) as well as scientific data.
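
A minimal numeric sketch of the percentile step (elements E5 to E7 above): pool the annual peak concentrations of all simulated years and water bodies into one spatial-temporal population and take its 90th percentile. All numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical population: annual peak concentrations (µg/L) for 10 ditches over 20 simulated years
annual_peaks = rng.lognormal(mean=0.0, sigma=0.8, size=(10, 20))

# Pool the spatial-temporal population and take the 90th percentile (10% of peaks are higher)
p90 = np.percentile(annual_peaks.ravel(), 90)
print(f"90th percentile peak concentration: {p90:.2f} µg/L")
```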

🔗 Linking exposure and effect assessments

  • Both exposure and effect assessments are tiered: simple/conservative first tiers, more realistic higher tiers.
  • Specific Protection Goals (set by risk managers) define the overall protection level and guide both exposure and effect assessment goals.
  • The type of concentration from exposure (e.g., time-weighted average) must match the effect assessment (e.g., do not use time-weighted average for acute effects).
  • A newer approach integrates exposure and effects at the landscape level using spatially-explicit population models, requiring "environmental scenarios" that combine exposure and ecological parameters.

Don't confuse: Exposure scenarios (parameters for modeling exposure) vs. environmental scenarios (integrated parameters for both exposure and ecological effects/recovery).

🛡️ Reference levels for ecosystem protection

🧪 What a PNEC is

The PNEC (Predicted No Effect Concentration) is the concentration below which adverse effects on the ecosystem are not expected to occur.

  • PNECs are derived per compartment (water, soil, sediment, air) and apply to directly exposed organisms.
  • For bioaccumulative chemicals, separate PNECs for secondary poisoning (predatory birds/mammals) are derived.
  • The term "reference level" is generic; "quality standard" implies legal status (e.g., Water Framework Directive); "PNEC" is used in prospective chemical safety (e.g., REACH).

🔢 The assessment factor (AF) approach

Basic PNEC derivation uses single-species laboratory toxicity tests (algae, water fleas, fish) and extrapolates to the ecosystem level by assuming:

  • Protection of ecosystem structure ensures protection of ecosystem functioning.
  • Effects on ecosystem structure can be predicted from species sensitivity.

To account for extrapolation uncertainty, the lowest test result is divided by an assessment factor (AF):

| Available data | Assessment factor |
| --- | --- |
| At least one short-term LC50/EC50 from three trophic levels (fish, invertebrates, algae) | 1000 |
| One long-term EC₁₀ or NOEC (fish or Daphnia) | 100 |
| Two long-term results from two trophic levels | 50 |
| Long-term results from ≥3 species (three trophic levels) | 10 |
  • The AF covers uncertainties: intra/inter-laboratory variation, biological variance within/between species, test duration, and lab-to-field differences.
  • Larger AFs are needed when extrapolating from acute to chronic exposure or when data are limited.

Don't confuse: The AF is not arbitrary—it systematically scales with data quality and diversity; more data (more species, longer tests) → smaller AF.
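
A minimal sketch of the AF scheme in the table above, with hypothetical toxicity values; the PNEC is simply the lowest relevant endpoint divided by the factor matching the available data set.

```python
# Assessment factors for the data situations listed in the table above
ASSESSMENT_FACTORS = {
    "acute_three_trophic_levels": 1000,   # short-term LC50/EC50 for fish, invertebrates, algae
    "one_chronic_noec": 100,              # one long-term EC10 or NOEC (fish or Daphnia)
    "two_chronic_two_levels": 50,         # two long-term results from two trophic levels
    "three_chronic_three_levels": 10,     # long-term results covering three trophic levels
}

def derive_pnec(toxicity_values_mg_per_l, data_situation):
    """PNEC = lowest relevant toxicity test result divided by the matching assessment factor."""
    return min(toxicity_values_mg_per_l) / ASSESSMENT_FACTORS[data_situation]

# Hypothetical acute data set (mg/L): fish LC50, Daphnia EC50, algae EC50
acute_data = [1.2, 0.8, 3.5]
print(f"PNEC = {derive_pnec(acute_data, 'acute_three_trophic_levels'):.4f} mg/L")
```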

🦅 Secondary poisoning reference levels

  • For chemicals that bioaccumulate, toxicity to birds/mammals is transformed into safe concentrations in prey (fish or mussels).
  • Simple methods use default conversion factors; more sophisticated methods (e.g., WFD) use predator energy demand and food energy content, and can include longer food chains with field bioaccumulation factors.

📊 Species Sensitivity Distributions (SSDs)

📈 What an SSD is

A Species Sensitivity Distribution (SSD) is a distribution describing the variance in sensitivity of multiple species exposed to a hazardous compound.

  • Different species have different sensitivities; when log-transformed toxicity endpoints (NOECs, EC50s, LC50s) are plotted, they follow a bell-shaped (log-normal) distribution.
  • The cumulative form is an S-shaped (sigmoid) curve with log concentration on the X-axis and cumulative probability (0–1) on the Y-axis.

🔄 Dual utility of SSDs

The SSD serves two purposes:

  1. Y → X (protection): Select a protection level on Y (e.g., 5% of species affected) to derive a Hazardous Concentration (HC₅) on X—a regulatory benchmark.

    • At HC₅, 5% of species are exposed above their NOEC; 95% are below.
    • Often an extra assessment factor (1–5) is applied to HC₅ to derive a PNEC or Environmental Quality Standard (EQS).
  2. X → Y (assessment): Given a measured or predicted concentration (X), derive the Potentially Affected Fraction (PAF) of species (Y).

    • Example: "24% of species are likely affected" at a given pollution level.
    • The meaning depends on the endpoint: SSD_NOEC quantifies species affected beyond no-effect level; SSD_EC50 quantifies species affected at 50% effect level.

Don't confuse: SSDs describe variation in sensitivity statistically but do not explain why species differ; they rank chemicals or sites by hazard/impact but require biological interpretation (e.g., mode of action) to avoid misfit.

🛠️ Deriving and validating an SSD

Step 1: Data collection

  • Extract ecotoxicity data (e.g., EC50, NOEC) from databases (EPA Ecotox, REACH, EnviroTox).
  • Rank data from most to least sensitive; use only best-quality data; if multiple values per species, use geometric mean.
  • Each species represented once.
  • Minimum data often "Algae, Daphnids, Fish"; more data (>100 species) improves model.

Step 2: Model fitting

  • Log₁₀-transform data; plot bell-shaped distribution to inspect for deviations.
  • Fit log-normal model (estimate mean and standard deviation) using statistical software or dedicated tools (e.g., ETX).
  • Check goodness of fit (statistical tests, visual inspection).
  • If a chemical has a selective mode of action (e.g., insecticides highly toxic to insects), derive separate SSDs for affected and unaffected groups (SSD_Insect, SSD_Other).

Step 3: Application

  • For protection: derive HC₅ (or HC_x) from the fitted curve.
  • For assessment: input ambient concentration to read PAF from the curve.
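
A minimal sketch of the three steps above, assuming a log-normal SSD fitted to log10-transformed endpoints; the species endpoints are hypothetical and goodness-of-fit checking is omitted.

```python
import numpy as np
from scipy import stats

# Hypothetical chronic NOECs (µg/L), one value per species (geometric mean if several existed)
noecs = np.array([3.2, 5.0, 8.5, 12.0, 20.0, 31.0, 55.0, 90.0, 150.0, 240.0])

# Fit the log-normal SSD: a normal distribution of the log10-transformed endpoints
log_noecs = np.log10(noecs)
mu, sigma = log_noecs.mean(), log_noecs.std(ddof=1)

# Protection use (Y -> X): concentration at which 5% of species are exposed above their NOEC
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)

# Assessment use (X -> Y): potentially affected fraction at a measured concentration
measured = 25.0  # µg/L
paf = stats.norm.cdf(np.log10(measured), loc=mu, scale=sigma)

print(f"HC5 = {hc5:.1f} µg/L; PAF at {measured} µg/L = {paf:.0%}")
```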

🌍 Practical uses

  1. Regulatory standard-setting (prospective): EU, OECD, many countries use SSDs to set protective standards; if predicted concentration exceeds standard, chemical may be prohibited or restricted.
  2. Environmental quality assessment (retrospective): compare measured concentrations to reference levels; use PAF to prioritize sites for remediation.
  3. Life Cycle Assessment: aggregate PAFs over all chemicals in a product to identify ecotoxicity "hot spots" (e.g., USEtox model).

🧬 Predicting ecotoxicity from chemical structure (QSARs)

🧩 What a QSAR is

A QSAR (Quantitative Structure-Activity Relationship) is a mathematical model that relates ecotoxicity data (Y) with structural descriptors and/or physicochemical properties (X) for a series of chemicals.

  • Y-variable: toxicity endpoint (e.g., fish LC50, Daphnia NOEC) in molar units (relates to molecular events).
  • X-variable(s): molecular weight, log K_ow (octanol-water partition coefficient), electronic/topological descriptors, functional groups.
  • Form: linear regression (Y = a₁X₁ + a₂X₂ + ... + b) or multivariate techniques (PCA, PLS) or machine learning (SVM, Random Forest, neural networks).

🎯 Why mode of action (MOA) matters

  • A QSAR should be developed for chemicals with similar mode of action to avoid mixing mechanisms.
  • Verhaar classification (widely used) divides chemicals into four classes:
| Class | Name | Mode of action | Example structures |
| --- | --- | --- | --- |
| 1 | Inert (non-polar narcosis) | Non-specific membrane perturbation | Aromatic/aliphatic hydrocarbons, alcohols, ethers, ketones |
| 2 | Less inert (polar narcosis) | Membrane perturbation + H-bonding | Phenols, anilines |
| 3 | Reactive (electrophiles) | Covalent reaction with nucleophiles (proteins, DNA) | Aldehydes, epoxides, α,β-unsaturated carbonyls |
| 4 | Specific acting | Specific receptor/enzyme interaction | Insecticides (nervous system), organophosphates (AChE inhibitors) |
  • Baseline (narcosis) toxicity: the minimum effect caused by any organic chemical via non-specific membrane interactions; internal lethal concentration (ILC) ≈ 50 mmol/kg lipid, independent of K_ow.
  • Class 2, 3, 4 chemicals show excess toxicity (toxic ratio Te) compared to baseline: Te = (LC50_baseline / LC50_experimental).

Don't confuse: Small structural changes can shift MOA dramatically (Box 4 example: 1-chloro-2,4-dinitrobenzene is reactive/alkylating, but 1,2-dichloro-4-nitrobenzene is polar narcosis—100% structural similarity but 230× toxicity difference).

📐 QSARs for baseline toxicants (Class 1)

For non-polar narcosis chemicals, toxicity correlates strongly with log K_ow (hydrophobicity):

  • Why: LC50 is inversely related to bioconcentration factor (BCF); BCF increases with K_ow → LC50 decreases with K_ow.
  • Internal lethal concentration (ILC) is constant; external LC50 = ILC / BCF.

Example QSARs (Table 1 in excerpt):

  • Guppy 14-d LC50 (mol/L): log LC50 = –0.869 log K_ow – 1.19 (n=50, r²=0.969)
  • Zebrafish 28-d NOEC (mol/L): log NOEC = –0.898 log K_ow – 2.30 (n=27, r²=0.917)

Interpretation: The intercept difference (–2.30 vs. –1.19 = 1.11 log units) means the NOEC test is 10^1.11 ≈ 13× more sensitive than the LC50 test, consistent with the standard assessment factor of 10 for LC50→NOEC extrapolation.
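
A minimal sketch applying the guppy baseline QSAR from Table 1 and computing a toxic ratio (Te); the example chemical (log K_ow, molecular weight) and the experimental LC50 are hypothetical.

```python
def guppy_baseline_lc50_mol_per_l(log_kow):
    """Predicted 14-d guppy LC50 (mol/L) for non-polar narcosis chemicals (Table 1)."""
    return 10 ** (-0.869 * log_kow - 1.19)

# Hypothetical chemical: log Kow = 4.0, molecular weight 200 g/mol
log_kow, mol_weight = 4.0, 200.0
lc50_baseline = guppy_baseline_lc50_mol_per_l(log_kow)
print(f"Predicted baseline LC50: {lc50_baseline:.2e} mol/L "
      f"({lc50_baseline * mol_weight * 1000:.1f} mg/L)")

# Toxic ratio Te = baseline LC50 / experimental LC50; Te >> 1 points to excess toxicity
lc50_experimental = 1.0e-6  # mol/L, hypothetical measured value
print(f"Toxic ratio Te = {lc50_baseline / lc50_experimental:.0f}")
```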

⚗️ QSARs for reactive and specific-acting chemicals

  • More complex: intrinsic toxicity (reactivity, receptor potency) and biotransformation affect the endpoint.
  • Example (Box 3, α,β-unsaturated carboxylates):
    log 96-h LC50 = –0.67 log k_GSH – 0.31 log K_ow – 3.33
    (k_GSH = reaction rate with glutathione, a measure of reactivity)
  • LC50 decreases with increasing K_ow (bioaccumulation) and with increasing reactivity.

Don't confuse: Excess toxicity (Te) is a ratio, not an absolute measure; it tells you how much more toxic a chemical is than predicted by baseline narcosis, but the biological meaning depends on the MOA.

🖥️ Expert systems and software

ECOSAR (US-EPA):

  • Estimates acute and chronic aquatic toxicity using structure-activity relationships.
  • Groups structurally similar chemicals; continuously updated QSARs.
  • Free downloadable software.

OECD QSAR Toolbox:

  • Identifies structural characteristics and MOA of a target chemical.
  • Finds analogues with the same MOA.
  • Fills data gaps via read-across, trend analysis, or QSAR models.
  • Requires expertise; includes guidance and video tutorials.

Validation (Box 1):

  • Training set: chemicals used to build the model.
  • Validation set: independent chemicals to test predictions.
  • Cross-validation (Q²): leave-one-out or leave-group-out to assess predictivity.
  • OECD principles: defined endpoint, unambiguous algorithm, defined applicability domain, goodness-of-fit/robustness/predictivity measures, mechanistic interpretation if possible.

Caution: Models using hundreds of descriptors (e.g., CODESSA: 494 molecular + 944 fragment descriptors) risk overfitting; validation is essential.

⚠️ Limitations and context

🧪 Substance-by-substance vs. mixtures

  • Risk assessment is often performed substance-by-substance, but real environments contain mixtures.
  • Mixture toxicology is progressing: whole-mixture methods and compound-based approaches exist (Section 6.3.6, under review in excerpt).
  • Effect-based methods (Section 6.4.2) assess risk based on measured toxicity in environmental samples rather than chemical concentration—assessing impacts rather than state/pressures in the DPSIR framework.

🔍 Hazard-based vs. risk-based decisions

  • Example question (6.1.1): Banning glyphosate based solely on carcinogenic properties = hazard-based (ignores exposure).
  • Risk-based = considers both hazard and exposure.

📏 Risk indicators and prioritization

  • A risk indicator value (e.g., 1.5 vs. 2.0) ranks substances, but prioritization should also consider magnitude of impact, feasibility of intervention, and uncertainty (Question 6.1.4).

Don't confuse: A PNEC or HC₅ is not a "safe" concentration in an absolute sense—it is a regulatory threshold derived to protect a defined fraction of species under stated assumptions; exceedance triggers further assessment or action, not automatic harm.

31

Diagnostic risk assessment approaches and tools

6.4. Diagnostic risk assessment approaches and tools

🧭 Overview

🧠 One-sentence thesis

Diagnostic risk assessment uses living organisms—from cell lines to field communities—to detect and identify toxic effects of environmental samples, complementing chemical analysis by revealing the combined impact of all bioavailable contaminants rather than measuring individual compounds in isolation.

📌 Key points (3–5)

  • What diagnostic tools do: They assess actual toxicity of environmental samples using bioassays (living test systems) rather than predicting risk from chemical concentrations alone.
  • Predictive vs. diagnostic distinction: Predictive tools (standard toxicity tests) estimate future risk from known chemicals; diagnostic tools (bioassays on field samples) detect present effects from unknown mixtures.
  • Hierarchy of biological organization: Tools range from in vitro (cell-based, fast, mechanism-specific) → in vivo (whole organisms, ecologically relevant) → mesocosms → field biomonitoring, each with trade-offs between speed/specificity and ecological realism.
  • Common confusion—single test vs. battery: A single bioassay may miss important toxicants (e.g., herbicides won't affect animal tests); batteries covering multiple taxa and endpoints reduce over- or underestimation of risk.
  • Integration approaches: The TRIAD and eco-epidemiology bridge chemistry, toxicity, and field ecology to support management decisions with weight-of-evidence.

🔬 In vitro bioassays: mechanism-specific screening

🧪 What in vitro means and why it matters

In vitro bioassays: biological test systems using tissues, cells, or proteins (not whole organisms) that show measurable, biologically relevant responses to environmental samples.

  • "In vitro" = "in glass"—originally test tubes, now typically 96- or 384-well microplates.
  • Advantages: small sample volume, short duration (minutes to 48 hours), high throughput, mechanism-specific (e.g., estrogenic activity, dioxin-like effects), fewer ethical concerns than whole-animal tests.
  • Trade-off: Lower ecological relevance than whole organisms; responses are molecular/cellular, not survival or reproduction.

🧬 Reporter gene assays

  • Principle: Genetically modified cells contain a reporter gene (e.g., luciferase) under control of a toxicant-responsive element (e.g., estrogen-responsive element, ERE).
  • How it works:
    1. Toxicant (e.g., endocrine disruptor) enters cell and binds receptor (e.g., estrogen receptor).
    2. Activated receptor translocates to nucleus, binds response element.
    3. Reporter gene is transcribed/translated → measurable signal (light, fluorescence).
  • Example: Estrogenic compounds activate ER → luciferase production → light emission quantified in luminometer.
  • Don't confuse with: Direct enzyme assays (below)—reporter assays measure gene expression triggered by receptor activation, not enzyme activity itself.

⚙️ Enzyme induction and inhibition assays

  • Enzyme induction (EROD assay): Dioxin-like compounds activate arylhydrocarbon receptor (AhR) → CYP1A1 enzyme production → substrate (ethoxyresorufin) converted to fluorescent product (resorufin).
  • Enzyme inhibition (AChE assay): Organophosphate/carbamate insecticides block acetylcholinesterase → reduced hydrolysis of alternative substrate (acetylthiocholine) → less yellow product formation (measured photometrically).
  • Example: Electric eel AChE is incubated with water sample + substrate + indicator; decreased yellow color = inhibition = insecticide presence.

📊 Interpreting toxicity profiles

Two main strategies:

| Strategy | What it compares | Use case |
| --- | --- | --- |
| Benchmark | Sample profile vs. reference profile (clean sites) | Identify which endpoints deviate most; cluster similar contamination patterns |
| Trigger value (EBT) | Each bioassay response vs. "safe" threshold (effect-based trigger value) | Flag samples exceeding protective levels; prioritize sites for action |
  • Bioanalytical equivalent concentration (BEQ): expresses mixture potency as concentration of reference compound (e.g., "0.1 ng estradiol equivalents per liter").
  • If response exceeds EBT or deviates from benchmark → potential ecological risk → may trigger Effect-Directed Analysis (EDA) to identify culprit chemicals.

🔍 Effect-Directed Analysis: identifying unknown toxicants

🎯 Purpose and complementarity

  • Problem: Bioassay shows toxicity, but chemical analysis of priority pollutants finds nothing—or finds compounds that don't explain the observed effect.
  • Solution: EDA combines bioassay (inclusive, detects all active compounds) with chemistry (identifies specific molecules) to discover emerging contaminants.
  • When to use: At "hotspots" where observed bioassay response significantly exceeds what known chemicals can explain (via concentration addition).

🧩 The six-step EDA process

🧩 1. Extract

  • Prepare concentrated extract from environmental sample (water, sediment, soil, biota).
  • Remove matrix interferences; concentrate compounds of interest.

🧩 2. Biological analysis

  • Test extract in bioassay(s) to confirm activity.
  • Typically in vitro (96-well plates) for throughput; sometimes in vivo.

🧩 3. Fractionation

  • Separate extract into simpler fractions using chromatography (liquid or gas).
  • Collect fractions at time intervals (seconds to minutes).
  • Goal: Reduce complexity—each fraction contains fewer compounds than original extract.
  • Test fractions in bioassay; select active fractions for further analysis.
  • May require second round of fractionation if fractions still too complex.

🧩 4. Chemical analysis

  • Inject active fractions into liquid chromatography–mass spectrometry (LC-MS).
  • High-resolution MS (HR-MS) provides accurate mass → molecular formula.
  • MS-MS fragmentation yields structural clues.

🧩 5. Identification

  • Use accurate mass + fragmentation pattern + databases (PubChem, ToxCast) to propose structure.
  • Suspect screening: match to known compounds (e.g., consumer product ingredients).
  • Non-target screening: identify truly unknown compounds (may need NMR).
  • Output: tentative identification, a suspect list.

🧩 6. Confirmation

  • Obtain or synthesize authentic standard of suspected compound.
  • Compare analytical behavior (retention time, mass spectrum) and bioassay response of standard vs. environmental sample.
  • Bottleneck: Standards often not commercially available; synthesis is costly/time-consuming.

🔄 High-throughput EDA (HT-EDA)

  • Fractionation directly into microwell plates (small fractions, seconds apart).
  • Splitter directs eluent to both well plate and MS simultaneously.
  • One round of fractionation suffices; faster workflow.

🐛 In vivo bioassays: whole-organism responses

🌍 What in vivo means and ecological relevance

In vivo bioassays: tests using whole living organisms exposed to environmental samples, measuring survival, reproduction, growth, or behavior.

  • Higher ecological relevance than in vitro: endpoints (e.g., reproduction, emergence) directly relate to population impacts.
  • Same species and endpoints as standard toxicity tests (see earlier sections), but control selection differs:
    • Standard tests: clean medium controls.
    • Bioassays: reference-site samples (similar physicochemical properties but less contaminated) or least-contaminated samples from a gradient.

🪱 Common test organisms by compartment

| Compartment | Typical organisms | Example endpoints |
| --- | --- | --- |
| Soil | Earthworms (Eisenia, Lumbricus), collembolans (Folsomia), enchytraeids | Reproduction, survival |
| Water | Daphnids (Daphnia magna, Chydorus), other invertebrates, algae | Mortality, growth inhibition |
| Sediment | Oligochaetes, chironomids (Chironomus), rooting macrophytes, benthic diatoms | Emergence, growth |
  • Example: Earthworm reproduction reduced at Pb-contaminated shooting range soils; effect depends on both Pb concentration and soil pH (acidic soils exacerbate toxicity).
  • Example: Chironomid emergence slower and lower on contaminated sediment vs. reference.

🔋 Bioassay batteries: why multiple species?

  • Problem: Toxicity is species- and compound-specific. Herbicides harm plants, not animals; insecticides harm insects more than other taxa.
  • Solution: Test battery with organisms from different taxa (primary producers, invertebrates, vertebrates) and trophic levels.
  • Benefit: Reduces uncertainty, avoids over- or underestimation of risk, increases ecological relevance.
  • Don't confuse: A single "representative" species cannot capture all sensitivities—different chemicals target different groups.

💧 Effect-based water quality assessment

🚰 Why shift from compound lists to effect-based monitoring?

  • Traditional approach: Measure priority substances, compare each to its environmental quality standard (EQS).
  • Limitations:
    • Priority lists are outdated (legacy pollutants declining, new alternatives emerging).
    • Cannot cover thousands of unregulated compounds, metabolites, or transformation products.
    • Ignores mixture effects and compounds below detection limits.
    • Large portion of observed toxicity unexplained by measured chemicals.
  • Effect-based approach: Bioassay battery detects combined impact of all bioavailable (un)known compounds and metabolites.

🧪 Designing a bioassay battery

  • Structure: Combine in situ (field cages) + in vivo (lab whole-organism) + in vitro (mechanism-specific) assays.
  • Rationale:
    • In situ and in vivo: general toxic pressure, high ecological relevance.
    • In vitro: specific modes of action (e.g., estrogenicity, dioxin-like activity, genotoxicity, antibiotic resistance).
  • Adverse Outcome Pathway (AOP) logic: Molecular initiating events (in vitro) link to adverse outcomes (in vivo/field); responses at multiple levels narrow down candidate toxicants.

📐 Toxic and bioanalytical equivalents (TEQ/BEQ)

  • Concept: Express mixture potency as concentration of a reference compound causing equivalent effect.
  • Example: Water sample causes 38% effect in estrogen receptor assay → equivalent to the effect of 0.02 ng/L of 17β-estradiol → reported as "0.02 ng estradiol equivalents per liter (EEQ/L)".
  • Assumption: All active compounds share the same mode of action and act concentration-additively.
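A rough illustration of the calculation (the assay EC50, Hill slope, and enrichment factor below are assumed values, chosen only so the output lands near the 0.02 ng EEQ/L figure above): the sample's response is inverted through the reference compound's concentration-response curve to obtain the equivalent estradiol concentration.

```python
def bioanalytical_equivalent(sample_effect, ec50_ref, hill=1.0):
    """Reference-compound concentration producing the same fractional effect,
    obtained by inverting a logistic concentration-response curve."""
    return ec50_ref * (sample_effect / (1.0 - sample_effect)) ** (1.0 / hill)

# Assumed assay parameters and a 38% sample response (illustrative)
ec50_estradiol = 0.033   # ng/L of 17beta-estradiol in the assay (assumed)
sample_response = 0.38   # fraction of the maximal assay effect
enrichment_factor = 1.0  # correct for sample concentration during extraction

eeq = bioanalytical_equivalent(sample_response, ec50_estradiol) / enrichment_factor
print(f"{eeq:.3f} ng EEQ/L")
```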

🚦 Effect-based trigger values (EBT)

Effect-based trigger value (EBT): bioassay response threshold differentiating acceptable from poor water quality, expressed as toxic/bioanalytical equivalents.

  • Derived from lab toxicity data, field observations, or regulatory EQS translated into bioassay units.
  • Example EBTs: 0.1 ng EEQ/L for estrogenicity, 50 pg TEQ/L for dioxin-like activity.
  • Decision rule: Response above EBT → potential ecological risk.

🗺️ Ranking sites by integrated risk

  1. Test samples from multiple sites in bioassay battery.
  2. For each site and each bioassay, classify response: green (no response), yellow (below EBT), orange (above EBT).
  3. Calculate cumulative risk score across all bioassays.
  4. Output: Heat map ranking sites by ecotoxicological risk (not by presence of specific compounds).
  5. Management: Invest in chemical identification (targeted/non-target analysis, EDA) only at high-risk sites.
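A toy version of this ranking step is sketched below; the trigger values, site responses, and scoring weights are invented for illustration, not regulatory numbers.

```python
# Classify each bioassay response against its effect-based trigger value (EBT)
# and sum a simple cumulative risk score per site.

ebts = {"estrogenicity_EEQ": 0.1, "dioxin_like_TEQ": 0.05}  # bioassay units

sites = {
    "site_1": {"estrogenicity_EEQ": 0.02, "dioxin_like_TEQ": 0.00},
    "site_2": {"estrogenicity_EEQ": 0.08, "dioxin_like_TEQ": 0.09},
    "site_3": {"estrogenicity_EEQ": 0.40, "dioxin_like_TEQ": 0.12},
}

def classify(value, ebt):
    if value == 0:
        return "green", 0      # no response
    if value <= ebt:
        return "yellow", 1     # response below trigger value
    return "orange", 2         # response above trigger value -> potential risk

for site, responses in sites.items():
    scores = {assay: classify(v, ebts[assay]) for assay, v in responses.items()}
    total = sum(score for _, score in scores.values())
    print(site, scores, "cumulative risk score:", total)
```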

🏞️ Biomonitoring: in situ exposure and contaminant accumulation

🔬 What biomonitoring means

Biomonitoring: use of living organisms for in situ assessment of environmental quality on a regular basis.

Two types:

| Type | Procedure | What is measured |
| --- | --- | --- |
| Passive | Collect organisms from site of interest | Condition and/or tissue contaminant concentrations |
| Active | Deploy reference organisms in cages/substrates at study sites, then recollect | Condition (in situ bioassay) and/or tissue contaminant concentrations |

  • Active biomonitoring controls for organism origin; passive uses local populations (may be pre-adapted or locally extinct).

🦪 Selecting biomonitoring organisms

Ideal characteristics:

  • Sedentary: Ensures exposure reflects site conditions; mobile organisms confound spatial interpretation.
  • Native and representative: Tolerates local conditions (temperature, salinity, pH); avoids introducing exotic species.
  • Long-lived and large: Survives exposure duration; provides sufficient biomass for chemical analysis.
  • Easy to handle.
  • Responsive to contamination gradient (for in situ bioassays) or accumulates contaminants without dying (for tissue analysis).

Examples:

  • Marine: Mussels (Mytilus spp.)—global distribution, easy to cage, accumulate contaminants.
  • Freshwater: Daphnia magna, mayflies, snails, worms, amphipods, bivalves, periphyton.
  • Sediment/soil: Chironomids (limited use due to complexity).

🪤 In situ exposure devices

  • Mussels: Cages suspended in water.
  • Daphnids: Glass jars with permeable lids.
  • Caddisflies: Tubes connected to floats so they maintain a constant depth; larvae settle on an artificial substrate (plastic doormat).
  • Periphyton: Sand-blasted glass discs on polyethylene racks (170 discs per rack), vertical in current; allows replicated community experiments.

📊 In situ bioassays: measuring organism condition

  • Endpoints: Survival (routine monitoring), reproduction (daphnids, snails, isopods), growth (isopods), emergence (aquatic insects).
  • Strength: Organisms respond to all joint stressors at the site (chemical + physical + biological).
  • Limitation: Cannot isolate cause—adverse effects could be due to toxicants, low oxygen, high temperature, etc.
  • Best practice: Combine with lab bioassays (controlled conditions) and physicochemical analysis (TRIAD approach).
    • If effects occur in situ and in lab → likely chemical toxicity.
    • If effects occur in situ but not in lab → likely non-chemical stressor (pH, temperature, habitat).

🧬 Measuring contaminant concentrations in organisms

  • Advantages over environmental samples:
    • Time-integrated: Organisms accumulate over days/weeks; grab samples are snapshots.
    • Bioavailability: Only bioavailable fraction is taken up—ecologically relevant.
    • Organisms act as "biological passive samplers."
  • Limitations:
    • Elevated tissue concentrations ≠ toxic effects (need to measure condition too).
    • Sample preparation more complex/expensive than water or sediment analysis.
  • Applications: Spatial and temporal mapping of bioavailable contamination (e.g., PCBs in zebra mussels across Flanders; rapid Cd uptake/depuration in translocated biofilms).

📡 Online biomonitoring

  • Continuous exposure of organisms to flowing surface water in laboratory setup (on shore or boat).
  • Endpoint: Behavior (e.g., valve closure in mussels, swimming activity in daphnids).
  • Use: Early warning system—alarm triggers if behavior changes, allowing shutdown of drinking water intake.
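A sketch of how such an alarm rule could be implemented; the window length, the z-score criterion, and the "stop intake" action are assumptions for illustration, not a description of any specific commercial system.

```python
from collections import deque
from statistics import mean, stdev

def make_alarm(window=60, z_threshold=3.0):
    """Return an update function that flags strong deviations from a moving baseline."""
    baseline = deque(maxlen=window)

    def update(activity):
        """Feed one measurement (e.g., % of open mussel valves); return True on alarm."""
        if len(baseline) == baseline.maxlen:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(activity - mu) > z_threshold * sigma:
                return True   # abrupt behaviour change -> e.g., stop water intake
        baseline.append(activity)
        return False

    return update

alarm = make_alarm()
readings = [80 + (i % 5) for i in range(60)] + [20]  # stable baseline, then collapse
for r in readings:
    if alarm(r):
        print("behaviour alarm: investigate / stop intake")
```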

⚖️ TRIAD approach: weight-of-evidence integration

🧩 What TRIAD integrates

TRIAD approach: site-specific ecological risk assessment combining three independent lines of evidence (LoE) into a weight-of-evidence for decision support.

Three lines of evidence:

| Line of evidence | What it measures | Methods |
| --- | --- | --- |
| 1. Chemistry | Contaminant concentrations, fate, bioavailability | Chemical analysis, exposure modeling, toxic pressure metrics (e.g., msPAF) |
| 2. Toxicity | Actual toxicity of site samples | Bioassays (in vitro, in vivo) on extracted samples under controlled conditions |
| 3. Ecology | Observed effects in the field | Community surveys (species composition, abundance, diversity), ecosystem function |

🎯 Weight-of-evidence principle

  • Convergence: When all three LoE indicate similar risk level → strong evidence → confident decision.
  • Divergence: When LoE disagree → high uncertainty → further investigation needed.
  • Example: High contaminant concentrations (Chemistry) + high bioassay toxicity (Toxicity) + low biodiversity (Ecology) → convergence → likely chemical impact.
  • Counter-example: High contaminants + no bioassay effects + high biodiversity → divergence → contaminants may not be bioavailable, or community is adapted, or other factors dominate.

📋 TRIAD workflow

  1. Collect basic data per LoE (Table 1 in excerpt: soil samples analyzed for metals, tested in bioassays, surveyed for nematodes/feeding activity).
  2. Scale each endpoint to risk values (0 = no effect, 1 = maximum effect), e.g., using Potentially Affected Fraction (PAF) from Species Sensitivity Distributions.
  3. Aggregate risk values per LoE (equal or differential weights).
  4. Integrate across LoE into overall TRIAD risk score.
  5. Calculate deviation between LoE (threshold e.g., 0.4): low deviation = weight of evidence established.
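A minimal sketch of steps 3-5, assuming risk values already scaled to 0-1; equal weights, a simple mean, and a max-minus-min deviation are simplifying choices here, and published TRIAD schemes may combine the lines of evidence differently.

```python
loe_risks = {
    "chemistry": [0.7, 0.8],   # e.g., msPAF-based toxic pressure per sample
    "toxicity":  [0.6, 0.5],   # scaled bioassay effects
    "ecology":   [0.4, 0.6],   # scaled field-survey deviations
}

per_loe = {loe: sum(v) / len(v) for loe, v in loe_risks.items()}   # step 3
integrated = sum(per_loe.values()) / len(per_loe)                  # step 4
deviation = max(per_loe.values()) - min(per_loe.values())          # step 5

print("risk per LoE:", per_loe)
print(f"integrated TRIAD risk: {integrated:.2f}")
print("weight of evidence established" if deviation < 0.4
      else "LoE diverge: investigate further")
```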

🔍 Interpreting divergence

  • Scenario A: High chemistry risk, low toxicity, low ecology risk → Contaminants present but not bioavailable (e.g., strongly bound to organic matter).
  • Scenario B: Low chemistry risk (no priority substances detected), high toxicity, low ecology → Unknown toxicants present; trigger EDA.
  • Scenario C: Low chemistry, low toxicity, low ecology → Non-chemical stressor (habitat degradation, low oxygen, acidification).

🌐 Eco-epidemiology: diagnosing mixture impacts in the field

🧬 Definition and origins

Eco-epidemiology (ecotoxicological context): the study of the distribution and causation of impacts of multiple stressor exposures in ecosystems, and the application of this study to reduce ecological impacts.

  • Analogy to human epidemiology: John Snow's 1854 cholera study linked disease to contaminated water via spatial/temporal patterns, not lab experiments.
  • Ecotoxicology parallel: Use field monitoring data + statistics to attribute ecological impacts to chemical mixtures, validating lab-based risk assessments.
  • First mention: 1984 (Bro-Rasmussen & Løkke)—proposed as discipline to validate ecotoxicological models.

🔍 Why eco-epidemiology is needed now

  • Historical impacts were obvious: Rachel Carson's Silent Spring (1962) documented clear chemical effects (e.g., bird population declines from DDT).
  • Modern complexity: Thousands of chemicals at low concentrations; mixtures vary in space/time; impacts harder to discern from natural variability and multiple stressors.
  • Validation gap: Lab-based risk assessments predict effects, but do they occur in the field? Eco-epidemiology tests this.

📊 Four-step eco-epidemiological workflow

  1. Collect monitoring data: Abiotic variables (habitat, chemistry) + biotic data (species occurrence/abundance) for environmental compartment.
  2. Optimize data: Harmonize taxonomy, align abiotic and biotic data in space/time, add mixture stress metrics (e.g., multi-substance Potentially Affected Fraction, msPAF).
  3. Statistical analysis: Use ecological bio-assessment techniques (quantile regression, species distribution models, generalized linear models) to delineate impacts and probable causes.
  4. Interpret and apply: Distinguish statistical association from causation; use for model validation or management (prioritize sites/chemicals).

📈 Quantile regression: a simple eco-epidemiological method

  • Principle: Ordinary regression models the mean of the response; quantile regression models a chosen percentile (e.g., the 95th) of the response variable.
  • Logic: If stressor limits response, high response values are absent at high stressor levels → "empty corner" in XY plot.
  • Example: Plot biodiversity (Y) vs. toxic pressure (X). If upper 95th percentile declines with increasing X → toxicity limits biodiversity.
  • Interpretation: Empty area in upper-right corner = stressor acts as limiting factor.
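A small simulated example, assuming the statsmodels library is available; the data are generated so that toxic pressure caps biodiversity, which is exactly what a negative 95th-percentile slope would indicate.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
tox_pressure = rng.uniform(0, 1, 300)                 # e.g., msPAF per site
ceiling = 40 * (1 - 0.6 * tox_pressure)               # upper limit imposed by toxicity
biodiversity = rng.uniform(0.2, 1.0, 300) * ceiling   # sites scatter below the ceiling

X = sm.add_constant(tox_pressure)
fit95 = sm.QuantReg(biodiversity, X).fit(q=0.95)
print(fit95.params)  # negative slope of the 95th percentile -> limiting factor
```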

🧪 Mixture stress proxies

  • Challenge: Adding each chemical as separate variable requires huge sample sizes.
  • Solution: Aggregate chemicals into single mixture metric using Species Sensitivity Distributions (SSDs) and mixture toxicity rules (e.g., response addition).
  • msPAF (multi-substance Potentially Affected Fraction): Converts measured concentrations → PAF per chemical → aggregates into total mixture PAF.
  • Studies show >60% of taxa co-affected by chemical mixtures when msPAF included in models.
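A minimal msPAF sketch, assuming log-normal SSDs and the response-addition rule; the SSD parameters and concentrations are invented for illustration.

```python
from math import log10, erf, sqrt, prod

def paf(conc, ssd_mu, ssd_sigma):
    """Potentially Affected Fraction: log-normal SSD evaluated at log10(conc)."""
    z = (log10(conc) - ssd_mu) / ssd_sigma
    return 0.5 * (1 + erf(z / sqrt(2)))

chemicals = {            # concentration (ug/L), SSD mean/sd of log10 EC50 (ug/L)
    "Cd": (2.0, 1.3, 0.7),
    "Zn": (50.0, 2.4, 0.6),
}

pafs = {name: paf(c, mu, sd) for name, (c, mu, sd) in chemicals.items()}
ms_paf = 1 - prod(1 - p for p in pafs.values())   # response addition
print(pafs, f"msPAF = {ms_paf:.2f}")
```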

🎯 Two uses of eco-epidemiological outputs

🎯 1. Validation of ecotoxicological models

  • Do lab-derived protective benchmarks (e.g., EQS) actually protect field communities?
  • Do mixture toxicity models (concentration addition, response addition) predict field impacts?
  • Example: Berger et al. (2016) found field effects at concentrations lower than lab-predicted—suggests models may underestimate risk.

🎯 2. Diagnosis and control

  • Identify which chemicals/mixtures cause observed impacts.
  • Prioritize management actions to sites and stressors with greatest impact.
  • Example: Statistical association between missing species and metal mixtures led to discovery of leaching from old mining spoil heaps.

🔮 Prospective eco-epidemiology

  • Land-use signatures: Different land uses (agriculture, urban) emit characteristic chemical mixtures.
  • Modeling: Emission + fate + ecotoxicity models predict impact magnitudes from signatures.
  • Application: Avoid impacts by preventing high-risk emission patterns before they occur.

🧩 Integration with other approaches

  • Eco-epidemiology provides one line of evidence; can combine with TRIAD (chemistry, toxicity, ecology) or more lines (Chapman & Hollert, 2006).
  • When to use multiple lines: Higher-stakes decisions warrant more evidence; convergence across lines increases confidence.

Common confusion recap:

  • Predictive vs. diagnostic: Predictive = lab tests on pure chemicals, estimate future risk; diagnostic = bioassays on field samples, detect present effects.
  • In vitro vs. in vivo: In vitro = cells/proteins, fast, mechanism-specific, lower ecological relevance; in vivo = whole organisms, slower, ecologically relevant, less specific.
  • Bioassay vs. toxicity test controls: Toxicity tests use clean medium; bioassays use reference-site samples or least-contaminated samples.
  • Single species vs. battery: One species misses taxon-specific toxicants; battery reduces blind spots.
  • Statistical association vs. causation: Eco-epidemiology shows correlation; causation requires additional evidence (experiments, mechanistic understanding, multiple lines).

Regulatory Frameworks

6.5. Regulatory Frameworks

🧭 Overview

🧠 One-sentence thesis

Different categories of chemicals are regulated by separate EU frameworks that require data submission, risk assessment, and management measures to ensure safe production and use while protecting human health and the environment.

📌 Key points (3–5)

  • No single global framework: Regulations differ by chemical category (industrial chemicals, pesticides, biocides, pharmaceuticals) based on usage.
  • Data as the foundation: All frameworks require sufficient data on identity, production, emissions, fate, and (eco)toxicity as building blocks for risk assessment.
  • Two parallel tracks: Hazard-based classification (CLP) identifies intrinsic dangers via labeling, while risk assessment (e.g., REACH) compares exposure to safe levels.
  • Common confusion: CLP is hazard-based (what the chemical can do), whereas REACH and other frameworks are risk-based (whether exposure causes harm under actual use conditions).
  • Coordination by EU agencies: ECHA handles REACH and biocides, EFSA manages plant protection products, and EMA oversees pharmaceuticals.

🧩 Categories and structure

🧩 Chemical categories

The excerpt identifies four major categories:

| Category | Examples | Overlap possible? |
| --- | --- | --- |
| Industrial chemicals | Solvents, plasticizers | Yes (e.g., zinc in building materials and as a biocide) |
| Plant protection products | Herbicides, fungicides, insecticides | Yes (same active ingredient may be PPP or biocide) |
| Biocides | Antifouling agents, preservatives, disinfectants | Yes |
| Human and veterinary drugs | Pharmaceuticals | Yes (zinc oxide as veterinary drug) |

  • Each category has specific EU regulations or directives.
  • A single substance (e.g., zinc) may fall under multiple categories depending on use.

📋 Core requirements across frameworks

All legal frameworks share key elements:

  • Benchmark date: A formal cutoff after which polluting is illegal (prevention).
  • Data submission: Producers or importers must provide valid data on production, identity, use volumes, emissions, environmental fate, and (eco)toxicity.
  • Assessment guidelines: Both hazard and risk assessments follow specified technical guidelines.
  • Risk management: Outcomes trigger measures ranging from requests for additional data to use restrictions or full bans.

Don't confuse: Data requirements are not the same across all frameworks; tonnage and use determine the depth of testing (e.g., REACH requires more data at higher production volumes).

🔬 REACH: Registration, Evaluation, Authorisation, and Restriction

🔬 What REACH covers

REACH: A regulation to improve protection of human health and the environment from chemical risks while enhancing EU chemicals industry competitiveness.

  • Entered into force 1 June 2007.
  • Replaced about 40 Community regulations and directives with a single regulation.
  • Applies to a very broad spectrum: industrial to household chemicals and more.
  • Coordinated by the European Chemicals Agency (ECHA).

📊 Registration thresholds and responsibilities

  • ≥ 1 tonne/year: Manufacturers and importers must register unless exempted.
  • ≥ 10 tonnes/year: Responsible parties must show substances do not adversely affect human health or the environment.
  • Data requirements scale with tonnage: Higher production volumes require more standard information.
  • Animal testing: Before testing on vertebrates (fish, mammals), alternative methods must be considered.

📄 Chemical Safety Assessment (CSA)

For production volumes > 10 tonnes/year, industry must prepare a CSA that includes:

  • Exposure assessment: Estimating how much humans and the environment are exposed.
  • Hazard or dose-response assessment: Determining safe levels.
  • Risk characterization: Showing risk ratios < 1.0 (safe use).

The CSA is documented in a chemical safety report (CSR) covering all life-cycle stages and risk management measures.

Example: An industrial solvent produced at 50 tonnes/year requires a full CSA demonstrating that worker exposure, consumer exposure, and environmental release all result in risk ratios below 1.0.
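A schematic version of the risk-characterization step; the exposure estimates and safe levels below are placeholders, with DNEL and PNEC used as the usual REACH reference values.

```python
# Risk characterization: the ratio of estimated exposure to the safe level
# must stay below 1.0 for every protection target. Numbers are illustrative.

scenarios = {
    # (estimated exposure, safe level) in matching units
    "worker_inhalation": (0.8, 5.0),     # mg/m3 vs. DNEL
    "consumer_dermal":   (0.02, 0.5),    # mg/kg bw/day vs. DNEL
    "surface_water":     (0.003, 0.01),  # mg/L PEC vs. PNEC
}

for name, (exposure, safe_level) in scenarios.items():
    rcr = exposure / safe_level
    verdict = "safe use demonstrated" if rcr < 1.0 else "refine assessment or restrict use"
    print(f"{name}: RCR = {rcr:.2f} -> {verdict}")
```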

🏷️ Classification, Labelling, and Packaging (CLP)

🏷️ Purpose of CLP

CLP regulation: Requires manufacturers, importers, or downstream users to classify, label, and package hazardous chemicals appropriately before placing them on the market.

  • Hazard-based: When information (e.g., ecotoxicity data) meets classification criteria, hazards are identified by assigning a hazard class and category.
  • Not risk-based: CLP does not consider exposure or actual use conditions; it labels intrinsic danger.

🐟 Aquatic hazard classes

An important CLP hazard class is "Hazardous to the aquatic environment", divided into categories:

  • Category Acute 1: Most (acute) toxic chemicals (LC50/EC50 ≤ 1 mg/L).
  • Pictograms (e.g., dead fish and tree symbol) communicate hazard at a glance.

Don't confuse CLP with REACH risk assessment: CLP tells you what the chemical can do (hazard); REACH tells you whether it will cause harm under actual conditions (risk).
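To make the hazard-based logic concrete, here is a tiny sketch encoding only the Acute 1 criterion quoted above; real CLP classification covers additional categories and rules not shown here.

```python
def aquatic_acute_category(lc50_mg_per_l):
    """Return the acute aquatic hazard category, or None if the Acute 1 criterion is not met."""
    if lc50_mg_per_l <= 1.0:
        return "Aquatic Acute 1"   # pictogram: dead fish and tree
    return None

print(aquatic_acute_category(0.3))   # 'Aquatic Acute 1'
print(aquatic_acute_category(15.0))  # None -> not classified as Acute 1
```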

🌾 Plant Protection Products (PPPs) and Biocides

🌾 Plant Protection Products regulation

  • What they are: Pesticides used to keep crops healthy (herbicides, fungicides, insecticides, acaricides, plant growth regulators, repellents).
  • Regulation: EU Regulation (EC) No 1107/2009.
  • Authorization: PPPs cannot be placed on the market or used without prior authorization.
  • Coordinated by: European Food Safety Authority (EFSA).

🦠 Biocides regulation

  • Distinction from PPPs: As a rule of thumb, PPP regulation applies to substances used by farmers for crop protection; biocides regulation covers all other pesticide applications.
  • Examples: Antifouling agents, preservatives, disinfectants.
  • Regulation: EU Biocidal Products Regulation (BPR).
  • Authorization: All biocidal products require authorization; active substances must be previously approved.
  • Coordinated by: ECHA.
  • Risk assessment: Similar to other frameworks, environmental risk is assessed by comparing the predicted environmental concentration in each compartment (PEC) with the concentration below which unacceptable effects will most likely not occur (PNEC).

Example: The same active ingredient used as a crop fungicide (PPP) and as a wood preservative (biocide) falls under two different regulations.

💊 Veterinary and Human Pharmaceuticals

💊 Environmental Risk Assessment (ERA) requirement

  • Since 2006: EU law requires an ERA for all new applications for marketing authorization of human and veterinary pharmaceuticals.
  • Guidance documents: Developed for conducting ERA based on two phases.
  • Coordinated by: European Medicines Agency (EMA).

🧪 Two-phase ERA structure

Phase 1: Estimates environmental exposure to the drug substance. Based on an action limit, the assessment may be terminated.

Phase 2: Obtains and assesses information about fate and effects in the environment. A base set, including ecotoxicity data, is required.

⚖️ Risk-benefit analysis differences

| Product type | Risk-benefit analysis |
| --- | --- |
| Veterinary medicines | ERA is part of the risk-benefit analysis; positive therapeutic effects are weighed against environmental risks |
| Human medicines | Environmental concerns are excluded from the risk-benefit analysis |

Why the difference: Practical and ethical reasons make it difficult to restrict human medicines based on environmental risk, whereas veterinary medicine use can be constrained more easily.

🔧 Harmonization of testing

🔧 Why harmonization matters

  • Variability problem: Test outcomes may vary depending on conditions (temperature, test medium, light).
  • Goals: Standardize test conditions, harmonize procedures between agencies and countries, avoid duplication of testing, and create a more efficient and effective testing system.

🌍 OECD role

The Organization for Economic Co-operation and Development (OECD) assists member governments by:

  • Developing harmonized guidelines to test and assess chemical risks.
  • Establishing a system of mutual acceptance of chemical safety data among OECD countries.
  • Developing Principles of Good Laboratory Practice (GLP) to ensure study quality, rigor, and verifiability.
  • Facilitating development of new tools (e.g., OECD QSAR toolbox) to obtain more safety information while reducing costs, time, and animal testing.

Example: An ecotoxicity test conducted in Germany following OECD guidelines is accepted by authorities in France and the Netherlands without re-testing.

🏛️ Regulatory agencies

🏛️ Three major EU agencies

| Agency | Acronym | Responsibilities |
| --- | --- | --- |
| European Chemicals Agency | ECHA | Coordinates REACH and the Biocidal Products Regulation (BPR) |
| European Food Safety Authority | EFSA | Coordinates the EU regulation on Plant Protection Products (PPPs) |
| European Medicines Agency | EMA | Responsible for scientific evaluation, supervision, and safety monitoring of medicines in the EU |

Don't confuse: ECHA handles industrial chemicals and biocides; EFSA handles agricultural pesticides; EMA handles pharmaceuticals.

❓ Key questions to consider

❓ Is registration always required?

  • For REACH: A chemical producer in the EU is not allowed to put a new chemical on the market without registration or authorization if it is produced or imported at ≥ 1 tonne/year (unless exempted).

❓ What determines minimum information under REACH?

  • Tonnage: The amount of minimum information required depends on the quantity of the substance manufactured or imported.
  • Rationale: Higher production volumes mean greater potential exposure and risk, justifying more extensive data requirements.

❓ Is CLP based on hazard, exposure, or risk?

  • Hazard: The CLP regulation is based on the hazard of chemicals (intrinsic properties), not exposure or risk.
  • It identifies and labels what the chemical can do, not whether it will cause harm under actual use conditions.

Risk management and risk communication

6.6. Risk management and risk communication

🧭 Overview

🧠 One-sentence thesis

Chemical risk management is a multi-stakeholder process that balances conflicting interests between those who benefit from chemicals and those affected by their risks, increasingly moving from linear expert-driven approaches toward inclusive frameworks that involve stakeholders throughout.

📌 Key points (3–5)

  • Stakeholder conflicts: Producers and consumers benefit from chemicals, while affected populations (workers, public, ecosystems) bear the risks—often with incongruity between beneficiaries and those harmed (e.g., future generations, downstream communities).
  • Linear vs. inclusive approaches: Traditional risk management follows a linear sequence (problem definition → scientific assessment → measures → communication), assuming risk is objectively measurable; newer frameworks like IRGC involve stakeholders throughout the process.
  • Policy principles guide decisions: ALARA (reduce pollution when feasible), Polluter Pays (charge polluters), Precautionary Principle (act without full certainty when legitimate concern exists), and No Data No Market (producers must prove safety).
  • Common confusion: Risk is not purely objective—scientists must make subjective choices about endpoints, uncertainty factors, and acceptable effects, which are implicit political decisions.
  • Why it matters: Effective risk management requires understanding that risk questions often have no single scientific answer and that stakeholder values shape what counts as acceptable risk.

🎭 Stakeholders and conflicting interests

🎭 Who is involved

The DPSIR chain (Drivers–Pressures–State–Impacts–Responses) illustrates the different groups with stakes in chemical issues:

| Stakeholder group | Interest | Role in DPSIR |
| --- | --- | --- |
| Consumers | Use products containing chemicals | Benefit from Drivers (production/use) |
| Producers & retailers | Profit from production and sales | Drive chemical production and use |
| Workers | Occupational health | Affected by Pressures (exposure) |
| General public | Personal and family health | Affected by Impacts |
| Environmental advocates | Ecosystem health | Concerned about State and Impacts |
| Government | Mediate conflicts, set rules | Define Responses |
| Scientists | Study risks, develop interventions | Assess State/Impacts, propose Responses |

⚖️ The conflict of interests

  • Overlap and incongruity: Some stakeholders are both beneficiaries and affected parties (e.g., consumers whose health may be harmed by the products they use).
  • Temporal and spatial mismatches:
    • Future generations face pollution problems caused by current generations.
    • Downstream populations suffer from pollution caused upstream.
  • When conflict arises: If pollution cannot be easily avoided, those affected demand action from those benefitting—producers and consumers do not aim to pollute, but action is not always simple.
  • Government as mediator: The government defines rules that all stakeholders must follow, acting as the key mediating agency.

Example: A community downstream receives contaminated water from industrial discharge upstream—the beneficiaries (factory owners, workers, consumers of products) are different from those bearing the health and environmental costs.

🔄 Two approaches to risk management

🔄 Linear risk management (conventional)

The conventional way arranges risk management as a linear process with distinct sequential steps.

The four steps:

  1. Recognition and definition of the risk problem—led by government (politicians, policy makers), sometimes after stakeholder pressure.
  2. Establishment of nature and extent of the risk by scientists, often working in isolation.
  3. Identification and selection of risk reduction measures (if needed)—led by government, often collaborating with primary stakeholders (producers and consumers).
  4. Communication of the risk management strategy to stakeholders and the general public—typically unidirectional flow of (primarily scientific) information from government to stakeholders.

Underlying belief:

  • Chemical risk is a strictly defined concept that can be objectively measured and quantified by scientific methods.
  • Reflected in regulations like European Water Framework Directive environmental quality standards (EQSs), workplace exposure standards, and air quality regulations.

🌐 IRGC framework (inclusive approach)

The International Risk Governance Council framework provides guidance for early identification and handling of risks, involving multiple stakeholders throughout the process.

Why the shift?

  • The belief that risk is strictly objective is increasingly challenged.
  • Scientists and risk assessors must make subjective assumptions even when trying to be objective.
  • Endpoints, unacceptable effects, and uncertainty factors are controversial topics based on implicit political choices.
  • Risk questions often have no single scientific answer, or answers are multiple and contestable.

Key characteristics:

  • Recommends an inclusive approach to frame, assess, evaluate, manage, and communicate risk issues.
  • Addresses complexity, uncertainty, and ambiguity.
  • Generic framework that can be tailored to various risks and organizations.
  • Comprises four interlinked elements plus three cross-cutting aspects.
  • Pre-assessment phase involves identification and framing, leading to early warning and preparations, with relevant actors and stakeholder groups involved from the start.

Don't confuse: The IRGC framework does not replace scientific assessment—it recognizes that stakeholder values and political choices are inherent in defining and managing risk, rather than pretending these are purely technical decisions.

Example: Mixture effects in chemicals were ignored for decades but are now increasingly included—this shows risk assessment evolves based on changing scientific and societal priorities, not just objective measurement.

📜 Policy principles in risk management

📜 Four key principles

Risk management is not based solely on quantitative risk estimates—various policy principles guide decisions, applied alone or in combination.

🔽 ALARA (As Low As Reasonably Achievable)

Environmental pollution should be avoided whenever feasible.

  • Applied regardless of the calculated risk level.
  • Plays an important role in environmental licensing of production facilities.
  • Emphasizes pollution prevention as a default stance.

💰 Polluter Pays Principle (PPP)

The polluter should pay for polluting the environment.

  • Forms the basis for taxation of polluting activities (e.g., waste disposal, sewer discharges).
  • Ideal use: Taxes should fund prevention and reduction of contamination (e.g., building and maintaining wastewater treatment plants).
  • Reality: This ideal is not always achieved in practice.

🛡️ Precautionary Principle

Scientific certainty is not required before taking preventive measures.

What it means:

  • A way to deal with uncertainty in risk assessment.
  • Enables policy makers to take preventive action in the absence of absolute certainty.
  • Requirement: There must be a legitimate reason for concern—not just any hypothetical worry.

The challenge:

  • Probably the most debated principle in environmental regulation.
  • Operationalization is complicated: when is there "sufficient reason for concern to take preventive action"?
  • Ultimately a normative choice that strongly depends on the stakes and interests involved.

Don't confuse: The precautionary principle does not mean acting on any fear without evidence—it requires legitimate concern, but does not demand absolute proof of harm before acting.

🚫 No data, no market

Producers must prove that chemicals and products they put on the market are safe and sustainable.

  • One of the fundamental principles underlying Europe's REACH chemical legislation.
  • Related to Polluter Pays Principle: puts the burden of proof (and associated costs) on producers or importers.
  • Increasingly used in environmental legislation.
  • Shifts responsibility from regulators proving harm to producers proving safety.

Example: Under REACH, a company importing a new chemical must provide safety data before it can be sold—the company cannot market the chemical and wait for regulators to prove it is dangerous.

🔬 The limits of objectivity in risk assessment

🔬 Why risk is not purely objective

Even when scientists try to assess risk objectively, they must make subjective assumptions:

  • Endpoints: Which health or environmental outcomes matter most?
  • Unacceptable effects: What level of harm is too much?
  • Uncertainty factors: How much safety margin is needed when extrapolating from animal studies or limited data?

These are controversial topics based on implicit political choices, not purely scientific determinations.

🔍 Examples of subjectivity

  • Mixture effects: Ignored for decades, now increasingly included—this reflects changing priorities, not just better science.
  • Stakeholder values: Not all stakeholders value risks the same way (covered in section 6.7 on risk perception).
  • Multiple answers: Risk questions often have no single scientific answer, or answers are multiple and contestable.

Don't confuse: Acknowledging subjectivity does not mean risk assessment is arbitrary—it means recognizing that scientific assessment operates within a framework of values and choices that should be made transparent and inclusive.


Risk perception

6.7. Risk perception

🧭 Overview

🧠 One-sentence thesis

Risk perception is determined far more by contextual factors like control, trust, and voluntariness than by the actual calculated health risk, explaining why people fear low-risk situations and accept high-risk activities.

📌 Key points (3–5)

  • The "first law" of risk perception: People fear things that do not make them sick and get sick from things they do not fear—annual health risk has only limited influence on perception.
  • What drives perception: Factors like voluntary choice, control, trust in authorities, media attention, and perceived openness matter more than the numerical risk.
  • Common confusion: Passive smoking vs. active smoking—passive smoking is perceived as ~100 times more objectionable despite being 100 times lower in actual risk, because most perception factors (control, voluntariness) end up on the "dangerous" side.
  • Experts are not immune: Even experts react like laypeople in their own daily lives, accepting high risks (traffic, holidays, extreme sports) when advantages or necessity are present.
  • Context is everything: Risks are never judged in isolation; people evaluate the entire situation or activity, of which the numerical risk is often only a small part.

🎯 The core paradox

🎯 The "first law" of risk perception

"People fear things that do not make them sick and get sick from things they do not fear."

  • This is the foundational observation of risk perception research.
  • Example from the excerpt: People worry intensely about newly discovered soil pollution (very low actual health risk), yet drive diesel cars and smoke cigarettes to relieve stress (much higher actual risks).
  • The explanation: annual risk of getting sick, being injured, or dying has only limited influence on how people perceive a risk.

🧩 The simplified model

The excerpt presents a basic model (Figure 1) with:

  • Top: annual health risks (the calculated probability).
  • Middle: a list of factors that determine perception (each can land on the "safe" or "dangerous" side).
  • Bottom: the resulting perception (safe or dangerous).

Key insight: Most of the middle factors matter more than the top factor (the actual risk number).

🔍 What really drives perception

🔍 Voluntariness and control

  • Voluntary choice: Did the person choose to be exposed?
  • Control: Does the person feel they can manage or stop the exposure?
  • These two factors are among the most powerful.
  • Example: Smokers light their own cigarettes and believe they can quit anytime (even if addicted, they overestimate control)—so most factors end up on the "safe" side despite high actual risk.

🏢 Trust and openness

  • Trust in authorities and companies: Low trust pushes perception toward "dangerous."
  • Perceived openness: If people suspect information is being withheld, worry increases.
  • Example (soil pollution): Authorities say "no cause for alarm," but low trust makes people suspect they care more about money than health—so people worry more, not less.

📰 Media attention and controversy

  • Local media coverage and controversy amplify perception of danger.
  • Example: A newly discovered soil pollution site gets media attention, especially if there is disagreement—this reinforces the "dangerous" perception.

🤝 Dependence on others

  • If people must rely on authorities or companies for information or remediation, and they do not trust them, perception shifts toward danger.
  • Example: Residents have no control over soil sanitation and depend on untrusted authorities—this makes the situation feel more dangerous.

🚬 Case study: Soil pollution vs. smoking

🏡 Why people fear soil pollution (low actual risk)

| Factor | Where it lands | Why |
| --- | --- | --- |
| Actual risk | Safe side | Health effects are often very small |
| Voluntary | Dangerous side | People did not choose to live on polluted soil |
| Control | Dangerous side | No control over sanitation; depend on authorities |
| Trust | Dangerous side | Low trust in authorities and companies |
| Openness | Dangerous side | Suspicion that information is withheld |
| Media | Dangerous side | Local media attention and controversy |

Result: Nearly all factors end up on the dangerous side → high worry, despite low actual risk.

🚬 Why smokers are not afraid (high actual risk)

| Factor | Where it lands | Why |
| --- | --- | --- |
| Actual risk | Dangerous side | Everybody knows smoking is dangerous |
| Voluntary | Safe side | People light their own cigarettes |
| Control | Safe side | Smokers think they can quit anytime (optimistic bias) |
| Dependence | Safe side | No need to rely on others for information or action |
| Openness | Safe side | No information is withheld about smoking |

Result: Most factors end up on the safe side → low worry, despite high actual risk.

Don't confuse: The excerpt notes that if smokers learn cigarette companies purposely make cigarettes more addictive, they may decide to quit—not because of health effects, but because the company "takes over control," which people greatly resent.

🌫️ Passive smoking: perception vs. reality

  • Actual risk: Passive smoking is ~100 times smaller than active smoking.
  • Perception: Passive smoking is perceived as ~100 times more objectionable and worrisome.
  • Why: Most factors (voluntary, control) end up on the dangerous side for passive smoking—people did not choose to inhale others' smoke and have no control over it.

👨‍🔬 Experts and laypeople

👨‍🔬 Experts react like laypeople at home

  • Many people are surprised that calculated health risks influence perception so little.
  • But we experience it in our own daily lives: All of us perform risky activities because they are necessary, come with advantages, or are fun.
  • Examples from the excerpt:
    • Daily traffic: annual risk of dying far higher than 1 in a million.
    • Holidays: multiple risks (transport, microbes, robbery, even divorce).
    • Thrill-seeking: diving, mountain climbing, parachute jumping—often without even knowing the fatality rates.
  • High-stakes risks: People knowingly risk their life to improve it (e.g., immigrants crossing the Mediterranean) or for a higher cause (e.g., soldiers at war; Churchill: "I have nothing to offer but blood, toil, tears and sweat").

🎰 The absurd lottery example

  • Scenario: Government starts a lottery with a 1-in-a-billion chance of winning; every citizen must play; tickets are free; the only prize is a public execution broadcast live.
  • Point: No matter how small the risk, it can be totally unacceptable and nonsensical if the context is wrong.
  • Application: When government tells people to accept a small risk because they accept larger risks elsewhere, people feel they have been given a ticket in this absurd lottery—this is how residents feel when told the soil pollution risk is extremely small and they should quit smoking instead.

🌍 Context and the rich environment

🌍 All risks have a context

Risks always occur in a context. A risk is always part of a situation or activity which has many more characteristics than only the chance of getting sick, being injured, or dying.

  • We do not judge risks in isolation; we judge situations and activities of which the risk is often only a small part.
  • Risk perception occurs in a rich environment: After 50 years of research, much has been discovered, but predicting how angry or afraid people will be in a new, unknown situation is still a daunting task.

🎁 The role of advantages

  • The excerpt adds "advantages" as another factor in the model.
  • All of us perform risky activities because they are necessary, come with advantages, or are fun.
  • Example: We accept the high risk of daily traffic because it is necessary or advantageous; we go on holidays despite multiple risks because of the benefits and enjoyment.

🔄 The interplay of factors

  • The model is a simplification; research has identified many more factors that are often connected to each other.
  • Example (soil pollution): The factors (voluntary, control, trust, openness, media, dependence) are interconnected—low trust amplifies suspicion of withheld information, which in turn increases worry.

🧠 Implications for communication

🧠 What not to say

  • "No cause for alarm": This phrase will only make people worry more, because it signals that authorities are not taking concerns seriously.
  • "The risk is extremely small, so quit smoking instead": This makes people feel they have been given a ticket in an absurd lottery—it ignores the context and the factors that matter to them.

🧠 What to recognize

  • Risk perception is not irrational: It reflects a rich evaluation of the entire situation, not just the numerical risk.
  • Context matters: Voluntariness, control, trust, openness, and advantages are legitimate concerns that shape perception.
  • Experts are not exempt: Even those who calculate risks accept high-risk activities in their own lives when the context is favorable.