Measurement and Instrumentation An Introduction to Concepts and Methods

Introduction to Measurement and Instrumentation

1.1 Introduction

🧭 Overview

🧠 One-sentence thesis

A measurement is meaningful only when it includes both magnitude and units, and understanding this fundamental requirement—along with the historical standardization of units—is essential for all engineering disciplines that need to gain information about physical quantities or processes.

📌 Key points (3–5)

  • Why we measure: to gain information about a thing (device, process, or physical quantity), with goals that influence how measurements are made.
  • Two fundamental components: every measurement requires both magnitude and units; neither alone is sufficient.
  • Historical evolution of units: units were not standardized until the Age of Enlightenment (late 18th–early 19th century), when French philosophers recognized the need for fixed, verifiable standards.
  • Common confusion: magnitude vs. units—saying "it weighs two" or "it weighs pounds" is incomplete; only "it weighs two pounds" is a useful measurement.
  • Measurement systems: can be analog (no computer/processor) or digital (requires processor); the thing being measured is called the measurand.

🎯 Why we measure things

🎯 The purpose of measurement

  • All engineering disciplines need to measure things at times.
  • The fundamental reason: to gain information about the thing being measured.
  • What we do with that information varies widely.
  • Our goals in acquiring information influence how we make measurements.

🔍 What we measure

  • Physical aspects: length, weight, pressure, vibration, and many other physical properties.
  • Less concrete properties: for processes, we might measure things like the number of people entering a store during a certain period.
  • The thing being measured can be a device, a process, or a physical quantity.

📏 The two fundamental components of measurement

📏 Magnitude and units

Every measurement consists of two fundamental components that are both necessary to achieve the status of being a measurement: magnitude and units.

  • Both components are equally important.
  • Neither provides enough information about the measurement on its own.

❌ Why incomplete measurements fail

The excerpt uses a book-weighing example to illustrate:

| Statement | What's missing | Why it's not useful |
| --- | --- | --- |
| "It weighs two" | Units | No context for the number |
| "The book weighs pounds" | Magnitude | No specific value |
| "The book weighs two pounds" | Nothing | Complete measurement ✓ |

Don't confuse: A number alone or a unit alone with a measurement—both are required.

Example: Saying "it weighs two" does not convey a satisfactory amount of information; saying "the book weighs pounds" is also not useful; only "the book weighs two pounds" is a useful measurement.
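The completeness rule can be expressed as a tiny validity check. This is a hypothetical sketch for illustration, not from the text:

```python
from typing import NamedTuple

class Measurement(NamedTuple):
    """A complete measurement carries both a magnitude and a unit."""
    magnitude: float
    unit: str

def report(m: Measurement) -> str:
    # A bare number ("it weighs two") or a bare unit ("it weighs pounds")
    # is rejected as incomplete.
    if not m.unit:
        raise ValueError("a magnitude without units is not a measurement")
    return f"{m.magnitude:g} {m.unit}"

print(report(Measurement(2, "pounds")))  # → 2 pounds
```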

🏛️ Historical development of units

🏛️ Ancient and pre-modern units

  • The concept of units has existed as long as the concept of enumeration.
  • Centuries ago, people understood descriptive phrases with enumeration (3 cows, 2 horses, 5 buckets).
  • However, specific engineering units (length, weight, pressure) were not well developed.
  • Biblical references: used cubits for length.
  • Ancient Greeks: used the stade to denote units of length.

💡 The Age of Enlightenment standardization

  • We owe the modern concept of fixed, verifiable units to the Age of Enlightenment.
  • When: late 18th and early 19th century.
  • Who: French philosophers.
  • Why they standardized:
    • So scientific experiments could be replicated for verification.
    • So bridges could be more accurately built.
    • So civil infrastructure could be made more uniform (ensuring more efficient use of materials).

🎯 Modern standards (21st century)

  • We now have clear definitions of what one second is, what one foot is, what one kilogram is, etc.
  • These units are defined by standards.
  • The ability of all measuring devices to measure is defined with respect to these standards.

Main concept: A measurement system must provide both a magnitude and unit of the physical quantity being measured for it to be meaningful.

🔧 Measurement systems

🔧 What is a measurement system

A measurement system is what we use to make measurements. The thing being measured is called the measurand.

  • Can be as simple as a ruler.
  • Can be as complex as a scanning electron microscope.

🔀 Two broad categories

| Category | Definition | Requirement |
| --- | --- | --- |
| Analog | Does not require a computer or processor to perform the measurement | No processor needed |
| Digital | Requires some type of computer or processor | Processor required |

Note: The excerpt mentions that measurement accuracy and error will be covered in section 1.3, but those topics are not included in this excerpt.

1.2 Measurement Systems

🧭 Overview

🧠 One-sentence thesis

A measurement system must provide both magnitude and units to be meaningful, and all such systems inevitably contain measurement error that can be characterized, minimized through calibration, and understood through the concepts of precision, accuracy, and uncertainty.

📌 Key points (3–5)

  • What a measurement system does: uses a sensor to interact with the measurand (the thing being measured) and produces a signal representing the physical quantity, either through analog or digital means.
  • Measurement error is unavoidable: all systems have error—the difference between measured and true value—which falls into two categories: deterministic (systematic, repeatable bias) and random (unpredictable, statistical variation).
  • Precision vs accuracy (common confusion): precision is repeatability (small variance between measurements), while accuracy is how close the mean is to the true value; a system can be precise but inaccurate, or accurate but imprecise.
  • Calibration against known standards: measurement systems must be tested against known reference values to assess and correct for bias and to establish their valid range and linearity.
  • Dynamic response matters: sensors have finite response time and bandwidth; a system must respond faster than the measurand changes, or the measurement will lag or distort.

📐 Anatomy of a measurement system

📐 Core components

A measurement system consists of several elements working together:

  • Sensor: the component that physically interacts with the measurand.
    • Example: a ruler placed next to an object, a thermocouple in contact with a surface.
  • Signal conditioning: optional modification or amplification of the signal to improve measurement quality.
    • Example: a spring scale uses an internal mechanism to magnify a small spring displacement into a larger dial deflection.
  • Readout or digitization: the final stage that presents the measurement.

🔌 Analog vs digital systems

| Feature | Analog | Digital |
| --- | --- | --- |
| Processing | No computer or processor required | Requires computing power to digitize |
| Sensor output | Direct physical representation | Transduced into electrical signal |
| Conversion | Human or mechanical readout | Analog-to-digital (A/D) converter assigns binary values to voltage/current |
| Signal conditioning | Amplification, analog filtering, Wheatstone Bridge | Also includes digital filtering after digitization |

Transduction: conversion of a physical feature from one form (e.g., temperature, pressure) into another (typically an electrical signal).

Analog-to-digital (A/D) converter: a device that assigns discrete binary values to continuous voltage or current levels.

Don't confuse: the sensor itself with the entire measurement system—the sensor is only the interface; the system includes conditioning, conversion, and readout.

⚠️ Understanding measurement error

⚠️ What error is

Measurement error: the difference between the measured value and the true value, expressed as ε(i) = x_measured(i) − x_truth(i).

  • Cosmic truth is a concept only: we cannot know the true value exactly; if we could, we wouldn't need to measure.
  • A good measurement system can get close enough that the difference has no practical effect on design or decisions.
    • Example: if a shaft tolerance is ±0.001 inches and the measuring device is accurate to 0.000001 inches, the device is "as good as cosmic truth" for that application.

🎲 Deterministic vs random error

Deterministic (systematic) errors:

  • Have static or known time-dependent behavior.
  • Show up as bias or DC offset in the data.
  • Are repeatable from one trial to the next.
  • Can be inferred and removed through calibration against a known standard.
    • Example: a multimeter consistently reads 3.8V when measuring a certified 3.3V source → deterministic error of 0.5V.

Random errors:

  • Cannot be predicted at any given time but fall within a statistical range.
  • Typically follow a Gaussian (normal) distribution.
  • Are not repeatable from trial to trial.
  • Defined mathematically as the difference between a measured value and the average of all measured values: ε_n[t_i] = x_n[t_i] − x_ave[t_i].
  • Sources: electrical noise coupled into the signal path, thermal noise in resistors, capacitors, and circuit traces.

Don't confuse: random error is assessed after removing the mean (bias); deterministic error is the difference between the mean and the true value.

📊 Repeated measurements and averaging

  • For static measurands: take repeated measurements at fixed intervals (e.g., measuring a 3.3V reference voltage once per second for an hour).
  • For dynamic but repeatable measurands: take measurements at the same time point within each cycle across multiple trials.
    • Example: an ECG signal repeats thousands of times per day; error at time t_i is found by comparing trial n's value at t_i to the average of all trials at t_i.
  • Single measurements are insufficient: only repeated measurements allow statistical assessment of system behavior.
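These two error types can be separated numerically from repeated readings. A minimal sketch in Python, using hypothetical multimeter readings of the certified 3.3 V source from the example above:

```python
import statistics

def decompose_error(readings, true_value):
    """Split measurement error into a deterministic bias (mean minus truth)
    and per-trial random errors (each reading minus the mean)."""
    mean = statistics.fmean(readings)
    bias = mean - true_value                      # deterministic (systematic) error
    random_errors = [x - mean for x in readings]  # random error, zero-mean by construction
    return bias, random_errors

# Hypothetical repeated readings of a certified 3.3 V reference:
readings = [3.79, 3.81, 3.80, 3.82, 3.78]
bias, rand = decompose_error(readings, true_value=3.3)
print(round(bias, 2))  # → 0.5 (the repeatable offset that calibration can remove)
```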

🎯 Precision, accuracy, and uncertainty

🎯 Precision: repeatability

Precision: the repeatability of a measurement; how small the variance is between individual measurements and their mean.

  • High precision means measurements cluster tightly together.
    • Example: four caliper readings of 3.121", 3.122", 3.122", 3.123" → very small variance → good precision.
  • Precision does not tell you if the measurements are close to the true value.

🎯 Accuracy: closeness to truth

Accuracy: how close the average (mean) of the readings is to the true value.

  • Assessed using percent error: % Error = 100 · (Measured Value − True Value) / True Value.
    • Example: if the true shaft diameter is 3.000" and the mean of four readings is 2.999", the accuracy is high (only 0.001" off).
  • A system can be accurate but imprecise (large spread around the true value) or precise but inaccurate (tight cluster far from the true value).

Target analogy:

  • Precise but inaccurate: tight grouping of hits, but off-center.
  • Accurate but imprecise: hits scattered widely, but centered on the target.

Don't confuse: precision with accuracy—precision is about consistency, accuracy is about correctness.
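Both quantities can be computed from the caliper readings above; the true diameter here is an assumed value for illustration:

```python
import statistics

def percent_error(measured_mean, true_value):
    """% Error = 100 * (measured - true) / true."""
    return 100.0 * (measured_mean - true_value) / true_value

readings = [3.121, 3.122, 3.122, 3.123]   # the four caliper readings
mean = statistics.fmean(readings)
precision = statistics.stdev(readings)    # spread about the mean: ~0.0008"
accuracy = percent_error(mean, 3.122)     # assuming a true diameter of 3.122"
```

A small `precision` paired with a large `accuracy` error would indicate a tight cluster far from the true value: precise but inaccurate.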

🌐 Uncertainty: the range of possible values

Uncertainty: the range or envelope of values that the measurement system may return; the true value is assumed to exist somewhere within this space.

  • Uncertainty is multidimensional.
    • Example: GPS position has uncertainty in latitude, longitude, and altitude; the altitude uncertainty is usually larger, so the volume is ellipsoidal.
  • Error vs uncertainty: error is the difference between a single measurement and the truth; uncertainty is the range in which we expect the error to exist.

🔧 Improving precision and accuracy

  • Inaccuracy can be improved if the bias is determined (via calibration) and removed.
  • Poor precision can be overcome by taking multiple trials; the number of trials needed depends on how imprecise the system is.
  • Both require testing against a known standard.

🔬 Calibration and standards

🔬 Why calibrate

  • To determine whether a measurement system is precise, accurate, both, or neither.
  • To ensure correctness when it matters.
    • Example: a multimeter showing 117 Vac is "good enough" to check if an outlet is live, but medical imaging pixel intensity must be highly accurate to distinguish malignant from benign tumors.

🔬 How calibration works

Calibration: using the measurement system to measure a known standard with a verified pedigree and value.

  • Scales are calibrated with known weights; length devices with known length standards; medical monitors with luminance meters.
  • Many measurements are taken against multiple known standards, and a best-fit line is often used because actual measurements have some error relative to the expected value.
    • Example: to calibrate a chemical concentration sensor, use tightly controlled solutions of known concentration and plot the results; fit a line to the trend.

📈 Goodness of fit: R² value

R² value: a measure of how well the best-fit line matches the actual data points; ranges from 0 (no fit) to 1 (perfect fit).

  • Sometimes a linear fit cannot pass through all data within their variance, indicating nonlinearity in the measuring device.
  • Design goal: minimize or eliminate nonlinearity.

📏 Accuracy limits and units

  • Even after calibration, a measurement can only be guaranteed to a certain level.
    • Example: a ruler marked at 1/32" intervals can measure to within ±1/64" (half the sub-marking interval).
  • Units are critical: all parties must use the same unit system (MKS: meter-kilogram-second; FPS: foot-pound-second).
    • Example: the Mars Climate Orbiter was lost in 1999 because JPL software expected MKS units but Lockheed Martin's software supplied values in FPS units; the specification called for MKS, but the mismatch was missed.

Don't confuse: calibration with one-time setup—calibration must be repeated periodically and verified against standards.

📏 Linearity and measurement range

📏 What linearity means

Linearity: whether the accuracy of a measurement is consistent across the entire range of values the device can measure.

  • Most measurement devices have a clearly defined range.
    • Example: a meter stick measures from a few centimeters to one meter.
  • The question: is the device just as accurate at the low end as at the high end?
    • Example: does a scale weigh a 5-pound item as accurately as a 100-pound item?

📏 Nonlinearity in practice

  • Cable-driven speedometers (pre-late 20th century cars) were only linear over a certain range.
  • Manufacturers designed speedometers to read well into the hundreds of mph, even if the car couldn't reach that speed, to increase accuracy at typical driving speeds (where the scale was more linear).
  • Significant nonlinearities often occur at the extremes of the measurement range.

Don't confuse: a device's maximum range with its linear range—just because a device can measure up to a value doesn't mean it's accurate there.

⏱️ Static vs dynamic measurements

⏱️ Static measurements

Static measurement: any instantaneous measurement is representative of the measurand, which is not changing with time.

  • The system is at steady state: the signal has settled to its final value.
  • Example: measuring a fixed voltage reference.

⏱️ Dynamic measurements

Dynamic measurement: the measurand is changing with time, and the system response adjusts over time as well.

  • The system response may lag the actual physical change.
    • Example: a thermocouple's voltage takes finite time to adjust to ambient temperature because the metals have temperature-related characteristics that prevent instantaneous change.

⏱️ Response time and bandwidth

Response time: the time it takes for a measurement system to respond to a changing measurand.

  • Choose a sensor with response time faster than the expected dynamic changes.
  • Bandwidth: the frequency range over which the sensor can accurately respond (e.g., DC to 100 Hz).
    • Frequency and time are inversely related: higher frequency changes require faster response time.
    • Example: an accelerometer with bandwidth DC to 100 Hz cannot accurately measure vibrations above 100 Hz.

⏱️ Reaching steady state

  • Electronic components often produce decaying exponential signals (e^(−αt)) that approach zero as steady state is reached.
  • The signal may oscillate around the final value a few cycles before settling.
  • The first few milliseconds of readings should be disregarded as invalid because steady state has not yet been reached.
    • Example: if a sensor detects a near-instantaneous change, the measurement system's front-end electronics respond more slowly; wait for the output to settle.

⏱️ Slew rate in operational amplifiers

Slew rate (SR): the maximum rate at which an operational amplifier's output voltage can change, typically defined as SR = ΔV / Δt.

Full power bandwidth (FPB): the maximum frequency at which a sinusoidal input will not cause slew rate distortion, defined as FPB = SR / (2π · V_o,max).

  • General-purpose op-amps typically have slew rates on the order of 1 V/μs or less.
  • If the input voltage changes faster than the slew rate, the output will be distorted because the op-amp cannot change voltage fast enough.
    • Example: a 10 kHz sinusoid with 4V amplitude input into an LM324 op-amp (SR ≈ 0.48 V/μs) will not distort because the fastest change (at zero crossing) is ~0.25V per μs, which is less than 0.48 V/μs. Maximum amplitude before distortion at 10 kHz is ~7.64V.
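The distortion check in this example is just a slope comparison. A sketch of the arithmetic, using the LM324 values from above:

```python
import math

SR = 0.48e6  # LM324 slew rate: 0.48 V/us expressed in V/s

def max_slope(amplitude_v, freq_hz):
    """Peak rate of change of A*sin(2*pi*f*t), reached at the zero crossing."""
    return 2 * math.pi * freq_hz * amplitude_v

slope = max_slope(4.0, 10e3)        # ~251,000 V/s, i.e. ~0.25 V/us
distorts = slope > SR               # False: the op-amp can keep up
a_max = SR / (2 * math.pi * 10e3)   # max undistorted amplitude at 10 kHz: ~7.64 V
fpb = SR / (2 * math.pi * a_max)    # full power bandwidth at that amplitude: 10 kHz
```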

Don't confuse: response time (how long to settle) with slew rate (how fast the output can change)—both limit dynamic performance but in different ways.

1.3 Conclusion

🧭 Overview

🧠 One-sentence thesis

This conclusion previews the book's structure, which deepens data acquisition fundamentals, noise and signal conditioning, sensors, statistics, practical projects, and LabVIEW tutorials to build a comprehensive understanding of measurement and instrumentation.

📌 Key points (3–5)

  • Book structure: the remainder covers data acquisition elements (Chapter 2), noise and signal conditioning (Chapter 3), sensor types (Chapter 4), statistics (Chapter 5), and applied projects (Chapters 6–8).
  • Practical emphasis: numerous examples throughout help students internalize and apply theory.
  • LabVIEW integration: a detailed tutorial is provided in the Appendix and should be reviewed before working through projects.
  • How to use the book: projects in Chapters 6–8 apply concepts from Chapters 1–5, useful even for readers not taking the course at OU.

📚 Book roadmap

📚 Core theory chapters (2–5)

The excerpt outlines four foundational chapters:

| Chapter | Topic | What it covers |
| --- | --- | --- |
| 2 | Data acquisition | Fundamental elements of data acquisition |
| 3 | Noise and signal conditioning | Measurement noise and signal conditioning concepts |
| 4 | Sensors | Sensor types and mechanisms |
| 5 | Statistics | Statistics and distributions as they relate to measurements |

  • These chapters build on the introduction (Chapter 1) and go deeper into mentioned concepts plus many more advanced topics.
  • The excerpt emphasizes "thoroughly covered" for Chapter 3, indicating detailed treatment of noise and conditioning.

🛠️ Applied project chapters (6–8)

  • Chapters 6 through 8 present summaries of projects used in the Measurement and Automation course at OU.
  • These projects apply many concepts from Chapters 1–5.
  • The excerpt notes they are useful for gaining deeper understanding "even if you aren't taking the course at OU"—i.e., the projects have standalone educational value.

🖥️ LabVIEW and practical learning

🖥️ LabVIEW tutorial in Appendix

  • LabVIEW is used for the Measurement and Automation course at OU.
  • The excerpt describes it as "a highly recommended software package for measurement systems."
  • A detailed tutorial is provided in the Appendix.
  • Recommended workflow: review the Appendix before going through projects in Chapters 6–8.

🔬 Practical examples throughout

  • The excerpt states that "numerous practical examples are included throughout this book."
  • Purpose: to help students internalize and apply the theory they are learning.
  • Don't confuse: examples are not just illustrations—they are tools for internalizing concepts, not merely demonstrating them.

🔄 Preview of Chapter 2

🔄 Data acquisition concept

The excerpt briefly introduces Chapter 2's opening topic:

Data acquisition: a concept that can be applied in both analog and digital domains.

Three-step process:

  1. Identify a physical phenomenon of interest.
  2. Use a transducer to transform that phenomenon into a measurable physical quantity.
  3. Acquire data (i.e., make measurements) that represents the phenomenon.

🛩️ Bourdon tube example

The excerpt provides a simple analog measurement system example:

  • Physical phenomenon: suction (negative pressure) in an engine's pipes (e.g., in small airplanes).
  • Transducer: Bourdon tube—a soft metal tube that bends slightly under pressure, converting negative pressure to mechanical movement.
  • Measurement: the tube's end connects to a lever (sometimes with gearing) that moves a needle on a dial, showing vacuum level.
  • Key point: this is a strictly analog data acquisition system—no digital conversion occurs; data representing vacuum is displayed directly on the dial.

Example: In older aircraft, a vacuum gauge on the instrument panel uses a Bourdon tube to measure engine vacuum; the tube bends, the lever moves, and the pilot reads the vacuum level from the needle position.

2.1 Data Acquisition

🧭 Overview

🧠 One-sentence thesis

The bouncy-toy project demonstrates that successful data acquisition requires choosing appropriate hardware (sensor and DAQ device), configuring sampling parameters (rate, dynamic range, connection type), and then analyzing the captured data to extract meaningful physical relationships.

📌 Key points (3–5)

  • The measurement goal: acquire accelerometer voltage data to determine motion equations (position, velocity, acceleration over time) for a mass-spring-damper system.
  • Hardware already chosen: an accelerometer (0–5 V single-ended output with DC offset) and a National Instruments USB-6211 DAQ device are pre-selected; students focus on software and analysis.
  • Critical configuration decisions: sampling rate and dynamic range must be determined based on fundamental concepts; connection configuration (single-ended vs differential) must be set correctly.
  • Common confusion: engineering courses teach concepts (sampling theory, dynamic range), not software products—programming constructs (loops, conditionals) are the same across languages, so learning a new tool is about vocabulary, not fundamentals.
  • The full workflow: sensor selection → DAQ device selection → system assembly → software development → data analysis (steps 1–3 are pre-completed in this project; students do steps 4–5).

🎯 The bouncy-toy measurement system

🧸 Physical system: mass-spring-damper

  • The toy consists of:
    • A mass (the toy body plus attached accelerometer)
    • A spring with spring constant K
    • Damping from the non-ideal spring and large paddle-shaped rotor blades
  • A tape measure allows the user to record initial displacement as an input for determining the mathematical expression for position x(t).
  • This is a classic second-order dynamic system used to teach vibration and control concepts.

📡 The accelerometer sensor

The accelerometer outputs a single-ended voltage ranging between 0 volts and +5 volts.

  • Unknown sensitivity: the relationship between voltage and actual acceleration is not given.
  • DC offset present: because the 0–5 V range must represent both positive and negative acceleration, expect a non-zero voltage when the system is idle (not moving).
  • What the sensor provides: a time series of voltages that represent acceleration, but the conversion factor is unknown.

Don't confuse: the raw voltage is not acceleration—it is a proxy that must be calibrated or analyzed to extract motion equations.

🎯 Project goal

Determine the motion equations for the device:

  • x(t): position as a function of time
  • v(t): velocity as a function of time
  • a(t): acceleration as a function of time

The available data are voltage samples over time; the challenge is to infer these three motion equations from voltage data alone.

🔧 The five-step workflow

📋 General project steps

The excerpt outlines a typical data acquisition project:

| Step | Task | What it involves |
| --- | --- | --- |
| 1 | Choose the sensor | Consider expected g-load, bandwidth (oscillation speed), electrical output type |
| 2 | Choose the DAQ device | Single-ended or differential capability, dynamic range capability |
| 3 | Assemble the system | Sensor wiring and connection to the DAQ device |
| 4 | Produce acquisition software | Set dynamic range, sample rate, format, duration |
| 5 | Analyze the data | Use software to determine motion equations |

🎓 What students must do

  • Steps 1–3 are pre-completed: the accelerometer and National Instruments USB-6211 DAQ device are already chosen and connected.
  • Students complete steps 4 and 5: write the data acquisition software and analyze the data.
  • Step 5 requires Chapter 2 concepts: students must understand fundamental principles to determine sampling rate and dynamic range.

Example: to set the sampling rate, the student must apply Nyquist theory (from Chapter 2) to the expected oscillation frequency of the bouncy toy.
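As a sketch of that reasoning (the spring constant and mass below are hypothetical, not given in the text):

```python
import math

K = 40.0   # hypothetical spring constant, N/m
m = 0.25   # hypothetical mass of toy + accelerometer, kg

f_natural = math.sqrt(K / m) / (2 * math.pi)  # expected oscillation frequency, ~2 Hz
f_nyquist = 2 * f_natural                     # theoretical minimum sampling rate
f_sample = 10 * f_natural                     # practical choice, well above Nyquist
```

Sampling at exactly the Nyquist rate leaves no margin; a factor of 5 to 10 above the highest expected frequency gives a much cleaner reconstruction of the waveform.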

⚙️ Critical configuration decisions

📊 Sampling rate

  • The question: "How does one go about this?"
  • The excerpt emphasizes that the student must determine the sampling rate, not guess.
  • This requires understanding the system's bandwidth (how fast it oscillates) and applying fundamental sampling theory.

📏 Dynamic range

  • The question: "How does one determine that?"
  • Dynamic range refers to the span of voltage values the DAQ can distinguish.
  • The accelerometer outputs 0–5 V; the DAQ must be configured to capture this range without clipping or losing resolution.

🔌 Connection configuration

  • Must be set correctly: single-ended or differential.
  • The accelerometer is described as single-ended (voltage relative to ground), so the DAQ configuration must match.

Don't confuse: these are not arbitrary settings—they must match the sensor's electrical characteristics and the system's dynamic behavior.

💻 Programming and software philosophy

🛠️ LabVIEW as a tool, not the subject

Engineering courses do not award ABET-accredited engineering credit for learning a commercial software product.

  • The project uses LabVIEW, and Appendix A.2 provides a basic tutorial.
  • Core principle: the book is not a LabVIEW training manual.
  • Engineering students learn programming fundamentals in other courses; the task here is to apply those fundamentals in a new language.

🌐 Programming constructs are universal

  • A WHILE loop is a WHILE loop.
  • A FOR loop is a FOR loop.
  • Conditional statements (if-then) work the same way across languages.
  • Analogy from the excerpt: like a native English speaker who knows French needing to interact in Spanish at a restaurant—with a vocabulary book, it can be done, because Romance languages share fundamental grammar.

Example: if you know how to write a loop in Python, you can learn the LabVIEW syntax for a loop by looking up the vocabulary; the logic is the same.

🎯 The real task

"Learn enough of another language to get the job done."

  • Focus on applying measurement concepts (sampling, dynamic range, data analysis).
  • Use the programming environment as a means to that end, not as the learning objective itself.

2.2 Computer-Based Data Acquisition

🧭 Overview

🧠 One-sentence thesis

Computer-based data acquisition systems convert continuous analog sensor signals into discrete digital values through analog-to-digital converters (ADCs), and the quality of the resulting data depends critically on both the resolution (number of bits) and the sampling rate chosen.

📌 Key points (3–5)

  • Why digital systems: Computerized measurement systems are more consistent, can store large volumes of data, and enable powerful digital signal processing (e.g., filtering noise).
  • ADC resolution trade-off: More bits mean finer voltage steps and lower quantization error, but also higher cost and memory usage; the goal is to match resolution to the actual measurement needs.
  • Quantization error is unavoidable: Because ADCs use a finite number of bits, infinite analog voltages must be mapped to discrete steps, introducing rounding errors of up to ±0.5 step.
  • Common confusion—range vs resolution: Setting the ADC range too wide wastes resolution; setting it too narrow clips the signal and produces false (non-linear) results.
  • Sampling rate and aliasing: To avoid ambiguity (aliasing), the sampling rate must be greater than twice the highest frequency in the signal (Nyquist criterion).

🔌 Why use computer-based systems

🔌 Advantages over analog systems

  • Consistency: Computerized systems produce repeatable results, unlike manual methods that vary with the operator.
  • Data capacity: Computers can store and process large volumes of data; manual analysis becomes impractical at scale.
  • Digital signal processing: Real-time filtering (e.g., removing 60 Hz electrical noise) and other transformations are possible during data collection.

🔧 Sensor and signal types

  • Most transducers convert physical quantities (pressure, temperature, etc.) into electrical signals: voltage, current, or resistance.
  • Data acquisition (DAQ) hardware typically expects voltage inputs, so current and resistance must be converted (e.g., current → voltage via a Bipolar Junction Transistor; resistance → voltage via a Wheatstone Bridge).
  • The excerpt uses "sensor" and "transducer" interchangeably; both refer to the physical component that produces a voltage proportional to the measured phenomenon.

Example: A pressure sensor outputs 0 to 5 volts for a pressure range of 0 to -10 inches of mercury (vacuum).

🔢 Analog-to-digital conversion (ADC)

🔢 The core problem

Analog-to-digital converter (ADC): A device that uses a fixed number of binary digits (n bits) to represent a continuous voltage range.

  • Sensor voltages are real-valued (infinite possible values between, say, 0 and 5 V).
  • Computers use finite binary digits, so the infinite range must be divided into discrete steps.
  • The number of distinct values an n-bit ADC can represent is 2 to the power n.

📏 Resolution

ADC resolution: The size of each voltage step, calculated as (Dynamic Range) divided by (2 to the power n).

  • Units: volts per step (or milliamps per step, etc., depending on the signal).
  • Formula in words: resolution equals the full voltage range divided by the number of steps.
  • Lower resolution (larger step size) means coarser measurements; higher resolution (smaller step size) means finer measurements.

Example: A 12-bit ADC measuring 4 to 20 mA has a range of 16 mA and 4096 steps (2 to the power 12), so resolution = 16 mA / 4096 ≈ 0.003906 mA/step or 3.906 microamps/step.
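The resolution formula is a one-liner; this sketch reproduces the 12-bit example above:

```python
def adc_resolution(dynamic_range, n_bits):
    """Step size of an n-bit ADC: full range divided by 2**n steps."""
    return dynamic_range / (2 ** n_bits)

step_ma = adc_resolution(16.0, 12)  # 4-20 mA loop: ~0.003906 mA (3.906 uA) per step
```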

⚖️ Choosing the right number of bits

  • Match the specification: If you need resolution "no greater than" X, choose the smallest number of bits that meets the requirement.
  • Avoid over-engineering: Extra bits waste memory and increase cost without improving measurement quality.
  • Avoid under-engineering: Too few bits produce coarse, inaccurate data.

Example: For a 4–20 mA range, if you need ≤ 1 nA/step, a 24-bit ADC is ideal (0.954 nA/step); a 12-bit ADC (3.906 µA/step) would be insufficient.

⚠️ Quantization error and noise

⚠️ What is quantization error

Quantization error: The difference between the true analog voltage and the discrete digital value it is mapped to by the ADC.

  • Because the ADC rounds to the nearest integer step, the maximum error is ±0.5 times the resolution (±0.5 step).
  • This error is also called quantization noise.

Example: A 3-bit ADC with a 0–5 V range has resolution = 5 / 8 = 0.625 V/step, so the worst-case quantization error is ±0.3125 V. A true voltage of 2.36 V would be rounded to 2.5 V, producing a quantization error of 0.14 V.

📊 Root-mean-square (RMS) quantization noise

  • The RMS quantization noise is Q divided by the square root of 12, where Q is the step size (resolution).
  • This measures the "power" of the quantization ambiguity.

📈 Signal-to-noise ratio (SNR)

  • SNR compares the signal power to the quantization noise power, typically expressed in decibels (dB).
  • Formula in words: SNR (in dB) ≈ 6.02 times the number of bits, plus 1.76.
  • Each additional bit improves SNR by about 6 dB.

Example: A 24-bit ADC (common in smartphones at the time of writing) has SNR ≈ 6.02 × 24 + 1.76 ≈ 146 dB.
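Both formulas in words above translate directly to code (a sketch; function names are my own):

```python
import math

def rms_quantization_noise(q_step):
    """RMS quantization noise = Q / sqrt(12), where Q is the step size."""
    return q_step / math.sqrt(12)

def snr_db(n_bits):
    """Ideal ADC signal-to-noise ratio: 6.02 * n + 1.76 dB."""
    return 6.02 * n_bits + 1.76

print(rms_quantization_noise(0.625))  # ~0.1804 V for the 3-bit, 0-5 V example
print(snr_db(24))                     # ~146.24 dB
```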

🎚️ Setting the ADC range

🎚️ User-adjustable limits

  • Most DAQ systems let the user set the minimum and maximum voltage limits of the ADC.
  • Goal: Match the ADC range to the sensor's actual output range.

⚠️ Range too narrow (clipping)

  • If the sensor voltage exceeds the ADC limits, the output is capped at the max or min.
  • This produces non-linear behavior and false results—the worst outcome.

Example: A sensor outputs 0–5 V, but the ADC is set to ±2 V. Any voltage above 2 V is recorded as 2 V, losing all information about the true signal.

⚠️ Range too wide (wasted resolution)

  • If the ADC range is much larger than the sensor range, the resolution is unnecessarily coarse.
  • The system remains linear, but quantization error increases.

Example: A sensor outputs ±2 V, but the ADC is set to ±10 V. With a 12-bit ADC, the quantization error is ±2.441 mV instead of ±0.488 mV—a 400% increase.
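The cost of an over-wide range is easy to verify numerically (sketch; helper name is my own):

```python
def quant_error_bound(adc_range, n_bits):
    """Worst-case quantization error = half the step size."""
    return 0.5 * adc_range / (2 ** n_bits)

# 12-bit ADC, sensor outputs +/-2 V (4 V span):
matched  = quant_error_bound(4.0, 12)   # ADC set to +/-2 V
too_wide = quant_error_bound(20.0, 12)  # ADC set to +/-10 V
print(matched, too_wide)  # ~0.488 mV vs ~2.441 mV: a 5x (400% increase) penalty
```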

Don't confuse: Clipping (too narrow) is far worse than wasted resolution (too wide), but both should be avoided by setting the range correctly.

🕒 Sampling rate and the Nyquist criterion

🕒 How often to sample

  • Sensors produce continuous signals (voltage at every instant in time).
  • Computers can only store a finite number of samples, so we must decide how often to convert the voltage to a digital value.
  • Sampling too slowly risks missing important information; sampling too quickly wastes memory and processing power.

🌊 Fourier Transform and frequency content

Fourier Transform: A mathematical tool that represents any time-varying signal as a sum of weighted sinusoids (sine and cosine waves) at different frequencies.

  • A signal's frequency content describes which frequencies (and how much of each) are present in the signal.
  • The Fourier Series applies to periodic signals and sums harmonics (integer multiples) of a fundamental frequency.
  • The Discrete Time Fourier Transform (DTFT) applies to sampled signals and repeats every 2π (or every sampling frequency interval).

Example: A square wave can be created by summing the odd harmonics (1st, 3rd, 5th, …) of a sine wave with specific weights. The more harmonics you add, the sharper the transitions.
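The square-wave construction can be sketched with the standard Fourier-series weights (4/π · 1/k on the odd harmonics; these specific weights are standard results, not stated in the excerpt):

```python
import math

def square_wave_partial(t, f0, n_harmonics):
    """Partial Fourier series of a square wave: sum the odd harmonics
    of f0, each weighted by 1/k, scaled by 4/pi."""
    total = 0.0
    for k in range(1, 2 * n_harmonics, 2):  # k = 1, 3, 5, ...
        total += math.sin(2 * math.pi * k * f0 * t) / k
    return (4 / math.pi) * total

# More harmonics -> closer to the +/-1 plateaus of an ideal square wave
print(square_wave_partial(0.25, 1.0, 50))  # close to 1.0 at the positive plateau
```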

🔁 Aliasing and ambiguity

Aliasing: When the sampling rate is too low, high-frequency components in the signal are mistakenly interpreted as lower frequencies, creating ambiguity.

  • The DTFT of a sampled signal repeats at intervals equal to the sampling frequency.
  • If the signal contains frequencies higher than half the sampling rate, the repeated spectra overlap, and we cannot tell which frequency is real.

Don't confuse: A high frequency that is under-sampled looks like a low frequency in the data—this is aliasing, not just poor resolution.

✅ Nyquist Sampling Criterion

Nyquist criterion: To accurately represent a signal, the sampling rate must be greater than twice the maximum frequency in the signal.

  • In symbols: sampling rate > 2 × (highest frequency in the signal).
  • This ensures no overlap between the repeated spectra, avoiding aliasing.

Example: If a sensor signal contains frequencies up to 1000 Hz, the sampling rate must be greater than 2000 Hz (samples per second).
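The criterion, plus the frequency-folding behavior of an under-sampled tone, can be sketched as follows (the folding formula is a standard result, not given in the excerpt):

```python
def min_sampling_rate(f_max):
    """Nyquist: the sampling rate must exceed twice the highest frequency."""
    return 2.0 * f_max

def apparent_frequency(f_signal, f_s):
    """Frequency an under-sampled tone appears at, folded into 0..f_s/2."""
    f = f_signal % f_s
    return f if f <= f_s / 2 else f_s - f

print(min_sampling_rate(1000.0))           # 2000.0 Hz -- must sample faster than this
print(apparent_frequency(1200.0, 2000.0))  # 800.0 Hz -- a 1200 Hz tone aliases down
```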

📐 Practical ADC design decisions

📐 Balancing cost, resolution, and range

| Factor | Trade-off | Recommendation |
|---|---|---|
| Number of bits | More bits → better resolution, higher cost | Choose the minimum that meets the resolution spec |
| ADC range | Too narrow → clipping; too wide → wasted resolution | Set range slightly wider than sensor output |
| Sampling rate | Too low → aliasing; too high → wasted memory | Must exceed twice the maximum signal frequency |

🧮 Example workflow

  1. Determine the sensor's voltage range (e.g., 0–5 V).
  2. Determine the required resolution (e.g., ≤ 0.5 mV/step).
  3. Calculate the minimum number of bits: 2 to the power n ≥ (range / resolution).
  4. Round up to the next available ADC (e.g., 12-bit, 14-bit, 16-bit).
  5. Set the ADC range to match the sensor range (with a small safety margin).
  6. Determine the maximum frequency in the signal and set the sampling rate to at least twice that value.

Don't confuse: Resolution (step size) and sampling rate (how often you measure) are independent choices; both must be set correctly for accurate data acquisition.
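The six-step workflow above can be collected into one helper (a sketch under my own naming; the list of available ADC sizes and the margin policy are assumptions):

```python
import math

def design_daq(v_min, v_max, max_step, f_max, available_bits=(12, 14, 16, 24)):
    """Steps 1-6: pick bit count, range, and sampling rate for one channel."""
    span = v_max - v_min
    n_needed = math.ceil(math.log2(span / max_step))          # 2**n >= range/step
    n_bits = min(b for b in available_bits if b >= n_needed)  # round up to a real ADC
    return {
        "bits": n_bits,
        "range": (v_min, v_max),          # match sensor range (plus a small margin in practice)
        "min_sampling_rate": 2.0 * f_max, # Nyquist: actual rate must exceed this
    }

print(design_daq(0.0, 5.0, 0.5e-3, 1000.0))
```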


2.3 Digital-To-Analog Output

🧭 Overview

🧠 One-sentence thesis

Digital-to-analog conversion requires a lowpass filter to smooth the stair-step output signal, and the filter cutoff is determined by the Nyquist criterion to remove high-frequency quantization artifacts while preserving the intended signal.

📌 Key points (3–5)

  • What DAC does: converts discrete data stored or generated in the computer into analog output voltages.
  • The stair-step problem: digital systems output discrete voltage levels held constant until the next sample, producing quantization artifacts that appear as steps.
  • The solution: pass the digital output through a lowpass filter designed to block frequencies above f_s / 2 (half the sampling rate).
  • Why this works: the step transitions contain much higher frequency content than f_s / 2, so the lowpass filter removes them while preserving the intended signal.
  • Common confusion: the lowpass filter in DAC is not arbitrary—it is directly tied to the sampling rate via the Nyquist criterion.

🔧 The digital-to-analog conversion problem

🔧 What DAC produces initially

  • A digital system can only output a discrete set of voltages.
  • Each voltage level is held constant until the next sampling cycle.
  • This creates a stair-step signal rather than a smooth analog waveform.

🎵 Why stair-steps are undesirable

Quantization produces a stair-step signal that would not be pleasing in an audio or visual application.

  • The excerpt emphasizes practical applications: audio playback and visual displays require smooth signals.
  • The stair-step artifacts are a direct result of the discrete nature of digital output.
  • Example: playing music from a computer—without smoothing, the output would sound harsh and distorted due to the abrupt voltage jumps.

🧹 The lowpass filter solution

🧹 How the filter removes stair-steps

This is solved by passing the digital output signal through a lowpass filter to smooth out the stairsteps.

  • The lowpass filter is applied after the initial digital-to-analog conversion.
  • It removes the high-frequency components introduced by the quantization steps.
  • The result is a smooth analog signal that approximates the intended waveform.

📐 Filter design based on sampling rate

The excerpt states:

For a given digital sample rate at the output, we know that the analog signal being represented can contain no frequency greater than one half of the sampling rate.

Filter specification:

  • The lowpass filter is designed to block frequencies above f_s / 2.
  • This cutoff is not arbitrary—it follows directly from the Nyquist sampling criterion.
  • Assumption: the Nyquist requirement (f_s > 2 f_max) is already met in the system.

⚡ Why the filter works

It should be evident that the frequency content in the step transitions we want to remove is much higher than f_s / 2.

  • The step transitions (the vertical jumps in the stair-step signal) contain high-frequency components.
  • These high frequencies are much higher than f_s / 2.
  • The intended signal content is below f_s / 2 (by the Nyquist criterion).
  • Therefore, the lowpass filter removes the unwanted step artifacts while preserving the actual signal.

Don't confuse:

  • The filter does not remove "all high frequencies"—it removes frequencies above f_s / 2.
  • The step transitions are artifacts of the conversion process, not part of the original signal.

📊 Visual representation

📊 Two-stage output process

Figure 2.9 in the excerpt shows:

| Stage | Description | Appearance |
|---|---|---|
| First step | Computer outputs discrete voltage based on n-bit conversion, held until next sample | Blue stair-step signal |
| Final output | After lowpass filtering | Red smooth curve |

📊 The conversion pipeline

  1. Digital data (stored or generated in computer)
  2. DAC initial output → discrete voltage levels, held constant per sample cycle
  3. Lowpass filter → cutoff at f_s / 2
  4. Final analog output → smooth waveform

Example: If the sampling rate is 1000 Hz, the lowpass filter blocks frequencies above 500 Hz. The stair-step transitions (which contain frequencies well above 500 Hz) are removed, leaving only the intended signal content (which must be below 500 Hz by design).
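The pipeline can be illustrated with a zero-order hold followed by a simple first-order lowpass (a sketch; real reconstruction filters are much sharper, and all names here are my own):

```python
import math

def zero_order_hold(samples, upsample):
    """Stair-step DAC output: hold each sample for `upsample` fine time steps."""
    return [v for v in samples for _ in range(upsample)]

def rc_lowpass(signal, cutoff_hz, dt):
    """First-order RC lowpass: y += alpha * (x - y). Stands in for the
    sharper f_s/2 reconstruction filter a real DAC would use."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    y, out = signal[0], []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

# 1000 Hz samples, 20x fine steps, filter cutoff at f_s/2 = 500 Hz
steps = zero_order_hold([0.0, 1.0, 0.0, 1.0], upsample=20)
smooth = rc_lowpass(steps, cutoff_hz=500.0, dt=1.0 / 20000.0)
# `smooth` ramps between levels instead of jumping between 0 and 1
```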


Digital-to-Analog Conversion and Data Acquisition Fundamentals

2.4 Conclusion

🧭 Overview

🧠 One-sentence thesis

Dynamic range and sampling rate are the two foundational parameters that must be verified before data acquisition, and proper signal conditioning (analog or digital) is essential to minimize measurement uncertainty.

📌 Key points (3–5)

  • Digital-to-analog conversion challenge: Digital systems output discrete voltage steps that create stair-step signals requiring smoothing.
  • Lowpass filtering solution: A lowpass filter removes high-frequency step transitions while preserving signal content below f_s / 2.
  • Two fundamental parameters: Dynamic range and sampling rate must be addressed, or the acquired data is "hopelessly useless."
  • Signal conditioning necessity: Analog signals often need conditioning (filtering, amplification) before digitization to preserve information while making signals more amenable to detection.
  • Common confusion: Signal conditioning can occur in either the analog domain (before digitization) or digital domain (after digitization)—both approaches exist.

🔄 Digital-to-analog conversion process

🔄 The stair-step problem

  • Digital systems can only output a discrete set of voltages.
  • The output voltage is held constant until the next sampling cycle.
  • This quantization produces a stair-step signal.
  • Such signals would not be pleasing in audio or visual applications.

Digital-to-analog conversion: The process of creating analog output voltages from discrete data stored or generated in the computer.

🎚️ How the lowpass filter solves it

  • The digital output signal is passed through a lowpass filter to smooth out the stairsteps.
  • The filter is designed based on the sampling criterion: for a given digital sample rate, the analog signal can contain no frequency greater than one half of the sampling rate (f_s / 2).
  • The lowpass filter blocks frequencies above f_s / 2.
  • The frequency content in the step transitions is much higher than f_s / 2, so the filter removes them while preserving the actual signal.

Example: The computer outputs discrete voltage values in steps (the blue line in the excerpt's figure). After lowpass filtering, the output becomes smooth (the red line), representing the final analog output.

⚠️ Nyquist requirement still applies

  • The excerpt assumes the Nyquist sampling requirement (f_s > 2 f_max) is met.
  • This ensures the signal frequency content is less than half the sampling frequency.
  • Don't confuse: This requirement applies to both analog-to-digital and digital-to-analog conversion.

🎯 Fundamental data acquisition parameters

🎯 The two critical parameters

Dynamic range and sampling rate: The two fundamental parameters to be addressed in data acquisition.

  • These are the minimum set of parameters to verify prior to data acquisition.
  • If not addressed, the data should be considered "hopelessly useless."
  • Even when adequately accounted for, these only reduce ambiguity to its minimum—uncertainty still exists.

📊 What each parameter controls

| Parameter | What it addresses | Consequence if ignored |
|---|---|---|
| Dynamic range | The span of signal amplitudes that can be measured | Cannot capture full signal variation |
| Sampling rate | How frequently the signal is sampled | Cannot capture signal frequency content |

🔧 Signal conditioning essentials

🔧 What signal conditioning means

Signal conditioning: Altering the signal to ensure the information conveyed remains present, but the signal itself is more amenable to detection and digitization.

  • The goal is to preserve information while making the signal easier to work with.
  • Most commonly needed types: amplification and filtering.

🔀 Analog vs digital conditioning

The excerpt distinguishes two domains where conditioning can occur:

Analog signal conditioning:

  • Occurs prior to the signal being digitized.
  • Implemented with electronic hardware.
  • Often needed to condition the native analog signal to adequately digitize it.
  • Examples: hardware-based amplification and filtering, converting sensor electrical output from one form to another.

Digital signal conditioning:

  • Occurs after digitization.
  • Many software-based solutions available.
  • Example: LabVIEW Express Virtual Instruments (VIs) for filtering, scaling, and mapping.

Don't confuse: The excerpt focuses on analog conditioning because it concerns measurement and instrumentation, but digital conditioning is equally valid for post-acquisition processing.

🎛️ Common conditioning operations

  • Filtering: Removing unwanted frequency components.
  • Amplification: Increasing signal strength for better detection.
  • Conversion: Transforming sensor electrical output from one form to another.

Example: A sensor's native analog signal might be too weak or contain noise; amplification increases its strength, and filtering removes noise, making it suitable for the data acquisition system's dynamic range and sampling rate.


3.1 Signal Conditioning

🧭 Overview

🧠 One-sentence thesis

Signal conditioning—primarily amplification and filtering—alters analog signals to preserve their information content while making them more suitable for detection and digitization.

📌 Key points (3–5)

  • Purpose of signal conditioning: to modify the native analog signal so that the information it conveys remains present but the signal becomes easier to detect and digitize.
  • Two main types: amplification (increasing signal strength) and filtering (removing unwanted signals).
  • Analog vs digital domain: signal conditioning can be done with hardware before digitization (analog) or with software after digitization (digital).
  • Common confusion: signal conditioning is not just about making signals bigger—it also includes converting electrical forms (e.g., resistance-to-voltage, current-to-voltage).
  • Real-world necessity: even after accounting for calibration and measurement uncertainty, signal conditioning is essential to minimize ambiguity in data acquisition.

🔧 What signal conditioning does

🎯 Core purpose

Signal conditioning: altering a signal so that the information it conveys remains present, but the signal itself is more amenable to detection and digitization.

  • The goal is not to change the information, only to make the signal easier to work with.
  • Native analog signals from sensors often need modification before they can be properly digitized by a data acquisition system.
  • Example: A sensor outputs a very weak voltage signal; amplification increases its strength without losing the underlying measurement information.

🔄 Two primary functions

  • Amplification: increasing the signal's amplitude (strength).
  • Filtering: removing unwanted components that obscure the desired information.
  • These are the most commonly needed types of signal conditioning in measurement systems.

🌐 Analog vs digital signal conditioning

💻 Digital domain conditioning

  • Occurs after the signal has been digitized.
  • Implemented using software-based solutions.
  • Example: LabVIEW Express Virtual Instruments (VIs) can perform filtering and scaling/mapping on digital data.
  • The excerpt mentions that many software tools are available for digital amplification and filtering.

⚡ Analog domain conditioning

  • Occurs before the signal is digitized.
  • Implemented using electronic hardware (physical circuits).
  • Amplification and filtering functions are built into the measurement system's front end.
  • Don't confuse: analog conditioning happens in the physical signal path, not in software.

🔌 Electrical conversion

  • Another form of analog signal conditioning beyond amplification and filtering.
  • Converts a sensor's electrical output from one form to another.
  • Two most common conversions:
    • Resistance-to-voltage: converting a resistance change into a voltage signal.
    • Current-to-voltage: converting a current signal into a voltage signal.
  • These conversions are discussed further when covering sensors (Chapter 4 of the source text).

🫀 ECG as a practical example

🩺 Why ECG needs signal conditioning

Electrocardiogram (ECG): the recording of the heart's electrical activity as it appears on the body surface.

  • The heart generates an electrical signal that manifests on the skin.
  • Raw signals from electrodes have amplitudes on the order of microvolts—extremely small.
  • Multiple electrodes are positioned around the body, and their signals must be processed together.
  • Without significant signal conditioning, the ECG traces seen on monitors would be unreadable.

🏗️ ECG signal conditioning stages

The ECG system typically consists of multiple sub-systems in sequence:

| Stage | Function | Why it's needed |
|---|---|---|
| First stage | Instrumentation amplifier | Target signal is extremely small; specialized amplifier handles tiny signals from multiple electrodes |
| Second stage | Highpass filter | Removes low-frequency unwanted signals |
| Third stage | Lowpass filter | Removes high-frequency unwanted signals |
| Last stage | Main amplification | Provides most of the voltage gain; can use many different amplifier configurations |

  • Amplification is a necessity: the raw signal is too weak to work with directly.
  • Filtering is also needed: unwanted signals make it harder to interpret the information from the electrodes.
  • Example: Without filtering, electrical noise from the environment or muscle activity could obscure the heart's electrical pattern.

🎓 Learning from the ECG example

  • The excerpt uses ECG to illustrate how amplification and filtering apply in any measurement system.
  • The primary goal is to explain the various sub-systems in detail.
  • Before diving into amplifier configurations, instrumentation amplifiers, and filters, the source text indicates that noise will be discussed more thoroughly.
  • Don't confuse: the ECG example is not unique—the same principles (amplification, filtering, multi-stage processing) apply broadly to measurement and instrumentation systems.

🔍 Minimizing uncertainty

📉 Residual ambiguity

  • Even after accounting for calibration and measurement uncertainty, ambiguity (uncertainty) still exists in data acquisition.
  • Signal conditioning is a key method to minimize this remaining uncertainty.
  • The excerpt emphasizes that proper signal conditioning reduces ambiguity to its minimum, but does not eliminate it entirely.

🎯 Focus on the analog world

  • Because the text is concerned with measurement and instrumentation, the discussion confines itself to the analog domain.
  • Specifically: conditioning the native analog signal in order to adequately digitize it.
  • The next chapter (referenced in the excerpt) discusses signal conditioning concepts in more depth.

3.2 ECG - A Practical Signal Conditioning Example

🧭 Overview

🧠 One-sentence thesis

The ECG signal conditioning system demonstrates how amplification and filtering work together in stages to transform a microvolt-level body-surface electrical signal into a clear, interpretable trace by first amplifying the tiny signal, then removing unwanted frequencies, and finally applying large gain after filtering to avoid amplifying noise.

📌 Key points (3–5)

  • Why ECG needs signal conditioning: the heart's electrical signal on the body surface is extremely small (microvolts) and contaminated with unwanted signals, requiring both amplification and filtering.
  • Multi-stage architecture: ECG systems use multiple sub-systems in sequence—instrumentation amplifier first, then highpass and lowpass filters, then large gain stages last.
  • Order matters: large gain is saved until the end so that unwanted frequencies are not amplified, which would make them harder to filter out later.
  • Common confusion: amplification alone is not enough—filtering must remove noise before final amplification, otherwise noise gets amplified too.
  • Real-world context: the familiar ECG traces seen on monitors have undergone significant signal conditioning; the raw signal from electrodes is not directly usable.

🩺 The ECG signal and why it needs conditioning

🩺 What the ECG signal is

Electrocardiogram (ECG): the recording of the electrical activity generated by the heart that appears on the body surface.

  • The heart generates an electrical signal that "evinces itself" (shows up) on the body surface.
  • This signal is captured by multiple electrodes positioned around the body.
  • The raw signal from these electrodes has amplitudes on the order of microvolts—extremely small.

🔍 Why raw ECG signals are not usable

  • Amplitude problem: microvolt-level signals are too small to digitize or display directly.
  • Noise problem: unwanted signals contaminate the electrode readings, making it harder to interpret the information.
  • The familiar ECG traces seen on hospital monitors (and in Hollywood medical dramas) have undergone significant signal conditioning; they are not the raw electrode output.

🏗️ Multi-stage ECG signal conditioning architecture

🏗️ The four-stage block diagram

The ECG amplifier system consists of multiple sub-systems in a specific order (Figure 3.2):

| Stage | Function | Why this stage |
|---|---|---|
| 1. Instrumentation amplifier | Amplify the tiny signal | Target signal source is extremely small; this amplifier type is preferred for small signals |
| 2. Highpass filter | Remove low-frequency unwanted signals | Filtering out noise components |
| 3. Lowpass filter | Remove high-frequency unwanted signals | Choice between 150 Hz or 250 Hz cutoff depending on sampling and desired resolution |
| 4. Large gain stage | Most of the voltage gain occurs here | Can be accomplished using many different amplifier configurations |

🎯 Why the order is critical

  • Large gain stages are saved until the end so that no unwanted frequencies are amplified.
  • If you amplify first and filter later, the noise gets amplified too, making it harder to filter out.
  • Example: if a 1 microvolt signal has 0.5 microvolt noise, amplifying by 1000× gives 1 millivolt signal + 0.5 millivolt noise; filtering the amplified noise is harder than filtering the original 0.5 microvolt noise first.

🔧 What each stage does

  • First stage (instrumentation amplifier): handles the extremely small input from multiple electrodes; this amplifier type is discussed in section 3.5.2.
  • Filter stages: remove unwanted frequency components; covered in section 3.5.4.
  • Final gain stage: provides most of the voltage gain after unwanted frequencies are removed; many different amplifier configurations can be used (section 3.5).

🎓 Broader context and chapter goals

🎓 ECG as a teaching example

  • The ECG system is a practical, real-world example of signal conditioning principles.
  • The chapter's primary goal is to explain the various sub-systems in the ECG block diagram in more detail.
  • The principles of amplification and filtering shown here apply to any measurement system, not just ECG.

🎓 What comes next

  • Section 3.5 will provide an overview of amplifier configurations.
  • Then move on to more complicated topics: instrumentation amplifiers and filters.
  • Before introducing amplifiers, the issue of noise will be discussed more thoroughly (section 3.3).

🎓 Analog vs digital signal conditioning

  • Signal conditioning can occur in the analog world (before digitization) or the digital world (after digitization).
  • This text focuses on the analog world because it is concerned with measurement and instrumentation.
  • Analog signal conditioning uses electronic hardware to implement amplification and filtering functions.
  • Digital alternatives exist (e.g., LabVIEW Express VIs for filtering and scaling), but the ECG example emphasizes hardware-based analog conditioning.

3.3 Noise and the Common Mode Rejection Ratio

🧭 Overview

🧠 One-sentence thesis

Noise in measurement systems can be reduced not only by filtering but also by amplifier design, with differential amplifiers offering better noise rejection than single-ended amplifiers through the Common Mode Rejection Ratio (CMRR).

📌 Key points (3–5)

  • What noise means: any signal value that does not represent the actual signal-of-interest; it is the error between sampled and true values.
  • How amplifiers affect noise: single-ended amplifiers amplify noise along with the signal, keeping the signal-to-noise ratio (SNR) unchanged; differential amplifiers can reject noise that appears on both inputs.
  • Common confusion: filtering is not the only way to remove noise—amplifier configuration and design also play a large role in noise reduction.
  • Key metric: Common Mode Rejection Ratio (CMRR) quantifies how well an amplifier rejects noise; some configurations are better at noise reduction than others.
  • Mathematical model: incoming signal x(t) = s(t) + η(t), where s(t) is the desired signal and η(t) is time-changing noise.

🔊 Understanding noise in measurement systems

🔊 What noise is

Noise: any signal value that does not represent the actual signal-of-interest.

  • Noise is the error component—the deviance between the sampled value and the actual true value.
  • Sources of noise include both the environment and the measurement system itself.
  • Example: In an ECG recording, the blue trace shows the measured signal with noise; the red trace shows what we think is the true ECG signal.

📐 Mathematical representation

The incoming signal is broken into two terms:

  • x(t) = s(t) + η(t)
    • x(t): the total incoming signal
    • s(t): the desired signal (signal-of-interest)
    • η(t): the time-changing noise

This separation helps with mathematical modeling and understanding how to address noise.

⚡ Single-ended amplifiers and their noise problem

⚡ What single-ended amplifiers do

  • A single-ended amplifier assumes the incoming signal is a single signal referenced to the same system ground as the amplifier.
  • One terminal is connected directly to system ground; the signal enters on the other terminal (the inverting − input or the non-inverting + input, depending on the configuration).
  • Inverting and non-inverting amplifier configurations are both considered single-ended amplifiers.

🚨 The noise amplification problem

The input signal is: v_i(t) = s_i(t) + η(t)

The output becomes: v_o(t) = G · v_i(t) = G · [s_i(t) + η(t)] = G · s_i(t) + G · η(t)

  • The gain G multiplies both the desired signal and the noise.
  • Major drawback: the noise η(t) is amplified along with the desired signal s_i(t).
  • The signal-to-noise ratio (SNR) of the output is no better than the SNR of the input signal.
  • Example: If the input has a certain amount of noise, the output will have that same proportion of noise, just louder.

🔀 Differential amplifiers and noise rejection

🔀 How differential amplifiers work differently

  • A differential amplifier does not ground one of the terminals.
  • Instead, it has two separate inputs: v_1(t) and v_2(t).
  • Both inputs are assumed to have the same noise: η(t).

✨ The noise cancellation advantage

The output is: v_o(t) = G · [v_2(t) − v_1(t)]

  • If both inputs have the same noise η(t), the difference operation subtracts it out.
  • This means noise that appears equally on both inputs (common-mode noise) can be rejected.
  • Don't confuse: single-ended amplifiers amplify all noise; differential amplifiers can reject noise that is common to both inputs.

📏 Common Mode Rejection Ratio (CMRR)

📏 What CMRR measures

Common Mode Rejection Ratio (CMRR): a metric that quantifies how well an amplifier rejects noise.

  • CMRR helps compare different amplifier configurations.
  • Some amplifier configurations are better at noise reduction than others, as measured by their CMRR.

🎯 Why amplifier design matters for noise

| Approach | Effect on noise |
|---|---|
| Filtering | Removes unwanted frequency components |
| Amplifier configuration | Can reject noise through differential operation (CMRR) |

  • The way the amplifier is configured and designed plays a large role in noise reduction, not just filtering.
  • Higher CMRR means better rejection of common-mode noise.
  • Example: In ECG signal conditioning, an instrumentation amplifier (a type of differential amplifier) is preferred as the first stage because the target signal is extremely small and noise rejection is critical.
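The excerpt does not give the CMRR formula, but the standard definition is the ratio of differential gain to common-mode gain, usually expressed in dB (a sketch under that standard definition; the numbers are illustrative):

```python
import math

def cmrr_db(diff_gain, common_mode_gain):
    """Standard definition: CMRR = A_diff / A_cm, expressed in decibels."""
    return 20.0 * math.log10(diff_gain / common_mode_gain)

# e.g. a differential gain of 1000 and a residual common-mode gain of 0.01:
print(cmrr_db(1000.0, 0.01))  # ~100 dB; higher is better noise rejection
```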

3.4 Amplifier Concepts

🧭 Overview

🧠 One-sentence thesis

Differential amplifiers reject noise by subtracting two inputs that share the same noise component, whereas single-ended amplifiers amplify both the desired signal and the noise together, making differential configurations far superior for noise reduction.

📌 Key points (3–5)

  • What gain means: the ratio of output voltage to input voltage (G = V_out / V_in), the primary purpose of an amplifier.
  • Single-ended vs differential: single-ended amplifiers ground one terminal and amplify noise along with the signal; differential amplifiers use two inputs and subtract them to cancel shared noise.
  • Common confusion: differential amplifiers do not completely remove noise in real systems, but they reject most of it—the ideal equation shows perfect cancellation, but practice differs.
  • Why differential matters for ECG: the ECG amplifier uses differential configuration (e.g., right arm minus left arm, both referenced to right leg ground) to maximize signal and minimize noise.
  • Op Amp assumptions: ideal op amps have infinite input resistance, zero output resistance, and infinite internal gain, simplifying gain equations.

🔌 Single-ended amplifiers and their noise problem

🔌 What single-ended amplifiers do

Single-ended amplifier: an amplifier configuration where one terminal is connected directly to system ground and the other receives the input signal.

  • The input signal v_i(t) is the sum of the desired signal s_i(t) and noise η(t).
  • The output is simply the gain G multiplied by the entire input: v_o(t) = G · v_i(t) = G · s_i(t) + G · η(t).
  • Both inverting and non-inverting amplifier configurations (covered in basic electronics courses) are single-ended.

⚠️ The noise amplification drawback

  • Because the noise term η(t) is multiplied by the same gain G as the desired signal, the signal-to-noise ratio (SNR) of the output is no better than the SNR of the input.
  • The excerpt highlights this as a "major drawback" and marks the noise term G · η(t) in red.
  • Example: if the input has noise and you amplify it by a factor of 100, the noise also grows by 100—you cannot improve the quality of the signal this way.

🔀 Differential amplifiers and noise cancellation

🔀 How differential amplifiers work

Differential amplifier: an amplifier configuration that uses two separate inputs (v_1(t) and v_2(t)) instead of grounding one terminal, and outputs the gain multiplied by the difference between the two inputs.

  • The two inputs are assumed to have the same noise η(t):
    • v_1(t) = s_1(t) + η(t)
    • v_2(t) = s_2(t) + η(t)
  • The output is: v_o(t) = G · [v_2(t) − v_1(t)] = G · [s_2(t) − s_1(t)].
  • When you subtract the two inputs, the shared noise term η(t) cancels out algebraically.
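
A minimal numeric sketch of this cancellation (all signal and noise values are illustrative, not from the text):

```python
# Differential vs. single-ended amplification of a noisy input.
# eta is noise assumed identical on both inputs (illustrative values).
G = 100.0              # amplifier gain (V/V)
s1, s2 = 0.010, 0.017  # desired signals at the two inputs (V)
eta = 0.005            # noise shared by both inputs (V)

v1 = s1 + eta          # input 1: signal + shared noise
v2 = s2 + eta          # input 2: signal + shared noise

single_ended = G * v2          # noise amplified along with the signal
differential = G * (v2 - v1)   # shared noise cancels in the subtraction

print(round(single_ended, 3))  # 2.2 (includes G * eta = 0.5 of noise)
print(round(differential, 3))  # 0.7 (G * (s2 - s1) only)
```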

🎯 Ideal vs real noise rejection

  • The differential output equation implies complete rejection of all noise associated with the inputs.
  • The excerpt warns this is unrealistic in real systems—more details are promised in Section 3.5.2.
  • In practice, most of the noise is removed, but not all.
  • Don't confuse: the ideal equation shows perfect cancellation; real amplifiers achieve very good (but not perfect) noise rejection.

🩺 ECG amplifier example

  • The classic ECG amplifier uses differential configuration.
  • Ground point: right leg.
  • Two voltage points recorded: e.g., right arm and left arm.
  • The amplifier subtracts the single-ended signal of the left arm (with respect to ground) from the single-ended signal of the right arm (with respect to ground).
  • The instrumentation amplifier (Figure 3.2, mentioned earlier) is a differential amplifier built from multiple op-amps to achieve this noise-reduction goal (details in Section 3.5.3).

🧮 Gain and amplifier configurations

🧮 Definition of gain

Gain (G): the ratio of output voltage to input voltage, G = V_out / V_in.

  • Gain is predominantly used for voltage gain in this book.
  • Occasionally the term refers to current gain, but voltage gain is the focus here.
  • The primary purpose of an amplifier is to supply gain.

🔄 Single-ended vs differential summary table

| Configuration | Terminals | Noise behavior | Output equation | SNR improvement |
| --- | --- | --- | --- | --- |
| Single-ended | One input, one grounded | Noise amplified with signal | v_o = G · (s_i + η) | None (output SNR = input SNR) |
| Differential | Two inputs, no ground | Shared noise cancels (ideally) | v_o = G · (s_2 − s_1) | Yes (most noise rejected) |

  • Single-ended: simpler, but amplifies noise.
  • Differential: more complex, but maximizes signal and minimizes noise effect.

🔧 Operational amplifiers (Op Amps)

🔧 What an Op Amp is

Operational Amplifier (Op Amp): an integrated circuit (IC) used to implement amplifiers, with very large input resistance, small output resistance, and very large internal (open loop) gain.

  • Op Amps are the most common building block for amplifier circuits.
  • They can be designed with simple gain equations (Equations 3.1, 3.2, and 3.3) when considered ideal.

🧪 Ideal Op Amp assumptions

The excerpt lists several primary assumptions for an Op Amp to be considered ideal:

  • Input resistance = infinity: no current flows into the input terminals.
  • Output resistance = 0: the output can drive any load without voltage drop.
  • Internal (open loop) gain = infinity: the amplification factor without feedback is extremely large.

(The excerpt cuts off mid-sentence; more assumptions may follow in the full text.)

  • These assumptions simplify circuit analysis and make gain equations straightforward.
  • Real Op Amps approximate these ideals closely enough for many applications.

Amplifier Circuits

3.5 Amplifier Circuits

🧭 Overview

🧠 One-sentence thesis

Amplifier circuits, particularly operational amplifiers (Op Amps), are essential for signal conditioning because they can increase signal strength while managing noise, with differential and instrumentation amplifiers offering superior noise rejection compared to single-ended configurations.

📌 Key points (3–5)

  • Single-ended vs differential: Single-ended amplifiers amplify both signal and noise equally, while differential amplifiers subtract common noise between two inputs, significantly improving signal quality.
  • Ideal vs real Op Amps: Ideal Op Amp assumptions (infinite input resistance, zero output resistance, infinite gain) simplify design equations, but real devices have limitations with small load resistances and high frequencies.
  • Inverting vs non-inverting configurations: Inverting amplifiers can provide fractional gains (attenuation) and are affected by source resistance, while non-inverting amplifiers have minimum gain of 1 V/V and are immune to source resistance effects.
  • Common confusion—CMRR limits: Even perfectly balanced differential amplifiers cannot completely eliminate noise due to resistor tolerances, internal Op Amp differences, and common-mode input limits.
  • Instrumentation amplifiers: Multi-Op-Amp designs with laser-trimmed resistors provide the best noise rejection and high input impedance, making them ideal for weak biological signals like ECG and EEG.

🔌 Single-Ended Amplifiers

🔌 What single-ended means

Single-ended amplifier: an amplifier configuration where the input signal is referenced to system ground, with one terminal connected directly to ground.

  • The input signal v_i(t) consists of the desired signal s_i(t) plus noise η(t).
  • Since one terminal is grounded, the output is simply the gain G multiplied by the entire input: v_o(t) = G · [s_i(t) + η(t)].
  • Major drawback: Noise is amplified at the same rate as the signal, so the signal-to-noise ratio (SNR) does not improve.
  • Example: If a 10 mV signal has 2 mV noise and both are amplified by 100×, the output has 1 V signal with 200 mV noise—same SNR as the input.

⚡ Inverting configuration

  • Input connects to the negative (-) terminal; positive (+) terminal connects to ground.
  • Gain equation: G = -R₂/R₁ (negative sign means output is inverted).
  • Can produce fractional gains less than 1 V/V for attenuation (e.g., R₂ = 1 kΩ, R₁ = 10 kΩ gives G = -0.1 V/V).
  • Disadvantage: Source resistance R_s adds to R₁ in the gain denominator, reducing overall gain.
  • Example: With R₁ = 100 Ω, R₂ = 200 Ω, and R_s = 50 Ω, the actual gain becomes -200/150 = -1.33 V/V instead of the ideal -2 V/V (a 33.3% reduction).
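
The source-resistance effect can be checked with a short helper (a sketch; the resistor values are those of the example above):

```python
def inverting_gain(r1, r2, r_s=0.0):
    """Inverting-amplifier gain; source resistance adds to R1 in the denominator."""
    return -r2 / (r1 + r_s)

ideal = inverting_gain(100, 200)        # -2.0 V/V
actual = inverting_gain(100, 200, 50)   # about -1.33 V/V
reduction = 1 - actual / ideal          # about 0.333, i.e. a 33% gain loss
```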

⚡ Non-inverting configuration

  • Input connects to the positive (+) terminal; negative (-) terminal connects through feedback.
  • Gain equation: G = 1 + R₂/R₁ (always positive, minimum gain is 1 V/V).
  • Advantage: Source resistance has no effect on gain because the input goes directly to the high-impedance (+) terminal.
  • Better choice when R₁ is not much greater than source resistance.

🔋 Rail voltage constraints

  • Op Amps require supply voltages (±V_cc or rails) to function; these limit the maximum output voltage.
  • Output voltage cannot exceed the rail voltages; signals that would exceed rails are "clipped" to the rail level.
  • Example: if the ideal gain equation predicts a 55V output, ±12V rails will clip the output to 12V (and ±5V rails would clip it to 5V).
  • Single supply: negative rail connected to ground, positive rail at specified voltage (e.g., 5V single rail means +5V and 0V).
  • Design consideration: Choose rails slightly larger than expected output to avoid clipping, but not excessively large to minimize power dissipation.

🌊 AC signal amplification

  • Op Amps amplify AC signals using the same gain equations as DC.
  • Peak-to-peak voltage (V_pp) is used for AC measurements: for a sinusoid, V_pp = 2 × amplitude.
  • RMS voltage relates to peak voltage: V_peak = V_rms × √2, so V_pp = 2 × V_rms × √2.
  • Example: Standard 120 V_rms AC outlet has V_pp = 339.4 V.
  • Step-down transformers often used to reduce high AC voltages to levels compatible with Op Amp circuits.
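
The peak-to-peak conversion can be written as a one-line helper (a sketch; 120 V_rms is the outlet example from the text):

```python
import math

def vpp_from_rms(v_rms):
    """Peak-to-peak voltage of a sinusoid: V_pp = 2 * V_rms * sqrt(2)."""
    return 2 * v_rms * math.sqrt(2)

print(round(vpp_from_rms(120), 1))  # 339.4
```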

⚖️ Differential Amplifiers

⚖️ How differential amplifiers reject noise

Differential amplifier: an amplifier with two separate inputs that amplifies the difference between them, ideally rejecting signals common to both inputs.

  • Two inputs: v₁(t) = s₁(t) + η(t) and v₂(t) = s₂(t) + η(t), where η(t) is common noise.
  • Output: v_o(t) = G · [v₂(t) - v₁(t)] = G · [s₂(t) - s₁(t)] + G · [η(t) - η(t)] = G · [s₂(t) - s₁(t)].
  • Key principle: When noise is identical on both inputs, subtraction cancels it out.
  • Reality check: Noise is never perfectly identical (η₁ ≠ η₂), so some noise remains as ε(t) = η₂(t) - η₁(t), but this difference is much smaller than the original noise.

⚖️ Balanced differential amplifier

  • Requires resistor ratio balance: R₂/R₁ = R₄/R₃ (typically R₂ = R₄ and R₁ = R₃).
  • When balanced:
    • Differential gain: A_d = R₂/R₁ (positive, unlike inverting amplifier)
    • Common-mode gain: A_CM = 0 (ideally)
    • Differential input: V_i,diff = v₂ - v₁
    • Common-mode input: V_i,CM = (v₂ + v₁)/2
  • Example: With 10 Hz signal and 60 Hz noise, the differential amplifier amplifies the 10 Hz signal to 1.1 V_pp while completely removing visible 60 Hz noise.

📊 Common-Mode Rejection Ratio (CMRR)

CMRR: the ratio of differential gain to common-mode gain, measuring how well an amplifier rejects common signals while amplifying differential signals.

  • Formula: CMRR = |A_d|/|A_CM| (in V/V) or 20·log₁₀(CMRR) in dB.
  • Higher CMRR is better; ideal is infinite (A_CM = 0).
  • Conversion: CMRR in V/V = 10^(CMRR_dB/20).
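
The dB/ratio conversions can be expressed directly (a sketch; the 85 dB figure is the LM324 DC spec cited in this section):

```python
import math

def cmrr_db(a_d, a_cm):
    """CMRR in dB from differential and common-mode gains (V/V)."""
    return 20 * math.log10(abs(a_d) / abs(a_cm))

def cmrr_vv(cmrr_in_db):
    """Convert a datasheet CMRR in dB to a V/V ratio."""
    return 10 ** (cmrr_in_db / 20)

# An 85 dB spec corresponds to a ratio of roughly 17,800 V/V.
print(round(cmrr_vv(85)))  # 17783
```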

Three factors that limit CMRR in practice:

  1. External resistor imbalance: Resistor tolerances prevent perfect R₂/R₁ = R₄/R₃ balance, making A_CM non-zero.

    • Example: With 10% tolerance resistors in worst case, CMRR can drop to 8.12 V/V (18.2 dB) instead of infinite.
  2. Internal Op Amp differences: Real Op Amps have finite CMRR even with perfectly balanced external resistors.

    • Example: LM324 datasheet specifies 85 dB CMRR for DC signals, dropping to ~65 dB at 100 kHz.
  3. Common-mode input limits: Exceeding specified common-mode input range invalidates CMRR specifications.

    • Example: LM324 requires common-mode input to be at least 2V below rail voltage magnitude.

🔧 Design considerations

  • Transmission lines for both inputs should be close together and travel through the same environment to ensure noise is truly common.
  • Longer transmission lines increase noise pickup probability.
  • Use superposition analysis when resistor ratios are not balanced to find actual output.
  • Don't confuse: The differential amplifier's positive gain (R₂/R₁) differs from the inverting amplifier's negative gain (-R₂/R₁), even though the resistor ratio is the same.

🎯 Instrumentation Amplifiers

🎯 Why instrumentation amplifiers are superior

  • Multi-Op-Amp design (typically 2 or 3 Op Amps) that improves on basic differential amplifiers.
  • Manufactured as monolithic integrated circuits with laser-trimmed resistors for precise balance.
  • Key advantage: Both inputs go through non-inverting terminals, ensuring very high input resistance for weak signals.
  • Ideal for biomedical applications: ECG (heart signals) and EEG (brain signals) require high input impedance.

🎯 Three-Op-Amp design

  • Two input buffer Op Amps (A1 and A2) plus one differential output stage (A3).
  • Only one external resistor (R_G) set by user; all other resistors are internal and precisely matched.
  • Gain equation provided in datasheet (varies by manufacturer).
  • Example: INA217 has gain G = 1 + 10kΩ/R_G; default gain is 1 V/V when R_G is omitted.

🎯 Two-Op-Amp design

  • Two Op Amps serve as both input buffers and gain stages.
  • Also requires only one external R_G resistor.
  • Example: INA126 and INA2126 (dual version) have gain G = 5 + 80kΩ/R_G; default gain is 5 V/V when R_G is omitted.
  • Available in through-hole DIP packages for easy prototyping on breadboards.

🎯 Practical design example

  • To convert 3V and 2.23V inputs to 10V output: Required gain = 10/(3-2.23) = 13 V/V.
  • For INA2126: 13 = 5 + 80kΩ/R_G → R_G = 10 kΩ.
  • To get default gain of 5 V/V: omit R_G entirely.
  • Don't confuse: Instrumentation amplifiers are not just "better Op Amps"—they are complete multi-stage systems optimized for differential measurement with minimal user configuration.
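
The R_G selection can be generalized from the gain equation G = 5 + 80 kΩ/R_G used in the example (a sketch):

```python
def rg_for_gain(target_gain, base_gain=5.0, k=80e3):
    """Solve G = base_gain + k/R_G for R_G (INA126-style gain equation)."""
    if target_gain <= base_gain:
        raise ValueError("at or below the default gain; omit R_G instead")
    return k / (target_gain - base_gain)

print(rg_for_gain(13))  # 10000.0 (the 10 kOhm value from the example)
```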

🎛️ Ideal vs Real Op Amp Behavior

🎛️ Ideal Op Amp assumptions

Five primary assumptions for ideal analysis:

  1. Input resistance = ∞ (no current flows into input terminals)
  2. Output resistance = 0 (output voltage unaffected by load)
  3. Internal (open loop) gain = ∞
  4. Can source infinite current
  5. CMRR = ∞ (perfect common-mode rejection)

Virtual short concept: When feedback is present, voltages on + and - terminals are forced to be approximately equal: V(-) = V(+).

  • Only applies when feedback resistor connects output to V(-) pin.
  • Without feedback, Op Amp acts as comparator, output goes to rail.

⚠️ When ideal assumptions fail

Situation 1: Small load resistance

  • Op Amp cannot supply enough current to produce calculated voltage across small load resistor.
  • Output voltage equation: V_o = I · R_L requires sufficient current I.
  • Example: a 10 Ω load would require 1.51 A to produce a 15.1 V_pp output, but the LM324 can supply only ~51.8 mA, so the output is severely reduced.
  • Short circuit current: maximum current when output connected directly to ground (typically ±20 to ±40 mA for LM324).
  • Solution: Load resistance must be increased (>200 Ω in example) or use power transistor output stage.

Situation 2: High frequency signals

  • Open loop gain decreases at high frequencies, making ideal gain equations inaccurate.
  • Example: At 60 kHz, LM324 output is much lower than ideal equation predicts, while ideal (virtual) Op Amp maintains correct output.
  • Gain Bandwidth Product (GBP) in datasheet indicates frequency limitations.
  • Higher GBP Op Amps reduce gain reduction effects at high frequencies.

🔍 Virtual vs real Op Amp simulation

  • Virtual (3-terminal) Op Amps in Multisim force ideal assumptions, always producing calculated results.
  • Real IC models (e.g., LM324) in Multisim attempt to reflect actual device limitations.
  • Virtual Op Amps useful for verifying circuit design math; real models show practical performance.
  • Power dissipation limits: LM324 has ~1 Watt limit (varies by package type: 1130 mW for PDIP, 800 mW for SOIC).

🎚️ Active Filter Circuits

🎚️ Why filters are frequency-selective

Filter circuits: circuits that select or block certain frequency ranges within a signal.

  • Any signal can be decomposed into a sum of sinusoids (Fourier analysis).
  • Noise often occupies different frequency ranges than the signal of interest.
  • Filters condition signals before digitization to improve signal quality.
  • Frequency-dependent gain described by transfer function H(s) in Laplace domain or H(jω) in frequency domain.
  • Magnitude |H(jω)| plotted with frequency (Hz) on x-axis and gain (dB) on y-axis.

🎚️ Cutoff frequency and roll-off

Cutoff frequency (f_c): the frequency at which the gain falls to 0.707 (1/√2) of the passband gain, i.e., the point 3 dB below the passband gain.

  • Not the frequency where gain equals -3 dB, but where gain is 3 dB lower than passband gain.
  • Roll-off region: where passband begins to decrease in magnitude.
  • Roll-off steepness depends on filter order: higher order = steeper roll-off.
  • First-order filter: modeled by first-order differential equation, gradual roll-off.
  • Conversion between frequency units: ω (rad/s) = 2π·f (Hz).

📉 Lowpass filters

Lowpass filter: allows low frequencies to pass while blocking high frequencies.

  • First-order transfer function: H(s) = (k·ω_c)/(s + ω_c), where ω_c = 1/(R₂C).
  • Basic inverting configuration: uses one Op Amp with capacitor in feedback path.
  • Cascading n identical first-order filters lowers the overall cutoff: the cascade's −3 dB frequency is ω_c · √(2^(1/n) − 1), where ω_c is each stage's cutoff.
  • To achieve a desired overall cutoff, scale each single-stage cutoff up by the factor 1/√(2^(1/n) − 1).
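
A small helper for this cutoff adjustment (a sketch; the 100 Hz target is an illustrative value, not from the text):

```python
import math

def stage_cutoff(desired_wc, n):
    """Per-stage cutoff so that n cascaded identical first-order stages
    place their overall -3 dB point at desired_wc."""
    shrink = math.sqrt(2 ** (1 / n) - 1)  # overall-cutoff shrink factor
    return desired_wc / shrink

# Two cascaded stages targeting 100 Hz: each stage set ~1.55x higher.
print(round(stage_cutoff(2 * math.pi * 100, 2) / (2 * math.pi), 1))  # 155.4
```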

Butterworth lowpass filter:

  • Second-order design with maximally flat passband (uniform gain over more of passband).
  • Sallen-Key configuration commonly used.
  • Corner frequency: ω_c = √(1/(R₁R₂C₁C₂)).
  • Typical design: C₁ = 2C₂, both resistors fixed.
  • Fourth-order response: cascade two second-order Butterworth stages with adjusted cutoffs.
  • Superior to cascaded first-order filters: remains flat until ~0.7 rad/s vs ~0.1 rad/s for standard.
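
The corner-frequency relation can be evaluated for candidate components (a sketch; the component values are hypothetical, not from the text):

```python
import math

def sallen_key_corner_hz(r1, r2, c1, c2):
    """Sallen-Key corner frequency in Hz: wc = 1/sqrt(R1*R2*C1*C2)."""
    return 1 / (2 * math.pi * math.sqrt(r1 * r2 * c1 * c2))

# Hypothetical values following the C1 = 2*C2 guideline above.
print(round(sallen_key_corner_hz(10e3, 10e3, 20e-9, 10e-9), 1))  # 1125.4
```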

Two main uses:

  1. Anti-aliasing: Prevents aliasing by ensuring that no frequencies above f_s/2 (half the sample rate) reach the digitizer.
  2. Noise reduction: Removes high-frequency noise before digitization (e.g., EMG interference in ECG).

📈 Highpass filters

Highpass filter: allows high frequencies to pass while blocking low frequencies.

  • First-order transfer function: H(s) = (k·s)/(s + ω_c), where ω_c = 1/(R₁C).
  • Basic inverting configuration: capacitor in series with input.
  • Unity gain when R₁ = R₂.

Butterworth highpass filter:

  • Second-order Sallen-Key configuration for maximally flat passband.
  • Transfer function: H(s) = s²/(s² + (1/R₂C₁ + 1/R₂C₂)s + 1/(R₁R₂C₁C₂)).
  • Corner frequency: ω_c = √(1/(R₁R₂C₁C₂)).
  • Typical design: C₁ = C₂ = C, then R₂ ≈ 2R₁.
  • Fourth-order: cascade two second-order stages.

Common use: Block DC offsets (0 Hz) in signals, such as electrode impedance mismatches in biomedical recordings (e.g., 0.5 Hz highpass removes DC offset).

🎛️ Bandpass and bandstop filters

Bandpass filter:

  • Passes frequencies in limited range from f_low to f_high.
  • Can be modeled as lowpass filter cascaded with highpass filter (order doesn't matter).
  • f_low set by highpass stage, f_high set by lowpass stage.
  • Example: To pass only 100 Hz from signal containing 10 Hz, 100 Hz, and 500 Hz, set f_low = 30 Hz and f_high = 300 Hz.

Bandstop filter:

  • Blocks frequencies over some range.
  • Cannot be cascaded; requires parallel processing: input goes through both highpass and lowpass separately, then outputs are summed.
  • Lowpass defines lower frequency, highpass defines upper frequency.
  • Example: To block 100 Hz while passing 10 Hz and 500 Hz, use same corner frequencies as bandpass but in parallel configuration.

Notch filters:

  • Specialized narrow bandstop filters.
  • 60 Hz notch filter common for power line interference in data acquisition.
  • Digital filters often preferred over analog for notch filtering due to adaptability and precision.

🔧 Filter design considerations

  • Analog filters mainly for signal conditioning before digitization.
  • Digital filters (DSP) offer more flexibility and control for intensive filtering after acquisition.
  • Goal: condition signal so digitization preserves signal fidelity relative to original physical phenomenon.
  • Always better to minimize noise through proper grounding and shielding first, then use filters as needed.

Conclusion

3.6 Conclusion

🧭 Overview

🧠 One-sentence thesis

Amplification and filtering are the two most essential signal conditioning techniques, with analog conditioning preparing signals for accurate digitization while digital filters provide flexible post-acquisition processing.

📌 Key points (3–5)

  • Two core needs: amplification and filtering are the most needed types of signal conditioning.
  • Highpass filters block DC offsets: commonly used to eliminate constant voltage offsets (treated as 0 Hz) caused by impedance mismatches in electrodes.
  • Lowpass filters reduce noise and prevent aliasing: they block high-frequency noise and ensure incoming signals meet the Nyquist criterion (less than half the sampling rate).
  • Common confusion: analog vs digital filters—analog conditioning prepares signals for digitization without losing fidelity; digital filters offer flexibility for intense analysis after acquisition.
  • Why it matters: proper analog signal conditioning is fundamental to accurately measuring real-world phenomena with computer-based systems.

🎯 The two essential signal conditioning techniques

🔊 Amplification

  • One of the two most needed types of signal conditioning (the excerpt pairs it with filtering).
  • Purpose: boost weak signals to levels suitable for digitization.

🎛️ Filtering

  • The second core technique.
  • Purpose: select desired frequency content and remove unwanted components.
  • The excerpt focuses on four filter types (lowpass, highpass, bandpass, bandstop) and their practical applications.

🚫 Highpass filters: blocking DC offsets

🔌 What DC offsets are and why they occur

  • DC offset: a constant voltage component in a signal.
  • Common in biomedical signal acquisition due to impedance mismatches between electrodes.
  • Each electrode has its own impedance depending on technician skill; slight mismatches create DC offsets.

🔢 How highpass filters remove DC

The Fourier transform shows that the DC element of a signal can be thought of as 0 Hz.

  • A highpass filter blocks frequencies below its cutoff.
  • Example: a 0.5 Hz highpass filter would block the 0 Hz DC component while passing higher-frequency signal content.
  • Don't confuse: "DC has no frequency" in everyday terms, but mathematically it is treated as 0 Hz for filtering purposes.

🔇 Lowpass filters: reducing noise and preventing aliasing

🌊 Reducing white noise interference

  • White noise sources:
    • Environmental (e.g., electromyographic activity in an ECG).
    • Intrinsic to the system (e.g., thermal noise in electronics).
  • These noises generally have a wide frequency band, often at the higher frequency end.
  • A lowpass filter blocks high-frequency noise while preserving the signal of interest.

⚠️ Preventing aliasing with anti-aliasing filters

  • The aliasing problem: if incoming frequency content exceeds half the sampling rate, aliasing occurs (violates the Nyquist sampling criterion).
  • Solution: a properly designed lowpass filter limits incoming frequencies to less than one half of the sampling rate.
  • When used for this purpose, the lowpass filter is called an anti-aliasing filter.
  • Example: if the sampling rate is 1000 Hz, the anti-aliasing filter should cut off frequencies above 500 Hz to guarantee no aliasing.

🖥️ Analog vs digital signal processing

🔧 Analog signal conditioning

  • Purpose: condition the signal so it can be digitized without losing signal fidelity with respect to the originating physical phenomenon.
  • When used: mainly before digitization.
  • Characteristics: fundamental step in accurately measuring real-world phenomena with computer-based data acquisition systems.

💾 Digital signal processing (DSP)

  • What it is: a well-developed field with many techniques for modifying discrete signal content.
  • Advantages: significant flexibility; frequency response can be easily controlled.
  • When preferred: for intense filtering and signal analysis needs (after the signal has been digitized).

🔄 How they complement each other

| Aspect | Analog conditioning | Digital filters |
| --- | --- | --- |
| Timing | Before digitization | After digitization |
| Purpose | Prepare signal for accurate digitization | Flexible post-acquisition analysis |
| Flexibility | Less flexible | Highly flexible, easily controlled |
| Role | Preserve fidelity of physical phenomenon | Modify discrete signal content |

  • Don't confuse: analog is not "worse" than digital; each serves a distinct role in the measurement chain.
  • A reasonable understanding of filters and the ability to implement them is critical to good measurement and instrumentation practices.

Sensors

4.1 Sensors

🧭 Overview

🧠 One-sentence thesis

Sensors convert physical phenomena into electrical signals for computer-based measurement systems, and their effectiveness depends on understanding sensitivity, bandwidth, and proper signal conditioning to ensure accurate measurements.

📌 Key points (3–5)

  • Transducer vs sensor distinction: transducers convert any physical phenomenon to another form, while sensors specifically convert to electrical signals (though terms are often used interchangeably).
  • Sensitivity defines conversion: the ratio of electrical output to physical input determines how to scale measurements, and many sensors require both a slope (sensitivity) and offset (y-intercept) for accurate conversion.
  • Bandwidth prevents aliasing: sensors must respond to the maximum expected frequency, and anti-aliasing filters are critical to prevent high-frequency noise from corrupting measurements.
  • Common confusion: sensitivity vs offset—sensitivity is the slope (change per unit), while offset is the baseline electrical output when the physical input is zero; both must be accounted for.
  • Signal conditioning matters: techniques like the Wheatstone bridge amplify small resistive changes into measurable voltage differences, especially important for sensors with poor sensitivity.

🔄 Transduction fundamentals

🔄 What transduction means

Transduction: the process of converting one physical phenomenon to another physical phenomenon.

Transducer: a device that can perform transduction (general conversion between any physical forms).

Sensor: a device that converts one physical form specifically into an electrical signal.

  • Sensors are a subset of transducers—all sensors are transducers, but not all transducers are sensors.
  • Example from the excerpt: The Atmos clock uses temperature changes to expand/contract a gas bellows, creating mechanical movement via the ideal gas law (PV = nRT). This is transduction but not sensing, because no electrical signal is produced.
  • In practice, "sensor" and "transducer" are often used interchangeably, but the distinction matters: sensors produce electrical outputs suitable for computer-based data acquisition.

📋 Common sensor types

The excerpt lists sensors for various phenomena:

  • Strain Gauge
  • Linear Variable Differential Transformer (LVDT)
  • Potentiometric Sensing
  • Ultrasound Sensor
  • Accelerometer
  • Pressure sensor
  • Encoders
  • Temperature Sensors (multiple types)

📏 Sensitivity and calibration

📏 What sensitivity measures

Sensitivity: the ratio of the electrical output to the physical quantity input.

  • Also called the scale factor or conversion factor.
  • Poor sensitivity means a large physical change produces only a small electrical change.
  • Example: J-Type thermocouple has sensitivity of only 50 μV/°C. To distinguish a 0.1°C temperature change, you'd only get a 5 μV voltage change—likely smaller than environmental noise.
  • When sensitivity is poor, signal conditioning becomes vitally important for accurate measurement.

📐 Linear sensitivity models

Most sensors follow the equation y = mx + b, where:

  • m (slope) = sensitivity
  • b (y-intercept) = offset

| Sensor type | Sensitivity example | Offset | Temperature range |
| --- | --- | --- | --- |
| K-Type Thermocouple | 41 μV/°C | ~0 (passes through origin) | -270°C to 1372°C |
| PT100 RTD | 0.385 Ω/°C | 100 Ω at 0°C | Smaller range; nonlinear above 300°C |
| Omega Thermistor (4-20 mA) | 0.21333 mA/°C | 7.2 mA at 0°C | -15°C to 60°C |

⚠️ Nonlinearity considerations

  • Linear sensitivity models provide close approximations, but nonlinearities exist at extremes.
  • Manufacturer charts are more accurate than linear models for wide ranges.
  • Example: K-Type thermocouple is fairly linear at low positive temperatures but shows nonlinearity at temperature extremes.
  • PT100 RTD is very linear at negative temperatures but becomes nonlinear above ~300°C.
  • Don't confuse: A published sensitivity value gives you a working linear model, but for high-precision work across wide ranges, use the full manufacturer calibration chart.

🔌 The 4-20 mA current loop standard

  • Popular protocol where 0-100% of measurement range maps to 4-20 mA current.
  • Benefits:
    • Current near zero indicates a fault condition (not just low reading)
    • High enough current that noise floor is usually avoided
    • Easily converted to voltage using a resistor (e.g., 250 Ω resistor converts 4-20 mA to 1-5 V)
  • This standard inherently includes a non-zero offset (4 mA at zero input).
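
Converting a loop current back to engineering units follows directly from the slope/offset model (a sketch using the Omega thermistor figures from the table above):

```python
def loop_to_temperature(i_ma, sensitivity=0.21333, offset=7.2):
    """Convert a 4-20 mA loop current to deg C: T = (I - offset) / sensitivity."""
    if i_ma < 3.8:  # current near zero signals a fault, not a low reading
        raise ValueError("loop current too low: probable sensor fault")
    return (i_ma - offset) / sensitivity

print(round(loop_to_temperature(7.2), 1))   # 0.0
print(round(loop_to_temperature(20.0), 1))  # 60.0
```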

🎯 Calibration requirements

  • Every sensor needs sensitivity (slope) and offset parameters to convert electrical readings to physical units.
  • This is typically a programmatic conversion during data acquisition.
  • A robust measurement system allows users to adjust these parameters based on calibration testing against a standard.
  • Example: The Omega EWS-RH manual defines specific calibration procedures for humidity and temperature sensors.

📡 Bandwidth and sampling

📡 What bandwidth means for sensors

Bandwidth: in the context of sensors, refers to the highest frequency contained in the signal coming from the sensor.

  • Directly related to how quickly the physical phenomenon is changing.
  • The sensor bandwidth must be capable of responding to the maximum frequency expected in the physical system.
  • Example: A guitar string vibrates at hundreds of Hz, while a bridge truss may vibrate at only a few Hz. An accelerometer with 10 Hz bandwidth works for the bridge but is "utterly inadequate" for the guitar string.

🛡️ Anti-aliasing filters

  • High-frequency noise can violate the Nyquist sampling criterion and cause aliasing.
  • An anti-aliasing filter is a lowpass filter that removes (or greatly attenuates) signals beyond the expected maximum sensor frequency.
  • Can be designed into the sensor or added externally.
  • By removing unwanted high-frequency components before sampling, it greatly reduces the likelihood of aliasing affecting the measurement.

✅ Two critical requirements

Once the proper sensor is selected:

  1. Sampling frequency must meet Nyquist criterion: F_s > 2 × F_max
  2. Signal conditioning must prevent corruption above the expected maximum frequency (via anti-aliasing filter)

🧮 Aliasing calculation method

The excerpt provides a simplified method when normalized radian frequency ω_norm is between π and 3π:

  • Calculate ω_norm = 2π × F_actual / F_s
  • If π ≤ ω_norm ≤ 3π, then F_alias = |F_s - F_actual|
  • Example: With F_actual = 1202 Hz and F_s = 2000 Hz, ω_norm = 1.2π, so F_alias = |2000 - 1202| = 798 Hz
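
The method can be written out directly (a sketch restricted to the π ≤ ω_norm ≤ 3π range the text covers):

```python
import math

def alias_frequency(f_actual, f_s):
    """Alias frequency via the simplified method (valid for pi <= w_norm <= 3*pi)."""
    w_norm = 2 * math.pi * f_actual / f_s
    if not (math.pi <= w_norm <= 3 * math.pi):
        raise ValueError("outside the simplified method's range")
    return abs(f_s - f_actual)

print(alias_frequency(1202, 2000))  # 798
print(alias_frequency(1202, 1000))  # 202
```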

Don't confuse: Sampling just above the Nyquist rate is enough to detect the correct frequency, but sampling well above it makes the connected samples look more like a true sinusoid when plotted.

⚠️ Noise and aliasing interaction

  • If a 1202 Hz noise signal exists and the sensor bandwidth is only 300 Hz, an anti-aliasing filter (lowpass with corner ~500 Hz) can eliminate it.
  • Without the filter, if sampling at 1000 Hz, the 1202 Hz noise would alias down to 202 Hz—appearing within the sensor bandwidth and corrupting the measurement.

🌉 Wheatstone bridge signal conditioning

🌉 Why bridges are used

  • Some sensors (RTDs, strain gauges) use small resistive changes to transduce physical phenomena.
  • These resistance changes can be very small, making the linear voltage or current change also small.
  • The Wheatstone bridge arrangement better assesses resistive changes by comparing voltage dividers.

⚖️ Bridge balance concept

Balanced bridge: when the voltage at point v_a equals the voltage at point v_b, so v_a - v_b = 0.

  • Balance condition requires: R₁/R₂ = R₃/R_x
  • When the strain gauge (R_x) is compressed or tensioned, its resistance changes, so R₁/R₂ ≠ R₃/R_x
  • This imbalance creates a non-zero differential voltage: v_a - v_b ≠ 0

🧮 Bridge mathematics

The differential voltage is given by:

V_diff = V_BAT × [(R₂R₃ - R₁R_x) / ((R₁ + R₂)(R₃ + R_x))]

When balanced (R₂R₃ = R₁R_x), V_diff = 0.

The unknown resistance can be solved as:

R_x = [R₂R₃ - V_diff/V_BAT × (R₁R₃ + R₂R₃)] / [R₁ + V_diff/V_BAT × (R₁ + R₂)]
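
Both bridge equations can be checked numerically (a sketch using the thermistor example values given later in this section):

```python
def bridge_vdiff(v_bat, r1, r2, r3, r_x):
    """V_diff = V_BAT * (R2*R3 - R1*Rx) / ((R1 + R2) * (R3 + Rx))."""
    return v_bat * (r2 * r3 - r1 * r_x) / ((r1 + r2) * (r3 + r_x))

def bridge_rx(v_bat, r1, r2, r3, v_diff):
    """Solve the bridge equation for the unknown resistance R_x."""
    k = v_diff / v_bat
    return (r2 * r3 - k * (r1 * r3 + r2 * r3)) / (r1 + k * (r1 + r2))

print(bridge_vdiff(4.5, 10e3, 10e3, 10e3, 10e3))      # 0.0 (balanced bridge)
print(round(bridge_rx(4.5, 10e3, 10e3, 10e3, 0.27)))  # 7857
```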

🔧 Practical implementation

  • Many DAQ systems provide the partial bridge with known R₁ and R₃ values.
  • R₂ can be adjusted under zero-input condition to balance the bridge (V_diff = 0).
  • After calibration, any non-zero input creates a V_diff that converts first to R_x, then to the physical phenomenon via the sensor's sensitivity.

🛠️ Manual measurement method

  • Set R₁ and R₃ to identical values
  • Use a precision variable resistor for R₂
  • Adjust R₂ until bridge is balanced (V_diff = 0 V)
  • Since balanced and R₁ = R₃, then R_x equals the known value of the variable resistor
  • The variable resistor must have sufficient resolution for the desired R_x measurement accuracy

📊 Example calculations

The excerpt provides worked examples showing:

  • A thermistor bridge with V_BAT = 4.5 V, R₁ = R₂ = 10 kΩ, R₃ = R_x = 10 kΩ at 72°F
  • When R_x is heated to ~93°F, V_diff = 0.27 V
  • Calculated R_x = 7,857 Ω vs measured 7.76 kΩ = -1.24% error
  • This demonstrates the bridge's ability to detect small resistance changes accurately
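
The bridge equations above can be sketched in a few lines of Python; the function names are ours, and the numbers come from the worked thermistor example (V_BAT = 4.5 V, all fixed legs 10 kΩ):

```python
def bridge_vdiff(v_bat, r1, r2, r3, rx):
    """Differential voltage v_a - v_b of a Wheatstone bridge."""
    return v_bat * (r2 * r3 - r1 * rx) / ((r1 + r2) * (r3 + rx))

def bridge_rx(v_bat, r1, r2, r3, v_diff):
    """Solve the bridge equation for the unknown leg R_x."""
    ratio = v_diff / v_bat
    return (r2 * r3 - ratio * (r1 * r3 + r2 * r3)) / (r1 + ratio * (r1 + r2))

# Balanced bridge: R2*R3 == R1*Rx gives V_diff = 0.
assert bridge_vdiff(4.5, 10e3, 10e3, 10e3, 10e3) == 0.0

# Heated thermistor example: V_diff = 0.27 V implies R_x ≈ 7857 Ω.
rx = bridge_rx(4.5, 10e3, 10e3, 10e3, 0.27)
print(round(rx))  # 7857
```

Running the inverse formula on the example's 0.27 V reading reproduces the text's calculated 7,857 Ω.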

4.2 Sensor Types

🧭 Overview

🧠 One-sentence thesis

Different sensor technologies convert physical phenomena—strain, displacement, temperature, pressure, acceleration, and angular position—into electrical signals through specialized transduction mechanisms, each suited to particular measurement ranges, accuracies, and environmental conditions.

📌 Key points (3–5)

  • Strain gauges measure micro-scale bending/flexing by changing resistance when stretched or compressed, typically used in Wheatstone bridge circuits.
  • Displacement sensors (LVDT, potentiometric) measure linear or angular position changes through induction or variable resistance.
  • Temperature sensors (thermocouples, thermistors) differ fundamentally: thermocouples generate voltage from dissimilar metals (μV/°C sensitivity, harsh environments), while thermistors change resistance (higher accuracy, lower temperature range).
  • Common confusion: Thermocouple vs. thermistor—thermocouples work in extreme temperatures but need amplification and are less accurate; thermistors are very accurate (±0.1°C) but limited to ~300°C maximum.
  • Ranging and motion sensors (ultrasonic, laser, accelerometers, encoders) measure distance, vibration, or rotation using time-of-flight, suspended mass deflection, or optical/magnetic pulse counting.

🔧 Strain measurement technology

🔧 How strain gauges work

Strain gauge: a resistive sensor that detects bending or flexing by changing resistance according to R = ρL/A when stretched or compressed.

  • The resistive element changes in three ways under strain:
    • Compression: element thickens (A↑) and shortens (L↓) → resistance decreases (R↓)
    • Tension: element thins (A↓) and lengthens (L↑) → resistance increases (R↑)
    • Piezoresistive effect: resistivity (ρ) itself increases under strain

⚖️ Gauge factor and strain calculation

The gauge factor (GF) describes sensitivity to deformation:

  • For metal strain gauges, GF ≈ 2
  • Formula: GF = (ΔR/R₀) / (ΔL/L)
  • Strain = (Rₓ - R₀) / (R₀ · GF)

Sign convention:

  • Positive strain (ΔR > 0) → tension
  • Negative strain (ΔR < 0) → compression

Example: A strain gauge in a Wheatstone bridge is balanced at no-load by adjusting R₂. When loaded, measure Vₓ to find Rₓ, then calculate strain from the resistance change.
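
A minimal sketch of the strain calculation above. The nominal resistance R₀ = 120 Ω and the loaded reading are illustrative values of our own, not from the text:

```python
def strain_from_resistance(rx, r0, gf=2.0):
    """Strain (ΔL/L) from the measured gauge resistance; GF ≈ 2 for metal gauges."""
    return (rx - r0) / (r0 * gf)

eps = strain_from_resistance(120.12, 120.0)  # gauge stretched slightly
print(f"{eps:.6f}")  # 0.000500 -> positive strain = tension
```

Positive results indicate tension (ΔR > 0), negative results compression, matching the sign convention above.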

🌉 Bridge circuit implementation

  • Strain gauge forms the fourth leg (Rₓ) of a Wheatstone bridge
  • R₂ is often a precision potentiometer adjusted to balance the bridge (Vₓ = 0) under no-load
  • Initial resistance R₀ = R₂ · R₃/R₁ when balanced
  • After loading, use bridge equation to solve for Rₓ from the differential voltage

Don't confuse: The manual balancing method (adjust R₂ until Vₓ = 0 after loading) vs. the voltage measurement method (measure Vₓ and calculate Rₓ from Equation 4.2).

📏 Displacement and position sensors

📏 LVDT (Linear Variable Differential Transformer)

LVDT: a sensor using three coils and induction to measure rectilinear motion with micrometer accuracy over ranges up to several inches.

Operating principle:

  • Central coil energized with AC creates alternating magnetic field in movable ferrite core
  • Core induces voltage in two secondary coils based on overlap amount
  • Differential voltage between secondary coils indicates core position
  • Δx = k · ΔVout (where k is sensitivity in V/mm or V/in per excitation volt)

Key characteristics:

  • Linear ranges from ±1 mm to ±50 cm
  • Resolution essentially infinite (limited only by signal conditioning circuit's voltage detection)
  • Typical linearity: ±0.25% over range
  • No electrical contact between moving element and circuit
  • Robust to extreme temperatures (used in aviation)

Don't confuse: Center position produces 0V differential, but any movement changes induced voltage in both coils—the LVDT measures the difference between the two secondary coils.

🎚️ Potentiometric sensing

A potentiometer is a three-terminal variable resistor:

  • Two end terminals span the full resistive element (fixed resistance)
  • Wiper terminal varies resistance as it moves across element
  • Forms a voltage divider: V₀ = Vs · (x · Rpot)/Rpot = Vs · x

Common implementations:

  • Single-turn rotary: 360° total rotation
  • Multi-turn (e.g., 10-turn): 3600° total rotation for finer resolution
  • Throttle position sensors in vehicles

Example: For a desired voltage range of 6–9V from a 12V supply across a 1000Ω potentiometer, the wiper position x ranges from 0.5 to 0.75 (50% to 75%). A 10-turn potentiometer provides 900° of adjustment vs. 90° for single-turn.
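
The example above reduces to the divider relation V₀ = Vs · x, which a short sketch can verify:

```python
def pot_output(vs, x):
    """Potentiometer output voltage: supply times wiper fraction x (0 to 1)."""
    return vs * x

assert pot_output(12.0, 0.50) == 6.0   # lower end of the desired 6-9 V range
assert pot_output(12.0, 0.75) == 9.0   # upper end

# Degrees of mechanical adjustment for that 25% span:
print(0.25 * 360)    # 90.0  (single-turn)
print(0.25 * 3600)   # 900.0 (10-turn)
```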

📡 Ranging and motion detection

📡 Ultrasonic ranging

Principle: Time-of-flight measurement using sound waves

  • Range = ½ · speed of sound · Δt
  • Speed of sound ≈ 340.29 m/s (varies with temperature: s = 331.5 + 0.6·T(°C))
  • Transmitter and receiver co-located to minimize angle effects
  • Practical range: typically under a few meters

Temperature sensitivity: At 20°C vs. 25°C, a 20 cm measurement differs by ~1 μsec in time—small variation but measurable.
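
A sketch of the time-of-flight relation, assuming Δt is the measured round-trip time; the 1.165 ms value below is a made-up reading chosen to land near 20 cm:

```python
def speed_of_sound(temp_c):
    """Speed of sound in air, m/s, from the linear model s = 331.5 + 0.6*T."""
    return 331.5 + 0.6 * temp_c

def ultrasonic_range(dt_roundtrip, temp_c=20.0):
    """Range = half the round-trip distance covered at the speed of sound."""
    return 0.5 * speed_of_sound(temp_c) * dt_roundtrip

r = ultrasonic_range(1.165e-3, temp_c=20.0)  # ~1.165 ms round trip
print(f"{r:.3f} m")  # 0.200 m
```

Evaluating the same Δt at a different temperature shows why the ambient temperature must be known or controlled.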

🔦 Infrared and laser ranging

| Type | Range | Accuracy | Principle |
| --- | --- | --- | --- |
| IR | 2–80 cm | Limited, nonlinear | LED reflection to phototransistor |
| Ultrasonic | Few meters | Moderate | Sound time-of-flight |
| Laser | 20+ km (military) | High but not sub-mm | Light time-of-flight, Range = ½·c·Δt |

Don't confuse: Laser's much faster speed (c = 299,792,458 m/s) enables long-range measurement but prevents achieving ultrasonic-level precision at short distances.

📳 Accelerometers

Accelerometer: a sensor that measures acceleration (force per unit mass, a = F/m) by detecting displacement of a suspended internal mass.

Transduction methods:

  • Piezoelectric: crystal deformation generates voltage (dynamic process only)
  • Capacitive: mass displacement changes plate separation, altering capacitance (C = εA/d)

Key specifications:

  • Bandwidth: maximum detectable vibration frequency (e.g., 10 Hz bandwidth cannot measure 20 Hz vibration)
  • Sensitivity: typically V/G or mV/G (wider range reduces discrimination ability)
  • Shock rating: maximum instantaneous G's before damage (e.g., from dropping)

Vibration measurement: Accelerometer on a flexing beam detects back-and-forth motion. For a mass-spring-damper system, the acceleration data reveals:

  • Resonance frequency (from peak intervals or spectral analysis)
  • Decay rate (from exponential fit of peak magnitudes)

Don't confuse: Single-supply accelerometers output positive voltage only; no-acceleration state has a DC offset that must be removed before analysis.

🔄 Encoders (angular position/velocity)

Optical encoders:

  • IR LED beam passes through slotted wheel to phototransistor
  • Beam blocked/passed as shaft rotates → square wave output
  • Resolution = 360°/number of slits (e.g., 36 slits = 10°/step)
  • Count pulses for displacement; measure frequency for velocity
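
The pulse-counting arithmetic above can be sketched directly (36 slits gives 10° per pulse):

```python
def encoder_angle(pulses, slits=36):
    """Angular displacement in degrees from a pulse count."""
    return pulses * 360.0 / slits

def encoder_rpm(pulse_freq_hz, slits=36):
    """Shaft speed from pulse frequency: pulses/s -> rev/s -> rev/min."""
    return pulse_freq_hz / slits * 60.0

assert encoder_angle(1) == 10.0    # one slit = 10 degrees of rotation
assert encoder_angle(36) == 360.0  # full revolution
print(encoder_rpm(36.0))  # 60.0 -> 36 pulses/s is exactly 1 rev/s
```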

Magnetic encoders (Hall effect):

  • Hall effect sensor detects magnetic field strength
  • Magnets on rotating wheel generate voltage pulses as they pass sensor
  • Latching digital sensors toggle between 0V and VCC on polarity change
  • Used for vehicle speed sensing (e.g., flywheel with multiple magnets)

Example: Wheel with 4 magnets requires 5 pulses for one complete revolution (starting magnet passes sensor again).

🌡️ Temperature sensing technologies

🌡️ Thermocouples

Thermocouple: two dissimilar metals joined together that generate voltage proportional to temperature due to the thermo-electric (Seebeck) effect.

Basic equation: Vₓ = kT(Tₓ - Tref)

  • Sensitivity kT typically in μV/°C range
  • Requires reference junction (often ice bath at 0°C) or cold-junction compensation
  • Linear approximation often used, but reference charts provide better accuracy
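
A sketch of the linear approximation Vₓ = kT(Tₓ − Tref). The sensitivity 41 µV/°C is a typical Type-K figure we assume for illustration; the text only says kT is in the µV/°C range:

```python
K_T = 41e-6  # V/°C, assumed Type-K sensitivity

def thermocouple_temp(v_meas, t_ref=0.0, k_t=K_T):
    """Junction temperature from measured voltage (ice-bath reference at 0 °C)."""
    return v_meas / k_t + t_ref

v = K_T * (100.0 - 0.0)              # forward model: 100 °C above an ice bath
print(f"{v * 1e3:.2f} mV")           # 4.10 mV -> why amplification is needed
print(round(thermocouple_temp(v), 3))  # 100.0
```

The millivolt-level output for a 100 °C change illustrates the external-amplification requirement listed below.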

Common types and ranges:

| Type | Range (°C) | Accuracy | Materials |
| --- | --- | --- | --- |
| K | −270 to 1260 | ±2.2°C or ±0.75% | Nickel-Chromium/Nickel-Alumel |
| J | −210 to 760 | ±2.2°C or ±0.75% | Iron/Constantan |
| T | −270 to 370 | ±1.0°C or ±0.75% | Copper/Constantan |
| E | −270 to 870 | ±1.7°C or ±0.5% | Nickel-Chromium/Constantan |

Signal conditioning needs:

  • External amplification required (μV-level signals)
  • Lowpass filtering for noise reduction and anti-aliasing
  • Calibration must account for amplification gain

Don't confuse: Some thermocouples are linear over full range (Type E above 0°C), others are not (Types S and B)—use only in guaranteed linear region or apply corrections.

🌡️ Thermistors

Thermistor: a thermally sensitive resistor with pronounced resistance change with temperature.

Two types:

  • NTC (Negative Temperature Coefficient): resistance decreases as temperature increases (inverse relationship)
  • PTC (Positive Temperature Coefficient): resistance increases with temperature (direct relationship)

B-value equation: B(T₁/T₂) = (T₂ × T₁)/(T₂ - T₁) × ln(R₁/R₂)

  • Temperatures in Kelvin (K = °C + 273.15)
  • B-value specified for temperature range (e.g., B₁₀/₁₀₀ = 2552 for 10–100°C)
  • Nominal resistance given at lower temperature
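
The B-value equation can be inverted to predict resistance at a second temperature. The B₁₀/₁₀₀ = 2552 figure is from the text; the 10 kΩ nominal resistance at 10 °C is an assumed value for illustration:

```python
import math

def b_value(t1_k, t2_k, r1, r2):
    """B = (T1*T2)/(T2-T1) * ln(R1/R2), temperatures in kelvin."""
    return (t1_k * t2_k) / (t2_k - t1_k) * math.log(r1 / r2)

def resistance_at(t2_k, t1_k, r1, b):
    """Invert the B equation to predict R2 at temperature T2."""
    return r1 * math.exp(b * (1.0 / t2_k - 1.0 / t1_k))

T1, T2 = 10 + 273.15, 100 + 273.15        # K = °C + 273.15
r2 = resistance_at(T2, T1, 10_000, 2552)  # B(10/100) = 2552
print(round(r2))  # ~1137 -> NTC behavior: resistance falls as temperature rises
assert abs(b_value(T1, T2, 10_000, r2) - 2552) < 1e-6
```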

Comparison with thermocouples:

| Feature | Thermocouple | Thermistor |
| --- | --- | --- |
| Accuracy | Moderate | Very high (±0.1°C) |
| Temperature range | Very wide (to 1700°C) | Limited (~300°C max) |
| Output | Voltage (μV/°C) | Resistance change |
| Linearity | Approximately linear | Nonlinear (exponential) |

Circuit configurations:

  1. Wheatstone bridge (preferred): More sensitive to small resistance changes, better accuracy
  2. Voltage divider (simpler): VT = Vs · Rthermistor/(R₁ + Rthermistor)
    • NTC in lower position: VT decreases as temperature increases (inverse)
    • Swap positions for direct relationship

🛡️ Temperature protection circuits

Comparator-based control:

  • Voltage divider with thermistor feeds comparator
  • Reference voltage (vref) sets threshold
  • Output goes HI when temperature exceeds limit
  • Can control cooling devices or shut down processes

MOSFET switching:

  • Thermistor voltage divider drives MOSFET gate
  • When gate voltage exceeds threshold (~2V for BS170), MOSFET turns on
  • Can control LEDs, relays, or other loads
  • Example: NTC thermistor heated → resistance drops → gate voltage rises → MOSFET conducts
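
A threshold sketch of the MOSFET example above. The divider here puts the NTC thermistor in the upper position, so falling resistance raises the gate voltage; the 5 V supply and 10 kΩ fixed resistor are illustrative values, not from the text:

```python
V_TH = 2.0  # ~2 V gate threshold for a BS170

def gate_voltage(vs, r_thermistor, r_fixed):
    """Divider output with the thermistor on top and the fixed resistor below."""
    return vs * r_fixed / (r_thermistor + r_fixed)

def mosfet_on(vs, r_thermistor, r_fixed=10_000):
    return gate_voltage(vs, r_thermistor, r_fixed) > V_TH

assert not mosfet_on(5.0, 20_000)  # cool: 5*10k/30k ≈ 1.67 V -> off
assert mosfet_on(5.0, 10_000)      # heated NTC: 5*10k/20k = 2.50 V -> on
```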

🔩 Pressure and specialized sensors

🔩 Pressure sensor types and technologies

Pressure measurement categories:

  • Gauge pressure: relative to atmospheric pressure
  • Absolute pressure: relative to perfect vacuum
  • Vacuum pressure: below atmospheric pressure

Transduction technologies:

  1. Capacitive: Parallel-plate capacitor where pressure deflects one plate, changing capacitance (C = εA/d)
  2. Piezoelectric: Crystal deformation generates voltage during pressure change (dynamic process only—requires integration for long-term measurement)
  3. Strain gauge: Pressure-sensing diaphragm deflection measured by strain gauge
  4. LVDT-based: Ferrite core coupled to pressure-sensing surface tracks deflection

Applications:

  • Tank fluid level (pressure = fluid weight/volume)
  • Automotive: MAP sensor (manifold absolute pressure for fuel control), tire pressure monitoring, oil pressure
  • Aviation: altitude, control surface position (temperature-robust)
  • Rotating machinery balance (spring-loaded LVDT detects eccentricity)

Don't confuse: Piezoelectric pressure sensors only generate voltage during deformation—stable pressure produces no output unless integrated over time.

🔌 Practical implementation considerations

🔌 Signal conditioning requirements

Different sensors need different conditioning:

  • Thermocouples: Amplification (μV signals), lowpass filtering, cold-junction compensation
  • Strain gauges: Bridge balancing, precision voltage measurement
  • Thermistors: Bridge or voltage divider, comparator for protection circuits
  • Accelerometers: DC offset removal (single-supply), lowpass filtering, gain adjustment

🎯 Sensor selection criteria

When choosing sensors, consider:

  • Range: Physical phenomenon limits (temperature, displacement, acceleration)
  • Accuracy: Thermocouple ±2°C vs. thermistor ±0.1°C
  • Environment: Harsh conditions favor thermocouples, LVDTs; controlled environments allow thermistors
  • Resolution: LVDT essentially infinite (circuit-limited); encoder depends on slot count
  • Response time: Piezoelectric for dynamic only; capacitive for static and dynamic
  • Cost: IR ranging most economical; laser most expensive but longest range

⚠️ Common measurement pitfalls

  • Temperature effects: Ultrasonic ranging speed varies with temperature; must account for or control environment
  • Linearity assumptions: Not all sensors linear over full range (thermocouples, thermistors)—verify operating region
  • Calibration: Amplification gain must be incorporated; reference points needed (ice bath for thermocouples)
  • Dynamic range matching: ADC resolution wasted if sensor range doesn't match input voltage range
  • Noise sensitivity: Low-level signals (thermocouples) require careful shielding and filtering

4.3 Conclusion

🧭 Overview

🧠 One-sentence thesis

The interface between physical phenomena and computer-based measurement systems is critical, and selecting the right sensor depends on understanding sensitivity, bandwidth, and environmental constraints to achieve the best measurement results.

📌 Key points (3–5)

  • Core role of sensors: Sensors transduce nearly any physical phenomenon into an electrical signal, forming the bridge between the physical world and computers.
  • Two key features for users: sensitivity (how many volts or amps per unit of physical dimension) and bandwidth (how quickly the sensor responds to external changes).
  • Environmental considerations: Temperature, pressure, and vibration in the deployment environment must be addressed for optimal sensor selection.
  • Common confusion: Sensitivity vs. bandwidth—sensitivity is about signal strength per unit input; bandwidth is about response speed to changes.
  • Why it matters: Good sensor selection is essential for achieving the best possible measurement results in ubiquitous computer-based acquisition systems.

🔌 The sensor interface role

🔌 What sensors do

Sensors transduce nearly any physical phenomenon into an electrical signal.

  • They act as the interface between the physical world and computer-based measuring systems.
  • The excerpt emphasizes this interface is a "critical component" of the overall measurement process.
  • Example: A temperature sensor converts heat into voltage; a pressure sensor converts force into current.

🖥️ Connection to computer systems

  • The excerpt notes the "ubiquitous nature of computer-based acquisition systems."
  • Sensors enable computers to measure and respond to physical changes.
  • Without proper sensors, the measurement chain breaks down regardless of computational power.

📏 Key sensor characteristics

📏 Sensitivity

Sensitivity: how many volts or amps are generated per unit of physical dimension.

  • This tells you the signal strength you get for each unit of the physical quantity being measured.
  • Higher sensitivity means a larger electrical signal for the same physical change.
  • Example: A sensor with 10 mV per degree Celsius produces 100 mV for a 10°C change; one with 1 mV per degree produces only 10 mV for the same change.

⚡ Bandwidth

Bandwidth: how quickly the sensor responds to external changes.

  • This measures the sensor's speed in tracking changes in the physical phenomenon.
  • Higher bandwidth means the sensor can follow faster changes.
  • Don't confuse with sensitivity: a sensor can have high sensitivity (strong signal) but low bandwidth (slow response), or vice versa.

| Feature | What it measures | Example implication |
| --- | --- | --- |
| Sensitivity | Signal strength per unit input | More volts/amps → easier to detect small changes |
| Bandwidth | Response speed | Higher bandwidth → can track rapid fluctuations |

🌡️ Environmental factors

🌡️ Deployment conditions

  • The excerpt lists three key environmental factors: temperature, pressure, and vibration.
  • These conditions affect sensor performance and durability.
  • "Consideration for the environment in which the sensor will be placed... must be addressed for the best sensor selection."

🔍 Why environment matters

  • A sensor that works well in a laboratory may fail in harsh field conditions.
  • Example: A sensor rated for room temperature may give inaccurate readings or break down in extreme heat or cold.
  • Matching sensor specifications to the actual deployment environment is part of "good sensor selection."

🎯 Achieving best measurement results

🎯 Importance of sensor selection

  • The excerpt concludes that "the reader should thoroughly appreciate the importance of good sensor selection."
  • Good selection means considering sensitivity, bandwidth, and environmental constraints together.
  • Poor sensor choice undermines the entire measurement system, even if other components are excellent.

🔧 Practical takeaway

  • The preceding sections (referenced but not included in this excerpt) provide "an understanding of the fundamentals of signal conditioning and data acquisition."
  • While not exhaustive, they cover enough to guide informed sensor selection.
  • The goal: achieve "the best possible measurement results" by matching sensor capabilities to application needs.

5.1 Statistical Distributions

🧭 Overview

🧠 One-sentence thesis

Statistical distributions describe how measurements spread across possible values, and understanding them is essential because all computer-based measurement systems collect discrete samples that contain both random and deterministic errors.

📌 Key points (3–5)

  • Continuous vs discrete domains: Real-world phenomena exist continuously in time, but computer systems can only store discrete samples at specific moments.
  • Why distributions matter: Every measurement has error (noise + systemic), making the collection of measurements a random process that needs statistical analysis.
  • Common confusion: Sample vs population—a sample is the finite set of measurements we collect; the population is the entire (often infinite) set of all possible measurements.
  • Key distribution types: Uniform (equal probability for all values) and normal/Gaussian (clustered around a mean) are the most common in engineering applications.
  • Probability density functions: The pdf shows how likely each outcome is; the area under the curve must equal 1 because one outcome must always occur.

🔄 Continuous vs Discrete Time Domains

⏱️ Continuous time signals

Continuous time, x(t): a signal that exists at every moment in time, where t is an element of the real numbers.

  • The excerpt uses a car battery voltage example with three phases: before starting (constant ~12.546 V), during starting (linear rise), and after starting (constant ~14.4 V).
  • In the continuous domain, the battery holds a voltage at every moment, even between measurements.
  • Mathematical models can describe continuous functions, but computers cannot store infinite data points.

🔢 Discrete time signals

Discrete time, x[n]: a signal represented by samples taken at specific moments, indexed by n.

  • When a data acquisition system (DAQ) samples at 10 Hz, it collects 10 samples per second.
  • The excerpt shows that moving from continuous x(t) to discrete x[n] changes the mathematical model: time variable t becomes sample index n.
  • Example: Phase 1 in continuous time is "0 < t < 3" but in discrete time becomes "0 < n < 30" (30 samples over 3 seconds at 10 Hz).
  • Don't confuse: The physical phenomenon remains continuous; only our measurement and storage are discrete.
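
The battery example above can be sketched as a sampled model: n = t · fs, so the phase-1 boundary 0 < t < 3 becomes 0 < n < 30 at 10 Hz. The 1-second linear rise between 12.546 V and 14.4 V is our assumption; the text only names the three phases:

```python
FS = 10.0  # samples per second

def battery_model(t):
    """Piecewise continuous-time model x(t) of the battery voltage."""
    if t < 3.0:
        return 12.546  # phase 1: before starting
    if t < 4.0:
        return 12.546 + (14.4 - 12.546) * (t - 3.0)  # phase 2: linear rise
    return 14.4        # phase 3: after starting

# Discrete-time signal x[n]: evaluate the model at t = n / fs.
x = [battery_model(n / FS) for n in range(60)]  # 6 s of samples
assert x[29] == 12.546  # n = 29 -> t = 2.9 s, still phase 1
assert x[59] == 14.4    # n = 59 -> t = 5.9 s, phase 3
```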

📊 Why measurements don't match models

  • The excerpt shows that actual measured data do not fit the mathematical model perfectly.
  • Each measurement has error—a combination of random (noise) and deterministic (systemic) elements.
  • Even when measuring a "constant" 12.546 V battery, zooming in reveals that each sample is slightly different.
  • This measurement uncertainty makes the overall collection of measurements a random process requiring statistical analysis.

📚 Core Statistical Definitions

📦 Sample

Sample: the collection of measurements actually stored by the measurement system.

  • In the battery example, if we measure once per minute for one hour, we store 60 measurements—this is the sample.
  • The battery voltage exists continuously for all 3600 seconds, but we only capture 60 discrete points.
  • Computer-based systems compute statistical parameters over the sample, which may or may not directly relate to the entire population.

🌍 Population

Population: the entire set of all possible measurements.

  • In a continuous system, the population is infinite because time t is an element of the real numbers.
  • We cannot store a measurement for every possible time t—that would require infinite storage and infinite time.
  • By definition, any computer-based measurement system is discrete, not infinite.
  • The Nyquist Sampling Criterion allows us to infer properties of the continuous signal from discrete samples if we sample at least twice the maximum frequency.
  • We often use a sample set to infer statistical properties of a population (in engineering, biology, epidemiology, etc.).

🎯 Sample space

Sample space: the set of all possible values in the outcome of a measurement.

  • The excerpt uses the Fluke 289 Multimeter example: at 50 V range, resolution is ±0.001 V; at 5 V range, resolution is ±0.0001 V.
  • When measuring a 12 V battery on the 50 V range, possible values are: 5.001, 5.002, 5.003...49.998, 49.999, 50.000.
  • However, meter accuracy is 0.025%, so measuring an expected 12.6 V battery gives a range between 12.560 V and 12.640 V.
  • The sample space contains many more possible values than we would ever expect to see in actual measurements.
  • Key point: Sample space = all possible outcomes; actual sample = subset of outcomes that occur.

📈 Types of Distributions

📐 What a distribution describes

Distribution of a random variable: describes how the random variable is spread out over the sample space.

  • Many different distributions occur in engineering and scientific applications.
  • The excerpt focuses on uniform and normal distributions as the most common.

⚖️ Uniform distribution

  • Each value in the sample space has equal probability.
  • The excerpt shows LabVIEW's basic random number generator (dice icon) produces uniform distribution between 0 and 1.
  • Example: In a histogram of uniform distribution exam scores, values are spread relatively equally across the range (though small sample sizes may not look perfectly uniform).

🔔 Normal (Gaussian) distribution

  • Values cluster around a central mean with decreasing probability toward the extremes.
  • Also called Gaussian distribution.
  • The excerpt shows LabVIEW's Gaussian White Noise VI produces normal distribution; it requires an input for standard deviation, and the mean can be added.
  • Example: In a histogram of normal distribution exam scores, most values fall in a narrower space defined by standard deviation, with the histogram outline showing the characteristic bell curve shape.
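
A Python analog of the two LabVIEW generators described above: uniform on [0, 1) versus Gaussian with a chosen mean and standard deviation (the score parameters are illustrative):

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is repeatable

uniform_scores = [random.random() for _ in range(10_000)]
normal_scores = [random.gauss(75.0, 8.0) for _ in range(10_000)]

# Uniform: every value in [0, 1) is equally likely -> mean near 0.5.
assert 0.45 < statistics.mean(uniform_scores) < 0.55

# Normal: values cluster around the mean, spread set by sigma.
assert 74.0 < statistics.mean(normal_scores) < 76.0
assert 7.0 < statistics.stdev(normal_scores) < 9.0
```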

🎭 Other distribution types

| Distribution | Description from excerpt |
| --- | --- |
| Poisson | Inherently discrete; describes probability of some number of events in a fixed time interval; used in industrial engineering for queuing studies |
| Rayleigh | Used in RF communication systems; describes probability of signal strength along non-uniform paths (e.g., Rayleigh Fading in troposphere/ionosphere) |
| Bimodal | Two distinct peaks; common in academia when a large group "gets it" and another large group does not |

  • The excerpt notes hundreds of other distributions exist and provides a Wikipedia link for reference.
  • Don't confuse: Different data sets are better represented by different distributions; not everything is normal or uniform.

🎲 Probability Concepts

🎯 Probability and events

Probability: describes how likely an event is to occur.

  • An event is a specific outcome or set of outcomes.
  • Probability quantifies the likelihood of that event happening.

📊 Probability density function (pdf)

Probability density function (pdf): uses the x-axis for each possible outcome and the y-axis to show the likelihood of each outcome.

  • The area under the pdf curve (the integral of the pdf) must equal 1, because one of the possible outcomes must always occur.
  • The excerpt provides a detailed example for normal distribution.

📦 Understanding the normal distribution pdf

Median and interquartile region (IQR):

  • The median is the middle point of both the pdf and the box plot.
  • The IQR contains 50% of measurements centered about the median.
  • For normal distribution, the middle 50% occurs from −0.6745σ to +0.6745σ (where σ is standard deviation).

Standard deviation regions:

  • The excerpt shows what percentage of measurements fall within multiples of standard deviation from the median.
  • The pdf is symmetric about the mean for normal distribution.
  • Example: The lower plot in the excerpt shows what percent of the distribution occurs within ±1 standard deviation.
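
Both claims can be checked numerically with the standard normal CDF, Φ(z) = ½(1 + erf(z/√2)):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The pdf integrates to 1: the CDF spans 0 to 1 across the whole axis.
assert phi(-10.0) < 1e-9
assert abs(phi(10.0) - 1.0) < 1e-9

# Middle 50% (the IQR) lies within ±0.6745 sigma of the mean.
print(f"{phi(0.6745) - phi(-0.6745):.4f}")  # 0.5000

# ±1 sigma captures about 68% of measurements.
print(f"{phi(1.0) - phi(-1.0):.4f}")  # 0.6827
```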

📏 Statistical Measures

🏷️ Parameters and moments

Parameter: each type of statistical attribute.

  • Common parameters of normal distribution: mean, standard deviation, and variance.
  • The excerpt mentions moment generation can produce any number of parameters for a continuous distribution (beyond scope of text).
  • For most applications, mean and variance are the two main parameters—they are the first two moments of the normal distribution.
  • Standard deviation is simply the square root of variance.

📍 Mean value

Sample mean: the average of the discrete set of data points in our sample.

For discrete sample data:

  • Computed by: sum of all x_i divided by k (the number of samples).
  • In the battery example, the central tendency (mean) is 12.546 Volts.

For discrete population:

  • Uses the same formula, but k becomes K (the full population count, not just a sample subset).
  • The symbol μ (mu) represents the population mean.

For continuous population:

  • The true population mean μ is more difficult to compute.
  • If f(x_i) describes the probability of a continuous distribution on some interval between x and x + Δx, then the population mean is computed by: the integral from negative infinity to positive infinity of x times f(x) dx.
  • The excerpt notes that the function f(x) will be explained more thoroughly later.

📊 Other measures of central tendency

  • The excerpt mentions the median value as another common measure, particularly useful in discrete sample sets.
  • (The excerpt ends before completing the discussion of median and other measures.)

5.2 Statistical Measures

🧭 Overview

🧠 One-sentence thesis

Statistical parameters—especially mean, standard deviation, and variance—quantify the central tendency and spread of measurement data, enabling us to describe both where data clusters and how much it varies around that center.

📌 Key points (3–5)

  • Parameters describe distributions: mean, standard deviation, and variance are the most common parameters; they constitute the first two "moments" of a normal distribution.
  • Mean measures central tendency: the mean is the average value; it can be computed for samples or populations, discrete or continuous.
  • Standard deviation and variance measure spread: standard deviation quantifies how much individual values deviate from the mean; variance is its square.
  • Common confusion—sample vs population: sample statistics (x̄, S) are computed from a subset of data; population parameters (μ, σ) describe the entire population; in practice, computer-based measurements almost always compute sample statistics.
  • Alternative central-tendency measures: median (center value) and mode (most frequent value) can better represent central tendency when outliers skew the mean.

📊 Core concepts

📊 What a parameter is

Parameter: each type of statistical attribute of a distribution.

  • The excerpt lists mean, standard deviation, and variance as common parameters of the normal distribution.
  • These are not the only parameters; any number can be generated using "moment generation," but mean and variance are the two main ones for most applications.
  • Standard deviation is simply the square root of the variance.

🎯 Mean as central tendency

Mean: a measure of the tendency of repeated measures.

  • The mean represents where data clusters on average.
  • Example: in the battery measurement example, the central tendency (mean) is 12.546 Volts.
  • The excerpt emphasizes that the mean is computed from the data points available, whether a sample or the full population.

🔢 Computing the mean

🔢 Sample mean (discrete data)

  • Sample mean is computed from a subset of data points.
  • Formula (in words): sum all data points and divide by the number of points in the sample (k).
  • Notation: x̄ (x-bar) denotes the sample mean.

🔢 Population mean (discrete)

  • When you have the full population count (K, not just a sample subset k), the formula is the same: sum all data points and divide by the total count.
  • Notation: μ (mu) denotes the population mean.

🔢 Population mean (continuous)

  • For a continuous distribution, the true population mean is harder to compute.
  • It requires integrating over the entire range: multiply each value x by its probability density f(x) and integrate from negative infinity to positive infinity.
  • The function f(x) describes the probability of the continuous distribution on an interval.

📐 Other measures of central tendency

📐 Median value

Median: the value at the center of a sample set.

  • In discrete data, the median may better represent central tendency than the mean, especially when outliers are present.
  • Example: an exam had a mean of 78.2 but a median of only 71 because two exceptional students aced it while most others performed below that level; eliminating the two high scores brought the mean much closer to the median.
  • Don't confuse: the median is not affected by extreme values the way the mean is.

📐 Mode

Mode: the value that occurs the greatest number of times in a sample set.

  • Some data sets may not have a specific mode (e.g., a uniform distribution), but most distributions do.
  • The mode identifies the most frequent value, not the average or center.

📏 Measuring spread: standard deviation and variance

📏 What standard deviation measures

Standard deviation: a measure of how each value deviates from the mean value.

  • It quantifies the spread of data around the central value.
  • The excerpt defines deviation for a single data point as d_i = x_i − x̄ (the difference between the data point and the mean).
  • Why square the deviation? Deviations can be positive or negative; squaring makes all values positive so we can track deviations on either side of the mean, then we take the square root at the end.

📏 Sample standard deviation (S)

  • Formula (in words): for each data point, compute its squared deviation from the sample mean, sum all squared deviations, divide by (k − 1), then take the square root.
  • Notation: S denotes sample standard deviation.

📏 Sample variance (S²)

Sample variance: the sample standard deviation squared.

  • Formula (in words): sum all squared deviations from the sample mean and divide by (k − 1).
  • Variance is expressed in squared units (e.g., volts²).
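
The sample formulas above, written out and checked against Python's `statistics` module (which also divides by k − 1); the voltage readings are made-up illustrative data:

```python
import math
import statistics

def sample_variance(xs):
    """Sum of squared deviations from the sample mean, divided by (k - 1)."""
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1)

def sample_stdev(xs):
    """Sample standard deviation: square root of the sample variance."""
    return math.sqrt(sample_variance(xs))

data = [12.001, 11.999, 12.000, 12.002, 11.998]  # made-up voltage readings
assert abs(sample_variance(data) - statistics.variance(data)) < 1e-12
assert abs(sample_stdev(data) - statistics.stdev(data)) < 1e-12
print(f"S = {sample_stdev(data):.6f} volts")
```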

📏 Population standard deviation (σ) and variance (σ²)

  • For discrete populations, substitute μ (population mean) for x̄ (sample mean) in the formulas.
  • Notation: σ (sigma) for population standard deviation, σ² for population variance.
  • For continuous populations, the population variance is computed by integrating the squared deviation (x − μ)² multiplied by the probability density f(x) over the entire range.
  • Population standard deviation is the square root of population variance.

🖥️ Sample vs population in practice

🖥️ Why we compute sample statistics

  • Computer-based measurement systems almost always compute sample-based statistics (x̄, S, S²).
  • We may infer population-based values from our data, but the actual computations are sample-based.
  • Don't confuse: even though we may want to know population parameters, we work with sample parameters in practice.

🖥️ Example: battery voltage measurements

The excerpt provides a table of computed sample parameters for a battery voltage data set:

| Parameter | Symbol | Value |
| --- | --- | --- |
| Mean | x̄ | 12.0002 volts |
| Median | x_median | 12.0003 volts |
| Mode | x_mode | 11.9971 volts |
| Standard Deviation | S | 0.0012 volts |
| Variance | S² | 0.0000014766 volts² |
  • All values are sample statistics, not population parameters.
  • The mean, median, and mode are very close, suggesting a symmetric distribution without strong outliers.
  • The small standard deviation (0.0012 volts) indicates that measurements cluster tightly around the mean.

🩺 Application example: signal averaging

🩺 ECG signal averaging

  • The excerpt mentions an ECG (electrocardiogram) example where repeated acquisitions are averaged to improve signal-to-noise ratio.
  • Subtle cardiac signals may be too small to see in a single measurement, so signal averaging was developed.
  • Method: acquire many ECG samples (e.g., 200), time-align each measurement, and compute the average on a point-by-point basis.
  • This assumes each measurement can be aligned so the average is taken over the same point in time for the data set.
  • Example: the bottom trace in the figure represents the average of each separate ECG acquisition, revealing signals that were hidden by noise in individual traces.
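The point-by-point averaging step can be sketched with synthetic data. This is not real ECG: the narrow bump, its 0.2 amplitude, and the noise level (σ = 0.5) are invented so the effect is visible — averaging 200 aligned epochs shrinks the noise by roughly √200 while the repeating bump survives.

```python
import random
import math

random.seed(0)

N = 100          # samples per time-aligned epoch
EPOCHS = 200     # number of repeated acquisitions

def clean_signal(n):
    # Hypothetical subtle waveform: a narrow bump centered at sample 50.
    return 0.2 * math.exp(-((n - 50) ** 2) / 20.0)

# Acquire EPOCHS time-aligned measurements, each with additive noise.
acquisitions = [
    [clean_signal(n) + random.gauss(0, 0.5) for n in range(N)]
    for _ in range(EPOCHS)
]

# Point-by-point average: sample n of every epoch is averaged together,
# which assumes the epochs are aligned to the same point in time.
average = [sum(epoch[n] for epoch in acquisitions) / EPOCHS for n in range(N)]

# The bump at sample 50 emerges from noise that hid it in any single epoch.
print(average[50], max(average))
```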

Measurement Spread

5.3 Measurement Spread

🧭 Overview

🧠 One-sentence thesis

Standard deviation and variance quantify how spread out data points are around the mean, enabling better assessment of measurement quality and noise reduction in techniques like signal averaging.

📌 Key points (3–5)

  • What spread measures: how far individual data points deviate from the central value (mean), not just the central tendency itself.
  • Standard deviation vs variance: standard deviation is the square root of variance; variance is the average of squared deviations from the mean.
  • Sample vs population: sample statistics use x̄ and S; population statistics use μ and σ; computer-based measurements almost always compute sample parameters.
  • Common confusion: deviation terms are squared (then square-rooted) because deviations can be positive or negative—squaring makes all values positive so they don't cancel out.
  • Why it matters: variance serves as a proxy for noise level in signal-averaging techniques, allowing rejection of noisy measurements that would degrade the average.

📏 Core spread concepts

📐 What standard deviation measures

Standard deviation: a measure of how each value deviates from the mean value.

  • It quantifies the "spread" of data around the central tendency.
  • Not about the mean itself, but about how tightly or loosely data cluster around it.
  • The excerpt defines deviation for a single data point as: deviation = data point minus mean.

🔢 Why deviations are squared

  • Deviations can be positive (above the mean) or negative (below the mean).
  • If you simply summed deviations, positive and negative values would cancel each other out.
  • Solution: square each deviation to make all values positive, then take the square root at the end to return to the original units.
  • Example: A data point 2 units below the mean and another 2 units above both contribute equally to spread, so squaring ensures both count as +4 in the calculation.

🧮 Standard deviation formula (sample)

For a sample of k data points:

  • Standard deviation S = square root of [ sum of (each data point minus mean)² divided by (k − 1) ]
  • The numerator sums all squared deviations; the denominator (k − 1) averages them.
  • The excerpt uses (k − 1) rather than k for sample calculations (Bessel's correction, a standard statistical adjustment that compensates for estimating the mean from the same sample).

📊 Variance: the squared version

Sample variance: the sample standard deviation squared, or S².

  • Variance = sum of (each data point minus mean)² divided by (k − 1).
  • It is simply standard deviation without the final square root step.
  • Units are squared (e.g., volts²), which can be less intuitive than standard deviation (in original units like volts).

🔬 Sample vs population parameters

🧪 Sample statistics (computer-based measurements)

| Parameter | Symbol | Formula structure |
| --- | --- | --- |
| Sample mean | x̄ | Average of data points |
| Sample standard deviation | S | Square root of [ sum of squared deviations / (k − 1) ] |
| Sample variance | S² | Sum of squared deviations / (k − 1) |
  • The excerpt emphasizes: "computer-based computations that we make are almost always the sample-based computations."
  • Sample parameters are calculated from actual collected data.

🌍 Population statistics (theoretical)

| Parameter | Symbol | Formula structure |
| --- | --- | --- |
| Population mean | μ | True mean of entire population |
| Population standard deviation | σ | Square root of population variance |
| Population variance | σ² | Integral of (x − μ)² times probability density function over all x |
  • Population variance uses an integral over the entire range (from negative infinity to positive infinity) of (x − μ)² times the probability density function f(x).
  • Population standard deviation is the square root of population variance.
  • Don't confuse: sample parameters are computed from finite data sets; population parameters describe the theoretical underlying distribution.

🔄 When to use which

  • Sample parameters: when working with actual measurements (almost all practical cases).
  • Population parameters: when inferring or modeling the true underlying distribution.
  • The excerpt notes: "we may infer population-based values from our data," but computations are sample-based.

🩺 Real-world application: ECG signal averaging

🫀 The signal-averaging technique

  • Goal: improve signal-to-noise ratio in ECG measurements where subtle signals are too small to see in a single acquisition.
  • Method: acquire multiple ECG beats, time-align them, and average corresponding points across all beats.
  • Original approach: simply average 200 ECG samples.
  • Improved approach: use variance to decide whether to include each new beat.

📉 Using variance to filter noisy beats

  • Compute variance in a "quiescent" (quiet) portion of the ECG cycle for each new beat.
  • Decision rule:
    • If variance decreases with the new beat → include it in the average (improves signal-to-noise).
    • If variance increases with the new beat → discard it (would degrade the average).
  • Example from the excerpt:
    • First 20 beats: mean variance = 0.013424 volts².
    • Adding the 21st beat: mean variance increases to 0.014316 volts² → reject the 21st beat.
  • The excerpt notes: "variance [is] our proxy estimate of the noise level."
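The accept/reject rule can be written as a small helper. The history of 20 quiet-region variances mirrors the excerpt's 0.013424 figure; the two candidate-beat variances are hypothetical values chosen to trigger each branch of the rule.

```python
def accept_beat(quiescent_variances, new_variance):
    """Include the new beat only if the running mean variance
    (our proxy estimate of the noise level) does not increase."""
    current_mean = sum(quiescent_variances) / len(quiescent_variances)
    candidate = quiescent_variances + [new_variance]
    candidate_mean = sum(candidate) / len(candidate)
    return candidate_mean <= current_mean

# 20 quiet-region variances averaging 0.013424, as in the excerpt's scenario.
history = [0.013424] * 20
print(accept_beat(history, 0.0322))   # noisy 21st beat raises the mean -> reject
print(accept_beat(history, 0.0100))   # quieter beat lowers the mean -> accept
```

With the noisy candidate, the running mean rises to roughly 0.0143, matching the excerpt's rejected-beat example.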

🔍 Why variance works as a noise proxy

  • Variance quantifies how much data points scatter around the mean.
  • Higher variance in the quiet region = more noise.
  • By monitoring variance, the system can automatically exclude excessively noisy measurements.
  • Don't confuse: variance here is not measuring the ECG signal itself, but the consistency (noise level) of the measurement in a region where the signal should be stable.

📋 Example: battery voltage statistics

The excerpt provides a worked example for a battery sample set (from Figure 5.1):

| Parameter | Symbol | Value |
| --- | --- | --- |
| Mean | x̄ | 12.0002 volts |
| Median | x_median | 12.0003 volts |
| Mode | x_mode | 11.9971 volts |
| Standard Deviation | S | 0.0012 volts |
| Variance | S² | 0.0000014766 volts² |
  • All three central tendency measures (mean, median, mode) are very close, suggesting a symmetric distribution without major outliers.
  • Standard deviation of 0.0012 volts indicates data points are tightly clustered around the mean.
  • Variance is in squared units (volts²), making it less intuitive but mathematically useful.

Probability Distributions

5.4 Probability Distributions

🧭 Overview

🧠 One-sentence thesis

Probability distributions describe how measurement data are spread across the sample space, with the normal and uniform distributions being the most important for measurement and instrumentation, while Poisson and binomial distributions address discrete event counting.

📌 Key points (3–5)

  • What distributions reveal: the manner in which data are distributed across the sample space gives information about the underlying statistics and the probability of each sample occurring.
  • Histograms vs continuous distributions: histograms are discrete summaries of actual data divided into bins, while probability density functions (pdfs) are continuous mathematical models that describe probabilities over the entire sample space.
  • Normal vs uniform: normal (Gaussian) distributions are most common in measurement noise and many natural processes, whereas uniform distributions assign equal probability to every value in the sample space.
  • Common confusion: discrete vs continuous—normal distributions are inherently continuous, but discretized measurements can approximate them; uniform distributions can be either continuous or discrete (e.g., dice).
  • Why it matters: understanding distributions helps model noise processes, design filters, perform quality control, and predict event probabilities in engineering systems.

📊 Building histograms from data

📊 What a histogram is

Histogram: a plot of the number of occurrences of each result within the sample space, computed from actual data.

  • Histograms are discrete by nature because they are generated from discrete data sets.
  • The underlying process being measured is often continuous, so histograms approximate continuous distributions.

🗂️ Using bins to organize data

  • To simplify histogram construction, data are assigned to bins that cover subsets of the sample space.
  • Example: measuring a 3.3 Volt battery over a 0–5 Volt range, bins might be 0.1 Volts wide (first bin: 0 to <0.1 V, second bin: 0.1 to <0.2 V, etc.).
  • The excerpt shows 1024 measurements divided into 50 bins (~0.1 V per bin), producing a histogram that approximates a Gaussian distribution.
  • Don't confuse: a histogram based on a finite number of data points will not perfectly match the theoretical continuous distribution; more data points improve the match.
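The binning procedure is easy to sketch directly: each reading is mapped to a fixed-width bin index over the 0–5 V sample space. The readings here are hypothetical; a real data set (like the 1024 measurements in the excerpt) would fill out the Gaussian shape.

```python
# Counting occurrences into fixed-width bins over a 0-5 V range,
# 0.1 V per bin (50 bins), as in the battery example.
def histogram(data, lo=0.0, hi=5.0, nbins=50):
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for x in data:
        if lo <= x < hi:                      # values outside the range are dropped
            counts[int((x - lo) / width)] += 1
    return counts

readings = [3.31, 3.28, 3.35, 3.32, 3.29, 3.31, 3.37, 0.05]
counts = histogram(readings)
# Bin 33 covers 3.3 to <3.4 V; bin 0 covers 0 to <0.1 V.
print(counts[33], counts[32], counts[0])
```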

🎯 Probability density functions (pdfs)

  • Probability distributions often have mathematical models that describe the probability of specific values or ranges.
  • Graphing these probabilities over the sample space produces the probability density function (pdf).
  • The area under the entire pdf curve is always 1 (certainty that some value in the sample space will occur).

🔔 Normal (Gaussian) distribution

🔔 What the normal distribution is

Normal distribution (also called Gaussian distribution): the most common distribution in technical fields, describing many natural processes.

  • Example from the excerpt: heights of men and women in the US follow a normal distribution.
    • Women: average 64 inches, standard deviation 3.5 inches.
    • Men: average 70 inches, standard deviation 4 inches.
  • The difference in standard deviations affects the width and height of the distribution curve.

📐 Mathematical form

The probability density function is:

  • p(x) = (1 / (σ √(2π))) × e^(−(x − μ)² / (2σ²))
  • μ (population mean): centers the distribution on the x-axis.
  • σ (population standard deviation): describes how spread out the central region is; higher σ makes the peak lower and the central lobe wider.

📏 Standard deviation ranges

| Range | Percentage of population |
| --- | --- |
| μ ± σ | ~68.26% |
| μ ± 2σ | ~95.5% |
| μ ± 3σ | ~99.7% |
  • These probabilities are found by integrating the pdf over the specified range.
  • Example: the probability that a randomly chosen item falls within μ − σ to μ + σ is approximately 0.6826.
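Integrating the Gaussian pdf over μ ± mσ reduces to the error function: P = erf(m/√2). A quick sketch recovers the table's percentages without numerical integration.

```python
import math

def within_m_sigma(m):
    """P(mu - m*sigma < X < mu + m*sigma) for a normal distribution.
    Integrating the Gaussian pdf over that range gives erf(m / sqrt(2))."""
    return math.erf(m / math.sqrt(2))

for m in (1, 2, 3):
    print(f"mu +/- {m} sigma: {within_m_sigma(m):.4%}")
```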

🔊 Normal distribution in measurement noise

  • Many noise processes in engineering have a normal distribution.
  • Example: zero-mean, white, Gaussian (ZMWG) noise in ECG recordings.
    • Zero-mean: μ = 0.
    • White: fills the spectrum within the sampling criterion (e.g., 0 to 500 Hz if f_max = 500 Hz).
    • Gaussian: distributed normally.
  • Thermal noise in electronics is usually modeled with a normal distribution.
  • Some ideal filters (e.g., Kalman Filter) are only ideal when additive noise is Gaussian.
  • Don't confuse: the normal distribution is inherently continuous, not discrete, but discretized measurements can approximate it well.

⚖️ Uniform distribution

⚖️ What the uniform distribution is

Uniform distribution: every value in the sample space is equally probable; the probability is the same for all possible samples.

  • The probability density function is a horizontal line over the sample space.
  • Mathematically, for a sample space from a to b:
    • Pr(X) = 0 if x < a
    • Pr(X) = 1 / (b − a) if a ≤ x ≤ b
    • Pr(X) = 0 if x > b

📈 Continuous uniform distribution

  • Example from the excerpt: sample space between 3 and 8.
  • Pr(X) = 1 / (8 − 3) = 0.2 for all values between 3 and 8.
  • Probability is zero outside this interval.
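The piecewise definition maps directly to a small function, using the 3-to-8 sample space from the example:

```python
def uniform_pdf(x, a, b):
    """Probability density of a continuous uniform distribution on [a, b]:
    1/(b - a) inside the interval, zero outside."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

# Sample space from 3 to 8, as in the excerpt's example.
print(uniform_pdf(5.0, 3, 8))   # 1/(8-3) = 0.2
print(uniform_pdf(9.0, 3, 8))   # outside the interval -> 0
```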

🎲 Discrete uniform distribution

  • Uniform distributions do not have to be continuous.
  • Example: tossing a fair die.
    • Sample space: {1, 2, 3, 4, 5, 6} (discrete and finite).
    • Each face is equally likely, so probability of each outcome is 1/6.
    • Plotted as discrete probabilities; no possibility of values between integers.
    • Called a probability mass function (pmf) for discrete data.
  • Don't confuse: discrete uniform distributions have zero probability between values in the sample space, not just outside it.

🎰 Where uniform distributions appear

  • Common in probability theory related to gambling (dice, cards, roulette wheels).
  • Engineers encounter both continuous and discrete uniform distributions.

🔢 Poisson distribution

🔢 What the Poisson distribution is

Poisson distribution: a discrete distribution providing information about the probability of x number of events occurring within a specific time period, where each event is independent.

  • Possible outcomes are countable whole numbers.
  • Example: an Industrial Engineer estimating how many customers arrive at cash registers per half-hour or hour to decide how many registers to activate.

📐 Mathematical form

  • p(k) = (e^(−λT) × (λT)^k) / k!
  • λ: occurrence rate.
  • T: time interval.
  • k: number of occurrences being tested.
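The pmf is straightforward to evaluate. The arrival rate below (6 customers per hour over a half-hour window) is a hypothetical instance of the cash-register example.

```python
import math

def poisson_pmf(k, rate, T):
    """P(k events in interval T) = e^(-lambda*T) * (lambda*T)^k / k!,
    with `rate` playing the role of lambda."""
    lt = rate * T
    return math.exp(-lt) * lt ** k / math.factorial(k)

# Hypothetical store: 6 customers/hour on average, half-hour window,
# so lambda*T = 3 expected arrivals.
rate, T = 6.0, 0.5
probs = [poisson_pmf(k, rate, T) for k in range(20)]
print(probs[3])    # probability of exactly 3 arrivals
print(sum(probs))  # pmf over all k sums toward 1
```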

📡 Example: wireless communication

  • Multiple transmitters send packets to one receiver; transmitters are unaware of each other.
  • Collisions occur when multiple transmitters transmit simultaneously; packets must be retransmitted.
  • Packet arrivals modeled with Poisson distribution.
  • Successful transmission occurs if zero other packets arrive during a packet's vulnerable window (twice the packet duration, since an overlap on either side causes a collision).
  • Probability of zero other packets: P₀ = e^(−2N), where N is the mean number of transmission attempts per time unit.
  • Successful transmission rate: S = N × e^(−2N).
  • Maximum throughput occurs when mean attempts N = 0.5 per time unit, giving S = 1/(2e) ≈ 0.184.
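A quick numerical scan confirms where the throughput curve peaks. This sketch assumes the S = N·e^(−2N) model above; the grid resolution is arbitrary.

```python
import math

def throughput(N):
    """Attempts per time unit times the probability that no other
    packet arrives in the vulnerable window: S = N * e^(-2N)."""
    return N * math.exp(-2 * N)

# Scan attempt rates from 0.001 to 2.0; the maximum sits at N = 0.5.
best = max((throughput(n / 1000), n / 1000) for n in range(1, 2001))
print(best)
```

Setting dS/dN = e^(−2N)(1 − 2N) = 0 gives the same answer analytically: N = 0.5 and S = 1/(2e).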

🎯 Binomial distribution

🎯 What the binomial distribution is

Binomial distribution: measures the probability of success for n independent trials (experiments), where each outcome is either a success or a failure.

  • An individual experiment with true/false, success/failure, yes/no outcomes is called a Bernoulli trial.
  • Repeating a Bernoulli trial n times gives the binomial distribution.

📐 Mathematical form

  • Pr(X = k) = (n choose k) × p^k × (1 − p)^(n−k), for k = 0, 1, 2, ..., n.
  • X: random variable counting the number of successes.
  • k: number of successes.
  • p: probability of success in a single trial.
  • (1 − p): probability of failure.
  • n: total number of trials.
  • (n choose k) = n! / (k! × (n − k)!).

🏭 Example: quality control in manufacturing

  • An appliance manufacturer produces washing machines.
  • QC specification: no more than 2 units fail system test for 95% of machines produced each day.
  • If failures exceed 2, the entire day's production must be re-verified.
  • QC department tests 8 machines per day.
  • Question: what is the probability of incorrectly certifying production when only 70% meet specifications (i.e., 30% fail)?
  • Calculate Pr(X ≤ 2) = Pr(X = 0) + Pr(X = 1) + Pr(X = 2), with n = 8, p = 0.3.
  • Result: Pr(X ≤ 2) = 0.05765 + 0.19765 + 0.29648 = 0.5518.
  • Don't confuse: "success" can be defined as finding a failure if that is what you are tracking; the two probabilities (success and failure) must sum to one.
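The QC numbers can be reproduced with a few lines using `math.comb` for the binomial coefficient:

```python
import math

def binomial_pmf(k, n, p):
    """Pr(X = k) = C(n, k) * p^k * (1 - p)^(n - k)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# QC example: 8 machines tested, each fails with probability 0.3.
n, p = 8, 0.3
pr_at_most_2 = sum(binomial_pmf(k, n, p) for k in range(3))
print(round(pr_at_most_2, 4))   # ~0.5518, matching the worked example
```

So even when 30% of machines fail, there is about a 55% chance that 2 or fewer failures show up in a sample of 8 and the day's production is incorrectly certified.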

🔧 Applications

  • Useful in Quality Control departments in manufacturing facilities.
  • Helps determine if manufacturing processes conform to specifications.

Spectral Analysis

5.5 Spectral Analysis

🧭 Overview

🧠 One-sentence thesis

Spectral analysis, particularly power spectrum computation via the Fourier Transform, enables detection of periodic signals (such as frequency-encoded information) even when buried in significant noise.

📌 Key points (3–5)

  • What the power spectrum is: the squared magnitude of the Fourier Transform of a signal, used to detect periodic patterns like line noise or encoded frequencies.
  • How it's computed: apply the Discrete Fourier Transform (DFT) to the signal, then multiply each coefficient by its complex conjugate; practical implementation uses the FFT algorithm.
  • Why it works in noise: periodic signals concentrate power at specific frequencies, while noise spreads across the spectrum, making even weak signals detectable.
  • Common confusion: the mean (DC offset) of a signal appears at zero frequency and can overwhelm other content—always remove the mean before spectral analysis.
  • Real-world application: DTMF (push-button phone) encoding demonstrates that two sinusoids can be reliably detected even when noise amplitude is more than three times greater than the signal.

🔍 What spectral analysis does

🔍 Purpose and starting point

  • Spectral analysis helps detect errors or patterns of a periodic nature in measurement series (e.g., line noise).
  • The starting point is the power spectrum of the discrete signal.

📐 Definition of power spectrum

Power spectrum: the Fourier Transform of the signal squared.

  • "Squared" means multiplying each Fourier coefficient by its complex conjugate (not simple squaring, because coefficients are complex numbers).
  • This produces a real-valued measure of power at each frequency.

🧮 Computing the power spectrum

🧮 Discrete Fourier Transform (DFT)

The DFT for a discrete signal x[n] is given by:

  • X̃[k] = (1/N) · Σ_{n=0}^{N−1} x[n] · e^(−j2πkn/N), for k = 0, 1, 2, ..., N − 1.
  • Here k and n are discrete integer indices.
  • Sampled frequencies are integer multiples of the fundamental frequency ω₀ = 2π/(NT), where T is the sampling period.

🧮 Power spectrum formula

The power spectrum is computed as:

  • P̃_x[ω_k] = (1/N) · X̃[ω_k] · X̃*[ω_k], where the star denotes complex conjugation.

⚡ Fast Fourier Transform (FFT)

  • What it is: a specific, highly efficient computer algorithm for computing the DFT.
  • Where to find it: built into nearly every engineering software package (MATLAB, LabVIEW, Mathematica, etc.).
  • Example MATLAB code:
    • X = fftshift(fft(x));
    • P = X.*conj(X);
    • plot(P);
  • This produces a dual-sided spectrum from −ω_s/2 to +ω_s/2 (or −f_s/2 to +f_s/2), where the s subscript denotes the sampling frequency.
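The same computation can be sketched in stdlib-only Python. For clarity this uses a naive O(N²) DFT rather than the FFT (in practice you would use a library FFT such as numpy's); the bin-5 test tone is hypothetical, and the DC offset is removed first as the text advises below.

```python
import cmath
import math

def power_spectrum(x):
    """Naive DFT power spectrum: X[k] = (1/N) * sum_n x[n] * e^(-j 2 pi k n / N),
    then P[k] = X[k] * conj(X[k]), a real-valued power at each frequency bin."""
    N = len(x)
    mean = sum(x) / N
    x = [v - mean for v in x]          # remove the DC offset before analysis
    P = []
    for k in range(N):
        Xk = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                 for n in range(N)) / N
        P.append((Xk * Xk.conjugate()).real)
    return P

# A pure cosine at bin 5 concentrates all power in bins 5 and N-5
# (the dual-sided spectrum has a mirror-image negative-frequency peak).
N = 64
tone = [math.cos(2 * math.pi * 5 * n / N) for n in range(N)]
P = power_spectrum(tone)
print(P[5], P[59])
```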

📞 DTMF example: detecting frequencies in noise

📞 Background: Dual Tone Multi-Frequency signaling

  • What it is: an encoding scheme developed when telephone companies converted from analog rotary dials to push-button dialing.
  • Each digit on a push-button phone encodes two sinusoids (one high frequency, one low frequency).
  • During transmission, these sinusoids are corrupted by additive noise.

📊 DTMF frequency table

| Frequency | 1209 Hz | 1336 Hz | 1477 Hz | 1633 Hz |
| --- | --- | --- | --- | --- |
| 697 Hz | 1 | 2 | 3 | A |
| 770 Hz | 4 | 5 | 6 | B |
| 852 Hz | 7 | 8 | 9 | C |
| 941 Hz | * | 0 | # | D |
  • The phone company samples analog signals at 8000 Hz, so all DTMF frequencies are well below the Nyquist-defined frequency of 4000 Hz.
  • Frequencies were carefully designed to have no common harmonics, making spectral estimation definitive.

📞 Example: detecting digit "6"

  • Setup: 0.1 seconds of DTMF data encoding the digit "6" (frequencies 770 Hz and 1477 Hz).
  • Noise level: Gaussian additive noise with signal-to-noise ratio (SNR) of -11 dB, meaning noise amplitude is more than 3 times greater than the sinusoid amplitude.
  • Result: After computing the FFT and multiplying by its complex conjugate, the spectrum clearly shows the two sinusoids as spectral peaks that dominate the residual noise.
  • Why it works: Gaussian noise spreads its power across much of the available spectrum (4000 Hz), so power at any given frequency for noise alone is smaller, even when a slight periodic signal is present.
  • Robustness: The simple power spectrum can still detect the two sinusoids even when SNR reaches -20 dB.
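A toy detector illustrates the idea. Instead of a full spectrum, it measures power only at the eight candidate DTMF frequencies (a single-bin DFT projection, similar in spirit to the Goertzel approach) and picks the strongest row and column tone. The noise level here is illustrative, and this is not how the phone company actually decodes button pushes.

```python
import math
import random

random.seed(1)

FS = 8000                  # telephone-network sampling rate (Hz)
ROWS = [697, 770, 852, 941]
COLS = [1209, 1336, 1477, 1633]
KEYS = [["1", "2", "3", "A"], ["4", "5", "6", "B"],
        ["7", "8", "9", "C"], ["*", "0", "#", "D"]]

def tone_power(x, f):
    """Power of x at frequency f: squared magnitude of the projection
    onto a sinusoid at f (a single-bin DFT), normalized by N."""
    N = len(x)
    re = sum(x[n] * math.cos(2 * math.pi * f * n / FS) for n in range(N))
    im = sum(x[n] * math.sin(2 * math.pi * f * n / FS) for n in range(N))
    return (re * re + im * im) / N

def decode(x):
    row = max(ROWS, key=lambda f: tone_power(x, f))
    col = max(COLS, key=lambda f: tone_power(x, f))
    return KEYS[ROWS.index(row)][COLS.index(col)]

# 0.1 s of digit "6" (770 Hz + 1477 Hz) buried in heavy Gaussian noise
# (noise sigma three times the per-tone amplitude).
N = 800
x = [math.sin(2 * math.pi * 770 * n / FS)
     + math.sin(2 * math.pi * 1477 * n / FS)
     + random.gauss(0, 3.0) for n in range(N)]
print(decode(x))
```

The tones survive because their power is concentrated at two frequencies, while the noise power is spread across the whole 0–4000 Hz band.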

⚠️ Critical preprocessing step

⚠️ Removing the DC offset

DC offset: the mean value of a signal; in the frequency domain, it appears at zero frequency.

  • Why it matters: In small signal analysis, a DC offset can swamp any other frequency content.
  • What to do: Always remove the mean of a signal before performing any spectral analysis.
  • If the mean value is important, compute and store it separately, but remove it before further spectral or statistical analysis.
  • Don't confuse: the DC offset is not part of the periodic signal content; it's a constant shift that distorts frequency analysis.

🔧 Extensions and advanced methods

🔧 Periodogram

  • What it does: breaks data into (possibly overlapping) sections, computes the power spectrum of each section, then takes the average.
  • Assumption: relies on data statistics being relatively stationary over the recording epoch.
  • Benefit: makes spectral analysis even more robust when significant noise is present.

🔧 Parametric methods

  • Generally taught in graduate-level signal processing classes.
  • Beyond the scope of this text; for more in-depth information, see texts such as Monson Hayes.

🔧 Practical note

  • Power spectral analysis is not necessarily exactly how the phone company decodes each button push, but the example demonstrates the utility of the basic power spectrum.

5.6 Conclusion

5.6 Conclusion

🧭 Overview

🧠 One-sentence thesis

Statistical concepts—especially the sample mean, sample standard deviation, and power spectrum—are foundational for understanding measurement uncertainty and extracting underlying characteristics from measured processes.

📌 Key points (3–5)

  • Core statistics: sample mean and sample standard deviation are the most commonly used across all disciplines.
  • Power spectrum utility: the power spectrum and its derivatives help reveal underlying characteristics of measured processes.
  • Measurement uncertainty: statistical computations are the foundation for understanding the veracity (or lack thereof) of measurements.
  • Practical application: the concepts are demonstrated in practical applications in Chapters 6–8.

📊 Essential statistical tools

📊 Most common statistics

  • Sample mean and sample standard deviation are highlighted as the most commonly used statistics across all disciplines.
  • These are foundational for measurement and instrumentation work.

📊 Other useful parameters

  • The chapter presents additional parameters beyond mean and standard deviation.
  • These are specifically useful in the measurement and instrumentation field.

🔬 Power spectrum and measurement

🔬 Role of the power spectrum

The power spectrum and its derivatives are often employed in understanding underlying characteristics of a measured process.

  • The power spectrum is not just a mathematical tool—it reveals hidden properties of what you're measuring.
  • Example: as demonstrated in the DTMF example earlier in the chapter, the power spectrum can detect signal patterns even in noisy data.

🔬 Practical demonstrations

  • The practical applications are discussed in detail in Chapters 6 through 8.
  • These chapters show how the statistical concepts apply to actual systems.

🎯 Understanding measurement uncertainty

🎯 Why statistics matter for measurements

  • Measurement uncertainty finds its roots in statistical computations.
  • You must understand these computations to properly appreciate the veracity (truthfulness/reliability) of a given measurement.

🎯 What this means in practice

  • Not all measurements are equally reliable.
  • Statistical tools let you quantify how much you can trust a measurement.
  • Without understanding the underlying statistics, you cannot judge whether a measurement is meaningful or not.

Physics of a Second-Order Mechanical System

6.1 Physics of a Second-Order mechanical system

🧭 Overview

🧠 One-sentence thesis

Second-order mechanical systems like car suspensions can be modeled by a differential equation whose characteristic roots determine whether the system oscillates or decays smoothly, and the sign of the real part of those roots determines stability.

📌 Key points (3–5)

  • What a second-order system is: many mechanical systems (e.g., car suspension: mass = car, spring = suspension, damper = shock absorber) obey a second-order differential equation based on Newton's Second Law.
  • How the transfer function reveals behavior: the roots of the characteristic equation (denominator set to zero) determine the system's natural response—real roots produce smooth decay, complex conjugate roots produce oscillations.
  • Stability criterion: negative real part of the roots → stable (decays to zero over time); positive real part → unstable (grows over time).
  • Common confusion—real vs. complex roots: real roots mean the system is heavily damped (no oscillations); complex conjugate roots mean underdamped (oscillatory behavior).
  • Why it matters: understanding these roots helps engineers design systems (e.g., car suspensions) that balance comfort (some damping) and handling (not too stiff, not too bouncy).

🚗 The Mass-Spring-Damper Model

🚗 Real-world example: car suspension

  • Mass (M): the car itself.
  • Spring (k): the suspension spring.
  • Damper (D): the shock absorber (in real cars, compression and extension rates differ; here we assume a single damping rate).
  • Fixed surface: the road.
  • Position variable: x(t) represents the vertical position of the car.
  • The suspension system minimizes vertical movement experienced by the driver when the car hits bumps.

⚖️ Forces acting on the mass

The excerpt applies Newton's Second Law (F = Ma) and balances all forces acting on the mass against the external force.

| Force | Formula | Explanation |
| --- | --- | --- |
| Inertia (mass) | F_mass = M · (d²x(t)/dt²) | Acceleration is the second derivative of position |
| Damper | F_damper = D · (dx(t)/dt) | Damping force is proportional to velocity (first derivative of position); faster motion → more damping |
| Spring | F_spring = K · x(t) | Hooke's Law: force is proportional to displacement from rest |
| External force | F_ext(t) | Bumps in the road; without this, the mass should not move vertically |
  • Sign convention: positive x(t) is upward.
  • Force balance: F_spring + F_damper + F_mass = F_ext.
  • This yields the governing equation:
    M · (d²x(t)/dt²) + D · (dx(t)/dt) + K · x(t) = F_ext(t).

🔄 Transfer Function and Laplace Transform

🔄 What the transfer function is

Transfer function H(s): the Laplace Transform of the output when the input is an impulse function.

  • The excerpt uses Laplace notation L(·) to transform the time-domain differential equation into the s-domain.
  • Assuming zero initial conditions (initial velocity and position are zero), the Laplace transforms are:
    • Second derivative: L(d²x(t)/dt²) = s² X(s)
    • First derivative: L(dx(t)/dt) = s X(s)
    • Position: L(x(t)) = X(s)
  • The s-domain equation becomes:
    M s² X(s) + D s X(s) + K X(s) = L(F_ext(t)).

🎯 Deriving the transfer function

  • Rearrange to isolate X(s) / F_ext(s):
    H(s) = X(s) / F_ext(s) = 1 / (M s² + D s + K).
  • Because the Laplace transform of an impulse δ(t) is 1, H(s) equals X(s) when the input is an impulse.
  • Key insight: solving for X(s) gives both the impulse response and the transfer function.

🧮 Characteristic equation and eigenvalues

Characteristic equation: the denominator of the transfer function set equal to zero.

  • The roots of the characteristic equation are the eigenvalues of the system.
  • These roots determine the natural response (the system's behavior without external forcing).
  • The general solution involves the natural exponent e^(αt), where α can be complex (α ∈ ℂ).
  • Stability rule: if the real part of α is negative, e^(αt) decays over time → stable system (natural response goes to zero as time approaches infinity).
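The eigenvalues follow from the quadratic formula on M s² + D s + K = 0, and stability is just a sign check on the real parts. The M, D, K values below are hypothetical, chosen so the light-damping case yields complex-conjugate roots.

```python
import cmath

def characteristic_roots(M, D, K):
    """Roots of M s^2 + D s + K = 0 via the quadratic formula;
    cmath.sqrt handles the negative discriminant (complex roots)."""
    disc = cmath.sqrt(D * D - 4 * M * K)
    return ((-D + disc) / (2 * M), (-D - disc) / (2 * M))

def is_stable(M, D, K):
    """Stable when every root has a negative real part."""
    return all(r.real < 0 for r in characteristic_roots(M, D, K))

# Hypothetical suspension: light damping gives complex-conjugate roots.
r1, r2 = characteristic_roots(M=1.0, D=2.0, K=100.0)
print(r1, r2)                      # -1 +/- j*sqrt(99): underdamped
print(is_stable(1.0, 2.0, 100.0))  # real parts negative -> stable
```

Increasing D until D² ≥ 4MK pushes both roots onto the real axis (critically damped, then overdamped), removing the oscillations.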

🔀 Real Roots vs. Complex Roots

🔀 Real roots: heavily damped systems

  • When eigenvalues α₁ and α₂ are real, the impulse response decays without oscillations.
  • Two cases:
    • Repeated roots (α₁ = α₂): the system is critically damped.
    • Distinct roots (α₁ ≠ α₂): the system is overdamped.
  • Figure 6.2 in the excerpt shows both cases: after reaching a peak, the signal monotonically decays to zero.
  • Example: a car suspension with very stiff damping would produce this behavior—unacceptably stiff and undesirable ride.

🌊 Complex conjugate roots: underdamped systems

  • When roots are complex (α ± jω), the impulse response oscillates.
  • Why oscillations occur: complex conjugate roots in the Laplace domain translate into a decaying sinusoid in the time domain.
  • The excerpt explains this via Euler's identity:
    cos(ωt) = (e^(jωt) + e^(−jωt)) / 2.
  • Stability: α must still be negative for the oscillations to decay over time (stable underdamped system). If α > 0, oscillations grow → unstable.
  • Figure 6.3 shows a decaying sinusoid: the system oscillates but the amplitude decreases.
  • Don't confuse: complex roots always come as conjugate pairs; any odd-order system must have at least one real root.

🎢 Underdamped vs. overdamped trade-off

  • Underdamped (complex roots): not enough damping to remove oscillations; the car would bounce down the road.
  • Overdamped (real roots): too much damping; the ride is stiff and uncomfortable.
  • Auto designers balance damping and spring action to provide comfort and good handling.

🧮 Converting from Laplace to Time Domain

🧮 Partial fraction expansion (PFE)

  • To get the system into the time domain, use partial fraction expansion.
  • For a second-order system with complex conjugate roots, the transfer function is:
    H(s) = 1 / ((s + (α − jω))(s + (α + jω))).
  • After PFE, you get two first-order terms with complex conjugate numerators (residuals).
  • The excerpt shows the step-by-step derivation:
    1. Factor the denominator using the roots.
    2. Expand into two terms.
    3. Convert the numerators to polar form (magnitude and phase).
    4. Apply the inverse Laplace transform.

🕰️ Time-domain impulse response

  • The final time-domain expression for the impulse response h(t) is:
    h(t) = (1/ω) · e^(−αt) · cos(ωt − π/2)   (the 2/(2ω) coefficient simplifies to 1/ω).
  • This can also be written using sine: h(t) = (1/ω) · e^(−αt) · sin(ωt), since cos(ωt − π/2) = sin(ωt).
  • Key features:
    • Decay rate: controlled by α (the real part of the roots).
    • Oscillation frequency: controlled by ω (the imaginary part of the roots).
    • Phase: determined by the angle of the complex numerator k; the excerpt warns to ensure the correct quadrant when computing θ = arctan(Im(k) / Re(k)).
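Two quick sanity checks on the underdamped impulse response h(t) = (1/ω)·e^(−αt)·sin(ωt): it starts at zero (the mass begins at rest) with unit initial slope, and each oscillation peak is smaller than the last because of the e^(−αt) envelope. The α and ω values are hypothetical.

```python
import math

# Underdamped impulse response for H(s) = 1 / ((s + alpha)^2 + omega^2),
# with hypothetical decay rate alpha and oscillation frequency omega.
alpha, omega = 1.0, 10.0

def h(t):
    return (1.0 / omega) * math.exp(-alpha * t) * math.sin(omega * t)

# h(0) = 0 and h'(0) = 1 (finite-difference estimate of the initial slope).
dt = 1e-6
print(h(0.0), (h(dt) - h(0.0)) / dt)

# Successive peaks shrink by the envelope factor e^(-alpha * 2*pi/omega).
t_peak = math.atan(omega / alpha) / omega   # first maximum of h(t)
print(h(t_peak) > h(t_peak + 2 * math.pi / omega) > 0)
```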

📐 Important Laplace relationship

  • The seminal relationship:
    A e^(αt) ↔ A / (s + α).
  • For complex α (α ∈ ℂ), you get a complex exponential in the time domain.
  • Two complex-conjugate exponentials in the time domain produce the oscillatory cosine behavior via Euler's identity.

🛠️ Practical Design Considerations

🛠️ Controlling system behavior

  • By adjusting M (mass), D (damping), and K (spring constant), engineers can control whether the roots are real or complex.
  • Real roots: highly damped, no oscillations, but stiff ride.
  • Complex roots: some oscillation, more comfortable but must avoid excessive bouncing.
  • The excerpt notes that auto designers work hard to find the appropriate balance.

🎯 Project setup

  • The excerpt describes a project using a suspended mass-spring-damper system (a "bouncy toy") instead of a car on a fixed surface.
  • The toy is suspended from the ceiling with a tape measure to track vertical motion.
  • The unknowns are M (mass of the toy), k (spring constant), and D (damping coefficient).
  • The same mathematical derivation applies, yielding a second-order differential equation for the position of the mass with respect to time.

6.2 Project

🧭 Overview

🧠 One-sentence thesis

This project uses a suspended mass-spring-damper system (bouncy toy) with an accelerometer to experimentally determine motion equations by acquiring acceleration data and analyzing it to extract system parameters like oscillation frequency and decay rate.

📌 Key points (3–5)

  • What the project measures: acceleration data from a bouncy toy to derive position, velocity, and acceleration equations over time.
  • Why direct integration fails: double-integrating raw accelerometer data introduces numerical errors and lacks necessary calibration information (sensitivity, units).
  • The analysis strategy: find the position equation first using frequency analysis and decay rate extraction, then analytically differentiate to get velocity and acceleration.
  • Common confusion: continuous vs N-samples acquisition mode—continuous mode maintains timing consistency between samples across loop cycles, while N-samples mode creates gaps.
  • Key deliverable: motion equations x(t), v(t), and a(t) with correct units, derived from experimental data in both live and playback modes.

🧸 Physical system setup

🧸 The bouncy toy apparatus

  • A mass-spring-damper system suspended from the ceiling (not resting on a surface like a car suspension).
  • Components:
    • Unknown mass M
    • Unknown spring constant k
    • Unknown damping constant D (provided by paddle-shaped blades)
  • A tape measure runs from ceiling to table, parallel to vertical motion, to measure initial displacement.

📡 The accelerometer sensor

The sensor outputs a single-ended voltage between 0 volts and +5 volts that represents acceleration.

  • Sensitivity is unknown (voltage-to-acceleration relationship must be determined).
  • Both positive and negative acceleration are represented in the 0–5V range.
  • Expect a DC offset when the system is idle (because the voltage range must encode both directions).
  • The accelerometer is attached to the bottom of the toy.

🎯 Project goal

Acquire accelerometer voltage data and use it to determine the motion equations: x(t) for position, v(t) for velocity, and a(t) for acceleration.

🖥️ Data acquisition programming

🖥️ Hardware and software context

  • Steps 1–3 (sensor selection, DAQ device choice, system assembly) are already completed.
  • The accelerometer is connected to a National Instruments USB-6211 DAQ device.
  • Students implement steps 4 (write acquisition software) and 5 (analyze data).
  • LabVIEW is used as the programming environment, but the focus is on engineering concepts, not software training.

⚙️ Setting acquisition parameters

The DAQ Assistant wizard guides the user through configuration. Critical parameters include:

Parameter                | What it controls                                      | Considerations
Dynamic range            | Upper and lower voltage limits expected               | Must cover the full signal range
Connection configuration | Differential or Referenced Single-Ended (RSE)         | Choose based on signal type
Sampling rate            | Frequency of A/D conversion (samples/second)          | Must satisfy Nyquist: Fs > 2·Fmax
Bin size                 | Number of samples in hardware buffer before transfer  | Start with 10–40% of sample rate
Acquisition mode         | Single sample, N samples, or continuous               | Continuous mode required for ongoing collection

🔄 Continuous vs N-samples mode

Why continuous mode is essential for this project:

  1. Timing consistency:

    • Continuous mode: the A/D converter keeps converting at the specified sample rate; samples are evenly spaced in time across all loop cycles.
    • N-samples mode: the converter stops after N samples, then restarts on the next loop cycle; the gap between cycles is inconsistent and depends on processor load.
    • Example: with 100 Hz sampling and bin size 20, continuous mode guarantees 0.01-second intervals; N-samples mode creates unknown gaps between the last sample of cycle n and first sample of cycle n+1.
  2. Filtering behavior:

    • Continuous mode: filters maintain their internal state from one loop cycle to the next, producing smooth output.
    • N-samples mode: filters treat each new set of N samples as independent data, causing cyclic transients (e.g., a filter might need 50 samples to initialize, creating artifacts every loop).
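The filtering difference can be demonstrated with a toy one-pole lowpass in pure Python (a stand-in for LabVIEW's filter VIs; the coefficient and signal are made up):

```python
import math

def lowpass(x, a=0.95, y_prev=0.0):
    """One-pole IIR: y[n] = (1-a)*x[n] + a*y[n-1]; returns (output, final state)."""
    y = []
    for xn in x:
        y_prev = (1 - a) * xn + a * y_prev
        y.append(y_prev)
    return y, y_prev

signal = [math.sin(0.05 * n) for n in range(200)]

# "Continuous mode": carry the filter state from one 50-sample bin to the next
state, cont = 0.0, []
for i in range(0, 200, 50):
    out, state = lowpass(signal[i:i + 50], y_prev=state)
    cont.extend(out)

# "N-samples mode": each bin starts from a fresh (zero) state -> transients
nsamp = []
for i in range(0, 200, 50):
    out, _ = lowpass(signal[i:i + 50], y_prev=0.0)
    nsamp.extend(out)

whole, _ = lowpass(signal)  # reference: filter the full record in one pass
assert cont == whole        # carrying state == one continuous pass
assert nsamp != whole       # resetting state creates bin-boundary artifacts
```

Carrying the state reproduces the one-pass result exactly; resetting it every bin produces the cyclic transients described above.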

🗂️ Using the Collector buffer

A Collector implements a FIFO (first-in-first-out) buffer that stores N samples in time order.

  • Example: sampling at 100 Hz with bin size 50 and Collector size 3000 stores 30 seconds of data.
  • After the buffer fills, each new cycle drops the 50 oldest samples, shifts remaining samples down, and adds 50 newest samples.
  • The Collector always contains the most recent time window of measurements.
  • Benefit: allows the user to stop the program at small intervals (e.g., every 0.5 seconds) instead of waiting for the full data collection period.
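In Python terms, the Collector behaves like a bounded FIFO; a sketch using collections.deque with the sizes from the example (bin size 50, Collector size 3000):

```python
from collections import deque

collector = deque(maxlen=3000)      # always holds the most recent 3000 samples

for cycle in range(100):            # 100 loop cycles of 50 new samples each
    new_bin = list(range(cycle * 50, cycle * 50 + 50))
    collector.extend(new_bin)       # once full, the 50 oldest samples fall off

assert len(collector) == 3000
assert collector[0] == 2000         # sample 2000 is the oldest retained
assert collector[-1] == 4999        # sample 4999 is the newest
```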

🛑 Proper program termination

  • When continuous acquisition mode is selected, LabVIEW prompts to create a WHILE loop automatically—always accept this.
  • The stop control on the user interface is wired to shut down the A/D converter cleanly before terminating the loop.
  • Never press the abort button when connected to hardware; it leaves the DAQ in an unknown state with a partially filled buffer.

🔬 Data analysis strategy

🔬 Why not just integrate the raw data?

  • Problem 1: Numerical integration is prone to errors that require significant signal processing to mitigate.
  • Problem 2: The raw accelerometer data is in volts, not acceleration units; we don't know the sensitivity (volts per g).
  • Without knowing the voltage-to-acceleration conversion and proper initial conditions, double integration cannot produce a unit-correct position equation.
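A quick illustration of Problem 1 (the numbers are made up): even a tiny constant offset in the acceleration record grows quadratically once it is double-integrated.

```python
dt = 0.01
n = 1000                                      # 10 seconds of samples
true_accel = [0.0] * n                        # the toy is actually at rest
measured = [a + 0.001 for a in true_accel]    # small DC offset in the data

v = x = 0.0
for a in measured:                            # rectangular-rule double integration
    v += a * dt
    x += v * dt

# After only 10 s the integrated "position" has drifted ~ 0.5 * 0.001 * 10^2
assert x > 0.04                               # large drift from a zero signal
```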

🎼 Finding the position equation first

The system has an underdamped second-order response (decaying oscillation), so the position equation has the form of complex conjugate roots.

Key insight: Position is formed from eigenfunctions (cosine and sine, or exponential forms); the relationship between eigenfunctions and eigenvalues allows extraction of system parameters from frequency-domain analysis.

Analysis steps (see Figure 6.15 hints):

  1. Find oscillation frequency: use spectral analysis or tone analysis VIs to identify the dominant frequency in the acceleration data.
  2. Find decay rate: identify points on the decaying oscillation envelope; extract the exponential decay constant α.
  3. Construct x(t): combine frequency (ω or f in Hz) and decay rate into the position equation with correct units.
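A pure-Python sketch of steps 1–2 on a synthetic decaying cosine (a direct O(N²) DFT stands in for the spectral-analysis VI; f = 2 Hz and α = 0.4 are illustrative, not from the excerpt):

```python
import cmath, math

fs, N = 64.0, 256
f_true, alpha_true = 2.0, 0.4
x = [math.exp(-alpha_true * n / fs) * math.cos(2 * math.pi * f_true * n / fs)
     for n in range(N)]

# Step 1: dominant frequency from the DFT magnitude (direct O(N^2) sum)
mags = [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
        for k in range(N // 2)]
k_peak = max(range(N // 2), key=mags.__getitem__)
f_est = k_peak * fs / N

# Step 2: decay rate from a log fit of the envelope, sampled one period apart
#         (at those samples cos(...) is ~1, so x[n] traces the envelope)
period = int(round(fs / f_est))
peaks = [(n / fs, x[n]) for n in range(0, N - period, period)]
(t0, a0), (t1, a1) = peaks[0], peaks[-1]
alpha_est = math.log(a0 / a1) / (t1 - t0)

assert abs(f_est - f_true) <= fs / N          # within one frequency bin
assert abs(alpha_est - alpha_true) < 0.05
```

Step 3 then combines f_est and alpha_est into x(t) with the measured initial displacement setting the amplitude and units.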

📐 Deriving velocity and acceleration

  • Once x(t) is known with correct units, analytically differentiate (not numerically):
    • v(t) = first derivative of x(t)
    • a(t) = second derivative of x(t)
  • This approach avoids numerical differentiation errors and ensures proper units.
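For a generic underdamped fit x(t) = A·e^(−αt)·cos(ωt) (with A, α, ω taken from the analysis above), the analytic derivatives are:

```latex
\begin{aligned}
x(t) &= A e^{-\alpha t}\cos(\omega t)\\
v(t) = \dot{x}(t) &= -A e^{-\alpha t}\bigl[\alpha\cos(\omega t) + \omega\sin(\omega t)\bigr]\\
a(t) = \ddot{x}(t) &= A e^{-\alpha t}\bigl[(\alpha^{2}-\omega^{2})\cos(\omega t) + 2\alpha\omega\sin(\omega t)\bigr]
\end{aligned}
```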

💾 Live mode vs playback mode

The VI should operate in two modes:

  1. Live Mode: connected to the DAQ device, records new data and immediately analyzes it.
  2. Playback Mode: analyzes data from a previously saved file (use Write to Measurement File VI to save as LVM format; use Read from Measurement File VI to load).
    • Important: when saving data, include a time column so the analysis code knows the sampling rate.

📊 Deliverables and output format

📊 Required outputs

  • Plots: one graph each for x(t), v(t), and a(t) with axes labeled correctly for units.
  • Text expressions: display the mathematical equations for x(t), v(t), and a(t) on the front panel near the corresponding graphs.
    • Report frequency in Hertz (Hz), not radians per second (recall ω = 2π·f).
    • Include units at the end of each time-domain expression.

📊 VI functionality

  • Acquire data using the DAQ device with appropriate parameters.
  • Save data to file for later analysis.
  • Analyze data (live or from file) to solve for oscillation frequency and decay rate.
  • Output the three motion equations with correct units and plots.

The Musical Spectrum

7.1 The Musical Spectrum

🧭 Overview

🧠 One-sentence thesis

Music signals can be analyzed by transforming them from the time domain into the frequency domain (spectrum), where filters can be designed to isolate specific frequency bands, and Parseval's Theorem allows us to measure signal strength in either domain to control external devices like dancing lights.

📌 Key points (3–5)

  • Audio spectrum range: The industry standard audio spectrum runs from 20 Hz to 20 kHz, though most people perceive only up to about 10 kHz and lose high-frequency hearing with age.
  • Frequency vs. time domain: Music is analyzed in the frequency domain (via Fourier Transform/spectrum) but filters operate on the actual time-domain signal.
  • Filters and cutoff: Bandpass filters isolate frequency sub-bands; the half-power point (cutoff) occurs at -3 dB, which corresponds to a voltage gain of 0.707.
  • Common confusion: dB conversion differs for voltage/current (multiplier of 20) vs. power (multiplier of 10), but the half-power point is always -3 dB in both cases.
  • Signal strength measurement: Use root-mean-square (RMS) instead of mean value, because sinusoids average to zero; Parseval's Theorem links time-domain power to frequency-domain power.

🎵 Audio spectrum fundamentals

🎵 The audible frequency range

Industry standard audio spectrum: 20 Hz to 20 kHz.

  • This range applies when we are very young.
  • As eardrums stiffen with age, we lose the ability to hear the upper frequency range.
  • Most people perceive only up to about 10 kHz; the 20 Hz–20 kHz range is an idealization.

🔄 Time domain vs. frequency domain

  • Time domain: the recorded signal plotted as amplitude over time (e.g., the waveform of Beethoven's Fifth Symphony chords).
  • Frequency domain (spectrum): the Fourier Transform shows the magnitude of contribution each frequency makes to the signal.
  • Music is analyzed in the frequency domain but "massaged" (filtered) in the time domain.
  • Example: An equalizer is designed using frequency-domain algorithms, but the filters themselves act on the actual time-domain signal.

Don't confuse: "Spectrum" always means data are in the frequency domain; filters are designed and studied in the frequency domain but operate on time-domain signals.

🔧 Filters and frequency response

🔧 Bandpass filters

  • A bandpass filter passes frequencies within a specified range (e.g., 20 Hz to 150 Hz) and attenuates frequencies outside that range.
  • The excerpt describes a second-order bandpass filter overlaid on a music spectrum.
  • The filter is not ideal: it does not have a perfect gain of one across the passband.

📉 Half-power point and cutoff frequency

Half-power point: the frequency cutoff defined as the point where power gain is one half of the passband gain.

  • At the half-power point, power gain = 0.5 W/W, which equals -3 dB.
  • At -3 dB, the voltage (or current) gain is 0.707 V/V (or A/A).
  • Frequencies below the -3 dB point are "filtered" (strongly attenuated).

📊 Decibel (dB) conversions

The conversion from raw gain to dB differs for voltage/current vs. power:

Quantity          | Formula          | Multiplier
Voltage gain (dB) | 20 · log₁₀(V/V)  | 20
Current gain (dB) | 20 · log₁₀(A/A)  | 20
Power gain (dB)   | 10 · log₁₀(W/W)  | 10
  • Why it matters: Power spectrum plots use the multiplier of 10; magnitude spectrum plots use the multiplier of 20.
  • Key insight: Because of the different conversion formulas, the half-power point always occurs at -3 dB in both cases.

Don't confuse: A power spectral density plot (power spectrum) uses the multiplier of 10 for dB conversion; a magnitude spectrum plot uses the multiplier of 20. Always check the y-axis label.
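The table can be verified in a couple of lines of Python:

```python
import math

def volts_to_db(gain_v):
    return 20 * math.log10(gain_v)   # voltage (or current) gain uses 20

def power_to_db(gain_w):
    return 10 * math.log10(gain_w)   # power gain uses 10

half_power_v = math.sqrt(0.5)                           # 0.707 V/V voltage gain
assert abs(half_power_v - 0.707) < 1e-3
assert abs(volts_to_db(half_power_v) + 3.0103) < 1e-3   # ~ -3 dB
assert abs(power_to_db(0.5) + 3.0103) < 1e-3            # ~ -3 dB, same point
```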

🎛️ Filtering in practice

🎛️ How discrete filters work

  • Filters are applied in the time domain using a difference equation of the form:
    • y[n] + a₁·y[n-1] + a₂·y[n-2] + ... + aₚ·y[n-p] = b₀·x[n] + b₁·x[n-1] + b₂·x[n-2] + ... + bₘ·x[n-m]
    • where p and m describe the filter order.
  • This is a time-domain process; it is computationally inefficient to filter by taking the Fourier Transform, zeroing out undesired frequencies, and then taking the Inverse Fourier Transform.
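The difference equation translates directly into code; a pure-Python sketch (LabVIEW's filter VIs do this internally):

```python
# Direct implementation of y[n] + a1*y[n-1] + ... = b0*x[n] + b1*x[n-1] + ...,
# rearranged as y[n] = sum(b_j*x[n-j]) - sum(a_i*y[n-i]).
def difference_eq(x, a, b):
    """a = [a1..ap] feedback coefficients, b = [b0..bm] feedforward."""
    y = []
    for n in range(len(x)):
        acc = sum(b[j] * x[n - j] for j in range(len(b)) if n - j >= 0)
        acc -= sum(a[i - 1] * y[n - i] for i in range(1, len(a) + 1) if n - i >= 0)
        y.append(acc)
    return y

# Sanity checks: b=[1] with no feedback is the identity; a two-tap moving
# average is the FIR case (p = 0, m = 1).
assert difference_eq([1.0, 2.0, 3.0], a=[], b=[1.0]) == [1.0, 2.0, 3.0]
assert difference_eq([2.0, 4.0, 6.0], a=[], b=[0.5, 0.5]) == [1.0, 3.0, 5.0]
```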

🎼 Effect of filtering on music

  • In the time domain: A filtered signal loses high-frequency components (sharp amplitude changes) and becomes smoother, retaining only lower frequencies.
  • In the frequency domain: Spectral peaks in the passband remain; spectral peaks outside the passband are eliminated.
  • Example: After applying a 20–150 Hz bandpass filter to Beethoven's Fifth Symphony, the filtered music spectrum shows fewer spectral peaks, with only those in the passband remaining.

🔋 Measuring signal strength

🔋 Power spectral density and magnitude spectrum

Power spectral density (power spectrum): the squared magnitude of the Fourier Transform, S_xx(jω) = X(jω) · X*(jω) = |X(jω)|².

  • A complex number times its complex conjugate produces a real value.
  • The magnitude of the Fourier Transform is the square root of the power spectrum.
  • The discrete version must be normalized by the number of samples.

⚡ Parseval's Theorem

Parseval's Theorem: signal power in the time domain is proportional to power in the frequency domain.

  • Mathematically (in discrete terms):
    • Sum from n=0 to N-1 of |x[n]|² = (1/N) · Sum from k=0 to N-1 of |X[k]|²
    • where |X[k]|² = X[k] · X*[k] (complex conjugate).
  • Why it matters: We can use a bandpass filter to access a specific sub-band of music, then measure the power in the time domain to determine if signal strength is above or below a threshold.
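A numeric check of the discrete Parseval relation on a made-up two-tone signal (direct DFT, pure Python):

```python
import cmath, math

N = 64
x = [math.sin(2 * math.pi * 5 * n / N) + 0.3 * math.cos(2 * math.pi * 11 * n / N)
     for n in range(N)]

X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

time_power = sum(v * v for v in x)                 # sum of |x[n]|^2
freq_power = sum(abs(Xk) ** 2 for Xk in X) / N     # note the 1/N normalization

assert abs(time_power - freq_power) < 1e-9         # equal, as Parseval states
```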

📐 Root-mean-square (RMS)

  • Why not use mean value? The signal is made up of many sinusoids, and the average value of sine or cosine is zero. Even if we add up many frequencies, the whole signal will have a mean of zero.
  • Better measure: Root-mean-square (RMS), which reduces the entire data set to a single scalar parameter estimating signal strength.

Three-step process (poorly named; should be "square-mean-root"):

  1. S (Square): For a signal vector of k data points, square each x[i] for i = 0, 1, 2, ..., k.
  2. M (Mean): Compute the mean of the k squared values.
  3. R (Root): Take the square root of the mean value.
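The three steps, spelled out literally in Python:

```python
import math

def rms(x):
    squared = [v * v for v in x]            # S: square each sample
    mean = sum(squared) / len(squared)      # M: mean of the squares
    return math.sqrt(mean)                  # R: root of the mean

# A zero-mean sinusoid still has nonzero strength: RMS of sin is A/sqrt(2)
samples = [math.sin(2 * math.pi * n / 100) for n in range(100)]
assert abs(sum(samples) / len(samples)) < 1e-12      # mean is ~0
assert abs(rms(samples) - 1 / math.sqrt(2)) < 1e-6   # RMS is ~0.707
```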

💡 Application: dancing light shows

💡 Frequency analysis for control

  • Music can be divided into multiple sub-bands (like an equalizer in an audio system).
  • External control of devices (e.g., colored lights) can be based on the signal strength within a sub-band.
  • How it works: Use a bandpass filter to isolate a sub-band, measure the power in the time domain (via RMS), and determine if the signal strength exceeds a threshold to turn a light on or off.

🎨 Project overview

  • Goal: Create a system that brings in stereo (two) channels of music and controls four colored lights using digital outputs (on-off).
  • Key requirements:
    • Time-domain plot of both channels.
    • A single power spectrum plotted with a linear y-axis (reduce two channels to one spectrum).
    • A Lissajous plot.
    • Four sensitivity knobs (one for each light color) to adjust for different music characteristics.
    • Dancing light behavior must be independent of stereo volume level.
    • Color-coordinated front panel indicators showing the state of each light.

7.2 Project

🧭 Overview

🧠 One-sentence thesis

This project creates a dancing light show system that analyzes stereo music in real time and controls four colored lights based on the character of the music, independent of volume level.

📌 Key points (3–5)

  • What the project does: brings in two stereo music channels and controls four colored lights using digital outputs based on frequency analysis.
  • Key technical challenge: the light behavior must be independent of stereo volume level, requiring signal processing beyond simple amplitude thresholds.
  • User interface requirements: must include time-domain plots, a single power spectrum, a Lissajous plot, and four sensitivity knobs for each light color.
  • Engineering decisions needed: sampling rate, bin size, dynamic range, acquisition mode, connection configuration, and how to divide the music spectrum to control four different lights.
  • Common confusion: bin size trade-off—too small causes excessive data transfer and program failure; too large makes the display update jerky instead of smooth.

🎛️ System architecture and requirements

🎛️ Data acquisition specifications

  • Input configuration: RSE (referenced single-ended) configuration on two channels specified by the instructor.
  • Output control: four digital outputs (on-off) for four colored lights.
  • Color coordination: front panel indicators must match the color of each light and tie to same-colored LEDs on the student's protoboard for local testing.

🖥️ User interface requirements (minimum)

The front panel must include:

Element           | Description
Time domain plot  | Both channels plotted on a single graph
Power spectrum    | Single spectrum with linear y-axis (student must decide how to reduce two channels to one)
Lissajous plot    | Parametric plot of the two channels against each other
Sensitivity knobs | Four knobs, one per light color, to adjust for different music characteristics
State indicators  | Color-coordinated indicators showing each light's on/off state

🧹 Cleanup behavior

  • All graphs should clear when the VI is turned off.
  • All lights should be turned off when the program stops.
  • The excerpt emphasizes adding a DAQ assistant that sends Boolean false to all LEDs on program stop, otherwise they stay on until the computer is turned off.

🔧 Engineering design decisions

📊 Data acquisition parameters

The excerpt lists several critical questions the student must answer:

  • Sampling rate: Consider the standard audio frequency range or investigate the sampling rate of a music CD.
  • Bin size: Must be large enough to avoid excessive data transfer (which causes program failure) but small enough to update graphs smoothly—aim for loop iteration time of about 1/10th of a second or less.
  • Dynamic range: Assess by monitoring channels in the DAQ Assistant wizard (click the run button to see live data while setting up).
  • Acquisition mode: Experiment with both continuous and N-Sample modes; both can provide workable solutions.
  • Connection configuration: Differential or reference-single-ended.
  • Zero-mean channels: Check if each channel is zero-mean; if not, mitigate by subtracting the offset or highpass filtering (the latter is reasonable since the low end of the audio band is 20 Hz).

🎵 Music analysis for light control

The fundamental concept is that we can reduce the entire data set (the k samples in our bin size) to a single scalar parameter that estimates signal strength.

  • Four decision gates needed: The student must determine how to analyze music to create four different decision gates for controlling the four lights.
  • Analysis approach: The excerpt references the first section of the chapter, suggesting bandpass filtering and RMS computations.
  • Spectrum division: The student must decide whether to divide the spectrum evenly or asymmetrically across the four lights.
  • Volume independence: The system must work regardless of stereo volume level—this requires normalization or relative measures rather than absolute amplitude thresholds.

Example: Instead of turning on a light when bass exceeds a fixed threshold (which depends on volume), use relative strength of bass compared to other frequencies or compared to recent history.
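One way to sketch such a volume-independent gate in Python (illustrative only; the "filtered" band is faked rather than computed): compare the band's RMS to the whole signal's RMS, so a volume scale factor cancels out of the ratio.

```python
import math

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def light_on(band, full, sensitivity=0.5):
    """True when the sub-band holds more than `sensitivity` of total strength."""
    return rms(band) / rms(full) > sensitivity

full = [math.sin(0.2 * n) + 0.1 * math.sin(2.9 * n) for n in range(1000)]
bass = [math.sin(0.2 * n) for n in range(1000)]   # stand-in for a filter output

loud_full = [3.0 * v for v in full]               # turn the volume up 3x
loud_bass = [3.0 * v for v in bass]

# The decision is the same at both volume levels
assert light_on(bass, full) == light_on(loud_bass, loud_full)
```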

📈 Special display elements

📈 Lissajous plot

A Lissajous plot is a parametric plot of two time domain signals against each other.

  • How it works: If we have signal x(t) and signal y(t), plot x(t_k) on the x-axis and the corresponding y(t_k) on the y-axis.
  • Time alignment: Both x(t) and y(t) are functions of time, but values must be time-aligned, then plot y versus x (not versus time).
  • What signals to use: The excerpt asks "What two signals do we have available?"—referring to the two stereo channels.

Example: At each time point, use the left channel value as the x-coordinate and the right channel value as the y-coordinate; the resulting pattern reveals the stereo relationship.
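Constructing the Lissajous point list from two synthetic, time-aligned channels (in the VI these pairs would feed an XY graph):

```python
import math

t = [n / 1000 for n in range(1000)]
left  = [math.sin(2 * math.pi * 3 * tk) for tk in t]                 # x-axis channel
right = [math.sin(2 * math.pi * 2 * tk + math.pi / 4) for tk in t]   # y-axis channel

points = list(zip(left, right))     # one (x, y) pair per time sample
assert len(points) == len(t)
assert points[0] == (left[0], right[0])
```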

🎨 User interface design principles

The excerpt emphasizes that appearance and usability matter:

  • Visual appeal: The interface should have the appearance of a stereo system.
  • Balance and symmetry: A key feature of good user interfaces; the excerpt states this explicitly.
  • Beyond defaults: Simply accepting default LabVIEW element configurations will be functional but receive low marks.
  • Engineering skill: Engineers need user-interface programming experience; LabVIEW provides a reasonable environment because detail work is well-defined and easily edited.
  • Encouraged additions: Students should think of other elements beyond the minimum that provide useful information and appealing visual effects, while maintaining focus on balance and symmetry.

🔌 Digital output implementation

🔌 Setting up digital control

The excerpt provides step-by-step instructions:

  1. Access the control: Express → Output → Generate Signals → DAQ Assist
  2. Choose output type: Generate Output → Digital Output → Line Output
  3. Select lines: Choose all four lines of Port 1 (easier to select all four as separate lines for per-light programmatic control)
  4. Generation mode: Set to "1 sample (On Demand)" in the DAQ Assistant wizard

🔌 Wiring Boolean controls

  • Build array required: A Build Array VI must be added between the Boolean controls and the DAQ Assistant input terminal.
  • Four-element vector: The input becomes a simple 4-element Boolean vector with one Boolean value for each light.
  • Shutdown code: Copy the wiring code but replace the 4 Boolean controls with False constants to turn off all LEDs when the program stops.

Don't confuse: Without the shutdown code, LEDs will stay on until the computer is turned off, not just until the program stops.


Dual-Tone Multi-Frequency (DTMF) Signaling

8.1 Dual-Tone Multi-Frequency (DTMF) Signaling

🧭 Overview

🧠 One-sentence thesis

DTMF signaling encodes telephone digits by combining one low and one high frequency tone, and can be decoded through spectral analysis, parallel bandpass filtering, or cross-correlation with gold standards.

📌 Key points (3–5)

  • What DTMF is: a robust signaling scheme that uses combinations of two frequencies (one low, one high) to represent telephone keypad symbols.
  • Why it replaced rotary dials: DTMF operates effectively on lower signal strengths, is more robust, and is quicker to use.
  • How to decode DTMF: three methods—FFT spectral analysis with peak detection, parallel bandpass filters measuring output power, or cross-correlation with ideal reference signals.
  • Common confusion: the detected frequencies may not match the table exactly due to sampling rate limitations, but the closest values identify the symbol.
  • Key design feature: Bell Labs chose frequency sets that are immune to confusion between symbols, even when noise is present.

📞 What DTMF is and why it exists

📞 The DTMF standard

Dual-Tone Multi-Frequency (DTMF): a signaling standard that uses two sets of frequencies—four high and four low—in combinations of one low and one high to encode symbols.

  • Each button press generates exactly two simultaneous tones.
  • The standard provides up to 16 separate symbols (4 × 4 = 16 combinations); the 12 buttons on a standard touch-tone phone use only three of the four high frequencies.
  • Modern cell phones still use DTMF on virtual keypads for navigating automated answering systems, even though they don't use it for dialing.

🔄 Replacement of rotary dial technology

  • Rotary dial method: the dial was rotated to a number, and as it relaxed, N pulses were generated; switching equipment counted the pulses to determine the digit.
  • Why DTMF replaced it:
    • Operates effectively on lower signal strengths.
    • More robust against errors.
    • More user-friendly (quicker to dial).
  • Introduced in the mid-1960s by Bell Telephone.

🎯 Design goal: robustness

  • Bell Telephone engineers worked diligently to develop a signaling scheme that was robust and immune to confusion between symbols.
  • The chosen frequency sets are a primary reason for this robustness: even with noise and imprecise frequency detection, the closest values reliably identify the intended symbol.

🔢 DTMF frequency assignments

🔢 The telephone keypad table

The excerpt provides the standard DTMF frequency table:

Low Freq \ High Freq | 1209 Hz | 1336 Hz | 1477 Hz
697 Hz               |    1    |    2    |    3
770 Hz               |    4    |    5    |    6
852 Hz               |    7    |    8    |    9
941 Hz               |    *    |    0    |    #
  • Each digit is encoded by one row frequency (low) and one column frequency (high).
  • Example: pressing "4" generates 770 Hz and 1209 Hz simultaneously.

🔍 Real-world detection example

  • The excerpt describes pressing "4" and recording the sound with additive noise.
  • Sampled at 8 kHz (maximum representable frequency: 4 kHz).
  • Frequency peaks detected at 771 Hz and 1211 Hz (not exact due to sampling rate).
  • The closest table values are 770 Hz and 1209 Hz → symbol "4".
  • Don't confuse: detected frequencies won't be exact; the processor identifies the closest known DTMF values.

🛠️ Three decoding methods

🛠️ Method 1: Spectral analysis (FFT)

How it works:

  1. Sample a short segment of the signal (e.g., 0.2 seconds at 8000 Hz).
  2. Compute the FFT (Discrete Fourier Transform).
  3. Calculate the magnitude: square root of the complex magnitude squared.
  4. Implement a peak detection algorithm to locate the indices of the two peaks.
  5. Map the vector indices to corresponding frequencies.
  6. Coerce the detected frequencies to the closest known DTMF values.

Example: The excerpt shows the time-domain signal for "4" (upper panel) and its frequency spectrum (lower panel). Peaks appear at 771 Hz and 1211 Hz, which coerce to 770 Hz and 1209 Hz.

Trade-off: Requires a coercion step (mapping detected frequencies to the table), which can be difficult to implement.
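A sketch of Method 1 on a synthetic "4" (770 Hz + 1209 Hz) at the excerpt's 8 kHz rate; a direct O(N²) DFT and a one-entry key table stand in for the real FFT VI and the full lookup:

```python
import cmath, math

LOW  = [697, 770, 852, 941]
HIGH = [1209, 1336, 1477]
KEYS = {(770, 1209): "4"}          # one entry of the full table, for the demo

fs, N = 8000, 400                  # 0.05 s of signal
x = [math.sin(2 * math.pi * 770 * n / fs) + math.sin(2 * math.pi * 1209 * n / fs)
     for n in range(N)]

mags = [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
        for k in range(N // 2)]

# Peak detection: strongest bin below and above 1 kHz
split = int(1000 * N / fs)
k_lo = max(range(1, split), key=mags.__getitem__)
k_hi = max(range(split, N // 2), key=mags.__getitem__)

# Coercion: snap each detected frequency to the closest table value
f_lo = min(LOW,  key=lambda f: abs(f - k_lo * fs / N))
f_hi = min(HIGH, key=lambda f: abs(f - k_hi * fs / N))
assert KEYS[(f_lo, f_hi)] == "4"
```

With a 20 Hz bin width the detected peaks land near, not on, 770 Hz and 1209 Hz, which is exactly why the coercion step is needed.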

🛠️ Method 2: Parallel bandpass filters

How it works:

  1. Pass the signal through seven parallel narrow bandpass filters (four for low frequencies, three for high frequencies).
  2. Compute the signal power (RMS measure) at the output of each filter.
  3. The two filters with maximum output power indicate the two frequencies.

Example: For the "4" signal (770 Hz + 1209 Hz):

  • Pass the signal through four low-frequency filters: 697 Hz, 770 Hz, 852 Hz, 941 Hz.
  • The 770 Hz filter will have significantly higher power than the others.
  • Repeat for high-frequency filters to identify 1209 Hz.

Advantage: Removes the difficult coercion step; the filter with the highest power directly identifies the frequency.

Don't confuse: This method doesn't detect exact frequencies; it identifies which filter responds most strongly.
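Method 2 can be sketched with a single-frequency power measure (a Goertzel-style DFT correlation) standing in for each narrow bandpass filter; the signal and sizes are illustrative:

```python
import cmath, math

LOW, HIGH = [697, 770, 852, 941], [1209, 1336, 1477]
fs, N = 8000, 800
x = [math.sin(2 * math.pi * 770 * n / fs) + math.sin(2 * math.pi * 1209 * n / fs)
     for n in range(N)]

def band_power(x, f, fs):
    """Strength of x near frequency f: magnitude of a single DFT correlation."""
    acc = sum(x[n] * cmath.exp(-2j * math.pi * f * n / fs) for n in range(len(x)))
    return abs(acc) / len(x)

# The "filter" with the largest output power identifies each frequency directly
low_winner  = max(LOW,  key=lambda f: band_power(x, f, fs))
high_winner = max(HIGH, key=lambda f: band_power(x, f, fs))
assert (low_winner, high_winner) == (770, 1209)
```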

🛠️ Method 3: Cross-correlation with gold standards

How it works:

  1. Store 12 ideal (gold standard) versions of each DTMF symbol.
  2. Cross-correlate the incoming signal with each of the 12 gold standards.
  3. The cross-correlation producing the highest value corresponds to the encoded symbol.

Example from the excerpt:

  • Upper panel: incoming signal encodes "2"; correlated with ideal "4" → correlation near 0 (no match).
  • Lower panel: incoming signal encodes "4"; correlated with ideal "4" → correlation coefficient about 0.77 at lag 0 (signal-to-noise ratio: -2 dB).

Advantages:

  • No need to separate low and high frequencies.
  • Can execute cross-correlation on a shorter segment (fewer computations per correlation).

Trade-off: Requires memory to store 12 gold-standard signals and compute 12 partial correlations.

Normalization: Correlation coefficients are normalized so perfect correlation equals 1; noise reduces the coefficient (e.g., 0.77 instead of 1).
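A sketch of Method 3 with just two gold standards ("2" and "4") and made-up noise; corr0 computes the normalized zero-lag correlation, so a perfect match scores 1:

```python
import math, random

fs, N = 8000, 400
PAIRS = {"2": (697, 1336), "4": (770, 1209)}

def tone(sym):
    """Ideal (gold standard) DTMF signal for one symbol."""
    lo, hi = PAIRS[sym]
    return [math.sin(2 * math.pi * lo * n / fs) + math.sin(2 * math.pi * hi * n / fs)
            for n in range(N)]

def corr0(a, b):
    """Normalized correlation at lag zero (1.0 = perfect match)."""
    num = sum(ai * bi for ai, bi in zip(a, b))
    den = math.sqrt(sum(ai * ai for ai in a) * sum(bi * bi for bi in b))
    return num / den

random.seed(1)
incoming = [v + random.gauss(0, 0.5) for v in tone("4")]   # noisy "4"

scores = {sym: corr0(incoming, tone(sym)) for sym in PAIRS}
assert max(scores, key=scores.get) == "4"        # best match identifies the symbol
assert scores["4"] > 0.6 > abs(scores["2"])      # noise lowers, but doesn't hide, it
```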

🎓 Project implementation overview

🎓 Signal structure

The excerpt describes a transmitted DTMF signal (Figure 8.4):

  • Higher amplitude sections: the phone number digits.
  • Smaller amplitude sections: spaces between digits and additive noise when no button is pressed.
  • Longer low-amplitude section: before the first number is sent.
  • Button-press occurrences have greater power and are easily distinguished from noise.

🎓 Project requirements (partial)

  • Use LabVIEW's Sine Waveform VI or Simulate Signal Express VI to add two sinusoids together.
  • Build a 7-digit telephone number signal.
  • Approximate amplitudes and sample lengths to match the example in Figure 8.4.
  • The beginning dead zone should be a random length between 0.1 and 0.5 seconds (realistic: you don't know when someone will start dialing).
  • Set the length of each number and spaces between numbers to look similar to the example.

Note: The excerpt cuts off mid-sentence; full project requirements are not provided.


8.2 Project

🧭 Overview

🧠 One-sentence thesis

This project requires students to create and decode a DTMF-encoded seven-digit telephone number using signal processing techniques, demonstrating the ability to distinguish button presses from noise and accurately extract the dialed digits.

📌 Key points (3–5)

  • What the project does: create and decode a DTMF-encoded seven-digit telephone number from both live signals and files.
  • Key challenge: discriminate between button presses (higher amplitude) and spaces/dead zones (lower amplitude with noise) using signal analysis.
  • Implementation requirement: use a State Machine architecture to divide detection into distinct processes.
  • Common confusion: don't decode the same button press multiple times—must detect when a button is released before decoding the next press.
  • Algorithm choice: students can use correlation, bandpass filters, FFT peak detection, or multiple methods for validation.

📞 Signal structure and characteristics

📊 DTMF signal anatomy

The transmitted signal (Figure 8.4) has distinct sections:

  • Higher amplitude sections: the actual phone numbers being pressed.
  • Smaller amplitude sections: spaces between each number.
  • Longer small-amplitude section: dead zone before the first number is sent.
  • Background noise: even when no button is pressed, additive noise is present at lower levels.

🔍 Distinguishing button presses from noise

Button-press occurrences clearly have greater power and can easily be distinguished from noise periods.

  • The key observable: button presses have significantly higher amplitude/power than the inter-digit spaces.
  • Example: by breaking the signal into smaller arrays and calculating RMS (root mean square), a change in RMS indicates transition from space to number or vice versa.
  • Don't confuse: the spaces are not silent—they contain noise, just at lower amplitude than button presses.
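The frame-RMS idea in Python, on a synthetic record with a quiet lead-in, one button press, and a quiet tail (all parameters made up):

```python
import math, random

random.seed(0)
fs, frame = 8000, 200

def noise(n):
    return [random.gauss(0, 0.02) for _ in range(n)]

press = [math.sin(2 * math.pi * 770 * n / fs) + math.sin(2 * math.pi * 1209 * n / fs)
         for n in range(1600)]

# quiet lead-in, noisy button press, quiet tail
signal = noise(800) + [p + w for p, w in zip(press, noise(1600))] + noise(800)

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

frames = [signal[i:i + frame] for i in range(0, len(signal), frame)]
active = [rms(f) > 0.1 for f in frames]      # threshold between noise and tone

# Frames 0-3 are the quiet lead-in, 4-11 the press, 12-15 quiet again
assert active == [False] * 4 + [True] * 8 + [False] * 4
```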

🛠️ Creation requirements

🎵 Signal generation

Students must create a VI (Virtual Instrument) that:

  • Adds two sinusoids together (DTMF uses two frequencies per digit).
  • Builds a complete 7-digit telephone number.
  • Uses either the Sine Waveform VI or Simulate Signal Express VI.

⏱️ Timing specifications

  • Initial dead zone: random length between 0.1 and 0.5 seconds (simulates realistic delay before dialing starts).
  • Button press duration: all buttons pressed for approximately the same length of time (as shown in Figure 8.4).
  • Space duration: time between button releases approximately equal across all digits.
  • Amplitude matching: approximate the amplitudes shown in Figure 8.4 for both numbers and spaces.

Rationale: In a real system, you don't know when someone will start dialing, but once dialing begins, press and release times are typically consistent.
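The timing rules above can be sketched as a signal assembler: a random-length noisy dead zone, then equal-length tones separated by equal-length noisy spaces (function name, noise amplitude, and defaults are illustrative assumptions):

```python
import numpy as np

def assemble_number(digit_tones, fs=8000, space_len=0.2,
                    noise_amp=0.05, rng=None):
    """Concatenate: a dead zone of random length (0.1-0.5 s of noise),
    then each digit tone with additive noise, separated by noisy spaces."""
    rng = np.random.default_rng() if rng is None else rng
    dead = noise_amp * rng.standard_normal(int(rng.uniform(0.1, 0.5) * fs))
    parts = [dead]
    for tone in digit_tones:
        parts.append(tone + noise_amp * rng.standard_normal(len(tone)))
        parts.append(noise_amp * rng.standard_normal(int(space_len * fs)))
    return np.concatenate(parts)
```

Only the dead-zone length is randomized; press and space durations stay fixed, matching the "consistent once dialing begins" rationale.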

🔓 Decoding requirements

🔀 Dual input modes

The decoding program must handle:

  • Live signals: DTMF signal sent through the DAQ (Data Acquisition hardware).
  • File playback: DTMF signal loaded from a stored file.
  • Toggle switch: user can switch between live and file modes.

📱 Output format

The phone number must be displayed as a string with a hyphen between the 3rd and 4th digits (e.g., 555-1234).

  • Must produce a properly formatted string output.
  • Example: seven digits displayed as XXX-XXXX format.
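The formatting step itself is a one-liner; a minimal sketch (the function name and the validation check are my additions):

```python
def format_number(digits):
    """Insert a hyphen between the 3rd and 4th digits: '5551234' -> '555-1234'."""
    if len(digits) != 7 or not digits.isdigit():
        raise ValueError("expected exactly seven decimal digits")
    return f"{digits[:3]}-{digits[3:]}"
```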

🧮 Algorithm selection

Students must determine which method(s) to use:

| Method | Description |
| --- | --- |
| Correlation | Compare signal segments against known DTMF patterns |
| Bandpass filters | Isolate the two frequency components of each digit |
| FFT peak detection | Find frequency peaks and coerce to nearest DTMF frequencies |
| Multiple algorithms | Use agreement between methods to minimize errors |

  • The excerpt notes that multiple algorithms can be combined for validation.
  • Cross-correlation advantage (from earlier context): fewer computations per correlation, no need to separate frequencies across entire signal.
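The FFT-peak-detection option can be sketched as follows: find the strongest peak in the low band and in the high band, then coerce each to the nearest standard DTMF frequency (band edges and function name are my own choices; the frequency tables are the standard keypad assignment):

```python
import numpy as np

ROW_FREQS = [697, 770, 852, 941]      # low-group (row) frequencies, Hz
COL_FREQS = [1209, 1336, 1477, 1633]  # high-group (column) frequencies, Hz
KEYPAD = [["1", "2", "3", "A"],
          ["4", "5", "6", "B"],
          ["7", "8", "9", "C"],
          ["*", "0", "#", "D"]]

def decode_segment(segment, fs=8000):
    """FFT a captured button-press segment, locate the dominant peak in
    each band, and coerce each peak to the nearest DTMF frequency."""
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), 1.0 / fs)
    low = (freqs >= 600) & (freqs <= 1050)
    high = (freqs >= 1100) & (freqs <= 1700)
    f_low = freqs[low][np.argmax(spectrum[low])]
    f_high = freqs[high][np.argmax(spectrum[high])]
    row = int(np.argmin(np.abs(np.array(ROW_FREQS) - f_low)))
    col = int(np.argmin(np.abs(np.array(COL_FREQS) - f_high)))
    return KEYPAD[row][col]
```

The coercion step makes the decoder tolerant of limited frequency resolution: even if the peak bin is a few hertz off, the nearest-frequency lookup still lands on the correct row and column.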

🎯 Single-decode constraint

Determine a method to decode a button press only once and then ignore the button press until it is released.

Critical requirement:

  • Must detect when a button is released before attempting to decode the next press.
  • Prevents the same digit from being decoded multiple times during a single button press.
  • Implementation hint: monitor for RMS drop signaling the space after a number.
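One way to satisfy this constraint is rising-edge detection on the per-window RMS: report a press only when RMS crosses the threshold from below, then ignore it until RMS drops again. A minimal sketch (function name and threshold handling are assumptions):

```python
def detect_presses(rms_values, threshold):
    """Return the window indices where RMS rises above the threshold.
    Each press is reported exactly once; the decoder stays disarmed
    until RMS falls back below the threshold (button released)."""
    pressed = False
    edges = []
    for i, r in enumerate(rms_values):
        if r > threshold and not pressed:
            edges.append(i)   # new press detected: decode at this window
            pressed = True
        elif r <= threshold and pressed:
            pressed = False   # release detected: arm for the next press
    return edges
```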

🏗️ Implementation architecture

🔄 State Machine requirement

For this project (both the DTMF creation VI and the decoding VI), the students are required to use a State Machine.

  • Divide detection steps into distinct processes.
  • Develop a separate state for each process.
  • Both creation and decoding VIs must use this architecture.
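A LabVIEW state machine executes one state per loop iteration; the same architecture can be sketched in Python with an enum. The state names, `decode_fn` callback, and threshold logic below are illustrative assumptions, not the book's design:

```python
from enum import Enum, auto

class State(Enum):
    WAIT_FOR_PRESS = auto()    # scan windows for an RMS rise
    WAIT_FOR_RELEASE = auto()  # ignore windows until RMS drops
    DONE = auto()              # all digits collected

def run_decoder(rms_values, decode_fn, threshold, n_digits=7):
    """Step through RMS windows, one transition per iteration, decoding
    each press exactly once via the supplied decode_fn(window_index)."""
    state = State.WAIT_FOR_PRESS
    digits = []
    for i, r in enumerate(rms_values):
        if state is State.WAIT_FOR_PRESS and r > threshold:
            digits.append(decode_fn(i))
            state = State.WAIT_FOR_RELEASE
        elif state is State.WAIT_FOR_RELEASE and r <= threshold:
            state = State.DONE if len(digits) == n_digits \
                else State.WAIT_FOR_PRESS
    return digits
```

Separating "waiting for a press" from "waiting for a release" as distinct states is exactly what enforces the single-decode constraint: decoding can only happen on the transition out of `WAIT_FOR_PRESS`.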

⚙️ Configuration parameters

Students must set appropriately:

  • Sampling rate: determines how many samples per second.
  • Bin size: affects frequency resolution.
  • Dynamic range: the span between minimum and maximum signal levels.
  • Channel configuration: how input channels are set up.

🔍 Detection workflow

Recommended cycle for decoding:

  1. Break signal into smaller arrays: enables RMS analysis without processing too many points at once.
  2. Look for RMS change: indicates transition from space to number.
  3. Capture number segment: take just enough data points to accurately represent the number.
  4. Apply algorithm: determine which digit was pressed.
  5. Look for RMS drop: signals the next space (button released).
  6. Repeat cycle: continue until all seven digits decoded.

Warning: Don't use too many points in each array when searching for number boundaries—this reduces responsiveness to transitions.

🖥️ User interface

📺 Display requirements

Develop a user interface that clearly informs the user of the decoded telephone number.

  • Must present the decoded result in a clear, user-friendly format.
  • Should display the formatted string (XXX-XXXX).
  • The interface uses indicators to show outputs on the front panel (from LabVIEW context in Appendix A).