Research Methods for the Social Sciences: An Introduction


1.1 What are Research Methods?

🧭 Overview

🧠 One-sentence thesis

Research methods are a systematic process of inquiry that enables us to learn about the social world in a formal, structured way, transforming everyday curiosity into rigorous knowledge-building.

📌 Key points (3–5)

  • Research is systematic: there is a "right way" to do research, not just casual searching or asking questions.
  • We already research informally: everyday activities like Google searches show that asking questions is part of being human, but formal research requires structured methods.
  • Basic vs applied research: basic research builds theory without a specific application goal; applied research addresses real-world problems—but both can ultimately inform each other.
  • Common confusion: basic research may seem "not useful," but it can later be applied; applied research may not solve the intended problem but can advance theory.
  • Research is a nine-step process: from choosing a topic through communicating findings, each step is part of a structured system.

🔍 What research methods are

🔍 Definition and core idea

Research methods: a systematic process of inquiry applied in such a manner as to learn something about our social world.

  • The key word is systematic—there is a system, a structured approach, not random exploration.
  • Research is about engagement and thinking critically about the world around us.
  • It formalizes the intuitive questioning we already do (e.g., Google searches) into a rigorous process.

🤔 Why study research methods if we already research?

  • We intuitively research things all the time (asking questions, figuring out why things happen).
  • Formal research methods teach more formal ways of collecting and sharing knowledge.
  • Understanding the "right way" to do research is essential for any profession and for undertaking research projects correctly.
  • Example: You might Google "why do people survive crises?" informally, but formal research would systematically investigate the psychological characteristics and factors associated with survival during active crises.

🧪 Types of research approaches

🧪 Basic research

  • Goal: contribute to theories or knowledge without a specific application in mind.
  • Example: A researcher modifies an existing theory related to post-traumatic stress disorder.
  • Important note: even basic research may ultimately be used for some applied purpose later.

🛠️ Applied research

  • Goal: address real-world social problems and shape social life.
  • Example: A study helps policy makers change an existing policy or create a new one.
  • Important note: applied research might not solve the intended problem but can still advance theoretical understanding.

🔄 How they relate

| Type | Primary goal | Can it cross over? |
| --- | --- | --- |
| Basic | Build theory, modify existing knowledge | Yes—may later be applied to real-world problems |
| Applied | Solve specific real-world problems | Yes—may improve theory even if it doesn't solve the intended problem |

  • Don't confuse: basic ≠ useless; applied ≠ always practical. Both contribute to knowledge and can inform each other.

📋 The research process

📋 Nine-step structure

Research is defined by the approach taken; it is a structured process with nine steps:

  1. Choose a topic
  2. Review the literature (past research)
  3. Formulate the problem (find the gap in past research)
  4. Develop a research question
  5. Choose and organize the research design
  6. Gather the data
  7. Analyze the data
  8. Interpret the data
  9. Communicate the findings

📚 Role of literature review

Literature review: surveys books, scholarly articles, and any other sources relevant to a particular issue and area of research, and provides a critical evaluation of these works in relation to the current research problem being investigated.

  • The literature review (step 2) is foundational—it shows what past research has done and where gaps exist.
  • It connects past research to the current problem and informs the research question.
  • Why it matters: research reflects not just "how the world is" but also "how, where, and when we have asked the questions"—understanding past work shapes how we ask new questions.

🔄 Research as a reflective process

  • Research uncovers aspects of how the world is.
  • But it also reflects the approach, timing, and framing of the questions asked.
  • This means the process itself shapes the findings—systematic methods ensure rigor and transparency.

1.2 The Process of Undertaking Research

🧭 Overview

🧠 One-sentence thesis

Research is a structured nine-step process shaped by how, where, and when questions are asked, and it bridges past findings with future inquiry through iterative refinement.

📌 Key points (3–5)

  • Research as a process: research follows a defined nine-step sequence from choosing a topic to communicating findings.
  • Literature review's dual role: it surveys past work to identify gaps for the current study and generates questions for future research.
  • Applied vs basic research contributions: applied research shapes social life (e.g., policy changes), while basic research advances theory—but both can ultimately serve either purpose.
  • Common confusion: research methods, research techniques, and research methodology are distinct terms that students should not conflate.
  • Iterative nature: research starts with broad "why" or "how" questions and requires continuous refinement.

🔄 The nine-step research process

🔄 The sequential steps

The excerpt defines research as a nine-step process:

  1. Choose a topic.
  2. Review the literature (past research).
  3. Formulate the problem (find the gap in past research).
  4. Develop a research question.
  5. Choose and organize the research design.
  6. Gather the data.
  7. Analyze the data.
  8. Interpret the data.
  9. Communicate the findings.
  • Each step builds on the previous one.
  • The process is iterative, meaning refinement happens throughout—not just a linear march from step 1 to 9.
  • Example: An organization wants to understand survival factors in active crises → the researcher moves through these nine steps to investigate psychological characteristics and factors.

🔍 Why the process is shaped by approach

Research uncovers some aspect of how the world is, but it also reflects in large part how, where, and when we have asked the questions.

  • The approach taken defines the research process itself.
  • The same phenomenon can yield different findings depending on the questions asked, the timing, and the context.
  • Don't confuse: research is not just "discovering facts"—it is also shaped by the researcher's framing and context.

📚 The role of literature review

📚 What literature review does

Literature review: surveys books, scholarly articles, and any other sources relevant to a particular issue and area of research and provides a critical evaluation of these works in relation to the current research problem being investigated.

  • It is step 2 in the nine-step process.
  • It serves two purposes:
    • Backward-looking: understand what past research has found.
    • Forward-looking: identify gaps that form the basis for the current study and future research.

🔗 Past and future research connection

  • The excerpt emphasizes that literature review links past findings to the current problem (step 3: formulate the problem by finding the gap).
  • At the end of the research, any new gaps identified become the starting point for future research.
  • Example: A researcher reviews studies on post-traumatic stress disorder, finds a gap in existing theory, modifies the theory, and in doing so creates new questions for others to explore.

🧪 Applied vs basic research contributions

🧪 Applied research

  • Goal: shape social life by addressing real-world problems.
  • Example: A study helps policy makers change an existing policy or create a new one.
  • The excerpt notes that applied research can make a contribution by directly influencing social practices.

🧪 Basic research

  • Goal: advance theories or knowledge without a specific application in mind.
  • Example: A researcher modifies an existing theory related to post-traumatic stress disorder.
  • Important caveat: even basic research may ultimately be used for applied purposes.

🔄 The overlap between applied and basic

| Type | Primary goal | Can it serve the other purpose? |
| --- | --- | --- |
| Applied | Solve a real-world problem | Yes—might improve theoretical understanding even if it doesn't solve the original problem |
| Basic | Advance theory | Yes—may eventually be applied to practical issues |

  • Don't confuse: the distinction is about initial intent, not final outcome.
  • Both types can cross over: applied research might deepen theory, and basic research might later inform policy.

🌱 Where research ideas originate

🌱 Common sources of inspiration

The excerpt lists several origins for research ideas:

  • Replicating, clarifying, or challenging previous research: resolving conflicting results is a common reason.
  • New technology: e.g., the impact of Facebook or Twitter on society.
  • Serendipity: surprise findings the researcher wants to explore further.
  • Anomalies: unexpected situations that should not technically exist.
  • Common sense research: challenging what history, tradition, or basic common sense says is true.
  • Applied field problems: for public safety professionals, research often comes from a problem supplied by an agency (e.g., a policy concern or a goal to achieve).

🔍 The iterative refinement process

  • Research often starts with broad questions: "why" or "how".
  • The excerpt emphasizes that research is an iterative process, meaning it requires continuous refinement.
  • Example: An individual makes an observation or has a question about the world → the question is too broad at first → through the nine-step process, the question is refined into a specific research problem and question (steps 3 and 4).

🔑 Key terminology distinctions

🔑 Research methods, techniques, and methodology

  • The excerpt explicitly warns that these three terms are distinct and should not be confused.
  • It directs readers to a separate document for definitions (not included in this excerpt).
  • Why it matters: understanding these distinctions is important for the final assignment and for navigating the research process correctly.
  • Don't confuse: "methods," "techniques," and "methodology" are not interchangeable—each has a specific meaning in the research context.

1.3 Where Do Research Ideas Come From?

🧭 Overview

🧠 One-sentence thesis

Research ideas emerge from diverse sources—including gaps in prior work, new technologies, unexpected findings, and practical problems—and must be refined through an iterative process to develop appropriate research questions.

📌 Key points (3–5)

  • Multiple sources of inspiration: replicating/clarifying/challenging past research, new technology, serendipity, anomalies, common-sense challenges, and applied problems.
  • Why vs. how questions: research generally starts with broad "why" or "how" questions that require refinement through an iterative process.
  • Five types of research: exploratory, descriptive, relational, explanatory, or transformative—each with different methods and objectives.
  • Common confusion: research does not begin with a fixed, narrow question; it starts broad and is refined iteratively.
  • Objective determines method: identifying the research objective is essential to selecting the most appropriate research method.

💡 Sources of research inspiration

💡 Building on prior research

  • Replicating, clarifying, or challenging previous studies are common starting points.
  • Resolving conflicting results from earlier work also motivates new research.
  • These approaches help fill gaps identified at the end of existing research, which form the basis for future inquiry.

🔬 Unexpected and emerging phenomena

  • New technology: innovations like Facebook or Twitter create new research opportunities by changing society.
  • Serendipity: surprise findings that researchers want to explore further.
  • Anomalies: unexpected situations that should not technically exist, prompting investigation.
  • Example: A researcher notices an outcome that contradicts established theory and designs a study to understand why.

🧠 Common-sense challenges

Common sense research: testing what history, tradition, or basic common sense treats as settled truth, until someone challenges it.

  • Sometimes researchers explore further "something we all believe we know."
  • This involves questioning accepted wisdom or conventional beliefs.
  • Example: An organization assumes a policy works because "it always has," until someone tests whether it actually achieves its goals.

🚨 Applied and practical problems

  • For those in applied fields like public safety, research often comes from problems supplied to the researcher.
  • Agency goals or policy concerns: an organization may have a goal they are trying to achieve or a concern about a policy change.
  • Individual observations or questions: you, as an individual, make an observation or have a question about the world around you.
  • Research is everywhere and often starts with practical needs.

🔄 The iterative refinement process

🔄 Starting broad, refining iteratively

  • Research generally starts with the questions of why or how.
  • Even if the research starts with these basic, and often broad, questions, it is an iterative process, meaning that it requires refinement.
  • Don't confuse: the initial question is not the final research question; it must be narrowed and clarified over time.

🎯 Matching objectives to methods

  • The reasons for beginning a research project vary, and so do the types of research questions.
  • Research can be:
    • Exploratory
    • Descriptive
    • Relational
    • Explanatory
    • Transformative
  • Each has different methods and end objectives.
  • It is important to identify the objectives of the research project to determine the most appropriate type of research method to use.
  • The next step after identifying objectives is to develop a research question.

📋 Summary of research idea origins

| Source | Description | Example context |
| --- | --- | --- |
| Prior research | Replicating, clarifying, challenging, or resolving conflicts | Filling gaps identified in existing studies |
| New technology | Innovations that change society | Social media platforms like Facebook or Twitter |
| Serendipity | Surprise findings | Unexpected result prompts further exploration |
| Anomalies | Situations that should not exist | Outcome contradicts established theory |
| Common sense | Challenging accepted beliefs | Testing "we've always done it this way" assumptions |
| Applied problems | Goals, policy concerns, observations | Agency needs or individual questions in public safety |


1.4 Understanding Key Research Concepts and Terms

🧭 Overview

🧠 One-sentence thesis

Research planning requires understanding foundational philosophical concepts—ontology and epistemology—which shape how researchers frame reality and knowledge, ultimately guiding methodological choices between objectivist and subjectivist approaches.

📌 Key points (3–5)

  • Ontology and epistemology are foundational: ontology asks "what is" (nature of reality), epistemology asks "how we know what is" (nature of knowledge).
  • Two ontological stances: objectivism views social entities as existing externally to social actors; subjectivism views social phenomena as created by actors' perceptions and actions.
  • Research questions determine approach: the way you frame your question dictates whether you need an objectivist or subjectivist lens, which in turn affects who you can interview and what data counts as valid.
  • Common confusion: same job role ≠ same data source—in objectivist research, role structure matters; in subjectivist research, individual perceptions and interactions matter, so changing personnel changes the phenomenon itself.
  • Multiple pathways to knowledge: epistemology recognizes different valid methods (interviews, observation, surveys, documents) each with their own assumptions about how to uncover knowledge.

🧩 Foundational philosophical concepts

🧩 What ontology addresses

Ontology: the study, theory, or science of being; concerned with "what is" or the nature of reality.

  • Ontology tackles large, fundamental questions about existence and reality.
  • Examples from the excerpt: "What is the purpose of life?" "Is there such a thing as objective reality?" "What does the verb 'to be' mean?"
  • In research context: ontology shapes what you consider "real" and therefore what you can study.

🔍 What epistemology addresses

Epistemology: deals with questions of how we know what is; concerned with the nature and sources of knowledge.

  • Rather than asking "what exists," epistemology asks "how can we find out about what exists?"
  • Researchers usually start with some understanding of: 1) what is; 2) what can be known about what is; 3) what the best mechanism is for learning about what is.
  • Different data collection methods (interviews, observation, surveys, documents) carry different epistemological assumptions about valid ways to uncover knowledge.

🔀 Two ontological approaches

🏛️ Objectivism

Objectivism: social entities exist externally to the social actors who are concerned with their existence.

  • The organization, structure, or phenomenon exists independently of the people currently occupying roles within it.
  • Example from excerpt: The City's formal management structure, hierarchy, and procedures remain the same regardless of which specific managers occupy the positions.
  • Implication: if the structure is stable, different people in the same role can provide equivalent data about that role.

👥 Subjectivism

Subjectivism: social phenomena are created from the perceptions and actions of the social actors who are concerned with their existence.

  • Reality is continuously constructed through social actors' perceptions and interactions.
  • Example from excerpt: corporate culture's effect on community relationships is shaped by each manager's perceptions and interactions; the relationship is "in a constant state of revision."
  • Implication: changing the people changes the phenomenon itself, so you need data from specific individuals at specific times.

📚 Comparing objectivist vs subjectivist research

📚 How the same topic differs by approach

The excerpt provides two students researching emergency management at the same organization:

| Aspect | Ana (Objectivist) | Robert (Subjectivist) |
| --- | --- | --- |
| Research question | What is the role of managers in enabling positive community relationships? | What is the effect of corporate culture in enabling managers to develop positive relationships? |
| What is "real" | Formal management structure, job descriptions, hierarchy, procedures | Perceptions of corporate culture and its effects on relationships |
| When managers change | Structure remains the same; new managers can answer questions about the role | Perceptions and interactions change; need both old and new managers' perspectives |
| Data focus | Roles and duties (stable, external) | Perceptions and social interactions (fluid, constructed) |

⚠️ Common confusion: same job ≠ same data source

  • Don't assume: "The new managers have the same job duties, so they can answer the same questions."
  • Objectivist view: Yes, if you're studying the role itself (structure, duties, hierarchy), new managers can speak to it because the role exists independently.
  • Subjectivist view: No, if you're studying perceptions and interactions, each manager's unique experience creates different social phenomena; losing access to previous managers means losing access to their constructed reality.
  • The research question determines which view is appropriate.

🛠️ How philosophy guides practical decisions

🛠️ From philosophy to method

  • The excerpt emphasizes that ontological stance flows from your research question and then guides practical research decisions.
  • Example decision: whom to interview when personnel change.
  • Ana's objectivist question → she can interview only new managers (structure is stable).
  • Robert's subjectivist question → he should interview both new and old managers (each constructs different perceptions).

🗺️ The research process overview

  • The excerpt references Figure 1.1 (not shown in text) that maps the research process from ontology and epistemology through to three methodological approaches: qualitative, quantitative, and mixed methods.
  • The excerpt notes this is a general overview; each strategy has its own data collection and analysis approaches.
  • Research decisions don't end with choosing methods—"the work is just beginning at this point."

1.5 Research Paradigms in Social Science

🧭 Overview

🧠 One-sentence thesis

Social science research is guided by different paradigms—each with distinct assumptions about reality, truth, and how knowledge should be pursued—that fundamentally shape how researchers approach their studies.

📌 Key points (3–5)

  • What a paradigm is: a way of viewing the world and framing what we know, what we can know, and how we can know it.
  • Five major paradigms: positivism (objective truth), interpretivism (understanding social actors), social constructionism (truth as socially created), critical paradigm (focus on power and change), and postmodernism (questioning absolute truth).
  • Common confusion: positivism vs interpretivism—positivism seeks objective facts (e.g., job positions, promotions), while interpretivism focuses on subjective meanings (e.g., feelings, attitudes).
  • Why paradigms matter: they shape researchers' ontological and epistemological assumptions, determining what questions are asked and how data is collected.

🔭 Understanding paradigms

🔭 What a paradigm is

A paradigm is a way of viewing the world, a set of ideas that is used to understand or explain something, often related to a specific subject.

  • It frames what we know, what we can know, and how we can know it.
  • Paradigms shape our stances on issues and guide our assumptions about how the world works.
  • Example: Different views on abortion illustrate paradigms—one person sees it as an individual medical decision, another as a collective moral issue; each operates under different assumptions about how society should work.
  • Don't confuse: a paradigm with a simple opinion—paradigms are comprehensive frameworks that include assumptions about reality, knowledge, and methods.

🧩 Paradigms and research foundations

  • Each paradigm has its own unique ontological (what exists) and epistemological (how we know) perspective.
  • The excerpt emphasizes that paradigms are not just theoretical—they directly guide practical research decisions.

🔬 The five major paradigms

🔬 Positivism

Positivism is guided by the principles of objectivity, knowability, and deductive logic.

  • Core assumption: Society can and should be studied empirically and scientifically.
  • Calls for value-free sociology—researchers aim to abandon biases in pursuit of objective, empirical truth.
  • Example: Leah (psychology PhD student) studies women's success in business by collecting "facts" like job position, promotions, and compensation—tangible things with separate existence from the researcher.

👥 Interpretivism

  • Core assumption: People interpret their social roles in relationships, which influences how they give meaning to those roles and the roles of others.
  • Emphasizes understanding differences among humans as social actors (not objects).
  • Uses the theatre analogy—actors interpret their parts in specific ways, just as people interpret their social roles.
  • Example: Krista (public health student) studies "feelings" and "attitudes" of male workers toward female managers—focusing on subjective meanings rather than objective facts.

🏗️ Social constructionism

  • Core assumption: Reality is created collectively; social context and interaction frame our realities.
  • Posits that "truth" is varying, socially constructed, and ever-changing (not a fixed reality to discover).
  • Key idea: we create reality through our interactions and interpretations, rather than discovering pre-existing reality.
  • Example: Hand gestures have different meanings across regions, demonstrating that meaning is constructed socially and collectively.
  • Not purely individualistic: Groups (from couples to nations) often agree on notions of what is true and what "is."

⚖️ Critical paradigm

  • Core assumption: Social science can never be truly value-free and should be conducted with the express goal of social change in mind.
  • Focused on power, inequality, and social change.
  • Unlike positivism, rejects the idea that research can be objective or neutral.
  • Scientific investigation should explicitly seek social change.

🌀 Postmodernism

  • Core assumption: Truth in any form may or may not be knowable.
  • Claims there are no definite terms, boundaries, or absolute truth.
  • Argues we can never really know truth because researchers put their own truth onto the investigation.
  • Asks: whose power, whose inequality, whose change, whose reality, whose truth?
  • Challenge for researchers: How does one study something that may or may not be real or that is only real in your current and unique experience?

📊 Comparing the paradigms

| Paradigm | Emphasis | Core assumption |
| --- | --- | --- |
| Positivism | Objectivity, knowability, deductive logic | Society can and should be studied empirically and scientifically |
| Interpretivism | Research on humans as social actors | People interpret their social roles, which influences how they give meaning to those roles |
| Social constructionism | Truth as varying, socially constructed, ever-changing | Reality is created collectively; social context frames our realities |
| Critical paradigm | Power, inequality, social change | Social science can never be value-free and should aim for social change |
| Postmodernism | Truth may or may not be knowable | No objective, knowable truth exists independent of the researcher |

🔍 Distinguishing key paradigms

🔍 Positivism vs interpretivism in practice

The excerpt provides a detailed comparison through two students:

Leah (positivist approach):

  • Collects "facts" about women's success in business
  • Reality represented by tangible things: job position, promotions, compensation
  • These objects have separate existence from the researcher
  • Data collection considered less open to bias, more objective

Krista (interpretivist approach):

  • Collects data about "feelings" and "attitudes"
  • Focuses on male workers' perspectives toward female managers
  • While some argue feelings are subjective and not measurable, human feelings can be, and frequently are, measured
  • Questions whether statistical data deserves more authority than feelings data

🔄 Constructionism vs positivism

  • Positivists seek "the truth" (singular, discoverable)
  • Social constructionists see truth as varying and socially constructed
  • Constructionists study how people come to socially agree or disagree about what is real and true
  • Interest in both how meanings are created and how people work to change them

1.6 Inductive Approaches to Research

🧭 Overview

🧠 One-sentence thesis

Inductive research moves from collecting specific observations and data to identifying patterns and building general theories that explain those patterns.

📌 Key points (3–5)

  • Direction of reasoning: inductive research moves from specific data to general theory (data → patterns → theory).
  • Starting point: begins with data collection relevant to the topic, not with a pre-existing theory.
  • Process: collect substantial data first, then step back to look for patterns, then develop explanatory theory.
  • Common confusion: inductive vs deductive—inductive goes from specific to general (data first), while deductive goes from general to specific (theory first).
  • Complementary nature: inductive and deductive approaches are different but can work together in research.

🔄 The inductive research process

📊 Three-stage workflow

The excerpt describes a clear sequence:

  1. Gather data – collect information relevant to your topic of interest
  2. Look for patterns – step back and take a "bird's eye view" to identify recurring themes
  3. Develop theory – create explanations that account for the patterns you observed

🎯 Level of focus shift

| Stage | Focus level | What happens |
| --- | --- | --- |
| Data gathering | Specific | Working with particular observations and experiences |
| Pattern analysis | Analysis | Examining relationships across data points |
| Theory development | General | Creating broader propositions about the phenomenon |

🔍 How inductive reasoning works

🧩 From particular to general

Inductive approach: researchers start with a set of observations and move from those particular experiences to a more general set of propositions about those experiences.

  • You don't begin with a hypothesis or theory to test.
  • Instead, the theory emerges from what you observe in the data.
  • Example: An organization collects feedback from many users, notices recurring complaints about a specific feature, then develops a theory about why that feature creates problems.
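The bottom-up flow in that example (specific data → observed pattern → general proposition) can be sketched as a toy script. Everything here is invented for illustration: the feedback strings, the "first word names the feature" encoding, and the resulting counts.

```python
from collections import Counter

# Invented user-feedback reports, mirroring the example of an organization
# noticing recurring complaints about one feature.
observations = [
    "search is slow", "login works fine", "search is slow",
    "search crashed", "export works", "search is slow",
]

# Step 1 (specific): gather data. Each report is tagged by the feature it
# mentions (its first word, in this toy encoding).
features = [report.split()[0] for report in observations]

# Step 2 (analysis): pause collection and look for patterns across the
# whole data set rather than at individual reports.
pattern_counts = Counter(features)
most_common_feature, count = pattern_counts.most_common(1)[0]

# Step 3 (general): a tentative proposition emerges FROM the data; it was
# not posed up front, as a hypothesis would be in deductive research.
theory = (f"Complaints cluster around '{most_common_feature}' "
          f"({count} of {len(observations)} reports)")
print(theory)
```

The point of the sketch is the ordering: no hypothesis exists before the `Counter` step; the general claim is only formed after the pattern is seen.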

⏸️ The "breather" moment

  • After collecting substantial data, researchers deliberately pause data collection.
  • This stepping-back phase is crucial—it allows you to see the bigger picture rather than getting lost in individual data points.
  • Don't confuse: this isn't abandoning data collection permanently; it's a strategic pause to analyze before potentially collecting more.

🔀 Relationship to deductive approaches

🔄 Complementary opposites

  • The excerpt emphasizes that inductive and deductive approaches are "quite different" but "can also be complementary."
  • They represent opposite directions of reasoning but both are valid research strategies.
  • Deductive research essentially reverses the inductive steps: it starts with theory and tests it with data (covered in section 1.7).

🚫 Common confusion: inductive vs deductive

Don't confuse the direction of reasoning:

  • Inductive: specific observations → general theory (bottom-up)
  • Deductive: general theory → specific tests (top-down)

The excerpt notes that researchers must think about which approach they plan to employ when designing their research.


1.7 Deductive Approaches to Research

🧭 Overview

🧠 One-sentence thesis

Deductive research reverses the inductive process by starting with existing theory and testing its implications through data analysis, moving from general theoretical propositions to specific empirical observations.

📌 Key points (3–5)

  • Direction of reasoning: moves from general theory to specific data (opposite of inductive).
  • Starting point: begins with existing social theories and hypotheses derived from them.
  • Process: theorize/hypothesize → analyze data → determine if hypotheses are supported.
  • Common confusion: deductive goes theory-first (general→specific), while inductive goes data-first (specific→general); they reverse each other's order.
  • Association with science: deductive approaches are what people typically associate with scientific investigation.

🔄 How deductive reverses inductive logic

🔄 The reversal of steps

  • Deductive research takes the inductive steps and reverses their order.
  • Instead of starting with data collection, deductive research starts with theory.
  • The flow is opposite: where inductive builds up from observations to theory, deductive tests down from theory to observations.

📊 Direction comparison

| Aspect | Inductive | Deductive |
| --- | --- | --- |
| Starting point | Data collection | Existing theory |
| Direction | Specific → General | General → Specific |
| End goal | Develop theory | Test hypotheses |
| Focus progression | Data → Patterns → Theory | Theory → Hypotheses → Data analysis |

🧪 The deductive research process

🧪 Step 1: Theorize and hypothesize

  • The researcher begins by studying what others have done.
  • They read existing theories about the phenomenon they are studying.
  • From these theories, they develop hypotheses to test.
  • This stage operates at a general level of focus.

📐 Step 2: Analyze data

  • After formulating hypotheses, the researcher collects and analyzes data.
  • The data is used to test the implications of the theory.
  • This is the middle analytical stage of the process.

✅ Step 3: Determine support

  • The final step evaluates whether the hypotheses are supported or not supported by the data.
  • This conclusion operates at a specific level of focus.
  • The outcome either confirms or challenges the original theory.
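The three steps above can be illustrated with a minimal sketch. The theory, hypothesis, and scores below are hypothetical illustrations, not from the excerpt, and the decision rule is deliberately crude (a real study would use a significance test to rule out chance):

```python
from statistics import mean

# Step 1: theorize/hypothesize (general level of focus).
# Assumed theory for illustration: social support buffers stress.
# Hypothesis: participants with strong support report lower stress scores.

# Step 2: analyze data (illustrative scores on a 1-10 stress scale).
strong_support = [3, 4, 2, 5, 3, 4]
weak_support = [6, 7, 5, 8, 6, 7]

# Higher difference means the strong-support group reported less stress.
difference = mean(weak_support) - mean(strong_support)

# Step 3: determine support (specific level of focus).
supported = difference > 0

print(f"mean difference: {difference:.2f}")
print("hypothesis supported" if supported else "hypothesis not supported")
```

The flow mirrors the section: the general theory comes first, and the data serve only to test its specific implication.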

🔬 Association with scientific investigation

🔬 The typical scientific approach

A deductive approach to research is the one that people typically associate with scientific investigation.

  • This approach aligns with common perceptions of how science works.
  • It follows the traditional scientific method of hypothesis testing.
  • The emphasis is on testing theoretical predictions against empirical evidence.

🤝 Complementary relationship

  • While deductive and inductive approaches are quite different, they can be complementary.
  • Don't confuse: these are not mutually exclusive methods; researchers may use both approaches in different phases or projects.
  • Example: A researcher might use inductive methods to discover patterns and build theory, then later use deductive methods to test that theory with new data.
8

A Humanistic Approach to Research

2.1 A Humanistic Approach to Research

🧭 Overview

🧠 One-sentence thesis

Researchers must balance their scientific obligation to uncover beneficial information with their humanistic obligation to minimize all forms of harm to research participants, both during and after the research.

📌 Key points (3–5)

  • Dual obligations: researchers face both a humanistic duty to protect participants and a scientific duty to benefit society.
  • What harm means: harm can be mental, physical, or emotional; it can occur during the research or in the future after it ends.
  • Minimize harm principle: researchers must give substantial thought to potential negative impacts and do everything possible to reduce them.
  • Applies to animals too: pain and suffering of animal participants must be weighed against the overall benefits of the research.
  • Common confusion: ethics is not simply "right vs wrong"—even minimum standards are complex and require regulation through codes and standards from associations, societies, and universities.

🤝 The dual nature of research obligations

🤝 Humanistic obligation

  • Researchers have a duty to care for those who participate in their research.
  • This is a people-centered responsibility that prioritizes participant well-being.
  • The obligation extends beyond the immediate research period—researchers must consider long-term impacts on participants.

🔬 Scientific obligation

  • Researchers also have a duty to uncover information that benefits society.
  • This creates a tension: the pursuit of knowledge must be balanced against participant protection.
  • Example: A study might yield valuable societal insights, but if it causes significant harm to participants, the humanistic obligation takes precedence.

🛡️ Understanding and minimizing harm

🛡️ Types of harm to consider

The excerpt identifies three categories of harm that researchers must address:

| Type of harm | Description |
| --- | --- |
| Mental | Psychological distress, trauma, or cognitive impact |
| Physical | Bodily injury, pain, or physiological effects |
| Emotional | Feelings of distress, anxiety, shame, or other negative emotions |

⏰ Temporal dimension of harm

  • Harm is not limited to the moment of research participation.
  • Researchers must consider:
    • Immediate harm: what happens during the research experience itself (e.g., the questions asked, the experiences participants have).
    • Future harm: what might occur after the research is finished.
  • Example: A participant might feel fine during an interview but later experience emotional distress when reflecting on sensitive topics discussed.

🎯 The researcher's responsibility

Researchers must give substantial thought to the impact their research might have on a participant, and do all they can to minimize negative impacts.

  • This is not a passive requirement—it demands active, thoughtful consideration.
  • Researchers must anticipate potential harms before they occur.
  • The standard is to minimize harm, recognizing that eliminating all risk may be impossible.

🐾 Extension to animal research

🐾 Similar principles apply

  • The humanistic approach extends beyond human participants to animal subjects.
  • Researchers must give "similar attention" to animal welfare as they do to human welfare.

⚖️ Weighing harm against benefits

  • Pain and suffering of animals must be weighed against the overall benefits of the research.
  • This creates an ethical calculation: the potential societal benefit must justify any animal suffering.
  • Don't confuse: this is not permission to harm animals freely—it is a requirement to carefully justify any harm in terms of research benefits.

📋 Why regulation is necessary

📋 Complexity of ethical standards

  • The excerpt emphasizes that ethics in research is more complicated than simply considering whether the research is right or wrong.
  • Even "minimum standards" are described as "less than straightforward."
  • This complexity explains why formal regulation is needed.

🏛️ Role of codes and standards

  • Research is typically regulated by:
    • Associations
    • Societies
    • Universities
  • These organizations outline codes or standards to minimize the risk that research participants are harmed.
  • The goal is to provide clear guidance when ethical decisions are not obvious.

🎓 Institutional context

The excerpt mentions NCEHR (National Council on Ethics in Human Research):

  • A national, non-governmental agency established in 1989 in Canada.
  • Its mandate is to advance the protection and well-being of human research participants.
  • It seeks to encourage and enable "high ethical standards related to the conduct of research involving humans."
9

Research on Human Participants: An Historical Look

2.2 Research on Human Participants: An Historical Look

🧭 Overview

🧠 One-sentence thesis

Historical abuses in human research—from Nazi medical experiments to deceptive social science studies—led to the creation of ethical codes and institutional review boards that now regulate research to protect participants.

📌 Key points (3–5)

  • Historical turning point: Research on humans was largely unregulated until after World War II, when Nazi medical experiments on concentration camp inmates led to the Nuremberg Code in 1949.
  • Social science violations: Psychologists and sociologists also conducted unethical research, including Milgram's obedience experiments (deception causing distress) and Humphreys' tearoom trade study (covert observation and deception).
  • Regulatory response: Public concern over these studies led to the 1974 US National Research Act, the Belmont Report, and the requirement for institutional review boards (IRBs).
  • Common confusion: Ethical violations are not limited to medical research—social scientists have also caused harm through deception, lack of consent, and breaches of confidentiality.
  • Ongoing tensions: Researchers face dilemmas between protecting participant confidentiality and legal obligations, as illustrated by the Scott DeMuth case.

⚖️ The Nuremberg Code and post-WWII regulation

⚖️ Nazi medical experiments and the first trial

  • In 1946, twenty-three war criminals from Germany's Third Reich faced trial for crimes against humanity.
  • Twenty of the defendants were doctors who conducted medical experiments on concentration camp inmates.
  • Inmates were tortured and murdered during these experiments.
  • Sixteen of the twenty-three were found guilty; sentences ranged from execution to 10 years' imprisonment.

📜 Creation of the Nuremberg Code

The Nuremberg Code (1949): a 10-point set of research principles designed to guide doctors and scientists who conduct research on human participants.

  • Created following the Nuremberg trials in Germany.
  • Today it guides medical and social scientific research on human participants.
  • The excerpt emphasizes that research on humans was not always regulated as it is today—history is "rife with disturbing human experiments" that continued without much intervention until after World War II.

🧪 Unethical social science research

🧪 Stanley Milgram's obedience experiments (1960s)

  • What happened: Psychologist Stanley Milgram conducted experiments to understand obedience to authority.
  • The deception: Participants were tricked into believing they were administering electric shocks to other participants; the shocks were not real.
  • The harm: Some participants experienced extreme emotional distress after the experiment.
  • Why it was traumatic: Realizing that one is willing to administer painful shocks to another human being just because an authority figure ordered it could be traumatizing—even after learning the shocks were not real.

🚻 Laud Humphreys' tearoom trade study (1970)

  • Research goal: Sociology graduate student Laud Humphreys wanted to understand men who engaged in anonymous sexual encounters in public restrooms (the "tearoom trade").
  • Method: Humphreys served as a "watch queen" (the person who keeps an eye out for police) and watched sexual encounters in park washrooms.
  • Ethical violations:
    • Did not identify himself as a researcher to participants.
    • Watched participants for several months without their knowledge.
    • Jotted down license plate numbers without participants' knowledge.
    • Used license plate numbers to obtain names and home addresses.
    • Visited participants in their homes disguised as a public health researcher and interviewed them about their lives and health.

🔍 What Humphreys learned vs. the controversy

  • Findings: The research dispelled myths and stereotypes—over half of participants were married to women and many did not identify as gay or bisexual.
  • Controversy: The work created controversy at his university (the chancellor tried to have his degree revoked), among sociologists, and among the public, raising concerns about the purpose and conduct of sociological research.

🤔 Humphreys' later reflections

  • In 2008, years after his death, his book was reprinted with a retrospective on the ethical implications.
  • His defense: Tearoom observations were ethical because they occurred in public places.
  • What he would change: He would not trace license numbers and interview participants under false pretenses; instead, he would spend more time in the field and cultivate a pool of informants who knew he was a researcher and could fully consent.
  • His conclusion: "There is no reason to believe that any research participants have suffered because of my efforts, or that the resultant demystification of impersonal sex has harmed society."

🏛️ Other landmark cases

The excerpt mentions but does not detail:

  • Stanford Prison Experiment (1970s)
  • Russell Ogden and Simon Fraser University case (1990s, British Columbia, Canada)

🔐 Confidentiality and legal obligations: The Scott DeMuth case

🔐 The case background

  • Who: Scott DeMuth, a graduate student at the University of Minnesota.
  • Research topic: Radical animal rights and environmental groups.
  • The incident: In 2004, the University of Iowa's animal research laboratory was vandalized and rodents were removed; the Animal Liberation Front claimed responsibility. Professors and graduate students lost years of work.

⚖️ The legal conflict

  • DeMuth was ordered to appear before a grand jury hearing on the vandalism because it was believed he had knowledge of who might have been involved.
  • He refused to reveal what he knew, maintaining that his knowledge was based on pledges of confidentiality to participants.
  • Consequences: He was briefly jailed, then charged with conspiracy to commit "animal enterprise terrorism" and "damage to the animal enterprise."

🛡️ Academic freedom and confidentiality

  • The dilemma: Researchers promise confidentiality to participants. If DeMuth had revealed what he knew, he would have breached his promise and lost participants' trust.
  • Professional obligation: Researchers have an obligation to protect confidential information, including participants' identities (unless participants agree otherwise).
  • The American Sociological Association's Code of Ethics (2009) states:

"Sociologists have an obligation to ensure that confidential information is protected. They do so to ensure the integrity of research and the open communication with research participants and to protect sensitive information obtained in research, teaching, practice, and service."

🤔 Questions raised by the case

The excerpt asks readers to consider:

  • Do you agree or disagree with DeMuth's position?
  • Does a promise of confidentiality take precedence when the law has been broken?
  • What are the implications for researchers who promise confidentiality and then reveal their sources—willingly, accidentally, or because they believe they have no choice?

Don't confuse: This is not just an ethical issue but also a legal one—researchers face potential legal consequences for protecting participant confidentiality.

📋 Regulatory frameworks established

📋 The National Research Act (1974)

  • Enacted by the US Congress in response to increasing public awareness and concern about research on human participants.
  • Created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.

📄 The Belmont Report

The Belmont Report: a document outlining basic ethical principles for research on human participants, produced by the National Commission (1979).

🏛️ Institutional Review Boards (IRBs)

  • The National Research Act required all institutions receiving federal support to establish IRBs to protect the rights of human research participants.
  • Since then, many organizations beyond those receiving federal support have also established review boards to evaluate the ethics of their research.
  • Purpose: Ensure that the rights and welfare of human and non-human animal research participants will be protected.

🧑‍🤝‍🧑 IRB composition and function

  • IRBs consist of members from a variety of disciplines (sociology, economics, education, social work, communications, etc.).
  • Most IRBs include representatives from the community (e.g., from nearby prisons, hospitals, or treatment centers).
  • Why diversity matters: The diversity of membership helps ensure that the many and complex ethical issues that may arise from research will be considered fully and by a knowledgeable and experienced panel.

📝 IRB review process

  • Investigators conducting research on human participants must submit proposals outlining their research plans to IRBs for review and approval before beginning research.
  • Even students conducting research on human participants must have their work reviewed and approved by the IRB before beginning (though some campuses make exceptions for classroom projects that will not be shared outside the classroom).

🔄 Context and ongoing importance

🔄 Why history matters

  • The excerpt emphasizes that understanding the historical context of research decisions is important.
  • These decisions impact not only the research itself but also the mental and physical health of participants.
  • Research on humans has not always been regulated as it is today—the current system emerged from a history of abuses.

🌍 Broader implications

  • As the previous examples demonstrate, ethical issues in research are "more complicated than simply considering whether the research is right or wrong."
  • All researchers are expected to adhere to minimum standards, but even these can be "less than straightforward."
  • Research is typically regulated by codes or standards outlined by associations, societies, and universities to minimize the risk that research participants are harmed.

Example: A researcher studying a sensitive topic might promise confidentiality to participants. If later subpoenaed, the researcher faces a conflict between legal obligations and ethical commitments to participants—a situation with no simple "right or wrong" answer.

10

Institutional Research Review Boards (IRBs)

2.3 Institutional Research Review Boards (IRBs)

🧭 Overview

🧠 One-sentence thesis

IRBs protect research participants by reviewing and approving research proposals, but their biomedical focus sometimes creates challenges for open-ended sociological research.

📌 Key points (3–5)

  • What IRBs do: review and approve research proposals to ensure the rights and welfare of human and non-human animal participants are protected at federally funded institutions.
  • Who sits on IRBs: members from multiple disciplines plus community representatives, ensuring diverse perspectives on ethical issues.
  • Mandatory review: all investigators, including students, must get IRB approval before starting research on human participants.
  • Common confusion: IRBs are not opposed to ethical research, but their biomedical/experimental expertise can clash with open-ended qualitative methods common in sociology.
  • The tension: IRBs often require detailed advance plans (who, where, when, what questions), which may be impossible for long-term, flexible studies like participant observation.

🛡️ What IRBs are and why they exist

🛡️ Origin and mandate

  • The National Research Act (1974) required all federally funded institutions to establish IRBs.
  • Many organizations beyond those receiving federal support have also created review boards.
  • Purpose: ensure the rights and welfare of human and non-human animal research participants are protected.

👥 Composition and diversity

IRBs typically consist of members from a variety of disciplines, such as sociology, economics, education, social work, and communications.

  • Most IRBs also include community representatives (e.g., from nearby prisons, hospitals, or treatment centers).
  • Why diversity matters: the many and complex ethical issues from research are considered fully by a knowledgeable and experienced panel.
  • Example: a university IRB near a prison might include a prison representative to evaluate research involving incarcerated people.

📋 How IRBs work in practice

📋 The approval process

  • Investigators must submit proposals outlining their research plans to IRBs for review and approval before beginning research.
  • This requirement applies to all researchers, including students conducting research on human participants.
  • Some campuses make exceptions for classroom projects that will not be shared outside the classroom.

🔍 What IRBs want to know

The excerpt lists the kinds of details IRBs often request in advance:

  • Exactly who will be observed
  • Where and when observations will occur
  • How long the study will last
  • Whether and how participants will be approached
  • Exactly what questions will be asked
  • What predictions the researcher has for findings

⚠️ Tensions between IRBs and sociological research

⚠️ Why some researchers find IRBs problematic

  • Not a rejection of ethics: researchers do want to conduct ethical research.
  • The core issue: IRBs are most knowledgeable in reviewing biomedical and experimental research, neither of which is particularly common in sociology.
  • Much sociological research, especially qualitative research, is open-ended in nature—a fact that can be problematic for IRBs.

🔄 The open-ended research challenge

  • Example: a year-long participant observation within an activist group of 200-plus members.
    • Providing the level of detail IRBs want would be "extraordinarily frustrating" in the best case.
    • Most likely it would prove to be impossible.
  • Don't confuse: IRBs do not intend to have researchers avoid controversial topics or sound data-collection techniques, but unfortunately, that is sometimes the result.

🔧 The proposed solution

| Problem | Not the solution | The solution |
| --- | --- | --- |
| IRBs sometimes block valid sociological methods | Do away with review boards | Educate IRB members about the variety of social scientific research methods and topics |
  • Review boards serve a necessary and important function.
  • The goal is to help IRB members understand the range of methods and topics covered by sociologists and other social scientists.
11

Guiding Ethical Principles

2.4 Guiding Ethical Principles

🧭 Overview

🧠 One-sentence thesis

A common set of ethical principles—ranging from respect for human dignity and informed consent to balancing harms and benefits—guides researchers worldwide in protecting human participants across all disciplines.

📌 Key points (3–5)

  • Core framework: Multiple institutions have developed guiding ethical principles that represent shared standards, values, and aspirations of the global research community.
  • Foundational principle: Respect for human dignity is the foremost principle, protecting bodily, psychological, and cultural integrity.
  • Special protections: Vulnerable people (children, institutionalized individuals) require heightened ethical obligations and special procedures.
  • Balancing act: Research must balance harms and benefits—harms should not outweigh anticipated benefits, and participation must be essential to achieving important objectives.
  • Common confusion: Justice means both protecting segments from unfair harm and ensuring no segment is excluded from research benefits—it is bidirectional fairness, not just harm prevention.

🌍 The global ethical framework

🌍 Origins and scope

  • The principles come from Canadian research councils (health, natural sciences, engineering, social sciences, and humanities).
  • Despite their Canadian origin, these principles are used by researchers from many disciplines around the world.
  • They represent a common set of ethical standards, values, and aspirations of the global research community.

🎯 Purpose

  • These principles guide research undertaken with human participants.
  • They complement institutional review boards (IRBs) in protecting research participants.
  • They apply across disciplines, not limited to biomedical or experimental research.

🛡️ Core respect principles

🛡️ Respect for human dignity

Respect for human dignity: the foremost principle of modern research ethics, aspiring to protect people's bodily and psychological integrity, including cultural integrity.

  • This is explicitly identified as the most important principle.
  • Protection extends to three dimensions:
    • Bodily integrity
    • Psychological integrity
    • Cultural integrity
  • Example: A researcher studying a cultural community must protect not only physical safety and mental well-being but also the community's cultural practices and identity.

✍️ Respect for free and informed consent

Respect for free and informed consent: individuals are presumed to have the right to make their own free and informed decisions.

  • Two components:
    1. Free decision: participants decide freely to participate (no coercion).
    2. Informed decision: participants have been fully informed of the research.
  • Researcher obligation: ensure participants have decided freely and give their informed consent.
  • Don't confuse: "informed" means fully informed of the research details, not just aware that research is happening.

🔒 Respect for privacy and confidentiality

Standards of privacy and confidentiality are considered fundamental to human dignity, protecting access to, control over, and dissemination of personal information.

  • Privacy and confidentiality are not separate concerns—they are fundamental to human dignity.
  • Researchers must value three related rights:
    • Privacy
    • Confidentiality
    • Anonymity
  • These standards protect how personal information is accessed, controlled, and shared.

👥 Special protections for vulnerable populations

👥 Respect for vulnerable people

Researchers must maintain high ethical obligations toward vulnerable people, such as those with diminished competence and/or decision-making capacity.

  • Who qualifies as vulnerable:
    • Children
    • Institutionalized people
    • Others with diminished competence and/or decision-making capacity, who are entitled to special protection
  • Extended obligations include:
    • Human dignity
    • Caring
    • Solidarity
    • Fairness
    • Special protection against abuse, exploitation, or discrimination
  • Action required: The researcher must develop a special set of procedures to protect vulnerable people.
  • Example: A study involving children in an institutional setting requires not only standard consent procedures but also additional safeguards tailored to their diminished decision-making capacity.

⚖️ Justice and harm-benefit balance

⚖️ Respect for justice and inclusiveness

Justice is associated with fairness and equity, and is concerned with the fair distribution of benefits and burdens of research.

  • Justice has two directions:
| Direction | What it means | Implication |
| --- | --- | --- |
| Protecting from harm | No segment should be unfairly burdened by harms | Avoid exploiting or overusing certain groups |
| Ensuring access to benefits | No segment should be neglected or discriminated against regarding benefits | Include diverse populations in research benefits |
  • Don't confuse: Justice is not only about preventing harm to disadvantaged groups; it also requires ensuring they are not excluded from the benefits of research outcomes.

⚖️ Balance harms and benefits

Modern research requires that the harms of research should not outweigh the anticipated benefits.

  • This is a requirement, not a suggestion.
  • The comparison is between harms and anticipated benefits (benefits may not be certain but must be reasonably expected).
  • Example: A study with moderate psychological discomfort might be acceptable if it is expected to produce significant societal knowledge, but not if it offers only trivial insights.

🛡️ Minimizing harm

Researchers have a duty to avoid, prevent, or minimize harm to others.

  • Two requirements:
    1. Participants must not be subjected to unnecessary risk of harm.
    2. Their participation must be essential to achieving scientifically and societally important objectives that could not otherwise be achieved.
  • The second requirement means: if the objectives can be achieved without human participants, using them is unethical.
  • Example: If archival data can answer the research question, recruiting participants who might experience distress is not justified.

📈 Maximizing benefit

Researchers have a duty to maximize net benefits for the research participants, individuals, and society.

  • Benefits are directed at three levels:
    • Research participants themselves
    • Individuals (beyond participants)
    • Society as a whole
  • In most research, this means the results benefit society and the advancement of knowledge.
  • Don't confuse: "Maximizing benefit" does not mean only direct benefit to participants; societal and knowledge benefits also count.
12

A Final Word about the Protection of Research Participants

2.5 A Final Word about the Protection of Research Participants

🧭 Overview

🧠 One-sentence thesis

Protecting participant identities through anonymity or confidentiality is a commonly misunderstood but essential aspect of ethical research, with anonymity being more stringent but not always feasible for all sociological methods.

📌 Key points (3–5)

  • Two forms of identity protection: anonymity (researcher cannot link data to identities) vs. confidentiality (researcher knows identities but promises not to disclose publicly).
  • Anonymity is more stringent: even the researcher cannot connect participants to their data.
  • Method constraints: some sociological data collection methods (participant observation, face-to-face interviews) make anonymity impossible because the researcher must know participants' identities.
  • Common confusion: anonymity vs. confidentiality—anonymity means no one (including the researcher) can link data to identities; confidentiality means only the researcher knows but promises not to share.
  • Special challenges: protecting identities becomes especially difficult when researching stigmatized groups or illegal behaviors.

🔐 Two levels of identity protection

🔐 Anonymity: the stricter standard

Anonymity: a protection level where not even the researcher is able to link participants' data with their identities.

  • This is the highest level of identity protection available.
  • No identifying information is collected or retained that could connect a specific person to their responses.
  • Example: An anonymous online survey with no login requirement and no IP address tracking—the researcher receives responses but has no way to know who submitted them.

🤝 Confidentiality: the more common promise

Confidentiality: a protection level where some identifying information on participants is known and may be kept, but only the researcher can link participants with their data, and he or she promises not to do so publicly.

  • The researcher knows who said what but commits to keeping that connection private.
  • Identifying information exists but is protected from public disclosure.
  • Example: A researcher conducts face-to-face interviews, keeps a master list linking names to interview codes, but stores it securely and never reveals which participant said what in publications.
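The master-list practice in the example can be sketched in a few lines. The names, codes, and code format below are hypothetical illustrations (and this is a sketch of the bookkeeping idea, not a security recommendation):

```python
# Raw interview records as the researcher collects them (hypothetical names).
interviews = [
    {"name": "Alice", "transcript": "..."},
    {"name": "Bob", "transcript": "..."},
]

# The master list linking identities to codes is kept separately and
# securely, and is never published.
master_list = {}

# Records safe to analyze and quote in publications: codes only, no names.
public_records = []

for i, record in enumerate(interviews, start=1):
    code = f"P{i:03d}"  # illustrative participant-code format
    master_list[code] = record["name"]
    public_records.append({"participant": code, "transcript": record["transcript"]})

print(public_records)
```

Only the researcher holds `master_list`, so only the researcher can link participants with their data — which is exactly the confidentiality promise, as opposed to anonymity, where no such list exists at all.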

🔬 Why sociologists often cannot promise anonymity

🔬 Method-driven limitations

The excerpt identifies specific data collection modes that require researchers to know participants' identities:

| Method | Why anonymity is impossible |
| --- | --- |
| Participant observation | Researcher directly observes and interacts with participants |
| Face-to-face interviewing | Researcher meets participants in person |
  • These methods are fundamental to sociological research but inherently reveal participant identities to the researcher.
  • In these cases, researchers should promise confidentiality instead.
  • Don't confuse: the inability to promise anonymity doesn't mean the research is unethical—confidentiality is still a valid and acceptable protection.

🛡️ When confidentiality should be offered

  • When the research method requires knowing participants' identities, confidentiality becomes the appropriate standard.
  • The researcher must still protect participants by not publicly linking them to their data.
  • The key obligation: the researcher is the only person who can make the connection, and they promise not to reveal it.

⚠️ Special challenges in identity protection

⚠️ Stigmatized groups and illegal behaviors

The excerpt notes that protecting identities "is not always a simple prospect, especially for those conducting research on stigmatized groups or illegal behaviors."

  • These contexts create heightened risks for participants if their identities are revealed.
  • Stigmatized groups: participants may face social harm, discrimination, or other negative consequences if their participation becomes known.
  • Illegal behaviors: participants could face legal consequences if their identities are disclosed.
  • Researchers must take extra precautions in these situations, even when promising confidentiality.

🧩 A commonly misunderstood aspect

  • The excerpt emphasizes that identity protection "is one of the most commonly misunderstood aspects of research."
  • Researchers must clearly understand the difference between anonymity and confidentiality to make accurate promises to participants.
  • Participants need to understand what level of protection they are receiving to make truly informed consent decisions.
  • Example of confusion: A researcher promises "anonymity" but then conducts in-person interviews—this is impossible and misleading; they should promise confidentiality instead.
13

Normative Versus Empirical Statements

3.1 Normative Versus Empirical Statements

🧭 Overview

🧠 One-sentence thesis

Sociologists focus on empirical questions—those answerable by real-world evidence—rather than normative judgments, because empirical statements are fact-based and informative while normative statements are judgmental.

📌 Key points (3–5)

  • Core distinction: normative statements are judgmental; empirical statements are informative and facts-based.
  • What sociologists prioritize: empirical questions that can be answered by real experience in the real world.
  • How they relate: normative statements can underlie empirical statements, but research focuses on the empirical layer.
  • Common confusion: a statement may sound factual but still be normative if it contains judgment words like "best" without measurable criteria.

🔍 The two types of statements

🔍 Normative statements

Normative statements are judgmental.

  • They express opinions, values, or evaluations about what is good, bad, best, or worst.
  • They cannot be directly tested or verified by observation alone.
  • Example from the excerpt: "Canada has one of the best science programs in the world."
    • The word "best" is a judgment; different people may define "best" differently.

📊 Empirical statements

Empirical statements are informative and facts-based.

  • They describe observable, measurable facts about the real world.
  • They can be verified or falsified through evidence and experience.
  • Example from the excerpt: "In 2015, Canada ranked 4th overall in science education performance of 15-year-old high school students in a study conducted by the Organisation for Economic Co-operation and Development (OECD, 2015)."
    • This provides specific data: a year, a ranking, an age group, and a source.
    • Anyone can check the OECD study to verify this claim.

🔗 How normative and empirical relate

🔗 Normative can underlie empirical

  • The excerpt notes that "normative statements can underlie an empirical statement."
  • This means a researcher might start with a normative belief (e.g., "Canada has a great education system") but then translate it into an empirical question (e.g., "How does Canada rank compared to other countries?").
  • The research itself, however, focuses on the empirical question.

⚠️ Don't confuse the two

  • A statement that sounds factual may still be normative if it includes judgment language without clear criteria.
  • Look for words like "best," "should," "ought," "good," or "bad"—these often signal normative content.
  • Empirical statements use measurable terms: rankings, percentages, counts, dates, and specific observations.

🎯 Why this matters for research

🎯 Sociologists answer empirical questions

  • The excerpt emphasizes that "sociologists focus on answering empirical questions."
  • Empirical questions are "those that can be answered by real experience in the real world."
  • This focus ensures that research findings can be observed, tested, and verified by others.

🎯 Building researchable questions

  • To develop a good research question, translate any normative interest into an empirical form.
  • Instead of asking "Is this policy good?" ask "What are the measurable outcomes of this policy?"
  • This shift allows the researcher to collect data and draw evidence-based conclusions.
14

Exploration, Description, Explanation

3.2 Exploration, Description, Explanation

🧭 Overview

🧠 One-sentence thesis

Research can be categorized into three types—exploratory (to understand new phenomena), descriptive (to define patterns), and explanatory (to identify causes)—and each serves distinct purposes in answering different kinds of questions.

📌 Key points (3–5)

  • Three research types: exploratory research investigates unfamiliar topics, descriptive research documents patterns or characteristics, and explanatory research seeks to understand causes and effects.
  • When to use each: exploratory is often a first step when little is known; descriptive tells "what" is happening; explanatory answers "why" questions.
  • Key variable definition: exploratory research begins before key variables are defined, descriptive research works with defined key variables, and explanatory research examines relationships between defined variables.
  • Common confusion: don't confuse descriptive (documenting what exists) with explanatory (uncovering why it exists)—they answer fundamentally different questions.
  • Practical application: market researchers use descriptive research to understand consumer opinions; explanatory research would dig into the reasons behind those opinions.

🔍 Three types of research

🔬 Exploratory research

Exploratory research: investigates a phenomenon that is not well understood, often as a necessary first step to satisfy researcher curiosity and design larger subsequent studies.

  • Purpose: to explore unfamiliar territory when little is known about a topic.
  • When it's used: as a preliminary step before more structured research.
  • Why it matters: helps researchers understand the phenomenon and participants well enough to design better follow-up studies.
  • Example: "Would people be interested in our new product idea?" or "How important is business process reengineering as a strategy?"

📊 Descriptive research

Descriptive research: aims to describe or define a particular phenomenon, often documenting patterns or characteristics.

  • Purpose: to document "what" exists—patterns, trends, or characteristics.
  • Common uses: market research to understand consumer opinions; documenting trends over time.
  • Everyday relevance: the excerpt notes that people rely on descriptive research findings without realizing it.
  • Example: "What have been the trends in organizational downsizing over the past ten years?" or "Has the average merger rate for financial institutions increased in the past decade?"

🔎 Explanatory research

Explanatory research: seeks to answer "why" questions by identifying causes and effects of phenomena.

  • Purpose: to understand causation—why something happens and what effects it produces.
  • Key focus: examining relationships between variables (e.g., family history, activities, social connections).
  • Don't confuse: explanatory research goes beyond describing patterns (descriptive) to uncovering the reasons behind them.
  • Example: studying why college students become addicted to electronic gadgets—does it relate to family history, hobbies, or peer groups? Or "Which of two training programs is more effective for reducing labour turnover?"

📋 Comparing the three approaches

📋 Key differences

Aspect | Exploratory | Descriptive | Explanatory
Degree of problem definition | Key variables not defined | Key variables defined | Key variables and key relationships defined
Main question type | "What's going on here?" | "What are the patterns?" | "Why does this happen?"
Example question | "The quality of service is declining and we don't know why" | "Did last year's product recall impact our share price?" | "Can I predict energy stock value from dividends and growth rates?"
Typical use | First step; understanding new phenomena | Documenting trends or characteristics | Testing causal relationships

🎯 Choosing the right type

  • The research type depends on what you need to know and how much is already understood about the topic.
  • Exploratory → Descriptive → Explanatory often represents a natural progression as knowledge accumulates.
  • Each type has distinct applications: exploratory for new territory, descriptive for documentation, explanatory for understanding mechanisms.

❓ Developing research questions

✅ Practical requirements

A good research question must be:

  • Something you are genuinely interested in
  • Feasible with available resources (money, technology, assistance)
  • Answerable with accessible data (human, animal, or numerical)
  • Operationalized appropriately
  • Focused on a specific objective (explaining or describing something)

🎯 Quality characteristics

Strong research questions have these features:

  • Written in question form
  • Well-focused (not too broad or too narrow)
  • Cannot be answered with simple yes/no
  • Has multiple plausible answers
  • Considers relationships among multiple concepts

🔧 Improving weak questions

The excerpt provides an example of refinement:

  • Too narrow: "How many paramedics were registered in the province of British Columbia in 2017?"
    • This can be answered with a single number; it's a fact-finding question, not a research question.
  • Improved: "What factors lead individuals to choose paramedicine as a profession in British Columbia?"
    • This explores relationships and has multiple possible answers; it invites investigation rather than simple lookup.

🧭 Question guides method

  • Your research question will guide whether quantitative, qualitative, mixed methods, or other approaches are most appropriate.
  • The question type (exploratory, descriptive, or explanatory) shapes the entire research design.
15

Developing a Researchable Research Question

3.3 Developing a Researchable Research Question

🧭 Overview

🧠 One-sentence thesis

A good research question must be focused, answerable through data collection, consider relationships among multiple concepts, and cannot be resolved with a simple yes or no.

📌 Key points (3–5)

  • Practical requirements: A good research question must be interesting to you, feasible with available resources, provide data access, be operationalized appropriately, and have a specific objective.
  • Formal characteristics: Written as a question, well-focused, not yes/no answerable, has multiple plausible answers, and considers relationships among multiple concepts.
  • Common confusions: Too narrow (answerable with a simple statistic), too broad (methodology becomes difficult), too objective (just factual), or too simple (answerable with online search) all make poor research questions.
  • Relationship to methods: Your research question guides whether quantitative, qualitative, mixed methods, or other approaches are most appropriate.
  • Not the same as survey questions: Research questions (covered here) are distinct from survey questions (covered in Chapter 8).

🎯 Practical requirements for research questions

💡 Personal and resource feasibility

A good research question must meet five practical criteria:

  1. Personal interest: You must be interested in the topic
  2. Resource availability: You have the money, technology, and assistance needed
  3. Data access: You can reach the human, animal, or numerical/file data required
  4. Operationalization: The question is appropriately operationalized
  5. Specific objective: The question has a clear goal, whether explaining or describing something

Why these matter: Without these practical elements, even a well-formed question becomes impossible to answer in practice.

Example: A question requiring expensive laboratory equipment you cannot access fails the resource test, even if it is otherwise well-designed.

🔍 Don't confuse

  • Interest vs. feasibility: A fascinating question that requires inaccessible data or unavailable resources is not a good research question for your project.

📐 Formal characteristics of research questions

❓ Question format and focus

A good research question is generally written in the form of a question and is well-focused.

  • Question form: Phrased as an interrogative, not a statement
  • Well-focused: Neither too narrow nor too broad; targets a specific, manageable scope

🚫 Not yes/no answerable

A good research question cannot be answered with a simple yes or no.

  • Yes/no questions do not allow for investigation, analysis, or argument development
  • They close off exploration rather than opening it up

Example: "Do buyers prefer our product in a new package?" (from the excerpt's non-researchable examples) can only be answered yes or no, limiting research depth.

🔀 Multiple plausible answers

A good research question should have more than one plausible answer.

  • If only one answer is possible, there is no room for investigation or discovery
  • Multiple plausible answers create space for data collection, analysis, and argument formation

🔗 Relationships among concepts

A good research question considers relationships amongst multiple concepts.

  • Good questions explore how concepts connect, influence, or relate to one another
  • Single-concept questions tend to be too simple or factual

⚠️ Common problems and how to fix them

📏 Too narrow

Problem | Improved version
Too narrow: "How many paramedics were registered in the province of British Columbia in 2017?" | Less narrow: "What factors lead individuals to choose paramedicine as a profession in British Columbia?"

Why it's problematic: Can be answered with a simple statistic; no opportunity for analysis or argument.

How to fix: Broaden to explore factors, relationships, or processes rather than a single data point.

🌊 Unfocused and too broad

Problem | Improved version
Unfocused and too broad: "What are the effects of Post-Traumatic Stress Disorder (PTSD) on firefighters in Ontario?" | More focused: "What are the social effects of PTSD on families of firefighters in Ontario?"

Why it's problematic: So broad that research methodology would be very difficult; too broad to be discussed in a typical research paper.

How to fix: Narrow the scope by specifying which type of effects (social), which population (families), and maintaining geographic focus (Ontario).

📊 Too objective

Problem | Improved version
Too objective: "How much money does the average downtown Vancouver store spend on security guards?" | More subjective: "What is the relationship between security spending and product loss through theft at downtown Vancouver stores?"

Why it's problematic: Allows data collection but only produces factual information; does not lend itself to creating a valid argument.

How to fix: Reframe to explore relationships between variables rather than collecting a single factual measure.

🔍 Too simple

Problem | Improved version
Too simple: "What are municipal governments doing to address the problem of sexism in policing?" | More complex: "What is the relationship between the 2017-2018 publicized incidents of sexism in the RCMP and the number of females applying for entry to police departments in St. John's, Newfoundland?"

Why it's problematic:

  • Can be obtained without collecting unique data
  • Answerable with an online search
  • Does not provide opportunity for analysis
  • Uses leading language ("problem" assumes sexism exists)

How to fix:

  • Make the question more complex by specifying time periods, locations, and relationships
  • Require both investigation and evaluation
  • Lead to more valuable and specific research

🚨 Don't confuse

  • Research questions vs. survey questions: The excerpt explicitly notes these are different; survey questions are covered in Chapter 8, while research questions guide the overall study.

🧪 Connection to research methods

🛠️ How questions guide methods

Generally speaking, your research question will guide whether your research project is best approached with quantitative, qualitative, or mixed methods, or other approaches.

  • The structure and focus of your research question naturally point toward certain methodological approaches
  • A well-formed question makes the appropriate method clearer
  • The question comes first; the method follows from it

Example: A question about relationships between measurable variables (like security spending and theft) suggests quantitative methods, while a question about social effects on families suggests qualitative or mixed approaches.

16

Hypotheses

3.4 Hypotheses

🧭 Overview

🧠 One-sentence thesis

Hypotheses are statements describing a researcher's expectations about relationships between variables, and they are typically tested in deductive research to see whether theories accurately reflect real-world phenomena.

📌 Key points (3–5)

  • What a hypothesis is: a statement (sometimes causal) describing expected findings, often about the relationship between two variables.
  • How hypotheses relate to theory: researchers following a deductive approach draw hypotheses from theories and test whether observations match predictions.
  • Directional vs null hypotheses: directional hypotheses predict how one variable affects another (e.g., as age increases, support decreases); null hypotheses predict no relationship.
  • Common confusion: researchers say hypotheses are "supported" or "not supported," never "proven," because absolute certainty is impossible and new evidence may emerge.
  • Quantitative vs qualitative approaches: quantitative research tests hypotheses empirically; qualitative research develops theory and may generate hypotheses for later testing.

🔬 What hypotheses are and how they work

🔬 Definition and purpose

A hypothesis is a statement, sometimes but not always causal, describing a researcher's expectations regarding anticipated findings.

  • Hypotheses often (but not always) describe the expected relationship between two variables.
  • They guide research that aims to test specific predictions rather than explore open-ended questions.
  • To develop a hypothesis, you need to understand independent and dependent variables, and units of observation and analysis.

🧩 Where hypotheses come from

  • Hypotheses are typically drawn from theories.
  • They describe how an independent variable is expected to affect some dependent variable(s).
  • Researchers using a deductive approach hypothesize what they expect to find based on the theory framing their study.
  • If the theory accurately reflects the phenomenon, the hypotheses about real-world observations should bear out.

🔀 Types of hypotheses

➡️ Directional hypotheses

  • Sometimes researchers hypothesize that a relationship will take a specific direction.
  • An increase or decrease in one area might cause an increase or decrease in another.
  • Example: "Age is negatively related to support for marijuana legalization" means as people get older, their likelihood of supporting legalization decreases—age moves up, support moves down.
  • Tip: if writing hypotheses feels tricky, draw them out to visualize how the two variables move.

⭕ Null hypotheses

A null hypothesis predicts no relationship between the variables being studied.

  • If a researcher rejects the null hypothesis, they are saying the variables are somehow related to one another.
  • The null hypothesis serves as a baseline for comparison.
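The directional and null hypotheses above can be made concrete with a toy computation. The sketch below uses invented age and support scores (not real data) and computes a Pearson correlation by hand: a clearly negative coefficient is evidence supporting the directional hypothesis, while a coefficient near zero would fail to reject the null.

```python
# Hypothetical illustration: a directional hypothesis vs. the null.
# H1 (directional): age is negatively related to support for legalization.
# H0 (null): age and support are unrelated (correlation is zero).
# All data below are invented solely to show the mechanics.

from math import sqrt

ages = [22, 29, 35, 41, 48, 54, 60, 67]   # independent variable
support = [9, 8, 8, 6, 5, 5, 3, 2]        # dependent variable (0-10 scale)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(ages, support)
# A clearly negative r supports (never "proves") the directional
# hypothesis; r near 0 would fail to reject the null hypothesis.
print(f"r = {r:.2f}")
print("Directional hypothesis supported" if r < 0 else "Null not rejected")
```

In practice a researcher would also compute a significance test before rejecting the null; the sketch only illustrates the direction-of-relationship logic.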

🗣️ How researchers talk about findings

🗣️ "Supported," not "proven"

  • You will almost never hear researchers say they have "proven" their hypotheses.
  • Saying "proven" implies absolute certainty with no chance the hypothesis would fail under any conditions.
  • Instead, researchers say hypotheses have been "supported" (or not).
  • This cautious language allows for the possibility that new evidence or new ways of examining a relationship will be discovered.

Don't confuse: "supported" ≠ "proven." Research findings are always provisional and open to revision.

🔍 Quantitative vs qualitative approaches to hypotheses

🔍 Different goals and methods

Approach | Goal | Role of hypotheses
Quantitative | Empirically test hypotheses | Hypotheses are generated from theory and tested against observations
Qualitative | Develop or construct theory | Researcher may begin with vague expectations but does not test them; instead builds theory from which hypotheses can later be drawn

🔄 How the two approaches complement each other

  • Qualitative researchers may develop theories from which hypotheses can be drawn.
  • Quantitative researchers may then test those hypotheses.
  • Both types of research are crucial to understanding the social world.
  • Both play an important role in hypothesis development and testing.

Don't confuse: qualitative research is not "hypothesis-free"—it generates theory and expectations that can become hypotheses for quantitative testing.

17

Quantitative, Qualitative, & Mixed Methods Research Approaches

3.5 Quantitative, Qualitative, & Mixed Methods Research Approaches

🧭 Overview

🧠 One-sentence thesis

Quantitative and qualitative research represent different philosophical approaches to discovering or constructing knowledge, and increasingly researchers combine both through mixed methods to gain more complete understanding of research problems.

📌 Key points (3–5)

  • Quantitative approach: assumes one reality exists to be discovered (realism); uses aggregate data, prediction, and observable cause-effect relationships.
  • Qualitative approach: assumes multiple realities are constructed based on perspective (constructionism); focuses on understanding meaning, thoughts, feelings, and experiences.
  • Common confusion: adding a few open-ended questions to a survey is not mixed methods—true mixed methods means conducting both quantitative methods (e.g., surveys) and qualitative methods (e.g., interviews, focus groups).
  • Mixed methods rationale: combines strengths of both approaches through triangulation to address real-life contextual problems requiring multiple perspectives.
  • Philosophical tension: the two approaches rest on fundamentally different beliefs about knowledge, leading to debates about whether they can or should be combined.

🔬 Quantitative research philosophy and methods

🎯 Core philosophical stance: realism

Realism: the belief that there is one reality or truth that simply requires discovering.

  • Arises from natural sciences (chemistry, biology).
  • The researcher's job is to ask the "right" questions to uncover this single truth.
  • Assumes an objective, neutral outsider perspective is possible.

📊 Key characteristics of quantitative methods

  • Observable focus: favours observable causes and effects; outcome-oriented.
  • Aggregate data: uses patterns across many cases to reveal "truth" about the phenomenon.
  • Prediction as understanding: true understanding is determined by the ability to predict the phenomenon.
  • Hypothesis testing: the goal is to empirically test hypotheses generated from theory.

🔢 Data and analysis approach

Aspect | Quantitative characteristics
Concepts | Distinct variables
Measures | Systematically created before data collection; standardized (e.g., job satisfaction scales)
Data form | Numbers from precise measurement
Theory | Largely causal and deductive
Procedures | Standard; replication is assumed
Analysis | Statistics, tables, charts; relates findings to hypotheses
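As a minimal illustration of the last two rows of the table (numeric data summarized with statistics), the sketch below summarizes invented responses on a hypothetical standardized job satisfaction scale.

```python
# Minimal sketch of the quantitative workflow described above:
# standardized measure -> numeric data -> summary statistics.
# The scores are invented purely for illustration.

from statistics import mean, stdev

# Hypothetical responses on a standardized 1-7 job satisfaction scale
scores = [5, 6, 4, 7, 5, 3, 6, 5, 4, 6]

print(f"n = {len(scores)}")
print(f"mean = {mean(scores):.2f}")   # central tendency
print(f"sd = {stdev(scores):.2f}")    # spread (sample standard deviation)
```

Because the measure is standardized, the same summary could be computed identically across sites or samples, which is what makes replication and comparison straightforward in quantitative work.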

🌐 Qualitative research philosophy and methods

🧑 Core philosophical stance: constructionism

Constructionism: knowledge is created, not discovered, and there are multiple realities based on someone's perspective.

  • Qualitative researchers are phenomenologists or human-centred researchers.
  • Must account for the "humanness" of participants—their thoughts, feelings, and experiences.
  • No neutral or objective outsider exists; the researcher is part of the process.

🔍 Key characteristics of qualitative methods

  • Understanding over prediction: seeks to understand why, how, and to whom a phenomenon applies.
  • Unobservable focus: examines thoughts, feelings, experiences—things that cannot be directly observed.
  • Perception matters: focuses on participants' interpretive meaning rather than the researcher's external interpretation.
  • Process-oriented: emphasizes understanding action and the meaning of that action.
  • Theory construction: develops theories from which hypotheses can later be drawn (rather than testing existing hypotheses).

📝 Data and analysis approach

Aspect | Qualitative characteristics
Concepts | Themes, motifs, generalizations, taxonomies
Measures | Specific to the individual setting or researcher (e.g., a specific scheme of values)
Data form | Words from documents, observations, transcripts (though quantification is still used)
Theory | Can be causal or non-causal; often inductive
Procedures | Particular to the study; replication is difficult
Analysis | Extracting themes from evidence; organizing data into a coherent picture; generalizations can generate hypotheses

🎯 Discovery vs. encapsulation

  • Quantitative: tests hypotheses the researcher generates beforehand.
  • Qualitative: discovers and encapsulates meanings once the researcher becomes immersed in the data.

🔀 Mixed methods and triangulation

🤝 What mixed methods means

Mixed methods: more of an approach to examining a research problem than a methodology; characterized by intentionally combining rigorous quantitative and qualitative techniques.

Three defining requirements:

  1. Examination of real-life contextual understandings, multi-level perspectives, and cultural influences.
  2. Intentional application of both rigorous quantitative research (assessing magnitude and frequency) and rigorous qualitative research (exploring meaning and understanding).
  3. Drawing on strengths of both data-gathering techniques to formulate a holistic interpretive framework.

⚠️ Common confusion: what is NOT mixed methods

  • Not mixed methods: adding a few open-ended questions to a quantitative survey.
    • This is still quantitative methods with some open-ended questions.
  • True mixed methods: undertaking quantitative methods (e.g., survey) and qualitative methods (e.g., interviews, focus groups, observation).

🔺 Triangulation

Triangulation: using a combination of multiple and different research strategies.

  • Allows researchers to take advantage of the strengths of various methods while overcoming weaknesses.
  • Some of the most highly regarded social scientific investigations combine approaches for the most complete understanding possible.
  • Other forms mentioned:
    • Triangulation of measures: multiple approaches to measure a single variable.
    • Triangulation of theories: relying on multiple theories to explain a single event or phenomenon.

⚖️ The debate about mixing approaches

Arguments for mixing:

  • Can be most effective at getting to "the truth" or "a truth."
  • Provides more complete understanding of research topics.

Arguments against mixing:

  • The fundamentally different beliefs about knowledge (realism vs. constructionism) may hamper ability to get at truth.
  • Discovery vs. creation of knowledge are incompatible philosophical stances.

Example: Some researchers believe you cannot simultaneously hold that there is one objective reality to discover (quantitative) and that multiple realities are constructed by perspective (qualitative).

💡 Practical implications and trends

📈 Growing acceptance of qualitative methods

  • Researchers in emergency and safety professions are increasingly turning toward qualitative methods.
  • The excerpt references peer-reviewed papers on qualitative research in emergency care, indicating expanding application beyond traditional social sciences.

💰 Power-knowledge and funding dynamics

  • Reflecting Foucault's idea of power-knowledge, people tend to like to quantify things.
  • Funding often goes to quantitative researchers because it is easier to demonstrate what the money was used for, given the focus on cause/effect and outcomes.
  • Qualitative researchers are often left out of funding decisions.
  • Critical question: What does this mean for our understanding of the world if certain approaches are systematically favored?

🔄 Relationship between approaches

  • Both types of research are crucial to understanding the social world.
  • Both play important roles in hypothesis development and testing.
  • Qualitative researchers may develop theories → quantitative researchers test hypotheses from those theories.
  • The distinction has led to "bitter rivalries and divisions in the research world," though most researchers recognize advantages of combining methods.
18

Mixed-Methods Research Approaches

3.6 Mixed-Methods Research Approaches

🧭 Overview

🧠 One-sentence thesis

Mixed-methods research combines quantitative and qualitative approaches to examine research problems holistically, drawing on the strengths of both to generate more complete understandings than either method alone.

📌 Key points (3–5)

  • What mixed methods is: an approach (not just a methodology) that intentionally combines rigorous quantitative and qualitative techniques to address complex research problems.
  • When to use it: problems requiring real-life contextual understanding, multi-level perspectives, cultural influences, and both magnitude/frequency assessment and meaning exploration.
  • Triangulation: using multiple different research strategies together to leverage strengths and overcome individual weaknesses.
  • Common confusion: adding a few open-ended questions to a quantitative survey is NOT mixed methods; true mixed methods means conducting both quantitative methods (e.g., surveys) AND qualitative methods (e.g., interviews, focus groups, observation).
  • Debate: some researchers believe mixing approaches gets closer to "the truth," while others argue the fundamentally different epistemological beliefs hamper truth-seeking.

🔬 What defines mixed-methods research

🎯 Core characteristics

Mixed methods is characterized by three requirements for the research problem:

  1. Contextual complexity: examination of real-life contextual understandings, multi-level perspectives, and cultural influences
  2. Dual rigor: intentional application of both rigorous quantitative research (assessing magnitude and frequency) and rigorous qualitative research (exploring meaning and understanding)
  3. Holistic integration: drawing on strengths of both data-gathering techniques to formulate a holistic interpretive framework for generating solutions or new understandings

🔍 More than methodology

  • The excerpt emphasizes that mixed methods represents "more of an approach to examining a research problem than a methodology."
  • It's about how you frame and tackle the entire research question, not just which tools you use.

🤝 The triangulation concept

🔺 What triangulation means

Triangulation: using a combination of multiple and different research strategies.

  • Allows researchers to take advantage of the strengths of various methods
  • Simultaneously helps overcome weaknesses inherent in individual methods
  • Example: A researcher might use surveys to measure frequency of a phenomenon (quantitative strength) while also conducting interviews to understand why it occurs (qualitative strength)

🔭 Other forms of triangulation

The excerpt mentions additional types beyond method triangulation:

Type | What it involves
Triangulation of measures | Using multiple approaches to measure a single variable
Triangulation of theories | Relying on multiple theories to help explain a single event or phenomenon

⚖️ The debate around mixing approaches

👍 Arguments for mixing

  • Researchers who favor mixed methods believe it can be "the most effective at getting to 'the truth' or at least 'a truth.'"
  • Some of the most highly regarded social scientific investigations combine approaches to gain the most complete understanding possible.
  • The combination allows researchers to address different aspects of complex problems simultaneously.

👎 Arguments against mixing

  • Some researchers argue against mixing these approaches.
  • Their concern: the fundamentally different beliefs about knowledge and its creation or discovery across approaches hampers one's ability to get at the truth.
  • This reflects the deeper epistemological divide between quantitative (realist, discovery-oriented) and qualitative (constructionist, creation-oriented) paradigms discussed earlier in the text.

⚠️ Common misconceptions

❌ What is NOT mixed methods

The excerpt provides an important clarification:

Not mixed methods: Adding a few open-ended questions (that collect qualitative data) to a quantitative survey

  • This is still quantitative methods
  • You're collecting data via a survey and adding some open-ended questions
  • The approach remains fundamentally quantitative

True mixed methods: Undertaking quantitative methods (e.g., a survey) AND qualitative methods (e.g., interviews, focus groups, observation, etc.)

  • Both approaches must be genuinely present
  • Each must be conducted with appropriate rigor
  • The integration must be intentional and meaningful

🎯 Why the distinction matters

  • Simply adding qualitative elements to a quantitative study doesn't create the holistic interpretive framework that defines mixed methods.
  • True mixed methods requires planning and executing both approaches as substantial components of the research design.
  • Don't confuse: surface-level mixing of data types vs. deep integration of different methodological approaches.
19

Reliability

4.1 Reliability

🧭 Overview

🧠 One-sentence thesis

Reliability in measurement ensures that applying the same measure consistently to the same person yields the same result each time, which is essential for meaningful research findings.

📌 Key points (3–5)

  • What reliability means: consistency—if the same measure is applied to the same person repeatedly, the result should be the same.
  • Why it matters: without reliable measures, we cannot be certain our findings mean what we think they mean.
  • Common problem—memory: asking people to recall past behavior can introduce inconsistencies, especially over long time periods.
  • Common confusion: reliability vs validity—reliability is about consistency of measurement; validity (mentioned briefly) is about shared understanding and whether the measure captures the concept's true meaning.
  • Practical challenge: observational measures can fail reliability if the researcher cannot consistently observe every instance of the behavior.

🔍 What reliability is

🔍 Core definition

Reliability in measurement is about consistency. If a measure is reliable, it means that if the same measure is applied consistently to the same person, the result will be the same each time.

  • Reliability does not ask "is this the right concept?"—it asks "does this measure give the same answer under the same conditions?"
  • The excerpt frames reliability as a prerequisite for meaningful findings: without it, we cannot trust that our results mean what we think.

🧪 Simple test for reliability problems

  • Ask: would the same person respond differently to the same question at different points in time for reasons unrelated to actual change in the concept?
  • Example: A person might answer "Have you ever had a problem with alcohol?" differently the morning after a wild night out versus the day before, even if their underlying alcoholism status has not changed.
  • Another example: A near-teetotaler nursing a headache after a single glass of wine might answer "yes" that morning but would have answered "no" the day before; the underlying concept has not changed, so this inconsistency signals a reliability problem.

🧠 Memory as a reliability challenge

🧠 Why memory matters

  • If researchers ask participants to recall past behavior, the difficulty of recalling accurately affects reliability.
  • The harder the recall task, the more likely responses will be inconsistent or inaccurate.

⏳ Time span and detail

  • Long, detailed recall: asking respondents to remember how much wine, beer, and liquor they consumed each day over three months is very difficult.
    • Unless someone keeps a journal, responses will likely contain inaccuracies.
    • Inaccuracies mean the measure is not reliable—different attempts to recall might yield different answers.
  • Short, simple recall: asking how many drinks of any kind a person consumed in the past week is simpler and more likely to produce accurate, consistent responses.
  • Don't confuse: the issue is not whether people lie; it is whether they can consistently and accurately remember.

👁️ Observational reliability challenges

👁️ The observer's limitations

  • Even when a researcher directly observes behavior (rather than relying on self-reports), reliability can be an issue.
  • Example scenario: A field researcher observes patrons at a pub, counting drinks consumed and noting behavior changes.
    • If the researcher leaves briefly (e.g., to use the restroom) and misses three shots of tequila consumed by a patron, her count is incomplete.
    • Her measure of alcohol intake depends on her ability to observe every instance of consumption.
    • If she cannot consistently observe all instances, the measure is not reliable.

🔧 Implication for research design

  • Reliability depends on the researcher's ability to apply the measure consistently.
  • If consistent observation is unlikely, the measurement mechanism may need to be redesigned (e.g., using video recording, multiple observers, or self-report logs).
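One of the redesign options mentioned above, multiple observers, can be sketched as a consistency check: two observers count independently, and any patron whose counts differ is flagged as unreliably observed. The patron names and counts are invented for illustration.

```python
# Two observers independently count drinks per patron. Observer B stepped
# away and missed three shots consumed by patron_3, so the counts diverge.
observer_a = {"patron_1": 4, "patron_2": 2, "patron_3": 5}
observer_b = {"patron_1": 4, "patron_2": 2, "patron_3": 2}

def disagreements(a, b):
    """Patrons whose counts differ between the two observers."""
    return {p: (a[p], b[p]) for p in a if a[p] != b[p]}

print(disagreements(observer_a, observer_b))  # {'patron_3': (5, 2)}
```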

🆚 Reliability vs validity (preview)

🆚 How they differ

| Aspect | Reliability | Validity |
| --- | --- | --- |
| Focus | Consistency | Shared understanding / accuracy |
| Question | Does the measure give the same result each time? | Does the measure capture the true meaning of the concept? |
| Example issue | Same person answers differently at different times | Different people interpret "alcoholic" differently |
  • The excerpt introduces validity briefly at the end: validity asks whether our measures "accurately get at the meaning of our concepts."
  • Don't confuse: a measure can be reliable (consistent) but not valid (not measuring what we think it measures), or vice versa.
  • Both are necessary for quality measurement, but they address different problems.
20

Validity

4.2 Validity

🧭 Overview

🧠 One-sentence thesis

Validity ensures that a measure actually captures the concept it intends to measure, and achieving it requires careful design to avoid measuring something else by mistake.

📌 Key points (3–5)

  • What validity means: whether your measure truly tests what you set out to test, not something else.
  • Internal validity: ensuring your study actually tests the causal relationship you claim, without confounding variables.
  • External validity: whether your findings generalize beyond your specific study to other situations and real-world contexts.
  • Common confusion: a measure can seem reasonable at first glance but still be invalid if it captures unrelated behaviors or subjective interpretations.
  • Validity as social agreement: no measure is perfect; validity is about how closely your measure approximates the true concept, like a portrait resembles a person.

🎯 What validity is and why it matters

🎯 The core idea of validity

Validity: whether a measure actually captures the concept it is intended to measure.

  • It is not about consistency (that's reliability); it's about accuracy of meaning.
  • A measure can be reliable (consistent) but still invalid if it consistently measures the wrong thing.
  • The excerpt emphasizes that validity is about testing "the very thing it seeks to test."

🖼️ The portrait analogy

  • Think of validity like a portrait of a person:
    • Some portraits look exactly like the person.
    • Others (caricatures, stick drawings) are less accurate.
    • What matters is how closely the representation approximates the real person.
  • No measure is exact, but some are more accurate than others.
  • Validity is about the degree of approximation, not perfection.

⚠️ How measures can fail to be valid

⚠️ The alcoholism question example

  • Initial measure: "Have you ever had a problem with alcohol?"
  • Why it fails validity:
    • "A problem" is subjective and varies dramatically between people.
    • For some, one embarrassing moment counts as a problem.
    • For others, the threshold is much higher (e.g., numerous incidents but still functioning at work).
  • Result: responses don't objectively measure alcoholism; they measure each person's subjective interpretation of "problem."
  • Example: if your goal is to objectively count how many participants are alcoholics, this measure won't yield useful or meaningful results.

⚠️ The gym visits example

  • Initial measure: count the number of times per week someone visits the gym (to measure dedication to healthy living).
  • Why it fails validity:
    • Gym visits can include activities unrelated to fitness: tanning beds, flirting, sitting in the sauna.
    • These activities are not good indicators of healthy living.
  • Result: the measure doesn't actually capture dedication to healthy living; it captures gym attendance for any reason.
  • Don't confuse: frequency of gym visits ≠ dedication to fitness, because the activities inside the gym vary.

🔬 Types of validity

🔬 Internal validity

Internal validity: whether a study actually tests the causal relationship it claims to test.

  • The challenge: in social sciences, causation is rarely as simple as "A causes B."
    • Many other variables may occur at the same time.
    • These variables may cause both A and B, creating confounding.
  • Threats to internal validity (mentioned in the excerpt):
    • History, maturation, testing, regression to the mean.
    • Selection biases.
    • Instrumentation issues.
  • How to control threats: use experiments and control/comparison groups (details in Chapter 6, per the excerpt).
  • Example: if you want to test whether a program causes behavior change, you must ensure the change isn't due to other events happening at the same time.

🌍 External validity

External validity: whether a study's findings generalize to other situations, contexts, and real-world environments beyond the current project.

  • What it means:
    • Can the results apply to different settings?
    • Do the findings reflect real-world environments where the phenomenon occurs?
    • Can you prove the findings were not due to chance?
  • Important clarification: external validity does not necessarily depend on having a representative sample.
    • It depends on the nature of the phenomenon and the research objectives.
  • Example: a study on one organization may still have external validity if the findings apply to similar organizations, even if the sample wasn't randomly selected.

🤝 Ensuring validity in practice

🤝 Validity as social agreement

  • Validity is fundamentally about social agreement: do others agree that your measure captures what you claim?
  • One quick way to help ensure validity: discuss your measures with others.
    • Get feedback on whether your operationalization makes sense.
    • Check if others interpret your measure the same way you do.

🤝 Accepting imperfection

  • No measure is an exact representation of a concept.
  • The goal is to get as close as possible to the true concept.
  • Some measures are more accurate than others; choose the one that best approximates your target concept.
  • Don't confuse: validity is not binary (valid/invalid); it's a matter of degree—how well does your measure approximate the concept?
21

Complexities in Measurement

4.3 Complexities in Measurement

🧭 Overview

🧠 One-sentence thesis

Social science measurement involves four distinct levels—nominal, ordinal, interval, and ratio—each with increasing mathematical precision, and understanding these levels is essential for choosing appropriate measurement strategies.

📌 Key points (3–5)

  • Four levels of measurement: nominal, ordinal, interval, and ratio, each building on the previous level's properties.
  • What distinguishes the levels: whether attributes can be rank-ordered, whether distances between attributes are equal, and whether a true zero point exists.
  • Common confusion: ordinal vs interval—ordinal lets you rank (more/less) but not measure exact distances; interval adds equal spacing between values.
  • Complexity varies by concept: some concepts (e.g., political party) are simpler to measure than others (e.g., sense of alienation).
  • Mathematical operations depend on level: nominal cannot be quantified mathematically; ratio allows full arithmetic including ratios.

📏 The four levels of measurement

📏 Overview of levels

Social scientists use the language of variables and attributes:

Variable: a grouping of several characteristics.

Attributes: the individual characteristics within a variable.

  • A variable's attributes determine its level of measurement.
  • There are four possible levels: nominal, ordinal, interval, and ratio.
  • Each level builds on the previous one, adding more mathematical properties.

🏷️ Nominal level

Nominal level: the most basic level of measurement where variable attributes meet the criteria of exhaustiveness and mutual exclusivity.

Key properties:

  • Exhaustive: everyone fits into one of the categories (e.g., partnered or single covers all relationship statuses).
  • Mutually exclusive: a person cannot occupy more than one category simultaneously (if single, cannot also be partnered).
  • Cannot be mathematically quantified: you cannot say one attribute has "more" or "less" value than another.

Examples from the excerpt:

  • Relationship status (partnered vs single)
  • Gender
  • Race
  • Political party affiliation
  • Religious affiliation

Example: Asking respondents if they are partnered or single exhausts the possibilities for relationship status, and no one can be both at once—this meets nominal criteria.

Don't confuse: Being unable to quantify does not mean the variable is unimportant; it simply means you cannot assign numerical order or distance (e.g., "partnered" is not mathematically "more" than "single").

📊 Ordinal level

Ordinal level: attributes can be rank-ordered, though we cannot calculate a mathematical distance between those attributes.

Key properties:

  • All nominal properties: exhaustive and mutually exclusive.
  • Can rank order: you can say one attribute is "more" or "less" than another.
  • Cannot measure exact distance: you know the order but not how much more or less.

Examples from the excerpt:

  • Social class
  • Degree of support for policy initiatives
  • Television program rankings
  • Prejudice

Example: One person's support for a public policy may be more or less than a neighbor's, but you cannot say exactly how much more or less.

Don't confuse: Ordinal with interval—ordinal gives you ranking (first, second, third) but not equal spacing between ranks.

📐 Interval level

Interval level: measures meet all criteria of nominal and ordinal levels, plus the distance between attributes is known to be equal.

Key properties:

  • All ordinal properties: exhaustive, mutually exclusive, and rank-ordered.
  • Equal distances: the gap between consecutive values is the same.
  • No true zero point: you cannot say what the ratio of one attribute is to another.

Examples from the excerpt:

  • IQ scores
  • Temperatures

What you can do:

  • Say how much more or less one attribute differs from another (because distances are equal).

What you cannot do:

  • Make ratio statements (e.g., "50 degrees is half as hot as 100 degrees" does not make sense).

Don't confuse: Interval with ratio—interval has equal spacing but no meaningful zero, so ratios are not valid.

🎯 Ratio level

Ratio level: attributes are mutually exclusive and exhaustive, can be rank-ordered, the distance between attributes is equal, and attributes have a true zero point.

Key properties:

  • All interval properties: exhaustive, mutually exclusive, rank-ordered, and equal distances.
  • True zero point: zero means the complete absence of the attribute.
  • Can calculate ratios: you can say one value is twice, three times, etc., another value.

Examples from the excerpt:

  • Age
  • Years of education

Example: A person who is 12 years old is twice as old as someone who is six years old—this ratio statement is valid because age has a true zero (birth).

Why it matters: Ratio is the highest level of measurement and allows the full range of mathematical operations.

🔍 Comparing the levels

| Level | Exhaustive & Mutually Exclusive | Rank Order | Equal Distance | True Zero | Example |
| --- | --- | --- | --- | --- | --- |
| Nominal | ✓ | | | | Political party |
| Ordinal | ✓ | ✓ | | | Degree of support |
| Interval | ✓ | ✓ | ✓ | | Temperature |
| Ratio | ✓ | ✓ | ✓ | ✓ | Age |

How to distinguish:

  • Nominal → Ordinal: Can you rank the attributes? If yes, at least ordinal.
  • Ordinal → Interval: Are the distances between ranks equal? If yes, at least interval.
  • Interval → Ratio: Is there a true zero point? If yes, ratio.
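The three distinguishing questions above form a decision procedure, which can be sketched as a small function. The property flags passed in for each example variable are drawn from the levels described in the text.

```python
def level_of_measurement(rank_order: bool, equal_distance: bool, true_zero: bool) -> str:
    """Classify a variable's level from its attribute properties.
    Assumes the attributes are already exhaustive and mutually exclusive,
    which every level requires."""
    if not rank_order:
        return "nominal"    # categories only, no more/less
    if not equal_distance:
        return "ordinal"    # can rank, but gaps are unknown
    if not true_zero:
        return "interval"   # equal gaps, but no meaningful zero
    return "ratio"          # full arithmetic, ratios are valid

print(level_of_measurement(False, False, False))  # political party -> nominal
print(level_of_measurement(True, False, False))   # degree of support -> ordinal
print(level_of_measurement(True, True, False))    # temperature -> interval
print(level_of_measurement(True, True, True))     # age -> ratio
```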

🧩 Measurement complexity

🧩 Why complexity varies

  • Some concepts are inherently more complex to measure than others.
  • The excerpt contrasts:
    • Simpler: political party affiliation (straightforward categories).
    • More complex: sense of alienation (abstract, multifaceted).

Implication: Researchers must choose measurement strategies appropriate to the concept's complexity and the level of precision needed.

🧩 Variables and attributes framework

  • Social scientists use the variable/attribute language to structure measurement.
  • The attributes you define determine which level of measurement you achieve.
  • Designing good attributes (exhaustive, mutually exclusive, and appropriately scaled) is foundational to quality measurement.
22

Units of Analysis and Units of Observation

4.4 Units of Analysis and Units of Observation

🧭 Overview

🧠 One-sentence thesis

Units of analysis define what entities a researcher ultimately wants to describe, while units of observation define what items are actually measured or collected—and these two units are often different, requiring researchers to be clear about both.

📌 Key points (3–5)

  • Unit of analysis: the entity you wish to say something about at the end of your study (the main focus).
  • Unit of observation: the item(s) you actually observe, measure, or collect during data collection.
  • They can differ: the unit of observation might be individuals, but the unit of analysis could be groups, organizations, or social phenomena.
  • Common confusion: observing individuals does not mean your unit of analysis is individuals—if you generalize about groups or phenomena, those become your unit of analysis.
  • Determined by different factors: unit of analysis is determined by your research question; unit of observation is determined by your data collection method.

🔍 Core concepts

🎯 What is a unit of analysis?

Unit of analysis: the entity that you wish to be able to say something about at the end of your study, probably what you would consider to be the main focus of your study.

  • It is not what you collect data from; it is what you want to describe or explain.
  • Your research question determines your unit of analysis.
  • Example: if you ask "Which students are most likely to be addicted to their cell phones?" your unit of analysis is individuals (students).
  • Example: if you ask "What are the various types of cell phone addictions?" your unit of analysis is social phenomena (types of addictions).

📊 What is a unit of observation?

Unit of observation: the item (or items) that you actually observe, measure, or collect in the course of trying to learn something about your unit of analysis.

  • It is what you directly gather data from.
  • Your data collection method determines your unit of observation.
  • Example: you might mail a survey to students—your unit of observation is individuals (the students who respond).
  • Example: you might analyze written policies—your unit of observation is documents.

🔄 How they relate

  • Units of analysis and observation can be the same, but they are not required to be.
  • What is required: researchers must be clear about how they define both units, to themselves and to their audiences.
  • Don't confuse: observing individuals does not automatically mean your unit of analysis is individuals—it depends on what you want to say at the end.

🗂️ Common units of analysis

🧑 Individuals

  • You want to describe characteristics of individual people.
  • Example: "Which students are most likely to be addicted to their cell phones?"
  • You may generalize about populations, but your unit of analysis is still the individual.

👥 Groups

  • Groups vary in size:
    • Micro-level: families, friendship groups, street gangs.
    • Meso-level: employees in an organization, professionals (such as chefs or lawyers), club members.
    • Macro-level: citizens of entire nations, residents of continents.
  • Example: "Do certain types of social clubs have more or fewer cell phone-addicted members than other sorts of clubs?"
  • Your unit of analysis is groups (clubs).
  • Don't confuse: if you ask whether people who join cerebral clubs are more likely to be addicted, your unit of analysis shifts to individuals.

🏢 Organizations

  • Entities like corporations, colleges and universities, night clubs.
  • Example: "How do different colleges address the problem of cell phone addiction?"
  • Your interest is in campus-to-campus differences, so the college is your unit of analysis.
  • You might examine schools' written policies (unit of observation = documents), but you ultimately describe differences across campuses.

🌐 Social phenomena

  • Social interactions, social problems, and other phenomena: murder, rape, counseling sessions, Facebook chatting, wrestling, voting, cell phone use or misuse.
  • Example: "What are the various types of cell phone addictions that exist among students?"
  • You might discover types centered on social media versus single-player games.
  • The resultant typology tells you something about the social phenomenon (unit of analysis).
  • Unit of observation would likely be individual people.

📜 Policies and principles

  • Studies that analyze policies and principles typically rely on documents as the unit of observation.
  • Example: a researcher hired by a college to help write an effective policy against cell phone use in the classroom might gather all previously written policies from campuses across the country.
  • The researcher compares policies at campuses with low versus high cell phone use in classrooms.
  • Unit of analysis: policies and principles.
  • Unit of observation: documents.

📋 Worked examples: cell phone addiction study

The excerpt provides a detailed table showing how different research questions yield different units of analysis and observation. Below is a summary:

| Research Question | Unit of Analysis | Data Collection | Unit of Observation | What You Can Say |
| --- | --- | --- | --- | --- |
| Which students are most likely to be addicted? | Individuals | Survey of students | Individuals | Media majors, men, and high-SES students are more likely to be addicted. |
| Do certain types of social clubs have more addicted members? | Groups | Survey of students | Individuals | Clubs with a scholarly focus have more addicted members than socially focused clubs. |
| How do different colleges address the problem? | Organizations | Content analysis of policies | Documents | Campuses without policies prohibiting cell phone use have high addiction levels. |
| What are the various types of addictions? | Social phenomena | Observations of students | Individuals | There are two main types: social and antisocial. |
| What are the most effective policies? | Policies and principles | Content analysis of policies and student records | Documents | Policies requiring group counseling for one semester treat addictions more effectively than expulsion policies. |

🧩 Key takeaway from the table

  • In multiple examples, the unit of observation is individuals (students surveyed or observed), but the unit of analysis varies: individuals, groups, organizations, social phenomena, or policies.
  • This shows that observing individuals does not lock you into analyzing individuals—what you want to say determines your unit of analysis.

⚠️ Common confusions and how to distinguish

🔀 "I'm surveying students, so my unit of analysis must be students"

  • Not necessarily. If you survey students but want to describe groups (e.g., clubs), your unit of analysis is groups.
  • If you survey students but want to describe types of addictions (a social phenomenon), your unit of analysis is the social phenomenon.
  • The unit of observation is students (you collect data from them), but the unit of analysis is determined by your research question.

🔀 "If I observe individuals, I can only talk about individuals"

  • False. You can observe individuals in order to describe some social phenomenon.
  • Example: observing students to create a typology of cell phone addictions means your unit of analysis is the social phenomenon, not the individual.

🔀 "Unit of analysis and unit of observation must match"

  • They can match, but they are not required to match.
  • Example: you might observe documents (unit of observation) to say something about organizations (unit of analysis).
23

Independent and Dependent Variables

4.5 Independent and Dependent Variables

🧭 Overview

🧠 One-sentence thesis

Independent variables cause changes in dependent variables, but researchers must always check whether extraneous variables might actually explain the outcome instead.

📌 Key points (3–5)

  • Independent variable (IV): the variable that causes another variable to change.
  • Dependent variable (DV): the variable that is caused by or depends on the independent variable.
  • Simple trick to identify them: ask "Is X dependent upon Y?"—whatever is dependent is the DV, and what it depends on is the IV.
  • Common confusion: extraneous variables can compete with the IV in explaining the outcome; if an extraneous variable is the real cause, it becomes a confounding variable that undermines the causal claim.
  • Why it matters: failing to account for extraneous variables destroys the integrity of cause-and-effect research.

🔗 Understanding independent and dependent variables

🔗 What they are

Independent variable (IV): one that causes another variable.

Dependent variable (DV): one that is caused by the other; dependent variables depend on independent variables.

  • The relationship is directional: IV → DV.
  • Example from the excerpt: gender causes cell phone addiction → gender is the IV, cell phone addiction is the DV.

🧩 How to identify which is which

The "dependent upon" trick:

  • Ask: "Is X dependent upon Y?"
  • Substitute the actual variables for X and Y.
  • Whichever variable is dependent is the DV; what it depends on is the IV.

Example from the excerpt:

  • Question: "Is success in an online class dependent upon time spent online?"
  • Success in an online class is dependent → DV = success in an online class.
  • Time spent online is what success depends on → IV = time spent online.
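The IV → DV direction also shows up in how an analysis is set up: the IV is the predictor, the DV is the outcome. A minimal sketch with invented data for the "time spent online → success" example, computing a least-squares slope by hand:

```python
# Hypothetical data: time spent online (IV) predicting course grade (DV).
time_online = [2, 4, 6, 8]   # IV: hours per week
grade = [60, 70, 80, 90]     # DV: course grade

def slope(iv, dv):
    """Least-squares slope of DV on IV: how much DV changes per unit of IV."""
    n = len(iv)
    mx, my = sum(iv) / n, sum(dv) / n
    num = sum((x - mx) * (y - my) for x, y in zip(iv, dv))
    den = sum((x - mx) ** 2 for x in iv)
    return num / den

print(slope(time_online, grade))  # 5.0: each extra hour predicts 5 more grade points
```

Note that swapping the two lists would answer a different question, which is why identifying the DV correctly matters before any analysis.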

📝 Practice examples

The excerpt provides four practice questions. Here are the patterns:

| Question | Dependent Variable (DV) | Independent Variable (IV) |
| --- | --- | --- |
| Is success in an online class dependent upon gender? | Success in online class | Gender |
| Is the prevalence of PTSD in Canada dependent upon the level of funding for early intervention? | Prevalence of PTSD | Level of funding for early intervention |
| Is the reporting of incidents of high school bullying dependent upon anti-bullying programs? | Reporting of high school bullying | Anti-bullying programs in high schools |
| Is the survival rate of female heart attack victims correlated to hospital emergency room procedures? | Survival rate of female heart attack victims | Hospital emergency room procedures |
  • Note: even when the question uses "correlated to" instead of "dependent upon," the same logic applies—the outcome being studied is the DV, and the factor being tested is the IV.

⚠️ Extraneous and confounding variables

⚠️ What extraneous variables are

Extraneous variable: a variable that may compete with the independent variable in explaining the outcome.

  • Extraneous variables are less commonly discussed but critically important.
  • They can destroy the integrity of a cause-and-effect study.
  • Always check for extraneous variables when identifying cause-and-effect relationships.

🔀 When extraneous becomes confounding

Confounding variable: an extraneous variable that really is the reason for an outcome (rather than the IV); it has confused or confounded the relationship we are interested in.

  • If an extraneous variable turns out to be the true cause, it "confounds" the causal claim.
  • The researcher mistakenly attributes the effect to the IV when the extraneous variable is actually responsible.

🧪 Example: new curriculum study

Setup:

  • Goal: test whether a new online curriculum improves student learning compared to the old curriculum.
  • Method: one experienced teacher uses the new curriculum with one class and the old curriculum with another class.
  • Result: students in the new curriculum class got higher grades.

Problem:

  • The excerpt asks: can we claim the new curriculum caused the higher grades?
  • Alternative explanations (extraneous variables):
    • The new curriculum class had more experienced students (older, more third-year students vs. first-year students).
    • The old curriculum class had a higher percentage of students for whom English is not their first language, who struggled with language barriers unrelated to the curriculum.

Why this matters:

  • Without random assignment to equate the groups, student experience level and language proficiency are extraneous variables.
  • Either could be the real cause of the grade difference, not the new curriculum.
  • If one of these is the true cause, it becomes a confounding variable that invalidates the causal claim about the curriculum.

🚨 Don't confuse

  • Independent variable vs. extraneous variable: the IV is the variable you are testing as the cause; an extraneous variable is an alternative explanation you must rule out.
  • Extraneous variable vs. confounding variable: all confounding variables are extraneous, but an extraneous variable only becomes confounding if it actually explains the outcome instead of the IV.
24

Extraneous Variables

4.6 Extraneous Variables

🧭 Overview

🧠 One-sentence thesis

Extraneous variables can destroy the integrity of cause-and-effect research by offering alternative explanations for outcomes, but researchers can control them through standardized procedures and random assignment.

📌 Key points (3–5)

  • What extraneous variables are: variables that may compete with the independent variable in explaining the outcome.
  • When they become confounding: if an extraneous variable is actually the reason for an outcome (rather than the IV), it confounds the relationship of interest.
  • Why they matter: they threaten claims of cause-and-effect relationships by providing alternative explanations for findings.
  • Common confusion: extraneous variables occur especially when random assignment is not used—groups may differ in ways unrelated to the IV.
  • How to control them: standardized procedures (keep everything except the IV constant) and random assignment (equalize participant characteristics across groups).

🧩 What extraneous variables are

🧩 Definition and role

An extraneous variable is a variable that may compete with the independent variable in explaining the outcome.

  • It is not the variable you are studying (the IV), but another factor that could explain your results.
  • The excerpt emphasizes that extraneous variables are discussed less often than IVs and DVs but are critical, because they can destroy a study's integrity.
  • If you want to identify cause-and-effect relationships, you must always determine whether extraneous variables exist.

⚠️ Confounding variables

  • When an extraneous variable really is the reason for an outcome (rather than the IV), it is called a confounding variable.
  • "Confounding" means it has confused or confounded the relationship you are interested in.
  • Don't confuse: an extraneous variable is a potential threat; a confounding variable is an extraneous variable that actually explains the outcome instead of the IV.

📖 Example scenario: curriculum study

📖 The setup

  • Researchers want to test the effectiveness of a new online course curriculum on student learning, compared to the old curriculum.
  • One experienced online teacher uses the new curriculum with one class and the old curriculum with another class.
  • Random assignment is not used to equate the groups.
  • Result: students in the new curriculum course (experimental group) got higher grades than the control group (old curriculum).

🚨 The problem: alternative explanations

The excerpt asks: "Do you see any problems with claiming that the reason for the difference is because of the new curriculum?"

Alternative explanations (extraneous variables):

  • Student experience level: the new curriculum group had more experienced students (more third-year than first-year students, older age).
  • English language proficiency: the old curriculum class had a higher percentage of students for whom English is not their first language; they struggled with the material due to language barriers, not the curriculum itself.

  • These alternative explanations mean the difference between groups could be due to experience level or language proficiency, rather than the IV (new vs. old curriculum).
  • The excerpt emphasizes: "we have a problem, in that there could be alternative explanations for our findings."

🔍 Why this happens

  • Extraneous variables "can occur when we do not have random assignation."
  • Without random assignment, groups may differ systematically in ways unrelated to the IV.
  • Example: the two classes may have differed in student characteristics before the curriculum was even introduced.

🛡️ How to control extraneous variables

🛡️ Method 1: Standardized procedures

This means that the researcher attempts to ensure that all aspects of the experiment are the same, with the exception of the independent variable.

What to standardize:

  • Recruitment: use the same method for recruiting participants.
  • Setting: conduct the experiment in the same setting.
  • Instructions and feedback: give the same explanation at the beginning and the same feedback at the end, in exactly the same way.
  • Rewards: offer any rewards for participation in the same manner for all participants.
  • Timing and environment: ensure the experiment occurs on the same day of the week (or month), at the same time of day, with constant temperature, brightness, and noise level in the lab.

Why it works:

  • By keeping everything constant except the IV, you reduce the chance that other factors (extraneous variables) explain the outcome.

🎲 Method 2: Random assignment

Random assignment means that every person chosen for an experiment has an equal chance of being assigned to either the test group or the control group.

Why it works:

  • Random assignment reduces the likelihood that characteristics specific to some participants have influenced the independent variable.
  • It equalizes participant characteristics across groups, so differences in outcomes are more likely due to the IV rather than pre-existing differences.
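
The idea above can be sketched in a few lines of code. This is a minimal illustration, not a procedure from the excerpt: it shuffles the participant list so every person has an equal chance of landing in either group, then splits the list in half (the function name and seed parameter are assumptions for the example).

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split participants into a test group and a control group,
    giving each person an equal chance of being in either group."""
    rng = random.Random(seed)       # seeded only so the example is repeatable
    shuffled = participants[:]      # copy so the original list is untouched
    rng.shuffle(shuffled)           # every ordering is equally likely
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (test group, control group)

test_group, control_group = randomly_assign(
    ["Ana", "Ben", "Chen", "Dana", "Eli", "Fay"], seed=1
)
```

Because assignment depends only on the shuffle, pre-existing participant characteristics (experience, language proficiency) are spread across both groups by chance rather than by class membership.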

Note:

  • The excerpt mentions that Chapter 6 provides more detail on random assignment and explains the difference between a test group and a control group.

🔑 Key takeaway

🔑 Always check for extraneous variables

  • The excerpt stresses: "Remember this, if you are ever interested in identifying cause and effect relationships you must always determine whether there are any extraneous variables you need to worry about."
  • Extraneous variables threaten the validity of cause-and-effect claims by offering rival explanations.
  • Controlling them through standardized procedures and random assignment is essential for research integrity.

Rival Plausible Explanations

4.7 Rival Plausible Explanations

🧭 Overview

🧠 One-sentence thesis

Rival plausible explanations (RPEs) are alternative factors that may account for research results instead of the expected cause, and while careful design can reduce many RPEs, some unavoidable events can still threaten a study's validity.

📌 Key points (3–5)

  • What an RPE is: an alternative explanation for your research results, different from what you expected or hypothesized.
  • Relationship to internal validity: RPEs are considered threats to internal validity, similar to extraneous variables.
  • Design vs unavoidable events: careful research design can eliminate many RPEs, but some external events cannot be controlled or predicted.
  • Common confusion: not all RPEs are obvious—some are blatant (like a violent incident during data collection), while others are subtle (weather, strikes, policy changes).
  • Researcher responsibility: researchers must assess how likely and significant an RPE is, discuss less obvious RPEs in limitations, and may need to scrap a project if an RPE is too severe.

🔍 What RPEs are and why they matter

🔍 Definition and nature

Rival plausible explanation (RPE): an alternative factor that may account for the results you observed in your research, other than what you might have been expecting.

  • RPEs function similarly to extraneous variables—both offer competing explanations for your findings.
  • The excerpt explicitly states that "threats to internal validity are considered RPEs."
  • They challenge whether your study actually measures what it claims to measure.

⚖️ The design limitation

  • Most RPEs can be eliminated through careful research design.
  • However, the excerpt emphasizes "it is important to acknowledge that some cannot."
  • This acknowledgment is crucial: no matter how well-designed a study is, unexpected events can introduce alternative explanations.

🎯 The safe injection centre example

🎯 The scenario setup

The excerpt provides a detailed example to illustrate how RPEs work in practice:

  • Research goal: measure a downtown Vancouver community's satisfaction with a safe injection centre after one year of operation.
  • Careful design: mail-out survey to every household on the voters' list and all community businesses; designed to eliminate threats to internal validity.
  • Unexpected event: shortly after mailing the survey, a violent incident occurs—a client attacks and seriously injures a staff member, then escapes and remains at large.

💥 How the RPE emerges

The excerpt asks critical questions about the incident's impact:

  • How will this incident affect community members and local businesses?
  • How might it affect how survey participants fill out the survey regarding their feelings about the centre?
  • How might answers differ if the survey had occurred before the incident?

The conclusion: "It is quite likely that this event will impact or 'colour' the responses of your participants."

  • There is now a strong likelihood of an RPE explaining negative reactions.
  • The researcher is not getting the "true feelings" of the community—feelings have been negatively influenced by the recent incident.
  • The excerpt states the findings are "essentially null" because the violent incident, not the centre's year-long operation, is driving responses.

🚨 Severity and decision-making

  • RPEs are described as "serious" and can "sink a research project."
  • The researcher spent significant time designing and planning, but the RPE may render the work unusable.
  • Key decision: the researcher must decide how significant and how likely it is that the RPE influenced results, then determine whether to scrap the project.

🌫️ Blatant vs subtle RPEs

🌫️ Recognizing different types

The excerpt distinguishes between obvious and hidden RPEs:

  • Blatant RPE: obvious, dramatic events that clearly affect responses (example from the excerpt: the violent incident at the safe injection centre).
  • Less obvious RPE: subtle factors that may still influence results (examples from the excerpt: weather, postal strikes, new government policy, recent media attention to an incident related to your research).

📝 Handling less obvious RPEs

  • Researchers must "always consider the likelihood that an RPE explains the results of their findings when analyzing data."
  • Less blatant RPEs cannot simply be ignored or cause the project to be scrapped.
  • Instead, they "must be discussed in the limitations section of the research findings."
  • This transparency allows readers to judge for themselves how much the RPE might have affected results.

🛡️ Researcher responsibilities

🛡️ Ongoing vigilance

  • RPE awareness is not a one-time design consideration—it extends through data collection and analysis.
  • Researchers must actively monitor for events or factors that could provide alternative explanations.
  • The assessment is both about likelihood (how probable is it that the RPE affected results?) and significance (how much impact would it have?).

🔬 Integration with research design

  • The excerpt connects RPEs back to the broader discussion of internal validity and extraneous variables from earlier sections.
  • While standardized procedures and random assignment (mentioned in the previous section) can control many threats, they cannot prevent all external events.
  • Don't confuse: controlling extraneous variables through design is different from managing unavoidable RPEs that emerge during the study—the former is preventable, the latter requires acknowledgment and assessment.

The Literature Review

5.1 The Literature Review

🧭 Overview

🧠 One-sentence thesis

A literature review surveys existing research on a topic to identify what has been studied, reveal gaps or contradictions, and position new research within the existing body of knowledge.

📌 Key points (3–5)

  • What it is: a survey of everything written about a topic, theory, or research question—not just a list, but an analysis and synthesis of key themes.
  • Core purposes: provide context, justify proposed research, ensure the problem hasn't already been solved, and show where new research fits.
  • Three main activities: research (discover what's been written), critical appraisal (evaluate and relate sources), and writing (explain findings).
  • Common confusion: a literature review is not the same as an essay or annotated bibliography—it analyzes relationships between sources and identifies gaps.
  • The funnel approach: start broad with general research on the issue, then narrow down to specific aspects that lead to the gap your research will address.

📚 What a literature review is

📚 Definition and scope

A literature review is a survey of everything that has been written about a particular topic, theory, or research question.

  • "Literature" means "sources of information"—the research already conducted on your chosen subject.
  • It informs you about existing work so you don't repeat research unnecessarily (unless there's a good reason: new developments, new populations, or testing reproducibility).
  • It can serve as background for a larger work (e.g., research proposal) or stand alone.
  • Much more than a simple list: it analyzes and synthesizes information about key themes or issues.

🔍 Beyond information search

The literature review goes deeper than just finding information:

  • Identifies relationships: articulates connections between existing literature and your research field.
  • Discovers contributions: helps understand what material exists and how various sources contribute to the topic.
  • Resolves contradictions: enables identification and (if possible) resolution of conflicting findings.
  • Finds gaps: determines research gaps and unanswered questions.

🎯 Why literature reviews matter

🎯 Core purposes

The basic purposes remain constant across different study types:

  • Context & justification: provides context for your research and justifies the research you're proposing.
  • Avoiding duplication: ensures your proposed research hasn't been done (or specifies why replication is necessary).
  • Positioning: shows where your research fits into the existing body of knowledge.
  • Learning: enables learning from previous theories and illustrates how the subject has been studied.
  • Critical analysis: highlights flaws in previous research and outlines gaps.
  • Contribution: shows how your research can add to understanding and knowledge; helps refine, refocus, or move the topic in a new direction.

🧪 Understanding the research landscape

The literature review advises you whether:

  • The problem you identified has already been solved by other researchers.
  • The status of the problem is confirmed.
  • What techniques have been used by others to investigate the problem.
  • Other related details exist.

Example: If an organization wants to study employee satisfaction, the literature review would reveal what methods previous researchers used, what factors they identified, and what questions remain unanswered.

🔧 The three-part process

🔧 Research phase

Research – to discover what has been written about the topic.

  • Conduct a library search of academic research on your topic (electronically or in-person).
  • Use library computers to find electronic and print holdings.
  • Google Scholar can be used for searches.
  • Research conducted outside academia can serve as an important source with practical implications (whereas academic research tends toward theoretical applications).

Important consideration: Understand who funded the research you review, plus the perspective and purpose of the research—this is increasingly relevant as universities and colleges turn to industry for research funding grants.

🧐 Critical appraisal phase

Critical Appraisal – to evaluate the literature, determine relationships between sources, and ascertain what has been done and what still needs to be done.

Key questions to consider while reviewing:

  • Who: Which researchers have studied this topic? Who are the most prolific writers? Who are the pioneers or leaders in this field?
  • Definitions: How have researchers defined key terms relevant to your topic? Have definitions evolved over time?
  • Theories: What theories have been examined and applied? Have they evolved over time?
  • Methods: What methodologies have been used? Have methodologies evolved over time?

✍️ Writing phase

Writing – to explain what you have found.

The funnel approach is helpful:

  • Start with a broad examination of research related to the issue.
  • Work down to look at more specific aspects of the issue.
  • Lead to the gap or specific issue that your research will address.

Don't confuse: This narrowing process is deliberate—you're guiding the reader from general knowledge to the specific problem you'll tackle.

📝 Practical note-taking strategies

📝 Organizing your research

Keep notes in an organized format (e.g., Excel file) that includes:

For empirical articles:

  • Write down research results in one or two sentences in your own words.
  • Example: "people who are between ages 18–35 are more likely to own a smart phone than those in an age range above or below."
  • Note the methods, research design, number of participants, and sample details.
  • Sometimes record names of statistical procedures used for analysis.

General information to track:

  • Key definitions and how they've evolved.
  • Theoretical frameworks applied.
  • Methodological approaches used.
  • Main findings and conclusions.

This systematic approach helps you later synthesize and analyze the relationships between sources rather than just listing them.
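
One hypothetical way to structure such a notes file is shown below; the column names and the sample row are illustrative inventions, not from the excerpt. Each source becomes one row, which can later be sorted and filtered when synthesizing themes.

```python
import csv
import io

# Illustrative columns for a literature-review notes file; adapt to your topic.
columns = ["citation", "findings_in_own_words", "methods", "design",
           "n_participants", "sample_details", "statistics", "notes"]

# One hypothetical entry (placeholder citation, made-up details).
rows = [{
    "citation": "Author (Year)",
    "findings_in_own_words": "people aged 18-35 are more likely to own a smartphone",
    "methods": "survey",
    "design": "cross-sectional",
    "n_participants": "412",
    "sample_details": "online panel",
    "statistics": "chi-square",
    "notes": "gap: no data on rural users",
}]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=columns)
writer.writeheader()
writer.writerows(rows)
notes_csv = buffer.getvalue()   # CSV text that Excel can open directly
```

The point is not the code but the discipline: recording the same fields for every source makes it far easier to spot patterns, contradictions, and gaps across studies later.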


What is involved in writing a literature review?

5.2 What is involved in writing a literature review?

🧭 Overview

🧠 One-sentence thesis

Writing a literature review involves three core activities—researching what has been written, critically appraising the sources to find gaps and connections, and writing an organized synthesis that shows how your research fits into the existing knowledge.

📌 Key points (3–5)

  • Three main activities: research (discover what's been written), critical appraisal (evaluate and find relationships), and writing (explain your findings).
  • The funnel approach: start broad with general research on the topic, then narrow down to specific aspects that reveal the gap your research will address.
  • Structure matters: organize by themes, methodology, or chronology—not alphabetically by source—to show connections and evolution of ideas.
  • Common confusion: a literature review is not an annotated bibliography (separate summaries) or an essay (proving a point); it's a synthesis looking for patterns, themes, and gaps across sources.
  • Expect to read more than you cite: if you need 20 sources in your final review, plan to read approximately 30 to find the most relevant contributions.

📚 The three core activities

🔍 Research: discovering what exists

  • Conduct library searches (electronic databases, Google Scholar) to find academic research on your topic.
  • Consider both academic and non-academic research, but understand who funded the research and their perspective/purpose.
  • Industry-funded research may have practical implications but requires careful evaluation of potential bias.

🧪 Critical appraisal: evaluating and connecting

  • Determine each source's contribution to the topic.
  • Identify and (if possible) resolve contradictions between sources.
  • Find research gaps and unanswered questions.
  • Evaluate the relationship between different sources rather than treating them as isolated pieces.

✍️ Writing: explaining your findings

  • Synthesize what you've learned into a coherent narrative.
  • Show how pieces of research connect and relate to each other.
  • Demonstrate where your proposed research fits into the existing body of knowledge.

🔬 How to undertake a literature review

📖 First step: library search and initial reading

Key questions to consider while reviewing:

  • Who: Which researchers are most prolific? Who are the pioneers or leaders in this field?
  • Definitions: How have key terms been defined? Have definitions evolved over time?
  • Theories: What theories have been examined? How have they evolved?
  • Methods: What methodologies have been used? Have they changed over time?

📝 Taking effective notes

Keep notes in an organized format (e.g., Excel file):

  • For empirical articles: write results in 1-2 sentences in your own words; note methods, design, participants, sample details, and statistical procedures.
    • Example: "people who are between ages 18–35 are more likely to own a smart phone than those in an age range above or below."
  • For review articles: identify the main points by reading or skimming, then looking away and asking yourself what the main idea was.
  • Always note: limitations, gaps, contradictions with other sources, and anything important or interesting.

🎯 Looking for the big picture

  • You're seeking common themes and patterns across research, not collecting random separate articles.
  • Identify how various pieces of research are linked.
  • Find research gaps—areas requiring further research related to your topic.
  • Don't confuse: You're not writing an annotated bibliography (separate summaries) or an essay (proving a point); you're synthesizing patterns.

🔎 Finding research gaps

  • Read the conclusion sections of journal articles and conference proceedings—researchers often identify gaps there.
  • Focus on the most recent research (within 2-3 years) to find current gaps.
  • Be aware: if a gap was suggested 10 years ago, it's likely been addressed by now.
  • Your research question may need to change or adapt based on what you discover—this is normal and common.

🏗️ Structure of a literature review

📌 Introduction

  • Identify the topic: briefly discuss its significance.
  • State your conclusion: outline what will be drawn from the literature review.
  • Defend importance: provide a broad overview of the scope (e.g., statistics showing impact).
  • Clarify scope: specify whether you're covering the entire history of the field or just a particular period.
  • If part of larger work: explain the importance of the review to your research question.

📊 Body

  • Organize by principles, not sources: discuss research according to organizational principles (thematic, methodological, chronological), not alphabetically.
  • Most paragraphs should discuss more than one source.
  • Compare, contrast, connect: show how various pieces of research relate; identify themes within the research.
  • Synthesize the evolution: demonstrate how research has evolved over time, who the main researchers are, and how methods/theories have changed.
  • Prioritize: spend more time on researchers and bodies of research considered most important or most relevant to your topic.
  • Use logical organization and clear transitions.

🎯 Conclusion

  • Suggest future directions: based on your research, indicate where the field will or should go next.
  • Show your contribution: if proposing your own study, explain how you'll fill gaps.
  • Reinforce importance: defend the topic's significance now that you've demonstrated the current state of thinking.

🗂️ Three ways to organize your review

  • Thematically (most common): use when explaining key themes or issues relevant to the topic. Identify variables or themes across studies (e.g., "10 variables relevant to user adoption of mobile technology: perceived usefulness, perceived ease of use, income/wealth...").
  • Methodologically: use when discussing interdisciplinary approaches or studies with different methods. Evaluate different models or frameworks used in the research area to determine if standardized methodologies exist.
  • Chronologically: use when historical changes are central to explaining the topic. Show evolution over time (e.g., "evolution of post-traumatic stress disorder and its impact on firefighters from the late 1970s through to the present").

🎨 Thematic organization

Most common approach: organizing by key themes or issues relevant to the topic.

  • Groups research around central concepts or variables.
  • Example: reviewing mobile technology adoption might organize around themes like "perceived usefulness," "perceived ease of use," "income/wealth," "education," etc.

🔬 Methodological organization

Also called a methodology review: organizing by research approaches or frameworks.

  • Useful when the field uses various models or interdisciplinary approaches.
  • Example: evaluating different models used in e-business adoption literature to determine if standardized methodologies exist.

⏳ Chronological organization

Organizing by time periods when historical changes are central.

  • Shows how definitions, methods, or treatment options have evolved.
  • Example: tracing how PTSD definition, study methods, or treatments have changed from the 1970s to present.
  • Don't confuse: This is not just listing sources by publication date; it's showing meaningful evolution of ideas, methods, or understanding over time.

🎓 Acceptable sources (in order of preference)

  • 1 (best): peer-reviewed journal articles. Rigorous blind review by 2–3 experts; a 12–18 month review process; the foundation of your literature review.
  • 2: edited academic books. Collections of original scholarly papers; reviewed, but often not blind review; use alongside peer-reviewed articles.
  • 3: professional journal articles. Use with caution: not usually peer-reviewed; check the "About Us" section to verify the review process.
  • 4: government statistical data. Acceptable for factual/statistical information.
  • 5 (use sparingly): professional association websites. Use carefully and sparingly.

🔬 Peer-reviewed journal articles (premier source)

Papers that have undergone rigorous blind review by 2-3 expert peers before publication in scholarly journals.

  • Blind review process: author names removed to minimize bias.
  • Long process (12-18 months) with multiple rounds of edits.
  • Reviewers may reject papers for unclear methods, lack of contribution, etc.
  • Should serve as the foundation for your literature review.

📚 Edited academic books (acceptable secondary source)

Collections of original scholarly papers by different authors, not published elsewhere.

  • Papers go through review but often not blind (authors are invited).
  • Fine to use but ensure your review contains mostly peer-reviewed journal papers.

📰 Professional journals (use with caution)

  • Articles are not usually peer-reviewed despite appearances.
  • Check the "About Us" section or Google the journal to verify review process.
  • Should not form the bulk of your literature review.

Acceptable sources for literature reviews

5.3 Acceptable sources for literature reviews

🧭 Overview

🧠 One-sentence thesis

Peer-reviewed journal articles form the foundation of credible literature reviews, with other sources like edited academic books and government data serving as supplementary material in descending order of acceptability.

📌 Key points (3–5)

  • Source hierarchy exists: acceptable sources range from peer-reviewed journals (most acceptable) to professional association websites (least acceptable, use sparingly).
  • Peer review = rigor: peer-reviewed articles undergo blind review by 2–3 experts over 12–18 months, making them the premier research source.
  • Common confusion: professional journal articles may appear peer-reviewed but often are not—check the journal's "About Us" section or search "[journal name] peer reviewed."
  • Government and association websites have limited roles: use them for statistical data or contextual numbers, not as primary evidence.
  • Foundation principle: peer-reviewed journal articles should serve as the foundation; other sources supplement but do not replace them.

📊 The source hierarchy

📊 Five tiers of acceptability

  • 1. Peer-reviewed journal articles: most acceptable; should be the foundation.
  • 2. Edited academic books: acceptable, but use alongside peer-reviewed papers.
  • 3. Professional journal articles: use with caution; often not peer-reviewed.
  • 4. Government statistical data: good for statistics only.
  • 5. Professional association websites: use sparingly and carefully.

🎯 Why the hierarchy matters

  • The ranking reflects the rigor of the review process each source type undergoes.
  • Your literature review's credibility depends on building from the most rigorous sources.
  • Lower-tier sources supplement but cannot substitute for higher-tier evidence.

🔬 Peer-reviewed journal articles

🔬 What makes them peer-reviewed

A peer-reviewed journal article is a paper that has been submitted to a scholarly journal, accepted, and published after going through a rigorous, blind review process.

  • Blind review process: author names are removed to minimize bias toward the researchers.
  • Expert evaluation: 2–3 experts in the research area review and must accept the paper.
  • Timeline: the process is long, often 12 to 18 months.
  • Iterative editing: many back-and-forth edits as researchers address reviewers' concerns.

❌ Why papers get rejected

Reviewers may reject papers for:

  • Unclear or questionable methods
  • Lack of contribution to the field
  • Other quality concerns

✅ Why they're the premier source

  • The rigorous review process ensures quality and credibility.
  • They have survived expert scrutiny before publication.
  • Foundation principle: peer-reviewed journal articles should serve as the foundation for your literature review.

🔍 Limits of the blind process

  • The excerpt notes that sometimes a savvy reviewer can discern authorship based on previous publications, despite the blind process.
  • Don't confuse: a paper submitted to a journal is not the same as a paper accepted and published after peer review.

📚 Edited academic books

📚 What they are

An edited academic book is a collection of scholarly scientific papers written by different authors; the works are original papers, not published elsewhere.

🔄 Review process differences

  • Papers within edited books go through a review process.
  • Key difference from peer review: the review is often not blind because authors have been invited to contribute.
  • This invited-author model means less anonymity than journal peer review.

⚖️ When to use them

  • Edited academic books are fine to use for your literature review.
  • Balance requirement: ensure your literature review contains mostly peer-reviewed journal papers, not primarily edited book chapters.

⚠️ Lower-tier sources

⚠️ Professional journal articles

  • Use with caution: articles in trade journals are not usually peer-reviewed, even though they may appear to be.
  • Common confusion: professional journals ≠ peer-reviewed journals—the format may look similar, but the review process differs.

How to verify:

  • Read the journal's "About Us" section, which should state whether papers are peer-reviewed.
  • Google the journal name plus "peer reviewed" to check.

📈 Government statistical data

  • Best use: excellent sources for statistical data.
  • Example from the excerpt: Statistics Canada collects and publishes data related to the economy, society, and the environment.
  • Limitation: use for data, not for research arguments or theoretical frameworks.

🌐 Professional association websites

  • Use sparingly and carefully: material from these websites can serve as a source for statistics you may need.
  • Appropriate scenario: Example from the excerpt—to justify research on PTSD in police officers, you might:
    • Use peer-reviewed articles to determine PTSD prevalence in Canadian police officers over ten years.
    • Use the Ontario Police Officers' Association website to find the approximate number of officers employed in Ontario.
    • Combine these to estimate how many officers could be suffering from PTSD, helping justify a research grant.
  • Key restriction: this type of website-based material should be used with caution and sparingly—it supplements peer-reviewed evidence but does not replace it.

🚫 What not to do

  • Don't rely on professional association websites for primary research evidence.
  • Don't use them as a substitute for peer-reviewed sources.
  • Don't overuse them—they provide context or supplementary numbers, not core arguments.

The Five 'C's of Writing a Literature Review

5.4 The Five 'C's of Writing a Literature Review

🧭 Overview

🧠 One-sentence thesis

The five 'C's framework—Cite, Compare, Contrast, Critique, and Connect—provides a structured approach to writing a literature review that synthesizes existing research and positions new work within the scholarly conversation.

📌 Key points (3–5)

  • The five 'C's framework: Cite, Compare, Contrast, Critique, and Connect—each serves a distinct purpose in organizing and presenting research literature.
  • Compare vs Contrast distinction: Comparing identifies where researchers agree and methods align; contrasting highlights disagreements, debates, and controversial areas.
  • Critique requires judgment: You must evaluate which arguments are more persuasive and which methods are more reliable, not just summarize neutrally.
  • Common confusion: A literature review is not the same as an essay or annotated bibliography—it focuses on the research landscape, not proving a single point or listing sources.
  • Connection to your work: The final 'C' explains how your research utilizes, extends, or departs from previous studies.

📝 The Five 'C's Framework

✍️ Cite

  • Reference all material used to define the research problem you will study.
  • This establishes the foundation and shows you have engaged with existing scholarship.
  • Proper citation gives credit and avoids plagiarism.

🔄 Compare

Compare: describe where various researchers agree and where they disagree; describe similarities and dissimilarities in approaches to studying related research problems.

  • Focus on alignment and agreement across studies.
  • Identify common themes, shared findings, and similar methodological approaches.
  • Example: Multiple researchers might agree that a particular intervention is effective, using similar experimental designs.

⚖️ Contrast

Contrast: describe what major areas are contested, controversial, and/or still in debate.

  • Focus on disagreement and divergence in the literature.
  • Highlight where researchers take different positions or where controversies exist.
  • Don't confuse with Compare: this is about identifying debates and unresolved questions, not just differences in approach.

🎯 Critique

  • Evaluate which arguments are more persuasive and explain why.
  • Assess which approaches, findings, and methods seem most reliable, valid, appropriate, or popular—and justify your assessment.
  • Pay attention to verb choice: use precise verbs like "asserts," "demonstrates," "argues," "clarifies" to characterize what researchers have stated.
  • This is not neutral summary—you must make judgments based on the evidence.

🔗 Connect

Connect: describe how your work utilizes, draws upon, departs from, synthesizes, adds to, or extends previous research studies.

  • Position your own research within the existing body of knowledge.
  • Explain the relationship between prior studies and your planned work.
  • This answers: "Where does my research fit in the scholarly conversation?"

🆚 Distinguishing a Literature Review from Other Academic Writing

📄 Literature Review vs Essay

| Aspect | Literature Review | Essay |
|---|---|---|
| Focus | Everything written about a topic; focused on the research and researchers | Proving a specific point |
| Coverage | Extensive coverage of material on the topic | Selective—only sources that prove the point |
| Perspective | Must discuss multiple perspectives comprehensively | May acknowledge contrary views but emphasizes one argument |

  • Key difference: Emphasis placement—where you put the focus in your writing.
  • The same resources can be used for both, but the purpose differs.
  • Example: An essay on negative effects of shiftwork on nurses would gather material to prove that point; a literature review would survey all research on shiftwork effects, including studies showing no effect or positive effects.

📚 Literature Review vs Annotated Bibliography

Annotated bibliography: provides all reference details plus a short (approximately 150 words) description of each reference.

  • An annotated bibliography is a list format with individual summaries.
  • A literature review is a synthesized narrative that integrates and analyzes sources together.
  • Don't confuse with a regular bibliography (just a list of resources used) or a reference list (sources cited in the paper).
  • Annotated bibliographies describe sources individually; literature reviews connect them thematically.

📖 Supporting Elements

🔍 Sources for Literature Reviews

The excerpt mentions acceptable sources that can support your literature review:

  • Peer-reviewed journal articles: Primary source for academic research.
  • Statistical data from governmental websites: For demographic or economic data (e.g., Statistics Canada).
  • Professional association websites: For contextual statistics (e.g., number of members) to justify research importance—use sparingly and with caution.

📋 APA Referencing

  • Social sciences literature reviews require APA (American Psychological Association) style referencing.
  • Purpose: give credit to original authors and avoid plagiarism.
  • The excerpt references the 6th edition of the APA manual (note: this may be outdated depending on current standards).

5.5 The Difference between a Literature Review and an Essay

🧭 Overview

🧠 One-sentence thesis

A literature review surveys all research on a topic with emphasis on the research itself, whereas an essay selectively uses sources to prove a specific point.

📌 Key points (3–5)

  • What a literature review emphasizes: everything written about a topic, theory, or research question—focused on the research and researchers.
  • What an essay emphasizes: proving a specific point by selecting only sources that support (or occasionally challenge) that argument.
  • Common confusion: students can use the exact same resources for both, but the difference lies in where the writing emphasis is placed.
  • Key distinction: a literature review provides extensive coverage of material; an essay is selective and argument-driven.
  • Example contrast: an essay on shiftwork's negative effects on nurses would gather material to prove harm, not comprehensively review all perspectives.

📚 Where the emphasis differs

📚 Literature review focus

A literature review focuses on everything that has been written about a particular topic, theory, or body of research. It is focused on the research and the researchers who have undertaken research on your topic.

  • The goal is comprehensive coverage of existing scholarship.
  • You survey the landscape of what has been studied and by whom.
  • The research itself—its approaches, findings, methods—is the center of attention.
  • Example: If reviewing shiftwork effects on nurses, you would cover all documented research perspectives, methods, and findings on this topic.

✍️ Essay focus

An essay focuses on proving a point. It does not need to provide an extensive coverage of all of the material on the topic.

  • The writer selects only sources that help prove the specific argument.
  • Coverage is intentionally limited to what supports (or provides necessary counterpoints to) the thesis.
  • Most professors expect a few contrary perspectives to be discussed, but these serve the argument rather than comprehensively mapping the field.
  • Example: An essay arguing shiftwork negatively affects nurses would gather material showing harm and various effects, mentioning contrary research briefly if at all.

🔄 Same resources, different treatment

🔄 Why students get confused

  • A student can use the exact same resources to create either a literature review or an essay.
  • The source material may be identical; what changes is how you organize and present it.
  • Don't confuse: having the same sources does not mean producing the same type of writing—the structural purpose differs fundamentally.

🎯 Where emphasis is placed

| Aspect | Literature Review | Essay |
|---|---|---|
| Coverage | Extensive—all material on the topic | Selective—only what proves the point |
| Focus | The research and researchers | The argument/point being made |
| Purpose | Survey the field | Prove a specific claim |
| Treatment of contrary views | Included as part of comprehensive coverage | Mentioned briefly to acknowledge, not to survey |

💡 Practical example: shiftwork and nurses

The excerpt provides a concrete scenario to illustrate the difference:

  • Essay approach: You want to write about the negative effects of shiftwork on nurses.

    • You gather material showing shiftwork negatively affects nurses and the various ways it does so.
    • You might find occasional research stating no effect, but you focus on providing information that proves your point.
    • Contrary evidence is acknowledged but not comprehensively explored.
  • Literature review approach (implied contrast): You would need to cover all research on shiftwork and nurses—positive findings, negative findings, neutral findings, different methodologies, different researcher perspectives—regardless of which direction you personally find most convincing.


5.7 APA Referencing (from JIBC Online Library)

🧭 Overview

🧠 One-sentence thesis

APA referencing is a standardized system of citation rules used in social sciences literature reviews to credit original authors and avoid plagiarism.

📌 Key points (3–5)

  • What APA referencing is: a set of rules developed by the American Psychological Association for writing and citing sources in social sciences.
  • Why it exists: to give credit to original authors and to prevent plagiarism (presenting someone else's work as your own).
  • When it's required: when creating a social sciences focused literature review, you must provide a reference list of all sources that appear in your paper.
  • Current standard: the 6th edition of the APA manual is the version in use (according to the excerpt).
  • Common confusion: APA referencing differs from a bibliography (which lists all resources used, even if not cited in the paper) and from an annotated bibliography (which adds short descriptions to each reference).

📚 What APA referencing covers

📝 Two main functions

APA referencing: a set of rules for writing and referencing (citing) your sources.

  • Writing rules: how to format and structure your academic writing according to APA standards.
  • Referencing (citing) rules: how to properly acknowledge sources both in-text and in the reference list.
  • The excerpt emphasizes that these are interconnected—APA governs both the writing style and the citation mechanics.

🎯 The dual purpose

The excerpt identifies two core reasons for using APA referencing:

| Purpose | What it means | Why it matters |
|---|---|---|
| Give credit | Acknowledge someone else's work | Ethical obligation to recognize original authors |
| Avoid plagiarism | Don't present others' work as your own | Academic integrity; plagiarism is a serious offense |

  • These purposes are complementary: proper citation simultaneously credits the author and protects you from plagiarism accusations.
  • Example: If you use an idea or data from a journal article in your literature review, APA referencing tells you exactly how to cite it so readers know it came from that source, not from you.

🔍 Context and application

📖 Where APA is used

  • Social sciences focus: The excerpt specifies that APA is "widely accepted in the social sciences."
  • Literature reviews: The context is creating a "social sciences focused literature review," which requires a reference list of all sources that appear in your paper.
  • Don't confuse: A reference list (sources cited in the paper) is different from a bibliography (all resources consulted, whether cited or not).

🆚 Distinguishing related concepts

The excerpt places APA referencing in contrast to two other academic writing formats mentioned earlier:

  • Bibliography: A list of all resources someone used to write a research paper; may include items not cited in the body of the paper.
  • Annotated bibliography: A bibliography plus a short description (approximately 150 words) of each reference.
  • APA reference list: Only the sources that actually appear in your paper, formatted according to APA rules.

Example: If you read ten articles but only cited seven in your literature review, your APA reference list would include only those seven; a bibliography might list all ten.

📘 The APA manual

  • The excerpt states that "the current version of the APA manual in use is the 6th edition."
  • The manual is the authoritative source for all APA rules.
  • The excerpt directs readers to a link (American Psychological Association Reference) for complete APA referencing guidance.

🛡️ Preventing plagiarism

⚠️ What plagiarism means

Plagiarism: putting forth someone else's work as your own.

  • The excerpt frames plagiarism as the central risk that APA referencing helps you avoid.
  • It's not just about copying text word-for-word; it includes presenting ideas, data, or arguments from others without attribution.

🔐 How APA protects you

  • By following APA rules, you create a clear trail showing which ideas are yours and which come from other sources.
  • Proper citation ensures that "you avoid being accused of plagiarism"—the excerpt emphasizes the protective function.
  • Example: If you summarize a study's findings in your literature review and cite it in APA format, readers can verify the original source and see that you're not claiming the research as your own.

6.1 Experiments

🧭 Overview

🧠 One-sentence thesis

Experiments are controlled data collection methods that test hypotheses by comparing groups exposed to different conditions, using random assignment to eliminate threats to internal validity.

📌 Key points (3–5)

  • What experiments are: controlled methods (often in labs) that test hypotheses by comparing an experimental group (receives stimulus) with a control group (does not receive stimulus).
  • Three key features of true experiments: independent and dependent variables, pretesting and post-testing, and experimental and control groups.
  • Random assignation: the process of assigning participants to groups so each has an equal chance of being in any condition, ensuring groups are equivalent before treatment.
  • Common confusion: not all research is an "experiment"—the term has a specific meaning in social science and should not describe all empirical research.
  • Why it matters: experiments aim to eliminate threats to internal validity and control for rival explanations.

🔬 What defines a true experiment

🔬 Core definition

An experiment is a method of data collection designed to test hypotheses under controlled conditions (often in a laboratory), with the goal to eliminate threats to internal validity.

  • Most commonly a quantitative research method.
  • Used more often by psychologists than sociologists, but useful for all social scientists to understand.
  • Takes place in a lab or controlled environment.

🎯 The classic experiment design

In the classic experiment, the effect of a stimulus is tested by comparing two groups: one that is exposed to the stimulus (the experimental group) and another that does not receive the stimulus (the control group).

  • Experimental group: receives the independent variable (stimulus).
  • Control group (comparison group): treated equally in all respects except does not receive the independent variable.
  • Purpose of control group: to control for rival plausible explanations.

⚠️ Important distinction

  • Students often use "experiment" to describe all empirical research projects.
  • In social scientific research, the term has a unique meaning and should not be used to describe all research methodologies.
  • Social sciences research usually takes place in natural settings using quasi-experimental designs rather than true experimental designs.

🧩 Three key features of true experiments

🧩 Feature 1: Independent and dependent variables

  • The researcher tests the effects of an independent variable upon a dependent variable.
  • The independent variable is what the researcher manipulates.
  • The dependent variable is what the researcher measures.

🧩 Feature 2: Pretesting and post-testing

  • Pre-test: measures participants on the dependent variable before the stimulus is administered.
  • Post-test: measures participants on the dependent variable after the stimulus is administered.
  • Both steps are important in a classic experiment.
  • Example: In the PTSD study, all police officers were given the same pre-test to assess PTSD levels before any video was shown, then a post-test afterward.

🧩 Feature 3: Experimental and control groups

  • Both groups must be present.
  • Both groups receive pre-tests and post-tests.
  • Only the experimental group receives the treatment/stimulus.
  • The control group provides a baseline for comparison.

📊 How experiments work in practice

📊 Example structure: PTSD study

| Component | Details |
|---|---|
| Participants | 100 police officers randomly assigned to groups |
| Pre-test | All participants assessed for PTSD levels; no significant differences found |
| Treatment | Experimental group watched a video depicting trauma; control group watched a video on scenic travel routes |
| Post-test | Both groups re-measured for PTSD symptoms |
| Results | Experimental group reported greater symptoms than control group |

  • Dependent variable: reported levels of PTSD symptoms (measured through pre- and post-test).
  • Independent variable: visual exposure to trauma (video).
  • Key question: Is the reported level of PTSD symptoms dependent upon visual exposure to trauma?
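The pre-test/post-test logic of this design can be sketched with invented numbers (the scores below are hypothetical, made up for illustration; they are not data from the study):

```python
from statistics import mean

# Hypothetical PTSD symptom scores (0-10 scale); invented for illustration.
experimental = {"pre": [3, 4, 2, 3], "post": [6, 7, 5, 6]}
control = {"pre": [3, 3, 4, 2], "post": [3, 4, 4, 3]}

def group_change(group):
    """Mean change on the dependent variable from pre-test to post-test."""
    return mean(group["post"]) - mean(group["pre"])

# A larger pre-to-post change in the experimental group than in the
# control group is evidence that the stimulus affected the dependent variable.
effect = group_change(experimental) - group_change(control)
```

Because both groups take the same pre-test and post-test, the control group's change serves as the baseline against which the experimental group's change is judged.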

📊 Example structure: Depression study

  • All participants randomly assigned to experimental or control group.
  • Pre-test showed no significant differences in depression between groups.
  • Experimental group read an article suggesting severe prejudice against their racial group.
  • Post-test showed experimental group reported greater depression than control group.
  • Dependent variable: depression levels.
  • Independent variable: feelings that prejudice is a significant issue within your racial group.

🎲 Random assignation

🎲 What it is and why it matters

Random assignation is a powerful research technique that addresses the assumption of pre-test equivalence—that the experimental and control group are equal in all respects before the administration of the independent variable.

  • Primary way researchers control extraneous variables across conditions.
  • Associated with experimental research methods.
  • Ensures groups are equivalent before treatment begins.

🎲 Two strict criteria

  1. Each participant has an equal chance of being assigned to each condition (e.g., 50% chance for two conditions).
  2. Each participant is assigned to a condition independently of other participants.

🎲 Methods of random assignation

  • Coin flip method: Heads = Condition A, Tails = Condition B.
  • Computer-generated method: For three conditions, generate random integer 1-3; each number corresponds to a condition.
  • Pre-created sequence: A full sequence of conditions is usually created ahead of time, and each new participant is assigned to the next condition in the sequence.
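These assignment methods are easy to sketch in code. A minimal Python illustration, assuming two conditions labelled "A" and "B" (the function names are mine, not from the excerpt):

```python
import random

def assign_condition(conditions=("A", "B")):
    """Assign one participant to a condition.

    Satisfies both strict criteria: each condition is equally
    likely, and each call is independent of earlier calls.
    """
    return random.choice(conditions)

def make_sequence(n_participants, conditions=("A", "B")):
    """Pre-create a full sequence of conditions, as is usually done
    before testing; each new participant takes the next entry."""
    return [assign_condition(conditions) for _ in range(n_participants)]
```

A purely random sequence like this can easily produce unequal group sizes, which is the problem block randomization corrects.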

🎲 Block randomization

  • A modified approach to keep group sizes equal.
  • Problem with strict random procedures: likely to result in unequal sample sizes.
  • In block randomization, all conditions occur once in the sequence before any are repeated.
  • Then all occur again before any are repeated again.
  • Within each "block," conditions occur in random order.
  • Statistically most efficient to divide participants into equal-sized groups.
  • Note: Never throw away data already collected to achieve equal sample sizes.

6.1.1 Random Assignation

🧭 Overview

🧠 One-sentence thesis

Random assignation is a powerful technique in true experiments that controls extraneous variables by giving each participant an equal and independent chance of being assigned to any condition, thereby addressing the assumption that groups are equivalent before treatment.

📌 Key points (3–5)

  • Core purpose: Random assignation controls extraneous variables across conditions and ensures pre-test equivalence between experimental and control groups.
  • Two strict criteria: each participant has an equal chance of assignment to each condition, and each assignment is independent of other participants.
  • Common confusion: Do not confuse random assignation (assigning participants to conditions) with random sampling (selecting a sample from a population).
  • Practical modification: Block randomization is often used instead of pure random assignment to keep group sizes equal, which is statistically more efficient.
  • Limitations and strengths: Random assignation is not guaranteed to control all extraneous variables but is always considered a strength; any confounds are likely detected through replication.

🎯 What random assignation does

🎯 Controlling extraneous variables

  • Random assignation is the primary way researchers attempt to control extraneous variables across conditions.
  • It addresses the assumption of pre-test equivalence: the experimental and control groups are equal in all respects before the independent variable is administered.
  • Example: Without random assignation, one group might accidentally have more motivated or less depressed participants, creating a confound.

🔗 Association with experimental methods

  • Random assignation is associated specifically with experimental research methods, not nonexperimental designs.
  • It is one of the defining characteristics of a true experiment.

🎲 How random assignation works

🎲 The two strict criteria

Random assignment in its strictest sense should meet two criteria: (1) each participant has an equal chance of being assigned to each condition, and (2) each participant is assigned independently of other participants.

  • Equal chance: For two conditions, each participant has a 50% chance of being assigned to either condition.
  • Independence: One participant's assignment does not influence another's assignment.

🪙 Simple methods

  • Coin flip for two conditions: Heads → Condition A; Tails → Condition B.
  • Computer-generated random integers for three conditions: 1 → Condition A; 2 → Condition B; 3 → Condition C.
  • Pre-generated sequence: In practice, a full sequence of conditions is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as they are tested.

📦 Block randomization

📦 Why modify strict random assignment

  • Problem with strict procedures: Coin flipping and other pure random methods are likely to result in unequal sample sizes in different conditions.
  • Unequal sample sizes are generally not a serious problem, and you should never throw away data to achieve equal sizes.
  • However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups.

🧱 How block randomization works

In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any is repeated again. Within each "block," the conditions occur in a random order.

  • This keeps the number of participants in each group as similar as possible.
  • The sequence is usually generated before any participants are tested.
  • When computerized, the program often handles the random assignment automatically.
  • Tools like the Research Randomizer website can generate block randomization sequences for any number of participants and conditions.
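The block procedure described above can be sketched in a few lines of Python (a minimal sketch, not the Research Randomizer's actual algorithm; the function name is mine):

```python
import random

def block_randomize(n_participants, conditions=("A", "B", "C")):
    """Generate a block-randomized assignment sequence.

    Every condition occurs once, in random order, within each
    block before any condition repeats, so group sizes stay as
    equal as possible.
    """
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions)
        random.shuffle(block)  # conditions occur in random order within the block
        sequence.extend(block)
    return sequence[:n_participants]
```

With 12 participants and three conditions, each condition is assigned exactly four times.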

⚖️ Limitations and strengths

⚖️ Not a perfect guarantee

  • Random assignation is not guaranteed to control all extraneous variables across conditions.
  • It is always possible that, just by chance, participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than participants in another condition.

💪 Why it remains a strength

Three reasons random assignation is still considered strong:

  1. Works better than expected: Random assignment works better than one might expect, especially for large samples.
  2. Inferential statistics account for fallibility: The statistical tests researchers use to decide whether a group difference reflects a population difference take the "fallibility" of random assignment into account.
  3. Replication detects confounds: Even if random assignment does result in a confounding variable and produces misleading results, this confound is likely to be detected when the experiment is replicated.

🚫 Common confusion to avoid

  • Do not confuse random assignation with random sampling.
    • Random assignation: assigning participants to experimental conditions.
    • Random sampling: selecting a sample from a population (covered in Chapter 7).
| Concept | What it does | When it's used |
|---|---|---|
| Random assignation | Assigns participants to conditions | In experiments, to control extraneous variables |
| Random sampling | Selects participants from a population | In sampling, to ensure representativeness |


6.2 Nonexperimental Research

🧭 Overview

🧠 One-sentence thesis

Nonexperimental research—which lacks manipulation of variables or random assignment—is often more appropriate than experiments when the research question involves single variables, non-causal relationships, unmanipulable factors, or ethical constraints.

📌 Key points (3–5)

  • What defines it: no manipulation of the independent variable and/or no random assignment of participants to conditions.
  • When to use it: when manipulation is impossible, unethical, or when the question is exploratory or involves pre-existing differences.
  • Three main types: cross-sectional (comparing pre-existing groups), correlational (examining relationships without manipulation), and observational (watching behavior in natural settings).
  • Common confusion: nonexperimental ≠ inferior; it is simply suited to different research questions, and experiments cannot answer all questions.
  • Internal validity trade-off: nonexperimental designs generally have lower internal validity than experiments but are necessary and valuable for many research contexts.

🔍 What nonexperimental research is

🔍 Core definition

Nonexperimental research: research that lacks manipulation of an independent variable and/or random assignment of participants to conditions.

  • It is not less important or inferior to experimental research; the distinction is about method, not value.
  • The choice depends on the nature of the research question, not a hierarchy of quality.

🧩 When nonexperimental methods are appropriate

The excerpt lists four scenarios where nonexperimental research is better suited:

| Scenario | Example from excerpt | Why nonexperimental fits |
|---|---|---|
| Single variable | "How accurate are people's first impressions?" | No relationship between two variables to manipulate |
| Non-causal relationship | "Is there a correlation between verbal and mathematical intelligence?" | Question asks about association, not cause |
| Unmanipulable variable | "Does hippocampus damage impair long-term memory?" | Cannot ethically or practically assign brain damage |
| Exploratory/experiential | "What is it like to be a working mother with depression?" | Broad, qualitative exploration of lived experience |

  • Don't confuse: a research project can contain both experimental and nonexperimental elements; for example, a correlational study might identify a relationship that is later tested experimentally.

🔬 Three types of nonexperimental research

🔬 Cross-sectional research

Cross-sectional research: comparing two or more pre-existing groups without manipulating the independent variable or randomly assigning participants.

  • How it works: the researcher selects groups that already differ on the variable of interest.
  • Example from excerpt: comparing memory ability in people who eat a balanced diet (per Canada Food Guide 2019) versus those who do not.
    • It would be unethical to randomly assign people to an unhealthy eating group.
  • Key limitation—selection bias: pre-existing groups may differ in other ways (e.g., the healthy-eating group might also exercise more and sleep better), so you cannot isolate the effect of diet alone.

📊 Correlational research

Correlational research: examining the relationship between variables without attempting to influence them.

  • The researcher observes naturally occurring variation; no manipulation.
  • Visualizing relationships—scatterplots: graphs that show two dimensions:
    1. Direction: positive (rises left to right), negative (falls left to right), curvilinear, or no relationship.
    2. Magnitude (strength): how closely points cluster around a line; tighter clustering = stronger relationship.
  • Pearson's r: a statistical measure of linear relationship strength.
    • Sign (+ or −) indicates direction; number (0 to 1.0) indicates strength.
    • 0 = no linear relationship; 1.0 = perfect linear relationship (all points on a straight line).
  • Don't confuse: correlation measures association, not causation; a strong correlation does not tell you which variable causes the other or whether a third variable is responsible.
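The direction-and-strength idea behind Pearson's r can be made concrete with a small from-scratch sketch (for illustration only; in practice a statistics package would compute this):

```python
import math

def pearson_r(x, y):
    """Pearson's r for two equal-length lists of scores.

    The sign gives the direction of the linear relationship;
    the absolute value (0 to 1.0) gives its strength.
    """
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

For example, `pearson_r([1, 2, 3, 4], [2, 4, 6, 8])` returns 1.0 (a perfect positive linear relationship, all points on a straight line), while reversing the second list gives -1.0.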

👁️ Observational research

Observational research: exploring an aspect of the world by watching behavior, often as one component of a larger research project.

  • Everyday example: choosing the fastest airport security line by observing which has fewer people or moves quickest.
  • Research example: studying nutrition choices in a high school cafeteria by watching students' actual food selections (not just asking them via questionnaire).
  • Covert vs. overt observation:
    • Covert: participants do not know they are being observed (e.g., watching cafeteria choices without students' awareness). Reduces social desirability bias but raises ethical concerns (lack of consent).
    • Overt: participants know and usually approve of being observed.
  • Observer involvement (three levels):
    1. Strictly as an observer (watching from outside).
    2. Strictly as a participant (involved in the activity).
    3. Both observer and participant (e.g., the Humphreys 1970 study of public washroom encounters, a controversial covert participant observation).
  • Observational research is rarely standalone; it typically complements other methods like surveys or interviews.

⚖️ Quasi-experiments and internal validity

⚖️ What quasi-experiments are

  • When used: when random assignment to treatment and control groups is not possible; selection is by participants, researcher, or both.
  • Key feature: the independent variable is manipulated (like an experiment), and the design tests causal hypotheses.
  • Three criteria for inferring causality:
    1. X (independent variable) comes before Y (dependent variable) in time.
    2. X and Y are related (they occur together).
    3. The relationship is not explained by other causal agents.
  • Comparison group: the researcher identifies a group as similar as possible to the treatment group at baseline; techniques like regression discontinuity and propensity score matching help reduce selection bias.

📉 Internal validity comparison

Internal validity: the extent to which the study design supports the conclusion that changes to the independent variable caused the observed changes in the dependent variable.

| Research type | Internal validity level | Why |
|---|---|---|
| Experimental | Highest | Manipulation + random assignment control for extraneous variables and direction of causality |
| Quasi-experimental | Middle | Manipulation is present, but lack of random assignment allows other explanations |
| Correlational (nonexperimental) | Lowest | No manipulation; causality direction could be reversed, or a third variable could explain both |

  • Example from excerpt: A researcher implements a PTSD awareness program at one fire hall but not another. Lower PTSD at the treatment fire hall could be due to the program, or it could be due to pre-existing differences between firefighters at the two halls (e.g., job conditions, leadership). Without random assignment, we cannot rule out these alternative explanations.
  • Don't confuse: lower internal validity does not mean the research is useless; quasi-experimental and correlational designs are the most common in social sciences because many variables cannot be manipulated or randomly assigned.

🧷 Practical and ethical considerations

🧷 Why manipulation or random assignment is often impossible

  • Ethical issues: denying needed treatment, assigning people to harmful conditions (e.g., unhealthy diet, brain damage), or providing large rewards only to some participants.
  • Feasibility: some variables are inherent traits (e.g., intelligence, pre-existing medical conditions) or life circumstances (e.g., being a working mother) that cannot be assigned.
  • The excerpt emphasizes that the nature of the research question guides the choice of method, not a preference for one type over another.

6.2.1 Cross-Sectional Research

🧭 Overview

🧠 One-sentence thesis

Cross-sectional research compares pre-existing groups without manipulating variables or randomly assigning participants, making it useful when ethical or practical constraints prevent true experiments but introducing the risk of selection bias.

📌 Key points (3–5)

  • What cross-sectional research is: a non-experimental method that compares two or more pre-existing groups of people.
  • Key constraint: the independent variable is not manipulated, and participants are not randomly assigned to groups.
  • When to use it: when it would be unethical or impractical to assign participants to conditions (e.g., assigning people to an unhealthy eating group).
  • Common confusion: pre-existing groups may differ in multiple ways beyond the variable of interest, creating selection bias—don't assume the measured variable is the only difference.
  • Main limitation: because groups are not randomly assigned, other variables may confound the results, making it hard to isolate the effect of the variable being studied.

🔍 What cross-sectional research is

🔍 Definition and core features

Cross-sectional research: a type of non-experimental research used to compare two or more pre-existing groups of people.

  • The independent variable is not manipulated by the researcher.
  • There is no random assignment of participants to groups.
  • Participants are already in their groups before the study begins (hence "pre-existing").

🆚 How it differs from experimental research

  • In experimental research, the researcher manipulates the independent variable and randomly assigns participants to conditions.
  • In cross-sectional research, the researcher observes and compares groups that already exist.
  • Example from the excerpt: comparing people who already eat a balanced diet versus those who do not, rather than assigning participants to eat healthily or unhealthily.

🧪 When and why to use cross-sectional research

🚫 Ethical and practical constraints

  • Cross-sectional research is appropriate when manipulation or random assignment would be unethical or impractical.
  • The excerpt gives the example of healthy eating: it would not be ethical to randomly assign participants to an unhealthy eating group.
  • Similarly, the broader context mentions that denying needed treatment or providing unequal rewards can raise ethical issues in true experiments.

🎯 Research questions suited to cross-sectional methods

  • Questions that involve comparing naturally occurring groups.
  • Questions where the independent variable cannot be manipulated (e.g., existing dietary habits, pre-existing conditions).
  • Example: "Do people who regularly eat a balanced diet have better memory ability than those who do not?"

⚠️ The selection bias problem

⚠️ What selection bias means

  • Selection bias: the danger that pre-existing groups differ in ways beyond the variable of interest.
  • Because participants are not randomly assigned, other characteristics may systematically differ between groups.
  • The excerpt emphasizes: "there is a danger of introducing a selection bias to the research, because the groups may differ in other ways."

🧩 How selection bias confounds results

  • In the healthy eating example, the healthy food eating group may also be more likely to exercise and get more sleep.
  • Both exercise and sleep can increase memory function.
  • The problem: the researcher cannot know whether memory differences are due to healthy eating alone or to these other variables.
  • The excerpt states: "We would not know then what the effect of healthy eating is, in isolation, upon memory ability, because there may be other variables (e.g. exercise, sleep) that factor into memory ability."

🔄 Don't confuse: correlation vs. causation

  • Cross-sectional research can show that groups differ, but it cannot prove that the variable of interest caused the difference.
  • Other unmeasured variables (confounds) may be responsible.
  • Example: if healthy eaters have better memory, it could be the diet, the exercise, the sleep, or a combination—cross-sectional design cannot isolate the cause.
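The confound can be made concrete with a toy simulation (all numbers are hypothetical, for illustration only): diet is given no effect at all on memory, yet the two pre-existing groups still differ in average memory score, simply because exercise rates differ between them.

```python
import random
from statistics import mean

random.seed(1)  # reproducible toy numbers

# Hypothetical model: diet itself has NO effect on memory, but exercise
# raises memory scores by 10 points on average.
def memory_score(exercises):
    base = random.gauss(50, 5)              # everyone shares the same baseline
    return base + (10 if exercises else 0)  # only exercise helps in this model

# Pre-existing groups: 80% of healthy eaters exercise vs 20% of the others.
healthy = [memory_score(random.random() < 0.8) for _ in range(1000)]
unhealthy = [memory_score(random.random() < 0.2) for _ in range(1000)]

diff = mean(healthy) - mean(unhealthy)
print(round(diff, 1))  # roughly 6 points: looks like a "diet effect", but is not
```

The apparent group difference is produced entirely by the confound (exercise), which is exactly what a cross-sectional comparison cannot rule out.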

📊 Summary: strengths and limitations

| Aspect | Description |
| --- | --- |
| Strength | Allows comparison when manipulation or random assignment is unethical or impractical |
| Limitation | Cannot isolate the effect of the variable of interest due to potential confounding variables |
| Key risk | Selection bias: pre-existing groups may differ in multiple ways |
| Causal inference | Cannot establish causation; can only show associations or differences between groups |

6.2.2 Correlational Research

🧭 Overview

🧠 One-sentence thesis

Correlational research examines relationships between variables without manipulating them, using scatterplots and statistical measures to reveal the direction and strength of associations.

📌 Key points (3–5)

  • What correlational research is: a non-experimental method that studies relationships between variables without attempting to influence or manipulate them.
  • How relationships are visualized: scatterplot diagrams show both the direction (positive, negative, curvilinear, or none) and the strength (how tightly points cluster around a line) of relationships.
  • How relationships are measured: Pearson's r quantifies linear relationships with a sign (+ or -) for direction and a number (0 to 1.0) for strength.
  • Common confusion: correlation vs. causation—correlational research reveals associations but does not establish cause-and-effect because variables are not manipulated and other factors may be involved.
  • Key limitation: without manipulation or random assignment, pre-existing group differences (selection bias) can confound results, as illustrated by the healthy eating and memory example.

🔬 What correlational research is

🔬 Definition and core features

Correlational research: a type of non-experimental research in which the researcher is interested in the relationship between variables; however, the researcher does not attempt to influence the variables.

  • The researcher observes and measures variables as they naturally occur.
  • This contrasts with experimental research, where the researcher actively manipulates variables.
  • The independent variable is not manipulated, and there is no random assignment of participants to groups.

⚠️ Selection bias and confounding variables

  • Because correlational research compares pre-existing groups, there is a danger of introducing selection bias.
  • Groups may differ in other ways beyond the variable of interest.
  • Example: A researcher wants to compare memory ability of people who eat a balanced diet (according to Canada Food Guide 2019) versus those who do not. It would be unethical to randomly assign participants to an unhealthy eating group, so the researcher must compare pre-existing groups. However, the healthy eating group may also be more likely to exercise and get more sleep—both of which increase memory function. The researcher cannot isolate the effect of healthy eating alone because other variables (exercise, sleep) may also influence memory ability.

📊 Visualizing relationships with scatterplots

📊 What scatterplots show

Scatterplot diagram: a graph that visualizes relationships between variables.

  • Scatterplots provide information on two dimensions: direction and strength.

🧭 Direction of relationship

Scatterplots reveal the direction of the relationship between variables:

| Direction | Description |
| --- | --- |
| Positive (linear) | Points rise from left to right; as one variable increases, the other increases |
| Negative (linear) | Points fall from left to right; as one variable increases, the other decreases |
| Curvilinear | Points follow a curved pattern |
| No relationship | Points are scattered with no discernible pattern |

  • A positive relationship is also called a positive correlation.
  • A negative correlation falls from left to right.

💪 Strength (magnitude) of relationship

The strength of a relationship is indicated by how closely points cluster around a line:

  • Strongest relationship: all points fall along the same straight line (the regression line).
  • Next strongest: a little bit of dispersion around the line; if you draw an oval close to the line, all points are captured within the oval.
  • Weaker relationship: the more dispersed the points (i.e., the points do not adhere as closely to the line), the weaker the relationship.

📐 Measuring relationships with Pearson's r

📐 What Pearson's r measures

Pearson Product-moment Correlation Coefficient (Pearson's r): a statistical method developed by Karl Pearson near the beginning of the 20th century to measure the strength of linear relationships between variables.

  • Pearson's r measures the strength of linear relationships only.
  • It does not measure curvilinear or other non-linear relationships.

➕➖ Two aspects of Pearson's r

Pearson's r has two components:

| Aspect | What it represents | Interpretation |
| --- | --- | --- |
| Sign (+ or -) | Direction of relationship | + = positive (direct) relationship; - = negative (inverse) relationship |
| Number (0 to 1.0) | Strength of relationship | 0 = no linear relationship; 1.0 = perfect linear relationship |

  • An r of 1.0 corresponds to a scatterplot in which all points lie on the same straight line.
  • Example: A Pearson's r of +0.85 indicates a strong positive linear relationship; a Pearson's r of -0.30 indicates a weak negative linear relationship.

🔍 Don't confuse: direction vs. strength

  • The sign tells you the direction (whether variables move together or in opposite directions).
  • The number tells you the strength (how tightly the relationship holds).
  • A strong negative relationship (e.g., -0.90) is just as strong as a strong positive relationship (e.g., +0.90); the difference is only in direction.
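Pearson's r can be computed directly from its definition: the covariance of the two variables divided by the product of their standard deviations. A minimal sketch in Python, using made-up data points:

```python
import math

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient.

    The sign gives the direction of the linear relationship;
    the absolute value (0 to 1.0) gives its strength.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Perfect positive relationship: all points fall on one straight line.
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 10))   # 1.0
# Perfect negative (inverse) relationship.
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 10))   # -1.0
```

Note that the function only captures linear relationships: a strong curvilinear pattern can still produce an r near zero.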

6.2.3 Observational Research

🧭 Overview

🧠 One-sentence thesis

Observational research explores aspects of the world by watching behavior, and it is typically combined with other methods to provide a fuller picture, though it raises ethical questions about whether participants should know they are being observed.

📌 Key points (3–5)

  • What observational research does: explores aspects of the world by watching behavior, often as one part of a larger research project.
  • Covert vs overt observation: covert means participants don't know they're being watched; overt means they know and usually give approval.
  • Common confusion: observational research is rarely stand-alone—it usually complements questionnaires, interviews, or other methods.
  • Observer involvement varies: researchers can be strictly observers, strictly participants, or both observer and participant.
  • Ethical challenge: covert observation can capture natural behavior but raises consent issues.

🔍 What observational research is

🔍 Definition and everyday use

Observational research: seeks to explore an aspect of the world, for a variety of purposes.

  • Many people do informal observational research without thinking about it.
  • Example: deciding which airport security line to join by watching which has fewer people, moves fastest, or has fewer children—you observe to make a decision when pressed for time.

🧩 Role in research projects

  • From a research perspective, observational research is usually one aspect of an overriding research project.
  • It is rarely a stand-alone method of data collection.
  • Example: studying nutrition in high school cafeterias—you might distribute questionnaires and conduct interviews, but your research would not be complete without standing back and watching the actual food choices students make.

🎭 Covert vs overt observation

🎭 Covert observation

Covert research: when research participants do not know they are being observed.

  • Used when researchers want to capture natural behavior without influence.
  • Example: in the high school cafeteria nutrition study, you would not want students to know you are watching them, because they may make different choices than they normally would due to your presence (this is related to social desirability bias).
  • Ethical challenge: observing in a covert fashion raises issues, such as not securing participants' consent to be observed.

👁️ Overt observation

Overt observation: when participants know and give their approval (usually, although not always).

  • Participants are aware they are being observed.
  • Approval is typically obtained, though the excerpt notes "usually, although not always."
  • Don't confuse: overt observation may still influence behavior, but it addresses the consent problem that covert observation raises.

🧑‍🔬 Levels of observer involvement

🧑‍🔬 Three aspects of involvement

According to the excerpt, there are three aspects of observer involvement:

| Type | Description |
| --- | --- |
| Strictly as an observer | The researcher only watches and does not participate |
| Strictly as a participant | The researcher participates without observing in a detached way |
| Both observer and participant | The researcher both observes (covertly or overtly) and participates |

📚 Historical example

  • One of the most infamous covert participant observational studies is that of Humphreys (1970).
  • The study involved covert observation of homosexual encounters in public washrooms.
  • Humphreys published his findings in a book that later won the C. Wright Mills Award, one of the most prestigious book awards for sociological research and writing.
  • Today, the awarding of this award to Humphreys is almost as controversial as the study itself.
  • This example illustrates the ethical tensions in covert participant observation.

6.3 Quasi-Experiments

🧭 Overview

🧠 One-sentence thesis

Quasi-experiments allow researchers to test causal hypotheses by manipulating an independent variable when random assignment to groups is not possible, though they require careful comparison-group design to infer causality.

📌 Key points (3–5)

  • When quasi-experiments are used: when it is not possible to randomly assign participants to treatment and control groups.
  • How selection works: participants are assigned to groups by the participants themselves, the researcher, or both—not by random assignment.
  • Three criteria for inferring causality: the independent variable must come before the dependent variable in time; they must be related; and the relationship cannot be explained by other causal agents.
  • Common confusion: quasi-experiments vs. true experiments—quasi-experiments manipulate the independent variable and test causal hypotheses like experiments, but lack random assignment.
  • How to reduce bias: techniques like regression discontinuity design and propensity score matching help create comparison groups that are as similar as possible to treatment groups at baseline.

🔬 What quasi-experiments are

🔬 Definition and alternative name

Quasi-experiments (also known as field experiments): research designs used when it is not possible to randomly assign participants to treatment and control groups.

  • The independent variable is still manipulated, just as in a true experiment.
  • Like experiments, quasi-experiments test causal hypotheses.
  • The key difference from true experiments is the absence of random assignment.

👥 How group assignment works

  • Selection to a group happens through:
    • The participants themselves
    • The researcher
    • Both the participant and the researcher
  • This is in contrast to true experiments, where random assignment determines group membership.
  • Example: A researcher studying a new teaching method in schools might assign entire classrooms (not individual students randomly) to treatment or control, because random assignment within a classroom is not feasible.

🧪 Inferring causality in quasi-experiments

🧪 The three criteria for causality

Quasi-experiments allow researchers to infer causality by using experimental logic in a different way, but three criteria must be satisfied:

  1. Temporal order: The independent variable (X) must come before the dependent variable (Y) in time.
  2. Relationship: X and Y must be related to each other (i.e., they occur together).
  3. No confounding: The relationship between X and Y cannot be explained by other causal agents.
  • All three criteria must be met to infer that X causes Y.
  • Don't confuse: meeting these criteria does not guarantee causality as strongly as in a true experiment, but it provides a logical basis for causal inference.

🔍 Why these criteria matter

  • Temporal order ensures that the cause comes before the effect, ruling out reverse causation.
  • Relationship confirms that changes in X are associated with changes in Y.
  • No confounding rules out alternative explanations—other variables that might be the real cause.
  • Example: If a new policy (X) is introduced in January and health outcomes (Y) improve in March, the temporal order is satisfied; if the two are statistically related and no other major changes occurred, causality may be inferred.

🎯 Creating comparison groups

🎯 The comparison-group challenge

  • In a quasi-experiment, the researcher identifies a comparison group that is as similar as possible to the treatment group.
  • Similarity is judged based on baseline (pre-intervention) characteristics.
  • Because assignment is not random, the comparison group may differ from the treatment group in ways that affect the outcome, introducing selection bias.

🛠️ Techniques for reducing selection bias

The excerpt mentions two techniques for creating better comparison groups:

| Technique | Purpose |
| --- | --- |
| Regression discontinuity design | Reduces selection bias by using a cutoff rule to assign groups |
| Propensity score matching | Matches treatment and comparison participants on observed characteristics to make groups more similar at baseline |

  • These techniques help make the comparison group more comparable to the treatment group, strengthening causal inference.
  • Don't confuse: these techniques reduce bias but do not eliminate it entirely—they are not substitutes for random assignment.
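The core matching step behind propensity score matching can be sketched with toy data (the IDs and scores below are hypothetical; in practice each propensity score is the estimated probability of receiving treatment given baseline characteristics, usually from a logistic regression):

```python
def nearest_neighbour_match(treated, comparison):
    """1:1 nearest-neighbour matching on propensity scores, with replacement.

    Each argument is a list of (unit_id, propensity_score) pairs.
    Returns a list of (treated_id, matched_comparison_id) pairs.
    """
    matches = []
    for unit_id, score in treated:
        # Pick the comparison unit whose score is closest to this treated unit's.
        best = min(comparison, key=lambda c: abs(c[1] - score))
        matches.append((unit_id, best[0]))
    return matches

# Hypothetical units: two treated, three potential comparisons.
treated = [("T1", 0.62), ("T2", 0.35)]
comparison = [("C1", 0.30), ("C2", 0.60), ("C3", 0.90)]
print(nearest_neighbour_match(treated, comparison))
# [('T1', 'C2'), ('T2', 'C1')]
```

This is only the pairing step: real applications also check covariate balance after matching, and the technique cannot adjust for characteristics that were never measured.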

⚖️ Internal validity of quasi-experiments

⚖️ Where quasi-experiments stand

  • The excerpt notes that different research types are not equal in internal validity.
  • Internal validity: the extent to which the study design supports the conclusion that changes to the independent variable were responsible for observed changes in the dependent variable.
  • Experimental research usually has the highest internal validity because it uses manipulation and random assignment to control for extraneous variables and address directional and third-variable problems.
  • Correlational research has the lowest internal validity because changes in the dependent variable across conditions could be due to many factors, not just the independent variable.
  • Quasi-experiments fall between these two: they manipulate the independent variable (like experiments) but lack random assignment (like correlational studies).

🔄 Why random assignment matters

  • Random assignment addresses directional problems (which variable causes which) and third-variable problems (confounding variables).
  • Without random assignment, quasi-experiments must rely on careful comparison-group design and statistical techniques to approximate the control that randomization provides.
  • Example: If the average score on the dependent variable changes across conditions in an experiment, it is likely due to the independent variable; in a quasi-experiment, the same change could be due to pre-existing differences between groups.

6.4 Internal Validity

🧭 Overview

🧠 One-sentence thesis

Experimental research has the highest internal validity because it controls for directional and third-variable problems through manipulation and random assignment, while correlational research has the lowest and quasi-experimental research falls in between.

📌 Key points (3–5)

  • What internal validity measures: the extent to which study design supports the conclusion that changes to the independent variable caused observed changes in the dependent variable.
  • Hierarchy of internal validity: experimental (highest) > quasi-experimental (middle) > correlational (lowest).
  • Why experimental is strongest: addresses directional and third-variable problems through manipulation and random assignment to control extraneous variables.
  • Common confusion: quasi-experimental manipulates the independent variable like experiments do, but lacks random assignment, creating potential confounds.
  • Why correlational is weakest: changes in the dependent variable could be due to reversed causality or third variables, not just the independent variable.

🔬 Understanding internal validity

🔬 What internal validity means

Internal validity: the extent to which the study design supports the conclusion that changes to the independent variable were responsible for the observed changes in the dependent variable.

  • It is about causal confidence—can you trust that X caused Y?
  • Not just "did something change," but "did the thing you manipulated cause the change?"
  • The excerpt emphasizes that different research designs offer different levels of this confidence.

🎯 Why it matters

  • Determines how confidently you can claim causation.
  • Affects interpretation of results—same data pattern means different things depending on design.
  • Guides choice of research method based on what conclusions you need to draw.

📊 Comparing research types by internal validity

🥇 Experimental research (highest)

Why it ranks highest:

  • Manipulation: researcher controls the independent variable.
  • Random assignment: participants randomly placed in conditions, controlling for extraneous variables.
  • Result: if the average score on the dependent variable changes across conditions, it is likely due to the independent variable.

What it solves:

  • Directional problems (which came first?)
  • Third-variable problems (is something else causing both?)

🥈 Quasi-experimental research (middle)

What it shares with experiments:

  • Manipulates the independent variable.
  • Tests causal hypotheses.

Why it ranks lower:

  • Lacks random assignment: participants not randomly placed in groups.
  • Selection is by participants, researcher, or both—not random.
  • Lacks experimental control: creates potential problems even with manipulation.

The trade-off:

  • Most common approach in social sciences research.
  • Used when random assignment is not possible.

🥉 Correlational research (lowest)

Why it ranks lowest:

  • If the average score on the dependent variable changes across conditions, it could be the independent variable.
  • But there could be other reasons:
    • Direction of causality might be reversed.
    • A third variable might be causing differences in both independent and dependent variables.

Don't confuse: correlation with causation—even strong relationships don't prove one thing caused the other.

🔍 Detailed example: the fire hall study

🚒 The scenario

  • Researcher finds two similar fire halls.
  • Creates a PTSD awareness program.
  • Implements program at one fire hall (treatment), not the other (control).
  • Finds lower PTSD levels at treatment fire hall.

✅ What the design got right

  • No directional problem: researcher did not choose which fire hall got the program based on existing PTSD levels.
  • The independent variable (program) came before measuring the dependent variable (PTSD levels).

❌ What the design got wrong

  • No random assignment: firefighters were not randomly assigned to fire halls.
  • Potential confound: firefighters at the treatment fire hall might differ from those at the control fire hall.
  • Alternative explanations: differences in the firefighters themselves, their jobs, their superiors, etc., could be responsible for lower PTSD—not the program.

📝 The lesson

This illustrates why quasi-experimental research has moderate internal validity: manipulation without random assignment leaves room for alternative explanations.

🎓 Practical implications

🎓 Choosing a design

| Research type | When to use | Internal validity trade-off |
| --- | --- | --- |
| Experimental | When random assignment is possible | Highest confidence in causation |
| Quasi-experimental | When random assignment is not possible | Moderate confidence; most common in social sciences |
| Correlational | When manipulation is not possible or ethical | Lowest confidence; can only show relationships |

🎓 Interpreting results

  • Same pattern of results means different things depending on design.
  • Experimental: likely causal.
  • Quasi-experimental: possibly causal, but check for confounds.
  • Correlational: relationship exists, but causation unclear.

7.1 Sampling

🧭 Overview

🧠 One-sentence thesis

Sampling—the process of selecting a subset of observations from a population—is an unavoidable and critical choice in research that shapes the conclusions researchers can draw, especially when populations are heterogeneous.

📌 Key points (3–5)

  • Sampling is mandatory: researchers cannot gather all data from all sources at all places and times, so they must choose whom or what to study.
  • Homogeneous vs heterogeneous: sampling matters most when units of analysis differ in characteristics; if everyone is the same on the characteristic of interest, any sample will do.
  • Population vs sample: the population is the full group of interest; the sample is the subgroup actually studied—they are rarely the same.
  • Common confusion: students often assume the sample and population are identical, but resource and access limits mean researchers almost never study the entire population.
  • Why it matters: how and whom you sample determines what conclusions you can draw and whether you can generalize or make theoretical contributions.

🎯 What sampling is and why it is necessary

🎯 Definition and purpose

Sampling is the process of selecting observations that will be analyzed for research purposes.

  • In other words: choosing a subset of your group of interest and drawing conclusions from that subset.
  • The excerpt emphasizes that sampling is not optional—the question is not if you will sample, but how you will sample.
  • Why sampling is unavoidable: researchers cannot gather all data from all sources in all places at all times due to practical limits.

🔧 What determines how you sample

  • The answer to "how will I sample?" depends on:
    • The methods you use.
    • The objectives of the study.
  • Sampling applies to people or objects (your units of analysis).

🧬 Homogeneous vs heterogeneous populations

🧬 When sampling matters most

  • Sampling becomes highly relevant when people or objects are heterogeneous (have different characteristics).
  • If the population is homogeneous (everyone is the same on the characteristic of study), any sample will do—everyone you sample would be identical on that trait.

🔍 Reflecting variability

  • When there is diversity or heterogeneity, the researcher must ensure the sample reflects that variability in the population.
  • Example: if studying income levels and the population has wide income variation, the sample should capture that range—not just one income group.
  • Don't confuse: homogeneity means "sameness on the characteristic of interest," not sameness in all respects.

🌍 Population versus sample

🌍 Definitions

Population: the cluster of people, events, things, or other phenomena in which you are most interested; often the "who" or "what" you want to say something about at the end of your study.

Sample: the cluster of people or events from or about which you will actually gather data.

  • The excerpt stresses: it is quite rare for a researcher to gather data from the entire population of interest.
  • Populations may be large (e.g., "the Canadian people") but are typically more focused (e.g., "Canadian adults over 18" or "Canadian citizens or legal residents").

💰 Why researchers sample instead of studying the whole population

  • Resource limits: money and resources constrain sampling.
  • Access limits: all members of a population may not be identifiable or reachable in a way that allows sampling.
  • As a result, researchers take a subgroup (sample) from the population and study that instead.

📊 The gap between population and sample

| Concept | What it is | Key point from excerpt |
| --- | --- | --- |
| Population | Full group of interest | What you want to say something about |
| Sample | Subgroup actually studied | What you actually gather data from |
| The gap | Population ≠ sample | "One of the most surprising and often frustrating lessons" for students |

  • The excerpt notes that while there are exceptions, more often than not a researcher's population and sample are not the same.
  • Example from the excerpt: a research question like "How do men's and women's college experiences differ and how are they similar?" would require data from all college students across all nations and all historical time periods—clearly impossible for one researcher.
  • Don't confuse: not being able to study the whole population does not mean giving up your research interest; it means making hard choices about sampling and being honest about those choices.

🎲 What sampling strategies allow

🎲 Two types of contributions

  • Some sampling strategies allow researchers to make claims about populations much larger than the actual sample with a fair amount of confidence.
  • Other sampling strategies are designed to allow researchers to make theoretical contributions rather than sweeping claims about large populations.
  • The excerpt promises to discuss both types later in the chapter (not included in this excerpt).

✅ Being honest about sampling choices

  • The excerpt emphasizes: researchers must be honest with themselves and their readers about the sampling choices they make.
  • This honesty is necessary because the sample shapes the sorts of conclusions that can be drawn.
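Drawing a simple random sample from a known population list is one concrete way to see the population/sample distinction (the names and sizes below are made up for illustration):

```python
import random

# Hypothetical population: 5,000 student IDs; resources allow studying only 200.
population = [f"student_{i}" for i in range(5000)]

random.seed(42)  # reproducible draw, for illustration only
sample = random.sample(population, k=200)  # sampling without replacement

print(len(sample))       # 200
print(len(set(sample)))  # 200: no student is drawn twice
```

The conclusions are then drawn from the 200 sampled students, while the claims a researcher hopes to make concern the full population of 5,000.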

7.2 Population versus Samples

🧭 Overview

🧠 One-sentence thesis

Researchers typically study a sample—a subgroup drawn from the population of interest—rather than the entire population, because of practical constraints like time, money, and accessibility.

📌 Key points (3–5)

  • Population vs. sample: the population is the entire group you want to understand; the sample is the subset you actually collect data from.
  • Why sampling is necessary: studying the whole population is usually impossible due to limited resources, large population size, or lack of access to all members.
  • When you can study the whole population: only when the population is small, accessible, willing to participate, or you have complete records.
  • Common confusion: students often expect the sample and population to be the same, but they rarely are; the key is being honest about the limitations this creates.
  • Sampling shapes conclusions: how and whom you sample determines what you can claim at the end of your study.

🎯 Defining population and sample

🎯 What is a population?

Population: the cluster of people, events, things, or other phenomena in which you are most interested; often the "who" or "what" you want to say something about at the end of your study.

  • Populations can be large (e.g., "the Canadian people") but are usually more focused.
  • Example: instead of "all Canadians," a study might specify "Canadian adults over 18" or "Canadian citizens or legal residents."
  • The population is your target group—the one you wish to draw conclusions about.

🎯 What is a sample?

Sample: the cluster of people or events from or about which you will actually gather data.

  • The sample is a subgroup of the population.
  • You collect data from the sample, not the entire population.
  • Example: if your population is "all college students," your sample might be 750 students from 13 colleges across Canada.

🔍 Key distinction: population ≠ sample

  • One of the most surprising lessons: there is almost always a difference between your population of interest and your study sample.
  • Exceptions exist, but more often than not, the two are not the same.
  • Don't confuse: the sample is not a failed attempt to reach everyone; it is a deliberate, practical choice.

💰 Why researchers sample instead of studying the whole population

💰 Practical constraints

  • Money and resources: studying everyone is expensive and time-consuming.
  • Identifiability: not all members of a population may be identifiable or reachable.
  • Example: if your research question is "How do men's and women's college experiences differ?" you cannot collect data from all college students across all nations and all historical time periods—it would take more than a lifetime.

💰 Large and diverse populations

  • With unlimited money and resources, you could potentially study the whole population.
  • In reality, researchers take a sample and study that instead.
  • The excerpt emphasizes: "Does not having the time or resources to gather data from every single person of interest mean having to give up your research interest? Absolutely not."

💰 What to do instead

  • Make hard choices about sampling.
  • Be honest with yourself and your readers about the study's limitations based on the sample you were able to collect.
  • Some sampling strategies allow you to make claims about much larger populations with confidence; others are designed for theoretical contributions rather than sweeping claims.

✅ When you can study the entire population

✅ Conditions for full population access

You can access every member of the population when:

  • The population is small.
  • The population is accessible.
  • Members are willing to participate.
  • You have access to relevant records.

✅ Example: university dean scenario

  • Population: final graduating scores for all students enrolled in the university's health sciences program, 2015–2019.
  • Research question: Is there a trend toward an average increase in final graduating scores over this period?
  • Why the whole population is feasible: the dean is only interested in her particular university and a specific program over a defined time period; she can easily access all the records.
  • This is a rare case where the population is small, accessible, and fully documented.

✅ Summary rule

We use sampling when the population is large and we simply do not have the time, financial support, and/or ability (e.g., lack of laboratory equipment) to reach the entire population.

📊 Examples of population vs. sample

📊 Comparison table

| Population | Sample | Methodology |
| --- | --- | --- |
| Resumes submitted to security firms in Canada for security guard positions | 120 resumes for security guard positions submitted to Canada's three largest security firms in 2019 (40 from each firm) | Non-obtrusive methods, content analysis |
| Canadian residents who tested positive for COVID-19 and were hospitalized, but now test negative | 300 Canadian residents who tested positive for COVID-19 and were hospitalized, but now test negative in British Columbia and Quebec | Quantitative research methods, likely survey methods |
| Undergraduate students currently enrolled at colleges across Canada | 750 undergraduate students from 13 colleges (one from each of the country's 10 provinces and 3 territories) | Quantitative research, likely survey methods |
| Individuals employed in management positions at firehalls in Nova Scotia | 30 managers from Nova Scotia's two largest firehalls (15 from each) | Qualitative research, likely interviews and/or focus groups |

📊 What the table shows

  • Each row shows a broad population and a practical, smaller sample drawn from it.
  • The sample is always a subset: fewer people, fewer locations, or a narrower time frame.
  • Different research methodologies (quantitative, qualitative, content analysis) are suited to different population-sample pairs.

🧠 Implications for research conclusions

🧠 How sampling shapes what you can claim

  • Sample selection matters: how you sample and whom you sample determines the sorts of conclusions you can draw.
  • Some sampling strategies allow you to make claims about populations much larger than your sample with confidence.
  • Other strategies are designed to make theoretical contributions rather than sweeping claims about large populations.

🧠 Being honest about limitations

  • Researchers must be honest about the limitations of their study based on the sample they were able to collect.
  • Don't confuse: a sample is not a "failure" to reach everyone; it is a deliberate choice with trade-offs.
  • The key is transparency: clearly state who was included, who was not, and what that means for your conclusions.

Probabilistic and Non-Probabilistic Sampling Techniques

7.3 Probabilistic and Non-Probabilistic Sampling Techniques

🧭 Overview

🧠 One-sentence thesis

Researchers choose between probabilistic sampling (which allows statistical generalization to a population) and non-probabilistic sampling (which strategically selects participants for in-depth understanding) based on their research questions, objectives, and whether they need formal representativeness or theoretical insight.

📌 Key points (3–5)

  • What distinguishes the two approaches: probabilistic sampling gives every element an equal, known chance of selection and enables generalization; non-probabilistic sampling strategically selects participants when the population is not uniform or when depth matters more than breadth.
  • When to use each: probabilistic techniques suit quantitative research with well-defined populations; non-probabilistic techniques suit qualitative research where certain participants are more valuable for advancing research objectives.
  • Core principle of probability sampling: random selection ensures every element has an equal chance, allowing researchers to estimate sampling error and generalize findings.
  • Common confusion: random sampling vs. random assignment—random sampling concerns who gets into the study (generalizability); random assignment concerns who gets which treatment (causality).
  • Practical trade-offs: probability samples require accessible sampling frames and more resources but yield statistical representativeness; non-probability samples are more flexible but cannot generalize to populations.

🎲 Probabilistic sampling fundamentals

🎯 What makes sampling probabilistic

Probabilistic sampling: sampling techniques for which a person's (or event's) likelihood of being selected for membership in the sample is known.

  • The key feature is that selection probability can be calculated.
  • Researchers aim to identify a representative sample—one that resembles the population in all ways important to the research.
  • Example: if gender differences matter to your study, your sample must reflect the gender distribution of the population.

🔑 Random selection principle

Two criteria must be met:

  1. Chance governs selection: no human judgment determines who gets in.
  2. Equal probability: every element has the same likelihood of being chosen.
  • This is a mathematical process, not just "picking randomly."
  • Random selection allows researchers to estimate sampling error: the degree to which the sample deviates from the population's characteristics.

📊 Sampling error explained

Sampling error: the statistical calculation of the difference between results from a sample and the actual parameters of a population.

Two sources:

  • Random error: due to chance alone; cannot be controlled.
  • Systematic error: bias in the selection process makes certain individuals more likely to be chosen.

Example: If every 5th entry on a school list is a private school and you sample every 5th school, you could end up with only private schools; that is systematic error.
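The two kinds of error can be seen in a small simulation. This is a minimal sketch, not from the textbook, assuming an invented population of exam-like scores: sample means scatter around the population mean by chance alone, and larger samples scatter less on average.

```python
import random
import statistics

# Illustrative demo of random sampling error (all numbers are invented):
# sample means deviate from the population mean purely by chance,
# and larger samples deviate less on average.
random.seed(7)
population = [random.gauss(70, 10) for _ in range(10_000)]  # e.g., exam scores
mu = statistics.mean(population)

def mean_abs_error(n, trials=200):
    """Average |sample mean - population mean| over repeated draws of size n."""
    errors = [abs(statistics.mean(random.sample(population, n)) - mu)
              for _ in range(trials)]
    return statistics.mean(errors)

small_n, large_n = mean_abs_error(25), mean_abs_error(400)
print(small_n, large_n)  # the size-400 samples deviate less on average
```

Because `random.sample` gives every element an equal chance, only random error remains here; systematic error would appear if the selection rule favored some elements over others.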

🎯 Why error matters

  • Using correct techniques minimizes sampling error.
  • Smaller error = stronger ability to say results reflect the population.
  • Larger samples generally reduce error, but there's a point of diminishing returns.
  • Research aims to benefit society, so results should reflect what we'd see in the real world.

🌍 Generalizability

Generalizability: the idea that a study's results will tell us something about a group larger than the sample from which the findings were generated.

  • This is the key feature distinguishing probability from non-probability samples.
  • Only achievable when all population elements have an equal chance of selection.
  • Requires a representative sample drawn through random selection.

🔢 Four probability sampling techniques

🎲 Simple random sampling

  • How it works: number every population element sequentially, then use a random number table or generator to select participants.
  • Requires: a complete sampling frame (list of every population element).
  • Drawback: tedious and time-consuming to implement.
  • When to use: when you have a manageable population list and need the purest form of random selection.
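The procedure above can be sketched in a few lines. This is a minimal illustration (the frame of 150 elements is invented, not from the text): number every element in the sampling frame, then let a random generator do the selecting.

```python
import random

# Simple random sampling sketch: a complete sampling frame is required,
# and chance alone (no human judgment) determines who is selected.
frame = [f"element_{i}" for i in range(1, 151)]  # complete list of 150 elements

random.seed(42)  # fixed seed only so this draw is reproducible
sample = random.sample(frame, k=25)  # every element has an equal, known chance

print(len(sample))  # 25 distinct elements
```

`random.sample` draws without replacement, so no element appears twice, which matches the idea of selecting distinct members from the frame.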

📏 Systematic sampling

  • How it works: select every kth element from your list, where k = population size ÷ desired sample size.
  • Start at a random number between 1 and k, then select every kth element after that.
  • Example: 150 law students, want 25 participants → k = 6. Start at random number 3, then select students 3, 9, 15, 21, etc.
  • Major caution—periodicity problem: if your list has a repeating pattern, you'll introduce bias.

Example of periodicity: Observing public space use over 28 days, want 4 observation days. If k = 7 and you start at day 2, you'll observe only Tuesdays—missing weekend vs. weekday variation.
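The law-student example above can be sketched directly; the fixed starting point of 3 follows the text (in practice it would be drawn at random between 1 and k).

```python
# Systematic sampling sketch: 150 students, desired sample of 25,
# so the interval k = 150 / 25 = 6.
population = list(range(1, 151))   # student numbers 1..150
k = len(population) // 25          # sampling interval k = 6
start = 3                          # in practice, chosen at random between 1 and k
sample = population[start - 1::k]  # students 3, 9, 15, 21, ...

# Periodicity caution: if the list repeats in a pattern whose period matches k
# (e.g., every 7th day is the same weekday), every selection hits that pattern.
print(len(sample), sample[:4])
```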

🎯 Stratified sampling

  • How it works: divide the population into relevant subgroups (strata), then randomly sample from each subgroup.
  • When to use: when a subgroup of interest is a small proportion of the population, or when you want to ensure representation of key categories.
  • Two approaches:
    • Proportional stratification: each subgroup's sample size matches its population proportion (use when population is homogeneous within strata).
    • Disproportional stratification: subgroups have different sampling fractions to enable between-group comparisons.

Example: Studying Canadian police officers' views on drug use. Stratify by gender and location (urban/rural) to ensure adequate representation for comparisons, even if some groups are small.
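A proportional version of the police-officer example can be sketched as follows; the four strata follow the text, but the stratum sizes (400/400/100/100) and the overall sample size of 100 are invented for illustration.

```python
import random

# Proportional stratified sampling sketch: divide the population into strata,
# then randomly sample from each in proportion to its share of the population.
random.seed(0)
strata = {
    "urban_female": [f"urban_female_{i}" for i in range(400)],
    "urban_male":   [f"urban_male_{i}" for i in range(400)],
    "rural_female": [f"rural_female_{i}" for i in range(100)],
    "rural_male":   [f"rural_male_{i}" for i in range(100)],
}

total = sum(len(members) for members in strata.values())  # 1,000 officers
sample_size = 100
sample = []
for name, members in strata.items():
    share = round(sample_size * len(members) / total)  # proportional allocation
    sample += random.sample(members, share)            # random draw within stratum

print(len(sample))  # 40 + 40 + 10 + 10 = 100
```

A disproportional version would simply replace the proportional `share` with fixed, equal counts per stratum so small groups are large enough for between-group comparisons.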

🗂️ Cluster sampling

  • How it works: sample groups (clusters) first, then sample elements within those clusters.
  • When to use: when you cannot access a complete list of population elements, but can list groups.
  • Key requirement: clusters should be heterogeneous (internally diverse) but similar to each other in distribution of characteristics.
  • Trade-off: more efficient but introduces more error (each stage has its own sampling error).

Example: Studying Canadian college instructors' workplace experiences. Can't get a list of all instructors, but can list all colleges. Randomly select colleges (clusters), then randomly select instructors within those colleges.

Multi-stage cluster sampling: when even cluster lists are unavailable; involves multiple levels of random selection.
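The two-stage college-instructor example can be sketched like this; the college names and roster sizes are made up. Note that the full instructor list is never needed, only the list of colleges and, later, the rosters of the selected colleges.

```python
import random

# Two-stage cluster sampling sketch: randomly select clusters (colleges),
# then randomly select elements (instructors) within the chosen clusters.
random.seed(1)
rosters = {f"c{c:02d}": [f"c{c:02d}_instructor_{i}" for i in range(50)]
           for c in range(20)}

# Stage 1: sample clusters from the list of colleges.
chosen = random.sample(sorted(rosters), k=5)

# Stage 2: sample elements within each chosen cluster.
sample = []
for college in chosen:
    sample += random.sample(rosters[college], k=10)

print(len(sample))  # 5 colleges x 10 instructors each = 50
```

Each stage contributes its own sampling error, which is why cluster sampling trades some precision for feasibility.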

📋 Summary table

| Sample type | Description | Requires frame? |
| --- | --- | --- |
| Simple random | Randomly select elements from complete list | Yes |
| Systematic | Select every kth element | Yes |
| Stratified | Create subgroups, then randomly select from each | Yes |
| Cluster | Randomly select groups, then elements within groups | Partially |

🎨 Non-probabilistic sampling fundamentals

🎯 What makes sampling non-probabilistic

Non-probabilistic sampling: sampling techniques for which a person's (or event's) likelihood of being selected is unknown.

  • We cannot know if the sample represents the larger population.
  • But: this doesn't mean samples are arbitrary—they're drawn with specific purposes in mind.
  • Goal is not representativeness but strategic selection for depth and insight.

📚 When to use non-probability sampling

At design/exploratory stages:

  • Pilot studies to test instruments.
  • Exploratory research to identify themes.

In full research projects:

  • Qualitative research seeking in-depth, idiographic understanding (not broad, nomothetic patterns).
  • Evaluation research describing specific small groups.
  • Theory development—seeking anomalous cases to expand, modify, or challenge existing theories.

🔍 Key distinction

  • Probability samples: aim for breadth and generalization.
  • Non-probability samples: aim for depth and theoretical insight.
  • Don't confuse: non-representative ≠ invalid. Different goals require different approaches.

🎨 Four non-probability sampling techniques

🎯 Purposive sampling

  • How it works: begin with specific perspectives you want to examine, then seek participants who cover that full range.
  • Two uses:
    1. Include participants representing a broad range of perspectives.
    2. Include only people meeting very narrow, specific criteria.

Example: Studying student satisfaction with college programs. Include students from all programs, genders, ages, employment statuses, delivery modes (online/face-to-face), and past/present enrollment.

❄️ Snowball sampling

  • How it works: start with one or two participants, then ask them to refer others; sample builds like a rolling snowball.
  • Also called chain referral sampling.
  • When to use: studying stigmatized groups or rare populations where trust is essential.
  • Advantage: previous participants vouch for researcher's trustworthiness, making new participants more comfortable.

Example: Researching a stigmatized behavior—initial participants help identify others who might be willing to participate, building a chain of referrals.

📊 Quota sampling

  • How it works: identify important categories, create subgroups, decide how many elements to include from each, then collect that number.
  • Similar to stratified sampling but without random selection within subgroups.
  • Strength: accounts for variation across study elements.
  • Limitation: does not yield statistically representative findings.

Example: Studying Emergency Services Management (ESM) student satisfaction. Interview 40 students: 20 from degree program, 20 from diploma program. Further divide by age (≤29 vs. ≥30) and gender. Findings won't represent all ESM students everywhere, but that's irrelevant if you only care about this specific program.
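The quota logic of the ESM example can be sketched as below (ignoring the further age and gender subdivisions for brevity). The pool of available volunteers is invented; the point is that quotas are filled in arrival order, with no random selection within subgroups.

```python
import random

# Quota sampling sketch: fix quotas per subgroup (20 degree, 20 diploma),
# then fill them from whoever is available, in the order they turn up.
random.seed(2)
volunteers = [(random.choice(["degree", "diploma"]), f"student_{i}")
              for i in range(200)]

quotas = {"degree": 20, "diploma": 20}
sample = []
for program, student in volunteers:   # take respondents as they come
    if quotas[program] > 0:
        sample.append((program, student))
        quotas[program] -= 1

print(len(sample))  # 40 once both quotas are filled
```

Swapping the arrival-order fill for a random draw within each subgroup would turn this into stratified sampling, which is exactly the distinction the text draws.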

🚶 Convenience sampling

  • How it works: collect data from whoever or whatever is most conveniently accessible.
  • Also called haphazard sampling.
  • When to use: exploratory research, journalism needing quick access.
  • Major caution: be very cautious about generalizing from convenience samples.

Example: News reporters interviewing people on the street—quick and easy, but not representative of any broader population.

⚖️ Comparing the two approaches

📊 Key differences table

| Aspect | Probability | Non-probability |
| --- | --- | --- |
| Selection | Random | Arbitrary or logical |
| Chance of selection | Fixed and known | Unknown |
| Research type | Conclusive | Exploratory |
| Inference | Statistical | Analytical |
| Hypothesis | Tested | Developed |
| Primary use | Quantitative | Qualitative (and some quantitative) |

🔄 Random sampling vs. random assignment

Don't confuse these:

  • Random sampling: concerns who gets into the study; affects generalizability to the population.
  • Random assignment: concerns who gets which treatment within the study; affects ability to infer causality.

A study can have:

  • Both (ideal for generalizable causal claims).
  • Random assignment only (causal claims about this sample).
  • Random sampling only (descriptive claims about population).
  • Neither (exploratory, limited generalizability and causality).
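The distinction can be made concrete in code. This sketch (population and group sizes are illustrative) performs both steps: random sampling decides who enters the study, random assignment decides which condition each participant receives.

```python
import random

# Random SAMPLING: who gets into the study (affects generalizability).
# Random ASSIGNMENT: who gets which treatment (affects causal inference).
random.seed(5)
population = [f"person_{i}" for i in range(1, 1001)]

participants = random.sample(population, k=40)  # random sampling

assigned = participants[:]
random.shuffle(assigned)                        # random assignment
treatment, control = assigned[:20], assigned[20:]

print(len(treatment), len(control))  # 20 and 20
```

A study with only the second step (e.g., volunteers randomly split into groups) can support causal claims about its sample but not population-level generalization.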

⚠️ Critical considerations and cautions

🤔 Questions to ask about any sample

When reading research, always ask:

  1. Who was sampled? What population do they represent?
  2. How were they sampled? What technique was used?
  3. For what purpose? Does the sampling strategy match the research goals?

🌍 The WEIRD problem

  • Many behavioral science studies draw samples exclusively from WEIRD societies: Western, Educated, Industrialized, Rich, Democratic.
  • Even narrower: many use only college students from psychology courses.
  • Concern: sweeping claims about "human nature" based on a tiny, unrepresentative slice of humanity.
  • Implication: findings about fairness, cooperation, perception, trust, etc., may not apply beyond WEIRD populations.

📚 Academic sampling bias

  • A study of top psychology journals found that 68% of participants were from the United States.
  • Two-thirds of US-based studies in major journals used only American undergraduates in psychology courses.
  • Question this raises: What do we actually learn, and about whom?

💡 Being a responsible research consumer

  • Don't just focus on findings—examine procedures.
  • Ask whether the sample matches the claims being made.
  • Recognize that interesting conclusions don't validate poor sampling.
  • Understand that different research goals legitimately require different sampling approaches.


Who Sampled, How Sampled, and for What Purpose?

7.4 Who Sampled, How Sampled, and for What Purpose?

🧭 Overview

🧠 One-sentence thesis

Researchers must critically evaluate who participates in studies—especially the overreliance on WEIRD (Western, educated, industrialized, rich, and democratic) populations like college students—because sample characteristics fundamentally limit what claims can be made about human behavior in general.

📌 Key points (3–5)

  • The WEIRD problem: Over two-thirds of psychology research samples come from the United States, with many studies relying entirely on American undergraduates, yet researchers often make sweeping claims about "human nature."
  • Sample quality vs. sampling method: A perfect sampling method means nothing if only a handful of selected people actually participate; quality depends on who was actually obtained, not just who was intended.
  • Common confusion: Distinguish between the population sampled and the population researchers claim their findings apply to—researchers may (often unintentionally) generalize beyond their actual sample.
  • Sampling bias: When elements selected don't represent the larger population (e.g., online polls exclude those without internet access).
  • Three critical questions: Always ask who was sampled, how they were sampled, and for what purpose they were sampled.

🎓 The college student problem

🎓 Overreliance on campus samples

  • Social science researchers at universities have easy access to student participants—a "luxury" that comes at a cost.
  • Study findings:
    • 68% of participants in top psychology journals were from U.S. samples
    • Two-thirds of U.S.-based work in the Journal of Personality and Social Psychology used samples made up entirely of American undergraduates in psychology courses
  • This raises the question: what do we actually learn, and about whom do we learn it?

🌍 The WEIRD critique

WEIRD: Western, educated, industrialized, rich, and democratic societies

  • Henrich, Heine, and Norenzayan (2010) argued that behavioral scientists make sweeping claims about "human nature" based on samples drawn only from WEIRD societies.
  • Many "robust findings" about fairness, cooperation, visual perception, and trust are based on studies that:
    • Excluded participants from outside the United States
    • Sometimes excluded anyone outside the college classroom
  • Don't confuse: "Human behavior" with "U.S. resident behavior" or "U.S. undergraduate behavior"—the samples are much narrower than the claims being made.

⚠️ Sampling bias and quality issues

⚠️ What is sampling bias

Sampling bias: when elements selected for inclusion in a study do not represent the larger population from which they were drawn

  • Example from the excerpt: An online newspaper poll asking for public opinion will not represent "the public" because it excludes:
    • Those without computer or internet access
    • Those who don't read that newspaper's website
    • Those without time or interest to respond

🔍 Hidden relevance problem

  • A sample may appear representative in all respects the researcher thinks are relevant.
  • But there may be relevant aspects the researcher didn't consider when drawing the sample.
  • No magic rule exists to know when you can fully trust reported results.

📋 Three guidelines for evaluating samples

📋 Guideline 1: Actual sample vs. intended sample

  • Sample quality is determined only by the sample actually obtained, not by the sampling method itself.
  • Example scenario: A researcher correctly uses random selection to administer a survey, but only a handful of selected people actually respond.
    • The researcher must be very careful about claims made from those findings.
    • The method was sound, but the result may not be representative.

📋 Guideline 2: Watch for claim inflation

  • Researchers may talk about implications as though findings apply to groups other than the population actually sampled.
  • The excerpt notes this tendency is "usually quite innocent" and "all too tempting."
  • It's likely unintentional, but consumers of research have a responsibility to notice this "bait and switch."
  • Always compare: Who was actually sampled vs. who the researcher claims the findings apply to.

📋 Guideline 3: Value of theoretical comparisons

  • A sample that allows comparisons of theoretically important concepts or variables is better than one that does not.
  • Even in a non-representative sample, we can learn about the strength of social theories by comparing relevant aspects of social processes.
  • The purpose of sampling matters: exploratory comparison can be valuable even when generalization is limited.

🎯 Core questions to ask

🎯 The three essential questions

At their core, questions about sample quality should address:

| Question | Why it matters |
| --- | --- |
| Who has been sampled? | Identifies the actual population and any exclusions |
| How were they sampled? | Reveals potential bias in selection method |
| For what purpose were they sampled? | Clarifies what claims are appropriate |

  • Being able to answer these questions helps you:
    • Better understand research results
    • More responsibly read and evaluate research claims

🎯 Responsible consumption of research

  • Don't overlook sampling procedures just because the conclusion is interesting.
  • It's easy to focus only on findings when busy, but the "really interesting stuff" in conclusions depends entirely on sound procedures.
  • Now equipped with knowledge of sampling variety, you can ask important questions and be a more responsible consumer of research.

Survey Research: What Is It and When Should It Be Used?

8.1 Survey Research: What Is It and When Should It Be Used?

🧭 Overview

🧠 One-sentence thesis

Survey research is a quantitative method best suited for describing or explaining features of large groups by posing predetermined questions to an entire population or sample.

📌 Key points (3–5)

  • What survey research is: a quantitative method using predetermined questions posed to groups or samples.
  • When to use it: especially useful for describing/explaining features of very large groups or for quickly gathering general details about a population.
  • Strategic use: surveys can prepare for deeper study by helping identify specific individuals or locations for follow-up research.
  • Common confusion: survey vs. questionnaire—surveys aggregate data for statistical analysis benefiting a group; questionnaires collect information for one individual without aggregation.
  • Method fit: like all data collection methods, survey research answers some research questions better than others.

📋 What survey research is and when to use it

📋 Definition and core characteristics

Survey research: a quantitative method whereby a researcher poses a set of predetermined questions to an entire group or sample of individuals.

  • The questions are predetermined—decided in advance, not improvised during data collection.
  • Applied to groups or samples, not just individuals.
  • Quantitative in nature, designed for systematic data collection.

🎯 When survey research works best

Survey research is particularly effective in specific scenarios:

  • Large-scale description: when you need to describe or explain features of very large groups.
  • Quick reconnaissance: when you want general details about a population quickly.
  • Preparatory research: when planning a more intensive study and need to identify where to focus next.

Example: A researcher might survey a broad population first, then use those results to select specific individuals for in-depth interviews.

🔍 Identifying follow-up opportunities

  • Surveys can help pinpoint specific individuals worth studying further.
  • They can identify locations from which to collect additional data.
  • This makes surveys useful as a first step before time-intensive methods like in-depth interviews or field research.

🔄 Survey vs. questionnaire: key distinctions

🔄 Core differences

The excerpt emphasizes that surveys and questionnaires both use questions but differ fundamentally:

| Aspect | Questionnaire | Survey |
| --- | --- | --- |
| Purpose | Collects information for one individual's benefit | Gathers information for statistical analysis benefiting a group |
| Data treatment | Does NOT aggregate data for statistical analysis | Aggregates responses to draw conclusions |
| Nature | A set of written questions | A process of collecting and analyzing data |

📝 What each term means

  • Questionnaire: the set of questions used to gather information; if data is solely for the respondent and not aggregated, it's a questionnaire.
  • Survey: the entire process—both collecting and analyzing data; if collected data will be aggregated for group-level conclusions, it's a survey.

⚠️ Don't confuse

  • A questionnaire is a tool (the questions themselves).
  • A survey is a method (the whole research process).
  • The key distinction: aggregation for statistical analysis makes it a survey; individual-only benefit without aggregation makes it a questionnaire.

🛠️ Practical considerations

🛠️ Design complexity

  • The excerpt notes that "constructing a good survey requires more technique than meets the eye."
  • Survey design demands thoughtful planning and often many rounds of revision.
  • The effort is worthwhile given the method's benefits.

🎯 Method appropriateness

  • Survey research is "better suited to answering some kinds of research question than others."
  • Researchers must consider whether their specific research question fits the survey method's strengths.
  • Not every question can or should be answered through surveys.

Understanding the Difference between a Survey and a Questionnaire

8.2 Understanding the Difference between a Survey and a Questionnaire

🧭 Overview

🧠 One-sentence thesis

The key distinction between a survey and a questionnaire lies in whether the collected data is aggregated for statistical analysis to benefit a group (survey) or used solely for the benefit of a single individual without aggregation (questionnaire).

📌 Key points (3–5)

  • What each is: A questionnaire is a set of written questions for one individual's benefit; a survey is a process of gathering, aggregating, and analyzing information for a group.
  • The aggregation test: If data is aggregated for statistical analysis, it is a survey; if not aggregated and solely for the respondent, it is a questionnaire.
  • Common confusion: The same set of questions can shift from questionnaire to survey if the data is later aggregated—sometimes without the participant's knowledge.
  • Process vs. instrument: A survey is a process (collecting and analyzing); a questionnaire is an instrument (the set of questions itself).

🔍 Core definitions

📝 What a questionnaire is

A questionnaire is a set of written questions used for collecting information for the benefit of one single individual.

  • It is an instrument of data collection—the physical or digital form containing the questions.
  • The data collected is not aggregated for statistical analysis.
  • The information serves only the respondent or the immediate purpose, not a broader research goal.
  • Example: An organization asks an individual to fill out a form about their preferences; the answers are used only for that person's account setup.

📊 What a survey is

A survey is a process of gathering information for statistical analysis to the benefit of a group of individuals (a research method).

  • It is a process, not just a set of questions—it includes collection, recording, and analysis.
  • Survey responses are aggregated to draw conclusions about a larger group.
  • The purpose is to describe or explain features of a population, not to serve one individual.
  • Example: A researcher collects responses from many people, combines the data, and reports trends or patterns.

🔄 How questionnaires can become surveys

🔄 The transformation through aggregation

  • The excerpt emphasizes that the same data can change category depending on how it is used.
  • If questionnaire data is later aggregated and analyzed statistically, it becomes survey data.
  • This can happen without the participant's knowledge.

🏦 Bank loan application example

  • A bank asks individuals to fill in loan applications (questionnaire data—each form serves one person's loan process).
  • Later, the bank aggregates all 2017 loan application data and presents it to shareholders at the 2018 annual meeting.
  • At that point, the questionnaire data has been turned into survey data because it is now aggregated and used for statistical purposes.

Don't confuse: The original intent matters less than the final use—if data is aggregated, it functions as survey data even if it started as a questionnaire.

📋 Side-by-side comparison

| Basis | Survey | Questionnaire |
| --- | --- | --- |
| Meaning | Collection, recording, and analysis of information on a subject, area, or group | A form containing a list of ready-made questions for obtaining statistical information |
| What it is | Process of collecting and analyzing data | Instrument of data collection |
| Time | Time-consuming process | Fast process |
| Use | Conducted on the target audience (implies analysis and aggregation) | Distributed or delivered to respondents (implies individual use) |

🧩 Key distinctions to remember

  • Survey = process + aggregation + analysis; questionnaire = tool + individual benefit.
  • A questionnaire can exist without becoming a survey, but a survey always uses some form of questionnaire (or similar instrument) to collect data.
  • The excerpt's table highlights that surveys are more time-consuming because they involve the full cycle of data processing, not just distribution.

Pros and Cons of Survey Research

8.3 Pros and Cons of Survey Research

🧭 Overview

🧠 One-sentence thesis

Survey research offers standardized, versatile data collection that is cost-effective and generalizable, but it sacrifices flexibility and may struggle with validity when questions must remain broad and general.

📌 Key points (3–5)

  • Standardization as a strength: surveys pose identical questions to all participants, which increases reliability compared to less-structured methods like qualitative interviews.
  • Versatility across professions: surveys are used by lawyers, social organizations, businesses, governments, and media to serve diverse purposes from jury selection to product marketing.
  • Inflexibility as a weakness: once a survey is distributed, researchers cannot revise confusing questions or adapt to emerging insights, unlike in-depth interviews.
  • Validity trade-off: standardized questions must be general enough for broad audiences, which limits depth and may reduce validity compared to more comprehensive methods.
  • Common confusion: reliability vs. validity—surveys can be reliable (consistent) yet still suffer from validity problems if questions are too general or poorly phrased.

✅ Strengths of survey research

💰 Cost-effectiveness

  • Surveys are listed as cost-effective, meaning they allow researchers to collect data from many people without the high expense of individualized methods.
  • The excerpt does not detail specific cost comparisons, but implies surveys are economical relative to alternatives.

🌍 Generalizability

  • Because surveys can reach large, representative samples, findings can often be extended to broader populations.
  • This is a key advantage when researchers want to make claims beyond their immediate participants.

🔁 Reliability through standardization

Surveys are standardized; the same questions, phrased in exactly the same way, are posed to participants.

  • Why it matters: standardization ensures every respondent encounters identical wording, reducing variation caused by how questions are asked.
  • Comparison to other methods: qualitative interviewing does not offer the same consistency, so surveys have an edge in producing reliable (repeatable) results.
  • Don't confuse: reliability does not guarantee quality—a poorly-phrased question can still cause respondents to interpret meanings differently, which reduces that question's reliability.

🔧 Versatility across contexts

  • Surveys are used by:
    • Lawyers: to select juries
    • Social service and community organizations (churches, clubs, fundraising groups, activist groups): to evaluate program effectiveness
    • Businesses: to learn how to market products
    • Governments: to understand community opinions and needs
    • Politicians and media outlets: to understand constituencies
  • Implication: understanding survey construction and administration is a useful skill across many jobs and sectors.

⚠️ Weaknesses of survey research

🔒 Inflexibility once deployed

  • The problem: the researcher is generally stuck with a single instrument (the questionnaire) after distribution begins.
  • Example scenario: You mail a survey to 1,000 people, then discover as answers come in that a question's phrasing confuses respondents; at that stage, it is too late to revise the question on the surveys already distributed.
  • Contrast with in-depth interviews: an interviewer can provide further explanation if a respondent is confused, and can tweak questions as they learn how respondents understand them.
  • How to mitigate: conducting a pilot study first should help avoid such situations by catching confusing wording before full deployment.

📉 Validity challenges

Survey questions are standardized; thus, it can be difficult to ask anything other than very general questions that a broad range of people will understand.

  • Why this reduces validity: because questions must be general enough for diverse audiences, surveys may not capture the depth or nuance of the topic being studied.
  • Comparison: methods that allow more comprehensive examination of a topic (e.g., qualitative methods) may yield more valid results.
  • Don't confuse validity with reliability: a survey can be reliable (consistent across respondents) but still lack validity if its general questions fail to accurately measure the underlying concept.

📊 Summary of pros and cons

| Dimension | Strengths | Weaknesses |
| --- | --- | --- |
| Consistency | Standardization increases reliability | Poorly phrased questions can still reduce reliability |
| Flexibility | Versatile across many professions and purposes | Inflexible once deployed; cannot adapt questions mid-study |
| Scope | Cost-effective and generalizable to large populations | Must use general questions, which may limit validity |
| Adaptation | N/A | Cannot provide clarification or adjust wording like interviews can |

Types of Surveys

8.4 Types of Surveys

🧭 Overview

🧠 One-sentence thesis

Surveys are classified by timing—cross-sectional (one point in time) versus longitudinal (repeated over time)—and longitudinal surveys include trend, panel, cohort, and retrospective types, each trading off the ability to track change against time and cost.

📌 Key points (3–5)

  • Cross-sectional vs longitudinal: cross-sectional surveys are administered once; longitudinal surveys observe over extended periods to capture changes over time.
  • Three main longitudinal types: trend (same questions, different people), panel (same people repeatedly), cohort (same category, different people).
  • Retrospective surveys: administered once but ask about the past, bridging cross-sectional and longitudinal approaches.
  • Common confusion: panel vs cohort—panel tracks the exact same individuals; cohort tracks people who meet the same category criteria but not necessarily the same individuals.
  • Trade-offs: longitudinal surveys track change better but require more time and resources; retrospective surveys save time but rely on potentially faulty memory.

📐 Cross-sectional vs longitudinal surveys

📐 What distinguishes them

  • Cross-sectional: administered at one point in time; provides a snapshot.
  • Longitudinal: administered over an extended period; enables observation of changes over time.
  • The excerpt notes that longitudinal surveys overcome "occasional problematic aspects" of cross-sectional surveys by tracking change.

⏳ Why timing matters

  • If behavior or phenomena change due to world events or aging, longitudinal surveys capture those changes.
  • Cross-sectional surveys cannot distinguish whether differences are due to time, cohort effects, or other factors.

🔄 Three types of longitudinal surveys

📊 Trend surveys

Trend surveys: researchers are interested in how people's inclinations change over time.

  • How they work: the same questions are administered to people at different points in time.
  • Key feature: different people may participate each time.
  • Example: Gallup opinion polls ask the same questions repeatedly to learn how public opinion changes.

👥 Panel surveys

Panel surveys: the same people participate each time the survey is administered.

  • How they work: researchers track the exact same individuals over multiple administrations.
  • Challenges: difficult and costly—requires tracking where people live, when they move, and when they die.
  • Benefit: when resources are available, results can be "quite powerful" because individual-level change is directly observed.
  • Don't confuse: panel ≠ cohort; panel requires the same individuals, not just people in the same category.

🎓 Cohort surveys

Cohort surveys: a researcher identifies some category of people that are of interest and then regularly surveys people who fall into that category.

  • How they work: the same people do not necessarily participate from year to year, but all participants must meet the categorical criteria.
  • Common cohorts: people of particular generations, graduating classes, people who began work in an industry at the same time, or those with specific shared life experiences.
  • Example: surveying each year's graduating class of an organization—different individuals, but all meet the "graduating class" criterion.

📋 Comparison table

| Type | Same people? | Same questions? | Key feature |
| --- | --- | --- | --- |
| Trend | No | Yes | Tracks how inclinations change over time in the population |
| Panel | Yes | Usually yes | Tracks the exact same individuals; costly but powerful |
| Cohort | No (but same category) | Usually yes | Tracks people who share a defining characteristic or experience |

🔙 Retrospective surveys

🔙 What they are

Retrospective surveys: similar to other longitudinal studies in that they deal with changes over time but, like a cross-sectional study, they are administered only once.

  • How they work: participants are asked to report events, behaviors, beliefs, or experiences from the past.
  • Position: fall somewhere in between cross-sectional and longitudinal surveys.

⚖️ Trade-offs

  • Benefit: gather longitudinal-like data without the time or expense of repeated administrations.
  • Cost: people's recollections of their pasts may be faulty.
  • Example: asking respondents to recall their voting behavior over the past decade in a single survey, rather than surveying them every election year.

💰 Practical considerations

💰 Time and cost

  • Longitudinal surveys are "certainly preferable" for tracking changes over time.
  • However, the time and cost required can be "prohibitive."
  • Researchers must weigh the benefit of tracking change against available resources.

🔬 Research design context

  • The excerpt notes that cross-sectional vs longitudinal distinctions are not unique to surveys—other data collection methods can also be cross-sectional or longitudinal.
  • These are "really issues of research design."
  • The terms are most commonly used by survey researchers to describe the type of survey administered.

Administration of Surveys

8.5 Administration of Surveys

🧭 Overview

🧠 One-sentence thesis

The choice of survey delivery method—whether self-administered (hard copy, mail, online) or interview-based—depends on resources, participant characteristics, and practical trade-offs between cost, speed, and response rates.

📌 Key points (3–5)

  • Self-administered questionnaires: participants receive written questions and respond on their own, delivered in person, by mail, or online.
  • Delivery trade-offs: online surveys are faster and cheaper but require technology access; mailed surveys reach more people but have lower return rates.
  • Improving response rates: advance notice, follow-up reminders, and incentives (gift cards, prize draws) help increase completion.
  • Common confusion: don't assume one method is always best—the appropriate mechanism depends on your population's resources and characteristics.
  • Interview surveys exist but differ: when researchers pose questions directly, different guidelines apply (covered separately in qualitative methods).

📬 Self-administered questionnaires

✍️ What self-administration means

Self-administered questionnaires: a research participant is given a set of questions, in writing, to which he or she is asked to respond.

  • The participant reads and answers questions independently, without a researcher asking them aloud.
  • This contrasts with interview-based surveys where the researcher poses questions directly.

📄 Hard copy delivery options

In-person delivery:

  • Researchers give surveys directly to participants and may collect them immediately or arrange pickup later.
  • Door-to-door delivery involves visiting each sample member's home.
  • Example: An organization hands out surveys at an event and waits while people complete them.

Mail delivery:

  • Surveys are sent through regular postal mail when in-person visits are impractical.
  • Less ideal because participants are more likely to ignore or forget surveys without a researcher present.
  • The excerpt notes: "imagine how much less likely you would be to return a survey that did not come with the researcher standing on your doorstep waiting to take it from you."

📧 Online survey administration

💻 How online surveys work

  • Researchers use subscription services or free tools to deliver surveys electronically.
  • The excerpt mentions SurveyMonkey as an example offering both free and paid options.
  • Results can be exported in formats readable by data analysis programs (SPSS, Systat, Excel), eliminating manual data entry.

⚡ Advantages of online delivery

  • Speed: faster than door-to-door or waiting for mail returns.
  • Cost: relatively cheap compared to other methods.
  • Data processing: automatic formatting for analysis software saves time.
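The automatic-export advantage means responses can be loaded directly into analysis tools. A minimal sketch of ingesting an exported CSV without manual data entry (the column names and file contents are hypothetical illustrations, not from the excerpt):

```python
import csv
import io

# Hypothetical CSV export from an online survey tool; in practice you would
# open the real file that the service (e.g., SurveyMonkey) lets you download.
exported = io.StringIO(
    "respondent_id,q1_age,q2_satisfaction\n"
    "1,34,agree\n"
    "2,27,disagree\n"
)

# Each row becomes a dict keyed by column name: no manual re-keying of answers.
rows = list(csv.DictReader(exported))
print(len(rows), rows[0]["q2_satisfaction"])  # prints: 2 agree
```

The same file would open directly in Excel or import into SPSS, which is exactly the time-saving the excerpt describes.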

⚠️ Technology access limitation

  • Don't assume universal access: "can you be certain that every person in your sample will have the necessary computer hardware, software, and internet access?"
  • This is a critical consideration when choosing online delivery—understanding your population's resources is essential.

📈 Improving response rates

🔔 Follow-up strategies for mailed surveys

Advance notice:

  • Researchers notify respondents before the survey arrives to prepare them mentally.
  • Gets people "thinking about and preparing to complete it."

Post-survey follow-up:

  • Contact sample members a few weeks after sending the survey.
  • Serves dual purposes: remind non-responders to complete it and thank those who already returned it.
  • The excerpt states: "Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys' return rates."

🎁 Incentives across delivery methods

  • Many suggestions for hard copy surveys also apply to online surveys.
  • Online incentives differ from in-person or mailed ones but are still effective.
  • Examples: gift cards or entry into a prize draw.
  • Don't confuse: online delivery doesn't mean you can't offer incentives—you just need different types.

⚖️ Choosing the right delivery method

🔄 Trade-offs between methods

| Method | Advantages | Disadvantages |
| --- | --- | --- |
| Online | Faster, cheaper, automatic data formatting | Requires technology access; may exclude some populations |
| Mail | Reaches entire sample, no technology barrier | More likely to be lost or ignored; lower return rates |
| In-person | Higher completion rates, immediate collection | Time-consuming, expensive, less common now |

🎯 Key decision factors

The excerpt emphasizes that the best mechanism depends on:

  • Your resources: budget and time available.
  • Participants' resources: technology access, mailing addresses, availability.
  • Time constraints: how long you can wait for responses.

Critical principle:

"Understanding the characteristics of your study's population is key to identifying the appropriate mechanism for delivering your survey."

🚫 Interview surveys (brief mention)

  • Some surveys involve researchers posing questions directly to respondents.
  • These are a form of interview with different guidelines and concerns.
  • The excerpt notes these will be covered separately in qualitative methods chapters, as personal interaction requires distinct considerations.

Designing Effective Survey Questions

8.6 Designing Effective Survey Questions

🧭 Overview

🧠 One-sentence thesis

Effective survey questions must be clear, relevant, unbiased, and carefully worded to yield usable data while respecting respondents' time and avoiding confusion or socially desirable answers.

📌 Key points (3–5)

  • Start with clarity of purpose: identify exactly what you need to know before writing questions, consulting literature and brainstorming all possible factors.
  • Balance completeness and burden: include all important topics but avoid the "everything-but-the-kitchen-sink" approach that wastes respondents' time.
  • Ensure relevance: every question must be relevant to every respondent—they must have knowledge and experience with what you're asking.
  • Common confusion: double-barreled questions: asking two questions in one (e.g., "Do you enjoy biking and hiking?") makes responses impossible to interpret; always separate distinct questions.
  • Avoid bias and ambiguity: loaded terms, negative wording, jargon, and social desirability all distort responses; pre-test questions and imagine yourself answering them.

📝 Planning your questions

🎯 Identify what you need to know

  • Before writing any questions, determine exactly what you wish to learn.
  • The excerpt stresses how easy it is to forget important questions during design.
  • Example: if studying the transition from high school to college, you must include questions about all possible factors that could contribute to success or difficulty.
  • Consult the literature, brainstorm independently, and talk with others to identify relevant factors.
  • Rank your questions by importance if time or space limits prevent including everything.

⚖️ Balance thoroughness and respect

  • Include questions on all topics important to your research question.
  • But do not take an "everything-but-the-kitchen-sink approach" by uncritically including every possible question.
  • Doing so puts an unnecessary burden on respondents.
  • Show respect by only asking questions you view as important.
  • The best way to show appreciation for respondents' time is to not waste it.

✍️ Writing clear and direct questions

📏 Keep questions clear and succinct

  • Questions should be as clear and to the point as possible.
  • A survey is a technical instrument, not a creative writing exercise.
  • Write in a way that is direct and succinct.
  • Avoid overly wordy questions—this shows gratitude for respondents' time.

🎯 Ensure relevance to every respondent

Relevance means two things:

  1. Respondents have knowledge about your survey topic.
  2. Respondents have experience with the events, behaviors, or feelings you are asking them to report.

Example: in a survey about the transition to college, respondents must understand what "transition to college" means and have actually experienced it themselves.

🔍 Use filter questions when needed

Filter question: a question designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample.

  • Use filter questions when only a portion of respondents will have experience with certain matters.
  • This ensures you don't ask irrelevant questions to people who cannot meaningfully answer them.
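The branching logic of a filter question can be sketched as follows (the question wording and the filter criterion are illustrative assumptions, not from the excerpt):

```python
def questions_for(attended_college: bool) -> list[str]:
    """Return the question list a respondent sees.

    The filter question (here, whether the respondent attended college)
    identifies the subset who receive the follow-up questions at all.
    """
    questions = ["How old are you?"]  # relevant to the entire sample
    if attended_college:  # subset identified by the filter question
        questions.append("How difficult was your transition to college?")
    return questions

print(questions_for(True))   # both questions
print(questions_for(False))  # only the question relevant to everyone
```

Respondents outside the subset never see questions they cannot meaningfully answer, which is the point of filtering.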

🚫 Avoiding common pitfalls

🎭 Context effects

  • Questions asked earlier can prime (make more salient) certain views or thoughts that impact how respondents answer later questions.
  • This can be intentional (funneling) or inadvertent.
  • Example: asking several questions about harm reduction and a Safe Injection Site before asking about support for the site may increase support; asking about crime in the area first may decrease support.

🗣️ Context-appropriate wording

  • Wording must be appropriate for the people answering your questions.
  • Do not ask questions people cannot understand due to age or language barriers (including jargon).
  • Use vocabulary appropriate for your audience.
  • Acronyms can make questions difficult if unknown to respondents, though sometimes they are appropriate for the audience.

⚠️ Minimizing bias

  • Avoid questions with loaded terms:
    • Adjectives like "disgusting," "dangerous," or "wonderful"
    • Terms like "always" or "never"
  • Non-neutral wording leads people to the "correct" answer.
  • The tone of the question impacts how people answer.
  • Respondents should not feel judged for their response or opinion.
  • If they feel judged, they are less likely to answer honestly and will instead answer the way they think you want them to respond.

🌫️ Avoiding ambiguity

  • Questions can be ambiguous in many ways.
  • Words like "often" or "sometimes" can result in different interpretations.
  • Even words that appear clear to the researcher can be misinterpreted by respondents.
  • Acronyms can make questions difficult if unknown.
  • Pilot testing (pre-testing) helps determine which questions can be interpreted differently from your intended meaning.

❓ Meaningless responses

  • People can and do respond to questions about things about which they have no knowledge.
  • As a researcher, you want responses from people who have some knowledge of the subject or ability to meaningfully answer the question.

🔀 Double-barreled questions

  • A double-barreled question has more than one question within it.
  • This type of question should be avoided at all costs.
  • Example: "Do you enjoy biking and hiking in your free time?" If a respondent enjoys biking but not hiking, how do they respond?
  • Don't confuse: a single question that touches on multiple aspects vs. a double-barreled question that asks two distinct things at once—always separate distinct questions.

🔄 Double negatives and confusing terms

Questions bound to confuse respondents include:

  • Questions that pose double negatives
  • Those that use confusing or culturally specific terms
  • Those that ask more than one question but are posed as a single question

Avoiding negatives: In general, avoiding negative terms in your question wording will help increase respondent understanding.

Cultural specificity: Avoid terms or phrases that may be regionally or culturally specific unless you are absolutely certain all your respondents come from that region or culture.

😊 Social desirability

Social desirability: the idea that respondents will try to answer questions in a way that will present them in a favorable light.

  • We all want to look good and probably know the politically correct response to many questions.
  • Example: asking whether respondents ever cheated on an exam—people know cheating is wrong, so it may be difficult to get honest answers.
  • Solutions:
    • Guarantee respondents' confidentiality or, even better, their anonymity.
    • Phrase difficult questions in the most benign way possible.
    • Imagine how you would feel responding to your survey questions—if you would be uncomfortable, chances are others would as well.

🧪 Testing and feedback

🔬 Pre-testing your questions

  • Get feedback on your survey questions from as many people as possible.
  • Especially seek feedback from people who are like those in your sample.
  • Ask friends, mentors, and family to review your survey.
  • The more feedback you get, the better your chances of creating questions understandable to a wide variety of people and, most importantly, to those in your sample.

✅ Summary checklist

In order to pose effective survey questions, researchers should:

| Step | Action |
| --- | --- |
| 1 | Identify what it is they wish to know |
| 2 | Keep questions clear and succinct |
| 3 | Make questions relevant to respondents |
| 4 | Use filter questions when necessary |
| 5 | Avoid questions that are likely to confuse respondents (double negatives, culturally specific terms, double-barreled questions) |
| 6 | Imagine how they would feel responding to these questions themselves |
| 7 | Get feedback, especially from people who resemble those in the researcher's sample |

Response Options

8.7 Response Options

🧭 Overview

🧠 One-sentence thesis

Providing unambiguous response options is as important as posing clear questions, and researchers must balance trade-offs like fence-sitting versus floating when designing closed-ended surveys.

📌 Key points (3–5)

  • What response options are: the potential answers you provide to survey respondents, usually requiring them to choose a single (or best) response.
  • Closed-ended vs open-ended: closed-ended questions provide a limited set of options; open-ended questions let respondents reply in their own words.
  • Mutually exclusive rule: response options must not overlap (e.g., age categories should not allow someone to fit into two categories).
  • Common confusion—fence-sitting vs floating: fence-sitters choose neutral options even when they have an opinion; floaters choose substantive answers even when they have no opinion or don't understand the question.
  • Why it matters: poorly designed response options add complexity to analysis and can bias results by forcing or allowing non-opinions.

🎯 Closed-ended questions

🎯 What closed-ended questions are

Closed-ended questions: questions where the researcher provides respondents with a limited set of options for their responses.

  • In quantitative written surveys, most or all questions will be closed-ended.
  • The researcher controls the possible answers, making tallying and analysis more straightforward.
  • Example: "How old are you?" with predefined age ranges.

✅ Mutually exclusive response options

  • Response options must not overlap—each respondent should fit into exactly one category.
  • Bad example (not mutually exclusive): Age categories 19–29, 29–39, 39–49, 49–59, 59 or older. A 39-year-old could choose option 2 or 3.
  • Good example (mutually exclusive): Age categories 20–29, 30–39, 40–49, 50–59, 60 or older. Each age fits into only one category.
  • Keep the span of numbers consistent across categories (e.g., all represent 10 years, except possibly the last category).
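The overlap problem can be checked mechanically. A small sketch using the brackets from the examples above (omitting the open-ended top category for brevity) that reports every category an age could fall into; a mutually exclusive scheme yields exactly one match:

```python
# Age brackets from the bad example (boundaries overlap at 29, 39, 49)
BAD = [(19, 29), (29, 39), (39, 49), (49, 59)]
# Age brackets from the good example (no overlap)
GOOD = [(20, 29), (30, 39), (40, 49), (50, 59)]

def matching_options(age: int, categories) -> list[str]:
    """Return every bracket an age falls into; should be exactly one."""
    return [f"{lo}-{hi}" for lo, hi in categories if lo <= age <= hi]

print(matching_options(39, BAD))   # ambiguous: two brackets match
print(matching_options(39, GOOD))  # unambiguous: exactly one bracket matches
```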

🗣️ Open-ended questions

🗣️ What open-ended questions are

Open-ended questions: questions that do not include response options; respondents reply in their own way, using their own words.

  • Used to gather additional details about experiences or feelings.
  • Example: After closed-ended questions about weekly physical activity levels, an open-ended question could ask "What physical activities do you participate in?"
  • Benefits:
    • Makes the survey experience more satisfying for respondents.
    • Can reveal new motivations or explanations the researcher had not considered.
  • Trade-off: Responses are harder to tally and analyze than closed-ended responses.

⚖️ Fence-sitting and floating

🪜 Fence-sitting

Fence-sitters: respondents who choose neutral response options even if they have an opinion.

  • Occurs when you offer a neutral middle option (e.g., five-point scale: strongly agree, agree, no opinion, disagree, strongly disagree).
  • Some people are drawn to "no opinion" even when they have an opinion, especially if their true opinion is socially undesirable.
  • Example: A respondent who disagrees but fears judgment may select "no opinion" instead.

🎈 Floating

Floaters: respondents who choose a substantive answer to a question when they really do not understand the question or do not have an opinion.

  • Occurs when you force a choice without a neutral option (e.g., four-point scale: strongly agree, agree, disagree, strongly disagree).
  • Respondents with no opinion have no choice but to select a response that suggests they have an opinion.
  • Example: A respondent unfamiliar with the topic must pick "agree" or "disagree" even though they have no real view.

🔄 The trade-off

  • Floating is the flip side of fence-sitting: solving one problem often causes the other.
  • How to decide:
    • If you want to learn about people who claim to have no opinion → allow fence-sitting (include a neutral option).
    • If you are confident all respondents will be familiar with every topic → force a choice (no neutral option).
  • There is no always-correct solution; it depends on your research goals.

📋 Types of closed-ended questions

| Type | Example | Notes |
| --- | --- | --- |
| Rating scale (Likert) | "Rate the following statement: I like my job. 1=strongly agree, 2=agree" | Measures degree of agreement or intensity of feeling. |
| Categorical response | "How much did you earn in 2018? 1. $0–$20,000; 2. $20,001–$40,000; etc." | Respondent selects one category; categories must be mutually exclusive. |
| Single response | "What was your income for 2018? _____" | Respondent provides a single numeric or text answer. |
| Semantic differential | "How would you rate the Vancouver Police Department? Fair _____ unfair; Respectful _____ disrespectful" | Uses opposing adjectives on a scale to measure attitudes. |
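For analysis, rating-scale responses are usually coded as numbers. A minimal sketch; the 1-to-5 coding scheme is a common convention assumed here, not taken from the excerpt:

```python
# Conventional numeric coding for a five-point Likert item.
LIKERT = {
    "strongly agree": 5,
    "agree": 4,
    "no opinion": 3,  # the neutral option that enables fence-sitting
    "disagree": 2,
    "strongly disagree": 1,
}

responses = ["agree", "no opinion", "strongly agree", "disagree"]
codes = [LIKERT[r] for r in responses]
mean_score = sum(codes) / len(codes)
print(codes, mean_score)  # [4, 3, 5, 2] 3.5
```

Numeric coding is what makes closed-ended responses straightforward to tally, as noted above; open-ended answers have no such ready-made mapping.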

🚫 Common question-wording pitfalls

🚫 Loaded terms

  • Problem: Using inflammatory or biased language that signals the "expected" response.
  • Example: "Agree or Disagree: Hookers on the streets are a threat to public safety." The term "hookers" is inflammatory and biased.

🚫 Double-barreled questions

  • Problem: Asking two questions in one; respondents who feel differently about each part cannot answer accurately.
  • Example: "Agree or Disagree: I support the legalization of street drugs and their taxation." This asks about both legalization and taxation—respondents may support one but not the other.

🚫 Ambiguous language

  • Problem: Using vague terms or concepts that respondents may interpret differently.
  • Example: "Agree or Disagree: Canada has good immigration policies." What does "good" mean? Also, respondents may not know enough about immigration policies to answer.

🚫 Use of acronyms

  • Problem: Assuming respondents know what abbreviations stand for.
  • Example: "Agree or Disagree: I believe that the VPD should increase the number of NCOs by increasing the number of Cpls." Not everyone knows VPD, NCO, or Cpls.

🚫 Lack of filter questions

  • Problem: Asking questions that anyone could answer without checking if they have relevant knowledge.
  • Example: "Agree or Disagree: Canada has good immigration policies" could be answered by anyone, but does not indicate whether they have knowledge of the topic. Use filter questions first to determine knowledge level.

Designing Effective Surveys

8.8 Designing Effective Surveys

🧭 Overview

🧠 One-sentence thesis

Effective survey design requires deliberate decisions about question grouping, order, length, and presentation to maximize respondent engagement and data quality.

📌 Key points (3–5)

  • Grouping questions thematically: organize questions by topic or chronology to help respondents follow the survey logic.
  • Question order matters: start with engaging (not boring or scary) questions; demographic placement depends on topic sensitivity and sample characteristics.
  • Survey length trade-off: balance what you need to know against respondent willingness; general rule is under 15 minutes.
  • Pre-testing is essential: reveals unclear wording, boring/offensive items, timing issues, and missing filter questions before actual administration.
  • Common confusion: there is no one-size-fits-all rule for question order or survey length—decisions must fit the unique research topic, questions, and sample.

📋 Organizing survey content

📋 Thematic grouping

  • Once you have developed quality questions, group them by theme or topic.
  • Example: in a survey about college transition, you might group questions about study habits together, friendship questions together, and exercise/eating habits together.
  • Alternatively, organize chronologically: questions about precollege life first, then questions about life after beginning college.
  • The excerpt emphasizes being deliberate about presentation—grouping is not arbitrary.

🔢 Question order principles

Most survey researchers agree that it is best to begin a survey with questions that encourage respondents to continue—do not bore respondents, but do not scare them away either.

  • The opening matters: avoid starting with questions that make the survey seem boring, unimportant, or not worth completing.
  • Demographic questions are debated:
    • Placing age, gender, race questions at the beginning may bore respondents or make them uncomfortable with personal questions.
    • However, if the survey topic is very sensitive (e.g., child sexual abuse, criminal activity), starting with the most intrusive questions may scare respondents away or shock them.
  • No universal rule: the best order is determined by the unique characteristics of the research—the topic, the questions, and most importantly, the sample.
  • The researcher should consult with people willing to provide feedback and keep in mind the characteristics and needs of those who will complete the survey.

⏱️ Survey length and respondent burden

⏱️ Balancing depth and brevity

  • Surveys vary from a page or two to a dozen or more pages, affecting completion time.
  • What determines length:
    • First, what do you wish to know? A simple research question (e.g., how grades vary by gender and year) requires fewer questions than a complex one (e.g., how college experiences are shaped by demographics, housing, family background, major, friendships, and extracurricular activities).
    • Even if your research question requires many questions, keep the survey as brief as possible—any hint of useless filler questions will turn off respondents.
  • Respondent willingness:
    • Consider how long respondents are likely to be willing to spend.
    • Example: college students may not want to spend more than a few minutes of their fun time on a survey, unless a professor allows in-class administration.
    • The general rule: try to keep completion time under 15 minutes.

⏱️ Don't confuse

  • Survey length is not just about your research needs—it must also fit the sample's willingness and context.
  • A longer survey is acceptable only if respondents have the time and motivation; otherwise, you risk non-completion or poor-quality responses.

🧪 Pre-testing the survey

🧪 Why pre-test

Pre-testing allows you to get feedback on your survey, so you can improve it before you actually administer it.

  • What you learn from pre-testing:
    • How understandable your questions are.
    • Feedback on question wording and order.
    • Whether any questions are exceptionally boring or offensive.
    • Whether you need filter questions in certain places.
    • Timing: how long it takes to complete the survey.
  • Pre-testing can be expensive and time-consuming if done on a large, representative sample, but you can learn a lot from a small number of accessible people (e.g., a few friends).

🧪 How to pre-test for timing

  • Ask pre-testers to complete the survey as though they were actual members of your sample.
  • Time them to estimate completion time for the real survey.
  • This tells you whether you have room to add items or need to cut some.
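Estimating completion time from pre-testers is simple arithmetic; a sketch with illustrative timings, checked against the under-15-minute rule of thumb from the excerpt:

```python
# Minutes each pre-tester took to finish the draft survey (illustrative data).
pretest_minutes = [12.5, 14.0, 16.5, 13.0]

estimate = sum(pretest_minutes) / len(pretest_minutes)
within_rule = estimate <= 15  # general rule: keep completion under 15 minutes

print(estimate, within_rule)  # 14.0 True
```

If the estimate lands well under the limit, there is room to add items; if it exceeds the limit, some questions should be cut.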

🧪 Don't confuse

  • Pre-testing is not the same as the actual survey administration—it is a trial run to catch problems early.
  • Even informal pre-testing with a few people is better than no pre-testing at all.

🎨 Presentation and formatting

🎨 Making the survey attractive and clear

  • A messy presentation can confuse or annoy respondents.
  • Best practices:
    • Be brief, to the point, and as clear as possible.
    • Avoid cramming too much into a single page.
    • Use readable font size (at least 12 point).
    • Leave reasonable space between items.
    • Make sure all instructions are exceptionally clear.
  • Think about documents you have read that were easy to read—mimic those features in your survey.

🎨 Why presentation matters

  • Poor formatting can lead to respondent confusion or frustration, reducing data quality.
  • Clear, attractive presentation encourages completion and accurate responses.

🔍 Summary insights from the excerpt

🔍 Broader lessons

  • The excerpt's summary emphasizes that question design and wording significantly impact survey outcomes.
  • Ensuring content reflects study objectives is only one aspect—researchers must also maximize accurate responses without biasing them.
  • Pilot testing (pre-testing) is very important because researchers who designed the questions may not spot ambiguities or context effects; other people's feedback is informative.
  • It is easy for a survey to end up with a "bad" question that must be dropped from the analysis; the design methods above help minimize that risk.

🔍 Beyond surveys

  • The excerpt notes that these lessons apply beyond formal surveys/interviews—they matter in professional contexts.
  • Example: the types of questions you ask and how you ask them can lead to different conclusions, such as choosing an ineffective treatment due to wrong diagnosis or identifying the wrong suspect in an investigation.
  • Focusing on the objective (treating the patient, arresting a suspect, identifying the cause of a fire) keeps you focused on the right types of questions and how to ask them.
52

From Completed Survey to Analyzable Data

9.1 From Completed Survey to Analyzable Data

🧭 Overview

🧠 One-sentence thesis

Survey researchers must transform stacks of completed questionnaires into condensed, analyzable numerical data through systematic coding and data entry processes while managing response rates and non-response bias.

📌 Key points (3–5)

  • Response rate matters: the proportion of returned surveys affects data quality; low rates risk non-response bias where only people with strong opinions respond.
  • Codebooks translate words to numbers: researchers create documents that assign numerical values to survey responses, making large datasets manageable.
  • Aggregate vs disaggregate: aggregate data compiles information from multiple sources into summaries; disaggregate data breaks summaries back into component parts.
  • Common confusion: data analysis is not just collecting surveys—it requires systematic conversion from raw responses to numerical formats before any patterns can be identified.
  • Why it matters: condensing large amounts of information into usable chunks enables researchers to describe patterns and share findings.

📊 From euphoria to organization

📊 The initial challenge

  • Receiving completed surveys can feel exciting at first, then overwhelming.
  • The excerpt describes moving from "initial euphoria to dread" when faced with many completed questionnaires.
  • Data can be both fun and overwhelming without a systematic approach.

🎯 The core goal

The goal with data analysis is to be able to condense large amounts of information into usable and understandable chunks.

  • Survey research generates large volumes of responses that cannot be interpreted in raw form.
  • Researchers need methods to make data "manageable and analyzable."
  • Example: 75 completed surveys with 20 questions each = 1,500 individual responses that need organizing.

📈 Response rates and bias

📈 Calculating response rate

Response rate: the number of completed surveys you receive divided by the number of surveys you distributed.

  • Formula: (completed surveys ÷ distributed surveys) × 100
  • Example from excerpt: 75 returned out of 100 distributed = 75% response rate
  • The excerpt notes that 75% "would be considered good, even excellent, by most survey researchers."
  • The chance of getting all 100 surveys back is "about zero."
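
In Python terms, the calculation is a one-liner; the function name below is illustrative, not from the excerpt:

```python
# Response rate = completed surveys / distributed surveys, as a percentage.
def response_rate(completed, distributed):
    """Return the response rate as a percentage."""
    return completed / distributed * 100

# The excerpt's example: 75 returned out of 100 distributed.
print(f"Response rate: {response_rate(75, 100):.1f}%")  # Response rate: 75.0%
```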

⚠️ Non-response bias risk

Non-response bias: when a low rate of response introduces skewed findings into a study.

  • The problem: What if only people with strong opinions return surveys?
  • The consequence: Findings may not represent how things really are.
  • The limitation: Researchers become limited in the claims they can make about patterns.
  • Don't confuse: a low response rate doesn't automatically mean bad data, but it raises concerns about representativeness.

🔧 Improving response rates

The excerpt lists several strategies researchers use:

  • Personalization: address surveys to specific respondents rather than generic "madam" or "sir"
  • Credibility enhancement: provide study details, researcher contact information, partner with respected organizations (universities, hospitals)
  • Follow-up: send pre-survey notices and post-survey reminders
  • Incentives: include small tokens of appreciation (e.g., one dollar) with mailed surveys

🗂️ Creating codebooks

🗂️ What a codebook does

A codebook: a document that outlines how a survey researcher has translated her or his data from words into numbers.

  • Codebooks are essential for converting qualitative responses into quantitative data.
  • They provide a systematic translation system that can be replicated.
  • The excerpt emphasizes this as necessary "in order to condense your completed surveys into analyzable numbers."

📋 Codebook structure

The excerpt provides a table example with these components:

| Component | Purpose | Example from excerpt |
| --- | --- | --- |
| Variable # | Sequential numbering | 11, 12, 13, 14 |
| Variable Name | Shortened identifier for computer entry | FINSEC, FINFAM, FINFAMT, FINCHUR |
| Questions | The actual survey question text | "In general, how financially secure would you say you are?" |
| Options | Numerical values assigned to each response | 1 = Not at all secure, 2 = Between not at all and moderately secure, etc. |

💡 Why shortened variable names

  • The excerpt notes that shortened names "come in handy when entering data into a computer program for analysis."
  • Example: instead of typing the full question repeatedly, researchers use "FINSEC" for financial security questions.
  • This makes data entry faster and reduces errors.

🔢 Numerical coding examples

From the excerpt's codebook sample:

  • Scaled responses: Financial security uses 1–5 scale (1 = Not at all secure up to 5 = Very secure)
  • Yes/No questions: Binary coding (0 = No, 1 = Yes) for questions like "have you ever received money from family"
  • Frequency categories: 1 = 1 or 2 times, 2 = 3 or 4 times, 3 = 5 times or more
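
A codebook of this kind can be sketched as a plain dictionary. The FINSEC and FINFAM names and codes follow the excerpt's sample; the dict structure and the helper function are illustrative:

```python
# Minimal codebook sketch: each variable maps written answers to codes.
CODEBOOK = {
    # "In general, how financially secure would you say you are?" (1-5 scale)
    "FINSEC": {
        "Not at all secure": 1,
        "Between not at all and moderately secure": 2,
        "Moderately secure": 3,
        "Between moderately and very secure": 4,
        "Very secure": 5,
    },
    # "Have you ever received money from family?" (binary coding)
    "FINFAM": {"No": 0, "Yes": 1},
}

def code_response(variable, answer):
    """Translate one written answer into its numerical code."""
    return CODEBOOK[variable][answer]

print(code_response("FINSEC", "Moderately secure"))  # 3
print(code_response("FINFAM", "Yes"))                # 1
```

During data entry, each respondent's answers are run through the codebook so the database holds only numbers plus the short variable names.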

💻 Data entry and analysis tools

💻 The data entry task

  • The excerpt acknowledges that "there probably is not much to be said about this task that will make you want to perform it."
  • The reward: "having a database of your very own analyzable data."
  • Manual data entry is tedious but necessary for researchers without automated collection methods.

🛠️ Analysis software options

SPSS (Statistical Package for the Social Sciences)

  • Designed specifically for the type of data quantitative survey researchers collect
  • Can perform basic descriptive statistical analysis to complex inferential statistical analysis
  • "Touted by many for being highly accessible and relatively easy to navigate (with practice)"

Excel

  • "Far less sophisticated in its statistical capabilities"
  • "Relatively easy to use"
  • "Suits some researchers' purposes just fine"

Don't confuse: SPSS is more powerful but requires learning; Excel is simpler but more limited.

🔍 Aggregate vs disaggregate data

🔍 Aggregate data

Aggregate data: numerical or non-numerical information that is (1) collected from multiple sources and/or on multiple measures (variables or individuals) and (2) compiled into data summaries or summary reports to examine trends or statistical analysis.

  • Combines information from many sources or measures
  • Creates summaries and reports
  • Purpose: examine overall trends
  • Example: combining all 75 respondents' financial security ratings into one summary table

🔍 Disaggregate data

Disaggregate data: breaks down aggregated data into component parts or smaller units of data.

  • The opposite process of aggregation
  • Takes summaries and separates them back into smaller units
  • Allows researchers to examine subgroups or specific cases
  • Example: after seeing overall financial security trends, breaking down responses by age group or gender

🔄 Why both matter

  • Aggregation helps see big-picture patterns
  • Disaggregation helps understand variation and subgroup differences
  • Researchers move between both levels depending on their analytical questions
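
The two moves can be sketched with a handful of respondent records; the data below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical coded records: one dict per respondent.
respondents = [
    {"gender": "woman", "finsec": 3},
    {"gender": "man",   "finsec": 1},
    {"gender": "woman", "finsec": 2},
    {"gender": "man",   "finsec": 3},
]

# Aggregate: compile every response into one summary statistic.
overall_mean = sum(r["finsec"] for r in respondents) / len(respondents)

# Disaggregate: break that summary back down by a subgroup variable.
by_gender = defaultdict(list)
for r in respondents:
    by_gender[r["gender"]].append(r["finsec"])
group_means = {g: sum(vals) / len(vals) for g, vals in by_gender.items()}

print(overall_mean)  # 2.25
print(group_means)   # {'woman': 2.5, 'man': 2.0}
```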

📊 Beginning pattern identification

📊 Univariate analysis basics

Univariate analysis: the most basic form of analysis that quantitative researchers conduct, where researchers describe patterns across just one variable.

  • "Uni" = one variable at a time
  • Includes frequency distributions and measures of central tendency
  • Foundation for more complex analysis later

📊 Frequency distributions

A frequency distribution: a way of summarizing the distribution of responses on a single survey question.

The excerpt provides an example table showing:

  • Question: "In general, how financially secure would you say you are?"
  • Total valid cases: 180 respondents (3 non-responses)
  • Results:
    • 46 people (25.6%) = Not at all secure
    • 43 people (23.9%) = Between not at all and moderately secure
    • 76 people (42.2%) = Moderately secure
    • 11 people (6.1%) = Between moderately and very secure
    • 4 people (2.2%) = Very secure
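
Using the excerpt's reported counts, the whole distribution can be reproduced with a short loop:

```python
# Frequencies reported in the excerpt (180 valid cases, 3 non-responses).
counts = {
    "Not at all secure": 46,
    "Between not at all and moderately secure": 43,
    "Moderately secure": 76,
    "Between moderately and very secure": 11,
    "Very secure": 4,
}
total = sum(counts.values())  # 180

# Print each category with its count and percentage of valid cases.
for category, n in counts.items():
    print(f"{category}: {n} ({n / total:.1%})")
```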

🔎 What frequency distributions reveal

From the example in the excerpt:

  • Most common response: "Moderately secure" (42.2%)
  • Key finding: Fewer than 10% reported being in the two most secure categories
  • Shows the full distribution, not just averages
  • Reveals where responses cluster and where they are sparse

📏 Measures of central tendency

The excerpt mentions these but notes the text cuts off before explaining them fully.

  • Described as another form of univariate analysis
  • "Tell us what" [excerpt ends here]
  • Typically include mean, median, and mode (though the excerpt doesn't complete this explanation)
53

Identifying Patterns in Quantitative Data

9.2 Identifying Patterns

🧭 Overview

🧠 One-sentence thesis

Data analysis reveals patterns through univariate analysis (examining single variables), bivariate analysis (exploring relationships between two variables), and multivariate analysis (investigating relationships among more than two variables).

📌 Key points (3–5)

  • Univariate analysis: the most basic form, describing patterns in just one variable using frequency distributions and measures of central tendency.
  • Measures of central tendency: mode (for nominal data), median (for ordinal data), and mean (for interval/ratio data) tell us the most common or average response.
  • Bivariate analysis: examines whether two variables co-vary (change together) or have independence (no relationship).
  • Common confusion: dependent variables go in rows, independent variables in columns when creating contingency tables—this makes comparison easier.
  • Multivariate analysis: allows simultaneous examination of relationships among more than two variables.

📊 Understanding aggregate vs disaggregate data

📦 Aggregate data

Aggregate data: numerical or non-numerical information that is (1) collected from multiple sources and/or on multiple measures (variables or individuals) and (2) compiled into data summaries or summary reports to examine trends or statistical analysis.

  • Combines information from multiple sources into summaries.
  • Used to examine overall trends and conduct statistical analysis.

🔍 Disaggregate data

  • Breaks down aggregated data into component parts or smaller units.
  • Allows examination of specific subgroups or individual elements within the larger dataset.

📈 Univariate analysis: examining one variable

📊 Frequency distributions

Frequency distribution: a way of summarizing the distribution of responses on a single survey question.

  • Shows how responses are distributed across different answer categories.
  • Includes the value, frequency (count), and percentage for each response option.
  • Example: In the older worker survey, 76 respondents (42.2%) reported feeling "moderately secure" financially—the most common response category.
  • Helps identify which response categories are most and least common.

🎯 Measures of central tendency

Measures of central tendency: tell us what the most common, or average, response is on a question.

Three types based on variable level:

| Measure | Variable Level | How to Calculate | When to Use |
| --- | --- | --- | --- |
| Mode | Nominal | Most frequent response | Categorical data with no order |
| Median | Ordinal | Middle value when ordered | Ranked/ordered data |
| Mean | Interval/Ratio | Sum all values ÷ total responses | Numeric data with equal intervals |

🔢 Calculating the median

  • List all responses in order from lowest to highest.
  • Find the middle point in that ordered list.
  • With an odd number of valid cases, the median is the single middle value; with an even number, it is the midpoint of the two middle values.
  • Example: With 10 responses, the median is the midpoint of the 5th and 6th values in the ordered list.
  • In the financial security example with 10 responses, the median value was $128,000.
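
All three measures are available in Python's standard library; the response values below are hypothetical, one set per variable level:

```python
import statistics

# Hypothetical response sets (not from the excerpt's dataset).
nominal = ["renter", "owner", "owner", "renter", "owner"]  # no inherent order
ordinal = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]                   # ranked 1-5 scale
ratio = [21000, 35000, 40000, 52000, 128000]               # annual income

print("mode:", statistics.mode(nominal))      # mode: owner
print("median:", statistics.median(ordinal))  # median: 3.0
print("mean:", statistics.mean(ratio))        # 276000 summed over 5 responses
```

Note that `statistics.median` handles the even-count case automatically by averaging the two middle values.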

🔗 Bivariate analysis: examining two variables

🤝 Co-variation and independence

Bivariate analysis: allows us to assess co-variation among two variables—whether changes in one variable occur together with changes in another.

  • Co-variation: changes in one variable occur together with changes in another variable.
  • Independence: no relationship exists between the two variables; they do not co-vary.
  • Purpose: determine whether a relationship exists between two variables.

📋 Contingency tables

Contingency table: shows how variation on one variable may be contingent on variation on the other.

Standard table structure:

  • Columns: independent variables (the variables that influence others).
  • Rows: dependent variables (values contingent on other values).
  • Bottom row: total number of respondents for each independent variable category.
  • Top: table heading describing what is presented.

📊 Reading contingency tables

Example from the older worker survey (gender × financial security):

  • 44.1% of men reported not being financially secure.
  • 51.8% of women reported not being financially secure.
  • Conclusion: More women than men reported financial insecurity.
  • Researchers sometimes collapse response categories (e.g., from 5 to 3 categories) to make tables easier to read.

Don't confuse: The placement matters—putting independent variables in columns and dependent variables in rows makes it simpler to compare across categories by reading horizontally.
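
A contingency table can be sketched as a counter keyed on (independent, dependent) pairs. The raw counts below are hypothetical, chosen so the column percentages match the excerpt's 44.1% and 51.8%; the actual sample counts are not given:

```python
from collections import Counter

# Hypothetical (gender, security) pairs for each respondent.
pairs = ([("man", "not secure")] * 15 + [("man", "secure")] * 19
         + [("woman", "not secure")] * 29 + [("woman", "secure")] * 27)

table = Counter(pairs)

# Independent variable (gender) defines the columns: total each column,
# then read the dependent variable's share within that column.
for gender in ("man", "woman"):
    col_total = sum(v for (g, _), v in table.items() if g == gender)
    pct = table[(gender, "not secure")] / col_total * 100
    print(f"{gender}: {pct:.1f}% not secure")
# man: 44.1% not secure
# woman: 51.8% not secure
```

Percentaging within each column (each category of the independent variable) is what lets you compare men and women by reading across a row.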

🔀 Multivariate analysis: examining multiple variables

🧩 Beyond two variables

Multivariate analysis: simultaneously analyzing relationships among more than two variables.

  • Goes beyond simple two-variable relationships.
  • Allows testing of more complex hypotheses.
  • Example hypothesis: "Financial security declines for women as they age but increases for men as they age" would require examining gender, financial security, AND age simultaneously.
  • Requires more advanced statistical techniques than univariate or bivariate analysis.

📚 Further learning

  • The excerpt notes this is an introductory overview and does not go into detail about conducting multivariate analysis.
  • Strategies for reading and understanding multivariate statistics tables are covered in Chapter 16.
  • Taking statistics courses is recommended for gaining deeper quantitative data analysis skills.
54

Interview Research

10.1 Interview Research

🧭 Overview

🧠 One-sentence thesis

Qualitative interviews are a flexible, semi-structured data collection method best suited for gathering detailed, in-depth information about complex topics where participants need space to explain their experiences in their own words.

📌 Key points (3–5)

  • What interviews are: a method involving two or more people exchanging information through questions and answers designed to elicit information on specific topics.
  • Key advantage over surveys: real-time interaction allows researchers to ask follow-up questions when responses spark further inquiry.
  • When to use interviews: complex topics, lengthy explanations needed, studying processes, or when detailed personal experiences are the focus.
  • Common confusion: qualitative vs quantitative interviews—qualitative interviews use open-ended questions and are semi-structured, not rigidly standardized.
  • Core technique: semi-structured approach with an interview guide that directs but does not rigidly constrain the conversation.

🎯 What interview research is

🎯 Definition and scope

Interview (social scientific perspective): a method of data collection that involves two or more people exchanging information through a series of questions and answers.

  • Questions are designed by a researcher to elicit information from participants on specific topics.
  • Not limited to two people or in-person meetings—can take various forms.
  • Used across fields: market research, journalism, and social science.

🔄 How interviews differ from surveys

  • Real-time flexibility: if a response sparks a follow-up question, you can ask it immediately.
  • Surveys lack this flexibility—once distributed, you cannot probe deeper into interesting responses.
  • Example: A participant mentions an unexpected experience; in an interview you can explore it further, but in a survey you cannot.

📋 When to use interview research

📋 Five key circumstances

The excerpt identifies interviews as especially useful when:

| Circumstance | Why it matters |
| --- | --- |
| Very detailed information needed | Interviews allow depth that surveys cannot capture |
| Follow-up questions anticipated | Real-time dialogue enables probing |
| Lengthy explanations required | Topics about lived experience (emotional, psychological, physical, intellectual, cultural, racial) need space |
| Complex or confusing topics | Participants may need dialogue to work through responses |
| Studying processes | Processes require explanation and description |

🔍 Complexity and explanation needs

  • Use interviews when topics require lengthy explanation.
  • When participants need time or dialogue with others to work through their responses.
  • When people will have a lot to say or want to provide explanations.
  • Don't confuse: not every research question needs interviews—only those requiring depth, complexity, or process description.

🗣️ Qualitative interview characteristics

🗣️ Semi-structured approach

Semi-structured interviews: the researcher has a particular topic to explore but questions are open-ended and may not be asked in exactly the same way or order to each respondent.

  • Also called "intensive" or "in-depth" interviews.
  • The goal is to hear from respondents in their own words what they think is important.
  • Flexibility is key—not rigidly standardized like quantitative interviews.

❓ Open-ended questions

Open-ended questions: questions for which a researcher does not provide answer options.

  • Participants must come up with their own words, phrases, or sentences.
  • Demand more from participants than closed-ended questions.
  • This is a key difference between qualitative and quantitative interviewing.
  • Example: Instead of "Do you agree or disagree?", ask "What are your thoughts on...?"

📝 Interview guide

Interview guide: a list of topics or questions that the interviewer hopes to cover during the course of an interview.

  • Used to guide the interviewer, not to rigidly constrain.
  • Not inflexible—think of it like an agenda.
  • The researcher usually develops it in advance and refers to it (or memorizes it) during the interview.
  • Allows the conversation to flow naturally while ensuring key topics are covered.

💪 Advantages of interview research

💪 Gathering detailed information

  • Interviews excel at capturing depth and nuance.
  • Participants can explain processes, describe experiences, and provide context.
  • The conversational nature may feel more natural to respondents than formal surveys.

💪 Real-time responsiveness

  • The interviewer can adapt based on what the participant says.
  • Follow-up questions can clarify, deepen, or explore unexpected directions.
  • This responsiveness is impossible with pre-set survey instruments.
55

When should qualitative data collection be used?

10.2 When should qualitative data collection be used?

🧭 Overview

🧠 One-sentence thesis

Qualitative interviews should be used when researchers need detailed, explanatory information that may require follow-up questions, lengthy responses, or exploration of complex topics that participants need time to work through.

📌 Key points (3–5)

  • Main advantage over surveys: Interviews allow real-time follow-up questions when responses spark further inquiry, whereas surveys do not.
  • When to use interviews: Best for gathering detailed information, exploring complex or confusing topics, studying processes, or when questions require lengthy explanations about lived experience.
  • Qualitative interview structure: Semi-structured with open-ended questions; researchers use an interview guide (flexible topic/question list) rather than rigid scripts.
  • Common confusion: Interview guides are not inflexible checklists—the flow depends partly on what the participant says, not just the researcher's pre-set order.
  • Key skill requirement: Skilled interviewers must listen actively, pick up cues about when to follow up or move on, and let participants speak without unnecessary interruption.

🎯 When interviews are the right method

🎯 Advantages over surveys

  • Real-time follow-up: If a participant's response raises a new question, the interviewer can ask immediately.
  • Surveys lack this flexibility—once a survey is distributed, no clarification or probing is possible.
  • Interviews reveal "the story behind responses" that might appear in written surveys.

📋 Five conditions for using interviews

The excerpt lists five situations when interview research is especially useful:

| Condition | Why it matters |
| --- | --- |
| 1. Very detailed information needed | Interviews allow depth that surveys cannot capture |
| 2. Anticipate needing follow-up questions | Real-time conversation enables probing |
| 3. Questions require lengthy explanation | Topics about lived experience (emotional, psychological, physical, intellectual, cultural, racial, etc.) need space for participants to elaborate |
| 4. Complex or confusing topic | Participants may need dialogue or time to work through their responses |
| 5. Studying processes | Interviews suit topics where participants must describe how things unfold |

🧩 When complexity demands conversation

  • If the topic is complex or may confuse respondents, interviews provide opportunity for clarification.
  • If participants need time or dialogue with others to formulate responses, interviews accommodate that need.
  • If people will have a lot to say or want to explain/describe processes, interviews are the best method.

🗣️ Qualitative interview characteristics

🗣️ What makes interviews "qualitative"

Qualitative interviews are sometimes called intensive or in-depth interviews.

  • Semi-structured format: The researcher has a particular topic but does not ask every question in exactly the same way or order to every respondent.
  • Open-ended questions: Researchers do not provide answer options; participants must come up with their own words, phrases, or sentences.
  • Primary aim: Hear from respondents in their own words what they think is important about the topic.

🔍 Open-ended vs closed-ended questions

  • Open-ended questions: No answer options provided; demand more from participants because they must generate their own responses.
  • Closed-ended questions (not the focus here): Provide answer options; typical in surveys.
  • Don't confuse: Open-ended questions require more effort from participants but yield richer, more detailed data.

🎙️ Conversational feel with research purpose

  • Qualitative interviews might feel like a conversation to respondents.
  • However, the researcher is usually guiding the conversation with the goal of gathering information.
  • Example: A participant may feel they are chatting freely, but the interviewer is steering toward specific topics of research interest.

📝 Interview guides and flexibility

📝 What is an interview guide?

An interview guide is a list of topics or questions that the interviewer hopes to cover during the course of an interview.

  • It is called a "guide" because it guides the interviewer but is not inflexible.
  • Think of it like a to-do list or daily agenda: you hope to check off everything, but it is not mandatory to accomplish everything in the exact order written.
  • Emerging events (or participant responses) may influence you to rearrange or skip items.

🔄 Why guides are flexible

  • The opening question may be the same across all interviews, but from that point on, what the participant says shapes how the interview proceeds.
  • Each interview is likely to flow a little differently because participants provide answers in their own words and raise points they believe are important.
  • This flexibility makes in-depth interviewing exciting but also challenging to conduct.

📋 Two-version strategy

  • Some researchers create two versions of the guide:
    • Brief outline: Just topic headings.
    • Detailed version: Full questions underneath each topic heading.
  • The detailed guide is used to prepare and practice in advance; the brief outline may be used during the interview itself.
  • This approach balances preparation with conversational flow.

🎓 Skills for effective interviewing

🎓 What skilled interviewers do

  • Ask questions and actually listen: Not just waiting for their turn to speak.
  • Pick up on cues: Recognize when to follow up, when to move on, and when to let the participant speak without guidance or interruption.
  • Balance structure and responsiveness: Use the guide to stay on track but remain flexible to participant input.

⚠️ Challenges of in-depth interviewing

  • It takes skill to conduct well—not as simple as reading questions from a script.
  • The interviewer must manage the tension between covering planned topics and allowing the conversation to unfold naturally.
  • Don't confuse: A flexible guide does not mean lack of preparation; thoughtful and careful work is required to design the guide and anticipate how topics will flow.

🧠 Thematic organization

  • Interview guides should organize topics and questions thematically and in the order they are likely to proceed.
  • Keep in mind: The flow of a qualitative interview is partly determined by what the respondent has to say, not solely by the researcher's plan.
  • Example: If a participant mentions a related topic early, the interviewer may choose to explore it immediately rather than waiting for the "scheduled" moment in the guide.
56

Conducting Qualitative Interviews

10.3 Conducting Qualitative Interviews

🧭 Overview

🧠 One-sentence thesis

Qualitative interviews use open-ended questions and flexible interview guides to let participants share information in their own words, requiring skilled interviewers who can listen actively and adapt the conversation flow.

📌 Key points (3–5)

  • Open-ended questions are central: participants provide answers in their own words without pre-set response options, demanding more effort but yielding richer data.
  • Interview guides are flexible tools: they list topics or questions to cover but are not rigid scripts—the participant's responses shape how the interview unfolds.
  • Skilled interviewing requires active listening: interviewers must simultaneously ask questions, listen carefully, pick up on cues, and decide when to follow up or let participants speak uninterrupted.
  • Common confusion—guide vs. script: an interview guide is like a to-do list (flexible, reorderable) not a mandatory checklist; it guides rather than dictates the conversation.
  • Preparation is essential: guides must be carefully constructed, practiced in advance, and ideally recorded (audio) so the interviewer can focus on interaction rather than note-taking.

🎯 What makes qualitative interviews distinctive

🗣️ Open-ended questions

Open-ended questions are questions for which a researcher does not provide answer options.

  • Participants must come up with their own words, phrases, or sentences to respond.
  • This contrasts with quantitative interviews that use closed-ended questions with pre-set answer choices.
  • The approach demands more cognitive effort from participants but allows them to raise points they believe are important.
  • Example: Instead of "Do you support policy X? Yes/No," ask "What are your thoughts on policy X?"

💬 Conversational feel with research purpose

  • Qualitative interviews might feel like a conversation to respondents.
  • However, the researcher is actively guiding the conversation with the goal of gathering information.
  • The interviewer balances making participants comfortable while systematically covering research topics.

📋 Creating and using interview guides

📝 What an interview guide is

An interview guide is a list of topics or questions that the interviewer hopes to cover during the course of an interview.

  • It is called a "guide" because it guides the interviewer but is not inflexible.
  • Think of it like a daily agenda or to-do list: you hope to accomplish everything, but it's not mandatory to complete every item or follow the exact order.
  • Emerging events (participant responses) may influence you to rearrange or skip items.

🔄 Flexibility in practice

  • While the opening question may be the same across all interviews, from that point on the participant's responses shape how the interview proceeds.
  • Each interview is likely to flow differently because participants provide answers in their own words.
  • This flexibility makes in-depth interviewing exciting but also challenging to conduct.

🛠️ Two-version strategy

Some researchers create two versions:

| Version | Content | Purpose |
| --- | --- | --- |
| Detailed guide | Full questions under each topic heading | Prepare and practice in advance |
| Brief outline | Just topic headings | Bring to the actual interview |

  • Bringing only the outline encourages the researcher to actually listen to participants.
  • An overly-detailed guide is difficult to navigate and may give the impression the interviewer cares more about questions than answers.

🏗️ Constructing an effective interview guide

🧠 Brainstorming and organizing

  1. Brainstorm freely: List all topics and questions that come to mind about your research question (no rules at this stage).
  2. Pare down: Cut redundant questions and group similar topics together.
  3. Create headings: Develop topic headings for grouped categories.
  4. Consult literature: Find out what questions other researchers have asked in similar studies.
  5. Get feedback: Ask friends, family, and professors for guidance—they will catch things you missed.

📍 Question placement and sensitivity

  • Do not place very sensitive or potentially controversial questions at the very beginning.
  • Participants need the opportunity to warm up and feel comfortable talking with you.
  • This mirrors best practices in quantitative survey research.

✅ Guidelines for specific questions

Avoid simple yes/no questions (or add follow-ups)

  • If you include yes/no questions, be sure to include follow-up questions.
  • One benefit of qualitative interviews is that you can ask participants for more information.

Avoid "why" as a follow-up

  • "Why" questions can appear confrontational even if unintended.
  • People often don't know how to respond because they may not know why themselves.
  • Better alternative: "Could you tell me a little more about that?"
  • This allows participants to explain further without feeling doubted or interrogated.

Avoid leading questions

  • ❌ Leading: "Don't you think that people who drink and drive are terrible?"
  • ✅ Better: "What do you think about people who drink and drive?"

Keep questions open-ended

  • The key to successful qualitative interviews is giving participants the opportunity to share information in their own words and in their own way.

🎙️ Recording and preparation logistics

📼 Audio recording considerations

  • Recording interviews is probably most common for qualitative interviewers.
  • Benefits: Allows the researcher to focus on interaction with the participant rather than being distracted by note-taking.
  • Challenges: Not all participants feel comfortable being recorded; sometimes the subject is too sensitive for recording.
  • Alternative: If not recording, the researcher must balance excellent note-taking with exceptional question-asking and listening—quite challenging to do all three simultaneously.

🎭 Practice is crucial

  • Practice the interview in advance, whether recording or not.
  • Ideally, find friends willing to participate in trial runs.
  • Even better: find friends similar in some ways to your sample—they can give the best feedback on questions and interview demeanor.

🎯 The interviewing skill set

It takes a skilled interviewer to:

  • Ask questions
  • Actually listen to respondents
  • Pick up on cues about when to follow up
  • Know when to move on
  • Know when to simply let the participant speak without guidance or interruption

🔍 Other qualitative data collection methods

👥 Focus groups

When multiple respondents participate in an interview at the same time, this is referred to as a focus group interview.

  • Occasionally more than one interviewer may be present.
  • Benefits:
    • Topics or questions that hadn't occurred to the researcher may be brought up by other participants.
    • Respondents talking with and asking questions of one another can be an excellent learning opportunity.
    • Researchers can learn from respondents' body language and interactions with one another.
  • Note: There are unique ethical concerns associated with collecting data in a group setting.

📜 Oral histories

An oral history is a less traditional form of data collection that can take the form of an interview.

  • Purpose: Record in writing material that might otherwise be forgotten by those unlikely to create written records or produce archival materials.
  • Involves interviewing people about their past to ensure their history is preserved.

10.4 Other Qualitative Data Collection Methods

🧭 Overview

🧠 One-sentence thesis

Beyond traditional one-on-one interviews, qualitative researchers can use focus groups, oral histories, and videography to capture richer data through group dynamics, preserve voices that might otherwise be lost, and record detailed behavioral information.

📌 Key points (3–5)

  • Focus groups allow multiple respondents to interact simultaneously, surfacing questions and topics the researcher might not have anticipated.
  • Oral histories preserve memories and experiences of people unlikely to create written records, addressing historical gaps left by those with less power and access.
  • Videography captures detailed behavioral data (body language, gaze) and allows repeated review, though it raises significant confidentiality concerns.
  • Common confusion: oral histories vs. written archives—oral histories are not inferior; Aboriginal oral histories, for example, have preserved accurate accounts across generations through structured public sharing.
  • Practical trade-offs: each method offers unique benefits (e.g., group interaction, permanent records) but also challenges (e.g., ethical concerns in groups, labor-intensive video coding, Hawthorne effect).

🗣️ Focus groups

🗣️ What focus groups are

A focus group interview occurs when multiple respondents participate in an interview at the same time; occasionally more than one interviewer may be present as well.

  • Unlike one-on-one interviews, focus groups involve several participants talking together.
  • The group setting creates interaction among respondents, not just between interviewer and interviewee.

💡 Why focus groups are valuable

  • Topics emerge organically: participants may bring up questions or issues the researcher had not considered.
  • Peer-to-peer learning: respondents ask questions of one another, revealing perspectives the researcher might miss.
  • Nonverbal data: the researcher can observe body language and interactions between participants, adding another layer of insight.
  • Example: A participant's reaction to another's comment might reveal disagreement or support that wouldn't surface in individual interviews.

⚠️ Unique ethical concerns

  • The excerpt notes that collecting data in a group setting raises specific ethical issues (though it does not detail them).
  • Don't confuse: focus groups are not just "efficient interviews"—the group dynamic itself is a source of data.

📜 Oral histories

📜 What oral histories are

Oral history is a less traditional form of data collection that can take the form of an interview; its purpose is to record, in writing, material that might otherwise be forgotten by those who are unlikely to create a written record or produce archival materials.

  • It involves interviewing people about their past to ensure their history is available to future generations.
  • History is defined broadly as "everything that happened before this moment in time."

📦 The "box" analogy: whose history gets preserved

  • Palys and Atchison use the analogy of a box containing historical facts.
  • Historians select which items go into the box, but many interesting and important facts remain outside—"one of the tragedies of history."
  • Power and access matter: governments, the wealthy, the powerful, the upper classes, the educated, and (historically) men have had easier access to the box.
  • Example: 17th-century England's historical accounts reflect the views of wealthy, educated males; the poor, lower classes, females, and uneducated left fewer records.
  • Oral history research aims to fill these gaps by documenting voices that would otherwise be lost.

🪶 Aboriginal oral histories

| Aspect | Aboriginal oral histories | Written archives |
| --- | --- | --- |
| Preservation method | Verbatim memorization and public recounting across generations | Written documents |
| Accuracy mechanism | Public sharing at potlatches (feasts) where others can challenge the account | Archival storage |
| Historical assumption | European reliance on written records led to the false assumption of "no history" | Written evidence = history |
| Continuity | Oral histories told today match those recorded by anthropologists at the turn of the 20th century | Varies |
  • Each generation was tasked with accurately remembering and preserving stories from previous generations.
  • The memories were not casual recollections but "lived memorialization and verbatim accounts repeated throughout the ages."
  • Public accountability: at potlatches, each speaker recounts their clan's history (territories, crests, songs); attendees can challenge inaccuracies, helping preserve accuracy.
  • Don't confuse: the lack of written records does not mean Aboriginal cultures lack history—they have successfully preserved it through structured oral transmission.

🎥 Videography

🎥 What videography offers

  • Videography can collect data during interviews, focus groups, and in natural settings (popular in ethnographic studies).
  • It has been under-utilized mainly due to confidentiality and privacy issues.

✅ Benefits of video data collection

  • Accurate event recording: captures what happened without relying on memory.
  • Multiple raters: enables verification of observations through cross-coding.
  • Repeated review: researchers can watch the video multiple times.
  • Performance measurement: particularly valuable for measuring behaviors.
  • Verification: allows comparison of self-reported behaviors against observed behaviors.
  • Detailed data: captures body language, gaze direction, and other nonverbal cues.
  • Example: A researcher studying workplace interactions can review the same 10-minute segment multiple times to code different behaviors (speech, gestures, eye contact).

🛠️ Steps for a successful video study

Asan and Montague outline the major phases:

Conceptualizing the study

  • Choose a research question suited to video data.
  • Decide on time frame, scope, additional instruments (interviews, surveys), personnel needs, and analysis method.

Legal and ethical issues

  • Ensure compliance with ethical guidelines and local regulations.
  • Obtain legal consent for recording.
  • Address privacy and confidentiality (participant identification, data storage).
  • Submit and gain IRB (Institutional Review Board) approval.

Participants and sampling

  • Determine number of participants and unit of analysis.
  • Decide on recruitment method (random or eligibility requirements).
  • Inform participants of benefits and risks.
  • Obtain informed consent from all participants.

Data collection and management

  • Choose high-quality cameras and audio recording style.
  • Determine camera layout and angles for clear participant view.
  • Establish recording protocols and data linking procedures.
  • Sync audio and video data.
  • Create secure storage protocols and back up data.
  • Train all research team members.

Data analysis

  • Review data quality.
  • Identify analysis software.
  • Create coding schemes based on variables of interest.
  • Conduct pilot analysis on a smaller sample first.

⚖️ Pros and cons of video vs. traditional observation

| Method | Key advantages | Key disadvantages |
| --- | --- | --- |
| Traditional observation | Rich data; can ask follow-up questions; effective for shadowing; researcher sees entire space | Researcher may be intrusive; aspects may be missed; hard to catch nonverbal cues; cognitive workload; low inter-rater reliability |
| Video method | Less intrusive; allows retrospective analysis; captures simultaneous complex interactions; permanent record; higher inter-rater reliability; enables self-evaluation | Labor-intensive coding; additional IRB procedures; confidentiality concerns; equipment cost; data management challenges; higher overall cost |
  • Both methods face the Hawthorne effect: people modify their behavior because they know they are being studied, which can distort findings.
  • Don't confuse: video is not automatically "better"—it trades intrusiveness for labor intensity and raises different ethical concerns.

🔒 Confidentiality as a primary concern

  • Most institutional research ethics boards require detailed plans for participant confidentiality.
  • Researchers must outline:
    • How video data will be collected
    • How it will be stored
    • Who will have access
    • When and how it will be destroyed
  • This is a significant barrier to wider adoption of videography in research.

10.5 Analysis of Qualitative Interview Data

🧭 Overview

🧠 One-sentence thesis

Qualitative interview analysis transforms recorded conversations into written transcripts and then systematically identifies patterns through iterative coding processes that condense large amounts of data into manageable, meaningful themes.

📌 Key points (3–5)

  • Transcription is the foundation: verbatim transcripts capturing words, tone, gestures, and nonverbal cues provide the raw material for analysis.
  • Coding condenses data: codes are shorthand representations of complex themes, discovered by reading transcripts repeatedly to identify patterns.
  • Two coding approaches exist: deductive coding starts with predefined interests, while inductive (open) coding lets themes emerge from the data itself.
  • Common confusion—deductive vs inductive: deductive begins with what you're looking for; inductive discovers what's there by reading without preset categories.
  • Analysis is iterative: open coding identifies broad themes, then focused/axial coding refines and connects them through multiple readings.

📝 Creating transcripts

📝 What transcription involves

To transcribe an interview means to create a complete, written copy of the recorded interview by playing the recording back and typing in each word that is spoken, noting who spoke which words.

  • Aim for verbatim transcription: word-for-word exactly what was said.
  • Include nonverbal elements: gestures, tone of voice, emphasis, pauses.
  • Example: an interviewee might roll their eyes or wipe tears—these nonverbal cues "speak volumes" about feelings but won't appear in audio alone.

🎯 Why researchers should transcribe their own interviews

  • The interviewer remembers context that audio cannot capture.
  • Nonverbal behaviors (eye rolls, obscene gestures, tears) are "invaluable" details.
  • Self-transcription allows the researcher to record associated interactions relevant to analysis.

🔍 The coding process

🔍 What coding means

A code is a shorthand representation of some more complex set of issues or ideas.

  • Purpose: achieve data management and data reduction—condense large amounts of data into smaller, understandable information.
  • Method: read and re-read transcripts until themes become clear.
  • The excerpt emphasizes reading "multiple times" as essential.

🧩 Inductive vs deductive coding

| Approach | Starting point | Process | Also called |
| --- | --- | --- | --- |
| Deductive | Pre-defined interests or well-specified topics | Use specific interests to identify relevant passages → descriptive coding → interpretative coding → pattern coding | Top-down |
| Inductive | No preset categories | General themes emerge from reading the data → open coding → collapse or elaborate categories | Open coding; bottom-up |

Don't confuse: Deductive means you know what you're looking for before you start; inductive means themes reveal themselves as you read.

🔄 Pattern coding example

  • The excerpt describes studying at-risk youth behaviors.
  • Discovery: behaviors have different characteristics depending on social context (school, family, work).
  • Recognizing this association = identifying a pattern.

🎨 Open coding stage

🎨 How open coding works

Open coding begins with the identification of general themes and ideas that emerge as the researcher reads through the data.

  • Requires "multiple analyses"—reading transcripts repeatedly.
  • Two possible directions:
    • Elaborate: make finer and finer distinctions within a category.
    • Collapse: start with very specific descriptive categories, then merge them into broader ones.
  • Codes "arise out of the material that is being examined."

🤔 Questions to ask when stuck

The excerpt cites Lofland and Lofland's helpful questions for identifying themes:

  1. Of what topic, unit, or aspect is this an instance?
  2. What question about a topic does this item of data suggest?
  3. What sort of answer does this data suggest (what proposition)?
  • Asking these about passages helps you begin to name potential themes and categories.
  • The excerpt notes that "getting started with the coding process is actually the hardest part."

🎯 Focused (axial) coding stage

🎯 What focused coding does

Focused coding involves collapsing or narrowing themes and categories identified in open coding.

Steps:

  • Read through open coding notes.
  • Identify related themes or categories.
  • Merge some themes together.
  • Give each collapsed/merged theme a name (code).
  • Identify passages of data that fit each named category.

📖 Defining your codes

  • Write brief definitions or descriptions of each code.
  • This gives meaning to your data.
  • Develops "a way to talk about your findings and what your data means."

🔁 Re-reading is essential

  • You will need to read through transcripts "several times."
  • The excerpt describes this as "tedious and laborious" but necessary.
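The collapse-and-merge logic of focused coding can be sketched as a small data-management exercise. This is an illustrative sketch only: the narrow open-code names and merge plan are invented, while the passages are drawn from the child-free adults example discussed in this section.

```python
# Illustrative sketch of focused coding as data reduction.
# Open-code names and the merge plan are invented for demonstration.

# Open coding produced many narrow codes, each tied to transcript passages.
open_codes = {
    "mentions maternal instinct": ["I don't have that maternal instinct."],
    "stereotypical family images": ["The woman is more involved with taking care of the child."],
    "pushes back at criticism": ["Am I less of a woman because I don't have kids? I don't think so!"],
    "redefines family": ["Family is the group of people that you want to be with."],
}

# Focused coding collapses related open codes under a named theme (the code).
merge_plan = {
    "Reinforce Gender": ["mentions maternal instinct", "stereotypical family images"],
    "Resist Gender": ["pushes back at criticism", "redefines family"],
}

def focused_codes(open_codes, merge_plan):
    """Merge narrow open codes into broader focused codes, keeping their passages."""
    merged = {}
    for theme, members in merge_plan.items():
        merged[theme] = []
        for code in members:
            merged[theme].extend(open_codes[code])
    return merged

themes = focused_codes(open_codes, merge_plan)
for theme, passages in themes.items():
    print(theme, "->", len(passages), "passages")
```

The point of the sketch is only the shape of the work: many specific labels are read repeatedly, grouped, and renamed as a smaller set of codes, each still traceable to passages of data.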

📊 Coding example from research

📊 Sample codes table

The excerpt provides a concrete example from research on child-free adults, showing two codes:

Code 1: Reinforce Gender

  • Description: Participants reinforce heteronormative ideals by (a) calling up stereotypical images of gender and family, and (b) citing their own "failure" to achieve those ideals.
  • Example excerpts: "The woman is more involved with taking care of the child"; "I don't have that maternal instinct"; "I question myself, like if there's something wrong with me."

Code 2: Resist Gender

  • Description: Participants resist gender norms by (a) pushing back against negative social responses, and (b) redefining family in ways that challenge normative notions.
  • Example excerpts: "Am I less of a woman because I don't have kids? I don't think so!"; "Family is the group of people that you want to be with. That's it."

💡 What this example shows

  • Each code has a clear description.
  • Multiple interview excerpts support each code.
  • The codes capture opposing responses to the same social pressure (gender norms).

💻 Software tools for qualitative analysis

💻 Available programs

Just as quantitative researchers use programs such as SPSS or MicroCase, qualitative researchers have specialized software designed for analyzing qualitative data.

🛠️ What these tools do

  • Import interview transcripts from electronic files.
  • Label or code passages.
  • Cut and paste passages.
  • Search for words or phrases.
  • Organize complex interrelationships among passages and codes.
  • Help researchers "organize, manage, sort, and analyze large amounts of qualitative data."
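Two of the core operations these packages perform, searching for a word or phrase and attaching codes to matching passages, can be mimicked in a few lines of ordinary Python. A minimal sketch; the transcript lines and codebook are invented for illustration:

```python
# Minimal sketch of two qualitative-software operations:
# keyword search and coding of passages. All data here is invented.

transcript = [
    ("Interviewer", "How do you feel about drinking and driving?"),
    ("Participant 1", "My family worries about it constantly."),
    ("Participant 1", "Honestly, family is what keeps me careful."),
]

def search(transcript, phrase):
    """Return (speaker, line) pairs containing the phrase, case-insensitively."""
    return [(s, t) for s, t in transcript if phrase.lower() in t.lower()]

def code_passages(transcript, codebook):
    """Attach every matching code from the codebook to each transcript line."""
    coded = []
    for speaker, text in transcript:
        codes = [c for c, phrase in codebook.items() if phrase.lower() in text.lower()]
        coded.append((speaker, text, codes))
    return coded

hits = search(transcript, "family")
coded = code_passages(transcript, {"family salience": "family"})
print(len(hits), "passages mention 'family'")
```

Real packages add interfaces for cutting, pasting, and mapping relationships among codes, but the underlying bookkeeping resembles this: passages linked to labels, retrievable on demand.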

10.6 Qualitative Coding, Analysis, and Write-Up: The How to Guide

🧭 Overview

🧠 One-sentence thesis

Qualitative coding follows a systematic inductive process—from open coding to identify concepts, through axial coding to explore relationships, to organized write-up that uses participant quotes to demonstrate validity.

📌 Key points (3–5)

  • Open coding: breaking down data into first-level concepts (master headings) and second-level categories (subheadings) using techniques like color-coded highlighting.
  • Axial coding: re-reading transcripts to confirm concepts/categories and explore relationships by asking about conditions, context, and consequences.
  • Common confusion: counting alone is insufficient for qualitative write-up—it cannot convey data richness; researchers must include direct participant quotes and deeper analysis.
  • Data organization: transferring concepts and categories into a data table helps structure both analysis and write-up.
  • Write-up approaches: multiple presentation methods exist (storytelling, metaphor, comparison, examining relations, counting), but all must draw on participant words to demonstrate validity.

🎨 Open coding process

🎨 What open coding does

Open coding: the first level of coding where the researcher identifies distinct concepts and categories in the data, which form the basic units of analysis.

  • The researcher breaks down data into:
    • First-level concepts = master headings
    • Second-level categories = subheadings
  • This is the foundational step that creates the structure for all subsequent analysis.

🖍️ How to execute open coding

  • Researchers commonly use color-coded highlighters to distinguish concepts and categories.
  • Example: If interviewees consistently mention teaching methods:
    • Each mention of teaching methods or related items gets the same color highlight
    • "Teaching methods" becomes a concept
    • Related items (types, etc.) become categories
    • All highlighted in the same color
  • Different colors distinguish each broad concept and category.
  • End result: transcripts contain many different colors of highlighted text.

📋 Transferring to outline

  • After highlighting, transfer the color-coded data into a brief outline:
    • Main headings = concepts
    • Subheadings = categories
  • This outline becomes the foundation for the next coding step.

🔄 Axial (focused) coding

🔄 Purpose and focus shift

| Coding stage | Primary focus | Goal |
| --- | --- | --- |
| Open coding | Text from interviews | Define concepts and categories |
| Axial coding | Concepts/categories already developed | Confirm accuracy and explore relationships |
  • Axial coding involves re-reading the interview text while using the concepts and categories from open coding.
  • Two main purposes:
    1. Confirm that concepts and categories accurately represent interview responses
    2. Explore how concepts and categories are related

🔍 Key questions in axial coding

The researcher asks:

  • What conditions caused or influenced concepts and categories?
  • What is/was the social/political context?
  • What are the associated effects or consequences?

💡 Example walkthrough

  • Suppose one concept is "Adaptive Teaching"
  • Two categories are "tutoring" and "group projects"
  • The researcher asks: What conditions caused or influenced tutoring and group projects to occur?
  • From transcripts, participants linked this with having a supportive principal
  • Axial code created: "our principal encourages different teaching methods"
  • This discusses the context and suggests a new category may be needed: "supportive environment"

Don't confuse: Axial coding is not creating entirely new concepts from scratch; it is a more directed approach to ensure all important aspects have been identified.

📊 Building the data table

📊 Table structure and purpose

The excerpt provides a table format that organizes the coding process:

| Step | Action | Example content |
| --- | --- | --- |
| Step 1: Open Coding | Major category/concept + associated concepts | Adaptive teaching; tutoring; group projects |
| Step 2: Axial Coding | Themes identified | Our principal encourages different teaching methods |
| Step 3: New Category | Category emerging from axial coding | Supportive environment |
| Step 4 | Add concepts relating to new category | (Continue building) |
| Step 5 | Continue exhaustive analysis | (Until complete) |
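The table can also be kept as plain structured data while coding, which makes it easy to extend row by row and later export for the write-up. A sketch using only the example content from the table above, written out with Python's standard `csv` module:

```python
import csv
import io

# The coding-progress table from the text, kept as rows of structured data.
rows = [
    {"step": "1: Open Coding", "action": "Major category/concept + associated concepts",
     "example": "Adaptive teaching; tutoring; group projects"},
    {"step": "2: Axial Coding", "action": "Themes identified",
     "example": "Our principal encourages different teaching methods"},
    {"step": "3: New Category", "action": "Category emerging from axial coding",
     "example": "Supportive environment"},
]

# Write the table to CSV text so it can be shared or pasted into a paper draft.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["step", "action", "example"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text)
```

Keeping the table in a file rather than on paper makes Steps 4 and 5 (adding concepts and continuing the analysis) a matter of appending rows as new categories emerge.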

⏱️ Time investment

  • While the table appears to show a quick process, it requires a lot of time to do well.
  • The table is effective for:
    • Organizing results
    • Organizing discussion in a research paper
    • Assisting with data analysis write-up

✍️ Analysis and write-up

✍️ Using the table for write-up

  • The data table is both an organizational tool and a write-up assistant.
  • First step: discuss the various categories and describe the associated concepts.
  • As part of this, describe the themes created in the axial coding process.

🎭 Presentation methods

The excerpt lists five ways to present qualitative data:

  1. Telling a story
  2. Using a metaphor
  3. Comparing and contrasting
  4. Examining relations among concepts/variables
  5. Counting

⚠️ Important limitation of counting

Counting should not be a stand-alone qualitative data analysis process because it cannot convey the richness of the data collected.

Appropriate uses of counting:

  • Stating the number of participants
  • Reporting how many participants spoke about a specific theme or category

Required addition: The researcher must present a much deeper level of analysis by:

  • Drawing out the words of participants
  • Including direct quotes from participant interviews
  • Using quotes to demonstrate the validity of various themes
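Counting used in this limited, supporting role, reporting how many participants spoke to a theme while pairing each count with a direct quote, might look like the following. An illustrative sketch: the participant labels are invented, and the quotes reuse the child-free adults example from the previous section.

```python
# Illustrative sketch: count participants per theme, then pair each count
# with a direct quote. Participant labels are invented examples.

coded_data = {
    "Resist Gender": {
        "Participant #1": "Am I less of a woman because I don't have kids? I don't think so!",
        "Participant #3": "Family is the group of people that you want to be with.",
    },
    "Reinforce Gender": {
        "Participant #2": "I don't have that maternal instinct.",
    },
}

def summarize(coded_data):
    """For each theme: a participant count plus one illustrative direct quote."""
    summary = {}
    for theme, quotes in coded_data.items():
        speaker, quote = next(iter(quotes.items()))
        summary[theme] = {"n": len(quotes), "example": f'{speaker} stated, "{quote}"'}
    return summary

report = summarize(coded_data)
print(report["Resist Gender"]["n"], "participants spoke to 'Resist Gender'")
```

The count alone ("2 participants spoke to Resist Gender") says little; the paired quote is what conveys the richness of the data and demonstrates the theme's validity.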

🔢 Identifying participants in write-up

  • Best practice: "identify" participants through a number, alphabetical letter, or pseudonym (e.g., "Participant #3 stated…")
  • Why this matters: demonstrates you are drawing data from all participants.
  • Analogy to quantitative research:
    • Quantitative: present data for all 400 participants, show (n=400) in tables
    • Qualitative: assigning participant numbers/letters/pseudonyms serves the same purpose—confirming to the reader how many participants answered a particular research question

Don't confuse: This numbering system is not about anonymity alone; it is about demonstrating comprehensive data use across all participants.


10.7 Strengths and Weaknesses of Qualitative Interviews

🧭 Overview

🧠 One-sentence thesis

Qualitative interviews excel at gathering detailed, in-depth information in participants' own words but require significant time, resources, and emotional investment from researchers.

📌 Key points (3–5)

  • Core strength: Qualitative interviews allow exploration of topics in much greater depth than almost any other method, with participants sharing information in their own words and perspectives.
  • Unique observational advantage: In-person interviews enable researchers to observe body language, interview location choices, and other non-verbal data beyond oral responses.
  • Major limitation: The method is time-intensive and expensive, requiring interview guide creation, sampling, conducting interviews, transcription, and coding before analysis even begins.
  • Shared survey weakness: Like quantitative surveys, qualitative interviews depend on respondents' ability to accurately and honestly recall details about their lives, thoughts, and behaviors.
  • Emotional consideration: Researchers working with sensitive topics must prepare for the emotional toll of listening to difficult stories.

💪 What makes qualitative interviews powerful

💪 Depth of information

  • Qualitative interviews are described as "an excellent way to gather detailed information."
  • Whatever topic interests the researcher can be explored in much more depth than with almost any other method.
  • Participants can elaborate in ways not possible with other methods like survey research.

🗣️ Participant voice and perspective

  • Participants share information "in their own words and from their own perspectives."
  • This contrasts with fitting perspectives into "perhaps limited response options provided by the researcher."
  • The method respects how participants naturally frame and understand their experiences.

🔍 Understanding social processes

Qualitative interviews are especially useful when a researcher's aim is to study social processes, or the "how" of various phenomena.

  • The focus is on mechanisms and processes, not just outcomes.
  • Example: Understanding how someone navigates a decision over time, rather than just what they decided.

👁️ Beyond verbal data

  • An "often overlooked" benefit of in-person qualitative interviews: researchers can make observations beyond oral reports.
  • Observable elements include:
    • Respondent's body language
    • Choice of time for the interview
    • Choice of location for the interview
  • These observations can provide useful additional data.

⚠️ Practical and resource challenges

⏱️ Time intensity

The process involves multiple labor-intensive stages:

| Stage | What it involves |
| --- | --- |
| Preparation | Creating interview guide, identifying sample |
| Data collection | Conducting interviews |
| Transcription | Labor-intensive even before coding begins |
| Analysis | Coding and interpretation |
  • The excerpt emphasizes that conducting interviews is "just the beginning of the process."
  • Transcription alone is described as "labor-intensive."

💰 Financial costs

  • Qualitative interviewing "can be quite expensive."
  • Researchers commonly offer monetary incentives or thank-you gifts to participants.
  • Justification: You are asking for more of participants' time than if you had mailed them a questionnaire with closed-ended questions.
  • Don't confuse: The cost isn't just researcher time—it includes direct payments to participants.

😰 Emotional demands

  • Conducting qualitative interviews is "not only labor intensive but also emotionally taxing."
  • Important consideration: Researchers working on sensitive topics should assess "their own abilities to listen to stories that may be difficult to hear."
  • This is a preparedness issue—researchers must consider their emotional capacity before beginning.

🔄 Methodological limitations

🔄 Reliance on respondent accuracy

As with quantitative survey research, qualitative interviews rely on respondents' ability to accurately and honestly recall whatever details about their lives, circumstances, thoughts, opinions, or behaviors are being examined.

  • This is a shared weakness with survey methods.
  • The method depends on:
    • Accuracy: Can respondents remember correctly?
    • Honesty: Will respondents report truthfully?
  • Applies to all aspects being examined: lives, circumstances, thoughts, opinions, behaviors.
  • Example: If studying past events, participants may have imperfect memories or may unconsciously reshape narratives over time.

11.1 Conducting Quantitative Interviews

🧭 Overview

🧠 One-sentence thesis

Quantitative interviews use standardized question-and-answer formats to collect data consistently across many respondents, minimizing interviewer effect while prioritizing large, representative samples.

📌 Key points (3–5)

  • What quantitative interviews are: sometimes called survey interviews or standardized interviews; questions and answer options are read aloud to respondents rather than self-completed.
  • How they differ from qualitative interviews: emphasize consistency in how questions are presented to every respondent, rather than letting respondents help determine interview progression.
  • Key tool—interview schedule: a rigid list of questions and answer options read identically to all respondents to minimize interviewer effect.
  • Common confusion: quantitative vs qualitative interview schedules—quantitative uses a rigid schedule for consistency; qualitative uses a flexible guide that adapts to respondents.
  • Practical challenges: telephone interviewing faces problems with representativeness (mobile vs landline), cooperation, engagement, and social desirability bias compared to face-to-face interviews.

📋 What quantitative interviews are

📋 Definition and format

Quantitative interviews (also called survey interviews or standardized interviews): interviews where questions and answer options are read to respondents rather than having respondents complete a survey on their own.

  • They resemble survey-style question-and-answer formats.
  • Questions are typically closed-ended (pre-set answer choices).
  • A few open-ended questions may be included, but coding differs from qualitative in-depth interview data.
  • Much of what applies to survey research also applies here.

🔍 How they differ from surveys

  • The key difference: in standardized interviews, the researcher reads questions and answer options aloud.
  • In surveys, respondents complete the questionnaire on their own.
  • The interaction is still researcher/respondent, but the process and analysis differ from qualitative interviews.

🎯 The interview schedule and consistency

🎯 What an interview schedule is

Interview schedule: a list of questions and answer options that the researcher reads to respondents; usually more rigid than an interview guide.

  • Contains the exact wording the researcher will use.
  • Designed to be followed strictly, not adapted during the interview.
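Because the schedule is rigid, it lends itself to a fixed data structure: the exact wording and answer options are stored once and presented identically to every respondent. A minimal sketch, with an invented question and answer options:

```python
# Minimal sketch of a quantitative interview schedule: fixed wording and
# fixed answer options, read identically to every respondent.
# The question and options below are invented examples.

schedule = [
    {
        "question": "In the past month, how often did you exercise?",
        "options": ["Never", "1-2 times", "3-5 times", "More than 5 times"],
    },
]

def render_item(item):
    """Produce the exact script the interviewer reads aloud, unchanged per respondent."""
    lines = [item["question"]]
    lines += [f"  ({i}) {opt}" for i, opt in enumerate(item["options"], start=1)]
    return "\n".join(lines)

# Every respondent hears precisely this script, which is what minimizes
# interviewer effect: variation comes from respondents, not presentation.
script = render_item(schedule[0])
print(script)
```

The interviewer marks the chosen option number on the schedule, which is why note-taking during a quantitative interview is far less disruptive than in a qualitative one.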

🔒 Why consistency matters

  • Goal: pose every question and answer option in the very same way to every respondent.
  • Reason: to minimize interviewer effect.

Interviewer effect: possible changes in the way an interviewee responds based on how or when questions and answer options are presented by the interviewer.

  • Example: if one interviewer reads a question enthusiastically and another reads it flatly, respondents might answer differently—not because of real differences in opinion, but because of presentation differences.

🆚 Quantitative vs qualitative interview approach

| Aspect | Quantitative interviews | Qualitative interviews |
| --- | --- | --- |
| Researcher role | Maintain strict consistency; read questions identically | Emphasize respondents' roles in determining interview progression |
| Schedule/guide | Rigid interview schedule | Flexible interview guide |
| Goal | Minimize interviewer effect | Explore depth and context |

Don't confuse: Both use interview schedules/guides, but qualitative researchers adapt and follow respondents' leads; quantitative researchers stick to the script for consistency.

🎙️ Recording and note-taking

🎙️ When recording is advised

  • Quantitative interviews may be recorded, but it's not always necessary.
  • Because questions are mostly closed-ended, taking notes during the interview is less disruptive than in qualitative interviews.
  • Recording is advised if:
    • The interview contains open-ended questions.
    • The researcher wants to assess possible interviewer effect (e.g., check if differences in responses are due to how the interviewer presented questions rather than real respondent differences).

📝 Note-taking practicality

  • Closed-ended questions allow the interviewer to quickly mark responses without extensive writing.
  • Example: checking boxes or circling numbers on the schedule while the respondent answers.

📞 Sample size and interview modes

📞 Sample priorities

  • Quantitative interviewers are usually more concerned with gathering data from a large, representative sample.
  • Collecting data from many people via interviews is laborious (time-consuming and resource-intensive).

📞 Telephone interviewing challenges

  • In the past, telephone interviewing was quite common.
  • Problem with landline phones: growth in mobile phone use raises concerns about whether traditional landline telephone interviews are representative of the general population.
  • Other drawbacks:
    • Not everyone has a phone (mobile or landline).
    • Research shows phone interview respondents were:
      • Less cooperative.
      • Less engaged in the interview.
      • More likely to express dissatisfaction with interview length (compared to face-to-face respondents).
      • More suspicious of the interview process.
      • More likely to present themselves in a socially desirable manner (giving answers they think are "correct" or acceptable rather than honest answers).

🆚 Telephone vs face-to-face interviews

| Mode | Cooperation & engagement | Social desirability bias | Suspicion |
| --- | --- | --- | --- |
| Telephone | Lower cooperation, less engagement, more dissatisfaction | Higher (more likely to give socially desirable answers) | More suspicious |
| Face-to-face | Higher cooperation, more engagement | Lower | Less suspicious |

Don't confuse: The problem is not just about who has phones—it's also about how respondents behave differently on the phone versus in person.


11.2 Analysis of Quantitative Interview Data

🧭 Overview

🧠 One-sentence thesis

Quantitative interview data analysis transforms responses into numerical codes for statistical pattern identification, with open-ended questions requiring more complex coding processes than simple closed-ended responses.

📌 Key points (3–5)

  • Core process: coding responses numerically, entering them into analysis software, and running statistical commands to find patterns.
  • Closed-ended questions: straightforward numerical coding (e.g., "no" = 0, "yes" = 1).
  • Open-ended questions: require more complex coding—either inductive (codes emerge from data) or deductive (researcher predicts likely responses and assigns values).
  • Common confusion: quantitative coding of open-ended questions differs from qualitative coding—the goal is to condense data into numbers, not to preserve verbatim excerpts.
  • Key distinction: quantitative methods aim to represent and condense data into numbers, unlike qualitative approaches that emphasize rich descriptions.

🔢 Basic quantitative interview analysis

🔢 Standard workflow

The typical analysis process follows three steps:

  1. Code response options numerically
  2. Enter numeric responses into a data analysis computer program
  3. Run statistical commands to identify patterns across responses
  • This mirrors survey data analysis procedures.
  • The goal is to transform responses into quantifiable data points.

🔐 Simple closed-ended coding

  • Straightforward assignment of numerical labels to response categories.
  • Example: assigning "no" a label of 0 and "yes" a label of 1.
  • This is the simplest form of quantitative coding.
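The three-step workflow above can be sketched in a few lines of Python. This is a minimal illustration, not a standard instrument: the responses and the 0/1 codebook are invented, and a simple frequency count stands in for the statistical commands an analysis program would run.

```python
from collections import Counter

# Hypothetical closed-ended responses recorded by an interviewer
responses = ["yes", "no", "yes", "yes", "no", "yes"]

# Step 1: code response options numerically ("no" = 0, "yes" = 1)
codebook = {"no": 0, "yes": 1}
coded = [codebook[r] for r in responses]

# Steps 2-3: enter the numeric data and run a simple statistical summary
counts = Counter(coded)
proportion_yes = coded.count(1) / len(coded)

print(counts)          # frequency of each numeric code
print(proportion_yes)  # share of respondents answering "yes"
```

In practice the numeric data would be entered into a package such as SPSS, R, or Stata, but the logic is the same: categories become numbers, and patterns are found by counting and comparing those numbers.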

🔓 Handling open-ended questions

🔓 Why open-ended questions complicate analysis

Open-ended questions in quantitative interviews are "typically numerically coded, just as closed-ended questions are, but the process is a little more complex."

  • Cannot simply assign 0 or 1 to responses.
  • Requires interpretation before numerical assignment.
  • Still aims for numerical representation, unlike qualitative analysis.

🔄 Inductive approach to open-ended coding

When using an inductive process:

  • The researcher develops codes from the data itself (as described in Chapter 10).
  • Key difference from qualitative coding: the researcher assigns a numerical value to codes rather than keeping descriptive labels.
  • Verbatim excerpts from interviews are typically not utilized in later reports.
  • The aim remains to represent and condense data into numbers.

🎯 Deductive approach to open-ended coding

The deductive process is often preferred for quantitative open-ended questions:

How it works:

  1. Researcher begins with an idea about likely responses before data collection
  2. Assigns a numerical value to each anticipated response
  3. Reviews participants' actual open-ended responses
  4. Assigns each response the numerical value of the anticipated category it most closely matches
  • This approach is more structured than inductive coding.
  • Requires the researcher to predict response categories in advance.
  • Example: If a researcher expects responses about job satisfaction to fall into "very satisfied," "satisfied," "neutral," "dissatisfied," "very dissatisfied," they assign numbers 5, 4, 3, 2, 1 respectively, then match actual responses to the closest category.
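The deductive steps can be illustrated with a short Python sketch. The satisfaction categories and 5-to-1 values come from the example above; the phrase-matching rule and the sample responses are hypothetical illustrations, not a standard coding procedure.

```python
# Step 1-2: anticipated response categories, each assigned a numerical
# value before data collection (hypothetical codebook from the example)
codebook = {
    "very satisfied": 5,
    "satisfied": 4,
    "neutral": 3,
    "dissatisfied": 2,
    "very dissatisfied": 1,
}

def code_response(text: str) -> int:
    """Assign the value of the anticipated category the response most
    closely matches (here, via simple phrase matching)."""
    text = text.lower()
    # Check longer phrases first so "dissatisfied" is not mistaken
    # for its substring "satisfied"
    for category in ["very dissatisfied", "very satisfied",
                     "dissatisfied", "satisfied", "neutral"]:
        if category in text:
            return codebook[category]
    return 3  # default to the middle category when nothing matches

# Steps 3-4: review actual open-ended responses and assign values
responses = [
    "I'm very satisfied with my job overall",
    "Honestly, pretty dissatisfied lately",
    "It's fine, I feel neutral about it",
]
coded = [code_response(r) for r in responses]
print(coded)  # → [5, 2, 3]
```

In real projects the matching is usually done by a human coder's judgment rather than keyword rules; the point of the sketch is that each free-text answer ends up as a single number ready for statistical analysis.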

🔀 Quantitative vs qualitative coding distinction

🔀 Core philosophical difference

| Aspect | Quantitative coding | Qualitative coding |
| --- | --- | --- |
| End goal | Represent and condense data into numbers | Preserve rich descriptions and context |
| Use of excerpts | May not utilize verbatim excerpts in reports | Emphasizes verbatim excerpts and quotes |
| Output format | Numerical values for statistical analysis | Codes, descriptions, and interview excerpts |
| Data reduction | High: collapses responses into categories | Lower: retains nuance and detail |

⚠️ Don't confuse

  • Quantitative open-ended coding still produces numbers for statistical analysis, even though it starts with text responses.
  • Qualitative coding of interviews ends with themes, descriptions, and rich excerpts—not numerical datasets.
  • Both may code open-ended responses, but the purpose and output differ fundamentally.

11.3 Strengths and Weaknesses of Quantitative Interviews

🧭 Overview

🧠 One-sentence thesis

Quantitative interviews offer higher response rates and opportunities to clarify confusion compared to mailed questionnaires, but they introduce interviewer effects, cost more, and take more time.

📌 Key points (3–5)

  • Main comparison framework: strengths and weaknesses are evaluated relative to hard copy (mailed) questionnaires.
  • Key strengths: higher response rates and ability to reduce respondent confusion through real-time clarification.
  • Key weaknesses: interviewer effect (additional variables introduced by a person administering questions), higher cost, and greater time consumption.
  • Common confusion: quantitative interviews vs. qualitative interviews—quantitative interviews resemble survey research in format but involve direct researcher-subject interaction like qualitative interviews.
  • Trade-off: quantitative researchers often choose written questionnaires over interviews to reach larger samples at lower cost despite the benefits of interaction.

✅ Advantages of quantitative interviews

📈 Higher response rates

  • Quantitative interviews tend to achieve higher response rates than mailed questionnaires.
  • The presence of an interviewer increases the likelihood that respondents will complete the interview.
  • Example: A researcher conducting phone or in-person interviews is more likely to get completed responses than someone who mails out questionnaires and waits for returns.

🗣️ Reducing respondent confusion

  • Interviews provide an opportunity for the researcher to clarify or explain confusing items.
  • With a questionnaire, if a respondent is unsure about the meaning of a question or answer option, they probably will not have the opportunity to get clarification from the researcher.
  • An interview, on the other hand, gives the researcher an opportunity to address confusion in real time.
  • Example: If a respondent doesn't understand what a rating scale means, the interviewer can explain it immediately rather than having the respondent guess or skip the question.

⚠️ Drawbacks of quantitative interviews

🎭 Interviewer effect

Interviewer effect: the additional variables that might influence a respondent when a person administers questions, beyond the impression created by how questions are presented on paper.

  • This is the largest drawback and of most concern to quantitative researchers.
  • Hard copy questionnaires may create an impression based on how questions are presented, but having a person administer questions introduces many additional variables.
  • These variables can influence respondent answers in ways that are difficult to control or measure.
  • Mitigation: The interviewer's best efforts to be as consistent as possible with quantitative data collection are key.
  • Don't confuse: This is distinct from question wording effects—interviewer effect refers specifically to the influence of the interviewer as a person (voice, appearance, manner, etc.), not just the questions themselves.

💰 Cost and time considerations

  • Interviewing respondents is much more time consuming and expensive than mailing questionnaires.
  • Quantitative researchers may opt for written questionnaires over interviews on the grounds that they will be able to reach a large sample at a much lower cost.
  • The trade-off: personal interaction benefits vs. ability to reach more people with limited resources.
  • Example: A researcher with a limited budget might choose to mail 1,000 questionnaires rather than conduct 100 interviews, accepting lower response rates in exchange for broader reach.

🔄 Positioning quantitative interviews

📊 Relationship to other methods

| Aspect | Quantitative interviews | Hard copy questionnaires | Qualitative interviews |
| --- | --- | --- | --- |
| Question format | Structured, like survey research | Structured, like survey research | Open-ended, flexible |
| Researcher interaction | Direct interaction with subjects | No interaction | Direct interaction with subjects |
| Response rates | Higher | Lower | Varies |
| Interviewer effect | Present (major concern) | Absent | Present (but handled differently) |
| Cost & time | High | Low | High |

🎯 When to choose quantitative interviews

  • The excerpt implies that quantitative interviews are chosen when:
    • Higher response rates are critical to the research design.
    • Respondent confusion is anticipated and clarification is important.
    • The researcher has sufficient resources (time and money) to conduct interviews.
  • Conversely, written questionnaires are preferred when:
    • The sample size needs to be large.
    • Resources are limited.
    • Standardization without interviewer influence is paramount.

11.4 Issues to Consider for All Interview Types

🧭 Overview

🧠 One-sentence thesis

Researchers conducting interviews—whether qualitative or quantitative—must actively manage power imbalances, location choices, and interpersonal relationships to collect valid data while respecting participants' dignity and safety.

📌 Key points (3–5)

  • Power differential: The interviewer sets the agenda and asks participants to reveal information without reciprocating, creating an inherent imbalance that requires deliberate attention.
  • Location matters: Interview settings should balance participant comfort and convenience with practical concerns like minimizing distractions and ensuring researcher safety.
  • Rapport and respect: Building a bond of mutual trust is essential, but researchers must avoid manipulating participants into believing the relationship is closer than it actually is.
  • Active listening and probing: Both qualitative and quantitative interviewers request more information throughout the interview, though quantitative probes must be uniform while qualitative probes can follow the respondent's direction.
  • Common confusion: Giving participants significant control can help balance power, but it may also create ethical challenges like loss of academic freedom or oversimplification of theoretical constructs.

⚖️ Power dynamics in interviews

⚖️ Why power imbalance exists

The researcher controls the interview in several ways:

  • Sets the agenda and leads the conversation
  • Asks participants to reveal personal information they may not typically share
  • Does not reciprocate by revealing much or anything about themselves
  • Most respondents perceive the researcher as "in charge," even in qualitative interviews where participants have some control over topics

📋 Balancing power across research phases

The excerpt presents strategies organized into three phases:

| Before the Research | During the Research | After the Research |
| --- | --- | --- |
| Examine goals and reasons behind the study | Ensure language is tailored to the interviewee's capabilities and life experiences | Check obligations to ensure the study population will not be hurt by what you publish |
| Examine personal commitment to ensure no harm | Show awareness of the developing power relationship; provide opportunities for feedback or objection | Do not distort the meaning participants intend |
| Clarify roles, responsibilities, and rights at various stages | Provide reminders about the nature of the study if an interviewee begins discussing intimate or sensitive issues | Protect the anonymity of participants |
| Provide information about expected distribution of knowledge | Commit to the principle of justice, ensuring the burden of participating does not outweigh the benefits | Use participants' own language in writing to best reflect what they wanted to share |
| Commit to protecting privacy and anonymity | Ensure the right to collect and use the collected data | Provide thick description of the context, your own experience, and the values and pressures that play a role in how you interpret and present the data |
| | | Use reflexivity to be transparent and accountable for the limitations of your methodology |

⚠️ Ethical challenges of participant involvement

Karnieli-Miller et al. (2009) warn that permitting participants to play a significant role in the research can lead to:

  • Loss of the researcher's right to intellectual and academic freedom
  • Oversimplification of theoretical constructs that may arise from the research

Don't confuse: balancing power does not mean giving participants complete control over the research process.

🔓 Transparency as a power-balancing tool

Another way to balance the power differential:

  • Make the intent of your research very clear to subjects
  • Share your rationale for conducting the research and the research questions that frame your work
  • Explain how the data you gather will be used and stored
  • Clarify how privacy will be protected, including who will have access to the data
  • Explain what procedures (such as using pseudonyms) you will take to protect identities
  • Many of these details will be covered by institutional review board informed consent procedures, but researchers should be attentive even beyond formal requirements

🤷 No easy answers

The excerpt notes that when it comes to handling the power differential:

  • There are no easy answers
  • There is no general agreement as to the best approach
  • It is nevertheless an issue researchers must note when conducting any form of research, particularly those involving interpersonal interactions and relationships with participants

📍 Location considerations

📍 Participant comfort vs. practical concerns

One way to balance power is to conduct the interview in a location of participants' choosing, where they will feel most comfortable answering questions.

Possible locations include:

  • Respondents' homes or offices
  • Researchers' homes or offices
  • Coffee shops
  • Restaurants
  • Public parks
  • Hotel lobbies

🔇 Avoiding distractions

While it is important to allow respondents to choose the location that is most convenient and comfortable, it is also important to identify a location where there will be few distractions.

Examples of distractions:

  • Some coffee shops and restaurants are so loud that recording the interview can be a challenge
  • The presence of children during an interview can be distracting for both interviewer and interviewee (though observing such interactions could be invaluable to your research, depending on the topic)

Suggestion: As an interviewer, you may want to suggest a few possible locations and note the goal of avoiding distractions when you ask respondents to choose a location.

🛡️ Safety and accessibility limits

The extent to which a respondent should be given complete control over choosing a location must be balanced by:

  • Accessibility of the location to you, the interviewer
  • Safety and comfort level with the location

While it is important to conduct interviews in a location that is comfortable for respondents, doing so should never come at the expense of your safety.

🤝 Building and maintaining rapport

🤝 What rapport means

Rapport: the sense of connection you establish with a participant; the development of a bond of mutual trust between the researcher and the participant.

According to Palys and Atchison (2014):

  • Rapport is the basis upon which access is given to the researcher
  • It is the basis upon which valid data are collected
  • A good rapport between you and the person you interview is crucial to successful interviewing

⚠️ Misguided approaches to rapport

Saylor Academy (2012) warns that some misguided researchers have attempted to develop rapport with their participants to a level that the participant believes the relationship is closer than it is.

Don't confuse: building rapport does not mean creating a false sense of personal closeness.

🙏 Respect as the foundation

The key is respect. At its core, the interview interaction should not differ from any other social interaction in which you:

  • Show gratitude for a person's time
  • Show respect for a person's humanity
  • Conduct the interview in a way that is culturally sensitive
  • In some cases, educate yourself about your study population and receive training to help you learn to communicate effectively with research participants

🚫 Non-judgmental stance

  • Do not judge your research participants
  • You are there to listen to them, and they have been kind enough to give you their time and attention
  • Even if you disagree strongly with what a participant shares in an interview, your job as the researcher is to gather the information being shared with you, not to make personal judgments about it

Example: A researcher studying controversial political views must listen and record what participants say, even if the researcher personally disagrees with those views.

👂 Active listening and probing

👂 What active listening means

Active listening: probing the respondent for more information from time to time throughout the interview.

Probe: a request for more information.

The questions you ask respondents should indicate that you have actually heard what they have said.

🔢 Probing in quantitative interviews

In quantitative interviews:

  • Probing should be uniform
  • Often quantitative interviewers will predetermine what sorts of probes they will use
  • The goal is consistency across all respondents

💬 Probing in qualitative interviews

In qualitative interviews:

  • Techniques are designed to go with the flow and take whatever direction the respondent establishes during the interview
  • Qualitative techniques better lend themselves to following up with respondents and asking them to explain, describe, or otherwise provide more information

📝 Preparing probes in advance

It is worth your time to come up with helpful probes in advance of an interview, even in the case of a qualitative interview:

  • You do not want to find yourself stumped or speechless after a respondent has just said something about which you'd like to hear more
  • Practicing your interview in advance with people who are similar to those in your sample is a good idea

Both qualitative and quantitative interviewers probe respondents, though the way they probe usually differs.


12.1 Field Research: What is it?

🧭 Overview

🧠 One-sentence thesis

Field research enables researchers to understand behavior in its natural context by observing and interacting with people in their everyday settings, which is essential because behavior only has meaning in the context where it occurs.

📌 Key points (3–5)

  • Why field research matters: behavior only has meaning in context, so "in context" is the only place where it can be accurately observed and understood.
  • What field research is: a qualitative method of data collection that involves understanding, observing, and interacting with people in their natural settings.
  • The participant-observer continuum: researchers vary from complete participant to complete observer, though true "complete observer" status is increasingly questioned.
  • Common confusion: field research vs related terms—ethnography (used in anthropology), participant observation (used in sociology), and field research (the umbrella term covering all activities).
  • When to use it: field research is well equipped to answer "how" questions about processes, interactions, and how events unfold, rather than "why" questions.

🎯 Why conduct field research

🎯 Behavior requires context

  • From a qualitative perspective, behavior only has meaning in the context in which it occurs.
  • Therefore "in context" is the only place where behavior can accurately be observed.
  • Without field research, important truths about actual behaviors may remain hidden (the excerpt mentions Hochschild's study on household duties as an example where field research revealed realities that might otherwise have been missed).

🎯 Understanding requires replication of conditions

  • If the reason for research is to understand behavior, field research is the most relevant and valid option.
  • It enables the duplication of "in context" conditions that influence behavior.
  • It provides behavior with its meaning by preserving the natural setting.

🔍 What field research is

🔍 Core definition

Field research: a qualitative method of data collection aimed at understanding, observing, and interacting with people in their natural settings.

  • When social scientists talk about being in "the field," they are talking about being out in the real world and involved in the everyday lives of the people they are studying.
  • Observation in research is more than just looking—it involves looking in a planned and strategic way with a purpose.

🏷️ Related terminology

| Term | Discipline | Relationship |
| --- | --- | --- |
| Field research | General | Umbrella term for all field activities |
| Ethnography | Anthropology | Most commonly used term |
| Participant observation | Sociology | Commonly used term |

  • The excerpt uses two main terms: field research and participant observation.
  • Field research is the umbrella term that includes the myriad activities that field researchers engage in when they collect data.
  • Don't confuse: ethnography is not the same as ethnomethodology (which will be defined elsewhere).

🧰 What field researchers do

Field researchers engage in multiple activities:

  • They participate
  • They observe
  • They usually interview some of the people they observe
  • They typically analyze documents or artifacts created by the people they observe

⚖️ The participant-observer continuum

⚖️ The spectrum

  • Researchers conducting participant observation vary in the extent to which they participate or observe.
  • This is referred to as the "participant-observer continuum," ranging from complete participant to complete observer.
  • However, researchers increasingly question whether a researcher can truly be at the "complete observer" end of the continuum.
  • The current view: even as an observer, the researcher is participating in what is being studied and therefore cannot really be a complete observer.

👁️ Observer role: pros and cons

Pros of observing (less participation):

  • Sitting back and observing may grant researchers opportunities to see interactions that they would miss if they were more involved.

Cons of observing:

  • Depending upon how fully researchers observe their subjects (as opposed to participating), they may miss important aspects of group interaction.
  • They may not have the opportunity to fully grasp what life is like for the people they observe.

🤝 Participant role: pros and cons

Pros of participating:

  • Participation has the benefit of allowing researchers a real taste of life in the group that they study.
  • Some argue that participation is the only way to understand what it is that is being investigated.

Cons of participating:

  • Fully immersed participants may find themselves in situations that they would rather not face but from which they cannot excuse themselves because they have adopted the role of a fully immersed participant.
  • Participants who do not reveal themselves as researchers must face the ethical quandary of possibly deceiving their subjects.

🎚️ The reality: middle ground

  • In reality, much field research lies somewhere near the middle of the observer/participant continuum.
  • Field researchers typically participate to at least some extent in their field sites.
  • There are also times when they may strictly observe.

❓ When field research is appropriate

❓ The types of questions it answers

Field research is well equipped to answer "how" questions.

  • Field researchers ask how the processes they study occur.
  • They ask how the people they spend time with in the field interact.
  • They ask how events unfold.
  • Don't confuse: survey researchers often aim to answer "why" questions, whereas field researchers focus on "how" questions.

📋 Examples of field research questions

The excerpt provides a table of real field research projects showing the range of "how" questions:

  • "What are the prospects for cross-cultural, interdisciplinary and methodologically plural approach to well-being?" (interviews in Zambia over 2 years)
  • "What are the novel and innovative solutions to critical limitations in existing police research?"
  • "What visions of the law do policewomen and popular legal advocates mobilize when they construct their responses to victims? How is evidence constructed in each specific setting?" (2012-2013)
  • "How do firefighters perceive their risk as part of the 'cancer'?" (over several months)
  • "What is the nature of the encounters police have with persons affected by mental illness, and the ways in which such encounters are resolved by policy?" (eighteen months)
  • "Are students engaging in social media and other non-study behaviours more often than they are studying?" (several weeks at two universities)

Note: Many of these studies had more than one research question; only one per study is listed for demonstration purposes.


12.2 Field Research: When is it Appropriate?

🧭 Overview

🧠 One-sentence thesis

Field research is most appropriate for answering "how" questions about processes, interactions, and events as they unfold in natural settings, rather than "why" questions.

📌 Key points (3–5)

  • What field research asks: "how" questions—how processes occur, how people interact, how events unfold—rather than "why" questions that surveys typically address.
  • When to use it: when you need firsthand, detailed data about everyday life, social context, and processes over time.
  • Common confusion: field research vs. surveys—surveys aim for breadth and "why" explanations; field research sacrifices breadth for depth and "how" descriptions.
  • Strengths: yields very detailed data, emphasizes social context, and can uncover hidden or gradual social facts.
  • Trade-offs: narrow focus, time-intensive, emotionally taxing, and reaches fewer individuals than surveys.

🎯 What field research is designed to answer

🎯 "How" questions, not "why" questions

  • Field research is well equipped to answer "how" questions.
  • Survey researchers often aim to answer "why" questions.
  • Field researchers focus on:
    • How the processes they study occur
    • How the people they spend time with interact
    • How events unfold

🔍 Don't confuse with surveys

  • Surveys seek to explain causes ("why").
  • Field research seeks to describe mechanisms and sequences ("how").
  • Example: A survey might ask why people hold certain beliefs; field research observes how those beliefs shape daily interactions.

📋 Examples of field research questions

📋 Real research questions from the excerpt

The excerpt provides a table of actual field research projects. Each demonstrates the "how" focus:

| Research question | Duration/setting | Focus |
| --- | --- | --- |
| What are the prospects for cross-cultural, interdisciplinary approaches to well-being? | 2 years of interviews in Chiawa, Zambia | How well-being is understood across cultures |
| What are novel solutions to limitations in police research? | Not specified | How to improve police research methods |
| What visions of law do policewomen and advocates mobilize in responses to victims? How is evidence constructed? | Between 2012 and 2013 | How legal actors construct responses and evidence |
| How do firefighters perceive their risk as part of "cancer"? | Over several months | How risk perception operates |
| What is the nature of police encounters with persons affected by mental illness, and how are such encounters resolved? | Eighteen months | How encounters unfold and are resolved |
| Are students engaging in social media more than studying? | Several weeks in one semester at two universities | How students allocate time |

📌 Note from the excerpt

Many studies had more than one research question; only one per study is listed for demonstration purposes.

✅ Strengths of field research

✅ Firsthand, detailed data

Field research allows researchers to gain firsthand experience and knowledge about the people, events, and processes that they study.

  • No other method offers quite the same kind of close-up lens on everyday life.
  • Researchers can obtain very detailed data about people and processes, perhaps more detailed than with any other method.
  • Example: observing how a group makes decisions in real time, rather than asking them to recall it later.

🌍 Understanding social context

  • Field research is an excellent method for understanding the role of social context in shaping people's lives and experiences.
  • It enables a greater understanding of the intricacies and complexities of daily life.
  • Don't confuse: other methods may collect data about context, but field research embeds the researcher in context.

🔎 Uncovering hidden social facts

  • Field research may uncover elements of people's experiences or group interactions of which we were not previously aware.
  • This is a unique strength of field research.
  • With other methods (interviews, surveys), we cannot expect respondents to answer questions they don't know the answer to or provide information of which they are unaware.
  • Because field research typically occurs over an extended period of time, social facts that may not be immediately revealed can be discovered over time.

📊 Summary of major benefits

The excerpt lists three major benefits:

  1. It yields very detailed data.
  2. It emphasizes the role and relevance of social context.
  3. It can uncover social facts that may not be immediately obvious, or of which research participants may be unaware.

⚠️ Limitations and trade-offs

⚠️ Narrow focus and limited reach

  • The fact that field researchers collect very detailed data does come at a cost.
  • Because a field researcher's focus is so detailed, it is, by necessity, also somewhat narrow.
  • Field researchers simply are not able to gather data from as many individuals as, say, a survey researcher can reach.
  • Field researchers generally sacrifice breadth in exchange for depth.

⏳ Time-intensive

  • Related to the narrow focus: field research is extremely time intensive.
  • The extended time in the field is necessary to uncover gradual or hidden social facts, but it limits the number of sites or groups a researcher can study.

😓 Emotionally taxing

  • Field research can also be emotionally taxing.
  • It requires, to a certain extent, the development of a relationship between a researcher and her participants.
  • The excerpt notes that interviews also require relationship development, but the text cuts off before completing the comparison.

12.3 The Pros and Cons of Field Research

🧭 Overview

🧠 One-sentence thesis

Field research offers uniquely detailed, context-rich data about everyday life but requires researchers to sacrifice breadth, invest significant time and emotional energy, and navigate difficult documentation challenges.

📌 Key points (3–5)

  • Core strength: Field research provides firsthand, very detailed data about people, events, and processes in their natural social context.
  • Unique discovery power: It can uncover social facts that are not immediately obvious or that participants themselves may not be aware of.
  • Major trade-off: Depth comes at the cost of breadth—field researchers cannot reach as many people as survey researchers can.
  • Common confusion: Field research vs interviews—both involve relationships, but field research is like "a full-blown, committed marriage" (sustained, intimate, long-term) while interviews are "casual dating" (brief, limited).
  • Hidden costs: The method is extremely time-intensive, emotionally taxing, and poses unique documentation challenges.

✅ Advantages of field research

🔍 Very detailed data

Field research allows researchers to gain firsthand experience and knowledge about the people, events, and processes that they study.

  • No other method offers "quite the same kind of close-up lens on everyday life."
  • The level of detail obtained may exceed what any other method can provide.
  • Example: A researcher observing daily interactions in a workplace can capture nuances of body language, tone, and informal exchanges that a survey question could never reveal.

🌐 Understanding social context

  • Field research is excellent for understanding "the role of social context in shaping people's lives and experiences."
  • It enables a greater understanding of the intricacies and complexities of daily life.
  • Why it matters: People's behaviors and experiences are shaped by their environments; field research captures this interplay in real time.

💡 Uncovering hidden social facts

  • Field research may uncover elements of people's experiences or group interactions "of which we were not previously aware."
  • This is a unique strength: with interviews and surveys, respondents cannot answer questions about things they don't know or aren't aware of.
  • Because field research typically occurs over an extended period, social facts that are not immediately revealed can be discovered over time.
  • Example: A researcher might notice a pattern of informal power dynamics in a group that members themselves don't consciously recognize.

❌ Disadvantages of field research

📏 Lack of breadth

  • Field researchers collect very detailed data, but this comes at a cost: the focus is "by necessity, also somewhat narrow."
  • Field researchers "simply are not able to gather data from as many individuals as, say, a survey researcher can reach."
  • The excerpt states: "field researchers generally sacrifice breadth in exchange for depth."
  • Don't confuse: This is not a flaw in execution—it is an inherent trade-off of the method's design.

⏳ Extremely time-intensive

  • Field research requires sustained engagement over an extended period.
  • Related to the breadth limitation: the time invested in depth means fewer participants can be studied.

💔 Emotionally taxing

  • Field research "requires, to a certain extent, the development of a relationship between a researcher and her participants."
  • The excerpt compares methods:
| Method | Relationship metaphor | Duration | Intimacy |
| --- | --- | --- | --- |
| Interviews | "Casual dating" | An hour or two | Limited |
| Field research | "Full-blown, committed marriage" | Many months or years | Deep, sustained |
  • Researchers experience "not just the highs but also the lows of daily life and interactions."
  • Why it's taxing: Participating in day-to-day life with research subjects can result in "tricky ethical quandaries" and challenges to maintaining objectivity.
  • Example: A researcher embedded in a community may witness conflicts or distressing events and must navigate their role as both observer and participant.

📝 Documentation challenges

  • Unlike survey researchers (who provide questionnaires) or interviewers (who have recordings), field researchers "generally have only themselves to rely on for documenting what they observe."
  • Specific challenges:
    • It may not be possible to take field notes while observing.
    • Researchers may not know which details to document or which will become most important.
    • When notes are taken after observation, researchers "may not recall everything exactly as you saw it when you were there."
  • This is more challenging than with other methods because there is no external recording device or structured instrument.

📊 Summary comparison

| Aspect | Benefits | Costs |
| --- | --- | --- |
| Data quality | Very detailed, firsthand | Narrow focus, fewer participants |
| Context | Rich understanding of social context | Time-intensive to capture |
| Discovery | Can uncover hidden social facts over time | Requires sustained presence |
| Relationships | Deep, rewarding connections | Emotionally taxing, ethical dilemmas |
| Documentation | Captures nuances of daily life | Relies solely on researcher memory and notes |

12.4 Getting In and Choosing a Site

🧭 Overview

🧠 One-sentence thesis

Field researchers must strategically decide where to observe and what role to adopt—overt or covert, insider or outsider—based on practical constraints, ethical considerations, and the nature of their research question, then systematically document and analyze observations through descriptive and analytic field notes to generate grounded theory.

📌 Key points (3–5)

  • Site selection factors: research question, time availability, access, social location (ascribed vs achieved statuses), and collaboration opportunities all shape where you can conduct field research.
  • Role decisions: researchers choose between overt (participants know they're being studied) vs covert (hidden identity) and between Martian (detached observer) vs Convert (fully immersed participant) approaches.
  • Common confusion: descriptive field notes vs analytic field notes—descriptive notes simply record observations straightforwardly; analytic notes include the researcher's interpretations and impressions.
  • Documentation challenge: field notes must be typed up immediately after leaving the field to fill in gaps from brief jottings made during observation.
  • Analysis approach: grounded theory works inductively, generating theory from the ground up rather than testing preset hypotheses, which can feel open-ended but yields rich theoretical insights.

🗺️ Selecting a research site

🗺️ Key questions for site selection

Before choosing where to observe, field researchers must consider several practical questions:

  • What do you hope to accomplish?
  • What is your topical/substantive interest?
  • Where can you observe relevant behavior?
  • How likely is access to locations of interest?
  • How much time do you have?
  • Will you observe in single or multiple locations?

⏱️ Time and geographic constraints

Limitations: practical constraints that shape where you can conduct participant observation, including time, travel, and resources.

  • Field researchers typically immerse themselves for many months or even years.
  • Consider: How much time per day? Per week? Over what total period?
  • Some researchers move to live with or near their population of interest.
  • Example: If you have only three months available, you cannot replicate studies that required years of immersion.

👤 Social location considerations

Researchers must account for both ascribed and achieved aspects of identity:

| Aspect | Definition | Example from excerpt | Impact on research |
| --- | --- | --- | --- |
| Ascribed | Involuntary characteristics | Age, race, mobility | Adult status may limit complete participation in children's birthday parties |
| Achieved | Characteristics with some choice | Professor status | Can choose whether/how much to reveal; may enhance or stifle rapport depending on context |

🤝 Collaboration vs solo research

Benefits of collaboration:

  • Cover more ground and collect more data
  • Share trials and tribulations in the field
  • Emotional support during challenging fieldwork

Challenges of collaboration:

  • Possible personality conflicts among researchers
  • Competing commitments (time and contributions)
  • Differences in methodological or theoretical perspectives

Don't forget: also consider opportunities—existing memberships, social connections, friends who can provide housing—not just limitations.

🎭 Choosing your researcher role

🎭 Overt vs covert entry

Overt researcher: enters the field with participants aware they are subjects of social scientific research.

Covert researcher: enters as a full participant without revealing researcher identity or that the group is being studied.

Overt approach:

  • May experience trouble establishing rapport initially
  • Participants may behave differently when aware of being watched
  • Over time (months/years), participants typically become more comfortable
  • Avoids moral and ethical dilemmas

Covert approach:

  • "Getting in" may be quite easy
  • Faces ethical questions: How long to conceal identity? How will participants respond when they discover the truth? How to respond to unsafe activities?
  • Example: Researcher Jun Li initially tried covert research with female gamblers but switched to overt when ethical concerns arose; however, overt status made participants reluctant to "speak their minds," so she adjusted to a hybrid insider-outsider role.

Important: Institutional review boards (IRBs), which oversee federally funded research, may restrict or prohibit deception; public locations where people lack privacy expectations may allow more flexibility.

🔑 Key informants

Key informant: an insider at the research site with whom the researcher has a prior connection or closer relationship than with other participants.

  • Can provide a framework for observations
  • Help translate what you observe
  • Give important insight into group culture
  • Ideal to have more than one, as perspectives may vary

🛸 Martian vs Convert roles

Fred Davis coined these terms to describe researcher positioning:

Martian role: researcher stands back, not fully immersed, to better problematize, categorize, and see with newcomer's eyes; remains disentangled from too much engagement.

Convert role: researcher intentionally dives into life as a participant; understanding is gained through total immersion.

  • The choice affects what you can observe and how you interpret it.
  • Consider which approach best suits your personality and research goals.

⚖️ Power and relationships

  • The researcher/researched relationship is more complex in field studies than in interviews (lasting months/years vs one or two hours).
  • Potential for exploitation is greater because relationships are closer.
  • Lines between research and personal/off-the-record interaction may blur.
  • These precautions should be seriously considered before embarking on field research.

📝 Field notes and documentation

📝 Purpose and importance of field notes

Field notes: the record that affirms what you observed; the first and necessary step toward developing quality analysis.

  • Aim: record observations as straightforwardly and quickly as possible in a way that makes sense to you.
  • Not to be taken lightly or overlooked as unimportant.
  • Usually intended only for the researcher's own purposes related to recollections of people, places, and things.

📋 Descriptive field notes

Descriptive field notes: notes that simply describe a field researcher's observations as straightforwardly as possible, typically without explanations or comments.

  • Present observations on their own, as clearly as possible.
  • Do not contain the researcher's interpretations at this stage.
  • Must be typed up immediately upon leaving the field so researchers can fill in blanks from brief jottings made during observation.
  • You never know what might become important later, so note as much as possible.

🔍 Analytic field notes

Analytic field notes: notes that include the researcher's impressions about observations.

  • Move from description to analysis.
  • Analysis begins the moment a researcher enters the field and continues through interactions, writing descriptive notes, and reflecting on meaning.
  • Often develop when the researcher exits observation, sits at a computer, and types messy jotted notes into readable format.
  • Field researchers typically spend several hours typing up notes after each observation.
  • Having time outside the field to reflect is crucial to developing analysis.

Don't confuse: Descriptive notes record "what happened"; analytic notes add "what it might mean" and researcher impressions. The line can blur, but the distinction helps organize thinking.

🧪 Analysis of field research data

🧪 From description to analysis process

Analysis occurs over time in stages:

  1. While entering and interacting in the field
  2. While writing up descriptive notes
  3. While considering what interactions and notes mean
  4. After typing up notes into readable format (where analysis often begins)
  5. Through coding patterns across notes

🌱 Grounded theory approach

Grounded theory: the analytic process of field researchers who conduct inductive analysis; the goal is to generate theory from the ground up, grounded in empirical observations and tangible experiences.

Key characteristics:

  • Begin with open-ended, open-minded desire to understand a social situation
  • Systematic process where the researcher lets the data guide her rather than guiding data by preset hypotheses
  • Discoveries are made from the ground up
  • Theoretical developments are grounded in observations

The experience:

  • Can be intimidating and anxiety-producing because the open nature feels out of control
  • Without hypotheses to guide analysis, researchers may experience frustration or angst
  • Rewarding when a coherent theory emerges from empirical observations
  • Benefits peers (who can develop theories further) and participants (who get a bird's-eye view of their everyday life)

🔄 Iterative coding process

Once analytic field notes are written:

  • Look for patterns across notes by coding the data
  • Use iterative process of open and focused coding (as outlined in qualitative methods)
  • Things that seem unimportant at the time may later reveal relevance
  • The process is systematic but flexible, allowing themes to emerge rather than forcing predetermined categories

Example: A researcher observing library use might initially note "student sits at table with laptop." Later, when typing up notes, they might add analytic impressions: "Student seemed to be using the library more as a quiet workspace than to access library resources—laptop never closed, no books consulted." Over time, patterns across many such observations might generate theory about how students use library space.
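The iterative coding described above can be sketched in code. This is a minimal illustration only, not a tool the excerpt describes; the field notes and code labels are invented for the library example:

```python
from collections import Counter

# Hypothetical analytic field notes, each tagged with open codes by the researcher.
coded_notes = [
    {"note": "Student sits at table with laptop; no books consulted.",
     "codes": ["quiet-workspace", "laptop-use"]},
    {"note": "Group whispers near the stacks, sharing one screen.",
     "codes": ["group-study", "laptop-use"]},
    {"note": "Patron asks librarian for database help.",
     "codes": ["resource-use"]},
]

# Open coding: tally every code to see which ideas recur across observations.
code_counts = Counter(code for entry in coded_notes for code in entry["codes"])

# Focused coding: keep only codes that recur as candidate themes.
themes = [code for code, n in code_counts.items() if n > 1]
print(code_counts)
print(themes)
```

The point of the sketch is the two-pass structure: a first open pass that records everything, and a second focused pass that lets recurring codes, rather than preset hypotheses, determine the emerging themes.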

13.1 Strengths of Unobtrusive Research

🧭 Overview

🧠 One-sentence thesis

Unobtrusive research methods offer unique advantages by capturing what people actually do without researcher interference, at relatively low cost, with the flexibility to correct mistakes and examine long-term historical processes.

📌 Key points (3–5)

  • Core advantage: captures actual behavior rather than self-reported behavior, without the Hawthorne effect (researcher presence changing participant behavior).
  • Cost benefit: generally lower-cost because participants are typically inanimate objects (documents, archives) rather than people requiring compensation.
  • Forgiving nature: mistakes in data collection can be corrected by returning to the source, unlike interviews or field research where re-collection is often impossible.
  • Common confusion: unobtrusive research vs field research—both claim to study actual behavior, but field researchers cannot be certain their presence doesn't affect what they observe.
  • Temporal strength: uniquely suited to studying processes over long time periods, including events that occurred decades ago, without relying on potentially flawed retrospective memory.

🎯 No Researcher Effect

🔍 What people actually do vs what they say

  • Unobtrusive methods provide evidence of actual behavior, not self-reported behavior from surveys and interviews.
  • This addresses a fundamental limitation: people's accounts of their actions may differ from their real actions.

👻 The Hawthorne effect advantage

Hawthorne effect: the effect of the research (or researcher's presence) on subjects.

  • Field researchers also study actual behavior, but they face uncertainty about how their presence influences the people and interactions they observe.
  • Unobtrusive researchers do not interact directly with research participants, so the Hawthorne effect is not a concern.
  • Don't confuse: While all research faces risk of researcher bias, unobtrusive methods specifically eliminate the problem of researcher presence changing participant behavior.
  • This is identified as one of the major strengths of unobtrusive research.

💰 Practical Advantages

💵 Lower costs

  • Unobtrusive research can be relatively low-cost compared to other methods.
  • Why: Participants are generally inanimate objects (documents, archives) rather than human beings.
  • Researchers can access data without paying participants for their time.
  • Caveat: Travel to or access to some documents and archives can still be costly, but the participant-compensation cost is eliminated.

🔧 The "forgiving" nature

Forgiving: far easier to correct mistakes made in data collection when conducting unobtrusive research than when using other methods.

Comparison with other methods:

| Method | Problem scenario | Options if mistake discovered | Feasibility |
| --- | --- | --- | --- |
| In-depth interviews | Omitted 2 critical questions after 50 interviews | Re-interview all 50? Guess responses? Reframe question? Abandon project? | None ideal |
| Survey research | Same mistake problems | Same difficult options | None ideal |
| Field research | Mistake during data collection (e.g., political campaign) | Often cannot re-do; campaign is over; need new data source | Often impossible |
| Unobtrusive research | Any data collection error | Return to source to gather more information or correct problem | Relatively straightforward |
  • Example: If you realize you need additional information after analyzing documents, you can simply go back to the archives.
  • This flexibility is a significant practical advantage for researchers.

⏳ Studying Processes Over Time

📜 Historical reach

  • Unobtrusive research is well suited to studies that focus on processes that occur over time.
  • Unique capability: can examine processes that occurred decades before data collection began.
  • Other methods (longitudinal surveys, long-term field observations) cannot study events that happened long ago.

🧠 Memory vs records

Comparison of longitudinal approaches:

| Approach | Can study past decades? | Cost-effectiveness for long processes | Reliability issue |
| --- | --- | --- | --- |
| Longitudinal surveys | No (only from start of survey) | Not most cost-effective | May rely on retrospective accounts subject to memory errors |
| Long-term field observations | No (only from start of observation) | Not most cost-effective | Limited to observable present |
| Unobtrusive methods | Yes | More cost-effective | Uses existing records, not memory |
  • Unobtrusive methods do not rely on retrospective accounts, avoiding errors in memory.
  • This makes them more reliable for studying long-ranging historical processes.
  • Example: A researcher can study organizational changes over 50 years by examining archived documents, rather than asking people to remember events from decades ago.

13.2 Weaknesses of Unobtrusive Research

🧭 Overview

🧠 One-sentence thesis

Unobtrusive research, while offering many advantages, suffers from validity problems, data availability constraints, and difficulty accounting for social context because researchers analyze data created for purposes different from their own.

📌 Key points (3–5)

  • Validity problems: data may have been created or gathered for purposes entirely different from the researcher's aim, leading to validity issues.
  • Data availability limits research: if data sources measuring what a researcher wants to examine do not exist, researchers must adjust their original questions to suit available data.
  • Context is hard to capture: unlike field research where researchers observe events and responses directly, unobtrusive research struggles to ascertain why something occurred, though it can show what occurred.
  • Common confusion: unobtrusive research tells you what happened but not necessarily why—don't expect the same contextual depth as field observation.

⚠️ Three core weaknesses

⚠️ Validity concerns

Validity problems arise because unobtrusive researchers analyze data that may have been created or gathered for purposes entirely different from the researcher's aim.

  • The data were not designed with the researcher's question in mind.
  • This mismatch between original purpose and research use can undermine whether the data truly measure what the researcher intends.
  • Example: An organization's internal records were created for administrative purposes; using them to study employee satisfaction may not capture the concept accurately.

📦 Data availability constraints

  • The problem: data sources measuring whatever a researcher wishes to examine simply may not exist.
  • The consequence: unobtrusive researchers may be forced to tweak their original research interests or questions to better suit the data available to them.
  • This is a fundamental limitation—the researcher cannot create new data; they must work with what already exists.
  • Example: A researcher wants to study a specific social process, but no records or traces of that process were preserved, so the research question must be adjusted.

🔍 Difficulty accounting for context

  • In field research, the researcher sees what events lead up to an occurrence and observes how people respond.
  • Unobtrusive research lacks this direct observation.
  • What this means: while it can be difficult to ascertain why something occurred, unobtrusive research can gain a good understanding of what has occurred.
  • Don't confuse: unobtrusive methods excel at documenting events and patterns, but they struggle to explain motivations or causal mechanisms without additional context.

📋 Summary comparison

| Aspect | Weakness | Implication |
| --- | --- | --- |
| Validity | Data created for other purposes | May not measure researcher's concept accurately |
| Data availability | Sources may not exist | Researcher must adjust questions to fit available data |
| Context | Cannot observe events directly | Can show what happened, but harder to explain why |

🔄 Contrast with strengths (from the excerpt)

The excerpt also lists strengths of unobtrusive research for context:

🔄 Key strengths mentioned

  1. No Hawthorne effect: subjects are not aware they are being studied, so behavior is not altered.
  2. Cost effective: does not require expensive data collection efforts.
  3. Easier to correct mistakes: going back to the source of the data to gather more information or correct problems is relatively straightforward, unlike field research where re-doing data collection may be impossible (e.g., a political campaign that has ended).
  4. Suitable for long-term processes: can examine events and processes that occurred decades before data collection began, without relying on retrospective accounts that may be subject to memory errors.

🔄 Trade-off to remember

  • Unobtrusive research trades direct observation and contextual depth (weaknesses) for cost-effectiveness, repeatability, and the ability to study the past (strengths).
  • Example: You can re-analyze historical documents easily (strength), but you cannot ask the authors why they wrote what they did (weakness).

13.3 Unobtrusive Methods

🧭 Overview

🧠 One-sentence thesis

Unobtrusive methods—content analysis, physical trace, and archival measures—allow researchers to gather data without influencing subjects, though they require careful attention to context and validity.

📌 Key points (3–5)

  • Three main types: content analysis (studying texts and communications), physical trace (erosion and accretion evidence), and archival measures (documents, records, and digital materials).
  • Content analysis focus: addresses "Who says what, to whom, why, how, and with what effect?" and can be qualitative (themes) or quantitative (counting/statistics).
  • Primary vs secondary sources: content analysis usually examines original data, not already-analyzed materials; analyzing secondary sources focuses on the analyst's process, not the findings themselves.
  • Common confusion: content analysis of scholarly literature vs literature review—the former studies the studies (e.g., who publishes where, topic trends), while the latter summarizes what we know about a topic.
  • Key challenges: validity problems, difficulty reproducing data, and understanding context—especially when traces or artifacts come from different historical or cultural backgrounds.

📝 Content analysis

📝 What content analysis examines

Content analysis: a type of unobtrusive research that involves the study of human communications and texts.

  • "Text" is defined broadly: written copy (newspapers, letters), audio/visual content (speeches, performances, TV shows, advertisements, movies).
  • Core questions: "Who says what, to whom, why, how, and with what effect?"
  • Example: researchers examined tourism policies from around the world over 30 years, looking for evidence of growing concern for animal welfare by counting occurrences of welfare-related words.

🔍 Primary vs secondary sources

  • Primary sources: original data that has not yet been analyzed; content analysis usually focuses on these.
  • Secondary sources: data that has already been analyzed by someone else.
  • When content analysis does examine secondary sources, the focus shifts to how the original analyst reached conclusions or presented data, not the conclusions themselves.

📚 Content analysis vs literature review

| Aspect | Literature review | Content analysis of scholarly literature |
| --- | --- | --- |
| Purpose | Summarize what we know/don't know about a topic | Study the studies themselves (e.g., publication patterns, topic trends) |
| Sources | Peer-reviewed, empirical research | Same sources, but examined differently |
| Approach | Take findings at face value | Raise questions about authors, outlets, topics, evolution over time |
| Goal | Arrive at conclusions about overall knowledge | Learn something about the research landscape itself |
  • Don't confuse: a literature review synthesizes content; a content analysis of literature examines patterns (who publishes what, where, when).
  • Example: instead of summarizing tourism policy findings, researchers studied whether the policies showed growing concern for animal welfare over time.

🔢 Qualitative vs quantitative approaches

  • Qualitative content analysis: identify themes and underlying meanings in text.
  • Quantitative content analysis: assign numerical values to raw data for statistical analysis (e.g., counting word occurrences).
  • Researchers often use both to strengthen investigations.
  • Example: researchers counted welfare-related words (quantitative) and also drew blocks of text to demonstrate how policies indicated concern (qualitative).
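The quantitative side of the tourism-policy example (counting welfare-related words) can be sketched as follows. The policy excerpts and the word list are invented for illustration:

```python
import re
from collections import Counter  # Counter available if per-term tallies are wanted

# Hypothetical policy excerpts keyed by year; real studies would load full documents.
policies = {
    1995: "Tourism operators should maximize visitor numbers and revenue.",
    2015: "Operators must safeguard animal welfare and prevent cruelty to wildlife.",
}
# Assumed coding dictionary of welfare-related terms.
welfare_terms = {"welfare", "cruelty", "humane", "suffering"}

def count_welfare_terms(text):
    """Count occurrences of welfare-related words in one document."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(1 for w in words if w in welfare_terms)

counts = {year: count_welfare_terms(text) for year, text in policies.items()}
print(counts)  # → {1995: 0, 2015: 2}
```

Counts like these become the raw numerical values for statistical analysis over time, while the surrounding blocks of text supply the qualitative evidence.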

⚠️ Reliability challenge

  • A significant challenge is the potential to reproduce the data.
  • Agreement coefficients can indicate reliability: agreement is what we measure; reliability is what we infer from that measurement.
  • The excerpt notes that improving reliability is discussed in a later section (13.4).
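Although the excerpt defers the details to 13.4, a common way to quantify coder agreement is raw percent agreement plus a chance-corrected coefficient such as Cohen's kappa. A hand-rolled sketch with invented codings from two coders:

```python
from collections import Counter

# Hypothetical codes assigned independently by two coders to the same ten passages.
coder_a = ["theme", "theme", "other", "theme", "other",
           "theme", "other", "other", "theme", "theme"]
coder_b = ["theme", "other", "other", "theme", "other",
           "theme", "theme", "other", "theme", "theme"]

n = len(coder_a)
# Raw percent agreement: share of passages where the two codes match.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Cohen's kappa corrects for the agreement two coders would reach by chance.
pa, pb = Counter(coder_a), Counter(coder_b)
expected = sum(pa[c] * pb[c] for c in pa) / n**2
kappa = (observed - expected) / (1 - expected)
print(round(observed, 2), round(kappa, 2))
```

The gap between the two numbers is the excerpt's distinction in miniature: agreement is what we measure; reliability is what we infer from it, after discounting chance.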

🔎 Physical trace methods

🔎 What physical traces reveal

Physical traces: evidence that humans leave behind that tells us something about who they are or what they do, including material artifacts that reflect beliefs, values, or norms.

  • Similar to how fire/police examine scenes for fingerprints, DNA, etc., researchers examine traces to understand human activity.
  • Example: medical professionals use trace evidence (bruising, cuts, pupil dilation) to determine what happened to a patient.

🏺 Two types of physical traces

| Type | Definition | Example |
| --- | --- | --- |
| Erosion | Wearing away or removal of material due to physical activity | A worn footpath |
| Accretion | Building up of material due to physical activity | A pile of garbage |

🧩 Context challenges

  • A major challenge: you generally cannot access the people who left the traces or created the artifacts.
  • (If you contacted them, the research would no longer be unobtrusive.)
  • Especially tricky when traces come from a different historical or cultural context than your own.
  • Validity and reliability problems: How do you know you're viewing an object or trace as it was intended? Do you have enough background knowledge about the original creators or users?

🛠️ How to analyze physical traces properly

You must understand:

  • Who caused the trace or created the artifact
  • When they created it
  • Why they created it
  • For whom they created it

Answering these questions requires accessing additional materials beyond the traces themselves—historical documents or, for contemporary traces, possibly interviews with creators.

📂 Archival measures

📂 What archival measures include

Archival measures: hard copy documents or records, including written or tape-recorded material, photographs, newspapers, books, magazines, diaries, and letters; also webpages with documents, images, videos, and audio files.

  • While archival measures are products of human activity (like accretion measures), they are defined separately due to significant differences and the vast quantity of materials.

✅ Benefits of archival measures

  • Enable researchers to examine historical evidence and social processes over time.
  • Work well with longitudinal studies.
  • Researchers can potentially examine all relevant records (the entire "population") if records have been digitized, eliminating the need for sampling.

⚠️ Critical considerations

  • Original purpose: the sources were not created with research in mind.
  • Researchers must consider:
    • Why the documents were created
    • What may have influenced the content
  • When using data from previous studies (e.g., survey data) for secondary analysis, original issues remain (memory fade, telescoping, etc.), regardless of question quality.

🗄️ Sources of existing data

🗄️ Quantitative data sources

  • Many publicly available sources exist in Canada through Statistics Canada.
  • Example: the General Social Survey (GSS) covers a broad range of topics.
  • Statistics Canada also provides workshops, training, webinars, and conferences for a fee.

🗄️ Qualitative data sources

  • Far fewer free, publicly available qualitative data sources exist compared to quantitative.
  • This is slowly changing as technical sophistication grows and digitizing/sharing qualitative data becomes easier.
  • Examples mentioned:
    • Murray Research Archive at Harvard: offers case histories and qualitative interview data
    • Global Feminisms project at University of Michigan: interview transcripts and videotaped oral histories focused on feminist activism, women's movements, and academic women's studies in multiple countries (Brazil, China, India, Nicaragua, Poland, Russia, United States)

🎯 Advantage of using existing data

One advantage (or disadvantage, depending on your perspective) is that researchers may skip the data collection phase altogether, using already-collected data sets for new analyses.

13.4 Analyzing Others' Data

🧭 Overview

🧠 One-sentence thesis

Researchers can skip data collection by using publicly available datasets, though quantitative data sources are far more abundant than qualitative ones.

📌 Key points (3–5)

  • Main advantage: unobtrusive research allows researchers to bypass the data collection phase entirely by using existing free datasets.
  • Quantitative vs qualitative availability: many free quantitative datasets exist (e.g., Statistics Canada), but far fewer qualitative sources are publicly available.
  • Key quantitative source in Canada: Statistics Canada provides the General Social Survey and other datasets covering broad topics.
  • Emerging qualitative sources: technical advances are slowly making it easier to digitize and share qualitative data like interview transcripts and oral histories.
  • Common confusion: "publicly available" does not mean all data types are equally accessible—quantitative data is much easier to find than qualitative data.

📊 Quantitative data sources

📊 Statistics Canada (Stats Can)

  • The primary source of publicly available quantitative data in Canada.
  • Website: https://www.statcan.gc.ca
  • Offers training resources: workshops, webinars, and conferences (available for a fee).

📋 General Social Survey (GSS)

🌐 Advantage of digital records

  • Researchers can analyze the entire "population" of relevant records if they have been digitized.
  • No need to worry about choosing a representative sample—the computer can process all records.
  • Don't confuse: this applies only when all relevant records are digitized and accessible, not just a subset.

🎤 Qualitative data sources

🎤 Why qualitative data is scarcer

  • Far fewer sources of free, publicly available qualitative data exist compared to quantitative.
  • The situation is slowly changing as technical sophistication grows and digitization becomes easier.
  • Despite fewer sources, some options are available for researchers whose interests or resources limit their ability to collect data independently.

📚 Murray Research Archive (Harvard)

  • Housed at the Institute for Quantitative Social Science at Harvard University.
  • Offers case histories and qualitative interview data.
  • Website: https://murray.harvard.edu/

🌍 Global Feminisms Project (University of Michigan)

  • Focuses on feminist activism, women's movements, and academic women's studies.
  • Provides interview transcripts and videotaped oral histories.
  • Geographic coverage: Brazil, China, India, Nicaragua, Poland, Russia, and the United States.
  • Website: https://globalfeminisms.umich.edu/

📑 Summary of available data sources

📑 Overview table

The excerpt provides Table 13.1 summarizing publicly available data sources:

| Organization | Focus/Topic | Data Type | Example Content |
| --- | --- | --- | --- |
| Statistics Canada | National demographics, social & economic characteristics, household information | Quantitative | Complements census data |
| National Opinion Research Center | General Social Survey: demographic, behavioral, attitudinal questions | Quantitative | National sample |
| Add Health | Longitudinal study of cohort from grades 7–12 (1994) | Quantitative | Social, economic, psychological, physical well-being |
| Center for Demography of Health and Aging | Wisconsin Longitudinal Study: life course of 1957 high school graduates | Quantitative | Life course data |
| Institute for Social & Economic Research | British Household Panel Survey | Quantitative | Longitudinal study of British lives and well-being |
| International Social Survey Program | International data similar to GSS | Quantitative | Cross-national comparisons |
| Institute for Quantitative Social Science (Harvard) | Large archive of written, audio, and video data | Quantitative and Qualitative | Mixed-method resources |

🔍 What to remember

  • The resources mentioned represent only a snapshot of available sources.
  • Many more publicly available datasets can be accessed easily via the web.
  • Researchers should explore beyond these examples depending on their specific research needs.

13.5 Reliability in Unobtrusive Research

🧭 Overview

🧠 One-sentence thesis

Reliability in unobtrusive research depends on achieving stability (consistent coding by the same person over time), reproducibility (consistent coding across different coders), and accuracy (alignment with established standards).

📌 Key points (3–5)

  • Three dimensions of reliability: stability, reproducibility, and accuracy—each addresses a different source of variation in coding.
  • Stability: whether the same coder gets the same results when coding the same content at different times.
  • Reproducibility (intercoder reliability): whether different coders get the same results when coding the same text.
  • Common confusion: stability vs reproducibility—stability is about one person over time; reproducibility is about different people at the same time.
  • Accuracy: whether coding procedures match pre-existing standards or collective wisdom from prior literature.

🔍 Three dimensions of reliability

🔄 Stability

Stability: the extent to which the results of coding remain consistent when the same content is coded across different time periods.

  • What it measures: whether the same person coding the same content at different times gets the same result each time.
  • When it's a problem: if you code the same material today and next week and get different results, stability is low.
  • Why it matters: instability suggests your coding process is inconsistent, not the data itself.

Example: A researcher codes a set of newspaper articles on Monday and codes the same articles again on Friday; if the results differ, there is a stability problem.

🧩 Causes of instability

The excerpt identifies three main causes:

| Cause | Explanation |
| --- | --- |
| Ambiguous coding rules | Your instructions are unclear, so you interpret them differently at different times |
| Ambiguities in the text | The source material itself is unclear or open to multiple interpretations |
| Simple coding errors | Accidental mistakes like writing "1" instead of "10" on your code sheet |

  • What you can do: clarify your coding rules; be aware of ambiguities in the data as you code.
  • What you cannot do: you cannot alter the original textual data sources, but awareness helps.

🤝 Reproducibility (intercoder reliability)

Reproducibility (intercoder reliability): the extent to which one's coding procedures will result in the same results when the same text is coded by different people.

  • What it measures: whether different coders, following the same rules, get the same results.
  • When it's a problem: if two coders analyze the same text and produce different results, reproducibility is low.
  • Don't confuse with stability: stability is one person, different times; reproducibility is different people, same (or different) times.

Example: Two researchers independently code the same set of interview transcripts; if their results match, reproducibility is high.

🧩 Causes of reproducibility problems

The excerpt identifies three main causes:

| Cause | Explanation |
| --- | --- |
| Cognitive differences among coders | Different people think differently and may interpret the same text differently |
| Ambiguous coding instructions | Unclear rules leave room for different interpretations |
| Random coding errors | Accidental mistakes by individual coders |

  • One solution: have coders code together, at the same time, to reduce variation.
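
Reproducibility is commonly quantified as intercoder agreement. A minimal sketch (the data and function names are illustrative, not from the excerpt) computing raw percent agreement and chance-corrected agreement via Cohen's kappa:

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of items on which both coders chose the same category."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for the agreement expected by chance."""
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)  # observed agreement
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # Expected agreement if each coder assigned categories at random
    # according to their own marginal frequencies.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders categorize the same ten newspaper articles.
coder_1 = ["crime", "politics", "crime", "sports", "crime",
           "politics", "sports", "crime", "politics", "crime"]
coder_2 = ["crime", "politics", "sports", "sports", "crime",
           "politics", "sports", "crime", "crime", "crime"]

print(percent_agreement(coder_1, coder_2))           # → 0.8
print(round(cohens_kappa(coder_1, coder_2), 3))      # → 0.683
```

The same two functions can also check stability (one coder's Monday pass vs. Friday pass over the same articles) or accuracy (a coder's results vs. an established standard coding), since all three dimensions reduce to comparing two codings of the same material.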

🎯 Accuracy

Accuracy: the extent to which one's coding procedures correspond to some pre-existing standard.

  • What it measures: whether your coding approach matches established standards or best practices.
  • When it applies: this assumes a standard coding strategy already exists for the type of text you are analyzing.
  • When it doesn't apply: official standards may not exist for your particular content.

🧩 How to improve accuracy

  • Review prior literature: look at scholarship focused on similar data or coding procedures.
  • Learn from collective wisdom: see how others in your area have coded similar material.
  • Clarify and improve: use what you learn to refine your own coding procedures.
  • The excerpt emphasizes that time spent reviewing prior literature is "time well spent."

🔧 Practical implications

🔧 Addressing reliability problems

  • For stability: clarify coding rules; be aware of textual ambiguities; double-check for simple errors.
  • For reproducibility: have coders work together; provide clear, unambiguous instructions; train coders to reduce cognitive differences.
  • For accuracy: consult prior literature; adopt or adapt established standards; align your procedures with collective wisdom in your field.

🔧 Why all three matter

  • Each dimension addresses a different source of variation: time (stability), coder (reproducibility), and standard (accuracy).
  • High reliability across all three dimensions means your coding is consistent, replicable, and aligned with best practices.
  • Low reliability in any dimension undermines the trustworthiness of your unobtrusive research findings.

Ethnomethodology and Conversation Analysis

13.6 Ethnomethodology and Conversation Analysis

🧭 Overview

🧠 One-sentence thesis

Ethnomethodology and conversation analysis are distinct sociological approaches that examine how people construct and maintain social order through everyday routines and talk, rather than traditional data collection methods.

📌 Key points (3–5)

  • What ethnomethodology studies: the ordinary, routine details of everyday reality and how people create social order through mundane knowledge and reasoning.
  • How it differs from ethnography: ethnography is a research method, while ethnomethodology is an alternative approach describing the methods humans use to construct reality.
  • What conversation analysis does: a more formal approach to ethnomethodology that analyzes the details of conversation to understand how reality is constructed through verbal interaction.
  • Common confusion: ethnomethodology vs ethnography—ethnography collects data about cultures; ethnomethodology studies how people themselves create and maintain social order.
  • Core premise: both approaches focus on how reality is constructed (the process), not what it is (the content).

🔍 Understanding ethnomethodology

🔍 What ethnomethodology is

Ethnomethodology: the study of the ordinary—the routine and the details of everyday reality.

  • Developed by sociologist Harold Garfinkel in his 1967 publication Studies in Ethnomethodology.
  • According to Heritage (1984), Garfinkel created the term to encompass phenomena associated with how members of society utilize mundane knowledge and reasoning.
  • It is not a research method per se, but an alternative approach.

🎯 What ethnomethodologists investigate

An ethnomethodologist investigates:

  • How people construct, prolong, and maintain their realities.
  • How people make sense of their everyday activities in order to behave in socially acceptable ways.
  • The methods humans utilize to create social order.

Key question: How do people make sense of their everyday activities to behave in socially acceptable ways?

🌟 Distinctive characteristics

  • Emphasis on the everyday: focuses on ordinary people's methods for producing order in their social worlds.
  • Focus on mundane knowledge: examines routine details rather than extraordinary events.
  • This emphasis on the ordinary is perhaps ethnomethodology's most distinctive characteristic.

⚠️ Don't confuse with ethnography

| Aspect | Ethnography | Ethnomethodology |
| --- | --- | --- |
| Nature | A research method | An alternative approach |
| Focus | Studying cultures and groups | Describing methods humans use to create social order |
| Goal | Collecting data about people | Understanding how people construct and maintain reality |

  • Ethnography is covered in Chapter 12 as a data collection method.
  • Ethnomethodology asks how people create order, not just what their culture looks like.

💬 Understanding conversation analysis

💬 What conversation analysis is

Conversation analysis: a qualitative method for organizing and analyzing the details of conversation.

  • It is a more formal approach to ethnomethodology.
  • Arose from the recognition that some categories (e.g., the meaning of gender) are socially constructed terms whose meanings are produced through verbal interaction.
  • Similar to ethnomethodology, it focuses on how reality is constructed, as opposed to what it is.

🧩 Three foundational premises

Conversation analysis is premised on three points:

  1. Interaction is sequentially organized

    • Talk can be analyzed in terms of the process of social interaction.
    • Analysis focuses on the interaction process rather than motives or social status.
  2. Contributions to action are contextually oriented

    • Interaction both shapes and is shaped by the social context of that interaction.
    • Context is not just background; it actively influences and is influenced by the conversation.
  3. No details can be dismissed

    • The preceding processes are inherent in the details of the interaction.
    • Therefore, no details can be dismissed as being disorderly, accidental, or irrelevant.

🔬 What this means in practice

  • Every pause, word choice, turn-taking pattern, and conversational detail matters.
  • Analysts cannot ignore seemingly minor elements because they may be crucial to how participants construct meaning.
  • The method treats conversation as the site where social order is actively created and maintained.

Example: When analyzing a conversation, an analyst would not dismiss a pause or interruption as random; instead, they would examine how that pause or interruption contributes to the social interaction and context.

🔗 Relationship to ethnomethodology

  • Conversation analysis is a more formal version of ethnomethodology.
  • It applies ethnomethodological principles specifically to verbal interaction.
  • Both share the focus on how people construct reality through their everyday practices, but conversation analysis narrows this to the details of talk.

Goals of a Research Proposal

14.1 What are the Goals of a Research Proposal?

🧭 Overview

🧠 One-sentence thesis

A research proposal must justify why a problem deserves study, demonstrate a feasible method to investigate it, and show that the approach meets disciplinary standards.

📌 Key points (3–5)

  • Three core goals: present and justify the need to study a problem; present a practical way to conduct the study; demonstrate that the design meets disciplinary standards.
  • Three essential questions: What do you plan to accomplish? Why do you want to do it? How are you going to do it?
  • Common confusion: justifying the topic is not just describing it—you must answer the "so what?" question and prove the topic is worthy of study.
  • Feasibility matters: proposals must show that the research is doable given available time, resources, and stamina.
  • Audience shapes scope: doctoral proposals run approximately 25 pages; undergraduate proposals approximately 10 pages (excluding appendices and references).

🎯 The three core goals

🎯 Justifying the research problem

Goal 1: To present and justify the need to study a research problem.

  • A proposal is not just a description of what you want to study—it must make a case for why the problem needs investigation.
  • The excerpt emphasizes "justify the need," meaning you must provide evidence that the problem is significant.
  • Example: An organization wants to study employee turnover—the proposal must explain why this particular problem matters now and what gap in knowledge exists.

🛠️ Presenting a practical approach

Goal 2: To present a practical way in which the proposed research study should be undertaken.

  • The proposal must outline how the research will be conducted in concrete, actionable terms.
  • "Practical way" means the method must be realistic and executable, not just theoretically sound.

📏 Meeting disciplinary standards

Goal 3: To demonstrate that the design elements and procedures set forth to study the research problem meet the accepted standards of the predominant discipline in which the problem resides.

  • Every discipline has established norms for research design and procedures.
  • The proposal must show awareness of and adherence to these standards.
  • Don't confuse: this is not about personal preferences—it's about demonstrating that your approach is recognized as valid within your field.

❓ Three essential questions every proposal must answer

❓ What do you plan to accomplish?

  • Be clear and succinct in defining the research problem.
  • The excerpt emphasizes clarity: readers must understand exactly what you are proposing to research.
  • Avoid vague or overly broad problem statements.

❓ Why do you want to do it?

  • This requires two components:
    • A thorough review of the literature
    • Convincing evidence that the topic is worthy of study
  • The "so what?" question: You must explain why anyone should care about this research.
  • Example: It's not enough to say "turnover is high"—you must explain what consequences this has and what new understanding your research will provide.

❓ How are you going to do it?

  • The proposal must demonstrate that what you propose is doable.
  • Three feasibility factors the excerpt highlights:
    • Time: Do you have enough time to complete the research?
    • Resources: Do you have access to necessary materials, data, participants, or funding?
    • Stamina: Can you realistically sustain the effort required?
  • Don't confuse: "doable" doesn't mean "easy"—it means realistic given your constraints.

📐 Practical considerations

📐 Length and audience

| Audience level | Approximate length | Expectations |
| --- | --- | --- |
| Doctoral | ~25 pages (excluding appendices/references) | Higher expectations |
| Undergraduate | ~10 pages (excluding appendices/references) | More modest scope |

  • The excerpt notes that length depends on the audience for whom the proposal is being prepared.
  • Higher academic levels require more comprehensive justification and detail.

📐 Starting questions

Before writing, the excerpt recommends asking yourself:

  • What do I want to study?
  • Why is the topic important?
  • In what ways is this topic significant within my particular field of study?

These preliminary questions help clarify your thinking before formal writing begins.

14.2 Writing the Research Proposal

🧭 Overview

🧠 One-sentence thesis

A research proposal must clearly define what you will study, justify why it matters, and demonstrate that your plan is feasible and methodologically sound.

📌 Key points (3–5)

  • Three core goals: present and justify the research problem, propose a practical study design, and demonstrate adherence to disciplinary standards.
  • Three essential questions: What do you plan to accomplish? Why do you want to do it? How are you going to do it?
  • Length varies by audience: doctoral proposals run approximately 25 pages (excluding appendices/references), undergraduate proposals approximately 10 pages.
  • Common confusion: feasibility vs ambition—proposals must be doable with available time, resources, and stamina, not just intellectually interesting.
  • Introduction as pitch: the opening must convey what you want to do, your passion, and the study's possible outcomes to engage the reader.

🎯 Core goals of a research proposal

🎯 What proposals must accomplish

A research proposal has three specific goals:

  1. Present and justify the need to study a research problem
  2. Present a practical way in which the proposed study should be undertaken
  3. Demonstrate adherence to standards—show that design elements and procedures meet the accepted standards of the predominant discipline in which the problem resides

❓ Universal questions every proposal must address

Regardless of the research problem or methods chosen, all proposals must answer:

| Question | What it requires |
| --- | --- |
| What do you plan to accomplish? | Be clear and succinct in defining the research problem and what you are proposing to research |
| Why do you want to do it? | Conduct a thorough literature review and provide convincing evidence that the topic is worthy of study; answer the "so what?" question |
| How are you going to do it? | Make sure what you propose is doable—you have the time, resources, and stamina to undertake it |

Don't confuse: Justifying importance (why) vs demonstrating feasibility (how). Both are required but address different concerns.

📝 Preparing to write

📝 Pre-writing questions

Before starting the writing process, ask yourself:

  1. What do I want to study?
  2. Why is the topic important?
  3. In what ways is this topic significant within my particular field of study?
  4. What problems will this research help to solve (social, cultural, safety, environmental, economic, business, and/or governance issues)?
  5. How does it build upon and go beyond previous research on this topic?
  6. What exactly should I plan to do?
  7. Can I get it done in the time and with the resources available to me?

📏 Length expectations

Research proposals are generally organized in the same manner across most social science disciplines, but length depends on audience:

  • Doctoral level: approximately 25 pages (excluding appendices and references)
  • Undergraduate level: approximately 10 pages (excluding appendices and references)

Example: An organization preparing a proposal for a doctoral committee should expect higher expectations and more comprehensive coverage than a student preparing an undergraduate proposal.

🚀 Writing the introduction

🚀 Purpose of the introduction

The introduction sets the tone for what follows in your research proposal—treat it as the initial pitch of your idea.

After reading the introduction, your reader should:

  • Understand what it is you want to do
  • Have a sense of your passion for the topic
  • Be excited about the study's possible outcomes

✍️ Structure and content

Think of the introduction as a narrative written in one to three paragraphs.

Within those paragraphs, briefly answer:

  1. What is the central research problem?
  2. How is the topic of your research proposal related to the problem?
  3. What methods will you utilize to analyze the research problem?
  4. Why is it important to undertake this research?
    • What is the significance of your proposed research?
    • Why are the outcomes important?
    • To whom are they important?

📄 Optional abstract

You may be asked to include an abstract with your research proposal.

An abstract should provide:

  • An overview of what you plan to study
  • Your main research question
  • A brief explanation of your methods to answer the research question
  • Your expected findings

Note: The excerpt indicates "All of this information must be carefully..." but the sentence is incomplete in the source text.

Components of a Research Proposal

14.3 Components of a Research Proposal

🧭 Overview

🧠 One-sentence thesis

A research proposal must systematically present the research problem, justify its significance, review relevant literature, detail methods, and anticipate implications to persuade readers that the study is worth conducting and feasible.

📌 Key points (3–5)

  • Core purpose: A research proposal provides persuasive evidence of the need and rationale for the proposed research, serving as a roadmap for the study.
  • Six major components: Introduction, Background and significance, Literature review, Research design and methods, Preliminary suppositions and implications, and Conclusion.
  • Literature review challenge: Knowing when to stop reviewing literature—look for repetition in conclusions and be prepared to return to the literature if unexpected findings emerge.
  • Common confusion: A methods section is not just a task list; it must argue why and how the chosen methods will investigate the research problem.
  • Critical mistake to avoid: Failure to develop a coherent and persuasive argument, or being too vague about the research purpose.

📝 The Introduction section

📝 What the introduction must accomplish

The introduction "sets the tone" and serves as the "initial pitch" of your idea. After reading it, your reader should:

  • Understand what you want to do
  • Sense your passion for the topic
  • Be excited about possible outcomes

✍️ How to write it

  • Write one to three paragraphs as a narrative.
  • Briefly answer four key questions:
    1. What is the central research problem?
    2. How is your topic related to the problem?
    3. What methods will you use to analyze the problem?
    4. Why is it important? What is the significance? Whom does it matter to?

📄 Optional abstract

  • If required, write it last (after completing the entire proposal).
  • Must be 150–250 words covering: what you plan to study, main research question, brief methods explanation, and expected findings.
  • Include 5–7 keywords listed in order of relevance.

🎯 Background and significance

🎯 Purpose and audience assumption

The purpose of this section is to explain the context of your proposal and to describe, in detail, why it is important to undertake this research.

  • Assume your reader knows nothing or very little about the research problem.
  • Include the most relevant material to explain your research goals, not all knowledge you have.

🔑 Seven key points to address

  1. State the problem: Provide a more thorough explanation than in the introduction.
  2. Present the rationale: Clearly indicate why this research is worth doing—answer the "so what?" question.
  3. Describe major issues: Explain how your research builds upon previous related research.
  4. Explain your approach: How you plan to conduct the research.
  5. Identify key sources: Which sources you intend to use and how they will contribute.
  6. Set boundaries: State what you will study and what will be excluded to provide clear focus.
  7. Define key concepts: Since terms often have multiple definitions, state which definition you will use.

💡 Tip on conceptual categories

  • Conceptual categories generally reveal themselves only after reading most of the pertinent literature.
  • It is common to continually add new themes or revise themes already discovered.

📚 Literature review

📚 What it is and why it matters

The literature review provides the background to your study and demonstrates the significance of the proposed research.

  • It is a review and synthesis of prior research related to your problem.
  • Most time-consuming aspect of proposal preparation.
  • Your goal: place your study within the larger whole of past research while demonstrating your work is original, innovative, and adds to that whole.

🗂️ How to structure it

  • Break the literature into conceptual categories or themes rather than describing various groups of literature.
  • Think about:
    1. What questions other researchers asked
    2. What methods they used
    3. What they found
    4. What they recommended
  • Don't be afraid to challenge previous findings/conclusions.
  • Assess what is missing and explain how your research fills the gap or extends previous research.

⏹️ Knowing when to stop

  • A significant challenge is knowing when to stop reviewing.
  • Signal to stop: When you start to see repetition in conclusions or recommendations, you have likely covered all significant conceptual categories.
  • Don't confuse: "Stopping" doesn't mean never returning—researchers often return to the literature during data collection and analysis if unexpected findings develop.

🔍 Real example of returning to literature

The excerpt describes a study on community resilience where:

  • Participants discussed individual resilience factors not found in the original literature review on community and environmental resilience.
  • Researchers returned to the literature and discovered a small body of work in child and youth psychology.
  • They had to add a new section on individual resilience factors.
  • Their research appeared to be the first to link individual and community resilience factors.
  • Lesson: Be prepared to look outside your field if unexpected findings emerge.

🔬 Research design and methods

🔬 Objective of this section

Convince the reader that your design and methods will:

  • Enable you to solve the research problem
  • Enable you to accurately and effectively interpret results

The section must be well-written, clear, and logically organized to show you know what you are going to do and how.

🔗 Connection to other sections

  • Must be clearly tied to the specific objectives of your study.
  • Draw upon and include examples from the literature review that relate to your design and methods.
  • Demonstrate how your study utilizes and builds upon past studies.
  • Consider what methods others have used and what methods have not been used but could be.

⚠️ Common mistake to avoid

The methods section is not simply a list of tasks to be undertaken. It is also an argument as to why and how the tasks you have outlined will help you investigate the research problem and answer your research question(s).

📋 Five tips for writing this section

| Tip | What to do |
| --- | --- |
| Specify approaches | State the methodological approaches you intend to employ and the techniques you will use to analyze data |
| Specify operations | Describe the research operations you will undertake and how you will interpret results in relation to the research problem |
| Go beyond hopes | State how you will actually implement the methods (e.g., coding interview text, running regression analysis) |
| Anticipate barriers | Acknowledge potential barriers you may encounter and describe how you will address them |
| Explain challenges | Identify where you expect challenges in data collection, including access to participants and information |

🔮 Preliminary suppositions and implications

🔮 Purpose

Argue how you anticipate your research will refine, revise, or extend existing knowledge in your study area.

🌟 Questions to address

  • How might your anticipated findings impact future research?
  • Is it possible your research may lead to:
    • A new policy?
    • Theoretical understanding?
    • Method for analyzing data?
  • How might your study influence future studies?
  • What might your study mean for future practitioners?
  • Who or what might benefit from your study?
  • How might your study contribute to social, economic, or environmental issues?

⚖️ Balance realism with possibility

  • Important to think about and discuss possibilities.
  • Equally important to be realistic—do not delve into idle speculation.
  • Purpose: reflect upon gaps in current literature and describe how your research will begin to fill some or all of those gaps.

🏁 Conclusion and references

🏁 The conclusion

Should be only one or two paragraphs that:

  • Reiterate the importance and significance of your research proposal
  • Provide a brief summary of the entire proposed study

Five-point outline for the conclusion:

  1. Discuss why the study should be done—how it will advance existing knowledge and why it is unique
  2. Explain the specific purpose and research questions
  3. Explain why the chosen design and methods are appropriate, and why others were not chosen
  4. State the potential implications you expect to emerge
  5. Provide a sense of how your study fits within broader scholarship related to the research problem

📖 Citations and references: two formats

| Format | Definition | Key rule |
| --- | --- | --- |
| Reference list | Lists the literature you referenced in the body of your proposal | All references must appear in the body; everything in the body must appear in the list |
| Bibliography | Everything you used or cited, plus additional citations to key sources relevant to understanding the research problem | Sources may not necessarily appear in the body of the proposal |

⚠️ Important citation rules

  • Check with your instructor which format is expected.
  • Never say "as cited in…"—always go to the original source and check it yourself.
  • Many errors are made in referencing, even by top researchers; don't perpetuate someone else's error.
  • For social sciences, use APA (American Psychological Association) format.
  • Usually the reference list or bibliography is not included in the word count (confirm with instructor).

🎯 What your citations demonstrate

Your list of citations should be a testament that you have done sufficient preliminary research to ensure your project will complement, but not duplicate, previous research efforts.

⚠️ Common mistakes to avoid

The excerpt lists seven common mistakes:

| Mistake | What it means |
| --- | --- |
| Failure to develop coherent argument | Not persuasively arguing for why the research should be undertaken |
| Failure to be concise | Not making the purpose clear; being "all over the map" |
| Failure to cite landmark work | Missing significant pieces of work in your literature review |
| Failure to set boundaries | Not defining contextual boundaries (time, place, people, etc.) |
| Failure to stay focused | Going off on unrelated tangents instead of focusing on the research problem |
| Sloppy writing | Including grammatical mistakes and imprecise writing |
| Wrong level of detail | Too much detail on minor issues, not enough on major issues |

💪 Final goal

At the end of the day, you want to leave the readers of your research proposal feeling, "Wow, this is an exciting idea and I cannot wait to see how it turns out!"

Deciding What to Share and With Whom to Share it

15.1 Deciding What to Share and With Whom to Share it

🧭 Overview

🧠 One-sentence thesis

Sociological researchers must share all aspects of their work—strengths and weaknesses—with appropriate audiences in formats tailored to each audience's needs while maintaining honest reporting of findings.

📌 Key points (3–5)

  • Share everything: ethical research requires sharing the good, bad, and ugly aspects to enable understanding, replication, and critique.
  • Know your audience: different stakeholders (scholars, funders, participants, public) require different framing, but findings themselves never change.
  • Three presentation formats: formal talks (15-20 min), roundtable discussions (conversational), and poster presentations (visual storytelling).
  • Common confusion: written scholarly reports vs presentations—don't read papers verbatim in talks; presentations highlight key points while written reports provide full detail.
  • Self-reflection questions: six critical questions help researchers identify what to share, from research motivations to limitations and unanswered questions.

🔍 What to share: the complete picture

🔍 The principle of full disclosure

Sociological research is a scholarly pursuit aiming for true understanding of social processes, requiring researchers to share all aspects of their work—the good, the bad, and the ugly.

  • This transparency serves two purposes:
    • Ethical obligation: honesty about methods and limitations
    • Replication: others can understand, build upon, and critique the work
  • Don't confuse: sharing weaknesses is not undermining your work—it's strengthening the scientific process.

❓ Six critical self-reflection questions

Before sharing, researchers should answer:

| Question | Purpose |
| --- | --- |
| Why did I conduct this research? | Reveals personal interests, investments, or biases |
| How did I conduct this research? | Ensures honesty about methods, sample, and analysis |
| For whom did I conduct this research? | Identifies stakeholders (funders, participants, community members, professors, employers) |
| What conclusions can I reasonably draw? | Highlights major strengths |
| What would I do differently? | Acknowledges potential weaknesses |
| What questions remain unanswered? | Points to future research directions |

🎯 Understanding stakeholders

  • Who qualifies as a stakeholder: the researcher themselves, funders, research participants, community members sharing characteristics with subjects, professors (for class projects), or employers.
  • Example: Research on parents would include parents as stakeholders; research in a specific community includes that community.
  • These stakeholders help determine appropriate audiences for sharing findings.

👥 Knowing your audience

👥 Identifying potential audiences

Primary audiences for research include:

  • Other social scientists: most obvious audience, especially for scholarly work
  • Class context: professor and fellow students for course projects
  • Stakeholders: those with direct interest in the research
  • Media representatives: reporters seeking public-interest stories
  • Policy makers: those who might use findings for decisions
  • General public: broader community members

🎨 Tailoring the frame, not the findings

  • Critical principle: you never alter actual findings for different audiences.
  • What changes: how you frame the research to make it meaningful to that specific audience.
  • Example: The same study might emphasize theoretical implications for scholars but practical applications for policy makers—same data, different emphasis.

🎤 Three presentation formats

🎤 Formal talks (15-20 minutes)

Key characteristics:

  • Typical length: 15-20 minutes at professional conferences
  • May include visual aids (video, PowerPoint)
  • Requires advance preparation and timing practice

Common pitfalls and solutions:

| Pitfall | Solution |
| --- | --- |
| Running over time | Practice repeatedly and time yourself; keep a watch visible |
| Getting too engrossed | Remember time flies when discussing your passion |
| Over-citing previous work | Focus on YOUR original work, not long literature reviews |
| Reading paper verbatim | Never do this—it bores audiences quickly |

What to highlight:

  • Research question
  • Methodological approach
  • Major findings
  • A few final takeaways

Don't confuse: Written reports require extensive literature review; presentations use limited time to showcase your original contribution.

💬 Roundtable discussions

Purpose and format:

  • Stimulates conversation about a topic
  • Slightly shorter presentation time than formal talks
  • Includes participation in discussion after all presenters speak

When roundtables work best:

  • Early-stage research or pilot studies
  • Seeking suggestions for next steps
  • Getting preview of potential reviewer objections
  • Networking with scholars sharing common interests

Example: A researcher with preliminary findings can present initial results, discuss where to take the study next, and receive feedback on methodology or interpretation.

📊 Poster presentations

Visual storytelling approach:

In a poster presentation you visually represent your work through graphs, charts, tables, and other images rather than text-heavy displays.

Design principles:

  • Don't: Print and paste your paper onto a poster board
  • Do: Tell the "story" through visual elements
  • Use bulleted points sparingly
  • Ensure someone walking by slowly can grasp major argument and findings
  • Avoid excessive wordiness

Purpose and advantages:

  • Designed to encourage audience engagement and conversation
  • Excellent for early-stage projects
  • Share highlights, not every detail
  • Get feedback, hear questions, provide additional details in person

Don't confuse: Posters are conversation starters, not comprehensive reports—completeness comes through discussion, not the poster itself.

Writing up Research Results

15.2 Writing up Research Results

🧭 Overview

🧠 One-sentence thesis

Writing up research results requires tailoring the format and content to your audience—scholarly or public—while maintaining honesty, clarity, and rigorous attribution to avoid plagiarism.

📌 Key points (3–5)

  • Scholarly reports follow a standard structure: abstract, introduction, literature review, methodology, findings, conclusions, and references.
  • Public reports differ from scholarly ones: they must be shaped by audience expectations, outlet rules (style, length), and what readers need to hear.
  • Core responsibility: present social scientific evidence clearly and honestly, acknowledge prior scholars, and engage readers in discussion.
  • Common confusion: scholarly vs. public writing—scholarly reports are comprehensive and reference-heavy; public reports are tailored to outlet constraints and audience interests.
  • Critical warning: plagiarism (presenting others' words or ideas as your own) is a career-ending transgression that must be avoided with extreme care.

📝 Structure of scholarly research reports

📚 Standard components

Scholarly reports written for other scholars generally follow a consistent format:

  • Abstract: summary of the research
  • Introduction: framing the research question
  • Literature review: situating the work among prior scholarship
  • Methodology: discussion of research methods
  • Findings: presentation of results
  • Conclusions and implications: discussion of what the work means
  • References: list of cited sources
  • Visual aids: tables or charts representing findings

🎓 How to learn the format

  • Reading prior literature in your area is the best way to understand how scholarly reports are structured.
  • Many resources exist to guide students in writing scholarly research reports.
  • Example: by reading published articles, you learn how to write each component yourself.

🎯 Tailoring reports to your audience

🧑‍🤝‍🧑 Know your audience

Knowing your audience is crucial when preparing a report of your research.

Two key questions to answer:

  • What are they likely to want to hear about?
  • What portions of the research do you feel are crucial to share, regardless of the audience?

These answers will shape your written report.

📰 Public vs. scholarly consumption

| Aspect | Scholarly reports | Public reports |
| --- | --- | --- |
| Format | Standard structure (abstract, lit review, methodology, etc.) | Shaped by outlet rules (style, length, presentation) |
| Content depth | Comprehensive, detailed | Tailored to audience needs and constraints |
| References | Extensive citation list | May be limited or adapted |
| Example | Academic journal article | Newspaper editorial |

  • Some outlets (e.g., newspapers) dictate the shape of your report through their rules.
  • Don't confuse: public reports are not "simplified" scholarly reports; they are fundamentally different in purpose and structure.

🔬 Your role as a social scientist

🎯 Core responsibilities

Remember what you are reporting: social scientific evidence.

Your duties:

  • Take your role seriously: you are a peer among scholars in your discipline.
  • Present clearly and honestly: findings should be as transparent as possible.
  • Acknowledge prior work: pay appropriate homage to scholars who came before you, even while questioning their work.
  • Engage readers: aim to start a discussion about your work and future research directions.

💭 Anticipate reader questions

  • Imagine what readers might ask upon reading your report.
  • Imagine your response.
  • Provide some of those details in your written report.
  • This approach works even if you will never meet your readers face-to-face.

⚠️ Avoiding plagiarism

🚨 What plagiarism is

Plagiarism: presenting someone else's words or ideas as if they are your own.

  • It is among the most egregious transgressions a scholar can commit.
  • Plagiarism has ended many careers, even many years after the offense.
  • Example: the excerpt references a case where plagiarism cost a senator his degree years later.

🛡️ How to protect yourself

  • Take extraordinary care not to commit plagiarism.
  • Take this warning very, very seriously.
  • If you feel a little afraid and paranoid after reading this warning, consider it a good thing.
  • Let that fear motivate you to take extra care to ensure you are not plagiarizing others' work.
  • Don't confuse: proper citation is not optional or a formality—it is a fundamental ethical requirement of scholarship.

Disseminating Findings

15.3 Disseminating Findings

🧭 Overview

🧠 One-sentence thesis

Disseminating research findings requires a deliberate, planned process of identifying target audiences, locating them, and communicating results in ways that facilitate uptake in decision-making and practice.

📌 Key points (3–5)

  • What dissemination means: a planned process involving consideration of target audiences and settings where findings will be received, not just writing up results.
  • The core challenge: writing up results and having others take notice are entirely different; people will not take notice unless you help and encourage them to do so.
  • Three-step framework: determine who your audience is, identify where they are, and discover how best to reach them.
  • Common confusion: "If you build it, they will come" does not apply—dissemination requires active effort, not passive availability.
  • Ethical duty: if you have conducted high-quality research with findings of interest beyond yourself, you have an obligation as a scholar to share those findings.

📢 What dissemination actually means

📚 Definition and scope

Dissemination: "a planned process that involves consideration of target audiences and the settings in which research findings are to be received and, where appropriate, communicating and interacting with wider policy and…service audiences in ways that will facilitate research uptake in decision-making processes and practice."

  • It is not simply publishing or posting results somewhere.
  • It involves careful planning, thought, consideration of target audiences, and communication with those audiences.
  • The goal is facilitating uptake—making sure findings actually influence decisions and practice.

⚠️ The passive approach doesn't work

  • The excerpt emphasizes a general rule: people will not take notice unless you help and encourage them to do so.
  • Reference to Field of Dreams: "just because you build it does not mean they will come."
  • Writing up results ≠ successful dissemination.
  • Don't confuse: completing the research write-up with completing the dissemination process.

👥 Identifying your audiences (the "who")

🎯 Potential audience categories

The excerpt lists several types of audiences to consider:

| Audience type | Why they matter |
| --- | --- |
| Research participants | They contributed to your work and likely have interest in what you discovered |
| People sharing characteristics with participants | They may benefit from awareness of your research |
| Other scholars studying similar topics | An obvious audience for your work |
| Policy makers | Should take note if your work has policy implications |
| Related organizations | Groups doing work in areas related to your research topic |
| Inquisitive public members | Any engaged members of the public represent a possible audience |

🤔 Thinking beyond enthusiastic interest

  • Your audience might include those who do not express enthusiastic interest.
  • They might nevertheless benefit from an awareness of your research.
  • Consider both direct interest and potential benefit.

📍 Locating your audiences (the "where")

🗺️ Finding each audience type

The excerpt states that location "should be fairly obvious once you have determined who you would like your audience to be."

Research participants and similar populations:

  • You know where they are because you studied them.

Scholars:

  • On your campus (e.g., campus events for presenting findings)
  • At professional conferences
  • Via professional organizations' newsletters (described as "an often-overlooked source for sharing findings in brief form")
  • In scholarly journals

Policy makers:

  • Your state and federal representatives
  • In theory, they should be available to hear constituents speak on policy matters

Organizations:

  • If not already aware of relevant organizations, a simple web search can identify them

General public:

  • Local newspaper (letters to the editor)
  • Blogs
  • Other public forums

🛠️ Reaching your audiences (the "how")

📋 Strategy depends on audience norms

  • Your strategy should be determined by the norms of the audience.
  • Different audiences have different expectations and requirements.

📝 Specific mechanisms by audience

Scholarly journals:

  • Provide author submission instructions
  • Clearly define requirements for anyone wishing to disseminate work via that journal

Newspapers:

  • Check the newspaper's website for details about how to format and submit letters to the editor

Political representatives:

  • Call their offices
  • Use a simple web search to find contact procedures

🔍 Do your homework

  • The excerpt emphasizes that information about "how to reach" each audience is readily available.
  • It requires some effort to look up the specific requirements and norms.

🎓 Your scholarly duty

⚖️ The obligation to share

  • Whether or not you act on dissemination suggestions is ultimately your decision.
  • But: if you have conducted high-quality research with findings likely to be of interest to constituents besides yourself, it is your duty as a scholar and a sociologist to share those findings.

📊 The three-step process summary

The excerpt concludes with a clear framework:

  1. Determine who your audience is
  2. Identify where your audience is
  3. Discover how best to reach them

  • This is presented as a systematic, sequential process.
  • Each step builds on the previous one.
  • All three steps are necessary for successful dissemination.

Reading Reports of Sociological Research

16.1 Reading Reports of Sociological Research

🧭 Overview

🧠 One-sentence thesis

Understanding how to critically read and evaluate sociological research reports requires knowing what to look for in each section, from abstracts to conclusions, and being able to distinguish statistically significant findings from those that may have occurred by chance.

📌 Key points (3–5)

  • Strategic reading order: Start with the abstract, acknowledgments, discussion section, and tables before reading the full report to build context efficiently.
  • Understanding tables: Tables condense key findings; independent variables appear in columns, dependent variables in rows, allowing readers to scan how relationships change.
  • Statistical significance matters: The p-value tells you the probability that observed relationships occurred by chance; lower p-values (typically below 0.05) indicate more confidence in the relationship.
  • Common confusion: Don't confuse a low p-value with proof of causation—it only tells you how unlikely the observed relationship would be if the null hypothesis (no relationship) were true.
  • Critical questions for each section: Every part of a research report—from sample selection to data analysis—should be evaluated with specific questions about strengths, weaknesses, and claims.

📖 How to approach reading research reports

📖 Start with quick-scan elements

Before diving into the full text, read these components first:

  • Abstract: A few hundred words summarizing major findings and the theoretical framework
  • Acknowledgments: Reveals who provided feedback, funding, or other support—gives contextual information about the research's backing
  • Discussion section: Near the end; provides interpretation of findings
  • Tables: Offer condensed summaries of key findings at a glance

This approach helps you quickly familiarize yourself with the study before committing to a full read.

🎯 Why this order matters

  • Builds context before encountering detailed methodology
  • Helps you decide whether the full report is relevant to your needs
  • Provides a framework for understanding technical details when you read them later

Example: If acknowledgments show funding from a particular organization, you can consider whether that might influence the research framing as you read.

📊 Understanding tables and statistical significance

📊 How tables are structured

A table provides a quick, condensed summary of the report's key findings.

Basic table organization:

  • Columns: Independent variable attributes (the presumed cause)
  • Rows: Dependent variable attributes (the presumed effect)
  • Reading strategy: Scan across rows to see how dependent variable values change as independent variable values change

Common table elements:

  • N: Frequencies (how many)
  • %: Percentages
  • Descriptive tables: Show sample characteristics (e.g., how many participants are women vs. men)
  • Causal relationship tables: Show how variables relate to each other
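The row/column convention above can be sketched with pandas, assuming it is available; the survey data, variable names, and values below are invented purely for illustration:

```python
import pandas as pd

# Hypothetical survey data: gender is the independent variable,
# "stared_at" (reporting being stared at) is the dependent variable.
data = pd.DataFrame({
    "gender":    ["Woman", "Woman", "Man", "Man", "Woman", "Man", "Woman", "Man"],
    "stared_at": ["Yes", "Yes", "No", "Yes", "No", "No", "Yes", "No"],
})

# Independent variable in columns, dependent variable in rows.
table = pd.crosstab(index=data["stared_at"], columns=data["gender"])

# Percentages within each column (each category of the independent
# variable sums to 100%), which is how such tables are usually read.
percents = pd.crosstab(index=data["stared_at"], columns=data["gender"],
                       normalize="columns") * 100

print(table)
print(percents.round(1))
```

Scanning across the "Yes" row of the percentage table then shows how the dependent variable changes as the independent variable changes.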

🔢 What statistical significance tells you

Statistical significance tells us how likely it is that a relationship we observe is due to chance (sampling error) rather than a genuine pattern in the population.

The p-value explained:

  • Measures how probable a relationship at least as strong as the one observed would be if, in fact, no relationship existed in the population
  • Indicates whether to reject the null hypothesis (the assumption that no relationship exists)

Interpreting p-values:

| p-value | Meaning | Confidence level |
| --- | --- | --- |
| 0.623 (62.3%) | High probability the observed relationship is due to chance | Low confidence in relationship |
| 0.039 (3.9%) | Low probability the observed relationship is due to chance | Higher confidence; "significant at the .05 level" |
| Below 0.05 | Less than 5% chance the result is due to sampling error alone | Generally considered statistically significant |

Example from the excerpt: In a workplace harassment study, threats to safety showed p = 0.623 (likely due to chance), while staring/invasion of personal space showed p = 0.039 (more likely a real relationship between gender and experiencing this behavior).
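As a minimal sketch of how such a p-value is produced and read, assuming SciPy is installed (the counts below are hypothetical, not data from the study described above):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table of counts: columns are the independent
# variable (men, women); rows are the dependent variable (whether the
# respondent reports staring/invasion of personal space).
observed = [[30, 50],   # reported the behavior: men, women
            [70, 50]]   # did not report it:     men, women

# chi2_contingency tests the null hypothesis that the row and
# column variables are unrelated.
chi2, p, dof, expected = chi2_contingency(observed)

if p < 0.05:
    print(f"p = {p:.3f}: reject the null hypothesis at the .05 level")
else:
    print(f"p = {p:.3f}: fail to reject the null hypothesis")
```

The same reading applies to published tables: a reported p below .05 means a relationship this strong would arise from sampling error alone less than 5% of the time.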

⚠️ Important distinction

Don't confuse: Social scientists state findings in terms of "rejecting the null hypothesis" rather than making bold claims about proven relationships. Statistical significance indicates confidence, not certainty.

🔍 Critical questions for each report section

🔍 Framework for evaluation

The excerpt provides a systematic approach to critically reading research reports. Here are the key questions organized by section:

📝 Abstract questions

  • What are the key findings?
  • How were those findings reached?
  • What framework does the researcher employ?

🤝 Acknowledgments questions

  • Who are the major stakeholders?
  • Who provided feedback or funding?
  • Are you familiar with those who supported the work?

🎯 Introduction questions

  • How does the author frame the research focus?
  • What other ways of framing the problem exist?
  • Why might this particular framing have been chosen?

📚 Literature review questions

  • How selective was the researcher in identifying relevant literature?
  • Does the review appear appropriately extensive?
  • Does the researcher provide a critical review (not just descriptive)?

👥 Sample questions

  • Was probability or nonprobability sampling employed?
  • What is the sample? What is the population?
  • What claims can be made based on this sample?
  • What are the major strengths and weaknesses?

📊 Data collection questions

  • How were data collected?
  • What are the strengths and weaknesses of this method?
  • What other methods might have been used, and why was this one chosen?
  • What do you know (and not know) about the data collection instruments?

🔬 Data analysis questions

  • How were data analyzed?
  • Is there enough information to feel confident that proper procedures were employed accurately?

📈 Results questions

  • What are the major findings?
  • Are findings linked back to research questions, hypotheses, and literature?
  • Are sufficient data provided (quotes for qualitative, statistics for quantitative) to support conclusions?
  • Are tables readable?

💭 Discussion and conclusion questions

  • Does the author generalize beyond the sample?
  • Are claims supported by data in the results section?
  • Have study limitations been fully disclosed and addressed?
  • Are implications sufficiently explored?

🎓 Being a responsible consumer

🎓 What responsible consumption means

Being a responsible consumer of research requires that you take seriously your identity as a social scientist.

Core responsibilities:

  • Distinguish what you do know from what you do not know based on the information provided
  • Have awareness about what you can and cannot reasonably know from research findings
  • Put your knowledge and skills to use when encountering research

📰 Context matters: scholarly vs. popular sources

Scholarly journal articles provide:

  • Detailed information about data collection methods
  • Sample characteristics and recruitment procedures
  • Full methodological context for assessing claims

Popular magazines/newspapers provide:

  • Less detailed information than scholarly sources
  • More limited basis for evaluation
  • What you do and do not know is more limited

Don't confuse: The level of confidence you can have in findings depends partly on how much methodological detail is available to you as a reader.

Being a Responsible Consumer of Research

16.2 Being a Responsible Consumer of Research

🧭 Overview

🧠 One-sentence thesis

Being a responsible consumer of research means distinguishing what you can know from what you cannot based on the information provided, while recognizing the limits of any study's transparency.

📌 Key points (3–5)

  • Core responsibility: taking your identity as a social scientist seriously by applying your knowledge to evaluate research findings.
  • What varies by source: scholarly journal articles provide detailed methods, samples, and recruitment information; popular media often omit these details, limiting what you can assess.
  • Funding matters: knowing who funded a study helps you weigh findings that may support particular political agendas.
  • Unavoidable unknowns: ethical protections mean you cannot know participants' identities, and researchers may not disclose personal stakes in their work.
  • Common confusion: not all information gaps are problems—awareness of what you cannot know is itself important contextual information.

📚 What responsible consumption requires

🎓 Your identity as a social scientist

  • Now that you understand how to conduct and read research, you have a responsibility to use that knowledge.
  • This is not passive reading—it requires active assessment.
  • You must distinguish:
    • What you do know based on the information provided.
    • What you do not know because information is missing or withheld.

🔍 Awareness of reasonable limits

  • Being responsible also means recognizing what you can and cannot reasonably know when encountering findings.
  • Not all missing information is a flaw; some gaps are inherent to research ethics or reporting format.

📰 Assessing different information sources

📖 Scholarly journal articles

When you read a scholarly journal article, you typically receive:

  • Detailed method of data collection.
  • Information about the sample.
  • How the researcher identified and recruited participants.

Why this matters: These details provide important contextual information that helps you assess the researcher's claims.

🗞️ Popular media (magazines, newspapers)

  • You will not find the same level of detail as in scholarly journals.
  • What you do and do not know is more limited than with a scholarly article.
  • Example: A newspaper may report "a study found X" without explaining sample size, method, or limitations.

Don't confuse: A finding reported in popular media is not necessarily wrong, but you have less context to evaluate its strength.

💰 The role of funding sources

💵 Why funding information matters

  • Most funders want and require that recipients acknowledge them in publications.
  • However, the popular press may leave out the funding source.

🔎 How to find funding information

  • In the internet age, it is relatively easy to obtain information about how a study was funded.
  • If the source does not provide this information, do a quick web search to learn more about the researcher's funding.

⚖️ Weighing findings in context

Findings that seem to support a particular political agenda might have more or less weight once you know whether and by whom a study was funded.

  • Knowing the funder does not automatically invalidate findings, but it provides crucial context.
  • Example: An organization funded by an industry group reports findings favorable to that industry—this context helps you assess potential bias.

🚫 What you cannot know

🔒 Ethical protections: participant identities

  • Researchers are ethically bound to protect the identities of their subjects.
  • You will never know exactly who participated in a given study.
  • This is not a flaw—it is a necessary protection.

🤐 Researchers' personal stakes

  • Researchers may choose not to reveal personal stakes they hold in the research they conduct.
  • You cannot know for certain whether or how researchers are personally connected to their work unless they choose to share such details.

✅ Why "unknowables" are not necessarily problems

| What you cannot know | Why | Is this a problem? |
| --- | --- | --- |
| Participant identities | Ethical protection | No—necessary for ethics |
| Researchers' personal connections | Researcher's choice | Not necessarily—but awareness of the gap matters |

  • Neither of these "unknowables" is necessarily problematic.
  • Key insight: Having awareness of what you may never know about a study provides important contextual information for assessing what you can take away from a given report of findings.

Don't confuse: "I cannot know this" does not mean "the study is flawed"—it means you should interpret findings with appropriate caution about the limits of your knowledge.

Sociological Research: It's Everywhere

16.3 Sociological Research: It's Everywhere

🧭 Overview

🧠 One-sentence thesis

Understanding sociological research methods equips you to recognize, evaluate, and question sociological claims that appear throughout everyday life, from expert testimony to casual assumptions.

📌 Key points (3–5)

  • Where sociology appears: in court cases, social policy, media reports, and everyday conversations—often without us realizing it.
  • The armchair sociologist problem: people make sweeping claims about society without evidence; research literacy helps you question these unfounded assumptions.
  • What you gain: ability to distinguish evidence-based knowledge from anecdotal claims and to understand brief mentions of sociology in daily life.
  • Common confusion: not all claims about society are sociological research—many are just personal opinions dressed up as facts.
  • Why it matters: research methods training prepares you to be a critical consumer of information and to challenge assumptions (including your own).

🔍 Encountering sociology in daily life

🔍 Where research shows up

The excerpt emphasizes that sociology appears in many contexts once you start paying attention:

  • Court cases and lawsuits (sociologists as expert witnesses)
  • Social policy development
  • Media reports and popular press
  • Everyday conversations

Key insight: Sometimes we encounter sociological research without recognizing it as such.

📰 Brief mentions require background knowledge

  • When sociology appears in everyday life (news, conversations, policy debates), it's often mentioned briefly.
  • Having research methods knowledge helps you understand these references better than you would without that background.
  • Example: A news article cites a sociological study—you can now assess its methods, funding, and limitations rather than accepting claims at face value.

🪑 The armchair sociologist problem

🪑 What armchair sociologists do

Armchair sociologists: people who make sweeping claims about how society "is" or how groups of people "are" without evidence beyond anecdotes (or no evidence at all).

Characteristics:

  • Tend to "wax poetic" about society
  • Rely on personal observations or assumptions
  • Lack systematic evidence to support their claims
  • May sound authoritative despite having no research basis

🛡️ How research literacy protects you

Now that you understand research methods, you are:

  • Better equipped to question armchair sociologists' assumptions
  • Prepared to ask: "What evidence supports that claim?"
  • Able to share knowledge about how we "know" things with others
  • Positioned to help others break the habit of making unfounded assumptions

Important acknowledgment: The excerpt notes that we have all probably made unfounded assumptions at some point—research literacy helps us recognize and correct this tendency.

🧠 Skills for responsible consumption

🧠 What responsible consumers do

The excerpt builds on earlier material about being a responsible consumer of research. Key abilities include:

Skill | What it means
Understanding what you know | Recognizing the evidence and methods behind claims
Understanding what you don't know | Acknowledging gaps and limitations
Understanding what you can know | Identifying what information is available or discoverable
Understanding what you cannot know | Accepting inherent unknowables (e.g., protected participant identities)

🔎 Practical application

  • Question everyday assumptions that others make
  • Distinguish between evidence-based sociology and personal opinion
  • Recognize when claims need supporting evidence
  • Apply critical thinking to information you encounter daily

💡 Beyond the classroom

The excerpt emphasizes that sociological research can benefit areas of your life outside academic settings:

  • Evaluating news and media claims
  • Participating in policy discussions
  • Understanding social issues
  • Making informed decisions based on evidence rather than assumptions

🎯 The broader context

🎯 Why this matters

The excerpt positions research literacy as:

  • A practical skill for navigating information-rich environments
  • A tool for social engagement (questioning claims, shaping discussions)
  • A defense against misinformation (recognizing unfounded assumptions)
  • A transferable ability that applies across life contexts

🌐 Paying attention

The opening line captures the main idea: "It is amazing where and how often you might discover sociology rearing its head when you begin to pay attention, look for it, and listen for it."

Don't confuse: Sociology appearing everywhere doesn't mean everything is sociology—it means that once you understand research methods, you'll recognize legitimate sociological work and distinguish it from casual claims about society.

84

Doing Research for a Living

17.1 Doing Research for a Living

🧭 Overview

🧠 One-sentence thesis

Social researchers can pursue careers in diverse settings including evaluation research, market research, and government research, each applying research methods to different practical purposes.

📌 Key points (3–5)

  • Where sociologists work: market research firms, corporations, think tanks, government agencies, universities, and specialized research firms.
  • Three main employment areas: evaluation research (assessing program effects), market research (guiding business decisions), and government/policy research.
  • What these roles share: all use the same data collection and analysis methods learned in sociology, but apply them to different purposes.
  • Common confusion: these are not different research methods—they are different uses of research methodology for particular purposes.
  • Skills in demand: understanding human preferences, attitudes, and behaviors through systematic data collection.

💼 Major employment sectors

💼 Types of employers

Organizations that hire social researchers include:

  • Market research firms (e.g., Gallup, Nielsen)
  • Corporations and businesses
  • Public relations and communications firms
  • Academic institutions and research institutes
  • Think tanks and private research firms
  • Public research firms and policy groups
  • All levels of government

🎓 What undergraduate sociology researchers do

The excerpt emphasizes that sociologists with undergraduate degrees in research are "most likely to find employment as researchers" in three specific areas: evaluation research, market research, and government research.

🔍 Evaluation research

🔍 What it measures

Evaluation research: research that is conducted to assess the effects of specific programs or policies.

  • Used when social interventions are planned (e.g., welfare reform, school curriculum changes)
  • Assesses whether interventions are necessary by defining and diagnosing social problems
  • Determines whether applied interventions achieved their intended consequences

🏢 Where it happens

  • Many firms specialize in evaluation research
  • Different firms may focus on different research areas
  • Example: searching "evaluation research firm" online reveals numerous specialized companies

🛒 Market research

🛒 Core purpose

Market research: research that is conducted for the purpose of guiding businesses and other organizations as they make decisions about how best to sell, improve, or promote a product or service.

  • Not a specific method, but a particular way of utilizing research methodology
  • Focuses on understanding consumers, competitors, and industries
  • Helps organizations reach and appeal to their target audiences

📊 What market researchers study

Market researchers gather data about:

  • Core markets and customers
  • Competitors
  • Industry trends more generally
  • Consumers' preferences, tastes, attitudes, and behaviors

🔧 Methods used in market research

Method | Description
Observation | Watch customers in stores to see which displays attract them
Surveys | Assess consumer satisfaction with goods or services
Covert observation | Act as secret shopper or diner to experience service as a real customer
Focus groups | Conduct group discussions with consumers

🏭 Where it happens

  • Specialized market research firms hired by other organizations
  • In-house research departments in large businesses
  • Non-profit organizations seeking to understand clientele needs and promote services

Don't confuse: Market research is not a unique research method—it uses the same social scientific methods (surveys, observation, focus groups) but applies them specifically to understand consumers and guide business decisions.

🏛️ Government and policy research

🏛️ Scale and scope

  • Governments are "one of the largest employers of applied social science researchers"
  • Policy and government research can cover any number of areas
  • Involves applying social science research methodology to public sector questions

🔑 Core insight about these careers

🔑 Shared foundation

All three career paths (evaluation, market, and government research) share a fundamental characteristic:

  • They may use "any of the data collection or analysis strategies" described in previous chapters
  • Their "purpose and aims may differ" from one another
  • Each represents "a particular use of research rather than a research method per se"

Example: A researcher might use surveys in all three contexts—to evaluate a social program's effectiveness, to assess consumer preferences for a product, or to gauge public opinion on a policy—but the ultimate goal differs in each case.

85

Doing Research for a Cause

17.2 Doing Research for a Cause

🧭 Overview

🧠 One-sentence thesis

Action research and public sociology enable researchers to collaborate with communities and practitioners to create knowledge that directly addresses real-world problems and promotes social change.

📌 Key points (3–5)

  • Action research purpose: creates new knowledge with practical application beyond basic science, often bridging the gap between researchers and practitioners.
  • Interdisciplinary nature: action research brings together diverse fields (social sciences, natural sciences, engineering, philosophy, history) to tackle complex problems.
  • Transferable skills from research methods: problem identification, problem-solving, investigative abilities, asking good questions, and critical thinking—all valuable across many careers.
  • Common confusion: critical thinking is not just criticizing everything; it means carefully evaluating assumptions, identifying both strengths and weaknesses, and understanding multiple positions.
  • Public sociology impact: makes sociology more visible and accessible to broader audiences beyond academic circles.

🔬 What action research accomplishes

🎯 Practical knowledge creation

Action research: research that creates new knowledge with a practical application and purpose in addition to the creation of knowledge for basic scientific purposes.

  • Goes beyond pure theory or basic science
  • Aims to solve real problems while generating knowledge
  • Example: A research project might study educational practices while simultaneously improving them

🌉 Bridging the researcher-practitioner gap

The Canadian Journal of Action Research (CJAR) explicitly aims to "mend the rift between researcher and practitioner" in educational research.

Three main goals of CJAR:

  • Make research outcomes widely available
  • Provide models of effective action research
  • Enable educators to share their experiences

This approach ensures research findings actually reach and benefit the people who can use them.

🤝 Collaborative and interdisciplinary approaches

🔀 Bringing disciplines together

Action research is often interdisciplinary, uniting researchers from:

  • Social sciences (sociology, political science, psychology)
  • Physical and natural sciences (biology, chemistry)
  • Engineering
  • Philosophy
  • History
  • And many others

🏘️ Community-researcher partnerships

The University of Maine's Sustainability Solutions Initiative (SSI) exemplifies this approach:

  • Unites campus researchers with local community members
  • Connects knowledge with action
  • Promotes strong economies, vibrant communities, and healthy ecosystems
  • The knowledge/action connection is essential to the mission
  • Collaboration between community stakeholders and researchers is crucial

Don't confuse: This is not researchers studying communities from the outside; it's researchers and community members working together as partners.

💼 Transferable skills from research methods

🧩 Problem identification and problem-solving

Transferable skills: the conglomeration of tasks in which a person develops proficiency from one realm that can be applied in another realm.

Social scientific research training develops the ability to:

  • Identify problems that others might overlook
  • Seek knowledge aimed at understanding problems
  • Develop tools to begin solving those problems

Career applications:

  • Journalism (investigative skills)
  • Criminal justice (questioning assumptions)
  • Any position requiring problem-solving and learning new approaches

❓ Asking good questions

This skill is:

  • Essential in many areas of employment and life
  • Linked to critical thinking abilities
  • Developed through research methods training

🧠 Critical thinking

Critical thinking: a skill that involves the careful evaluation of assumptions, actions, values, and other factors that influence a particular way of being or doing.

What critical thinking actually means:

  • NOT just criticizing every idea or person
  • Takes practice to develop
  • Requires identifying both weaknesses AND strengths in taken-for-granted ways
  • Involves understanding varying positions on issues, even without agreeing with them

Don't confuse: A critical thinker is not a chronic complainer; they systematically evaluate ideas from multiple angles.

🔍 Investigative abilities

Research methods training develops skills for:

  • Calling assumptions into question
  • Solving problems systematically
  • Asking questions effectively
  • Learning new ways of doing things

These abilities are valuable across virtually any profession.

📢 Public sociology and accessibility

🌐 Making sociology visible

Public sociology has made the discipline:

  • More visible to broader audiences
  • More accessible than perhaps ever before
  • Connected to real-world applications

📚 Professional dissemination

The Canadian Sociological Association (CSA):

  • Promotes research, publication, and teaching in sociology in Canada
  • Publishes the Canadian Review of Sociology (CRS), in existence since 1964
  • Disseminates innovative ideas and research findings

This infrastructure ensures sociological knowledge reaches beyond academic circles to inform practice and policy.

86

Revisiting an Earlier Question: Why Should We Care?

17.3 Revisiting an Earlier Question: Why Should We Care?

🧭 Overview

🧠 One-sentence thesis

Learning social scientific research methods equips you with transferable problem-solving, critical-thinking, and communication skills that are valuable across employment and life circumstances.

📌 Key points (3–5)

  • Core transferable skill: the ability to identify and solve problems, which is crucial in many areas of employment.
  • Critical thinking is not criticism: it means carefully evaluating assumptions and understanding multiple positions, not simply criticizing ideas.
  • Nine key skills: problem identification, problem-solving, investigation, asking questions, framing arguments, listening, critical thinking, analyzing/synthesizing/interpreting information, and oral/written communication.
  • Common confusion: critical thinking ≠ sitting back and criticizing everything; it requires practice and involves identifying both strengths and weaknesses in taken-for-granted ways.
  • Ultimate benefit: these skills help you understand yourself, your circumstances, and your world by enabling you to ask and answer pressing questions.

🛠️ Problem-solving and investigative abilities

🛠️ Identifying and solving problems

The primary transferable skill developed by learning social scientific research is the ability to solve problems.

  • Social researchers identify social problems and seek knowledge to understand and eradicate them.
  • You are now better equipped not only to solve problems but also to identify them in the first place.
  • Having the ability to seek out problems and the requisite knowledge and tools to begin solving them is crucial in many employment areas.

🔍 Investigative skills

  • The investigative skills you have developed can be put to use in any job where assumptions are called into question.
  • Example: journalism requires investigation; criminal justice requires investigative skills; any position that requires solving problems, asking questions, and learning new ways of doing things benefits from these skills.

🧠 Critical thinking and questioning

❓ Asking good questions

  • A talent for asking good questions is another important ability related to problem-identification and problem-solving skills.
  • The ability to ask good questions is essential in many areas of employment and most areas of life.
  • This skill is linked to critical thinking.

🧠 What critical thinking really means

Critical thinking: the careful evaluation of assumptions, actions, values, and other factors that influence a particular way of being or doing.

  • Don't confuse: critical thinking does not mean sitting back and criticizing every idea or person that comes your way.
  • It is a skill that takes practice to develop.
  • It requires an ability to identify both weaknesses and strengths in taken-for-granted ways of doing things.
  • A person who thinks critically should be able to demonstrate some level of understanding of the varying positions one might take on any given issue, even if he or she does not agree with those positions.

📊 Information and communication skills

📊 Analyzing, synthesizing, and interpreting information

  • Understanding sociological research methods means having some understanding of how to analyze, synthesize, and interpret information.
  • Having a well-developed ability to carefully take in, think about, and understand the meaning of new information will serve you well in all varieties of life circumstances and employment.

💬 Communicating effectively

  • The ability to communicate and clearly express oneself, both in writing and orally, is crucial in all professions.
  • As you practice the tasks described throughout the text, you will attain and improve the oral and written communication skills that many employers value.

🎯 Framing arguments and listening

  • The ability to effectively frame an argument or presentation is related to effective communication.
  • Successfully framing an argument requires not only good communication skills but also strength in the area of listening to others.

📋 Complete list of transferable skills

📋 Nine key skills gained

The transferable skills you have gained as a result of learning how to conduct social scientific research include:

  1. Identifying problems
  2. Identifying solutions to problems
  3. Investigative skills and techniques
  4. Asking good questions
  5. Framing an argument
  6. Listening
  7. Thinking critically
  8. Analyzing, synthesizing, and interpreting information
  9. Communicating orally and in writing

🌍 Understanding yourself and your world

🌍 The ultimate reward

  • Perhaps the most rewarding consequence of understanding social scientific research methods is the ability to gain a better understanding of yourself, your circumstances, and your world.
  • Through the application of social scientific research methods, sociologists have asked and answered many of the world's most pressing questions.

🔄 The ongoing quest for knowledge

  • The answers are not always complete, nor are they infallible, but the quest for knowledge and understanding is an ongoing process.
  • As social scientists continue the process of asking questions and seeking answers, you may choose to participate in that quest now that you have gained some knowledge and skill in how to conduct research.
  • Having thought about what you know and how you know it, as well as what others claim to know and how they know it, should provide you with some clarity in an often murky world.

87

Understanding Yourself, Your Circumstances, and Your World

17.4 Understanding Yourself, Your Circumstances, and Your World

🧭 Overview

🧠 One-sentence thesis

Learning social scientific research methods enables you to gain deeper insight into yourself, your circumstances, and the world by equipping you to critically evaluate knowledge claims and participate in the ongoing quest for understanding.

📌 Key points (3–5)

  • Core benefit: research methods help you understand yourself, your circumstances, and your world more clearly.
  • What sociologists do: they use research methods to ask and answer pressing questions, though answers are never complete or infallible.
  • Knowledge evaluation: thinking about what you know, how you know it, and how others claim to know things provides clarity in a murky world.
  • Personal choice: whether you adopt these ways of knowing is up to you; the goal is usefulness in personal life, relationships, or career goals.
  • Common confusion: research knowledge is not about final, perfect answers—it's an ongoing process of asking questions and seeking understanding.

🌍 The broader impact of research methods

🌍 Addressing the world's questions

  • Social scientists use research methods to tackle many of the world's most pressing questions.
  • The answers are never complete or infallible—knowledge-seeking is an ongoing process.
  • Example: researchers continue asking new questions and refining understanding over time, rather than claiming to have solved problems definitively.

🔄 The ongoing nature of inquiry

The quest for knowledge and understanding is an ongoing process.

  • This is not a one-time achievement; social scientists continuously ask questions and seek answers.
  • The excerpt invites readers to participate in this quest now that they have gained research skills.
  • Don't confuse: having research skills doesn't mean you have all the answers—it means you can join the process of inquiry.

🧠 Gaining clarity about knowledge

🧠 Thinking about what you know

The excerpt emphasizes two dimensions of knowledge evaluation:

Dimension | What to examine
Your own knowledge | What you know and how you know it
Others' claims | What others claim to know and how they know it

  • This dual reflection provides clarity in an often murky world.
  • It's about developing critical awareness of knowledge sources and methods.

🎯 Choosing your ways of knowing

  • Whether you adopt the particular ways of knowing described in the text is totally up to you.
  • The goal is not to force a single approach but to offer useful tools.
  • Example: you might use research methods in personal life, relationships, or career goals—wherever they prove helpful.

💼 Transferable skills from research training

💼 Nine key skills

The excerpt lists transferable skills gained from learning social scientific research:

  1. Identifying problems
  2. Identifying solutions to problems
  3. Investigative skills and techniques
  4. Asking good questions
  5. Framing an argument
  6. Listening
  7. Thinking critically
  8. Analyzing, synthesizing, and interpreting information
  9. Communicating orally and in writing

🗣️ Communication and argumentation

  • The ability to communicate and clearly express oneself (both writing and orally) is crucial in all professions.
  • Successfully framing an argument requires both good communication skills and strength in listening to others.
  • These skills are valued by employers across many fields.

📖 Processing information

Having a well-developed ability to carefully take in, think about, and understand the meaning of new information with which you are confronted will serve you well in all varieties of life circumstances and employment.

  • This is about careful intake, thoughtful consideration, and understanding meaning—not just passive reception.
  • The skill applies broadly across life circumstances and employment contexts.

🎓 Personal and practical applications

🎓 Where the knowledge is useful

The excerpt identifies three domains where research knowledge may prove valuable:

  • Personal life and interests: applying research thinking to your own questions and concerns.
  • Relationships with others: understanding and communicating more effectively.
  • Longer-range school or career goals: building foundations for future professional development.

🌟 The most rewarding consequence

Perhaps the most rewarding consequence of understanding social scientific research methods is the ability to gain a better understanding of yourself, your circumstances, and your world.

  • This is framed as the ultimate benefit—not just career skills but deeper self-awareness and world understanding.
  • The emphasis is on personal growth and insight, not just technical competence.