Introduction to Political Science Research Methods

Welcome to Political Science Research Methods

Section 1.1: Welcome

🧭 Overview

🧠 One-sentence thesis

Political science welcomes all students into a diverse scholarly community that borrows from many disciplines to understand politics and solve public problems, and this open textbook empowers the next generation to shape the field through accessible research methods education.

📌 Key points (3–5)

  • What political science is: a scholarly community of students, teachers, researchers, and practitioners studying who gets what, when, where, how, and why—not just behaviors and institutions.
  • Political science as a "borrowing" discipline: it uses knowledge from history, economics, psychology, sociology, statistics, anthropology, computer science, mathematics, cognitive science, and biology while maintaining its own core tenets.
  • Why this textbook matters now: students face food/housing insecurity, inequality, right-wing populist movements, and Big Data disruption—research methods training empowers them to address these challenges.
  • Common confusion: political science vs other fields—political science differentiated itself from history and economics in its early years but still borrows from them and other disciplines.
  • Open invitation: this is an Open Education Resource (OER) with a Creative Commons CC-BY-NC license, so students and faculty can contribute improvements and adaptations.

🌍 The Political Science Community

🌍 Who belongs

Political science: the scientific study of who gets what, when, where, how, and why.

  • It is more than the study of political behaviors, processes, and institutions.
  • It is a scholarly community of students, teachers, researchers, and practitioners who care about generating, disseminating, and applying knowledge.
  • The community is increasingly diverse and resides all over the planet.
  • You are warmly welcomed to learn about and join this community.

📜 A relatively young discipline

  • The American Political Science Association (APSA) was established in 1903 (turn of the last century).
  • Over the last 116 years, the discipline has dramatically evolved.
  • Early efforts focused on:
    • Inspiring a democratically minded public
    • Pioneering innovations in political institutions and processes
  • In its formative years, political science sought to differentiate itself from history and economics.

🔄 Political Science as a "Borrowing" Discipline

🔄 What "borrowing" means

A "borrowing" discipline: one that has core tenets, theories, and ways of analyzing the political world, but also utilizes and leverages knowledge from a range of other fields.

  • Political science borrows from:
    • History, economics, psychology, sociology, statistics
    • Anthropology, computer science, mathematics, cognitive science
    • Even biology
  • Borrowing is bidirectional: other fields can borrow from political science as well.
  • Example: political economy compares market-based systems with government-run systems.

🎓 Why borrowing matters for students

  • Students with diverse intellectual interests can explore them through the borrowing framework.
  • Don't confuse: "borrowing" does not mean political science lacks its own identity—it has core tenets and theories, but enriches them with insights from other fields.

📖 About This Textbook

📖 Purpose and audience

This book, Introduction to Political Science Research Methods (IPSRM), is an Open Education Resource (OER) written by community college faculty and supported by the Academic Senate for California Community Colleges (ASCCC).

Three-fold purpose:

  1. Introduce college students to research methods of political science
  2. Provide a no-cost textbook for adoption by faculty and use by students
  3. Invite faculty and students to contribute to the improvement of the book

🌟 Why now

The textbook comes at an important time in the discipline's history:

  • Advanced democracies are strained by right-wing populist movements promoting austerity
  • A rise in inequality manifests in students struggling with food and housing insecurity
  • A Big Data revolution is upending industries and displacing workers

Implication: There is a clear need for all political science students to have access to learning about research methods to creatively grapple with trends and challenges facing societies and governments.

🔓 Open and collaborative

  • Licensed under Creative Commons with Attribution and Non-Commercial (CC-BY-NC)
  • You can expand this textbook and make it your own
  • Contributions welcomed for:
    • Grammatical errors
    • Clarifications needed in chapter sections
    • Underrepresented communities or voices in examples
    • Missing topics

🎯 The Dual Task of Faculty and the Future of Students

🎯 Faculty's dual task

Faculty (teachers and researchers) have two responsibilities:

  1. Welcoming students to the discipline
  2. Imparting knowledge of political behaviors, processes, and institutions to create a publicly spirited, scholarly minded, and civically engaged public

🔥 The spark

  • Most students take only one course in political science (to fulfill a social science or national government requirement)
  • A fraction will continue their study because something sparked their interest
  • This spark, the authors hope, turns into a gleaming shine that motivates students to shape political institutions and processes at subnational, national, and global levels

🌱 Students as the future

  • Students are the future of any academic discipline and scholarly community
  • How students are educated now will shape the discipline for generations to come
  • Empowering current students and future researchers with research methods tools helps them address societal challenges

Section 1.2: The Social Network of Political Science

🧭 Overview

🧠 One-sentence thesis

Political science functions as a dynamic social network of students, teachers, researchers, and practitioners who interact within and across seven subfields to shape the discipline's evolution.

📌 Key points (3–5)

  • Political science as a social network: the discipline is best understood as a community of interconnected groups (students, teachers, researchers, practitioners) rather than just an academic field.
  • Interactions within and between groups: relationships happen both inside groups (e.g., student-to-student) and across groups (e.g., graduate student presenting to faculty), and these interactions shape the discipline.
  • Seven subfields organize the network: American Government and Politics, Comparative Politics, International Relations, Political Theory, Political Methodology, Public Policy, and Political Science Education.
  • Common confusion: don't think of political science as static or isolated—it's a living network where individuals belong to multiple subfields simultaneously and where diversity drives change.
  • Why it matters: understanding political science as a social network helps explain how the discipline evolves through interactions and representation (e.g., the first all-women editorial team for a flagship journal).

🌐 Political science as a social network

🌐 What the network metaphor means

Instead of thinking of political science as an academic discipline, we can think of it as a community, or better yet, a social network of individuals that associate in groups.

  • The excerpt shifts from viewing political science as a formal discipline to seeing it as a network of people who interact.
  • Four main groups populate this network: students, teachers, researchers, and practitioners.
  • These groups are not isolated; they connect and influence each other.

🔗 How interactions work

The excerpt distinguishes two types of interactions:

Type | Description | Example from excerpt
Within groups | Relationships among members of the same group | A student disagrees with a classmate during discussion and waits until next class to respond
Between groups | Interactions across different groups | A doctoral graduate student presents research at a conference and interacts with faculty from other universities for the first time
  • Within-group interactions "typically consume our time and attention."
  • Between-group interactions are less frequent but can be pivotal (e.g., a graduate student's first conference).
  • Don't confuse: the network is not just about formal hierarchies; it includes everyday classroom exchanges and milestone events.

🔄 The dynamic nature of the network

🔄 Why the network is dynamic

  • The excerpt emphasizes that "the social network of political science is dynamic."
  • Interactions between groups help shape the discipline in meaningful ways.
  • Change happens through these interactions, not in isolation.

🌊 Example: representation and sea change

The excerpt provides a concrete example of how the network evolves:

  • The American Political Science Review (APSR), a "flagship journal" where many researchers seek publication, appointed its first all-women editorial team in the Association's 100+ year history.
  • This represents a "sea change" toward both descriptive representation (who is present) and substantive representation (what perspectives are included).
  • The excerpt notes: "this sea change is only possible because the political science community is increasingly diverse and interacting regularly."
  • Example: as more diverse students become teachers and researchers, and as these groups interact more, the discipline's leadership and priorities shift.

🗂️ The seven subfields

🗂️ What the subfields are

The excerpt lists seven subfields that organize the discipline:

  1. American Government and Politics
  2. Comparative Politics
  3. International Relations
  4. Political Theory
  5. Political Methodology
  6. Public Policy
  7. Political Science Education
  • Each subfield has its own "sub-disciplinary networks of students, teachers, researchers, and practitioners."
  • These subfields "engage in the acquisition, creation, and dissemination of knowledge."

🔀 Belonging to multiple subfields

  • Individuals can be part of more than one subfield at the same time.
  • Example from the excerpt: a second-year community college student enrolled in Introduction to International Relations and Introduction to Political Science Research Methods is "a student in two of the seven subfields for the term."
  • The student's professors are teachers within those respective subfields.
  • Don't confuse: subfields are not rigid silos; people move across them and belong to several simultaneously, which increases the network's interconnectedness.

🧩 How subfields fit into the larger network

  • The excerpt presents a visualization concept: the overall social network (students, teachers, researchers, practitioners) is subdivided into seven subfield networks.
  • Each subfield is "populated by students, teachers, researchers, and practitioners."
  • This structure allows for both specialized focus (within a subfield) and cross-pollination (across subfields and groups).

Section 1.3: Organization of the Book

🧭 Overview

🧠 One-sentence thesis

This textbook is a collaboratively authored Open Education Resource structured with ten chapters and seven recurring elements designed to support both stand-alone and sequential learning, with feedback from users expected to refine future iterations.

📌 Key points (3–5)

  • What the textbook is: an Open Education Resource (OER) co-authored by six political scientists at six California community colleges, covering political science research methods in ten chapters.
  • How each chapter is structured: seven recurring elements—Chapter Outline, Chapter Sections, Key Terms/Glossary, Summary, Review Questions, Critical Thinking Questions, and Suggestions for Further Study.
  • Recommended use: chapters are best followed in order for coherence, but faculty may assign specific chapters to complement other materials.
  • Common confusion: Chapter Sections vs. Summary—the Summary provides a one-paragraph synopsis of each section, but is not a replacement for reading the full section.
  • Why feedback matters: the textbook is expected to evolve based on feedback from faculty and students after adoption and use.

📚 Textbook structure and authorship

📚 What the textbook covers

Introduction to Political Science Research Methods (IPSRM): an Open Education Resource consisting of ten chapters.

  • The textbook is co-authored by a team of six political scientists at six different community colleges in California.
  • Each chapter has a specific title and designated author(s), as shown in Table 1-1 in the excerpt.
  • Topics range from introduction and history of empirical political study to research design, qualitative and quantitative methods, and research ethics.

✍️ Chapter authorship

The excerpt provides a table listing all ten chapters with their titles and authors:

Chapter | Chapter Title | Authors
1 | Introduction | Josh Franco, Ph.D.
2 | History and development of the empirical study of politics | Dino Bozonelos, Ph.D. and Josh Franco, Ph.D.
3 | The scientific method | Josh Franco, Ph.D. and Kau Vue, M.A., M.P.A.
4 | Theories, hypotheses, variables, and units | Josh Franco, Ph.D.
5 | Conceptualization, operationalization and measurement of political concepts | Charlotte Lee, Ph.D.
6 | Elements of research design including the logic of sampling | Kau Vue, M.A., M.P.A.
7 | Qualitative research methods and means of analysis | Charlotte Lee, Ph.D.
8 | Quantitative research methods and means of analysis | Masa Omae, Ph.D. and Dino Bozonelos, Ph.D.
9 | Research Ethics | Masa Omae, Ph.D. and Steven Cauchon, Ph.D.
10 | Conclusion | Josh Franco, Ph.D.
  • Multiple authors contribute to different chapters, reflecting collaborative expertise.

🧩 Seven recurring chapter elements

🧩 Chapter Outline

  • Provides a list of the chapter's sections.
  • You can click on the name of a section to jump directly to it.
  • Important because it quickly and concisely gives an overview of the chapter and a clear sense of its contents.

📖 Chapter Sections

  • The body of the chapter; collectively include most of the substantive content.
  • Each chapter author has tried to write sections as stand-alone parts, but there will naturally be flow and integration across chapters.
  • Don't confuse: while sections can be read independently, they are designed to connect and build on one another.

📘 Key Terms/Glossary

  • A repository of definitions of key terms used throughout the chapter sections.
  • Key terms are listed in alphabetical order.
  • In some instances, key terms are linked to external content (e.g., Dictionary.com or Wikipedia) for further exploration.
  • Key terms are also linked within chapter sections—you can click on a term and be directed to the Key Terms/Glossary section.

📝 Summary of the chapter

Summary: a one-paragraph synopsis of each section of the chapter.

  • The goal is to distill each chapter section into a bite-sized chunk that can be quickly referenced.
  • Each synopsis highlights a major concept of the section and serves as a reference.
  • Important: these should not be viewed as replacements for reading a specific chapter section.
  • Example: if you want a quick reminder of what a section covered, read the summary; if you need to understand the concept fully, read the full section.

❓ Review Questions

  • Include at least 5 questions per chapter.
  • Can serve as a pop quiz, clicker questions, student self-check, or part of a question bank for summative assessments (e.g., midterm or final).
  • In future iterations, the authors plan to create a Learning Management System Course Shell that would convert these questions into both a Question Bank and Quiz.

💭 Critical Thinking Questions

  • Include at least 3 questions per chapter.
  • Can serve as short or long essay prompts for in-class or at-home assessments.
  • Designed to encourage deeper engagement with the material.

🔗 Suggestions for Further Study

  • Includes links to websites, journal articles, and books related to the chapter topic.
  • The goal is to build a robust repository of resources that can be explored by students and faculty.
  • While the authors make an effort to list OER or other open-access content, some resources may not be freely available.
  • As the textbook expands, this section will grow as well.

🗺️ Recommended use and flexibility

🗺️ Sequential vs. selective use

  • Recommended: chapters are followed in order for the most coherent use.
  • Flexibility: the authors recognize and encourage that some faculty will want to assign specific chapters to complement an existing textbook adoption.
  • Example: an instructor using another primary textbook might assign Chapter 5 (Conceptualization, operationalization and measurement) to supplement a unit on measurement.

🔄 Feedback and iteration

  • The textbook is expected to evolve after adoption and use.
  • Feedback from faculty and students will help the authors refine the content of each chapter and the ordering of the materials.
  • This reflects the Open Education Resource model: continuous improvement based on user experience.
  • Don't confuse: the current version is complete and usable, but the authors anticipate future iterations will incorporate user suggestions.

Section 1.4: Analyzing Journal Articles

🧭 Overview

🧠 One-sentence thesis

Analyzing journal articles is a learnable skill that requires identifying twelve key parts—from title and puzzle to theory, hypotheses, and contribution—to understand how scholars communicate research and build disciplinary knowledge.

📌 Key points (3–5)

  • What journal articles are: peer-reviewed publications through which scholars communicate ideas, theories, empirical analyses, and conclusions.
  • The twelve-part framework: title, main point, question, puzzle, debate, theory, hypotheses, research design, empirical analysis, policy implications, contribution, and future research.
  • Peer-review process: manuscripts are submitted to an editor, who forwards them to 2–4 reviewers for evaluation (or desk-rejects them); reviewers recommend acceptance, revision, or rejection.
  • Common confusion—debates: debates can be normative ("what should be") vs. positive ("what is"), and positive debates can occur at conceptual, operational, or measurement levels.
  • Why it matters: critically reading journal articles is essential for university students, especially those considering graduate school, and helps scholars build on existing knowledge.

📚 Understanding journal articles and peer review

📰 What journal articles are

Journal articles are peer-reviewed publications that help scholars communicate ideas, theories, empirical analyses, and conclusions.

  • They are contained in journals typically owned by publishing companies.
  • Example: Cambridge University Press partners with the American Political Science Association (APSA) to publish journals like American Political Science Review, Perspectives on Politics, and PS: Political Science and Politics; APSA also partners with Taylor and Francis to publish the Journal of Political Science Education.
  • The excerpt emphasizes that every discipline—political science, anthropology, criminal justice, nursing, economics, biology, engineering—relies on knowledge debated, disseminated, and created in journal articles.

🔍 The peer-review process

  • A scholar submits a manuscript to a journal editor.
  • The editor decides whether to forward the manuscript to 2–4 other scholars for review or not.
  • Desk rejection: when an editor decides not to forward a manuscript.
  • Reviewers read the manuscript, comment on it, and suggest whether it should be accepted, revised and resubmitted, or rejected.
  • Manuscripts accepted for publication become journal articles.
  • Don't confuse: not all manuscripts become journal articles; only those that pass peer review are published.

🎓 Why this skill matters

  • The ability to critically read journal articles is developed with practice.
  • It is especially useful for university students.
  • It is an essential skill for those contemplating graduate school (Masters, professional, or Doctoral degrees).
  • The excerpt frames this as "standing on the shoulders of those who came before"—understanding and building upon research questions, data, and analysis generated by others.

🧩 The twelve-part framework for analysis

🧩 Overview of the framework

Journal Article Analysis consists of reading journal articles and analyzing them by identifying twelve parts: title, main point, question, puzzle, debate, theory, hypotheses, research design, empirical analysis and methods, policy implications, contribution to the discipline, and future research.

  • Journal articles vary in organization and inclusion of these twelve parts.
  • Some articles explicitly describe all or most parts; others may not state a part or may omit it entirely.
  • The excerpt notes diversity in article authors, writing styles, and approaches; this framework is one of multiple frameworks for analyzing political science research.

📝 Title, main point, and question (Parts 1–3)

Part | Where to find it | What it is
Title | First page | Brief (5–10 words); identifies the subject; may include the primary independent variable, dependent variable, or question
Main Point | Abstract (first page, after title) or Introduction | Summary of the article; derived after the research is completed, not where political scientists start
Question | Abstract or Introduction | The article's primary question; an article can have more than one question; keeping a list helps identify the primary vs. secondary questions
  • Don't confuse: the main point is presented at the beginning but is actually a result of the research process, not the starting point.

🧩 Puzzle, debate, and theory (Parts 4–6)

🧩 The puzzle (Part 4)

The Puzzle is a missing piece of knowledge that the article seeks to fill.

  • Puzzles are what political scientists try to solve.
  • To solve a puzzle, a political scientist needs:
    1. A sense of what the whole puzzle looks like (the "puzzle box image").
    2. Knowledge of how current pieces fit together (the partially complete puzzle).
    3. A decision about which pieces to add next.
  • Example: imagine a partially complete jigsaw puzzle; the researcher examines how existing pieces connect, then decides which new pieces to place.

🗣️ The debate (Part 5)

The Debate is how scholars currently argue the subject of the article.

  • Debates have at least two sides (familiar as "pro" and "con"), but can be more complex.
  • Normative vs. positive debates:
    • Normative debates: focus on "what should be"; typical in the practice of politics (e.g., U.S. House of Representatives members debating policy using philosophical and logical arguments).
    • Positive debates: focus on "what is"; most debates in political science are positive.
  • Three levels of positive debates:
    • Conceptual: scholars argue about broad concepts (e.g., democracy, representation, power).
    • Operational: scholars argue how broad concepts are represented in the real world (e.g., is the U.S. a representative democracy?).
    • Measurement: scholars argue how an operationalized concept is measured (e.g., how do we measure representative democracy—winner-take-all or proportional representation?).
  • Don't confuse: normative debates (values, "should") with positive debates (facts, "is"); also distinguish the three levels of positive debates.

🧠 The theory (Part 6)

The Theory is how the author thinks something works.

  • Theories consist of constants, variables, and the relationships between variables.
  • Constants: objects that do not change; stating constants simplifies the complex world by "holding things constant" so researchers can focus on variables and their relationships.
  • Variables: objects that do change; typically classified into three categories:
    • Independent variable: the object that "causes" something to happen.
    • Mediating variable: the object that "helps cause" something to happen.
    • Dependent variable: the object that is the "effect" of the "cause" and/or "helping cause."
  • Example: your interpretation of the President (dependent variable) may be caused by an action the President took (independent variable), but your view of the action is mediated by your partisan affiliation (mediating variable).
  • Theory is used to clearly explain the logic of constants, variables, and relationships.

🔬 Hypotheses, research design, and empirical analysis (Parts 7–9)

🔬 Hypotheses (Part 7)

A hypothesis is the expectation that one variable affects another variable in a specific way.

  • Hypotheses are derived from the theory.
  • Example (building on the President theory above; a minimal coded restatement follows this list):
    • Hypothesis 1: If the President takes no action, then you will have no interpretation of the President.
    • Hypothesis 2: If the President acts, then you will have a positive view of the President if you share the President's partisan affiliation.
    • Hypothesis 3: If the President acts, then you will have a negative view of the President if your partisan affiliation differs from the President's.
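
The three hypotheses can be restated as a small decision rule. The sketch below is purely illustrative and is not from the textbook; the function name and the return labels are invented for the example.

```python
def expected_view(president_acted: bool, same_party: bool) -> str:
    """Return the view of the President predicted by Hypotheses 1-3."""
    if not president_acted:
        return "no interpretation"  # Hypothesis 1: no action, no interpretation
    if same_party:
        return "positive"           # Hypothesis 2: action + shared party -> positive view
    return "negative"               # Hypothesis 3: action + different party -> negative view

# Example: a co-partisan observer reacting to a presidential action
print(expected_view(president_acted=True, same_party=True))  # -> "positive"
```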

🔬 Research design (Part 8)

The Research Design is how the author compares the effect of the explanatory variable (X) on the outcome variable (O) in a group (G) or set of groups.

  • Some political scientists use notation to denote research design.
  • Common examples (a minimal simulation of Example 5 follows the list):
    • Example 1: G O (single group, observation only).
    • Example 2: G X O (single group, treatment then observation).
    • Example 3: G O X O (single group, observation before treatment, the treatment, then observation after treatment).
    • Example 4: G X O and G _ O (two-group design; Group 1 receives treatment then observed; Group 2 does not receive treatment then observed).
    • Example 5: G O X O and G O _ O (two-group design; both groups observed, then Group 1 receives treatment while Group 2 does not, then both observed again).
    • Example 6: G O X O _ O and G O _ O X O (two-group "switching replications" design; both groups observed, Group 1 receives treatment while Group 2 does not, both observed, then Group 1 does not re-receive treatment while Group 2 receives treatment for the first time, both observed again).
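
The notation can be made concrete with a short simulation. The sketch below is a minimal, assumed illustration of Example 5 (G O X O and G O _ O) with made-up numbers; the difference-in-differences calculation shown is one common way to estimate a treatment effect in this design, not a method prescribed by the textbook.

```python
import random
from statistics import mean

random.seed(1)

# Simulate Example 5: both groups are observed (pretest), Group 1 receives the
# treatment X, Group 2 does not, and then both groups are observed again (posttest).
group1_pre = [random.gauss(50, 5) for _ in range(100)]   # G O ...
group2_pre = [random.gauss(50, 5) for _ in range(100)]   # G O ...

TREATMENT_EFFECT = 3.0  # assumed "true" effect of X, used only to generate fake data
group1_post = [x + TREATMENT_EFFECT + random.gauss(0, 1) for x in group1_pre]  # ... X O
group2_post = [x + random.gauss(0, 1) for x in group2_pre]                     # ... _ O

# Difference-in-differences: the change in the treated group minus the change in
# the untreated group isolates the effect of the treatment X.
estimate = (mean(group1_post) - mean(group1_pre)) - (mean(group2_post) - mean(group2_pre))
print(f"Estimated effect of X: {estimate:.2f}")  # should land near 3.0
```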

🔬 Empirical analysis (Part 9)

The Empirical Analysis is the use of quantitative or qualitative evidence to explore whether the hypothesized relationship between two variables does indeed occur in the world.

  • Quantitative evidence:
    • Numerical data, often organized via spreadsheets.
    • Political scientists conduct statistical analysis using statistical models.
    • Can be visualized (e.g., scatter plots) to help observe trends; a minimal quantitative sketch follows this list.
  • Qualitative evidence:
    • An individual item or a collection of text, images, and audio in paper or electronic documents.
    • Political scientists conduct content analysis or interpretation using theoretical or non-theoretical frameworks.
    • Example: Congressional Record Statements can be organized into categories to see if there is a noticeable pattern.
  • Both types can be analyzed in the context of a theoretical framework or to uncover descriptive trends.
  • Don't confuse: quantitative (numerical, statistical models) with qualitative (text/images/audio, content analysis/interpretation).
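
As a concrete illustration of the quantitative side, the sketch below computes a correlation between two made-up variables; the variable names and values are hypothetical and are not drawn from the excerpt.

```python
# Minimal quantitative analysis: Pearson correlation between two made-up variables,
# e.g., hours of campaign advertising seen (X) and reported candidate approval (Y).
from statistics import mean, stdev

ads_seen = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]            # hypothetical independent variable
approval = [41, 43, 42, 46, 48, 47, 51, 53, 52, 56]  # hypothetical dependent variable

def pearson_r(x, y):
    """Correlation coefficient between two equal-length lists of numbers."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"r = {pearson_r(ads_seen, approval):.2f}")  # a positive r suggests the hypothesized trend
```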

🎯 Policy implications, contribution, and future research (Parts 10–12)

🎯 Policy implications (Part 10)

The Policy Implications are how the findings of the article should influence the behavior of individuals, groups, organizations, or governments.

  • Typically stated by the political scientist towards the end of an article.
  • The researcher is predicting how their article and its findings would influence behavior.

🎯 Contribution to the discipline (Part 11)

The Contribution to the Discipline is how the article helps fill the missing Puzzle piece.

  • A statement of how the political scientist's research helps add a puzzle piece that was missing from our current world of knowledge.

🎯 Future research (Part 12)

Future Research offers suggestions for future research that build on the findings from the article.

  • Suggestions for what another political scientist can do to help build on the new knowledge that has been uncovered.

Section 1.5: Research Paper Project Management

🧭 Overview

🧠 One-sentence thesis

Writing a research paper should be approached as project management—breaking a large project into smaller, sequenced, manageable tasks—with the literature review serving as the crucial bridge between analyzing existing knowledge and contributing new insights.

📌 Key points (3–5)

  • Project management mindset: A research paper is a "big" project that must be disaggregated into specific, measurable, attainable, relevant, and timely smaller tasks.
  • Structure mirrors analysis: A research paper consists of introduction (title, main point, question, preview), body (puzzle, debate, theory, hypothesis, research design, empirical analysis), and conclusion (policy implications, contribution, future research).
  • Literature review is the key difference: Unlike analyzing a journal article (where you look for outputs like puzzle/debate/theory), writing a paper requires conducting a literature review—reading and analyzing 10–100 journal articles and books.
  • Common confusion: The process is nonlinear—you can jump between sections (e.g., from Literature Review to Policy Implications back to Empirical Analysis) rather than following a strict sequence.
  • Why it matters: First- and second-year students have unique lived experiences and perspectives that should permeate the discipline, making publication-quality work achievable early in one's academic career.

🎯 The project management approach

🎯 What project management means for research papers

Project management: taking a "big" project, organizing it into "smaller" projects, sequencing the smaller projects, completing the smaller projects, and then bringing all the smaller projects together to demonstrate completion of the "big" project.

  • You already have project management experience—planning a birthday party, organizing a family dinner, or writing a high school research paper are all examples.
  • The result of managing these projects was a "great time," "delicious dinner," or "excellent work."
  • Don't underestimate your ability to successfully manage a complex project.
  • In the real world, this is a valuable ability and skill to have.

📋 Workflows as templates

  • Workflows serve as a template for disaggregating a large project (writing a Research Paper) into specific tasks.
  • Tasks should be: specific, measurable, attainable, relevant, and timely.
  • Example timeline: The excerpt provides an 8-week segmented timeline for preparing a research paper, breaking it into constituent parts.

📚 The literature review process

📚 What a literature review is

Literature review: a process of collecting, reading, and synthesizing journal articles, books, and other scholarly materials related to your research topic.

  • Reading and analyzing anywhere from 10 to 100 journal articles and books related to your research paper topic.
  • This sounds like a lot, but you need to absorb existing knowledge to contribute new knowledge.
  • The literature review produces the outputs you look for when analyzing a journal article: puzzle, debate, and theory.

🚧 Why it's an obstacle

  • The sheer amount of reading required to understand a topic can be daunting.
  • Challenges may include learning disabilities, deficit disorders, or lack of access to articles and books.
  • The key is not to get caught up in what you cannot do or have trouble doing, but rather to focus on what you can accomplish.

🔍 How to conduct a literature review

🔍 Step 1: Select a topic you care about

  • Research something from your personal experiences, what you observed in your family and community, or what you think society is grappling with.
  • The world is complicated, so there is a lot to explore—choose something you care about.

🔍 Step 2: Search for information

  • Visit your campus library—it serves as a repository of information and knowledge.
  • Talk with a librarian—they are trained professionals who understand the science of information: what it is, how it's organized, and how we give it meaning.
  • Meet with your professor.
  • Visit reputable information sources online.

🔍 Step 3: Formulate a research question

  • Difference between a question and a research question: A research question typically starts with "why."
  • A "why" question suggests that there are two things (variables) that interact in a way that is perplexing and intriguing to you.
  • Example: "Why do some politicians tweet profusely, and other politicians don't even have a Twitter account?" or "What causes a politician to utilize social media?"
  • Don't confuse: A general question starts with who, what, when, where, why, or how; a research question typically starts with why and implies variable interaction.

🔍 Step 4: Consult reputable sources

  • In political science: university presses, journals of national and regional associations, and major news outlets all serve as reputable sources.
  • Consult with your professor about what are reputable books, journals, and news sources.

🔎 Using Google Scholar

🔎 What Google Scholar does

Google Scholar: a search engine that limits results to academic articles and books.

  • Unlike the Google search engine (which provides results from all over the World Wide Web), Google Scholar narrows results to help you cut through the noise on the Internet.
  • Access it at https://scholar.google.com/

🔎 How to shorten your reading list

  • One way to shorten the list is to see how many times something has been cited (a minimal sketch follows this list).
  • Example: If an article titled "What the hashtag?" has been cited 460 times, it should be on your reading list.
  • If an article or book has been cited hundreds or thousands of times, it means many people are focused on the topic, findings, or argument that work represents.
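
The shortlisting idea can be sketched as a simple sort by citation count. Everything below is hypothetical except the "What the hashtag?" count of 460, which comes from the excerpt; the cutoff value is an arbitrary assumption.

```python
# Shortlist candidate readings by citation count (a rough signal of influence).
candidates = [
    {"title": "What the hashtag?", "citations": 460},    # count mentioned in the excerpt
    {"title": "Hypothetical article A", "citations": 12},
    {"title": "Hypothetical article B", "citations": 875},
]

MIN_CITATIONS = 100  # arbitrary cutoff for a first pass
shortlist = sorted(
    (c for c in candidates if c["citations"] >= MIN_CITATIONS),
    key=lambda c: c["citations"],
    reverse=True,
)
for item in shortlist:
    print(f'{item["citations"]:>4}  {item["title"]}')
```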

🏗️ Structure of a research paper

🏗️ Three main parts

Part | Components
Introduction | Title, main point, question, preview of the body
Body | Puzzle, debate, theory, hypothesis, research design, empirical analysis
Conclusion | Policy implications, contribution to the discipline, future research

🏗️ Visual organization

  • The excerpt references "Figure 1-6: Visualization of research paper parts" showing how these components fit together.
  • The process of writing a political science research paper closely follows the process of analyzing a journal article.

🔄 The nonlinear writing process

🔄 What nonlinear means

  • Writing a political science research paper is a generally nonlinear process.
  • You can go from conducting a Literature Review, jump to Policy Implications, and then update your Empirical Analysis to account for new information you read.
  • The suggested process is not meant to be "the" process, but rather one of many creative processes that adapt to your way of thinking, working, and being successful.

🔄 Balancing creativity and order

  • While recognizing your creativity, it is important to give order to the process.
  • When taking a 10-week or 16-week long course, you need to take a big project and break it up into smaller projects.
  • Example: The excerpt proposes an 8-week timeline for segmenting a research paper into its constituent parts.

🎓 Why students can write publication-quality work

🎓 Challenging the tradition

  • Writing a publication-quality research paper is typically reserved for faculty who already hold a doctoral degree or advanced graduate students.
  • However, the idea that a first- or second-year student is not capable is a tradition in need of change.

🎓 Unique student perspectives

  • Students, especially those enrolled at community colleges, have a wealth of lived experiences and unique perspectives.
  • In many ways, these perspectives are not permeating throughout the current ranks of graduate students and faculty.
  • This makes the goal of preparing students to write a well-developed research paper for journal peer review both ambitious and necessary.

Section 2.1: Brief History of the Empirical Study of Politics

🧭 Overview

🧠 One-sentence thesis

The empirical study of politics shifted from institution-focused analysis (often mixed with moral arguments) to behavior-focused research after World War II, aiming for objectivity and importing methods from economics and psychology to explain political phenomena through generalizable theories.

📌 Key points (3–5)

  • What empirical study means: research seeking patterns and explanations for general political phenomena and specific cases (e.g., voter behavior, foreign policy).
  • The major shift: pre-WWII study centered on institutions with praise or criticism; post-WWII "behavioral revolution" focused on individual behavior and increased methodological rigor.
  • Goal change: from providing evidence with moral arguments to being "objective" and "concerned with ascertaining the facts needed to solve political problems."
  • Common confusion: empirical analysis existed before WWII, but it was not the dominant approach—most inquiry was institution-centered and normative.
  • Why it matters: the shift led to importing research methods from other disciplines, explosion of methodology research, and development of generalizable social theories.

🔄 The pre-WWII approach vs. the behavioral revolution

📜 What came before WWII

  • Most inquiry centered on the study of institutions (e.g., parliamentary democracy, military strategy, political-economic systems).
  • These studies were often accompanied with praise or criticism—normative judgments were mixed with analysis.
  • Important thinkers wrote about how institutions structure political, economic, and social interaction; their ideas still influence both normative and positivist political science today.
  • Don't confuse: empirical analysis did occur before WWII, but it was not the main focus and was often intertwined with moral arguments.

🌊 The behavioral revolution (post-WWII)

The behavioral revolution: the major shift to studying the behavior of individuals themselves, with a commensurate increase in methods.

  • This shift "indelibly changed the field."
  • Scholars aimed to become more "objective" or less normative in studying human behavior.
  • Goal transformation: no longer to provide evidence with moral arguments, but to "ascertain the facts needed to solve political problems."
  • Example: instead of arguing how institutions should work, researchers would study how voters actually behave or how parties actually form.

🧪 New methods and interdisciplinary imports

🔬 Formal theory and empirical foundation

  • Political scientists began using facts as their empirical foundation or assumptions to develop social theories.
  • These theories are generalizable to other areas of study—not just describing one case, but explaining patterns across cases.

🔀 Importing from other disciplines

The behavioral revolution led to borrowing research methods from:

Discipline | What was imported | Example applications
Economics | Discussions on tradeoffs, alliances, rationality | Voting behavior, party formation
Psychology & Sociology | Media cues, opinion formation, effect of societal prejudices (e.g., racial attitudes) | How voters form opinions, how prejudice affects political behavior
  • This importation set off an explosion of research into methodology and its application to political questions.
  • Institutions were no longer the focus—the spotlight moved to individual and group behavior.

🎯 What empirical study aims to do

🎯 Explaining political phenomena

Empirical study: research that seeks patterns and explanations for general phenomena and specific cases.

  • In political science, this means attempts to explain various political phenomena.
  • Examples given in the excerpt:
    • Understanding the behavior of voters
    • Understanding the foreign policy of a country
  • The emphasis is on patterns (what happens repeatedly) and explanations (why it happens).

🧭 Tracing the roots

  • The discipline of political science often traces the empirical study of politics to the behavioral revolution of the post-World War II era.
  • This does not mean no empirical work happened before, but that the behavioral approach became the dominant paradigm.
  • Don't confuse: "tracing roots to WWII" means the dominant shift happened then, not that earlier scholars did no empirical work at all.

Section 2.2: The Institutional Wave

🧭 Overview

🧠 One-sentence thesis

Institutionalism, the traditional study of political institutions, declined during the behavioral revolution but has returned as neoinstitutionalism with renewed focus on the state's role in society and economy.

📌 Key points (3–5)

  • What institutionalism studies: political institutions in a society and how they function.
  • Why institutions are hard to change: they reflect bargains between actors that determine society's rules, making reform or replacement difficult.
  • Historical trajectory: institutionalism ebbed during the behavioral revolution's heyday but has experienced a revival.
  • Neoinstitutionalism: the modern return to institutional study, emphasizing the role of the state in society and the economy.
  • Common confusion: institutionalism vs. neoinstitutionalism—the latter is not simply a return to old methods but brings new focus on state roles.

🏛️ What institutionalism studies

🏛️ Definition and focus

Institutionalism: the study of political institutions in a society.

  • Institutions are the formal and informal structures that organize political life.
  • The traditional wave of methodology in political science centered on these institutions.
  • This approach examines how institutions shape political outcomes and behaviors.

🤝 Why institutions reflect bargains

  • Institutions embody the bargains made between actors in each society.
  • These bargains determine how the rules of society should look.
  • Example: An organization's governing structure reflects agreements among its founding members about power distribution and decision-making processes.

🔒 Why institutions resist change

🔒 Difficulty of reform

  • Institutions are difficult to reform, replace, or dismantle.
  • This resistance stems from their nature as codified bargains between actors.
  • Changing an institution means renegotiating fundamental agreements about society's rules.

⚖️ Stability vs. flexibility

  • The bargain-based nature creates stability but also rigidity.
  • Actors who benefited from the original bargain may resist changes that threaten their position.
  • Don't confuse: institutional stability is not the same as institutional effectiveness—stable institutions may still be inefficient or outdated.

📉 The ebb and return

📉 Decline during behavioralism

  • Institutionalism ebbed during the heyday of the behavioral revolution.
  • The behavioral wave (1950s onward) shifted focus away from institutions toward individual political behavior.
  • This represented a major methodological shift in political science.

🔄 The revival as neoinstitutionalism

  • The desire to bring institutions back led to the development of neoinstitutionalism.
  • This is not simply a return to old institutionalism but a new approach.
  • Neoinstitutionalism focuses specifically on the role of the state in society and the economy.

🆚 Old vs. new institutionalism

🆚 Key distinction

Aspect | Traditional Institutionalism | Neoinstitutionalism
Period | Traditional wave (pre-1950s) | Revival (post-behavioral revolution)
Status during behavioral revolution | Ebbed/declined | Did not yet exist
Focus | General study of institutions | Role of the state in society and economy
Context | Original methodological approach | Response to behavioral dominance

🎯 What neoinstitutionalism emphasizes

  • The role of the state in society: how state institutions shape social outcomes.
  • The role of the state in the economy: how state institutions influence economic activity and organization.
  • This represents a more focused lens than traditional institutionalism's broader approach.

Section 2.3: The Behavioral Wave

🧭 Overview

🧠 One-sentence thesis

Behavioralism shifted political science toward studying political behavior through surveys and statistics, with the Chicago School driving a quantitative emphasis that has shaped the discipline's methodological expectations while prompting some scholars to call for renewed attention to normative questions.

📌 Key points (3–5)

  • What behavioralism is: the study of political behavior emphasizing surveys and statistics.
  • The Chicago School's influence: Charles Merriam at the University of Chicago had an outsized impact, establishing a quantitative methodology focus.
  • Methodological expectations today: many incoming scholars are now expected to understand statistical techniques for research.
  • Common tension: the Chicago School's quantitative emphasis often came "at the expense of normative questions."
  • Current response: some scholars are working to bring back normative discussion in reaction to the dominance of quantitative methods.

📚 What behavioralism means

📚 Definition and focus

Behavioralism: the study of political behavior, emphasizing the use of surveys and statistics.

  • This approach centers on political behavior—what people actually do politically (e.g., voting patterns mentioned in the broader context).
  • The methodological tools are surveys (collecting data from people) and statistics (analyzing patterns in that data).
  • It represents a shift from studying institutions (structures, rules) to studying human actions and choices.

🏛️ The Chicago School's role

🏛️ Charles Merriam's influence

  • Charles Merriam was a professor at the University of Chicago from 1900 to 1940.
  • He started what became known as the Chicago School, which focused on studying political behavior using surveys and statistics.
  • The excerpt notes Merriam had an "outsized influence on behavioralism"—his work and institutional position shaped the entire movement.

📊 Quantitative emphasis and trade-offs

  • The Chicago School "strongly influenced political science, through its emphasis on quantitative methodology."
  • This emphasis often came "at the expense of normative questions."
    • Normative questions concern what should or ought to be (values, ideals, personal judgments).
    • The focus on measurement and statistics meant less attention to questions of values or ideals.
  • Don't confuse: behavioralism is not anti-normative by definition, but the Chicago School's version prioritized quantitative methods, which left less room for normative inquiry.

🎓 Impact on the discipline today

🎓 Expectations for new scholars

  • "Many incoming scholars are expected to understand statistical techniques for use in their research."
  • This reflects the lasting influence of the behavioral wave: quantitative skills have become a baseline expectation in political science training.
  • Example: a new political scientist is now typically required to know how to design surveys, collect data, and perform statistical analysis, regardless of their specific research topic.

🔄 The normative pushback

  • "In response, some scholars are looking to bring back the normative discussion."
  • This is a reaction to the dominance of quantitative methods and the marginalization of normative questions.
  • The excerpt suggests an ongoing tension: the behavioral wave's success created a methodological imbalance, and scholars are now working to restore attention to questions of values and ideals.

🔍 Distinguishing behavioralism from institutionalism

Aspect | Institutionalism (Section 2.2) | Behavioralism (Section 2.3)
Focus | Political institutions (structures, rules) | Political behavior (human actions, choices)
Methods | Study of institutional design, bargains, state role | Surveys and statistics
Historical timing | Traditional wave; ebbed during behavioral revolution | Rose in the 1950s; became dominant
Current status | Revived as neoinstitutionalism | Established quantitative expectations; facing normative critique
  • Don't confuse the two waves as mutually exclusive: the excerpt (from Section 2.2 summary) notes that institutionalism "ebbed during the heyday of the behavioral revolution," meaning behavioralism displaced but did not eliminate institutional study.
  • The behavioral wave was a shift in focus and method, not a complete replacement of all prior approaches.

Section 2.4: Currents: Qualitative versus Quantitative

🧭 Overview

🧠 One-sentence thesis

Political science is divided into two major methodological currents—qualitative and quantitative—each using different techniques to collect and analyze data, a division that has created ongoing debate about their relative value since the behavioral revolution.

📌 Key points (3–5)

  • What methods are: the steps and techniques social scientists use to collect, construct, and consider data during research.
  • Two major currents: qualitative methods solve political science puzzles without mathematical analysis; quantitative methods rely on mathematical analysis or measurement.
  • Common confusion: the difference is not just "numbers vs. no numbers"—it's about whether mathematical analysis is the primary tool for solving the research puzzle.
  • Historical wedge: the behavioral revolution created a divide among political scientists, leading to strong ongoing debate about the value of each approach.
  • Mixed methods: some scholars use both qualitative and quantitative approaches together.

🔬 What research methods are

🔬 Definition and purpose

Methods: the steps taken by social scientists during their research; the techniques used to collect, construct, and consider data.

  • Methods are not the research question itself—they are the how of research.
  • They encompass the entire process: gathering information, organizing it, and analyzing it.
  • Both currents share the goal of solving puzzles in political science, but differ in their approach.

🌊 The two major currents

🗣️ Qualitative methods

Qualitative methods: methods that solve puzzles in political science without relying on mathematical analysis.

  • Typically involve interviews, archival research, and ethnographies to understand politics.
  • Focus on depth, context, and interpretation rather than numerical measurement.
  • Example: A researcher might conduct interviews with political actors to understand decision-making processes without converting responses into statistical data.

📊 Quantitative methods

Quantitative methods: methods that rely on mathematical analysis or measurement.

  • Generally use mathematical models and statistics to measure relationships between two variables.
  • Emphasize surveys, statistics, and numerical data.
  • Example: A researcher might analyze voting patterns using statistical techniques to identify correlations between demographic factors and electoral outcomes.

🔀 Mixed methods

Mixed methods: the use of both quantitative and qualitative methods of analysis.

  • Some scholars combine both approaches to gain different perspectives on the same research question.
  • Allows researchers to leverage the strengths of each current.

🧩 Key distinctions

🧩 How to tell them apart

Aspect | Qualitative | Quantitative
Core approach | Without mathematical analysis | With mathematical analysis or measurement
Typical techniques | Interviews, archival research, ethnographies | Surveys, statistics, mathematical models
What is measured | Context, meaning, processes | Relationships between variables, patterns

Don't confuse: The distinction is not simply "words vs. numbers"—it's whether mathematical analysis is the primary tool for solving the research puzzle. Qualitative research may involve some counting, but it doesn't rely on mathematical analysis as its core method.

⚔️ The methodological divide

⚔️ Origins of the wedge

  • The behavioral revolution (discussed in Section 2.3) created a division among political scientists.
  • The Chicago School's emphasis on quantitative methodology, often at the expense of normative questions, intensified this split.
  • Many incoming scholars are now expected to understand statistical techniques for their research.

💬 Ongoing debate

  • The division has led to "strong back and forth discourse on the value" of each approach.
  • Some scholars are looking to bring back normative discussion in response to the dominance of quantitative methods.
  • The debate reflects deeper questions about what counts as valid knowledge in political science.

🔮 Emerging trends

🤖 Big data and machine learning

Big data: the mountain of information, in the form of petabytes and exabytes, that is being stored on computers and servers around the world.

Machine learning: the ability of a computer program to start with an initial model, analyze actual data, learn from this analysis, and automatically update the initial model to incorporate the findings from its analysis.

  • Machine learning operates iteratively, allowing software to uncover categories, patterns, and meanings through repeated cycles (a minimal sketch of this loop follows this list).
  • The next generation of political scientists will be leading efforts to utilize big data and machine learning to explain political behaviors, institutions, and processes.
  • The full implications for political science are still unknown—it's an exciting time for the field.
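
The iterative loop described above can be sketched with a one-parameter model that is repeatedly nudged toward the data (a simple form of stochastic gradient descent). The data and learning rate below are assumptions made for illustration only.

```python
# Iterative learning: start with an initial model (slope = 0.0), look at the data,
# and repeatedly nudge the model to reduce its prediction error.
data = [(x, 2.0 * x) for x in range(1, 11)]  # made-up data whose true slope is 2.0

slope = 0.0          # initial model
learning_rate = 0.01

for _ in range(200):                          # repeated analysis cycles
    for x, y in data:
        error = slope * x - y                 # how wrong the current model is on this point
        slope -= learning_rate * error * x    # update the model using this observation

print(f"Learned slope: {slope:.2f}")  # converges toward 2.0
```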

🔄 Experiments in political science

Experiments: laboratory studies in which researchers recruit subjects, randomly assign subjects to a treatment or control condition, and then determine the effect of the treatment on the subjects.

  • Experiments represent another methodological tool available to political scientists.
  • They allow researchers to test causal relationships in controlled settings.

Section 2.5: Currents: Normative and Positive Views

🧭 Overview

🧠 One-sentence thesis

The distinction between normative ("what should be") and positive ("what is") views is essential for discerning fact from opinion in political science and engaging critically with political information.

📌 Key points (3–5)

  • Normative questions ask "what should be?" and involve arguments about how resources or policies ought to be allocated.
  • Positive questions ask "what is?" and describe objective reality without arguing for changes.
  • Common confusion: positive views describe and measure current reality (e.g., spending percentages), while normative views argue for different allocations or priorities.
  • Why it matters: the ability to distinguish fact from opinion is essential when reading research, news, or editorials; opinions should not replace objective reality.

🔍 Two fundamental perspectives

🔍 Normative view: "what should be?"

Normative questions ask "what should be?"—they involve arguments about how things ought to be.

  • Normative views make claims about desired outcomes or better allocations.
  • They argue for changes or different priorities.
  • Example: If a government spends 100% domestically, a normative view would argue that less than 100% should be spent domestically and more than 0% should go to foreign priorities.
  • When spending shifts to 90% domestic and 10% foreign, a normative view would further argue how the 10% should be divided (e.g., some for membership dues, some for foreign aid).

🔍 Positive view: "what is?"

Positive questions ask "what is?"—they describe objective reality without arguing for changes.

  • Positive views state facts and measurements about current conditions.
  • They do not argue that allocations should be different.
  • Example: If a government spends 100% domestically, a positive view simply states that fact. It might also describe the breakdown (e.g., 75% for infrastructure, 25% for salaries).
  • A positive view would not argue that the 100% domestic allocation is wrong or that the infrastructure/salary split should change.

📊 Comparing the two views

Aspect | Positive view | Normative view
Core question | "What is?" | "What should be?"
Function | Describes current reality | Argues for changes or priorities
Example (100% domestic spending) | States: "Government spends 100% domestically, 0% foreign" | Argues: "Less than 100% should be domestic; more than 0% should be foreign"
Example (spending breakdown) | States: "75% infrastructure, 25% salaries" | Does not make such statements; focuses on what ought to happen
Stance on change | Does not advocate for change | Advocates for different allocations

🖼️ Visual comparison

The excerpt provides a figure showing:

  • Status Quo: D: 100%, F: 0%
  • Positive: D: 100%, F: 0% (describes the status quo)
  • Normative: D: 90%, F: 10% (proposes a change)

🎯 Why the distinction matters

🎯 Discerning fact from opinion

  • When reading a journal article, book, or news article, we generally expect a focus on what is (positive), not what should be (normative).
  • When reading a newspaper editorial or watching a television program, we expect to see and hear opinions and speculations (normative).
  • The ability to discern between fact and opinion is essential to engaging in political science.

⚠️ Don't confuse opinion with fact

  • Politics is full of opinions from individuals, organizations, and leaders.
  • However, an opinion shouldn't stand for fact and should not replace objective reality.
  • The ability to acknowledge, identify, and categorize information helps us build our understanding of the world.

🧠 Building critical engagement

  • Recognizing whether a claim is normative or positive allows you to evaluate it appropriately.
  • Positive claims can be checked against data and evidence.
  • Normative claims should be evaluated based on values, reasoning, and arguments—not confused with empirical facts.

Section 2.6: Emerging Wave: Experimental Political Science

🧭 Overview

🧠 One-sentence thesis

Experimental political science is a growing methodological approach that uses random or quasi-random assignment to isolate precise cause-and-effect relationships between treatments and political outcomes.

📌 Key points (3–5)

  • What it is: a research method centered on using assignment techniques to explore causal relationships between a treatment and an outcome of interest.
  • Two main settings: laboratory settings (with random assignment) and other settings (with quasi-random assignment).
  • Why it matters: allows researchers to identify precise cause-and-effect relationships, not just correlations.
  • Common confusion: random assignment vs quasi-random assignment—true experiments use full randomization in labs; quasi-experiments approximate randomization in real-world settings.
  • Status in the discipline: experimental political science is an emerging wave, meaning it is growing but not yet dominant.

🔬 What experimental political science does

🎯 Core purpose: isolating causality

Experimental political science: centers on the researcher using random assignment in laboratory settings or quasi-random assignment in other settings, to explore precise cause-and-effect relationships between a treatment and outcome of interest.

  • The key goal is to establish causality, not just association.
  • By controlling who receives a treatment and who does not, researchers can isolate the effect of that treatment on the outcome.
  • Example: A researcher randomly assigns participants to receive different campaign messages (treatment) and measures changes in voting intention (outcome); the random assignment ensures that any difference in outcome is due to the message, not other factors.

🧪 The role of assignment

  • Random assignment means every participant has an equal chance of receiving the treatment; this eliminates bias from confounding variables.
  • Quasi-random assignment approximates randomization when full control is not possible (e.g., in field settings or natural experiments).
  • Don't confuse: random assignment is not the same as random sampling; assignment controls who gets the treatment, while sampling controls who is in the study.
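
A minimal Python sketch can make the distinction above concrete; the participant pool, group sizes, and the campaign-message framing are hypothetical choices, not examples from the excerpt.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical pool of potential participants (not data from the excerpt).
population = [f"person_{i}" for i in range(1000)]

# Random SAMPLING decides who is in the study at all.
sample = random.sample(population, k=100)

# Random ASSIGNMENT decides who, among those in the study, receives the treatment.
shuffled = sample[:]
random.shuffle(shuffled)
treatment_group = shuffled[:50]   # e.g., shown a campaign message
control_group = shuffled[50:]     # e.g., shown no message

print(len(treatment_group), len(control_group))  # 50 50
```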

🏛️ Settings for experimental research

🔬 Laboratory settings

  • Use random assignment to create controlled conditions.
  • Researchers have full control over the environment and can manipulate variables precisely.
  • Example: Participants in a lab are randomly shown different policy proposals and asked to rate their support; the lab setting ensures no outside interference.

🌍 Other settings (field and quasi-experiments)

  • Use quasi-random assignment when full randomization is impractical or unethical.
  • These settings are closer to real-world conditions but offer less control.
  • Example: A policy is rolled out to some regions but not others based on administrative rules (not researcher choice); researchers treat this as a quasi-experiment to study the policy's effect.

📈 Why experimental methods are growing

🚀 Emerging wave status

  • The excerpt describes experimental political science as an emerging wave, meaning it is gaining traction in the discipline.
  • It complements earlier waves (institutionalism, behavioralism) by adding a tool for causal inference.
  • The growth reflects a broader push in social science to move beyond correlation and toward causation.

🔍 Precision in causal claims

  • Experimental methods allow researchers to make precise cause-and-effect claims because they control for confounding factors.
  • This precision is harder to achieve with purely observational or qualitative methods.
  • Example: Instead of observing that voters who watch debates are more informed (which could be due to pre-existing interest), an experiment randomly assigns debate-watching and measures the causal effect on knowledge.

Section 2.7: Emerging Wave: Big Data and Machine Learning

🧭 Overview

🧠 One-sentence thesis

Big Data and machine learning represent an emerging methodological wave in political science, where vast amounts of data generated by political actors and institutions are analyzed through sophisticated computational techniques to identify patterns.

📌 Key points (3–5)

  • What Big Data is: the growing mountain of data being generated by political actors and institutions.
  • What machine learning is: increasingly sophisticated computational methods for sifting, sorting, and identifying patterns in large datasets.
  • Current status: these waves are just beginning to influence political science as an emerging methodological approach.
  • Common confusion: Big Data is the data itself (the raw material), while machine learning is the analytical technique (the tool for processing that material).

📊 Understanding Big Data

📊 What Big Data means in political science

Big Data: the growing mountain of data being generated by political actors and institutions.

  • This is not traditional small-scale survey or archival data; it refers to the massive volume of information now available.
  • The excerpt emphasizes "growing mountain," suggesting both the scale and the continuous expansion of available data.
  • Sources include political actors (individuals, groups) and institutions (governments, organizations, formal structures).

🔍 Why Big Data matters

  • The sheer volume creates new research opportunities that were not possible with traditional data collection methods.
  • Political scientists can now access information at scales that earlier waves (institutional, behavioral) could not achieve.
  • Example: An organization tracking millions of social media posts by political actors generates Big Data that can reveal patterns in political communication.

🤖 Understanding Machine Learning

🤖 What machine learning does

Machine learning: the increasingly sophisticated way of sifting, sorting, and identifying patterns in these mountains of data.

  • It is a computational technique specifically designed to handle the scale of Big Data.
  • The excerpt highlights three key functions:
    • Sifting: filtering through large volumes of data
    • Sorting: organizing data into meaningful categories
    • Identifying patterns: discovering relationships and regularities that might not be visible through traditional analysis

⚙️ How machine learning works

Based on the review questions, the excerpt provides this definition:

Machine learning is the ability of a computer program to start with an initial model, analyze actual data, learn from this analysis, and automatically update that initial model to incorporate the findings from its analysis.

  • The process is iterative and self-improving:
    1. Start with an initial model
    2. Analyze actual data
    3. Learn from the analysis
    4. Automatically update the model based on findings
  • The key word is "automatically"—the program improves without manual recoding for each new insight.
  • Example: A program starts with a basic model of voting patterns, analyzes millions of voter records, identifies new correlations, and updates its model to reflect these discoveries.
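
The four-step loop above can be sketched in a few lines of Python; the running-average "model" of turnout and the simulated data stream are illustrative assumptions, not the textbook's example.

```python
import random

random.seed(0)

# Start with an initial model: a naive guess about average turnout (hypothetical value).
model = {"expected_turnout": 0.50}

def update(model, observed_turnout, learning_rate=0.1):
    """Analyze one new observation and automatically fold it into the model."""
    error = observed_turnout - model["expected_turnout"]
    model["expected_turnout"] += learning_rate * error  # learn from the analysis
    return model

# A simulated stream standing in for a large file of voter records.
for _ in range(1000):
    observed = random.gauss(0.62, 0.05)
    model = update(model, observed)

print(round(model["expected_turnout"], 3))  # has drifted from 0.50 toward roughly 0.62
```

The point of the sketch is the "automatically" part: the model improves as data arrive, without anyone rewriting the program for each new insight.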

🔄 Relationship Between Big Data and Machine Learning

🔄 How they work together

Component | Role | Relationship
Big Data | The raw material | Provides the massive datasets that need analysis
Machine learning | The analytical tool | Provides the techniques to process and find patterns in Big Data

  • Big Data creates the need for machine learning—traditional statistical methods cannot handle the volume and complexity.
  • Machine learning makes Big Data useful—without sophisticated analytical tools, the "mountain of data" would be overwhelming and unusable.
  • Don't confuse: Big Data is not a method; it is the phenomenon of data abundance. Machine learning is the method used to analyze it.

🌊 Position in Political Science Methodology

🌊 Emerging wave status

  • The excerpt states these are "just beginning to influence political science."
  • This contrasts with earlier established waves:
    • Institutional wave (traditional, later revived as neoinstitutionalism)
    • Behavioral wave (1950s, now mainstream)
    • Experimental political science (growing)
  • Big Data and machine learning are the newest methodological development, still in early adoption stages.

🔮 Implications for the discipline

  • Represents a continuation of the quantitative current in political science.
  • Builds on the behavioral revolution's emphasis on data and measurement, but at a vastly larger scale.
  • May require new technical skills beyond traditional statistics—political scientists need to understand computational techniques.
  • Example: A researcher studying political behavior might now need to understand both survey statistics (behavioral wave) and machine learning algorithms (emerging wave) to fully analyze available data.

Section 3.1: Philosophy of Science

🧭 Overview

🧠 One-sentence thesis

Philosophy of science explores the foundations, methods, and implications of science, with key contributions from Karl Popper's falsification principle and Thomas Kuhn's paradigm shifts.

📌 Key points (3–5)

  • What philosophy of science asks: three core questions about the foundations, methods, and implications of science.
  • Falsification (Popper): any theory can be proven false but never proven true—this shapes how we test scientific claims.
  • Paradigm shifts (Kuhn): current ways of thinking, doing, and understanding can change over time.
  • Common confusion: falsification does not mean "disproving everything"; it means theories remain open to being disproven, not that they are worthless.
  • Why it matters: these philosophical principles underpin how the scientific method operates and how knowledge advances.

🔍 Core questions of philosophy of science

🔍 The three foundational questions

Philosophy of science: exploration of the foundations, methods, and implications of science.

The excerpt identifies three questions that philosophy of science asks:

  • What are the foundations of science? (What grounds scientific knowledge?)
  • What are the methods of science? (How do we conduct scientific inquiry?)
  • What are the implications of science? (What does science mean for knowledge and society?)

These questions frame the discipline and guide how we think about scientific practice.

🧪 Key contributions: Popper and Kuhn

🧪 Falsification (Karl Popper)

Falsification: the principle that any theory, or explanation of how the world works, can always be proven false and that a theory can never be proven true.

  • What it means: no matter how much evidence supports a theory, we can never be 100% certain it is true; however, a single contradictory observation can prove it false.
  • Why it matters: this principle shapes how scientists test theories—they look for ways to disprove them, not just confirm them.
  • Don't confuse: falsification does not mean "all theories are wrong"; it means they are always open to being challenged and revised.

Example: A theory might predict that all swans are white. Observing a million white swans does not prove the theory true, but observing one black swan proves it false.
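
A short sketch of the same swan logic, assuming a hypothetical list of observed swan colours, shows the asymmetry directly:

```python
# A minimal sketch of Popper's asymmetry using the swan example above:
# observations can falsify "all swans are white" but can never prove it.

def test_all_swans_are_white(observed_colours):
    for colour in observed_colours:
        if colour != "white":
            return "falsified"        # one counterexample is decisive
    return "not yet falsified"        # consistent evidence, but not proof

print(test_all_swans_are_white(["white"] * 1_000_000))              # not yet falsified
print(test_all_swans_are_white(["white"] * 1_000_000 + ["black"]))  # falsified
```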

🔄 Paradigm shifts (Thomas Kuhn)

Paradigm: current way of thinking, doing, and understanding.

  • What it means: a paradigm is the dominant framework or worldview in a field at a given time.
  • Paradigm shifts: Kuhn is known for the concept that these frameworks can change—old ways of thinking are replaced by new ones.
  • Why it matters: science does not just accumulate facts; it sometimes undergoes fundamental changes in how it interprets the world.

Example: A scientific community might shift from one theory of how conflicts arise to a completely different framework based on new evidence or methods.

🔗 Connection to the scientific method

🔗 How philosophy shapes practice

The excerpt places philosophy of science as the foundation for understanding the scientific method:

  • The questions about foundations and methods directly inform how researchers observe, theorize, test hypotheses, and update knowledge.
  • Falsification influences how hypotheses are tested: researchers seek evidence that could disprove their theories.
  • Paradigms influence what questions are asked and what methods are considered valid.

🔗 Key terms supporting the method

The excerpt defines several terms that link philosophy to practice:

Term | Definition | Role in scientific method
Theory | A statement, derived from observations, that declares a relationship between at least two variables | Explains observed phenomena; generates hypotheses
Hypothesis | A statement derived from theory, providing the direction of the relationship between two variables | Testable prediction that allows theory to be evaluated
Scientific method | Systematic process of discovering new knowledge | The practical application of philosophical principles

  • Theory comes from observations and proposes how the world works.
  • Hypothesis is derived from theory and specifies what we expect to see if the theory is correct.
  • Both are subject to falsification: they can be tested and potentially disproven.

Section 3.2: What is the Scientific Method?

🧭 Overview

🧠 One-sentence thesis

The scientific method is a systematic process that moves from observing the world to building theories, testing hypotheses, and analyzing data to discover new knowledge.

📌 Key points (3–5)

  • Core stages common to all models: observation of phenomena, theory-making, hypothesis derivation, data collection, and analysis.
  • Observation leads to theory: watching the world around us sparks inquiry and proposals about how things work.
  • Hypothesis tests theory: a hypothesis derived from theory allows researchers to test whether the theory holds.
  • Not all stages are always used: depending on the research question, political scientists may skip certain stages (e.g., some articles focus on theory and analysis without new data collection).
  • Common confusion: the scientific method is not a rigid checklist—researchers adapt stages to fit their specific research questions and contexts.

🔬 The systematic process of discovery

🔬 What the scientific method is

Scientific method: systematic process of discovering new knowledge.

  • It is not a single fixed formula but a structured approach shared across models of varying complexity.
  • The excerpt emphasizes that "common to all three [models] are the initial steps, observation and theory making."
  • The method provides a framework for moving from curiosity about the world to testable explanations.

🌍 Observation: the starting point

  • Observations of the world around us are the foundation.
  • Watching phenomena leads to inquiry: "Why does this happen?" or "How does this work?"
  • Example: observing inequality in society, lobbyists interacting with policymakers, or conflicts involving the United Nations.

🧩 From observation to theory and hypothesis

🧩 Theory-making

Theory: a statement, derived from observations, that declares a relationship between at least two variables.

  • Theories propose how we think the world works based on what we observe.
  • They are explanations that connect different factors or variables.
  • Example: a theory might link equal inheritance rights to societal equality, or coalition composition to lobbying success.

🎯 Hypothesis: testing the theory

Hypothesis: a statement derived from theory, providing the direction of the relationship between two variables.

  • A hypothesis is a specific, testable prediction drawn from the broader theory.
  • It allows researchers to check whether the theory holds in practice.
  • Example: "Do equal inheritances succeed in leveling the societal playing field?" or "Can diverse coalitions signal broad support to policymakers?"
  • Don't confuse: a hypothesis is not the same as a theory—it is narrower and designed for testing.

📊 Data collection and analysis

📊 Collecting evidence

  • To test the hypothesis, researchers gather data.
  • Data can be country-specific (e.g., Germany), issue-specific (e.g., 50 issues in five European countries), or based on other units of analysis.
  • The excerpt notes that not all studies include explicit data collection in their abstracts; some focus on other stages.

🔍 Analysis: making sense of the data

Analysis: using methods (e.g., statistical models, simulations) to examine data and assess the hypothesis.

  • Analysis involves applying tools to the collected data to see if the evidence supports the theory.
  • Example: using statistical models to evaluate combined impacts across pathways, or simulations to assess alternative policy scenarios.
  • The excerpt describes simulations as "a bit advanced" but clearly related to analysis.
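
As a rough illustration of what "simulations to assess alternative policy scenarios" can look like, the toy model below invents its own numbers and functional form; it is not the statistical model used in any of the articles discussed here.

```python
import random

random.seed(7)

def simulate_outcome(peacekeepers, n_runs=10_000):
    """Average a toy outcome over many simulated runs for one hypothetical policy level."""
    results = []
    for _ in range(n_runs):
        noise = random.gauss(0, 50)
        results.append(max(0.0, 1000 - 0.4 * peacekeepers + noise))
    return sum(results) / n_runs

status_quo  = simulate_outcome(peacekeepers=500)    # hypothetical current policy
alternative = simulate_outcome(peacekeepers=1500)   # hypothetical alternative policy
print(round(status_quo), round(alternative))        # the alternative yields a lower simulated average
```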

🔄 Update: refining understanding

  • After analysis, researchers update their understanding or propose new insights.
  • Example: suggesting that more peacekeeping operations can reduce conflict impacts, or refining theories about coalition composition and lobbying success.
  • This stage may involve revising the theory or offering policy recommendations based on findings.

🗺️ Applying the method in political science

🗺️ Mapping journal articles to the method

The excerpt provides a table summarizing how three political science articles map onto the scientific method stages:

Article | Observation | Theory | Hypothesis | Data | Analysis | Update
Hager and Hilbig 2019 | Society and inequality | Equal inheritance rights and societal equality | "Do equal inheritances succeed in leveling the societal playing field?" | Country-specific: Germany | Not in abstract | Not in abstract
Junk 2019 | Lobbyists, coalitions, policymakers | Coalition composition and lobbying success | "Diverse coalitions signal broad support to policymakers" | 50 issues in five European countries | Not in abstract | Theory of coalition composition, issue salience, and lobbying success
Hegre et al. 2019 | United Nations, conflicts | Peacekeeping operations and conflict | Not in abstract | Not in abstract | Statistical models and simulations | More peacekeeping operations reduces conflict impacts

🧩 Not all stages are always present

  • The excerpt emphasizes that "not all political scientists will utilize each stage of the scientific method due to the nature of their research question."
  • All three articles engage in observation and theory-making, but only two explicitly mention data collection in their abstracts.
  • Some articles focus more on analysis and updating existing theories rather than collecting new data.
  • Don't confuse: skipping a stage does not mean the research is incomplete—it reflects the specific goals and design of the study.

📄 Reading abstracts for method stages

Abstract: a summary of the article's contents.

  • Abstracts may not always make every stage explicit.
  • The excerpt notes that for Hegre et al., "the hypothesis and data are not clear, so we would need to read through the article to uncover this information."
  • Researchers must sometimes look beyond the abstract to fully map the scientific method stages.

🔑 Key concepts and terminology

🔑 Falsification

Falsification: the principle that any theory, or explanation of how the world works, can always be proven false and that a theory can never be proven true.

  • This concept (associated with Karl Popper) underscores that scientific knowledge is provisional.
  • Theories can be tested and potentially disproven, but never definitively proven.

🔄 Paradigm

Paradigm: current way of thinking, doing, and understanding.

  • A paradigm represents the dominant framework within which scientists operate.
  • Thomas Kuhn is noted for his concept of paradigm shifts, where fundamental changes in understanding occur.

🧪 Philosophy of science

Philosophy of science: exploration of the foundations, methods, and implications of science.

  • It asks foundational questions: What are the foundations of science? What are the methods? What are the implications?
  • The scientific method is one answer to the question about methods.

Section 3.3: Applying the Scientific Method to Political Phenomena

🧭 Overview

🧠 One-sentence thesis

Political scientists apply the scientific method to their research in varying ways, with not all studies engaging every stage of the method depending on the nature of their research question.

📌 Key points (3–5)

  • What this section does: maps three open-access journal article abstracts to show how political scientists use the scientific method in practice.
  • Common pattern: all three articles engage in observation and theory-making, but they differ in which other stages they include.
  • Key finding on data: only two of the three articles (Hager and Hilbig 2019; Junk 2019) explicitly collect data in their abstracts.
  • Key finding on updating: only two articles (Junk 2019; Hegre et al. 2019) update their theories based on findings.
  • Common confusion: researchers do not need to participate in every stage of the scientific method—the stages used depend on the research question.

📋 Mapping three political science articles

📋 The three articles examined

The section analyzes abstracts from three open-access journal articles to see how they map onto the scientific method stages:

  1. Hager and Hilbig 2019: focuses on equal inheritance rights and societal equality
  2. Junk 2019: examines coalition composition and lobbying success
  3. Hegre et al. 2019: studies UN peacekeeping operations and conflict

🔍 What the mapping reveals

  • All three articles begin with observation (noticing phenomena in the world) and theory (proposing relationships between variables).
  • The stages of hypothesis, data, analysis, and update are not present in all abstracts.
  • Some information is marked "NIA" (Not in Abstract), meaning it may exist in the full article but is not stated in the summary.

🧩 Stage-by-stage breakdown

🧩 Observation stage

All three articles observe political phenomena:

  • Hager and Hilbig: observe society and inequality in society
  • Junk: observe lobbyists, coalitions, and policy makers
  • Hegre et al.: observe the United Nations and conflicts

This confirms that observation is a universal starting point in political science research.

🧩 Theory stage

All three articles propose theories about relationships:

  • Hager and Hilbig: theory linking equal inheritance rights and societal equality
  • Junk: theory about coalition composition and lobbying success
  • Hegre et al.: theory connecting peacekeeping operations and conflict outcomes

🧩 Hypothesis stage

Only Hager and Hilbig and Junk present clear hypotheses in their abstracts:

  • Hager and Hilbig ask: "But do equal inheritances succeed in leveling the societal playing field?"
  • Junk states: "Based on pluralist theory, one can expect diverse coalitions, uniting different societal interests, to signal broad support to policy makers."
  • Hegre et al.: hypothesis not stated in abstract (NIA)

🧩 Data stage

Only two articles mention data collection in their abstracts:

  • Hager and Hilbig: use country-specific data from Germany
  • Junk: examine 50 issues in five European countries
  • Hegre et al.: data not mentioned in abstract (NIA)

🧩 Analysis stage

Only Hegre et al. explicitly describes analysis in the abstract:

  • Uses "statistical models and simulations" to evaluate combined impacts
  • The abstract mentions "simulations based on the statistical estimates to assess the impact of alternative UN policies for the 2001–13 period"
  • The other two articles do not describe analysis methods in their abstracts (NIA)

🧩 Update stage

Two articles update theories based on their findings:

  • Junk: updates "theory of coalition composition, issue salience, and lobbying success"
  • Hegre et al.: suggests "more peacekeeping operations reduces the impacts of conflicts"
  • Hager and Hilbig: no update mentioned in abstract (NIA)

📊 Summary comparison table

Article | Observation | Theory | Hypothesis | Data | Analysis | Update
Hager and Hilbig 2019 | Society and inequality | Equal inheritance rights and societal equality | Yes (question form) | Germany data | NIA | NIA
Junk 2019 | Lobbyists, coalitions, policy makers | Coalition composition and lobbying success | Yes (expectation statement) | 50 issues, 5 countries | NIA | Yes
Hegre et al. 2019 | UN, conflicts | Peacekeeping operations and conflict | NIA | NIA | Statistical models and simulations | Yes

🔑 Key takeaway about flexibility

🔑 Not all stages are required

The section emphasizes: "Not all political scientists will utilize each stage of the scientific method due to the nature of their research question."

  • Different research questions call for different approaches.
  • An article may focus on theory-building without collecting new data.
  • Another article may focus on data analysis without proposing a new theory.
  • The scientific method is a flexible framework, not a rigid checklist.

🔑 Don't confuse: method stages vs. research requirements

  • Common confusion: thinking every study must include all stages (observation, theory, hypothesis, data, analysis, update).
  • Reality: researchers engage with the stages that fit their specific research question and contribution.
  • Example: Hegre et al. focus on analysis and updating, while their hypothesis and data are not detailed in the abstract—this does not make their work unscientific.

🔬 Example: detailed reading of Hegre et al. abstract

🔬 Sentence-by-sentence mapping

The section walks through the Hegre et al. abstract to show how individual sentences map to scientific method stages:

  1. Sentence 1: describes observation (UN peacekeeping operations and conflicts)
  2. Sentence 2: uses a metaphor (carrying a bag with a hand vs. individual fingers) to argue for looking at combined effects—relates to analysis approach
  3. Sentence 3: declares a "novel method of evaluating the combined impact across all pathways based on a statistical model"—clearly analysis
  4. Sentence 4: describes "simulations based on the statistical estimates to assess the impact of alternative UN policies for the 2001–13 period"—also analysis
  5. Sentences 5 and 6: describe how alternative UN policy choices could have reduced conflict and lives lost—this is update, suggesting policy implications

🔬 What is missing

  • The abstract does not clearly state the hypothesis or describe the data in detail.
  • The section notes: "the hypothesis and data are not clear, so we would need to read through the article to uncover this information."
  • This illustrates that abstracts are summaries and may not include every stage of the scientific method.

Section 4.1: Correlation and Causation

🧭 Overview

🧠 One-sentence thesis

Correlation establishes connections between variables, but causation requires meeting four rigorous conditions—logical time ordering, correlation, mechanism, and non-spuriousness—before we can claim one variable truly causes another.

📌 Key points (3–5)

  • Correlation vs. causation: correlation shows two variables move together, but does not prove one causes the other; the adage "correlation does not equal causation" is central to political science.
  • Four conditions of causality: logical time ordering, correlation, mechanism, and non-spuriousness must all be satisfied to establish a causal relationship.
  • Common confusion: observing a connection between two variables does not automatically mean causation exists; correlation is only a prerequisite, not proof.
  • Why it matters: political phenomena are complex and intertwined, so researchers must critically evaluate whether observed relationships are truly causal or merely coincidental.

🔗 Understanding correlation

🔗 What correlation means

Correlation: a relationship or connection between two variables.

  • Correlation shows that two variables move together or are associated in some way.
  • It does not explain why they move together or whether one influences the other.
  • Example: the excerpt describes maps showing the percentage of women in U.S. states and the number of women in Congress; reviewing both maps suggests a correlation because states with fewer women tend to have fewer female representatives.

⚠️ Why correlation ≠ causation

  • The excerpt emphasizes the adage: "correlation does not equal causation."
  • Political science studies individuals, institutions, and processes that are inherently complex and intertwined, making it easy to mistakenly assume causation from observed connections.
  • Correlation is a prerequisite to causation, but other conditions must also be met.
  • Don't confuse: seeing two things happen together does not mean one caused the other.

🧱 The four conditions of causality

⏰ Logical time ordering

  • What it means: one variable must precede the other in time for the first to influence the second.
  • Why it matters: if the supposed cause happens after the effect, the causal claim makes no sense.
  • Example: the excerpt asks whether protests precede government responses. The answer is yes, because "why would the government respond to silence?"

🔗 Correlation (as a condition)

  • What it means: the two variables must move together; if they do not, it is difficult to suggest one influences the other.
  • Why it matters: correlation is the starting point for establishing causation.
  • Example: when people protest, governments pay attention (often due to media coverage), so there is a correlation between protest activity and government response.

⚙️ Mechanism

Causal mechanism: an explanation for how one variable influences the other.

  • What it means: you must explain the process or pathway by which one variable affects the other.
  • Explanations can range from straightforward to complex; both types are useful.
  • Example: the Arab Spring (starting in 2010) saw protesters organize via social media (Facebook, Twitter), which then prompted government reactions. Social media serves as the mechanism linking protest formation to government response.

🚫 Non-spuriousness

  • What it means: another variable is not actually driving the relationship; the observed relationship is not due to a third factor.
  • Why it matters: if a third variable is the real cause, the original causal claim is incorrect.
  • Example: international media outlets observing a protest may influence a government's response (e.g., governments may avoid lethal force to avoid international outcry). In this case, media presence is a potential spurious factor that affects the government's response, not just the protest itself.
  • Don't confuse: a relationship that looks causal may actually be driven by an unobserved third variable.
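
A small simulation, with hypothetical variables standing in for media attention, protest size, and government response, shows how a confounder alone can generate a strong correlation even when neither observed variable causes the other:

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation, computed directly from its definition."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical confounder: media attention drives BOTH protest size and the
# government response, while protest size has no direct effect of its own here.
protest_size, gov_response = [], []
for _ in range(5000):
    media = random.gauss(0, 1)
    protest_size.append(2.0 * media + random.gauss(0, 1))
    gov_response.append(1.5 * media + random.gauss(0, 1))

print(round(pearson(protest_size, gov_response), 2))  # high (around 0.7), despite no direct causal link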

🧪 Establishing causality in practice

🧪 The difficulty of causal claims

  • The excerpt's running example (public protest and government action) shows that establishing causation is difficult.
  • All four conditions must be worked through using both reason and evidence.
  • The difficulty does not mean researchers should avoid causal claims; rather, it represents a rigorous standard for determining true causal relationships.

🧪 Why rigor matters

  • Political science deals with complex, intertwined phenomena.
  • Researchers can be susceptible to assuming causation from mere observation.
  • Taking the four conditions seriously helps avoid false causal claims and strengthens the quality of research.

Summary table: The four conditions of causality

Condition | What it requires | Example from excerpt
Logical time ordering | Cause must precede effect in time | Protests happen before government responds
Correlation | Variables must move together | When protests occur, governments pay attention
Mechanism | Explain how one influences the other | Social media helps protesters organize, prompting government reaction
Non-spuriousness | No third variable is the real cause | International media presence may also influence government response, not just the protest

Section 4.2: Theory Construction

🧭 Overview

🧠 One-sentence thesis

A theory explains how the world works by stating a relationship between at least two objects while holding all other factors constant, and good theories are general, parsimonious, and falsifiable.

📌 Key points (3–5)

  • What a theory is: a statement about the relationship between two objects (variables) with all other objects held constant.
  • How theories are generated: without reference to existing theories (rare), as extensions of existing theories (common), or as contradictions of existing theories.
  • Three characteristics of good theories: generality (applicable across contexts), parsimony (simplicity), and falsifiability (can be shown false by evidence).
  • Common confusion: a single object is not a theory—you need at least two objects and a relationship between them; observing one object alone is not theorizing.
  • Why theories matter: they help us focus on specific relationships in a complex world and lead us to explore possibilities and test predictions.

🧩 What a theory is

🧩 Basic definition

A theory is an explanation of how the world works.

  • Theories can explain natural, physical, chemical, biological, social, political, or historical phenomena.
  • Example: Political scientists theorize that elected officials are more responsive to voters during campaigns because they want to demonstrate proactive service to constituents.

🔗 Formal structure

A theory is a set of assumptions about constants, variables, and the relationship between variables.

  • More precisely: a theory is a statement about the relationship between two objects with all other objects held constant.
  • Why "constant" matters: In a complex world with many objects, holding all other objects still lets us focus on just two objects and their relationship.
  • Visual model: X and Y represent the two objects of interest; the relationship between them is the core of the theory.

⚠️ What is NOT a theory

  • Focusing on just one object is not a theory—it is merely observing an object for its own sake.
  • A theory requires at least two objects because it explains how one object relates to another.

🛠️ How theories are generated

🛠️ Three pathways

Pathway | Description | Frequency
Without reference to existing theory | Generate a theory from scratch | Very rare
Extension of existing theory | Build on and expand an existing theory | Common
Contradiction of existing theory | Challenge or oppose an existing theory | Also common

  • Most theories rely on existing theories because the two objects of interest usually already have some theoretical explanation.

🎯 Applying a model theory

🎯 The model

  • A model theory states: two objects (X and Y) exist, and a relationship exists between X and Y.
  • To apply it, identify the two objects of interest in your topic and explain why/how they relate.

📰 Example 1: Media and government

  • X = the media; Y = the government.
  • The theory should explain why and how a relationship exists between media and government.
  • Assumption: other political actors are held constant so we can focus on this relationship.

🗳️ Example 2: Information and voters

  • X = information; Y = voters.
  • Why the relationship exists:
    • Voters use information to make voting decisions.
    • Candidates and campaigns send information to voters to influence decisions.

🔀 Analyzing complex theories

🔀 Multiple variables

  • Theories can include more than two variables (e.g., X, Y, and Z).
  • Possible relationships: X and Y, Y and Z, X and Z.
  • The core remains: relationships between pairs of objects.

⚠️ Don't confuse complexity with necessity

  • Adding more variables increases the number of potential relationships.
  • However, as discussed below, parsimony (simplicity) is a desirable characteristic—more complexity is not always better.

✅ Three characteristics of good theories

🌍 Generality

A theory should be general, meaning it can include a variety of operationalizations and geographic contexts.

  • Specific theory example: How voters in a midwestern U.S. state decide to support a presidential candidate.
    • Useful for understanding Midwestern voters, campaigns, and news outlets.
    • Limitation: hard to extend beyond the Midwest.
  • General theory example: How voters respond to national-level candidates.
    • Can collect evidence from voters in Europe, South America, Africa, Asia, Oceania, and North America.
    • Findings about similarities and differences help us understand the relationship more broadly.
  • The specific theory can feed into the general theory; starting general lets us think broadly, then narrow down to specific places of interest.

✂️ Parsimony

Parsimonious means frugal or sparing; in other words, keep theories simple.

  • Why parsimony matters: Complicated theories make it harder to see generality and falsifiability.
  • Simple example: Gender influences who runs and wins elected office.
    • Hypothesis: In a study of voters, male candidates are more likely than female candidates to be elected.
    • Straightforward: one candidate attribute influences voter support.
  • Complex example: Candidate attributes → voter behavior → campaign strategies → election processes → policy outcomes.
    • Five concepts in a linear chain.
    • Problems:
      • Are candidate attributes the only thing influencing voting behavior?
      • Does voter behavior influence campaign strategies, or is it the other way around?
    • The length of the chain makes it susceptible to criticism and difficult to discern the nature of relationships.

🔬 Falsifiability

Falsifiability is the ability of a theory to be shown as false.

  • Why falsifiability is essential: If no amount of reason or evidence can show a theory is incorrect, the theory cannot be scrutinized.
  • Without falsifiability, the scientific method breaks down—new information cannot challenge a theory or suggest alternatives.
  • Don't confuse: A theory becoming a "law" does not mean it is ironclad; it is accepted by the scientific community for the time being but can still be falsified in the future with new evidence from different times, places, and contexts.

🔗 How the three work together

  • Generality, parsimony, and falsifiability make theories integral to the scientific method and the discovery and creation of new knowledge.

🔍 Creating a theory

🔍 Core principle

  • Theories are statements of relationships between two concepts.
  • Aim for the three characteristics: general, parsimonious, and falsifiable.

🧪 The process

  • Observe the world and propose how it works.
  • Identify at least two objects of interest.
  • State the relationship between them while holding all other objects constant.
  • Theories lead us to explore possibilities: What happens if one object changes? How does the other object respond?

Section 4.3: Generating Hypotheses from Theories

🧭 Overview

🧠 One-sentence thesis

A hypothesis is an if-then statement derived from a theory that specifies the values of two concepts and how a change in one affects a change in the other.

📌 Key points (3–5)

  • What a hypothesis is: an if-then statement derived from a theory.
  • How hypothesis differs from theory: a theory states a relationship between concepts; a hypothesis declares specific values and how changing one value affects the other.
  • Three required elements: units of observation, a value of the independent variable, and a value of the dependent variable.
  • Common confusion: don't confuse theory (general relationship) with hypothesis (specific values and directional change).

🔬 From theory to hypothesis

🔬 What a hypothesis is

A hypothesis is an if-then statement that is derived from a theory.

  • A hypothesis translates a general theory into a testable statement.
  • It moves from abstract relationships to concrete, observable predictions.
  • Example: If a theory says "A influences B," a hypothesis specifies "If A increases, then B will increase by X amount."

🔄 How hypothesis differs from theory

Aspect | Theory | Hypothesis
Scope | States a relationship between two concepts or objects | Declares specific values of the two concepts
Specificity | General explanation | Specifies how change in one value affects change in the other
Form | Relationship statement | If-then statement

  • Theory is broader and more abstract; hypothesis is narrower and more concrete.
  • Don't confuse: a theory establishes that a relationship exists; a hypothesis predicts the direction and nature of that relationship with specific values.

🧱 Three required elements

🧱 Units of observation

  • The hypothesis must identify what objects or entities are being observed.
  • These are the specific things the researcher will examine to test the hypothesis.
  • Example: If studying voting behavior, the units of observation might be individual voters or electoral districts.

➡️ Value of the independent variable

  • The hypothesis must specify a particular value or state of the independent variable (the presumed cause).
  • This is the "if" part of the if-then statement.
  • Example: "If voter turnout is high..." specifies a value (high) for the independent variable (voter turnout).

⬅️ Value of the dependent variable

  • The hypothesis must specify a particular value or state of the dependent variable (the presumed effect).
  • This is the "then" part of the if-then statement.
  • Example: "...then election legitimacy will increase" specifies a value (increase) for the dependent variable (election legitimacy).

🎯 Putting it together

🎯 Complete hypothesis structure

A well-formed hypothesis includes all three elements in an if-then format:

  • If [independent variable takes specific value] then [dependent variable takes specific value]
  • The units of observation are either stated explicitly or implied by the variables.
  • Example: "If an organization increases funding (independent variable value), then its program outcomes will improve (dependent variable value)" with organizations as units of observation.

⚠️ Common confusion reminder

  • Theory: "There is a relationship between A and B."
  • Hypothesis: "If A increases to level X, then B will increase to level Y."
  • The hypothesis is more specific and testable because it commits to particular values and directions of change.

Section 4.4: Exploring Variables

🧭 Overview

🧠 One-sentence thesis

Variables are objects that change, and they fall into two main categories—discrete (countable) and continuous (measurable)—each with further subtypes that determine how we work with them.

📌 Key points (3–5)

  • What a variable is: an object that can vary or change because of its inherent properties.
  • Two broad categories: discrete variables (values we can count) vs. continuous variables (values we can measure).
  • Discrete subtypes: nominal (categories with no order) and ordinal (categories with a meaningful order).
  • Continuous subtypes: interval (measurable with no true zero) and ratio (measurable with a true zero).
  • Common confusion: discrete vs. continuous—ask "Can I count distinct categories?" (discrete) or "Can I measure along a scale?" (continuous).

🔢 What variables are

🔢 Core definition

Variable: an object that can hold at least two values.

  • Variables are not fixed; they change or vary due to their inherent properties.
  • The excerpt emphasizes that variation is the defining feature: if an object can take on different values, it is a variable.
  • Example: a political attitude that can be "agree," "neutral," or "disagree" is a variable because it varies across people or time.

🗂️ Two main categories of variables

🗂️ Discrete vs. continuous

The excerpt divides all variables into two categories based on how we capture their values:

Category | How we capture values | Subtypes
Discrete | Values we can count | Nominal, Ordinal
Continuous | Values we can measure | Interval, Ratio

  • Discrete: think of distinct, separate categories or whole numbers.
  • Continuous: think of a scale or ruler where values can fall anywhere along a range.
  • Don't confuse: "discrete" does not mean "small number of values"; it means countable, separate categories. "Continuous" means measurable on a scale, not necessarily "infinite."

🏷️ Discrete variables

🏷️ Nominal variables

  • Nominal variables are discrete and represent categories with no inherent order.
  • Example: political party affiliation (Democrat, Republican, Independent) is nominal because the categories do not have a natural ranking.
  • You can count how many fall into each category, but you cannot say one category is "higher" or "lower" than another.

📊 Ordinal variables

  • Ordinal variables are discrete and represent categories with a meaningful order.
  • Example: education level (high school, bachelor's, master's, doctorate) is ordinal because the categories can be ranked from lower to higher.
  • You can count and rank, but the "distance" between categories is not necessarily equal or measurable.

📏 Continuous variables

📏 Interval variables

  • Interval variables are continuous and can be measured along a scale, but they have no true zero point.
  • Example: temperature in Celsius is interval because 0°C does not mean "no temperature"; it is an arbitrary point on the scale.
  • You can measure differences (e.g., the difference between 10°C and 20°C is the same as between 20°C and 30°C), but you cannot say "20°C is twice as hot as 10°C" because there is no true zero.

📐 Ratio variables

  • Ratio variables are continuous and can be measured along a scale with a true zero point.
  • Example: income in dollars is ratio because $0 means "no income," and you can say "$20,000 is twice as much as $10,000."
  • Ratio variables allow both measurement of differences and meaningful ratios between values.

🔍 How to distinguish variable types

🔍 Key questions to ask

  • Is it countable or measurable?
    • Countable (distinct categories) → discrete.
    • Measurable (on a scale) → continuous.
  • If discrete, is there an order?
    • No order → nominal.
    • Meaningful order → ordinal.
  • If continuous, is there a true zero?
    • No true zero → interval.
    • True zero → ratio.

🔍 Common confusion

  • Don't confuse ordinal with interval: ordinal has ranked categories but unequal spacing; interval has equal spacing on a scale.
  • Don't confuse interval with ratio: both are measured, but only ratio has a true zero that allows meaningful ratios.
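
A quick numeric check, using assumed temperatures and incomes, shows why ratio statements only make sense on scales with a true zero:

```python
# A small numeric illustration (assumed values) of why ratio claims need a true zero:
# Celsius is an interval scale, while Kelvin and income are ratio scales.

def celsius_to_kelvin(c):
    return c + 273.15

c_low, c_high = 10.0, 20.0
print(c_high / c_low)                                                  # 2.0, but "twice as hot" is meaningless
print(round(celsius_to_kelvin(c_high) / celsius_to_kelvin(c_low), 3))  # ~1.035 on the true-zero scale

income_low, income_high = 10_000, 20_000      # $0 really does mean "no income"
print(income_high / income_low)               # 2.0, and "twice as much" is meaningful
```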

Section 4.5: Units of Observation and Units of Analysis

🧭 Overview

🧠 One-sentence thesis

Political scientists distinguish between units of observation—the objects they observe to describe relationships—and units of analysis—the objects they actually analyze.

📌 Key points (3–5)

  • What units of observation are: objects that a researcher specifically observes with the goal of describing relationships between objects.
  • What units of analysis are: objects that a researcher specifically analyzes.
  • Key distinction: observation vs. analysis—not every object observed is the object being analyzed; they serve different purposes in research.
  • Common confusion: the two terms sound similar but refer to different stages/roles in research—observation is about gathering data on objects, analysis is about examining those objects.
  • Why it matters: political scientists observe a wide range of political objects, but these objects do not all have the same purpose in a study.

🔍 The two types of research objects

🔍 Units of observation

Units of observation: the objects that a researcher is specifically observing with the goal of describing the relationship between the objects.

  • These are the objects you look at or collect data from.
  • The purpose is to describe relationships between objects.
  • Example: A researcher observes voting records (the unit of observation) to describe how voter turnout relates to campaign spending.

🔬 Units of analysis

Unit of analysis: the object that a researcher is specifically analyzing.

  • This is the object you are actually studying or making claims about.
  • The focus is on analyzing this object, not just observing it.
  • Example: If a researcher analyzes election districts to understand policy outcomes, the district is the unit of analysis.

🧩 Understanding the distinction

🧩 Different purposes in research

  • The excerpt emphasizes that political scientists observe "a wide range of political objects," but these objects do not have the same purpose.
  • Some objects are observed to gather information; others are the target of the analysis itself.
  • The distinction helps clarify what role each object plays in the research design.

⚠️ Don't confuse observation with analysis

  • Observation = what you look at to collect data and describe relationships.
  • Analysis = what you examine and draw conclusions about.
  • The same study may involve multiple units of observation but focus analysis on a single unit of analysis.
  • Example: A researcher might observe individual voters (unit of observation) but analyze political parties (unit of analysis) to understand party strategy.
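
A small sketch with invented voter records illustrates observing at one level while analyzing at another: individual voters are the units of observation, and parties become the units of analysis once the observations are aggregated.

```python
# Hypothetical voter-level observations (units of observation).
voter_observations = [
    {"voter": 1, "party": "A", "turnout": 1},
    {"voter": 2, "party": "A", "turnout": 0},
    {"voter": 3, "party": "B", "turnout": 1},
    {"voter": 4, "party": "B", "turnout": 1},
]

# Aggregate to the party level (units of analysis).
party_level = {}
for obs in voter_observations:
    party_level.setdefault(obs["party"], []).append(obs["turnout"])

party_turnout_rate = {party: sum(vals) / len(vals) for party, vals in party_level.items()}
print(party_turnout_rate)  # {'A': 0.5, 'B': 1.0} -- claims are made about parties, not voters
```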

📊 Summary table

Concept | Definition | Purpose
Unit of observation | Objects specifically observed by the researcher | Describe relationships between objects
Unit of analysis | Objects specifically analyzed by the researcher | Make analytical claims about these objects

📊 Key takeaway

  • Not every object you observe is the object you analyze.
  • Clarifying which is which helps structure research and avoid confusion about what the study is actually about.

Section 4.6: Causal Modeling

🧭 Overview

🧠 One-sentence thesis

Causal modeling is a visual tool that helps researchers "see" and clarify the relationships between variables, including direct causes, mediators, and confounders.

📌 Key points (3–5)

  • What causal modeling does: visualizes simple and complex relationships between variables so researchers can see connections.
  • Three basic model types: direct cause (A→B), mediated relationship (A→M→B), and confounded relationship (C→A, C→B, A→B).
  • Mediator vs confounder: a mediator stands between cause and effect; a confounder influences both the cause and the effect but may not be explicitly included in the original model.
  • Common confusion: not every arrow means "cause"—in a mediated model, M is not considered the cause because A is present; context matters.
  • Why it matters: drawing causal models helps researchers explore political phenomena and consider other possible relationships between concepts.

🎨 What causal modeling is

🎨 Definition and purpose

Causal modeling: a visual method for describing simple and complex relationships between variables.

  • It allows researchers to "see" the relationships between objects of interest.
  • The excerpt emphasizes that drawing these models is useful for exploring political phenomena.
  • It helps researchers consider the possibility of other relationships between concepts that they might not have thought of initially.

🔍 How it works

  • Causal models use circles (or nodes) to represent variables.
  • Arrows show directional relationships: the arrow points from the influencing variable to the influenced variable.
  • Solid lines indicate explicitly included relationships; dotted lines (on circles) indicate variables not originally included in the model.

📐 Three basic model structures

📐 Model 1: Direct causation (A → B)

  • The simplest structure: A points directly to B.
  • A is considered the "cause" and B is the "effect."
  • No intermediary or outside variables are shown.

🔗 Model 2: Mediated relationship (A → M → B)

  • Three objects: A, M, and B.
  • An arrow points from A to M, and another from M to B.
  • M stands for mediator: it mediates or stands in between the relationship between A and B.
  • Important distinction: A influences B through M, so A is more precisely an "indirect cause."
  • Don't confuse: M is not considered the "cause" because the model includes A—the mediator is part of the pathway, not the origin.

🌀 Model 3: Confounded relationship (C → A, C → B, A → B)

  • Three objects: A, B, and C.
  • A points to B (A is a cause of B).
  • C is a confounder: it has a directional relationship with both A and B.
  • The confounder was not explicitly included in the original model, shown by dots instead of solid lines on the circle.
  • Why confounders matter: they reveal that the relationship between A and B may be influenced by an outside variable that affects both.

🧩 Key distinctions in causal models

🧩 Mediator vs confounder

Concept | Role | Visual cue | Example structure
Mediator | Stands between cause and effect; transmits the influence | Solid circle, in the middle of the path | A → M → B
Confounder | Influences both the cause and the effect from outside | Dotted circle, points to both A and B | C → A, C → B, A → B

  • Mediator: part of the causal pathway; explains how A affects B.
  • Confounder: an external factor; may create a spurious relationship or complicate interpretation.

⚠️ When something is not the "cause"

  • In Model 2, M is not considered the "cause" even though there is an arrow from M to B.
  • Reason: the model includes A, which is the origin of the influence.
  • Context matters: the same variable can play different roles depending on what else is in the model.

🛠️ Practical use of causal modeling

🛠️ Why draw causal models

  • Clarifies thinking: forces researchers to specify which variables influence which.
  • Reveals hidden relationships: helps identify mediators and confounders that might otherwise be overlooked.
  • Supports theory testing: visual models make it easier to generate hypotheses and design studies.

🗺️ Keeping the tool handy

  • The excerpt advises: "As you explore political phenomenon, keep the tool of causal modeling handy."
  • Causal models are not just for final presentations—they are useful throughout the research process for thinking through relationships.
  • Example: A researcher studying voter turnout might draw a model to see whether education directly affects turnout or works through political efficacy (a mediator), and whether income confounds both education and turnout.
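
The three structures can also be written down as simple edge lists, which is a lightweight way to keep track of who points to whom; the helper function below is a hypothetical convenience, not part of the excerpt.

```python
# The three basic structures, written as lists of (source, target) arrows.
direct_cause = [("A", "B")]                              # A -> B
mediated     = [("A", "M"), ("M", "B")]                  # A -> M -> B
confounded   = [("C", "A"), ("C", "B"), ("A", "B")]      # C -> A, C -> B, A -> B

def parents(node, edges):
    """List every variable with an arrow pointing into `node`."""
    return [src for src, dst in edges if dst == node]

print(parents("B", direct_cause))  # ['A']       -- B has a single direct cause
print(parents("B", mediated))      # ['M']       -- B is reached only through the mediator
print(parents("B", confounded))    # ['C', 'A']  -- B has both a cause and a confounder
```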

Section 5.1: Conceptualization in political science

🧭 Overview

🧠 One-sentence thesis

Conceptualization—the process of naming and defining abstract ideas through observation and imagination—provides the foundational building blocks for political science theories by organizing concepts into dimensions and indicators that can be systematically studied.

📌 Key points (3–5)

  • What conceptualization is: the process of naming things in the world (observed or imagined) and using language to communicate those names as concepts.
  • Concepts as building blocks: concepts are the foundation of theories; they are abstract names for things, feelings, and ideas that emerge from human interaction with each other and the environment.
  • Hierarchy of abstraction: concepts (most abstract) contain dimensions (less abstract), which contain indicators (most concrete and observable).
  • Common confusion: dimensions vs. indicators—dimensions are underlying variations within a concept (e.g., "leadership form"), while indicators are specific, observable aspects of those dimensions (e.g., "number of leaders").
  • Concept mapping as a tool: a visual method to organize concepts, dimensions, and indicators spatially, revealing relationships and knowledge gaps.

🧱 What conceptualization means

🧱 The core process

Concepts are "names for things, feelings, and ideas generated or acquired by people in the course of relating to each other and to their environment."

  • Conceptualization requires both observation (what exists) and imagination (what could exist).
  • It is one of the first steps to engaging with the world systematically.
  • The process involves using language to communicate names or concepts.

🔍 Two pathways to concepts

Political scientists create concepts through:

| Pathway | How it works | Example from excerpt |
| --- | --- | --- |
| Observation | Notice patterns in the real world | A political scientist observes that all groups abide by authority, which looks different across groups → conceptualizes "regime" |
| Imagination | Envision possibilities not yet realized | A political theorist imagines organizing political authority for all humankind → conceptualizes "global government" |

  • Don't confuse: concepts can be purely observed, purely imagined, or a mix of both.

📚 Historical example: Aristotle's conceptualization

Aristotle's Politics demonstrates early conceptualization in political science:

  • He first conceptualized basic elements: citizenship and the state.
  • He defined a citizen as someone "who has the power to take part in the deliberative or judicial administration of any state."
  • He then conceptualized varieties of government (regime types) by asking: how many forms of government exist, and what differentiates them?

Why it matters: Aristotle's work shows that concept building involves determining precise language for observations and ideas important for understanding social life.

🔢 Dimensions and indicators

🔢 Understanding dimensions

  • Dimensions are underlying variations within a single concept.
  • They are less abstract than the concept itself.
  • A single concept often has many dimensions.

Example from Aristotle's regime concept:

  • Dimension 1: How concentrated is political authority? (in one, a few, or many leaders)
  • Dimension 2: How are leaders selected?
  • Dimension 3: In whose interest do leaders rule? (common interest vs. private interest)

📊 Aristotle's regime classification

The excerpt provides Aristotle's framework as a concrete example:

| Number of Rulers | Ruling in Common Interest | Ruling in Private Interest |
| --- | --- | --- |
| One | Kingship | Tyranny |
| Few | Aristocracy | Oligarchy |
| Many | Polity | Democracy |

  • Aristotle identified two salient dimensions: (1) size of the ruling group, and (2) whose interests they serve.
  • This shows how dimensions organize variation within a concept.

🎯 Understanding indicators

Indicators are more concrete aspects of dimensions. They are more specific and are often what we observe in the world around us.

  • Indicators are the most concrete level.
  • They are what we can actually observe or measure.
  • There may be many indicators for a single dimension.

Example: For the dimension "leadership structure" of regime:

  • Indicator 1: "one, few, or many" rulers (Aristotle's approach)
  • Indicator 2: Specific number of rulers (e.g., U.S. federal government has 537 elected rulers: 535 legislators + 1 president + 1 vice president)

🔗 The hierarchy of abstraction

The excerpt emphasizes how these three levels relate:

CONCEPT (most abstract)
    ↓
DIMENSIONS (less abstract, many per concept)
    ↓
INDICATORS (most concrete/observable, many per dimension)

Example from the excerpt:

  • Concept: Regime
  • Dimension: Leadership selection
  • Indicators: Presence of elections vs. absence of elections

Another example—prosperity:

  • Concept: Prosperity
  • Dimensions: amount of wealth, health of society, equality of distribution, stability of wealth
  • Indicators: (the excerpt notes many possible measures exist for each dimension, covered in section 5.3)

⚠️ Don't confuse dimensions and indicators

  • Dimensions describe types of variation within a concept (e.g., "How is leadership structured?")
  • Indicators are specific observable features that show that variation (e.g., "537 elected officials")
  • The excerpt notes that dimensions and indicators can be variables (connecting to Chapter 4).
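
The hierarchy can also be represented as a small data structure. The sketch below is illustrative only; the entries are drawn from the regime examples above, but the exact grouping is an assumption of this summary:

```python
# Concept -> dimensions -> indicators, using the regime example from the text
regime = {
    "concept": "regime",
    "dimensions": {
        "leadership structure": ["one, few, or many rulers", "exact count of rulers"],
        "leadership selection": ["presence of elections", "absence of elections"],
        "whose interest":       ["common interest", "private interest"],
    },
}

for dimension, indicators in regime["dimensions"].items():
    print(f"{regime['concept']} -> {dimension} -> {indicators}")
```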

🗺️ Concept mapping method

🗺️ What concept mapping is

Concept mapping is a method for identifying concepts, dimensions and indicators, and their relationships to each other.

  • It is a visual tool that creates a pictorial understanding of relationships.
  • It can be done individually or in groups.
  • It helps with formulating research topics and eventually research questions.

📐 Three key conventions

📐 Convention 1: Enclosing concepts

  • Key concepts are enclosed in boxes or circles.
  • Alternative: write concepts on slips of paper to move them around.

Example: For the question "What are the consequences of different regime types in the world?":

  • Put "regime" in a box at the top
  • Put related concepts like "conflict," "prosperity," and "power" in other boxes

📐 Convention 2: Spatial organization

  • Organize from top to bottom.
  • More general concepts at the top of the mapping space.
  • More specific concepts at the bottom.
  • The mapping space can be anything from a piece of paper to a wall-sized whiteboard.

📐 Convention 3: Connecting relationships

  • Use lines or arrows to connect related concepts.
  • Label the connections with words describing the relationship.

Example from the excerpt:

  • Connect "regime" and "leadership form" with a line labeled "according to Aristotle, determined by"
  • Connect "regime" and "private interest" with "is perverted when rulers rule in the"

🎯 Why use concept mapping

Benefits identified in the excerpt:

| Purpose | How it helps |
| --- | --- |
| Visualize knowledge scope | Shows what you know about a central concept |
| Reveal organization | Displays how knowledge is structured |
| Identify gaps | Shows areas for research |
| Systematic thinking | Uses specific conventions (unlike general brainstorming) |

🆚 Concept mapping vs. brainstorming

  • Brainstorming: more general, no conventions for visual organization, just jotting down related concepts.
  • Concept mapping: specific conventions for how to draw and organize concepts spatially.
  • Don't confuse: concept mapping is more structured and systematic than brainstorming.

Section 5.2: Operationalization

🧭 Overview

🧠 One-sentence thesis

Operationalization transforms abstract concepts into measurable terms with variation, enabling researchers to collect data that can reveal patterns and answer research questions.

📌 Key points (3–5)

  • What operationalization means: defining a concept in measurable terms so that it can take on different values in the real world.
  • Why variation is essential: without variation in a measure, it is impossible to identify patterns of association, correlation, or causation; constants cannot explain things that vary.
  • How to collect data: determine what kind of data (quantitative, qualitative, or mixed), why you need it (to understand concepts and answer research questions), and how to obtain it (literature review, existing datasets, direct collection).
  • Common confusion: operationalizing too broadly (e.g., "presence of a government") creates a constant with no variation, making it useless for explaining outcomes that do vary.
  • Starting point for data collection: conduct a literature review to find existing datasets and avoid reinventing the wheel; use government statistics, public opinion polls, academic research, and commercial sources.

🔧 What operationalization means

🔧 Defining concepts in measurable terms

Operationalization: the process by which a researcher defines a concept in measurable terms; "to operationalize a concept means to put it in a form that permits some kind of measurement of variation."

  • It is not enough to name a concept; you must specify how to measure it in the real world.
  • The measure must be concrete enough to observe and record.
  • Example: the concept "regime" can be operationalized by counting the number of leaders in power (one leader, a few leaders, or many leaders).

📏 Variation is required

  • Variation means the measure takes on different values across observations.
  • Without variation, the measure becomes a constant.
  • Example: operationalizing "regime" as "presence of a government" produces no variation in the contemporary world—every country has a government—so this measure cannot explain differences in outcomes like interstate war.

⚠️ Why constants are problematic

  • A constant cannot explain something that varies.
    • Example: if regime type is constant (all countries have governments) but interstate war varies (some countries fight wars, others do not), the constant cannot explain the variation.
  • A constant cannot be explained by other variables.
    • Example: economic growth varies by country, but if regime type is constant, you cannot determine whether economic growth affects regime type.
  • Don't confuse: a poorly operationalized concept (too broad, no variation) with a well-operationalized concept (specific, varies across cases).
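
A quick numeric illustration (all values hypothetical) of why a constant cannot enter an analysis: its standard deviation is zero, and measures of association divide by that spread.

```python
import statistics

# Hypothetical scores for five countries
has_government    = [1, 1, 1, 1, 1]     # "presence of a government": a constant
number_of_leaders = [1, 7, 9, 25, 435]  # a measure with real variation
interstate_wars   = [0, 1, 0, 2, 1]     # an outcome that varies

for name, xs in [("has_government", has_government),
                 ("number_of_leaders", number_of_leaders),
                 ("interstate_wars", interstate_wars)]:
    print(name, "spread:", statistics.stdev(xs))
# has_government has spread 0.0, so it cannot be associated with war involvement;
# number_of_leaders varies and can enter an analysis.
```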

📐 Examples of operationalizing "regime"

📐 Aristotle's measures

The excerpt uses Aristotle's conceptualization of "regime" to illustrate operationalization:

| Measure | Categories | What it captures |
| --- | --- | --- |
| Number of leaders | One, few, many | How many people hold power |
| Whose interest | Private, common | Whether rulers rule for themselves or the public |

  • These measures have variation: different countries have different numbers of leaders and different ruling interests.
  • A third common measure today: presence of free and fair elections (binary: yes or no).

🧪 Example: counting leaders

  • Operationalizing "regime" as the number of leaders in power allows you to count individuals.
  • Real-world variation: Zimbabwe had a single leader (Robert Mugabe, 1980–2017); China's Politburo Standing Committee has varied from five to eleven decision-makers since 1949.
  • This variation makes it possible to explore patterns, such as whether the number of leaders affects conflict or prosperity.

📊 Collecting data

📊 Three central questions

The excerpt frames data collection around three questions:

  1. What kind of data should I collect?
  2. Why am I collecting this data?
  3. How can I collect this data?

🎯 What kind of data to collect

🎯 Scope considerations

  • Time and space: decide which period of time and which parts of the world to focus on.
  • Best strategy for beginners: ask yourself what you are interested in and whether you have prior knowledge that can help.
  • Genuine interest is crucial because research and data collection require sustained effort and often present unexpected challenges.

🎯 Quantitative, qualitative, or mixed

  • The method often depends on how the concept has been operationalized.
    • Quantitative: if you operationalize "regime" as a count of leaders, build a quantitative dataset.
    • Qualitative: if you want to collect the titles of political offices, use a qualitative approach.
    • Mixed: if you need both the number of leaders and their titles, collect both quantitative and qualitative data.
  • Chapters 7 and 8 (mentioned in the excerpt) cover qualitative and quantitative methods in more depth.

🤔 Why am I collecting this data?

  • Return to first principles: What is the underlying concept of interest? How has it been operationalized? Does the measure vary?
  • Data collection demands resources (time, money, carbon emissions), so evaluate whether the ideal data will help understand the concept and answer the research question.
  • Having a research question formulated helps sharpen the evaluation of proposed data collection.

🔍 How can I collect this data?

🔍 Conduct a literature review

Literature review: the process of reading relevant scholarly work on a research topic or research question of interest.

  • Why: to ascertain whether relevant data has already been collected and exists in an accessible dataset; to identify related research and datasets that might be used to build a new dataset.
  • Who can help: professors, librarians, and colleagues.
  • Don't reinvent the wheel: check if someone has already collected the data you need.

🔍 Common sources of data

The excerpt provides a table of common sources for social science data:

| Source | Description |
| --- | --- |
| Government Statistics | National statistical agencies collect and publish comprehensive social statistics; the US spreads responsibility across many federal agencies; the UN and other international organizations publish comparative data; state, provincial, and municipal governments also publish statistics. |
| Public Opinion Polls | News and political organizations conduct polls; results can be found at archives like ICPSR or other poll archives (often via university library subscriptions). |
| Academic Research | Researchers gather data as part of their studies; results are presented in published literature; search article databases to find these articles; complete datasets can often be obtained from the original researchers. |
| Commercial Market and Business Research | Corporations and trade organizations collect economic statistics and sell them (often at high cost); university libraries purchase a limited number of these data products. |

🔍 Quantitative vs qualitative datasets

  • Quantitative datasets: often available for download from the internet or via subscription from a university or college library.
  • Qualitative datasets: generally more difficult to find; sometimes available on scholars' personal webpages or research center webpages; consider contacting scholars directly to request their data.

🧩 Additional considerations

🧩 Validity and reliability

  • The excerpt mentions that operationalizing a concept must be done with additional considerations in mind: identifying valid and reliable measures.
  • These considerations are taken up in section 5.3 (mentioned but not included in this excerpt).
  • For now, the important thing is to think about ways to measure a concept and ensure there is variation on that measure.

🧩 Concept maps as a tool

The excerpt briefly mentions concept maps as a tool for visually depicting the scope of knowledge on a central concept, relationships between concepts, dimensions, and indicators.

  • Concept maps can reveal how knowledge is organized and gaps in knowledge (areas for research).
  • They are distinct from brainstorming because they follow specific conventions for how concepts are drawn and how space is utilized.
  • Example: a concept map around "What are the consequences of different regime types in the world?" might start with "regime" at the top and connect it to related concepts like "conflict," "prosperity," and "power."

Section 5.3: Measurement

🧭 Overview

🧠 One-sentence thesis

Measurement translates abstract concepts into concrete numerical or categorical values, and the quality of those measures—determined by their type, precision, reliability, and validity—shapes the robustness of political science research.

📌 Key points (3–5)

  • What measurement does: assigns numbers or labels to observations to represent variable categories, turning abstract concepts into analyzable data.
  • Four types of measures: nominal (classification), ordinal (rank ordering), interval (equal-distance scales), and ratio (interval with true zero)—each builds on the previous type's capabilities.
  • Quality criteria: precision (exactness), reliability (low measurement error, replicable), and validity (meaningfully captures the underlying concept).
  • Common confusion: reliability vs. validity—a measure can be reliable (consistent) without being valid (accurate to the concept), like darts consistently hitting the same wrong spot.
  • Real-world application: measures like Freedom House scores and Polity IV enable systematic comparison of regime types across countries and time.

📏 What measurement means

📏 Core definition

Measurement: "the assignment of numbers or labels to units of analysis to represent variable categories."

  • Measurement is the step where observations become data.
  • It translates the world into standard units—even abstract ones.
  • Example: Freedom House uses a 0–100 scale for freedom levels; 100 doesn't mean "100 units of something tangible" but allows precise comparison across countries and over time.
  • Why it matters: without measurement, concepts remain too vague to analyze systematically.

🔢 The four types of measures

🏷️ Nominal measures

Nominal measures: classify observations into two or more categories, with numerical values assigned to each category.

  • Purpose: classification only; numbers are arbitrary labels, not rankings.
  • Example: US Census racial/ethnic categories (e.g., "Black or African American," "Hispanic"); Aristotle's six regime types (democracy, tyranny, etc.).
  • Quality criteria:
    • Exhaustive: every observation fits into a category.
    • Mutually exclusive: no overlap between categories.
  • Problem highlighted: US Census categories may fail both tests—they don't cover "two or more races" (not exhaustive) and "White" and "Black or African American" can both include people from Africa (not mutually exclusive).

📊 Ordinal measures

Ordinal measures: rank-order observations, with numbers assigned to indicate ranking on some dimension.

  • Purpose: classification + ranking.
  • Example: survey responses from "strongly disagree" (1) to "strongly agree" (5); socioeconomic classes from "lower" to "upper."
  • What you can do: compare relative positions (upper class > lower class in income).
  • What you cannot do: mathematical operations like averaging—ordinal distances aren't equal.
  • Don't confuse: ordinal tells you order but not how much more.

📐 Interval measures

Interval measures: observations fall along a scale with standard, equal-distance units.

  • Purpose: classification + ranking + equal intervals.
  • Example: Freedom House 0–100 scale; exam scores 0–100.
  • What you can do: mathematical manipulation (averaging, addition).
  • What you cannot do: ratio statements—you can't say "60 is twice as free as 30" because there's no true zero.
  • Example: averaging two exam scores (80 and 70) yields 75.

⚖️ Ratio measures

Ratio measures: interval measures with a true (absolute) zero.

  • Purpose: classification + ranking + equal intervals + ratio comparisons.
  • Example: age, weight.
  • What you can do: all interval operations plus ratio statements—"a 40-year-old is twice as old as a 20-year-old."
  • Why the true zero matters: it anchors the scale so ratios are meaningful.

📋 Comparison table

| Type | Classification | Rank order | Equal intervals | Ratio comparisons | Example |
| --- | --- | --- | --- | --- | --- |
| Nominal | Yes | No | No | No | Census race categories |
| Ordinal | Yes | Yes | No | No | Survey agreement scales |
| Interval | Yes | Yes | Yes | No | Freedom House 0–100 |
| Ratio | Yes | Yes | Yes | Yes | Age, weight |
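
The operations each level supports can be shown in a few lines of Python (values follow the chapter's own examples where possible; otherwise they are invented):

```python
import statistics

# Nominal: numbers are arbitrary labels -- only counting categories is meaningful
regime_codes = ["democracy", "tyranny", "oligarchy", "democracy"]
print({c: regime_codes.count(c) for c in set(regime_codes)})

# Ordinal: ranking is meaningful, but distances between ranks are not
agreement = [1, 3, 5, 4]          # 1 = strongly disagree ... 5 = strongly agree
print(sorted(agreement))          # ordering OK; averaging these is dubious

# Interval: equal units allow averaging, but no true zero means no ratios
exam_scores = [80, 70]
print(statistics.fmean(exam_scores))   # 75.0, as in the exam example

# Ratio: a true zero makes ratio comparisons meaningful
ages = [40, 20]
print(ages[0] / ages[1])          # 2.0 -> "twice as old" is a valid statement
```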

🎯 Quality of measures

🔍 Precision

Precision: how exact a measure is.

  • More fine-grained = more precise.
  • Example: measuring education by "years attended" is more precise than "schools graduated from" because it captures smaller differences.
  • Why it matters: precision allows for more detailed analysis.

🔁 Reliability

Reliability: low probability of measurement error; different researchers applying the same measure arrive at the same findings.

  • Key idea: consistency and replicability.
  • Example: if multiple researchers code the same countries and get the same results, the measure is reliable.
  • Dartboard metaphor: darts landing on the same spot repeatedly (even if not the bull's eye).

✅ Validity

Validity: whether a measure meaningfully captures the underlying concept it intends to measure.

  • Key idea: does it measure what you think it measures?
  • Example: Does an IQ test validly measure intelligence? Debated.
  • Dartboard metaphor: darts sometimes hitting the bull's eye (the true concept), even if scattered.
  • Harder to assess: validity is often hotly debated among researchers.

🎯 Reliability vs. validity: the dartboard analogy

| Scenario | Reliability | Validity | Dartboard image |
| --- | --- | --- | --- |
| Reliable but not valid | High | Low | Darts clustered together, but away from bull's eye |
| Valid but not reliable | Low | High | Darts scattered, some hit bull's eye |
| Both reliable and valid | High | High | Darts consistently strike bull's eye |
| Neither | Low | Low | Darts all over the wall, missing target |

  • Don't confuse: a measure can be reliable (consistent) without being valid (accurate to the concept).
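
The dartboard contrast can be mimicked with simulated measurements (all numbers hypothetical): a reliable-but-invalid measure clusters tightly around the wrong value, while a valid-but-unreliable measure scatters widely around the true value.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 50          # the "bull's eye": the concept's true level
trials = 1_000

# Reliable but not valid: tiny spread, large systematic bias
reliable_biased = [random.gauss(70, 1) for _ in range(trials)]
# Valid but not reliable: centered on the truth, but very noisy
valid_noisy = [random.gauss(TRUE_VALUE, 15) for _ in range(trials)]

for name, xs in [("reliable/biased", reliable_biased), ("valid/noisy", valid_noisy)]:
    print(name, "mean =", round(statistics.fmean(xs), 1),
          "spread =", round(statistics.stdev(xs), 1))
```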

🌍 Real-world application: measuring regime type

🗂️ Geddes' nominal measure of dictatorship

  • Categories: personalist, military, single party, hybrid.
  • Purpose: classify the diverse universe of 20th-century nondemocracies.
  • Exhaustive: aims to fit every nondemocracy into one of four categories.
  • Reliability question: Would another researcher categorize China under Mao (1949–1976) as "personalist" or "single party"? Ambiguity suggests potential reliability issues.

| Type | Description | Example |
| --- | --- | --- |
| Personalist | Rule by a single person | Zimbabwe under Robert Mugabe, 1980–2017 |
| Military | Rule by military leaders | Turkey, 1960–1965 |
| Single party | Rule by a single political party | China under Chinese Communist Party, 1949–present |
| Hybrid | Combinations of two or three above | North Korea (Kim family + Workers' Party + military) |

📈 Polity IV: an interval measure of regime type

Polity IV: places regimes on a -10 (highly undemocratic) to +10 (highly democratic) scale based on political competition, citizen participation, and executive constraints.

  • Range: -10 (hereditary monarchy) to +10 (consolidated democracy).
  • Suggested categories: autocracies (-10 to -6), anocracies (-5 to +5), democracies (+6 to +10).
  • Data availability: 151 countries, 1800–2017, annual observations, publicly downloadable.
  • Example: Canada scores +10 from 1946–2017.
  • Strengths: considered one of the most precise and reliable measures of regime type.
  • Validity: debated, like most regime measures; at least nine interval measures of democracy exist, showing ongoing scholarly effort.
  • Why it matters: enables systematic comparison of regimes across countries and over time, supporting analysis of trends and outcomes.
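
Using the cut points quoted above, a tiny helper function can map a Polity score to the suggested category. This is an illustrative sketch only; real Polity data also contains special codes for interruptions and transitions that are not handled here.

```python
def polity_category(score: int) -> str:
    """Map a Polity score (-10..+10) to the suggested regime category."""
    if not -10 <= score <= 10:
        raise ValueError("Polity scores run from -10 to +10")
    if score <= -6:
        return "autocracy"
    if score <= 5:
        return "anocracy"
    return "democracy"

print(polity_category(10))   # Canada, 1946-2017 -> "democracy"
print(polity_category(-10))  # hereditary monarchy -> "autocracy"
print(polity_category(0))    # middle of the scale -> "anocracy"
```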

Section 6.1: Introduction: Building with a Blueprint

🧭 Overview

🧠 One-sentence thesis

Research design serves as a blueprint that guides researchers in making advance decisions about how to gather evidence and support their theoretical explanations before collecting any data.

📌 Key points (3–5)

  • What research design is: an action plan or blueprint that comes before data collection, guiding how to provide evidence for a theory.
  • Three purposes of research: exploration (understanding what is going on), description (providing further information and relationships), and explanation (determining why/cause).
  • Common confusion: the temptation to jump straight to data collection—but design decisions must be made first, including purpose, types, sampling, and observations.
  • How purposes differ: exploratory asks "what is happening?"; descriptive asks "what are the patterns and relationships?"; explanatory asks "why did this happen?" (causality).

🏗️ What research design is and why it matters

🏗️ The blueprint metaphor

Research design: an action plan that guides researchers in providing evidence to support their theory.

  • Just as a house needs a blueprint before construction (deciding size, bedrooms, materials), research needs a design before data collection.
  • Design allows critical decisions to be made in advance.
  • Example: If you want to understand an election outcome, you must first decide what you are trying to learn (explore, describe, or explain) before gathering polls or voter data.

⚠️ Design comes first

  • The excerpt emphasizes that research design is a "critical first step."
  • There is a tendency to jump immediately into data collection and analysis because it feels exciting.
  • Don't confuse: gathering data is not the first step—deciding how and why to gather data is the first step.
  • Multiple initial decisions must be made: purpose, types of design, sampling, and observations.

🎯 Three purposes of research

🔍 Exploratory research

Exploratory research seeks to understand an issue, trying to figure out what is going on.

  • Used when a phenomenon has recently occurred and you do not know what is happening, or when you want to observe something to better understand it.
  • Focuses on understanding and identifying variables.
  • Example (2016 election): What rules allow someone to win by Electoral College votes rather than popular vote? How were polls conducted? Who was included? What circumstances led individuals to choose one candidate over another?

📋 Descriptive research

Descriptive research builds upon exploratory research to provide further information about a phenomenon.

  • Expands on exploratory research by collecting additional information on identified variables.
  • Can provide information about relationships between variables (also called correlational research).
  • Answers "what" questions with more detail.
  • Example (2016 election): What kind of people were most likely to vote for Trump vs. Clinton? Which voters were most likely to turn out? Were there voters who changed their minds at the last minute?

🧪 Explanatory research

Explanatory research seeks to explain "why" and tells us which variable likely led to a certain outcome.

  • Goes further than just describing relationships and providing predictions.
  • Determines what caused the outcome to occur.
  • Example (2016 election): It can be difficult to determine cause and effect in elections, but through research design researchers can try to create similar conditions and make causal inferences.

🔄 Comparison of research purposes

| Purpose | Core question | What it does | Example from excerpt |
| --- | --- | --- | --- |
| Exploratory | "What is going on?" | Understands an issue; identifies variables | What rules govern Electoral College wins? How were polls conducted? |
| Descriptive | "What are the patterns?" | Provides further information; describes relationships between variables | What kind of people voted for each candidate? Who turned out? |
| Explanatory | "Why did this happen?" | Determines cause; explains which variable led to the outcome | What caused the election outcome? (requires design to make causal inferences) |

🧩 Multiple theories and the role of design

🧩 Why design matters for theory

  • Observations of the world lead to research questions and theories.
  • Multiple theories can explain the same phenomenon.
  • Example: Why do people vote for certain presidential candidates?
    • Theory 1: Individuals vote for those who share the same party identity (party provides an information shortcut, signaling shared views).
    • Theory 2: Individuals vote for the incumbent president when the economy is doing well, and against when it is not.
  • The challenge: If there are multiple answers to a research question, how can researchers show why their answer is the one to be considered?
  • Research design provides the tools to gather evidence that supports your answer and demonstrates why your theory is the best explanation.

🎯 Design determines the type of evidence

  • The excerpt states that "the type of design will be determined by its purpose."
  • Depending on whether you want to explore, describe, or explain, you will choose different design approaches.
  • Don't confuse: the purpose drives the design, not the other way around.

Section 6.2: Types of Design: Experimental and Nonexperimental Designs

🧭 Overview

🧠 One-sentence thesis

Experimental designs are the gold standard in political science for establishing causality because random assignment, treatment manipulation, and control groups allow researchers to isolate the effect of an independent variable on an outcome, whereas nonexperimental designs sacrifice some of these features for practical or ethical reasons and thus have weaker causal claims.

📌 Key points (3–5)

  • Why experiments are the gold standard: random assignment ensures groups are equivalent, so any outcome difference can be attributed to the treatment alone.
  • Three crucial components of experiments: random assignment, manipulation of the treatment, and a control group.
  • Common confusion—experimental vs quasi-experimental: quasi-experiments look similar but lack random assignment, so unobserved variables may confound the treatment effect.
  • When nonexperimental designs are used: ethical concerns (e.g., denying beneficial treatment) or practical constraints may make experiments infeasible.
  • Trade-off: moving away from the classic experimental design diminishes the ability to establish causality.

🔬 The experimental design

🔬 What makes an experiment the gold standard

  • Experimental designs help determine the effect of the independent variable (treatment) on the dependent variable (outcome) by isolating the treatment as the likely cause.
  • Comparisons are made between the experimental group (receives treatment) and the control group (does not receive treatment).
  • Because random assignment ensures the two groups are the same except for the treatment, researchers can conclude that differences in outcomes are likely due to the treatment.
  • Best suited for explanatory research to establish causality.

🧩 Three crucial components

  1. Random assignment (R): placement of cases into control and experimental groups in an unbiased manner so that the likelihood of any case being placed into either group is exactly the same.
    • With random assignment, groups are equal to each other; any differences are due to chance, not systematic bias.
  2. Manipulation of the treatment (X): the researcher controls who receives the treatment.
  3. Existence of a control group: the control group represents what the experimental group would look like without the treatment.

📐 Research design notation

The excerpt introduces a visual notation system (borrowed from Trochim and Donnelly, 2005):

| Symbol | Meaning |
| --- | --- |
| R | Random assignment |
| NR | Nonrandom assignment |
| O | Observation (pretest or posttest) |
| X | Treatment |
| One line | One group |
| Two lines | Two groups |
| Left to right | Passage of time |

  • Example: a classic experiment would show two lines (two groups), both starting with R (random assignment), one line with X (treatment), and O symbols before and after X (pretest and posttest).

🔍 Pretests and posttests

  • Pretest: establishes a baseline before the treatment is implemented.
  • Posttest: provides information about outcomes after the treatment.
  • Comparisons between pretest and posttest, and between experimental and control groups, determine the effect of the treatment.
  • Don't confuse: the pretest is not the treatment; it measures the starting point so changes can be detected.
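
A minimal sketch of the classic R–O–X–O logic (all numbers invented): randomly assign cases, apply a treatment effect to the experimental group only, and compare pretest-to-posttest changes across groups.

```python
import random
import statistics

random.seed(7)
cases = list(range(200))
random.shuffle(cases)                       # R: unbiased placement into groups
treated_set = set(cases[:100])              # experimental group
control = cases[100:]                       # control group

pretest = {i: random.gauss(50, 10) for i in cases}       # O: baseline for everyone
TREATMENT_EFFECT = 5                                      # X: only the treated get it
posttest = {i: pretest[i]
               + (TREATMENT_EFFECT if i in treated_set else 0)
               + random.gauss(0, 2)                       # chance variation
            for i in cases}                               # O: outcome for everyone

def mean_change(group):
    return statistics.fmean(posttest[i] - pretest[i] for i in group)

print("treated change:", round(mean_change(treated_set), 2))
print("control change:", round(mean_change(control), 2))
print("estimated treatment effect:",
      round(mean_change(treated_set) - mean_change(control), 2))
```

Because the groups were formed by random assignment, the difference in their average changes is attributable to the treatment (plus chance), which is the logic the notation above encodes.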

🔀 Variations on the classic experiment

🔀 Posttest-only design

  • One variation does not administer a pretest.
  • Reasons: fears that taking a pretest can affect the results, or inability to administer a pretest.
  • This makes it harder to attribute varying outcomes to the treatment, but causality conclusions are still possible because a control group exists.
  • Example: Researcher assigns cases randomly to experimental and control groups, administers treatment to the experimental group, then measures outcomes in both groups without a baseline measurement.

🔀 Solomon 4-Group Design

  • Addresses the concern that a pretest might affect outcomes.
  • Four groups total: two experimental and two control.
  • One experimental group and one control group receive a pretest and posttest; the other pair receives only a posttest.
  • Comparisons between the two pairs reveal whether the pretest had an effect on results.

| Group | Random assignment | Pretest | Treatment | Posttest |
| --- | --- | --- | --- | --- |
| Experimental 1 | Yes | Yes | Yes | Yes |
| Control 1 | Yes | Yes | No | Yes |
| Experimental 2 | Yes | No | Yes | Yes |
| Control 2 | Yes | No | No | Yes |

🚧 Nonexperimental designs

🚧 Why nonexperimental designs are used

  • As designs move further from the classic experiment, the ability to establish causality diminishes.
  • Nonexperimental designs may lack:
    • Random assignment into groups
    • Researcher control over the treatment
    • A control group
    • Or all of these characteristics
  • Ethical concerns may make experiments implausible.
    • Example: To test a treatment that could cure a serious illness, a researcher would need to randomly assign some individuals to a control group and deny them the treatment—ethical concerns may prevent this, so the treatment is provided to all who are willing.

🔀 Quasi-experimental designs

Quasi-experimental designs: designs that try to approximate experiments but lack a key component, random assignment.

  • Nonequivalent 2-group comparative design: cases are divided into an experimental group and a comparison group (meant to be like a control group), but assignment is not random.
    • Individuals may have self-selected into groups.
    • Because groups were not formed through random assignment, we do not know if they are equivalent.
    • Unaccounted-for variables could be affecting the outcome rather than the treatment.
  • Matching: cases are matched on multiple variables, with the only variation being the treatment variable.
    • Difficult to know whether unobserved variables are also evenly distributed.
    • Example: Individuals who chose the experimental group to receive a lifesaving treatment might have exhausted all other treatments (last resort) or might have a greater zest for life—characteristics not apparent in the matching phase could have an added effect.
  • Don't confuse quasi-experiments with true experiments: the absence of random assignment means groups may differ in ways that confound the treatment effect.

🔁 Within-group comparison (no control group)

  • Researchers administer the treatment but lack a control group for multiple considerations.
  • The same group acts as a control for itself: the pretest (before treatment) is compared with the posttest (after treatment).
  • If differences exist, they may be attributed to the treatment.
  • Threats to validity:
    • Maturation or normal growth: the results might have occurred without the treatment.
    • Pretest effect: administering a pretest may prime cases to be better prepared for the posttest.
  • Without a control group, it is difficult to attribute the outcome to the treatment.

🎯 Choosing the right design

🎯 Purpose dictates design

  • If the goal is to establish causality, experimental designs are the design of choice.
    • Experimental designs have internal validity, ensuring causal conclusions about an independent variable's effect on an outcome.
  • When experiments are not feasible and the goal is gathering information rather than establishing causality, nonexperimental designs (which do not require random assignment or a control group) serve the research purpose just as well.

🎯 Trade-offs summary

| Design type | Random assignment | Control group | Causal inference strength | When to use |
| --- | --- | --- | --- | --- |
| Classic experiment | Yes | Yes | Strongest | Explanatory research, causality |
| Posttest-only experiment | Yes | Yes | Strong (but no baseline) | When pretest is impractical or might bias results |
| Solomon 4-Group | Yes | Yes | Strongest (tests pretest effect) | When pretest effect is a concern |
| Quasi-experiment | No | Yes (comparison group) | Weaker | Ethical or practical constraints prevent random assignment |
| Within-group (no control) | N/A | No | Weakest | When control group is unavailable |

Section 6.3: Components of Design: Sampling

🧭 Overview

🧠 One-sentence thesis

Sampling allows researchers to draw valid conclusions about a population without studying every case, with probability sampling producing representative samples that approximate population values when the sample is large enough.

📌 Key points (3–5)

  • Why sampling matters: studying every case in a population is often too costly, time-consuming, or infeasible, so researchers select a subset of cases.
  • Law of large numbers: a large enough sample that is representative of the population will yield results close to what you would get from studying the entire population.
  • Representativeness is key: the sample must be similar to the population in important characteristics, or conclusions will only apply to the sample itself, not the broader population.
  • Common confusion—probability vs nonprobability: probability sampling uses random selection and produces representative samples; nonprobability sampling uses nonrandom processes and does not guarantee representativeness.
  • Practical trade-offs: while random sampling is ideal, nonprobability methods are useful when the population is hard to reach or when resources are limited.

🎯 Core sampling concepts

🎯 Population vs sample

Population: all cases that could be part of the study.

Case: a single unit of the population.

Sample: a selection of cases from the population.

  • The population defines the boundary of your study—who or what you want to learn about.
  • Example: if studying voter behavior, the population is all adults 18 or older who are registered to vote; a case is one registered voter.
  • Including every case in the population is often impractical (e.g., over 130 million voters in the U.S.).
  • Even a "complete" population study is a snapshot in time—elections happen repeatedly, so any single study is still limited.

📏 Law of large numbers

  • You do not need every single case to make a convincing argument.
  • A large enough sample that is representative of the population will approximate the results you would get from the entire population.
  • This principle justifies using samples instead of attempting to measure every case.
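
A short simulation (with a hypothetical population) illustrates the principle: as a random sample grows, the sample estimate settles near the population value.

```python
import random
import statistics

random.seed(1)
# Hypothetical population: 1 = voted, 0 = did not vote, with true turnout of 60%
population = [1] * 60_000 + [0] * 40_000
print("population turnout:", statistics.fmean(population))

for n in (10, 100, 1_000, 10_000):
    sample = random.sample(population, n)
    print(f"sample of {n:>6}: turnout = {statistics.fmean(sample):.3f}")
```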

🔍 Representativeness and sampling frame

Representativeness: the sample is similar to the population in key characteristics.

Sampling frame: a complete list of all those in the population, often with information about their characteristics.

  • A sample is only valuable if it helps you draw conclusions about the population.
  • Example: studying only California voters will not tell you about all U.S. voters—the sample is not representative of the broader population.
  • To ensure representativeness, select cases from the sampling frame in a way that mirrors the population's characteristics.

🎲 Probability sampling methods

🎲 What probability sampling is

  • Uses random selection to place cases into the sample.
  • Each case has a known, nonzero chance of being selected.
  • Produces samples that are more likely to be representative of the population.

🎰 Simple random sampling

Simple random sampling: each case has an equal chance of being selected.

  • Argued to be the best approach for selecting a sample.
  • Example: putting 1,000 names in a hat and drawing names—each person has a 1 in 1,000 chance of being chosen.
  • Makes the sample much more likely to reflect the population.

📊 Stratified sampling

Stratified sampling: ensures the sample has similar characteristics to the population by taking those characteristics into account during selection.

  • Similar to random sampling, but addresses concerns about inclusion or exclusion of certain characteristics.
  • Requires knowing the population's characteristics before selecting the sample.

Two types:

| Type | Description | Example |
| --- | --- | --- |
| Proportionate stratified sample | The sample mirrors the population's proportions | If 20% of the population is African American and 20% is Latinx, the sample includes the same proportions |
| Disproportionate stratified sample | Oversamples certain groups that make up a smaller portion of the population | Deliberately includes more cases from a small group to gain greater insight into that group |

  • Don't confuse: stratified sampling is still random within each stratum; it is not arbitrary selection.
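
The sketch below (using an invented sampling frame) draws a simple random sample and then a proportionate stratified sample; note that selection within each stratum is still random, as the point above emphasizes.

```python
import random
from collections import Counter

random.seed(3)
# Hypothetical sampling frame: each case is (id, group label), split 20% / 20% / 60%
frame = ([(i, "African American") for i in range(200)] +
         [(i + 200, "Latinx") for i in range(200)] +
         [(i + 400, "Other") for i in range(600)])

# Simple random sample: every case has an equal chance of selection
srs = random.sample(frame, 100)
print("simple random:", Counter(g for _, g in srs))

# Proportionate stratified sample of 100: mirror the 20/20/60 population split,
# drawing randomly *within* each stratum
quotas = {"African American": 20, "Latinx": 20, "Other": 60}
sample = []
for group, k in quotas.items():
    stratum = [case for case in frame if case[1] == group]
    sample += random.sample(stratum, k)
print("stratified:   ", Counter(g for _, g in sample))
```

The stratified draw matches the population proportions exactly, while the simple random sample only approximates them.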

🗺️ Clustered sampling

Clustered sample: narrows down a dispersed population by randomly selecting geographic areas (clusters) and then sampling within those areas.

  • Used when a simple random sample is not feasible because the population is widely spread out.
  • Example: instead of randomly selecting individual voters across the entire U.S., randomly select states, then counties, then cities, then precincts, and measure all cases within the selected precincts.
  • Reduces travel and logistical costs (e.g., avoids flying across the country for individual interviews).

🎯 Nonprobability sampling methods

🎯 What nonprobability sampling is

  • Uses nonrandom processes to select cases.
  • Does not guarantee representativeness.
  • Useful when the population is small, hard to reach, or when resources are limited.

🚶 Convenience sampling

Convenience sampling: selecting cases that are available and willing to participate.

  • Almost like not sampling at all—no criteria other than being part of the population and willing to participate.
  • Example: asking people walking out of a polling place to answer questions.
  • Don't confuse: convenience does not mean the sample is representative; it only means the sample is easy to obtain.

📋 Quota sampling

Quota sampling: selecting cases according to a fixed number or quota.

  • Researchers set a target number of cases and create a sample to meet that number.
  • Can resemble stratified sampling if the researcher tries to ensure the sample looks similar to the population in key characteristics.
  • Difference from stratified sampling: quota sampling does not use random selection within groups.

❄️ Snowball sampling

Snowball sample: initial cases are identified, then those cases provide referrals to other individuals who could be part of the sample.

  • The sample size grows as referrals accumulate, like a snowball gaining mass and momentum.
  • Especially useful for hard-to-reach populations where no complete list exists.
  • Example: studying homelessness—no list of homeless individuals exists, so initial contacts refer you to others in similar circumstances.

📐 Practical rules of thumb

📐 When to include the entire population

  • If the population is small (equal to or less than 100 cases), the best strategy is to include all cases.
  • This avoids sampling error and provides the most convincing evidence.

📐 Aim for larger samples

  • Always aim for a larger sample because nonresponse (not receiving a reply from a case) is likely.
  • A larger sample provides a buffer against nonresponse and increases the likelihood that the sample approximates the population.

📐 Sample size and the law of large numbers

  • The law of large numbers tells us we do not need every case, but the sample must be "sufficiently large enough."
  • The excerpt does not specify an exact number; the key is that the sample is large enough to approximate population values and representative enough to reflect population characteristics.

Section 6.4: Components of Design: Observations

🧭 Overview

🧠 One-sentence thesis

Research design must specify both how data will be collected (primary vs. secondary sources, surveys vs. interviews) and when observations will be taken (single point vs. multiple times), with each choice affecting what questions can be answered.

📌 Key points (3–5)

  • Primary vs. secondary data: secondary sources are existing datasets collected by others (saves time/money but limits topics); primary sources are original data you collect yourself (more time-consuming but tailored to your question).
  • How data is collected: surveys use closed-ended questions with predetermined answers; interviews use open-ended questions allowing detailed responses.
  • When data is collected: cross-sectional studies take observations at one point in time; longitudinal studies take multiple observations from the same cases over time.
  • Common confusion: repeated cross-sections collect multiple observations like longitudinal studies, but not necessarily from the same cases each time.
  • Why it matters: the timing and method of data collection determine whether you can track change over time and how much detail you can capture.

📊 Primary vs. Secondary Data Sources

📚 What secondary data sources are

Secondary data sources: existing data collected by someone else that researchers can compile and use without collecting it again.

  • Researchers do not need to collect the data themselves; they extract the variables they need for their studies.
  • Advantages: saves time and money.
  • Disadvantages: you are constrained by what topics the original institution collected; the available data might not answer your specific research question.

📋 Examples of secondary sources

The excerpt mentions two major secondary data sources for political scientists:

| Source | Who collects it | What it covers |
| --- | --- | --- |
| American National Election Studies (ANES) | Stanford University and University of Michigan | Voting behavior and electoral participation |
| General Social Survey (GSS) | NORC (National Opinion Research Center) at the University of Chicago | Topics of concern to social scientists (e.g., psychological well-being, morality) |

🔬 What primary data sources are

Primary sources: original data collected by the researchers themselves, generally requiring the creation of a data collection instrument.

  • More time-consuming than using secondary sources.
  • Key advantage: ensures the data you get is exactly what you are looking for.
  • Example: If you are interested in local elections but ANES does not ask about them, you can create your own survey instrument specific to local elections.

🛠️ Methods of Data Collection

📝 Surveys

  • Surveys often contain closed-ended questions, limiting the responses that can be provided.
  • Answer choices are predetermined.
  • Example questions from the excerpt:
    • "Are you a registered voter?"
    • "Did you vote in the last election?"
  • Possible answers might be "yes," "no," or "not sure."

🎤 Interviews

  • Interviews use open-ended questions, allowing cases to provide detailed answers beyond limited response options.
  • Respondents can elaborate and explain their reasoning.
  • Example questions from the excerpt:
    • "Why did you register to vote?"
    • "Why did you choose to vote in the last election?"
  • These questions allow for more nuanced, detailed answers than surveys.

🔍 How to distinguish surveys from interviews

  • Surveys: predetermined answer choices → quick, standardized, easier to quantify.
  • Interviews: open responses → richer detail, more context, harder to standardize.

⏰ Timing of Data Collection

📸 Cross-sectional studies

Cross-sectional study: observations are taken at a single point in time.

  • A "one-shot" approach.
  • Provides a snapshot of the phenomenon at one moment.
  • Cannot track change over time.

📈 Longitudinal studies

Longitudinal study: multiple observations over a specified length of time with the same individuals.

  • Allows researchers to track changes in the same cases over time.
  • Two types mentioned:
    • Panel study: a sample of cases likely to be representative of the population; observations collected from the same cases multiple times.
    • Cohort study: cases share characteristics or experiences; multiple observations collected from these cases over time.

🔄 Repeated cross-sections

  • A combination of cross-sectional data and multiple observations.
  • Key difference from longitudinal: observations may not be collected from the same cases each time.
  • Can help provide insight into established patterns without tracking specific individuals.

⚠️ Don't confuse

  • Longitudinal studies = same cases observed multiple times.
  • Repeated cross-sections = multiple observations, but not necessarily from the same cases.
  • Both involve multiple time points, but only longitudinal tracks the same individuals.
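
A tiny illustration (fabricated records) of how the designs differ in which cases appear at which time points: the panel re-observes the same respondent IDs, while repeated cross-sections draw fresh IDs each wave.

```python
# Cross-sectional: one wave, each case observed once
cross_section = [{"id": 1, "year": 2016, "voted": True},
                 {"id": 2, "year": 2016, "voted": False}]

# Longitudinal (panel): the *same* ids reappear across waves
panel = [{"id": 1, "year": 2016, "voted": True},
         {"id": 1, "year": 2020, "voted": True},
         {"id": 2, "year": 2016, "voted": False},
         {"id": 2, "year": 2020, "voted": True}]

# Repeated cross-sections: multiple waves, but freshly drawn ids each time
repeated = [{"id": 101, "year": 2016, "voted": True},
            {"id": 202, "year": 2020, "voted": False}]

# Only the panel lets us track change within a specific case
ids = {r["id"] for r in panel}
changed = [i for i in ids
           if len({r["voted"] for r in panel if r["id"] == i}) > 1]
print("panel cases whose turnout changed:", sorted(changed))
```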

🎯 Practical Considerations

🧩 Matching method to research question

  • If existing datasets (secondary sources) do not cover your topic, you must collect primary data.
  • If you need standardized, quantifiable responses, use surveys.
  • If you need detailed explanations and context, use interviews.
  • If you need to track change in specific individuals, use longitudinal design.
  • If you only need a snapshot, cross-sectional design is sufficient.

💡 Key trade-offs

| Dimension | Option A | Option B |
| --- | --- | --- |
| Data source | Secondary (faster, cheaper, limited topics) | Primary (slower, costlier, tailored) |
| Question type | Closed-ended (standardized, quantifiable) | Open-ended (detailed, contextual) |
| Timing | Cross-sectional (snapshot, one time) | Longitudinal (tracks change, multiple times) |

Section 7.1: What are qualitative methods?

🧭 Overview

🧠 One-sentence thesis

Qualitative methods provide powerful tools for understanding political phenomena by focusing on non-numerical data that can illuminate causal mechanisms and nuanced details, though they require significant resources and face replicability challenges.

📌 Key points (3–5)

  • What qualitative research is: data collection focusing on non-numerical sources like texts, interviews, observations, and documents rather than statistics.
  • Why it matters: qualitative methods excel at identifying causal mechanisms (the "why" behind correlations) and producing fine-grained, nuanced analysis that quantitative methods may miss.
  • Mixed methods approach: combining qualitative and quantitative data can overcome the deficiencies of relying solely on one approach.
  • Common confusion: correlation vs causation—statistical analysis may show a relationship, but qualitative methods like process tracing help uncover why that relationship exists.
  • Key tradeoff: qualitative methods prioritize depth over breadth, yielding rich detail but often covering fewer cases and requiring more time and resources.

🔍 Core definition and scope

🔍 What qualitative research means

Qualitative research refers to data collection in which the focus is on non-numerical data.

  • This includes:
    • Texts
    • Interviews with individuals or groups
    • Observations recorded by researchers
    • Many other sources of knowledge
  • It is not about numbers, statistics, or quantitative measurements.
  • Example: Early political thinkers like Aristotle observed and recorded phenomena through non-numerical means, such as discussing possible types of regimes and arguing for polity as the best government based on observations of human behavior.

🧰 The qualitative toolkit

Political scientists employ multiple qualitative methods, often in combination:

| Method | Brief description |
| --- | --- |
| Interviewing | Conversation with one or more people to collect data on a research question |
| Documentary sources | Texts collected from field sites, organizations, libraries, archives; archival research focuses on primary sources (original documents) |
| Ethnographic research | Site-specific data collection ("fieldwork"); researcher records observations "in the field" and may also conduct interviews and collect documents |
| Digital ethnography | Data collection in the cybersphere; observation of activity mediated by computers or information technologies, including virtual reality |
| Case studies | Focused examination of an event, place, or individual; may employ some or all of the above methods |

  • Example: A Canadian political scientist studying US southern border policy might conduct fieldwork on the US-Mexico border, observing interactions between government authorities and citizens on both sides.
  • Example: Digital ethnographers map political communication strategies on social media platforms.

🔗 Mixed methods approach

🔗 Combining qualitative and quantitative

  • Mixed methods utilize both qualitative and quantitative approaches to answer research questions.
  • The combination can overcome deficiencies in relying solely on one or the other.

🔗 How they complement each other

Example scenario: "Under what conditions might Texas become a purple state?"

  • Quantitative data tells researchers about trends in voter registration and turnout over time.
  • Qualitative methods (interviewing Texans in focus groups or town hall meetings) illuminate how voters perceive their political choices and political future.
  • Together, they provide both the "what" (trends) and the "why" (perceptions and motivations).

💪 Strengths of qualitative methods

💪 Identifying causal mechanisms

  • First and foremost, qualitative methods are useful for identifying causal mechanisms.
  • Recall: hypotheses imply explanatory (independent) and outcome (dependent) variables; linking them requires causal logic.
  • Qualitative methods, particularly case studies, are powerful in illuminating causal mechanisms.
  • If theories are stories, qualitative methods knit together a narrative in a coherent and plausible way to help determine whether a story is true or false.

🔬 Process tracing example

Example: The democratic peace observation

  • Scholars have long observed that modern democracies tend not to go to war with one another.
  • Statistical analysis may yield a significant correlation between regime type and war outbreak, but correlation is not causation.
  • Qualitative methods such as detailed case studies of two democracies in a crisis situation can help uncover what led to reconciliation rather than war.
  • Process tracing (uncovering the process by which events unfolded) is a strength of qualitative approaches.
  • Don't confuse: finding a correlation (democracies rarely fight each other) with understanding the mechanism (why they choose reconciliation).

🔎 Fine-grained and nuanced analysis

  • A second strength is producing more fine-grained and nuanced analysis than widely used quantitative methods like regression analysis.
  • Regression analysis attempts to identify trendlines, fitting a straight line through a cloud of data points.
  • Qualitative methods are interested in the messiness of observed data.
  • Qualitative methods prioritize depth over breadth.
  • Example: Seeing that race is a key correlate of party affiliation in the US is illuminating, but interviewing individuals helps drill down into how racial identity might shape whether a person identifies as a Democrat, Republican, or independent.
  • Qualitative methods help understand the "why" by digging into the details.

⚠️ Limitations of qualitative methods

⚠️ Resource intensity

  • Qualitative methods are typically very resource-intensive.
  • Downloading publicly available data from the Internet is generally less costly than arranging interviews or making research plans to live in a location for a semester.
  • Resource-intensiveness applies to both time and money expended.

⚠️ Small sample sizes (depth over breadth)

  • The resource-intensiveness implies that a researcher may only generate one or a few cases to answer a research question.
  • Example scenario: Comparing quality of governance around the world
    • Quantitative starting point: Download the World Bank's Worldwide Governance Indicators (covering 215 countries and territories).
    • Qualitative approach: Read World Bank and other organizations' reports on select countries; craft case studies of even two countries.
    • Crafting case studies might take weeks, months, or years of careful data collection and writing.
    • This yields an "n" of two—again, the tradeoff is depth over breadth.

⚠️ Replicability challenges

  • A final critique relates to the difficulty replicating findings.
  • If one gold standard in hypothesis testing is replicability of research findings, this is challenging to achieve with many qualitative methods.
  • Example: Observations that a researcher might record while embedded in pro-independence organizations in a location are very difficult to confirm by subsequent researchers.
  • Even if a researcher has access to the same fieldwork sites, they will likely face very different circumstances.

⚠️ Access and reliability issues

  • Compounding replicability are issues with access to research sites.
  • Example: A researcher conducting fieldwork in China and visiting government bureaus may share findings in research papers, but due to the closed nature of the government, other researchers are unlikely to have access to the same bureaus.
  • This relates to the reliability of inferences reached solely from qualitative research.
  • If other researchers cannot confirm the data used for a research paper, how reliable are the findings?

🔄 Workaround: Triangulation

  • One workaround is employing mixed methods to triangulate across multiple sources and findings.
  • This can at least demonstrate that the findings within a study have internal validity (consistency within the study itself).

Section 7.2: Interviews

🧭 Overview

🧠 One-sentence thesis

Interviews are structured conversations with knowledgeable subjects that provide nuanced qualitative data, and researchers must decide how to select interviewees, how much structure to impose, and how to record the data.

📌 Key points (3–5)

  • What interviews are: conversations with relevant human subjects designed to answer a research question.
  • Three key decisions: interviewee selection (random vs. network-based), interview structure (structured vs. unstructured vs. semi-structured), and recording method (notes vs. audio/video).
  • Random vs. nonrandom selection trade-off: random sampling yields representativeness; network-based selection offers greater rapport and candor but may lack representativeness.
  • Common confusion: structured interviews maximize consistency and comparability; unstructured interviews maximize flexibility and depth—each serves different research stages and skill levels.
  • Recording trade-off: handwritten notes put subjects more at ease; recordings provide greater accuracy and free the interviewer to focus on guiding the conversation.

🎯 Selecting interviewees

🎯 Who to interview

Interviewee selection hinges on identifying those individuals who possess the knowledge and experience to best answer a research question.

  • The goal is to find people with relevant knowledge and experience.
  • The excerpt uses the example research question: "Under what conditions might Texas become a purple state?"
  • For this question, interviewees should be Texans who can discuss their political views and voting behavior.

🎲 Random selection (ideal approach)

  • What it is: randomly select and interview a sample that represents the diversity of the target population.
  • Why it works: ensures the sample reflects the full range of perspectives (racial, ethnic, gender, religious, educational, urban/rural, etc.).
  • Example: For the Texas question, a researcher would try to locate a mix of interviewees representing all relevant dimensions of diversity among the state's voters.
  • Limitation: often impractical, especially for solo or early-career researchers.

🔗 Network-based selection (realistic approach)

  • What it is: nonrandom selection through personal networks—interview people you know directly or through introductions.
  • How it works:
    • Consult your address book for contacts in the target population.
    • Ask initial interviewees to introduce you to others (snowball sampling).
  • Upside: greater rapport and candor—subjects are more likely to speak openly when they have a relationship with the researcher or were introduced by a trusted third party.
  • Downside: the sample may not be representative, opening the data to challenges of unreliability.
  • Don't confuse: even nonrandom interview data can provide nuanced insights into how people think and reveal trends, even if it cannot claim full representativeness.

📋 Structuring the interview

📋 Structured interviews

Structured interviews are interviews conducted with a pre-written set of questions which are read word-for-word to each interview subject. There is no deviation from these prescribed questions as the interview progresses.

  • Benefit: higher consistency and comparability across interviews.
  • When to use:
    • When a team of researchers is interviewing many subjects (ensures everyone asks the same questions).
    • For less experienced interviewers who benefit from careful preparation and a script.
  • Trade-off: no flexibility to explore unexpected topics that arise during the conversation.

🌊 Unstructured interviews

An unstructured interview is one where the researcher has a general sense of the topics or questions the interview will cover, but the intention is to ask follow-up questions in "real time" as the interview progresses.

  • Benefit: maximum flexibility; creates space for discovery and in-depth exploration.
  • When to use: ideal when a researcher is in the initial stages of exploring a research topic.
  • Demands on the interviewer: must balance probing interesting topics with covering all necessary topics, managing time, and monitoring the subject's energy.
  • Risk: the interview may wander too far off topic, leaving important questions unaddressed.

🔀 Semi-structured interviews

  • What it is: the researcher has a prepared list of questions but is willing to deviate when a question sparks curiosity or demands follow-up.
  • Goal: combine the benefits of both approaches—preparation and flexibility.
  • How it works: maximize preparation by the researcher on one hand, and flexibility when encountering unexpected information on the other.

| Interview type | Structure | Benefit | Best for |
| --- | --- | --- | --- |
| Structured | Pre-written questions, no deviation | Consistency and comparability | Teams, less experienced interviewers |
| Unstructured | General topics, real-time follow-up | Flexibility and depth | Early exploration, experienced interviewers |
| Semi-structured | Prepared list + flexibility to deviate | Balance of preparation and discovery | Combining benefits of both |

🎙️ Recording interview data

🎙️ Why recording matters

  • The interview is a key site for data collection.
  • Data must be collected, recorded, merged with other interviews, then analyzed or referenced when writing up findings.
  • Critical step: decide how to collect data from the interview itself.
  • Requirement: all data collection requires consent from each interview subject.

✍️ Handwritten notes

  • What it is: write notes during the interview or immediately after to recall as much as possible.
  • Advantage: sets the interview subject more at ease—subjects tend to be more restrained when they know they are being recorded.
  • When to use: if the subject matter is sensitive, handwritten notes are likely the better choice.
  • Limitation: less accurate; the interviewer must juggle notes and interview questions simultaneously.

🎤 Audio or video recording

  • What it is: record the interview (sound only or video), then transcribe the recording into text.
  • Why transcription is critical: having a text allows the researcher to search for key words or quotes that may inform research findings.
  • Transcription methods: use transcription software or manually transcribe a replay.
  • Advantage: greater accuracy; allows the interviewer to focus more on guiding the interview rather than juggling notes and questions.
  • When subjects are comfortable: public figures may be more used to having their comments recorded and hence more readily grant permission.

📊 Entering data into a database

  • At some point, data from all interviews must be entered into a larger database.
  • Simple approach: create a document or spreadsheet with notes from all interviews.
  • Software approach: use open-source software packages for entering and analyzing interview data.

⚖️ Human subjects protections

⚖️ Why protections matter

Research which involves engaging with people, or human subjects, must be accompanied by protections for those subjects.

  • Purpose: ensure the integrity of the research project and the credibility of both the researcher and any sponsoring institutions.
  • The excerpt notes that research with human subjects will be addressed in a later chapter on ethics in research.
  • Key point: consent and ethical treatment are foundational to credible interview research.

Section 7.3: Exploring Documentary Sources

🧭 Overview

🧠 One-sentence thesis

Documentary sources offer researchers a wealth of information through primary source materials, and digitization has dramatically expanded access while content analysis techniques enable systematic examination of these documents.

📌 Key points (3–5)

  • What documentary sources are: primary sources or original source material that help answer research questions; they need not be created at the time or place being studied.
  • How digitization changes research: vastly increased accessibility and decreased costs through online archives and databases like ProQuest.
  • Finding documents requires strategy: knowing librarians, understanding organizational landscapes, and thinking creatively about which documents address your research question.
  • Common confusion: primary vs. original—a document can be a primary source even if you access a copy rather than the original physical artifact (e.g., accessing the UDHR online vs. at UN headquarters).
  • Analysis goes beyond quoting: content analysis techniques like keyword frequency counts and factor analysis allow systematic examination of patterns across documents.

📚 What counts as documentary sources

📄 Definition and scope

Documentary sources: primary sources or original source material that can help answer some aspect of a research question.

  • Documents need not be created at the time or place you are studying.
  • Example: A researcher studying post-WWII human rights codification can use the Universal Declaration of Human Rights (UDHR) as a key document, whether accessed online or at UN headquarters in New York City.
  • The copy vs. original distinction matters less for most research; credible copies suffice.

🔍 When originals matter vs. when they don't

  • Original access critical: Research on election fraud might require actual ballots.
  • Copies acceptable: Most research can rely on credible copies or reports by credible organizations when resource constraints or access difficulties are insurmountable.
  • Don't confuse: needing the content of a document vs. needing the physical original—most research needs only accurate content.

🚧 Limits of documentary sources

  • Some political phenomena are not inherently text-based (e.g., illicit activities like human trafficking or smuggling).
  • However, even non-textual activities often leave oblique documentary traces when they interact with the modern state (e.g., banking activities, government reports).
  • The rise of the bureaucratic state increased documentation of political life.

🔎 Finding and accessing documents

💻 The digitization revolution

  • What changed: Digitization has vastly increased accessibility to documents and decreased costs to researchers.
  • Key resources:
    • U.S. National Archives catalogs documents on its website
    • Databases like ProQuest (often available through university and community college libraries) provide digital access
    • Researchers no longer need costly trips to physical archives like Washington, D.C.

🗺️ Strategic approaches to finding documents

🤝 Leverage librarians and databases

  • Librarians often know about collections of documents, archives, or other repositories of key documents.
  • Get to know what databases and archives are available through your library.
  • Example: Many U.S. National Archives documents are now available through ProQuest's Congressional Research Digital Collection, including proposed legislation, laws, committee hearing transcripts, and committee reports.

🏢 Understand the organizational landscape

  • Think about which organizations are embedded in your research topic.
  • Example: For human rights codification research:
    • Explore UN archives (some digitized and online)
    • Contact law school libraries for their collections
    • Check whether human rights lawyer associations have libraries open to researchers
    • Investigate nongovernmental organizations active in human rights law for reports or draft language documents

💡 Think creatively about resources and access

  • Researcher resources vary: some have deep research pockets for travel to far-flung sites; most face limits on resources and access.
  • Relationships with government officials in relevant bureaucracies can improve access to key sources.
  • Creativity is essential when resources and access are limited.

📊 Analyzing documentary sources

✏️ Manual extraction and quotation

  • Draw out key sections to reference or quote when writing up research findings.
  • Low-tech approach: Manually highlight passages on paper copies and flag with sticky notes.
  • Digital approach: Use text-recognition software to search for key terms and passages in digital versions.

🔢 Content analysis techniques

📈 Basic frequency counting

  • Count how often a term appears in a set of documents.
  • Example: To examine change over time in the human right to asylum codification:
    • Collect human rights-related treaties from the UN during 1945–1985
    • Count the frequency of "asylum" in the documents
    • See whether this changed significantly over the chosen period
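
To make this concrete, here is a minimal Python sketch of keyword frequency counting across a set of treaty text files. The `treaties/` folder, the year-prefixed file names, and the keyword variable are hypothetical placeholders for illustration, not part of the textbook's example.

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical layout: one plain-text file per treaty, named like "1951_refugee_convention.txt"
corpus_dir = Path("treaties")
keyword = "asylum"

counts_by_year = Counter()
for path in corpus_dir.glob("*.txt"):
    year = int(path.name[:4])                        # year taken from the file name
    text = path.read_text(encoding="utf-8").lower()
    hits = len(re.findall(rf"\b{keyword}\b", text))  # whole-word matches only
    counts_by_year[year] += hits

for year in sorted(counts_by_year):
    print(year, counts_by_year[year])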

🧮 Advanced quantitative methods

  • Factor analysis: Determines whether there are underlying "factors" or common explanatory variables that explain variation observed across documents.
  • These techniques go beyond using documents as sources of quotable material.
  • Note: The actual mechanics are beyond the scope of this section, but knowing these methods exist expands how you can utilize documents.

| Analysis approach | What it does | Example use |
| --- | --- | --- |
| Manual extraction | Identify key passages for quoting | Highlighting relevant sections for research findings |
| Frequency counting | Track how often terms appear | Counting "asylum" mentions across decades of treaties |
| Factor analysis | Find underlying common variables | Identifying patterns that explain variation across documents |

Ethnographic Research

Section 7.4: Ethnographic research

🧭 Overview

🧠 One-sentence thesis

Ethnographic research generates deep understanding of social phenomena by immersing the researcher in the physical or digital contexts of their subjects to observe practices, culture, and behaviors that answer social science questions.

📌 Key points (3–5)

  • What ethnography is: close observation of practices, language, culture, beliefs, and life aspects of research subjects through researcher immersion in their social contexts.
  • Why it matters: builds holistic, in-depth understanding of complex phenomena that other methods cannot capture, connecting individual decisions to larger social patterns.
  • How it works: primarily through recorded observations (field notes), supplemented by interviews and documents to build a rich portrait addressing a research question.
  • Traditional vs digital: traditional ethnography requires physical presence at a site; digital ethnography immerses researchers in online spaces like social media platforms and chat rooms.
  • Common confusion: digital worlds are "neither more nor less material" than physical worlds—both are legitimate sites for ethnographic study of social and political life.

🔍 Defining ethnographic research

🔍 Core definition and scope

Ethnography is the study of social interactions, behaviors, and perceptions that occur within groups, teams, organizations, and communities.

  • Ethnography is particularly immersive compared to other research methods because it requires the researcher to situate themselves in the social contexts of their subjects.
  • The researcher becomes a close observer of multiple dimensions: practices, language, culture, beliefs, and other life aspects.
  • Range of settings: from observing political candidates on campaign trails in small-town settings to living in remote counties and interviewing local officials.

📜 Historical roots

  • Ethnography has its roots in anthropology.
  • Became prominent in the early twentieth century when scholars sought to document in detail the lives of people in remote locales.
  • The purpose then and now: not just to collect detailed notes, but to answer questions raised by social science theories about human behavior, motivations, and organization.

📝 Thick description

  • Ethnography calls upon the researcher to engage in "thick description" (attributed to Clifford Geertz) of a research site.
  • This means creating detailed, layered accounts that capture context and meaning, not just surface observations.

💪 Why conduct ethnographic research

💪 Depth over breadth

  • Ethnographic research is a powerful tool for building holistic understanding of unknown or superficially understood phenomena.
  • For solo researchers, it is particularly demanding and resource intensive.
  • Yet it has the potential to accomplish something highly valued: depth of understanding.

🌍 Connecting micro to macro

  • Helps anchor understanding of large, abstract global events by examining specific sites.
  • Example: When examining a complex world event like an economic rise, conducting ethnography at sites with economic dynamism can be illuminating.
  • The excerpt explains: large-scale phenomena result from decisions made by individuals in response to incentives embedded in their social context.
  • Ethnographic fieldwork, more than any other research tool, helps generate knowledge about these individual- and society-level factors.

🛠️ How to conduct ethnographic research

🛠️ Recording observations

  • First and foremost: a researcher must record their observations when engaged at their research site(s).
  • Observations may take different forms:
    • Narrative form: diary entries for further distillation when writing up findings.
    • Analytical form: noting categories of behavior and adding annotations from the outset.
  • Example: A researcher immersed in a township might sort field notes into observations about economic life, political life, social life, and so forth.
  • These initial recorded observations form the bulk of ethnographic data.

🔄 Supplementing with other methods

  • Second: ethnography may also draw on qualitative tools noted earlier in the chapter:
    • Interviews: researchers may shift from pure observation to conduct interviews with subjects for more focused data collection.
    • Documentary sources: documents can supplement (or call into question) observations.
  • The goal: build a rich portrait of a place and its people to address an underlying research question.

💻 Digital ethnography

💻 New sites for research

  • Given vast changes in information and communication technologies (ICT), new sites for ethnographic research have emerged in recent decades.
  • Traditional ethnography: relied on researchers being situated in a physical space and observing social life there.
  • Digital ethnography: challenges notions of physical immersion; instead, the researcher is immersed in relevant digital spaces such as:
    • Online chat rooms
    • Social media platforms where information is exchanged

🌐 Materiality of digital worlds

Digital ethnography asserts that there is a "materiality of digital worlds, which are neither more nor less material than the worlds that preceded them."

  • Don't confuse: digital spaces are not "less real" or "less material" than physical spaces—both are legitimate sites of social life and research.
  • The Internet, like all social spaces, is deeply political.

📱 Political dimensions of digital spaces

| Activity | Example from excerpt |
| --- | --- |
| Government transparency | Government documents uploaded to webpages in "transparency" initiatives |
| Counter-narratives | Societal actors upload leaked documents to sites to challenge official narratives |
| Community building | Groups create pages, build virtual communities, push out information via ICT |
| Political outreach | Political parties reach constituents via social media platforms |
| Mobilization | Far-right movements create global networks through platforms, enabling rapid mobilization of like-minded individuals |

🔬 Research opportunities

  • Digital ethnography creates rich opportunities for research and analysis.
  • Researchers seek to record and identify patterns in the digital worlds of their research subjects.
  • Example: A researcher mapping political strategies of groups supporting a president might:
    • Subscribe to Facebook pages of various supporting groups
    • Record messages posted on such sites
    • Conduct content analysis on vocabulary employed
    • Examine photos uploaded to determine tactics used to signal who "belongs" to a movement

💡 Benefits and debates

  • New ICT offer many potentially lower cost possibilities for conducting research on important political topics.
  • Important debate: To what degree are Internet-based technologies "liberation technologies" versus tools for continued repression by authoritarian governments?
  • Researchers engaging in digital ethnography are opening up a rich trove of data sources to weigh in on this and other debates.

Case Studies in Qualitative Research

Section 7.5: Case studies

🧭 Overview

🧠 One-sentence thesis

Case studies are intensive, in-depth examinations of a single unit (event, person, group, or place) that aim to test theory and draw inferences applicable to a larger class of similar units.

📌 Key points (3–5)

  • What a case study is: an intensive study of a single unit to understand a larger class of similar units, combining deep analytical description with theory testing.
  • Three pillars of case selection: relevance to theory, representativeness of a larger group, and practical feasibility (access, language, resources).
  • Common confusion: every case is unique, but the key distinction is whether differences make a case an outlier (too different) versus representative (similar enough to provide insights).
  • Methods integration: case studies may draw on all qualitative methods—interviews, ethnography, documentary sources—plus quantitative data to build comprehensive understanding.
  • Purpose: to investigate causal processes often lost in quantitative approaches and to test theory against real-world evidence.

📚 Defining the case study method

📚 What a case study is

Case study: "an intensive study of a single unit for the purpose of understanding a larger class of (similar) units" (John Gerring).

  • A case study is fundamentally an in-depth description and exploration of an event, person, group, and/or place.
  • It goes beyond description: case studies may be critical and present evidence to build counter-narratives to dominant narratives.
  • The "intensive study" draws on multiple methods—interviews, ethnographic fieldwork, documentary analysis—to build comprehensive understanding.
  • Quantitative data may also be used to deepen the case study.

🎯 The goal of case studies

  • The ultimate purpose is to draw inferences from the case to test theory.
  • Case studies are a means to investigate causal processes that traditional quantitative approaches (like regression analysis) often miss.
  • They are empirical, testing theory against what is happening in the "real" world.
  • They demand creative and holistic thinking, followed by deep immersion in learning about the subject.

🔍 Case selection criteria

🔍 Relevance to theory

  • The case must be relevant to the theory or hypothesis the researcher wishes to test.
  • Example: To investigate how mineral wealth contributes to poor governance, select a country with mineral wealth (like the Democratic Republic of Congo), not one without (like Haiti).
  • To strengthen inferences, a researcher might craft a second case study on a similar country without mineral wealth to explore whether governance outcomes differ.

🎲 Representativeness

The selected case should be representative of a larger group.

  • This addresses criticism that the chosen case is too much of an outlier to provide insights on the general phenomenon.
  • Every place and person is unique (sui generis), but the key question is: Are the differences so enormous that the case is an outlier rather than representative?
  • Example: If studying the DRC as a case of the "resource curse," ask:
    • In what ways is the DRC like other mineral-rich countries?
    • In which ways does it differ?
    • Are those differences so significant that the DRC is not representative of the "class" of mineral-rich countries?

Don't confuse: Uniqueness vs. outlier status—all cases are unique, but a case becomes problematic when its differences prevent it from shedding light on the broader class of units.

🛠️ Practical considerations

Case selection also depends on feasibility:

| Consideration | Questions to ask |
| --- | --- |
| Secondary literature | Does a robust body of literature exist to build baseline knowledge? |
| Language skills | Does understanding the case require specific language abilities? |
| Access | Does the researcher know which organizations/individuals to contact? Do they have access? |
| Fieldwork | Will the case require fieldwork? For how long? What are the funding requirements? |

  • These practical constraints shape which cases are actually researchable, not just theoretically interesting.

🧩 Building a case study

🧩 Methodological integration

  • Case studies are not limited to one method; they may utilize:
    • Interviews with relevant subjects
    • Ethnographic fieldwork and observation
    • Documentary sources and archival research
    • Quantitative data to supplement qualitative insights
  • This multi-method approach enables the comprehensive understanding that defines strong case studies.

🧩 Analytical depth

  • Case studies provide deep analytical description, not just surface-level reporting.
  • They may present counter-narratives that challenge dominant interpretations of events.
  • The depth comes from the "intensive study" that explores multiple dimensions of the case.

💡 Why case studies matter

💡 Strengths of the method

  • Causal mechanisms: Case studies reveal how and why processes unfold, not just correlations.
  • Real-world testing: They are empirical, grounding theory in actual events and contexts.
  • Holistic understanding: They demand researchers think creatively about all aspects of a subject.
  • Depth over breadth: While quantitative methods may cover many units shallowly, case studies go deep into one unit to understand it thoroughly.

💡 Relationship to other methods

  • Case studies complement quantitative approaches by investigating processes that regression analysis cannot capture.
  • They can be combined with comparative analysis (e.g., studying two similar cases with one key difference) to strengthen causal inference.
  • Example: Comparing the DRC (mineral-rich) with a similar country without mineral wealth allows the researcher to isolate the effect of mineral wealth on governance.

Section 8.1: What are Quantitative Methods?

🧭 Overview

🧠 One-sentence thesis

Quantitative methods in political science use numbers and mathematical analysis—especially statistical analysis of datasets—to solve research puzzles, in contrast to qualitative methods that rely on words as evidence.

📌 Key points (3–5)

  • What quantitative methods are: research that collects standardized data and uses numbers and statistics to analyze frequencies and distributions of issues, events, or practices.
  • How they differ from qualitative methods: quantitative methods draw conclusions from numbers; qualitative methods draw conclusions from words (interviews, archival research, ethnographies).
  • Main forms in political science: statistical analyses of datasets (most common, developed from the behavioral wave) and formal models (mathematical representations of institutions and choices).
  • Common confusion: cases vs. data—each case (one person, one decision, one institution) can produce multiple data points; they are intertwined but not the same thing.
  • Why coding matters: responses in words must be transformed into numerical values to create variables for analysis; coding is essential for quantitative research.

🔢 What quantitative methods are and how they differ

🔢 Core definition

Quantitative methods: "research interested in frequencies and distributions of issues, events, or practices by collecting standardized data and using numbers and statistics for analyzing them."

  • Political scientists solve puzzles using mathematical analysis or complex mathematical measurement.
  • The main source of evidence is numbers, not words.
  • Example: instead of interviewing voters about their experiences (qualitative), a researcher surveys voters and analyzes the numerical responses statistically (quantitative).

🆚 Contrast with qualitative methods

  • Qualitative methods: main evidence is words; appraise evidence through interviews, focus groups, archival research, digital ethnographies.
  • Quantitative methods: main evidence is numbers; draw conclusions through statistical or mathematical analysis.
  • Don't confuse: both solve puzzles, but the form of evidence and analysis differs fundamentally.

📊 Two main forms of quantitative methods in political science

📊 Statistical analyses of datasets

  • Most common quantitative method in political science.
  • Developed from the behavioral wave: scholars focused on how individuals make political decisions (e.g., voting, ideological expression).
  • Process:
    1. Use surveys to collect evidence about human behavior.
    2. Sample potential respondents with a questionnaire.
    3. Code respondent choices using a scale of measurement.
    4. Analyze data with statistical software.
    5. Probe for correlations among variables to test hypotheses.
  • Example: a survey asks citizens if they are registered to vote, if they intend to vote, and which candidate they might vote for; responses are coded and analyzed for patterns.
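
A minimal sketch of steps 3–5, assuming pandas is available; the survey responses below are invented for illustration only.

```python
import pandas as pd

# Hypothetical survey responses recorded in words
survey = pd.DataFrame({
    "registered": ["yes", "no", "yes", "yes", "no"],
    "intends_to_vote": ["yes", "no", "yes", "no", "no"],
})

# Step 3: code responses on a simple scale (yes = 1, no = 0)
coded = survey.apply(lambda col: col.map({"yes": 1, "no": 0}))

# Steps 4-5: analyze the coded data and probe for a correlation between variables
print(pd.crosstab(coded["registered"], coded["intends_to_vote"]))
print(coded["registered"].corr(coded["intends_to_vote"]))
```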

🧮 Formal models

  • Political scientists represent political institutions and choices in the abstract.
  • Rely on logic and causality; express relationships among concepts and variables in mathematical terms.
  • Use precise statements written as equations; results can be replicated through mathematical proof.
  • Why they matter: help predict effects of programs before implementation, especially useful in public policy making.
  • Example: elected officials and experts develop a program; a formal model projects the program's effects before it is implemented, helping policymakers decide.

🗂️ Large-n analysis: cases, data, and units of analysis

🗂️ What large-n analysis means

Large-n analysis: analysis of a large number of cases, often assembled as datasets; "n" stands for number.

  • Quantitative methods in political science often involve analyzing datasets with many cases.
  • The more cases, the stronger the inferences that can be drawn (generally 1,200 to 1,500 cases for survey-based datasets).

🧩 Cases and units of analysis

Cases: the people, places, things, or actions (subjects) being observed in a research project; often also the unit of analysis.

Unit of analysis: the "who" or the "what" that you are analyzing for your study.

  • For surveys: each case could be one respondent (one person).
  • For observational studies: each case could be one recorded action.
  • For institutional analyses: each case could be one senator, one representative, or one decision made by lawmakers/policymakers.

🔗 Cases vs. data points

  • Cases and data are intertwined but not the same.
  • Each case can produce numerous data points.
  • Example: one survey respondent (one case) answers multiple questions → many data points from that single case.
  • Example: in observational studies, researchers observe and record actions of individuals → a plethora of data points from each case.
  • Don't confuse: a case is the subject; data points are the individual pieces of information collected from that subject.

🔤 Coding and variables

🔤 What coding is

Coding: transforming responses provided in surveys into numerical expressions or values so that analysis can take place.

  • Why it's needed: words must be converted to numbers for statistical analysis.
  • Coding is essential for creating variables to analyze in quantitative research.
  • Example: a survey asks if someone voted; "yes" might be coded as 1, "no" as 0.

📐 What a variable is

Variable: "some characteristic of an observation which may display two or more values in a data set."

  • Variables are the building blocks of quantitative analysis.
  • They are created through coding or directly from numerical data.

🔢 When coding is not needed

  • Sometimes data is already in numerical form and forms the variable without changes.
  • Example: a survey asks how much money a respondent donated to a campaign; the dollar amount is already a number, so no coding is needed.
  • Example: respondents rate themselves on a scale of 1–5; the numbers can be brought in directly or recoded to create new variables.
  • Researchers can recode data points to change how a variable is analyzed or to create entirely new variables.
  • Example: numerical data like age or income may be recoded into ranges (age ranges, income levels).
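
For instance, a ratio-level variable like age can be recoded into ordered ranges with a few lines of pandas; the ages and cut points below are hypothetical.

```python
import pandas as pd

# Hypothetical respondent ages, already numeric
ages = pd.Series([19, 23, 35, 47, 52, 61, 70])

# Recode the ratio-level variable into ordered age ranges (an ordinal variable)
age_groups = pd.cut(ages, bins=[18, 29, 44, 64, 120],
                    labels=["18-29", "30-44", "45-64", "65+"])
print(age_groups.value_counts().sort_index())
```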

📏 Stevens's Four Scales of Measurement

📏 Why scales matter

  • These scales help researchers determine which statistical techniques are most appropriate to use.
  • Variables are measured, coded, and constructed differently; the scale tells you how to analyze relationships between them.

🏷️ Nominal Scale

Identifies the groups to which a participant belongs; does not measure quantity or amount.

  • What it does: classifies a respondent into categories.
  • The distance between categories is unimportant.
  • Example: political party identification (Democrat, Republican, Independent); there is no inherent order or distance between these categories.

📊 Ordinal Scale

Subjects are placed in categories, and the categories are ordered according to amount or quantity of the construct being measured. However, the categories are not necessarily equidistant from each other.

  • What it does: ranks variables, but the distance between ranks is not necessarily equal.
  • Normally constructed from one survey question (a single item).
  • Example: asking students on a scale of 1–5 how liberal they are; 1 might be "very conservative," 5 might be "very liberal," but the distance between 1 and 2 is not necessarily the same as between 4 and 5.

📈 Interval Scale

A quantitative variable that possesses the property of equal intervals, but does not possess a true zero.

  • How it's constructed: from a Likert scale, meaning multiple survey questions (items) combined to create a single score.
  • The intervals between values are equal, but there is no true zero point.
  • Example: asking students to complete several survey questions about their ideology on scales of 1–5; responses are totaled and divided by the number of questions, providing a single score on ideological position.
  • Don't confuse with ordinal: interval scales have equal distances between values; ordinal scales do not.

📉 Ratio Scale

An interval quantitative variable that displays a true zero.

  • What it does: has equal intervals between responses or scores, and includes a zero option indicating no amount of the construct has been measured.
  • Example: income in dollars (zero dollars means no income), number of votes (zero votes means no votes).
  • Don't confuse with interval: ratio scales have a true zero; interval scales do not.

| Scale | Order matters? | Equal intervals? | True zero? | Example |
| --- | --- | --- | --- | --- |
| Nominal | No | No | No | Political party ID |
| Ordinal | Yes | No | No | 1–5 liberalism rating (single item) |
| Interval | Yes | Yes | No | Ideology score (multiple items averaged) |
| Ratio | Yes | Yes | Yes | Campaign donations in dollars |

Making Sense of Data

Section 8.2: Making Sense of Data

🧭 Overview

🧠 One-sentence thesis

Political science research requires both describing and explaining the world, and descriptive statistics—including measures of central tendency and dispersion—provide the foundational tools for summarizing data and understanding patterns before attempting causal explanations.

📌 Key points (3–5)

  • Dual goals of research: description and explanation are interactive; researchers must first describe the world before explaining phenomena within it.
  • Organizing data: raw data should be converted into a data matrix (rows = observations, columns = variables), then summarized using descriptive statistics appropriate to the level of measurement (nominal, ordinal, interval, ratio).
  • Central tendency vs. dispersion: measures of central tendency (mode, median, mean) locate the "most typical case," while measures of dispersion (range, variance, standard deviation) show how spread out the data are around that center.
  • Common confusion: a data matrix shows individual observations clearly but does not summarize general patterns; frequency tables, proportions, and percentages are better for understanding relative standing.
  • Why it matters: descriptive statistics reduce complex data to simpler, understandable terms without losing information, enabling researchers to grasp general patterns and prepare for hypothesis testing.

📊 Organizing and presenting data

📋 From raw data to data matrix

Data matrix: a format where each row represents a unique entry (observation) and each column represents different variables.

  • Raw data must first be organized into a manageable format.
  • A data matrix allows researchers to see information about each observation and compare a few cases.
  • However, this format is not ideal for summarizing data or grasping general patterns.

📈 Choosing the right presentation format

  • The correct format for presenting numerical data depends on the level of measurement of the variables: nominal, ordinal, interval, or ratio.
  • Tables themselves are not the problem; the issue is whether the table presents summary information (descriptive statistics) or just lists individual cases.

Example: Table 8-2 (the data matrix) shows individual observations but does not summarize; Table 8-3 (a frequency table) includes frequency, proportion, percentage, and cumulative percentage, making it easier to understand one observation relative to others.

📊 Frequency tables and relative frequency

Descriptive statistics: the numerical representation of certain characteristics and properties of the entire collected data.

Frequency table: a table that includes frequency, proportion, percentage, and cumulative percentage of a particular observation.

  • Proportion and percentage (measures of relative frequency) allow easy comparison between different observations of the same variable.
  • The goal is to present numbers that describe the cases and the basic features of the data, not just list them.
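
As a sketch of how such a frequency table can be built from raw responses (assuming pandas; the party responses are invented for illustration):

```python
import pandas as pd

# Hypothetical party-identification responses
party = pd.Series(["Democrat", "Republican", "Independent", "Democrat",
                   "Democrat", "Republican", "Independent"])

freq = party.value_counts()          # frequency of each category
prop = freq / freq.sum()             # proportion (relative frequency)
table = pd.DataFrame({
    "frequency": freq,
    "proportion": prop.round(3),
    "percentage": (prop * 100).round(1),
    "cumulative percentage": (prop * 100).cumsum().round(1),
})
print(table)
```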

📉 Graphical representations

📊 Histogram

Histogram: a type of graph where the height and area of the bars are proportionate to the frequencies in each category of a variable.

  • Used for interval or ratio variables with a relatively large number of cases.
  • The bars show how sizable each value is; height and area reflect frequency.

📊 Bar graph

Bar graph: a visual representation of the data, usually drawn using rectangular bars to show how sizable each value is; bars can be vertical or horizontal.

  • Used for categorical variables (ordinal or nominal).
  • Deals with a much smaller number of categories than a histogram.
  • Don't confuse: histograms are for continuous data (interval/ratio); bar graphs are for categorical data (nominal/ordinal).

🔗 Scatterplot

Scatterplot: a graph that uses Cartesian coordinates (x-axis and y-axis) to display values for two variables from a dataset, showing how one variable may influence the other.

  • Excellent choice for presenting a relationship between two variables in graphic format.
  • Each point represents an observation with values on both variables.

📈 Time-series plot

Time-series plot: a graph that displays the changes in the values of a variable measured at different points in history.

  • The x-axis represents the time variable (e.g., months, years); the y-axis represents the variable of interest.
  • Unlike scatterplots, each dot (observation) is connected to show changes over time.
  • Multiple lines can be used on the same graph to differentiate categories (e.g., female representatives in the House vs. Senate).

Example: displaying the number of proposed constitutional amendments in the U.S. since its founding, or the number of women in Congress over the years.
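
A minimal matplotlib sketch of a time-series plot with two connected lines; the counts below are placeholder values invented purely to illustrate the format, not actual congressional data.

```python
import matplotlib.pyplot as plt

# Placeholder values for illustration only (not real data)
years = [1980, 1990, 2000, 2010, 2020]
house = [20, 30, 55, 75, 100]
senate = [2, 3, 9, 17, 25]

plt.plot(years, house, marker="o", label="House")    # each observation connected over time
plt.plot(years, senate, marker="o", label="Senate")
plt.xlabel("Year")
plt.ylabel("Number of women serving")
plt.legend()
plt.show()
```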

🎯 Measures of central tendency

🎯 What central tendency measures

Measures of central tendency: the mode, median, and mean—locate the center of a distribution of a particular dataset; they identify "the most typical case" in that data distribution.

  • These measures help researchers understand where the middle or center of the data lies.
  • They simplify data by summarizing it with a single representative value.

🔢 Mode

Mode: the category with the highest frequency.

  • Simply the value or category that appears most often in the dataset.
  • Useful for all levels of measurement, including nominal data.

➗ Median

Median: the point in the distribution that splits the observations into two equal parts; the middle point when observations are ordered by their numerical values.

  • If there are an odd number of observations, the median is the single middle measurement.
  • If there are an even number of observations, the median is the average of the two middle measurements.
  • Less sensitive to extreme values (outliers) than the mean.

➕ Mean

Mean (or average): the sum of the observed value of each subject divided by the number of subjects.

  • Formally expressed as: Mean = (sum of all Y values) ÷ (number of observations).
  • The notation: Y-bar represents the mean; Σ (sigma) means sum; n represents the number of observations.

Example: Five students have midterm scores of 80, 77, 91, 62, and 85. The sum is 395, and n = 5, so the mean is 395 ÷ 5 = 79.
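
In symbols, using the Y-bar, Σ, and n notation described above, the worked example is:

```latex
\bar{Y} = \frac{\sum_{i=1}^{n} Y_i}{n}
        = \frac{80 + 77 + 91 + 62 + 85}{5}
        = \frac{395}{5} = 79
```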

  • Don't confuse: the mean is affected by extreme values (outliers), while the median is not; the mode simply counts frequency and does not involve arithmetic.

📏 Measures of dispersion

📏 Why measure dispersion

  • Measures of central tendency alone do not tell the full story; researchers also need to know how spread out the data are around the center.
  • Measures of variability help fully understand the data being utilized in research.

📐 Range

Range: the difference in value between the maximum and minimum value.

  • The simplest measurement of data variation.

Example: If the highest midterm test score is 100 and the lowest is 70, the range is 100 − 70 = 30.

📊 Interquartile range (IQR)

Interquartile range (IQR): the difference between the 75th percentile (where 75% of values are below that point) and the 25th percentile (where 25% of observations are below that point).

  • The IQR is the range where the maximum value is the third quartile (Q₃) and the minimum value is the first quartile (Q₁).
  • Tells us how spread the middle 50% of the observations are.
  • Some scholars use a boxplot to graphically display quartiles and the median.
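
A quick way to compute the quartiles and IQR, assuming NumPy is available (the scores are hypothetical):

```python
import numpy as np

# Hypothetical midterm scores
scores = np.array([62, 70, 75, 77, 80, 85, 88, 91, 95, 100])

q1, q3 = np.percentile(scores, [25, 75])   # first and third quartiles
iqr = q3 - q1                              # spread of the middle 50% of observations
print(q1, q3, iqr)
```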

📉 Deviation, variance, and standard deviation

📉 Deviation

Deviation: the distance of an observation from the mean.

  • Measures how far each individual observation is from the center (mean).
  • Observations can deviate in both positive and negative directions.

📊 Variance

Variance: the average of the squared deviations.

  • To calculate variance:
    1. Measure the distance of each observation from the mean.
    2. Square each distance.
    3. Add all the squared deviations.
    4. Divide by the number of observations (for population variance) or by the number of observations minus one (for sample variance).
  • Denoted by σ² (sigma squared).
  • Population variance: sum of (each Y − population mean)² ÷ N.
  • Sample variance: sum of (each Y − sample mean)² ÷ (n − 1).
  • Calculating by hand is tedious for large datasets; researchers often use statistical software or spreadsheets like Excel.

📐 Standard deviation

Standard deviation: the square root of the variance; it represents the typical deviation of observations (as opposed to the average squared distance from the mean).

  • Population standard deviation: square root of [sum of (each Y − population mean)² ÷ N].
  • Sample standard deviation: square root of [sum of (each Y − sample mean)² ÷ (n − 1)].
  • More interpretable than variance because it is in the same units as the original data.
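
Written out in standard notation, the formulas described in words above are:

```latex
\sigma^2 = \frac{\sum_{i=1}^{N} (Y_i - \mu)^2}{N}, \qquad
s^2 = \frac{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}{n - 1}, \qquad
\sigma = \sqrt{\sigma^2}, \qquad
s = \sqrt{s^2}
```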

📈 Interpreting standard deviation

Example: Your professor tells you the mean exam score was 85 with a standard deviation of 5.

  • About 68% of observations fall within one standard deviation of the mean.
  • This means about 68% of students scored between 80 (85 − 5) and 90 (85 + 5).
  • About 95% of observations fall within two standard deviations of the mean.
  • This means about 95% of students scored between 75 (85 − 10) and 95 (85 + 10).
  • If you scored 96, you are more than two standard deviations above the mean, meaning fewer than about 2.5% of students scored higher than you (more than 97.5% scored lower).

Don't confuse: deviation is the distance from the mean; variance is the average of squared deviations; standard deviation is the square root of variance and is easier to interpret because it is in the original units.
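
A minimal sketch of the exam example, assuming the scores are normally distributed and SciPy is available:

```python
from scipy.stats import norm

mean, sd = 85, 5                                               # exam example from the text

# Share of students within one and within two standard deviations of the mean
within_1sd = norm.cdf(90, mean, sd) - norm.cdf(80, mean, sd)   # about 0.68
within_2sd = norm.cdf(95, mean, sd) - norm.cdf(75, mean, sd)   # about 0.95

# Share of students scoring below 96, a score beyond two standard deviations
below_96 = norm.cdf(96, mean, sd)                              # about 0.986
print(round(within_1sd, 3), round(within_2sd, 3), round(below_96, 3))
```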

🔔 Normal distribution

🔔 What is a normal distribution

Normal distribution: a bell-shaped curve where the value of the mean, median, and mode is the same, and data near the mean are more frequent in occurrence.

  • The height of the curve represents the density (frequency) of a particular observation.
  • The peak of the curve is located at the middle of the distribution, meaning there are many more observations with the value of the mean (or close to it) than any other values.
  • Most variables that political scientists study can be assumed to be normally distributed.

📊 Properties of the normal distribution

  • Bell-shaped: symmetric around the center.
  • Mean = median = mode: all three measures of central tendency coincide at the peak.
  • 68-95 rule: about 68% of data fall within one standard deviation of the mean; about 95% fall within two standard deviations.
  • This distribution is foundational for statistical inference and hypothesis testing (covered in the next section).

Introduction to Statistical Inference and Hypothesis Testing

Section 8.3: Introduction to Statistical Inference and Hypothesis Testing

🧭 Overview

🧠 One-sentence thesis

Statistical inference allows researchers to draw conclusions about entire populations from sample data by testing hypotheses through standardized measures like z-scores, and hypothesis testing determines whether observed differences are likely due to chance or represent real relationships.

📌 Key points (3–5)

  • What statistical inference is: analyzing sample data to determine characteristics of the larger population, enabling research without surveying everyone.
  • Normal distribution properties: bell-shaped curve where mean, median, and mode are identical; data near the mean occur most frequently; symmetrical with half above and half below the mean.
  • Z-scores standardize comparisons: they measure how many standard deviations an observation falls above or below the mean, allowing comparison across different scales.
  • Hypothesis testing logic: compare an observed test statistic to a critical threshold; if observed exceeds critical, reject the null hypothesis (no relationship).
  • Common confusion—Type I vs Type II errors: Type I is rejecting a true null (false positive); Type II is failing to reject a false null (false negative); alpha-level controls Type I risk, sample size affects Type II risk.

📊 Normal distribution fundamentals

📊 What the normal distribution represents

Normal distribution: a bell-shaped curve where the mean, median, and mode have the same value, and data near the mean are more frequent in occurrence.

  • The height of the curve at any point represents the density (frequency) of observations at that value.
  • Most variables political scientists study can be assumed to be normally distributed.
  • The peak is at the center because the most observations cluster around the mean.
  • As you move away (deviate) from the mean in either direction, fewer observations occur.

Example: In a test with mean score 85, most students score close to 85; very few score extremely high or extremely low.

🔔 Symmetry and standard deviations

  • The normal distribution is symmetrical: half of observations fall above the mean, half below.
  • About 68% of observations fall within one standard deviation from the mean (both directions).
  • Notation: a normal distribution is written as N(μ, σ²), where μ is the mean and σ² is the variance.

Example: If mean test score is 85 and standard deviation is 5, then 68% of students scored between 80 and 90.

🧮 Z-scores for standardization

🧮 What a z-score measures

Z-score: the number of standard deviations that a particular observation falls above or below the mean.

  • Formula (in words): z equals (observation minus mean) divided by standard deviation.
  • Z-scores allow comparison of values from different scales or measures.
  • A positive z-score means the observation is above the mean; a negative z-score means below the mean.
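
In symbols, using the SAT and ACT figures from the comparison below:

```latex
z = \frac{Y - \mu}{\sigma}, \qquad
z_{\text{SAT}} = \frac{1300 - 1100}{200} = 1.0, \qquad
z_{\text{ACT}} = \frac{24 - 21}{6} = 0.5
```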

🎯 Comparing across different tests

The excerpt provides an SAT vs ACT comparison scenario:

| Test | Mean | Standard Deviation | Score | Z-score | Interpretation |
| --- | --- | --- | --- | --- | --- |
| SAT | 1100 | 200 | 1300 | 1.0 | 1 standard deviation above the mean |
| ACT | 21 | 6 | 24 | 0.5 | 0.5 standard deviations above the mean |

  • Carlos (SAT z = 1.0) performed better than Tomoko (ACT z = 0.5) because his score was further above the mean in standardized terms.
  • Don't confuse: raw scores cannot be directly compared across different scales; z-scores make them comparable by standardizing to the same reference (standard deviations from mean).

🔬 Hypothesis testing framework

🔬 Null and alternative hypotheses

Null hypothesis (H₀): a working statement that posits the absence of statistical relationship between two or more variables.

Alternative hypothesis (Hₐ): the claim a researcher is making when testing relationships between data; an alternative working statement to the null hypothesis.

  • In statistics, researchers test whether the evidence is strong enough to show that the null hypothesis is false.
  • The alternative hypothesis (also called research hypothesis) represents the relationship or effect the researcher expects.
  • Important: we never "accept" the null hypothesis; we either "reject" it or "fail to reject" it.

📏 Statistical significance and critical values

Statistical significance (alpha level): the probability of rejecting the null hypothesis when it is true.

  • Alpha of 0.05 means 95% confidence and is the typical standard in political science.
  • The critical z-score is the threshold determined by the chosen alpha level (e.g., 1.96 for alpha = 0.05 in a two-tailed test).
  • Found using a z-score probability table.

🧪 Conducting the test

The excerpt walks through an example: does extra study session affect midterm scores?

Given information:

  • Population mean (all students): 75
  • Standard deviation: 7
  • Sample mean (extra session group): 82
  • Sample size: 50
  • Alpha: 0.05
  • Critical z-score: 1.96
  • Null hypothesis: sample mean equals population mean
  • Alternative hypothesis: sample mean does not equal (or is greater than) population mean

Steps:

  1. Calculate the observed z-score using the formula: (sample mean minus population mean) divided by (standard deviation divided by square root of sample size).
  2. In this case: absolute value of (82 minus 75) divided by (7 divided by square root of 50) equals 7.07.
  3. Compare observed z-score (7.07) to critical z-score (1.96).
  4. Since 7.07 is larger than 1.96, reject the null hypothesis.
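
The calculation in steps 1 and 2, written out:

```latex
z_{\text{obs}} = \frac{\lvert \bar{Y} - \mu \rvert}{\sigma / \sqrt{n}}
             = \frac{\lvert 82 - 75 \rvert}{7 / \sqrt{50}}
             \approx 7.07
```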

Interpretation: The higher average score in the extra-session group is very unlikely to be the result of chance; the extra study sessions may have contributed to the higher average.

⚖️ Decision rule

  • If observed test statistic exceeds critical value: reject the null hypothesis (your research claim may be correct).
  • If observed test statistic is smaller than critical value: fail to reject the null hypothesis.

⚠️ Errors in hypothesis testing

⚠️ Type I error (false positive)

Type I error: mistakenly rejecting the null hypothesis when it is actually true.

  • This is a "false-positive" conclusion—you conclude there is a relationship when there isn't one.
  • The alpha-level is the probability of committing a Type I error.
  • Safeguard: by choosing a smaller alpha-level (e.g., 0.01 instead of 0.05), you reduce the chance of this error.

⚠️ Type II error (false negative)

Type II error: failing to reject the null hypothesis when it is actually false.

  • This is a "false-negative" conclusion—you miss a real relationship.
  • The probability of Type II error relates to the concept of "power" in testing.
  • Safeguard: larger sample sizes reduce the likelihood of Type II error.

🔄 Trade-off reminder

| Error Type | What happened | How to reduce |
| --- | --- | --- |
| Type I | Rejected true null (false positive) | Lower alpha-level |
| Type II | Failed to reject false null (false negative) | Increase sample size |

Don't confuse: Type I is about being too eager to claim a relationship; Type II is about missing a relationship that exists.

🎓 Broader context

🎓 Why statistical inference matters

Statistical inference: the process of analyzing data generated by a sample, but then used to determine some characteristic of the larger population.

  • Surveys are the "bread and butter" of quantitative political science.
  • Researchers cannot survey everyone (e.g., all registered voters in the U.S.), so they use samples.
  • Samples allow testing relationships between variables without spending millions on researching the entire population.

🎓 Scope of this introduction

  • The excerpt covers comparison of means using z-scores.
  • The same concepts apply to comparison of means with t-tests and comparison of proportions.
  • This section is "a small tip of a huge statistical iceberg."
  • Students interested in quantitative political research are encouraged to enroll in an introductory statistics course, preferably in political science or other social/behavioral sciences.

Section 8.4: Interpreting Statistical Tables in Political Science Articles

🧭 Overview

🧠 One-sentence thesis

Understanding the three key numerical expressions in regression tables—coefficients, standard errors, and confidence levels—enables students to interpret quantitative research findings in political science journals even without advanced statistical training.

📌 Key points (3–5)

  • What regression tables show: the analytical results of quantitative research, displaying relationships between outcome (dependent) and explanatory (independent) variables.
  • Three essential numbers: coefficient (nature of relationship), standard error (uncertainty estimate), and confidence levels (statistical significance).
  • Common confusion: asterisks indicate confidence levels—more asterisks mean higher confidence (one * = 90%, two ** = 95%, three *** = 99%); no asterisk means statistically insignificant.
  • Prerequisites for reading tables: identify the causal relationship, outcome variable, explanatory variables, and how each is measured before analyzing the table.
  • Why basic understanding matters: political science students will encounter these tables in assigned readings, even if full interpretation requires additional coursework.

📋 Prerequisites for table analysis

🎯 Identifying the research structure

Before examining any regression table, students must complete several identification tasks:

  • Determine the outcome (dependent) variable(s) being explained
  • Identify the explanatory (independent) variables doing the explaining
  • Understand how each variable is quantified or measured
  • Recognize the statistical model being estimated

Important: The excerpt emphasizes that "the first task in the analysis of a statistical results table is to identify the causal relationship being examined in the article."

⚠️ Scope limitations

The excerpt explicitly states that:

  • In-depth discussion of regression and statistical techniques is beyond the textbook's scope
  • Students need "additional exposure and training in quantitative methods" for proper interpretation
  • Many considerations exist when analyzing regression tables that cannot be covered in this chapter
  • The goal is awareness and basic literacy, not mastery

🔢 The three essential numbers

📊 Coefficient

Coefficient: a numerical expression of the relationship between the outcome and explanatory variables.

What it tells you:

  • The sign (positive or negative) indicates the direction of the relationship
  • Negative sign: inverse relationship—when the explanatory variable goes up, the outcome variable goes down
  • Positive sign: direct relationship—when the explanatory variable increases, the outcome also increases

Important caveat: The substantive meaning of the coefficient depends on the specific statistical model used in the study.

Example: If a coefficient for "education level" is positive in a model predicting voter turnout, higher education is associated with higher turnout.

📏 Standard error

Standard error: an estimate of the standard deviation of the coefficient.

What it tells you:

  • Captures how much uncertainty exists in the model
  • Indicates "how potentially wrong the estimate is"
  • Shows how correlated the two variables truly are

How to interpret:

  • Higher standard error = weaker model relative to variables
  • Higher standard error means less certainty about the correlation between variables
  • The relationship may not be as certain as it appears

Why researchers care: Standard errors help researchers improve the certainty of their findings.

Don't confuse: Standard error is not the same as the coefficient itself—it measures confidence in the coefficient, not the relationship's direction or size.

⭐ Confidence levels

Confidence levels: representation of statistical significance or alpha levels on regression tables.

How they're reported: Researchers use asterisks (*) to indicate significance levels:

| Asterisks | Confidence Level | Meaning |
| --- | --- | --- |
| * | 90% | Relationship is significant at 90% confidence |
| ** | 95% | Relationship is significant at 95% confidence |
| *** | 99% | Relationship is significant at 99% confidence |
| (none) | Not significant | Cannot distinguish if relationship is important or random |

What "statistically insignificant" means:

  • Coefficients without asterisks are called statistically insignificant
  • The model could not determine if the relationship was important
  • The observed relationship could result from random or systematic factors rather than a true causal connection

Automatic reporting: Most statistical software (Stata, R, SPSS, SAS) automatically calculates and reports these significance levels.
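
To see where the coefficient, standard error, and significance stars come from, here is a minimal sketch using simulated data. It assumes the NumPy and statsmodels libraries; the education and turnout variables are hypothetical, echoing the example above.

```python
# A minimal sketch (simulated, hypothetical data): fitting an OLS regression
# and reading off the three numbers discussed above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

education = rng.normal(14, 2, n)                        # hypothetical years of schooling
turnout = 20 + 2.5 * education + rng.normal(0, 10, n)   # hypothetical turnout score

X = sm.add_constant(education)   # adds the intercept column
model = sm.OLS(turnout, X).fit()

print(model.params)     # coefficients: sign gives direction, value gives size
print(model.bse)        # standard errors: uncertainty around each coefficient
print(model.pvalues)    # p-values: what software uses to assign the asterisks
print(model.summary())  # the full results table, similar to journal tables
```

In this simulated case, a positive education coefficient with a small standard error and a p-value below 0.01 is the kind of result a journal table would mark with three asterisks.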

🔗 Connection to earlier concepts

The confidence levels concept is "very similar to the concept of statistical significance or alpha levels introduced in Section 8.3."

📑 Additional table elements

🧩 Beyond the basics

The excerpt notes that regression tables often contain:

  • "Quite a few additional reported numerical indicators"
  • Different statistical figures depending on the model used
  • Additional diagnostic tests to ensure model robustness

🎓 Path to full competency

To become a confident "consumer" of quantitative political research:

  • Additional quantitative method courses are required
  • Statistics courses are necessary
  • The chapter aims to "pique your interest" rather than provide complete training

🔬 Context in political science research

📊 Why regression is common

The excerpt explains that quantitative research in political science:

  • Relies heavily on observational methods
  • Uses regression analysis as "the most common approach" after data is coded and arranged
  • Borrows techniques from related disciplines (economics, psychology)
  • Incorporates developments from statisticians and mathematicians

🎯 The chapter's goal

Even though full interpretation requires advanced training, the textbook considers it "important to introduce you to a basic understanding of a statistical table in a journal article, and how analytical results of quantitative research are generally presented."

This reflects the reality that political science students "will be required to read such tables in the articles they have been assigned in class."


Section 9.1: Ethics in Political Research

🧭 Overview

🧠 One-sentence thesis

Research ethics in political science require researchers to exercise sound judgment guided by agreed-upon principles to protect research participants, maintain scholarly integrity, and prevent harm—even when exact application of those principles is unclear.

📌 Key points (3–5)

  • What ethics means: systems of principles that guide appropriate action for a particular group, derived from moral character and customary behavior.
  • Why ethics matter: failing to consider ethics can harm others, damage reputations (personal, institutional, and disciplinary), and lead to misuse of research.
  • IRBs as gatekeepers: Institutional Review Boards assess whether research designs protect human subjects' rights and well-being, developed in response to historical abuses.
  • Common confusion: some ethical principles are obvious (plagiarism, data fabrication), but others are less straightforward (anticipating societal effects of research, judgment calls in the field).
  • Guiding principle: when in doubt, err on the side of caution; researchers must grapple with principles that operate at the intersection of theory and practice.

🎯 What research ethics are

📖 Definition and origin

Ethics: systems of principles that guide a particular group's appropriate action, derived from the Greek ēthos (moral character) and ēthikos (pertaining to customary behavior).

  • Ethics are not universal rules but principles established and revised by "epistemic communities"—communities of learning and knowledge production.
  • All scientists are expected to conduct research in ways that observe these agreed-upon principles.
  • The principles depend on researchers exercising sound judgment when exact application may be unclear.

🔍 Obvious vs. less straightforward principles

The excerpt distinguishes two levels of ethical considerations:

| Type | Examples | Why it matters |
| --- | --- | --- |
| Obvious principles | Not plagiarizing; not misrepresenting sources or inventing data; not using unreliable data; not distorting opposing views | These are clear violations of scholarly integrity |
| Less straightforward | Contemplating potential effects of research on society; judgment calls during fieldwork | Scientists rarely control how their work will ultimately impact individuals, society, or the planet (examples: dynamite, internet) |

  • Don't confuse: ethical research is not just about avoiding clear violations; it also requires anticipating consequences that may be difficult to foresee.

🏛️ Institutional Review Boards (IRBs)

🛡️ Purpose and function

Institutional Review Boards (IRBs): bodies that assess the degree to which researchers and their project designs have taken appropriate measures to protect the rights and well-being of "human subjects."

  • In the United States, political scientists must submit research proposals to IRBs before conducting research.
  • IRBs protect three parties: the researcher, the research participants, and the universities/institutions where they are housed.

📜 Historical context

  • IRBs were developed between 1970 and 1990 in response to unethical research on human subjects.
  • The excerpt cites research conducted by Dr. Josef Mengele and others during the Nazi Regime as examples of abuses that prompted IRB creation.
  • This history shows why formal oversight became necessary: past researchers caused irreparable harm.

⚖️ Limitations and critiques

  • IRB protocols and emphasis vary depending on location.
  • They have been critiqued for being overly bureaucratic and legalistic in nature.
  • IRBs cannot anticipate all the judgment calls researchers may confront when conducting research.
  • Common IRB refrain: when in doubt, err on the side of caution.
  • Don't confuse: IRBs provide important oversight, but they cannot prepare researchers for every ethical question or dilemma that may arise in the field.

🧩 The scope of ethical responsibility

🌐 Consequences of ethical failures

Failing to take ethical considerations seriously can cause harm at multiple levels:

  • To others: irreparable harm to research participants, communities, or society.
  • To your reputation: personal credibility as a researcher.
  • To your institution: the reputation of your university or organization.
  • To the discipline: the credibility of political science as a field.

Example: A researcher who fabricates data not only produces unreliable knowledge but also undermines trust in all political science research.

🎓 Preparing for ethical challenges

  • A comprehensive guide to ethical research is beyond the scope of any single text or course.
  • Instructors and textbook authors (the "guardians and practitioners of our discipline") bear responsibility for preparing young political scientists.
  • What follows in the chapter are key principles—some subject to debate—that researchers must grapple with and consider when engaged in research.
  • The principles operate at the intersection of theory and practice, requiring ongoing interpretation and judgment.

🤔 Fundamental ethical questions

The excerpt poses several questions that illustrate the complexity of research ethics:

  • What is the right way to frame questions without misleading research subjects?
  • How ought we interpret results that may be "fuzzy" or prone to manipulation?
  • What, if anything, do we owe the individuals and communities that make much of our scholarship possible?

These questions show that ethical research is not just about following rules but about making thoughtful decisions that respect participants and produce trustworthy knowledge.


Section 9.2: Ethics and Human "Subjects"

🧭 Overview

🧠 One-sentence thesis

Ethical research in political science requires balancing the pursuit of knowledge with protecting human participants through informed consent, trust-building, and minimizing harm, because participants are both means to understanding and ends in themselves.

📌 Key points (3–5)

  • Why ethics matter: Unethical research harms participants, researchers' reputations, institutions, and the discipline; IRBs review proposals to protect human subjects.
  • Humans as both means and ends: Unlike studying atoms or rocks, political science participants are not just data sources but people deserving respect and protection.
  • Fully informed consent: Participants must know the study's nature, risks, data use, and their right to withdraw at any time before agreeing to participate.
  • Common confusion: Trust vs. legal protection—consent scripts provide legal safeguards and information, but trust through immersion and participant observation is often necessary for access and quality research.
  • The harm-benefit balance: Research always involves human costs (time, reliving trauma), but ethical research can lead to positive change, emancipation, and empowering voices when conducted responsibly.

🏛️ Institutional safeguards and their limits

🏛️ Institutional Review Boards (IRBs)

IRBs: bodies that assess whether research proposals take appropriate measures to protect the rights and well-being of human subjects.

  • Developed between 1970 and 1990 in response to unethical research (e.g., Nazi Regime experiments).
  • Protect researchers, participants, and host institutions.
  • Limitation: Critiqued as overly bureaucratic and legalistic.
  • Cannot anticipate all judgment calls researchers face in the field.
  • Common IRB guidance: "when in doubt, err on the side of caution."

👨‍🏫 Responsibility beyond IRBs

  • IRBs cannot provide comprehensive preparation for all ethical challenges.
  • Training young political scientists falls largely to discipline practitioners (textbook authors, instructors).
  • What follows in the text are key principles (some debatable) rather than a complete ethical guide.
  • Researchers must grapple with these principles throughout their careers.

🧑‍🤝‍🧑 The unique nature of human subjects research

🧑‍🤝‍🧑 Why political science is different

Political science relies heavily on humans, who are central to its studies:

  • Qualitative methods involve interviewing human subjects.
  • Some research requires living with and immersing oneself in participants' cultures, communities, and ways of life.
  • Participant observation often requires establishing relationships to co-create knowledge, not just extract data.

🎯 Humans as ends, not just means

The distinct aspect: "subjects" are not only means to testing theories and discovering puzzles, but are also ends in themselves.

  • Unlike studying atoms, rocks, or the cosmos, political science participants have intrinsic value.
  • Requires balancing multiple roles: researcher, active participant, friend, and sometimes adversary.
  • Example: A researcher studying a community organization must navigate being both an objective scholar and a trusted community member.

⚖️ Weighing costs and benefits

⚖️ Inevitable human costs

Research always entails costs to participants:

  • Time given to the researcher.
  • Reliving private or traumatic events.
  • Potentially worse consequences.
  • True for both qualitative and quantitative approaches.

✨ Potential benefits of ethical research

When conducted ethically and minimizing costs, research can:

  • Better understand political phenomena.
  • Lead to positive change for humanity.
  • Contribute to emancipation for the oppressed.
  • Provide the empowering process of having one's voice heard.

⚖️ No exact formula

  • No precise calculation exists for when research ends justify the means.
  • The scientific community agrees on foundational principles and practices to guide ethical considerations.
  • Researchers must consider potential harm: physical, psychological, emotional, intentional or unintentional.

📋 Fully informed consent

📋 What informed consent requires

Fully informed consent: the principle that participants must be told about the study before agreeing to participate.

Participants must be informed about:

  • The exact nature of the study.
  • Potential implications for them.
  • What will happen during the process.
  • What will happen to the data they provide.
  • How data will ultimately be used.
  • Their right to withdraw at any time if uncomfortable or unwilling to continue.

📝 Consent scripts

  • Often read to all participants to maintain a common standard.
  • Typically reviewed by an IRB before the study begins.
  • Provides legal protection for researchers and their institutions.
  • Gives participants information needed to decide whether to proceed.

🤝 Trust beyond consent

Don't confuse: Legal consent ≠ sufficient trust for quality research.

  • Consent scripts are necessary but not sufficient.
  • Trust is often necessary for conducting qualitative research.
  • Participant observation and immersion research are instrumental for learning and engendering trust.
  • Without access to archives, organizations, communities, or trust of key individuals, projects cannot proceed beyond theory.

🔍 Benefits of deep engagement

🔍 Access and insider perspective

Once access and trust are established:

  • Multiple learning opportunities emerge.
  • Researchers gain unique perception as an "insider" rather than an "outsider" with only scholarly interests.
  • Access to rich details of participants' lives and communities.

🛠️ Improving research quality

Deep connections help:

  • Construct survey instruments that minimize confirmation bias.
  • Avoid misrepresenting study participants.
  • Correct initial hunches and provisional inferences made before fieldwork.
  • Lead to unpredictable discoveries not originally anticipated.

⚠️ Risks of closeness

Getting close to participants can expose researchers to:

  • Data that could be illegal.
  • Ethically dubious information.
  • Situations that might put researchers or participants in danger.
  • Friendships and deep connections that complicate the researcher role.

Example: A researcher studying a political organization might learn about activities that are legally questionable, creating tension between scholarly interest and participant protection.

🔄 Ongoing ethical navigation

🔄 The unpredictable nature of fieldwork

  • Qualitative research is inherently unpredictable.
  • Can lead to new discoveries beyond original plans.
  • Requires being forthcoming about interests and intentions as both scholar and participant.
  • Demands clarity about who the researcher is individually.

🛡️ Foundational principle

Researchers must avoid misleading participants because:

  • Research and its dissemination may put participants in harm's way.
  • The personal and individual nature of qualitative data collection creates vulnerability.
  • Given the rich access to participants' lives, ethical considerations extend throughout the research process.

Section 9.3: Navigating Qualitative Data Collection

🧭 Overview

🧠 One-sentence thesis

Ethical qualitative data collection requires protecting participant anonymity and confidentiality while adopting a reflexive approach to minimize bias and remain open to new perspectives.

📌 Key points (3–5)

  • Core ethical safeguards: anonymity and confidentiality of subjects must be ensured throughout qualitative data collection.
  • Reflexivity requirement: researchers must reflect on how their personal characteristics (biases, culture, etc.) may impact research design, data collection, and interpretation.
  • Ethical obligation to participants: researchers should avoid being parasitic and bring something of value back to the community or persons who made the study possible.
  • Common confusion: reflexivity is not just self-awareness—it actively minimizes bias and opens researchers to new ways of thinking.
  • Knowledge sharing: findings should be brought back to individuals/communities in a way they can understand, use, or verify.

🔒 Protecting Participants

🔒 Anonymity and confidentiality

The excerpt emphasizes ensuring the anonymity and confidentiality of subjects to achieve ethical data collection.

  • These are foundational safeguards in qualitative research involving human subjects.
  • Anonymity: protecting the identity of participants so they cannot be identified.
  • Confidentiality: keeping participant information private and secure.
  • Why it matters: protects participants from potential harm and builds trust necessary for honest data collection.

🤝 Avoiding parasitic research

  • Researchers are routinely criticized for failing to bring study findings back to the individuals or community under investigation.
  • The ethical imperative: avoid extracting knowledge without giving back.
  • What to provide: findings presented in a way participants can understand, use, or verify.
  • Example: An organization participates in a study—researchers should share results in accessible formats, not just academic publications.

🪞 Reflexivity in Practice

🪞 What reflexivity means

Reflexivity: the act of reflecting on how the researcher's personal characteristics (biases, culture, etc.) may impact their research design, data collection, and interpretation processes.

  • Not passive self-awareness—it is an active analytical practice.
  • Requires ongoing examination throughout the research process, not just at the beginning.
  • Applies to three stages: design, data collection, and interpretation.

🎯 Goals of reflexive approach

The excerpt identifies two key outcomes:

  1. Minimize bias: recognize and reduce the influence of personal characteristics on research.
  2. Open to new thinking: reflexivity helps researchers remain receptive to alternative perspectives and unexpected findings.

Don't confuse: Reflexivity does not eliminate bias entirely (the excerpt notes in Section 9.5 that "we can never be entirely value-neutral"), but it helps minimize its impact.

🌐 Ethical Knowledge Sharing

🌐 Bringing value back

  • The excerpt frames this as an ethical responsibility, not an optional step.
  • May seem "onerous" with "little instrumental reward," but it is part of ethical research practice.
  • The underlying principle: participants made the study possible, so they deserve to benefit from it.

🔄 Community engagement

  • Findings should be shared with the community or individuals under investigation.
  • Format matters: information must be presented so participants can:
    • Understand it
    • Use it
    • Verify it
  • This addresses the criticism that researchers extract knowledge without reciprocating.

💡 The broader ethical framework

The excerpt quotes: "when you conduct and report your research ethically, you join a community in search for some common good…you discover that research focused on the best interest of others is also your own."

  • Ethical research is not just about following rules—it aligns researcher interests with participant interests.
  • Creates a shared pursuit of knowledge that benefits all parties.
  • Example: A researcher studying a community's political participation shares findings that help the community advocate for policy changes, while also producing valid academic knowledge.

🔗 Connection to Broader Research Ethics

🔗 Human subjects as ends, not means

The excerpt references Section 9.2's principle: subjects are "not only a means of testing theories, illuminating puzzles, and discovering new ones, but are also ends in themselves."

  • This philosophical foundation underpins the practical requirements of anonymity, confidentiality, and knowledge sharing.
  • Researchers must balance multiple roles: researcher, participant, friend, and sometimes adversary.

🔗 Fully informed consent

Fully informed consent: the process of obtaining permission from human subjects, after thoroughly conveying the risks, benefits, methods, and purpose of the study.

  • Essential when engaging in research involving human subjects.
  • Participants must be "fully informed as they consent to their participation."
  • Links to qualitative data collection: consent is not a one-time event but an ongoing ethical commitment throughout data collection.

Section 9.4: Research Ethics in Quantitative Research

🧭 Overview

🧠 One-sentence thesis

Quantitative research in political science shares core ethical principles with qualitative approaches but faces distinct challenges around objectivity, subjectivity, and the need for data access and transparency.

📌 Key points (3–5)

  • Core commonality: All political science research shares fundamental ethical principles regardless of method.
  • Key differences: Quantitative and qualitative approaches differ in how they address objectivity and subjectivity issues.
  • Quantitative-specific concerns: Data access, production transparency, and analytical transparency are central ethical and analytical priorities for quantitative researchers.
  • Common confusion: Ethical principles are universal across methods, but their application and specific challenges vary between qualitative and quantitative work.
  • Epistemic power and bias: Researchers possess "epistemic power" and can never be entirely value-neutral; awareness of personal biases is essential.

🔬 Shared foundations vs. methodological differences

🤝 Universal ethical principles

  • The excerpt emphasizes that all political science research shares core ethical principles.
  • These principles apply regardless of whether the research is qualitative or quantitative.
  • The foundation includes protecting subjects, ensuring informed consent, and minimizing harm (as outlined in earlier sections).

🔀 How approaches diverge

  • Objectivity and subjectivity: Quantitative and qualitative methods have "respective approaches to addressing issues associated with objectivity and subjectivity."
  • The excerpt does not detail how they differ, but signals that the same ethical concern (balancing objectivity/subjectivity) is handled differently depending on method.
  • Don't confuse: having the same ethical principle does not mean using the same technique to uphold it.

📊 Quantitative-specific ethical priorities

🔓 Data access

One key concern for the quantitative researcher is facilitating data access, which carries both ethical and analytical benefits.

  • Making data available to others serves both ethical and analytical purposes.
  • Ethically: transparency allows verification and accountability.
  • Analytically: others can replicate, validate, or build on findings.

🛠️ Production transparency

  • Production transparency: clarity about how data were generated or collected.
  • This includes documenting sources, measurement procedures, and any transformations applied to raw data.
  • Example: A researcher using survey data should disclose sampling methods, question wording, and coding decisions.

🔍 Analytical transparency

  • Analytical transparency: clarity about how conclusions were derived from data.
  • This means documenting statistical techniques, model specifications, and decision rules.
  • Readers and reviewers should be able to trace the path from data to results.
  • Why it matters: without analytical transparency, findings cannot be independently verified or critiqued (see the sketch after this list).
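
As one way to picture production and analytical transparency in practice, here is a minimal sketch with simulated data. The variable names, coding rules, and output file are hypothetical, and it assumes the NumPy, pandas, and statsmodels libraries.

```python
# A minimal sketch (simulated data, hypothetical names): documenting how data
# were produced and how the analysis was run so others can retrace the steps.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300

# Production transparency: in real work this frame would come from a documented
# source, and every coding decision below would be recorded in a codebook.
df = pd.DataFrame({
    "education_yrs": rng.integers(8, 21, n),
    "age": rng.integers(18, 90, n),
    "turnout_item": rng.choice(["yes", "no"], n),
})
df["voted"] = (df["turnout_item"] == "yes").astype(int)  # coding decision, documented

# Analytical transparency: the exact model specification is written out in one
# place so readers can re-derive the results from the shared data.
model = smf.logit("voted ~ education_yrs + age", data=df).fit()
print(model.summary())

# Sharing the cleaned data and this script alongside the paper lets others
# verify the path from data to published estimates.
df.to_csv("survey_clean_example.csv", index=False)
```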

🧠 Epistemic power and researcher responsibility

⚖️ Epistemic power and value-neutrality

Because academics possess "epistemic power," it is essential to be aware that we can never be entirely value-neutral or eliminate our personal biases as we conduct political research.

  • Epistemic power: the authority and influence researchers have in shaping what counts as knowledge.
  • Researchers cannot be completely objective; personal biases and values inevitably influence design, analysis, and interpretation.
  • Ethical practice requires awareness of this limitation, not the impossible goal of eliminating it.

🤲 Giving back to communities

  • The excerpt stresses that researchers should "reflect on how the results of a study may impact those that made the study possible."
  • Ethically, researchers should "strive to bring something of value back to the community and/or persons that made your study possible."
  • This applies to quantitative work just as it does to qualitative: data sources often involve real people and communities.
  • Example: A quantitative study using survey responses from a community should consider sharing findings in an accessible format, not just in academic journals.
  • The excerpt warns against being "parasitic"—taking data without reciprocating benefit.

🔄 Joining a community of practice

  • When research is conducted ethically, "you join a community in search for some common good…you discover that research focused on the best interest of others is also your own."
  • Ethical research aligns the researcher's interests with those of subjects and the broader community.
  • This framing applies across methods, including quantitative work.

📋 Key terms from the broader chapter

| Term | Definition (from excerpt) |
| --- | --- |
| Epistemology | Concerns the theory of knowledge creation, specifically its method, scope, and criteria for validation |
| Fully informed consent | The process of obtaining permission from human subjects after thoroughly conveying the risks, benefits, methods, and purpose of the study |
| Reflexivity | The act of reflecting on how a researcher's personal characteristics (biases, culture, etc.) may impact their research design, data collection, and interpretation processes |

  • These terms apply to both qualitative and quantitative research.
  • Reflexivity is especially relevant to the discussion of epistemic power and bias in quantitative work.

Section 9.5: Ethically Analyzing and Sharing Co-Generated Knowledge

🧭 Overview

🧠 One-sentence thesis

Researchers possess "epistemic power" through the knowledge they generate and must ethically reflect on how their findings impact study participants while striving to return value to the communities that made their research possible.

📌 Key points (3–5)

  • Epistemic power: Academics have authority and reproduce power through the knowledge they generate and disseminate, carrying weight beyond individual opinion.
  • Value-neutrality is impossible: Researchers can never be entirely value-neutral or eliminate personal biases in their scholarship and methodological choices.
  • Common confusion: Quantitative methods are not inherently more objective or ethical than qualitative methods; both approaches require ethical vigilance against bias.
  • Protection vs. transparency: Researchers must balance protecting human subjects (anonymity, confidentiality) with research integrity and transparency.
  • Reciprocity obligation: Ethically, researchers should bring findings and value back to the communities and individuals who made the study possible, not just publish for academic audiences.

🔬 Methodological misconceptions

🔬 Quantitative vs. qualitative ethics

  • Many students mistakenly believe quantitative methods are always transparent, objective, and therefore ethical, while qualitative methods relying on human communication are always subjective.
  • Reality: Quantification of political data involves human processes with plenty of opportunities for bias, especially when processes are not transparent.
  • Conversely: Qualitative interviews can be conducted to reduce potential biases.
  • Don't confuse: The method itself with how it is practiced—both approaches require ethical vigilance.

🎯 What makes good political science

The excerpt cites Smith's argument that political scientists have not agreed on what makes good research:

  • Inference testing is essential but requires substantively interesting questions and hypotheses.
  • Both qualitative and quantitative approaches are essential: one for forming interesting hypotheses, the other for improving analytical technique.
  • Because approaches differ in how data are collected and analyzed, ethical standards must be approached accordingly—some standards are more or less relevant to each approach.

⚡ Epistemic power and researcher responsibility

⚡ What is epistemic power

"Epistemic power": the authority and influence academics have and reproduce through the knowledge they generate as researchers and disseminate through writing and lecturing.

  • Political scientists are perceived as experts on political and social problems.
  • When researchers make claims about political and social issues in the public sphere, these claims carry more weight than an individual's opinion.
  • Example: Research findings often become the basis for political and social changes with serious real-life implications; policies may be produced based on research.

🚫 The impossibility of neutrality

The excerpt emphasizes (citing Klotz and Lynch):

  • Researchers can never be entirely value-neutral or eliminate personal biases.
  • Biases are replicated or challenged through scholarship and individual methodological choices.
  • From using translators in the field to employing transcription services, researchers must consider potential for bias to creep into analysis.
  • Researchers must avoid misrepresenting study participants and always consider their wellbeing.

🔍 Ethical analysis strategies

🔍 Reflexive approaches during analysis

The excerpt recommends reflexivity when analyzing data:

  • Offer reflections on instances where fieldwork resulted in dissonance with the initial theoretical framework.
  • Note where interviewees challenged and/or enriched the initial line of inquiry.

✅ Member checking

"Member checking": a strategy in which findings are discussed with those studied in the field.

  • Purpose: Address dynamics associated with a researcher's subjectivities (e.g., confirmation bias).
  • Does not deny or undermine the researcher's epistemological role.
  • Balance: This is ultimately your study; it would be unwise to let participants editorialize findings.
  • However: If a quote might make participants uncomfortable, misrepresent their meaning, or worse, researchers must take this under consideration.

📝 Ethical publication practices

📝 Pre-publication considerations

Qualitative researchers in particular have an ethical responsibility to consider how research will be used, given the trust, intimacy, and potential for human impact.

Important to reflect on:

  • How information may impact those who made the study possible.
  • Many research agendas pertain to sensitive topics that might put researcher and/or participants in danger.
  • Interviews and surveys must be conducted on the basis of anonymity, with original data stored securely.
  • Data must be evaluated now that all pieces have come together and are almost ready for publication.

🔐 Balancing confidentiality and transparency

| Challenge | Ethical approach |
| --- | --- |
| Cannot provide full citations for confidential interviews | Use quotes/accounts shared by more than one person or source |
| Anonymous sources need credibility | Triangulate accounts and link with contextual information (e.g., "according to several soldiers involved in the conflict") |
| Subsequent researchers unable to replicate findings | Be as transparent as possible while protecting human subjects |

  • The excerpt emphasizes a "delicate balance" between protecting human subjects and maintaining research integrity.

🤝 Reciprocity and giving back

After publication, researchers face an ethical obligation:

  • Study participants who made research possible likely do not subscribe to academic journals (e.g., American Journal of Political Science).
  • Researchers are routinely criticized for failing to bring findings back to individuals/communities under investigation.
  • When findings are shared, they may not be provided in a way participants can understand, use, or verify.

Ethical imperative: Avoid being parasitic with research; strive to bring something of value back to the community and/or persons that made the study possible.

The excerpt concludes (citing Booth, Colomb, and Williams):

"When you conduct and report your research ethically, you join a community in search for some common good…you discover that research focused on the best interest of others is also your own."

  • This may seem like an onerous last step with little instrumental reward, but it is part of ethical research practice.

Congratulations!

Section 10.1: Congratulations!

🧭 Overview

🧠 One-sentence thesis

Completing a political science research methods course is a significant accomplishment that opens pathways to further study, research opportunities, and advanced degrees in the discipline.

📌 Key points (3–5)

  • Recognition of achievement: Reading through new and challenging material from beginning to end deserves acknowledgment, as these wins are often minimized or ignored.
  • Rarity of the course: Political science research methods courses at the introductory level are uncommon—only 12 out of 114 California community colleges offer them.
  • Path forward involves consultation: Students should consult professors, explore upper-division courses, and consider graduate study.
  • Common confusion: Don't think of college as just earning a degree or taking single classes; instead, view it as continuous growth in knowledge, skills, and abilities.
  • Special opportunity: College is a unique time for personal growth, professional exploration, and deep intellectual engagement with topics of interest.

🎉 Recognizing the accomplishment

🎉 Why this matters

  • The excerpt emphasizes that completing the book is "important to recognize."
  • Reading new, challenging material requires time and effort.
  • Too often people "minimize these wins or ignore them completely."
  • Example: A student who finishes a difficult textbook should pause and acknowledge the work invested, rather than immediately moving to the next task.

🗺️ Understanding the landscape

🗺️ How rare this course is

Political science research methods courses at the introductory (first or second year) level are rare.

  • Only 12 out of 114 community colleges in California offer this course.
  • Even at the University of California, Merced, the political science program has just one lower-division and one upper-division research methods course.
  • The excerpt notes it is "fair to expect" these courses will become more common in the future.

📈 Why rarity matters

  • Because the course is uncommon, students who complete it have gained exposure that many peers have not.
  • This positions them well for future opportunities in the discipline.

🛤️ Next steps for students

🛤️ Consult your professor

  • Professors who taught the course "will have a sense of additional opportunities" available.
  • At community colleges, professors may offer individual or group research opportunities.
  • These can be informal (weekly office hours meetings) or formal (special topics or individualized studies courses).
  • Example: The excerpt mentions a student at Cerritos Community College who completed 5 units of Directed Studies in Political Science in spring and summer 2004.

🎓 Explore upper-division courses

  • Community college students cannot take upper-division courses at their two-year institutions, but they can research what's available at four-year institutions they plan to transfer to.
  • Students already at four-year colleges or universities should meet with professors and academic advisors to "map out what upper division courses can help strengthen the research methods."
  • Planning ahead is important for developing knowledge, skills, and abilities.

🎯 Consider graduate study

  • The path forward includes "seriously considering earning a master's or doctoral degree in the discipline."
  • Graduate study represents a continuation of the intellectual journey begun in this course.

💡 Reframing the college experience

💡 Beyond degrees and single classes

  • Don't confuse: The excerpt warns against being "fixated on the idea of a single class or just earning a degree."
  • Instead, think of college "in a continuous way"—focus on:
    • The amount of knowledge you're acquiring
    • The number of skills you're developing
    • The number of abilities you're practicing

💡 The special nature of this time

  • College or university is "a special time in your life" for three reasons:
    1. Personal growth: You grow as a person.
    2. Professional opportunities: You introduce yourself to career paths.
    3. Intellectual engagement: You engage with a range of topics and later focus on a specific discipline that intrigues you.
  • The excerpt emphasizes that "this intellectual experience is something to be embraced."
  • Example: Rather than viewing each course as a box to check, a student might see each one as adding to a cumulative understanding of political science research and methods.

Section 10.2: The Path Forward

🧭 Overview

🧠 One-sentence thesis

After completing an introductory political science research methods course, students should actively plan their next steps by consulting professors, exploring upper-division courses, and considering graduate education to deepen their methodological knowledge.

📌 Key points (3–5)

  • Rarity of early methods courses: only 12 out of 114 California community colleges offer this course, making completion a notable accomplishment.
  • Three concrete next steps: consult your professor, look ahead to upper-division courses, and seriously consider graduate study.
  • Shift in mindset: think of education as continuous accumulation of knowledge, skills, and abilities rather than just earning a degree.
  • Common confusion: students often fixate on single classes or degrees instead of viewing college as an ongoing intellectual journey.
  • Graduate school planning: even if not immediately relevant, start thinking about master's or PhD programs early by asking professors, visiting program websites, and making exploratory calls.

🎓 Why this course matters

🎓 The rarity of early methods training

  • Political science research methods courses are uncommon at the lower-division level.
  • Only 12 out of 114 California community colleges offer this course.
  • Even at four-year institutions like UC Merced, the political science program typically has just one lower-division and one upper-division methods course.
  • The excerpt predicts that methods courses will become a staple in most political science programs in the future.
  • Why this matters: completing this course puts you ahead of most students and positions you well for advanced work.

🏆 Recognizing the accomplishment

The excerpt emphasizes that taking the time and making the effort to read through new and challenging material is worthy of recognition.

  • Students often minimize or ignore these wins.
  • Reading a methods book from beginning to end is a concrete achievement that deserves recognition.
  • Don't confuse: this is not just about finishing a textbook—it's about engaging with new, challenging material.

🗺️ Three concrete next steps

🗺️ Consult your professor

  • Your professor knows what additional opportunities are available at your institution.
  • For community college students: professors may offer individual or group research opportunities, either informally (weekly office hours) or formally (special topics or individualized studies courses).
  • Example: The author completed 5 units of Directed Studies in Political Science at Cerritos Community College in spring and summer 2004.
  • This one-on-one or small-group work can extend your methods training beyond the classroom.

🗺️ Look ahead to upper-division courses

  • For community college students: research what's available at four-year institutions you plan to transfer to (since two-year institutions don't offer upper-division courses).
  • For students already at four-year institutions: meet with professors and academic advisors to map out which upper-division courses will strengthen your research methods skills.
  • The excerpt stresses planning ahead: "It's important to plan ahead about how you want to develop your knowledge, skills, and abilities."

🗺️ Consider graduate education

  • Think about earning a master's or PhD, even if it's not at the forefront of your mind right now.
  • The excerpt advises: "My advice here is rather simple: just think about it."
  • Let the idea evolve as you work toward your bachelor's degree.
  • Take concrete steps to position yourself:
    • Ask professors about their graduate school experience.
    • Visit websites of graduate programs.
    • Call institutions and ask to speak with someone about what it takes to earn a master's or PhD.
  • These actions help you determine if graduate school is the right next step for you.

🧠 Reframing your educational mindset

🧠 From degrees to continuous growth

  • The excerpt challenges the traditional view: "Most of the time, we can be fixated on the idea of a single class or just earning a degree."
  • Better approach: think of college in a continuous way—focus on the amount of knowledge you're acquiring, the number of skills you're developing, and the number of abilities you're practicing.
  • Don't confuse: a degree is important, but it's the vehicle for intellectual growth, not the sole goal.

🧠 The intellectual experience

"Being in college or university is a special time in your life not only because you grow personally, introduce yourself to professional opportunities, but you get to intellectually engage in a range of topics and later on into a specific discipline that you're intrigued by."

  • This intellectual engagement is something to embrace for itself, not just for the credential.
  • The excerpt emphasizes "all the other tangibles and intangibles that come with learning about the world."

🧠 Balancing life and looking ahead

  • The excerpt acknowledges real-life concerns:
    • Traditional students: paying for college, where to live, making friends, doing laundry.
    • Returning/nontraditional students: balancing work and school, childcare, carving out homework time.
  • "All this is what we call life, but part of living life is looking ahead."
  • Even when stuck in day-to-day worries (rent, mortgage, dinner), you must think about the future.
  • Example: While concerned about immediate needs, still let the idea of furthering your education evolve in your mind during the "long hike up to your goal of earning a bachelor's degree."

📚 Practical advice summary

| Next step | What to do | Why it matters |
| --- | --- | --- |
| Consult professor | Ask about research opportunities (informal or formal) | Extends methods training beyond the classroom |
| Map upper-division courses | Meet with advisors; research transfer institutions | Strengthens knowledge, skills, and abilities systematically |
| Explore graduate school | Ask professors, visit websites, make calls | Positions you for advanced study even if not immediately pursuing it |
| Shift mindset | Focus on continuous learning, not just degrees | Maximizes the intellectual and personal value of your education |

Section 10.3: Frontiers of Political Science Research Methods

🧭 Overview

🧠 One-sentence thesis

Geographic Information Systems (GIS) and spatial statistics represent a cutting-edge frontier in political science research methods by integrating geographic location data to reveal patterns, relationships, and influences that traditional methods cannot capture.

📌 Key points (3–5)

  • What GIS does: uses spatial data to understand the world, identify relationships, and discover patterns with respect to place.
  • How political science uses GIS: researchers conduct and visually present findings (e.g., evaluating nuclear power plant sites), and practitioners have long used maps for districting and campaign strategy.
  • Why spatial statistics matter: they allow researchers to mathematically connect units of observation based on geographic location, measuring influence between neighboring units.
  • Common confusion: traditional statistics vs spatial statistics—traditional methods assume independence (one unit doesn't affect another), but spatial statistics recognize that geographic neighbors often influence each other.
  • Real-world application: spatial methods help answer questions like whether neighboring states follow or oppose each other's policy changes (e.g., gas taxes).

🗺️ Geographic Information Systems (GIS)

🗺️ What GIS is and does

Geographic Information Systems (GIS): tools that use spatial data to help understand the world, identify relationships, and discover patterns with respect to place.

  • Before widespread GIS, people relied on paper maps and simple distance-over-speed calculations without integrating traffic or weather data.
  • Modern GIS integrates multiple data layers (location, traffic, weather, demographics) to provide richer analysis.
  • Example: Google Maps is a familiar GIS application that helps navigate from point A to point B using real-time spatial data.

🏛️ GIS in political science history

  • Political use of spatial data is not new—it dates back to the founding of the United States.
  • Historical examples:
    • State boundaries: longitudinal and latitudinal lines were used to carve out new states.
    • Redistricting: state legislatures used maps to draw congressional or legislative districts to favor the party in power.
    • Campaign strategy: campaigns used maps of polling locations to deploy volunteers and encourage voting.
  • These are "rudimentary GIS"—maps merged with political knowledge.

🔬 Modern GIS research applications

  • Researchers increasingly use GIS to conduct research and visually present findings.
  • Example: deciding where to build a nuclear power plant in Nigeria.
    • The Nigerian Atomic Energy Commission was tasked with answering this question.
    • Eluyemi et al. (2020) used GIS software to compare proposed sites with tectonic maps.
    • They presented 12 figures to geographically contextualize potential sites.
    • This public information enables government officials, interest groups, and citizens to meaningfully debate the utility of nuclear energy.
  • Why it matters: GIS transforms technical or engineering questions into accessible political debates by visualizing spatial relationships.

📊 Spatial Statistics

📊 What makes spatial statistics unique

Spatial statistics: a field that integrates geocoded data (geographic location information) into statistical analyses.

  • Traditional statistics has been a staple in political science for decades.
  • Spatial statistics adds a geographic dimension to standard statistical methods.
  • The key difference: spatial statistics mathematically connect units of observation based on their geographic location.

🔗 The independence assumption problem

  • Traditional statistics assumption: units of observation are independent and identically distributed.
    • How one person responds to a survey should have no bearing on how another person responds.
    • What California does with gun control laws has no influence on what Oregon, Nevada, or Arizona do.
  • Reality: we can imagine how the actions of one person or state may influence another person or state.
  • The gap: traditional statistics cannot establish these geographic connections, even though we know they exist.

🧮 How spatial statistics work

  • Spatial statistics allow researchers to:
    • Mathematically connect units based on geographic proximity.
    • Measure the influence that one unit (person, state) can have on another.
    • Better determine the strength of relationships between factors and outcomes.
  • By accounting for geographic influence, researchers get more accurate estimates of causal relationships.

🚗 Example: gas tax policy diffusion

Scenario: California increases its gas tax. What happens in neighboring states?

  • Two possible outcomes:
    1. Oregon, Nevada, Arizona also increase their gas tax to keep up with California.
    2. Neighboring states lower their gas tax to demonstrate competitiveness compared to California.
  • What spatial statistics add: researchers can consider geographic proximity while also accounting for state demographics and political party control.
  • Why this matters: spatial methods reveal whether policies spread through imitation or competition among neighbors (see the sketch below).
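
To illustrate how a spatial weights matrix connects neighboring states, here is a minimal NumPy sketch. The contiguity matrix is simplified to four states and the gas tax figures are hypothetical, not real policy data.

```python
# A minimal sketch (hypothetical values): connecting states through a spatial
# weights matrix and computing the "spatial lag" of a gas tax variable, i.e.,
# the average tax among each state's geographic neighbors.
import numpy as np

states = ["CA", "OR", "NV", "AZ"]

# 1 = shares a border, 0 = does not (simplified contiguity matrix).
W = np.array([
    [0, 1, 1, 1],   # CA borders OR, NV, AZ
    [1, 0, 1, 0],   # OR borders CA, NV
    [1, 1, 0, 1],   # NV borders CA, OR, AZ
    [1, 0, 1, 0],   # AZ borders CA, NV
])

# Row-standardize so each state's neighbor weights sum to 1.
W = W / W.sum(axis=1, keepdims=True)

gas_tax = np.array([0.58, 0.38, 0.23, 0.18])  # hypothetical cents-per-gallon rates

# Spatial lag: for each state, the weighted average of its neighbors' gas taxes.
spatial_lag = W @ gas_tax
for s, own, lag in zip(states, gas_tax, spatial_lag):
    print(f"{s}: own tax = {own:.2f}, neighbors' average = {lag:.2f}")
```

In a spatial regression, this lagged variable enters the model alongside state demographics and party control, which is how researchers test whether neighbors imitate or undercut one another.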

🔍 Traditional vs Spatial Methods

🔍 Key distinctions

| Aspect | Traditional Statistics | Spatial Statistics |
| --- | --- | --- |
| Independence assumption | Assumes units do not influence each other | Recognizes geographic neighbors influence each other |
| Geographic data | Not integrated | Geocoded data mathematically connects units |
| What it measures | Direct relationships between variables | Direct relationships + spatial influence between units |
| Example limitation | Cannot measure how California's policy affects Nevada | Can measure policy diffusion or competition between neighbors |

⚠️ Don't confuse

  • Not just adding location as a variable: spatial statistics doesn't simply include "state" or "city" as a control variable; it mathematically models how proximity creates interdependence.
  • Not just mapping: GIS visualizes spatial patterns; spatial statistics quantifies spatial relationships and tests hypotheses about geographic influence.

Section 10.4: How to Contribute to this OER

🧭 Overview

🧠 One-sentence thesis

Open Educational Resources invite students—not just professors—to actively contribute to textbooks by updating, clarifying, or expanding content under a CC-BY-NC license.

📌 Key points (3–5)

  • Traditional textbook model: professors write, students consume—students are viewed as consumers, not producers of content.
  • OER changes the model: this textbook is freely available and openly editable by anyone, including students who just finished reading it.
  • What you can contribute: clarify definitions, expand under-explained sections, add images, or even draft entirely new chapters.
  • Common confusion: OER cultivation was historically reserved for professors and academics, but that restriction is changing—students are now personally invited to participate.
  • Why it matters: OERs democratize knowledge creation and broaden understanding by including more voices beyond the traditional academy.

🎓 The traditional textbook model vs. OER

📚 How traditional textbooks work

  • Who writes: individual professors or groups of professors with specialized knowledge, skill, and ability.
  • Who reads: students are the primary audience.
  • The problem: students consume the textbook but are not considered producers of the content.
  • The excerpt notes this is "interesting" and "in some ways, this makes no sense"—students can and should contribute to what they read.

🔓 What makes OER different

Open Education Resource (OER): a textbook or learning material that is freely available to everyone and invites everyone to participate in its cultivation.

  • License: this textbook uses a CC-BY-NC license, meaning it is open for modification and sharing (non-commercial).
  • Who can contribute: anyone, not just professors who "made it through graduate school, joined the ranks of the professoriate, and maintained their membership in the academy."
  • The shift: OER represents a change from exclusive academic authorship to broader, inclusive participation.

✏️ How students can contribute

✏️ Types of contributions you can make

The excerpt provides concrete examples of what students can do:

| Problem | Solution |
| --- | --- |
| A key term definition was unclear | Update it to make it clear |
| A chapter section was under-explained | Add to it or re-write it altogether |
| A picture would have been worth a thousand words | Find a CC-BY-NC picture and include it |
| An entire topic is missing | Draft a new chapter and add it to the resource |

  • Don't confuse: contributing does not mean you need to be an expert—if something was unclear to you as a student, your clarification will help future students.

🤝 How to get started

  • Reach out: contact the authors and co-authors of the various chapters; they will be "more than happy to answer your questions and encourage you to become a contributor."
  • Learn more: visit the Academic Senate for California Community Colleges' Open Educational Resources Initiative webpage for information and resources: https://www.asccc.org/directory/open-educational-resources-initiative-oeri
  • Recognize the opportunity: "the end of this book has been reached, but it is just the beginning for you to contemplate how you want to contribute."

🌍 Why OER matters

🌍 Democratizing knowledge creation

  • Broader participation: OER invites "more, and more people" to shape knowledge and create understanding of our world.
  • Breaking down barriers: cultivation of textbooks is no longer reserved only for those who completed graduate school and joined the academy—"changes afoot."
  • The beauty of OER: freely available to everyone, which naturally invites everyone to participate in improving and expanding the resource.

🎯 Shaping the future of learning

  • Example: a student who just finished reading this book can immediately become a contributor, turning their learning experience into material that helps future students.
  • The excerpt emphasizes that students are "personally invited" to contribute—this is not a passive suggestion but an active call to action.