Mixed-Methods Data Analysis Strategies
🧭 Overview
🧠 One-sentence thesis
Mixed-methods data analysis requires researchers to not only master both qualitative and quantitative analysis techniques but also to strategically integrate findings from both strands to generate high-quality meta-inferences that address the study's purpose and research questions.
📌 Key points (3–5)
- Core requirement: Competence in qualitative and quantitative analyses plus knowledge of integration techniques to generate meta-inferences
- Six critical criteria guide decisions: (1) number of data types; (2) number of analysis types; (3) time sequence; (4) priority of components; (5) number of phases; (6) analysis orientation
- Quality assessment is two-tiered: First establish validity/reliability for each strand using traditional criteria, then assess mixed-methods-specific quality (design quality and interpretive rigor)
- Common confusion—integration vs. juxtaposition: Simply conducting both analyses separately is insufficient; the value emerges from thoughtful integration that yields insights neither method alone could provide
- Validation must address both strands and their integration: Traditional qualitative and quantitative validity criteria apply to respective strands, but additional mixed-methods quality criteria assess the integration itself
🎯 Critical criteria for mixed-methods data analysis
📊 Number of data types analyzed
Traditional approach: Analyze qualitative data qualitatively, quantitative data quantitatively—both datasets remain distinct.
Conversion designs (monostrand): One data type is transformed into the other.
- Quantitizing: Qualitative → quantitative (e.g., content analysis of interviews → frequency counts → factor analysis)
- Qualitizing: Quantitative → qualitative (e.g., regression results → narrative profiling)
Example: Ferguson & Hull (2018) gathered quantitative personality data, conducted CFA, then used latent profile analysis to identify three qualitative profiles (reserved, well-adjusted, excitable), discussing results from both perspectives.
When to use conversion: Only if transformation brings additional insights beyond what the original data type provides. Gathering both types is preferable when feasible to avoid loss of depth (qualitative) or overgeneralization (quantitative).
Don't confuse: Monostrand conversion designs with multistrand designs—conversion starts with one data type and transforms it; multistrand collects both independently.
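As a concrete illustration of quantitizing, here is a minimal Python sketch (the participant IDs and theme names are hypothetical, not from any cited study) that converts coded interview data into a binary participant-by-theme matrix plus theme frequencies, ready for statistical analysis:

```python
from collections import Counter

# Hypothetical coded interviews: each participant's transcript has been
# reduced to a set of qualitative theme codes (theme names are illustrative).
coded_interviews = {
    "P1": {"work_pressure", "peer_support"},
    "P2": {"work_pressure"},
    "P3": {"peer_support", "autonomy"},
    "P4": {"work_pressure", "autonomy"},
}

themes = sorted({t for codes in coded_interviews.values() for t in codes})

# Quantitizing: convert each participant's codes into a binary (0/1) row,
# yielding a participant-by-theme matrix suitable for statistical analysis.
matrix = {
    pid: [1 if t in codes else 0 for t in themes]
    for pid, codes in coded_interviews.items()
}

# Simple descriptive output: how often each theme occurs across participants.
frequencies = Counter(t for codes in coded_interviews.values() for t in codes)
```

The resulting matrix could then feed frequency counts, correlations, or factor analysis, as in the content-analysis-to-factor-analysis example above.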
🔢 Number of analysis types used
Minimum requirement: At least one qualitative + one quantitative analysis technique.
Decision factors:
- Types of data collected
- Study purposes (e.g., if explaining why, descriptive statistics insufficient—need interviews/grounded theory + causal techniques like regression/SEM)
Examples of analysis methods:

| Qualitative methods | Quantitative methods |
|---|---|
| Case study, grounded theory, narrative analysis, content analysis, discourse analysis | Descriptive stats, regression, ANOVA, factor analysis, SEM, cluster analysis |
⏱️ Time sequence of analysis
Three strategies:
- Concurrent: Analyze both simultaneously—appropriate for corroboration/complementarity purposes
- Sequential qual→quant: Qualitative findings inform quantitative phase—three variations:
  - Form groups via qualitative data, compare on quantitative measures (typology development)
  - Form themes qualitatively, confirm quantitatively (e.g., content analysis → factor analysis)
  - Establish theoretical order qualitatively, confirm quantitatively
- Sequential quant→qual: Quantitative findings inform qualitative phase—three variations:
  - Form groups quantitatively, compare qualitatively
  - Form themes quantitatively, confirm qualitatively
  - Establish theoretical model quantitatively, confirm qualitatively
Example: Sarker et al. (2018) used qualitative data to identify key variables affecting work-life conflict, then tested effects quantitatively.
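The first qual→quant variation above—forming groups qualitatively, then comparing them on a quantitative measure—can be sketched in a few lines of Python (the group labels and scores are invented for illustration):

```python
# Hypothetical typology development: participants are first assigned to
# groups based on qualitative coding, then compared on a survey measure.
qualitative_groups = {
    "adopters": ["P1", "P3"],
    "skeptics": ["P2", "P4", "P5"],
}
satisfaction_scores = {"P1": 4.5, "P2": 2.0, "P3": 4.0, "P4": 3.0, "P5": 2.5}

# Compare the qualitatively-derived groups on the quantitative measure.
group_means = {
    group: sum(satisfaction_scores[p] for p in members) / len(members)
    for group, members in qualitative_groups.items()
}
```

In a real study the comparison would use an inferential test (t-test, ANOVA) rather than raw means, but the sequencing logic is the same.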
🎭 Priority of analytical components
Equal-status design: Both components emphasized equally—more likely to involve sophisticated techniques for both.
Dominant-less dominant design: One component has greater priority.
- Qualitative-dominant: Qualitative component significantly higher priority
- Quantitative-dominant: Quantitative component significantly higher priority
- Less dominant component typically uses less complex techniques, may only address limited aspects
Priority affects analysis complexity and depth of insights from each strand.
📈 Number of analytical phases
Seven phases (Onwuegbuzie & Teddlie, 2003)—not all required in every study:
- Data reduction: Reduce dimensionality
  - Qualitative: coding to create themes/categories
  - Quantitative: factor analysis
- Data display: Visual presentation
  - Qualitative: various matrices (see Table 7-2 in excerpt)
  - Quantitative: tables, graphs
- Data transformation: Quantitizing or qualitizing
- Data correlation/comparison: Correlate/compare quantitative with qualitative data
- Data consolidation: Combine to create new/consolidated variables
- Data comparison: Compare data from different sources
- Data integration: Integrate into coherent whole
Data display formats (examples from Miles et al.):
- Within-case: context charts, time-ordered matrices, role-ordered matrices, conceptually clustered matrices, causal networks
- Cross-case: partially ordered meta-matrices, case-ordered descriptive matrices, scatterplots, causal models
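A minimal sketch of the data consolidation phase, assuming a quantitized qualitative indicator and a survey score per participant (the variable names and the weighting are illustrative only, not from any cited study):

```python
# Hypothetical consolidation: a quantitized qualitative indicator
# (1 = theme present in interview) is combined with a survey score
# to create a new consolidated variable per participant.
records = [
    {"id": "P1", "theme_present": 1, "survey_score": 4.0},
    {"id": "P2", "theme_present": 0, "survey_score": 3.0},
    {"id": "P3", "theme_present": 1, "survey_score": 2.0},
]

for r in records:
    # Consolidated "engagement" index: survey score boosted when the
    # qualitative theme was observed (the 1.5 weight is arbitrary here).
    r["engagement"] = r["survey_score"] + 1.5 * r["theme_present"]
```

The consolidated variable can then enter later phases (comparison, integration) as a single input drawing on both strands.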
🧩 Analysis orientation
Three approaches:
Case-oriented: Focus on selected cases to analyze meanings, experiences, perceptions—goal is particularizing and analytical generalization. Typically used in qualitative research.
Variable-oriented: Identify probabilistic relationships among constructs—goal is external generalization. Typically used in quantitative research.
- Strength: Good for finding relationships in large populations
- Weakness: Poor at handling causal complexity, findings often very general
Process-oriented: Evaluate processes/experiences over time, linking processes to variables and experiences to cases—combines case and variable approaches. Ideal for mixed-methods.
Example: Researchers studying companies implementing AI tools might combine case-oriented analysis of several companies with variable-oriented analysis of survey data from a larger sample—a process-oriented blend.
Don't confuse: The orientation with the method—qualitative methods generally use case-oriented approaches and quantitative generally use variable-oriented, but mixed-methods can strategically combine both.
✅ Validation in mixed-methods research
🔍 Validation in qualitative research
Design validity:
- Descriptive validity: Accuracy of reported events, behaviors, settings
  - Strategy: Provide actual descriptions
- Credibility: Confidence in truth of findings
  - Strategies: Prolonged engagement, peer debriefing, triangulation, member checks, negative case analysis
- Transferability: Degree results generalize to other contexts
  - Strategies: Thick descriptions, theoretical/purposeful sampling
Analytical validity:
- Theoretical validity: Extent theoretical explanation fits data
  - Strategies: Extended data collection, theory triangulation, pattern matching
- Dependability: Describe changes in setting and their effects
  - Strategies: Audit trail, stepwise replication, code-recode, peer examination
- Consistency: Verify research steps
  - Strategies: Well-established protocols, qualitative database
- Plausibility: Whether findings fit data
  - Strategies: Explanation-building, pattern-matching, address rival explanations, logic models
Inferential validity:
- Interpretive validity: Accuracy of interpreting participants' views/thoughts/feelings
  - Strategies: Participant feedback, confirmability audit
- Confirmability: Degree results could be confirmed by others
  - Strategies: Reflective journal, use theory (single-case) or replication logic (multiple-case)
🔬 Validation in quantitative research
Measurement validity:
- Reliability: Repeatability/consistency
  - Types: Inter-rater, test-retest, parallel-forms, internal consistency
  - Strategies: Split-half method, compute reliability scores
- Construct validity: Degree inferences can be made from operationalizations to theoretical constructs
  - Types: Face, content, criterion-related, predictive, concurrent, convergent, discriminant, factorial
  - Strategies: Multitrait-multimethod matrix, demonstrate theoretically-related measures are interrelated, unrelated measures are not
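The split-half method mentioned under reliability can be computed directly: correlate respondents' scores on the two halves of the instrument, then step up with the Spearman-Brown prophecy formula, r_full = 2r / (1 + r), to estimate full-test reliability. A self-contained Python sketch with hypothetical half-test totals:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(half_a, half_b):
    # Correlate the two halves, then apply the Spearman-Brown
    # prophecy formula to estimate reliability of the full test.
    r = pearson(half_a, half_b)
    return 2 * r / (1 + r)

# Hypothetical odd-item and even-item totals for five respondents.
odd = [10, 12, 9, 14, 11]
even = [11, 13, 9, 15, 10]

r_half = pearson(odd, even)
reliability = split_half_reliability(odd, even)
```

Because the correlation is based on half-length tests, the stepped-up estimate is always at least as large as the raw half-test correlation.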
Design validity:
- Internal validity: Whether observed covariation reflects causal relationship
  - Strategies: Demonstrate temporal precedence, covariation, use randomization, control groups, rapid experiments
- External validity: Whether cause-effect holds across variations in persons, settings, variables
  - Strategies: Random selection/probability sampling, describe population to which results generalize
Inferential validity:
- Statistical conclusion validity: Validity of inferences about correlation between variables
  - Strategies: Assess statistical significance, assess Type I/II error possibilities
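To make the Type I error idea concrete, here is a small simulation (the sampling setup is illustrative): when the null hypothesis is actually true, a two-sided test at alpha = .05 should falsely reject in roughly 5% of samples.

```python
import random

random.seed(42)

def one_null_experiment(n=30):
    # Sample from a population where the true mean really is 0
    # (i.e., the null hypothesis holds).
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    se = 1 / n ** 0.5           # known sigma = 1, so a z-test applies
    z = mean / se
    return abs(z) > 1.96        # falsely reject H0 at the two-sided .05 level?

trials = 2000
false_positives = sum(one_null_experiment() for _ in range(trials))
type_i_rate = false_positives / trials   # should be close to 0.05
```

A Type II error is the mirror image: failing to reject when a real effect exists; its rate depends on effect size, sample size, and alpha (statistical power).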
🎯 Quality criteria specific to mixed-methods
Two quality domains:
Design quality (degree researcher selected appropriate procedures):
- Design suitability: Are methods appropriate for research questions? Does design match purpose?
- Design fidelity: Are methods implemented with necessary quality/rigor?
- Within-design consistency: Do components fit together seamlessly? Do strands follow logically?
- Analytical adequacy: Are analysis procedures appropriate/adequate to answer questions? Are mixed-methods strategies implemented effectively?
Interpretive rigor (degree of credible interpretations):
- Interpretive consistency: Do inferences closely follow findings? Are multiple inferences from same findings consistent?
- Theoretical consistency: Are inferences consistent with theory and field knowledge?
- Interpretive agreement: Would other scholars reach same conclusions? Do inferences match participants' constructions?
- Interpretive distinctiveness: Is each inference more credible/plausible than other possible conclusions?
- Integrative efficacy: Do meta-inferences adequately incorporate inferences from each strand? Are inconsistencies explored with explanations offered?
- Interpretive correspondence: Do inferences address study purposes/questions? Do meta-inferences meet stated need for mixed-methods?
Additional quality criteria (Onwuegbuzie & Johnson, 2006):
- Sample integration: For statistical generalizations from sample to population
- Inside-outside: Accurately present/utilize insider and observer views
- Weakness minimization: Weaknesses from one approach compensated by strengths from other
- Sequential legitimation: Minimize problems from revising sequence of phases
- Conversion legitimation: Quantitizing/qualitizing lead to interpretable data and high inference quality
- Paradigmatic mixing: Successfully combine paradigmatic assumptions into usable package
- Commensurability: Ability to make Gestalt switches between qualitative and quantitative lenses
- Multiple validities: Use all relevant strategies and meet multiple validity criteria
- Political legitimation: Consumers value meta-inferences from both components
⚠️ Threats to quality in mixed-methods research
🚨 Threats to internal validity/credibility
Research design stage:
- Insufficient/biased knowledge of earlier studies/theories
- Contradictions in logic among research questions, theory, hypotheses, statistical tests
Data collection stage (qualitative):
- Observer-caused effect (subjects behave differently when observed)
- Observer bias (insufficient data, gaps filled with researcher's values/projections)
- Researcher bias (personal biases/a priori assumptions not ruled out)
- Data access limitations (limited time on site, restricted access)
- Complexities of the human mind (subjects may mislead; statements are subject to memory lapses and other fallibilities)
- Serious reactivity (changes from being conscious of participating)
Data collection stage (quantitative):
- Instrumentation issues
- Selection bias
- Researcher bias
Data analysis/interpretation stage (qualitative):
- Lack of descriptive legitimation (settings/events)
- Lack of interpretive legitimation (meanings/perspectives)
- Lack of explanatory/theoretical legitimation (causal processes)
- Lack of generalizability
- Issues in various legitimation types (ironic, paralogical, rhizomatic, voluptuous)
- Confidential information problems
- Difficulty interpreting typicality
- Not all data analyzed equally
- Lack of structural corroboration (multiple data types to support/contradict)
- Confirmation bias (interpretations overly congruent with a priori hypotheses)
- Illusory correlation (identifying relationships that don't exist)
- Causal error (providing causal explanations without verification)
Data analysis/interpretation stage (quantitative):
- Statistical regression bias
- Confirmation bias
🌍 Threats to external validity/credibility
Qualitative research:
- Catalytic legitimation (degree study empowers/liberates research community)
- Communicative legitimation (disagreement on knowledge claims in discourse)
- Interpretive legitimation (extent interpretation represents group's perspective)
- Population generalizability (tendency to generalize rather than obtain insights into specific processes/practices)
Quantitative research:
- Selection bias (inadequate/non-random sample)
- Interaction of history and treatment effect (conditions change over time)
🔗 Threats to mixed-methods quality
Design quality threats (within-design consistency):
- Design inconsistent with research questions/purpose
- Observations/measures lack validity
- Data analysis techniques insufficient/inappropriate
- Results lack necessary strength for high-quality meta-inferences
- Inferences inconsistent with results
- Inferences inconsistent with research questions/purposes
Interpretive rigor threats:
- Inconsistency between inferences and findings from each strand
- Inconsistency with empirical findings of other studies
- Inconsistency across scholars and participants' construction of reality (other scholars disagree; interpretations don't make sense to participants)
- Failure to distinguish inferences from other possible interpretations
- Failure to adequately integrate findings
Addressing threats: Not every threat appears in every study, but each relevant threat should be considered and addressed in a mixed-methods inquiry to ensure high-quality meta-inferences.
📋 Practical examples
🔬 Example 1: Sonenshein et al. (2014)
Purpose: Complementarity (with developmental element)
Design: Sequential qualitative→quantitative
Qualitative phase (Study 1):
- Sample: 29 participants (14 students, 15 alumni) from Environment and Business Program
- Justification: "Highly deviant set of individuals" who took distinctive steps to address climate change
- Data: Interviews, field observations, secondary documents
- Analysis: Grounded theory approach (Strauss & Corbin)
  - Step 1: Initial data coding (open codes staying close to informants' interpretations)
  - Step 2: Theoretical categorization (abstract codes grouping self-meaning into generalizable categories)
  - Step 3: Theory induction (identified three aggregate dimensions: issue support challenges, self-assets, self-doubts)
- Display: Causal network showing first-order categories → second-order themes → aggregate dimensions; construct table with representative quotes
- Established credibility through mindfulness of different settings, theoretical validity through abstracting to match theoretical concepts
Quantitative phase (Study 2):
- Sample: 91 environmental issue supporters, active members of environmental groups
- Pre-test: Demonstrated construct validity for self-assets and self-doubts measures
- Data: Surveys + concealed observations (composting behavior, Earth Hour pledge signing)
- Analysis:
  - Confirmatory factor analysis (bootstrapped due to small sample; CFI=.96, RMSEA=.08)
  - Cluster analysis → identified three profiles: self-affirmers (low doubts, high assets), self-equivocators (high doubts, high assets), self-critics (high doubts, low assets)
  - Coded observed behaviors (composting, interpersonal influence, collective advocacy)
  - One-way ANOVA with post-hoc comparisons
- Findings: Self-critics engaged in lowest number of actions, followed by self-equivocators, then self-affirmers
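A one-way ANOVA of the kind used in Study 2 reduces to a few sums of squares. This sketch uses invented action counts for the three profiles (not the study's data) to show the mechanics:

```python
# Illustrative one-way ANOVA by hand: compare mean counts of
# issue-supportive actions across three hypothetical profiles.
groups = {
    "self-critics": [1, 2, 3],
    "self-equivocators": [3, 4, 5],
    "self-affirmers": [5, 6, 7],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-groups sum of squares: group sizes times squared deviations
# of group means from the grand mean.
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values()
)
# Within-groups sum of squares: squared deviations from each group mean.
ss_within = sum(
    (x - sum(g) / len(g)) ** 2 for g in groups.values() for x in g
)

df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)

# F = mean square between / mean square within.
f_stat = (ss_between / df_between) / (ss_within / df_within)
```

A large F relative to its critical value indicates the profile means differ; post-hoc comparisons then locate which pairs differ.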
Integration:
- Quantitized qualitative observation data (binary coding: 0 or 1)
- Used multiple analysis methods to ensure high-quality meta-inferences
- Components fit together seamlessly—Study 1 findings developed theory tested in Study 2
- Although the authors didn't explicitly discuss quality criteria, the methods were appropriate for the research questions, and a multi-method element was embedded in Study 2
Don't confuse: This sequential design with pure quantitative follow-up—the qualitative phase was essential for theory development, not just exploratory; Study 2 embedded both quantitative (survey) and qualitative (observation) elements.
🏥 Example 2: Stewart et al. (2017)
Purpose: Completeness
Design: Concurrent (field experiment + open-ended interviews)
Quantitative study:
- Design: Longitudinal quasi-experiment
- Sample: 224 providers (142 physician, 82 nonphysician)
- Data: Pre- and post-intervention data; monthly time series (7 months before, 37 months after team-based empowerment adoption)
- Measure: Same-day appointment access (% of same-day requests granted)
- Coding: Status as dichotomous (1=higher-status physician, 0=lower-status nonphysician); absolute coding for time
- Analysis: Discontinuous growth modeling (using R nlme package)
  - Added covariates
  - Tested different time forms (linear, quadratic)
  - Tested variations for random effects
  - Added status as predictor
  - Added interaction terms (status × time indicators)
  - Calculated pseudo-R² values
- Findings: Teams led by lower-status providers improved access faster than higher-status provider teams
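Discontinuous growth modeling hinges on how time is coded around the intervention. Below is a hedged Python sketch of one common coding scheme (the TIME/TRANS/POST convention; the study's own coding may differ—the notes above say it used absolute coding), applied to the 7 pre- and 37 post-intervention months:

```python
# Build the time-coding columns for a discontinuous growth model.
# month 0 = adoption of team-based empowerment (assumption for illustration).
rows = []
months = range(-7, 38)   # 7 months before through 37 months after
for i, month in enumerate(months):
    rows.append({
        "month": month,
        "TIME": i,                        # overall linear time trend
        "TRANS": 1 if month >= 0 else 0,  # step change at the intervention
        "POST": max(0, month),            # slope change after the intervention
    })
```

These coded columns (per provider-month) would then enter a mixed-effects model—in the study's case, via R's nlme—so that TRANS captures the immediate shift in access and POST the change in improvement rate.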
Qualitative study:
- Sample: Purposive sampling from larger VHA teams study
- Data: Semi-structured interviews (initial months + one-year follow-up)
- Focus: Facilitators and barriers to team-based empowerment implementation
- Analysis: Miles & Huberman (1994) procedures
  - Step 1: Identifying themes (leader identity work, leader delegation)
  - Step 2: Creating categories within themes
    - Identity work: embracing empowering identity, protecting hierarchical identity
    - Delegation: insufficient, overabundant, balanced
  - Step 3: Connecting patterns (nonphysician providers associated with embracing new identity)
- Guard against bias: Research assistant blind to findings/theory involved in coding
- Display: Explanatory effect matrix
Integration:
- Data collected concurrently but analyzed sequentially
- Quantitative findings confirmed hypothesis (high-status leaders less effective)
- Qualitative findings examined leader behaviors in-depth (identity and delegation facilitated effectiveness)
- Integrated using causal model (joint display showing status → identity/delegation → team effectiveness)
- A limitation: additional visual formats (e.g., a time-ordered matrix showing flow/sequence) could have strengthened the display
Quality assessment (not explicitly stated but evident):
- Methods appropriate for research questions
- Strands addressed different questions, together provided complete picture
- Integration well-reported through causal model
- Components fit together—quantitative identified what happened, qualitative explained how/why
Don't confuse: Concurrent data collection with concurrent analysis—this study collected both types simultaneously but analyzed them sequentially to build on quantitative findings with qualitative depth.
Don't confuse: The completeness purpose with complementarity—here, neither strand alone provided a complete picture (completeness), whereas complementarity examines different aspects that could each stand alone.