Human and Machine: A Journey - Part 8
Good for Whom? Unpacking Values in Evidence and Practice
Throughout our journey, we've explored how welfare systems balance human judgment with systematic approaches. Now we confront the question head-on: when we say something is evidence-based or good practice, we must ask - good for whom?
This question becomes increasingly crucial as welfare services and societal systems develop. While evidence-based practice and standardized methods can improve service quality, they also embed implicit value judgments about what matters and what counts as success. These choices shape both how we work and how we understand the very purpose of welfare services.
The Twin Traps of Value Assessment
As welfare services evolve, we find ourselves navigating between two dangerous extremes in how we assess value and effectiveness. The perils of these extremes are vividly illustrated by several historical examples that demonstrate how both over-reliance on empirical evidence and purely mechanistic reasoning can lead us astray.
Consider California's failed implementation of smaller class sizes in schools during the 1990s. Inspired by successful randomized controlled trials (RCTs) in Tennessee showing improved reading outcomes with smaller classes, California implemented the same intervention - but with dramatically different results. The failure stemmed from overlooking crucial supporting factors not captured in the RCTs: the availability of qualified teachers and suitable facilities. This case demonstrates how even strong empirical evidence can mislead when we fail to understand the contextual mechanisms that make interventions work [1].
Conversely, consider Dr. Benjamin Spock's mechanistic reasoning about infant sleeping positions [2]. Based on knowledge about how intoxicated adults might choke on their vomit while sleeping on their backs, he recommended against putting babies to sleep on their backs. This seemingly logical deduction proved tragically wrong - empirical evidence later showed that putting infants to sleep on their fronts increased the risk of sudden infant death syndrome. The case illustrates the dangers of relying solely on mechanistic reasoning without empirical validation.
The Complex Dance of Performance Measurement
Our drive to ensure quality through metrics creates a fundamental paradox that echoes through the history of welfare services. Just as California's class size reduction initiative showed how focusing on a single measurable outcome (class size) can blind us to crucial supporting factors, our current performance metrics often create unintended consequences. When social workers face strict targets and documentation requirements, they may adapt their behavior to satisfy metrics rather than serve clients' needs - or the focus on assessing needs may crowd out relational work with and around the client or family.
This raises the crucial question: good for whom? While performance metrics may look impressive to politicians and administrators, building trust in welfare services among external stakeholders, they don't necessarily reflect what's good for clients. Economists see efficient resource utilization, managers see clear performance indicators, and politicians see accountability - but these measures may actually work against what's good for the person sitting across from the social worker, needing help with complex life challenges.
We need to balance measurement and documentation with relational work and interventions. This means creating frameworks that pair quantitative metrics with qualitative understanding, making space for professional judgment while maintaining necessary oversight. As we discussed in Part 4, some of the most valuable aspects of welfare work resist simple quantification - just as the supporting factors that made Tennessee's class size reduction successful couldn't be captured in simple metrics.
The Human Element in Value Judgment
A comical yet instructive example from Collins and Pinch [3] illustrates the limitations of purely technical approaches to knowledge. Imagine a fictional condition called Undifferentiated Broken Limb (UBL) where one of four limbs is broken, but we don't know which one. In a hypothetical randomized controlled trial, researchers test the intervention of putting a cast on the left leg against a control group receiving neck braces. The intervention shows a promising 25% success rate - technically evidence-based since it helps those who happen to have broken their left leg.
This absurd scenario reveals a crucial truth about welfare services: without understanding underlying mechanisms and contexts, even statistically significant results can miss the point entirely. Just as a simple diagnostic test would achieve 100% success in treating broken limbs, taking time to understand the human elements of social problems often leads to more effective interventions than blind application of evidence-based approaches.
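To make the arithmetic concrete, here is a minimal Python sketch of the thought experiment. The trial size and the run_trial helper are my own illustrative inventions, not anything from Collins and Pinch: each simulated patient has one of four limbs broken at random, so blindly casting the left leg "succeeds" about a quarter of the time, while diagnosing first succeeds every time.

```python
import random

random.seed(42)

LIMBS = ["left leg", "right leg", "left arm", "right arm"]

def run_trial(n_patients: int, diagnose_first: bool) -> float:
    """Simulate the fictional UBL trial and return the success rate."""
    successes = 0
    for _ in range(n_patients):
        broken = random.choice(LIMBS)  # one limb is broken, uniformly at random
        # The blind protocol casts the left leg no matter what;
        # diagnosing first means treating whichever limb is actually broken.
        treated = broken if diagnose_first else "left leg"
        if treated == broken:
            successes += 1
    return successes / n_patients

print(f"cast left leg blindly: {run_trial(10_000, diagnose_first=False):.1%}")  # ~25%
print(f"diagnose, then cast:   {run_trial(10_000, diagnose_first=True):.1%}")   # 100.0%
```

The point of the sketch is the gap between the two numbers: a statistically real 25% effect, and the 100% available to anyone who first asks which limb is broken.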
This illustrates why mechanical approaches alone fall short in social services. While standardized assessments and metrics serve important purposes, creating real change requires understanding the human context - the mechanisms, motivations, and meanings behind behaviors and situations. Effective practice combines systematic knowledge with deep human understanding, recognizing that what appears straightforward through metrics alone often masks crucial complexity.
We must remain humble though - many mechanisms in human services remain poorly understood, and continued research into these deeper patterns is vital.
Integrating Knowledge in Complex Practice
Drawing from Lydahl's research on person-centered care [4], we find a crucial insight about knowledge in welfare services: while empirical evidence can demonstrate that continuity improves outcomes, understanding why requires diving deeper into mechanisms of human connection and trust. This mirrors what Howick describes as the placebo maximization challenge - how do we create space for the crucial human elements that amplify intervention effects while maintaining systematic approaches?
This tension points to three essential knowledge traditions in welfare services, each with distinct strengths and limitations:
The Scientific-Empirical tradition provides crucial evidence about what works at a population level. While this approach excels at identifying effective interventions, it can miss important mechanisms - as our UBL example showed, statistical significance alone doesn't guarantee meaningful improvement. However, systematic reviews and meta-analyses remain essential for understanding broad patterns of effectiveness.
The Professional-Mechanistic tradition represents practitioners' understanding of how interventions actually work in context. This knowledge of mechanisms helps predict what might work in specific situations, though as Howick warns, mechanism knowledge alone can be misleading when contexts change or understanding is incomplete. Success requires combining mechanism knowledge with empirical validation.
The Experiential-Relational tradition centers the human elements that often determine intervention success - relationships, trust, motivation, and meaning. Recent evidence on relationship continuity validates what practitioners have long known: the quality of human connections fundamentally shapes outcomes. This tradition reminds us that even evidence-based interventions require skillful human implementation and empathy.
Effective practice requires what we might call conscious integration - actively combining empirical evidence, mechanism understanding, and relational wisdom. This means:
Using empirical evidence to identify promising interventions while remaining aware of its limitations
Drawing on mechanism knowledge to adapt interventions thoughtfully while avoiding oversimplified causal assumptions
Maintaining space for relationship building and human connection even as we pursue systematic approaches
Documenting both quantitative outcomes and qualitative insights about how and why interventions work
As Howick notes, practitioners need time and space to maximize intervention effects through skillful human implementation. This requires organizational cultures that value both systematic knowledge and professional wisdom.
Evidence-Based Practice as Value-Aware Process
What I call configurative practice actively engages with different value perspectives while maintaining professional integrity. Social work requires sophisticated integration of multiple knowledge forms. When facing a challenging case, we typically have access to multiple perspectives - research evidence about risks and interventions, professional observations from years of practice, the family's own understanding of their situation, and insights from others involved in the case.
Traditional evidence-based practice might prioritize research evidence and standardized assessments, risking what Howick calls the mechanism blindness we saw in the California class size example. But configurative practice recognizes that values and contexts are integral to how we understand and use evidence. A risk factor that seems clear-cut in research might have different meanings in different cultural contexts, just as a standardized intervention might need significant adaptation to work within a family's value system.
Democratic Dialogue and Learning
Building on our discussion of knowledge politics in Part 7, effective practice requires ongoing dialogue between different stakeholders - practitioners, researchers, policymakers and service users. Just as the Cochrane logo we talked about in Part 3 reminds us that important insights often emerge from combining multiple studies, welfare development depends on integrating multiple perspectives and forms of knowledge.
This dialogue goes beyond just sharing information - it involves negotiating what counts as valuable knowledge and meaningful outcomes in welfare services. As our UBL example showed, purely technical approaches can miss crucial contextual understanding. Eliasson's point from Part 7 applies here: success requires organizing around real problems rather than bureaucratic convenience, maintaining evidence-based dialogue while recognizing that different forms of expertise - from clinical experience to lived experience - contribute essential insights to our understanding.
Configuring Systems for Value Integration
Modern welfare technology shapes how we define and measure good outcomes, much like how different research traditions shape our understanding of evidence. Our current systems excel at tracking measurable aspects - standardized processes, resource utilization, compliance metrics - but often struggle with crucial qualitative dimensions like relationship quality, contextual nuances and lived experiences. This mirrors the empiricist's dilemma: just as statistical significance doesn't guarantee meaningful improvement, meeting documentation requirements doesn't ensure quality care.
Consider Howick's insight about maximizing intervention effects through relationship quality - yet our systems often track meeting quantities while ignoring interaction quality. When we design systems that count client meetings but not therapeutic relationships, or measure intervention outcomes without considering mechanism understanding, we're making implicit value statements about what matters. It's like conducting our UBL study without first checking which limb is actually broken.
The challenge lies in consciously configuring our systems to support human judgment. This means creating documentation frameworks and decision support tools that capture both quantitative and qualitative dimensions in ways that help develop practice. Like successful healthcare implementations that balance systematic approaches with relationship continuity, we need systems that help practitioners integrate different forms of knowledge. The systems we build today will shape how future generations understand and practice welfare work.
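As a purely illustrative sketch of that design choice - the CaseNote type, its fields, and the example entry below are hypothetical, not any real system's schema - a documentation record might keep the qualitative dimensions alongside the countable ones:

```python
from dataclasses import dataclass

@dataclass
class CaseNote:
    """Hypothetical record pairing countable outcomes with the
    qualitative dimensions our metrics usually leave out."""
    case_id: str
    meetings_held: int          # quantitative: easy to aggregate and report
    goals_met: int              # quantitative: feeds standard indicators
    relationship_notes: str     # qualitative: trust, engagement, rapport
    mechanism_hypothesis: str   # why the intervention seems to work here
    client_perspective: str     # the family's own framing of the problem

# Entirely fictional example entry:
note = CaseNote(
    case_id="2024-0173",
    meetings_held=4,
    goals_met=1,
    relationship_notes="Parent engages more openly since sessions moved to the home.",
    mechanism_hypothesis="Continuity with the same worker appears to build trust.",
    client_perspective="The family sees housing, not parenting, as the core problem.",
)
```

The design point is simply that the qualitative fields live in the same record as the countable ones, so aggregating the numbers never strips away the context that explains them.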
Looking Forward
As we move toward exploring Street-Level Bureaucracy in Part 9, the relationship between empirical evidence and mechanism understanding emerges as crucial for welfare development. As Dr. Spock's mistaken reasoning about infant sleep positions showed, even well-intentioned systematic approaches can lead us astray when we don't fully understand underlying mechanisms. Yet the Cochrane logo reminds us that systematic knowledge-building remains essential for effective practice.
The complexity of modern welfare systems presents a fundamental challenge. When structures become highly interconnected, no single perspective - whether empirical evidence, mechanism understanding, or professional wisdom - can capture the full picture. This calls for humility in acknowledging that our understanding of complex social systems will always be partial.
The path forward lies in conscious integration: recognizing that different forms of knowledge must work together while maintaining focus on human dignity. Success requires creating space for both systematic approaches and the human elements that make interventions effective. Ultimately, welfare work isn't just about delivering services - it's about continuously engaging with what it means to create a good society.
Questions for Reflection
How do you navigate between immediate practical needs and broader societal values in your daily practice?
When working within complex systems, how do you maintain awareness of the "greater good" while handling individual cases?
What helps you recognize the limitations of your own perspective when making decisions that affect others?
How can we better balance systematic approaches with human wisdom in welfare services?
This is part 8 in our ongoing series exploring the intersection of human judgment and systematic knowledge in modern welfare services. Join the conversation by sharing your thoughts and experiences in the comments below.
1. Cartwright, N. & Hardie, J. (2012) Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford: Oxford University Press.
2. Howick, J. (2011) The Philosophy of Evidence-Based Medicine. Oxford: Wiley-Blackwell.
3. Collins, H. & Pinch, T. (2005) Dr. Golem: How to Think About Medicine. Chicago: University of Chicago Press.
4. Lydahl, D. (2021) Standard tools for non-standard care: The values and scripts of a person-centred assessment protocol. Health, 25(1), 103-120.