Human and Machine: A Journey - Part 6
Artificial Intelligence in Social Services - A Paradigm Shift in Human-Machine Relations?
Throughout our journey exploring modern welfare systems, we've traced how standardization and systematization have transformed professional practice. We've examined the complex dance between national guidelines and local implementation, explored the crucial role of tacit knowledge, and delved into the ethical dimensions that underpin all welfare work. Now we face perhaps our most provocative question yet: Could artificial intelligence - often seen as the ultimate expression of mechanical logic - actually help preserve and enhance human judgment in welfare services?
The Current Crisis in Welfare Systems
As welfare professionals increasingly find themselves caught between multiple demands and control systems, we're witnessing fundamental changes in how social work is practiced. Hjärpe's[1] ethnographic research shows how the very systems designed to ensure quality and accountability are creating new pressures and paradoxes in daily practice.
Let me share an example that illustrates this dynamic, drawing from observation at a child welfare office:
Under pressure to complete investigations within the four-month legal deadline, social workers find themselves developing what could be called tunnel vision - focusing on meeting formal requirements and documentation demands rather than building relationships with families. The pressure to produce measurable results and meet standardized requirements begins to overshadow the complex relational work that social work traditionally entails.
This experience exemplifies what Hjärpe identifies as key tensions in modern welfare work:
Administrative growth versus client time
Standardization versus professional judgment
Measurable activities versus complex relational work
The paradox is clear: In our drive to ensure quality through standardization and documentation, we've created systems that often constrain the professional judgment and human connection they were meant to support. As Hjärpe observes, this leads to various practices where workers must constantly navigate between system requirements and professional assessments of what serves clients best.
Yet there may be ways to better balance these competing demands. This requires carefully examining how our measurement and documentation systems either support or hinder core professional work.
Theoretical Framework: Rethinking AI in Social Services
To understand how AI might transform professional practice, we need new theoretical tools. Bruno Latour's actor-network theory (ANT) offers valuable insights. Traditionally, we tend to view AI systems as autonomous tools that, once created, become detached from their creators and users. This view naturally leads to fears of replacement and displacement. ANT offers a different perspective: AI exists within networks of human and non-human actors, constantly shaped by and shaping these relationships.
As Latour argues in Science in Action,[2] technology is never neutral - it always mediates how we carry out our actions. This mediation can take different forms, and here's where AI presents unique possibilities. Unlike traditional automation that simply replaces human activities, AI has the potential to transform how we practice.
Consider the development of Tesla's self-driving capabilities compared to traditional automotive development. Traditional manufacturers follow a deductive, LEAN-style process, fully developing and safety-testing cars before market release. Tesla, in contrast, develops through massive real-world data collection from drivers, continuously configuring and improving systems based on actual usage patterns. This illustrates a fundamental shift in how technology and human practice can evolve together.
This transformation of practice through human-AI collaboration must be guided by clear ethical principles, without erecting ever-larger bureaucracies of rules and formalization to control the systems. We could see AI as part of a socio-technical network that transforms how we work while preserving essential human elements.
Knowledge Translation in the AI Era
In Part 3, we explored the hourglass model of knowledge flow in welfare services - how knowledge moves between research, policy and practice through various translation processes. AI has the potential to revolutionize this entire structure, potentially transforming our traditional hierarchical organization of knowledge and practice.
At the top of our hourglass, AI could dramatically accelerate and enhance systematic reviews and knowledge synthesis. What traditionally takes months or years of careful analysis could potentially be processed in days or weeks, with AI systems helping to:
Process vast amounts of research rapidly
Identify patterns and connections across studies
Generate preliminary synthesis for human review
Continuously update evidence bases with new research
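To make the pattern-finding step above concrete, here is a deliberately simplified sketch of one of its ingredients: measuring textual overlap between study abstracts to surface related findings. All abstracts, names, and the crude word-overlap scoring are invented for illustration; a real synthesis system would rely on far more sophisticated semantic models.

```python
from itertools import combinations

def tokenize(text):
    # Lowercase the text and split it into a set of words - a deliberately
    # crude proxy for the semantic representations a real system would use.
    return set(text.lower().split())

def jaccard(a, b):
    # Overlap of two word sets: |A ∩ B| / |A ∪ B|.
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 0.0

# Hypothetical study abstracts, invented for illustration only.
abstracts = {
    "study_a": "family support visits reduce placement breakdown in foster care",
    "study_b": "regular support visits to foster families reduce placement breakdown",
    "study_c": "school attendance programs improve outcomes for teenagers",
}

# Rank all pairs of studies by similarity, most related first.
pairs = sorted(
    ((jaccard(tokenize(t1), tokenize(t2)), n1, n2)
     for (n1, t1), (n2, t2) in combinations(abstracts.items(), 2)),
    reverse=True,
)

for score, n1, n2 in pairs:
    print(f"{n1} ~ {n2}: {score:.2f}")
```

Even this toy version shows the principle: the machine proposes connections across studies at scale, while the judgment about what those connections mean remains with the human reviewer.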
In the middle, where analysts and directors traditionally mediate between research and practice, AI could help flatten organizational hierarchies by:
Automating many analytical tasks
Making complex information more accessible
Supporting direct knowledge translation
Reducing need for multiple layers of interpretation
At the bottom, practitioners could gain unprecedented access to knowledge resources. Imagine being able to:
Generate quick evidence summaries for specific cases
Access AI-assisted literature reviews in real time
Create custom knowledge syntheses through well-crafted prompts
Connect directly with relevant research without extensive hierarchical mediation
This flattening of the traditional hourglass structure could revolutionize how knowledge flows in welfare organizations. Instead of knowledge moving through multiple hierarchical layers, we might see more direct connections between research and practice, supported by AI tools that make complex knowledge more accessible and actionable.
However, this transformation points to a new form of professional judgment - one that skillfully integrates systematic knowledge, professional wisdom, and technological insights. The challenge becomes navigating this enhanced knowledge landscape while maintaining focus on direct human experience and ethical practice.
The Liberation Potential
A fundamental shift emerges when we introduce AI into welfare services: from systems focused on control to systems oriented toward value creation. This transformation begins with a crucial insight about data quality - "garbage in, garbage out," as we often say in evaluation work. But AI's potential to handle vast amounts of data, including unstructured text and documentation, creates new possibilities for turning our documentation into a resource rather than just a burden.
Currently, much of our documentation serves primarily control purposes - checking boxes, filling in required fields, proving compliance. The actual content of our professional observations and assessments, often captured in free-text fields, remains largely unused for learning and improvement. With AI, this could change dramatically. Our documentation could become a rich resource for learning and improvement, with AI systems analyzing patterns and insights from our professional observations. Instead of just proving compliance, our documentation could actually enhance our understanding and improve our practice and, ultimately, the societies we serve.
This shift toward value creation naturally drives better data quality. When professionals see that their documentation actually contributes to understanding and improving practice, rather than just satisfying control requirements, the motivation for meaningful documentation increases. AI systems require quality data to function effectively, but they also make it worthwhile to provide that quality by turning our documentation into valuable insights for practice.
The transformation of documentation through AI creates new space for professional judgment. As natural language processing reduces the need for structured data entry and automates routine aspects of documentation, professionals can focus more on meaningful content rather than form. Smart summarization and automated cross-referencing save time while making information more accessible and useful. This would free up mental space and time for the complex reasoning and human connection that form the core of welfare work.
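As a toy illustration of the smart summarization mentioned above, the sketch below scores sentences in a free-text note by the frequency of their content words and keeps the most informative one. The case note, the stopword list, and the scoring are all invented stand-ins; a production system would use language models rather than word counts.

```python
from collections import Counter
import re

def summarize(text, n_sentences=1):
    # Split the note into sentences and score each by how many of the
    # document's frequent content words it contains - a toy stand-in for
    # the language models a real system would use.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-zåäö]+", text.lower())
    stopwords = {"the", "a", "an", "and", "to", "of", "in", "is", "was", "with"}
    freq = Counter(w for w in words if w not in stopwords)
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-zåäö]+", s.lower())),
        reverse=True,
    )
    # Return the top-scoring sentences in their original order.
    top = set(scored[:n_sentences])
    return " ".join(s for s in sentences if s in top)

# Hypothetical free-text case note, invented for illustration.
note = (
    "Home visit conducted with the family. "
    "The mother reports improved routines around school mornings. "
    "School attendance has improved and the school confirms the improved attendance."
)
print(summarize(note))
```

The point is not the algorithm but the division of labour: the machine condenses form, while the professional supplies and interprets the meaningful content.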
Enhanced decision support becomes possible as AI helps us recognize patterns across cases and integrate research knowledge with practice experience. The system becomes a partner in professional judgment, offering relevant information and pattern recognition while leaving the crucial interpretative and relational work to human professionals.
Perhaps most importantly, this transformation enables systematic learning from practice in new ways. By analyzing patterns across cases, connecting different sets of data and identifying successful approaches, we can better understand what works in different contexts. This creates a continuous feedback loop where practice insights inform system improvements and improved systems better support practice. The result is a virtuous cycle of learning and development grounded in actual practice rather than just theoretical models or administrative requirements.
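To make this feedback loop concrete, here is a deliberately simplified sketch of counting which recorded interventions co-occur with positive outcomes. All case data and intervention names are invented for illustration, and a real system would of course require far more careful statistics, causal reasoning, and ethical review before any such pattern informed practice.

```python
from collections import Counter

# Hypothetical case records: each lists recorded interventions and whether
# the case was closed with a positive outcome. Invented for illustration.
cases = [
    {"interventions": {"family_therapy", "school_liaison"}, "positive": True},
    {"interventions": {"family_therapy"}, "positive": True},
    {"interventions": {"school_liaison"}, "positive": False},
    {"interventions": {"family_therapy", "respite_care"}, "positive": True},
    {"interventions": {"respite_care"}, "positive": False},
]

def success_rates(cases):
    # For each intervention, the share of cases containing it that ended well.
    totals, positives = Counter(), Counter()
    for case in cases:
        for intervention in case["interventions"]:
            totals[intervention] += 1
            if case["positive"]:
                positives[intervention] += 1
    return {i: positives[i] / totals[i] for i in totals}

for intervention, rate in sorted(success_rates(cases).items()):
    print(f"{intervention}: {rate:.0%}")
```

Such descriptive patterns are where systematic learning starts; interpreting why an approach works in a given context remains professional, human work.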
Implementation Pathway
Healthcare's experience with AI implementation offers valuable lessons for welfare services. The Swedish healthcare sector's journey with AI in areas like diagnostic imaging and patient triage demonstrates both the potential and challenges of implementing AI in complex human service organizations.
A crucial insight from healthcare implementation is that smaller organizations often struggle to lead AI development independently. This isn't just about resources - it's about data. AI systems require large amounts of high-quality data for development and training, something individual municipalities or small organizations rarely can provide alone. This points to the need for coordinated, centralized data collection and development to ensure equitable access to AI capabilities across the welfare sector.
Balancing Central and Local Development
The challenge becomes finding the right balance between centralized development and local adaptation. Healthcare has shown that successful implementation often combines:
Central development of core AI capabilities
Local adaptation of implementation strategies
Shared data infrastructure for learning
Maintained professional autonomy in practice
This combined approach helps address the equity challenge while preserving necessary local flexibility.
Professional Engagement
Another lesson from healthcare is the necessity of professional engagement in AI development and implementation. This means:
Professional participation must start early in the development process, not just at implementation. Healthcare implementations that treated professionals as passive recipients of new technology often failed, while those that engaged professionals as active participants in development were more successful.
Knowledge translation becomes particularly crucial - not just technical knowledge about how to use AI systems, but deeper understanding of how these systems can enhance professional judgment. This requires ongoing dialogue between developers, implementers, and professionals throughout the process.
The implementation pathway is about finding the balance between Human & Machine. Success requires careful attention to both technical and human factors, with particular focus on how we maintain and enhance professional judgment while leveraging AI's capabilities.
The Emerging Paradigm
We stand at a pivotal moment in welfare services' development. While the past decades have focused heavily on standardization and control, often overwhelming rather than supporting human judgment, a new paradigm is emerging. As AI evolves, these systems could revolutionize knowledge translation.
Knowledge in professional practice has never been purely mechanical or purely subjective. Instead, it involves complex interactions between different forms of understanding. In the AI era, this complexity increases as we integrate human wisdom, systematic knowledge, and AI-generated insights.
The key is understanding how these different forms of knowledge complement each other and how we can configure our systems to support the humans in them. AI may provide new tools and insights in this journey.
Ethics and Risk in Modern Welfare Practice
The integration of new technologies into welfare services presents us with a complex ethical equation that goes beyond traditional professional ethics. While we must carefully consider risks around privacy, data security, and the appropriate balance between automation and human judgment, we also face what Munthe[3] calls the "price of precaution" - the hidden costs of being too cautious. When we delay implementing potentially beneficial changes out of caution, we allow known problems to persist. Every additional safety measure or documentation requirement we add can actually create new risks by reducing time for direct client work and slowing urgent decisions.
The ethical challenge, therefore, isn't simply about protecting against potential harms - it's about balancing different types of risks against each other. We need thoughtful policies about data access, consent and the appropriate role of automation, but we must also recognize that maintaining the status quo carries its own ethical costs.
It's dizzying to contemplate where we'll find ourselves when ethical considerations actually compel us to adopt AI because the cost of not using it becomes ethically untenable. Consider a future where AI can process vast amounts of case data to identify children at risk of abuse, predict mental health crises before they occur, or optimize resource allocation to reach those most in need. At what point does refusing to implement such capabilities become an ethical failure? When does our cautious protection of current practices transform from prudent safeguarding into harmful resistance?
This represents more than just technological advancement - it signals a fundamental paradigm shift in how we conceive of professional responsibility and ethical practice. We're moving toward a reality where the integration of AI isn't just an option but an ethical imperative. This raises profound questions about the future shape of society: How will professional roles evolve when augmented by AI capabilities? What new ethical challenges will emerge as these systems become more sophisticated? How do we preserve human dignity and autonomy in a world where algorithmic insights increasingly influence human services?
This paradigm shift suggests we're on the cusp of redefining not just how we deliver welfare services, but what it means to be human in an AI-augmented society. The challenge lies in navigating this transformation thoughtfully, ensuring we harness AI's potential while preserving the essential human elements that give meaning to our work.
Looking Forward: The Path Ahead
The future development of welfare services will be shaped by how we navigate these emerging possibilities. As we've seen throughout this exploration, the challenge isn't simply technical - it's about fundamental questions of professional practice, knowledge development and ethical service delivery.
The path forward requires careful consideration of several critical dimensions:
How we maintain professional judgment and human connection while leveraging new capabilities
How we ensure system development serves rather than constrains practice wisdom
How we protect democratic control and ethical principles in an enhanced practice environment
How we balance the risks of action against what Munthe calls "the price of precaution"
In our next exploration, we'll examine the politics of knowledge in welfare services - who shapes development, whose interests are served, and how we ensure alignment with professional values and client needs. As this series continues to unfold, these questions become increasingly crucial for understanding how we can create welfare services that truly serve their purpose.
But before then, consider:
What aspects of your work would benefit most from enhanced support?
Where is human judgment most essential in your practice?
How do you balance the risks of change against the costs of maintaining current systems?
What safeguards would you want to see in an enhanced practice environment?
This is part 6 in our ongoing series exploring the intersection of human judgment and systematic knowledge in modern welfare systems. Join the conversation by sharing your thoughts and experiences in the comments below.
[1] Hjärpe, T. (2020). Mätning och motstånd: Sifferstyrning i socialtjänstens vardag. Lund: Socialhögskolan, Lunds universitet.
[2] Latour, B. (1986). The powers of association. In J. Law (Ed.), Power, Action and Belief: A New Sociology of Knowledge? (pp. 264-280). Routledge & Kegan Paul.
[3] Munthe, C. (2006). The Price of Precaution and the Ethics of Risk.