Throughout our exploration of modern welfare systems, we've traced the complex dance between human judgment and systematic approaches. Now we turn to where this dance becomes most intricate: the daily reality of street-level bureaucracy, where policy meets practice and where welfare professionals navigate between system demands and human needs.
The Evolution of Street-Level Practice
When Michael Lipsky introduced the concept of street-level bureaucracy in 1980, he recognized these front-line workers as crucial policymakers through their exercise of discretion. His later work revealed how shared working conditions - chronically limited resources and non-voluntary clients - led to similar coping behaviors. Recent Danish research [1] on child welfare workers adds nuance to this picture, showing how individual attitudes toward clients, job perceptions, and views of institutional capacity significantly influence coping strategies.
Individual Agency in Systematic Constraints
While systematic constraints shape practice, individual attitudes and perceptions significantly influence how street-level bureaucrats navigate these constraints. Research reveals three key factors affecting coping strategies:
First, attitudes toward clients matter deeply. A child welfare worker who views families through a strength-based lens typically develops different coping mechanisms than one focused primarily on risk assessment. As one worker explained: "How you see families shapes everything - whether you bend rules to help them or stick rigidly to protocols."
Second, how workers conceptualize their job affects their approach to discretion. Those viewing their role as primarily supportive often find creative ways to work within constraints, while those emphasizing control tend toward stricter rule adherence.
Third, perceptions of institutional capacity - resources, political support, organizational effectiveness - influence how workers navigate constraints. Those who perceive strong institutional support often exercise discretion more confidently than those who feel undersupported.
These individual factors interact with two types of systematic constraints: formal rules and research-based frameworks. Interestingly, workers often find it easier to navigate formal rules than to integrate research insights into practice. As one social worker noted: "Laws give clear boundaries. Research requires constant interpretation."
Knowledge Translation at Street Level
Between system demands and client needs lies a crucial but often overlooked aspect of street-level practice: the complex work of knowledge translation. Street-level bureaucrats don't just implement policies or follow procedures - they actively translate between different worlds of knowledge and understanding.
Consider how a child welfare worker translates between system requirements and family realities. When documenting a family's situation, they must convert rich, complex human experiences into categories that fit standardized assessment frameworks. Simultaneously, they translate system requirements and research evidence into language and actions that make sense to families.
This translation work happens in multiple directions:
Upward: Converting practice experiences into data that informs policy
Downward: Translating policies and research into meaningful interventions
Horizontal: Sharing practice wisdom with colleagues
Internal: Integrating different forms of knowledge in professional development
Ethical Navigation in Daily Practice
The ethical dimensions of welfare work also come alive at street level, where abstract principles meet human realities. Street-level bureaucrats face everyday ethical decisions - constant small choices that, while rarely dramatic, shape service quality and human dignity.
A reflection on the power of questions in practice:
"Every question we ask carries hidden assumptions and ethical implications. When I ask a parent 'how often do you drink?' I've already implicitly focused on problems rather than strengths. If instead I ask 'what helps you be the parent you want to be?' I open up a completely different conversation. Each question either expands or limits what's possible in the relationship."
This reflection highlights how the very act of formulating questions involves ethical choices. When we ask about risks, we activate a deficit perspective. When we inquire about resources, we open possibilities for change. Every question simultaneously illuminates certain aspects of reality while obscuring others.
This ethical dimension of questioning becomes particularly crucial when we consider our earlier exploration of "good for whom?" When a street-level worker chooses between asking "what are your problems?" and "what are your hopes?", they're not just selecting different words - they're making profound choices about whose perspective matters and what counts as valuable knowledge. A standardized assessment might ask about risks and deficits because these are easier to measure and document. But such questions can silence crucial aspects of clients' lived experience and wisdom about their own situations.
This daily practice of questioning reveals the deeper political and ethical dimensions of street-level work - who gets to define what matters, whose knowledge counts, and, ultimately, what constitutes good in welfare services.
AI's Transformative Impact
When organizations implement AI in welfare services, something fascinating happens: while everyone talks about AI as revolutionary, in practice most organizations treat it as just another efficiency tool. This creates a dangerous blind spot in how we approach technological change in welfare services.
Let me share a concrete example: A welfare office implements an AI system to help screen initial client applications. On the surface, this looks like simple process improvement - the AI helps sort applications faster and more consistently. But look deeper and you'll see profound changes:
Social workers who previously used their judgment to prioritize cases now work from AI-generated lists
Client interactions shift as workers follow AI-suggested scripts and protocols
Professional relationships change as expertise increasingly centers on interpreting AI outputs
Knowledge flows differently as AI systems mediate between workers and information
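To make the screening example concrete, here is a minimal, purely illustrative sketch of how such a system might route cases. Everything in it is hypothetical - the `Application` fields, the `triage` function, and the 0.8 confidence threshold are my own invention, not a description of any real welfare system. The point it illustrates is the shift described above: workers receive a ranked list, and professional judgment is invoked only where the model routes a case to human review.

```python
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    risk_score: float        # hypothetical score from an upstream model, 0.0-1.0
    model_confidence: float  # how certain that model is, 0.0-1.0

def triage(applications, confidence_floor=0.8):
    """Rank applications by model risk score, but route any case the model
    is unsure about to mandatory human review instead of the automated queue."""
    auto_queue, human_review = [], []
    for app in applications:
        if app.model_confidence < confidence_floor:
            human_review.append(app)  # professional judgment still required here
        else:
            auto_queue.append(app)
    # Workers now work from this ranked list rather than setting priorities themselves
    auto_queue.sort(key=lambda a: a.risk_score, reverse=True)
    return auto_queue, human_review
```

Even in this toy version, the design choice is visible: where the confidence floor sits determines how much of the caseload ever reaches a human, which is exactly the kind of quiet parameter that reshapes practice without anyone deciding it should.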
These changes often happen gradually, in what researchers [2] call "organic drift" - small shifts that add up to fundamental changes in how welfare work happens. The danger isn't that these changes occur, but that they happen without conscious discussion or planning.
This matters deeply in welfare services where decisions affect vulnerable people's lives. When organizations implement AI without fully understanding its impact on professional judgment and discretion, they risk creating gaps in accountability. Who's responsible when an AI system influences a decision that affects a family's well-being? How do we ensure human wisdom still guides critical choices?
The solution isn't to avoid AI - it's to be more thoughtful about implementation. We need approaches that:
Explicitly consider how AI changes professional roles
Design systems that enhance rather than replace human judgment
Create clear accountability frameworks for AI-assisted decisions
Build organizational structures that support human-AI collaboration
The goal isn't just more efficient processes - it's better welfare services that thoughtfully combine human and machine capabilities. This requires careful attention to both technical and organizational dimensions of change. As one social worker recently told me: "AI should help me be better at the human parts of my job, not turn me into a machine."
Building Human-Centered Systems
So how do we build systems that balance human judgment and systematic approaches? This requires:
Understanding how different types of knowledge - legal, research-based, experiential - interact in front-line practice
Designing technology that supports rather than constrains professional judgment
Creating space for relationship-building amid increasing system demands
Building organizational cultures that value both systematic rigor and human wisdom
Looking Forward: Reimagining Street-Level Practice
As we look to the future of street-level bureaucracy, the integration of AI and digital systems isn't just changing how we work - it's forcing us to fundamentally rethink what welfare work means in the digital age. Let me share what I believe are the crucial challenges and opportunities ahead:
The Human-Machine Partnership
Recent experiences from welfare organizations offer contrasting visions of this future. In the Netherlands, some municipalities are exploring AI systems that augment professional judgment - helping social workers identify patterns and possibilities while leaving crucial decisions in human hands. Meanwhile, in parts of the United States, we're seeing attempts to automate decision-making entirely, reducing professional discretion to following algorithmic recommendations.
These divergent paths reflect different answers to a fundamental question: Should technology serve as a tool for enhancing human judgment, or as a replacement for it? The evidence increasingly suggests that the most successful approaches find ways to combine the strengths of both:
Human capabilities: Understanding context, building trust, making ethical judgments, navigating complexity
Machine capabilities: Processing vast data, identifying patterns, ensuring consistency, supporting documentation
The key lies in what I call augmented professional judgment - where technology enhances rather than replaces human wisdom. Imagine a future where AI handles routine documentation and data analysis, freeing social workers to focus on the complex human work of building relationships and supporting change. Where smart systems help identify patterns and possibilities, but experienced professionals interpret what these patterns mean for individual cases.
Building New Competencies
This future requires new skills from street-level bureaucrats. Beyond traditional social work competencies, professionals will need:
Digital literacy: Understanding how to work effectively with AI and digital systems
Critical data interpretation: Making sense of AI-generated insights in context
Enhanced ethical judgment: Navigating new moral questions raised by technology
Stronger relationship skills: As automation handles routine tasks, human connection becomes even more crucial
Creating Supportive Structures
For this vision to succeed, organizations need to develop:
Clear frameworks for human-AI collaboration
Strong ethical guidelines for technology use
Protected spaces for professional judgment
Robust accountability systems
Continuous learning environments
Most importantly, we need to maintain what I call the human guarantee - ensuring that significant decisions affecting people's lives always involve meaningful human judgment and accountability. Technology should enhance, not replace, the human elements that make welfare work effective.
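One way to read the human guarantee in concrete terms is as a hard constraint in the decision pipeline rather than a policy aspiration. The sketch below is a hypothetical illustration - the `Decision` record and the `finalize` check are my own invention, not any real system's API - of what it might mean for a significant decision to be structurally impossible to complete on an AI recommendation alone.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    case_id: str
    ai_recommendation: str           # e.g. "approve", "deny", "investigate"
    significant: bool                # does this materially affect someone's life?
    reviewer: Optional[str] = None   # the accountable human, by name
    rationale: Optional[str] = None  # the human's own recorded reasoning
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def finalize(decision: Decision) -> Decision:
    """Enforce the 'human guarantee': a significant decision cannot be
    finalized without a named human reviewer and a recorded rationale."""
    if decision.significant and (decision.reviewer is None or not decision.rationale):
        raise ValueError(
            f"case {decision.case_id}: significant decisions require "
            "a named human reviewer and a recorded rationale"
        )
    return decision
```

The detail worth noticing is that accountability here is personal and recorded: a named reviewer and their own rationale, not a checkbox confirming the AI output was seen. Whether real systems enforce anything this strict is precisely the implementation question raised above.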
The Path Forward
Success requires careful attention to both immediate practical needs and longer-term development. We need to:
Actively shape how technology enters our practice rather than passively accepting whatever comes
Build evidence about what works in human-machine collaboration
Develop new models of professional practice that embrace technology while preserving human wisdom
Create organizational cultures that value both efficiency and human connection
The future of street-level bureaucracy lies not in choosing between human and machine approaches, but in thoughtfully combining them in ways that serve both systematic needs and human dignity. This requires ongoing dialogue between practitioners, researchers, technology developers, and the people we serve.
As we navigate this transformation, we must keep returning to fundamental questions:
How do we ensure technology serves human needs rather than the other way around?
What aspects of welfare work must remain fundamentally human?
How do we maintain professional wisdom in an increasingly automated world?
What new possibilities emerge when we get the human-machine balance right?
This is part 9 in our ongoing series exploring the intersection of human judgment and systematic knowledge in modern welfare systems. Join the conversation by sharing your thoughts and experiences in the comments below. How do you see the future of street-level practice evolving? What helps you maintain human connection amid increasing technological demands? Where do you see the most promising opportunities for better integration of human judgment and systematic approaches?
1. Baviskar, S., & Winter, S. C. (2017). Street-Level Bureaucrats as Individual Policymakers: The Relationship between Attitudes and Coping Behavior toward Vulnerable Children and Youth. International Public Management Journal, 20(2), 316–353. https://doi.org/10.1080/10967494.2016.1235641
2. Giest, S. N., & Klievink, B. (2024). More than a digital system: how AI is changing the role of bureaucrats in different organizational contexts. Public Management Review, 26(2), 379–398. https://doi.org/10.1080/14719037.2022.2095001