Reclaiming humanity in the age of AI: how social work can prevent “cognitive debt”

As classrooms across the globe transform under the weight of rapid technological change, one reality is clear: artificial intelligence isn’t coming, it’s already here. According to a 2024 global survey, more than 86% of students now use generative AI tools in their academic work. But while student adoption has accelerated, higher education’s response has varied from institution to institution. Faculty guidance, institutional policy, and ethical frameworks are racing to keep pace with the rapid adoption and evolution of emerging AI technologies.

Dr. Keith J. Watts, assistant professor at the University of Kentucky College of Social Work (CoSW), explains that this widening gap is more than just a pedagogical challenge; it is becoming a growing cognitive and ethical crisis.

“The core issue isn’t the technology use itself, but the pedagogical vacuum it has entered,” Watts explains. “Student adoption has vastly outpaced faculty and institutional guidance, which means the default mode of AI use is naturally driven by the path of least cognitive resistance, emphasizing efficiency over learning. When students use these tools without a structured, critical framework, they’re not just completing tasks; they are accruing a cognitive debt. This debt is a cumulative deficit in the essential skills—critical thinking, ethical reasoning, and nuanced professional judgment—that are the bedrock of social work.”

Understanding “Cognitive Debt”

Watts’ recent paper introduces the concept of AI-induced cognitive debt, a term adapted from clinical neurology that captures what happens when individuals offload too much of their mental processing to external systems. In the classroom, this looks like students using AI to summarize readings, brainstorm papers, structure arguments or complete assignments without engaging in deep reflection.

Over time, this habitual outsourcing depletes necessary critical faculties. In many ways, this depletion is the intellectual equivalent of taking on debt that must eventually be repaid. The student may complete the assignment, but the learning that the assignment was designed to produce does not occur.

“Genuine learning is an act of investment in our own cognitive reserves,” Watts writes. “Every time a student offloads a critical thinking task to an AI without a structured pedagogical purpose, they are incurring an opportunity cost. They miss a vital chance to engage in the effortful thinking that builds durable mental models. This repeated failure to build cognitive assets results in a potentially debilitating cognitive debt.”

In social work education, that loss carries significant professional implications. Because the field depends on ethical decision-making, empathy, and nuanced human judgment, long-term dependence on automation can undermine the very competencies that define effective practice.

Without structured pedagogy to guide AI use, Watts argues, the profession risks producing graduates who can generate text but not reflection, who can analyze data but not meaning.

A Framework for Ethical, Experiential Learning

Rather than banning AI, which Watts describes as untenable, he proposes an experiential learning framework that turns the technology into an object of critical inquiry rather than a shortcut to completion. The model combines Kolb’s experiential learning cycle, Knowles’ andragogy, and the critical pedagogies of Paulo Freire and bell hooks, positioning AI as a mirror for reflection, not a replacement for thought. These theories are not competing alternatives but synergistic layers: Kolb provides the process for learning, Knowles defines the stance of the adult learner, and Freire and hooks provide the critical ethos required to challenge and deconstruct the technology itself.

“Our goal is to make AI the subject of learning—not the substitute for it,” Watts says. “When students use AI to generate something, then critically analyze its biases, revise its assumptions, and align it with social work ethics, they’re not offloading cognition; they’re deepening it.”

In practice, this framework transforms classroom activities at every level of social work education:

  • Micro (clinical): Students use AI chatbots for simulated client interviews, then deconstruct the biases in language or cultural framing.
  • Mezzo (organizational): Learners draft AI-generated agency policies, critique their ethical implications, and rewrite them through an anti-oppressive lens.
  • Macro (policy): Students analyze public datasets using AI, then identify whose voices are missing, rewriting narratives to include marginalized perspectives.

These exercises, Watts explains, turn AI into a tool for empathy, justice, and reflection. They are intentionally designed to shift AI from an instrument of passive cognitive offloading into an object of active, critical, and reflective inquiry.

Aligning AI Literacy with Professional Ethics

Watts’ framework doesn’t just fit within social work ethics—it strengthens it. By directly mapping assignments to the Council on Social Work Education’s Educational Policy and Accreditation Standards (EPAS) and the National Association of Social Workers (NASW) Code of Ethics, the model ensures that critical and ethical reasoning remain at the heart of learning.

“Ethical use of technology is not optional for social workers. It’s an essential part of professional competence,” Watts notes. “We have an obligation to ensure that our students aren’t just technologically capable, but ethically fluent.”

He also encourages institutions to shift from broad or restrictive AI policies toward pedagogically grounded approaches that emphasize transparency, disclosure and skill development as part of responsible innovation.

The Future: A Profession That Thinks Critically and Leads Ethically

For Watts, preparing social workers for an AI-driven world isn’t about teaching them to code or use tools. The goal should be to help them remain profoundly human in the process.

“As AI automates routine tasks, it will only elevate the importance of the uniquely human skills that define our profession, including genuine empathy, sophisticated ethical reasoning, creativity, and the ability to build authentic therapeutic relationships,” he emphasizes. “Social work must lead the charge in reclaiming humanity in the age of automation.”

The framework he proposes offers a blueprint for educators, accrediting bodies, and policymakers seeking to align innovation with integrity. By reorienting AI from a cognitive crutch to a catalyst for critical inquiry, Watts envisions a future where social workers are not only technologically literate but critically conscious and deeply human.

“The question isn’t whether AI will change education—it already has,” Watts concludes. “The question is whether education will rise to meet that change with the critical, human-centered thinking our profession demands.”

About the Author:

Dr. Keith Watts is an assistant professor in the University of Kentucky College of Social Work. Guided by the principles of social justice, his research examines topics including health, minority stress, youth violence, and the ethical adoption of artificial intelligence in social work practice and education, with a particular emphasis on advancing outcomes for historically under-resourced communities. Across his work, he explores how belongingness serves as a protective factor to promote resilience and well-being.