AI Agents
In 2024, generative AI took centre stage, reimagining how we create and interact with information systems. From drafting case notes and reports to supporting life story work with images, generative AI demonstrated its transformative potential in children’s social care. As we move into 2025, however, the spotlight is shifting towards AI agents. These systems are not just about creating content; they are designed to perform complex tasks autonomously. AI agents promise to redefine workflows in customer service, healthcare, and beyond by learning from data, making decisions, and adapting to new information. While 2024 showcased the creative prowess of AI, 2025 is set to highlight the practical, everyday applications of AI agents, marking a significant evolution in the AI landscape.
AI agents are touted as the next big thing, but do they live up to the hype, and could they be used in children’s social care?
What are AI Agents?
An AI agent is a type of AI product specifically designed to perform tasks autonomously or semi-autonomously, often involving decision-making processes and interactions with users or other systems. Unlike other AI products, which might be more narrowly focused on specific functions like image recognition or language translation, AI agents are typically more versatile and adaptive. They can learn from data, make predictions, and take actions based on their environment and objectives. For example, while a language translation AI product focuses solely on converting text from one language to another, an AI agent in customer service might handle inquiries, provide recommendations, and escalate issues to human agents when necessary, all while continuously learning and improving its performance.
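To make that difference concrete, here is a minimal sketch of the customer-service example: a toy agent that acts autonomously on inquiries it can handle and escalates the rest to a human. Everything in it (the KNOWN_ANSWERS lookup and the sample inquiries) is invented for illustration; a real agent would use a learned model rather than a fixed dictionary.

```python
# A toy customer-service agent: it acts autonomously where it can
# and escalates to a human where it cannot. All data is invented.
KNOWN_ANSWERS = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "reset password": "Use the 'Forgot password' link on the login page.",
}

def handle_inquiries(inquiries):
    """Answer what the agent knows; escalate what it does not."""
    for inquiry in inquiries:
        answer = KNOWN_ANSWERS.get(inquiry)
        if answer is not None:
            print(f"AGENT: {inquiry!r} -> {answer}")               # autonomous action
        else:
            print(f"ESCALATED: {inquiry!r} -> routed to a human")  # human in the loop

handle_inquiries(["opening hours", "refund for order 123", "reset password"])
```

The escalation branch is the important design choice: it is the point at which an agent hands control back to a person rather than acting on something it cannot handle.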
How are AI agents similar and different from predictive analytics?
AI agents and predictive analytics share some similarities but also have distinct differences.
Similarities:
Data-Driven: Both AI agents and predictive analytics rely heavily on data to function. They analyse historical data to identify patterns and make informed decisions.
Predictive Capabilities: Both can forecast future outcomes based on current and past data. For example, predictive analytics might forecast sales trends, while an AI agent could predict customer behaviour to optimise service delivery.
Differences:
Functionality: Predictive analytics primarily focuses on analysing data to make predictions. It provides insights and forecasts that help in decision-making. AI agents, on the other hand, go a step further by not only making predictions but also taking actions based on those predictions. They can interact with users, perform tasks, and adapt to new information autonomously.
Autonomy: Predictive analytics tools typically require human intervention to act on the insights they provide. AI agents can operate autonomously, executing tasks and making decisions without constant human oversight.
Adaptability: AI agents are designed to learn and adapt over time, improving their performance based on feedback and new data. Predictive analytics models may need to be manually updated and retrained to adapt to new data or changing conditions.
While both AI agents and predictive analytics leverage data to make predictions, AI agents are more versatile and autonomous, capable of performing tasks and adapting to new information without human intervention.
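The autonomy gap can be shown in a few lines of code. In the hypothetical sketch below, the analytics tool stops at producing a score for a human to review, while the agent acts on the same score itself; risk_score(), the record, and the 0.8 threshold are all invented for illustration.

```python
# Hypothetical contrast: prediction-only analytics vs an agent that acts.

def risk_score(record):
    """Stand-in for a trained model; returns a score between 0 and 1."""
    return 0.87  # fixed value purely for illustration

def predictive_analytics(record):
    # Analytics stops here: the score is surfaced for a human to review.
    return {"score": risk_score(record), "action": "review by practitioner"}

def agent(record, threshold=0.8):
    # An agent closes the loop: it decides AND executes a follow-up action.
    score = risk_score(record)
    if score >= threshold:
        return {"score": score, "action": "referral opened automatically"}
    return {"score": score, "action": "no further action"}

print(predictive_analytics({"family_id": 1}))
print(agent({"family_id": 1}))
```

The only difference between the two functions is who closes the loop, which is precisely the distinction drawn above.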
Accuracy and reliability of AI agents
The accuracy and reliability of AI agents vary significantly with the quality of the data they are trained on and the robustness of their algorithms. AI is not infallible and can be prone to errors, especially where data is biased or incomplete. While AI agents hold promise, consumer feedback to date indicates rates of inaccuracy and unreliability that remain unacceptable.
“Here is the fundamental paradox with agents. The capability of agents is already, in some ways, mind-blowing. If agents could do, reliably and in the real world in the hands of consumers, everything they are capable of, it would truly be an economic transformation. But even if they are going to fail 10% of the time, it’s a useless product, because no one wants an agent that delivers DoorDash to the wrong address 10% of the time… these are the kinds of failures that consumers are actually reporting.” Arvind Narayanan, Professor of Computer Science, Princeton University.
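Narayanan’s 10% figure is worth dwelling on, because agent errors compound: a task that takes several autonomous steps fails if any one of them does. The arithmetic below, a minimal sketch assuming a hypothetical 90% per-step reliability, shows how quickly end-to-end reliability collapses.

```python
# Illustrative arithmetic: per-step errors compound across a multi-step task.
# The 90% per-step reliability is an assumed figure, not a measured one.
per_step_reliability = 0.90

for steps in (1, 3, 5, 10):
    end_to_end = per_step_reliability ** steps
    print(f"{steps:>2} steps -> {end_to_end:.1%} chance the whole task succeeds")
# e.g. 5 steps -> 59.0%, 10 steps -> 34.9%
```

A per-step failure rate that sounds tolerable becomes roughly a one-in-three success rate over a ten-step workflow.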
Inaccuracy and unreliability are inconvenient in simple consumer systems but could be potentially catastrophic in complex child safeguarding systems.
Earlier pilots of predictive analytics in children’s social care yielded poor predictive results and surfaced a range of ethical issues. These tools were designed to analyse vast amounts of data from various sources, including health, education, and social services, to identify patterns and predict which children were at risk of abuse and neglect. However, concerns about data quality, bias, and the ethical implications of profiling children and families were significant hurdles (Munro 2019). The models often produced high rates of false positives and negatives, leading to a mistrust among social workers (WWC 2020) that persists to this day.
It is important to note that these tools were designed with humans in the loop: they did not have decision-making capabilities and did not execute follow-up actions, as they would have if they were agentic.
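One reason those pilots produced high false-positive rates is the low base rate of the outcome being predicted. The sketch below uses entirely hypothetical figures (2% prevalence, 80% sensitivity, 90% specificity, none drawn from the pilots themselves) to show that even an apparently strong model would be wrong about most of the families it flags.

```python
# Hypothetical base-rate arithmetic, per 10,000 families.
# All three figures are invented for illustration.
prevalence = 0.02    # 2% of families are genuinely at risk
sensitivity = 0.80   # 80% of at-risk families are correctly flagged
specificity = 0.90   # 90% of not-at-risk families are correctly cleared

population = 10_000
at_risk = population * prevalence                              # 200 families
true_positives = at_risk * sensitivity                         # 160 correctly flagged
false_positives = (population - at_risk) * (1 - specificity)   # 980 wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(f"Flagged families who are genuinely at risk: {precision:.0%}")  # ~14%
```

In this illustration, roughly six of every seven flagged families would be false positives, which is exactly the pattern that erodes practitioner trust.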
What could this kind of inaccuracy mean in practice?
Misidentification of Risk: Incorrectly identifying a family as high-risk because of an error in the input data could lead to an unnecessary investigation, causing stress and disruption for the family involved.
Bias in Decision-Making: Data bias could disproportionately flag families from certain ethnic backgrounds as high-risk. Such bias can be embedded in historical data and then replicated and amplified by AI algorithms, exacerbating existing disparities and eroding trust in the social care system.
Lack of Transparency: It can be difficult to understand the rationale behind an AI-generated recommendation (the ‘black-box’ effect). A lack of transparency in the AI decision-making process can make it challenging to justify recommended actions, which is at odds with accountability requirements in social care.
What are the possible implications of inaccuracies and unreliability?
Local authorities are accountable for assessments and decision-making, and algorithmic bias, inaccuracy, and a lack of transparency all sit uneasily with that accountability. In addition to the potentially harmful consequences and distress for children and families, there are legal and rights-based considerations:
The Equality Act 2010 protects individuals from discrimination based on protected characteristics, including race and ethnicity. If AI algorithms result in biased decision-making that disproportionately flags certain ethnic groups as high-risk, it could constitute indirect discrimination under this Act.
Article 8 of the Human Rights Act 1998: Misidentification of risk could amount to a breach of a family’s right to respect for private and family life, free from interference by the state.
Data Protection Act 2018: This Act governs the processing of personal data in the UK, including provisions to ensure that data processing is fair and transparent. If AI systems use biased data, it could lead to unfair processing, violating data protection principles.
Public Sector Equality Duty (PSED): Under the Equality Act 2010, public authorities must consider how their policies and decisions affect people with protected characteristics. Failure to address and mitigate bias in AI decision-making could breach this duty.
Conclusion
Despite these challenges, the potential of predictive analytics and AI agents in children’s social care remains a compelling area of ongoing research and development. Addressing bias, inaccuracy, and unreliability is crucial to ensure AI serves children and families first and foremost. AI technology is advancing rapidly, and while these tools may not be suitable for children’s social care today, they are likely to improve quickly. Children’s social care should therefore agree standards for evaluating model performance and thresholds for accuracy and precision. Clarity about these expectations provides a foundation for innovation and helps ensure that AI models are accurate, reliable, and capable of performing well on new, unseen data.
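Such standards can be made operational as a simple acceptance gate: evaluate a candidate model on held-out data and refuse to deploy unless pre-agreed thresholds are met. The sketch below is a minimal illustration with invented labels and invented 0.90/0.85 thresholds; real thresholds would need to be agreed by the sector.

```python
# Hypothetical acceptance gate: evaluate on held-out data against
# pre-agreed thresholds before a model is allowed anywhere near practice.
# The labels and thresholds below are invented for illustration.

y_true = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # held-out ground truth
y_pred = [1, 0, 0, 1, 0, 0, 0, 1, 1, 0]   # model predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)

accuracy = correct / len(y_true)
precision = tp / (tp + fp) if (tp + fp) else 0.0

THRESHOLDS = {"accuracy": 0.90, "precision": 0.85}  # agreed in advance
results = {"accuracy": accuracy, "precision": precision}

for metric, floor in THRESHOLDS.items():
    verdict = "pass" if results[metric] >= floor else "FAIL - do not deploy"
    print(f"{metric}: {results[metric]:.2f} (threshold {floor}) -> {verdict}")
```

In this toy run both metrics fall short, so the gate would block deployment: the point is that the decision is made against explicit, pre-agreed criteria rather than after the fact.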
References
Munro, E. (2019). Predictive analytics in child protection.
Narayanan, A. AI Agents: Substance or snake oil with Arvind Narayanan.
What Works for Children’s Social Care (2020). Machine Learning in Children’s Services.