The Model for Responsible Innovation
The Model for Responsible Innovation is a practical tool created by DSIT’s Responsible Technology Adoption Unit (RTA) to help teams across the public sector and beyond innovate responsibly with data and AI.
The Model does two things:
It sets out a vision for what responsible innovation in AI looks like, and the component Fundamentals and Conditions required to build trustworthy AI.
It operates as a practical tool that public sector teams can use to rapidly identify the potential risks associated with the development and deployment of AI, and understand how to mitigate them.
The Model can help you if:
You are a public sector team delivering a project which contains some element of data-driven technology or AI
You are a private sector team building an AI or data-driven tool which you plan to use for a public sector purpose, or which has a significant societal footprint.
The Model in Detail
Trustworthiness
At the centre of the Model is trustworthiness. The objective of responsible innovation is building justified trust in the AI and data tools we develop and use.
Justified trust comes from designing and deploying systems in a way that earns, and deserves, the trust of the stakeholders who use them. Without confidence that a system has been built and deployed responsibly, stakeholders are unlikely to support or use it, and its full potential may not be realised.
Therefore, the different elements of the Model are carefully designed to help build towards this goal. By following the Fundamentals and Conditions, teams can build systems that earn the justified trust of those who use them.
The Fundamentals
To build this trustworthiness, the Model outlines eight Fundamentals that teams should work towards when developing and implementing their systems. These operate partly as principles to follow, and partly as lenses through which to consider the broad range of ethical risks that might emerge when carrying out a complex data-driven project. They are:
Transparency - ensuring systems are open to scrutiny, with meaningful information provided to relevant individuals across their lifecycle.
Accountability - ensuring systems have effective governance and oversight mechanisms, with clear lines of appropriate responsibility across their lifecycle.
Human-centred Value - ensuring systems have a clear purpose and benefit to individuals, and are designed with humans in mind.
Fairness - ensuring systems are designed and deployed against an appropriate definition of fairness, and monitored for fair use and outcomes.
Privacy - ensuring systems are privacy-preserving, and the rights of individuals around their personal data are respected.
Safety - ensuring systems behave reliably as intended, and their use does not inflict undue physical or mental harms.
Security - ensuring systems are measurably secure and resistant to being compromised by unauthorised parties.
Societal Wellbeing - ensuring systems support beneficial outcomes for societies and the planet.
The Conditions
Underlying the Fundamentals are the Conditions. These are the technical, organisational and environmental factors that must be satisfied in order for the Fundamentals to be met. Located on the inner ring of the Model, they are:
Meaningful Engagement - engaging effectively with experts, stakeholders, and the general public, using these insights to inform the system in question.
Robust Technical Design - ensuring that the functional (how a program will behave to outside agents) and technical (how that functionality is implemented in code) design of a system is robust.
Appropriate & Available Data - ensuring a system has access to the right data needed to achieve its desired outcomes and effectively monitor performance.
Clear Boundaries - ensuring there are clear boundaries on a system’s intended use, and clear understanding of the consequences of exceeding them.
Available Resources - ensuring the resources (technical, legal, financial, etc.) needed to effectively build and use a system are provided.
Effective Governance - ensuring that the right processes and policies are in place to guide the development and operation of a system, and ensure its adherence to the project’s goals, standards and regulations, providing recourse where necessary.
These six categories capture different types of measures that teams can take to mitigate the ethical risks present in their projects.
Underlying Themes
The Model sets out the key Fundamentals and Conditions required for responsible innovation. However, several broader themes underpin the Model, and these should be considerations for any team developing AI in the public sector.
These include:
Legal Compliance - compliance with the law is a necessary, but not sufficient, element to achieving trust in the use of AI.
Understanding - public sector organisations must have teams with the right understanding of the technology they are developing, with suitably trained or qualified individuals.
Continuous Evaluation - reflecting that AI systems are not static and often require an ongoing approach to risk and harms mitigation.
Organisational Culture - any team developing an AI tool should have a culture which enables and values responsible innovation.
Learn more about the Model for Responsible Innovation at DSIT.