Agentic AI: ICO publishes report on data protection implications

The Information Commissioner’s Office (ICO) has published a report on the data protection implications that organisations will have to consider if they adopt and deploy agentic AI.

The report follows the launch of the ICO’s AI and biometrics strategy last year (on which we commented here), which set out how the ICO intended to ensure that organisations develop and deploy new technologies in compliance with data protection law.

One area identified in that strategy was agentic AI, namely systems that can set and accomplish goals with limited human supervision or intervention. Agentic AI has already come under scrutiny in recent months, as we discussed here in relation to the Digital Regulation Cooperation Forum’s call for views on the regulatory challenges it presents.

In the context of data protection, the ICO is clear that “the widespread use of AI agents could raise challenges for privacy and data protection, including accountability, transparency, data minimisation and purpose limitation”. Whilst the ICO accepts that these challenges arise in relation to AI more generally, some are particularly acute in the case of agentic AI.

For example, the report highlights the likelihood of increased automated decision-making as systems seek to automate increasingly complex tasks, as well as the risk of failing to comply with the data minimisation and purpose limitation principles as systems ingest and retain information in order to perform multiple tasks “just because it might be useful in the future”. Similarly, the ICO points to the challenges of agentic AI “rapidly inferring and creating new personal information at scale”, drawing upon and generating special category data, and – as is the case with large language models – failing to ensure accuracy or transparency.

Despite these challenges, the ICO commits to working with developers to ensure that agentic AI is developed in ways that support data protection and information rights. As well as encouraging developers to use its Innovation Advice service and Regulatory Sandbox to help ensure that data protection sits at the heart of system design, the ICO identifies various ways in which agentic AI could “contribute towards privacy-positive outcomes”, including by embedding legal obligations in agentic AI systems “at a fundamental level” and ensuring that agents generate outputs that comply with data protection law.

Looking ahead, the ICO is developing a statutory code of practice on AI and automated decision-making, which will be relevant for those developing agentic AI. At the same time, it promises to “continue to work with stakeholders to further our understanding of agentic AI and to promote data protection by design and default in the development of agentic technologies”.

To read the report in full, click here.