Exploring AI Priorities in a Public Sector Context
A case study conducted with the Environment Agency
Case Study Authors
Louise Every (Environment Agency), Dr Ayse Begüm Kilic-Ararat and Dr Arsham Atashikhoei (University of Bath)
Context
As a large public body, the UK Environment Agency operates in a complex and highly scrutinised environment where digital transformation is both a strategic priority and a public responsibility. Digitally enabled transformation of services has become central to improving delivery, enhancing efficiency, and ensuring that public funds are used effectively. The organisation places strong emphasis on putting users at the centre of its services, including both external stakeholders and internal staff.
With artificial intelligence increasingly discussed across the public sector, the Agency sought to explore how staff across different levels and departments perceive the potential value of AI in their work. 20 participants joined the workshop, representing diverse roles, seniority levels and functional areas. The goal was not to evaluate a specific system already in place, but to understand what AI could and should deliver in a public service context, and how internal priorities align or differ when considering its adoption.
Objective
The workshop aimed to support structured reflection on AI as an emerging technology within the Agency. Rather than beginning with predefined use cases, the objective was to clarify what benefits matter most when considering AI adoption in a public sector environment. Given the responsibility of managing public funds and delivering essential services, determining which outcomes should guide investment decisions was considered particularly important. The Agency was interested in comparing and contrasting priorities across groups to better understand areas of alignment and divergence, and to enable more transparent and evidence-informed discussions about transformation.
Approach
For this workshop, we adapted the Metric Tool to reflect the specific context of a public sector organisation. The metrics were revised to align with the Environment Agency’s strategic priorities and service-oriented responsibilities. Participants completed a structured pairwise comparison exercise to evaluate the relative importance of metrics when assessing AI adoption.
Participants were asked to complete the survey in advance of the collective session, enabling us to efficiently gather a broad range of individual responses across 12 participants. While this approach supported scale and inclusivity, the experience suggested that completing the exercise together may offer greater value in future applications. Undertaking the pairwise comparisons in a shared setting can encourage immediate reflection, clarify interpretations, and stimulate richer discussion around the reasoning behind selections. The session reinforced that the tool functions not only as a data-gathering instrument but also as a facilitation mechanism, and that collective completion may better support dialogue and shared understanding.
The tool generated weighted outputs that transformed individual judgments into prioritised results, enabling comparison across the group. These findings were then shared and discussed collectively, providing a structured basis for reflection on how AI might support organisational goals and service delivery.
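The case study does not specify the aggregation method the Metric Tool uses internally. As an illustration only, one simple way to turn a participant's pairwise choices into normalised priority weights is to count how often each metric "wins" a comparison and divide by the total number of comparisons. The metric names and the `pairwise_weights` helper below are illustrative, not part of the actual tool:

```python
from collections import Counter
from itertools import combinations

def pairwise_weights(metrics, choices):
    """Derive normalised weights from pairwise-comparison wins.

    metrics: list of metric names, in a fixed order
    choices: dict mapping each ordered pair (a, b) from that order
             to the metric the participant judged more important
    """
    wins = Counter({m: 0 for m in metrics})
    for pair in combinations(metrics, 2):
        wins[choices[pair]] += 1          # one win per comparison
    total = sum(wins.values())            # equals n*(n-1)/2 comparisons
    return {m: wins[m] / total for m in metrics}

# Toy example using the three category labels from the Insights section
metrics = ["Business Performance", "Operational", "Human-centric"]
choices = {
    ("Business Performance", "Operational"): "Business Performance",
    ("Business Performance", "Human-centric"): "Business Performance",
    ("Operational", "Human-centric"): "Operational",
}
weights = pairwise_weights(metrics, choices)
```

Averaging such per-participant weights across a group is one plausible way to produce the group-level prioritised results discussed in the session.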
Insights
The workshop revealed that many participants were uncertain about what they truly want or need from AI in their work. AI was described as a broad and sometimes abstract concept, and the tool was valued for guiding participants through a structured prioritisation process that helped clarify thinking and make competing priorities visible. In a large public body where strategic decisions must be carefully balanced and justified, the ability to compare and contrast perspectives across groups was seen as particularly valuable.
The results themselves were described as surprising. Once the outputs were categorised into three groups (Business Performance, Operational, and Human-centric metrics), clear differences emerged. Overall, Business Performance metrics ranked highest, Operational metrics second, and Human-centric metrics lowest. Within specific groups, however, more nuanced patterns appeared. In one group, employee mental health emerged as the most important metric, signalling visible cultural shifts and growing attention to staff wellbeing. At the same time, some metrics were consistently rated as low importance across all groups, despite covering areas the organisation would normally consider strategically significant. This prompted reflection and signalled a potential need for further internal awareness and training to ensure strategic priorities are fully understood and embedded.
Participants appreciated how the tool surfaced the range of views within the organisation and acted as a catalyst for more open, evidence-based discussion. It highlighted not only differences between groups, but also deeper organisational signals relating to capability gaps, cultural change and readiness for AI adoption. For the first time, this group requested benchmarking their results against four previous use cases, demonstrating interest in understanding how public sector priorities compare with private sector contexts and situating their outcomes within a broader evidence base.
Overall, the workshop demonstrated that structured prioritisation can support more transparent, reflective and evidence-informed conversations about AI adoption within the public sector.
Impact
“As a large public body, digitally enabled transformation of our services is a high strategic priority for our organisation. Our focus is rightly on putting users at the centre of our services, and this includes our staff as well as external users. Determining the most important benefits of transformation is vital to ensure public money is spent in the areas where it can make the most difference. What this tool enables us to do is compare and contrast competing priorities across different groups. Making the range of views more transparent helps drive open discussions on what is important and why, and evidence-based decision-making.
The workshop provided the Environment Agency with a structured mechanism to explore AI priorities across departments and levels. By surfacing diverse perspectives and making them visible in a transparent format, the session supported more confident and informed conversations about digital transformation. The insights gained contribute to ongoing reflections about how AI can be responsibly and effectively integrated into public service delivery.”
Louise Every - Digital Strategy Manager (Environment Agency)
For further information on this case study please contact the P-LD at P-LD@bath.ac.uk
Acknowledgement
This work was supported by the Innovate UK-led Made Smarter Innovation Programme: People-Led Digitalisation Engagement and Impact Acceleration [Grant Reference UKRI1436], Centre for People-Led Digitalisation, at the University of Bath, University of Nottingham, and Loughborough University.