In December 2021, the Geospatial Commission published the findings of its independent public dialogue on location data ethics.1 The project was launched in March 2021 and was co-funded by UK Research and Innovation’s Sciencewise programme.
The report, delivered by public dialogue and data specialists Traverse and the Ada Lovelace Institute, identifies and unpacks views from members of the public about the ethical use of location data and opportunities for growing public trust in its use.
Evaluation
In support of Mission 1 of the UK Geospatial Strategy,2 to promote and safeguard the use of location data, the Geospatial Commission intends to publish guidance later this year on how to unlock value from location data in a manner that mitigates concerns and retains public confidence. The public dialogue, one of the UK’s first on location data, sought to gather evidence on public perceptions about location data use, to inform this guidance. The report offered valuable insights into what citizens believe are the key benefits and concerns.
At the Geospatial Commission, evaluation is a key part of every project. It takes many forms, but in each it helps us to learn and adapt as a project progresses and provides evidence of the impact we are having.
Why is evaluation so important in the public dialogue?
Public dialogues are naturally organic and iterative processes. There are objectives developed at the outset, of course, but much of the true insight comes from the open process which allows members of the public to talk about the aspects of the topic that matter to them, and how it fits into their lives, in their own words. Evaluating the quality of the process, therefore, calls for an approach that can accommodate this flexible style. Things might change along the way. Unexpected outcomes might arise.
In any evaluation, you are chiefly looking at two things: the impact the project has had (what difference it has made) and the process that provides the context for that impact. A public dialogue is no different – you can assess the impact on public participants, on other stakeholders and on policymaking. You can assess the process too, and, in a final assessment, judge what difference those aspects of the process made to the overall impact of the dialogue. This helps lead to improvements for future public dialogues – a better process and greater impact.
However, it is not just about making a final assessment. Along the way, evaluation also helps to improve the dialogue as it is developed and delivered. I have produced internal reports and joined project team meetings to feed back what has been learnt through the evaluation. This formative aspect of evaluation is important in creating a learning and development approach for the project, allowing the project team to react and adapt to the latest information.
Collaborations and methodologies
Sciencewise, the co-funder of the dialogue, is a programme led by UK Research and Innovation (UKRI) which supports policymakers to develop socially informed policy through public dialogue. Since its inception in 2004, across more than 55 dialogues, Sciencewise has developed a framework for assessing quality in public dialogue. This framework forms the backbone of any independent evaluation of a Sciencewise co-funded public dialogue, including guidance on assessing context, scope and design, delivery, and impact.
For this particular evaluation, I wanted to apply a methodology called realist evaluation, first outlined by Ray Pawson and Nicholas Tilley. This is a theory-based evaluation methodology that asks what works, for whom, in which circumstances and why. In particular, it focuses on identifying the mechanisms by which outcomes are achieved – mechanisms that lie somewhere in the intersection between the resources the dialogue offers and how those involved respond to them.
Articulated in this language, the overall theory I have been testing is that, by providing participants with new resources (stimulus material, interaction with experts, and a structured and welcoming space in which to discuss the topic with others), public dialogue enables them to make meaningful contributions to policy development, and that the process and its outputs are seen as credible and are used by policymakers and other stakeholders.
Three key mechanisms for testing whether the intended outcomes have been achieved are emerging from the evaluation.
These are what I will be paying particular attention to over the next six months as the longer-term impact of the dialogue emerges.
Seeking feedback and next steps
In practice, evaluating with this methodology has meant observing the workshops, facilitator briefings and focus groups with specifically impacted groups. I have asked public participants, experts and observers to complete surveys after some of the workshops, and I have interviewed a sample of participants, experts and observers, members of the project’s independent expert oversight group, and members of the project team – including Traverse, the Ada Lovelace Institute and the Geospatial Commission – at multiple points during the dialogue.
It has been fascinating watching public participants grapple with the topic of location data ethics. It was both a technical conversation and a conversation about society and ethics. This is a challenge to which the participants have risen when provided with the space, materials and expert facilitation to learn about the topic and develop their own views through discussion with a diverse group of others.
A final evaluation report of the dialogue will be published by Sciencewise in summer 2022.
Sophie Reid, Independent Evaluator for the Geospatial Commission
www.gov.uk/government/organisations/geospatial-commission
1 www.gov.uk/government/publications/public-dialogue-on-location-data-ethics