DG Cities - Blog

Is it possible to shift public opinion on automated cars? Lessons from DeepSafe

Ed Houghton

Guest blog: Ed Houghton shares four vital steps towards public trust in AI for LOTI

Last year, DG Cities was commissioned by the Department for Science, Innovation and Technology to research AI assurance in industry, and to investigate the language used to describe approaches to evaluating AI in different sectors. This work formed part of the government’s report, Assuring a Responsible Future for AI, published in November. In a guest blog for LOTI (London Office of Technology & Innovation), Ed Houghton, who led the research, draws practical lessons from some of the key findings.

Transparency is a cornerstone of good governance, yet many public processes still feel opaque to citizens. As AI increasingly shapes decision-making, questions arise about how open government and transparent democracy can thrive when most AI systems remain closed and complex. This is where AI assurance plays a crucial role.

AI assurance refers to the processes that ensure AI tools are used as intended, working effectively, ethically, and without harm. It’s particularly vital in local government, where public trust and service effectiveness are paramount.

Our research explored how AI assurance is understood across sectors. Through a national survey of over 1,000 leaders and interviews with 30 managers, the research identified key steps for maximising the benefits of AI safely and transparently. These include defining common AI terminology, fostering cross-department collaboration, prioritising continuous evaluation, and engaging communities to build public understanding and trust in AI systems.

Read the full piece on LOTI’s website.

 


Lost in translation? How language defines trust in AI tools

Last month, the Department for Science, Innovation and Technology released the report Assuring a Responsible Future for AI. This important piece of work highlights the challenge of businesses and public sector organisations adopting AI tools without sufficient safeguards in place. For our latest blog, our Research & Insights Director, Ed Houghton looks at the importance of choosing the right words for assurance, and takes an in-depth look at some of the trust issues our research discovered.

Humans are hard-wired to consider trust, whether we’re buying a new car, meeting a stranger, or even understanding when to cross the street. We’re constantly assessing the world around us and deciding whether or not, through a decision we or someone else makes, we’re likely to benefit or come to harm. The problem is that humans aren’t always great at knowing what should and shouldn’t be trusted.

Trust, or more specifically trustworthiness, is a central element in the field of AI acceptance.

Trustworthiness, defined as displaying the characteristics that demonstrate you can be trusted, is gold dust to those looking to make use of AI in their tools and services. Tech designers go out of their way to make sure you trust their tools, because without trust, you’re very unlikely to come back. UX designers might choose voices that convey warmth, or use colloquialisms and local language to help ease interactions and build rapport. In text-based interactions too there’s a need for trust – some tools might use emojis to appear more authentic or friendly, while others might seek to reassure you by providing references for the answers they generate. These are all methods to help you trust what AI is doing.

The issue, however, is that trust in AI is currently being fostered by the very tools seeking your engagement. This obvious conflict means that people using AI, whether employees or consumers, may be placing their trust in a risky product or tool – and in an emerging market that is evolving at pace, that creates real risk.

Understanding the risk AI presents, and the language used by business to assure products, was the topic of our most recent study for government. The DG Cities team undertook research for the Responsible Technology Adoption Unit and the Department for Science, Innovation and Technology, exploring AI trust from the perspective of those buying and using new products in the field today – what AI assurance means to them, and what they need in order to assure new tools coming to the market. Our approach explored how AI tools are currently understood, and key to people’s understanding was the concept of fairness.

Understanding fairness of AI tools

For AI tools to be used safely, their training must be based on real-world data that represents the reality in which the tool is likely to operate, while also protecting against decisions that are biased or that limit outcomes. We found an example of “good bias” vs “bad bias” when exploring the use of AI in recruitment technology: here, bias from both objective and subjective measures is considered to drive a hiring decision, but those using the tool need to ensure there is no bias related to protected characteristics. This challenge is an area where fairness comes to the fore:

“Fairness is the key one. And that intersects with unwanted bias. And the reason I try and say ‘unwanted bias’ is that you naturally need some (bias). Any AI tool or any kind of decision making tool needs some kind of bias, otherwise it doesn't produce anything. And so, I think front and centre is how does it work, does it work in the same way for all users?”

- private sector procurer

You can imagine a similar scenario playing out in a local authority setting, in which resident information is used to assess housing allocation, or to prioritise retrofit and improvement works to social housing stock. Here, bias must be understood to ensure the tool is delivering value to all groups – but with the introduction of certain criteria, an equitable approach may be created, whereby certain characteristics (e.g. low income, disabilities) are weighted differently. Fairness here is critical – and it is a major reason why assurance processes, including bias assessments and impact evaluations, are key practices for local authorities to build their capabilities in.
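To make the idea of a bias assessment concrete, here is a minimal sketch of one common starting point: comparing a tool’s positive-outcome rate across groups. All names and data below are invented for illustration, and real assessments would use a range of fairness metrics, not this one alone.

```python
# Hypothetical sketch of a simple bias assessment: does the tool produce
# positive outcomes at similar rates for different groups? (Illustrative
# only - group labels and decisions are made up.)
from collections import defaultdict

def outcome_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = outcome_rates(decisions)
print(rates)              # {'group_a': 0.75, 'group_b': 0.25}
print(parity_gap(rates))  # 0.5 - a large gap that would warrant investigation
```

A large gap is not automatically “bad bias” – as the procurer quoted above notes, some bias is wanted – but surfacing the number is what lets an authority ask whether the difference is justified.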

Making AI assurance more accessible

UK public sector bodies and businesses of all sizes are going to need to ensure the AI tools they are using are fit for purpose – without steps in place to make the checks needed, there is real risk of AI being used incorrectly and potentially creating harm.

Defining terms is important for several reasons, not least because without clarity and consistency, it is likely that those involved in the development, implementation, and regulation of AI technologies may find themselves speaking at cross purposes. Clear terms, used in agreed ways, help prevent misunderstandings and misinterpretations that could lead to errors or inefficiencies.

Well-defined terminology is also crucial for establishing ethical guidelines and legal standards. It allows policymakers to create regulations that address specific aspects of AI, such as privacy, bias, and accountability, ensuring that AI technologies are developed and used responsibly. Terminology related to AI assurance practice must convey requirements for legal standards, but as we’ve found from our engagement with industry for DSIT, this issue of terminology prevents businesses of all sizes from understanding what they need.

“Is the language of AI assurance clear? I don't know whether it's the language per se, I think there's probably a lack of vocabulary… to me it's a question of ‘what are you assuring? What are you trying to show that you've achieved?’ And that all stems from: ‘what does the public want from the technology, what do they not want, what do regulators expect to see, how much evidence is enough evidence?’”

- private sector procurer

Assurance language that is clear and well understood is also a pillar of effective risk management.

By precisely defining terms like "bias," "transparency," and "explainability," businesses and their stakeholders are far more likely to understand potential risks and take action to limit their potential impact. Shared meaning between leaders, teams, suppliers and clients is important if issues with AI are to be tackled in an appropriate way.

Finally, and perhaps most importantly, without clear AI assurance terminology, it’s unlikely that AI technologies will be widely accepted and trusted. Assurance is one of the key mechanisms through which public bodies and businesses convey the trustworthiness of AI to the public. This is where clear terminology can be most powerful – it helps to demystify complex concepts, making AI more accessible to non-experts and increasing public trust. It’s also important in demonstrating the trustworthiness of brands – not only private sector businesses, but also local government.

Being a trusted source of information

As our research highlights, there’s a lot to be done in business and the public sector to share and learn about how AI tools and services work in practice. At DG Cities, this is the kind of role we’re playing with authorities today: helping to make sense of a complex and changing field. If you’re keen to learn more about the AI tools now in the field, and the types of assurance steps you should take to make better decisions on AI, get in touch.


Read the full report, Assuring a Responsible Future for AI.


Understanding how we describe trustworthy, responsible and ethical AI

Less than half (44%) of UK businesses using AI are confident in their ability to demonstrate compliance with government regulations, according to a new report released by the Department for Science, Innovation and Technology. DG Cities contributed to this research, published under the Responsible Technology Adoption Unit, which highlights the challenge of businesses and public sector organisations adopting AI tools without sufficient safeguards in place. For our latest blog, our Research & Insights Director, Ed Houghton, who led the research, explains why the words we use to define emerging tech matter.

Max Gruber / Better Images of AI / Ceci n'est pas une banane / CC-BY 4.0

In a world already full of jargon and buzzwords comes AI to generate its own. Almost overnight (although those in the field will no doubt argue otherwise), business has had to run to keep up, as new terms, such as gen-AI, have entered the lexicon. Of course, the day-to-day use of jargon might be irritating, but beneath it lies a critical challenge: within the AI space there is no clear language that people believe, understand and trust.

Nowhere in the AI field is language more important than in the space of AI assurance. Put simply, assurance is the practice of checking something does as it is designed and intended. For businesses using AI, assurance is critical in assessing and validating the way AI uses business or consumer data. In regulated industries like banking, AI assurance is becoming a key requirement of responsible practice.

At DG Cities, we were recently commissioned by DSIT to explore assurance language as part of the UK government’s push to create the UK’s AI assurance ecosystem. Our aim was to engage with UK industry to understand the barriers to using assurance language, and the importance of standardised terms to helping businesses communicate with their customers and stakeholders. We surveyed over 1,000 business leaders and interviewed 30 in greater depth to explore their views.

What we found gives an interesting picture of this emerging space. We found excitement and interest in making use of AI, but concerns over doing the right thing. For example, almost half (44%) didn't feel confident they were meeting assurance requirements from regulation. The reasons for this were numerous, but consistent themes were: lack of clear terms, and lack of UK and international standards.

We also spoke to the public sector about assuring AI when working on public services, including in local government. Here similar issues came up: a lack of knowledge of how to assure AI, and terms that were inconsistent. We believe this is a barrier to the safe adoption of AI in sectors where it could have major value.

It's great to see our work for DSIT now shared. We think this is a massive opportunity for the UK to lead globally, to create AI assurance businesses and tools that are designed to ensure AI remains safe and trustworthy, and that ensure the public is always protected when AI is used.


If you’re interested in finding out more about our work in AI, you can read about how we help local authorities navigate the challenges of ethical and effective use of new tools here, browse our reports here, or get in touch

Trust us – two little words that aren’t going to advance the self-driving industry

Today, our Director of Research and Insights, Ed Houghton will be joining a panel at the CAM Innovators day at the Institution of Engineering and Technology. He’ll be sharing insights from our recent work - talking about the need to demonstrate safety, evidence from our trials and surveys, the importance of engaging vulnerable groups and assurance.

Trust is central to relationships. Whether it’s with people, brands, services or technologies, trust radically shapes our behaviour and experiences. And with AI now becoming more and more prevalent in our lives, trust has a whole new dimension of complexity – is it possible to trust technologies that are, on the surface, behaving like a human? What happens when trust is broken?

The APA Dictionary of Psychology defines trust as “the confidence that a person or group of people has in the reliability of another person or group… the degree to which each party feels they can depend on the other party to follow through on their commitments.” In the case of self-driving, then, trust isn’t only in relation to the vehicle – it’s also placed in the service provider, the originator or owner of the technology. And when it comes to commitments, there are key outcomes those using self-driving tech expect: as DfT research has shown, safety is paramount. By that token, when we talk about trust in self-driving AI, we’re essentially also talking about perceptions of safety.

Trust issues are particular to different industries

This is different to how trust is understood in other AI use cases. In banking, trust in chatbots is tied to issues such as fraud. In HR, trust is related to bias and discrimination. Each focus of trust demands a different approach and strategy when engaging with customers, clients or users.

Across the board, however, there are a variety of factors that influence public trust in AI: traits such as personality, past experiences, technology anxiety/confidence, for example, shape public response. But so do the characteristics of the AI itself: reliability, anthropomorphism and performance, in particular, shape our views.

And it’s this last one – performance – that is key in the self-driving space. In the absence of visible self-driving technology on our roads beyond trials, it’s difficult for the public to judge whether the performance of a self-driving vehicle is up to scratch. There are few tangible examples out there to act as a baseline for us.

The context itself also plays a huge role. Driving, or being a road user in general, is a daily task that puts individuals at greater risk than many other day-to-day activities. AI in a driving context therefore operates under heightened risk, and it is demonstrably difficult at present to develop AI that can deal with complex driving scenarios.

Demonstrating safety – in every situation

These complex scenarios present a massive challenge to industry – one we’re helping to understand more about. Complex ‘edge cases’ need better simulation so AI can be taught how to deal with them. They also present huge risk, as they are often visceral, emotive experiences that describe the nature of incidents on our roads. Using these examples as a platform to build trust is a challenge, and could break trust in the technology if handled incorrectly – but if safety can be demonstrated, it is likely to support acceptance of AI technology as a transformative factor in our future mobility system.

We’ve done many pieces of work over the years into public acceptance and trust, and are currently working on several projects on trust with a self-driving angle. DeepSafe, our work with Drisk.ai, Claytex, rfPRO and Imperial College, is looking at trust in self-driving from the perspective of testing and demonstrating trustworthiness through the AI Driving Test. With our partners, we’re exploring whether driving test simulations, built with the very latest simulation technology, can showcase how AI behaves around edge cases – and what impact this has on trust. We’re also exploring public attitudes and capturing people’s experiences of complex situations to help train the AI. This, we hope, will help us develop an understanding of how trust can be influenced by different types of safety-related information, and of the importance of demonstrating safe behaviours in building trust.

That’s why the self-driving industry, unlike banking or other sectors, cannot rely on asking to be trusted, or on saying it is trustworthy. Instead, the industry must demonstrate trustworthiness through safety – safety of users, safety of others on our streets and, in particular, safety of vulnerable groups. Only then can industry expect to see mass adoption and acceptance of AI on our roads.

Interested to learn more? Get in touch or read more about our work in the sector and current project, DeepSafe.

Complex-to-decarbonise homes: a systems perspective

As the government publishes DG Cities’ research report with UCL, ‘Defining and identifying complex-to-decarbonise homes’, Head of Research, Ed Houghton explains the importance of a definition in addressing the multifaceted challenge of decarbonisation - and the value of an index rather than binary approach to understanding this complexity.

The UK is committed to achieving net zero carbon emissions by 2050. According to the UK Climate Change Committee, over a third (37%) of Britain’s annual greenhouse gas emissions come from building energy and heat. If the goal is to be achieved, housing, and in particular social housing, must be decarbonised.

However, decarbonising such diverse social housing stock is no easy feat. The UK has some of the oldest and least energy efficient housing in Europe. Across the social housing sector, many tenants live with poor insulation and inefficient heating systems. Some homes are prone to draughts and damp, creating uncomfortable and unhealthy living conditions. Many ageing social housing blocks are expensive to heat and contribute significantly to carbon emissions.

Social housing providers, such as local authorities and housing associations, face many challenges in decarbonising stock, and understanding which barriers to tackle, and when, requires consideration and planning. UK social housing is hugely diverse, and the approach taken must fit the needs of the property – whether that’s a post-war tower block on a London housing estate or a listed Georgian terrace of converted flats – so understanding the attributes and characteristics of the property and its context is key.

Complex-to-decarbonise homes: the value of applying a systems lens

The diversity and technical complexity of housing in this country means that there is no one-size-fits-all solution to decarbonisation, particularly where numerous barriers compound one another. Instead, those looking to retrofit and decarbonise heating should seek to understand the barriers and opportunities within a property in order to design a solution.

This is where a better understanding of the concept of ‘complex-to-decarbonise’ (CTD) can help. CTD refers to “homes with either one, or a combination of, certain physical, locational, occupant demographic, or behavioural attributes that prevent the effective decarbonisation of that home until they are addressed. These attributes might constrain the design and delivery of measures to improve energy efficiency, decarbonise heating, or realise occupant benefits (e.g. increased comfort and affordability of heat and energy).”

By defining the specific attributes and factors that describe the property, it is easier to understand the best way forward – and for the most challenging properties, this can be hugely beneficial. Take, for example, a CTD block of flats built in the 1960s with electric heating and cavity walls, as described in a case study for our DESNZ study. Fitting external cladding to these properties was challenging, requiring skilled teams to abseil to install insulation – what’s more, the variability of cavity insulation across the property created a real challenge. This property required detailed consideration, making it particularly complex to decarbonise through standard approaches.

The definition of CTD can be applied to any situation in which a property is to be retrofitted and its heat source made more carbon efficient. Essentially, rather than a binary label, our work positions CTD as an index. The value of this approach is that it provides a spectrum on which any property can sit – some are less complex to decarbonise (e.g. requiring a simple insulation retrofit), while others require improvements in multiple ways. The method also lets the user weight the attributes according to their perceived importance – for example, weighting a social factor, such as vulnerable occupants, highly to make sure it is taken into account in retrofit selection and delivery, rather than looking at the fabric of the building in isolation.
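The index idea above can be sketched very simply: score each attribute of a property, apply user-chosen weights, and take the weighted average to place the property on a 0–1 complexity spectrum. The attribute names, scores and weights below are invented for illustration and are not the definitions used in the DESNZ report.

```python
# Illustrative sketch of a CTD-style index (hypothetical attributes and
# weights). Each attribute is scored 0-1; the user weights attributes by
# perceived importance; the result is a position on a complexity spectrum
# rather than a binary complex / not-complex label.

def ctd_index(attributes, weights):
    """Weighted average of attribute scores; result lies in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(attributes[k] * weights[k] for k in weights) / total_weight

weights = {
    "fabric_difficulty": 1.0,       # e.g. solid walls, listed status
    "heating_system": 1.0,          # e.g. electric storage heating
    "location_constraints": 0.5,    # e.g. conservation area, access
    "occupant_vulnerability": 2.0,  # weighted highly, per the example above
}

simple_terrace = {"fabric_difficulty": 0.2, "heating_system": 0.3,
                  "location_constraints": 0.1, "occupant_vulnerability": 0.2}
sixties_block = {"fabric_difficulty": 0.9, "heating_system": 0.8,
                 "location_constraints": 0.6, "occupant_vulnerability": 0.7}

print(round(ctd_index(simple_terrace, weights), 2))  # lower: less complex
print(round(ctd_index(sixties_block, weights), 2))   # higher: more complex
```

The weighting step is where the systems lens comes in: raising the weight on a social factor such as occupant vulnerability changes which properties rank as most complex, without touching the building-fabric scores.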

A step towards greater impact

Decarbonising the UK's housing stock is a huge challenge, but it is critical to meeting our environmental aims. It will require a collective effort from government, industry and homeowners, and a focus on tackling the homes at the most complex end of the CTD scale.

We believe this new approach can radically shift decarbonisation towards a more holistic appreciation of the system in which these activities happen. By understanding the socio-economic and environmental factors, we believe that more sustainable and higher impact approaches can be brought to the market, and utilised to create healthier, more sustainable and liveable conditions, particularly for social housing tenants.

Summary report

Understanding the value of a CTD index for local authorities

Read more about our work on retrofit and download the full DESNZ report here. 

COP28: global decisions depend on local leadership

With another COP drawing to a close, Head of Research and Service Design, Ed Houghton shares his view on the summit ahead of the final wording of any consensus, and highlights the vital work of local government in delivering today on what global leaders can only negotiate as principles for an increasingly unstable future. 

As this year’s controversial COP28 wraps up, we think it’s time to stop looking to the top for leadership, and instead to recognise and learn from what’s happening at a local level.

The United Nations Climate Change Conference (COP) has become an important part of the annual climate calendar, as world leaders meet to agree how to tackle climate change. The meeting centres on agreements by countries on the targets and approaches to mitigating greenhouse gas emissions and building resilience. This year, the conference has been held in Dubai – a petrostate not necessarily known for its climate credentials.

Climate change is recognised by scientists worldwide as a threat to global stability. The evidence for its impact continues to grow – and the picture is bleak. The Intergovernmental Panel on Climate Change's Sixth Assessment Report from 2021 painted a grim picture of the planet's changing climate, highlighting the urgency of immediate and drastic action. The IPCC urged nations to limit global warming to 1.5°C above pre-industrial levels – the target set in the Paris Agreement – which it described as essential to avoid the most catastrophic impacts of climate change.

A leadership vacuum

Unfortunately, ‘leadership’ at this year’s summit has been severely lacking. Just days before COP28 began, leaked documents revealed that the UAE planned to discuss oil and gas deals with several countries throughout the summit, raising suspicions among many that the UAE was using COP as a platform to promote fossil fuel interests.

Then, not long into the gathering, the Guardian published accounts of the COP28 president, Sultan Al Jaber, downplaying the need to phase out fossil fuels. At a public event, he suggested that there was "no science" to support calls for a fossil fuel phase-out, contradicting the scientific consensus that fossil fuels are a major contributor to climate change – remarkable comments from the man chairing the summit.

Closer to home, Prime Minister Rishi Sunak took the opportunity to highlight the UK’s progress, even as he rolls back commitments to low-carbon heating, such as heat pump deployment and retrofit energy efficiency measures. This is despite the October outlook from the UK’s independent Climate Change Committee of leading climate scientists, which states that the UK is highly likely to miss both its target of reducing greenhouse gases by 68% by 2030 and its long-term ambition of net zero by 2050.

Time for local government to lead the way

In the absence of national-level leadership, it falls to local government to try to deliver on net zero, whilst also under extreme pressure to reduce costs and operate efficiently. There are, however, clear indications of local councils across the country delivering net zero innovation. For example, the recent LGA report Key Cities – Emissions Down, Levelling Up, published in May 2023, took a closer look at the progress of the LGA's Key Cities Network towards net-zero emissions. The analysis highlighted significant progress in key areas, including retrofit and decarbonisation, but noted that more work was needed to meet the ambitious targets set out in the network's net zero plans. The report concluded that, with limited resources, there is only so far that local government can go.

Our own work has also highlighted how action is happening across local government at every scale, including the ‘hyperlocal’ level of neighbourhood decarbonisation. Our case studies for the LGA showed action across technology innovation, retrofit acceleration and community engagement. For example, Redditch Borough Council and the Midlands Net Zero Hub partnered to locate local assets eligible for the Public Sector Decarbonisation Scheme, bringing together experts from the hub with council officers to deliver retrofit projects. In Devon, the County Council drew together LAD (the Green Homes Local Authority Delivery scheme, which aims to raise the energy efficiency of low-income and low-EPC-rated homes) and HUG (Home Upgrade Grant) funding to develop the Sustainable Warmth Fund, promoting retrofit to the able-to-pay market and providing advice and guidance through local networks and organisations.

Look local to recognise impact

Many local authorities are doing as much as they can with limited funding – and often very much out of view. While the international press focuses on the motives and decisions of global leaders in Dubai this month, local authorities are quietly stretching limited budgets to tackle net zero. And they’re doing this not only to achieve their commitments, but also to deliver on their social value purpose to their communities. The local level is where real change is happening.

If you are part of a local authority looking to develop a strategy or accelerate decarbonisation, find out more about our consultancy services.

Latest Government steer on self-driving vehicles

This month, the Government published evidence on issues around the potential deployment of self-driving services in the UK. Our Head of Research and Service Design, Ed Houghton, who presented to the Transport Select Committee in March looks at how far we have come in terms of public attitudes – and where we are going next with the DeepSafe consortium.

Self-driving technology has the potential to radically change how we move around our towns and cities. It shows so much potential that the UK Government considers deployments possible by 2025. In August 2022, it stated that it intends to move forward with defining a regulatory, legislative and safety framework to make deployment a reality. Industry is waiting for these developments to enable rapid commercialisation, and as Government looks to define its place on the global stage as an AI safety superpower, it stands to reason that self-driving services – based on AI – should form the basis of this next leap forward.

When we have worked with industry on the development of safe self-driving services, we have seen how UK companies see the potential value of the technology to our neighbourhoods. Our work has always focused on bringing under-represented public voices into the discussion about the future of our transport system – and our work with industry leaders such as Oxa and DRISK has enabled us to explore, in detail, the opportunities and challenges the public sees when they consider self-driving tech. This has given us a deep understanding of where we think development should move next.

Societally, we place driving licences, gaining the freedom of driving, as almost a part of our identity... The vehicle becomes part of how we gain freedom. The challenge that industry faces is that you have to disconnect the vehicle, the object, from somebody’s identity.
— Ed Houghton

One key issue we pick up on is the view that technology-based solutions are too often built without broad engagement. As researchers, we know that without effective engagement with diverse audiences, new technology solutions can be severely limited in their utility to the end user. For something as potentially transformational as self-driving technology, we think the risks are too high to deliver to market technologies that have not been extensively validated with the public.

That’s why, when the UK Parliament Transport Select Committee sought evidence on the evolution of safe self-driving services, we were keen to share our insights from our public engagement work. The Committee’s report, recently published, highlights the outcomes of their inquiry, and rightly showcases the challenges and opportunities facing the development of commercial services.

Our research during trials, which has included live public engagement, has shown overwhelmingly that the public considers safety a key priority, but many still lack knowledge of the reality of self-driving technology, and what it might mean in practice, on the road:

  • 26.8% would feel confident using an AV tomorrow if it were possible to do so. Over half would not (55.1%). The remainder are undecided (18.1%).

  • 3 in 10 (29.9%) believe that self-driving vehicles will be safer than traditional vehicles, whilst 44.2% disagree. A quarter (25.9%) are undecided.

But our study showed that demonstrations can make a big difference in reassuring and shifting public opinion.

  • Live trials improved perceptions of safety by 15.3 percentage points: before the trial, 68.3% agreed that AVs would be safer than human-driven vehicles, whilst after the trial 83.6% agreed.

  • Trust in self-driving vehicles is low, but a large minority is yet to be persuaded: findings from our national survey show almost a third (32.5%) think self-driving vehicles will be trustworthy, whilst two in five (43.8%) do not. Almost a quarter (23.6%) are undecided.

Safety and demonstrated trustworthiness must be key outcomes for future services. If industry is to drive adoption and acceptance, designers must prioritise these factors in their practice. How can we do that at scale, with the wider self-driving ecosystem? That is the topic of our new study with the DeepSafe consortium, which kicked off recently.

Through DeepSafe, we are working with experts at drisk.ai, Claytex, RF Pro, and Imperial College London to test and validate AI responses to hard-to-predict edge case scenarios, and using these to demonstrate the potential of the technology to the public. We believe this process will not only prompt engagement and discussion on the value and potential of the technology, but also surface insights that can help self-driving AI developers ensure the technology they’re developing is human-centred.
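To make the idea of edge-case validation more concrete, here is a minimal, purely illustrative sketch of how a hard-to-predict scenario might be recorded and checked against a simulated vehicle's response. This is not the DeepSafe consortium's actual tooling; all names, fields and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EdgeCase:
    """A hard-to-predict road scenario, e.g. one crowd-sourced from the public."""
    description: str
    hazard_distance_m: float  # distance at which the hazard becomes visible
    required_stop_m: float    # distance within which the vehicle must stop

def passes_scenario(braking_distance_m: float, case: EdgeCase) -> bool:
    """A simulated AV 'passes' if it can stop within the safe stopping envelope."""
    return braking_distance_m <= case.required_stop_m

# Hypothetical example: a mattress falls from a van 40 m ahead;
# the vehicle must come to a stop within 35 m.
mattress = EdgeCase("Mattress falls from van ahead", 40.0, 35.0)
print(passes_scenario(30.0, mattress))  # True: stops in time
print(passes_scenario(38.0, mattress))  # False: overruns the envelope
```

In practice, the consortium's simulation work is far richer than a single pass/fail distance check, but the principle is the same: collect unusual scenarios, replay them against the AI, and use the results to demonstrate behaviour to the public.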

We think this will go some way to supporting some of the findings from the UK Parliament Transport Select Committee, as they rightly called for a focus on safety as a priority for government. As the report highlights, “Safety must remain the Government’s overriding priority as self-driving vehicles encounter real-world complexity.” Understanding this complexity, and engaging the public in validating self-driving AI responses to it, is exactly what the DeepSafe project is looking to do.

Global warming, local action: best practice in neighbourhood decarbonisation

Tackling the causes of climate change requires decisive action and political leadership at a global scale, but it also relies on collective change by individuals, supported by local initiatives. Last year, DG Cities worked on a project with the Local Government Association to understand the range of neighbourhood approaches to decarbonisation. Head of Research and Service Design, Ed Houghton revisits some of the case studies.

Three-quarters of the way through 2023, it’s clear that this has been a record-breaking year for the climate, for all the wrong reasons. July was the hottest month on record: temperatures in China hit a high of 52.2 °C, while in the US, the city of Phoenix experienced an astonishing 31 days of temperatures at or above 43.3 °C, smashing the 18-day record set in 1974. In the southern hemisphere, where winter replenishes vital Antarctic sea ice, this June saw ice cover 4.5 million square miles of ocean around the continent, nearly a million square miles less than the average from over 40 years of observations. In Greece, Canada and China, some of the worst wildfires in living memory have ravaged communities, displacing people and damaging local biodiversity and wildlife.

While the UK escaped record-breaking heat, this June was still the hottest in the country since records began. And in the UK, like much of the globe, climate change isn’t only increasing the likelihood of extreme heat events – this July was also one of the wettest on record. The unpredictability of weather and climate is set to continue. It is therefore critical that we not only limit harmful emissions, but also start to build resilience in our infrastructure and communities to adapt to our changing climate.

How does this start at a local level?

Adapting to tackle climate change, and supporting communities in learning and developing new approaches, is one strategy to mitigate its future impacts. Towards the end of last year, DG Cities worked with the Local Government Association (LGA) to undertake a deep dive into local decarbonisation strategies to understand how approaches are being designed and delivered across key themes. These included housing and energy decarbonisation, transport, and service delivery. We wanted to understand some of the best practices of leaders in the field, as well as to draw out some key lessons to help ensure others across the network can build and develop their own successful projects in the future.

We cast our net wide to explore approaches from across the UK, looking at rural and urban communities, making sure we drew from a range of projects that reflect the diversity of approaches and challenges local authorities are experiencing. The studies captured examples of real projects delivering tangible change, and we think, reflect a real richness of insights that help to showcase some of the work underway to tackle climate change.

One important strand of our work at DG Cities is neighbourhood decarbonisation: bringing together an understanding of a council’s assets with the social value proposition of retrofit, and aligning the different steps needed to improve an area. The team has been reflecting on examples from our LGA project that really stood out, and there are a couple that we think demonstrate how action in this area is delivering tangible differences to communities.


As with many projects, initially uptake was slow from private properties. So Leeds prioritised retrofitting the 40 council houses to show the improvement and to start conversations. This created a snowball effect, whereby private landlords and homeowners began to want the works too.
— Leeds City Council, case study

Leeds City Council: the Neighbourhood Retrofit Programme

One major challenge for local authorities is how to retrofit social housing to meet the target of EPC C minimum, and to do this in a systematic and evidence-based way. The team at Leeds City Council developed a Priority Neighbourhoods approach, in which they drew on a set of key performance indicators to help identify where retrofit interventions should be targeted. But it wasn’t only at the identification stage where this played out – the team also directed their resources to transform communities through focused action, channelling funding, such as ECO and regional funding, to create a big impact over a short period. By doing so, the team was able to reduce the costs of regeneration work, and create campaigns in local areas to build buy-in and demonstrate impact.


Hampshire County Council: the Greening Campaign

Community engagement and support was a common theme across the case studies we developed. Those we spoke with reflected on the value that community participation brings to the design and delivery of projects – and to helping to ensure success. One example of this was the Hampshire County Council programme, delivered in partnership with the Greening Campaign. Through targeted support, intervention design with community members, and simple, repeatable activities, the group leveraged community interest to deliver projects that tackled real issues for local residents. These included improving recycling rates, supporting local wildlife and highlighting the value of home retrofit to homeowners. Through this work, Hampshire County Council has been able to trial new approaches to building community participation through behaviour change programmes – it is now at the stage of seeking further funding to grow these activities and create lasting impact.

It will take real leadership, innovation, and collaboration to navigate the universal challenges of climate change, however they present locally. But through our research, it was encouraging to see that all of this and more is already happening in local authorities, and that at a community level, things are already changing across the UK.

Read more of the case studies here.

"Trust me, I'm a robot” - Why asking isn’t enough

Trust is complicated. It can be hard to define why we trust certain people or information, as so much of that decision-making process is instinctive. How do those human cues and feelings translate to our interactions with robots? It’s not such a far-fetched issue to consider, as Head of Research and Service Design, Ed Houghton explains.

Midjourney AI urban robot

Increasingly, we are being asked to consider trusting new and emerging technologies in our towns and cities. AI and machine learning have a range of urban applications, from connected self-driving vehicles to smart refuse systems and IoT-based security cameras. But, however inventive, well intentioned or well-funded the technology, the major limiting factor in its successful rollout and adoption is trust. Are we prepared to trust these systems that have been designed to support us? If we don’t, the technological solution could fail – and when trust is broken, reputational damage can be difficult to shake.

Are we prepared to trust these systems that have been designed to support us? If we don’t, the technological solution could fail – and when trust is broken, reputational damage can be difficult to shake.
— Ed Houghton

DG Cities approaches projects by putting people at the centre of any innovation, and making sure services are useful, accessible and could improve people’s lives. That’s why, before we get carried away with the many potential benefits and uses of AI, we need to start from first principles and consider what makes us trust (or distrust) a new technology.

Research shows that to build trust, it’s important to think about five connected concepts:  

Reliability

Robots are seen as more trustworthy if they are reliable and present. In one study, a virtual and physical AI model were tested together to understand which was deemed more reliable when presenting the same information. The study found that physical robots are considered more intelligent than virtual systems such as chatbots, even when sharing the same information. [1]

Transparency

Seeing how technology works can help to build trust. Research shows that explaining how AI-based processes work can help to improve trust in their use, but only for simple procedures. One military wargame example showed that experienced staff developed trust when they could understand how AI-based decisions were being made, and were able to interrogate it. [2]

Personality

Appealing to the user and their unique needs can help to build trust, so tailoring more unique, personalised information can produce greater trust in a system. However, too much ‘personality’ can have a negative effect on trust in the tech, particularly for virtual AI. We like to see our own characteristics reflected. One experiment with virtual AI showed that mirroring different AI personalities (e.g., extrovert vs introvert) elicited positive trust outcomes when they matched the characteristics of the user [3], and these personalised responses were considered more persuasive.

Presence

AI systems with a physical presence, for example a robot based on an AI model, tend to garner higher levels of trust than virtual AI systems, like chatbots. Human-like physical characteristics can build trust, but it can be a hard balance to strike – too similar and they can create feelings of unease. [4]

Demonstrating trust is key, so simple design changes that take into account these principles can help to build valued services. It could mean creating a physical presence, such as a robot assistant instead of a digital chatbot, or simply ensuring that new tools and services are completely reliable before they hit the shelves.

Building trust will be an important outcome for technology developers, as well as those looking to use their services, like local authorities and developers. Whether designing a chatbot service to make customer services more efficient, or trialling a sophisticated self-driving system, evidence shows that it isn’t enough to simply ask people to trust you. Instead, it’s important to demonstrate that you can be trusted – because even in the fast-paced world of AI, trust can only be earned.

To find out how we have been exploring trust in the context of smart city tech and AI, take a look at some of our research into public attitudes to self-driving technology.


[1] Bainbridge et al, 2011. [Bainbridge, W.A., Hart, J.W., Kim, E.S., & Scassellati, B. (2011). The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics, 3(1):41–52.]

[2] Fan et al, 2008. [Fan, X., Oh, S., McNeese, M., Yen, J., Cuevas, H., Strater, L., & Endsley, M.R. (2008). The influence of agent reliability on trust in human-agent collaboration. ECCE’08: Proceedings of the 15th European Conference on Cognitive Ergonomics: The Ergonomics of Cool Interaction, ACM International Conference Proceeding Series, vol. 369:1–8.]

[3] Andrews, 2012. [Andrews, P.Y. (2012). System personality and persuasion in human-computer dialogue. ACM Transactions on Interactive Intelligent Systems, 2(2):1–27.]

[4] Chattaraman et al, 2014. [Chattaraman, V., Kwon, W.-S., Gilbert, J.E., & Li, Y. (2014). Virtual shopping agents. Journal of Research in Interactive Marketing, 8(2):144–162.]

DG Cities in Westminster: presenting self-driving research to the Transport Select Committee

Last week, our Head of Research & Service Design, Ed Houghton was invited to give evidence to Parliament’s Transport Select Committee on self-driving vehicles. There are significant consumer barriers to be overcome to shift gear from car ownership to usership, let alone to new self-driving models. For our latest blog, Ed suggests we consider what driving means to people – and how evolving trends and technologies could shape this in the future, but only if the public are at the heart of developing any new service.

Last week, I had the honour of presenting DG Cities’ research to the UK Parliament Transport Select Committee’s investigation into self-driving vehicles. Over several years, the team at DG Cities has been working hard to help government and industry better understand how self-driving services can be designed around the needs of diverse communities, and exploring if and how acceptance of self-driving services can be made more likely. Our work has looked closely at the major barriers facing the technology, and last week we were able to share and explain in more detail some of the key findings from our evidence submission in 2022. In a field of significant hype and excitement, our research has looked to ground technology in the realities of people’s daily lives, and to make what is often the preserve of sci-fi films more tangible.

In a field of significant hype and excitement, our research has looked to ground technology in the realities of people’s daily lives, and to make what is often the preserve of sci-fi films more tangible.
— Ed Houghton

Self-driving services are expected to deliver many benefits, including safer roads and a shift towards shared, more sustainable mobility. But direct engagement with consumers over several years has shown us there are several significant barriers that are likely to slow the pace of the technology’s adoption. Safety, trust, and accessibility all top the list of concerns for consumers – only a quarter (26.8%) say they would use a self-driving car tomorrow if they could. Consumers don’t yet see self-driving as part of their mobility.

Why is this important?

We know that mobility, and driving in particular, is an important aspect of many people’s lives. It might be the way they get to work – for some, it might be needed to unlock opportunities for better paid work. For those in the countryside with little or no access to buses or other public transport, driving might be the only way a family can get their children to school.

Not only is driving often economically beneficial, whether we like it or not, it also forms a large part of many people’s identity. Whether it’s the freedom that comes from learning to drive at 17, buying a car to accommodate a growing family, or losing the opportunity to drive due to ailing health, the act of driving, and the feelings related to it, can be associated with key stages in our lives. The 20th century saw the UK’s cities and wider society become increasingly car-centric, and research has shown that people place significant financial and non-financial value on their cars.[1] This makes moving from human-driven to AI-driven vehicles, and shifting away from single or even multiple car ownership, incredibly challenging to advance.

Ed Houghton, Head of Research & Service Design

Where do we go from here?

This sets the stage for a difficult, but potentially transformative, transition for communities. We already see the concept of vehicle usership increasing in popularity, as young urban dwellers change their spending habits and look to micro-mobility and public transport to get about, with the occasional option of renting a shared vehicle. Car ownership declined for the second year running in 2022 – the first time this has happened in over a century.[2] And many expect this to continue.

The systems in which self-driving technologies are being deployed are complex. Infrastructure, regulation, public attitudes, insurance, data security and connectivity – all these components must be managed and maintained to enable acceptance and safe use of self-driving technology.

To overcome the many challenges ahead for industry and government, we will need to continue deep engagement with the public. We need to fully understand their perspective, and then design technologies and services around their needs. For a technology that plays such a central role in people’s lives, it would be hugely negative to not take account of the public’s ideas in developing new services. Failure to do so won’t just limit the chances of services being successfully adopted – at stake are also the many potential benefits of autonomy, which would remain unrealised.

Watch DG Cities’ formal submission to the UK Parliament Transport Select Committee.




[1] Haustein, S. (2021). The hidden value of car ownership. Nat Sustain 4, 752–753.

[2] SMMT (2022) UK Motorparc Data 2021. Accessed online: https://media.smmt.co.uk/uk-motorparc-data-2021/

To develop safe, trustworthy self-driving services, we need to bring people on board

This week, DG Cities is at the IoT Solutions World Congress in Barcelona. Head of Research and Service Design, Ed Houghton will be presenting lessons from our work on projects such as Endeavour and DRisk. For more than five years, we have been helping people imagine the ‘self’ in self-driving and envision what a future service could look like; and in doing so, deepening our understanding of the public’s needs and concerns. We have become a UK leader in attitudes to self-driving technology, and we’re excited to be in beautiful Barcelona to share our insights, as Ed explains...

Barcelona; Torre de Collserola on the Tibidabo hill

Imagine, for a second, what you would do if your taxi turned up at your doorstep with no driver. It still turned up – in fact, it turned up on time, doors open, inviting you to get in and be driven to your destination. Would you get in, sit back and relax, or would you ride on the edge of your seat, anxious for the journey to end?

This experience isn’t necessarily too far in the future. Self-driving vehicle trials have been underway across the UK for several years, with projects exploring how to ensure the technology is ready to offer this type of service. Just last week, the CAVForth project in Scotland trialled a self-driving bus on a real-life route across the Forth Bridge. Not quite a taxi, and there are still safety personnel involved, but the tech being tested is always improving.

The barriers to successful delivery aren’t necessarily tech-based, however (though there are still plenty of tech barriers to overcome!). In fact, we know from our research that there are key barriers among consumers, in their attitudes to self-driving services, and to AI more generally. Systems which support, or even remove, the need for a person to make decisions are often viewed with considerable mistrust. For example, DG Cities research on trust in AI-based self-driving services shows that almost a quarter (23.6%) of the public are yet to be convinced, neither trusting nor mistrusting self-driving services.

That is why, at DG Cities, we focus on bringing the public into the process of designing and developing AI-based transport. To build trust, we must incorporate diverse perspectives and needs into the service design approach. We cannot design new services without first understanding what the people that will ultimately benefit from them need and want.

We also need to make sure that we go to the public to meet them where they are. This means physically (and digitally) convening discussions in ways which are inclusive and accessible, and making sure that participants are able to fully engage with discussions about their visions for the future of AI-based mobility, such as self-driving cars.

We need to make sure that we go to the public to meet them where they are. This means physically (and digitally) convening discussions in ways which are inclusive and accessible.
— Ed Houghton

A good example of this in action is the delivery of the national roadshow for project D-RISK, which was designed to develop a driving test for AI-based self-driving services. By crowdsourcing the most unexpected, bizarre and unpredictable driving experiences, the public can help to train vehicles to deal with the most complex and unique scenarios (‘edge cases’). As part of our roadshow, we travelled across the UK – to museums, universities, and outdoor markets – to meet people and hear their stories. As well as contributing edge cases to our research, this gave us a unique perspective on wider attitudes to emerging technologies.

What these projects brought into view is that a key issue with current approaches to AI tech, and IoT innovation in general, is that there isn’t enough dialogue and discussion with the public about what they want or need. If we are to build services that are trusted, valued and most importantly adopted, we have to get much better at listening, learning and building them with the people we hope will use them in the future.

Ed Houghton is speaking at the IOT Solutions World Congress on January 31st and February 1st 2023.

Do you know how clever a smart electric vehicle charger can be? Launch of our BEIS-funded smart charging research

Some of the barriers that deter consumers from making the switch from petrol and diesel to electric vehicles include anxiety around charger availability and range. There are a number of technologies available to mitigate these concerns. But are the public aware of the smart capabilities of home charging, for example? To better understand perceptions of EV charging, the Department for Business, Energy and Industrial Strategy (BEIS) asked DG Cities to conduct a national survey. The results shed light not only on consumer attitudes, but also on areas where improvements are needed to accelerate EV adoption. On the day of the report’s publication, Head of Research and Service Design, Ed Houghton introduces the findings.

Woman plugging her car into smart EV charger on driveway next to garage door

Exploring how consumers understand EV smart charging

Transport is the largest contributor to UK domestic greenhouse gas (GHG) emissions, responsible for 27% in 2019. As such, it is an area requiring a rapid transition to low- and zero-carbon alternatives. [1] Battery electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs) have become an increasing presence on our roads – one in ten new vehicles purchased in 2022 were EVs, with sales increasing by 40%. [2] This trend has been growing year on year, as more vehicles enter the market offering more choice and more competitive price points.

A big challenge, however, is how to ensure the UK has the necessary charging infrastructure to support the transition to electric vehicles. Range anxiety is a known concern among drivers, and for a long time, worries about when and how to charge have prevented people from switching to EVs. One of the benefits of EVs over internal combustion engine (ICE) vehicles is the relative ease with which they can be charged from home – if consumers can afford a home chargepoint and have somewhere to install one. As sales of EVs have increased, sales and installations of home chargepoints have lagged behind, even as new smart functionalities have come to the market. Smart charging benefits consumers by enabling them to manage how and when to home-charge their EV and to control charging remotely.
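The core logic of smart charging can be sketched very simply: defer charging to a cheap, low-demand window unless the driver overrides it. The sketch below is purely illustrative – the off-peak times are hypothetical and real tariffs and chargepoint APIs vary by supplier and manufacturer.

```python
from datetime import time

# Hypothetical off-peak tariff window; real windows vary by supplier.
OFF_PEAK_START = time(0, 30)
OFF_PEAK_END = time(4, 30)

def should_charge(now: time, override: bool = False) -> bool:
    """Charge during the off-peak window, or immediately if the driver overrides."""
    if override:
        return True  # driver needs the car soon, so charge now regardless of price
    return OFF_PEAK_START <= now <= OFF_PEAK_END

print(should_charge(time(1, 0)))                   # True: within off-peak window
print(should_charge(time(18, 0)))                  # False: evening peak, wait
print(should_charge(time(18, 0), override=True))   # True: manual override
```

The `override` path matters for the survey findings discussed below: every override shifts demand back towards peak hours, which is why frequent overriding can have grid-level consequences.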

But to what extent are consumers aware of the value of smart chargepoints? And how clear are their different functions to consumers, given the number of chargepoints on the market? In 2022, we were asked to help the Government answer these questions. The DG Cities team undertook a national survey of EV consumers to find out more.

The first national survey of EV smart chargepoint attitudes and consumer behaviours

DG Cities was excited to be asked by the Department for Business, Energy and Industrial Strategy to explore this area in detail through a national survey of EV consumers, and to provide a useful baseline for new regulations designed to support improvements in the smart chargepoint market.

Working with BEIS, we delivered a literature review of recently published data and insights to develop a survey that captured views and behaviours. We partnered with YouGov to recruit a national sample of EV and hybrid vehicle owners to take the survey. We developed questions that investigated various aspects of vehicle charging – including preferences over location, charging time and chargepoint functions – and we asked respondents to share their views on, and interest in, purchasing a smart chargepoint in the future.

Findings

Our survey was completed by 1,002 electric vehicle and plug-in hybrid electric vehicle owners in March 2022. Some of the key findings were:

  • Most battery EV owners have a dedicated chargepoint at home: Two-thirds (66%) of battery-electric car drivers have a dedicated chargepoint at home. However, the majority (66%) of respondents with battery-electric vans have a 3-pin cable as their main charger, which doesn’t allow charge scheduling.

  • Smart functions are increasingly prevalent: The top three functions are charge scheduling (41%), connecting to the vehicle’s on-board computer (39%) and internet connectivity (36%). This indicates that many of the chargepoints EV drivers own have at least some degree of ‘smartness’.

  • For those who schedule their smart charging, most have a positive experience: The majority agree that they can view their current charging schedule with ease (67%); change the charging schedule with ease (63%) and monitor the cost of their charging with ease (77%).

  • Overriding schedules is common, which may have an impact on the grid at times of high load: A quarter (26%) of participants never override their charging schedule, but over half override their schedule up to 50% of the time. A few respondents override every time they charge, suggesting that their scheduled charging may not be set up to suit their needs.

  • Workplace charging is still uncommon: A third (30%) of participants use workplace charging. Over 60% say that their workplace does not have the facilities, that they do not travel to a physical workplace, or that they choose not to drive to work.

Growing smart charging in the future

Our work for BEIS highlights that there is growing interest amongst consumers in smart functionalities, particularly the ability to schedule charging. There are, however, some barriers to overcome for consumers, particularly the complexity of the products on offer and the standardisation of chargepoint technologies.

Data and insights of this type are important for industry and policymakers to understand progress towards net zero. DG Cities is excited to have partnered on this work, and we’re looking forward to seeing how industry, and adoption by consumers, evolves in the future.

To read the new research, click here. To find out more about our research into consumer behaviour when it comes to EVs and some of the projects we have been working on in this field, download our Electric Vehicles Community Insights Report, 2022.






[1] Department for Transport (2021) Decarbonising transport: a better, greener Britain.

[2] Society of Motor Manufacturers and Traders (2023) Annual vehicle sales figures.