DG Cities - Blog


The Digital Exclusion Data Gap

For our latest short read on digital inclusion – or exclusion – our Behavioural Scientist, Emily King explains why understanding people’s barriers to accessing online services is vital to delivering inclusive public engagement, whether that is around council services or what happens in their neighbourhood. Highlighting our work with the Royal Borough of Greenwich, she explains the need for councils to get specific: to understand the area-wide picture, but also behaviours at an individual and community level.

Image: Unsplash/Centre for Ageing Better

To ensure that design is inclusive and human-centred, local government and other public sector organisations need to closely involve the public in shaping services and innovations. At DG Cities, we work on projects doing just that – engaging the public with new innovations, services, and initiatives in their local area, to ensure these are always designed with their needs in mind.

However, in a world that is increasingly moving online, from ordering a prescription to commenting on a planning application, it is important to consider the methods used to engage the public, to ensure that those who do not regularly use the internet are not excluded from having their voices heard.

“There is no one-size-fits-all approach to tackling digital exclusion. It’s a complex issue that demands a clear focus and good data if decision-makers are going to create the positive change that will improve the lives of those most at risk of being left behind.”
— Ed Houghton, Director of Research & Insights, DG Cities
 

Who won’t see your online survey?

According to Lloyds Bank’s 2023 Consumer Digital Index, 2.1 million people in the UK are offline, and c.4.7 million people cannot connect to WiFi. These individuals are unlikely to answer an online survey or attend a virtual interview, both increasingly common methods for conducting public-facing research since the Covid-19 pandemic and the popularisation of online collaboration tools. Inclusive research design involves applying varied methodologies that can capture the views of people who don’t have the skills or resources to access the internet.

It is also difficult to identify digitally excluded individuals in the first place – whether so that organisations know who may need to receive information in a non-digital format, or to recruit them to take part in research. Recent work by LOTI has sought to map the extent of digital exclusion across London, which provides useful information about key areas of exclusion and types of digitally excluded groups. However, more work is necessary to understand digital exclusion on a more granular level: to uncover which individuals or areas in local communities may require further support and options for engagement, and what these options might look like for different individuals.
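As a rough illustration of what this kind of granular mapping might involve, here is a minimal Python sketch that combines area-level indicators into a simple composite exclusion index. All of the indicators, figures and weights are hypothetical – they are not drawn from LOTI’s methodology or from our Greenwich work.

import pandas as pd

# Hypothetical ward-level indicators (illustrative values, not real data).
wards = pd.DataFrame({
    "ward": ["Ward A", "Ward B", "Ward C"],
    "pct_no_broadband": [18.0, 12.0, 7.0],   # % households with no fixed connection
    "pct_over_75": [6.5, 9.0, 11.0],         # older age is a known risk factor
    "deprivation_pct": [85.0, 60.0, 40.0],   # percentile rank; higher = more deprived
})

# Normalise each indicator to a 0-1 range so they can be combined.
indicators = ["pct_no_broadband", "pct_over_75", "deprivation_pct"]
norm = (wards[indicators] - wards[indicators].min()) / (
    wards[indicators].max() - wards[indicators].min()
)

# Equal-weighted composite index; real work would validate the weights
# against survey data on who actually uses the internet, and why not.
wards["exclusion_index"] = norm.mean(axis=1)
print(wards.sort_values("exclusion_index", ascending=False))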

Tackling digital exclusion with local government

DG Cities is currently working with the Royal Borough of Greenwich to map digital exclusion on two Greenwich estates, to better understand which residents are digitally excluded and why.

There are many different aspects to tackling digital exclusion, from connectivity, access and education to behaviours and respecting personal preferences. Although we are an increasingly connected society, only 27% of UK households can access modern, gigabit-capable broadband. As the rollout of new connectivity technology is fragmented and delivered by multiple providers, there is a risk that some get left behind and can’t take advantage of new opportunities. Early last year saw the launch of Digital Greenwich Connect, a joint venture enabled by DG Cities between the Royal Borough of Greenwich and tech provider ITS, to bring ultrafast broadband to housing estates in the borough. Our research into the behaviours and factors influencing people’s ability to access online services is key to tackling the issue holistically and helping communities make the most of this infrastructure investment.

This work will help Greenwich to understand where there might be gaps in their ability to communicate with residents via online channels. As well as ensuring these residents are receiving adequate support in their daily lives more broadly, the work will help to ensure that digitally excluded residents are not left out of opportunities for engagement and consultation.


We know that great initiatives are underway in this area, and we think it is hugely important that more local authorities and organisations work to involve the voices of digitally excluded residents. As a multidisciplinary team, we’re able to look at the technical aspects of the issue – the availability of infrastructure and the role of data in identifying groups – as well as behavioural science and engagement. If you would like to talk to us about how we might support you in addressing the issue of digital exclusion in your area, get in touch.

Technological innovation with human values

How do we ensure innovations in transport, for example, or public services, are not only easy to use but also meet real human needs? Can they reflect fundamental societal principles like safety, fairness, and community? Following up on some great discussions at Tech Week in London last week, our Behavioural Scientist, Emily King explores the science behind value-led development at a local as well as a global scale, and how an understanding of these drivers can ensure that innovations like self-driving cars are responsibly designed and deployed.

Ethical Roads workshop at SMLL

One of the last bills to make it through parliament before the election was the UK’s Automated Vehicles Bill – a world-first piece of legislation designed to ensure AI innovations on our roads are safe and deployed responsibly by industry. The AV Act has established the legal framework, but for self-driving to be accepted, legal foundations aren’t enough. New AI-based technologies need sound ethical foundations too.

At DG Cities, we spend a lot of time thinking about how to develop technologies that work for individuals and communities. We use principles and approaches from the fields of human-centred design and behavioural science to understand how to develop and deploy technologies that meet real human needs.

In this respect, self-driving is an interesting area of innovation, as it is an industry in which putting people first is a genuine challenge. Our work often centres on the concepts of trust and acceptance of technology in different forms. Our ongoing DeepSafe work, for example, with a commercial and academic consortium in the self-driving industry, seeks to better understand the factors driving acceptance of self-driving vehicles, and what is important to build trust in them.

The technology acceptance model highlights two important factors that drive acceptance – usefulness and ease of use:

  • Is it useful? Does the technology help to meet specific needs?

  • Is it easy to use?  

Human-centred design focuses largely on the second of these factors – how easy or attractive technologies are to use – by developing technologies which take the user experience as their starting point. However, what seems to be less at the heart of discussions around human-centred design of technological innovations is their actual usefulness – how far they will meet real human needs, and particularly how they align with broader societal values.
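To make these two factors concrete, here is a minimal Python sketch of how survey responses might be scored against the model’s two constructs. The item wording, scale and equal weighting are our own illustrative assumptions – in practice, the strength of each relationship is estimated statistically rather than fixed by hand.

from statistics import mean

# Hypothetical 1-5 Likert responses to TAM-style survey items.
# Item wording is illustrative, not from a validated instrument.
responses = {
    "usefulness": [4, 5, 3],    # e.g. "This service would help me get around"
    "ease_of_use": [2, 3, 2],   # e.g. "I would find this service easy to use"
}

usefulness = mean(responses["usefulness"])
ease = mean(responses["ease_of_use"])

# In the classic model, both constructs feed behavioural intention
# (and ease of use also feeds usefulness); equal weights are a placeholder.
intention = 0.5 * usefulness + 0.5 * ease
print(f"usefulness={usefulness:.2f}, ease={ease:.2f}, intention={intention:.2f}")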

How do we start to bring values into the design of self-driving services?

One way to make the process of ensuring acceptance of technological innovations more seamless would be for those working in technological innovation to root the process in societal values. The human-centred design process begins with empathy for the potential user of a product – this should include an empathetic understanding of what users value the most.

But first, how do we define values? Essentially, they are our internal standards of what is important. Our values inform our attitudes, beliefs and behaviours. Whilst individuals hold different values, cross-cultural analysis[1] suggests that some types of values are consistent across most individuals and societies.

According to this research, the most strongly held values worldwide include:

  • Benevolence: ‘preserving and enhancing the welfare of those with whom one is in frequent personal contact’

  • Universalism: ‘understanding, appreciation, tolerance, and protection for the welfare of all people and for nature’

  • Self-direction: ‘independent thought and action – choosing, creating, exploring’.

Technological innovations may align with some widely-held values more than others. For example, self-driving vehicles address societal needs such as improved road safety and easier travel through reduced congestion. These benefits largely come from the greater connectedness of vehicles, which provides additional information to enable safer driving decisions.

However, the autonomous element of the vehicles also threatens ‘human welfare’, for example by reducing the job security of bus and taxi drivers, or reducing connectedness and community by removing any opportunity for human interaction between passengers taking a taxi journey. Thus, this innovation is not fully aligned with the core values of benevolence and universalism.

Our Ethical Roads project, delivered in collaboration with Reed Mobility, identified several ‘ethical red lines’ for self-driving vehicles, which align with the values of benevolence and universalism, such as ensuring that vehicles improve road safety and that all road users are protected equally. This highlights how values underpin requirements for technologies to be accepted.

For technological innovations to be truly human-centred, it is crucial to develop a coherent sense of which values are most important to communities, and use these as a basis for innovation, to ensure that technologies reflect the true needs and values of society.

What could this look like in practice?

At DG Cities, we look at technological innovation at a range of different scales, from very local issues facing a particular community (e.g. the best method for using sensors to reduce damp and mould on specific estates) through to issues at a national or global scale (e.g. AI assurance).  

On a local community scale, values-centred design could involve identifying the specific priority needs and values communities hold before embarking on a project or introducing a new technological innovation. Research into attitudes and priorities is important here – what is it that matters most to people, and what innovations might be possible to truly improve their lives?

Innovation should also be based on the values of a specific community. Measures such as the Schwartz Value Survey or the Portrait Values Questionnaire could be used in research instruments to identify which values are of greatest importance to individuals and communities, and technological innovations should be aligned with these.
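As an illustration of how such measures can feed into research instruments, here is a minimal Python sketch of scoring a shortened PVQ-style questionnaire. The item-to-value grouping and the ratings are hypothetical rather than the published scoring key, but the centring step reflects Schwartz’s recommendation to adjust for individual differences in scale use.

from statistics import mean

# Hypothetical 1-6 ratings from one respondent, grouped by value
# (the grouping is illustrative, not the official PVQ item key).
ratings = {
    "benevolence":    [5, 6],
    "universalism":   [6, 5],
    "self_direction": [4, 5],
    "power":          [2, 1],
}

# Centre each value score on the respondent's mean rating across all
# items (MRAT), so that scale-use bias doesn't masquerade as priorities.
all_items = [r for items in ratings.values() for r in items]
mrat = mean(all_items)

centred = {value: mean(items) - mrat for value, items in ratings.items()}
for value, score in sorted(centred.items(), key=lambda kv: -kv[1]):
    print(f"{value:>14}: {score:+.2f}")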

Starting a project with a problem or goal which has been identified or defined by communities helps to bring a sense of ownership to new innovations, and involves communities throughout the whole process, rather than seeking feedback on a pre-determined idea.

At a global level, technological innovation that is truly human-centred should be aligned with the values of the global majority. This means that innovations in AI should not only reflect the values of demographics like tech bros or wealthy white Westerners, but those from around the world. According to Schwartz’s framework, this means ensuring innovations improve, or at the very least do not reduce, the overall welfare of the global population and nature; and that they enhance rather than undermine independent thought and creativity.

It is important for innovation to begin with research about people, communities, and their values. For innovations in AI which have a global reach and impact, there is a need for behavioural and design research to ensure innovations reflect the priorities of the rest of the world.

Meanwhile, local organisations should focus on establishing the values and priorities of local communities as a method for identifying where to innovate. Methodologies such as citizens’ assemblies or deliberative dialogue research, which asks communities across the globe to design their ideal futures, could be vital in taking the next step toward technological design centred on human values.

If you’d like to learn more about our behavioural innovation approach, you can read more here – or get in touch!



[1] Schwartz, S. H. (2012). An Overview of the Schwartz Theory of Basic Values. Online Readings in Psychology and Culture, 2(1).

No average thinking: bringing different perspectives into the development of self-driving vehicles

Last week, the DG Cities team was at Cenex-LCV, the annual gathering for those working in the CAM (Connected Automated Mobility) industry. For our Behavioural Scientist, Emily King, this year was her first time at the event – we asked her to write a little about her impressions of the self-driving vehicle sector, and how it relates to our latest project, DeepSafe…

Last week, I attended the annual Cenex-LCV conference, a two-day event hosted at the Millbrook testing ground near Milton Keynes and attended by a wide range of organisations driving forward innovation in transport, from electric vehicles to automated mobility. There was plenty to engage with, from virtual reality simulations and driving games to vehicle test-drives, and a range of talks on offer from key stakeholders in the sector.

As DG Cities, along with a consortium of partners, embarks on the DeepSafe project, which aims to increase the safety of connected autonomous vehicles (CAVs), my main aim for the event was to learn more about current issues in CAV safety and public engagement.

An important component of ensuring CAVs are as safe as they can be is encouraging diversity in the perspectives that are considered when developing them. Cenex was a microcosm of the autonomous vehicles world, and the largely white, middle-class, male attendance suggested that the sector may be limiting its thinking about safety through a lack of diversity.

Safety in the automotive sector has historically centred on the needs of “the average man”. For example, until 2015, safety tests such as the seatbelt test were performed on 50th-percentile male crash test dummies, leaving dangerous data gaps on the impact of crashes on those with female anatomy. Further, findings from a 2021 study that analysed ten years of personal injury collision data from Great Britain show that pedestrians of non-white ethnicity and individuals living in deprived areas are more likely to be injured in a collision on the roads.

So, the crucial question is: how can the CAV sector prevent similar biases in safety processes for autonomous vehicles? The answer seems to lie in involving a wide range of potential users throughout the development of these vehicles.

There are different needs and experiences to be considered for different groups of transport users. Existing research suggests that safety perceptions can differ based on factors such as gender, for example – recent CCAV trials demonstrated that women tended to have higher ‘focus’ and ‘stress’ levels initially when trialling self-driving vehicles compared to men.

It is particularly important to consider the needs of those expected to benefit most from CAV technologies – for example, people with reduced mobility who may currently have limited transport options available to them, as well as those from marginalised groups who are regularly overlooked in service design and who currently face significant barriers to accessing transport.

As well as seeking diversity in user perspectives, it’s also vital to encourage a more diverse workforce in the sector – this is crucial amongst those making key decisions on the future of CAVs. Women are currently under-represented within the automotive sector at all levels (Automotive Council UK, 2022), as well as more widely in the STEM industries (Engineering UK, 2022). Showcasing the diverse applications of careers in the CAV sector, including their relevance to topics like climate change, a more equitable society, and safety, could be crucial in inspiring those from outside the sector to explore this as a career option.

As the CAV industry undergoes significant transformation, ensuring safety for all requires us to welcome a broader range of perspectives. By involving a more diverse group of users and professionals, we can create a safer and more inclusive future for autonomous vehicles.

 

A behavioural science perspective on consumer barriers to self-driving tech

Last week, we welcomed our new Behavioural Scientist, Emily King to the team. No sooner had she said hello than she was off downriver to Woolwich to a workshop Ed Houghton was chairing at the Smart Mobility Living Lab. The subject was consumer barriers to the adoption of CAVs (Connected and Autonomous Vehicles). We’ll be hearing a little more about Emily’s background and experience so far in another piece soon, but first, she breaks down the different factors at play in the application of the COM-B behavioural model to a self-driving future…

In my first week as the new Behavioural Scientist at DG Cities, I was fortunate to attend an event on the consumer barriers to commercialisation of connected autonomous vehicles (CAVs) at the Smart Mobility Living Lab (SMLL). The event was attended by a range of industry professionals, researchers, and policymakers and explored the user perspective of self-driving technologies.

The opening presentation for the event highlighted that public trust and acceptance of self-driving technologies need to be in place before CAVs can be commercialised. Public acceptance of CAVs is currently low, as evidenced by findings from Project Endeavour that only around a quarter of the UK public (27%) would be comfortable using autonomous vehicles tomorrow if it were possible to do so.

Commercialisation is a behavioural challenge

This indicates that commercialisation of CAVs is a primarily behavioural challenge: how can people be encouraged to accept and ultimately to use self-driving technologies? It is clear from the discussions at the event that behavioural science has a crucial role to play in shaping how we communicate with the public about CAVs, and how to design self-driving services in a way that will be accepted by the public.

A key stage in any behavioural science research project is to identify the specific barriers and drivers to the behaviour of interest. In this instance, identifying the barriers to using CAVs amongst different potential user groups is the first step in understanding why this hesitancy to use autonomous vehicles exists. This provides a useful starting point to exploring how policymakers and industry can encourage engagement with this emerging technology and ensure that it works for society.  

Discussions at the SMLL event shed light on some potential barriers to consumer adoption of connected autonomous vehicles, which can be summarised through the lens of the COM-B model.

A recap of the COM-B model

The COM-B model is a well-established behaviour change framework which suggests that for an individual’s motivation to engage in a behaviour to translate into actual behaviour change, they need to have both the capability and the opportunity to engage in the behaviour. [1]

Examining the potential underlying capability, opportunity and motivational factors can help to highlight how best to build perceptions of safety and trust, to achieve public acceptance and the opportunity for commercialisation of CAVs.

Diagram showing COM-B model of Capability, Motivation, Opportunity linked to Behaviour
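To show how the framework can structure analysis in practice, here is a minimal Python sketch in which barriers raised by participants are coded against the three COM-B components and tallied. The barrier statements and their codings are invented for illustration – they are not findings from the event.

from dataclasses import dataclass
from enum import Enum

class COMB(Enum):
    CAPABILITY = "capability"
    OPPORTUNITY = "opportunity"
    MOTIVATION = "motivation"

@dataclass
class Barrier:
    statement: str    # a barrier raised by a participant
    component: COMB   # the COM-B component a researcher codes it to

barriers = [
    Barrier("I don't understand how the vehicle makes decisions", COMB.CAPABILITY),
    Barrier("There are no self-driving services where I live", COMB.OPPORTUNITY),
    Barrier("I'd feel anxious handing control to a machine", COMB.MOTIVATION),
]

# Tallying coded barriers shows where interventions should focus first.
counts = {component: 0 for component in COMB}
for barrier in barriers:
    counts[barrier.component] += 1
print({component.value: n for component, n in counts.items()})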

Capability factors

Capability means that an individual has the knowledge, skills, and abilities required to engage in a behaviour. 

The SMLL event highlighted a need to continue educating the public about autonomous vehicles, including building knowledge of how the technology works and what the potential benefits of using self-driving vehicles might look like.

Educating about how the technology works and the specifics of existing safety measures is important to help build perceptions of safety and trust in AVs, which in turn can increase acceptance. One successful method for educating people about AV technology is conducting trials in person or via virtual reality, which allow individuals to experience riding in a self-driving vehicle first-hand. Discussions at the SMLL event highlighted positive examples of trial participants perceiving AV technology as much safer once they had the opportunity to experience it for themselves.

There are numerous potential benefits of AVs, from improving mobility options for disabled people through to decarbonising the transport system. For autonomous vehicles to be accepted, it is vital that the public are also clearly educated on these specific benefits and how using CAVs can help to achieve them. Individuals tend to (either consciously or unconsciously) weigh up the potential costs and benefits before deciding how to behave, meaning that, for people to decide to use CAVs, any perceived costs – such as reduced feelings of safety or anxiety about AI – need to be outweighed by the perceived benefits.

“Individuals tend to (either consciously or unconsciously) weigh up the potential costs and benefits before deciding how to behave, meaning that, for people to decide to use CAVs, any perceived costs – such as reduced feelings of safety or anxiety about AI – need to be outweighed by the perceived benefits.”

Opportunity factors

Opportunity factors are the external factors which make a behaviour possible. They encompass all aspects of CAV technology and the service offer which might influence whether people are willing or able to use them.

The workshop included discussions about the specific use cases and opportunities within the user journey where CAVs could play a useful role. For example, introducing self-driving services in rural areas where there are currently limited transport options could provide more benefit than in major city centres.

Self-driving services also need to be designed so that they are usable by the groups which need them most. As those with disabilities and reduced mobility are a key group expected to benefit from self-driving services, it is vital that they are included in conversations, to ensure that CAV technologies meet their needs and that they have sufficient opportunity to use CAV services. If these groups are unable to access CAV services in the first place, then this potential benefit of the technology cannot be realised.

Motivation factors

Even when capability and opportunity factors are in place, this does not guarantee that people will be motivated to engage with CAVs.

Motivation is also dependent on factors such as values and emotional states, which can differ vastly between individuals and even within the same individual depending on their current circumstances.

Understanding these more subjective, emotional aspects of CAV acceptance was mentioned at the SMLL event as a necessity going forward. This is an area where behavioural science research can play a useful role. Existing research in the field suggests that acceptance of autonomous vehicles is influenced by an individual’s levels of innovativeness (a general willingness to try new things) and general anxiety about technology, as well as levels of hedonic motivation (valuing enjoyment and sensation seeking) and utilitarian motivation (valuing rationality and effectiveness). These findings point to some potential options for increasing consumers’ motivation to engage with AV technologies, which link to discussions at the SMLL event. [2]

Hedonic motivation was the greatest predictor of intentions to use AVs overall, suggesting that making vehicles fun to use could be a route to increasing adoption of the technology. Workshop discussions at the event highlighted some ideas for increasing the ‘fun’ element of AVs such as giving vehicles faces to ‘anthropomorphise’ them, or including customisable elements so that they could be personalised.

 
Hedonic motivation was the greatest predictor of intentions to use AVs overall, suggesting that making vehicles fun to use could be a route to increasing adoption of the technology.
— Emily King, Behavioural Scientist

Meanwhile, utilitarian motivation was found to be a predictor of intention to use AVs amongst innovative consumers only. This suggests it is important to educate more innovative consumers on the specific benefits of AV services. For those who are technologically anxious, there is a need to address broader concerns about AI before addressing specific concerns about CAVs.
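For readers curious how that kind of pattern is tested, here is a minimal Python sketch of a moderation analysis, in which utilitarian motivation predicts intention only for more innovative consumers. The data are simulated and the coefficients invented to mirror the pattern described above – they are not Keszey’s estimates.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate survey-style data purely for illustration.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "hedonic": rng.normal(0, 1, n),
    "utilitarian": rng.normal(0, 1, n),
    "innovativeness": rng.normal(0, 1, n),
})
df["intention"] = (
    0.5 * df["hedonic"]                               # strong overall effect
    + 0.3 * df["utilitarian"] * df["innovativeness"]  # effect depends on innovativeness
    + rng.normal(0, 1, n)
)

# The interaction term tests whether the effect of utilitarian
# motivation depends on a consumer's level of innovativeness.
model = smf.ols("intention ~ hedonic + utilitarian * innovativeness", data=df).fit()
print(model.summary().tables[1])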

It is important to note that these are hypotheses based on discussions from the event and existing research in this area. Much more extensive research is needed to identify the full range of behavioural barriers and drivers to build a full understanding of how to support acceptance and use of CAVs.

 

Read more of our research into self-driving services and consumer trends.





[1] West, R., & Michie, S. (2020). A brief introduction to the COM-B Model of behaviour and the PRIME Theory of motivation [v1]. Qeios.

[2] Keszey, T. (2020). Behavioural intention to use autonomous vehicles: Systematic review and empirical extension. Transportation Research Part C: Emerging Technologies, 119, 102732.