If AI is widening the digital divide, what can be done to stop it?

AI’s increasing presence poses a real danger of making an already sizable digital literacy divide worse. Jon Rimmer, CXO at Mercator Digital, says governments and organisations have a responsibility to bridge this gap, explaining both why and how.

Jon Rimmer

New technology is designed – or at least is meant to be designed – to make life easier. In the UK, for example, people can now renew passports online, file taxes through HMRC’s digital service, receive emergency alerts on their phones, and even attend virtual Jobcentre appointments – all in the name of saving time and improving access.

However, for those with low digital literacy, advancements in technology can in fact do the exact opposite, further excluding people who are already marginalised.

According to recent research, 8.5 million people in the UK lack basic digital skills, a large proportion of whom are living in poverty. In fact, 3.7 million families fall below the Minimum Digital Living Standard, facing barriers such as limited internet connectivity, outdated devices, and insufficient digital literacy support.

Similarly, both older people and those with physical and mental disabilities often encounter accessibility issues that make digital tools frustrating or even unusable. Again, this can be down to equipment and connectivity issues, but a lack of confidence or skills also comes into play. A survey of people with severe mental illness, for example, found that 42% lacked basic digital skills, such as changing passwords or connecting to Wi-Fi.

It’s already widely acknowledged that digital exclusion disproportionately affects these groups, yet even beyond the barriers of access and affordability, 21% of people say they feel left behind by technology.

These are all high figures that, with the advent of AI, are at risk of rising.

The impact of AI on digital exclusion

AI of course has the potential to drastically improve public services, healthcare, education, and employment. But, if not carefully designed and implemented, it also risks deepening digital exclusion.

For those already struggling to use digital systems, AI adds complexity to interactions. Chatbots and automated interfaces, for example, are becoming increasingly common in settings like healthcare and social services, where human interaction is often essential. These tools can confuse users with low digital literacy or those experiencing mental health challenges, creating yet another barrier between vulnerable individuals and the services they need.

Looking beyond usability, there’s also a deeper structural problem: the data used to train most AI models is inherently biased. These datasets are often pulled from the web, where information has historically been shaped by academic, technical, and hobbyist communities (think Western, white, middle-class, English-speaking men). As a result, marginalised groups are significantly underrepresented, leading AI systems to reflect and reinforce existing social inequalities, a problem that continued use only propagates further.

And this is not just theoretical. Take the COVID-19 pandemic, where the impact of algorithmic bias was plain to see: an automated grading system downgraded exam results for 39% of students, disproportionately affecting those from disadvantaged schools.

We’ve seen other examples in healthcare too, with recent research showing that AI systems being developed to diagnose skin cancer risk being less accurate for people with darker skin, simply because the training data comes predominantly from people with lighter skin. In this case, it’s a disparity with potentially life-threatening implications.
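To make the risk concrete, here is a minimal sketch of a disaggregated evaluation: measuring a model’s accuracy separately for each skin-tone group rather than relying on a single aggregate figure. The data, group labels, and numbers below are invented for illustration; they are not drawn from the research mentioned above.

```python
# Sketch of a disaggregated evaluation: per-group accuracy instead of a
# single aggregate score. All data here is invented for illustration.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return classification accuracy broken down by group membership."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: the "dark" group is underrepresented, mirroring the skewed
# training data described in the article.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
truth  = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0]
groups = ["light"] * 6 + ["dark"] * 4

print(f"aggregate: {sum(p == t for p, t in zip(preds, truth)) / len(truth):.0%}")
for group, acc in sorted(accuracy_by_group(preds, truth, groups).items()):
    print(f"{group}: {acc:.0%}")
# aggregate: 70%
# dark: 50%
# light: 83%
```

A 70% aggregate score hides the fact that the model is no better than a coin flip for the underrepresented group, which is exactly the kind of disparity that disaggregated reporting is meant to surface.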

Four strategies to bridge the digital divide

For me, governments and organisations have a responsibility to address both this bias and the widening digital divide. If they don’t, they risk a huge proportion of the public feeling annoyed, isolated and inadequate – and that’s at best. At worst, there’s a real risk that the needs of the UK’s most vulnerable people are not met, with significant consequences for education, employment, and health and wellbeing.

With this in mind, below are some practical strategies for ensuring AI-powered services are intuitive, inclusive, and adaptable to different needs:

  1. Embed inclusive research as a foundational step in service design: To ensure inclusion from the outset, robust user research is key. In the design of Government Digital Services, researchers have always held the responsibility of amplifying the voices of marginalised and digitally excluded groups, ensuring that the resulting services are effective and usable for all. Researchers also play a key role in the ethical and responsible use of AI, a role that must continue through every stage of service design. Prioritising researchers’ work here helps uncover the real-world challenges people face, not just assumptions based on the experiences of digital natives.

While there is already a well-established community of researchers across government, there is always more that can be done to share findings across departments, as insights often apply far more widely than the individual project to which a researcher is assigned.

We also need to continue validating digital services with representative users at every stage of development, using insights from government researchers to shape and test design decisions.

  2. Apply the MASTA framework to AI inclusion:
  • Motivation: If users don’t see how AI improves their daily lives or work, they’re less likely to develop the skills to use it. It’s therefore important to raise public awareness of how AI and data can be used safely and meaningfully. This education needs to be embedded early in schools and extended to older adults through the touchpoints they already use. The NHS, for example, is already doing a great job of this, showcasing the advantages of aggregated data.
  • Access: AI technologies often need reliable internet, modern devices, and supporting infrastructure to work at all. Without these, existing digital divides will only deepen. The Government must continue to fund or subsidise broadband rollout, and provide hubs where people can get access and support.
  • Security: Security is a big concern for many, and especially for those who lack the skills and knowledge to stay safe online. That’s why practical training on how to recognise and protect against AI-enabled and general digital scams is key. This guidance should be accessible and relevant to different age groups and communities.
  • Trust: If people don’t trust that AI is fair, unbiased, and secure, they simply won’t engage with it – so systems need to explain far more clearly how their data is sourced and used, in order to build that trust.
  • Anxiety: People need help to build confidence with anything new – without this, even well-designed AI tools risk being underused. So again, training and education to improve confidence whilst interacting with digital tools and services is key here. But it’s not just about people; systems and interfaces also need to do their part. Baking in appropriate reassurances at key moments can reduce cognitive overload and performance anxiety. Time and time again, I’ve seen technically confident users demonstrate impoverished skills under stress. Think of the panic that hits when filling out a tax return and wondering, “If I get this wrong, do I go to jail?” Thoughtful prompts, clear feedback, and supportive design cues can make all the difference.
  3. Tackle AI bias and break down silos through smarter collaboration: For governments to design services that are intuitive, inclusive, and adaptable to different needs, it’s time to tackle potential biases in AI head on: understanding where datasets are derived from and actively working to acknowledge, avoid, or counterbalance skewed inputs (a simple sketch of one counterbalancing technique follows this list). At the same time, we need to accelerate programmes that reduce silos across government departments, while bolstering security measures to ensure individual and business data is secure. This, of course, is far easier said than done. It’s key to recognise that, unlike start-ups, the government can’t always “move quickly and break things,” but closer alliances with smaller companies can help it quickly learn from their techniques and findings.
  4. Strengthen policy frameworks and funding: We don’t necessarily need brand-new initiatives; helpful ones already exist, but they suffer from a lack of attention and/or funding. Service Standard 5 (a UK government digital standard), for example, is already about inclusion, ensuring everyone can use digital services, including people with disabilities, low confidence, or no internet access. But it’s perhaps time to specifically call out AI in this standard, making clear that inclusion must extend to AI-driven services too.

Alternatively, I’d like to see a specific standard on AI and Data within the Government Digital Service Standards to make sure these technologies are designed and deployed in a way that doesn’t exclude vulnerable people. Existing initiatives, like Helen Milner’s ‘Good Things Foundation’, are already working to boost digital skills in underrepresented communities. They just need more support and funding to scale that work and to add a focus on AI resilience.
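To illustrate what “counterbalancing skewed inputs” (strategy three above) can look like in practice, below is a minimal sketch of inverse-frequency sample weighting, one common mitigation technique. The group names and proportions are invented, and reweighting is no substitute for collecting more representative data in the first place.

```python
# Sketch of inverse-frequency weighting: examples from under-represented
# groups get proportionally larger training weights, so each group
# contributes equally to the overall training objective.
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each example by n / (k * count(group)), the 'balanced'
    scheme several ML libraries use for class weighting."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

groups = ["majority"] * 8 + ["minority"] * 2
weights = inverse_frequency_weights(groups)
print(weights[0], weights[-1])  # 0.625 (majority), 2.5 (minority)
```

These weights would then be passed to a training routine that accepts per-sample weights; the effect is that the two groups pull equally on the model despite the 8-to-2 imbalance.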

The bottom line here is that AI doesn’t have to reinforce the status quo or deepen the existing gap. With thoughtful design, transparent data practices, and meaningful human oversight, it has the potential to close that gap entirely.
