AI can transform public safety, but human collaboration is key

By Nick Chorley, Director, EMEA Public Safety & Security, Hexagon’s Safety

Nick Chorley highlights how human collaboration can support AI as it transforms public safety

Artificial intelligence (AI) is becoming increasingly ubiquitous and has the power to revolutionise how society ensures public safety. However, the public is wary of how the technology will be implemented: 47.4 per cent of those surveyed by Bristows in 2018 said AI will have a negative impact on society. Given serious concerns surrounding privacy, bias, and accountability, the key to ensuring the technology serves the public good lies in adopting an assistive AI approach governed by strong operational policy.

AI and ML: a definition 

In its strictest sense, AI is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans. Machine learning (ML), on the other hand, is a subset and application of AI that gives software systems the ability to learn from experience and improve without being explicitly programmed.
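
To make that distinction concrete, the minimal sketch below (in Python, using scikit-learn, with entirely invented example data) trains a classifier from labelled examples rather than hand-written rules:

```python
# A minimal sketch of "learning from experience": instead of writing
# explicit if/else rules, we let a model infer them from labelled examples.
# The data here is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each example: [calls_per_hour, average_severity_score]
training_data = [[2, 1], [3, 2], [40, 8], [35, 9], [5, 1], [50, 7]]
training_labels = ["routine", "routine", "major", "major", "routine", "major"]

model = DecisionTreeClassifier()
model.fit(training_data, training_labels)  # the system "learns from experience"

print(model.predict([[45, 8]]))  # -> ['major'], a rule no one wrote by hand
```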

Uses and limitations of AI in public safety

The potential ways in which AI and ML could transform public safety are far-reaching. For example, agencies can use AI to improve image identification within databases. The technology can also be used for pattern and data analysis, especially in threat detection; this includes analysing large pools of data to help determine where to deploy field resources. Other uses include sonar-based imaging and biometric systems, video and audio transcription, and cybersecurity enforcement.
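
As one hedged illustration of data-driven resource deployment, the sketch below clusters historical incident coordinates and treats the cluster centres as candidate staging points. The coordinates and cluster count are invented for illustration; a real deployment model would use far richer data.

```python
# A minimal sketch of data-driven resource placement: cluster historical
# incident locations and treat cluster centres as candidate staging points.
# Coordinates are invented; a real system would use many more features.
import numpy as np
from sklearn.cluster import KMeans

incident_coords = np.array([
    [51.507, -0.128], [51.509, -0.130], [51.506, -0.127],   # city-centre cluster
    [51.551, -0.076], [51.553, -0.072], [51.549, -0.075],   # north-east cluster
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(incident_coords)
for centre in kmeans.cluster_centers_:
    print(f"Candidate staging point: {centre[0]:.3f}, {centre[1]:.3f}")
```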

However, there are also a number of concerns surrounding unguided implementations of AI in public safety. For example, by removing human judgement from emergency response, AI could miss the important social, cultural, and political context that informs decision-making in public safety.

In addition, ML decision-making is only as accurate as the data that informs it, and AI systems have proven susceptible to bias in this respect. Data that is entered without context, that reflects the bias of the person entering it, or that is incomplete can cause AI systems to develop unfair biases. Indeed, existing human biases are easily embedded into decision-making processes when the data itself originates from human interpretation. This issue is compounded by ‘black box’ ML models, in which even the designers of an AI program cannot be sure how its decisions are reached. The implications for public trust are clear.
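
The point that a model faithfully reproduces skew in its training data can be shown in a few lines. In this deliberately contrived sketch, two groups have identical risk factors but historically skewed labels, and the model repeats the skew:

```python
# A minimal sketch of bias propagation: if historical labels are skewed
# against one group, the model learns and repeats that skew. The "group"
# feature stands in for any attribute the training data encodes unfairly.
from sklearn.linear_model import LogisticRegression

# Each example: [group, risk_factor] -- identical risk, labels skewed by group
X = [[0, 5], [0, 5], [0, 5], [1, 5], [1, 5], [1, 5]]
y = [0, 0, 0, 1, 1, 1]  # group 1 was always flagged, group 0 never was

model = LogisticRegression().fit(X, y)
print(model.predict([[0, 5], [1, 5]]))  # -> [0 1]: same risk, different outcome
```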

These issues sit within larger conversations about accountability, transparency and trust, and the ethical use of AI. In a sector such as public safety, which demands high sensitivity and in which people’s lives can be at stake, AI must therefore be implemented cautiously.

The assistive AI approach

Given the issues that can arise from AI in public safety, it is important that these technologies are introduced with human oversight – this can be realised through ‘assistive AI’. Assistive AI is a human-centred, transparent approach to embedding AI within an operational system, one that focuses on augmenting human judgement and intuition in real time.

The benefits of this model are that it assists personnel – speeding up laborious tasks and alleviating the strain placed on emergency services – while leaving the decision-making to humans. This helps public safety professionals make better decisions, amplifies their intuition, and accelerates their real-time impact. It also helps reduce a key issue in public safety known as the ‘operational blind spot’.
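
In software terms, “assisting but not deciding” often looks like the hypothetical sketch below: the AI component only proposes a priority and a rationale, and nothing is actioned until a human operator confirms or overrides it. The incident format and `score_incident` helper are invented for illustration.

```python
# A minimal human-in-the-loop sketch: the model proposes, the human disposes.
# score_incident() is a hypothetical stand-in for any trained model.
from dataclasses import dataclass

@dataclass
class Suggestion:
    priority: str
    rationale: str

def score_incident(report: str) -> Suggestion:
    # Hypothetical model output; a real system would call a trained model.
    if "fire" in report.lower():
        return Suggestion("HIGH", "matched a high-severity pattern ('fire')")
    return Suggestion("MEDIUM", "no high-severity pattern matched")

def dispatch(report: str) -> str:
    suggestion = score_incident(report)
    print(f"AI suggests {suggestion.priority}: {suggestion.rationale}")
    # The human operator always makes the final call.
    decision = input("Accept suggested priority? [y/n]: ")
    return suggestion.priority if decision.lower() == "y" else input("Enter priority: ")

print("Dispatched at priority:", dispatch("Smoke and fire reported at warehouse"))
```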

The ‘operational blind spot’ arises when public safety organisations miss opportunities to reduce the impact of complex emergencies because their tools do not make effective use of real-time operational data and insights. These blind spots are born of the lag between capturing operational data and emergency service personnel analysing it. After all, emergency situations are fast-moving and highly complicated: a terrorist attack, for example, requires a combined response from multiple organisations reacting to a fluid and dangerous situation. In such a situation, public safety personnel using traditional tools will not have time to analyse all relevant data efficiently when strategising a holistic response.

In these scenarios, assistive AI can act as a force multiplier for emergency service personnel by augmenting data-driven, real-time decision-making. It can also bridge information gaps and data silos by creating shared awareness with neighbouring jurisdictions and other organisations responding to a crisis. Lastly, and perhaps most importantly, it improves the well-being of staff by addressing alert fatigue, augmenting personnel’s judgement with real-time insight, and speeding the onboarding of new hires.
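
On the alert-fatigue point specifically, one common pattern is suppressing repeat alerts within a time window so operators see each incident only once. The sketch below assumes an invented alert-key format and window length:

```python
# A minimal alert de-duplication sketch to reduce alert fatigue: repeats of
# the same alert inside a time window are suppressed, so each incident is
# surfaced to the operator once. The alert-key format is invented.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
last_seen: dict[str, datetime] = {}

def should_surface(alert_key: str, now: datetime) -> bool:
    previous = last_seen.get(alert_key)
    last_seen[alert_key] = now
    return previous is None or now - previous > WINDOW

t0 = datetime(2024, 1, 1, 12, 0)
print(should_surface("sensor-7:intrusion", t0))                          # True
print(should_surface("sensor-7:intrusion", t0 + timedelta(minutes=3)))   # False
print(should_surface("sensor-7:intrusion", t0 + timedelta(minutes=20)))  # True
```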

A collaborative future

There is much to be gained from AI and ML in public safety, but implementing these technologies can bring considerable societal risk if their use is not cautious and guided by human input. By augmenting human decision-making, AI can allow emergency service teams to be quicker, more cooperative, and better protected when responding to a crisis. Ultimately, an assistive AI approach with humans at the centre is the most effective way to avoid the ethical pitfalls that define the AI debate in public safety.
