Cities are leveraging artificial intelligence (AI) to improve safety and security for their citizens, while seeking to safeguard privacy and fundamental human rights.
Surveillance and predictive policing through AI is the most controversial trend in this report but one that has important implications for the future of cities and societies.
Technology is frequently treated as a synonym for progress, but the ethics of its use may need to be questioned. An underlying question is what kind of society we are aiming to build. There are doubts and uncertainties about the impact of AI on communities and cities: the most fundamental concern is privacy, but AI is also frequently debated from other perspectives, such as its impact on jobs, the economy and the future of work. Discussions about surveillance and predictive policing therefore cannot be disconnected from recent debates about the societal, ethical and even geopolitical dimensions of AI.
The pace of adoption of AI for security purposes has increased in recent years. AI has recently helped create and deliver innovative police services, connect police forces to citizens, build trust, and strengthen ties with communities. There is growing use of smart solutions such as biometrics, facial recognition, smart cameras, and video surveillance systems. A recent study found that smart technologies such as AI could help cities reduce crime by 30 to 40 per cent and reduce response times for emergency services by 20 to 35 per cent.1 The same study found that cities have started to invest in real-time crime mapping, crowd management and gunshot detection. Cities are making use of facial recognition and biometrics (84 per cent), in-car and body cameras for police (55 per cent), drones and aerial surveillance (46 per cent), and crowdsourced crime reporting and emergency apps (39 per cent) to ensure public safety. However, only 8 per cent use data-driven policing.2 The AI Global Surveillance (AIGS) Index 2019 states that 56 out of 176 countries used AI for surveillance in safe city platforms, although with different approaches.3 The International Data Corporation (IDC) has predicted that by 2022, 40 per cent of police agencies will use digital tools, such as live video streaming and shared workflows, to support community safety and an alternative response framework.4
Surveillance is not new, but cities are exploring the possibility of predicting crime by analysing surveillance data, in order to improve security. Cities already capture images for surveillance purposes, but with AI those images can now be analysed and acted on much more quickly.5 Machine learning and big data analysis make it possible to navigate huge amounts of data on crime and terrorism, and to identify patterns, correlations and trends. When the right relationships are in place, technology is the layer that supports law enforcement agencies in doing their job better and in triggering behaviour change. The ultimate goal is to create agile security systems that can detect crime or terrorism networks and suspicious activity, and even contribute to the effectiveness of justice systems.
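To make the idea of pattern detection over incident data more concrete, the following is a minimal, purely illustrative sketch (not any city's actual system) of one of the simplest techniques in this family: grid-based hotspot identification, which buckets reported incident coordinates into cells and flags cells with unusually many reports. All names, coordinates and thresholds here are hypothetical.

```python
# Illustrative hotspot detection: bucket incident coordinates into a
# fixed-size grid and flag cells whose report counts exceed a threshold.
# This is a toy sketch, not a real predictive-policing system.
from collections import Counter

def hotspots(incidents, cell_size=0.01, threshold=3):
    """Return the set of grid cells containing >= `threshold` incidents.

    `incidents` is an iterable of (lat, lon) pairs; `cell_size` is the
    grid resolution in degrees (hypothetical value for illustration).
    """
    counts = Counter(
        (int(lat // cell_size), int(lon // cell_size))
        for lat, lon in incidents
    )
    return {cell for cell, n in counts.items() if n >= threshold}

# Toy data: three reports clustered around one block, plus one outlier.
reports = [(48.8566, 2.3522), (48.8567, 2.3523), (48.8565, 2.3521),
           (48.9000, 2.4000)]
print(hotspots(reports, cell_size=0.01, threshold=3))
```

Real deployments use far richer models (temporal trends, network analysis), but the principle is the same: aggregation over historical data to direct attention, which is precisely why the quality and bias of the underlying data matter so much.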
Cities are also exploring other uses of surveillance and AI technologies. AI is being used for urban tolling and emission zones to reduce air pollution for sustainability purposes. Another emerging area of application is the prevention of future health crises. Paris uses AI to monitor the metro system and check that passengers are wearing face masks. The aim is not to identify and punish rule-breakers but to generate anonymous data that helps authorities anticipate future outbreaks of infection.6
How to achieve these goals while respecting privacy and liberties remains a crucial question.
Experts say it is almost impossible to design broadly adopted ethical AI systems because of the enormous complexity of the diverse contexts they need to encompass. Any advances in AI for surveillance and predictive policing need to be accompanied by discussions about ethical and regulatory issues. Even though the value proposition of these technologies might seem attractive from a use case perspective, liberties and civil rights need to be protected by proper privacy and human rights regulations.
Although a controversial issue in Western countries (and some cities in the US have banned it), predictive policing is being deployed widely in Asia. A survey by Deloitte has shown considerable differences in the acceptance and desirability of these technologies between regions. Both surveillance and predictive policing are considered undesirable in more privacy-aware geographies such as the EU and North America, while Latin America and Asia have shown greater acceptance.
In summary, cities need to consider whether using technology for surveillance and policing means trading freedom for convenience.
“There is a lot of mistrust between communities and the police, and what we have seen again and again is that traditionally marginalised low-income communities are less likely to call for help. Introducing technology like gunshot detection empowers your police officers and law enforcement agencies to respond and help the community.”
-Jeff Merritt, Head of IoT and Urban Transformation at The World Economic Forum
“Whether it is about sensors, CCTVs, digital contact tracing, it is very important for us to be sensitive to how people feel about the data collection and data use, and we must communicate and be very clear about what we are doing.”
-Kok Yam Tan, Deputy Secretary of Smart Nation and Digital Government Office, Singapore