AI fears abound

BY Richard Summerfield

Artificial intelligence (AI) and machine learning have the potential to revolutionise many aspects of our professional and personal lives. In the decades to come, the benefits to be gained from embracing these technologies could be remarkable. That said, the negative impact of AI and machine learning is widely debated, and the technology may have unintended consequences.

The risk of immoral, criminal or malicious use of AI by rogue states, criminals and terrorists will grow exponentially in the coming years, according to 'The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation' report. The report was authored by 26 experts in AI, cyber security and robotics from universities including Cambridge, Oxford, Yale and Stanford, and from non-governmental organisations such as OpenAI, the Center for a New American Security and the Electronic Frontier Foundation.

Yet despite the potential risks posed by malicious actors, many institutions are wholly unprepared. According to the authors, the cyber security landscape will continue to change over the next decade: the increased use of AI systems will lower the cost of mounting a cyber attack, meaning that the number of malicious actors and the frequency of their attacks will likely increase.

“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions, and individuals across the globe,” says Dr Seán Ó hÉigeartaigh, executive director of Cambridge University’s Centre for the Study of Existential Risk and a co-author of the report.

In response to the evolving threat of cyber crime and the potential misappropriation of AI, the report sets forth four recommendations. First, policymakers should work with researchers to investigate, prevent and mitigate potential malicious uses of AI. Second, researchers and engineers in AI should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms. Third, organisations should, where possible, identify best practices in research areas with more mature methods for addressing dual-use concerns, such as computer security, and import them where applicable to the case of AI. Finally, companies should actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

Report: The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
