The Risks of AI
- Simeon Spencer
- May 5, 2023
- 4 min read

With all our recent content covering the potential of AI, it's only fair that we also cover the darker side of AI and its most popular subset, generative AI. Recently, Geoffrey Hinton, a renowned computer scientist and AI researcher widely regarded as one of the pioneers of the deep learning revolution behind modern AI, left his position as a VP on Google's Brain Team. Hinton told The New York Times that he left Google so he could voice his concerns about the dangers of AI without implicating Google directly. Other high-profile names like Elon Musk and Steve Wozniak have signed the Future of Life Institute's open letter calling for a six-month pause on training AI systems more powerful than GPT-4 until the related risks are better understood. The letter has garnered 27,565 signatures as of this writing.
However, OpenAI's CEO Sam Altman seems unwilling to halt his company's progress despite these concerns, although he did voice agreement with some portions of the open letter. In our opinion, he will probably never agree to a complete halt, for two reasons. First, a pause would give competitors significant time to catch up, since a global AI development hiatus is almost impossible to enforce. Second, OpenAI's new stakeholders, who have sunk massive investments into the company, are unlikely to agree for the same reason: they invested in a market leader and want it to remain one.
The Current and Potential Risks of AI
You may already be familiar with some of the more widely discussed risks of AI, such as job loss and advanced deepfakes, but we'll give you an overview of these and other risks below:
Labour Risks:
Job loss due to AI automation - AI and automation could lead to significant job losses, particularly in sectors that rely on routine tasks or knowledge work. Goldman Sachs estimates that 300 million full-time jobs globally could be affected, especially in fields like administration, legal, and production. However, AI could also significantly boost labour productivity, leading to an estimated 7% increase in annual global GDP. AI automation is therefore a double-edged sword: it could deliver significant economic gains, but it could also fuel political and economic instability if a large portion of the workforce is displaced too rapidly.
Over-dependence - jobs complemented by AI could see a rise in employee productivity, but at the same time, employees who rely on AI for their work could see their manual proficiency decline. If the AI were to go down, these employees may suffer a more-than-proportionate fall in productivity compared with colleagues who never had AI support. Overusing AI in decision-making could likewise erode critical thinking and decision-making skills. A balanced approach must be adopted to prevent an AI-dependent workforce culture.
Security Risks:
AI model vulnerabilities - AI systems could be vulnerable to hacking and cyber-attacks, posing significant security risks and potentially catastrophic consequences if they were to fail or go down, especially when a large number of enterprises rely on a single model's API for day-to-day operations. Such attacks could themselves be powered by other, higher-performance AI models capable of probing defences like enterprise firewalls.
AI-powered hacks and malware - AI can make hacking, scams, and the exploitation of user data more sophisticated and effective. Automated attacks, impersonation, and antivirus evasion are just some of the ways AI can enhance hacks and exploits. Generative AIs may also be used to create new, complex types of malware that evade conventional detection and protection measures. As generative AIs progress, AI-powered security solutions are likely to be among the first use cases to see wide adoption.
Legal Risks:
Deepfakes - generative AI can create realistic videos or audio of people saying or doing things they never did. These can be used for malicious purposes such as spreading false information, blackmail, impersonation, or defamation.
Disinformation - generative AI can generate content that is false, misleading, or biased. This can be used for manipulating public opinion, influencing elections, undermining trust, or promoting agendas.
Copyright - many creatives and organizations are debating whether generative AI models have been plagiarising their work, and whether the output created by these models is truly owned by the person who prompted the model. It remains uncertain how regulators will treat generative AIs later in 2023; they may well require AI models to be transparent about their training datasets so that any copyright infringement can be checked.
Overall regulatory grey area - while everyone wants to capture a share of the generative AI market, it is worth noting that regulations implemented in 2023 or later could drastically increase the cost of operating AI models or render certain models entirely obsolete. Companies developing, or planning to develop, their own AI models should take note and can mitigate regulatory risk by ensuring their datasets hold no copyrighted data and by adopting strategies focused on model transparency.
These are the most significant risks at present. Some would argue that operational risks, such as AI models not behaving as intended, are also important, but we believe the market and users understand, to a certain extent, that AI models are imperfect and still very nascent.
We expect more risks to emerge as more use cases are developed for generative AIs and as more AI companies pop up amid the renewed hype around the sector. Companies are now building AI for every sector and every work department, with little concern for whether there's a real need for it. We leave you with a famous quote from Jurassic Park that sums up what's happening in AI right now:
"Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."