Ashley O'Connor
Marketing Executive
August 2, 2023
The New York Times recently reported that Dr Geoffrey Hinton, the man widely recognised as the “Godfather of artificial intelligence”, was leaving his job at Google so that he could speak more freely about the dangers of AI. In the interview, Dr Hinton shared his concern that “artificial intelligence could cause the world serious harm”, adding that he even regrets his life’s work.
Dr Hinton started work on AI more than 40 years ago, becoming a pioneer in neural network research. In AI, neural networks are systems loosely modelled on the human brain in the way they learn and process information; training large, multi-layered networks is known as “deep learning”.
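To make the idea a little more concrete, here is a minimal sketch in Python (using the PyTorch library; the toy task, layer sizes, and data are all invented for illustration, and this is a simplification of the general technique, not the systems Dr Hinton built):

```python
# A minimal sketch of a "deep" neural network: layers of simple units that
# learn by repeatedly adjusting their connection weights from examples.
import torch
import torch.nn as nn

# A tiny network: 4 inputs -> two hidden layers -> 1 output (sizes invented).
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

# Invented toy task: learn to map random inputs to the sum of their values.
inputs = torch.randn(256, 4)
targets = inputs.sum(dim=1, keepdim=True)

loss_fn = nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)

# "Learning" is just many small corrections to the weights.
for step in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()   # work out how each weight contributed to the error
    optimiser.step()  # nudge the weights to reduce the error

print(f"final error: {loss.item():.4f}")
```

The network is never told the rule; it discovers it by example, which is the essence of the approach Dr Hinton pioneered.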
Dr Hinton went on to sell his groundbreaking technology in 2012, sparking a bidding war between Google and Microsoft, which ended in a $44 million sale. The award-winning system, which was sold to Google, led to the creation of technologies such as ChatGPT and Google Bard. One of Hinton’s students who co-created the system went on to become Chief Scientist at OpenAI (the creator of ChatGPT).
Although Dr Hinton has spent his life developing AI, he now joins the growing number of technology experts who have expressed concerns about the speed and direction in which AI is progressing. In March, an open letter calling for a pause on the development of AI systems more powerful than the current version of ChatGPT was signed by Apple co-founder Steve Wozniak, tech entrepreneur Elon Musk, and Turing Award-winning researcher Yoshua Bengio. Bengio, who won the 2018 Turing Award for his work on deep learning, released a statement expressing that “we need to take a step back” from artificial intelligence.
Despite a few far-fetched stories surfacing, such as a software engineer claiming that Google’s chatbot was sentient, the real concerns these experts hold relate to the damage such a powerful tool could cause if misused by humans with bad intentions, or by humans simply unaware of the consequences of their actions.
Since ChatGPT’s sensationalised release in November 2022, the public have begun to widely adopt artificial intelligence into their everyday lives, enhancing productivity, solving problems, and even using it to generate computer code.
Although ChatGPT was released with good intentions, it is already being misused, with one of the most obvious misuses being students cheating on schoolwork. But there are more malicious cases of people “jailbreaking” the chatbot and harnessing its power to create new forms of malware, and the tool is already having an impact on the cyber security landscape.
Concern | Description
Data privacy and security | As AI is relied upon in sensitive areas such as healthcare, there are concerns around the privacy and security of the data these systems handle.
Cyber security | AI chatbots can lower the barrier of entry for cyber criminals, enabling fraud and phishing and allowing code to be reverse engineered.
Job displacement | Automation raises concerns about job displacement and unemployment in certain sectors, potentially leading to socioeconomic challenges.
Unethical use | AI can be used by governments and other actors to push misinformation campaigns.
The explosion of cyber attacks in recent years is partly due to the low barrier of entry for cyber criminals. There are now guides and services, such as ransomware-as-a-service, that enable almost anyone to become an attacker with little to no technical knowledge. ChatGPT could contribute to this trend, lowering the barrier of entry even further.
GPT-4 has been reported to pass the Turing Test, meaning the technology is capable of tricking humans into thinking they are speaking to another human. This level of sophistication allows cyber criminals to use AI to generate phishing communications on a massive scale that are highly convincing and difficult to detect. Not only are these emails convincing and able to mimic writing styles, but ChatGPT can create them almost instantly, reducing the time and effort required to launch an attack.
ChatGPT is remarkably capable at understanding and analysing computer code. Used maliciously, this could allow attackers to reverse engineer the code and security features of a system, identifying vulnerabilities to attack. The chatbot could even potentially suggest methods to exploit these vulnerabilities, without the need for advanced technical knowledge.
According to Dark Reading, ChatGPT can act as a “mini-brain” for malware, making it completely autonomous. Such malware could allow an attacker to extract sensitive company data or shut down systems. Additionally, if ChatGPT’s censorship features are bypassed, the chatbot can be used to help create new malware for use in an attack.
It is important to note, however, that our cyber security experts think AI is still far from being widely used to create effective malware.
While many are focused on the negatives of artificial intelligence, it is important that we recognise the positive impact AI can have on keeping us protected. In 2023, research by BlackBerry found that the majority of IT decision-makers (82%) plan to invest in AI-driven cyber security, and almost half plan to invest before the end of the year.
The amount of data that the world produces is increasing at an exponential rate, and it has become impossible for humans to analyse it all manually. As a result, artificial intelligence is becoming an essential tool in the fight against cybercrime. AI can analyse vast amounts of data, detect patterns and anomalies, and proactively intercept threats autonomously.
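As a simplified illustration of that anomaly-spotting principle, the sketch below (in Python, using scikit-learn’s IsolationForest; the traffic figures are invented for the example) learns what “normal” connection data looks like and flags records that deviate from it. Real AI-driven security platforms are far more sophisticated, so treat this as a sketch of the idea rather than a production approach:

```python
# A simplified sketch of anomaly detection on network-style data.
# The "traffic" features here are invented; real systems learn from
# far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented baseline traffic: [bytes sent, connection duration in seconds].
normal_traffic = rng.normal(loc=[500, 30], scale=[50, 5], size=(1000, 2))

# Learn what "normal" looks like, with no labelled attack examples needed.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# New observations: two ordinary connections and one suspicious outlier
# (a huge transfer over a very short-lived connection).
new_connections = np.array([
    [510, 29],
    [480, 33],
    [9000, 2],
])

# predict() returns 1 for normal points and -1 for anomalies.
for features, verdict in zip(new_connections, detector.predict(new_connections)):
    status = "ANOMALY - flag for review" if verdict == -1 else "normal"
    print(f"bytes={features[0]:.0f} duration={features[1]:.0f}s -> {status}")
```

Because this style of detection learns from behaviour rather than known attack signatures, it can flag threats no one has seen before, which is what makes AI so valuable for defence.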
A key tool at the forefront of this movement comes from our partner, Darktrace. Darktrace is the world’s leading AI cyber security company, protecting the vast digital systems of corporations such as Samsung, Unilever, and Coca-Cola. With the help of deep learning, Darktrace is able to continuously monitor the entire digital infrastructure of these organisations, automatically identifying threats and autonomously neutralising them whenever they occur, something that would be almost impossible if left to humans alone.
Artificial intelligence is an incredibly powerful tool, providing us with possibilities that were previously unattainable. While it does present new dangers, the idea of AI becoming sentient remains the stuff of science fiction. The real dangers of AI are likely to arise from the malicious intentions of the humans controlling it.
In the future, it is possible that artificial intelligence will begin to fuel cyber attacks, and this new wave of attacks will require an equally sophisticated, AI-based form of defence.