As it stands, AI cannot completely replace a competent cybersecurity professional. It can, however, play a positive (yet limited) role in improving our effectiveness, even as society starts to believe the ultimate disruptor has arrived.
First Published 3rd February 2023
Does OpenAI dream of electric sheep?
3 min read | Reflare Research Team
Since OpenAI made ChatGPT available to the public in late 2022, we have heard a lot of FUD (Fear, Uncertainty, and Doubt) about how Artificial Intelligence (AI) will replace cybersecurity professionals. This is very unfortunate, because instead of thinking about how we could take advantage of recent advancements in AI to improve the quality of our work and reduce the day-to-day pressure that has driven many in the industry to burnout, we decided to demonise it.
For the past several years, AI has helped advance many fields, including cybersecurity, but it still has its limitations. There are several reasons why relying solely on AI in the realm of cybersecurity is still not feasible.
First, AI is poor at comprehending human emotions and the social engineering techniques that exploit them. Attacks such as phishing rely heavily on manipulating human emotions and psychology, so AI systems are unlikely to be effective against them. Human security professionals, on the other hand, are trained to recognise and respond to such attacks.
Second, adapting to new and changing threats is a challenge for AI. The cybersecurity landscape is constantly evolving, and new threats arise frequently. AI algorithms are designed to detect known threats but are not equipped to handle new and unknown dangers. Human security professionals can adapt quickly to new threats through training and experience.
Third, AI systems are only as effective as the data they are trained on, and they can sometimes produce unexpected results. Human security professionals are needed to review and interpret the results generated by AI systems and make informed decisions based on these results.
Fourth, AI lacks the creative problem-solving skills that human security professionals possess. Human security professionals can think outside the box and find unique solutions to complex problems, whereas AI systems are limited to the algorithms they have been trained on.
Fifth, ethical and legal limitations can restrict certain actions taken by AI. For example, there could be legal restrictions on using AI to monitor employee communications or perform specific forms of surveillance. Human security professionals are trained in ethical and legal aspects of cybersecurity and can make decisions accordingly. Also, AI algorithms are trained on data, and if the data is biased, the results generated by the AI system will also be biased. Thus, again, humans may still need to review the results generated by AI systems.
In short, while AI has made significant strides in cybersecurity, it is not a substitute for human security professionals. Its limitations in comprehending human emotions, adapting to evolving threats, and creative problem-solving, together with potential bias in training data and the need for human oversight on ethical and legal grounds, make it evident that human security professionals remain a crucial part of the cybersecurity domain.
Now that we have explained why most cybersecurity workers will keep their jobs, at least for the foreseeable future, let us talk about how AI systems can be our best friends.
AI can augment the efforts of human security professionals and make their jobs easier and more effective. It can help automate routine and repetitive tasks, freeing human security professionals to focus on more complex and critical issues. For example, AI can automate the process of detecting and patching vulnerabilities in the network, or of detecting and blocking malicious traffic.
Moreover, AI can help analyse large amounts of data and provide insights that humans may not be able to detect due to our limitations. This can be particularly useful in detecting advanced persistent threats (APTs) and other stealthy attacks that can evade detection by traditional security systems. It can also identify patterns and anomalies in the data that may indicate a security breach.
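The idea of spotting anomalies that humans would miss in large volumes of data can be illustrated with a minimal sketch. The function, data, and threshold below are all illustrative assumptions, not any particular product's method: it flags hours whose request counts deviate sharply from the statistical baseline, the simplest form of the pattern-and-anomaly detection described above.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Flag hourly request counts that deviate sharply from the baseline.

    counts: list of (hour_label, request_count) tuples.
    Returns the labels whose z-score exceeds the threshold.
    """
    values = [c for _, c in counts]
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [label for label, c in counts if abs(c - mu) / sigma > threshold]

# Hypothetical traffic log: steady 1,000 requests/hour, then a sudden spike.
traffic = [(f"{h:02d}:00", 1000) for h in range(23)] + [("23:00", 9000)]
print(flag_anomalies(traffic))  # ['23:00']
```

Real detection systems use far richer features and models, but the principle is the same: learn what "normal" looks like, then surface the deviations for a human to investigate.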
In addition, AI can improve the speed and accuracy of incident response. For example, it can automatically triage and prioritise security incidents, allowing human security professionals to focus on the most critical incidents. It can also automate the process of collecting and analysing data from various sources to help determine the root cause of an incident and identify the best course of action.
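Automated triage, as described above, can be sketched as a simple scoring exercise. The severity weights, asset tiers, and field names here are illustrative assumptions; a real system would score on many more signals, but the goal is the same: put the most critical incidents in front of a human first.

```python
# Illustrative weights: how bad the alert is, and how important the asset is.
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}
ASSET_TIER = {"domain-controller": 3, "server": 2, "workstation": 1}

def triage(incidents):
    """Sort incidents so the highest combined-risk score comes first."""
    def score(inc):
        return SEVERITY[inc["severity"]] * ASSET_TIER[inc["asset"]]
    return sorted(incidents, key=score, reverse=True)

queue = [
    {"id": 1, "severity": "low", "asset": "workstation"},
    {"id": 2, "severity": "critical", "asset": "domain-controller"},
    {"id": 3, "severity": "high", "asset": "server"},
]
print([inc["id"] for inc in triage(queue)])  # [2, 3, 1]
```

The human analyst still makes the final call; the scoring merely decides the order of the queue.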
Another area where AI can positively impact cybersecurity is in the realm of threat intelligence. It can collect, analyse and disseminate information about current and emerging threats in real-time faster than human analysts. This can help organisations stay informed about the latest security threats and take proactive measures to protect themselves.
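At its simplest, the aggregation side of threat intelligence is merging indicators of compromise (IoCs) from multiple feeds faster than an analyst could by hand. The feeds, indicators, and confidence scores below are entirely made up for illustration: duplicates are collapsed and the highest reported confidence wins.

```python
def merge_feeds(*feeds):
    """Merge (indicator, confidence) pairs from several threat feeds,
    deduplicating and keeping the highest confidence seen per indicator."""
    merged = {}
    for feed in feeds:
        for indicator, confidence in feed:
            merged[indicator] = max(confidence, merged.get(indicator, 0))
    return merged

# Two hypothetical feeds reporting overlapping indicators.
feed_a = [("203.0.113.7", 80), ("evil.example.com", 60)]
feed_b = [("203.0.113.7", 95), ("198.51.100.9", 40)]
print(merge_feeds(feed_a, feed_b))
```

Running this yields one consolidated view per indicator, which is the kind of real-time rollup that lets organisations act on the latest threats proactively.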
Despite these benefits, it is important to remember that AI is not a silver bullet and should not be relied upon solely in cybersecurity. These systems must be properly designed, trained, and maintained. They should be used in conjunction with human security professionals who can provide oversight and make informed decisions based on the results generated by AI systems.
While AI has its limitations in cybersecurity, it can be a valuable tool in augmenting the efforts of human security professionals. AI can help automate routine tasks, provide insights into large amounts of data, improve the speed and accuracy of incident response, and help organisations stay informed about the latest security threats. However, as human security professionals, we must not rely on it alone and must always remain involved in the decision-making.
Stay up-to-speed on the latest analysis in cybersecurity trends with your free subscription to Reflare's biweekly research newsletter. You can also explore some of our related articles to learn more.