Amid the rapid pace of technological progress and societal shifts, the term "AI" has firmly positioned itself at the forefront of global conversations. With large language models (LLMs) spreading rapidly, mounting security and privacy concerns directly link AI with the cybersecurity world. Kaspersky researchers illustrate how AI tools helped cybercriminals in their malicious activity in 2023, while also showcasing the potential defensive applications of this technology. The company's experts also forecast how the landscape of AI-related threats may evolve.
More complex vulnerabilities
As instruction-following LLMs are integrated into more consumer-facing products, new complex vulnerabilities will emerge at the intersection of probabilistic generative AI and traditional deterministic technologies, expanding the attack surface that cybersecurity professionals must secure. This will require developers to adopt new security measures, such as requiring user approval for actions initiated by LLM agents, as sketched below.
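To illustrate what such a safeguard might look like, here is a minimal, hypothetical Python sketch of a user-approval gate for agent actions. The action names and the run_tool dispatcher are invented for illustration and do not reflect any specific agent framework.

```python
# Hypothetical sketch: a human-in-the-loop approval gate for actions
# proposed by an LLM agent. Side-effecting actions require explicit
# user confirmation before they run.

SAFE_ACTIONS = {"search", "read_file"}                  # no side effects
SENSITIVE_ACTIONS = {"send_email", "delete_file", "http_post"}

def run_tool(action: str, args: dict) -> str:
    # Placeholder dispatcher for the sketch; a real agent would route
    # the call to an actual tool implementation here.
    return f"Executed {action} with {args}"

def execute_with_approval(action: str, args: dict) -> str:
    """Run an LLM-proposed action, pausing for user approval whenever
    the action can change state outside the model's sandbox."""
    if action in SENSITIVE_ACTIONS:
        answer = input(f"Agent wants to run {action}({args}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "Action denied by user."
    elif action not in SAFE_ACTIONS:
        # Default-deny anything the gate does not recognize.
        return f"Unknown action {action!r} refused by default."
    return run_tool(action, args)
```

The design choice here is default-deny: the probabilistic model can propose anything, but only a short allowlist executes without a human in the loop.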
A comprehensive AI assistant for cybersecurity specialists
Red teamers and researchers are leveraging generative AI to build innovative cybersecurity tools, which could lead to an assistant powered by an LLM or machine learning (ML). Such a tool could automate red-teaming tasks, offering guidance based on the commands executed in a pentesting environment.
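As a rough illustration of what such an assistant could look like, the Python sketch below logs commands an operator runs and asks a model to suggest a next step. The query_llm function is a hypothetical stand-in for whatever completion API a real tool would call; the prompt format and log handling are likewise invented for the example.

```python
# Hedged sketch of an LLM-backed pentest assistant: it records the
# commands the operator has executed and asks a model for guidance.

import subprocess

def query_llm(prompt: str) -> str:
    # Placeholder: a real assistant would call an LLM completion API here.
    return "Suggested next step: enumerate open services on the target."

def run_and_record(command: list[str], history: list[str]) -> str:
    """Execute an operator-approved command and append it to the session log."""
    result = subprocess.run(command, capture_output=True, text=True, timeout=60)
    history.append(f"$ {' '.join(command)}\n{result.stdout[-2000:]}")
    return result.stdout

def suggest_next_step(history: list[str]) -> str:
    """Build a prompt from the session log and ask the model for guidance."""
    prompt = (
        "You are assisting an authorized red-team exercise.\n"
        "Commands executed so far and their output:\n"
        + "\n".join(history)
        + "\nSuggest one next reconnaissance step."
    )
    return query_llm(prompt)

history: list[str] = []
run_and_record(["echo", "nmap scan placeholder"], history)
print(suggest_next_step(history))
```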
Neural networks will be increasingly used to generate visuals for scams
In the coming year, scammers may amplify their tactics using neural networks, leveraging AI tools to create more convincing fraudulent content. With the ability to effortlessly generate convincing images and videos, malicious actors raise the risk of an escalation in fraud- and scam-related cyber threats.
AI will not become a driver for groundbreaking change in the threat landscape in 2024
Despite the above trends, Kaspersky experts remain skeptical that AI will change the threat landscape significantly any time soon. While cybercriminals do adopt generative AI, so do cyber defenders, who will use the same or even more advanced tools to test and enhance the security of software and networks, making a drastic shift in the attack landscape unlikely.
More AI-related regulatory initiatives, with the private sector's contribution
As this fast-growing technology develops, it has become a matter of policymaking and regulation. The number of AI-related regulatory initiatives is set to rise. Non-state actors, such as tech companies, can provide invaluable insights for discussions on AI regulation on both global and national platforms, given their expertise in developing and utilizing artificial intelligence.
Watermarks for AI-generated content
More regulations and service provider policies will be required to flag or identify synthetic content, with service providers continuing to invest in detection technologies. Developers and researchers, for their part, will contribute methods of watermarking synthetic media for easier identification and provenance tracking.
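As a concrete, if simplistic, illustration of the watermarking idea, the Python sketch below hides a short identifier in the least-significant bits of an image's red channel using Pillow. Production provenance watermarks are designed to survive compression, cropping, and editing; this toy example is not, and the tag format is invented for the demonstration.

```python
# Naive watermarking sketch: embed a short identifier in the
# least-significant bits (LSBs) of an image's red channel.

from PIL import Image

def embed_watermark(img: Image.Image, tag: str) -> Image.Image:
    """Write the tag's bits into the red-channel LSBs, pixel by pixel."""
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    out = img.convert("RGB").copy()
    px = out.load()
    for i, bit in enumerate(bits):
        x, y = i % out.width, i // out.width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)   # overwrite the red LSB
    return out

def extract_watermark(img: Image.Image, length: int) -> str:
    """Read `length` bytes back out of the red-channel LSBs."""
    px = img.convert("RGB").load()
    bits = [px[i % img.width, i // img.width][0] & 1 for i in range(length * 8)]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode()

marked = embed_watermark(Image.new("RGB", (64, 64), "white"), "AI-GEN")
print(extract_watermark(marked, 6))   # prints "AI-GEN"
```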
"Artificial Intelligence in cybersecurity is a double-edged sword. Its adaptive capabilities fortify our defenses, offering a proactive shield against evolving threats. However, it also poses risks as attackers leverage AI to craft more sophisticated assaults," said Vladislav Tushkanov, security expert at Kaspersky. "Striking the right balance, ensuring responsible use without oversharing sensitive data, is paramount in securing our digital frontiers."
To learn more about AI in cybersecurity, visit Securelist.com.
SOURCE Kaspersky
Image credit: Securelist