By Kerry Jones, Head of Compliance and Information Security at DigitalXRAID

The ever-accelerating march of AI into our lives has transformed industries and opened new realms of possibility. From AI-powered customer service chatbots to streamlining data analysis, AI is undoubtedly a game-changer. However, there are some sectors where AI could significantly disrupt the status quo, posing a wide range of risks for businesses.

Particularly in cybersecurity, we’ve witnessed firsthand tools like FraudGPT circulating on the cybercrime underground, offering to lower the barrier to entry for budding hackers. What’s more, organisation’s data is becoming more vulnerable than ever, with occurrences of sensitive data leaked as a result of staff using ChatGPT to help them with tasks. It’s become clear that there’s an urgent need for policies and frameworks to ensure data protection in the era of generative AI.

Several AI security regulatory frameworks have recently emerged, including NIST AI RMF, ENISA’s guidance and Google’s SAIF. These frameworks have surfaced in response to the rapid proliferation of AI technologies and their increasing integration into various industries. They represent a collective effort to proactively address the unique security and privacy challenges that AI poses. These regulatory frameworks emphasise the importance of proactive security measures and adaptability in the face of evolving threats. They encourage organisations to expand their security foundations by leveraging secure-by-default infrastructure while staying abreast of AI advancements and promptly adapting software and procedures accordingly. With prompt injection attacks emerging as a new vulnerability impacting AI models, it represents the AI-based supply chain risks that security leaders need to account for. Google’s SAIF framework urges the need to extend detection and response capabilities, urging companies to collaborate with trust and safety, threat intelligence, and counter-abuse teams to continuously monitor for potential threats, reiterating the importance of harmonising platform-level controls to ensure consistent security measures across the organisation, especially for AI risk mitigation.

While these frameworks offer valuable guidelines for enhancing AI security, they do not exist in isolation, and other standards such as ISO 27001 must be considered given the complexity of AI’s impact on information security. Firstly, ISO 27001 is a well-established and widely recognised information security standard that encompasses a broad spectrum of security aspects, including those related to AI. Transitioning to the updated ISO27001: 2022 standard allows organisations to align information security practices with the latest industry best practices, ensuring comprehensive coverage of security concerns. The integration of AI tools into operations necessitates a deep understanding of how these technologies affect data protection, confidentiality, integrity, and availability. While AI security frameworks provide valuable insights, ISO 27001 offers a holistic approach to security that can be tailored to address the unique AI-related risks and challenges faced by organisations.

The role of the cybersecurity industry in securing AI tools is crucial. Vendors and service providers must actively raise awareness of the risks associated with using AI tools in various business processes. Employees need to understand that utilising these tools, especially when handling sensitive client or company information, can potentially expose private data, jeopardising contracts and inviting legal consequences. When it comes to regulation, it’s essential for regulators to proactively address AI security. Currently, AI Is generating security risks at a rate faster than companies can keep up with. And while technology often outpaces regulation, finding the right balance between security and innovation is vital. Regulators must facilitate safe AI adoption while safeguarding against misuse.

Mitigating cybersecurity risks posed by generative AI misuse requires a multifaceted approach. Organisations must stay updated on AI developments and adapt their operations to address evolving risks. Regularly reviewed communication and training are essential to ensure employees are aware of risks and proper procedures. Establishing a security-first culture is paramount, as human error remains a significant cybersecurity threat.

But is generative AI truly useful for cybersecurity? In its current state, AI chatbots like ChatGPT can enhance the efficiency of cyber professionals, but human validation of their output is essential. It’s crucial that developers with subject-matter knowledge should drive these tools to ensure their reliability. However, the rise of AI in cybercrime poses a significant concern. Malicious actors may leverage AI to create tools and exploit vulnerabilities, potentially democratising cybercrime further. Better training is required so that teams relying on generative AI are more capable of spotting these kinds of mistakes.

As AI continues to shape the cybersecurity landscape, organisations must strike the balance between harnessing AI’s potential and safeguarding against its risks. The collaboration of regulators, the cybersecurity industry, and businesses is pivotal in ensuring a secure and innovative AI-powered future. Understanding and communicating risks, educating employees, and fostering a security-first culture are the first steps toward navigating this new era of AI and cybersecurity.

About the author

Kerry Jones, Head of Compliance and Information Security, DigitalXRAID

Kerry Jones joined DigitalXRAID in 2018 and was quickly promoted to Head of Compliance and Information Security. As a highly accomplished and driven cyber security professional with over 10 years of experience in the field, Kerry is instrumental in implementing robust security frameworks for a wide range of organisations, from large enterprise to SMEs. Internally, she manages the ISO certifications for ISO 27001, 9001, and 20000, and supports additional accreditations, making DigitalXRAID a leader in the industry. 

Kerry was also recently awarded Security Woman of the Year at the prestigious Computing Security Excellence Awards. Dedicated to recognising the talent, leadership and contribution of women in the IT industry, Kerry was commended for her position as a strong advocate of cyber awareness training, how she has taken organisations from being in the dark on security measures to providing them with a scalable roadmap for implementation, and for building and leading an all-female compliance team.

Read more of our article here.