Generative artificial intelligence (GenAI) is changing the game for companies around the globe.
It's bringing previously unimaginable capabilities within reach and enabling exciting business opportunities. Potential use cases range from discovering new drugs, developing virtual reality worlds, and enabling hyper-personalized banking experience to software coding. The opportunities are many, but GenAI technologies are also creating novel cybersecurity risks for enterprises.
With the implementation of AI applications built on large language models (LLMs), companies have a new technology environment that must be protected. However, the GenAI domain has characteristics and behaviors that differ fundamentally from the IT environments organizations know—and have experience protecting. Organizations need to realize that the same prompts used to interact with GenAI chatbots could be manipulated by bad actors to get a model to do things or disclose data in unintended ways. Therefore, it is important for organizations to safely manage the privacy and security of extensive data used in these models.
Sure enough, cybercriminals are already tapping the power of GenAI to unleash new threats. Enterprises that fear they may be missing out or losing the competitive edge in the AI race may jump on the bandwagon without a robust plan to secure or govern their new GenAI solutions. This rapid unstructured adoption may open doors for cybercriminals, providing them with opportunities to compromise and exploit AI systems.
This is why, as companies take the initial steps to explore and implement GenAI to advance their business strategies, they need to pursue a rigorous and proactive AI cybersecurity defense. These efforts should move forward in lockstep to ensure that AI implementations—and associated risks—are effectively managed by the cybersecurity team.
As a starting point for this, the cybersecurity team needs to identify all the AI and machine learning (ML) assets in the enterprise and assess the risks they present.
The cybersecurity leadership should develop AI-specific security policies and standards, setting clear expectations and establishing necessary guardrails. Wherever they expect LLM capabilities to be added to their IT environment, or sensitive company data fed into such models, they need to bring in robust procedures and policies. Software security engineering and processes should be modified to consider LLM-related security and privacy.
An organization's AI cybersecurity strategy must align with its objectives of protecting data integrity and confidentiality, safeguarding privacy, and preventing adversarial attacks like model theft or evasion. It must also clearly articulate its principles around reliability of models, minimization of data, and privileges.
As adoption widens, enterprises must establish standards and governance to ensure that effective controls are implemented—in the engineering and operational environment and throughout the development life cycle. The strategy must include assurance testing to validate the security and privacy of any ML and AI models the organization is using or plans to use.
Bad actors are likely to exploit a wide range of potential vulnerabilities.
This will occur as companies tap into the power of LLMs and other new technologies, adding to the complexity of the cybersecurity mandate. Among the issues of concern, as generative AI becomes more deeply integrated into the enterprise IT landscape, are the following:
Firms need to urgently address the risks arising from the adoption of GenAI models.
These risks can be new and, in some cases, unique. The key considerations that may help the enterprise to develop a robust approach to prevent these risks include the following:
Further, there should be security architecture reviews and threat modeling to assess model design for robustness and inherent resistance to attacks. Adversarial training should be incorporated so that the model is exposed to simulated attack scenarios during its training phase.
Indirect prompt injection can be defended against by using reinforcement learning from human feedback (RLHF) and filtering retrieved inputs to remove harmful instructions. Other techniques that can be leveraged are an LLM moderator to detect sophisticated attacks and interpretability-based solutions for outlier detection.
As enterprises enter uncharted territory, strategies to de-risk the adoption of GenAI solutions are imperative.
It has now become clear that AI capabilities will themselves be indispensable to improve cybersecurity teams: Fire will be fought with fire. Even as GenAI presents security leaders with new challenges, it brings new tools that can boost the effectiveness and efficiency of cybersecurity defenses.
GenAI holds the potential to revolutionize the field, significantly improving threat detection and response times through automated analysis, while also delivering real-time summarized actionable threat intelligence. New technology streamlines entitlements, enables conditional access controls, and automates the generation and enforcement of access policies. It reduces the risk of human error, alleviating alert fatigue by automating routine tasks such as log analysis, among other areas. It ensures that only authorized users have access to sensitive information. Another place where GenAI solutions may play a role is in continuous compliance with regulatory requirements through automated collection and analysis of compliance data.
As cyber adversaries increasingly attack the AI ecosystem, enterprises must bravely step into this new era by de-risking its adoption—and by leveraging advanced AI technology to combat sophisticated cyberattacks. The achievable goal is to enhance overall cyber resilience, countering sophisticated threats and addressing an ever-present skills gap in this critical domain.