AI in SaaS is unavoidable. The top half of ServiceNow’s homepage is dedicated to putting AI to work. Salesforce has 17 mentions of AI or Einstein on its homepage. Copilot dominates Microsoft’s homepage banner, while GitHub touts itself as “the world’s leading AI-powered developer platform.”
Make no mistake: AI is transformative. Its ability to connect data points, identify patterns, and use that information to offer insights, draft communications, and surface relationships is breathtaking. However, it also opens SaaS applications to new areas of risk. Data that was thought to be securely stored within a SaaS app could be exposed by AI and shared.
The Cybersecurity and Infrastructure Security Agency (CISA) recognizes the value AI provides, noting more than 150 beneficial uses of AI, ranging from R&D to forecasting and system planning. However, for all its benefits, AI introduces significant risks that must be mitigated. In April 2024, CISA published its AI guidelines for critical infrastructure, much of which can be applied by non-critical-infrastructure enterprises looking to improve their SaaS AI security posture.
Three Types of AI Threats
According to CISA, there are three types of AI risk:
- Attacks that use AI
- Attacks that target AI systems
- Failures in the design and implementation of AI systems
Enterprises need to be wary of all three threat types and take the necessary steps to mitigate risk.
CISA’s AI Risk Mitigation Guidelines
CISA’s guidelines align closely with the NIST AI Risk Management Framework (AI RMF) and are organized around four functions: Govern, Map, Measure, and Manage.
Govern
As in the NIST framework, Govern sits at the center of the model and is built on a foundation of AI risk-management culture. The guidelines here support the establishment of policies, processes, and procedures that allow organizations to enjoy the benefits of AI while mitigating its risks. Govern follows a “secure by design” philosophy, in which cybersecurity leaders build a culture where security is a top priority.
The Govern guidelines include creating a detailed plan for cybersecurity risk management, establishing transparency in AI system use, and integrating AI threats, incidents, and failures into information-sharing mechanisms.
Furthermore, organizations should establish roles and responsibilities with their AI vendors, invest in workforce training, and collaborate with industry groups or government agencies to stay on top of risk management tools and methodologies.
Map
Mapping is key to understanding where and how AI systems are being used. This visibility allows security teams to assess, evaluate, and mitigate specific risks.
The guidelines include documenting AI use cases, their risks, and their mitigations, as well as assessing each AI tool and the potential negative impacts that could arise from its implementation.
Security teams should also assess whether certain AI systems require human supervision to address any malfunctions or unintended consequences.
Measure
Within this function, organizations are guided to develop systems capable of assessing, analyzing, and tracking AI risks. It asks security teams to identify repeatable methods that can monitor AI risks and impacts throughout the AI system lifecycle.
As part of this function, security teams should define metrics for detecting and tracking known risks and incidents. Organizations should continuously test AI systems for errors and establish practices to prevent exposure of confidential information.
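As a rough illustration (not part of CISA’s guidance), a team might begin with something as simple as tallying incidents per risk category over time; the categories and dates below are invented for the example.

```python
from collections import Counter
from datetime import date

# Invented risk categories; a real taxonomy would follow the
# organization's own AI risk register.
incidents = [
    ("prompt_injection", date(2024, 5, 2)),
    ("data_exposure",    date(2024, 5, 9)),
    ("prompt_injection", date(2024, 6, 1)),
]

def incidents_per_category(incidents):
    """Tally incidents by risk category, a baseline trackable metric."""
    return Counter(category for category, _ in incidents)

print(incidents_per_category(incidents))
# Counter({'prompt_injection': 2, 'data_exposure': 1})
```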
AI systems should be developed and used with resilience in mind, which enables fast recovery from any type of disruption. Most importantly, teams should also establish processes for AI security reporting, to collect feedback from impacted stakeholders.
Manage
The last of CISA’s guidelines covers the need to prioritize and act upon AI risks to safety and security. Organizations should establish and follow AI cybersecurity best practices, including the use of role-based access controls and logging all system use.
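As an illustration of those two practices, here is a minimal Python sketch of gating an AI feature behind role-based access control while logging every invocation; the role names and log format are hypothetical assumptions, not prescribed by CISA.

```python
import logging
from datetime import datetime, timezone

# Hypothetical role-to-capability map; real deployments would pull
# this from the identity provider or the SaaS app's admin API.
AI_ALLOWED_ROLES = {"data_analyst", "sales_ops_admin"}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def invoke_ai_feature(user: str, role: str, prompt: str) -> bool:
    """Gate an AI feature behind RBAC and log every invocation."""
    allowed = role in AI_ALLOWED_ROLES
    # Log all system use, permitted or denied alike.
    audit_log.info(
        "ts=%s user=%s role=%s allowed=%s prompt_chars=%d",
        datetime.now(timezone.utc).isoformat(), user, role, allowed, len(prompt),
    )
    return allowed

if __name__ == "__main__":
    invoke_ai_feature("jdoe", "intern", "Summarize Q3 pipeline")          # denied
    invoke_ai_feature("asmith", "data_analyst", "Summarize Q3 pipeline")  # permitted
```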
Whenever possible, mitigations should be applied before systems or applications are deployed, and systems should be monitored for any unusual or malicious activity.
When incidents arise, security teams and stakeholders should follow established incident response plans to restore and secure the AI system.
Benefits of Applying This Framework to SaaS
The CISA framework is geared toward critical infrastructure using AI. However, with some minor tweaks, it is relevant and applicable to AI in SaaS applications. In SaaS applications, AI risk emanates from two main areas: user accounts with broad permission scopes and AI tools that have deep reach into applications.
Permission trimming is always vital to SaaS security. The introduction of AI heightens this need, as AI agents inherit the permissions associated with the user account. If an over-permissioned user account with AI access is compromised, the threat actor can use the GenAI tool to exfiltrate massive volumes of sensitive data.
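To make that concrete, the sketch below shows one way a security team might flag over-permissioned accounts that also hold GenAI access; the account records and the two-scope threshold are illustrative assumptions, since real data would come from each SaaS vendor’s user-management API.

```python
# Illustrative records; in practice these would come from the SaaS
# vendor's user-management API, not a hard-coded list.
accounts = [
    {"user": "jdoe",   "scopes": ["read", "write", "admin", "export"], "ai_enabled": True},
    {"user": "asmith", "scopes": ["read"],                             "ai_enabled": True},
    {"user": "blee",   "scopes": ["read", "write", "admin", "export"], "ai_enabled": False},
]

# Hypothetical policy: any AI-enabled account with more than two
# permission scopes is a candidate for trimming.
MAX_SCOPES_FOR_AI = 2

def flag_risky_ai_accounts(accounts, max_scopes=MAX_SCOPES_FOR_AI):
    """Return AI-enabled accounts whose permission scope exceeds policy."""
    return [
        a for a in accounts
        if a["ai_enabled"] and len(a["scopes"]) > max_scopes
    ]

for account in flag_risky_ai_accounts(accounts):
    print(f"Review {account['user']}: {len(account['scopes'])} scopes with AI access")
```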
Similarly, if GenAI tools have unrestricted reach and access within the application, they can cause data leaks by sharing sensitive information in proposals, correspondence, or marketing materials.
To prevent these nightmare scenarios from unfolding, some SaaS Security Posture Management (SSPM) platforms now include GenAI checks. These checks look for sensitivity labels that would prevent GenAI agents from accessing confidential data. Other checks identify users with AI licenses or access, allowing the security team to limit GenAI access to those who need it.
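A simplified version of such a check might look like the following; the document fields and label values are hypothetical stand-ins for whatever metadata a given SaaS platform actually exposes.

```python
# Hypothetical document metadata; an SSPM would fetch this from the
# SaaS platform's content or compliance API.
documents = [
    {"name": "q3-board-deck.pptx", "label": "Confidential", "genai_reachable": True},
    {"name": "press-release.docx", "label": "Public",       "genai_reachable": True},
    {"name": "salary-bands.xlsx",  "label": None,           "genai_reachable": True},
]

def genai_exposure_findings(documents):
    """Flag GenAI-reachable documents that are unlabeled or confidential."""
    findings = []
    for doc in documents:
        if not doc["genai_reachable"]:
            continue
        if doc["label"] is None:
            findings.append((doc["name"], "missing sensitivity label"))
        elif doc["label"] == "Confidential":
            findings.append((doc["name"], "confidential content reachable by GenAI"))
    return findings

for name, reason in genai_exposure_findings(documents):
    print(f"{name}: {reason}")
```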
Security teams must bear in mind that every SaaS application is different, and the challenges AI presents in one application may require a different approach than those in another. However, following CISA’s guidelines should enable organizations to fully embrace AI while mitigating the risks these systems introduce.