Google Cloud and CSA: 2024 will bring significant generative AI adoption in cybersecurity, driven by C-suite

The “department of no” stereotype in cybersecurity would have security teams and CISOs locking the door against generative AI tools in their workflows. 

Yes, there are dangers to the technology, but many security practitioners have already tinkered with AI, and the majority of them don’t think it’s coming for their jobs. In fact, they’re aware of how useful the technology can be.

“When we hear about AI, it’s the assumption that everyone is scared,” said Caleb Sima, chair of the CSA AI security alliance. “Every CISO is saying no to AI, it’s a huge security risk, it’s a huge problem.”

But in reality, as a new report from Google Cloud and the Cloud Security Alliance (CSA) puts it, “AI is transforming cybersecurity, offering both exciting opportunities and complex challenges.”

Growing implementation — and disconnect

Per the report, two-thirds (67%) of security practitioners have already tested AI specifically for security tasks. Additionally, 55% of organizations plan to incorporate AI security tools this year — the top use cases being rule creation, attack simulation, compliance violation detection, network detection, reducing false positives and classifying anomalies. C-suites are largely behind that push, as confirmed by 82% of respondents.

Bucking conventional wisdom, just 12% of security professionals said they believed AI would completely take over their role. Nearly one-third (30%) said the technology would enhance their skill set, while others said it would generally support their role (28%) or replace large parts of their job (24%). A majority (63%) said they saw its potential for enhancing security measures.

“For certain jobs, there’s a lot of happiness that a machine is taking it,” said Anton Chuvakin, security advisor in the office of the CISO at Google Cloud. 

Sima agreed, adding that “most people are more inclined to think that it’s augmenting their jobs.”

Interestingly, though, C-levels self-reported far higher familiarity with AI technologies than staff: 52% compared to 11%. Similarly, 51% claimed a clear understanding of use cases, compared to just 14% of staff.

“Most staff, let’s be blunt, don’t have the time,” said Sima. Rather, they’re dealing with everyday issues as their executives are getting inundated with AI news from other leaders, podcasts, news sites, papers and a multitude of other material. 

“The disconnect between the C-suite and staff in understanding and implementing AI highlights the need for a strategic, unified approach to successfully integrate this technology,” he said. 

AI in use in the wild

The No. 1 use of AI in cybersecurity is reporting, Sima said. Typically, a member of the security team manually gathers outputs from various tools, spending “not a small chunk of time” doing so. But “AI can do that much faster, much better,” he said. AI can also take on rote tasks such as reviewing policies or automating playbooks.
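To make that concrete, here is a minimal sketch of the reporting rollup Sima describes: raw findings from several security tools handed to a model that drafts the summary an analyst would otherwise write by hand. The file names, prompt and model choice are illustrative assumptions, not anything prescribed by the report, and the sketch assumes an OpenAI-compatible API.

```python
# Minimal sketch: aggregate raw findings from several security tools and ask
# a model to draft the rollup report. All file names, the prompt and the
# model choice are illustrative assumptions.
import json

from openai import OpenAI  # assumes an OpenAI-compatible endpoint

client = OpenAI()


def load_findings(paths):
    """Collect raw findings, assuming each export is a JSON list."""
    findings = []
    for path in paths:
        with open(path) as f:
            findings.extend(json.load(f))
    return findings


def draft_report(findings):
    """Have the model write the summary an analyst would compile by hand."""
    prompt = (
        "You are a security analyst. Summarize these tool findings into a "
        "weekly report: group by severity, call out repeat offenders and "
        "flag anything needing immediate action.\n\n"
        + json.dumps(findings, indent=2)
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical exports from an EDR, a vulnerability scanner and a SIEM.
    print(draft_report(load_findings(["edr.json", "vulns.json", "siem.json"])))
```

The plumbing is trivial; the time saved is in no longer stitching those exports together by hand.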

But AI can be used more proactively as well: to detect threats, perform endpoint detection and response, find and fix vulnerabilities in code and recommend remediation actions.

“Where I’m seeing a lot of action immediately is ‘How do I triage these things?’” said Sima. “There’s a lot of information and a lot of alerts. In the security industry, we are very good at finding bad things, not so good at determining what of those bad things are most important.”

It’s difficult to cut through the noise to determine “what’s real, what’s not, what’s prioritized,” he pointed out. 

But for its part, AI can catch an email as it comes in and quickly determine whether it’s phishing. The model can fetch data, determine who the email is from, who it’s going to and the reputation of website links — all within moments, and all while providing reasoning around the threat, the email chain and communication history. By contrast, that validation would take a human analyst at least five to 10 minutes, said Sima.

“They now with very high confidence can say ‘This is phishing,’ or ‘This is not phishing,’” he said. “It’s pretty phenomenal. It’s happening today, it works today.”
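A minimal sketch of that triage flow, under the same caveats: the feature set, prompt and model are assumptions for illustration, not the report’s method.

```python
# Minimal sketch of LLM-assisted phishing triage: extract the signals an
# analyst would check by hand, then ask a model for a verdict with reasoning.
# The prompt and model are illustrative assumptions.
import email
import re

from openai import OpenAI  # assumes an OpenAI-compatible endpoint

client = OpenAI()
URL_RE = re.compile(r"https?://[^\s\"'>]+")


def extract_features(raw_message: str) -> dict:
    """Pull out sender, recipient, subject and any links in the body."""
    msg = email.message_from_string(raw_message)
    body = msg.get_payload() if not msg.is_multipart() else ""
    return {
        "from": msg.get("From", ""),
        "to": msg.get("To", ""),
        "subject": msg.get("Subject", ""),
        "links": URL_RE.findall(body),
    }


def triage(raw_message: str) -> str:
    """Return a PHISHING or BENIGN verdict with a short rationale."""
    prompt = (
        "Classify this email as PHISHING or BENIGN and explain your "
        f"reasoning in two sentences.\n\nFeatures: {extract_features(raw_message)}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

The reputation checks Sima mentions would slot in as additional features: in a real pipeline, the sender domain and each link would be scored against a threat-intel feed before the model is asked for its verdict.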

Executives driving the push — but there’s a trough ahead

There is an “infection among leaders” when it comes to using AI in cybersecurity, Chuvakin pointed out. They are looking to incorporate AI to supplement skills and knowledge gaps, enable faster threat detection, improve productivity, reduce errors and misconfigurations and provide faster incident response, among other factors. 

However, he noted, “We will hit the trough of disillusionment in this.” He asserted that we are “close to the peak of the Hype Cycle,” because a lot of time and money has been poured into AI and expectations are high — yet use cases haven’t been all that clear or proven. 

The focus now is on discovering and applying realistic use cases that, by the end of the year, will be proven and “magical.”

When there are real, tangible examples, “security thoughts are going to change drastically around AI,” said Chuvakin.

AI making low-hanging fruit dangle ever lower

But enthusiasm continues to intermingle with risk: 31% of respondents to the Google Cloud-CSA survey identified AI as equally advantageous for both defenders and attackers. Further, 25% said AI could be more useful to malicious actors.

“Attackers are always at an advantage because they can make use of technologies much, much faster,” said Sima. 

As many have before, he compared AI to the previous cloud evolution: “What did the cloud do? Cloud allows attackers to do things at scale.”

Instead of aiming at a single, deliberately chosen target, threat actors can now target everyone. AI will further support their efforts by allowing them to be more sophisticated and focused.

For instance, a model could trawl someone’s LinkedIn account to collect valuable information and craft a completely believable phishing email, Sima pointed out.

“It allows me to be personalized at scale,” he said. “It brings that low-hanging fruit even lower.”