Cybersecurity is in the midst of a modern arms race, and the weapon of choice in this new era is AI.
AI presents a classic double-edged sword: a powerful shield for defenders and a potent new tool for those with malicious intent. Navigating this complex battleground requires a steady hand and a deep understanding of both the technology and the people who would abuse it.
To get a view from the front lines, AI News caught up with Rachel James, Principal AI ML Threat Intelligence Engineer at global biopharmaceutical company AbbVie.

“In addition to the built-in AI augmentation that has been vendor-provided in our current tools, we also use LLM analysis on our detections, observations, correlations and associated rules,” James explains.
James and her team are using large language models to sift through a mountain of security alerts, looking for patterns, spotting duplicates, and finding dangerous gaps in their defences before an attacker can.
“We use this to determine similarity, duplication and provide gap analysis,” she adds, noting that the next step is to weave in even more external threat data. “We want to enhance this with the integration of threat intelligence in our next phase.”
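The article does not describe AbbVie's implementation, but the similarity and duplication analysis James mentions can be illustrated with a minimal sketch. The sketch below uses a simple bag-of-words cosine similarity from the Python standard library as a stand-in for the LLM embeddings a real pipeline would use; the rule names and descriptions are invented for illustration.

```python
import math
import re
from collections import Counter


def _vectorise(text: str) -> Counter:
    """Bag-of-words term counts (a stand-in for LLM embeddings)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two detection-rule descriptions."""
    va, vb = _vectorise(a), _vectorise(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0


def find_duplicates(rules: dict[str, str], threshold: float = 0.8):
    """Flag pairs of rules whose descriptions look near-identical."""
    names = sorted(rules)
    pairs = []
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            score = cosine_similarity(rules[x], rules[y])
            if score >= threshold:
                pairs.append((x, y, round(score, 2)))
    return pairs


# Hypothetical detection rules; R1 and R2 are near-duplicates.
rules = {
    "R1": "Alert on powershell download cradle from external host",
    "R2": "Alert on PowerShell download cradle from an external host",
    "R3": "Detect brute-force login attempts against VPN gateway",
}
print(find_duplicates(rules))  # → [('R1', 'R2', 0.94)]
```

An embedding-based version would swap `_vectorise` for model-generated vectors, which also catch paraphrased rules that share no exact tokens.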
Central to this operation is a specialised threat intelligence platform called OpenCTI, which helps them build a unified picture of threats from a sea of digital noise.
AI is the engine that makes this cybersecurity effort possible, taking vast quantities of jumbled, unstructured text and neatly organising it into a standard format known as STIX. The grand vision, James says, is to use language models to connect this core intelligence with all other areas of their security operation, from vulnerability management to third-party risk.
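To make the target format concrete, here is a minimal sketch of a STIX 2.1 Indicator object built with only the Python standard library. In a pipeline like the one described, an LLM would extract the pattern and name from an unstructured report; here those fields are hard-coded assumptions for illustration.

```python
import json
import uuid
from datetime import datetime, timezone


def make_indicator(pattern: str, name: str) -> dict:
    """Build a minimal STIX 2.1 Indicator object from extracted fields."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",  # STIX IDs are type--UUID
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,
        "pattern_type": "stix",
        "valid_from": now,
    }


# Hypothetical extraction result from an unstructured threat report.
indicator = make_indicator(
    pattern="[ipv4-addr:value = '203.0.113.42']",
    name="C2 address reported in phishing campaign",
)
print(json.dumps(indicator, indent=2))
```

Platforms such as OpenCTI ingest objects in exactly this JSON shape, which is what makes a shared, machine-readable picture of threats possible.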
Taking advantage of this power, however, comes with a healthy dose of caution. As a key contributor to a major industry initiative, James is acutely aware of the pitfalls.
“I would be remiss if I didn’t mention the work of a wonderful group of folks I’m a part of – the ‘OWASP Top 10 for GenAI’ as a foundational way of understanding vulnerabilities that GenAI can introduce,” she says.
Beyond specific vulnerabilities, James points to three fundamental trade-offs that business leaders must confront:
- Accepting the risk that comes with the creative but often unpredictable nature of generative AI.
- The lack of transparency in how AI reaches its conclusions, a problem that only grows as the models become more complex.
- The danger of misjudging the real return on investment for any AI project, where the hype can easily lead to overestimating the benefits or underestimating the effort required in such a fast-moving field.
To build a better cybersecurity posture in the AI era, you must understand your attacker. That is where James’ deep expertise comes into play.
“This is actually my particular expertise – I have a cyber threat intelligence background and have conducted and documented extensive research into threat actors’ interest, use, and development of AI,” she notes.
James actively tracks adversary chatter and tool development through open-source channels and her own automated collections from the dark web, sharing her findings on her cybershujin GitHub. Her work also involves getting her own hands dirty.
“As the lead for the Prompt Injection entry for OWASP, and co-author of the Guide to Red Teaming GenAI, I also spend time developing adversarial input techniques myself and maintain a network of experts also in this field,” James adds.
So, what does this all mean for the future of the industry? For James, the path forward is clear. She points to a fascinating parallel she discovered years ago: “The cyber threat intelligence lifecycle is almost identical to the data science lifecycle foundational to AI ML systems.”
This alignment is a huge opportunity. “Indeed, in terms of the datasets we can operate with, defenders have a unique chance to capitalise on the power of intelligence data sharing and AI,” she asserts.
Her final message offers both encouragement and a warning for her peers in the cybersecurity world: “Data science and AI will be a part of every cybersecurity professional’s life moving forward, embrace it.”
Rachel James will be sharing her insights at this year’s AI & Big Data Expo Europe in Amsterdam on 24-25 September 2025. Be sure to check out her day two presentation on ‘From Principle to Practice – Embedding AI Ethics at Scale’.
See also: Google Cloud unveils AI ally for security teams

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
