The Identity Security Market: Technology Trends
Dark LLMs are forcing the industry to reinvent how to verify and protect identity
Note: This post is the third in a series examining the identity security market. See parts one and two for continuity. In this post, we examine the technology trends driving product development. Our next post will discuss how vendor strategies and products are evolving.
Ironically, IAM systems cannot defend themselves. Historically, they relied on strong network perimeters for protection. For example, Microsoft built Active Directory in an era when firewalls shielded its APIs and domain controller protocols. Even modern platforms like Okta initially relied on simple HTML authentication, assuming identities weren’t prime attack targets.
That assumption no longer holds. Cloud adoption and remote work have expanded the attack surface, pushing identities beyond the perimeter and making them high-value, exposed targets. Defending them now requires more than network-centric security—it demands an identity-first approach.
The Limits of Zero Trust
Many promote Zero Trust as the answer to this problem, and when implemented properly, it is an integral part of the overall solution. But even fully compliant Zero Trust environments remain vulnerable to identity-based attacks. Today, there's still no reliable way to verify who's truly behind a "legitimate" account in real time. So while the Zero Trust framework (particularly the implementation guidelines developed at NIST) is helpful, securing identity systems still requires a more extensive approach.
Can Shared Signals Help?
One promising development is the OpenID Foundation's Shared Signals Framework (SSF), which uses asynchronous communication between systems to share security events. Its two main protocols—CAEP (Continuous Access Evaluation Profile) and RISC (Risk Incident Sharing and Coordination)—aim to improve real-time visibility and response.
While the potential for SSF is significant, adoption has been slow. Simply put, enterprises need systems that turn today's opaque signal patterns into actionable intelligence, which means SSF—or other solutions like it—must gain traction in the market. Until more vendors and users get behind efforts like SSF, the identity security problem will remain somewhat intractable.
The combination of SSF's objectives and growing interest from both vendors and adopters makes this standard noteworthy. While it is unclear whether this iteration of a shared signals standard will find broad adoption, some version of the approach is clearly necessary to secure distributed, inter-reliant systems.
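To make the mechanism concrete: SSF transmitters deliver events as Security Event Tokens (SETs), which are JWTs whose payload carries an "events" claim keyed by an event-type URI. The sketch below builds an unsigned SET carrying a CAEP session-revoked event using only the Python standard library. The issuer, audience, and subject values are hypothetical placeholders; real transmitters must sign SETs rather than use the unsigned form shown here for illustration.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT segments require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def build_caep_set(issuer: str, subject_email: str) -> str:
    """Build an unsigned Security Event Token (SET) carrying a CAEP
    session-revoked event. This is a sketch: production transmitters
    sign SETs (alg "none" here is purely for illustration)."""
    header = {"typ": "secevent+jwt", "alg": "none"}
    payload = {
        "iss": issuer,                         # hypothetical transmitter
        "iat": int(time.time()),
        "jti": "756E69717565206964",           # unique token identifier
        "aud": "https://receiver.example.com", # hypothetical receiver
        "events": {
            # CAEP event-type URI for session revocation
            "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
                "subject": {"format": "email", "email": subject_email},
                "event_timestamp": int(time.time()),
            }
        },
    }
    # Three dot-separated segments; the signature segment is empty.
    return ".".join([b64url(json.dumps(header).encode()),
                     b64url(json.dumps(payload).encode()), ""])

token = build_caep_set("https://idp.example.com", "alice@example.com")
```

A receiver validates the token, looks up the event-type URI in the "events" claim, and acts on it (here, terminating the named user's sessions), which is what enables the real-time coordination SSF promises.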
AI: The Good News
AI has the potential to transform identity security by addressing the scale, speed, and complexity of modern threats. It can baseline user behavior, detect real anomalies with fewer false positives, and automate tasks like log analysis, identity verification, and incident triage. AI not only reduces noise and manual effort—it promises more proactive, adaptive defenses that learn and improve in real time.
AI use cases vary by product. Early implementations focused on machine learning and user and entity behavior analytics (UEBA), now common in most identity security offerings. The current wave is driven by large language models (LLMs), as vendors race to embed generative AI into identity security systems.
Results will be mixed. Some startups are building AI-native products from a clean slate. Legacy vendors are retrofitting existing platforms—some with meaningful improvements, others with superficial bolt-ons aimed at fast delivery over real innovation.
Given these dynamics, enterprises should evaluate products carefully, as marketing will likely exceed reality. Here are some guidelines:
Look Beyond the Hype: Don’t assume every AI-branded feature delivers meaningful value. Scrutinize whether the AI is core to the product’s architecture or merely a superficial add-on. Ask vendors to demonstrate how their AI improves accuracy, reduces false positives, or accelerates response times.
Prioritize Outcome-Driven Use Cases: Focus on how AI supports specific security outcomes, such as detecting credential misuse, automating threat triage, or improving identity context across tools. Favor vendors that can show measurable improvements in real-world environments.
Evaluate UEBA and LLM Capabilities Separately: UEBA has matured and is a baseline requirement for ITDR tools. LLM-based features are newer and more variable—evaluate them cautiously, especially for tasks like summarization, investigation assistance, or natural language queries that may still be evolving.
Assess Integration and Data Quality: AI-driven systems are only as effective as the data they ingest. Ensure the solution integrates well with your existing IAM, SIEM, and log sources, and assess how it handles noisy or incomplete identity data.
Plan for Operational Impact: AI can shift security teams from reactive to proactive, but it also requires new workflows, trust-building, and tuning. Prepare for a learning curve and ensure your team can manage and optimize AI-powered tools.
Favor Vendors Committed to Transparency: Choose partners who are clear about how their AI works, what data it uses, and what limitations it has. Vendors that overpromise or resist scrutiny may pose operational and compliance risks.
AI: The Bad News
AI isn’t just empowering defenders—it’s supercharging attackers. Cybercriminals now widely use “dark LLMs” to scale and automate sophisticated attacks. These malicious models generate phishing emails, write evasive malware, conduct reconnaissance, and fuel disinformation campaigns. Examples include:
FraudGPT – Crafts phishing emails and social engineering scripts
WormGPT – Generates undetectable malware and fake content
DarkGPT – Automates vulnerability scanning and attack execution
DarkLLaMa – Produces code to bypass security controls
ChaosGPT / FoxGPT / ShadowGPT – Spread misinformation, conduct surveillance, and gather sensitive data
Combined with stolen credentials and open-source hacking tools, these LLMs make it easy for attackers to relentlessly target identities at scale. Static defenses won’t hold up against dynamic, machine-driven adversaries. Defending against AI-powered attacks requires tools that detect novel attack patterns, monitor behavioral anomalies, and adapt in real time.
In other words, the cyber arms race has entered a new and much more dangerous era, requiring new thinking, innovation, and very different product architectures. This disruptive movement will significantly impact the identity security market over the next three to five years.
In our next post, we will discuss how these factors are affecting vendors in the market and shaping product development.