
Report: Attackers Increasingly Targeting Cloud, AI Systems — Campus Technology
According to CrowdStrike’s 2025 Threat Hunting Report, adversaries are not just using AI to supercharge attacks — they are actively targeting the AI systems organizations deploy in production. Combined with a surge in cloud exploitation, this shift marks a significant change in the threat landscape for enterprises.
Cloud Intrusions Reach Record Levels
The report notes a sharp escalation in attacks aimed at cloud environments. CrowdStrike threat hunters identified a 136% increase in cloud intrusions in the first half of 2025 compared to all of 2024, with a 40% year-over-year rise in cloud-conscious intrusions attributed to suspected China-nexus actors. Threat groups such as GENESIS PANDA and MURKY PANDA have proven adept at evading detection by exploiting misconfigurations, abusing trusted relationships, and manipulating cloud control planes to achieve persistence, lateral movement, and data exfiltration.
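Misconfiguration-driven access of the kind these groups exploit is often catchable with routine policy audits. A minimal sketch of one such check, flagging storage-bucket policies that grant read access to everyone; the policy documents here are illustrative, and in practice they would be fetched through the cloud provider's SDK (e.g., boto3's `get_bucket_policy` on AWS):

```python
def is_publicly_readable(policy: dict) -> bool:
    """Return True if any statement grants read access to an open principal."""
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        open_principal = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        grants_read = any(a in ("s3:GetObject", "s3:*", "*") for a in actions)
        if stmt.get("Effect") == "Allow" and open_principal and grants_read:
            return True
    return False

# Hypothetical policy documents, not taken from the report.
open_policy = {"Statement": [{"Effect": "Allow", "Principal": "*",
                              "Action": "s3:GetObject", "Resource": "*"}]}
scoped_policy = {"Statement": [{"Effect": "Allow",
                                "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
                                "Action": "s3:GetObject", "Resource": "*"}]}
```

A real audit would also need to account for public-access block settings and ACLs, which sit outside the policy document itself.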
In detailed case studies, GENESIS PANDA was seen leveraging credentials from compromised virtual machines to pivot into cloud service accounts, establishing “various forms of persistence” including identity-based access keys and SSH keys. MURKY PANDA demonstrated the ability to compromise a supplier’s administrative access to a victim’s Entra ID tenant, then backdoor service principals to gain access to email and other sensitive assets. Such tactics underscore that cloud administration tooling itself is a prime attack surface.
AI as Both a Weapon and a Target
The report’s headline theme is the rise of AI in both offensive and defensive cyber operations — but with a critical warning for defenders. Threat actors are using generative AI to accelerate intrusion workflows, improve phishing lures, create deepfake personas, automate malware development, and enhance technical problem-solving. At the same time, they are increasingly exploiting vulnerabilities in AI platforms themselves as an initial access vector.
CrowdStrike highlighted CVE-2025-3248 in the report, described as “an unauthenticated code injection vulnerability in Langflow AI,” a widely used framework for building AI agents and workflows. By exploiting it, attackers were able to achieve “unauthenticated remote code execution” and pursue persistence, credential theft (including cloud environment credentials), and malware deployment. This signals a fundamental shift: “Threat actors are viewing AI tools as integrated infrastructure rather than peripheral applications, targeting them as primary attack vectors.”
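For defenders running Langflow, the first triage step is a version gate. A minimal sketch, assuming the fix for CVE-2025-3248 shipped in release 1.3.0 (verify the patched version against the project's own advisory) and that version strings are plain dotted integers:

```python
def parse_version(v: str) -> tuple:
    # Naive parser: assumes plain dotted-integer versions like "1.2.0".
    # Pre-release tags would need a proper parser such as the `packaging` library.
    return tuple(int(part) for part in v.split("."))

# Assumed patched release for CVE-2025-3248 -- confirm against the advisory.
PATCHED = "1.3.0"

def is_vulnerable(installed: str, patched: str = PATCHED) -> bool:
    """True if the installed version predates the patched release."""
    return parse_version(installed) < parse_version(patched)
```

Comparing parsed tuples rather than raw strings matters here: `"1.10.0" < "1.3.0"` is true lexicographically, while `(1, 10, 0) < (1, 3, 0)` is correctly false.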
North Korea-nexus FAMOUS CHOLLIMA exemplifies AI weaponization. In over 320 incidents in the past year, operatives used GenAI to draft résumés, create synthetic identities with altered photos, mask their true appearance in live video interviews using real-time deepfake technology, and leverage AI code assistants for on-the-job tasks. CrowdStrike notes that this represents “a 220% year-over-year increase” in such infiltrations.
This growing trend of targeting AI platforms parallels another key avenue of attack highlighted in the report: identity compromise. Just as adversaries exploit weaknesses in AI tools to gain privileged access, they also exploit weaknesses in human and process-driven identity verification to move laterally across environments. These identity-driven breaches often serve as the connective tissue in complex, cross-domain attacks.
Identity as the Gateway in Cross-Domain Attacks