Basic Strategies Will Have to Evolve

The latest batch of cybersecurity predictions points to DBOMs, mandated MFA and, of course, new AI challenges.

Industrial Cyber

Sometimes the most difficult part of change is actually recognizing it. At least this seems to be the case with some elements of industrial cybersecurity. While tactics and strategies don't always appear to change, the reality is that they are constantly evolving in response to new threats and the corresponding defensive measures. These ongoing shifts in key approaches underscore the predictions below, as our panel of experts weighs in on what to expect in 2025.

George Gerchow, Faculty, IANS Research

  • Nation-state actors will increasingly exploit AI-generated identities to infiltrate organizations: In an emerging insider threat that has gained traction over the past six months, sophisticated operatives bypass traditional background checks, using stolen U.S. credentials and fake LinkedIn profiles to secure multiple roles within targeted companies. Once inside, they deploy covert software and reroute hardware to siphon sensitive data directly to hostile nations. The FBI confirmed that 300 companies unknowingly hired these imposters for over 60 positions, exposing critical flaws in hiring practices. Traditional background checks can’t catch this level of deception, and HR teams lack the tools to identify these threats. This escalating risk demands stronger identity verification and fraud detection. This isn’t just an attack trend; it’s a wake-up call.
  • AI blurs the lines between novice and expert: Much has been said about AI’s risks, but a critical element often overlooked is how it empowers previously marginalized threat actors. Newcomers, known as “script kiddies,” are leveraging AI-driven automation and sophisticated deepfakes to rapidly escalate their capabilities. Less-experienced hackers now have the means to execute complex, damaging cyberattacks with unprecedented ease. Scaling up defenses against these AI-powered adversaries will be crucial: organizations must adopt AI-enhanced security strategies and deploy internal and external AI bots to automate key functions such as audits and incident response.
  • The end of optional MFA: The shared responsibility model in cloud security is breaking down, which will push cloud providers to enforce mandatory MFA for all customers. Rising supply chain attacks and multi-cloud complexities demand tighter collaboration between security teams and cloud-savvy developers. This shift will spark a critical push for both providers and customers to elevate security standards in an increasingly volatile landscape.

Bruno Kurtic, Co-Founder, President & CEO, Bedrock Security

  • Escalating security liabilities in AI data handling will drive demand for enhanced data visibility, classification and governance: In 2025, increasing security risks and AI regulations on data handling will push organizations to enhance data visibility, classification, and governance. With agentic AI systems becoming integral to operations, companies will need full insight into data assets to use them responsibly, emphasizing data sensitivity classification to avoid exposing confidential or personal information during AI training.
  • A standard practice will emerge for creating a data bill of materials (DBOM) for AI datasets. DBOMs will detail the origin, lineage, composition, and sensitivity of data, ensuring only appropriate data trains AI models. Strict entitlements will limit access, allowing only authorized users to manage sensitive data, thereby reducing accidental or malicious exposures.
  • As data volumes surge, scalable solutions will be essential to handle diverse datasets. This focus on visibility, classification, and access control will drive new data platforms, advancing AI data governance and mitigating security risks.
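To make the DBOM idea concrete: since no DBOM standard exists yet, the sketch below is a hypothetical illustration of the fields Kurtic describes (origin, lineage, sensitivity) plus an entitlement check that gates whether a dataset may enter an AI training run. All names (`DbomEntry`, `approved_for_training`, the role and sensitivity labels) are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DbomEntry:
    """One dataset record in a hypothetical data bill of materials (DBOM)."""
    dataset: str                # logical name of the dataset
    origin: str                 # source system the data came from
    lineage: list[str]          # transformations applied upstream
    sensitivity: str            # e.g. "public", "internal", "confidential", "pii"
    entitled_roles: set[str] = field(default_factory=set)  # roles allowed to use it

def approved_for_training(entry: DbomEntry, role: str) -> bool:
    """Admit a dataset into a training run only if it is non-sensitive
    and the requesting role holds an entitlement for it."""
    return entry.sensitivity in {"public", "internal"} and role in entry.entitled_roles

# Usage: a PII-labeled dataset is rejected even for an entitled role.
support_logs = DbomEntry(
    dataset="support-tickets-2024",
    origin="crm-export",
    lineage=["raw-export", "deduplicated"],
    sensitivity="pii",
    entitled_roles={"ml-engineer"},
)
print(approved_for_training(support_logs, "ml-engineer"))  # False
```

In practice such records would be generated by data discovery and classification tooling rather than written by hand; the point is that a machine-readable manifest makes the "only appropriate data trains AI models" rule enforceable in code.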

Pedram Amini, Chief Scientist, OPSWAT

  • Escalating sophistication and increasing abuse of AI as costs decrease. The drumbeat of threat evolution will continue, with nation-states increasing attacks on physical devices and appliances. ML-assisted scams will grow significantly in volume, quality, and believability. As costs associated with ML compute decrease, we'll see a transition from ML-assisted to fully autonomous operations.
  • Organizations should expect increased attacks on employees' personal devices and should prioritize training and novel detection controls to prepare for AI-enhanced social engineering attacks.
  • Production-grade zero-day vulnerabilities will likely be found—and perhaps even exploited—by AI. While we're likely a few years out from the first fully agentic AI malware, the industry should brace for its emergence.