Securing the AI Frontier
Nov 18, 2025
How Cybersecurity Frameworks are Adapting to AI Risks
Artificial intelligence has moved from experimentation to core infrastructure across sectors. As organizations increasingly embed machine learning systems into products, workflows and services, security teams are discovering that AI introduces unique and often poorly understood risk vectors. Emerging frameworks - especially those from NIST - are beginning to reshape how enterprises evaluate, mitigate and govern AI-related threats. This article examines how AI is altering the cybersecurity landscape, what the latest NIST guidance contributes, and how organizations can build mature, security-first Responsible AI programs.
Why AI changes the cybersecurity and privacy risk landscape
AI systems differ from traditional software in several critical ways. Their logic is learned rather than coded, their behavior shifts with the data they are trained and run on, and their attack surface spans training pipelines, models, APIs, data sources and deployment infrastructure. This exposes them to attack categories most security teams have not previously had to defend against, such as model inversion, data poisoning, model extraction and adversarial input manipulation.
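To make "adversarial input manipulation" concrete, the sketch below shows a gradient-sign (FGSM-style) evasion attack against a toy logistic-regression classifier. The weights, input and epsilon value are illustrative assumptions; real attacks target far larger models and deployed APIs.

```python
# Minimal FGSM-style evasion sketch against a toy logistic-regression classifier.
# Weights, input and epsilon are illustrative assumptions, not a real model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained binary classifier: score = sigmoid(x . w + b).
w = np.array([1.5, -2.0, 0.7])
b = 0.1

def predict(x):
    return sigmoid(x @ w + b)  # probability that the input is "malicious"

# A legitimate input the classifier currently flags (true label y = 1).
x = np.array([2.0, -1.0, 0.5])
y = 1.0

# Gradient of the cross-entropy loss with respect to the input: (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM step: perturb each feature in the direction that increases the loss.
epsilon = 1.5
x_adv = x + epsilon * np.sign(grad_x)

print("original score: ", predict(x))      # ~0.996, flagged
print("perturbed score:", predict(x_adv))  # ~0.30, slips under a 0.5 threshold
```

A defended pipeline would treat inputs like x_adv as part of its test suite, which is the idea behind the adversarial stress testing discussed later in this article.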
In parallel, organizations are deploying AI into mission-critical contexts - authentication, fraud detection, medical triage, supply-chain optimization - raising the stakes of system failure or compromise. At the same time, AI models often process sensitive personal, biometric or proprietary data, creating heightened privacy and confidentiality obligations.
These combined forces have pushed regulators, security leaders and policymakers to recognize that AI introduces systemic risk: failures are scalable, opaque and fast-moving.
Key NIST outputs: adversarial ML taxonomy and Cyber-AI profiles
To address these challenges, NIST has accelerated its work at the intersection of cybersecurity and AI safety. In March 2025, the agency released a comprehensive taxonomy of adversarial machine learning, cataloguing attack classes, vectors and system vulnerabilities. The goal is to standardize terminology, align research and provide organizations with a structured foundation for threat modeling.
Complementing this taxonomy, NIST is developing Cyber-AI profile guidance that connects its cybersecurity and privacy frameworks to AI, intended to help organizations integrate AI into established risk management practices. These efforts build on the NIST AI Risk Management Framework (AI RMF) and extend it with deeper technical considerations around model robustness, data integrity, system monitoring and secure deployment pipelines.
Together, these publications signal a shift: AI is no longer a “special project” but a domain requiring the same rigor as traditional cybersecurity engineering.
Translating guidance into organizational governance and operations
Frameworks alone do not secure systems - organizations must translate them into governance, engineering standards and operational processes. Security and AI engineering teams should begin by mapping where AI sits within their digital infrastructure (a minimal inventory sketch follows the list below):
- Where models are trained, stored and deployed
- What data sources they rely on
- Who owns, accesses or modifies models
- What business decisions they influence
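One lightweight way to build this visibility is to keep a structured record per model. The sketch below simply mirrors the four questions above; the field names and example values are assumptions, not a prescribed NIST schema.

```python
# Illustrative model-inventory record; field names are hypothetical,
# not a prescribed NIST schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelAssetRecord:
    name: str
    training_environment: str   # where the model is trained
    storage_location: str       # where weights and artifacts live
    deployment_target: str      # where the model is served
    data_sources: List[str] = field(default_factory=list)         # upstream data it relies on
    owners: List[str] = field(default_factory=list)               # who owns or can modify it
    decisions_influenced: List[str] = field(default_factory=list) # business decisions it affects

inventory = [
    ModelAssetRecord(
        name="fraud-scoring-v3",
        training_environment="internal ML training platform",
        storage_location="model registry",
        deployment_target="payments API (real-time scoring)",
        data_sources=["transaction history", "device fingerprints"],
        owners=["fraud-ml-team"],
        decisions_influenced=["block or allow transactions"],
    ),
]
```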
With this visibility, firms can adopt NIST-aligned practices such as adversarial threat modeling, role-based access control for model assets, secure data pipelines, and continuous validation of model behavior.
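As one concrete illustration of role-based access control for model assets, a registry or deployment pipeline can gate actions by role before touching an artifact. The roles and actions below are assumptions chosen for the example, not a standard policy.

```python
# Illustrative role-based access check for model artifacts.
# Roles and permitted actions are assumptions, not a standard policy.
ROLE_PERMISSIONS = {
    "data-scientist": {"read", "train"},
    "ml-engineer":    {"read", "train", "deploy"},
    "auditor":        {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the role may perform the action on a model asset."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("ml-engineer", "deploy")
assert not is_allowed("data-scientist", "deploy")
```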
Operationalizing these practices requires collaboration across teams that traditionally operate in silos. Security engineers need insight into model architectures and training data. Data scientists must understand threat vectors and security controls. Product teams must appreciate how adversarial behavior affects user trust and safety.
Common gaps and how to close them (data, model, supply-chain risks)
Most organizations still underestimate the breadth of AI-specific vulnerabilities. Common gaps include:
- Data poisoning risks: Insufficient controls around data collection allow attackers to inject malicious samples.
- Model supply-chain exposure: Pretrained models downloaded from public repositories may contain backdoors or biases.
- Lack of monitoring: Many deployments lack drift detection, anomaly indicators, or triggers for human review (a minimal drift check is sketched after this list).
- Weak evaluation: Testing focuses on accuracy, not robustness or resistance to manipulation.
- Opaque vendor dependencies: Third-party AI services may offer little transparency about their training data, safety evaluations or security posture.
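To illustrate what closing the monitoring gap can look like in practice, the sketch below compares live model scores against a validation-time baseline with a two-sample Kolmogorov-Smirnov test. The synthetic distributions and the 0.05 threshold are assumptions to be tuned per system.

```python
# Minimal drift check: compare live model scores against a validation-time baseline.
# The synthetic distributions and the 0.05 threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline_scores = rng.normal(loc=0.20, scale=0.10, size=5000)  # scores at validation time
live_scores = rng.normal(loc=0.35, scale=0.12, size=5000)      # scores seen in production

stat, p_value = ks_2samp(baseline_scores, live_scores)
if p_value < 0.05:
    print(f"Drift suspected (KS statistic = {stat:.3f}); trigger human review.")
else:
    print("No significant drift detected.")
```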
Closing these gaps requires investing in secure MLOps infrastructure: cryptographic integrity checks on data and models, evaluation suites that include adversarial stress testing, and vendor questionnaires aligned with NIST guidance and emerging regulatory requirements.
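As a small illustration of the integrity-check idea, a pipeline can refuse to load a model artifact whose hash does not match the digest recorded when the model was approved. The expected digest and file path below are placeholders.

```python
# Refuse to load a model artifact whose hash does not match a trusted digest.
# The expected digest and file path are placeholders for illustration.
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED_DIGEST = "placeholder digest recorded at approval time"

def verify_model_artifact(path: str) -> None:
    if sha256_of_file(path) != EXPECTED_DIGEST:
        raise RuntimeError(f"Integrity check failed for model artifact: {path}")

# verify_model_artifact("models/fraud-scoring-v3.bin")  # raises if tampered with
```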
Even more importantly, organizations must adopt a lifecycle mindset. AI security is not a one-time exercise but a continuous process of monitoring, updating and auditing.
The link between security, trustworthiness and responsible AI
Responsible AI is often framed around ethics, fairness and transparency, but security is the foundation that enables all three. A model that can be manipulated cannot be fair; a system vulnerable to extraction cannot protect privacy; a model whose behavior changes under adversarial influence cannot be transparent or reliable.
NIST’s evolving guidance reinforces this link. The agency’s work illustrates that trustworthiness is multidimensional: robustness, governance, privacy, explainability and security are interdependent. For policymakers, this means cybersecurity considerations must be embedded into standards, impact assessments and regulatory controls. For practitioners, it means Responsible AI programs must include dedicated security competencies, not merely ethics review boards.
References
- NIST AI RMF (2023): https://www.nist.gov/itl/ai-risk-management-framework
- NIST Adversarial Machine Learning Taxonomy (2025): https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2025.pdf
- NIST AI RMF Resources: https://airc.nist.gov/airmf-resources/airmf