Briefing
- The future of AI may move beyond generative models, with new AI capable of reasoning and interacting with the physical world, creating new security challenges.
- Open-source AI models are on the rise, offering control but demanding careful security vetting due to potential supply chain risks.
- Data privacy regulations drive AI model localization, impacting where data is processed and requiring CISOs to adapt data strategies.
- CISOs must become strategic enablers, balancing AI’s benefits with security and actively shaping the future of AI governance.
AI is fundamentally reshaping cybersecurity, forcing CISOs to adapt their strategies. The prediction that generative AI as we know it might disappear signals a major shift.
A recent AI House session in Davos, Breakthroughs and Promises: Foundation Models from Big-Tech to Localization, brought together leading AI experts to discuss this new landscape. Maciej Piasecki (Wroclaw University of Science and Technology) moderated the panel, which featured Yann LeCun (Meta), Aleksandra Przegalińska (Kozminski University), Tomasz Kułakowski (deepsense.ai & CodiLime), and Mennatallah El-Assady (ETH Zürich).
The panelists explored the topics of new trends beyond Large Language Models (LLMs), open-source AI models, data privacy and localization, and the changing AI regulatory landscape. This article examines those insights, translates them into cybersecurity impacts, and recommends actionable steps for CISOs.

01: Beyond LLMs: Preparing for New Security Challenges
AI may progress beyond today’s LLMs toward systems capable of reasoning, planning, and understanding the physical world. This section explores insights on this transition, analyzes the implications for CISOs, and outlines key recommendations for preparing for the next generation of AI-related security challenges.
Key Insights from the Session
LLMs are Not the End Goal. Yann LeCun emphasized that LLMs are not the “be-all and end-all of AI.” He predicted another revolution in AI within the next few years, driven by systems that can move beyond language processing. Other panelists supported this view, with Mennatallah El-Assady highlighting the need for AI that can share structured knowledge and reason, and Tomasz Kułakowski pointing out that in some niche cases, smaller local language models can outperform LLMs.
The Rise of Reasoning and Planning. LeCun highlighted the need for AI systems with persistent memory capable of reasoning and planning – abilities that current LLMs lack. This suggests a shift towards more complex and autonomous AI.
Understanding the Physical World. The next generation of AI will need to understand the physical world, enabling them to learn abstractions and predict real-world events. This is a significant departure from the text-based nature of current LLMs.
Non-Generative AI. LeCun made a surprising prediction that the future of AI might not be generative, challenging the current focus on generative AI models. He suggested that the next generation of AI could be based on paradigms like zero-shot learning, which allows AI to solve problems the first time they encounter them, pointing toward capabilities beyond current generative models.
Cyber Implications
New Attack Vectors. AI systems with reasoning, planning, and physical world understanding will likely introduce new attack vectors. For example, attackers might try to manipulate an AI’s understanding of its environment or exploit its planning capabilities for malicious purposes.
Increased Complexity. These advanced AI systems will be inherently more complex than current LLMs, making them more challenging to secure. This complexity will require new security models and tools to address their vulnerabilities. Understanding how these systems function and where they might be vulnerable will be a significant undertaking.
Securing Autonomous Decision-Making. As AI systems become more autonomous, ensuring the security of their decision-making processes will be critical. The potential for these systems to be manipulated or make harmful decisions requires careful consideration of their design and deployment.
Expertise Gap. Securing the next generation of AI will require specialized skills and knowledge that many organizations lack. There will be a growing need for professionals who understand both AI and cybersecurity.
Key Recommendations for CISOs
Invest in Advanced AI Security Research. Allocate resources to research focused on understanding and securing advanced AI systems. This could involve collaborating with academic institutions, AI research labs, or other organizations. Foster collaboration to share information and best practices on securing these systems.
Develop New Security Frameworks. Start developing new security frameworks that are specifically designed for advanced AI systems. These frameworks should address the unique challenges of securing AI that can reason, plan, and interact with the physical world.
Focus on Model Integrity and Robustness. Implement measures to ensure the integrity and robustness of AI models, protecting them from tampering, manipulation, or adversarial attacks. They can involve techniques like adversarial training, model verification, and anomaly detection.
Stay at the Forefront of AI Development and Security. Continuously monitor the advancements in AI and security. Upskill existing security teams or hire AI security experts to ensure you have the expertise to address these new challenges.
As AI continues to evolve beyond the capabilities of LLMs, CISOs face a new set of security challenges. Adapting to this new reality and proactively addressing the risks associated with advanced AI will be crucial for organizations seeking to leverage its benefits securely. This demands a strategic approach that combines research, development, and a commitment to building in-house expertise.

02: Open Source vs. Proprietary AI
Platforms like Llama or Mistral demonstrate the increasing popularity of open-source AI models and highlight the growing innovation in this space. This trend has serious implications for cybersecurity.
Key Insights from the Session
Growing Adoption of Open Source. Yann LeCun of Meta emphasized the widespread adoption of open-source models, noting that Llama has been downloaded over 650 million times. This signifies a move away from solely relying on proprietary models like those provided by key players, including OpenAI, Anthropic or Google. Aleksandra Przegalińska also noted the interest of smaller and medium-sized companies in open-source models, although she presented survey data suggesting that ChatGPT still dominates public perception.
Flexibility and Control. LeCun highlighted the flexibility and control that open-source models offer. Organizations can run these models on their on-premise infrastructure, which is better for those working with sensitive or regulated data. Kułakowski added that his company spends 95% of its time building solutions using foundation models like Llama and ChatGPT.
Fine-Tuning for Specific Needs. Open-source models can be fine-tuned on specific datasets, allowing organizations to tailor them to their unique requirements. The session emphasized that this capability is a key advantage over proprietary models, particularly for niche applications.
Cyber Implications
Enhanced Data Privacy. The ability to run open-source models on-premise offers CISOs greater control over data, reducing the risks associated with transferring sensitive information to third-party cloud providers. LeCun’s story about LAMA 3.2’s license restriction in Europe due to data usage concerns underscores the increasing regulatory scrutiny surrounding AI and data privacy.
Supply Chain Security Concerns. While open source offers transparency, it also introduces supply chain risks. CISOs need to consider the provenance of the code, the potential for malicious contributions, and the ongoing maintenance of the models they deploy. 32% of organizations reported accidental exposure of vulnerabilities in open-source AI components, with half of these incidents being very or extremely significant.
Need for In-House Expertise. Managing open-source AI models requires organizations to develop or acquire in-house expertise in secure deployment, configuration, and fine-tuning.
Vulnerability Management. Vulnerabilities in open-source models might be discovered and exploited more quickly. CISOs need robust vulnerability management processes to address this risk.
Key Recommendations for CISOs
Assess Open Source AI Usage. Conduct a thorough assessment of your organization’s current and planned use of open-source AI models. Identify which models are being used, for what purposes, what data they are processing and which security requirements apply to them.
Adjust SDLC processes for AI. Adapt your existing SDLC processes to incorporate the unique aspects of AI development specifically tailored for open-source AI models. This includes threat modeling, secure coding practices, and all processes associated with vulnerability management.
Develop Expertise. Invest in training your existing security team or hire specialists with expertise in securing AI systems, particularly in deploying, fine-tuning, and securing open-source AI models.
Monitor and Audit AI Model Activities. Implement continuous monitoring of AI model inputs, outputs, and behavior to detect anomalies or potential security incidents. Regularly review the AI systems to ensure compliance with the security requirements you have established.
CISOs who address the security implications of open-source AI can leverage these models effectively while mitigating the associated risks. The growing use of open source is significantly changing the AI landscape, and CISOs who adapt their strategies will be better prepared for this future.

03: Data Privacy, Localization, and the Security Landscape
AI’s reliance on vast datasets has intensified concerns about data privacy and localization, particularly as the regulatory environment surrounding AI continues to evolve and organizations seek greater control over where their data is processed and stored. This section explores these concerns and provides recommendations for managing data privacy in the age of AI.
Key Insights from the Session
Data Privacy Driving Localization. The panel emphasized that data privacy and compliance with regional regulations are major factors driving the localization of AI models. Organizations want more control over where their data is processed and stored. El-Assady mentioned initiatives like the Swiss AI Initiative as examples of localized models being built.
Localized Models Enhance Control. As Yann LeCun mentioned, localized models allow organizations to keep data within their jurisdictions. This helps them meet regulatory requirements and mitigate the risks associated with cross-border data transfers. Piasecki discussed Poland’s efforts to build a localized model (PLAM) to enhance technological sovereignty and provide a tool for local entities.
Challenges of Global Models with Localized Data. The panelists discussed the difficulty of training global AI models while respecting data localization needs. This highlights the tension between the desire for comprehensive AI models and the need to protect data privacy. LeCun predicted that foundational models would need to be trained on data from around the world, in all languages and cultures, and that no single entity could do this alone.
Cyber Implications
Heightened Regulatory Scrutiny: AI systems that process personal data will face increased scrutiny from regulators and privacy advocates. CISOs must be prepared to demonstrate that their AI systems are designed and operated in a privacy-preserving manner.
Data Residency and Sovereignty Challenges. Determining data residency (where data is stored and processed) and data sovereignty (who controls it) are significant challenges when using AI models, especially those trained or operated in different jurisdictions.
Data Breaches Involving AI. Data breaches involving AI systems can have significant consequences, including reputational damage, legal penalties, and loss of customer trust.
Key Recommendations for CISOs
Establish Data Governance Frameworks for AI. Develop and implement robust data governance frameworks that address the unique challenges of AI, including data provenance, data quality, and data lineage.
Choose AI Models and Architectures that Support Privacy. When selecting AI models, prioritize those that offer privacy-preserving features, such as on-device processing, federated learning, or differential privacy. Consider using localized models to keep data within specific jurisdictions.
Enhance Transparency and Explainability. Strive for transparency in how AI systems process data and make decisions. While full explainability may be challenging, CISOs should aim to provide meaningful information to data subjects about how their data is being used.
AI’s growing reliance on data demands a strong focus on privacy. By prioritizing data localization, implementing strong safeguards, and staying ahead of regulations, CISOs can help their organizations utilize AI’s power while respecting individual privacy.

04: AI Regulations and Liability: Security Implications for CISOs
AI’s expansion into regulated areas, coupled with questions of liability, creates new security considerations for CISOs. This section examines insights from the AI House session, focusing on how regulations and the evolving understanding of liability shape the CISO’s role in the age of AI.
Key Insights from the Session
Regulatory Uncertainty as a Security Challenge. The AI House session highlighted the tangible impact of regulatory uncertainty on AI development and deployment. This uncertainty creates security challenges as organizations struggle to anticipate and adapt to evolving rules.
Liability in the Age of Open Source. The discussion, especially LeCun’s comments on liability, raised critical questions about responsibility in open-source AI. The idea of “cascading liability”, where developers and deployers of open-source models could potentially be held liable for security incidents, introduces a new dimension to risk management.
The Innovation-Regulation Tightrope. The session underscored the tension between fostering AI innovation and imposing regulations. LeCun’s concerns about regulations stifling open-source development highlight the need for a balanced approach that considers security without hindering progress. Przegalińska raised concerns about targeting open source specifically because of its openness, potentially creating an environment that punishes something that is more helpful.
Cyber Implications
Security as a Prerequisite for Compliance. Emerging AI regulations, such as the EU AI Act, are placing a strong emphasis on security. “High-risk” AI systems will face stringent security requirements, making robust cybersecurity practices essential for legal compliance. CISOs must ensure that their security measures align with these evolving standards.
The Open Source Liability Minefield. The lack of clear legal precedent around liability for AI security incidents, especially in the context of open source, creates a challenging environment for organizations. CISOs need to proactively assess their organization’s potential exposure to liability stemming from their use of open-source AI models and implement measures to mitigate these risks.
Impact on Technology Choices. Regulatory restrictions can directly limit the AI technologies available to organizations. CISOs must factor these limitations into their technology selection process, ensuring that chosen AI tools comply with relevant regulations and their usage.
Key Recommendations for CISOs
Prioritize Security in AI Procurement and Development. When procuring or developing AI systems, make security a primary consideration, not an afterthought. Ensure that vendors provide adequate security assurances and that internal development teams follow secure coding practices, particularly when using open-source components.
Conduct AI-Specific Threat Modeling. Develop threat models that specifically address the unique attack vectors and vulnerabilities of AI systems. Consider scenarios where AI’s decision-making, data handling, or physical-world interactions could be compromised.
Test AI Systems Before Deployment. Rigorous testing can help uncover and address vulnerabilities before they lead to real-world incidents.
Engage in Policy Discussions. CISOs have valuable expertise to contribute to the ongoing dialogue around AI regulation. When possible, engage with policymakers, industry groups, or standards bodies to advocate for regulations that effectively address security risks without unduly hindering innovation. Share your practical insights to help shape a balanced regulatory approach.
The expansion of AI into regulated areas, coupled with the uncertainties surrounding liability, redefines the CISO role. It is no longer sufficient to just secure systems against traditional threats. CISOs must now also consider the legal and regulatory implications of their AI security strategies.

Conclusion: Securing the AI Future
The rapid growth of AI presents significant new cybersecurity challenges. CISOs must adapt quickly, moving beyond traditional security measures to address these emerging threats. It requires prioritizing data privacy, developing AI-specific security frameworks, and actively participating in shaping AI regulations. By taking on a strategic role in managing AI risk, CISOs can guide their organizations toward a secure and innovative future.