The future of AI and its cyber implications

Exploring AI’s future and its impact on cyber strategy at Davos 2024

AI emerged as a hot topic during the chilly January days in Davos at the 54th Annual Meeting of the World Economic Forum and its accompanying events. Situated at the core of the Davos promenade, AI House Davos hosted a series of engaging panels, roundtables, and networking discussions, all centered on harnessing AI’s potential for advancing humanity.

The first day at AI House was marked by a dynamic panel titled “AI – The Ultimate Invention” that focused on AI’s real-world impacts. This session was particularly notable for its exploration of the challenges and opportunities presented by integrating AI into our everyday lives. The panel, led by Diane Brady, Assistant Managing Editor at Forbes, facilitated a rich exchange of perspectives among Peng Xiao, CEO of G42 Group; Amy Webb, CEO of the Future Today Institute; and Bill Ford, Chairman and CEO of General Atlantic.

Peng Xiao, CEO of G42 Group

Key insights

The session provided several compelling insights:

Rapid evolution of AI. Peng Xiao noted that AI began with specialized applications focused on narrow tasks like facial or voice recognition. Amy Webb added a significant milestone, highlighting the surge in AI development following the release of ChatGPT by Sam Altman’s OpenAI. Generative AI set off a substantial paradigm shift, with many companies eager to capitalize on AI’s advantages. According to the G42 CEO, this trajectory is expected to culminate in the emergence of Super AI – a form of AI anticipated to exceed human intelligence across all domains. However, realizing this vision will necessitate substantial infrastructural investments, including advancements in data centers, connectivity, and processing power.

Expanding use cases for AI. The spectrum of AI applications is broadening with its growing capabilities. A vital part of this expansion is AI’s ability to replicate elements of human creativity and its potential to remove human cognitive biases from decision-making. Illustrating this point, Bill Ford shared how his company has started leveraging AI to enhance decision-making in investment committees. By analyzing extensive internal and external data, AI helps foster decisions that are more objective and free of human bias.

The urgency to embrace AI. Amy Webb articulated a common concern among businesses about missing out on the AI revolution. This sentiment is driven by a perception that the opportunity window is rapidly closing, necessitating swift action. Echoing this urgency, Peng Xiao warned that businesses lacking AI integration risk becoming irrelevant. He gave a striking example: traditional banking could face obsolescence within three years, underscoring the pressing need for AI adoption in various sectors.

The need for regulatory advancements. The rapid pace of AI development presents a significant challenge for regulatory bodies struggling to keep up with these technological advancements. A notable development in the European Union is the AI Act, which is anticipated to become law in early 2024. This legislation is a step toward regulating AI applications. In the United States, discussions are underway regarding several legislative initiatives, such as the Algorithmic Accountability Act and the DEEP FAKES Accountability Act. These aim to ensure the responsible use of AI in crucial areas like housing, healthcare, and education, and to address the challenges posed by mis- and disinformation created with generative AI. Meanwhile, China is advancing its AI regulatory framework through a more fragmented approach. Amy Webb, however, points out a potential inefficiency in democratic regulatory structures regarding AI. She suggests a more practical approach might be to financially incentivize companies toward responsible AI practices.

The imperative of long-term planning. Amy Webb observes that many companies have shortened their planning cycles, often foregoing long-term planning due to global volatility. Despite these challenges, she emphasizes the need for a more forward-thinking approach. “You need data, you need models, and you need patience,” she advises. Echoing this sentiment, Bill Ford underscores the importance of strategic planning in the context of AI, advocating that every company and nation should develop its own AI strategy.

Amy Webb, CEO of the Future Today Institute

Cyber implications

The insights presented during this session offer CISOs and cyber strategists a good starting point for understanding the cyber implications of AI. Based on my observations, here are some key considerations:

Revising cyber strategies. We must regularly revise our cyber strategies to align with evolving business expectations. In an era where AI is not just a competitive edge but a critical component of survival for many companies, incorporating a secure approach to AI into our cyber strategies is essential. This means continuously adapting to the rapidly changing AI landscape to maintain trust in AI and the competitiveness of the organizations we support.

Adopting a proactive approach. Reflecting on my experience in the consulting business, I noticed that cyber strategies often lagged behind, much like the regulatory frameworks we see today. To address this, we need to shift toward a more proactive stance in our cyber strategies. This involves securing the adoption of emerging technologies and positioning ourselves as strategic partners to the business rather than merely as risk assessors. Such a proactive approach is vital to integrating cybersecurity into AI implementation and innovation initiatives.

Monitoring the regulatory landscape. With the regulatory environment changing rapidly, particularly around AI, we must closely monitor these developments. Understanding and complying with regulations concerning AI safety, trust, and transparency is critical. The regulatory landscape is one of the key external factors to assess when reviewing a cyber strategy, so that we can meet these requirements effectively. However, this is not only about compliance – it is about protecting our organizations from regulatory action, reputational damage, and loss of customer trust.

Understanding AI-specific risks. As highlighted in the World Economic Forum’s Global Risks Report 2024, the adverse outcomes of AI technologies are projected to rank among the top 10 risks over the next decade. These risks, which stand alongside cybersecurity, misinformation, and disinformation, demand closer attention. We must understand the specific risks associated with AI solutions and develop controls that mitigate them to acceptable levels.

Addressing the skill gap. A thorough understanding of the technology itself is essential to fully grasp the risks linked to AI. The cybersecurity field is experiencing a widening skill gap, exacerbated by the rapid pace of technological advancement. To bridge this gap, a shift in recruitment strategy is necessary. We can no longer focus solely on current skill sets; instead, we should focus on the attitudes and behaviors of potential team members. Qualities like a proactive mindset, agility, flexibility, and the capacity for rapid learning are becoming increasingly crucial.

The need for a specialized framework. Defining the target state of cyber capabilities for AI requires a specific framework outlining security best practices in this area. The challenge with AI is that its security controls span multiple cyber domains, with some controls being unique to AI. This scenario is reminiscent of the development of the Cloud Controls Matrix for cloud security; similarly, AI security needs a dedicated framework. A recent development in this area is the AI Risk Management Framework published by NIST on January 26, 2023, along with a supporting playbook. This framework is an essential resource for CISOs and cyber strategists, especially in organizations actively engaging in or planning to venture into AI.

Bill Ford, Chairman and CEO of General Atlantic

Further reading

If you would like to deepen your knowledge of AI’s context, use cases, risks, and security implications, here is a selection of recently published sources that I have found particularly interesting:

Deloitte State of Generative AI. The Deloitte team has released an in-depth report titled “Now decides next: Insights from the leading edge of generative AI adoption.” This report offers an analysis based on the responses of 2,835 business and technology leaders at the forefront of implementing generative AI in their enterprises. It covers various topics, including the general sentiment toward AI, confidence in existing AI expertise, practical use cases, and the essential risks that must be managed for successful AI integration. The report benefits cyber strategists seeking a comprehensive understanding of AI’s business and technological aspects.

Accenture Total Enterprise Reinvention in High Tech. Accenture’s report on Total Enterprise Reinvention in High Tech results from a survey of 1,516 C-suite executives. It examines total enterprise reinvention with a specific focus on high-tech industries. By providing a broader perspective on AI’s role within organizations, this report will help you understand the recent technology trends to consider while reviewing the external factors impacting your cyber strategy.

NIST AI Risk Management Framework. The NIST AI Risk Management Framework is a comprehensive guide for organizations to understand and frame the risks associated with AI. It details the nature of these risks and outlines the characteristics of trustworthy AI systems. The framework’s core is structured around four essential functions – Govern, Map, Measure, and Manage – further divided into categories and subcategories, offering a structured approach to AI risk management (see the sketch after this reading list for one way to put the four functions to work).

NIST AI RMF Playbook. The NIST AI Risk Management Framework Playbook is a dynamic resource designed to evolve alongside advances in AI technology. It provides practical actions for achieving the objectives of the risk management framework, aligned with the four AI RMF functions. It is available for download in multiple formats from the NIST website, making it an accessible tool for professionals seeking to apply these principles in practice.

Machine Learning Security Principles. Written by John Paul Mueller, “Machine Learning Security Principles” is an essential read primarily targeted at security professionals, though it also offers valuable insights for security managers. The book delves into various aspects of AI security, including building secure AI systems and strategies to protect against machine learning-driven attacks.
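To make the AI RMF discussion above more concrete, here is a minimal sketch of how a cyber team might key a simple AI risk register to the framework’s four core functions. Only the function names (Govern, Map, Measure, Manage) come from the framework itself; the RiskItem structure, its field names, and the example entries are illustrative assumptions, not part of NIST’s official schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a tiny AI risk register keyed to the four AI RMF
# core functions. Only the function names below come from the framework;
# every field and example entry is an illustrative assumption.

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")


@dataclass
class RiskItem:
    description: str
    rmf_function: str  # one of RMF_FUNCTIONS
    mitigations: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.rmf_function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown AI RMF function: {self.rmf_function}")


register = [
    RiskItem(
        description="Generative AI use cases lack an accountable owner",
        rmf_function="Govern",
        mitigations=["Assign an AI risk owner per business unit"],
    ),
    RiskItem(
        description="Training data may contain personal information",
        rmf_function="Map",
        mitigations=["Data minimization review", "PII scan before ingestion"],
    ),
]

# Summarize the register per function for reporting.
for function in RMF_FUNCTIONS:
    items = [r for r in register if r.rmf_function == function]
    print(f"{function}: {len(items)} risk(s)")
    for item in items:
        print(f"  - {item.description} ({len(item.mitigations)} mitigation(s))")
```

In practice, you would map entries to the framework’s categories and subcategories rather than only the top-level functions, but even this coarse grouping helps reveal coverage gaps at a glance.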

Next sessions

If you are interested in future sessions hosted by AI House Davos, please refer to the official AI House program. If you are interested in other engaging sessions in Davos 2024 from a cyber perspective, please take a look at this article.
