On the second day of AI House Davos 2024, the focus shifted to exploring the risks and opportunities in artificial intelligence, including AI safety. The session ‘AI & Trust’, led by Jennifer Web, Investment Director at Swisscom Ventures, focused on essential questions about trust in Generative AI solutions.
These questions were tackled by Christoph Aeschlimann – CEO of Swisscom, Joël Mesot – President of ETH Zurich, Keith Strier – VP of Worldwide AI at NVIDIA and Pia Tischhauser – Member of the Board of Directors at Swiss Re.
The panelists provided an overview of the current state of AI advancements:
Boom in the AI field. Keith Strier noted a significant increase in AI applications, particularly in genomics, space exploration, and clinical diagnostics. This growth extends beyond enhancing the safety of our cars and the resilience of our cities. While major companies are making headlines, thousands of startups and mid-sized businesses are also developing their own language models. There is also a noticeable uptick in national-level initiatives in this field. A recent BCG survey revealed that 89% of executives considered AI and Generative AI top priorities for 2024, with 85% planning to boost their investment in this area.
Trust as a cornerstone of efficient AI. The panel agreed that trust in AI-generated outputs is vital for their effective use. Christoph Aeschlimann emphasized that when AI outputs do not meet quality expectations, people tend to abandon these solutions, finding them unreliable. Pia Tischhauser highlighted that trust is a major concern for organizations implementing AI. Consequently, BCG has initiated extensive research on Responsible AI, aiming to guide organizations in developing AI systems aligning with business goals and ethical standards.
The panelists highlighted several challenges that organizations currently face while adopting AI and reinventing their business strategies:
Slow progress of regulations. Keith Strier pointed out that regulatory developments concerning AI are still in their infancy. The European Union is at the forefront with its AI Act, expected to become law in early 2024. However, aligning AI regulatory approaches across different countries and jurisdictions remains a significant challenge.
Patchwork of approaches. Keith also discussed the varying regional and national approaches to AI. While some places prioritize identifying and mitigating the risks and potential harm of AI, others emphasize the benefits AI offers to their economies, placing risk management as a secondary concern.
Urgency to adopt AI. There’s a prevailing market sentiment emphasizing the rapid adoption of AI. Christoph Aeschlimann expressed a stark viewpoint: companies slow to harness AI’s benefits risk losing their competitive edge and may even face obsolescence. This urgency has become more pronounced, according to Keith Strier, particularly after OpenAI brought ChatGPT to the market.
To effectively navigate the challenges identified, the panelists proposed several strategies for successful AI adoption, emphasizing trust:
Have a strategy in place. Pia Tischhauser emphasized the importance of a clear strategy for AI adoption. Understanding the ultimate goal, the problems AI is intended to solve, current progress, and necessary steps for integrating AI into the existing technology stack is crucial. It is about more than just technology implementation – it involves rethinking the entire business.
Human-centric approach to AI. AI’s significance extends beyond technology to what it enables us to achieve. Christoph Aeschlimann remarked, “In the end, it is about people.” AI must be developed to serve human needs, and its benefits should be effectively communicated to employees and customers. Pia Tischhauser also underlined the importance of a human-centric approach, achieved by understanding and addressing the needs of employees and various customer groups.
Manage impact on our teams. Joël Mesot estimated that AI developments will alter 50% of our jobs in the coming years. Therefore, according to Pia Tischhauser, AI adoption strategies should focus on reskilling and upskilling employees to fully utilize the benefits AI brings to the organization. Keith Strier pointed out that AI literacy is essential for trust, as mistrust often stems from a lack of understanding of the technology. A higher level of workforce literacy is necessary to address this issue.
Address the full AI lifecycle. The panel agreed that handling data correctly throughout the AI lifecycle—input, model training, tuning, and output management—is critical for trust. Transparency is key in ensuring sufficient trust levels.
Act decisively, but do not forget the basics. Pia Tischhauser noted that some organizations delay AI adoption, awaiting clearer compliance guidelines. However, waiting for regulatory certainty can lead to a loss of competitive advantage, as regulations are continuously evolving. Christoph Aeschlimann advised acting decisively while remaining mindful of safety and regulatory aspects, addressing specific challenges as they arise.
The panelists’ insights also carry cybersecurity implications. Here is my personal view of them:
Ensure strategic alignment. We are in the midst of what can be termed the fifth industrial revolution, where digitalization is a key focus for many businesses. In this context, it is crucial to align cybersecurity objectives with the business, digital, and technology strategies reviewed as part of your assessment of internal factors. To help our organizations stay ahead in AI adoption, we should also actively monitor developments in this field and, when reviewing external factors impacting our cyber strategies, assess which advancements could benefit our businesses. Preparing for the secure adoption of emerging technologies is a proactive step toward maintaining robust, business-oriented cybersecurity.
Utilize available security frameworks. Given the ongoing discussions around regulations and their variability across jurisdictions, relying on them for consistent guidance is currently challenging. Instead, focusing on established security best practices can help manage risks associated with AI implementations. Resources such as the NIST AI Risk Management Framework (AI RMF) and ISO’s AI-related standards, including their guidance on AI risk management, provide valuable frameworks to secure AI systems.
Manage relationships with regulators. It is crucial to engage in dialogue with regulators, possibly through industry groups or associations in your region. A top-down regulatory approach might not fully grasp the nuances of AI implementations, potentially hindering AI adoption and its benefits. This could lead to a loss of competitive edge and negatively impact the national economy. Open communication with regulators is key to fostering an environment conducive to AI development.
Consider the risks that may impact trust. In your threat landscape assessment, include the risks associated with AI solutions. The NIST AI Risk Management Framework outlines the characteristics of trustworthy AI systems: they should be valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed. Notably, Appendix B of the framework delineates how AI risks differ from traditional software risks, providing examples of these unique challenges.
Manage the full AI lifecycle securely. Securely implementing AI involves multiple cyber domains; addressing risks solely at the software development or data protection level is inadequate. We need to consider the entire AI lifecycle: planning and designing the AI system, collecting and processing data, building and using the model, verifying and validating it, deploying and operating the solution, and managing the human aspect of AI implementation. The NIST AI RMF, along with its supporting Playbook, offers comprehensive guidance on managing this lifecycle effectively.
Prepare your team. Equip your team for the evolving workplace landscape. As AI-driven automation advances, tasks that require minimal human judgment and offer little business value will be phased out. Analyze your service value streams, identify inefficiencies, and leverage automation where feasible. Encourage your team to enhance their AI literacy and focus on emerging technologies. Skills in cloud computing and AI are becoming increasingly crucial, and with the advent of quantum computing, staying abreast of these technologies is essential for cyber teams to remain relevant and effectively partner with the business.
If you are interested in other sessions hosted by AI House Davos, please refer to the AI House official program. If you are interested in other engaging sessions from the cyber perspective in Davos 2024, please look at this article. Our previous article highlights the insights from the “AI – The Ultimate Invention” session.