Protecting Data Amidst Record-High Breach Costs
How confidential AI can help protect organizations
Hello readers,
The cost of a data breach has hit an all-time high, according to IBM’s annual survey. The 2024 report reveals that the average incident now costs organizations nearly $5 million, with breaches at U.S.-based organizations often costing twice as much. Drawing on interviews with more than 3,500 individuals across roughly 600 organizations worldwide, the survey highlights a worrying trend: the problem is getting worse, and no industry is immune.
Particularly hard-hit are sectors like healthcare, which faces an average cost of $9.8 million per breach, followed by financial services at $6.1 million, industrial at $5.6 million, technology at $5.5 million, and energy at $5.3 million. These figures are not just statistics; they represent tangible risks for organizations managing sensitive information in an increasingly complex digital landscape.
In response to the growing threat, states are beginning to craft regulations around AI in healthcare, for example, to ensure better protection and ethical use, and we can expect similar rules to spread to many other industries. Yet legislation alone cannot keep pace with a threat landscape that is evolving rapidly as GenAI is weaponized.
While bad actors innovate with new AI attack vectors, new technologies offer promising solutions. In particular, confidential computing is proving to be a crucial ally for enterprises, governments, and other organizations with sensitive data.
Confidential computing, and more specifically confidential AI, offers a powerful answer to data breaches. Securing data in use closes a significant gap that traditional security measures often overlook: many breaches occur not because of weak encryption or poor storage practices but because data is exposed during processing, and common privacy-preserving obfuscation techniques are proving ineffective against AI-based re-identification attacks. Confidential AI solves this by keeping data encrypted even while it is being analyzed, ensuring that sensitive information is never left unprotected. And because data no longer has to be masked or degraded to preserve privacy, its full fidelity remains available for analysis, unlocking more value.
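To make the "data in use" idea concrete, here is a minimal, hypothetical sketch, not Opaque's actual API: plaintext exists only inside an attested environment that holds the decryption key, while everything outside that boundary handles ciphertext. The Enclave class, the measurement string, and the attest helper are illustrative stand-ins for real TEE hardware and remote attestation, and the example assumes the Python cryptography package.

```python
# Illustrative sketch only: Enclave and attest() are hypothetical stand-ins
# for TEE hardware and remote attestation, not Opaque's API.
from cryptography.fernet import Fernet


class Enclave:
    """Stand-in for a hardware-isolated environment that can hold a data key."""

    def __init__(self, measurement: str):
        self.measurement = measurement  # hash of the code the enclave runs
        self._cipher = None

    def provision_key(self, key: bytes) -> None:
        # In practice the key is released only after attestation succeeds.
        self._cipher = Fernet(key)

    def run(self, ciphertext: bytes, analysis):
        # Decryption and analysis happen only inside the isolated boundary;
        # only the result of the analysis leaves it.
        record = self._cipher.decrypt(ciphertext).decode()
        return analysis(record)


def attest(enclave: Enclave, expected_measurement: str) -> bool:
    # Real remote attestation verifies a hardware-signed report; here we
    # simply compare an expected code measurement.
    return enclave.measurement == expected_measurement


key = Fernet.generate_key()
enclave = Enclave(measurement="sha256:approved-analysis-v1")

# The data owner encrypts before sharing; the infrastructure hosting the
# enclave never sees plaintext, only this ciphertext.
ciphertext = Fernet(key).encrypt(b"patient_id=123,diagnosis=J45,cost=1820")

if attest(enclave, "sha256:approved-analysis-v1"):
    enclave.provision_key(key)
    cost = enclave.run(ciphertext, analysis=lambda rec: float(rec.rsplit("=", 1)[1]))
    print(f"Result released from the enclave: {cost}")
```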
One standout feature of confidential computing is its support for multiparty data sharing. In industries like financial services, pooling data across institutions is beneficial and often necessary, for instance to spot fraud that no single institution could detect alone. The challenge lies in sharing this sensitive financial data without compromising customer privacy or exposing the banks to regulatory risk.
Giuseppe Giordano, R&D Principal Director, Accenture Labs, discussing the potential of confidential AI to protect organizations
In the interview below, Giuseppe Giordano, R&D Principal Director, Accenture Labs, explains how confidential AI enables the next generation of multiparty data sharing in areas like financial services. This not only protects organizations from the devastating impact of a breach but also fosters safer, more effective collaborations complete with verifiability of data policies, privacy, and sovereignty.
— Aaron Fulkerson, CEO of Opaque Systems
Securing AI Through Multiparty Data Sharing
Whether organizations are migrating to the cloud, collaborating with partners, or deploying AI models at the edge, the risks associated with data sharing are substantial. Exposing sensitive workloads to unauthorized users, maintaining data sovereignty, and protecting intellectual property are just a few of the hurdles to consider.
At Opaque’s Confidential Computing Summit in June, Giuseppe Giordano, R&D Principal Director at Accenture Labs, highlighted how confidential computing can overcome these challenges, revolutionizing data security and enabling businesses to unlock new insights.
The more [data] the merrier
Data sharing allows multiple organizations to collaborate by pooling data without exposing sensitive information. In a typical multiparty scenario, Giordano explained, participants bring data sets with varying structures to a shared platform.
Confidential computing ensures that each participant’s data remains encrypted and secure, even as it is processed alongside data from other parties. This process can reveal patterns and insights that would be impossible to uncover with isolated data sets, all while maintaining strict security and privacy standards.
Consider a scenario where multiple banks want to collaborate to identify and prevent fraudulent activities. Each bank has its own data on transactions, but by sharing and analyzing this data collectively through an AI model, they can detect patterns of fraud that might not be evident within a single institution's dataset. A transaction pattern that looks normal in one bank’s records might, when combined with data from other banks, reveal a cross-institutional fraud scheme.
With confidential AI, these financial institutions can run their data through an AI model in a secure, encrypted environment. The AI model can analyze the combined data to detect potential fraud while ensuring that the individual data sets remain encrypted and protected throughout the process. This means that no bank’s sensitive data is exposed, even during analysis, making it possible to collaborate securely and effectively.
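As a rough illustration of that flow (hypothetical code, not the Opaque platform's API), the sketch below simulates three banks that each encrypt their own transaction log and submit only ciphertext. The decryption and the cross-bank join happen inside a function standing in for the attested enclave, and only the aggregate fraud signal comes back out.

```python
# Conceptual sketch of the multi-bank fraud scenario described above.
# The "enclave" is an ordinary function standing in for an attested,
# encrypted execution environment; the banks' records are toy data.
import json
from collections import defaultdict
from cryptography.fernet import Fernet

# In a real deployment, each bank's key would be released to the enclave only
# after remote attestation; here the key exchange is simulated directly.
bank_keys = {bank: Fernet.generate_key() for bank in ("bank_a", "bank_b", "bank_c")}

# Each bank encrypts its own transaction log before sharing it.
transactions = {
    "bank_a": [["card_7781", "2024-08-01T10:02"], ["card_1234", "2024-08-01T11:10"]],
    "bank_b": [["card_7781", "2024-08-01T10:05"]],
    "bank_c": [["card_7781", "2024-08-01T10:09"], ["card_9999", "2024-08-02T09:00"]],
}
submissions = {
    bank: Fernet(bank_keys[bank]).encrypt(json.dumps(rows).encode())
    for bank, rows in transactions.items()
}


def enclave_fraud_check(encrypted_submissions, keys, min_banks=3):
    """Runs inside the (simulated) enclave: decrypt, join, and flag cards that
    appear at several institutions, a pattern no single bank could see alone."""
    seen = defaultdict(set)
    for bank, blob in encrypted_submissions.items():
        rows = json.loads(Fernet(keys[bank]).decrypt(blob).decode())
        for card, _timestamp in rows:
            seen[card].add(bank)
    # Only this aggregate signal leaves the enclave; raw records never do.
    return [card for card, banks in seen.items() if len(banks) >= min_banks]


print(enclave_fraud_check(submissions, bank_keys))  # -> ['card_7781']
```

The design point is the trust boundary rather than the detection logic: a real fraud model would be far richer, but the participants' raw records still never leave the protected environment.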
Accenture Labs has been at the forefront of applying these technologies across various use cases. For Accenture, one of the most promising applications of multiparty data sharing is in regulatory compliance, particularly in regions with stringent regulatory regimes such as the EU AI Act or DORA. Confidential computing guarantees that data location, access, and processing remain compliant with local regulations, providing peace of mind to organizations navigating complex legal landscapes.
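One way to picture that guarantee is as a policy gate evaluated against attested facts about a workload before any data key is released. The sketch below is purely illustrative; the DataPolicy and WorkloadAttestation types are hypothetical and not drawn from Opaque's platform or any specific regulation. It simply checks residency, purpose, and participants against the data owner's stated policy.

```python
# Hypothetical policy gate: check attested workload facts against the data
# owner's policy before allowing a key release. Illustrative names only.
from dataclasses import dataclass


@dataclass
class DataPolicy:
    allowed_regions: set
    allowed_purposes: set
    allowed_participants: set


@dataclass
class WorkloadAttestation:
    region: str        # where the enclave is physically running
    purpose: str       # declared purpose of the analysis code
    participants: set  # organizations joining the computation


def policy_permits(policy: DataPolicy, attestation: WorkloadAttestation) -> bool:
    """Return True only if the attested workload satisfies every policy clause."""
    return (
        attestation.region in policy.allowed_regions
        and attestation.purpose in policy.allowed_purposes
        and attestation.participants <= policy.allowed_participants
    )


eu_policy = DataPolicy(
    allowed_regions={"eu-west-1", "eu-central-1"},
    allowed_purposes={"fraud-detection"},
    allowed_participants={"bank_a", "bank_b", "bank_c"},
)
workload = WorkloadAttestation(
    region="eu-central-1", purpose="fraud-detection", participants={"bank_a", "bank_b"}
)
print(policy_permits(eu_policy, workload))  # True: key release can proceed
```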
Giordano noted that the conversation around multiparty data sharing is increasingly shifting from purely technical discussions to a focus on business value. As organizations witness the tangible benefits—enhanced security, cost savings, improved compliance, and the ability to unlock new insights with AI—the adoption of confidential computing is poised to become the “default mode of operation across industries in the near future.”
In this future, multiparty data sharing will not only prevent data breaches but also drive innovation by enabling secure collaboration on a scale previously thought impossible.
Watch the interview with Giuseppe Giordano below.
In the Lab
The Latest Happenings at Opaque Systems
Securing Against GenAI Weaponization
The advent of ChatGPT has been a catalyst for widespread GenAI adoption. This AI awakening has kicked off a tech supercycle that will disrupt both businesses across industries and government organizations. Opaque CEO Aaron Fulkerson recently spoke with Help Net Security about the pitfalls of current techniques for securing sensitive data, and about the massive potential that confidential AI solutions offer businesses that want to innovate with AI.
Securing AI in the Enterprise: Opaque’s Foundation and Confidential AI Solutions
AI holds a wealth of opportunity for enterprises, but accelerating models into production continues to be a massive hurdle. In the video below, Rishabh Poddar, Co-Founder and CTO, and Chester Leung, Co-Founder and Head of Platform Architecture of Opaque Systems, discuss how Opaque's Confidential AI Platform delivers faster time to value for businesses exploring AI—and what its privacy-preserving solutions offer organizations.
Introducing: Industry Solutions
Discover how Opaque Systems is pioneering secure data sharing and analysis with cutting-edge confidential AI on our new Industry Solutions page, tailored specifically for the high-tech, financial services, and insurance sectors. Explore how organizations can leverage these innovations to drive competitive advantage, ensure data privacy, and stay ahead in the rapidly evolving tech landscape.
Product Feature: Starter Scripts
Starting from scratch is never easy. In the video below, Opaque Product Manager Daniel Schwartz introduces a new feature designed to improve user experience by making it easier for users to get started on their queries.
Code for Thought
Worthwhile Reads
🏢 AI acceleration fuels the need for IT infrastructure evolution. As more enterprises invest in AI, many are failing to invest in the infrastructure needed to support its deployment. Research conducted by data management software company Komprise finds that many companies store vast amounts of unstructured data, such as emails and audio files, which can hinder the progress of their AI programs, and that only 30% plan to increase their IT budgets to support their AI ventures. This disconnect highlights the need for stakeholders such as CTOs and CIOs to adopt computing infrastructure that lets them deploy new AI technology quickly and securely.
💳 National Public Data leak spotlights long-term risk of breaches. A data breach at background-check service National Public Data is the latest example of the escalating danger posed by cyberattacks. After initially denying the breach, National Public Data announced last week that a third-party bad actor had obtained a trove of personal data including names, email addresses, phone numbers, Social Security numbers, and mailing addresses. While experts say that not all of the data is accurate, the breach, which impacted more than 1 million people, could still pose long-term risks for crimes such as identity theft. The incident underscores the urgency of adopting more robust measures to secure customer data.
🔐 New security frameworks are vital to safeguard AI investments. As more businesses adopt AI, the technology is becoming a double-edged sword: it's helping businesses strengthen their security systems while simultaneously introducing unique security challenges. Systems that use LLMs, like customer-service chatbots, are prime targets for attacks through techniques like prompt injection. To deploy AI safely and successfully, businesses need to adapt their security frameworks to include AI in their defenses (think threat models and data flow diagrams) while also educating all stakeholders on AI's security risks and best practices.
⚙️ AI is a multifaceted driver of cyber aggregation risk. The rapid acceleration in AI adoption, along with evolving deployment methods, has introduced new dynamics that heighten the risk of cyber event aggregation from both malicious and accidental sources. A report from Guy Carpenter outlines four key factors through which AI development and deployment contribute to this aggregation: software supply chain risk, expanded attack surface, increased data exposure, and greater use in cybersecurity operations. These emerging challenges also present opportunities for the entire insurance ecosystem, from carriers to reinsurers, to innovate and enhance risk management strategies for their stakeholders.