Unlocking Data Security with Confidential Computing
How confidential AI is shaping data privacy and business transformation
Hi there,
Desktop computing. The Internet. Mobile computing. Cloud. Some technological innovations simply define an era, sending the human race in a previously unimaginable direction. Today, we’re living in the era of AI.
Despite the excitement around the technology’s potential, AI leaders face a recurring obstacle: the tension between the existential business imperative to innovate with AI and the requirement to secure sensitive data. From mounting regulations to growing concerns from consumers, companies that rely on AI models for any business function can’t ignore the questions that the technology raises. Last week, Jules Love, founder at Spark, a consultancy that advises companies on building AI tools into their workflows, summarized the worries well: “It’s hard to know where your data will end up.” (Plus, there are the very real challenges of implementing AI at scale.)
According to a recent Gartner report, 90% of enterprises are researching or piloting AI, yet most of them have not yet formalized acceptable use policies. And in this tricky landscape, vendors hosting AI models don’t always provide transparency around their data policies or safeguards for usage. GenAI providers are arguably the biggest culprits, the report reveals, leaving enterprises largely to fend for themselves. With analysts estimating that 30-50% of enterprise data is sensitive, this is hugely problematic.
Confidential AI, powered by the breakthrough innovation of confidential computing, is on the brink of revolutionizing how companies navigate AI data security and utilization across industries. And if attendance and participation at Opaque’s recent Confidential Computing Summit are any indication, a significant shift towards confidential computing is well underway.
Anjuna Security CEO and Co-Founder Ayal Yogev, Microsoft Azure CTO Mark Russinovich, Opaque Systems CEO Aaron Fulkerson, and NVIDIA CSO and Head of Product Security Dave Reber at the Confidential Computing Summit 2024
Advancements have expanded confidential computing from CPUs to GPUs, particularly with the NVIDIA H100 GPU enclave. This step forward allows GenAI to operate on private and proprietary data, unlocking new potential for insights without compromising privacy. Traditional data privacy methods like data masking and anonymization are often limited and prone to leaks. They diminish the value of data when used with AI and, more importantly, are entirely ineffective at protecting data from new AI-based attack techniques.
Meanwhile, cryptographic approaches like homomorphic encryption and secure multiparty computation, while offering strong security, are impractically slow for complex workloads. Confidential computing strikes a balance between security and efficiency: data stays encrypted even while it is being processed inside hardware-protected enclaves, so AI can run securely on sensitive data without crippling performance overhead.
Obstacles that once stood in the way of confidential computing adoption are disappearing. Implementation used to require expertise in setting up and managing security components such as key management, cluster scaling, and remote attestation. Companies like Opaque Systems have simplified this process with a software stack that eliminates those barriers to entry, making confidential computing accessible even to those without deep security knowledge.
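To make the remote attestation idea mentioned above concrete, here is a minimal, hypothetical sketch of the pattern: a data owner releases a decryption key to an enclave only after verifying that the enclave reports an expected code measurement. The names and the bare hash comparison are illustrative assumptions; real TEEs (whether a CPU enclave or an H100 GPU) return hardware-signed attestation reports that are verified through vendor or cloud tooling.

```python
# Minimal sketch of the remote attestation pattern (hypothetical names and a
# simplified flow; real TEEs return hardware-signed attestation reports, not
# a bare hash comparison).
import hashlib
import hmac
import secrets

# Measurement (hash of the enclave's code and configuration) that a
# trustworthy enclave is expected to report.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-image-v1").hexdigest()


def verify_attestation(reported_measurement: str) -> bool:
    """Accept the enclave only if its reported measurement matches the expected one."""
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)


def release_data_key(reported_measurement: str) -> bytes:
    """Hand a data-encryption key to the enclave only after attestation succeeds."""
    if not verify_attestation(reported_measurement):
        raise PermissionError("Attestation failed: refusing to release the key")
    return secrets.token_bytes(32)  # key the enclave uses to decrypt data in use


# An enclave reporting the expected measurement receives a key; anything else
# is rejected before sensitive data ever leaves the data owner.
key = release_data_key(hashlib.sha256(b"approved-enclave-image-v1").hexdigest())
```

The point of the pattern is that sensitive data (or the key that unlocks it) only moves once the hardware has proven it is running the code the data owner approved.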
In our interview below with fellow co-founder Raluca Ada Popa, we explore just a handful of confidential AI applications from our customers. And there are many, many more to come.
— Aaron Fulkerson, CEO, Opaque Systems
The Expanding Impact of Confidential AI Across Sectors
Companies across industries, from auto manufacturing and high tech to government and healthcare, are experiencing the benefits of confidential AI, Raluca Ada Popa, president and co-founder of Opaque Systems, said in an interview at Opaque’s Confidential Computing Summit earlier this summer.
BMW, for example, uses Opaque Systems to aggregate confidential data from different silos, gaining better insights into operations across its multiple dealership locations while maintaining privacy. Opaque’s GenAI gateway enforces data usage policies across the different locations, ensuring compliance and data segregation while revealing shopping patterns and other data points that help the brand optimize its operations, from marketing to staffing.
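As a toy illustration of what cross-location policy enforcement can look like, the sketch below checks each query against an allow-list of shareable, aggregated fields before it touches any data. The policy model and field names are purely illustrative assumptions, not Opaque’s actual gateway or API.

```python
# Toy illustration of cross-location policy enforcement (hypothetical policy
# model and field names; not Opaque's actual gateway or API).
from dataclasses import dataclass


@dataclass
class Query:
    requester_location: str  # dealership issuing the query
    target_location: str     # dealership whose data is requested
    fields: list[str]        # columns the query wants to read


# Only aggregate, non-identifying fields may be shared across locations.
SHARED_FIELDS = {"shopping_pattern", "visit_count", "model_interest"}


def enforce_policy(query: Query) -> bool:
    """Allow cross-location queries only over approved, aggregated fields."""
    if query.requester_location == query.target_location:
        return True  # a location may query its own data freely
    return set(query.fields) <= SHARED_FIELDS


# A cross-location query over approved fields passes; one requesting raw
# customer identifiers is rejected before it ever reaches the data.
assert enforce_policy(Query("location_a", "location_b", ["shopping_pattern"]))
assert not enforce_policy(Query("location_a", "location_b", ["customer_email"]))
```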
Similarly, Salesforce leverages confidential computing to securely combine data from various sources across its offerings, deriving valuable insights without compromising users’ privacy. This technology allows for strict policy enforcement and monitoring of data access, ensuring adherence to privacy regulations. And a European cybersecurity agency employs Opaque’s technology for secure data collaboration across different organizations, enhancing cybersecurity measures and compliance with stringent European regulations.
But the potential applications of confidential AI extend far beyond these current use cases, Ada Popa explained.
In medical research, confidential computing can enable the secure sharing of sensitive medical data across hospitals and research institutions, leading to breakthroughs in treatments and cures. Cancer research could benefit from aggregated data to better understand patterns and develop effective treatments. In finance, forensics across banks could be enhanced by confidential computing, allowing for secure data sharing to detect money laundering and other illegal activities. Similarly, tracking and preventing human trafficking could be improved by enabling data collaboration between banks, revealing suspicious patterns and aiding authorities in taking timely action.
The future potential of confidential AI is immense. By enabling secure and compliant data sharing, it has the power to drive unprecedented transformation across industries.
Watch the interview with Raluca Ada Popa, President and Co-Founder of Opaque Systems, below.
In the Lab
The Latest Happenings at Opaque Systems
Sizzle Reel: Confidential Computing Summit
Missed our Confidential Computing Summit earlier this summer? Catch up on the latest trends and insights from the brightest minds in our industry as they discuss innovation around AI, security, cloud, and more.
Securing AI in the Enterprise: Differentiating Opaque's Trusted AI Platform
Securing AI in the enterprise is a massive feat. The complexity of AI models and the vast amounts of sensitive data they require make them vulnerable to data breaches, adversarial attacks, and compliance issues. Aaron Fulkerson, CEO, and Raluca Ada Popa, President and Co-Founder of Opaque Systems and Associate Professor at UC Berkeley, discuss how Opaque's trusted AI platform gives organizations auditability, verifiability, and compliance for their data privacy, security, and sovereignty requirements. For more, download our latest whitepaper, Securing Generative AI in the Enterprise.
Product Demo: Opaque Input Variables Feature
In the video below, Daniel Schwartz, Product Manager at Opaque Systems, introduces a new feature called Input Variables, a flexible and secure way to collaborate on sensitive data. The capability enables secure data access and manipulation without compromising privacy or compliance, empowering teams to unlock the full potential of their sensitive datasets.
Code for Thought
Worthwhile Reads
🇪🇺 It’s showtime for the EU AI Act. The EU AI Act officially went into effect last week, setting out a comprehensive risk-based approach to regulating AI that will impact any organization with operations in or impact on the EU. The act will primarily target large US-based tech companies and may increase demand for confidential computing as organizations seek to ensure compliance with its stringent data protection and transparency requirements.
☀️ Sunny skies in the forecast for confidential computing. The confidential computing market is projected to reach $53.214 billion by 2029. One of the biggest drivers of adoption? Cloud. Though cloud computing offers businesses significant advantages like scalability and flexibility, it also poses security risks, especially with sensitive data in shared environments. Confidential computing addresses these concerns by keeping data encrypted not only at rest and in transit but also while it is in use, processing it inside hardware-based secure enclaves even in multi-tenant cloud environments.
👤 The cost of data lurking in the shadows. Data is spread out across more digital locations than ever, a new report from IBM states, and 35% of breaches this year involved data stored in unmanaged data sources—aka “shadow data.” Multi-cloud environments are particularly vulnerable. When compromised, these unmanaged sources also account for the most costly breaches, at a whopping $5.17 million on average. To mitigate these risks, the report calls for proper data encryption, data security posture management (DSPM), and clear-cut data protection strategies.
🗞️ Redemption for OpenAI? In an effort to counter the narrative that OpenAI has deprioritized work on AI safety in the pursuit of more capable, powerful generative AI technologies, Sam Altman says that OpenAI is working with the U.S. AI Safety Institute on an agreement to provide early access to its next major generative AI model for safety testing. That’s on the heels of a tough week for OpenAI: multiple major news outlets, including the New York Times, have blocked its new search tool, SearchGPT, citing concerns about the trustworthiness of the company.