Empowering Data Sovereignty with Confidential AI Solutions
A comprehensive approach to safeguarding sensitive data
Hi readers,
As we continue to push the boundaries of AI and data-driven innovation, it’s important to recognize the role of hyperscale cloud providers in this journey. Trusted names like Microsoft Azure, Google Cloud Platform, and Amazon Web Services have become essential to the world’s largest companies, including ours here at Opaque. These hyperscalers provide critical infrastructure that enables our customers to scale their workloads efficiently, while safeguarding their valuable, sensitive data and models.
However, while these services provide a solid foundation, significant complexity remains in building a comprehensive solution that fully addresses our customers’ needs. When it comes to protecting the privacy and sovereignty of data, relying solely on hyperscalers’ confidential computing offerings is like renting an empty lot with a fence: you get the land, but the responsibility to build, equip, and safeguard the space lies with you.
At our Confidential Computing Summit, we discussed this very topic with Steve Wilson—a project leader at the Open Web Application Security Project (OWASP) Foundation. As he explains in our interview below, the next generation of AI requires more than just hyperscale infrastructure. It requires an integrated approach that combines the strengths of confidential computing with the ease and reliability of a purpose-built solution.
A critical first step toward data protection, multi-cloud confidential computing environments secure the infrastructure layer, ensuring that data remains protected during computation across multiple cloud environments. However, addressing the full complexity of ensuring the privacy and sovereignty of sensitive data and AI workloads demands a more comprehensive approach. Opaque builds on these tools by offering a turnkey solution—rather than simply providing the infrastructure, we offer a fully furnished, highly secure office, complete with everything you need to ensure data privacy and sovereignty and run your AI workloads right out of the box.
With advanced features like policy enforcement, distributed data processing, scalability (with dynamic scaling), reliability, integrated AI workflows, and no-code interfaces, we handle the complexity so that you can focus on what matters: moving quickly to drive innovation.
Our new report, “Bridging the Gap: Hyperscalers and Verifiable Data Sovereignty with Opaque, Confidential AI Platform,” lays out the key differences between hyperscalers’ confidential computing offerings and the Opaque platform—underscoring the value of comprehensive confidential AI, designed for out-of-the-box security and swift integration into existing workflows and software stacks.
I encourage you to dive into the insights.
— Rishabh Poddar, Co-Founder and CTO of Opaque Systems
Enhancing Hyperscaler Infrastructure with Comprehensive Protections Offered by Confidential AI
In today's rapidly evolving AI landscape, IT leaders are increasingly turning to hyperscalers—large cloud providers—for the scalability and performance needed to support their AI initiatives. But hyperscale solutions alone are not enough to address the full spectrum of security challenges posed by AI, according to Steve Wilson, Chief Product Officer at Exabeam. That’s becoming especially evident as large language models (LLMs) become more prevalent.
In an interview with Wilson at our Confidential Computing Summit, he highlights that while hyperscalers offer valuable infrastructure, they often lack the critical security features required to safeguard the trustworthiness of AI models and their training data. One major question is the provenance and reliability of pre-trained models, many of which come from open-source platforms like Hugging Face. Without strong cryptographic controls in place, it’s difficult for organizations to ensure the integrity of these models and the data they rely on.
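To make the provenance concern concrete, here is a minimal sketch of the kind of cryptographic integrity check Wilson alludes to: pinning a known-good digest for a model artifact and refusing to load anything that doesn't match. This is an illustrative example, not Opaque's or any hyperscaler's actual tooling; the file name and the way the pinned digest is obtained are assumptions.

```python
import hashlib
from pathlib import Path


def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: Path, pinned_digest: str) -> bool:
    """Accept a model artifact only if its digest matches the pinned value."""
    return sha256_digest(path) == pinned_digest


# Demo with a stand-in artifact; in a real pipeline the pinned digest
# would come from a trusted registry or the model publisher, not be
# computed from the download itself.
artifact = Path("model.bin")
artifact.write_bytes(b"example model weights")
pinned = sha256_digest(artifact)

print(verify_model(artifact, pinned))    # True
print(verify_model(artifact, "0" * 64))  # False
```

A digest check like this catches tampering in transit, but it cannot establish who produced the weights in the first place—that requires signing and attestation further down the stack.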
This is why confidential AI solutions are essential. By integrating confidential computing technologies into their tech stacks, IT leaders can protect sensitive data, secure the software supply chain, and establish trust in their AI models.
What’s more, confidential AI can also mitigate one of the top threats in AI—excessive agency—where LLMs are granted unsupervised control over critical functions, opening potential vulnerabilities. Unlike traditional security features, confidential AI applies cryptographic controls and enforces strict boundaries on model autonomy, ensuring that LLMs operate only within authorized parameters, using verified data. By limiting model permissions and embedding security at lower infrastructure layers, confidential AI prevents LLMs from gaining that dangerous unchecked control.
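The "authorized parameters" idea can be illustrated with a toy permission gate: every action an LLM requests must appear on an explicit allowlist before a handler runs. This is a hypothetical sketch—the action names and handler table are invented, and a production system would enforce the policy in attested infrastructure rather than application code.

```python
# Actions the model is authorized to trigger; everything else is denied
# by default.
ALLOWED_ACTIONS = {"read_report", "summarize_document"}


def execute(action: str, handler_table: dict) -> str:
    """Run an LLM-requested action only if policy permits it."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action '{action}' is outside authorized parameters")
    return handler_table[action]()


handlers = {
    "read_report": lambda: "report contents",
    "summarize_document": lambda: "summary",
    "delete_database": lambda: "boom",  # registered, but never authorized
}

print(execute("read_report", handlers))  # report contents
try:
    execute("delete_database", handlers)
except PermissionError as err:
    print(err)
```

The point of the deny-by-default structure is that adding a dangerous capability to the system does not grant the model access to it; authorization is a separate, auditable step.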
Ultimately, Wilson urges IT leaders to recognize that while hyperscale solutions are a must for scaling AI operations, a complementary platform—confidential AI—can ensure a more trusted and scalable AI future.
Watch our full interview with Wilson below.
In the Lab
The Latest Happenings at Opaque Systems
Beyond Barriers with Giorgio Natali: The Importance of Investing in Accessibility in Modern Enterprises
Giorgio, Vice President and Head of Engineering at Opaque Systems, sat down with Brian Gavin, co-founder of Wally, for an episode of the Beyond Barriers podcast. Giorgio discussed the need for investment in accessibility and the underrepresentation of people with disabilities in organizations. Listen in as he shares success stories from his career and his thoughts on the biggest disruptions in the space.
Partner Spotlight: ServiceNow Unveils Customizable AI Agents for Enhanced Efficiency and Data Security
ServiceNow, the leading provider of cloud-based solutions for enterprise management, collaborates with Opaque to ensure protection of company and client data. Recently, ServiceNow introduced a customizable library of enterprise AI agents designed to enhance workflows. These agents allow businesses to tailor AI-driven solutions to meet specific organizational needs, improving efficiency across various tasks. By focusing on accessibility and adaptability, the library empowers companies to seamlessly integrate AI into their existing operations.
Product Demo: Simulation Mode with Test Data
Working with sensitive data can slow development due to privacy and compliance challenges, causing delays for data scientists while data owners seek assurance of secure usage. Opaque’s new feature, Simulation Mode, creates synthetic test data that mirrors real data without exposing sensitive information, allowing teams to quickly validate queries before approval. This accelerates development and provides data owners with confidence in the security and accuracy of the process. In the video below, Daniel Schwartz, Product Manager at Opaque Systems, walks through the process and functions of this feature.
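To give a feel for the idea behind Simulation Mode, here is a hypothetical sketch: derive a simple schema from real records, then emit synthetic rows with the same columns and value ranges but none of the original values. Opaque's actual feature is more sophisticated; the column names, data, and `synthesize` function here are invented for illustration.

```python
import random

# Stand-in for real, sensitive records that must never leave the owner's
# control.
real_rows = [
    {"age": 34, "balance": 1200.50, "region": "EU"},
    {"age": 51, "balance": 830.00, "region": "US"},
    {"age": 29, "balance": 2100.75, "region": "EU"},
]


def synthesize(rows, n, seed=0):
    """Generate n synthetic rows matching the shape and ranges of rows."""
    rng = random.Random(seed)  # seeded for reproducible test data
    ages = [r["age"] for r in rows]
    balances = [r["balance"] for r in rows]
    regions = sorted({r["region"] for r in rows})
    return [
        {
            "age": rng.randint(min(ages), max(ages)),
            "balance": round(rng.uniform(min(balances), max(balances)), 2),
            "region": rng.choice(regions),
        }
        for _ in range(n)
    ]


test_rows = synthesize(real_rows, n=5)
print(list(test_rows[0]))  # same columns as the real data
```

Data scientists can develop and validate queries against `test_rows`, and only the approved query ever touches the real data—which is what lets both sides move faster.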
Code for Thought
Worthwhile Reads
📈 AI's data surge fuels the need for next-generation infrastructure. AI is driving a massive increase in data demands, requiring cloud and private networks to handle data in petabytes and exabytes. To meet this challenge, companies like CoreWeave and VAST Data are modernizing their infrastructures to enhance speed, scalability, and cost-efficiency. As enterprises adopt these new stacks, there’s now also a growing need for platforms that efficiently handle the complexity and scale of AI data.
📱 iPhone 16 to be a key driver of AI adoption in the consumer market. Apple's integration of AI into the iPhone 16 via "Apple Intelligence" is expected to significantly boost generative AI usage globally—Wedbush analysts predict that over the coming years, roughly 20% of global consumers will interact with generative AI apps through Apple products. In order to maintain its long-standing commitment to privacy, Apple has designed its AI to process most tasks directly on the device, reducing reliance on remote data centers. Potentially establishing new industry benchmarks, Apple’s new AI development underscores the importance of balancing innovation with data security.
🛑 Data quality stands in the way of successful AI implementation. As AI evolves from a futuristic concept to a practical tool, many CIOs find their organizations unprepared for its deployment. A study by Riverbed shows that while 94% of C-suite executives prioritize AI, only 37% are ready to implement it now, with 86% expecting readiness within three years. The primary challenge is data quality: 25% of AI projects underperform due to inadequate data. Confidential AI systems address this issue by ensuring that sensitive data is properly handled and used effectively, improving overall data quality and AI project success.
📝 The U.S. proposes new reporting requirements for advanced AI and cloud computing providers. The Department of Commerce is proposing new reporting requirements to ensure these technologies are resilient against cyberattacks. The proposal mandates detailed disclosures on the development of cutting-edge AI models and computing infrastructure, including cybersecurity measures and red-teaming results that test for potential misuse, such as cyberattacks or weapons development. This move highlights the increasing regulatory focus on robust security measures, further underscoring the value of protecting advanced AI systems with confidential AI.
🧑‍⚖️ California's new AI bills await governor's approval amid industry debate. Recently, the California legislature passed two AI bills—SB 1047, which requires large AI model developers to implement significant safety measures, and AB 2013, which mandates transparency in generative AI development—both awaiting Governor Gavin Newsom's approval. Even if the bills are not signed, the conversation about new standards for transparency, accountability, and the ethical use of AI technologies remains active.