The Internet of Agents is Coming
Exploring why we need a new internet trust layer, the future of agent collaboration, and the dangers of moving at machine speed.
Welcome to AI Confidential, your biweekly breakdown of the most interesting developments in confidential AI.
Today we’re exploring:
The newfangled “Internet of Agents”
How collaborative AI agents are similar to a film crew
Open source projects you need to know about
Also mentioned in this issue: Vijoy Pandey, Jason Clinton, Ion Stoica, Dave Vellante, John Furrier, Chandra Gnanasambandam, Cisco, Outshift, Stibo Systems, BigID, theCUBE, LangChain, Galileo, Microsoft, AMD, Google, Meta, Anthropic, Databricks, Gartner, PwC, NVIDIA NeMo, LangGraph, Mistral AI, Kortix AI, and SailPoint.
A quick note before we dive in:
We’re still buzzing from the 2025 Confidential Computing Summit, three packed days full of insights from industry leaders on the future of AI security. To celebrate, we’re compiling a special report giving you the key takeaways—an inside look at where confidential computing is headed in the age of agentic AI. Stay tuned, it’s coming soon.
Now, let's get into it!
A new era of the internet is coming—and it’s already got a catchy name.
This week on the AI Confidential Podcast, we’re joined by Vijoy Pandey, SVP and GM at Cisco and Head of Outshift, to discuss what comes after task-based automation: the so-called “Internet of Agents.”
Right now, agentic systems exist in enterprise silos. Soon, that won’t be the case.
According to Vijoy, the Internet of Agents is what’s coming—where specialized agents openly collaborate across platforms, companies, and cloud servers with ease.
This increased collaboration is a game changer for improving business and efficiency, but it also comes with serious risks that can’t be ignored.
Think of it this way:
A single human can only cause so much intentional damage in an hour. But an autonomous agent with the same level of access, acting at machine speed, across multiple systems? Now that’s a serious threat.
That’s why identity, trust, and coordination aren’t optional in this next phase of AI—they’re foundational.
So foundational, in fact, that Vijoy is involved with AGNTCY (pronounced agency), an open-source collective launched by Outshift by Cisco, LangChain, and Galileo, to get ahead of the problem.
Its mission is simple: enable secure agent-to-agent communication at scale.
AGNTCY is building the infrastructure needed to move from siloed automation to collaborative AI systems through four core pillars:
Discovery — so agents can identify, locate, and understand each other’s capabilities
Composability — so they can work together, chaining tasks to solve bigger challenges
Deployment — so they can run reliably and securely across diverse environments
Evaluation — so teams can assess performance, security, and value
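To make the first two pillars concrete, here’s a minimal sketch of agent discovery and composition in Python. This is purely illustrative—`AgentManifest`, `Registry`, and the capability strings are hypothetical names we made up, not AGNTCY’s actual schema or API:

```python
# Hypothetical sketch: each agent publishes a capability manifest,
# and a registry matches a task's requirements to agents that can
# handle it. Not AGNTCY's real schema -- names are illustrative.
from dataclasses import dataclass, field


@dataclass
class AgentManifest:
    """What an agent advertises about itself for discovery."""
    name: str
    capabilities: set[str] = field(default_factory=set)


class Registry:
    """A toy discovery service for agent-to-agent collaboration."""

    def __init__(self) -> None:
        self._agents: list[AgentManifest] = []

    def register(self, agent: AgentManifest) -> None:
        self._agents.append(agent)

    def discover(self, required: set[str]) -> list[str]:
        # An agent qualifies only if it covers every required capability.
        return [a.name for a in self._agents if required <= a.capabilities]


registry = Registry()
registry.register(AgentManifest("summarizer", {"summarize", "translate"}))
registry.register(AgentManifest("coder", {"codegen"}))
print(registry.discover({"summarize"}))  # ['summarizer']
```

In a real Internet of Agents, the registry would span organizations and clouds, and each lookup would also need to verify the agent’s identity and trustworthiness—which is exactly where the remaining pillars come in.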
In an ideal world, agents will one day operate like a film crew: specialists such as a director, editor, and sound designer coming together for a joint project and doing their work as members of the same team.
(But this type of collaboration only works if governance is embedded from the start, and we have a ways to go before it’s our reality.)
Also in this episode, you’ll hear us talk about:
Why traditional MLOps approaches won’t cut it in an agentic future
Why the distinction between deterministic and probabilistic systems matters
Why agent guardrails and verification layers are critical for safety
How Cisco’s SRE “Jarvis” agent fixes system issues autonomously
The full episode is live now. Give it a listen here.

Keeping it Confidential
What percent of enterprises have already integrated AI agents into their company workflows?
37%
68%
79%
86%
See the answer at the bottom.
Code for Thought
Important AI news in <2 minutes
📉 75% of business leaders don’t fully trust the data in their own systems, a Stibo Systems report found, with the primary concerns including inconsistent and inaccurate data points.
🔓 70% of enterprises see data leaks by employees as a major security concern, a BigID report revealed, while 58% are worried about potential unstructured data exposure.
💼 Only 38% of financial firms have an AI data protection strategy, the same report found.
🎯 Cybercriminals are using AI built on Grok and Mixtral models to generate phishing emails and malware code, posing new cybersecurity threats.
🛡️ Google beefed up its security infrastructure to protect genAI from prompt injection attacks, taking a “layered” approach to improving its defenses.
Community Roundup
Updates involving OPAQUE and our partners
⚡️ That’s a wrap on the 2025 Confidential Computing Summit—and the energy was electric.
Over three days in San Francisco, leaders from NVIDIA, Microsoft, Google, Anthropic, and more came together to tackle a critical challenge:
How do we create a trusted security layer for AI and the internet?
💥 These are some standout moments:
Jason Clinton of Anthropic shared that Claude now generates 65% of their code, with plans to hit 95% by year’s end
Ion Stoica from Databricks warned that as AI agents scale, risk grows just as fast
Microsoft, Meta, Google, and OPAQUE showcased real-world deployments of Confidential AI
AMD introduced new secure hardware specifically made for enterprise AI
OPAQUE debuted its Confidential Agent stack, combining NVIDIA NeMo and LangGraph with built-in verifiability
🚨 The biggest takeaway of the event?
Top players across AI and infrastructure are joining forces to build an open, trusted Confidential AI stack with shared standards and verifiable protections.
This is how we future-proof the agentic web and the trust it depends on—and our work is just getting started.
OPAQUE in the wild
Exciting news—our mission is reaching an even bigger audience 🎉
Our CEO, Aaron Fulkerson, joined Dave Vellante and John Furrier during the recent theCUBE + NYSE Wired: Robotics & AI Infrastructure Leaders 2025 event to talk confidential AI and trust.
Here are the highlights:
⭐️ There’s demand for verifiable security in AI environments, especially for agents
⭐️ From trust layers to threat models, enterprises must address critical gaps in security
⭐️ Robust safeguards are essential for the next generation of AI systems
⭐️ Privacy-forward ecosystems ensure integrity across distributed environments
OPAQUE in the wild
🎉 OPAQUE was mentioned in Gartner’s “Emergent Tech: Confidential AI for Commercial Deployment in Highly Regulated Industries” report, and our team is absolutely jazzed about it.
The report explains why Confidential AI is crucial for protecting your company’s sensitive data, and it includes steps organizations can take to protect data from potential threats.
We’re honored to be recognized alongside juggernauts like Intel, AMD, and NVIDIA as one of the industry leaders paving the way towards safer AI deployment. It’s a great reminder that we are on the right track!
Open source spotlight
🧑💻 Mistral AI’s Devstral is an agentic model designed for software engineering tasks, which outperforms all other open-source models on SWE-Bench by a significant margin.
📄 RAGFlow is a RAG engine built for deep document understanding, capable of extracting nuanced insights from complex files, images, websites, and lines of code.
🤖 Kortix AI’s Suna is a customizable agentic AI assistant built to help with real-world tasks faster, like browsing the web, scraping for data, and running code.
Quotable
🔐 “As organizations expand their use of AI agents, they must take an identity-first approach to ensure these agents are governed as strictly as human users, with real-time permissions, least privilege, and full visibility into their actions.”
— Chandra Gnanasambandam, EVP of Product and CTO at SailPoint
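As a rough sketch of what that identity-first approach could look like in practice—hypothetical code, not SailPoint’s product or API—an agent might carry explicit scoped grants, with every authorization decision logged:

```python
# Illustrative sketch of identity-first agent governance: explicit
# grants (least privilege) plus an audit log (full visibility).
# Hypothetical design, not any vendor's actual API.
class AgentIdentity:
    def __init__(self, name: str, permissions: list[str]) -> None:
        self.name = name
        self.permissions = set(permissions)  # only what was explicitly granted
        self.audit_log: list[tuple[str, str]] = []  # every decision recorded

    def authorize(self, action: str) -> bool:
        allowed = action in self.permissions
        self.audit_log.append((action, "allowed" if allowed else "denied"))
        return allowed


agent = AgentIdentity("report-bot", ["read:sales"])
print(agent.authorize("read:sales"))    # True
print(agent.authorize("delete:sales"))  # False -- never granted
print(agent.audit_log)
```

The point of the sketch: the agent never holds blanket access, and denied actions leave the same audit trail as allowed ones—the same standard we’d apply to a human user.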
Trivia answer: 79% (and it’s quickly growing).
According to a recent AI agent survey from PwC, 79% of the 300 executives surveyed have already adopted agentic AI in their companies—and 66% of that group reports seeing increased productivity. The Internet of Agents really is coming!
Stay confidential,
- Your friends at OPAQUE
ICYMI: Links to past issues
How'd we do this week? Vote below and let us know!