-
Thanks for your thoughts here! I'm not massively familiar with WoT, but I believe MCP and WoT serve different needs that warrant a separate protocol, though there may be opportunities for cross-pollination. MCP specifically focuses on making it as easy as possible to create new MCP servers (often just 100–200 lines of code, easily one-shottable using AI), and on AI-specific features like prompts, resources, tools, and sampling. There are also a lot of built-in safety considerations, like how to keep humans in the loop and how to expose just the right amount of information between clients and servers. While WoT could theoretically be adapted to handle these cases, I worry we'd be introducing even more complexity, and we've already observed a fair amount of sensitivity to it. Perhaps you could share how you imagine the MCP concepts mapping onto WoT, ideally with concrete examples? It may be that the best solution is building some sort of bridge or interoperability layer, rather than rebasing the former onto the latter.
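To make the "100–200 lines" point concrete, a minimal tool-only server with the TypeScript SDK looks roughly like the sketch below. This is a sketch from memory, so treat the exact names and signatures as approximate, and it assumes an ES module so top-level await works:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A minimal MCP server exposing a single tool over stdio.
const server = new McpServer({ name: "demo", version: "0.1.0" });

// Tools are declared with a schema and a handler; the client (e.g. an LLM app)
// discovers them via the protocol and decides when to call them.
server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
  content: [{ type: "text", text: String(a + b) }],
}));

// stdio transport: the host application spawns this process and speaks MCP to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```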
-
I'd say @RobWin has a point here. We all know standards proliferation better than we'd like to, and the W3C is the best standards body we have for the web - so using what is already there should be the first thought. To my understanding, WoT is not tied to IoT; you can rather think of it as "generic services and stuff" interoperating on the web - so it should be worth diving into, because MCP does not seem to target anything different - and in the long run, the list of "stuff" you might want to make available through MCP will only grow.
I've been in the IT industry long enough to sense that this is rather a lame excuse for not digging deeper, sorry. 😉 Nonetheless, if it turns out that WoT is a mismatch for what is really required in the AI/model space, one should not try to misuse WoT for purposes it wasn't designed for.
-
I hadn't heard of WoT before it started coming up in these discussions, so I want to spend some more time with it, and I really appreciate the effort folks are putting into explaining how WoT might apply to MCP (especially @RobWin's examples of how it might map). Here are some high-level reactions I'll be trying to assess for myself as I dig in, but I thought I'd throw them out there in case anyone has already thought them through and would like to share:
I spent a chunk of my career building both apps and a developer platform at the bleeding edge of FHIR, the healthcare data interop standard. While FHIR feels very necessary and is a standard worth fighting for given how fragmented the US healthcare ecosystem otherwise is, I also saw how it frequently bogged us down and even ended up creating bad end-user experiences when we occasionally overfit to the standard. I also saw how often we missed things in the expansive standard and had to thrash to stay compliant with it. I don't regret it - it was a standard worth the tradeoffs - but there were definitely tradeoffs.
-
Hey, thanks for your feedback! Next Friday at 9 AM CET, I'll be presenting Eclipse LMOS, the LMOS Protocol, and WoT in a joint call with three W3C Community Groups: WebAgents, Web of Things, and the WebThing Protocol Community Group. You can find the event details here: https://www.w3.org/events/meetings/5a77b997-467b-4d5d-bb40-b84789c17d2e/20250314T090000/ My goal is to explain how we use Eclipse LMOS at Deutsche Telekom and why WoT can be applied to modeling Agents and Tools. I also hope to address questions. Plus, several other W3C WoT experts will be available for insights and discussion.

Every protocol starts simple. However, as more features are added, such as support for multiple authentication schemes, multiple transport protocols, multiple communication patterns (request-reply, event-driven, real-time, single request-multi-response), transport QoS, end-to-end message acknowledgments, registry-based discovery, mDNS-based discovery, decentralized digital identities, and more, the specification inevitably grows in complexity. A significant amount of effort has gone into documenting and standardizing these features within open W3C specifications to support implementers.

Yet the cycle continues: people argue that a protocol is too complex, so a new one is created, only for it to eventually reach a similar level of complexity, leading to the same debate. At some point, we need to stop reinventing protocols and instead focus on reducing fragmentation within ecosystems. Looking forward to the discussion!
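To give a flavor of how some of these features show up in practice, here is a rough, hand-written Thing Description fragment (written as a TypeScript object literal, not validated; hosts and names are placeholders) showing how a TD declares security schemes and multiple protocol bindings for a single action:

```typescript
// Rough TD fragment: two declared security schemes and two protocol bindings
// (HTTP and MQTT) for one action. All URLs are placeholders.
const td = {
  "@context": "https://www.w3.org/2022/wot/td/v1.1",
  title: "example-agent",
  securityDefinitions: {
    bearer_sc: { scheme: "bearer" }, // token-based auth
    basic_sc: { scheme: "basic" },   // alternative scheme
  },
  security: ["bearer_sc"],
  actions: {
    invoke: {
      input: { type: "object" },
      forms: [
        { href: "https://agent.example.com/invoke" },       // HTTP binding
        { href: "mqtt://broker.example.com/agent/invoke" }, // MQTT binding
      ],
    },
  },
};
```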
-
Discussion Topic
Hello,
I’d like to open a discussion about the current path of the Model Context Protocol (MCP) and its proposed client/server architecture. While we all appreciate the effort and innovation behind MCP, I have concerns that we might be reinventing the wheel rather than building on existing, open standards.
The concept of MCP as a protocol for describing tools, resources, and prompts appears to be a subset of what the W3C Web of Things (WoT) architecture is already capable of. WoT has proven its value in other domains, such as IoT, where it has successfully addressed fragmentation.
Why Reinventing May Be Counterproductive
Rather than creating a new protocol standard, I believe the community should evaluate whether the W3C WoT standards could be leveraged for MCP.
Benefits of WoT for AI/LLM
WoT can already achieve many of the goals outlined for MCP:
Protocol Flexibility:
WoT Thing Descriptions use protocol bindings, so the same interactions can be exposed over HTTP, CoAP, MQTT, WebSockets, or other transports.
Cross-Domain Compatibility:
WoT is not limited to IoT. It’s flexible enough to describe and interface with agents, tools, and resources in AI systems.
Browser and Client Support:
A standardized Scripting API already exists for browsers, allowing direct interaction with WoT-defined Things. This could be extended for agents and tools.
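To make the Scripting API point concrete, here is a rough consumer-side sketch using node-wot; the TD URL and the `summarize` action are invented, and the API names follow the current Scripting API as I understand it. The same API is also available in browsers via the node-wot browser bundle.

```typescript
import { Servient } from "@node-wot/core";
import { HttpClientFactory } from "@node-wot/binding-http";

// Set up a WoT runtime that can act as a client over HTTP.
const servient = new Servient();
servient.addClientFactory(new HttpClientFactory());
const WoT = await servient.start();

// Fetch and consume a Thing Description (URL and affordance names are made up).
const td = await WoT.requestThingDescription("https://agent.example.com/summarizer");
const thing = await WoT.consume(td);

// Invoke an action declared in the TD; the runtime picks a matching protocol binding.
const output = await thing.invokeAction("summarize", { text: "Some long document..." });
console.log(await output.value());
```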
I've described a lot of the concepts and ideas here: https://lmos-ai.org/docs/multi_agent_system/overview
We use WoT to describe the capabilities of Agents and their APIs, and we use it for inter-agent communication and metadata discovery. But it can easily be adapted to describe "tools", "resources", "prompts", or other "things".
WoT is not just theoretical; it has implementations in multiple languages, and these implementations can serve as a foundation for quickly adapting WoT for MCP use cases.
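For example, here is a rough sketch of exposing a "tool" as a WoT Action using Eclipse Thingweb's node-wot. The tool name, its input/output schema, and the placeholder logic are invented for illustration:

```typescript
import { Servient } from "@node-wot/core";
import { HttpServer } from "@node-wot/binding-http";

// Expose an LLM-style "tool" as a WoT Thing with a single Action.
const servient = new Servient();
servient.addServer(new HttpServer({ port: 8080 }));

const WoT = await servient.start();
const thing = await WoT.produce({
  title: "summarizer-tool",
  description: "Hypothetical tool modeled as a WoT Action",
  actions: {
    summarize: {
      description: "Summarize a piece of text",
      input: { type: "object", properties: { text: { type: "string" } } },
      output: { type: "string" },
    },
  },
});

thing.setActionHandler("summarize", async (params) => {
  const { text } = (await params.value()) as { text: string };
  return text.split(". ")[0]; // placeholder "summary": just the first sentence
});

await thing.expose(); // the TD becomes retrievable; clients consume it as sketched above
```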
Call to Action
I’d love to get feedback from the community on this topic.
Let’s consider whether leveraging established standards like WoT could save time and effort while promoting interoperability across AI/LLM ecosystems. I look forward to hearing your thoughts!