[HN Gopher] Model Context Protocol
       ___________________________________________________________________
        
       Model Context Protocol
        
       Author : benocodes
       Score  : 391 points
       Date   : 2024-11-25 16:14 UTC (6 hours ago)
        
 (HTM) web link (www.anthropic.com)
 (TXT) w3m dump (www.anthropic.com)
        
       | somnium_sn wrote:
       | @jspahrsummers and I have been working on this for the last few
       | months at Anthropic. I am happy to answer any questions people
       | might have.
        
         | s3tt3mbr1n1 wrote:
         | First, thank you for working on this.
         | 
         | Second, a question. Computer Use and JSON mode are great for
         | creating a quasi-API for legacy software which offers no
         | integration possibilities. Can MCP better help with legacy
         | software interactions, and if so, in what ways?
        
           | jspahrsummers wrote:
           | Probably, yes! You could imagine building an MCP server
           | (integration) for a particular piece of legacy software, and
           | inside that server, you could employ Computer Use to actually
           | use and automate it.
           | 
           | The benefit would be that to the application connecting to
           | your MCP server, it just looks like any other integration,
           | and you can encapsulate a lot of the complexity of Computer
           | Use under the hood.
           | 
           | If you explore this, we'd love to see what you come up with!
        
         | kseifried wrote:
          | For additional context, the PyPI package:
         | https://pypi.org/project/mcp/
         | 
         | And the GitHub repo: https://github.com/modelcontextprotocol
        
         | benocodes wrote:
          | Seems from the demo videos like the Claude desktop app will
          | soon support MCP. Can you share any info on when it will be
          | rolled out?
        
           | jspahrsummers wrote:
            | Already available in the latest version at
            | https://claude.ai/download!
        
             | synack wrote:
             | No Linux version :(
        
             | dantiberian wrote:
             | Will this be partially available from the Claude website
             | for connections to other web services? E.g. could the
             | GitHub server be called from https://claude.ai?
        
         | slalani304 wrote:
          | Super cool and much-needed open standard. Wondering how this
          | will work for websites/platforms that don't have exposed APIs
          | (LinkedIn, for example)
        
           | spullara wrote:
            | you build an MCP server that does the calling using your own
            | cookies and browser to get around their scraping protections.
        
         | instagary wrote:
         | What is a practical use case for this protocol?
        
           | somnium_sn wrote:
            | One common use case I've been relying on is connecting a
            | development database in a local Docker container to Claude
            | Desktop or any other MCP client (e.g. an IDE assistant
            | panel). I visualized the database layout in Claude Desktop
            | and then created a Django ORM layer in my editor (which has
            | MCP integration).
           | 
            | Internally we have seen people experiment with a wide variety
            | of integrations, from reading data files to managing
            | their GitHub repositories through Claude using MCP. Alex's
           | post https://x.com/alexalbert__/status/1861079762506252723
           | has some good examples. Alternatively please take a look at
           | https://github.com/modelcontextprotocol/servers for a set of
           | servers we found useful.
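
For a setup like the one described above, Claude Desktop discovers local MCP servers through a JSON configuration file. A sketch of one entry, using the published Postgres server package with an illustrative connection string:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/devdb"
      ]
    }
  }
}
```

The desktop app launches each configured command as a subprocess and speaks MCP to it over stdio.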
        
           | anaisbetts wrote:
           | Here's a useful one that I wrote:
           | 
           | https://github.com/anaisbetts/mcp-youtube
           | 
           | Claude doesn't support YouTube summaries. I thought that was
           | annoying! So I added it myself, instead of having to hope
           | Anthropic would do it
        
           | drdaeman wrote:
            | The Zed editor just announced support for MCP in some of
            | their extensions, publishing an article showing some possible
            | use cases/ideas: https://zed.dev/blog/mcp
        
         | throwup238 wrote:
         | Are there any resources for building the LLM side of MCP so we
         | can use the servers with our own integration? Is there a
         | specific schema for exposing MCP information to tool or
         | computer use?
        
           | somnium_sn wrote:
            | Both the Python and TypeScript SDKs can be used to build a
            | client.
           | https://github.com/modelcontextprotocol/typescript-
           | sdk/tree/... and
           | https://github.com/modelcontextprotocol/python-
           | sdk/tree/main.... The TypeScript client is widely used, while
           | the Python side is more experimental.
           | 
           | In addition, I recommend looking at the specification
           | documentation at https://spec.modelcontextprotocol.io. This
           | should give you a good overview of how to implement a client.
           | If you are looking to see an implemented open source client,
           | Zed implements an MCP client: https://github.com/zed-
           | industries/zed/tree/main/crates/conte...
           | 
           | If you have specific questions, please feel free to start a
           | discussion on the respective
           | https://github.com/modelcontextprotocol discussion, and we
           | are happy to help you with integrating MCP.
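
To make the client side concrete, here is a sketch of the opening handshake as plain JSON-RPC 2.0, which MCP builds on. The protocol version string and client name below are illustrative assumptions, not authoritative values; consult the spec for the real ones.

```python
import json

# Sketch of the first message an MCP client sends over its transport
# (stdio or SSE): a JSON-RPC 2.0 "initialize" request. The protocol
# version string and client name are illustrative.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Serialized form as it would go over the wire
wire_message = json.dumps(initialize_request)
```

The server answers with its own capabilities, after which the client can list and call tools, resources, and prompts.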
        
             | throwup238 wrote:
             | Thanks! Do Anthropic models get extra training/RLHF/fine-
             | tuning for MCP use or is it an extension of tool use?
        
         | tcdent wrote:
         | Do you have a roadmap for the future of the protocol?
         | 
          | Is it versioned? I.e., does this release constitute an
          | immutable protocol for the time being?
        
           | bbor wrote:
           | Followup: _is_ this a protocol yet, or just a set of
           | libraries? This page is empty:
           | https://spec.modelcontextprotocol.io/
        
             | jspahrsummers wrote:
             | Sorry, I think that's just the nav on those docs being
             | confusing (particularly on mobile). You can see the spec
             | here: https://spec.modelcontextprotocol.io/specification/
        
               | bbor wrote:
               | Ahh thanks! I was gonna say it's broken, but I now see
               | that you're supposed to notice the sidebar changed and
               | select one of the child pages. Would def recommend
               | changing the sidebar link to that path instead of the
               | index -- I would do it myself but couldn't find the
               | sidebar in your doc repos within 5 minutes of looking.
               | 
               | Thanks for your hard work! "LSP for LLMs" is a fucking
               | awesome idea
        
           | jspahrsummers wrote:
           | You can read how we're implementing versioning here: https://
           | spec.modelcontextprotocol.io/specification/basic/ver...
           | 
           | It's not exactly immutable, but any backwards incompatible
           | changes would require a version bump.
           | 
           | We don't have a roadmap in one particular place, but we'll be
           | populating GitHub Issues, etc. with all the stuff we want to
           | get to! We want to develop this in the open, with the
           | community.
        
         | startupsfail wrote:
          | Is it at least somewhat in sync with plans from Microsoft,
          | OpenAI and Meta? And is it compatible with the current tool use
          | API and computer use API that you've released?
          | 
          | From what I've seen, OpenAI attempted to solve the problem by
          | partnering with an existing company that API-fies everything.
          | This feels like a more viable approach compared to effectively
          | starting from scratch.
        
           | kmahorker21 wrote:
           | What's the name of the company that OpenAI's partnered with?
           | Just curious.
        
         | singularity2001 wrote:
          | Is there any way to give an MCP server access for good? Trying
          | out the demo, it asked me every single time for permission,
          | which will get annoying over longer usage.
        
           | jspahrsummers wrote:
           | We do want to improve this over time, just trying to find the
           | right balance between usability and security. Although MCP is
           | powerful and we hope it'll really unlock a lot of potential,
           | there are still risks like prompt injection and
           | misconfigured/malicious servers that could cause a lot of
           | damage if left unchecked.
        
         | rictic wrote:
         | I just want to say kudos for the design of the protocol. Seems
          | inspired by https://langserver.org/ in all the right ways.
          | Reading through it is a delight; there are so many tasteful
          | little decisions.
         | 
         | One bit of constructive feedback: the TypeScript API isn't
         | using the TypeScript type system to its fullest. For example,
         | for tool providers, you could infer the type of a tool request
         | handler's params from the json schema of the corresponding
         | tool's input schema.
         | 
         | I guess that would be assuming that the model is doing
         | constrained sampling correctly, such that it would never
         | generate JSON that does not match the schema, which you might
         | not want to bake into the reference server impl. It'd mean
         | changes to the API too, since you'd need to connect the tool
         | declaration and the request handler for that tool in order to
         | connect their types.
        
           | jspahrsummers wrote:
           | This is a great idea! There's also the matter of requests'
           | result types not being automatically inferred in the SDK
           | right now, which would be great to fix.
           | 
           | Could I convince you to submit a PR? We'd love to include
           | community contributions!
        
         | thenewwazoo wrote:
         | How much did you use LLMs or other AI-like tools to develop the
         | MCP and its supporting materials?
        
         | xyc wrote:
         | Superb work and super promising! I had wished for a protocol
         | like this.
         | 
          | Is there a recommended resource for building an MCP client?
          | From what I've seen, it just mentions that Claude Desktop & co.
          | are clients. The SDK README seems to cover it a bit, but some
          | examples would be great.
        
           | somnium_sn wrote:
           | We are still a bit light on documentation on how to integrate
           | MCP into an application.
           | 
            | The best starting points are the respective client parts in
           | the SDK: https://github.com/modelcontextprotocol/typescript-
           | sdk/tree/... and
           | https://github.com/modelcontextprotocol/python-
           | sdk/tree/main..., as well as the official specification
           | documentation at https://spec.modelcontextprotocol.io.
           | 
           | If you run into issues, feel free to open a discussion in the
           | respective SDK repository and we are happy to help.
           | 
           | (I've been fairly successful in taking the spec documentation
           | in markdown, an SDK and giving both to Claude and asking
           | questions, but of course that requires a Claude account,
           | which I don't want to assume)
        
             | xyc wrote:
             | Thanks for the pointers! Will do. I've fired up
             | https://github.com/modelcontextprotocol/inspector and the
             | code looks helpful too.
             | 
              | I'm looking at integrating MCP with a desktop app. The spec
              | (https://spec.modelcontextprotocol.io/specification/basic/tr
              | a...) mentions "Clients SHOULD support stdio whenever
              | possible." The server examples seem to be mostly stdio as
              | well. In the context of a sandboxed desktop app, it's often
              | not practical to launch a server as a subprocess because:
              | 
              | - sandbox restrictions on executing binaries
              | 
              | - bundling a binary leads to a larger installation
              | size
             | 
             | Would it be reasonable to relax this restriction and
             | provide both SSE/stdio for the default server examples?
        
               | xyc wrote:
               | ^ asked the question in the discussion: https://github.co
               | m/modelcontextprotocol/specification/discus...
        
               | somnium_sn wrote:
               | Having broader support for SSE in the servers repository
               | would be great. Maybe I can encourage you to open a PR or
               | at least an issue.
               | 
                | I can totally see your concern about sandboxed apps,
                | particularly for Flatpak or similar distribution
               | methods. I see you already opened a discussion https://gi
               | thub.com/modelcontextprotocol/specification/discus..., so
               | let's follow up there. I really appreciate the input.
        
         | computerex wrote:
         | It seems extremely verbose. Why does the transport mechanism
         | matter? Would have loved a protocol/standard about how best to
         | organize/populate the context. I think MCP touches on that but
         | has too much of other stuff for me.
        
         | cynicalpeace wrote:
         | Was Cursor in any way an inspiration?
        
       | benocodes wrote:
       | Good thread showing how this works:
       | https://x.com/alexalbert__/status/1861079762506252723
        
         | kseifried wrote:
         | Twitter doesn't work anymore unless you are logged in.
         | 
         | https://unrollnow.com/status/1861079762506252723
        
       | outlore wrote:
       | i am curious: why this instead of feeding your LLM an OpenAPI
       | spec?
        
         | jasonjmcghee wrote:
         | It's not about the interface to make a request to a server,
         | it's about how the client and server can interact.
         | 
         | For example:
         | 
         | When and how should notifications be sent and how should they
         | be handled?
         | 
         | ---
         | 
         | It's a lot more like LSP.
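
A sketch of that distinction in JSON-RPC terms. The method names below follow MCP's naming style but are shown for illustration: a request carries an "id" and expects a response; a notification omits the "id" and gets none.

```python
import json

# Request: the client asks the server to enumerate its tools and
# correlates the eventual response via the "id" field.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/list",
    "params": {},
}

# Notification: a server-pushed event; no "id", no response expected.
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/list_changed",
}

def is_notification(msg: dict) -> bool:
    return "id" not in msg

wire = json.dumps(notification)
```

Deciding when to emit such notifications and how clients react to them is exactly the kind of interaction contract the protocol pins down.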
        
           | outlore wrote:
           | makes sense, thanks for the explanation!
        
           | quantadev wrote:
           | Nobody [who knows what they're doing] wants their LLM API
           | layer controlling anything about how their clients and
           | servers interact though.
        
             | pizza wrote:
             | I do
        
               | quantadev wrote:
               | > "who knows what they're doing".
        
             | jasonjmcghee wrote:
             | Not sure I understand your point. If it's your client /
             | server, you are controlling how they interact, by
             | implementing the necessaries according to the protocol.
             | 
             | If you're writing an LSP for a language, you're
             | implementing the necessaries according to the protocol
             | (when to show errors, inlay hints, code fixes, etc.) - it's
             | not deciding on its own.
        
               | quantadev wrote:
               | Even if I could make use of it, I wouldn't, because I
               | don't write proprietary code that only works on one AI
               | Service Provider. I use only LangChain so that _all_ of
               | my code can be used with _any_ LLM.
               | 
                | My app has a simple drop-down box where users can pick
                | whatever LLM they want to use (OpenAI, Perplexity,
                | Gemini, Anthropic, Grok, etc.)
               | 
               | However if they've done something worthy of putting into
               | LangChain, then I do hope LangChain steals the idea and
               | incorporates it so that _all_ LLM apps can use it.
        
               | gyre007 wrote:
                | It's an _open_ protocol; where did you get the idea that
                | it would only work with Claude? You can implement it for
                | whatever you want - I'm sure LangChain folks are already
                | working on something to accommodate it
        
               | quantadev wrote:
               | Once fully adopted by at least 3 other companies I'll
               | consider it a standard, and would consider it yes, if it
               | solved a problem I have, which it does not.
               | 
               | Lots of companies open source some of their internal
               | code, then say it's "officially a protocol now" that
               | anyone can use, and then no one else ever uses it.
               | 
               | If they have new "tools" that's great however, but only
               | as long as they can be used in LangChain independent of
               | any "new protocol".
        
         | pizza wrote:
         | I think OpenAI spec function calls are to this like what raw
         | bytes are to unix file descriptors
        
         | quotemstr wrote:
         | Same reason in Emacs we use lsp-mode and eglot these days
         | instead of ad-hoc flymake and comint integrations. Plug and
         | play.
        
       | recsv-heredoc wrote:
       | Thank you for creating this.
        
       | ianbutler wrote:
       | I'm glad they're pushing for standards here, literally everyone
       | has been writing their own integrations and the level of
       | fragmentation (as they also mention) and repetition going into
       | building the infra around agents is super high.
       | 
        | We're building an in-terminal coding agent and our next step was
        | to connect to external services like Sentry and GitHub, where we
        | would also be making a bespoke integration or using a closed
        | source provider. We appreciate that they have MCP integrations
        | already for those services. Thanks Anthropic!
        
         | bbor wrote:
         | I've been implementing a lot of this exact stuff over the past
         | month, and couldn't agree more. And they even typed the python
         | SDK -- _with pydantic_!! An exciting day to be an LLM dev, that
         | 's for sure. Will be immediately switching all my stuff to this
         | (assuming it's easy to use without their starlette `server`
         | component...)
        
       | ado__dev wrote:
       | You can use MCP with Sourcegraph's Cody as well
       | 
       | https://sourcegraph.com/blog/cody-supports-anthropic-model-c...
        
       | jascha_eng wrote:
       | Hmm I like the idea of providing a unified interface to all LLMs
       | to interact with outside data. But I don't really understand why
       | this is local only. It would be a lot more interesting if I could
       | connect this to my github in the web app and claude automatically
       | has access to my code repositories.
       | 
       | I guess I can do this for my local file system now?
       | 
        | I also wonder: if I build an LLM-powered app, and currently
        | simply do RAG and then inject the retrieved data into my prompts,
        | should this replace it? Can I integrate this in a useful way even?
       | 
        | The use case of "on your machine with your specific data" seems
        | very narrow to me right now, considering how many different
        | context sources and use cases there are.
        
         | bryant wrote:
         | > It would be a lot more interesting if I could connect this to
         | my github in the web app and claude automatically has access to
         | my code repositories.
         | 
         | From the link:
         | 
         | > To help developers start exploring, we're sharing pre-built
         | MCP servers for popular enterprise systems like Google Drive,
         | Slack, GitHub, Git, Postgres, and Puppeteer.
        
           | jascha_eng wrote:
           | Yes but you need to run those servers locally on your own
           | machine. And use the desktop client. That just seems...
           | weird?
           | 
            | I guess the reason for this local focus is that it's
            | otherwise hard to provide access to local files, which is a
            | decently large use case.
           | 
           | Still it feels a bit complicated to me.
        
         | jspahrsummers wrote:
         | We're definitely interested in extending MCP to cover remote
         | connections as well. Both SDKs already support an SSE transport
         | with that in mind:
         | https://modelcontextprotocol.io/docs/concepts/transports#ser...
         | 
          | However, it's not quite a complete story yet. Remote
          | connections introduce a lot more questions and complexity
          | related to deployment, auth, security, etc. We'll be working
         | through these in the coming weeks, and would love any and all
         | input!
        
           | jascha_eng wrote:
           | Will you also create some info on how other LLM providers can
            | integrate this? So far it looks like it's mostly a protocol
            | to integrate with Anthropic models/the desktop client. That's
            | not what I thought of when I read open-source.
           | 
           | It would be a lot more interesting to write a server for this
           | if this allowed any model to interact with my data. Everyone
            | would benefit from having more integrations, and you
            | (Anthropic) would still have the advantage of basically
            | controlling the protocol.
        
             | somnium_sn wrote:
             | Note that both Sourcegraph's Cody and the Zed editor
             | support MCP now. They offer other models besides Claude in
             | their respective application.
             | 
             | The Model Context Protocol initial release aims to solve
             | the N-to-M relation of LLM applications (mcp clients) and
             | context providers (mcp servers). The application is free to
             | choose any model they want. We carefully designed the
             | protocol such that it is model independent.
        
               | jascha_eng wrote:
               | LLM applications just means chat applications here though
               | right? This doesn't seem to cover use cases of more
               | integrated software. Like a typical documentation RAG
               | chatbot.
        
         | singularity2001 wrote:
          | For me it's complementary to OpenAI's custom GPTs, which are
          | non-local.
        
         | mike_hearn wrote:
         | Local only solves a lot of problems. Our infrastructure does
         | tend to assume that data and credentials are on a local
         | computer - OAuth is horribly complex to set up and there's no
         | real benefit to messing with that when local works fine.
        
       | WhatIsDukkha wrote:
       | I don't understand the value of this abstraction.
       | 
       | I can see the value of something like DSPy where there is some
       | higher level abstractions in wiring together a system of llms.
       | 
       | But this seems like an abstraction that doesn't really offer much
       | besides "function calling but you use our python code".
       | 
       | I see the value of language server protocol but I don't see the
       | mapping to this piece of code.
       | 
       | That's actually negative value if you are integrating into an
       | existing software system or just you know... exposing functions
       | that you've defined vs remapping functions you've defined into
       | this intermediate abstraction.
        
         | resters wrote:
         | The secret sauce part is the useful part -- the local vector
         | store. Anthropic is probably not going to release that without
         | competitive pressure. Meanwhile this helps Anthropic build an
         | ecosystem.
         | 
         | When you think about it, function calling needs its own local
         | state (embedded db) to scale efficiently on larger contexts.
         | 
         | I'd like to see all this become open source / standardized.
        
           | jerpint wrote:
            | I'm not sure what you mean - the embedding model is
            | independent of the embeddings themselves. Once generated, the
            | embeddings and vector store should exist 100% locally and
            | thus are not part of any secret sauce
        
         | ethbr1 wrote:
         | Here's the play:
         | 
         | If integrations are required to unlock value, then the platform
         | with the most prebuilt integrations wins.
         | 
         | The bulk of mass adopters don't have the in-house expertise or
         | interest in building their own. They want turnkey.
         | 
          | No company can build integrations at scale more quickly
          | than an entire community.
         | 
         | If Anthropic creates an integration standard and gets adoption,
         | then it either at best has a competitive advantage (first mover
         | and ownership of the standard) or at worst prevents OpenAI et
         | al. from doing the same to it.
         | 
          | (Also, the integration piece is the necessary but least
          | interesting component of the entire system. Way better to
          | commodify it via a standard and remove it as a blocker to
          | adoption.)
        
       | orliesaurus wrote:
       | Are there any other Desktop apps other than Claude's supporting
       | this?
        
         | jdorfman wrote:
         | Cody (VS Code plugin) is supporting MCP
         | https://sourcegraph.com/blog/cody-supports-anthropic-model-c...
        
           | orliesaurus wrote:
           | What about ChatGPT Desktop? Do you think they will add
           | support for this?
        
             | jdorfman wrote:
             | I hope so, I use Claude Desktop multiple times a day.
        
         | deet wrote:
         | My team and I have a desktop product with a very similar
         | architecture (a central app+UI with a constellation of local
         | servers providing functions and data to models for local+remote
         | context)
         | 
         | If this protocol gets adoption we'll probably add
         | compatibility.
         | 
            | Which would bring MCP to local models like Llama 3 as well as
            | other cloud competitors like OpenAI, etc.
        
           | orliesaurus wrote:
           | would love to know more
        
             | deet wrote:
             | Landing page link is in my bio
             | 
             | We've been keeping quiet, but I'd be happy to chat more if
             | you want to email me (also in bio)
        
       | orliesaurus wrote:
       | How is this different from function calling libraries that
       | frameworks like Langchain or Llamaindex have built?
        
         | quantadev wrote:
         | After a quick look it seemed to me like they're trying to
         | standardize on how clients call servers, which nobody needs,
         | and nobody is going to use. However if they have new Tools that
         | can be plugged into my LangChain stuff, that will be great, and
         | I can use that, but I have no place for any new client/server
         | models.
        
       | andrewstuart wrote:
       | Can someone please give examples of uses for this?
        
         | singularity2001 wrote:
         | let Claude answer questions about your files and even modify
         | them
        
       | keybits wrote:
       | The Zed editor team collaborated with Anthropic on this, so you
       | can try features of this in Zed as of today:
       | https://zed.dev/blog/mcp
        
         | singularity2001 wrote:
          | Looks like I need to create a Rust extension wrapper for the
          | MCP server I created for Claude?
        
       | bentiger88 wrote:
       | One thing I dont understand.. does this rely on vector
       | embeddings? Or how does the AI interact with the data? The
       | example is a sqllite satabase with prices, and it shows claude
       | being asked to give the average price and to suggest pricing
       | optimizations.
       | 
       | So does the entire db get fed into the context? Or is there
       | another layer in between. What if the database is huge, and you
       | want to ask the AI for the most expensive or best selling items?
       | With RAG that was only vaguely possible and didnt work very well.
       | 
       | Sorry I am a bit new but trying to learn more.
        
         | orliesaurus wrote:
          | it doesn't feed the whole DB into the context, it gives Claude
          | the option to QUERY it directly
        
           | cma wrote:
            | It never accidentally deletes anything? Or I guess you give
            | it read-only access? Is it querying through this API via some
            | adapter built for it, or does the file get sent through the
            | API, where they recognize it's SQLite and load it on their
            | end?
        
             | simonw wrote:
             | It can absolutely accidentally delete things. You need to
             | think carefully about what capabilities you enable for the
             | model.
        
         | simonw wrote:
         | Vector embeddings are entirely unrelated to this.
         | 
         | This is about tool usage - the thing where an LLM can be told
         | "if you want to run a SQL query, say <sql>select * from
          | repos</sql> - the code harness will then spot that tag, run the
         | query for you and return the results to you in a chat message
         | so you can use them to help answer a question or continue
         | generating text".
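
A minimal sketch of that harness loop. The `<sql>` tag convention is this example's, not MCP's wire format; MCP standardizes this request/response exchange so each harness doesn't have to reinvent it.

```python
import re
import sqlite3

def run_tool_calls(model_output: str, conn: sqlite3.Connection) -> str:
    """Spot a model-emitted <sql>...</sql> tag, run the query, and
    return the rows as the next chat message (illustrative harness)."""
    match = re.search(r"<sql>(.*?)</sql>", model_output, re.DOTALL)
    if not match:
        return model_output  # no tool call: pass the text through
    rows = conn.execute(match.group(1)).fetchall()
    return f"Query results: {rows}"  # fed back to the model as a message

# Tiny in-memory database to exercise the loop.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repos (name TEXT)")
conn.execute("INSERT INTO repos VALUES ('servers')")
reply = run_tool_calls("<sql>SELECT name FROM repos</sql>", conn)
```

In a real agent this runs in a loop: the tool result becomes the next message, and the model decides whether to answer or issue another call.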
        
       | lukekim wrote:
       | The Model Context server is similar to what we've built at Spice,
       | but we've focused on databases and data systems. Overall,
       | standards are good. Perhaps we can implement MCP as a data
       | connector and tool.
       | 
       | [1] https://github.com/spiceai/spiceai
        
       | orliesaurus wrote:
       | I would love to integrate this into my platform of tools for AI
       | models, Toolhouse [1], but I would love to understand the
        | adoption of this protocol, especially as it seems to only work
        | with one foundation model.
       | 
       | [1] https://toolhouse.AI
        
       | bionhoward wrote:
       | I love how they're pretending to be champions of open source
       | while leaving this gem in their terms of use
       | 
       | """ You may not access or use, or help another person to access
       | or use, our Services in the following ways: ... To develop any
       | products or services that compete with our Services, including to
       | develop or train any artificial intelligence or machine learning
       | algorithms or models. """
        
         | loeber wrote:
         | OpenAI and many other companies have virtually the same
         | language in their T&Cs.
        
           | SSLy wrote:
           | that doesn't absolve any of them
        
             | monooso wrote:
             | Absolve them of what?
        
           | Imnimo wrote:
           | OpenAI says, "[You may not] Use Output to develop models that
           | compete with OpenAI." That feels more narrow than Anthropic's
           | blanket ban on any machine learning development.
        
         | j2kun wrote:
         | Presumably this doesn't apply to the standard being released
         | here, nor any of its implementations made available. Each of
         | these appears to have its own permissible license.
        
         | haneefmubarak wrote:
          | Eh, the actual MCP repos seem to just be MIT licensed; AFAIK
          | every AI provider has similar language in the terms for
          | their core services.
        
         | cooper_ganglia wrote:
         | I think open-sourcing your tech for the common person while
         | leaving commercial use behind a paywall or even just against
         | terms is completely acceptable, no?
        
       | pants2 wrote:
       | This is awesome. I have an assistant that I develop for my
        | personal use, and integrations are the most difficult part - this
       | is a game changer.
       | 
       | Now let's see a similar abstraction on the client side - a
       | unified way of connecting your assistant to Slack, Discord,
       | Telegram, etc.
        
       | killthebuddha wrote:
       | I see a good number of comments that seem skeptical or confused
       | about what's going on here or what the value is.
       | 
       | One thing that some people may not realize is that right now
       | there's a MASSIVE amount of effort duplication around developing
       | something that could maybe end up looking like MCP. Everyone
       | building an LLM agent (or pseudo-agent, or whatever) right now is
       | writing a bunch of boilerplate for mapping between message
       | formats, tool specification formats, prompt templating, etc.
       | 
       | Now, having said that, I do feel a little bit like there's a few
       | mistakes being made by Anthropic here. The big one to me is that
       | it seems like they've set the scope too big. For example, why are
       | they shipping standalone clients and servers rather than
       | client/server libraries for all the existing and wildly popular
       | ways to fetch and serve HTTP? When I've seen similar mistakes
       | made (e.g. by LangChain), I assume they're targeting brand new
       | developers who don't realize that they just want to make some
       | HTTP calls.
       | 
       | Another thing that I think adds to the confusion is that, while
       | the boilerplate-ish stuff I mentioned above is annoying, what's
       | REALLY annoying and actually hard is generating a series of
       | contexts using variations of similar prompts in response to
       | errors/anomalies/features detected in generated text. IMO this is
       | how I define "prompt engineering" and it's the actual hard
       | problem we have to solve. By naming the protocol the Model
       | Context Protocol, I assumed they were solving prompt engineering
       | problems (maybe by standardizing common prompting techniques like
       | ReAct, CoT, etc).
        
         | ineedaj0b wrote:
          | data security is the reason i'd imagine they're letting
          | others host servers
        
           | killthebuddha wrote:
           | The issue isn't with who's hosting, it's that their SDKs
           | don't clearly integrate with existing HTTP servers regardless
           | of who's hosting them. I mean integrate at the source level,
           | of course they could integrate via HTTP call.
        
         | thelastparadise wrote:
         | Your point about boilerplate is key, and it's why I think MCP
         | could work well despite some of the concerns raised. Right now,
         | so many of us are writing redundant integrations or reinventing
         | the same abstractions for tool usage and context management.
         | Even if the first iteration of MCP feels broad or clunky,
         | standardizing this layer could massively reduce friction over
         | time.
         | 
         | Regarding the standalone servers, I suspect they're aiming for
         | usability over elegance in the short term. It's a classic
         | trade-off: get the protocol in people's hands to build
         | momentum, then refine the developer experience later.
        
       | _pdp_ wrote:
       | It is clear this is a wrapper around the function calling
       | paradigm but with some extensions that are specific to this
       | implementation. So it is an SDK.
        
       | prnglsntdrtos wrote:
       | really great to see some standards emerging. i'd love to see
       | something like mindsdb wired up to support this protocol and get
       | a bunch of stuff out of the box.
        
       | singularity2001 wrote:
       | Tangential question: Is there any LLM which is capable of
       | preserving the context through many sessions, so it doesn't have
       | to upload all my context every time?
        
         | fragmede wrote:
         | it's a bit of a hack but the web UI of ChatGPT has a limited
         | amount of memories you can use to customize your interactions
         | with it.
        
           | singularity2001 wrote:
           | "remember these 10000 lines of code" ;)
           | 
           | In an ideal world gemini (or any other 1M token context
           | model) would have an internal 'save snapshot' option so one
           | could resume a blank conversation after 'priming' the
           | internal state (activations) with the whole code base.
        
       | alberth wrote:
       | Is this basically open source data collectors / data integration
       | connectors?
        
         | somnium_sn wrote:
          | I would think of it more as an LSP for LLM applications. It
          | enables data integrations, but the current implementations
          | are all local.
        
       | hipadev23 wrote:
       | Can I point this at my existing private framework and start
       | getting Claude 3.5 code suggestions that utilize our framework it
       | has never seen before?
        
       | wolframhempel wrote:
       | I'm surprised that there doesn't seem to be a concept of payments
       | or monetization baked into the protocol. I believe there are some
       | major companies to be built around making data and API actions
       | available to AI Models, either as an intermediary or marketplace
       | or for service providers or data owners directly- and they'd all
       | benefit from a standardised payment model on a per transaction
       | level.
        
         | ed wrote:
         | I've gone looking for services like this but couldn't find
         | much, any chance you can link to a few platforms?
        
       | benopal64 wrote:
       | If anyone here has an issue with their Claude Desktop app seeing
       | the new MCP tools you've added to your computer, restart it
       | fully. Restarting the Claude Desktop app did NOT work for me, I
       | had to do a full OS restart.
        
         | anaisbetts wrote:
          | Hm, this shouldn't be the case; something odd is happening
          | here. Normally restarting the app should do it, though on
          | Windows it is easy to think you restarted the app when you
          | really just closed the main window and reopened it (you need
          | to close the app via File => Quit)
        
       | melvinmelih wrote:
       | This is great but will be DOA if OpenAI (80% market share)
       | decides to support something else. The industry trend is that
       | everything seems to converge to OpenAI API standard (see also the
       | recent Gemini SDK support for OpenAI API).
        
         | will-burner wrote:
         | True, but you could also frame this as a way for Anthropic to
         | try and break that trend. IMO they've got to try and compete
         | with OpenAI, can't just concede that OpenAI has won yet.
        
         | thund wrote:
         | "OpenAI API" is not a "standard" though. They have no interest
         | in making it a standard, otherwise they would make it too easy
         | to switch AI provider.
         | 
         | Anthropic is playing the "open standard" card because they want
         | to win over some developers. (and that's good from that pov)
        
         | defnotai wrote:
         | There's clearly a need for this type of abstraction, hooking up
         | these models to various tooling is a significant burden for
         | most companies.
         | 
         | Putting this out there puts OpenAI on the clock to release
         | their own alternative or adopt this, because otherwise they run
         | the risk of engineering leaders telling their C-suite that
         | Anthropic is making headway towards better frontier model
         | integration and OpenAI is the costlier integration to maintain.
        
         | skissane wrote:
         | I wonder if they'll have any luck convincing other LLM vendors,
         | such as Google, Meta, xAI, Mistral, etc, to adopt this
         | protocol. If enough other vendors adopt it, it might still see
         | some success even if OpenAI doesn't.
         | 
         | Also, I wonder if you could build some kind of open source
         | mapping layer from their protocol to OpenAI's. That way OpenAI
         | could support the protocol even if they don't want to.
        
       | jvalencia wrote:
       | I don't trust an open source solution by a major player unless
       | it's published with other major players. Otherwise, the perverse
       | incentives are too great.
        
       | valtism wrote:
       | This is a nice 2-minute video overview of this from Matt Pocock
       | (of Typescript fame) https://www.aihero.dev/anthropics-new-model-
       | context-protocol...
        
         | xrd wrote:
         | Very nice video, thank you.
         | 
         | His high level summary is that this boils down to a "list
         | tools" RPC call, and a "call tool" RPC call.
         | 
         | It is, indeed, very smart and very simple.
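
       Those two calls are plain JSON-RPC 2.0 messages; the method
       names (tools/list, tools/call) come from the MCP spec, while the
       tool name and arguments below are hypothetical:

       ```python
       import json

       # What a client sends to discover a server's tools:
       list_tools = {
           "jsonrpc": "2.0",
           "id": 1,
           "method": "tools/list",
       }

       # ...and to invoke one of them by name with JSON arguments:
       call_tool = {
           "jsonrpc": "2.0",
           "id": 2,
           "method": "tools/call",
           "params": {
               "name": "query_database",  # hypothetical tool name
               "arguments": {"sql": "select * from repos"},
           },
       }

       print(json.dumps(call_tool, indent=2))
       ```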
        
       | gjmveloso wrote:
        | Let's see how other relevant players like Meta, Amazon, and
        | Mistral react to this. Things like these only make sense with
        | broader adoption and a diverse governance model
        
       | threecheese wrote:
       | WRT prompts vs sampling: why does the Prompts interface exclude
       | model hints that are present in the Sampling interface? Maybe I
       | am misunderstanding.
       | 
       | It appears that clients retrieve prompts from a server to hydrate
       | them with context only, to then execute/complete somewhere else
       | (like Claude Desktop, using Anthropic models). The server doesn't
       | know how effective the prompt will be in the model that the
       | client has access to. It doesn't even know if the client is a
       | chat app, or Zed code completion.
       | 
       | In the sampling interface - where the flow is inverted, and the
       | server presents a completion request to the client - it can
       | suggest that the client uses some model type /parameters. This
       | makes sense given only the server knows how to do this
       | effectively.
       | 
       | Given the server doesn't understand the capabilities of the
       | client, why the asymmetry in these related interfaces?
       | 
       | There's only one server example that uses prompts (fetch), and
       | the one prompt it provides returns the same output as the tool
        | call, except wrapped in a PromptMessage. EDIT: looks like
        | there are some capability classes in the MCP; maybe these will
        | evolve.
        
         | jspahrsummers wrote:
         | Our thinking is that prompts will generally be a user initiated
         | feature of some kind. These docs go into a bit more detail:
         | 
         | https://modelcontextprotocol.io/docs/concepts/prompts
         | 
         | https://spec.modelcontextprotocol.io/specification/server/pr...
         | 
         | ... but TLDR, if you think of them a bit like slash commands, I
         | think that's a pretty good intuition for what they are and how
         | you might use them.
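
       To make the slash-command intuition concrete, here is a sketch
       of a prompt being advertised and then fetched, using the
       prompts/list and prompts/get methods from the linked spec; the
       prompt name and its argument are hypothetical:

       ```python
       # A server advertises prompts much like an app exposes slash
       # commands (think "/summarize"):
       list_prompts_result = {
           "prompts": [
               {
                   "name": "summarize",  # hypothetical prompt
                   "description": "Summarize a fetched page",
                   "arguments": [{"name": "url", "required": True}],
               }
           ]
       }

       # The client then asks the server to hydrate that prompt with
       # the user-supplied arguments:
       get_prompt_request = {
           "jsonrpc": "2.0",
           "id": 3,
           "method": "prompts/get",
           "params": {
               "name": "summarize",
               "arguments": {"url": "https://example.com"},
           },
       }
       ```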
        
       | zokier wrote:
        | Does aider benefit from this? A big part of aider's special
        | sauce is the way it builds context, so this feels closely
        | related, but I don't know how the pieces would fit together
        | here
        
         | ramon156 wrote:
         | My guess is more can be done locally. Then again I only
         | understand ~2 of this and aider.
        
       | ssfrr wrote:
       | I'm a little confused as to the fundamental problem statement. It
       | seems like the idea is to create a protocol that can connect
       | arbitrary applications to arbitrary resources, which seems
       | underconstrained as a problem to solve.
       | 
       | This level of generality has been attempted before (e.g. RDF and
       | the semantic web, REST, SOAP) and I'm not sure what's
       | fundamentally different about how this problem is framed that
       | makes it more tractable.
        
       | faizshah wrote:
       | So it's basically a standardized plugin format for LLM apps and
       | thats why it doesn't support auth.
       | 
        | It's basically a standardized way to wrap your OpenAPI client with
       | a standard tool format then plug it in to your locally running AI
       | tool of choice.
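
       As a sketch of that wrapping, an OpenAPI-style operation can be
       turned into an MCP tool descriptor whose inputSchema is JSON
       Schema (per the spec); the helper and the weather operation
       below are hypothetical:

       ```python
       def tool_from_endpoint(name: str, description: str, params: dict) -> dict:
           """Wrap a plain HTTP/OpenAPI operation as an MCP-style tool
           descriptor, treating every parameter as required."""
           return {
               "name": name,
               "description": description,
               "inputSchema": {
                   "type": "object",
                   "properties": params,
                   "required": list(params),
               },
           }

       weather_tool = tool_from_endpoint(
           "get_weather",  # hypothetical operation
           "Fetch current weather for a city",
           {"city": {"type": "string"}},
       )
       print(weather_tool["inputSchema"]["required"])
       ```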
        
       | gyre007 wrote:
        | Something is telling me this _might_ turn out to be a huge deal;
        | I can't quite put my finger on what it is that makes me feel
        | that, but opening private data and tools to AI apps via an
        | open protocol just feels like a game changer.
        
         | orliesaurus wrote:
         | This is definitely a huge deal - as long as there's a good
         | developer experience - which IMHO we're not there yet!
        
           | somnium_sn wrote:
           | Any feedback on developer experience is always welcomed
           | (preferably in github discussion/issue form). It's the first
           | day in the open. We have a long long way to go and much
           | ground to cover.
        
         | MattDaEskimo wrote:
         | LLMs can potentially query _something_ and receive a concise,
         | high-signal response to facilitate communications with the
         | endpoint, similar to API documentation for us but more
         | programmatic.
         | 
         | This is huge, as long as there's a single standard and other
         | LLM providers don't try to release their own protocol. Which,
         | historically speaking, is definitely going to happen.
        
           | gyre007 wrote:
           | > This is huge, as long as there's a single standard and
           | other LLM providers don't try to release their own protocol
           | 
           | Yes, very much this; I'm mildly worried because the
           | competition in this space is huge and there is no shortage of
           | money and crazy people who could go against this.
        
       | _rupertius wrote:
       | For those interested, I've been working on something related to
       | this, Web Applets - which is a spec for creating AI-enabled
       | components that can receive actions & respond with state:
       | 
       | https://github.com/unternet-co/web-applets/
        
       ___________________________________________________________________
       (page generated 2024-11-25 23:00 UTC)