
But none of the criticisms here is specific to MCP; they apply to tool calls in general. It wouldn't matter whether the agent used a custom tool protocol, plain OpenAPI specs, etc. These issues would still exist.


> These issues would still exist.

I just explained why these issues don't apply to the LLM writing and invoking code: code can apply successive transformations to the input without feeding the intermediate results into the LLM's context. That code can read a file that would weigh in at 50,000 tokens, chain 100 functions together, and produce a line that would be 20 tokens, and the LLM will only see the final 20-token result. That really is only 20 tokens for the entire result -- the LLM never sees the 50,000 tokens from the file the program read, nor the tens of thousands of tokens of intermediate results produced between the successive transformations of the 100 functions.
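To make that concrete, here's a minimal sketch of what such an agent-written script might look like. All the function names and the log format are invented for illustration; the point is that the composed steps run entirely outside the model, and only the short final string would be returned to its context:

```python
# Hypothetical example of agent-written code. The large file contents and
# every intermediate result stay inside this process; only the short
# string from run() would ever be placed in the LLM's context.

def read_log(path):
    # Stand-in for a step that yields a huge payload (say, 50k tokens).
    with open(path) as f:
        return f.read()

def keep_errors(text):
    # First transformation: drop everything except error lines.
    return [line for line in text.splitlines() if "ERROR" in line]

def count_by_code(lines):
    # Second transformation: tally errors by their trailing status code.
    counts = {}
    for line in lines:
        code = line.split()[-1]
        counts[code] = counts.get(code, 0) + 1
    return counts

def summarize(counts):
    # Final transformation: a ~20-token summary string.
    return ", ".join(f"{k}: {v}" for k, v in sorted(counts.items()))

def run(path):
    # Compose the chain; the model sees only this return value.
    return summarize(count_by_code(keep_errors(read_log(path))))
```

With tool calls instead, each of those intermediate values would have to round-trip through the model's context.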

With MCP, there's no way for the LLM to invoke one tool call that expresses "compose/pipeline these 100 tools, please, and just give me the final result" -- the LLM must make 100 individual tool calls, manually threading the results through each tool call, which represents 100 opportunities for the LLM to make a mistake.

It sounds like you are disagreeing with what I am saying, but it doesn't look like you're giving any reason why you disagree, so I'm a bit confused.


I'm not exactly disagreeing, it's just that I don't think that's a criticism of MCP for tool calls.

You need a way to expose tools to the LLM, even for what you're talking about. The LLM can't write code without access to code-writing tools.

MCP is a protocol for exposing tools to an LLM agent in an extensible way.

If you didn't have MCP for this, the alternative wouldn't be a python script. It would be some other tool calling protocol.

Maybe a good criticism is that the protocol doesn't have a defined mechanism for piping tool calls together. It would be nice if the protocol defined a standard way for the model to say "call tool1, then pass output.foo as input.bar of tool2."
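As a rough illustration, such an extension might let the model send one pipeline request that the host executes, threading outputs between steps itself. To be clear, nothing like this exists in the MCP spec today; the request shape and the "pipeline"/"from_prev" field names below are invented:

```python
# Hypothetical pipeline request: one message from the model instead of
# N tool calls. The "from_prev" marker tells the host to thread a field
# of the previous step's result into the next step's arguments.
pipeline_request = {
    "pipeline": [
        {"tool": "tool1", "arguments": {"path": "report.txt"}},
        {"tool": "tool2", "arguments": {"bar": {"from_prev": "foo"}}},
    ]
}

def run_pipeline(request, tools):
    """Host-side executor: runs each step, wiring outputs to inputs,
    so intermediate results never touch the model's context."""
    result = None
    for step in request["pipeline"]:
        args = {}
        for name, value in step["arguments"].items():
            if isinstance(value, dict) and "from_prev" in value:
                args[name] = result[value["from_prev"]]
            else:
                args[name] = value
        result = tools[step["tool"]](**args)
    return result  # only this final value goes back to the model
```

The model would make one call and see one result, with the host doing the threading that it currently has to do itself.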

I feel that can probably be an eventual extension.

As an advanced user of coding agents that you run locally or have sandboxed yourself, you might get more power and mileage by having the agent implement scripts and execute them, or use bash/curl and so on to access, say, the standard Google Drive APIs, than by using the Google Drive MCP. I 100% agree. But that's not adequate for all users, and it's not really an alternative to supporting custom tools in an agent.



