This is a perfect example of why supply chain is becoming an agent problem, or an agent governance problem. It's no longer just DevOps. We humans might notice something is a bit off during an install or upgrade. Agents can't. They'll just install whatever and keep going, often with credentials loaded and tools enabled.
So what I've found to be useful, even critical, is treating dependency changes as "authority changes." What I mean is that upgrades and new transitive deps shouldn't be in the same permissions bucket as "normal" execution. First, isolate the install/update into a separate job or identity with no access to production secrets. Second, require an explicit allowlist or signed artifact for packages in the execution environment. Third, log who/what authorized this new code to run as a first-class audit event.
If agents are going to operate the way we're trying to make them operate (unattended), then the question isn't only "was the package malicious" but also "why was any unattended actor allowed to do what it did." Isn't asking that in our best interest?
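The three-step split above could be sketched as a small gate that an install job runs before pulling anything new. This is an illustrative stdlib-only sketch; the allowlist contents and function names are hypothetical, and a real setup would back the allowlist with a signed artifact store:

```python
import json
import time

# Hypothetical allowlist of approved (package, version) pairs; in practice
# this would come from a signed artifact store or a reviewed lockfile.
ALLOWLIST = {("requests", "2.32.3"), ("rich", "13.7.1")}

def authorize_install(package: str, version: str, requested_by: str) -> bool:
    """Gate a dependency change: consult the allowlist and emit an audit
    event recording who/what authorized the new code to run."""
    approved = (package, version) in ALLOWLIST
    audit_event = {
        "ts": time.time(),
        "action": "dependency_install",
        "package": package,
        "version": version,
        "requested_by": requested_by,
        "approved": approved,
    }
    print(json.dumps(audit_event))  # in production, ship this to the audit log
    return approved
```

The install job itself would run under an identity with no production secrets; only when authorize_install returns True would the actual pip/npm install proceed.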
zlurker 1 days ago [-]
I still don't fully understand the point of MCP servers. What do they provide that a skill doesn't? Maybe I've just used too many poorly written ones.
Is there some sort of tool that can be expressed as an MCP but not as an API or CLI command? Obviously we shouldn't map existing APIs to MCP tools, but why would I use an MCP over just writing a new "agentic-ready" API route?
simonw 1 days ago [-]
I know of two benefits to MCP over Skills:
- If your agent doesn't have a full Bash-style code execution environment it can't run skills. MCP is a solid option for wiring in tools there.
- MCP can help solve authentication, keeping credentials for things in a place where the agent can't steal those credentials if it gets compromised. MCPs can also better handle access control and audit logging in a single place.
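A toy illustration of the second point, with stdlib Python standing in for an MCP server (the class name, tool name, and env var are hypothetical). The credential is read by the server process and only high-level results cross the boundary to the agent; in-process Python can't truly hide an attribute, so the real isolation comes from running the server as a separate process:

```python
import os

class IssueTrackerProxy:
    """Toy stand-in for an MCP server. It holds the credential privately
    and exposes only high-level operations, so the agent sees results but
    never the token itself."""

    def __init__(self) -> None:
        # Read once at server startup, outside the agent's environment.
        self._token = os.environ.get("TRACKER_TOKEN", "dummy-token")

    def list_issues(self, repo: str) -> list[str]:
        # A real server would call the remote API here, sending self._token
        # in an Authorization header; the response contains no credentials.
        return [f"{repo}#1: example issue"]

proxy = IssueTrackerProxy()
```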
simianwords 1 days ago [-]
I don't agree with either. Skills with an API exposed by the service solves both your problems.
The LLM can look at the OpenAPI spec and construct queries - I often do this pretty easily.
simonw 1 days ago [-]
How can you disagree with my first point? You can't use skills if you don't have a Bash environment in which to run them. Do you disagree?
Skills with an API exposed by the service usually means your coding agent can access the credentials for that service. This means that if you are hit by a prompt injection the attacker can steal those credentials.
ntonozzi 1 days ago [-]
Something like Cloudflare's Code Mode fixes both of these! No privileged bash environment, no VM necessary, no exposing credentials to the LLM.
As the article states, LLMs are fantastic at writing code, and not so good at issuing tool calls.
tbh, the fact that companies tried to make something proprietary out of this concept is probably why its adoption has been weak and why we have "MCP vs CLI/Skills/etc" debates in the first place. In contrast, CLI tools only require a general bash shell (potentially in a sandboxed environment), which is very standardised.
simianwords 1 days ago [-]
Fair points, learned something new.
mememememememo 1 days ago [-]
It creates a new problem. I need an isolated shell environment. I need to lock it down. I need containers. I need to ensure said containers are isolated and not running as root. I probably need Kubernetes to do this at scale. Etc.
Also even with above there is more opportunity for the bot to go off piste and run cat this and awk that. Meanwhile the "operator" i.e. the Grandpa who has an iPhone but never used a computer has no chance of getting the bot back on track as he tries to renew his car insurance.
"Just going to try using sed to get the output of curl https://.."
"I don't understand I just want to know the excess for not at fault incident when the other guy is uninsured".
Everyone has gone claw-brained. But it really is OK to write code, save that code to disk, and execute that code later.
You can use MCP, or even just a hard-coded API call from your back end to the service you want to use, like it's 2022.
staticassertion 1 days ago [-]
Can you explain the auth part? I feel like auth for an agent is largely a matter of either verifying its context or issuing it a JWT that's scoped to its rights, which I assume is quite similar to how any tools would work. But I'm very unfamiliar with MCP.
monkpit 1 days ago [-]
I think they’re saying you could start up the mcp and pass it creds/auth for some downstream service, and then the LLM uses the tool and has auth but doesn’t know the creds.
simonw 1 days ago [-]
Right. If you're running a CLI tool that is authenticated there's effectively no way to prevent the coding agent from accessing those credentials itself - they're visible to the process, which means they're visible to the agent.
With MCP you can at least set things up such that the agent can't access the raw credentials directly.
dcherman 8 hours ago [-]
How so? Let's use a common CLI tool as an example - kubectl. Config is generally stored in ~/.kube in a variety of config files. Running `kubectl config view` already redacts the auth information from the config. LLMs could invoke `kubectl` commands without having knowledge of how it's authenticated.
simonw 8 hours ago [-]
If the agent has permission to run `kubectl config view`, what's to stop it from reading those config files directly?
dcherman 8 hours ago [-]
The same permissions model that works for other tools. In Claude Code terms, allow `Bash(kubectl:*)`, deny `Read(**/.kube/**)`. That allows kubectl access without allowing the tool to read ~/.kube directly.
Your argument is the same for an MCP server - auth is stored somewhere on disk, what's to stop it from reading that file? The answer is the same as above.
The point I'm making here is that with an MCP you can disable shell access entirely, at which point the agent cannot read credential files that it's not meant to be able to access.
dcherman 6 hours ago [-]
You can make the identical argument for the CLI tool. Allow kubectl, deny everything else.
simonw 5 hours ago [-]
I don't understand.
My argument here is that one of the reasons to use MCP is that it allows you to build smaller agents that do not have a full code execution environment, and those agents can then use MCPs to make calls to external services without revealing those credentials to the agent.
I think we both agree that if your agent has full Bash access it can access credentials.
dcherman 5 hours ago [-]
I think the gist of what we're debating is principle of least privilege - give the LLM the fewest privileges needed to accomplish the task and no more, that way you avoid issues like leaking credentials.
The approach you're proposing is that with a well designed MCP server, you can limit the permissions for your agent to only interacting with that MCP server, essentially limiting what it can do.
My argument is that you can accomplish the identical thing with an agent by limiting access to only invoking a specific CLI tool, and nothing more.
Both of our approaches accomplish the same thing. I'm just arguing that an MCP server is not required to accomplish it.
simonw 47 minutes ago [-]
If you're "limiting access to only invoking a specific CLI tool" then yeah, that's functionally equivalent to an MCP server. Most of the work I do with tools avoids MCPs entirely because you don't need them to hook up tools using raw JSON calls to LLMs or the official provider libraries.
But... if you're going all-in on the Bash/Python/arbitrary-programming-language environments that are necessary to get Skills to work, you're going to find yourself in a position where the agent can probably read config files that you don't want it to see.
steve-atx-7600 9 hours ago [-]
Also, you can set permissions to allow and disallow specific mcp server tool calls. With a skill you’d have to do something in the shell environment to block unwanted behaviors with auth or other means in a way that isn’t declarative.
zbentley 1 days ago [-]
This is right. It’s not about scoping auth, it’s about preventing secret misuse/exfil.
(Moved from wrong sub)
JambalayaJimbo 1 days ago [-]
The MCP implementation is itself an agent right? Is that not just pushing the problem somewhere else?
Also, I run programs on my machine with a different privilege level than myself all the time. Why can’t an agent do that?
conception 1 days ago [-]
No, MCP is just a server that returns prompts to the LLM. The server can be/do whatever. You can have an echo MCP that just echoes back whatever you send it.
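For a sense of how thin such a server can be, here is a stdlib-only sketch of an "echo" tool-call handler in the JSON-RPC style MCP uses (fields simplified; a real server would also implement the initialize handshake and tools/list, over stdio or HTTP):

```python
import json

def handle_request(raw: str) -> str:
    """Toy 'echo MCP' handler: parse a JSON-RPC-style tools/call request
    and echo the text argument back as the result."""
    req = json.loads(raw)
    text = req["params"]["arguments"]["text"]
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": text})
```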
gavmor 8 hours ago [-]
Typically, no; an MCP is a deterministic program with SSE protocols.
simonw 1 days ago [-]
I define the agent as the harness that runs the LLM in a loop calling tools. The MCP implementation is one of those tools. I wouldn't call an MCP implementation an agent.
staticassertion 22 hours ago [-]
Oh. Yeah, that's neat at least. I don't think it's a big deal but that's nice enough.
throwuxiytayq 1 days ago [-]
- MCPs can be long-running processes that have state, e.g., they can maintain a persistent connection with a server or local software.
- MCPs are trivial to write and maintain - at least in my experience and language of choice - and bash scripts are cursed. But I guess you can use a different scripting language.
- Agents can pollute their context by reading the script. I want to expose a black box that just works.
dathanb82 1 days ago [-]
Skills are part of the repo, and CLIs are installed locally. In both cases it's up to you to keep them updated. MCP servers can be exposed and consumed over HTTPS, which means the MCP server owner can keep them updated for you.
Better sandboxing. Accessing an MCP server doesn't require you to give an agent permissions on your local machine.
MCP servers can expose tools, resources, and prompts. If you're using a skill, you can "install" it from a remote source by exposing it on the MCP server as a "prompt". That helps solve the "keep it updated" problem for skills - it gets updated by interrogating the MCP server again.
Or if your agentic workflow needs some data file to run, you can tell the agent to grab that from the MCP server as a resource. And since it's not a static file, the content can update dynamically -- you could read stock prices or the latest state of a JIRA ticket, etc. It's like an AI-first, dynamic content filesystem.
swingboy 1 days ago [-]
You can install skills globally so they are available in all projects.
alexwebb2 1 days ago [-]
You could get pretty far with a set of agent-focused routes mounted under e.g. an /agents path in your API.
There'd be a little extra friction compared to MCP – the agent would presumably have to find and download and read the OpenAPI/Swagger spec, and the auth story might be a little clunkier – but you could definitely do it, and I'm sure many people do.
Beyond that, there are a few concrete things MCP provides that I'm a fan of:
- first-class integration with LLM vendors/portals (Claude, ChatGPT, etc), where actual customers are frequently spending their time and attention
- UX support via the MCP Apps protocol extension (this hasn't really entered the zeitgeist yet, but I'm quite bullish on it)
- code mode (if using FastMCP)
- lots of flexibility on tool listings – it's trivial to completely show/hide tools based on access controls, versus having an AI repeatedly stumble into an API endpoint that its credentials aren't valid for
I could keep going, but the point is that while it's possible to use another tool for the job and get _something_ up and running, MCP (and FastMCP, as a great implementation) is purpose built for it, with a lot of little considerations to help out.
9dev 1 days ago [-]
If you expand your scope a bit from just developer tooling, you’ll notice a lot of scenarios where an agent running somewhere as a service may need to invoke commands elsewhere, in other apps, or maybe provided by a customer in a bring-your-own-MCP setup. In these cases, the harness is not running locally, you don’t have a filesystem to write skills on demand to (or a fixed set of skills is baked into the container), so to get extensibility or updates to tooling, you want something that avoids redeployments. MCP fills that spot.
lagrange77 12 hours ago [-]
> Obviously we shouldnt map existing apis to MCP tools
Why? It isn't obvious to me..
alexwebb2 8 hours ago [-]
Yeah, this deserves a quick explanation.
When a human is coding against a traditional API, it might be a bit annoying if the API has four or five similar-sounding endpoints that each have a dozen parameters, but it's ultimately not a showstopper. You just spend a little extra time in the API docs, do some Googling to see what people are using for similar use cases, decide which one to use (or try a couple and see which actually gets you what you want), commit it, and your script lives happily ever after.
When an AI is trying to make that decision at runtime, a set of confusing tools can easily derail it. The MCP protocol doesn't have a step that allows it to say "wait, this MCP server is badly designed, let me do some Googling to figure out which tool people are using for similar use cases". So it'll just pick whichever one seems most likely to be correct, and if it's wrong, that's wasted time and tokens before it tries the next option. Scaled up to thousands or millions of times a day, that's pretty significant.
There are a lot of MCP servers out there that are just lazy mappings from OpenAPI/Swagger specs, and it often (not always, to be fair) results in a clunky, confusing mess of tools.
IanCal 1 days ago [-]
You could write an API, document it, and then maybe add some useful prompts?
Then you’d need a way of passing all that info on to a model, so something top level.
It’d be useful to do things in the same way as others (so if everyone is adding Openapi/swagger you’d do the same if you didn’t have a reason not to).
And then you’ve just reinvented something like MCP.
It’s just a standardised format.
yoyohello13 1 days ago [-]
I built an MCP server various people in our company can use to query our various databases. I can have a service account scoped only to the non-sensitive data, and users only need to have an MCP aware agent on their computer instead of dealing with setting up drivers, DB tools, etc.
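A minimal stdlib sketch of that kind of scoped query tool, using a read-only SQLite connection plus a statement check as a stand-in for real database-level permissions (paths, schema, and function name are illustrative):

```python
import sqlite3

def run_query(db_path: str, sql: str) -> list[tuple]:
    """Run a read-only query on behalf of the agent. The SELECT check and
    the mode=ro connection stand in for a real service account whose grants
    are scoped to non-sensitive tables."""
    if not sql.lstrip().lower().startswith("select"):
        raise PermissionError("read-only tool: SELECT statements only")
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```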
paulddraper 20 hours ago [-]
Authentication, for one.
Marazan 1 days ago [-]
You can tightly constrain MCPs and shape the context that is shared back to the Agent.
A skill is, at the end of the day, just a prompt.
zapnuk 1 days ago [-]
That's just one interpretation of a skill.
A skill can also act as an abstraction layer over many tools (implemented as an MCP server) to save context tokens.
Skills offer a short description of their use and thus occupy only a few hundred tokens in the context, compared to thousands of tokens if all tools were in the context.
When the LLM decides that the skill is useful, we can dynamically load the skill's tools into the context (using a `load_skill` meta-tool).
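The `load_skill` pattern described above could be sketched like this (skill names and registry shape are hypothetical; the point is that only the one-line descriptions sit in context until a skill is expanded):

```python
# Hypothetical skill registry: each skill advertises a one-line description
# (cheap to keep in context) and only expands to its full tool list on demand.
SKILLS = {
    "pdf": {
        "description": "Read and fill PDF forms.",
        "tools": ["pdf_extract_text", "pdf_fill_form", "pdf_merge"],
    },
    "jira": {
        "description": "Query and update JIRA tickets.",
        "tools": ["jira_search", "jira_update"],
    },
}

def list_skills() -> dict[str, str]:
    """What the model sees up front: skill name -> one-line description."""
    return {name: skill["description"] for name, skill in SKILLS.items()}

def load_skill(name: str) -> list[str]:
    """The load_skill meta-tool: expand one skill into its full tool list,
    paying the context cost only when the model actually needs it."""
    return SKILLS[name]["tools"]
```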
dionian 1 days ago [-]
true but we could also integrate a non-MCP app with a skill and put the controls there.
arthurjean 1 days ago [-]
MCP earns its keep in specific cases: when the agent has no shell access, when you need to keep credentials out of the prompt context, or when you want runtime tool discovery across teams. But I've built a few MCP servers and half of them would've been simpler as a CLI script the agent calls directly.
TimTheTinker 23 hours ago [-]
I've had good results from creating a command-line bash utility (and associated skill) that wraps and supplies credentials opaquely to a cli tool.
arthurjean 14 hours ago [-]
Same here. It's not airtight (the agent could technically read the wrapper or env vars), but in practice it doesn't bother. Good enough for most setups.
cadamsdotcom 1 days ago [-]
Why wouldn’t you just give the agent a shell (and by implication a sandbox)?
Seems like unnecessarily constraining it.
arthurjean 24 hours ago [-]
Most of the time you should. But it depends on what you're wrapping. Exa is a good example of where MCP makes sense, it's not just one API call, it's 4 different tools (web search, code search, crawling, advanced search) plus embedded skills for chaining them. One MCP connection and the agent discovers all of that at runtime. Doing that with a CLI means building a multi-command script and hoping the agent figures out the orchestration.
On the other hand, something like context7 is just `npx ctx7 resolve <lib>` then `npx ctx7 docs <id>` — two stateless shell calls, done. No server to maintain, no protocol overhead. CLI is the right tool there.
TimTheTinker 23 hours ago [-]
Why not put all of that into a skill file? The context overhead from an MCP connection is significantly higher.
arthurjean 14 hours ago [-]
You're right actually. Exa's MCP server is stateless, just a REST wrapper. A skill + CLI would do the same job with way less context cost. Someone already built that (https://github.com/tobalsan/exa).
_verandaguy 1 days ago [-]
> FastMCP is the standard framework for building MCP applications
Standardized by whom?
In an era where technology exists that can lend the appearance of legitimacy to just about anyone, that kind of statement needs to be qualified.
simonw 1 days ago [-]
Yeah, calling itself "the standard framework" doesn't feel right to me, https://github.com/modelcontextprotocol is the home of the actual standard and has a bunch of libraries for this, of which FastMCP is not one.
If I recall correctly, the ‘official’ Python one is a fork of FastMCP v1 (which then removed the attribution, arguably in violation of the original software’s license). UPDATE: I was wrong about this, see the comment reply: the python-sdk in https://github.com/modelcontextprotocol is a fork of FastMCP.
There is a whole history with this, and I think it's not appropriate or fair to malign the mcp python-sdk.
My read of what happened is that the author spiked an initial implementation of 'fastmcp' on Nov 30 2024. Five days later, the author relicensed it to MIT and donated it to the python-sdk (10 days after Anthropic announced MCP):
https://github.com/PrefectHQ/fastmcp/pull/54
It was incorporated on Dec 21 2024, and hardened through the efforts of one of the python-sdk maintainers.
The author seemingly abandoned the GitHub project shortly after donating it to the python-sdk and marked it as unmaintained, and it remained so for several months (there are roughly zero commits between January and April):
https://github.com/PrefectHQ/fastmcp/issues/96
He also apparently has made almost no other contributions to the mcp python-sdk:
https://github.com/modelcontextprotocol/python-sdk/commits?a...
Many contributors to the python-sdk continued to iterate on the MCP server implementation using the name fastmcp (since it had been donated to the project), resulting in growing interest:
https://trends.google.com/explore?q=fastmcp%20&date=2024-12-...
Then around April 2025, the author, likely noticing the growing interest and stickiness of the name, decided to write a new version and start using the name fastmcp again:
https://github.com/PrefectHQ/fastmcp/graphs/contributors?fro...
The author clearly made an attempt to promote his effort:
https://www.reddit.com/r/mcp/comments/1np6dwg/fastmcp_20_is_...
This resulted in a lot of confusion among users, which persists to this day. I only looked into this last year because I was one of those users, suddenly confused about the provenance of what I was actually using versus what I thought I was using; as I looked into it, I started seeing lots of questionable Reddit comments pop up in subreddits I was reading, all evangelizing fastmcp 2.0 and using language that contributed to the confusion.
The author's interest in monetizing the fastmcp GitHub repo is understandable, and he and others have clearly put a lot of effort into iterating on his SaaS on-ramp, but the confusion arises simply because the author wanted to capitalize on the success of MCP and on the popularity of the fastmcp name, the initial growth and popularity of which was primarily driven by the effort and support of contributors to the mcp python-sdk.
Alifatisk 1 days ago [-]
The official website modelcontextprotocol refers to FastMCP in their examples:
https://modelcontextprotocol.io/docs/develop/build-server
Well it sure took "FastMCP" long enough. And the announcement lands at a time when it's looking increasingly like CLI is the preferred method vs MCP. I'm sure in a few months' time, even that will be out of date.
speedgoose 1 days ago [-]
MCP is superior to CLI by design, and it’s not even close. I don’t understand the sudden hype towards CLI for agents.
zingar 1 days ago [-]
Would you mind elaborating on the superiority you perceive?
TheMrZZ 1 days ago [-]
For MCP servers, there's no need to install potentially untrusted software on your computer. Remote MCP can do very little harm; a CLI, though? You're vulnerable to bad actors or supply chain attacks.
For client side MCP it's a different story.
needs 1 days ago [-]
Explorable by design, can be served through HTTP, OAuth integration.
speedgoose 1 days ago [-]
It has a json schema, that’s the main point. It also enforces good documentation by design. No need to get a man page or run the help command, it’s in the context. It can work remotely with authentication.
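To illustrate the schema point: MCP frameworks derive a JSON Schema for each tool from the function signature and docstring, and that is what lands in the model's context. A toy stdlib version of the derivation (type mapping simplified; real implementations also handle defaults, unions, and nested models, and `get_weather` is a hypothetical example tool):

```python
import inspect

# Simplified mapping from Python annotations to JSON Schema type names.
PY_TO_JSON = {int: "integer", str: "string", float: "number", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a JSON-Schema-like tool description from a function signature
    and docstring, roughly the way MCP frameworks build tool listings."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": PY_TO_JSON.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "inputSchema": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

def get_weather(city: str, days: int) -> str:
    """Fetch a short weather forecast for a city."""
    return f"{city}: sunny for {days} days"
```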
ramon156 1 days ago [-]
Most CLI tools have JSON support. Your arguments fall pretty flat.
I think MCP is fine in an env where you have no access to tools, but you cannot ripgrep your way through an MCP (unless you make an MCP that calls ripgrep on e.g. a repo, in which case what are you doing).
vova_hn2 1 days ago [-]
Tool calls can have JSON schema enforced on lower level (token sampling). Although, I'm not sure if major providers do it, but I don't see any reason why they wouldn't.
speedgoose 17 hours ago [-]
I am not sure I agree on your statement about most CLI tools having JSON support. First, I’m not sure it’s true. But most are for sure not coming with JSON schemas for inputs and outputs.
jlowin 11 hours ago [-]
FastMCP author here, surprised and excited to see this hit the front page! Certainly not an announcement, we’ve been around since the beginning, but we will be launching full generative apps support shortly so stay tuned.
iblaine 8 hours ago [-]
FastMCP is great and this post is overdue. What did FastMCP solve? Lots of code reduction, reduced complexity and the ability to streamline auth. Offloading the auth was a huge benefit. With FastMCP I could replace all my custom JWT validation and get an auth workflow with fewer steps.
rcarmo 15 hours ago [-]
I built https://github.com/rcarmo/umcp to be tiny _and_ fast, but this has some nice twists on the theme. Will investigate for sure (even if it seems like a much larger dependency).
whattheheckheck 1 days ago [-]
Whatever you do, do not simply map REST APIs 1:1 to MCP tools. Really think about the common workflows users want and make good abstractions for good-sized chunks of work.
Alifatisk 1 days ago [-]
Has FastMCP become the standard SDK? The docs are great, honestly way better than the official modelcontextprotocol website, where most of the pages are "under construction".