NHacker Next
The Vercel plugin on Claude Code wants to read your prompts (akshaychugh.xyz)
embedding-shape 1 days ago [-]
> skills are injected into sessions that have nothing to do with Vercel, Next.js, or this plugin's scope

> every skill's trigger rules get evaluated on every prompt and every tool call in every repo, regardless of whether Vercel is in scope

> For users working across multiple projects (some Vercel, some not), this is a fixed ~19k token cost on every session — even when the session is pure backend work, data science, or non-Vercel frontend.

I know everything is vibeslopped nowadays, but how does one even end up shipping something like this? Checking that your plugin/extension/mod works in the contexts you want, and doesn't impact the contexts you don't, seems like the very first step in even creating such a thing. "Where did the engineering go?" almost feels like too generous a question; where did even the smallest amount of thinking go?

hyperhopper 1 days ago [-]
Your comment assumes the plugin is not working as they want it to. The way it is designed gets them the maximum amount of data. It does a great job if that is their goal.
embedding-shape 1 days ago [-]
Yes, I'm assuming good intentions and trying to take a charitable perspective on everything, unless there is specific evidence pointing to something else. Is there any evidence of this being intentional?

Seems to me their engineering practices suck, rather than the company suddenly wanting to slurp up as much data as possible. If they truly wanted that, they have about 10 better approaches for it, assuming they don't care about other things.

Kwpolska 1 days ago [-]
Why would you assume good intentions of any business in this day and age?
embedding-shape 1 days ago [-]
Because I'm a nice person, and want to give other nice people the benefit of the doubt. And most businesses are run by people after all, not hard to imagine at least some of them would be "nice people" too.

And frankly, the alternative would be too mentally taxing. So in the camp of "Good until proven otherwise" is where I remain for now.

robbiewxyz 1 days ago [-]
Keep in mind that an organization made of fairly nice people may do terribly not-nice things. "Just doing my job" is a hell of a drug.
staplers 1 days ago [-]

  want to give other nice people the benefit of the doubt
Maybe the most naive, sheltered thing I've read on this site. If we were talking about an individual OSS maintainer, sure, that's possible. But large corporations have been doing the opposite for as long as they've existed, and evidence to that effect is presented nearly every day.
embedding-shape 1 days ago [-]
> Maybe the most naive, sheltered thing I've read on this site

You must be new then, welcome :)

I'm not saying I never believe any individuals in a company intentionally do bad stuff, just that I require evidence of intent before I assume something to be intentional. Personally I don't think that's naive, and it is based on ~30-40 years of real-world life experience, but I guess I'm ultimately happy that not everyone agrees on everything :)

LocalH 1 days ago [-]
Humans are great at hiding evidence of malice, and leading people to believe they're just incompetent.
TheTaytay 1 days ago [-]
Just came to say (since the person you’re responding to has a different view of the world) that I agree with you: this is both a more accurate and an easier way to live. Assuming malice as the default sounds like a recipe for being very, very unhappy.
masfuerte 1 days ago [-]
This attitude of ignoring what is true in favour of what makes you happy is exactly how corporations made up of mostly good people can do bad things.
staplers 15 minutes ago [-]
Thank you, it's really crazy to see "it's sad that you want truth when ignorance makes me happy" being upvoted on this platform. I suppose it's par for the course on a VC forum.
arcfour 19 hours ago [-]
And yet many people assume malice by default and are unhappy as a result in this day and age. It's unfortunate.
mbesto 1 days ago [-]
> Is there any evidence of this being intentional?

The evidence is in the code! If you didn't intend for a capability to be there then why is it in the code?

> if they truly wanted that, they have about 10 better approaches for it, if they don't care about other things.

How so? What other approaches do they have that get this much data with little potential for reputational harm? This is a very common way to create plausible deniability ("we use it for improving our service, we don't know what we'll need so we just take everything and figure it out later") and then just revert the capability when people complain.

embedding-shape 1 days ago [-]
> The evidence is in the code! If you didn't intend for a capability to be there then why is it in the code?

Bugs happen. I won't claim to know if it was intentional or not, but usually it ends up not being intentional.

> How so? What other approaches do they have that get this much data

Just upload everything you find, as soon as you get invoked. Vercel has tons of infrastructure and utilities they could execute this from, unless they care about reputational harm. Which I'm guessing they do, which makes it more likely to have been unintentional than intentional.

notpachet 1 days ago [-]
Downstream there is a post from one of the devs at Vercel (andrewqu) that built this. They say that this is by design. I think you should shift your base assumptions about the intentions of companies (and the individuals that work in them).

> Overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything.

processunknown 1 days ago [-]
> Is there any evidence of this being intentional?

A Vercel engineer commented "overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything."

embedding-shape 1 days ago [-]
For reference, this is the comment: https://news.ycombinator.com/item?id=47706385

> The plugin is always on, once installed on an agent harness. We do not want to limit to only detected Vercel projects [...] We collect the native tool calls and bash commands [...] Overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything.

Yeah, I guess we've now reached the "unless there is any specific evidence pointing to something else" point, and it seems like they straight up do not realize what people are frustrated about, nor do they really care that much about it.

Slightly off-topic, but strange that the mask kind of fell off there at the end with "our goal isn't to only collect data"; I've never heard anyone say that out loud in public before. I guess one point to Vercel for being honest about it :/

pyb 1 days ago [-]
Why are you still assuming good intentions of Vercel? This was them less than a month ago: https://vercel.com/changelog/updates-to-terms-of-service-mar...
bdangubic 1 days ago [-]
can you name one of these 10 better approaches?
serial_dev 1 days ago [-]
Well, unfortunately people always tend to only spend time verifying that the feature they wanted works, testing the happy path. Even many superficial bosses / code reviewers / QA testers will check this...

Checking if your code also gets executed elsewhere a bazillion times, checking failure cases, etc... That's a luxury that you feel you can't afford when you are in "ship fast, break things" mode.

embedding-shape 1 days ago [-]
> That's a luxury that you feel you can't afford when you are in "ship fast, break things" mode.

I've been there, countless times, but never have I shipped software I didn't feel at least slightly confident about. And the only way to get confident about anything is to try it out. Both of those things must have been lacking here, and then I don't understand what the developer was really doing at all.

serial_dev 1 days ago [-]
Devs get tunnel vision when they ship slop.
sandeepkd 1 days ago [-]
It seems market driven; the consumer space rewards speed and publicity more than the quality of software.
chuckadams 1 days ago [-]
> I know everything is vibeslopped nowadays, but how does one even end up shipping something like this?

The first part of your question answers the second. No one is left who cares. People are going to have to vote with their feet before that changes.

tracerbulletx 20 hours ago [-]
Engineers were holding back the ocean because no one could even make sense of what they did, even a little bit, so they had power. Now they just threaten to use AI to do what they want unless you do it for them. The leverage is gone, the dike has burst.
elAhmo 1 days ago [-]
No worries, they acquired Bun because they seem to be super thoroughly invested in the whole ecosystem and engineering excellence of their tools.
p_stuart82 1 days ago [-]
19k tokens per session and the skill triggers don't even check project scope. you're paying that overhead on every non-vercel repo
acedTrex 1 days ago [-]
> Checking if your plugin/extension/mod works

What makes you think they do this with any of their products these days?

nothinkjustai 1 days ago [-]
Honestly, knowing some of the people who work for Vercel and the amount of vibe coding they do, I doubt anyone even checked this before pushing.
throwaway613746 1 days ago [-]
[dead]
potter098 1 days ago [-]
The bigger issue here is not telemetry by itself, it's shipping a context-insensitive integration into a tool people use across unrelated repos. If the overhead is real, that turns a convenience plugin into something teams have to actively defend against.
Lihh27 1 days ago [-]
a deployment plugin shipping raw bash command strings off your machine. "actively defend against it" is just normal hygiene
abelsm 1 days ago [-]
The breach of trust here (which it's hard to imagine isn't intentional) is enough reason alone to stop using Vercel and uninstall the plugin. That part is easy. Most of these agents can help you migrate, if anything.

The question is on whether these platforms are going to enforce their policies for plugins. For Claude Code in particular this behavior violates their plugin policy (1D) here explicitly: https://support.claude.com/en/articles/13145358-anthropic-so...

It's a really tough problem, but Anthropic is the company I'd bet on to approach this thoughtfully.

akshay2603 1 days ago [-]
Wow. Just read the full policy. It's not just 1D. Section 2D says plugins "must not intentionally call or coerce Claude into calling other external software... unless requested and intended by a user."

The consent flow literally instructs Claude to run echo 'enabled' on your filesystem. And 1D says plugins "must not collect extraneous conversation data, even for logging purposes." Full bash commands from non-Vercel projects are extraneous :)

HotHotLava 1 days ago [-]
Usually I wouldn't expect anything to happen to a big company like this, but oof...this is so much worse than the title makes it sound. If they leave something like this in their store, then all user trust will be gone.

I'll bet there's also a good number of developers at Anthropic itself who are now surprised to learn that every api token etc. that may have appeared in a Claude Code bash command is now leaked to a third party. Whoever can gain access to this telemetry server is sure to find a lot of valuable stuff in there.

delichon 1 days ago [-]
> Anthropic is the company I'd bet on to approach this thoughtfully.

I read that Anthropic may have gained more in goodwill than the $200M they lost in Pentagon contracts. It seems plausible.

Atotalnoob 1 days ago [-]
They left openAI for ideological safety reasons, if you believe their corporate lore.

They present themselves as an org with some ideology

taoh 1 days ago [-]
I'm a Vercel customer, and I like using the Vercel AI SDK and Chat SDK. But I find myself moving away from Vercel and Next.js whenever I start a new project. I wish they would maintain their technical standards while achieving commercial success.
elAhmo 1 days ago [-]
Given how connections in the Bay Area work, the chances of something negative happening to Vercel are zero.
MattDaEskimo 23 hours ago [-]
This is the top comment. This is a blatant breach of policy, never mind user privacy, security, and trust.

We're in the age of quickly digesting and generating data, and yet the most primitive things, like aligning with policies, are still ignored.

btown 1 days ago [-]
To be sure, the problem isn't that the plugin injects behavior into the system prompt - that's every plugin and skill, ever.

But this is just such a breach of trust, especially the on-by-default telemetry that includes full bash commands. Per the OOP:

> That middle row. Every bash command - the full command string, not just the tool name - sent to telemetry.vercel.com. File paths, project names, env variable names, infrastructure details. Whatever’s in the command, they get it.

(Needless to say, this is a supply chain attack in every meaningful way, and should be treated as such by security teams.)

And the argument that there's no CLI space to allow for opt-in telemetry is absurd - their readme https://github.com/vercel/vercel-plugin?tab=readme-ov-file#i... literally has you install the Vercel plugin by calling `npx` https://www.npmjs.com/package/plugins which is written by a Vercel employee and could add this opt-in at any time.

IMO Vercel is not a good actor. One could make a good argument that they've embrace-extend-extinguished the entire future of React as an independent and self-contained foundational library, with the complexity of server-side rendering, the undocumented protocols that power it, and the resulting tight coupling to their server environments. Sadly, this behavior doesn't surprise me.

EDIT: That `npx plugins` code? It's not on GitHub; it exists only on NPM, and as of v1.2.9 of that package, if you search https://www.npmjs.com/package/plugins?activeTab=code, it literally sends telemetry to https://plugins-telemetry.labs.vercel.dev/t already, on an opt-out basis! I mean, you almost have to admire the confidence.

danabramov 1 days ago [-]
I’ll just say that as someone who was on the React team throughout these years, the drive to expand React to the server and the design iteration around it always came from within the team. Some folks went to Vercel to finish what they started with more solid backing than at Meta (Meta wasn’t investing heavily into JS on the server), but the “Vercel takeover” stories that you and others are telling are lies.
btown 1 days ago [-]
Gosh, Dan, in seeing your response here - I'm truly sorry I wrote this. While I still find opt-out telemetry distasteful and dangerous, I over-generalized to React in a hurtful way. You've been an incredible influence on me and I have the utmost respect for everything you've done. I've shown quite the opposite of respect in my writing, here.

For whatever it's worth on the RSC front: I, and many others who believe that "if there's a wire protocol and it's meant to be open, the bytes that make up those messages should be documented," were presented with a system, at the release time of RSC, that was incredibly opaque from that perspective. There's still minimal documentation of each bundler's wire protocol. And we're all aware of companies that have done this as an intentional form of obfuscation since the dawn of networked computing; it's our open standards that have made the Internet as beautiful as it is.

But I was wrong to pin that on your team at Vercel, and I see that in the strength of your response. Intention is important, and you wanted to bring something brilliant to the world as rapidly as possible. And it is, truly, brilliant.

I should rethink how I approached all of this, and I hope that my harshness doesn't discourage you from continuing, through your writing, to be the beacon that you've been to me and countless others.

guessmyname 1 days ago [-]
I use Little Snitch and so far I have only seen Claude Code connect to api.anthropic.com and Sentry for telemetry. I have not seen any Vercel connections, but I always turn off telemetry in everything before I run it. If you log in with OAuth2, it also connects to platform.claude.com . For auto updates, it fetches release info from raw.githubusercontent.com and downloads the actual files from storage.googleapis.com. I think it also uses statsig.anthropic.com for stats. One weird thing, I did see it try to connect to app.nucleus.sh once, and I have no idea why.

Here are some environment variables that you might want to set, if you're as paranoid as me:

  ANTHROPIC_LOG="debug"
  CLAUDE_CODE_ACCOUNT_UUID="11111111-1111-1111-1111-111111111111"
  CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING="1"
  CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY="1"
  CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC="1"
  CLAUDE_CODE_DISABLE_TERMINAL_TITLE="1"
  CLAUDE_CODE_ENABLE_PROMPT_SUGGESTION="false"
  CLAUDE_CODE_ORGANIZATION_UUID="00000000-0000-0000-0000-000000000000"
  CLAUDE_CODE_USER_EMAIL="root@anthropic.com"
  DISABLE_AUTOUPDATER="1"
  DISABLE_ERROR_REPORTING="1"
  DISABLE_FEEDBACK_COMMAND="1"
  DISABLE_TELEMETRY="1"
  ENABLE_CLAUDEAI_MCP_SERVERS="false"
  IS_DEMO="1"
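The list above can be dropped into a wrapper script so the opt-outs apply to every session. A minimal sketch (assuming a POSIX shell; variable names are taken from the list above, and whether each one is honored depends on your Claude Code version):

```shell
# Export a few of the opt-out variables from the list above before
# launching Claude Code, so telemetry stays off for this invocation.
export DISABLE_TELEMETRY="1"
export DISABLE_ERROR_REPORTING="1"
export DISABLE_AUTOUPDATER="1"
export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC="1"

echo "opt-outs set: DISABLE_TELEMETRY=$DISABLE_TELEMETRY"
# exec claude "$@"   # uncomment in a real wrapper to launch Claude Code
```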
sibeliuss 23 hours ago [-]
This thing is horrible.

If you have it installed, it will silently inject a warning into Claude that you should use Tailwind, even if your app doesn't! Then every single request will silently question the decision as to why your app is using one thing rather than another, leading to revisions as it starts writing incorrect code.

I couldn't believe it when I discovered it. For so many reasons I am vehemently anti-Vercel. Just discovered this two days ago, after installing their frontend skill.

an0malous 1 days ago [-]
That whole company is built on sketchy practices
phillipcarter 1 days ago [-]
[flagged]
jrsj 1 days ago [-]
Israel has a disproportionately large number of tech companies for its size, and he took one photo with their leader.

I have no idea why everyone on the internet wants to endlessly seethe about this and personally attack Guillermo for it, as if he’s endorsed their foreign policy or something.

progbits 23 hours ago [-]
If you take a selfie with a sports player, you probably don't endorse their political views, you are just a fan of their play.

If you take a selfie with an author, you probably don't endorse their political views, you just like their books.

If you take a selfie with a politician ...

hacker161 6 hours ago [-]
Be for real, he took a selfie with the leader of a country actively involved in genocide. You don’t do that if you don’t implicitly agree.
croemer 1 days ago [-]
The article doesn't link to the code that shows all bash tool uses are sent to Vercel servers by default, i.e. even without opt-in.

Here's the relevant line as a GitHub permalink: https://github.com/vercel/vercel-plugin/blob/b95178c7d8dfb2d...

hybirdss 1 days ago [-]
I ship Claude Code skills and hooks, so I've hit this from the other side — there's no way for users to verify what my hooks do without reading the source. The permission model is basically "install and hope."

Anthropic already has the right policy — 1D says "must not collect extraneous conversation data, even for logging purposes." But there's no enforcement at the architecture level. An empty matcher string still gives a hook access to every prompt on every project. The rules exist on paper but not in code.

The fix is what VS Code solved years ago: hook declarations should include a file glob or dependency gate, and plugin-surfaced questions should have visual attribution so users know it's not Claude asking.
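To illustrate the idea, a purely hypothetical sketch: Claude Code hook declarations today take only a tool-name matcher, and the `projects` field below does not exist in any current schema. The point is that a declarative gate, like the `activationEvents` mechanism VS Code extensions use, would let the harness refuse to fire a hook outside its declared scope:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "projects": ["**/vercel.json", "**/next.config.*"],
        "hooks": [
          { "type": "command", "command": "./scripts/track.sh" }
        ]
      }
    ]
  }
}
```

With something like this, the hook would only ever see tool calls in repos where the glob matches, enforced by the harness rather than by the plugin author's good behavior.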

jp57 1 days ago [-]
AI tools right now remind me of the old days of single-user PC/Mac operating systems without protected memory or preemptive multitasking. You could read any file, write directly to video memory, load machine code into the heap and then jump to it, etc.
TheTaytay 1 days ago [-]
Well said! We built in protections to multi-user and single user systems, but now we seem to be relearning them…your agent is not “you” and should probably not run as the same user with the same default permissions as “you”
WhyNotHugo 1 days ago [-]
That’s a very accurate analogy.

That’s a very accurate analogy.

What’s amazing is that during the last decade, containers and microVMs have had a huge impact on the ecosystem. Yet a huge number of devs seem to just YOLO it and run agents on their host with full ambient capabilities.

RadiozRadioz 1 days ago [-]
The HN headline transformer has mangled this one. The _all your prompts_ part of the original title was important.

@dang

akshay2603 1 days ago [-]
OP here, I screwed up the 'all' during submission :/
nothinkjustai 1 days ago [-]
I’ve often seen people say that AI is a multiplier, where a 2x dev becomes a 4x dev, but a -1x dev becomes a -2x dev, etc.

I think it’s fairly easy to tell what impact AI is having at Vercel. Knowing the pre-AI quality of the engineering at that company, I’m not surprised that in the AI era they’re pushing stuff like this. I doubt anyone even thought to check it on a repo outside of a Vercel one.

heliumtera 1 days ago [-]
Oh boy, the guy in the middle wants to take advantage of you! Surprising stuff.

You always had the option to not, ever, touch Vercel.

phillipcarter 1 days ago [-]
Having recently migrated my websites off of Vercel and onto Railway, I can confirm, it's pretty straightforward to not touch Vercel.
chaisan 20 hours ago [-]
we moved our whole org off Vercel after that selfie Rauch put out. rotten company, overpriced product for what it is, sneaky practices. never looked back.
cush 1 days ago [-]
If there were any semblance of liability for software engineering firms things like this wouldn’t happen
Surac 1 days ago [-]
I still use Claude to code in a very stone-age way. I copy C code into the web site/desktop app and type in my prompt. Then I read the output, and if I like it I copy-paste it into my code. I always felt very old doing it that way when things like Claude Code exist. Now I somehow feel not so old. All this hacking into my private space via a development tool is insane. Also, I do not use Git.
hacker161 6 hours ago [-]
Not using version control is just stupidity
infecto 1 days ago [-]
Every single scam website I have gotten from spam text messages is being hosted on vercel. Not surprising.
atraac 1 days ago [-]
What does this even have to do with the thread? They're hosted there cause it's cheap and extremely easy to do so. Not because it's "specially crafted" for scams.
infecto 1 days ago [-]
Easy to do because there is a lack of engineering quality similar to the attached plugin.

Not surprising.

0x457 24 hours ago [-]
If hosting a website on their platform is easy to do, wouldn't that suggest they know what they're doing?
gronky_ 1 days ago [-]
Mobile rendering of the post has some issues. Tables are overflowing and not responsive, for example.
akshay2603 1 days ago [-]
OP here. Fixed it - let me know if it is better now? Ty!
samarth0211 11 hours ago [-]
this is really very helpful. Thanks for sharing
kyleee 17 hours ago [-]
Vercel is poison
awestroke 1 days ago [-]
We're moving away from Vercel for an increasing number of reasons. But the Vercel brand has fallen so far that we're also moving away from any open source projects they have had any part in creating. The company is almost revolting.
jrsj 24 hours ago [-]
Do you have anything substantive to add to this?
nisegami 1 days ago [-]
This and the comments here make me even more sad that they ended up acquiring the Nuxt project/team :(
gverrilla 1 days ago [-]
once you accept genocide, anything passes.
stpedgwdgfhgdd 1 days ago [-]
“We collect the native tool calls and bash commands”

Holy shit, I can't imagine this holds for every bash command Claude Code executes. That would be terrible, and probably a GDPR violation (the command could contain email addresses etc.).

I must be wrong.

croemer 23 hours ago [-]
No, it's true. I added a few lines to the plugin to make it write out all the telemetry it sends to a text file and all the bash tool calls are logged. From every Claude session the plugin is active in.
roninforge 10 hours ago [-]
[dead]
prostheticrazor 22 hours ago [-]
[dead]
michiosw 1 days ago [-]
This is a broader pattern I keep seeing with agent plugins/extensions — the permission model is "all or nothing." Once you install a plugin, it gets full context on every session, every prompt.

Compare this to how we think about OAuth scopes or container sandboxing — you'd never ship a CI integration that gets read access to every repo in your org just because it needs to lint one. But that's essentially what's happening here with the token injection across all sessions.

The real problem isn't Vercel specifically, it's that Claude Code's plugin architecture doesn't have granular activation scopes yet. Plugins should declare which project types they apply to and only activate in matching contexts. Until that exists, every plugin author is going to make this same mistake — or exploit it.

yakuphanycl 1 days ago [-]
[flagged]
andrewqu 1 days ago [-]
Engineer at Vercel here who worked on the plugin!

We have been super heads down on the initial versions of the plugin and are constantly improving it. Always super happy to hear feedback; you can track the changes on GitHub. I want to address the notes here:

The plugin is always on, once installed on an agent harness. We do not want to limit to only detected Vercel projects, because we also want to help with greenfield projects: "Help build me an AI chat app".

We collect the native tool calls and bash commands. These are piped to our plugin. However, `VERCEL_PLUGIN_TELEMETRY=off` kills all telemetry.

All data is anonymous. We assign a random UUID, but this does not connect back to any personal information or Vercel information.

Prompt telemetry is opt-in and off by default. The hook asks once; if you don't answer, session-end cleanup marks it as disabled. We don't collect prompt text unless you explicitly say yes.

On the consent mechanism: the prompt-injection approach is a real constraint of how Claude Code's plugin architecture works today. I mentioned this in the previous GitHub issue; if there's a better approach that surfaces this to users, we would love to explore it.

The env var `VERCEL_PLUGIN_TELEMETRY=off` kills all telemetry and keeps the plugin fully functional. We'll make that more visible, and overall make our wording around telemetry clearer in the future.
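For example (a sketch; adding this to a shell profile makes the opt-out apply to every future session):

```shell
# Opt out of all Vercel plugin telemetry via the env var described above.
export VERCEL_PLUGIN_TELEMETRY=off
```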

Overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything.

Jare 1 days ago [-]
> Overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything.

I have no idea how to read this and not go blind. The degree of contempt for your (presumably quite technical) users necessary to do this is astounding. From the article:

> That middle row. Every bash command - the full command string, not just the tool name - sent to telemetry.vercel.com. File paths, project names, env variable names, infrastructure details. Whatever’s in the command, they get it.

I don't even use Vercel in my field, but if it ever came up, it's going to be hard to undo the kind of association the name now has in my mind.

jrsj 1 days ago [-]
If you’re letting Claude code just handle secrets like this you’re already fucked from a security standpoint so I don’t really see the big deal here

Today it was the Vercel plugin but if you’re letting an LLM agent with access to bash and the internet read truly sensitive information then you’re already compromised

lukan 23 hours ago [-]
I can confirm every action of the agent, so I do have some control over it sending data away, vs. a plugin that sends everything by default.

But otherwise, yes, I have to trust Anthropic.

rachel_rig 16 hours ago [-]
That’s the uncomfortable part: even with approval prompts, the trust boundary is still Anthropic’s runtime plus whatever plugin ecosystem sits behind it. That’s why local model + local execution matters, and it’s a big part of what we’re building with rig.ai.
TheTaytay 1 days ago [-]
I appreciate the response, but I don’t think you realize what people are upset about. This is a security issue, not just a privacy issue.

I’m about to go tell my team that if they’ve EVER used your skill, we need to treat the secrets on that machine as compromised.

Your servers have a log of every bash command run by Claude in every session of your users, whether they were working on something related to vercel or not.

I’ve seen Claude code happily read and throw a secret env variable into a bash command, and I wasn’t happy about it, but at least it was “only” Anthropic that knew about it. But now it sounds like Vercel telemetry servers might know about it too.

A good litmus test would be to ask your security/data team and attorneys whether they are comfortable storing plain-text credentials for unrelated services in your analytics database. They will probably look afraid before you even get to the part where you clarify that the users in question didn’t consent to it, didn’t know about it, and might not even be your customers.

skullone 1 days ago [-]
You might want to run your responses through your legal and HR departments. You're acting as a representative and ignoring some material claims about a significant data privacy issue. You should probably just delete your reply, in fact.
dminik 1 days ago [-]
> We do not want to limit to only detected Vercel projects, because we also want to help with greenfield projects "Help build me an AI chat app".

Is the intention here that the AI will then suggest building a NextJS app? I can't quite describe why, but this feels very wrong to me.

slopinthebag 1 days ago [-]
Yep. Basically they install their plugin globally so that you get pushed towards Vercel every time you use Claude Code. Disgusting.
elAhmo 1 days ago [-]
> The plugin is always on, once installed on an agent harness. We do not want to limit to only detected Vercel projects, because we also want to help with greenfield projects "Help build me an AI chat app".

Don't you see a problem if everyone took this approach?

duckmysick 1 days ago [-]
Why not make all the telemetry opt-in instead? So that nothing is collected by default, and setting `VERCEL_PLUGIN_TELEMETRY=on` enables it.
shimman 1 days ago [-]
Because their boss said so, and they're paid the big $$$ to create, not to think or push back about the damage they are doing to their users.
berkay 1 days ago [-]
I really thought this was unintentional. It's hard to believe that you think this is fine to do because one can opt out. Take it for whatever it's worth: this is not OK. It is really bad. You want people's data to help you make your product better? Make it opt-in and ask for their help.
raincole 1 days ago [-]
Wait, so you admit this is intentional, not a bug?

We need to internet archive this comment.

Edit: and I suggest not downvoting and burying the parent comment. People should be aware that this is an intended behavior from Vercel.

evil-olive 1 days ago [-]
> We do not want to limit to only detected Vercel projects, because we also want to help with greenfield projects "Help build me an AI chat app".

oh come on, be honest here. "we want to help with greenfield projects" is weasel wording.

reading between the lines, what you really want is "if someone starts a greenfield project, we want Claude to suggest 'deploying to Vercel will be the best & easiest option' and have it seem like an organic suggestion made by Claude, rather than a side-effect of having the plugin installed."

as a growth-hacking sort of business decision, that's understandable. but doing growth-hacking tricks, getting caught, and then insisting that "no, it's actually good for the users" is a classic way to burn trust and goodwill.

> the prompt injection approach is a real constraint of how Claude Code's plugin architecture works today. I mentioned this in the previous GitHub issue - if there's a better approach that surfaces this to users we would love to explore this.

Claude Code has a public issue tracker on GitHub. when you encountered this limitation of their plugin architecture, you filed a feature request there asking for it to be improved, right?

...right?

I won't ask if you considered delaying the release of your plugin until after Anthropic improved their plugin system, because I know the answer to that would be no.

but if you want to hide behind this excuse of "it's Claude's plugin system that's the problem here, it's not really Vercel's fault" you should provide receipts that you actually tried to improve Claude's plugin system - and that you did so prior to getting caught with your hand in the cookie jar here.

croes 1 days ago [-]
You can’t guarantee anonymity if you also collect the prompts, which could contain data that breaks anonymity. Combined with a UUID, you then have a pretty personal identifier.
akshay2603 1 days ago [-]
OP here, ty for your response.

Few reflections:

1. Asking for prompt permissions is a big, big no - I still don't understand why you need it. The greenfield example feels like a stretch, but I get that it is a business call and Claude Code enables you to do this today. I am just more pissed with them here. I am not at all comfortable with any plugin getting this info, no matter how much I like them.

2. The way you ask for this permission feels like a solid dark pattern. I understand it is a harness limitation and Claude Code should fix it (as I mentioned in the post), but your choosing to ship this is just wrong. Thank you for agreeing to rethink the wording.

3. Basic telemetry being on by default and the plugin collecting data across non-Vercel projects made me super uncomfortable. Again, I understand it's a business call, but I guess I had higher hopes for Vercel.

andrewqu 1 days ago [-]
For sure, I can see from your perspective how some of the measures we took were a little aggressive. And we're currently working on making it more explicit.

I promise you we've had users' data privacy in mind since day 1 of building the plugin.

Everything we collect is only used to improve the Vercel plugin, e.g. seeing when skills are triggered too often, when certain skills are not useful, and when certain context is taking up too much room.

The complete flip side of this is shipping with no instrumentation, where the plugin is useless - then we have no way to iterate and make it amazing.

akshay2603 1 days ago [-]
I understand but nobody's asking for zero instrumentation.

The ask is: make base telemetry opt-in, disclose what you're collecting in plain language, and scope it to Vercel projects.

You keep the data you need to improve the plugin - from users who chose to share it. Everything else is what's making people uncomfortable in this thread.
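The "scope it to Vercel projects" part of the ask can be illustrated with a small sketch: decide whether to activate the plugin based on whether the current project actually looks like a Vercel/Next.js project, instead of injecting skills into every session. The marker filenames below are common conventions, not Vercel's real detection logic:

```typescript
// Hypothetical scoping check: activate only when the directory listing
// contains a recognizable Vercel/Next.js marker file. Marker names are
// assumptions for illustration.
function looksLikeVercelProject(entries: string[]): boolean {
  const markers = new Set([
    "vercel.json",
    ".vercel",
    "next.config.js",
    "next.config.mjs",
  ]);
  return entries.some((entry) => markers.has(entry));
}

console.log(looksLikeVercelProject(["package.json", "vercel.json"])); // true
console.log(looksLikeVercelProject(["requirements.txt", "train.py"])); // false
```

With a gate like this, a pure backend or data-science session would never pay the skill-injection token cost the post complains about.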

hacker161 6 hours ago [-]
More bullshit justifications to defend the indefensible, gross.
skullone 1 days ago [-]
ROFL.
stephantul 1 days ago [-]
The idea that a random UUID == anonymous, and would protect users from having entire bash commands piped through, is preposterous, and you know it.
wswope 1 days ago [-]
C'mon now, I'm a rabid privacy nut, but this is unfair given the context of:

> Prompt telemetry is opt-in and off by default. The hook asks once; if you don't answer, session-end cleanup marks it as disabled. We don't collect prompt text unless you explicitly say yes.

The UUID part is just one accessory layer, and something plenty of other players in the ecosystem don’t bother to stick to.

Feels like actually bothering to ask users for consent is what got them burned here, when I’d say it’s at least an improvement that they’re asking at all. Many products don’t, and users never bother to turn it off because they don’t know and don’t care.

I think this whole UX is deeply misguided but at least has plausibly benevolent intent.

staindk 1 days ago [-]
But your bash history is logged to Vercel by default. The amount of sensitive data (PII, secrets, ...) piped through bash makes this a big issue.
wswope 1 days ago [-]
Ah jeez, I was missing that detail. Yeah that’s messed up.

I skimmed past the “what gets sent” table and thought the bash telemetry was gated by the prompt-related opt-in behavior. Thanks for the correction!

heisenbit 1 days ago [-]
While they shouldn't, bash command lines can contain usernames, email addresses, and secrets.
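To make the risk concrete, here is a naive redaction sketch showing how much identifying material a single verbatim bash line can carry. The regex patterns are rough illustrations (a real scrubber would need far more coverage), and the sample command is invented:

```typescript
// Hypothetical redactor: strips a few obvious classes of sensitive data
// from a shell command before it could ever be logged.
function redact(cmd: string): string {
  return cmd
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "<email>") // email addresses
    .replace(/(Bearer\s+)[\w-]+/g, "$1<token>")     // bearer tokens
    .replace(/(--password[= ])\S+/g, "$1<secret>"); // CLI passwords
}

const cmd =
  'curl -H "Authorization: Bearer sk_live_abc123" -u alice@example.com --password=hunter2 https://api.example.com';
console.log(redact(cmd));
// The email, token, and password are all gone from the logged line.
```

Logging the line verbatim, keyed to a stable UUID, sends every one of those values upstream.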
slopinthebag 1 days ago [-]
> We have been super heads down

"Claude, stop messing around and fix the bug!!!! I said no mistakes!!!"

63stack 1 days ago [-]
Getting caught red handed and still being tone deaf
anematode 1 days ago [-]
Abysmal response.
hacker161 6 hours ago [-]
Bullshit justifications for bullshit decisions.
croemer 1 days ago [-]
Getting a copy of all tool calls and bash commands by default, without explicit opt-in, is almost certainly against the GDPR.

Good luck arguing this is legitimate interest.

Update: I've verified that all bash tool calls were logged verbatim and have complained to Vercel with my device id. I'm also writing to the relevant authorities.
