NHacker Next
Show HN: ProofShot – Give AI coding agents eyes to verify the UI they build (github.com)
TyrunDemeg101 1 hours ago [-]
Hey jberthom, this is awesome!

How do you handle logged in sessions/user authentication?

I built something much much more primitive, but I have it actually storing session data in the local project folder and then re-using those cookies so the agent can log in without issue.

alkonaut 1 days ago [-]
This would be _extremely_ valuable for desktop dev where you don't have a DOM and no "accessibility" layer to interrogate. Think e.g. a drawing application. You want to test that after the user starts the "draw circle" command and clicks two points, there is actually a circle on the screen. No matter how many abstractions you build over your domain model and rendering, you can't actually test that "the user sees a circle". You can verify your drawing contains a circle object. You can verify your renderer was told to draw a circle. But fifty things can go wrong before the user actually agrees he saw a circle (the color was set to transparent, the layer was hidden, the transform was incorrect, the renderer didn't swap buffers, ...).
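The check described above can be sketched in pure Python: assert on the rendered pixels rather than the domain model. Everything here (the flat pixel buffer, `draw_circle`, `user_sees_circle`) is a hypothetical illustration, not any real framework's API.

```python
# Hedged sketch: verify a circle is actually *visible* in the rendered
# output, not merely present in the drawing model. Catches transparent
# colors, hidden layers, bad transforms: anything upstream of the pixels.
import math

WIDTH, HEIGHT = 100, 100
BG = (255, 255, 255)  # background color

def draw_circle(pixels, cx, cy, r, color):
    """Rasterize a thin circle outline into a flat row-major buffer."""
    for deg in range(360):
        a = math.radians(deg)
        x = int(cx + r * math.cos(a))
        y = int(cy + r * math.sin(a))
        if 0 <= x < WIDTH and 0 <= y < HEIGHT:
            pixels[y * WIDTH + x] = color

def user_sees_circle(pixels, cx, cy, r, tolerance=2):
    """Sample points on the expected circle and confirm most of them
    differ from the background within a small neighborhood."""
    samples = 36
    hits = 0
    for step in range(samples):
        a = math.radians(step * 10)
        x = int(cx + r * math.cos(a))
        y = int(cy + r * math.sin(a))
        found = any(
            pixels[yy * WIDTH + xx] != BG
            for xx in range(max(0, x - tolerance), min(WIDTH, x + tolerance + 1))
            for yy in range(max(0, y - tolerance), min(HEIGHT, y + tolerance + 1))
        )
        hits += found
    return hits / samples > 0.9

pixels = [BG] * (WIDTH * HEIGHT)
assert not user_sees_circle(pixels, 50, 50, 20)  # blank screen: nothing visible
draw_circle(pixels, 50, 50, 20, (0, 0, 0))
assert user_sees_circle(pixels, 50, 50, 20)      # now the user "sees" it
```

A real harness would grab the framebuffer via an OS-level screenshot instead of a synthetic buffer, but the assertion stays the same: test the output, not the model.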
hasbot 23 hours ago [-]
I had Claude build a backdoor command port in the Godot application I'm working on. Using commands, Claude can interact with the screen, dump the node tree, and take screenshots. It works pretty well. Claude will definitely iterate over layout issues.
sedatesteak 18 hours ago [-]
Have you written this up anywhere? I have dropped my projects due to work/family commitments but see this as potentially removing some of the friction involved.
hasbot 3 hours ago [-]
No. I just told Claude to do it and after a couple of iterations it was working.
jberthom 17 hours ago [-]
Yes, agreed. Web only for now since it runs on headless Chromium. Desktop and mobile are the #1 request though. For mobile the path would be driving an iOS Simulator or Android emulator. For native desktop, probably accessibility APIs or OS-level screenshots. Definitely on my radar; I'll see if anyone wants to contribute since I am doing this in my free time.
bartwaardenburg 1 days ago [-]
This is a good point. For anything without a DOM, screenshot diffing is basically your only option. Mozilla did this for Gecko layout regression testing 20+ years ago and it was remarkably effective. The interesting part now is that you can feed those screenshots to a vision model and get semantic analysis instead of just pixel diffing.
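The Gecko-era approach can be sketched in a few lines. This is a hedged illustration with raw pixel buffers standing in for decoded screenshots; a real harness would decode PNGs with an image library first.

```python
# Hedged sketch of screenshot regression gating: flag a change when more
# than a small fraction of pixels differ between two captures. Any diff
# above the noise threshold is a bug unless a human (or today, a vision
# model) signs off on the change.

def pixel_diff_ratio(before, after):
    """Fraction of pixels that differ between two equal-size buffers."""
    if len(before) != len(after):
        raise ValueError("screenshots must be the same size")
    differing = sum(1 for a, b in zip(before, after) if a != b)
    return differing / len(before)

def has_visual_regression(before, after, threshold=0.001):
    """True when the diff exceeds the allowed noise threshold."""
    return pixel_diff_ratio(before, after) > threshold

baseline = [0] * 10_000
unchanged = list(baseline)
regressed = list(baseline)
regressed[:100] = [1] * 100  # 1% of pixels changed

assert not has_visual_regression(baseline, unchanged)
assert has_visual_regression(baseline, regressed)
```

The semantic-analysis step mentioned above would replace the sign-off: instead of a human triaging every flagged diff, the screenshots go to a vision model that explains what changed.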
theshrike79 1 days ago [-]
What does this do that playwright-cli doesn't?

https://github.com/microsoft/playwright-cli

jberthom 17 hours ago [-]
Maker here. Playwright is great, I use it too. The difference is really about who’s driving. ProofShot is meant for agents that just need shell commands: `start` to open the browser and begin recording, `exec` to interact with the page, `stop` to bundle everything up. You get video, screenshots, console errors, server logs, and an interactive HTML viewer in one artifacts folder you can drop on a PR with `proofshot pr`. With Playwright the agent needs to write a test script, run it, collect screenshots, parse output separately… it’s a lot of glue code. Also agent-browser’s element references are ~93% smaller than Playwright’s accessibility tree, which adds up when you’re paying per token.
jofzar 1 days ago [-]
These aren't really comparable; OP's tool records, captures, and reproduces with steps.
mohsen1 1 days ago [-]
playwright can do all of that too. I'm confused why this is necessary.

If coding agents are given Playwright access they can actually do it better, because using the Chrome DevTools Protocol they can interact with the browser and experiment with things without having to wait for all of this to complete before making moves. For instance, I've seen Claude Code capture console messages from a running Chrome instance and use them to debug things...

theshrike79 1 days ago [-]
I've also had Claude run javascript code on a page using playwright-cli to figure out why a button wasn't working as it should.
4ndrewl 1 days ago [-]
Because LLM users are NIH factories?
onion2k 1 days ago [-]
That's exactly what Playwright does, but also something you don't really need in order to debug a problem.
nunodonato 1 days ago [-]
I think playwright doesn't capture video, right?
hungryhobbit 1 days ago [-]
It does. I literally just watched a video of a Playwright test run a few minutes ago.
vunderba 1 days ago [-]
Yeah I’ve never seen it capture video before, but if you specify in your `AGENTS.md` that you want to test certain types of workflows, it will take progressive screenshots using a sleep interval or by interacting with the DOM.
sd9 1 days ago [-]
I've always found screenshots on PRs incredibly helpful as a reviewer. Historically I've had mixed success getting my team to consistently add screenshots to PRs, so this tool would be helpful even for human-written code.

At work, we've integrated claude code with gitlab issues/merge requests, and we get it to screenshot anything it's done. We could use the same workflow to screenshot (or in this case, host a proofshot bundle of) _any_ open PR. You would just get the agent to check out any PR, get proofshot to play around with it, then add that as a comment. So not automated code reviews, which are tiresome, but more like a helpful comment with more context.

Going to try out proofshot this week; if it works like it does on the landing page, it'll be great.

jberthom 17 hours ago [-]
Yes, exactly. I'm actively maintaining it, so if you're missing anything just drop an issue on GitHub and I'll look into it.
AlphaTheGoat 4 hours ago [-]
This is a good idea. Very interesting; I don't know how you figured out clicking links, then capturing screenshots and logs for each step, and then having the AI use that information to fix the errors.
jofzar 1 days ago [-]
I'm going the opposite way of everyone else here.

This is sick, OP. Based on what's in the docs, it looks really useful when you need to quickly fix something and validate that nothing has changed in the UI/workflow except what you asked for.

Also looks useful for PRs: a before and after of what changed.

jillesvangurp 1 days ago [-]
Exactly. We need more tools like this. With the right model, picking apart images and videos isn't that hard. Adding vision to your testing removes a lot of guesswork from AI coding when it comes to fixing layout bugs.

A few days ago I had an interaction with codex that roughly went as follows: "this chat window is scrolling off screen, fix", "I've fixed it", "No you didn't", "You are totally right, I'm fixing it now", "still broken", "please use a headless browser to look at the thing and then fix it", "....", "I see the problem now, I'm implementing a fix and verifying the fix with the browser", etc. It took a few tries but it eventually nailed it. And added the e2e test, of course.

I usually prompt codex with screenshots for layout issues as well. One of the nice things of their desktop app relative to the cli is that pasting screenshots works.

A lot of our QA practices are still rooted in us checking stuff manually. We need to get ourselves out of the loop as much as possible. Tools like this make that easier.

I think I recall Mozilla pioneering regression testing of their layout engine using screenshots about a quarter century ago. They had a lot of stuff landing in their browser that could trigger all sorts of weird regressions. If screenshots changed without good reason, that was a bug. Very simple mechanism and very effective. We can do better these days.

jberthom 17 hours ago [-]
Ah, I feel your pain. The Codex interaction is exactly the pain point: “I fixed it” / “no you didn’t” five times in a row; you feel gaslit by your own agent in a way. That’s the loop I wanted to kill. I didn't know about Mozilla's screenshot regression testing, actually.
jberthom 17 hours ago [-]
Thanks! Yeah the before/after PR thing is exactly what proofshot pr is built for.
ting0 1 days ago [-]
How does this compare to raw https://github.com/vercel-labs/agent-browser ?
jberthom 17 hours ago [-]
ProofShot sits on top of agent-browser. agent-browser gives you the browser primitives (click, navigate, screenshot). ProofShot adds the session stuff: it starts recording, pipes your server logs, collects console errors with pattern matching for 10+ languages, trims the video, syncs timestamps, and bundles it all into a standalone viewer HTML. There’s also `proofshot pr`, which uploads everything to a GitHub PR comment. So it’s not an alternative tool, it's a layer on top of it :)
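The multi-language console-error pattern matching mentioned above might look roughly like this sketch. The patterns here are illustrative guesses, not ProofShot's actual rule set.

```python
# Hedged sketch of scanning captured console/server logs for error
# signatures across several languages, so matched lines can be attached
# to the proof bundle. Patterns are illustrative, not ProofShot's rules.
import re

ERROR_PATTERNS = {
    "javascript": re.compile(r"(Uncaught \w*Error|TypeError|ReferenceError):"),
    "python": re.compile(r"Traceback \(most recent call last\)"),
    "java": re.compile(r"Exception in thread|\w+Exception:"),
    "go": re.compile(r"panic: |goroutine \d+ \[running\]"),
    "rust": re.compile(r"thread '.*' panicked at"),
}

def scan_logs(lines):
    """Return (language, line) pairs for every line matching a known
    error signature; non-matching lines are ignored."""
    hits = []
    for line in lines:
        for lang, pattern in ERROR_PATTERNS.items():
            if pattern.search(line):
                hits.append((lang, line))
                break
    return hits

logs = [
    "GET /api/items 200 12ms",
    "Uncaught TypeError: Cannot read properties of undefined",
    "thread 'main' panicked at src/main.rs:10",
]
assert [lang for lang, _ in scan_logs(logs)] == ["javascript", "rust"]
```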
malloc2048 1 days ago [-]
I usually ask Claude Code to set up a software stack that can build/run whatever I am working on. Then I let it browse a website or navigate through screens. I also use Playwright to get screenshots of the website I am building. For native apps or whatever application you are building, there should be a way to get screenshots too, I guess.

Added benefit is that when Claude navigates and finds a bug, it will either add them to a list for human review or fix it automatically.

Pretty much a loop where building and debugging work together;-)

Once Claude Code

z3t4 1 days ago [-]
I'm currently experimenting with running a web app "headless" in Node.js by implementing some of the DOM JS functions myself, then writing mocks for keyboard input, etc. Then I have the coding agent run the headless client, which also starts the tests. In my experience the coding agents are very bad at detecting UX issues; they can however write the tests for me if I explain what's wrong. So I'm the eyes and it's my taste; the agent writes the tests and the code.
mrothroc 1 days ago [-]
Everyone is comparing this to Playwright but it's solving a different problem. Playwright checks structural properties, like does element X exist, is it visible, etc. That's useful but it can't tell you whether the page actually looks right.

I built something similar that takes a screenshot and uses a multi-modal LLM to evaluate it against a design mock. It catches a completely different class of error. The DOM can be structurally perfect and still look nothing like what was intended. Colors wrong, layout shifted, spacing off, components overlapping. No amount of DOM assertions will catch that.

These are two different kinds of gates: structural, which are fast and deterministic, and stochastic, which are slow but catch a completely different class of issue. There is very little overlap between what they find, and you want to catch both.

That way I can invest a lot of time getting the mock just right, then let the agents "make it so".
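The two-gate split described above can be sketched like this. `ask_vision_model` is a stub standing in for a multimodal LLM call against the design mock, and all names are hypothetical.

```python
# Hedged sketch of the two-gate idea: a fast deterministic structural
# check plus a slower stochastic visual check. A DOM can pass the first
# gate while failing the second (colors wrong, layout shifted, etc.).

def structural_gate(dom):
    """Deterministic: required elements exist and are visible."""
    required = {"header", "submit-button", "results-list"}
    visible = {e["id"] for e in dom if e.get("visible")}
    return required <= visible

def ask_vision_model(screenshot, mock):
    # Stub: stands in for a multimodal LLM comparing the screenshot
    # against the design mock and judging whether they match.
    return screenshot["layout_hash"] == mock["layout_hash"]

def visual_gate(screenshot, mock):
    """Stochastic: does the page actually *look* like the design?"""
    return ask_vision_model(screenshot, mock)

dom = [
    {"id": "header", "visible": True},
    {"id": "submit-button", "visible": True},
    {"id": "results-list", "visible": True},
]
mock = {"layout_hash": "a"}
shifted = {"layout_hash": "b"}  # structurally perfect, visually wrong

assert structural_gate(dom)            # passes the DOM check...
assert not visual_gate(shifted, mock)  # ...but fails the visual one
```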

tptacek 1 days ago [-]
Playwright seems to do fine at visual stuff? It takes screenshots and the model evaluates them. That's most of what I use Playwright for.
morkalork 1 days ago [-]
Copilot + Playwright MCP can take screenshots and send the images to LLM tho?
mrothroc 1 days ago [-]
It's the whole tool that's important, not so much how you get screenshots. That's what I'm saying: this is headed in the right direction, it just falls a little short of what I do, where I get tons of value over and above just playwright (or whatever gets the screenshot).

The critical part is that viewed at a high level, this method tests something different, which means it catches different errors.

boomskats 1 days ago [-]
I find the official Chrome DevTools MCP excellent for this. Lighter than Playwright, the loop is shorter, and easy to jam into Electron too.
jberthom 17 hours ago [-]
DevTools MCP is great for live debugging in the moment. ProofShot is more about generating a proof bundle after the fact, something you can review on a PR without having been there when the agent ran. Different use cases I think.
nunodonato 1 days ago [-]
chrome devtools mcp really clutters your context. Playwright-cli (not mcp) is so much more efficient.
infamia 1 days ago [-]
Chrome Devtools MCP now has an (experimental) CLI as well and can produce neat things like Lighthouse Audits.

https://github.com/ChromeDevTools/chrome-devtools-mcp/pull/1...

I've only used it a bit, but it's working well so far.

nunodonato 24 hours ago [-]
cool! It needs to mature a bit; session sharing is a no-go for me as I need to run requests in parallel and they would interfere with each other.
onion2k 1 days ago [-]
> I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can’t tell if the layout is broken or if the console is throwing errors.

I give the agent either a simple browser or Playwright access to proper browsers to do this. It works quite well, to the point where I can ask Claude to debug GLSL shaders running in WebGL with it.

vunderba 1 days ago [-]
Agreed. Anthropic added a plugin accessible under `/plugins` to CC to make it even easier to add MCP Playwright to your project. It automatically handles taking screenshots.

It's not perfect though - I've personally found CC's vision to be worse than others such as Gemini's, but it's nice to have it completely self-contained.

This project desperately needs a "What does this do differently?" section because automated LLM browser screenshot diffing has been a thing for a while now.

wg0 1 days ago [-]
Do you use Chrome DevTools MCP or how does it work?
roxolotl 1 days ago [-]
Playwright mcp has screenshotting built in
ioseph 1 days ago [-]
Likewise, and often the playwright skill will verify using DOM API instead of wasting tokens on screenshots
embedding-shape 1 days ago [-]
> often the playwright skill will verify using DOM API instead of wasting tokens on screenshots

So... Bypassing the whole "sees what it actually looks like in the browser. It can’t tell if the layout is broken" parent commentator is talking about? Seems worse, not better.

TZubiri 1 days ago [-]
+1

More power to you if you build a product out of this; I don't wanna be that guy who says Dropbox is dead because you can just set up FTP. But with Codex/Claude Code, I was able to achieve this very result just from prompting.

bensyverson 21 hours ago [-]
I mean, this is a free and open source project, so I don't think they're trying to make it into a product
nunodonato 1 days ago [-]
I've been using playwright-cli (not mcp) for this same purpose. It lacks the video feature, I guess. But at least it's local and without external dependencies on even more third parties (in your case, Vercel). Perhaps you could allow a local solution as an alternative as well?
jberthom 17 hours ago [-]
agent-browser runs locally (it’s a Rust CLI + Node daemon on your machine), so there’s no cloud dependency on Vercel, it’s just built by the Vercel Labs team. Everything stays local :)
can16358p 1 days ago [-]
How would this play with mobile apps?

I'd love to see an agent doing work, then launching app on iOS sim or Android emu to visually "use" the app to inspect whether things work as expected or not.

jillesvangurp 1 days ago [-]
Something like OpenAI's agent mode, where it drives a mouse and keyboard but against an emulator, should be doable. That agent mode is BTW super useful for doing QA, executing elaborate test plans, and reporting issues and UX problems. I've been meaning to do more with that after an impressive report I got with minimal prompting when I tried it a few months ago.

That's very different from scripting together what is effectively a whitebox test against document ids which is what people do with things like playwright. Replacing manual QA like that could be valuable.

m00dy 1 days ago [-]
try deepwalker, https://deepwalker.xyz
mosselman 1 days ago [-]
I use the Claude Chrome extension for this. Works wonderfully. It lets Claude click through features itself, etc.
hungryhobbit 1 days ago [-]
That extension is an incredibly neutered version of Claude Code, and unless you pay for the ultra-premium plan the two can't talk to each other :(
jwr 22 hours ago [-]
But... there is `claude --chrome`, which I'm puzzled nobody has mentioned. I use it for E2E testing.
gverrilla 20 hours ago [-]
playwright-cli and agent-browser are much much better, give them a try.
instalabsai 1 days ago [-]
Yeah, Claude/Cursor already have tools to access the browser. What I’m missing is a tool to inspect the iOS Simulator the same way. Is there a tool for that yet? The Xcode MCP wasn’t really helpful.
dbdoskey 1 days ago [-]
This is really cool. Have you thought of maybe accessing the screen through accessibility APIs? For Android mobile devices I have a skill I created that accesses the screen's XML dump as part of feature development, and it seems to work much better than screenshots/videos. Is this scalable to other OSes?
msephton 1 days ago [-]
It's trivial in Xcode Simulator, for Apple platform coverage.
alexovch 1 days ago [-]
This is actually interesting. Feels like we’re moving from “generate UI” to “validate UI,” which is a completely different problem. Curious how you handle edge cases where something looks correct but breaks in interaction?
jberthom 17 hours ago [-]
The agent drives interactions through `proofshot exec` (clicks, typing, navigation), and each action gets logged with timestamps synced to the video. So in the viewer you can scrub through and click on action markers to jump to specific moments. It captures what happened during interaction, not just what the page looked like at rest. I've had recordings where the agent struggled (for instance when having to click toggle buttons). It was fascinating to watch: the agent just tried again and again, like a toddler figuring out how to use a keyboard, and after 3 tries figured it out on his/her own (trying not to misgender the babies of future AGI).
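The timestamped action log described here could be as simple as the following sketch. The field names and format are illustrative, not ProofShot's actual schema.

```python
# Hedged sketch: log each agent action with an offset relative to the
# recording start, so a viewer can jump from an action marker to the
# matching moment in the video.
import json
import time

class ActionLog:
    def __init__(self):
        self.start = time.monotonic()  # recording start reference
        self.actions = []

    def record(self, kind, target):
        """Append an action with its offset (in seconds) into the video."""
        self.actions.append({
            "t": round(time.monotonic() - self.start, 3),
            "kind": kind,
            "target": target,
        })

    def to_json(self):
        return json.dumps(self.actions)

log = ActionLog()
log.record("click", "#submit")
log.record("type", "input[name=email]")
events = json.loads(log.to_json())
assert [e["kind"] for e in events] == ["click", "type"]
assert events[0]["t"] <= events[1]["t"]  # offsets are monotonic
```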
theshrike79 1 days ago [-]
...you test the interaction too? That's what Playwright does and LLMs are pretty capable of writing playwright tests for interaction.
constantlm 17 hours ago [-]
Claude Code with --chrome is excellent at building UIs (and hopeless without it!)
Alifatisk 13 hours ago [-]
Missed opportunity to name it AEye (pronounced Ai)
lastdong 1 days ago [-]
This is basically what Antigravity (Google’s Windsurf) ships with. Having more options to add this functionality to opencode / Claude Code for local models is really awesome. MIT license too!
QubridAI 23 hours ago [-]
This is exactly the missing feedback loop for AI coding agents: simple, practical, and surprisingly powerful.
xtiansimon 1 days ago [-]
Slightly off-topic from the agentic discussion...

Can anyone recommend a browser-based instant-preview site for web UI design with a more artistic/experimental bent?

wild_egg 1 days ago [-]
seems similar to a couple of simonw's recent tools?

https://simonwillison.net/2026/Feb/10/showboat-and-rodney/

jberthom 17 hours ago [-]
Simon’s tools are really great. Showboat is more for static screenshots though. ProofShot is the full session: recording, error capture, action timeline, PR upload. Different scope, I'd say.
esafak 1 days ago [-]
It looks similar to agent-video, which is also based on agent-browser: https://www.mux.com/blog/agentic-video-screen-recording

I don't think you need either, though, because agent-browser itself has a skill for this: https://github.com/vercel-labs/agent-browser/blob/main/skill...

Maybe the author would like to compare the three.

jberthom 17 hours ago [-]
Yes, agent-video is focused on high quality recording. ProofShot uses agent-browser’s recording under the hood, but the value is really in everything else: error collection, server log capture, the interactive viewer with synced timeline, and the PR upload. It’s less about the video itself and more about bundling all the evidence together to prove the agent did the work.
esafak 7 hours ago [-]
I would prepare a table in the README with all these features as comparison points.
varispeed 1 days ago [-]
I built something like this for a native application, so that I could get an automated feedback loop for the agent instead of making screenshots manually, etc. The problem I found is that the AI agent understands nothing of the UI. If you tell it "Make buttons evenly spaced", sure, it will space them evenly, but without care for the context they are placed in. You have to describe the image yourself and you'll still find it having a hard time understanding what's going on. I pretty much abandoned the idea of AI-driven UI development as it is not there yet. I tried with GPT 5.2. Maybe newer models have improved.
jberthom 17 hours ago [-]
Interesting, which model were you using for the vision part? In my experience Claude Sonnet and Opus handle UI screenshots reasonably well; not perfect, but good enough that the agent can catch obvious layout issues and iterate. Definitely not at the “pixel perfect design implementation” stage yet though. But for testing features it's OK. The goal is for the agent to test that the UX/UI flow works, not that every pixel is correctly aligned.
hasbot 23 hours ago [-]
I use Claude Code Sonnet and Opus for this and it works pretty well. Not perfect but good enough. At some point I will have to go in and tweak the layout, but like 99% of the functionality and layout is done for me.
peter_retief 1 days ago [-]
Thanks! I do take screenshots and paste them in manually for front-end stuff; nice idea though.
tjwebbnorfolk 1 days ago [-]
VSCode and Antigravity already do this. What am I missing?
jberthom 17 hours ago [-]
Yes, as saintfire said :)

ProofShot is just a CLI, not tied to any IDE. If you’re in Antigravity or VSCode and their built-in preview works for you, great. This is for people using Claude Code, Codex, or any terminal-based agent where there’s no IDE doing it for you. The main thing is really the PR artifact workflow: the agent records proof, you review it async on the PR.

saintfire 23 hours ago [-]
It's not coupled to a particular IDE, which not everyone uses.

Also it's headless. No idea if VSCode and Antigravity are; guessing it's not 1:1 the same, though.

VadimPR 1 days ago [-]
Looks nice! Does it work for desktop applications as well, or is this only web dev?
_aravindc 1 days ago [-]
This is awesome, does it work with desktop applications?
amelius 21 hours ago [-]
I wish AI had the eyes it needs to see that the features it confabulates are not in the software it is talking about.
Bradd3rs 11 hours ago [-]
this is pretty awesome, nice work!
mym1990 24 hours ago [-]
Not to pile on, but I was using Claude Code through the native application and it started doing exactly this on its own, side by side with my prompt, running the server and taking screenshots in the native app. Claude also just launched its own browser control, and while it will take time to mature, I assume any AI company will have this feature in their crosshairs.

From a product design perspective, this looks pretty cool!

grahammccain 1 days ago [-]
This is really useful thank you!
sarreph 1 days ago [-]
I am fed up with getting gaslit by coding assistants. "Your AI agent says it's done." really is a problem! Nice packaging here.

I built something similar[0] a few months ago but haven't maintained it because Codex UI and Cursor have _reasonable_ tooling for this themselves now IMO.

That said there is still a way to go, and space for something with more comprehensive interactivity + comparison.

[0] - https://magiceyes.dev/

Horos 1 days ago [-]
What about MCP + CDP?

My Claude drives its own Brave autonomously, even for UI.

EruditeCoder108 1 days ago [-]
I see
m00dy 1 days ago [-]
Gemini on Antigravity is already doing this.
Imustaskforhelp 1 days ago [-]
Great to see this, but exe.dev (not sponsored, but they are pretty cool and I use them quite often; if they wish to sponsor me that would be awesome haha :-]) actually has this functionality natively built in.

But it's great to see some other open source alternatives in this space as well.

dude250711 1 days ago [-]
That is not UI, that's just some web pages with JS.
zkmon 1 days ago [-]
Taking screenshots and recording is not quite the same as "seeing". A camera doesn't see things. If the tool can identify issues and improvements to make by analyzing the screenshot, that, I think, is useful.
jofzar 1 days ago [-]
> It’s not a testing framework. The agent doesn’t decide pass/fail. It just gives me the evidence so I don’t have to open the browser myself every time.

From the OP, I don't think it's meant to do what you're describing.

falcor84 1 days ago [-]
I read it in the same vein as saying that a sub's sonar enables "seeing" its surroundings. The focus is on having a spatial sensor rather than on the qualia of how that sensation is afterwards processed/felt.
philipp-gayret 1 days ago [-]
> If the tool can identify issues and improvements (...)

Tools like Claude can, and do. This is just a utility to make the process easier.
