Reading this just makes me think of yesterday's post: vibe coding in turbo mode, where Gemini Antigravity wiped an entire storage partition because it missed a quotation mark in function code it wrote and ran without checking.
It used to be that you needed an idiotic noob to do that to your company, but now you can automate it and do it at scale. Idiotic noobs are expensive, and it costs a lot to hire enough of them to wreck everything, but vibe coding brings awesome self-destructive power on the cheap!
Yes, it helps you write all the boilerplate code to do straightforward repetitive things. What would be even better would be simple code to do simple things.
My wife noticed that I don't mind being interrupted when programming anymore; between the less-intense level of concentration required now and the always-present transcript, it's not like a collapsing mental house of cards to look up for a few minutes and talk about something else.
Sure, but the headline wasn't "Google CEO says ‘vibe coding’ made software development ‘so much less like a video game.’"
In fact since many people think video games are enjoyable, making software development less gamelike might make it less enjoyable.
(But would further gamification make it more enjoyable? No, IMO. So maybe all we learn here is that people don't like change in any direction.)
I think we’re mixing our metaphors here. What I mean is that, at the end of the day, you write code to get some result you actually care about, or that matters for some material reason. Work is labor. If you don’t care about that outcome or optimizing for it, then you may as well play a video game or code golf or something. What you want then is a hobby.
It depends on how you use it. I was running 15 agents at once, 12 hours a day for a month straight, because adding more kept paying off, and that wasn't very enjoyable. Now I'm back to writing code the enjoyable way, with minor LLM assistance here and there.
Been "vibe coding" for 8 months building thepassword.app - AI browser automation that changes passwords across websites.
The enjoyment factor is real. The iteration speed with Claude Code is insane. But the model's suggestions still need guardrails.
For security-focused apps especially, you can't just accept what the LLM generates. We spent weeks ensuring passwords never touch the LLM context - that's not something a vibe-coded solution catches by default.
The productivity gains are real, but so is the need for human oversight on the security-critical parts.
There is no way I would provide the password I use on multiple sites to some random app, and there's absolutely no way I'd do that if I had any inkling it was vibe coded.
1. We don't ask for your current passwords. The app imports your CSV from your existing password manager (1Password, Bitwarden, etc.), which you already trust with your credentials. We automate the change process - you provide the new passwords you want.
2. Zero passwords leave your machine. The app runs locally. Browser automation happens in a local Playwright instance. The AI (GPT-5-mini via OpenRouter) only sees page structure, never credential values. Passwords are passed to forms via a separate injection mechanism that's invisible to the LLM context.
The "vibe coding" comment was about development speed with AI assistants, not about skipping security review. We spent weeks specifically on credential isolation architecture - making sure passwords can't leak to logs, LLM prompts, or network requests. That's the opposite of careless.
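To make the "can't leak to logs" layer concrete, here's a minimal, library-free sketch of a log-scrubbing filter. This is not the app's actual code: `RedactingFilter` is a hypothetical name, and a real implementation would also need to cover exception tracebacks and structured log fields.

```python
import logging

class RedactingFilter(logging.Filter):
    """Scrub known secret values from every log record before it is emitted."""

    def __init__(self, secrets: list[str]):
        super().__init__()
        self._secrets = secrets

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()          # render msg % args first
        for s in self._secrets:
            msg = msg.replace(s, "[REDACTED]")
        record.msg, record.args = msg, ()  # freeze the scrubbed message
        return True                        # keep the record, just redacted

logger = logging.getLogger("automation")
logger.addHandler(logging.StreamHandler())
logger.addFilter(RedactingFilter(["hunter2!"]))
logger.warning("submitting form with value hunter2!")  # emits "[REDACTED]" instead
```

Attaching the filter at the logger level means it runs before any handler sees the record, so third-party handlers added later are covered too.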
Code's not open source yet, but we're working toward that for exactly the reasons you describe - trust requires verification.
Good question - and yes, this should probably be a library.
The core approach: browser-use's Agent class accepts a `credentials` parameter that gets passed to custom action functions but never included in the LLM prompt. So when the agent needs to fill a password field, it calls a custom `enter_password()` function that receives the credential via this secure channel rather than having it in the visible task context.
We forked browser-use to add this (github.com/browser-use/browser-use doesn't have it upstream yet). The modification is in `agent/service.py` - adding `credentials` to the Agent constructor and threading it through to the tool registry.
Key parts:
1. Passwords passed via `sensitive_data` dict
2. Custom action functions receive credentials as parameters
3. LLM only sees "call enter_password()" not the actual value
4. Redaction at logging layer as defense-in-depth
Would be happy to clean this up into a standalone pattern/PR. The trickiest part is that it requires changes to the core Agent class, not just custom actions on top.
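The pattern above can be sketched without any browser-use dependency. This is an illustration, not the forked code: `SecretVault`, `build_llm_context`, and `enter_password` are hypothetical names, and the Playwright call is stubbed out as a comment.

```python
class SecretVault:
    """Holds real credential values; the LLM only ever sees placeholders."""

    def __init__(self, secrets: dict[str, str]):
        self._secrets = secrets            # e.g. {"new_password": "..."}

    def placeholder(self, key: str) -> str:
        return f"<secret:{key}>"           # token safe to show the model

    def resolve(self, key: str) -> str:
        return self._secrets[key]          # only called at execution time

def build_llm_context(task: str, vault: SecretVault, keys: list[str]) -> str:
    """The prompt references placeholders, never values."""
    refs = ", ".join(vault.placeholder(k) for k in keys)
    return f"{task}\nAvailable credentials: {refs}"

def enter_password(field_selector: str, key: str, vault: SecretVault) -> str:
    """Custom action: the real value is injected here, outside the prompt."""
    value = vault.resolve(key)
    # page.fill(field_selector, value)     # <- local Playwright call in practice
    return f"filled {field_selector} with {vault.placeholder(key)}"  # redacted log line

vault = SecretVault({"new_password": "s3cret-value"})
prompt = build_llm_context("Change the password on example.com", vault, ["new_password"])
assert "s3cret-value" not in prompt        # credential never reaches the LLM
log = enter_password("#new-password", "new_password", vault)
assert "s3cret-value" not in log           # and never reaches the logs
```

The LLM plans "call enter_password('#new-password', '<secret:new_password>')"; the executor swaps the placeholder for the real value only at fill time.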
Edit: for those who don't frequent HN or reddit every day: https://old.reddit.com/r/google_antigravity/comments/1p82or6...