Hacker News
Google CEO says ‘vibe coding’ made software development ‘so much more enjoyable’ (google.com)
16 points by ashishgupta2209 4 months ago | hide | past | favorite | 26 comments


Reading this just makes me think of yesterday's post: vibe coding in turbo mode, and Gemini Antigravity wipes the entire storage drive partition because it missed a quotation mark in function code it wrote and ran without checking.

Edit: for those who don't frequent HN or reddit every day: https://old.reddit.com/r/google_antigravity/comments/1p82or6...


It used to be that you needed an idiotic noob to do that to your company, but now you can automate it and do it at scale. Idiotic noobs are expensive, and it costs a lot to hire enough of them to wreck everything, but vibe coding brings that awesome self-destructive power on the cheap!


Biz leaders who are seeking to profit off AI sure do have a positive view of AI.


This links to

> https://indianexpress.com/article/technology/tech-news-techn...

dang, please replace the link.


Finally, the CEO can feel like they’re producing something of value.


I'm not excited to review the CEO's AI-written PR.


You know there are lots of things that make software development more enjoyable

Having a private office instead of an open floor plan for instance

Or not working in the JIRA two week sprint format

Or not having to work with offshore teams that push the burden of quality control onto you

My point is I bet that the Google CEO (and basically every other software CEO) doesn't actually care if software development is enjoyable or not



Yes, it helps you write all the boilerplate code to do straightforward repetitive things. What would be even better would be simple code to do simple things.


How many hours per day is the Google CEO "enjoying" the pleasures of (vibe) coding?


My wife noticed that I don't mind being interrupted when programming anymore; between the less-intense level of concentration required now and the always-present transcript, it's not like a collapsing mental house of cards to look up for a few minutes and talk about something else.


No buddy, you're replacing the doer engineers that built your company and made you rich with some form of automation.


It does though. That’s a separate issue from the inevitable layoffs and any bugs introduced along the way, but he’s not wrong.


Speak for yourself. I think he's extremely wrong

I think if all you care about is the outcome then sure, you might enjoy AI coding more

If you enjoy the problem solving process (and care about quality) then doing it by hand is way, way more enjoyable


If you don’t care about outcome then all you’re doing is playing a video game.


Sure, but the headline wasn't "Google CEO says ‘vibe coding’ made software development ‘so much less like a video game.’" In fact since many people think video games are enjoyable, making software development less gamelike might make it less enjoyable.

(But would further gamification make it more enjoyable? No, IMO. So maybe all we learn here is that people don't like change in any direction.)


If writing code by hand is like playing a video game, then vibe coding is like playing a slot machine

Argue about the value of video games all you like, I would still place them above slot machines any day


I think we’re mixing our metaphors here. What I mean is that at the end of the day you write code to get some result you actually care about, or that matters for some material reason. Work is labor. If you don’t care about that outcome or about optimizing for it, then you may as well play a video game or code golf or something. What you want then is a hobby.


> If you don’t care about that outcome or optimizing for it,

I do care about the outcome, which is why the thought of using AI to generate it makes me want to gouge my eyes out

In my view using AI means not caring about the outcome because AI produces garbage. In order to be happy with garbage you have to not care


It depends on how you use it. I was running 15 agents at once, 12 hours a day for a month straight because it was more optimal to add more, and that wasn't very enjoyable. Now I'm back to writing code the enjoyable way, with minor LLM assistance here and there.


Been "vibe coding" for 8 months building thepassword.app - AI browser automation that changes passwords across websites.

The enjoyment factor is real. The iteration speed with Claude Code is insane. But the model's suggestions still need guardrails.

For security-focused apps especially, you can't just accept what the LLM generates. We spent weeks ensuring passwords never touch the LLM context - that's not something a vibe-coded solution catches by default.

The productivity gains are real, but so is the need for human oversight on the security-critical parts.


There is no way I would provide the password I use on multiple sites to some random app, and there's absolutely no way I'd do that if I had any inkling it was vibe coded.


Fair skepticism - I'd be suspicious too.

Two clarifications:

1. We don't ask for your current passwords. The app imports your CSV from your existing password manager (1Password, Bitwarden, etc.), which you already trust with your credentials. We automate the change process - you provide the new passwords you want.

2. Zero passwords leave your machine. The app runs locally. Browser automation happens in a local Playwright instance. The AI (GPT-5-mini via OpenRouter) only sees page structure, never credential values. Passwords are passed to forms via a separate injection mechanism that's invisible to the LLM context.

The "vibe coding" comment was about development speed with AI assistants, not about skipping security review. We spent weeks specifically on credential isolation architecture - making sure passwords can't leak to logs, LLM prompts, or network requests. That's the opposite of careless.

Code's not open source yet, but we're working toward that for exactly the reasons you describe - trust requires verification.
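The "can't leak to logs" claim above is the easiest of the three to sketch concretely. This is an illustrative example only, not the app's actual code: a `logging.Filter` that redacts known secret values before any record reaches a handler (the class name and secret list are made up for the sketch).

```python
import logging

class SecretRedactingFilter(logging.Filter):
    """Replace known secret strings with a placeholder in every log record."""

    def __init__(self, secrets):
        super().__init__()
        self._secrets = [s for s in secrets if s]

    def filter(self, record):
        # Merge msg and args first, then redact, so formatted values
        # like "filling field with %s" are also covered.
        msg = record.getMessage()
        for s in self._secrets:
            msg = msg.replace(s, "[REDACTED]")
        record.msg = msg
        record.args = ()
        return True

logger = logging.getLogger("automation")
logger.addHandler(logging.StreamHandler())
logger.addFilter(SecretRedactingFilter(["hunter2"]))
logger.warning("filling field with %s", "hunter2")  # emits "[REDACTED]"
```

Defense-in-depth is the right framing: a filter like this catches accidental leaks, but it only works for secrets you registered ahead of time, so it complements rather than replaces keeping credentials out of the prompt in the first place.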


Thanks for the reply. Would be a useful service, good luck with it.


How do you ensure this? That pattern could be a useful feature in hundreds of apps being built by other developers if you turn it into a library


Good question - and yes, this should probably be a library.

The core approach: browser-use's Agent class accepts a `credentials` parameter that gets passed to custom action functions but never included in the LLM prompt. So when the agent needs to fill a password field, it calls a custom `enter_password()` function that receives the credential via this secure channel rather than having it in the visible task context.

We forked browser-use to add this (github.com/browser-use/browser-use doesn't have it upstream yet). The modification is in `agent/service.py` - adding `credentials` to the Agent constructor and threading it through to the tool registry.

Key parts:

1. Passwords passed via `sensitive_data` dict

2. Custom action functions receive credentials as parameters

3. LLM only sees "call enter_password()", not the actual value

4. Redaction at the logging layer as defense-in-depth

Would be happy to clean this up into a standalone pattern/PR. The trickiest part is that it requires changes to the core Agent class, not just custom actions on top.
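The pattern described above can be sketched without depending on browser-use's real API. Everything here is hypothetical (`ToolRegistry`, `prompt_view`, `enter_password`, and the `x_password` key are invented for the illustration): the model's visible context carries only action names and placeholder keys, while the registry binds the secret value at execution time, outside the LLM context.

```python
class ToolRegistry:
    """Toy registry: secrets live here and never enter the prompt."""

    def __init__(self, sensitive_data):
        self._sensitive = sensitive_data  # never serialized into the prompt
        self._actions = {}

    def action(self, name):
        def deco(fn):
            self._actions[name] = fn
            return fn
        return deco

    def prompt_view(self):
        # What the LLM sees: action names and placeholder keys only.
        return {"actions": list(self._actions), "secrets": list(self._sensitive)}

    def execute(self, name, secret_key):
        # The secret is resolved here, outside the LLM context.
        return self._actions[name](self._sensitive[secret_key])

registry = ToolRegistry({"x_password": "s3cret!"})

@registry.action("enter_password")
def enter_password(value):
    # In a real agent this would type into a Playwright locator.
    return f"typed {len(value)} chars"

# The model's output would reference only 'enter_password' and 'x_password'.
assert "s3cret!" not in str(registry.prompt_view())
print(registry.execute("enter_password", "x_password"))  # typed 7 chars
```

The trickiness the comment mentions is visible even in this toy: the secure channel has to run through the dispatch layer itself, which is why it can't be bolted on as a custom action alone.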



