
I pay $20/month for OpenAI, and Codex makes me incredibly productive. With very careful prompts aimed at tiny tasks, I can review, fix, and get a lot done.

I’d happily pay up to $2k/month for it if I were left with no choice, but I don’t think it will ever get that expensive, since you can run models locally and get similar results.

That being said, my outputs are similarish in the big picture. When I get something done, I typically don’t have the energy to keep going to get it to 2x or 3x because the cognitive load is about the same.

However, I get a lot of time freed up, which is amazing: I’m able to play golf 3-4 times a week, which would have been impossible without AI.

Productive? Yes. Time saved? Yes. Overall outputs? Similar.


I would like to know what models people are running locally that get the same results as a $20/month ChatGPT plan

Same? Not quite as good as that. But Google’s Gemma 3 27B is broadly comparable to their last Flash model. The latest Qwen3 variants are very good (for my needs, at least, they’re the best open coders), but really, here’s the thing:

There are so many varieties, specialized for different tasks or simply differing in performance.

Maybe we’ll get to a one-size-fits-all model at some point, but for now trying out a few can pay off. It also starts to build a better sense of the ecosystem as a whole.

For running them: if you have an Nvidia GPU with 8GB of VRAM, you can probably run quite a few of them, quantized. It gets a bit esoteric once you get into the quantization varieties, but generally speaking you should find out what kind of integer and float math your GPU has optimized support for, then choose the largest quantized model that matches that support and still fits in VRAM. Most often that’s what will perform best in both speed and quality, unless you need to run more than one model at a time.

To give you a reference point on model choice, performance, GPU, etc.: one of my systems runs an Nvidia 4080 with 16GB of VRAM. Using Qwen 3 Coder 30B, heavily quantized, I get about 60 tokens per second.
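
For anyone who hasn’t wired this up yet, here’s a minimal sketch of how you might talk to a model like that once it’s loaded; it assumes llama.cpp’s llama-server is already running locally with its OpenAI-compatible API (the port and model name below are just placeholders):

  import requests

  # llama-server exposes an OpenAI-compatible chat endpoint; point this at wherever you started the server
  resp = requests.post(
      "http://localhost:8080/v1/chat/completions",
      json={
          "model": "qwen3-coder-30b-q4",  # placeholder; most local servers ignore or remap this name
          "messages": [{"role": "user", "content": "Write a Python function that reverses a string."}],
          "max_tokens": 256,
      },
      timeout=120,
  )
  print(resp.json()["choices"][0]["message"]["content"])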


I get tolerable performance out of a quantized gpt-oss 20b on an old RTX 3050 I have kicking around (I want to say 20-30 tokens/s, or faster when the cache is effective). It's appreciably faster on the 4060. It's not quite ideal for more interactive agentic coding on the 3050, but it's approaching it, and it lands nicely in "coding in the background while I fiddle with something else" territory.

Yeah, tokens per second can very much influence the work style, and therefore the mindset, a person should bring to usage. You can also build on the results of a faster but less-than-SOTA-class model in different ways. I can let a coding-tuned 7-12B model “sketch” some things at higher speed, or even a variety of things, review them in real time, then pass off to a slower, more capable model to say “this is structurally sound, or at least the right framing; tighten it all up in the following ways…” and let that run in the background.
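
A rough sketch of that hand-off, assuming two llama-server style OpenAI-compatible endpoints (the URLs, ports and model names are placeholders, not a recommendation):

  import requests

  def ask(base_url, model, prompt):
      # assumes an OpenAI-compatible /v1/chat/completions endpoint (llama-server, etc.)
      r = requests.post(base_url + "/v1/chat/completions", json={
          "model": model,
          "messages": [{"role": "user", "content": prompt}],
      }, timeout=600)
      return r.json()["choices"][0]["message"]["content"]

  # fast, small model sketches a first pass at high tokens/s
  draft = ask("http://localhost:8080", "small-coder-7b", "Sketch a CLI that dedupes lines in a file.")
  # slower, more capable model tightens it up in the background
  print(ask("http://localhost:8081", "bigger-coder-30b",
            "This is roughly the right framing; tighten it up:\n\n" + draft))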

Just in case anyone hasn't seen this yet:

https://github.com/ggml-org/llama.cpp/discussions/15396 a guide for running gpt-oss on llama-server, with settings for various amounts of GPU memory, from 8GB on up


The run-at-home suggestion was in the context of $2k/mo. At that price you can recoup the cost of self-hosted hardware at a much more reasonable pace than at $20/mo (or even $200): a roughly $10k machine pays for itself in five months at $2k/mo, versus about four years at $200/mo.

Well, there's an open-source GPT model you can run locally. I don't think running models locally is all that cheap, though: top-of-the-line GPUs used to be $300, and now you're lucky to get the best GPU for under $2,000. The better models require a lot more VRAM. Macs can run them pretty decently, but now you're spending $5,000+, at which point you could have just bought a rig with a 5090 and mediocre desktop RAM, because Sam Altman has ruined the RAM pricing market.

Macs can run larger models thanks to their unified memory architecture. Try building a 512GB Nvidia VRAM machine; you basically can’t.

Fully aware, but who the heck wants to spend nearly 10 grand, and that's with just a 1TB drive (which needs to be able to fit your massive models, mind you). Fair warning: not ALL of the RAM can be used as VRAM. On my 24GB MacBook Pro I can only use 16GB as VRAM, but that's still better than my 3080 with only 10GB, and I didn't spend more than 2 grand on it either.

I got some decent mileage out of aider and Gemma 27B. The one-shot output was a little less good, but I didn’t have to worry about paying per token or hitting plan limits, so I felt freer to let it devise a plan, run it in a loop, etc.

Not having to worry about token limits is surprisingly cognitively freeing. I don’t have to worry about having a perfect prompt.


And what hardware they needed to run the model, because that's the real pinch in local inference.

There are no models you can run locally that'll match a frontier LLM.

Marx in his wildest nightmares couldn’t have anticipated how much the working class would sell itself short with the advent of AI. Friend, you should be doing more than golf…

Bro, nobody wants to hear about the hustle anymore. We're in the second half of this decade now.

> nobody wants to hear about the hustle anymore

Plenty of people are still ambitious and successful.


HubSpot's CTO was very vocal about how AI is changing everything and how he was supporting it, for example by offering the domain chat.com to OpenAI. I say "was" because it has toned down quite a bit. I always thought HubSpot would transform into a true AI CRM, given how invested the CTO was in the space from the early days.

Now the stock is down from $800+ to $200+ and the whole messaging has changed. The last post I saw on LinkedIn was:

"No comment on the HubSpot stock price. But I strongly agree with this statement: '...I don't see companies trusting their revenue engine to something vibe-coded over a weekend.'"

The stock dip is likely because true AI-native CRMs are being built and coming to market, but why couldn't HubSpot take that spot, given the CTO's interest in the space?


If someone in the administration reads this, there is a chance this will be reversed and lead will be allowed in gasoline again :)

I don't think it was designed to handle the volume of traffic that HN generates.


A Cloudflare-fronted website can't handle HN front-page levels of traffic?

Then why does anybody use Cloudflare?


Probably a bad cache-header configuration. Even with Cloudflare in front, it could be forwarding every request to the backend if the cache headers are misconfigured...
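
One quick way to see what's happening from the outside (a sketch; the URL is a placeholder): Cloudflare reports what it did with each request in the CF-Cache-Status response header, and by default it doesn't cache HTML at all, so without a cache rule every page view can still hit the origin.

  import requests

  # CF-Cache-Status of HIT/MISS/EXPIRED means the response is cacheable at the edge;
  # DYNAMIC or BYPASS means every request is being passed through to the backend.
  r = requests.get("https://example.com/")  # placeholder URL
  print(r.headers.get("CF-Cache-Status"), r.headers.get("Cache-Control"))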


Code is probably just 20% of the effort. There is so much more after that: managing the infra around it and keeping it reliable as it scales, and even things like handling spam and preventing abuse. And then there's the effort required to market it and make it something people want to adopt.


Sure, but the premise here is that making a Gmail clone is strategically necessary for OpenAI to compete with Google in the long term.

In that case, there's some ancillary value in being able to claim "look, we needed a gmail and ChatGPT made one for us - what do YOU need that ChatGPT can make for YOU?"


Those are still largely code-able. You can write Ansible files, deploy AWS (mostly) via the shell, write rules for spam filtering and administration... Google has had all of that largely automated for a long time now.


We haven't written rules for spam filtering in a while now: it's been a successful machine learning problem for ages.


Nice, we haven’t faced this cold-start problem. We like the idea of Lambdas being offered on a simple runtime platform where you can store and run code as needed.

And chain it with other stuff as well, which is where workflow engines like n8n or Unmeshed.io work better. You can mix lambdas in different languages as well.


Unmeshed.io is another alternative. You don’t even need to write code for your schedules.


For the sake of argument, 20x means you’ve basically suddenly got access to 19 more people with the same skill set as you.

You can build a new product company with 20 people. Probably in the same domain as you are in right now.


Output doesn't necessarily scale linearly as you add more people. Look up The Mythical Man-Month.


We are using a workflow engine called Unmeshed, which has what you are asking about. Workflow definitions can be updated without interfering with running instances, and if you choose to, you can patch updates onto running workflows. You can also rerun workflows with the same input from an older execution.


Built on Lovable? Looks neat. If yes, how was your experience? Did you have to customize much by hand or did it take care of most of the work?


It was great; I’m glad it saved a lot of my time. Of course prompts are important, but it will surely cut down your overall dev time.

