Hacker News | artdigital's comments

That’s very clearly a no, I don’t understand why so many people think this is unclear.

You can’t use Claude OAuth tokens for anything. Any solution that exists worked because it pretended/spoofed to be Claude Code. Same for Gemini (Gemini CLI, Antigravity)

Codex is the only one that got official blessing to be used in OpenClaw and OpenCode, and even that was against the ToS before they changed their stance on it.


Is Codex ok with any other third party applications, or just those?

Yes. You can build third party applications on top of codex app server. All open source. https://developers.openai.com/codex/app-server/

  Codex app-server is the interface Codex uses to power rich clients (for example, the Codex VS Code extension). Use it when you want a deep integration inside your own product.
It mentions 'inside your own product', but I'm not sure whether that also covers your own commercial application.

I think it's permissible. Zed uses it to power their Codex integration. OpenAI has been quite vocal about it.

By default, assume no. The lack of any official integration guide should be a clear sign. The fact that apps have to reverse-engineer Codex and pretend to be Codex makes it clear this is not an officially endorsed thing to do.

Codex is open source though, so I wonder at what point my adding features to Codex becomes different from starting a new project and using the subscription.

But I believe OpenAI does let you use their subscription in third parties, so not an issue anyway.


Interested to know this too

But why does it matter which program consumes the tokens?

Presumably because their flat-rate pricing is based on their ability to manage token use via their first-party tools.

A third-party tool may be less efficient in saving costs (I have heard many of them don't hit Anthropic LLMs' caches as well).

Would you be willing to pay more for your plan, to subsidize the use of third-party tools by others?

---

Note, afaik, Anthropic hasn't come out and said this is the reason, but it fits.

Or, it could also just be that the LLM companies view their agent tools as the real moat, since the models themselves aren't.


What if I'm only willing to pay if it supports my tool of choice? Would you pay for a streaming service that enforced a certain TV brand?

Given the latest changes on Claude Code where they hide the actions

https://news.ycombinator.com/item?id=47033622

it's likely more the other way around: they control how fast your subscription tokens are burned.


> What if I'm only willing to pay if it supports my tool of choice?

I don’t want to say that you won’t be missed but they will get over it.


But wouldn't a less efficient tool simply consume your 5-hour/weekly quota faster? There's gotta be something else, probably telemetry, maybe hoping people switch to API without fighting, or simply vendor lock-in.

> But wouldn't a less efficient tool simply consume your 5-hour/weekly quota faster?

Maybe.

First, Anthropic is also trying to manage user satisfaction as well as costs. If OpenCode or whatever burns through your limits faster, are you likely to place the blame on OpenCode?

Maybe a good analogy is when DoorDash/GrubHub/Uber Eats/etc. signed up restaurants to their systems without permission. When things went wrong, customers blamed the restaurants, even though it wasn't their fault; the restaurants had never chosen to support delivery at that scale.

Second, flat-rate pricing, unlike API pricing, is the same for cached vs uncached iirc, so even if total token limits are the same, less caching means higher costs.
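A back-of-envelope sketch of that argument. The per-million-token rates below are invented for illustration, loosely modeled on the typical large gap between cached and uncached API pricing; they are not Anthropic's actual numbers:

```python
# Illustrative provider-side serving costs (invented rates, not real pricing).
PRICE_UNCACHED = 3.00  # $ per 1M input tokens processed fresh
PRICE_CACHED = 0.30    # $ per 1M input tokens served from the prompt cache

def provider_cost(total_tokens_m: float, cache_hit_rate: float) -> float:
    """Cost to the provider of serving `total_tokens_m` million input tokens."""
    cached = total_tokens_m * cache_hit_rate
    uncached = total_tokens_m * (1 - cache_hit_rate)
    return cached * PRICE_CACHED + uncached * PRICE_UNCACHED

# Same 100M-token flat-rate allotment, different clients:
official = provider_cost(100, cache_hit_rate=0.9)     # first-party tool, caches well
third_party = provider_cost(100, cache_hit_rate=0.5)  # caches poorly
```

Under these made-up numbers the poorly-caching client costs roughly three times as much to serve for the exact same token allotment, which is the asymmetry the flat-rate price would have to absorb.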


> are you likely to place the blame on OpenCode?

am I? Probably, but I get your point that your average user would blame Anthropic instead.

> even if total token limits are the same, less caching means higher costs

Not really, flat-rate pricing simply gives you a fixed token allotment, so less caching means you consume your 5-hour/weekly allotment faster.


> Not really, flat-rate pricing simply gives you a fixed token allotment, so less caching means you consume your 5-hour/weekly allotment faster.

Higher costs for Anthropic, not users. With a tool that caches suboptimally, you cost Anthropic more per token.


Presumably most people also do not use their full quota when using the official client, whereas third-party clients could be set up to start back up every 5 hours to use 100% of the quota every day and week.

It's the whole "unlimited storage" discussion again.


Why does it matter to the free buffet manager where you consume the food? We may never know.

Because it could be over longer time periods than buffet hours.

They must be getting something out of it, because we sure aren't.

Cory Doctorow has a word for this..

They think their position is strong enough to lock users in. I'm not so sure.

It's enshittification - for those who didn't know.

They'll own the entire pipeline: interface, conduit, backend. The interface is what people become habituated to. If I'm a regular user of Claude Code, I may not shift to a competitor for 10-20% gains in cost.

They want that sweet vendor lock-in.

I don’t feel they’re similar at all and I don’t get why people compare them.

MCP is giving the agents a bunch of functions/tools it can use to interact with some other piece of infrastructure or technology through abstraction. More like a toolbox full of screwdrivers and hammers for different purposes, or a high-level API interface that a program can use.

Skills are more similar to a stack of manuals/books in a library that teach an agent how to do something, without polluting the main context. For example, a guide on how to use `git` on the CLI: the agent can read the manual when it needs to use `git`, but it doesn’t need to hold the knowledge of how to use `git` in its brain when it’s not relevant.
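To make the distinction concrete, here is a minimal sketch. The interfaces are invented for illustration; real MCP servers and skill files look different:

```python
# MCP-style: the server exposes callable tools the agent can invoke directly.
def git_commit(message: str) -> str:
    """A 'tool': an executable capability handed to the agent."""
    return f"ran: git commit -m {message!r}"

MCP_TOOLS = {"git_commit": git_commit}

# Skill-style: a manual the agent reads on demand, then acts on itself.
SKILLS = {
    "using-git": "To commit: stage files with `git add`, then run `git commit -m <msg>`.",
}

# The agent *calls* a tool and gets a result back...
result = MCP_TOOLS["git_commit"]("fix typo")

# ...but it only *reads* a skill, pulling the text into context when relevant.
manual = SKILLS["using-git"]
```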


> MCP is giving the agents a bunch of functions/tools

A directory of skills... same thing

You can use MCP the same way as skills with a different interface. There are no rules on what goes into them.

They both need descriptions and instructions around them, and they both have to be presented and indexed/introduced to the agent dynamically, so we can tell the agent what it has access to without polluting the context.

See the Anthropic post on moving MCP servers to a search function. Once you have enough skills, you are going to require the same optimization.

I separate things in a different way:

1. What things do I force into context (agents.md, "tools" index, files)

2. What things can the agent discover (MCP, skills, search)
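That split can be sketched in a few lines. Names and contents here are invented for illustration:

```python
# Full definitions live outside the context window, whether they are MCP tool
# schemas or skill manuals:
REGISTRY = {
    "skill:using-git": "Long manual on using git from the CLI...",
    "mcp:search_docs": "Full JSON schema and description of a doc-search tool...",
}

# Only a compact index is forced into every prompt:
context_index = sorted(REGISTRY)

def discover(name: str) -> str:
    """The agent pulls a full definition on demand; the same motion works
    for an MCP tool description or a skill manual."""
    return REGISTRY[name]
```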


Get on medicine. It’s the thing that actually changed my life.

That, and learn what adhd actually does, stuff like issues prioritizing tasks, sudden impulsive thoughts, “the adhd wall”, RSD, weak inner voice, etc. It’s much easier to handle if you understand it and can differentiate between something like an impulse and regular thoughts.

I recommend watching videos from Dr Berkeley on YouTube on how to manage adhd. There are lots of tricks, like making things physical (time, tasks), or using external consequences to combat a weak inner voice.


Not Berkeley, it's Russell Barkley. He's done a few hundred videos since he retired from the day job, but now he's 72 and really retired, so it's back to Google Scholar we go... :-)


Yes that’s how I see it too. It’s a productivity multiplier, but depends on what you put in.

Sure, Opus can work fully on its own by just telling it “add a button that does X”, but do that 20 times and the code turns into mush. Steer the model with detailed tech specs, on the other hand, and the output becomes magical.


See https://github.com/charmbracelet/crush/pull/1783

I wouldn’t be surprised if Anthropic filed a similar request against OpenCode, and follows it up with a takedown eventually


Yeah, exactly. I’m surprised people are calling this “drama”. It was against the ToS from the beginning; all the tooling that supported it just reverse engineered what Claude Code does and spoofed being the client.

I tried something similar a few months back, and Claude already had restrictions against this in place. You had to very specifically pretend to be the real Claude Code (by copying system prompts, etc.) to get around it, not just a header.


It’s bad that this is against the TOS in the first place, and reeks of anticompetitive behavior. Why does Anthropic care what frontend I use as long as I pay for their model?


I use Perplexity all the time for search. It's very good at exactly that - internet search. So when using it for search related things it really shines

Yeah, sure, ChatGPT can spam a bunch of search queries through its search tool, but it doesn't really come close to having Perplexity's search graph and index. Their Sonar model is also specifically built for search.


I’m equally surprised to see these posts pop up everywhere on X, GitHub and now also HN. Am I that old that SSHing into a server through a VPN is such a novel concept nowadays?


I think the commonly used platforms, ISPs, etc. make this just annoying enough that most people really don't know how easy this should be.


I switched my subscription from Claude to ChatGPT around 5.0 when SOTA was Sonnet 4.5 and found GPT-5-high (and now 5.2-high) so incredibly good, I could never imagine Opus is on its level. I give gpt-5.2-high a spec, it works for 20 minutes and the result is almost perfect and tested. I very rarely have to make changes.

It never duplicates code, reimplements something and leaves the old code around, breaks my conventions, hallucinates, or tells me it’s done when the code doesn’t even compile, all of which Sonnet 4.5 and Opus 4.1 did all the time.

I’m wondering if this has changed with Opus 4.5, since so many people are raving about it now. What’s your experience?

Claude - fast, to the point but maybe only 85% - 90% there and needs closer observation while it works

GPT-x-high (or xhigh) - you tell it what to do, it will work slowly but precisely, and the solution is exactly what you want. 98% there, needs no supervision.


Btw, Zed has a similar setting to turn off all AI features :)


And I'm sure they'll never forget to connect an AI feature to it.


Your point here being? In that case people will complain, and the next release will have it included in the setting.

