Hacker News | makestuff's comments

Isn't this the main complaint people had about cable packages though? People were tired of paying $100/mo and only watching 10 channels out of 150.

I came across a startup a while ago that handled the micropayments for you in exchange for a monthly subscription fee, which is similar to what you want. I think the main issue is getting every publisher to agree to onboard to your platform before you have sufficient scale of paying customers.


It's a misunderstanding of the payment model, really. No one watches all 150 channels; the pricing assumes you're an average person who watches a subset of them, and it costs the provider nothing extra to offer all of them.

Regular users also don't really like usage-based fees, which is why nearly every consumer plan has a fixed price rather than charging per use. Cloud storage, for example, charges for "up to X GB" rather than "$X per GB".


How do you explain public utilities? No one has any issue with the fact that flicking a light switch in your home is technically a micropayment, as it consumes extra electricity that comes out in your monthly bill.

I would venture to say that what consumers don't like about micropayments is any combination of the following:

(1) It's a pain in the ass to provide payment info most places, and comes with the looming paranoia that your data is going to be abused;

(2) It's viscerally disgusting when e.g. AAA video game developers expect you not to notice the difference between $100 for marginal extra content, and 100 micropayment charges of $1 for the same amount of marginal extra content;

(3) It's an infohazard to the average person to inform them exactly how much they're spending on each thing in their life, because it tempts them toward a culturally validated budgetary anorexia.

Public utilities avoid (1) because it's a one-time signup with trusted vendors for years of service, they avoid (2) because utilities are priced (somewhat) rationally in nationally standardized ways, and they avoid (3) because utility bills can only get so itemized.


It's also the incentive structure that's different. E.g. I can choose to buy cheaper LED lights to reduce my electricity costs because the interests of lightbulb companies are mostly orthogonal to the (usage-based) interests of the utility providers.

Micro-payments are more akin to a hypothetical world in which the lightbulb company gets paid via my electricity bill; now they have an incentive to sell incandescents over LEDs. Similar to how micro-payment (and advertising) based news companies have an incentive to sell click-bait, because they're getting paid based on usage rather than a flat fee.


This seems like a problem of perverse incentives independent of the medium of micropayment (cash vs. ad farming), no? I suppose the only way around that particular problem would be to decouple their revenue from the number of people actually accessing their content, which as far as I can tell precludes those people being the patrons. Instead the patron would be some larger corporate or public body auditing and funding them based on merit.

Curiously, there are still perverse incentives even in the case of lightbulbs and other consumable goods or technologies: planned obsolescence, delay of technology upgrades, and deliberate backroom deals from associated resource providers.


Yes! You can partially decouple it through recurring subscriptions, or possibly bundling, such as cable TV. But I can't think of a viable micro-payment method that wouldn't have the same problem.

Planned obsolescence is a failure mode because unit consumption (vs metered consumption) is the monetization scheme. Hypothetically this could be decoupled through something like lifetime warranties, but that has too many failure modes to be broadly viable.

The point is, despite other perverse incentives, with lightbulbs you have a situation in which unit consumption and metered consumption are at odds, so one company can make more money by enabling the customer to spend less elsewhere. Of course, if you ever tie the two together, such that one company profits from metered consumption and controls/profits from the unit -- Inkjet printers with proprietary cartridges come to mind -- you've now adopted an anti-consumer business model.

It's ideal when corporate incentives end up opposed to each other for the benefit of the consumer, but I think you'll be hard pressed to create that through micro-payments.


Is usage-based billing the same as micropayments? In any case, I have one utility company with lines connected to my house, so I put up with whatever they want to bill me. Very different marketplace than newspapers.

> I have one utility company with lines connected to my house, so I put up with whatever they want to bill me. Very different marketplace than newspapers.

Fair point. I suppose I'm considering the alternative scenario where rather than near-monopoly between utility providers in any given region, there is instead room for competition. I claim that even given such competition, those utility providers who offered usage-based billing would be at least as appealing to the public as flat-fee, usage-independent billing.

> Is usage based billing the same as micropayments?

Technically, I suppose you're paying for a resource which you are then allowed to use as you please. But since the average consumer doesn't have huge batteries or water reservoirs in their garage, and since utility companies don't (and can't) price you differently per watt or gallon of water depending on which appliance you're using, the effect is identical to the utility instituting rationally priced (per unit of resource consumed) micropayments on each of your household appliances.
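To make the utilities-as-micropayments point concrete, here is a back-of-the-envelope sketch; the tariff and per-use energy figures are invented for illustration:

```python
# Each appliance use is effectively a tiny, rationally priced charge that
# shows up aggregated in the monthly bill. All numbers are made up.
RATE_PER_KWH = 0.15  # assumed tariff, $/kWh

appliance_kwh = {
    "LED bulb, one hour": 0.01,
    "kettle, one boil": 0.10,
    "dishwasher, one cycle": 1.20,
}

for use, kwh in appliance_kwh.items():
    # the implicit "micropayment" for each use
    print(f"{use}: ${kwh * RATE_PER_KWH:.4f}")
```

Nobody experiences these fractions of a cent as individual transactions, which is arguably the whole trick.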


For utilities it's tolerated because there is a massive difference in cost between serving users with different usage patterns. There is no way to structure a fixed monthly utility bill that is fair. If usage didn't incur such huge costs, it would be a fixed bill, like internet and phone plans.

Providing extra TV channels, meanwhile, costs nothing. Even if you are a power user who watches 10x the TV of a normal person, it doesn't cost the company anything extra.


And also that many of the channels people were insisting they don't want were actually paying for coverage, not charging for it. (Home shopping, religion, etc)

This is totally hypothetical, but I wonder if a system whereby your dollars went to the publications you actually read, but you could immediately, at any time read anything else you wanted for free would work. There would be an obvious reason to subscribe (you get past the paywall for any publication that is part of the bundle) but you would have the feeling that you're not "wasting" money because your money only goes to the publications you actually support.

(In reality, of course, cable providers were mostly doing this under the hood along with pocketing a big cut for themselves; television is just expensive to produce. But it didn't help the feeling of unfairness when you didn't watch any sports but ESPN was probably the most expensive channel in your "package".)


Isn't that the YouTube premium model? You pay a fixed monthly fee, Google takes a cut and the rest is divided among the channels you watch. It's supposedly in proportion to the watch time you've allocated to each of them, but I'm not sure that's ever been confirmed.

That’s the Spotify model.

I thought Spotify's model was that all subscriptions go into one pool, which gets divided by platform-wide listen time.

EDIT: this is indeed the Spotify model, while YouTube's approach was to treat Premium as a make-up for missing ad views, so it pays out from the individual viewer's subscription.
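A toy sketch of the two payout schemes being contrasted, under simplified assumptions (all subscribers, channels, and numbers are invented):

```python
# - pro-rata ("Spotify model"): the whole subscription pool is split by
#   platform-wide listen share
# - user-centric ("YouTube Premium model", as described above): each
#   subscriber's fee is split by that subscriber's own watch time

subs = {"alice": 10.0, "bob": 10.0}   # monthly fee per subscriber, $
minutes = {                           # minutes per subscriber per channel
    "alice": {"jazz": 10},            # light listener
    "bob": {"pop": 1000},             # heavy listener
}

def pro_rata(subs, minutes):
    pool = sum(subs.values())
    totals = {}
    for listens in minutes.values():
        for channel, m in listens.items():
            totals[channel] = totals.get(channel, 0) + m
    grand = sum(totals.values())
    return {channel: pool * m / grand for channel, m in totals.items()}

def user_centric(subs, minutes):
    payout = {}
    for user, listens in minutes.items():
        user_total = sum(listens.values())
        for channel, m in listens.items():
            payout[channel] = payout.get(channel, 0) + subs[user] * m / user_total
    return payout

print(pro_rata(subs, minutes))      # jazz gets ~$0.20: the heavy listener dominates the pool
print(user_centric(subs, minutes))  # jazz gets the full $10 of its one fan
```

The difference only shows up when listening is unevenly distributed; with identical usage patterns the two schemes pay out the same.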


Curious what time period this was. For example, if you lived in Portland in 2019 and Seattle in 2023 it could just be inflation causing people to go out less.


Great point. Temporally separated samples would have that effect. However, I moved directly from Portland to Seattle. Further, I have returned to Portland and found it to be just as wonderful a place to eat out as before.


It seems like the main feature is being able to access your home network to watch netflix, access LAN devices, etc.

How is this different compared to running a tailscale exit node in your home network?

Is the benefit of this that you have a hardware device that you can connect to instead of needing software like tailscale?


I have a hard time believing anyone would actually use this versus self-hosting Headscale on a discarded ThinkCentre running in a closet.


Not sure if you’re serious, but this reeks of “you can already build such a system yourself quite trivially”


Not serious, and you got it.



But Unifi should be able to implement this with zero extra hardware, just with VPN-style clients on phones and laptops?

I'm just surprised this needs an extra device. It would make sense if the device provided its own connectivity (with global wireless service, say), but this doesn't seem to be the case here. It still needs an uplink.


That's already an option, too.


I run OpnSense, Wireguard, hooked up to third party WiFi access points, and I had to do a lot of configuration and work that I wouldn't have had to do if I had just bought Ubiquiti equipment.

I did save money, a really significant amount of money.

Obviously, yes, I am capable of going through the work that eliminates my need for this product. I have no trouble configuring Wireguard and setting it up on my client devices and running through all that.

But it was a lot of work to get to this point and I had to spend a lot of time learning how to do that, even as a person who is already technical. Wireguard in particular took me a solid half a day to build understanding and get it configured.

If I was a little bit richer and I went back in time I'd probably just buy all Unifi. Actually if I went back in time I think with my same levels of wealth I'd probably just buy Unifi and save some precious time.

This specific device does seem like a really nice extension of their product line.


The catch is figuring out what's going to stick around and what won't.

I have a Ubiquiti EdgeRouter Lite that's a little over ten years old. At the time, it was revolutionary in its ability to pump a whole lot of data over a cheap device with a lot of features - but a lot of those features weren't available in the GUI at all; you had to go CLI and learn Vyatta (of which it was a fork) to do them. It's been updated over the years and is now much easier to use as the web interface exposes a lot more functionality, but it's not part of Unifi (and never will be).

Early on, I looked at and even tried one of their APs. 100 Mbps wired uplinks for N wireless? No thanks. Even the one I got to test had absolutely abysmal range. Say what you will about TP-Link generally, but their Omada unified control system had APs that actually worked in my house. So the early Unifi stuff wasn't anything special, and they had dropped the ball on so much of their early hardware. (The EdgeRouter Lite had its software on an internal USB drive that, out of warranty, failed in a way I was only able to diagnose with a serial console cable. At least it had a port so I could monitor it during boot, and searching for the error messages turned up a way to replace the thumbdrive and reload the software.) So I had no reason to go with them.

If I were setting someone up today, with all new gear, I might go Unifi, but I have no reason to spend any time at all replacing a system that works just fine.


What I didn’t like about TP-Link Omada was their weird requirement for a separate controller hardware thing, or running a controller server thing. If I remember right.

I ended up with the OpnSense box plus Zyxel APs. The Nebula cloud offering has been surprisingly good for me: it offers plenty of features in the free tier and the APs don’t actually need the cloud service to be configured if it were to be discontinued.


They phrase it oddly, I think to try to get people to buy a controller, but you only need it for setup, and the free software controller works fine for that. You only really need a hardware controller for a business environment where you expect to manage multiple sites remotely (it can be done remotely but isn’t worth the $80 you save vs having a hardware controller on site). Once configured, the devices will keep on doing their thing after reboots. You will have to fire it up for upgrading devices, but that’s no different from running Unifi without a controller with only APs - there has to be a provisioning controller somewhere to get them to work as a true network with seamless handoffs and the like. Otherwise, running in standalone mode, they are just like running consumer APs individually.

I have a hardware controller, but I will probably end up putting it in my in-laws’ house because software is fine for where I live. I actually set the whole thing up via software controller and transferred the config when it was all set and I would only be making small changes.


Time is your most precious commodity.


I’m in the market for a solid travel router, and my home network is all Unifi gear. This is a no brainer, especially with the built-in Teleport support.


I think so: it looks like "UniFi Teleport" is also based on Wireguard.

You can also do this with a travel router like one of GL.iNet's and Tailscale subnet routers.


UniFi teleport is also very buggy with frequent disconnects. Tailscale and WireGuard proper don’t have those issues for me.


How would Tailscale run in your home network without a hardware device to connect to?


You can create a subnet router on Tailscale and access any device on your local network, regardless of whether those devices have Tailscale installed
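For example, a subnet router is set up with the `tailscale up` flags sketched below; the subnet is illustrative, and the exact steps vary by platform, so check Tailscale's docs:

```shell
# On one always-on machine inside the home LAN (substitute your own subnet):
tailscale up --advertise-routes=192.168.1.0/24

# Approve the advertised route in the Tailscale admin console. On Linux
# clients, routes must also be accepted explicitly:
tailscale up --accept-routes
```

After that, devices on your tailnet can reach 192.168.1.x hosts that have never heard of Tailscale.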


Sure, but you need a device on the local network to run Tailscale so it routes to that subnet, no?


Not to take away from this device, I think it’s pretty neat. But you can run tailscale on anything, even Apple TVs. If you have a Unifi network odds are that you have at least one spare computing device that can run tailscale.


Problem is that I think my Apple TV goes into some sort of deep idle mode where tailscale stops working. So it’s been effectively useless for me when I travel.


Check the Tailscale blog and docs for AppleTV. ISTR reading about an issue like this popping up and they had a workaround of some sort. Never happened to me.


Never had that, and I use that feature often.


Is the idea that you would need less chemo after the tumor is broken up to remove any remaining cancer cells versus just starting out with chemo to remove the tumor?


Chemotherapy isn't always successful, and success depends on the tumor's characteristics, but the idea is yes, less chemo. Histotripsy is similar to resection in that it physically removes the tumor. I've seen chemo options for both scenarios with resectable cancers. For example, hormonal therapy is usually prescribed after resectable breast cancer, regardless of outcome. Or chemo is given first to shrink the tumor and get better surgical margins.


Yeah, I agree it is a lack of understanding of how to use the tools. The main issue I ran into in my undergrad FPGA class as a CS student was a lack of understanding of how to use the IDE. We jumped right into trying to get something running on the board instead of taking time to get everything set up. IMO it would have been way easier if my class had used an IDE as simple as Arduino's, instead of everyone trying to run a virtual machine on their MacBooks to run Quartus Prime.


Is a skill essentially a reusable prompt that is inserted at the start of any query? The marketing of Agents/MCP/skills/etc is very confusing to me.


It's basically just a way for the LLM to lazy-load curated information, tools, and scripts into context. The benefit of making it a "standard" is that future generations of LLMs will be trained on this pattern specifically, and will get quite good at it.


> It's basically just a way for the LLM to lazy-load curated information, tools, and scripts into context.

So basically a reusable prompt, like the previous poster asked?


Ah, not exactly.

The way the OP phrased it:

> Is a skill essentially a reusable prompt that is inserted at the start of any query?

is actually a more apt description of a different Claude Code feature called slash commands, where I can create a preset prompt and call it with /name-of-my-prompt $ARGS. That feature is the one that essentially prefixes a prompt.

The lazy-loading description is more accurate for skills.

Where I can tell my Claude Code system: Hey if you need to run our dev server see my-dev-server-skill

and the agent will determine when to pull in that skill if it needs it.
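A minimal sketch of that lazy-loading pattern, assuming an invented skill registry (the names and structure are illustrative, not any vendor's actual format):

```python
# Only short descriptions are kept in context at all times; a skill's full
# instructions are read in only when the agent decides it needs them.

skills = {
    "dev-server": {
        "description": "How to start and query our dev server.",
        "body": "Run scripts/dev.sh, then poll http://localhost:3000/health ...",
    },
    "release-notes": {
        "description": "House style for writing release notes.",
        "body": "Use past tense, group changes by subsystem ...",
    },
}

def context_preamble() -> str:
    """What the model always sees: one cheap line per available skill."""
    return "\n".join(f"- {name}: {s['description']}" for name, s in skills.items())

def load_skill(name: str) -> str:
    """Called only when the model asks for a skill; pulls the full body in."""
    return skills[name]["body"]

print(context_preamble())
print(load_skill("dev-server"))
```

The point of the indirection is token economy: a hundred skills cost a hundred short lines until one is actually used.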


Yes, but with more sales magic sprinkled on top.


Does it persist the loaded information for the remainder of the conversation or does it intelligently cull the context when it's not needed?


This question doesn’t have anything to do with skills per se, this is just about how different agents handle context. I think right now the main way they cull context is by culling noisy tool call output. Skills are basically saved prompts and shouldn’t be that long, so they would probably not be near the top of the list of things to cull.


Claude Code subagents keep their context windows separate from the main agent, sending back only the most relevant context based on the main agent's request.


Each agent will do that differently, but Gemini CLI, for example, lets you save any session with a name so you can continue it later.


It's the description that gets inserted into the context, and then if that sounds useful, the agent can opt to use the skill. I believe (but I'm not sure) that the agent chooses what context to pass into the subagent, which gets that context along with the skill's context (the stuff in the Markdown file and the rest of the files in the FS).

This may all be very wrong, though, as it's mostly conjecture from the little I've worked with skills.


Claude also has custom slash-commands, so you can force skill usage as you see fit.

This lets you trigger a skill with '/foo' in a way that resembles the way you'd use the command line.

Claude Code is very good at using well-defined skills without a command, though; but in scenarios where there is some nuance between similar skills, explicit commands are useful.


Skills can be just instructions on how to do things.

BUT what makes them powerful is that you can include code with the skill package.

Like I have a skill that uses a Go program to traverse the AST of a Go project to find different issues in it.

You COULD just prompt it but then the LLM would have to dig around using find and grep. Now it runs a single executable which outputs an LLM optimised clump of text for processing.
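The commenter's tool is in Go; as an illustration of the same idea, here is a minimal Python-stdlib analog that walks an AST and emits a compact report for the LLM (the missing-docstring check is just an example "issue"):

```python
# Instead of the LLM digging around with find and grep, a bundled script
# traverses the AST directly and prints a terse, machine-friendly summary.
import ast

SOURCE = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''

def functions_missing_docstrings(source: str) -> list[str]:
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
    ]

print(functions_missing_docstrings(SOURCE))  # ['undocumented']
```

One deterministic pass over the tree replaces many speculative tool calls, which is exactly the economy the comment describes.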


It's part of managing the context. It's a bit of prepared context that can be lazy-loaded in as the need arises.

Inversely, you can persist/summarize a larger bit of context into a skill, so a new agent session can easily pull it in.

So yes, it's just turtles, sorry, prompts all the way down.


“inserted at the start of any query” feels like a bit of a misunderstanding to me. It plops the skill text into the context when it needs it or when you tell it to. It’s basically like pasting in text or telling it to read a file, except for the bit where it can decide on its own to do it. I’m not sure start, middle, or end of query is meaningful here.


It also has (Python/Ruby/bash) scripts which Claude Code can execute.


The rise of preventative screening centers (such as Prenuvo) that offer whole-body MRIs will be interesting.

The research seems split on whether it is worth it or whether it just causes unneeded worry. Obviously catching something early is great, but a lot of people end up with a ton of follow-up testing only to find out there is no issue.

There are also limitations on the level of detail a full-body MRI can capture.

I could see it becoming similar to a colonoscopy, where you get one at, say, 30 and then every 5-10 years after that.


As part of my post-cancer screening, I have received a full-body MRI every year since 2017. In 2024, it discovered pancreatic cancer. Grateful for those years where it found nothing, but even MORE grateful when it did catch something!


Yeah, I have a feeling we will instead start exposing some /help API that the AI will call first to see all possible operations and how to use them, in some sort of minified documentation format.
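A hypothetical sketch of such a /help endpoint; every name and field here is invented purely for illustration:

```python
# The agent fetches a compact operations catalog first, then calls whichever
# operations it needs. Compact JSON separators keep the token cost low.
import json

CATALOG = {
    "ops": [
        {"name": "list_invoices", "args": {"since": "ISO-8601 date"},
         "returns": "array of invoice ids"},
        {"name": "get_invoice", "args": {"id": "string"},
         "returns": "invoice object"},
    ]
}

def help_endpoint() -> str:
    # Minified serialization: no spaces after separators.
    return json.dumps(CATALOG, separators=(",", ":"))

print(help_endpoint())
```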


This is the first time I have heard of world models. Based on my brief reading, it does look like the ideal model for autonomous driving. I wonder if the self-driving companies are already using this architecture or something close to it.


I have been part of a few migration projects like this. There is another issue apart from tests not existing: business/product still wants new high-priority features, so developers keep adding new logic to the old system because the new system cannot support those features yet.


The secret sauce is being able to operate below your maximum effectiveness while still seeming impressive enough. That is, if you want to play the long game and get a lot done over a 5 year horizon.

