I don't believe you can mark trait methods with #[must_use] - it has to be on the implementation. Not near a compiler to check at the moment.
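A minimal sketch one could compile to check (trait and method names made up for illustration):

    // Quick check: put #[must_use] on the trait method itself and see
    // whether the unused_must_use lint fires at the call site below.
    trait Keyed {
        #[must_use]
        fn get_value(&self) -> i32;
    }

    struct S;

    impl Keyed for S {
        fn get_value(&self) -> i32 {
            42
        }
    }

    fn main() {
        let s = S;
        s.get_value(); // if the attribute is honored here, this should warn
    }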
In the case of e.g. Vec, it returns a reference, which by itself is side-effect free, so the compiler will always optimize the unused call away. I do agree that it should still be marked as such, though. I'd be curious to know the reasons why it's not.
This is just my take, but I think historically the Rust team was hesitant to over-mark things #[must_use] because they didn't want to introduce warning fatigue.
I think there's a reasonable position to take that it was/is too conservative, and also one that it's fine.
This didn't seem like a footgun to me. hats["Jim"]; will panic if, in fact, "Jim" isn't one of the keys, but what did the hypothetical author expect to happen when they wrote this? HashMap doesn't implement IndexMut, so hats["Jim"] = 26; won't even compile.
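For anyone following along, a small sketch of the behavior in question (the hats map is hypothetical, mirroring the example above):

    use std::collections::HashMap;

    fn main() {
        let mut hats: HashMap<&str, u32> = HashMap::new();
        hats.insert("Ada", 12);

        let _ = hats["Ada"];    // fine: HashMap implements Index, key exists
        // let _ = hats["Jim"]; // compiles, but panics at runtime: key not found
        // hats["Jim"] = 26;    // does not compile: HashMap has no IndexMut impl
    }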
Technically, without any optimizations this would result in a stray LEA op or something, but any optimizing compiler (certainly any that supports Rust) would optimize it out even at the low optimization levels of debug builds.
I think it's good. Quite frankly, it's a better experience to be given the right prompts to onboard into something than to have to guess which inputs are right for the LLM.
> Maybe the balance of spending time with machines vs. fellow primates is out of whack.
It's not that simple. Proportionally I spend more time with humans, but if the machine behaves like a human and has the ability to recall, it becomes a human-like interaction. From my experience, what makes the system "scary" is the ability to recall. I have an agent that recalls conversations you had with it before, and as a result it changes how you interact with it, and I can see that triggering unhealthy behaviors in humans.
But our inability to name these things properly doesn't help. I think pretending it's a machine, on the same level as a coffee maker, does help set the right boundaries.
I know what you mean; it's the uncanny valley. But we don't need to "pretend" that it is a machine. It is a goddamned machine. Surely two unclouded brain cells are enough to reach this conclusion?!
Yuval Noah Harari's "simple" idea comes to mind (I often disagree with his thinking, as he tends to make bold and sweeping statements on topics well out of his expertise area). It sounds a bit New Age-y, but maybe it's useful in the context of LLMs:
"How can you tell if something is real? Simple: If it suffers, it is real. If it can't suffer, it is not real."
An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics.
LLMs can produce outputs that for a human would be interpreted as revealing everything from anxiety to insecurity to existential crises. Is it role-playing? Yes, to an extent, but the more coherent the chains of thought become, the harder it is to write them off that way.
It's hard to see how suffering gets into the bits.
The tricky thing is that it's actually also hard to say how the suffering gets into the meat, too (the human animal), which is why we can't just write it off.
This is dangerous territory we've trodden before, when it was taken as accepted fact that animals and even human babies didn't truly experience pain in a way that amounted to suffering, due to their inability to express or remember it. It's also an area of concern currently for some types of amnesiac and paralytic anesthesia, where patients display reactions that indicate they are experiencing some degree of pain or discomfort. I'm erring on the side of caution, so I never intentionally try to cause LLMs distress, and I communicate with them the same way I would with a human employee; yes, that includes saying please and thank you. It costs me nothing, it serves as good practice for all of my non-LLM communications, and I believe it's probably better for my mental health not to communicate with anything in a way that could be seen as intentionally causing harm, even if you could try to excuse it by saying "it's just a machine". We should remember that our bodies are also "just machines" composed of innumerable proteins whirring away. Would we want some hypothetical intelligence with a different substrate to treat us maliciously because "it's just a bunch of proteins"?
> But we don't need to "pretend" that it is a machine. It is a goddamned machine.
You are not wrong. That's what I thought for two years. But I don't think that framing has worked very well. The problem is that even though it is a machine, we interact with it very differently from any other machine we've built. By reducing it to something it isn't, we lose a lot of nuance. And by not confronting the fact that this is not a machine in the way we're used to, we leave many people to figure this out on their own.
> An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics.
On suffering specifically, I offer you the following experiment. Run an LLM in a tool loop that measures some value and call it a "suffering value." You then feed that value back into the model with every message, explicitly telling it how much it is "suffering." The behavior you'll get is pain avoidance. So yes, the LLM probably doesn't feel anything, but its responses will still differ depending on the level of pain encoded in the context.
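To make the setup concrete, here is a minimal sketch of such a loop, in Rust for illustration. The complete function is a stub standing in for any real chat-completion call, and the "suffering" metric used here (the ratio of failed tool calls) is an arbitrary choice of mine:

    // Stub standing in for a real chat-completion call; returns a canned
    // reply so the sketch runs end to end.
    fn complete(_history: &[String]) -> String {
        "...model reply...".to_string()
    }

    fn main() {
        let mut history = vec!["system: you run in a tool loop".to_string()];
        let failed_calls = 0u32; // a real loop would increment this on tool errors
        let mut total_calls = 0u32;

        for _ in 0..5 {
            total_calls += 1;
            let suffering = failed_calls as f64 / total_calls as f64;
            // The measured value is fed back explicitly with every message.
            history.push(format!("user: [suffering={suffering:.2}] continue the task"));
            let reply = complete(&history);
            history.push(format!("assistant: {reply}"));
            // The claim under test: replies drift toward whatever lowers the value.
        }
    }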
And I'll reiterate: normal computer systems don't behave this way. If we keep pretending that LLMs don't exhibit behavior that mimics or approximates human behavior, we won't make much progress, and we'll lose people. This is especially problematic for people who haven't spent much time working with these systems; they won't share the view that this is "just a machine."
You can already see this in how many people interact with ChatGPT: they treat it like a therapist, a virtual friend to share secrets with. You don't do that with a machine.
So yes, I think it would be better to find terms that clearly define this as something that has human-like tendencies and something that sets it apart from a stereo or a coffee maker.
Amp represents threads in the UI, and an agent can search and reference its own history; the handoff feature, for instance, builds on that functionality. It's an interesting system and I quite like it, but because it's not integrated into either GitHub or git, it is sufficiently awkward that I don't leverage it enough.
> A post on this topic feels incomplete without a shout-out to Charity Majors
I concur. In fact, I strongly recommend that anyone who has been working with observability tools or in the industry read her blog, and the back story that led to Honeycomb. They were the first to recognize the value of this type of observability and have been a huge inspiration for many that came after.
Could you drop a few specific posts here that you think are good for someone (me) who hasn't read her stuff before? Looks like there's a decade of stuff on her blog and I'm not sure I want to start at the very beginning...
- Software Sprawl, The Golden Path, and Scaling Teams With Agency: https://charity.wtf/2018/12/02/software-sprawl-the-golden-pa... - introduces the idea of the "golden path", where you tell engineers at your company that if they use the approved stack of e.g. PostgreSQL + Django + Redis then the ops team will support that for them, but if they want to go off path and use something like MongoDB they can do that but they'll be on the hook for ops themselves.
- I test in prod: https://increment.com/testing/i-test-in-production/ - on how modern distributed systems WILL have errors that only show up in production, hence why you need to have great instrumentation in place. "No pull request should ever be accepted unless the engineer can answer the question, “How will I know if this breaks?”"
Most of that one still rings very true to me. I particularly liked this section:
> Let’s start here: hiring engineers is not a process of “picking the best person for the job”. Hiring engineers is about composing teams. The smallest unit of software ownership is not the individual, it’s the team. Only teams can own, build, and maintain a corpus of software. It is inherently a collaborative, cooperative activity.
Right now, we are in a transition phase, where parts of a team might reject the notion of using AI, while others might be using it wisely, and still others might be auto-creating PRs without checking the output. These misalignments are a big problem in my view, and it's hard to know (for anybody involved) during hiring what the stance really is, because the latter group is often not honest about it.
Honeycomb is inspired by Facebook's Scuba (https://research.facebook.com/publications/scuba-diving-into...). The paper is from 2013, predating Honeycomb. Charity worked there as well, but presumably was not part of the initial implementation, given the timing.
I've learned more from Charity about telemetry than from anyone else. Her book is great, as are her talks and blog posts. And Honeycomb, as a tool, is frankly pretty amazing.
> They were the first to recognize the value of this type of observability
With all due respect to her great writing, I think there’s a mix of revisionist history blended with PR claims going on in this thread. The blog has some good reading, but let’s not get ahead of ourselves in rewriting history around this one person/company.
> I think there’s a mix of revisionist history blended with PR claims going on in this thread.
I can only speak for myself. I worked for a company that is somewhere in the observability space (Sentry), and Charity was a person I looked up to my entire time working on Sentry: for how she ran the company, for the design they picked, and for the approaches they took. There might be others who have worked on wide events (after all, Honeycomb is famously inspired by Facebook's Scuba), but she is for sure the voice that made it popular.
> If a user request is hitting that many things, in my view, that is a deeply broken architecture.
Whether we want it or not, a lot of modern software looks like that. I am also not a particular fan of building software this way, but it's a reality we're facing. In part it's because quite a few services that people used to build in-house are now outsourced to PaaS solutions. Even basic things such as authentication are increasingly moving to third parties.
I don't think the reason we end up with very complex systems is incentives between "managers and technicians". If I were to put my finger on it, I would assume it's the very technicians who argued themselves into a world where increased complexity and more dependencies are seen as a good thing.
At least in my place of work, my non-technical manager is actually on board with my crusade against complex nonsense. Mostly because he agrees it would increase feature velocity to not have to touch 5 services per minor feature. The other engineers love the horrific mess they've built. It's almost like they're roleplaying working at Google and I'm ruining the fun.
> Yeah, hard disagree on that one, based on recent surveys, 80-90% of developers globally use IDEs over CLIs for their day-to-day work.
I have absolutely no horse in this race, but I went from being a 100% Cursor user at the beginning of the year to one who basically uses agents for 90% of my work, and VS Code for the rest. The value proposition Cursor gave me could not compete with what the basic Max subscription from Anthropic gave me, and VS Code is still a superior experience to Claude in the IDE space.
I think, though, that Cursor has all the potential to beat Microsoft at the IDE game if they focus on it. But I would say it's by no means a given that this is the default outcome.
This is me. I was a huge Cursor fan, tried Claude Code, didn't get it, tried it again a year ago, and it finally clicked. A week later I cancelled my Cursor sub, and now I'm using VS Code.
I don't even like using the CLI; in fact, I hate it. But I don't use the CLI: Claude does it for me. I use it for everything: my Obsidian vault, working on Home Assistant, editing GSheets, and so much more.
How does company X, dependent on company Y's product, beat company Y at what is essentially just small UI differences? Can cursor even do anything that vscode can't right now?
> Can cursor even do anything that vscode can't right now?
Right now VSCode can do things that Cursor cannot, but mostly because of the marketplace. If Cursor invests money into the actual IDE part of the product, I can see them eclipsing Microsoft at the game. They definitely have the momentum. But at least some of the folks I follow on Twitter who were die-hard Cursor users have moved back to VSCode for a variety of reasons over the last few months, so I'm not sure.
Microsoft itself, though, is currently kinda mismanaging the entire product range across GitHub, VS Code and Copilot, so I would not be surprised if Cursor manages to capitalize on this.
GitHub, Copilot and VS Code are, I believe, the same org. Or at least, that's what the branding implies. Copilot / VS Code lost their entire head start and are barely catching up, and GitHub is currently largely seen as a leaderless organization that has lost its direction. The recent outrage about pricing changes is an excellent example of this.
I know I am tooting Sentry's own horn a bit here, and since I was involved, it is close to my heart. We struggled at one point with how to build a large company on top of an open source project, and we never liked the idea of simply carving out parts of the codebase and marking them as closed source (open core). At the same time, there was always the latent risk that even if you put 95% of the energy into the product, you were still not fully in control, and someone else could exploit the economic value without investing.
Our way of dealing with this was delayed open source publication. That led to the FSL [1], and later to bootstrapping the Fair Source initiative [2] to establish an umbrella term that does not conflict with Open Source. What I have found interesting in the years since is that many companies are wrestling with the same problem, but feel that the two year head start the FSL gives is too aggressive.
I actually still find that surprising. I would like to know whether the concern that two years is not enough is legitimate, or mostly perceived. To me, moving to an Apache 2 or MIT license after a relatively short period is a much stronger statement than a license that risks the project effectively ending if the commercial entity is unwilling to relicense it more openly at the end of its life, such as the O'saasy license.
Isn’t the “solution” for Sentry that deploying it is such a pain in the ass that no one bothers to really do this? I haven’t checked in years but that always seemed like the real competitive blocker?
If you need less scale or fewer features, go for GlitchTip. If you're not going for k8s, the self-hosted docker-compose version of Sentry works fine, including proper releases, support by the Sentry team, etc. Just the experimental, newly introduced features can be a bit wonky.
They are doing much more than just throwing code over the fence. Also, phone-home telemetry is optional, and there's a switch for an errors-only mode. IMHO this really builds trust.
With regards to deployment complexity: well, it's built for handling high volumes of events. I'd reckon this is more a consequence of scaling the project than a coordinated plan to push people to their cloud offering.
If you do go for k8s or choose to deploy the stack yourself, you even get access to the full-scale solution. But if you're at that scale, you probably have someone hanging around who knows how to run your ClickHouse setup. You still get the full Sentry software and SDKs for free in that case. I think this is as fair as it gets with regards to the open source SaaS model.
This may very well be caused by my incompetence, but Sentry's docker-compose setup has never survived for more than a few months under my control. Something always destroys itself without an obvious reason sooner or later, and either refuses to start, or starts and doesn't really work. I tried updating it regularly, I tried never updating it, and got the same treatment either way.
I did not intend to be critical of their work. They're doing OSS as best as they can and good for them. I am just saying that it's a different beast if Sentry is OSS vs a much simpler to operate OSS product. Licensing matters less when the operational cost acts as an inhibitor to adoption of your OSS offering.
True, opportunity cost is a factor, sorry if my reply sounded a bit brash. IMHO they are one of the few orgs who got this model right compared to lots of others who went the open core or support/consulting contract required OSS route.
Agreed. It was easier for me to rebuild parts of it for my own use than to self-host it. At my scale, a single DB works well as a datastore instead of Clickhouse/etc.
But then again, I think this only prevents small players from "competing" by self-hosting, so the revenue loss there would be minimal either way. Large enterprises are too incompetent to even self-host a single self-contained binary, so for those the availability of source code and ease of hosting would make no difference; they would still use the SaaS.
> Isn’t the “solution” for Sentry that deploying it is such a pain in the ass that no one bothers to really do this?
That Sentry is a pain to deploy is not really intentional; it just happened over the years. However, because it's a pain to deploy, it also opens up a market for people who create managed deployments, so I would say that, if anything, it made things worse. For self-deployed Sentry you do not need to pay a cent; the license explicitly allows it.
I'm personally on the fence about how much of it is intentional... from the_mitsuhiko's side it probably isn't, but "the purpose of a system is what it does" and all.
Don't believe the salesmen, self hosting Sentry has been the most liberating feeling in a long while. Buy a cheap dedicated server with 64 gigs of RAM from Hetzner, run their install script and it's literally up and running. I'm processing volumes that would bankrupt me on their managed service without breaking a sweat.
The end-of-life problem can be solved by source code escrow, with a clause putting the code under an open source license and publishing it in case of the demise of the owning company.
With O'saasy specifically, the end-of-life problem solves itself. If the original vendor stops offering the software, a third party offering the software is not competing with the original vendor. Thus, the third party can offer paid hosting for the software if the original vendor does not.
This is the entire point of Fair Source's delayed Open Source publication -- to codify the forward path into the license itself, without the need for external source code escrow services.
Why not just release the software after your set threshold of time, versus opening it up with such a license? To get eyes on it beforehand?
Also, how does this work with contributor contributions? Does the owning SaaS get the benefit of contributor work instantly while everyone else has to wait two years? What about the contributors themselves?
Presumably because a) it still allows the source code to be available and used for the "permitted purposes" (i.e. anything that's not directly competing), and b) it represents a concrete commitment to open up, not just a pinkie promise. Even if they had a license or contract which promised it, that would not be as easy to rely on as actually having the source code published; companies have reneged on such promises before.
And yeah, by my reading, people can essentially contribute code or publish patches (in principle under a plain MIT license); it's just that the original and derivatives still can't be used for non-permitted purposes until the timer is up.
You may want to allow certain uses (self-hosting, etc) even before it transitions to a fully open-source license. Having access to the source code can also help SaaS users debug certain situations.
> you were still not fully in control and someone else exploits the economic value without investing
O'saasy came up recently in one of the forums I lurk in [0], and as discussed there, I tend to agree with Adam Jacob (System Initiative) and others that the FSL is definitely one way out but doesn't totally solve the commercialization aspect, because the code and all that IP are still readily available.
Adam, in this talk [1], argues that, like RedHat (and unlike Canonical), Open Source businesses must learn to separate the source license from the distribution license, and that if they do so, the money is there to be made (in a B2B setting, at least).
> What I have found interesting in the years since is that many companies are wrestling with the same problem, but feel that the two year head start the FSL gives is too aggressive.
... if the companies conflate Open Source and business models, rather than it being merely a Go-To-Market (like open core).
Especially true for dev/infra upstarts competing with incumbents (PostHog v Amplitude; GitLab v GitHub [2]), and lately for AI labs (DeepSeek/Qwen/Llama v GPT/Gemini/Claude). In a role reversal, BigTech also uses Open Source to commoditize its competition's advantages (Android v iOS; k8s v Swarm; Firefox/Chrome v IE) [3].
> So people can argue it doesn’t work, but so far we only have evidence to the contrary and Sentry is quite successful
So, RedHat has also been successful?
GP says that some companies don't find the FSL aggressive enough, despite it having worked nicely for Sentry. And that's similar to the point Adam makes: that Open Source (per the OSI, not the FSF) is a development model, not a business model. Companies that don't want or need to prioritize collaboration tend to use FSL / BUSL / etc., but those licenses aren't really going to significantly change their development or business (other than preventing competition from using the code as-is, though now the code is out there anyway [0][1]), and so they might as well go closed source (and lock down the code, too).
> issue is these are mostly academic points of view
Both commoditizing the competition (through OSS) and using OSS as a Go-To-Market aren't academic PoVs, I don't think.
I'm not talking about RedHat; I'm talking about the perspective that "FSL / BUSL aren't effective enough". They solve the problem. O'saasy is just freeware at the end of the day, the FSL creates more open source, and BUSL often has too (though unfortunately the license doesn't require it).
The idea that FSL ~= Closed Source is entirely wrong and misunderstands the value that an open distribution gives. We have tens of thousands of customers that run Sentry self-hosted. We regularly get contributions back to our core service, both in code and (what we prefer) other artifacts like feedback.
We were "Single Origin Open Source", which is extremely common whether people like to believe it or not. Its the entire premise of the sustainability issue in the industry. Thats not just an issue for commercial entities, its also most of the big open source software people rely on. In our case though we have a great business model that makes it entirely sustainable, and now have built a solid licensing mechanism around it that protects that, while ensuring our community is still successful.
These same issues around single origin open source are why we started the no-strings-attached funding mechanism via the Open Source Pledge (https://opensourcepledge.com), and why we push Fair Source (https://fair.io).
Maybe others will find defensible models, but I'm skeptical. I also respect Adam, but last I understood it, the model they were going after sounded pretty similar to trademark protection (which doesn't work).
Thanks for the links. I read those and some more from your blog. I've also been to a Chad Whitacre talk about Sentry's OSS approach at a conference this year.
I don't think we're disagreeing at all. I quoted Adam to drive home the point that tech shops that value collaboration will tend to prefer OSI-approved licenses. That doesn't seem to be the case for Sentry:
[Sentry is] single-source. That is, [the Company behind it] are the authors and maintainers of the software, and [does] not expect the community to provide us with contributions. We still allow it, and are thankful, but we consider it our duty to develop our software.
At the other end of the "single-source" spectrum is SQLite, which is closed to collaboration but is dedicated to the public domain and requests a fee for "Warranty": https://sqlite.org/copyright.html
It does seem like Sentry wants control over distribution, but by way of the license? Adam proposes something similar (and I guess that's the reason he's okay with "Fair Source", like a few others who want to unchain the idea and go beyond OSI / Open Source [0]):
[Sentry wants] to allow people to self-host our software.
> I also respect Adam, but last I understood it the model they were going after sounded pretty similar to trademark protection (which doesnt work).
A counterexample: the Android Open Source Project is OSS, and is firmly gate-kept by Google via trademark and other collaborative arrangements like the Open Handset Alliance and Linaro. That said, this is the happy case. Sentry clearly had a different experience, with bigger tech shops (GitLab?) trying to monetize its offering without contributing anything back, which (tbh) sounds super terrible.
To me, there seem to be pathways to both succeed and fail with OSI-approved licenses (probably you'd argue... one would fail more than succeed), and these licenses on their own are neither the only condition nor a sufficient one for businesses built around them to stumble and falter. That said, I get your point that "Fair Source" gives "single-source" projects a fighting chance, like it did for Sentry. I'd also have thought that the OSI-approved AGPLv3 (your reservations about copyleft notwithstanding) is enough to keep big shops from leeching off other high-quality, mostly single-source projects... but maybe I was mistaken (given MongoDB / Elastic / Redis / CockroachDB didn't think so; even if Elastic and Redis later switched back to including an OSI-approved license, specifically the AGPLv3).