I’m sure you’re here to educate me, but this is not about criss-cross merges between two different work branches; it’s about whether it’s better to rebase a work branch onto the main branch, or to pull the changes from the main branch into the work branch.
I have an early draft of a blog post about them :) As a source control expert who has built both these systems, and tooling on top of them, for many years, I think they're the biggest and most fundamental reason rebases/linear history are better than merges.
> whether it’s better to rebase a work branch onto the main branch, or to pull the changes from the main branch to the work branch.
The problem with this is that the latter has an infinitely higher chance of resulting in criss-cross merges than the former (whose chance is zero).
It's definitely not zero, because rebase-heavy workflows involve the rerere cache, which is a minefield of per-repo hidden merge changes. You get the results of "criss-cross merges" as "ghosts" you can't easily debug, because there aren't good UI tools for the rerere cache. About the best you can do is declare rerere-cache bankruptcy and make sure every repo clears its rerere cache.
I know that worst case isn't all that common, or everyone would be scared of rebases, but I've seen it enough that I have a healthy disrespect for rebase-heavy workflows and try to avoid them when given the option/in charge of choosing the tools/workflows/processes.
To be honest I've used rebase-heavy workflows for 15 years and never used rerere, so I can't comment on that (been a happy Jujutsu user for a few years — I've always wondered what the constituency for rerere is, and I'm curious if you could tell me!) I definitely agree in general that whenever you have a cache, you have to think about cache invalidation.
rerere is used automatically by git to cache certain merge-conflict resolutions encountered during a rebase so that you don't have to reapply them when rebasing the same branch later. In general, when it works, which is most of the time, it's part of what keeps rebases feeling easy and lightweight, despite the final commit output sometimes capturing only a fraction of the data of a real merge commit. The rerere cache is, in some respects, a hidden collection of the rest of what a merge commit would record.
In git, the merge (and merge commit) is the primitive; rebase is a higher-level operation built on top of it, backed by a complex but not generally well-understood cache that has only a few CLI commands and just about no UI support anywhere.
Like I said, because the rerere cache is so out-of-sight/out-of-mind, problems with it become weird and hard to debug. The situations I've seen go bad have been truly rebase-heavy workflows with multiple "git flow" long-running branches and sometimes even cherry-picking between them. (Generally the same sorts of things that create "criss-cross merge" scenarios.) Rebased commits start to bring in regressions from other branches. Rebased commits start to break builds randomly. If what is getting rebased is a long-running branch, you probably don't have eyes on every commit, so finding where these hidden merge regressions happen becomes a full-branch bisect. You can't just focus on merge commits because you don't have them anymore; every commit is a candidate for a bad merge in a rebased branch.
Personally, I'd rather have real merge commits, where you can trace both parents as well as the code that came from neither parent (the conflict fixes), and where you don't have to worry about ghosts of bad merges showing up in any random commit. Even the worst "criss-cross merge" commits are obvious in a commit log, and the ones I've seen have had enough data to fix surgically, often nearly as soon as they happened. rerere cache problems can go unnoticed for weeks, to everyone's confusion and with potentially a lot of hidden harm. You can't easily see both parents of the merges involved. You might even have multiple repos with competing rerere caches alternating the damage.
But yes, rerere cache problems are generally infrequent enough that when one does happen it can take weeks of research just to figure out what the rerere cache is for, that it might be the cause of the "merge ghosts" haunting your codebase, and how to clean it.
Obviously, by the point where you are rebasing git flow-style long-running branches and using frequent cherry-picks, you're in a rebase-heavy workflow that is painful for other reasons, and maybe that's an even heavier step beyond "rebase heavy" to some. But because the rerere cache is involved to some degree in every rebase, once you stop trusting the rerere cache it can be hard to trust any rebase-heavy workflow again. Like I said, personally I like the integration history/logs/investigatable diffs that real merge commits provide, and I prefer tools like `--first-parent` when I need "linear history" views/bisects.
You have to turn rerere on, though, right? I've never done that. I've also never worked with long-running branches — tend to strongly prefer integrating into main and using feature flags if necessary. Jujutsu doesn't have anything like rerere as far as I know.
Hmm, yeah, it looks like it is off by default. Probably some git flow automation tool or other bad corporate/consultant-disseminated default config at a past job left the impression that it was on by default. It's the solution to a lot of papercuts when working with long-running branches, as well as the source of the new problems described above; problems that are visible with merge commits but hidden in rebases.
Note that while C++ templates are more powerful than Rust generics at being able to express different patterns of code, Rust generics are better at producing useful error messages. To me, personally, good error messages are the most fundamental part of a compiler frontend.
True but you lose out on much of the functionality of templates, right? Also you only get errors when instantiating concretely, rather than getting errors within the template definition.
No, concepts interoperate with templates. I guess if you consider duck typing to be a feature, then using concepts can put constraints on that, but that is literally the purpose of them and nobody makes you use them.
If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later? This behavior is in fact used to decide between alternative template specializations for the same template. Concepts do it better in some ways.
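A minimal sketch of both points, with names (`Sized`, `describe`, `frobnicate`) made up purely for illustration: the constrained overload wins whenever the concept is satisfied, and a template with a latent error still compiles as long as nothing instantiates it.

```cpp
#include <concepts>
#include <cstddef>
#include <iostream>
#include <string>

// Hypothetical concept: anything with a size() member returning something size_t-like.
template <typename T>
concept Sized = requires(T t) {
    { t.size() } -> std::convertible_to<std::size_t>;
};

// Unconstrained fallback.
template <typename T>
void describe(const T&) { std::cout << "opaque value\n"; }

// More constrained overload; preferred whenever the concept is satisfied.
template <Sized T>
void describe(const T& t) { std::cout << t.size() << " elements\n"; }

// This template calls t.frobnicate(), which exists nowhere, but the
// program still compiles because nothing ever instantiates it.
template <typename T>
void never_used(const T& t) { t.frobnicate(); }

int main() {
    describe(42);                    // picks the unconstrained overload
    describe(std::string{"hello"});  // picks the Sized overload: "5 elements"
}
```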
> If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later?
Just because you aren't instantiating a template a particular way doesn't necessarily mean no one is instantiating a template a particular way.
A big concern here would be accidentally depending on something that isn't declared in the concept, which can result in a downstream consumer who otherwise satisfies the concept being unable to use the template. You also don't get nicer error messages in these cases since as far as concepts are concerned nothing is wrong.
It's a tradeoff, as usual. You get more flexibility but get fewer guarantees in return.
Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.
>Just because you aren't instantiating a template a particular way doesn't necessarily mean no one is instantiating a template a particular way.
What I meant is, if the thing is not instantiated then it is not used. Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that. Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to. But that's not a problem with the language.
> Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.
I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P
As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled. IIRC Swift takes advantage of this (polymorphic generics by default with optional monomorphization) and the Rust devs are also looking into it (albeit the other way around).
> Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that.
I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.
> Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to.
Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type, it may not be realistic for the user to change their type to make it work with your template.
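Roughly something like this hypothetical sketch (`HasValue`, `read`, and `Handle` are invented names): the concept is satisfied, but the template body quietly assumes copyability, and the resulting error points at the template internals rather than at any constraint.

```cpp
#include <concepts>
#include <memory>
#include <utility>

// Hypothetical concept: only asks for a value() accessor.
template <typename T>
concept HasValue = requires(const T& t) {
    { t.value() } -> std::convertible_to<int>;
};

// The constraint says nothing about copyability, but taking the
// parameter by value quietly assumes a copy (or move) is possible.
template <HasValue T>
int read(T t) { return t.value(); }

// Move-only type: satisfies HasValue, but has no copy constructor.
struct Handle {
    std::unique_ptr<int> p = std::make_unique<int>(7);
    int value() const { return *p; }
};

int main() {
    Handle h;
    // read(h);  // error: tries to copy a move-only type, and the
    //           // message never mentions HasValue at all
    return read(std::move(h)) - 7;  // a move happens to be enough here
}
```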
>I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P
The actual effects depend on a lot of things. I'm just saying, it seems contrived to me, and the most likely outcome of this type of broken template is failed compilation.
>As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled.
This is incompatible with how C++ templates work. There are methods to separately compile much of a template. If concepts could be made into concrete classes and used without direct inheritance, it might work, but this would require runtime concept checking, I think. I've never tried to dynamic_cast to a concept type, but that would essentially be required to do it well. In practice, you can still do this without concepts by making mixins and concrete classes. It kinda sucks to have to use more inheritance sometimes, but I think one can easily design a program to avoid these problems.
>I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.
This sounds wrong to me. Template parameters plus template code actually turn into real code. Until you pass in some concrete parameters to instantiate, you can't test anything. That's what I mean by saying it's "unavoidable". No language I can dream of that has generics could do any different.
>Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type it may not be realistic for the user to change their type to make it work with your template.
I wasn't prescribing a fix. I was describing a new type of error that can't be detected automatically (and which it would not be reasonable for a language to try to detect). If the template requires `foo()` and you just create an empty function that does not satisfy the semantic intent of the thing, you will make something compile but may not actually make it work.
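For example, something like this sketch (the `Persistable` concept and the store types are made up): the stub meets the syntactic requirement, so the compiler has nothing to complain about, even though the program doesn't do what the template author intended.

```cpp
#include <iostream>

// Hypothetical concept: the template only asks that save() exists.
template <typename T>
concept Persistable = requires(T t) { t.save(); };

template <Persistable T>
void checkpoint(T& t) { t.save(); }

struct RealStore { void save() { std::cout << "state written\n"; } };
struct StubStore { void save() { /* empty: satisfies the syntax, not the intent */ } };

int main() {
    RealStore r;
    StubStore s;
    checkpoint(r);  // does what the template author meant
    checkpoint(s);  // compiles just as happily, but does nothing useful
}
```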
Sure. Contrivance is in the eye of the beholder for this kind of thing, I think.
> and the most likely outcome of this type of broken template is failed compilation.
I don't think that was ever in question? It's "just" a matter of when/where said failure occurs.
> This is incompatible with how C++ templates work.
Right, hence "tangentially related". I didn't mean to imply that the aside is applicable to C++ templates, even if it could hypothetically be. Just thought it was a neat capability.
> This sounds wrong to me.
Wrong how? Definition checking was undeniably part of the original C++0x concepts proposal [0]. As for some reasons for its later removal, from Stroustrup [1]:
> [W]e very deliberately decided not to include [template definition checking using concepts] in the initial concept design:
> [Snip of other points weighing against adding definition checking]
> By checking definitions, we would complicate transition from older, unconstrained code to concept-based templates.
> [Snip of one more point]
> The last two points are crucial:
> A typical template calls other templates in its implementation. Unless a template using concepts can call a template from a library that does not, a library with the concepts cannot use an older library before that library has been modernized. That’s a serious problem, especially when the two libraries are developed, maintained, and used by more than one organization. Gradual adoption of concepts is essential in many code bases.
And Andrew Sutton [2]:
> The design for C++20 is the full design. Part of that design was to ensure that definition checking could be added later, which we did. There was never a guarantee that definition checking would be added later.
> To do that, you would need to bring a paper to EWG and convince that group that it's the right thing to do, despite all the ways it's going to break existing code, hurt migration to constrained templates, and make generic programming even more difficult.
I probably could have used a more precise term than "backwards compatibility", to be fair.
> Until you actually pass in some concrete parameters to instantiate, you can't test anything. That's what I mean by saying it's "unavoidable".
I'm a bit worried I'm misunderstanding you here? It's true that C++ as it is now requires you to instantiate templates to test anything, but what I was trying to say is that changing the language to avoid that requirement runs into migration/backwards compatibility concerns.
> No language I can dream of that has generics could do any different.
I've mentioned Swift and Rust already as languages with generics and definition-site checking. C# is another example, I believe. Do those not count?
> I wasn't prescribing a fix. I was describing a new type of error that can't be detected automatically (and which it would not be reasonable for a language to try to detect). If the template requires `foo()` and you just create an empty function that does not satisfy the semantic intent of the thing, you will make something compile but may not actually make it work.
My apologies for the misdirected focus.
In any case, that type of error might be "new" in the context of the conversation so far, but it's not "new" in the PL sense since that's basically Rice's theorem in a nutshell. No real way around it beyond lifting semantics into syntax, which of course comes with its own tradeoffs.
That is all very good information. I don't often get into the standards discussions about this stuff. Maybe ChatGPT or something can help me find interesting topics like this one, but it hasn't come up much for me yet.
>I'm a bit worried I'm misunderstanding you here? It's true that C++ as it is now requires you to instantiate templates to test anything, but what I was trying to say is that changing the language to avoid that requirement runs into migration/backwards compatibility concerns.
I see now. I could imagine a world where templates are compiled separately and there is essentially duck typing built into the runtime. For example, if the template parameter type is a concept, your type could be automatically hooked up as if it was just a normal class and you inherited from it. If we had reflection, I think this could also be worked out at compile time somehow. But I'm not very up to speed with what has been tried in this space. I'm guessing that concept definitions can be very extensive and also depend on complex expressions. That sounds hairy compared to what could be done without concepts, for example with an abstract class.
> I could imagine a world where templates are compiled separately and there is essentially duck typing built into the runtime.
The bit of my comment you quoted was just talking about definition checking. Separate compilation of templates is a distinct concern and would be an entirely new can of worms. I'm not sure if separate compilation of templates as they currently are is possible at all; at least off the top of my head there would need to be some kind of tradeoff/restriction added (opting into runtime polymorphism, restricting the types that can be used for instantiation, etc.).
I think both definition checking and separate compilation would be interesting to explore, but I suspect backwards compat and/or migration difficulties would make it hard, if not impossible, to add either feature to standard C++.
> For example, if the template parameter type is a concept, your type could be automatically hooked up as if it was just a normal class and you inherited from it.
Sounds a bit like `dyn Trait` from Rust or one of the myriad type erasure polymorphism libraries in C++ (Folly.Poly [0], Proxy [1], etc.). Not saying those are precisely on point, though; just thought some of the ideas were similar.
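For the curious, a minimal hand-rolled sketch of the type-erasure idea those libraries build on (the `AnyDrawable`/`Iface` names are invented, and the real libraries do considerably more):

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <utility>

// Any type with a draw() member can be stored behind one concrete type,
// without inheriting from a common base class.
class AnyDrawable {
    struct Iface {
        virtual void draw() const = 0;
        virtual ~Iface() = default;
    };
    template <typename T>
    struct Impl : Iface {
        T value;
        explicit Impl(T v) : value(std::move(v)) {}
        void draw() const override { value.draw(); }
    };
    std::unique_ptr<Iface> self;
public:
    template <typename T>
    AnyDrawable(T v) : self(std::make_unique<Impl<T>>(std::move(v))) {}
    void draw() const { self->draw(); }
};

struct Square { void draw() const { std::cout << "square\n"; } };
struct Label  { std::string text; void draw() const { std::cout << text << "\n"; } };

int main() {
    AnyDrawable shapes[] = { Square{}, Label{"hello"} };
    for (const auto& s : shapes) s.draw();
}
```

The trade is roughly the one you'd expect: callers only need a matching member, but dispatch goes through a virtual call at runtime.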
> but you lose out on much of the functionality of templates, right?
I don't think so? From my understanding what you can do with concepts isn't much different from what you can do with SFINAE. It (primarily?) just allows for friendlier diagnostics further up in the call chain.
You're right, but concepts do more than SFINAE, and with much less code. Concept matching is also interesting: there is a notion of the most specific concept that matches a given instantiation, and the most specific concept wins, of course.
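A small sketch of that matching rule, with invented `Shape`/`Circle` concepts: because `Circle` is written in terms of `Shape`, the compiler knows it is more specific (it "subsumes" `Shape`) and prefers that overload when both match.

```cpp
#include <concepts>
#include <iostream>

// Hypothetical concepts for illustration.
template <typename T>
concept Shape = requires(const T& t) {
    { t.area() } -> std::convertible_to<double>;
};

// Circle subsumes Shape because it is defined as Shape plus more.
template <typename T>
concept Circle = Shape<T> && requires(const T& t) {
    { t.radius() } -> std::convertible_to<double>;
};

template <Shape T>
void draw(const T&) { std::cout << "generic shape\n"; }

template <Circle T>
void draw(const T&) { std::cout << "circle fast path\n"; }

struct Disc {
    double area() const { return 3.14159 * r * r; }
    double radius() const { return r; }
    double r = 1.0;
};

int main() {
    draw(Disc{});  // both overloads match; the more constrained (Circle) one wins
}
```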
It depends. I've been working on a series of large, gnarly refactors at work, and the process has involved writing a fairly long, hand-crafted spec/policy document. The big advantage of Opus has been that the spec is now machine-executable -- I repeatedly fed it into the LLM and saw what it did on some test cases. That sped up experimentation and prototyping tremendously, and it also found a lot of ambiguities in the policy document that were helpful to address.
The document is human-crafted and human-reviewed, and it primarily targets humans. The fact that it works for machines is a (pretty neat) secondary effect, but not really the point. And the document sped up the act of doing the refactors by around 5x.
The whole process was really fun! It's not really vibe coding at that point (I continue to be relatively unimpressed by vibe coding beyond a few hundred lines of code). It's closer to old-school waterfall-style development, though with much quicker iteration cycles.
> Rust should have done exactly one thing and do that as good as possible: be a C replacement and do that while sticking as close as possible to the C syntax.
The goal of Rust is to build reliable systems software like the kind I've worked on for many years, not to be a better C, whose original goal was to be a portable assembler for the PDP-11.
> Now we have something that is a halfway house between C, C++, JavaScript (Node.js, actually), Java and possibly even Ruby with a syntax that makes perl look good and with a bunch of instability thrown in for good measure.
I think Rust's syntax and semantics are both, by some margin, the best of any industrial language, and they are certainly much better than any of the ones you listed. Also, you missed Haskell.
Haskell is interesting, it is probably one of the best programming languages ever devised but it just can't seem to gain traction. There are some isolated pockets where it does well (both business wise as well as geographically).
I like Haskell, but I never want to work with Haskellers, ever again. I’ll do personal / solo Haskell projects. But never professionally on a team. Haskellers are not team players.
Rustaceans should really take Haskell as a cautionary tale. It doesn’t matter how good your tech is, if your community is actively hostile to newcomers, if you try to haze every newcomer by making them recite your favorite monad definition before giving them the time of day.
Rustaceans are already working their way onto my shitlist for proliferating X years’ Rust experience all over the place. And no, that’s not HR’s fault. HR has no idea what Rust is. It’s rustaceans corrupting the hiring process to reward their fellow cultists.
It’s idiotic to be so insular and so tribalistic if you want to increase adoption of your favorite language. Programming languages are like natural languages: the more people that use them, the more valuable it is to speak them. Imagine if someone tried to get you to learn Mandarin by shitting on your native language. You catch a lot more flies with honey than vinegar.
I’d rather be stuck in JS hell forever, than have to deal with such toxic, dramatic, dogmatic people. And I really dislike writing JavaScript… but the community and ecosystem around the language are way more important than the syntax and semantics. You want the engineers and builders to vastly outnumber the radioactive PL theorists.
That's a rather drastic generalization! As a counterpoint I've worked on professional teams with, what, 20 or 30 different Haskellers over the years and the number of "toxic, dramatic, dogmatic people" in that set is zero (or one if I really stretch the definition, and then only dogmatic, not toxic or dramatic). None were poor at their jobs, and the proportion of truly excellent software engineers with deep capability in the hard and soft skills needed to get code shipped was far greater than 50%.
That said, if toxic behavior occurs it can be more visible in smaller communities, just by how the numbers work out, so I don't doubt you've had a hard time interacting with some Haskellers, and I sympathize with you. Please point me to any toxic behavior you see in the public Haskell community and I'll do my best to address it with whatever authority I have.
I do actually have this setup going with a Cable Matters adapter [1] + a custom firmware I found [2] and
> chroma/RGB 4:4:4 + HDR + VRR/Freesync + 4k,120hz for their Linux PC on a TV
works great now on my LG C4 TV with Bazzite's gaming mode, though:
* 144Hz is unstable
* 12-bit color is unstable (10-bit works fine), and gamescope doesn't have a way to limit color depth (kwin does), so I had to put in place an EDID override
* in the EDID, limiting the FreeSync range to 60-120Hz (which should still allow frame doubling/tripling) seemed to be better -- the default 40Hz caused a bit of flickering because the AMD driver would drop the refresh rate down to 38.5Hz or so.
I didn't know about that incident before starting at Oxide, but if I'd known about it, it absolutely would have attracted me. I've written a large amount of technical content and not once in over a decade have I needed to use he/him pronouns in it. Bryan was 100% correct.
Joyent took funding from Peter Thiel. I have not seen attacks from Cantrill against Thiel for his political opinions, so he just punches down for street cred and goes against those he considers expendable.
What about Oxide? Oxide is funded by Eclipse Ventures, which has now installed a Trump-friendly person:
I'm not sure about research, but I've used LLMs for a few things here at Oxide with (what I hope is) appropriate judgment.
I'm currently trying out using Opus 4.5 to take care of a gnarly code reorganization that would take a human most of a week to do -- I spent a day writing a spec (by hand, with some editing advice from Claude Code), having it reviewed as a document for humans by humans, and feeding it into Opus 4.5 on some test cases. It seems to work well. The spec is, of course, in the form of an RFD, which I hope to make public soon.
I like to think of the spec as basically an extremely advanced sed script described in ~1000 English words.
Maybe it's not as necessary with a codebase as well-organized as Oxide's, but I found Gemini 3 useful for a refactor of some completely test-free ML research code recently. I got it to generate a test case that would exercise all the code subject to refactoring, got it to do the refactoring and verify that it led to exactly the same state, then finally got it to randomize the test inputs and keep repeating the comparison.