I'd love to replace Python with something simple, expressive, and strongly typed that compiles to native code. I have a habit of building little CLI tools as conveniences for working with internal APIs, and you wouldn't think you could tell a performance difference between Go and Python for something like that, but you can. After a year or so of writing these tools in Go, I went back to Python because the LOC difference is stark, but every time I run one of them I wish it was written in Go.
(OCaml is probably what I'm looking for, but I'm having a hard time getting motivated to tackle it, because I dread dealing with the tooling and dependency management of a 20th century language from academia.)
Have you tried Nim? Strongly and statically typed, versatile, compiles down to native code via C, interops with C trivially, has macros and stuff to twist your brain if you're into that, and is trivially easy to get into.
That looks very interesting. The code samples look like very simple OO/imperative style code like Python. At first glance it's weird to me how much common functionality relies on macros, but it seems like that's an intentional part of the language design that users don't mind? I might give it a try.
You can replace Python with Nim. It ticks literally all your boxes (expressive, fast, compiled, strong typing). It's as concise as Python, and IMO, Nim syntax is even more flexible.
Yes, Go can hardly be called statically typed, when they use the empty interface everywhere.
Yes, OCaml would be a decent language to look into. Or perhaps even OxCaml. The folks over at Jane Street have put a lot of effort into tooling recently.
> Yes, Go can hardly be called statically typed, when they use the empty interface everywhere.
How often are you using any/interface{}? Yes, sometimes it's the correct solution to a problem, but it's really not that common in my experience. Certainly not common in ways that actually make life hard.
Also, since generics, I've been able to cut down my use of the empty interface even further.
I bounced off OCaml a few years ago because of the state of the tooling, despite it being almost exactly the language I was looking for.
I'm really happy with Gleam now, and recommend it over OCaml for most use cases.
Did you consider using F#? The language is very similar to OCaml, but it has the added benefit of good tooling and a large package ecosystem (can use any .NET package).
I always assumed a runtime specialized for highly concurrent, fault-tolerant, long-running processes would have a noticeable startup penalty, which is one of the things that bothers me about Python. Is that something you notice with Gleam?
Rust might be worth a look. It gets much closer to the line count and convenience of dynamic languages like Python than Go does, plus a somewhat better type system. It also has fully modern tooling and dependency management. And native code, of course.
I suppose you could try TypeScript, which can compile to a single binary using Node or Bun. Both Bun and Node do type stripping of TS types, and can compile a CLI to a single-file executable. This is what Anthropic does for Claude Code.
I think the downside, at least near-term, or maybe challenge would be the better word, is that anything richer than text requires a lot more engineering to make it useful. B♭ is text. Most of the applications on your computer, including but not limited to your browser, know how to render B♭ and C♯, and your brain does the rest.
Bret Victor's work involves a ton of really challenging heavy lifting. You walk away from a Bret Victor presentation inspired, but also intimidated by the work put in, and the work required to do anything similar. When you separate his ideas from the work he puts in to perfect the implementation and presentation, the ideas by themselves don't seem to do much.
Which doesn't mean they're bad ideas, but it might mean that anybody hoping to get the most out of them should understand the investment that is required to bring them to fruition, and people with less to invest should stick with other approaches.
> You walk away from a Bret Victor presentation inspired, but also intimidated by the work put in, and the work required to do anything similar. When you separate his ideas from the work he puts in to perfect the implementation and presentation, the ideas by themselves don't seem to do much.
Amen to that. Even Dynamicland has some major issues with GC pauses and performance.
I do try to put my money where my mouth is, so I've been contributing a lot to Folk Computer[1], but yeah, there's still a ton of open questions, and it's not as easy as he sometimes makes it look.
That's fair. It's still pre-alpha and under heavy development, but it takes the best of Dynamicland[1] and tries to push it a lot further.
In terms of technical details, we just landed support for multithreaded task scheduling in the reactive database, so you can do something like
When /someone/ wishes $::thisNode uses display /display/ with /...displayOpts/ {
and have your rendering loop block the thread. Folk will automatically spin up a new thread when it detects that a thread is blocking, in order to keep processing the queue. Making everything multithreaded has made synchronizing rendering frames a lot trickier, but recently Omar (one of the head devs) made statements atomic, so there is atomic querying for statements that need it.
In terms of philosophy, Folk is much more focused on integration, and comes from the Unix philosophy of everything as text (which I still find amusingly ironic when the focus is also a new medium). The main scripting language is Tcl, which is sort of a child of Lisp and Bash. We intermix html, regex, js, C, and even some Haskell to get stuff done. Whatever happens to be the most effective ends up being what we use.
I'm glad that you mention that the main page is unhelpful, because I hadn't considered that. Do you have any suggestions on what would explain the project better?
Although, one could make the argument that staff notation is itself a form of text, albeit one with a different notation than a single stream of Unicode symbols. Certainly, without musical notation, a lot of music would be lost. (Although, one can argue that musical notation is not able to adequately preserve some aspects of musical performance, which is part of why, when European composers tried to adopt jazz idioms into their compositions in the early twentieth century working from sheet music, they missed the whole concept of swing, which is essential to jazz.)
> one could make the argument that staff notation is itself a form of text, albeit one with a different notation than a single stream of Unicode symbols
Mostly this is straightforwardly correct. Notes on a staff are a textual representation of music.
There are some features of musical notation that aren't usually part of linguistic writing:
- Musical notation is always done in tabular form - things that happen at the same time are vertically aligned. This is not unknown in writing, though it requires an unusual context.
- Relatedly, sometimes musical notation does the equivalent of modifying the value of a global variable - a new key signature or a dynamic notation ("pianissimo") takes effect everywhere and remains in effect until something else displaces it. In writing, I guess quotation marks have similar behavior.
- Musical notation sometimes relates two things that may be arbitrarily far apart from each other. (Consider a slur.) This is difficult to do in a 1-D stream of symbols.
> although, one can argue that musical notation is not able to adequately preserve some aspects of musical performance
Nothing new there; that's equally true of writing in relation to speech.
The comment being replied to seemed skeptical of treating musical notation as text. But any reasonable definition of "text" should include musical notation.
Otherwise it would be hard to include other types of obvious text, including completely mainstream ones such as Arabic. They are all strings of symbols intended for humans to read.
Feel free to disagree but I don't understand the argument here, if there is any. Lots of people read both Arabic and musical notation, it's a completely normal thing to do.
any reasonable definition of "text" should include musical notation
Then many a dictionary must be unreasonable [0]:
text
1. A discourse or composition on which a note or commentary is written;
the original words of an author, in distinction from a paraphrase, annotation, or commentary.
6. That part of a document (printed or electronic) comprising the words [..]
7. Any communication composed of words
n 1. the words of something written
Musical notes do not form words, and therefore are not text. (And no, definition 1 does not refer to musical notes.) The written-down form of music is called a score, not a text.
For complex music, sure, but if I'm looking up a folk tune on, say, thesession.org, I personally think a plain-text format like ABC notation is easier to sight-read (since for some instruments, namely the fiddle and mandolin, I mainly learn songs by ear and am rather slow and unpracticed at reading standard notation).
Think about the article from a different perspective: several of the most successful and widely used package managers of all time started out using Git, and they successfully transitioned to a more efficient solution when they needed to.
People have invented so many things similar but not identical to recutils that I wonder why you think recutils is the solution that everyone should converge on.
Piggybacking on this comment to say, I bet a lot of people's first question will be, why aren't you contributing to Octave instead of starting a new project? After reading this declaration of the RunMat vision, the first thing I did was ctrl-f Octave to make sure I hadn't missed it.
Honest question: Octave is an old project that never gained as much traction as Julia or NumPy, so I'm sure it has problems, and I wouldn't be surprised if you have excellent reasons for starting fresh. I'm just curious to hear what they are, and I suspect you'll save yourself some time fielding the same question over and over if you add a few sentences about it. I did find [1] on the site, and read it, but I'm still not clear on whether you considered e.g. adding a JIT to Octave.
Fair question, and agreed we should make this clearer on the site.
We like Octave a lot, but the reason we started fresh is architectural: RunMat is a new runtime written in Rust with a design centered on aggressive fusion and CPU/GPU execution. That’s not a small feature you bolt onto an older interpreter; it changes the core execution model, dataflow, and how you represent/optimize array programs.
Could you add a JIT to Octave? Maybe in theory, but in practice you’d still be fighting the existing stack and end up with a very long, risky rewrite inside a mature codebase. Starting clean let us move fast (first release in August, Fusion landed last month, ~250 built-ins already) and build toward things that depend on the new engine.
This isn’t a knock on Octave, it’s just a different goal: Octave prioritizes broad compatibility and maturity; we’re prioritizing a modern, high-performance runtime for math workloads.
It's interesting that he concludes that freezing dicts is "not especially useful" after addressing only a single motivation: the use of a dictionary as a key.
He doesn't address the reason that most of us in 2025 immediately think of, which is that it's easier to reason about code if you know that certain values can't change after they're created.
You can't really tell though. Maybe the dict is frozen but the values inside aren't. C++ tried to handle this with constness, but that has its own caveats that make some people argue against using it.
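That shallowness is easy to see with `types.MappingProxyType`, the closest thing in today's standard library (a minimal sketch; since the proposal describes dict → frozendict as a shallow copy, nested values would behave the same way):

```python
from types import MappingProxyType

config = {"retries": 3, "hosts": ["a.example"]}
frozen = MappingProxyType(config)  # read-only *view* of the dict, not a deep freeze

try:
    frozen["retries"] = 5            # top-level assignment is rejected
except TypeError:
    pass

frozen["hosts"].append("b.example")  # but nested mutable values are still writable
print(frozen["hosts"])               # ['a.example', 'b.example']
```

Getting the reason-about-my-code benefit would require freezing recursively, which neither the view nor a shallow frozendict gives you.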
Indeed. So I don't really understand what this proposal tries to achieve. It even explicitly says that dict → frozendict will be an O(n) shallow copy, and the contention is only about the O(n) part. So… yeah, I'm sure they are useful for some cases, but as Raymond has said, it doesn't seem to be especially useful, and I don't understand what people ITT are getting excited about.
I feel like the elephant in the room in this post is property-based testing. I dislike using fixtures for all the reasons stated in the post, and when it seems like I might really need them, I reach for property-based testing instead.
"Generators" for property-based testing might be similar to what the author is calling "factories." Generators create values of a given type, sometimes with particular properties, and can be combined to create generators of other types. (The terminology varies from one library to another. Different libraries use the terms "generators," "arbitraries," and "strategies" in slightly different and overlapping ways.)
For example, if you have a generator for strings and a generator for non-negative integers, it's trivial to create a generator for a type Person(name, age).
Generators can also be filtered. For example, if you have a generator for Account instances, and you need active Account instances in your test, you can apply a filter to the base generator to select only the instances where _.isActive is true.
Once you have a base generator for each type you need in your tests, the individual tests become clear and succinct. There is a learning curve for working with generators, but as a rule, the test code is very easy to read, even if it's tricky to write at first.
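A stdlib-only sketch of those ideas (helper names here are made up; real libraries such as Hypothesis or ScalaCheck add shrinking and much richer combinators):

```python
import random
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int

# A "generator" here is just a zero-argument callable returning a random value.
def strings(rng, max_len=8):
    return lambda: "".join(rng.choice("abcdefghij") for _ in range(rng.randint(1, max_len)))

def non_negative_ints(rng, hi=120):
    return lambda: rng.randint(0, hi)

def build(ctor, **field_gens):
    # Combine per-field generators into a generator of ctor instances.
    return lambda: ctor(**{name: gen() for name, gen in field_gens.items()})

def filtered(gen, pred, max_tries=1000):
    # Draw until the predicate holds (real libraries warn when this gets wasteful).
    def draw():
        for _ in range(max_tries):
            value = gen()
            if pred(value):
                return value
        raise RuntimeError("filter predicate rejected too many values")
    return draw

rng = random.Random(0)
persons = build(Person, name=strings(rng), age=non_negative_ints(rng))
adults = filtered(persons, lambda p: p.age >= 18)

sample = [adults() for _ in range(5)]
assert all(p.age >= 18 for p in sample)
```

Once `persons` exists, any test needing an adult, a minor, or a person with a specific name just composes on top of it.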
Author here. Yes, what you describe sounds very much like what I call Factories (and that's what they're usually called in Ruby land, and some other languages).
The problem arises when they're used to generate database records, which is a common approach in Rails applications. Because you're generating a lot of them, you end up putting a lot more load on the test database, which slows down the whole test suite considerably.
If you use them to generate purely in-memory objects, this problem goes away, and then I also prefer to use factories (or generators, as you describe them).
Ah, ok, now I understand; I wasn't talking about that. From what I understand about property-based testing, it's sort of halfway between regular example-based testing and formal proofs: it tries to prove a statement, but instead of a symbolic proof it does it stochastically via a set of examples?
Unfortunately, I'm not aware of a good property-based testing library in Ruby, although it would be useful to have one.
Even so, I'm guessing that property-based testing in practice would be too resource-intensive to test the entire application with. You'd probably only test critical domain-logic components and use regular example tests for the rest.
Oh, that's a very different set of requirements than I was thinking, and I missed that context even though you did mention database testing at one point. You're right, property-based testing is less helpful in that situation, because your database may contain legacy data that your current application code must be able to read but also shouldn't be able to write.
This is basically how I solved this in a past codebase. I called them "builders" and for complex scenarios requiring multiple different entities I called them "scenario builders" that created multiple entities.
My rule was to randomize every property by default. The test needs to specify which property needs to have a certain value. E.g. set the address if you're testing something about the address.
So it was immediately obvious which properties a test relied on.
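A minimal sketch of that convention (the entity and field names are made up, not from the original codebase):

```python
import random
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    email: str
    address: str

def build_customer(rng=random, **overrides):
    # Every field gets a randomized default; the test pins only what it cares about.
    n = rng.randrange(10**6)
    defaults = {
        "name": f"name-{n}",
        "email": f"user{n}@example.test",
        "address": f"{rng.randrange(1, 999)} Main St",
    }
    return Customer(**{**defaults, **overrides})

# A test about addresses pins the address and nothing else:
c = build_customer(address="12 High St")
assert c.address == "12 High St"
```

Anything a test does not pin is noise by construction, so a reader can see at a glance which properties the test actually depends on.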
You should see if your language has a property-based testing library; it'll have a ton of useful functionality to help with what you're already doing!
A clarification on terminology, the "property" in "property-based testing" refers to properties that code under test is supposed to obey. For example, in the author's Example 2, the collection being sorted is the property that the test is checking.
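For a sortedness property like that, the check against randomized inputs might look like this stdlib-only sketch:

```python
import random
from collections import Counter

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

rng = random.Random(1234)
for _ in range(100):
    data = [rng.randint(-50, 50) for _ in range(rng.randint(0, 30))]
    result = sorted(data)
    assert is_sorted(result)                 # the ordering property
    assert Counter(result) == Counter(data)  # output is a permutation of the input
```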
Forgive me if I'm just reading this incorrectly, but that doesn't sound exactly like property testing as I've done it. The libraries implement an algorithm for narrowing down to the simplest reproducer for a given failure mode, so all of the randomized inputs to a test are provided by the library.
How did you deal with reproducibility when your tests use randomized data? Do you run with a random seed or something, so you can reproduce failures when they come up?
ScalaCheck includes the random seed in its failure messages, so it's easy to pull the seed out of CI/CD logs and reproduce the failure deterministically.
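The same trick is easy to wire up by hand in any runner; a Python sketch (the `TEST_SEED` variable name is made up):

```python
import os
import random
import time

# Use a fresh seed normally; set TEST_SEED to replay a recorded failure.
seed = int(os.environ.get("TEST_SEED", time.time_ns() % (2**32)))
rng = random.Random(seed)

def check_sort_is_idempotent():
    data = [rng.randint(0, 100) for _ in range(20)]
    assert sorted(sorted(data)) == sorted(data)

try:
    for _ in range(50):
        check_sort_is_idempotent()
except AssertionError:
    # Print the seed so the exact failing sequence can be replayed.
    print(f"property failed; reproduce with TEST_SEED={seed}")
    raise
```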
My current employer had me answer the question of whether I'm "disabled." I've never answered "yes" to this question since I've never been diagnosed with any form of neurodivergence, though therapists have suggested that there's a good chance I'd be diagnosed if I saw a specialist. But this time I noticed that my employer's definition of "disabled" included not only neurodivergence but also depression, which I do have a diagnosis for. So... now I'm disabled.
I have no idea what use the label is when it's so broadly defined. It doesn't give my employer any information that would help them support me in any way. Fingers crossed there is some benefit to it.
It probably helps the employer demonstrate that they hire and retain disabled people, likely assisting with some government quotas, and defenses against lawsuits by aggrieved ex-employees.
I think it was based on the misconception that the mainstream turned away from Perl because of a handful of warts and mistakes, not because Perl's unconstrained flexibility made it impractical, and that Perl "done right" could recapture the excitement and mainstream attention that Perl once enjoyed. I think they should have accepted that the existing community was already the largest subset of programmers that could embrace Perl's trade-offs, with or without the historical warts.
fwiw I think Perl was so popular in the late 90s that a transition like Python 2.0 to 3.0 that traded some backward compatibility for some structure COULD have been successful. However, the Perl community also got tired of waiting such a long time for what is now Raku, and it was so different, with no incremental migration path, that the lifeboat never materialized. It's not like Larry and the community didn't know that a transition was needed, but the execution was not there.
> a transition like Python 2.0 to 3.0 that traded some backward compatibility for some structure COULD have been successful.
I think Perl was a lot further away than Python was from anything that would have allowed "trading some backward compatibility for some structure".
This was a clear case of a language collapsing under the weight of its own poor decisions and lack of coherent design. Could it have been kept on life support with a series of incremental improvements? Probably, but things wouldn't have gotten materially better for its users, and it would have bled users anyway as the industry left it behind.