A short story from the pre-development of Electric/Hyperfiddle. I was sitting at a table with Rich and Dustin after Conj in Philly. Dustin was trying to explain some of these (still hypothetical) ideas to Rich, and Rich was just too tired from the keynote to really follow along. He wasn't dismissive, but just didn't seem to get the Big Idea. Many people would have taken that as a sign and possibly dropped the idea.
Not Dustin though, if anything he seems to have doubled down and sought to prove the ideas with working libraries. Here we are years later with a growing community and excitement about this real working software.
Bravo Dustin for having the strength to stay the course. You knew you were on to something way back then, and haven't lost sight of it.
ha nooo i think he said: “i am looking for something more flexible” than the immutable REST payload POC we were playing with at the time (this was 8 years ago). He was right!
I think every Clojure programmer has; thinking alone can only get one so far. Electric is the result of contributions from multiple 4-sigma engineers who I have worked closely with over the years. Projects like this are larger than any individual; genuine progress requires collaboration.
As much as I love lisps, and I've written a fair bit of code in Scheme, Racket, and Clojure, I have to admit, when you are an immigrant in another person or organization's code, their specific abstractions, DSLs (macros), etc., can be very (VERY) burdensome to get up to speed on to even begin to start contributing to the codebase (much less debugging complicated issues).
On the other hand, there is a ridiculous amount of power in Lisps. Operating at a higher level of abstraction is like having magical powers. When you go back to programming in Go, you feel like someone chopped off one of your arms or legs.
On the other hand, to be fair to Go, when you are an immigrant in a Go codebase, everything is very easy. Turn this way ... oh, I've seen this a thousand times before, it's an XYZ. Turn that way ... oh, I don't even have to read that, I know what that is. Oh, what is that ... oh, it's my green card to this codebase, I can now fix bugs with confidence.
Your comments about Go make sense, since it was no secret that Go was designed to get junior engineers at Google up to speed and productive as fast as possible.
And on the opposite side, Clojure was made with a full focus on getting professional developers with lots of experience to be able to use Lisp in places they normally couldn't (initially Java shops), with no compromises to make it easier for beginner programmers.
And that's OK, not every language needs to cater to everyone :)
Dustin, Leo, and the rest of Electric team are brilliant, borderline geniuses.
You can do absolutely magical interactive apps with Electric. However, I think, the catch is that the learning curve of Missionary and Electric is _very_ steep, and you have to understand the concepts pretty deeply in order to successfully debug (hence, to make) your code. And it is worth it, in my opinion — the concepts used are beautiful in a mathematical way and fit together pretty well.
Thanks! We hope the v3 learning curve will be a lot better; v2 has a bunch of non-obvious semantics that you just had to learn (as described in [1]) – which are now fixed in v3. But you're right, Electric is for experts, we built it for ourselves — in order to meet the performance and scalability requirements of our upcoming product Hyperfiddle (low-code declarative infrastructure for UI, or more simply, a CRUD Spreadsheet). https://www.hyperfiddle.net/
I work in a finance company and basically what our product is in terms of tech is a lot of client-server spreadsheets, with a huge $ amount per cell. So when I saw the demo and the use case I got extremely hooked! Unfortunately due to the hiring conditions and tech competence, there's almost 0 chance of adopting something like this, but still, I'm impressed.
I haven't got enough time to finish the video today, but I'd like to throw out some of our major technical domain problems and see how people familiar with electric would try to solve them:
1. How does it handle multiple different functionality servers? And potentially server-server networking? e.g. if I need to get some data from server A and some other data from server B, and combine them into the UI, is that doable in electric? Also what if I want to save the combined data onto the database on server C?
Now for simplicity, let's just assume we're talking about fewer than 20 servers here and there are no port-exhaustion problems, so each of them can keep one websocket open per other server.
2. How does it handle listening to a stream of events and triggering if the event meets a certain criteria? e.g. like if the price of GOOG is lower than a certain price then trigger a popup or something like that. This may not be the intended scope of electric Clojure but it is what we do day to day, so I'm curious.
3. How does it handle interop with existing Java apps? Is it just a normal java-clj interop?
Overall I really like the concept of being network transparent, or shall I say that devs would be better served by spending more time on business data and function, but not on coming up with derived request/response obj, API route, etc. Also, if this type of library ends up in some sort of more mainstream programming language, I would vouch for adoption in a heartbeat.
3- yes, Electric Clojure is a Clojure library (macro) so the interop story is identical to Clojure’s
2- Electric vars are reactive signals and stock prices are signals, so: (if (< price target) ($ Modal))
1- we can support microservice topologies (N sites) in principle though the work requires a corporate design partner to make sure we get it right.
This is an important capability for the Hyperfiddle layer - a Universal UI is only as valuable as the data it reaches; i.e., as far as RAD platforms go, service connectivity is the only thing that matters. Today, connecting a service to a UI requires immense amounts of glue code (constantly shifting, breaking, requiring ongoing maintenance) at a cost of ~$100k+ per year per service connection, and costs grow superlinearly with the complexity of the service! Electric collapses the cost of service connectivity to zero.
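The stock-price trigger from point 2 above might be sketched roughly as follows. This is a hedged illustration, not working code from the project: `Stock-price` and `Modal` are hypothetical components, not part of the Electric API.

```clojure
;; Hypothetical sketch of a reactive price trigger in Electric-style syntax.
;; `Stock-price` and `Modal` are made-up components for illustration only.
(e/defn Price-alert [ticker target]
  (e/client
    ;; `price` is a reactive value: this body re-runs on every tick
    (let [price (e/server ($ Stock-price ticker))]
      (when (< price target)
        ($ Modal (str ticker " is below " target))))))
```

The point of the sketch is that there is no subscription plumbing in user code: the `if`/`when` over a reactive value is itself the trigger.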
What I’ve never understood about this approach is that it claims to be “network transparent,” but you need to add client and server annotations. So it’s very much network-aware.
The author says somewhere that, paraphrasing, where something runs in a distributed system is essential complexity. I think the transparent part is where/when the communication between front end and back end happens. The "transparency", whether or not that's the right term, is in contrast to doing software development with constant awareness of the network interactions, versus just calling out what runs where. No adding endpoints for everything, managing HTTP calls or webhooks, etc.
In the video Dustin also makes the point that in v3 all the client and server annotations can now be kept fully isolated from the parts of the code that model the essential complexity (i.e. any dynamic scoping or function parameters used by a 'pure' function are also network transparent), and this dramatically reduces the amount of global coupling across the codebase.
The site macros (e/client & e/server) let the programmer declare what site an effect must run on. The network is not explicit, only implied. For example, platform calls like (query-database) or (.createTextNode js/document) or (check-password) are inherently sited. Siting is essential complexity (arguably the essence of a distributed system), and consequently we as programmers are hyper-aware of where (at which site) our effects must run, and we require perfect control over their placement.
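A minimal sketch of what sited effects look like, assuming Electric's `e/client`/`e/server` macros and a hypothetical `query-database` function:

```clojure
;; The database query is inherently server-sited; the DOM call is
;; inherently client-sited. The network hop between them is implied,
;; never written out. `query-database` is hypothetical.
(e/defn User-name [id]
  (e/client
    (let [user (e/server (query-database id))]     ; runs on the server
      (.createTextNode js/document (:name user))))) ; runs in the browser
```

The programmer declares only the sites; Electric infers the transfer at the `e/server`/`e/client` boundary.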
How does this implicit, "bottom up" definition of the network boundary between client and server cope with version incompatibilities? You, as the developer, control what version of your code is running on the server, but you don't have direct control over what version is running in the browser.
In approaches with an explicit API, you can explicitly maintain backward compatibility for a period of time until you believe that enough browsers have "caught up" to newer versions of the software running on the server to allow you to retire that backward compatibility.
Haven't personally encountered this yet – it seems keeping some stale servers around and routing to them would be sufficient? Long term, over-the-air upgrades (hot-migrating connected clients) seem within reach, if a bit researchy – but the durable-workflows projects seem to be making headway on the problem.
It's so incredibly brilliant that it spoils me for mainstream tech, but it's never actually "done enough" for mgmt to feel comfortable adopting it in any capacity. (See also: Unison https://www.unison-lang.org/)
Or, you can get them to adopt, but then you hit hiring limitations.
I say this as someone who built a large team of Clojure developers in a Fortune 100 company, and lived to regret it.
> Or, you can get them to adopt, but then you hit hiring limitations.
Depends on how flexible your "hiring principles" are. You mention you did this at a Fortune 100 company, so obviously you didn't have any flexibility at all. But for the (good) places that allow people to learn on the job, you just need to find a sufficiently smart person who likes to learn, and they'll get up to speed with Clojure relatively quickly, as long as their first reaction when seeing parentheses isn't "eww".
Or you could start a startup that codes circles around its competitors (see Nubank, which became large enough to acquire Rich Hickey's team). There's a growing list of Clojure unicorns, most of which have a tech-forward culture.
Agree with the challenges you mentioned using new tech at a more mainstream company.
I've found that the gap between generative AI's ability to generate code for mainstream languages and niche languages such as Clojure is growing. It is getting much better at generating code for the former, and continuing to flounder for the latter. How do Clojure folks feel about that? Do you disagree, or maybe see it as a non-issue?
Clojure people are probably THE community with a philosophy diametrically opposed to AI generated code.
Rich Hickey once called TDD "guardrail programming", because it's like navigating to a location by first making contact with the guardrail and then just letting go of the steering wheel.
What they value is "hammock time", which is similar to what used to be called "Turinging" by the people working with Alan Turing: you just stare at the problem for hours and weeks and months, until a solution seems to magically manifest itself as obvious. Or in Rich's case, you just lie in a hammock with your books and a laptop, and think about your problem domain until the solution becomes easy.
Clojure itself is a tool that is meant to be refine- and hone-able by a skilled user, but can create a mess in the hands of a junior/AI.
Case in point, I've been working on the same 20k lines of code for the last 5 years, and I don't mind it because I feel like I'm gaining another meaningful insight into my problem domain every few weeks. I'm not even working in Clojure anymore, but the mindset is something that stuck with me.
By a similar argument, the Clojure people could be said to be diametrically opposed to human-generated code. Which is also true-ish, but not that meaningful an observation. At some point code needs to be written by someone or something.
But code as the formalisation of the problem is not separable from the understanding of its domain.
The whole point of explorative REPL based programming is that it allows you to further your understanding of the problem. And the value is in the understanding, and not the code.
I feel like this is best captured by a story that my father (an artist/designer) once told me.
The Shogun and the Artist
Once, a young shogun visited Mount Fuji and was so captivated by its beauty that he commissioned a renowned local artist to create an ink wash painting of the mountain. The shogun instructed the artist to complete the painting by the time he returned from his travels.
Years passed, and the shogun finally returned, eager to see the painting. However, the artist apologized, explaining that he had not yet finished. The shogun, though disappointed, granted the artist more time.
More years went by, and the shogun returned again, only to find the painting still unfinished. Yet again, the shogun, moved by the mountain's beauty, granted another extension.
This pattern repeated for many years, until both the shogun and the artist had grown old. Finally, the shogun, tired of waiting, demanded the artist finish the painting.
“We are both old and frail,” the shogun said. “I may not live to see the day you finish. I want to visit Mount Fuji through your painting, even if I can no longer travel. Finish it now—I demand it.”
“Very well,” the artist replied. He took up his brush and, in just a few strokes, perfectly captured the essence of Mount Fuji.
The shogun was astonished. “This captures the essence perfectly,” he said. “But if it took only moments, why did you make me wait a lifetime? Do you think so little of me?”
The artist smiled and replied, “On the contrary, my lord. I have painted Mount Fuji every day since you first made your request. But it took a lifetime of learning, experimentation, and experience to capture its essence like this.”
Understanding at last, the shogun rewarded the artist for his lifetime of dedication.
Maybe because language models are mostly good at boilerplate and Clojure and lisps in general are mainly about eliminating it entirely. Clojure programs are very terse.
Claude 3.5 Sonnet is pretty good at writing Clojure, including Electric Clojure. The mistakes it makes (IME) tend to be the same kind of mistakes that human programmers make.
I largely agree with the comments here, that LLMs struggle with Clojure but Claude does it best at the moment. I also use it mainly for writing individual functions, though it occasionally correctly generates a simple Electric app with my desired functionality (Projects help, along with giving it lots of examples).
The most optimistic outlook I've heard on AI+Clojure is that AI is not magical and still benefits from good abstractions. So hopefully as models get better at reasoning, using something like Electric Clojure will help them write clean, maintainable code.
Been working with a Clojure, Python, and Elixir codebase since the advent of LLMs. It’s shockingly bad at Clojure still, but can get halfway decent with Elixir. Claude has always been better than OpenAI at Clojure, but that’s not really saying much.
I think Clojure (or any lisp) applications tend to lean towards mini-DSLs that don’t lend themselves to generalization very well. There’s also not much agreement in the community on any single problem or way of doing anything (one of the best and worst parts about the language)
The advent of competent LLMs has increased the viability of using more powerful "esoteric" languages like Clojure because it solves The Hiring Problem: I can ask it to rewrite my application in another language, or it can help junior programmers write better code.
Recently I was writing Pedestal code (for the first time in my life) and ChatGPT was ok at generating boilerplate for it. If the nature of the code is boilerplate like and there is online documentation that ranks high in Google search, I suppose ChatGPT will do ok for any language.
I have found ChatGPT to be shockingly good for writing functions. I decompose my problem domain into a series of functions and I have ChatGPT generate code for each of those functions. My greatest fear is that I am losing my memory of Clojure's built-in functions, as ChatGPT has been taking care of those for me for over a year now.
Agreed, this mirrors what I've found using LLMs with Clojure and Python. Any language with lots of training data is going to get good results from LLMs.
I've seen that the author of Electric Clojure is pretty involved in the startup scene and I was curious about how those folks perceive the changing world.
It's honestly a top-of-mind question for someone like me, who loves Clojure but doesn't want to be disadvantaged for this reason. Like, I wonder if the "Blub paradox" is still real, or if we're in a world where "if Blub isn't the answer you aren't using enough of it (spat from an LLM)".
Like try finding YC job posting without TS and/or Python in the requirements :-/
if you like working in a certain language, just do it and the opportunities will come along. I really love elixir and I started 5 years ago when it wasn't so popular. nevertheless, I found a niche building MVPs for people. eventually I met my cofounders and we built our entire startup on elixir. today I work on elixir all day and we recently hired an elixir engineer.
you made the decision to start elixir before the rapid recent developments of AI. it's a different world now, and the value that elixir provides relative to another language e.g. python is much different now. it makes far less sense to do what you are talking about in the current state of the world.
> it's a different world now, and the value that elixir provides relative to another language e.g. python is much different now
Copilot and other AI-powered autocompletes mostly amount to large boilerplate generators. Macros in Lisp and Elixir are strictly better in terms of long-term maintenance in the cases that warrant them. But autogenerated code with a junior checking it in under "trust me bro" does not instill confidence in me.
I'm reminded of a situation a few months ago with my (nontechnical) cofounder. He had been discussing our recent funding round and strategies for growth with another CTO friend he knew. The approach he was a proponent of was hiring a bunch of gifted juniors and watching them like hawks. The idea is that they would produce lots of code for all the upcoming features. The problem here is obvious: juniors make messes, and more code != better code. These new AI-powered pushes seem to basically be this strategy on steroids. I can certainly see the strength of it if you're trying to build fast, get traction, and get acquired before the house of cards falls apart.
Personally, I don't see it as an optimal strategy when you're trying to build a viable long-term business. Platform matters. Ergonomics matter, and most importantly, human intelligence matters. We hired another senior engineer instead and we're doing fine. If I had to do it over again, I would still stick with elixir.
Love Clojure, but am repulsed by the idea of a “responsive DOM network stream”.
This feels over engineered to me. The classic cool factor influencing the “if we could vs if we should”, bolstered by developing it as a software engineer on a performant dev box with impeccable internet connection; hell, maybe even a local dev db clone.
I suppose you could argue server side caching would alleviate some of the pains of a design like this, but you’re still performing redundant server round trips for the stream when a piece of data goes in and out of view.
Does the tech have any local caching options to make this a little more sane for a worst-case end user?
> you’re still performing redundant server round trips for the stream when a piece of data goes in and out of view
This is not how Electric apps work; this was just a demonstration of how easily you can orchestrate complex network-transparent client-server interactions _when_ you need them.
Have you tried it? I was skeptical too until I built a few apps with it. The learning curve is easier than it seems (I started by modifying the examples) and it really does cut down on development time - turns out much of modern web programming revolves around network management and its associated complexities.
> Does the tech have any local caching options to make this a little more sane for a worst-case end user?
This is trivially solved in Electric: as long as you hold onto a client-side handle to the data, it won't be unmounted.
Re. performance - Electric v2 is already faster than alternatives for many kinds of apps (not all). We expect v3 to be so fast that it can express apps that are outright impossible with alternatives. The robotics observability IDE in the talk, for example, as well as Hyperfiddle itself.
Re. server caching – Electric (being reactive) auto-memoizes all scopes, even on the server. (The essence of reactive programming is a time/space tradeoff: cache more things to minimize recomputation later.) For example, in the virtual scroll demo from the talk, the database query runs once and is retained in memory, so that scrolling simply indexes over the memoized collection.
Regarding client caching, it's basically the same: hoist the value you want to keep above the conditional that disposes it. If that doesn't work, use an atom.
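A rough sketch of that hoisting pattern, with hypothetical names (`expensive-query` and `Table` are made up for illustration):

```clojure
;; Because `rows` is bound above the `when`, toggling `visible?` off and
;; back on does not dispose the memoized query result, so no redundant
;; server round trip occurs. `expensive-query` and `Table` are hypothetical.
(e/defn Panel [visible?]
  (let [rows (e/server (expensive-query))] ; retained across toggles
    (when visible?
      ($ Table rows))))
```

Had `(e/server (expensive-query))` been written inside the `when`, its scope would be disposed every time `visible?` went false, forcing a re-fetch on the next toggle.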