To my mind at least, it is different. I lean heavily on AI for both admin and coding tasks. I just filled out a multipage form to determine my alimony payments in Germany. Gemini was an absolute godsend: it helped me answer questions, translate to English, draft explanations, and write emails to the Jugendamt case worker requesting time extensions.
This is super scary stuff for an ADHDer like me.
I have an idea for a programming language based on asymmetric multimethods and whitespace-sensitive, Pratt-parsing-powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.
My daily todos are now being handled by NanoClaw.
These are already real products, it's not mere hype. Simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.
But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.
My empirical experience is that people with ADHD are more vulnerable to getting addicted to LLMs due to the feeling of instant gratification. But when PRs take ages and 3 different people are reviewing, you are just making prompting a group effort. If you think meetings are a time-waste multiplier, you should watch LLM PRs.
For that reason, and because of my own experience with AI users being unaware of how bad a job the LLM is doing (I've had to confront multiple people about their code quality suddenly dropping), if someone says they can rely on LLMs, I've learned not to trust them.
When I was younger, if I had an idea for a project I would spend time thinking of a cool project name, creating a git repo, and designing a UI for my surely badass project. All easy stuff that gave me the feeling of progress. Then I would immediately lose interest when I realized the actual project idea was harder than that, and quit. This is the vibe I get from LLM use.
I pray you do not become the next HN user to be screwed over by over-trusting an LLM when you have it fill out legal documents for you.
I have many friends and loved ones with ADHD. It's very common in the IT industry, and probably >50% of people in the hacker spaces I frequent are neurodivergent in some way.
What I wrote is my empirical experience, but also what friends and loved ones tell me. I have friends with ADHD who have gone through the exact "wow, I'm getting a lot done" -> "wow, this actually wasted a lot of time in hindsight" thing I described. If you think others' lived experience is degrading to you, it may be hitting a sore spot. What if I had ADHD? My friends with ADHD have the same opinion. Would you then say you were degraded by another person with ADHD who was offering their lived experience?
Maybe we live in very different countries, but help has been good for everyone I know who got it. More want it; the problem is money. You basically have to be suicidal to get public help, and private help costs a fortune. It is a psychologist's whole job to use their knowledge to help you self-reflect and then act on it. It is uncomfortable, and I can understand why you may experience it as degrading. I don't know what kind of help you've tried, though.
"This time is different" has been correct for every major technological shift in history. Electricity was different. Antibiotics were different. Semiconductors were different.
Gen AI reached 39% adoption in two years (internet took 5, PCs took 12). Enterprise spend went from $1.7B to $37B since 2023. Hyperscalers are spending $650B this year on AI infra and are supply-constrained, not demand-constrained. There is no technology in history with these curves.
The real debate isn't whether AI is transformative. It's whether current investment levels are proportionate to the transformation. That's a much harder and more interesting question than reflexively citing a phrase that pattern-matches to past bubbles.
> The real debate isn't whether AI is transformative.
No, the debate is very much whether AI is transformative. You don't get to smuggle your viewpoint as an assumption as if there was consensus on this point. There isn't consensus at all.
Stuff that's going into production now (actual production, not startup-MVP production) was being written just before Claude Code came out, so pretty much by definition no. There's some copilot-style assisted stuff in the wild, I guess? But not really more of it than pre-copilot, so the productivity argument kind of falls through there.
Right but this agentic stuff was supposed to be the wave where we would finally actually see increased output, so we should probably be seeing it soon if it's real. Like, my dev team should definitely have the actual code they keep talking about their agents making, ready for me to put into production. As should my vendors. Any day now.
You said that none of this was in production and then when people pointed out that it was obviously in production, you shifted the goal post to some other measure that you just imagined in your head.
Well, if it's in production, it's not at my company, any of my vendors, or for that matter any of the software I use in my private life; the pace of all of that is exactly what it was 2 years ago. When it shows up I'll form an opinion.
Let me amend that: one of my vendors has a new diffusion-based noise-reduction plugin that's pretty good, though the resource usage is still too high. I imagine that will come down as they improve it. And that's pretty cool. But it didn't come out any faster; it's just that it uses diffusion in the plugin itself. And Docker had a much bigger impact on the software we use at work than AI has had so far.
I was even trying to come up with a list of software I use in my personal life to see if any of that has started coming out faster, and I came up with:
KDE
Supercollider
Puredata
Mixxx
Renoise
CUDA and ROCm
none of which have had any kind of release acceleration that I know of (though obviously the hardware to use the last two has gotten mind-blowingly expensive, alas). I use maybe three apps on my phone and they aren't updating any more frequently than they used to.
I get that for whatever reason this bugs people, but I'm in a very tech job and have a very tech personal life (just not webdev in either case) and literally have not seen anything I deal with change other than needing to learn to scroll past the AI summary at the top of search results.
What do you expect -- that it's gonna announce itself in a modal dialog when you run the software?
This isn’t like AI image generation where you’re going to convince yourself that you can tell the difference based on how you think it looks. Do you really think no one in the production chain of any of the software that you use picked up copilot in the last two years?
What signal are you hoping to receive that this is happening?
Well like I said in the sibling post to this one I'd expect really any of the software vendors in my professional or personal life to release either more rapidly or with a wider array of features than they were a few years ago, and that hasn't been my experience, at all.
I'm certainly sympathetic to that argument, but if you scroll way back this thread started with the question of whether or not AI is transformative, and if it is neither faster nor better that would suggest "no".
I feel like you might only be convinced when an AI powered robot rolls up to you and asks, "Bandrami, are you convinced that AI is transformative yet?"
I put AI assisted code in production every day, what are you talking about? At this point I don't even doubt I'm going to lose the job eventually, the question is only whether or not I will be able to pay my mortgage off first.
The problem is in the middle of such a change it's hard to recognize if this is a real change or if this is another Wankel motor.
Many a visual programming language has tooted its own horn as the next transformative change in everything, and they are mostly just obscure DSLs at this point.
The other issue is that nobody knows what the future will actually look like, and predictions are often wrong. For example, with the rise of robotics, plenty of 1950s scifi thought it was just logical that androids and smart mechanical arms would be developed next year. I mean, you can find cartoons where people envisioned smart hands giving people a clean shave. (Sounds like the makings of a scifi horror novel :D Sweeney Todd scifi redux.)
I think AI is here to stay. At the very least it seems to have practical value in software development, and that won't be erased anytime soon. Claims beyond that, though, need a lot more evidence to support them. Right now it feels like people are just shoving AI into 1000 places hoping to find a new industry like software dev.
> Many a visual programming language has tooted its own horn as the next transformative change in everything, and they are mostly just obscure DSLs at this point.
But how many of your non-nerdy friends were talking about them, let alone using them daily?
Yeah that's another rub. The current price is basically there in the hopes that in the future they can find revenue streams to maintain their current pace.
But even if the big companies ultimately go belly up, I think the open models are good enough that we'll likely see pretty cheap AI available for a while, even if it's not as good as the SOTA when the bankruptcies roll through.
> Gen AI reached 39% adoption in two years (internet took 5, PCs took 12)
You're comparing a service that mostly costs a free account registration and is harder to avoid than to use, with devices that cost thousands of dollars in the early days.
There's another perspective you can see in the comparison with the dot com boom. The web is here to stay, but a lot of ideas from the beginning didn't work out and a lot of companies turned bankrupt.
The original concept of the web -- hyperlinked documents originating from high-quality institutions -- is pretty much dead. Now we have an application platform that happens to have adopted some similar protocols and is 99% slop.
In 1995, how many people used the internet in their daily work? And of those who did, for how many was it a curiosity that merely supplemented their existing business practices (sending a memo via email rather than post, for example)? Large companies were using mainframes, but the majority of employers - the SMEs - weren't.
By 2005 it massively shifted, and AI seems to be coming faster than the internet and computers in general.
By 2015, non-internet companies were going the way of the dodo. How many travel agents were there per 100k people in 1995 compared to 2015?
Also add in that these adoption rates are being enforced via threats of firing by workers' bosses. It's hardly organic; there's a reason the LLM companies are chasing lucrative corporate welfare contracts: consumers have soundly rejected this nonsense.
The four technologies I look at are 3D televisions, VR, tablets, and the electric car. 3D televisions and VR have yet to find their moment. Judging tablets by the Apple Newton and electric cars by the EV1, "this time is different" turns out to be the correct model for the iPad and Tesla, but not (yet) for 3D televisions or VR. So it could be, but my time machine is as good as yours (mine goes 1 minute per minute, and only forwards; reverse is broken right now), so unless you've got money on it, we'll just have to wait and see where it goes.
I'd be remiss not to point out that we went from "LLMs are vaporware" to "people are becoming slaves to their LLMs" awful quick.
> [I'm scared] you are growing dependent on stilts that could disappear any moment.
First, I do control the RTX3070 I own, and that can actually do a pretty decent job nowadays with some of the 3B parameter models.
Second, maybe if people like you showed as much concern for the fact that LGBT people can expect family violence as you do for Dr. Strangelove scenarios, then people like me wouldn't have to lean on LLMs so heavily.
Third, it's hilarious that your response to a comment pointing out how difficult it was to get help from another human without being degraded, was to degrade me by calling me an LLM junkie. Maybe you should be worried that Gemini appears to have more capacity for empathy and self-awareness than you.
Fourth, given that you show absolutely zero concern or willingness to help when it comes to the difficulties faced by LGBT people or ADHDers, my advice to you is to take your fears and shove them up your ass.
I am sorry, but literally fuck off.
You don’t fucking know me. You don’t fucking know how much I do for the LGBTQ community, to which I belong, and honestly you just don’t fucking know shit about shit.
Maybe you should start your journey by realizing how your problem is first and foremost the disgusting entitlement and victim mentality you show on this post.
And also ask yourself why you seem to derive and perceive more empathy from stochastic sycophantic parrots than from other human beings.
But once again, let me reiterate: FUCK OFF.
> First, I do control the RTX3070 I own, and that can actually do a pretty decent job nowadays with some of the 3B parameter models.
Quick question: which 3B-parameter model exactly are you running? The only decent models I can find that sort of compete with cloud models without breaking the bank on GPU/RAM are the recently launched Qwen models (35A3B or 27B), which were released a week ago.
> First, I do control the RTX3070 I own, and that can actually do a pretty decent job nowadays with some of the 3B parameter models.
My larger question to you: even if it might not disappear at any moment, the fact remains that it's still a dependency. Is this dependency worth it? That's an open question, and something I am still thinking about.
> Third, it's hilarious that your response to a comment pointing out how difficult it was to get help from another human without being degraded, was to degrade me by calling me an LLM junkie. Maybe you should be worried that Gemini appears to have more capacity for empathy and self-awareness than you.
Gemini isn't real, though. It's still linear algebra with no regard for what it says. It's just trained on all the corpus data Google can find and fine-tuned to mimic it. By attaching real human qualities to Gemini, we dilute the value of those human qualities in the first place.
I don't know how "humans" have treated you. They have treated me both well and badly, but I am always most grateful to those who taught me things, discussed them with me, and helped me learn something new. I very much feel that the same fine-tuning I mentioned earlier makes these models very agreeable, so the chances of growth are rather limited.
> Fourth, given that you show absolutely zero concern or willingness to help when it comes to the difficulties faced by LGBT people or ADHDers, my advice to you is to take your fears and shove them up your ass.
Actually, you are a human as well, so try to think of it like this: I am sure you've met both good and bad people and observed a few of their common characteristics. You are human too, and each second gives you a choice that can nudge you toward good or bad characteristics, making you better or worse each day.
Now, my philosophy is to be good, if not for yourself then for others -- in the sense that you become the person you wished could have helped you in your life, and then use that to actually help other people. This might be a little naive, and in practice things might not always follow this philosophy, but yeah.
So I want you to reflect on what you wrote and consider whether it might be a little too aggressive, and whether that's what you want.
My (or our?) worry is that this feels like too big a dependence on LLMs, which are fundamentally black boxes (yes, they are!). Humans can be bad, but humans can be good too. I suggest, even though it can be hard, finding a good friend group (even if online) and talking with them about normal life issues.
Regarding coding, I would point out that there are some great people here on forums, on GitHub, and just about anywhere who are kind and can be helpful. Stack Overflow, as an example, had issues because of moderation problems which led to the community becoming hostile, but to say that the whole of software engineering is that way would be wrong.
Speaking from personal experience: I may or may not have ADHD -- I haven't been diagnosed -- but I definitely went down the AI = productivity rabbit hole, all the more because I am a teen and was in 9th/10th grade when ChatGPT came out, iirc. I knew basic Python and the concepts of multiple languages, and ChatGPT felt hella addicting -- all of a sudden I was making websites in Svelte where I could make one colored button turn into another.
I wouldn't be lying if I said I may not have learnt coding effectively, the way it was meant to be learnt, until quite recently. I was vibe coding from the start, and I have made quite a few projects at the very least.
My observation is that it's great for prototyping, but even after finally creating prototypes of most if not all the project ideas I ever had, I lost the motivation to continue and felt burnt out. I did everything I ever wanted and made every project I thought of, yet the projects still felt hollow.
So nowadays I am trying to focus more on studying for college, which can also act as a sort of recovery. In hindsight, I was making these projects when I should have been studying, haha, but I always just wanted to "prove" something. (Yes, I struggle with studies quite often, but I wish to improve, and I hope I can, since I know from the past that I can study; it's rather that I need pure, undirected focus, which became hard for some time.)
Recently I went to my own cousin's wedding. I found it a much more fulfilling experience than expected. There is something about human experience, both good and bad, which can't be quantified.
I don't know what the future holds for me or you, but I wish you luck and hope this message helps. I personally realize that, aside from prototyping -- which may be less meaningful than I previously thought -- AI to me feels quite weak.
I think that for any product to really win, you need true conviction in the product itself, and at that point the point of prototyping or writing the code with AI becomes moot. Meanwhile, AI is driving up RAM and storage prices, which is putting genuine projects out of luck as well. [This is one of the worst times to open a cloud/VPS provider shop.]
Perhaps I can understand using AI to build an open-source tool when there was none, but that to me seems like a cultural issue: open source isn't funded, so people are more likely to keep things closed source to make a living. Even that feels like a moot point, as there are some great open-source projects that would appreciate each and every dollar you donate -- perhaps more so than a $200 Claude Code subscription, with which you'd have to create the alternatives to those tools in the first place.
My point remains that it still feels hollow. I think you can find one of my comments from some days ago where I talk about this feeling of hollowness in AI projects, which I can't help but find relevant so many times. I am curious what you think.
Second, asymmetric multimethods give something up: symmetry is a desirable property -- it's more faithful to mathematics, for instance. There's a priori no reason to privilege the first argument over the second.
So why do I think they are promising?
1. You're not giving up that much. These are still real multimethods. The papers above show how these can still easily express things like multiplication of a band diagonal matrix with a sparse matrix. The first paper (which focuses purely on binary operators) points out it can handle set membership for arbitrary elements and sets.
2. Fidelity to mathematics is a fine thing, but it behooves us to remember we are designing a programming language. Programmers are already familiar with the notion that the receiver is special -- we even have a nice notation, UFCS, which makes this idea clear. (My language will certainly have UFCS.) So you're not asking the programmer to make a big conceptual leap to understand the mechanics of asymmetric multimethods.
3. The type checking of asymmetric multimethods is vastly simpler than that of symmetric multimethods. Your algorithm is essentially a sort among the various candidate multimethod instances. For symmetric multimethods, choosing which candidate "wins" requires PhD-level techniques, and the algorithms can explode exponentially with the arity of the function. Not so with asymmetric multimethods: a "winner" can be determined argument by argument, from left to right. It's literally a lexicographic sort, with each step being totally trivial -- which multimethod has the more specific argument at that position (having eliminated all the candidates ruled out by the prior argument positions). So type checking now has two desirable properties. First, it follows a design principle espoused by Bjarne Stroustrup (my personal language-designer "hero"): the compiler implementation should use well-known, straightforward techniques. (This is listed as a reason for choosing a nominal type system in The Design and Evolution of C++ -- an excellent and depressing book to read, because anything you thought of, Bjarne already thought of in the 80s and 90s.) Second, this algorithm has no polynomial or exponential explosion: it's fast as hell.
4. Aside from being faster and easier to implement, the asymmetry also "settles" ambiguities which would exist if you adopted symmetric multimethods. This is a real problem in languages, like Julia, with symmetric multimethods. The implementers of that language resort to heuristics, both to avoid undesired ambiguities, and explosions in compile times. I anticipate that library implementers will be able to leverage this facility for disambiguation, in a manner similar to (but not quite the same) as C++ distinguishes between forward and random access iterators using empty marker types as the last argument. So while technically being a disadvantage, I think it will actually be a useful device -- precisely because the type checking mechanism is so predictable.
5. This predictability also makes the programmer's job easier: they can form an intuition of which candidate method will be selected much more readily with asymmetric multimethods than with symmetric ones. You already know the trick the compiler is using: it's just double dispatch, the trick used for "hit tests" of shapes against each other. Only here it can be extended to more than two arguments, and of course the compiler writes the overloads for you. (It won't actually write overloads; it will do what I said above: form a lexicographic sort over the set of multimethods and lower this into a set of tables which can be traversed dynamically -- or, when the types are concrete, the compiler can monomorphize: the series of "if arg1 extends Tk" checks is done in the compiler instead of at runtime. But it's the same data structure.)
6. It's basically impossible to do separate compilation with symmetric multimethods. With asymmetric multimethods, it's trivial. To form an intuition, simply remember that double dispatch can easily be done with separate compilation. Separate compilation is mentioned as a feature in both the cited papers. This is, in my view, a huge advantage. I admit I haven't quite figured out how generics will fit into this -- at least if you follow C++'s approach, you'll have to give up some aspects of separate compilation. My bet is that this won't matter much; the type checking ought to be so much faster that even when a template needs to be instantiated at a callsite, the faster and simpler algorithm will mean the user experience is still very good -- certainly faster than C++ (which uses a symmetric algorithm for type checking of function overloads).
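To make the left-to-right lexicographic resolution concrete, here's a minimal sketch in Python. All names are invented for illustration (this isn't taken from the cited papers), and it assumes single inheritance so the viable classes at each argument position form a chain:

```python
# Sketch of asymmetric multimethod dispatch: candidates are narrowed
# argument by argument, left to right, keeping at each position only the
# implementations whose parameter type is most specific there.

class Shape: pass
class Circle(Shape): pass
class Rect(Shape): pass

registry = []  # (parameter_types, implementation) pairs

def defmethod(*types):
    def register(fn):
        registry.append((types, fn))
        return fn
    return register

def dispatch(*args):
    # keep only candidates whose every parameter accepts its argument
    candidates = [(t, f) for t, f in registry
                  if len(t) == len(args)
                  and all(isinstance(a, c) for a, c in zip(args, t))]
    for i in range(len(args)):
        # the most specific class sits lowest in the hierarchy,
        # i.e. has the longest method resolution order
        best = max((t[i] for t, _ in candidates),
                   key=lambda c: len(c.__mro__))
        candidates = [(t, f) for t, f in candidates if t[i] is best]
    return candidates[0][1](*args)

@defmethod(Shape, Shape)
def collide(a, b): return "shape/shape"

@defmethod(Circle, Shape)
def collide(a, b): return "circle/shape"

@defmethod(Circle, Rect)
def collide(a, b): return "circle/rect"
```

Here dispatch(Circle(), Rect()) selects the circle/rect method: Circle beats Shape at position 0, then Rect beats Shape among the survivors at position 1. Each step is a trivial local comparison; no global ambiguity resolution is needed.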
To go a bit more into my "vision" -- the papers were written when object orientation was the dominant paradigm. I'd like to relax this somewhat: instead of classes, there will only be structs. And there won't be instance methods; everything will be a multimethod. So instead of multimethods being "encapsulated" in their classes, they'll be encapsulated in the module in which they're defined. I'll adopt the Python approach where everything is public, so you don't need to worry about accessibility. Together with UFCS, this means there is no "privileging" of the writer of a library. It's not like C++ or Java, where only the writer of the library can leverage the succinct dot notation to access frequently used methods. An extension can import a library, write a multimethod providing new functionality, and that can be used with the exact same notation as the methods of the library itself. (I always sigh when I see languages which, having made the mistake of distinguishing between free functions and instance methods, "fix" the problem that you can only extend a library from the outside using free functions -- which have a less convenient syntax -- by adding yet another type of function, an "extension function". In my language, there are only structs and functions -- it has the same simplicity as Zig and C in this sense, only my functions are multimethods.)
Together with my ideas for how the parser will work, I think this language will offer -- much like Julia -- attractive opportunities to extend libraries -- and compose libraries that weren't designed to work "together".
And yeah, Claude Code and Gemini are going to implement it. Probably in Python first, just for initial testing, and then they'll port it to C++ (or possibly self-host).
Thanks for the elaborate reply; I've seen both papers too. I have mostly the same views, but I really dislike that there is no clean solution for binary methods, i.e. add(float, int), where the symmetric add(int, float) ends up being boilerplate. Also, I think in the asymmetric case it's hard to handle dispatch when the first argument fails to produce a method -- e.g., dispatching "collide" with (Asteroid, Ship): if the collide method is found on Ship, how do you bind "this", and where does the Asteroid get bound? Anyway, good luck with your experiments!
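For what it's worth, the collide(Asteroid, Ship) binding question above maps onto classic double dispatch, where every argument ends up bound explicitly. A hedged Python sketch (class and method names invented for illustration):

```python
# Classic double dispatch: the first argument is resolved by an ordinary
# virtual call on `self`; that method then makes a second virtual call on
# `other`, naming the now-known type of the first argument. Each argument
# is passed explicitly, so there is no ambiguous "this" binding.

class Asteroid:
    def collide(self, other):
        # first dispatch: we now know arg 1 is an Asteroid
        return other.collide_with_asteroid(self)

    # second-dispatch targets, keyed by the type of arg 1
    def collide_with_asteroid(self, first):
        return "asteroid hits asteroid"

    def collide_with_ship(self, first):
        return "ship hits asteroid"


class Ship:
    def collide(self, other):
        # first dispatch: arg 1 is a Ship
        return other.collide_with_ship(self)

    def collide_with_asteroid(self, first):
        return "asteroid hits ship"

    def collide_with_ship(self, first):
        return "ship hits ship"
```

Asteroid().collide(Ship()) resolves via Ship.collide_with_asteroid, so `self` is bound to the Ship and `first` to the Asteroid: both bindings are explicit, which is also why this trick works under separate compilation.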
Sorry, when you say "gloriously free market", do you mean whatever it takes EU, helicopter money (or, rewinding a decade, Greenspan put) US, or factory of the world China? :)
My point is that it's not a real market economy if the risk premium -- and in China's case, the exchange rate -- is rigged. And it has been, since the 90s.
EDIT: For clarity, I'm agreeing with you, since you were being facetious.
Absolutely! -- and we could play this game for a long time ;)
The right way of looking at it is, there was tiny little interlude of something vaguely approaching the free market -- back when Volcker was in charge.
That's part of the answer, but there's a bit more to it IMO.
The syntax is a bit weird; Python, Swift, Rust, and Zig feel more parsimonious.
I absolutely love multimethods, but I think the language would have been better served by non-symmetric multimethods (rather than the symmetric multimethods it uses). The reason is that symmetric multimethods require a PhD-level compiler implementation. That, in turn, means a developer can't easily picture what the compiler is doing in any given situation. By contrast, had the language designers used asymmetric multimethods (where argument position affects type checking), compilation becomes trivial -- in particular, easily allowing separate compilation. You already know how: it's the draw-shapes trick, i.e., double dispatch. So in this case, it's trivial to keep what the compiler is "doing" in your head. (Of course, the compiler is free to use clever tricks, such as dispatch tables, to speed things up.)
The aforementioned interacts sensitively with JIT compilation, with the net outcome that it's reportedly difficult to predict the performance of a snippet of Julia code.
1. I use the term "performance" slightly vaguely. It comprises two distinct things: the time it takes to compile the code, and the execution time. The issue is the compilation time: there are certain cases where it's exponential in the number of types which could unify with the callsite's type params.
2. IIRC, the Julia compiler has heuristics to ensure things don't explode in common cases. If I'm not mistaken, not only do compile times explode, but certain (very common) things don't even typecheck. There's an excellent video about this by the designer of the language, Jeff Bezanson -- https://www.youtube.com/watch?v=TPuJsgyu87U . Note: Julia experts, please correct me if this has been fixed.
3. The difficulty in intuiting which combinations of types will unify at a given callsite isn't theoretical; there are reports of libraries which unexpectedly fail to work together. I want to qualify this statement: Julia is light years ahead of any language lacking multimethods when it comes to library composability. But my guess is that those problems would be reduced with non-symmetric multimethods.
4. The non-symmetric multimethod system I'm "proposing" isn't my idea. They are referred to variously as encapsulated or parasitic multimethods. See http://lucacardelli.name/Papers/Binary.pdf
I have huge respect for Jeff Bezanson, for the record!
In what way? It's more-or-less the same syntax as Ruby and Elixir, just with different keywords. Like as much as I love Zig, Zig's syntax is way weirder than Julia's IMO (and none of 'em hold a candle to the weirdness of, say, Erlang or Haskell or Forth or Lisp).
First, let's distinguish between two types of syntactic constructs: null and left denotations. (Terminology borrowed from Pratt parsers.) Null denotations can exist on their own; left denotations can't -- they are inherently chained (e.g. arithmetic expressions, statements in a block, or elements of a tuple), and allow a succinct, infix notation for variable-length constructs (no lispy parentheses hell).
Second, null denotations usually introduce names -- whether for variables, types, functions, lifetimes, macros, etc. One exception is free-standing value expressions (a bit weird; less so when they're the last expression in a block, indicating the value it returns). Another exception is directive-type constructs -- e.g. directives to import names from another module, or to give hints to the compiler. The last two exceptions are the most common ones: variable assignment and function invocation.
The golden rule of good language design, as I see it, is this: null denotations must begin with a fixed and unique token. The only permissible exceptions should be assignment and function invocation; exceptions which exist because those use cases appear so often in a typical program that requiring a prefix would be insufferable.
Julia breaks this rule for global variables. (Fair enough, Python commits the same error, but it's a mistake and a source of bugs!) But wait: Julia also has "const" and "local" binding constructs, where it follows the golden rule -- but now your syntax isn't consistent. So now you need to keep these nuances in your head -- and know the difference between a soft and a hard scope -- when you want to write a function which modifies a function using macrology.
(As a point of taste on the choice of prefix token: introducing variables through "local" is just as weird as C++'s "auto" -- and at least Bjarne Stroustrup had an excuse for that choice. Anyone who introduces a global variable in a local scope should be punished imho, so there's no need to say "this is a local variable"; it's obvious from the fact that the name is introduced inside a function. Instead, my personal preference is to introduce constants through "let" and variables through "var". The former is well-known to anyone numerate, and the latter is ubiquitous in software engineering. Both read well; they're as close as you can get to constructs in English.)
Julia breaks the golden rule again with its succinct, Mathematica-style notation for function definition. I get that it wants to appeal to Mathematica users, but Python already proved you don't need to do that. This is a programming language; brainy types, like mathematicians and physicists, aren't going to be flummoxed by an unfamiliar notation for function definition, or irritated by having to type a few extra characters.
I mentioned macrology, but it's not just that. Let's say you want to write a syntax highlighter -- you need to take all that weirdness into account. If null denotations have a fixed & unique prefix, parsing is easy-peasy. Want to add the capability to "inline" HTML code within Julia, React-style? You're going to run into similar issues. And so on...
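To make the highlighter point concrete, here's the kind of one-token dispatch a fixed-prefix grammar buys you (the keyword set is hypothetical, following my "let"/"var" preference above):

```python
# With fixed, unique prefix tokens, classifying a line for highlighting
# needs only its first token -- assignment and function invocation being
# the two sanctioned prefix-less exceptions.
KEYWORDS = {
    "let": "constant-binding",
    "var": "variable-binding",
    "def": "function-definition",
    "import": "directive",
}

def classify(line):
    words = line.split()
    first = words[0] if words else ""
    if first in KEYWORDS:
        return KEYWORDS[first]
    if "=" in line:
        return "assignment"
    return "expression-or-call"
```

No lookahead, no scope tracking, no soft-vs-hard-scope rules: the first token settles it.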
Interesting article! FYI I clicked on joinkith.com and got: "Application error: a server-side exception has occurred while loading joinkith.com (see the server logs for more information)."
This is going to make the US look less predictable to adversaries, and that's, on balance, perhaps not such a bad thing.
It will be interesting to observe how the aftermath unfolds. If the US succeeds in installing a gov't which gains some level of legitimacy, perhaps by stoking the economy, then this will be a significant win for the US. If not, it will be a strong "the US is the newish sick man" signal.
That said, it's one thing to pull this in Venezuela, another thing to annex Greenland.
First, Wine was widely panned for years before it stopped sucking.
Second, you're simply ignoring that parent poster mentioned Ladybird, a non-rust project which is advancing much more speedily than servo. And I think they have a valid point -- and while the jury is still out, it's possible that in other rust-centric efforts which have experienced foot-dragging (eg WASI), the root cause may be rust itself.
Parent poster expressed their point somewhat sarcastically, but if I (C++/python dev, I admit!) were a betting transfem, my money would be on them being right.
That said, I think the Tor project got this decision right. This is as close to an ideal use-case for rust as you can get. Also, the project is mature, which will mitigate rewrite risk. The domain is one where rust can truly shine -- and it's a critical one to get right.
OTOH, it wasn't until recently that you were able to write something like `std::array<T, N>` in rust. Even now, there are restrictions on the kinds of expressions that N can be.