Hacker News | librasteve's comments

on code duplication… I prefer the Ian Fleming approach:

  - once is happenstance 
  - twice is coincidence 
  - three times is a pattern

Occam (1982-ish) shared most of BEAM's ideas, but strongly enforced synchronous message passing on both channel output and input … so back pressure was just there in all code. The advantage was that most deadlock conditions were placed in the category of “if it can lock, then it will lock”, which meant that debugging done at small scale would preemptively resolve issues before scaling up process / processor count.

Once you were familiar with occam you could see deadlocks in code very quickly. It was a productive way to build scaled concurrent systems. At the time we laughed at the idea of using C for the same task.
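The backpressure point can be sketched in Python terms with a bounded queue (a rough stand-in for occam's synchronous channels, which rendezvous rather than buffer; the `chan`/`producer`/`consumer` names are invented for the example):

```python
import threading
import queue

# A maxsize-1 queue approximates an occam channel: put() blocks until
# the consumer has drained the slot, so backpressure is automatic.
chan = queue.Queue(maxsize=1)
consumed = []

def producer():
    for i in range(5):
        chan.put(i)      # blocks whenever the consumer lags behind
    chan.put(None)       # sentinel: end of stream

def consumer():
    while (item := chan.get()) is not None:
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)          # [0, 1, 2, 3, 4]
```

True occam channels are fully synchronous (sender and receiver meet in a rendezvous), which is what made deadlocks deterministic enough to find at small scale.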

I spreadsheeted out how many T424 die fit per Apple M2 (TSMC 3nm process) - that's 400,000 CPUs (about a 600x600 grid) at say 1 GIPS each - so 400 TIPS per M2 die size. That's for 32-bit integer math - Inmos also had a 16-bit datapath, but these days you would probably up the RAM per CPU (8k, 16k?) and stick with a 32-bit datapath, but add 8- and 16-bit FP support. Happy to help with any VC pitches!
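The back-of-envelope arithmetic, as a sketch (the grid size and per-core rate are the assumptions above, not measured figures):

```python
# ~600 x 600 grid of T424-class cores on an M2-sized die (assumed above)
cores = 600 * 600                # 360,000 CPUs, rounded up to "400,000"
gips_per_core = 1                # assumed: 1 GIPS per core
total_gips = cores * gips_per_core
total_tips = total_gips / 1_000  # 1 TIPS = 1,000 GIPS
print(cores, total_tips)         # 360000 360.0 -- i.e. roughly 400 TIPS
```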

David May and his various PhD students over the years have retried this pitch repeatedly. And Graphcore had a related architecture. Unfortunately, while it’s great in theory, in practice the performance overall is miles off existing systems running existing code. There is no commercially feasible way that we’ve yet found to build a software ecosystem where all-new code has to be written just for this special theoretically-better processor. As a result, the business proposal dies before it even gets off the ground.

(I was one of David's students; and I founded/ran a processor design startup, based on a different idea with a much stronger software story, that raised £4m in 2023 and went bust last year.)


Yes, David is the man and afaict has made a decent fist of Xmos (from afar). My current wild-assed hope for this to come to some kind of fruition would be Nvidia realising this opportunity (threat?), making a set of CUDA libraries, and the CUDA boys going to town with occam-like abstractions at the system level, with just their regular AI workloads as the application. No doubt he has tried to pitch this to Jensen and Keller.

This looks like a lot of fun. A little-known fact is that the original Raku implementers were all Haskell heads, and Pugs, one of the first Raku implementations, was written in Haskell by Audrey Tang.

Raku is mainly written in a custom subset, "NQP" (Not Quite Perl), and the team is now cranking on RakuAST for 6.e, which rewrites the Raku parser as a Raku Grammar.


This is a bit rich given the draconian rules the EU is now imposing on brits. https://www.bbc.co.uk/news/articles/cx24xyjplp4o

- a remainer


I am the author and I love the UK, am sad about you leaving, and I am as angry about similar practices in EU countries. My gripe is with the pushing of foreign controlled apps, not the immigration rules.

I even got flak in this discussion for referring to the UK government multiple times as "EU government", because I cannot let go :(.


keep up the anger! EU is the only hope for non-US domination

HTMX

This is great news if you are coding accounting/banking software, and good to see the IBM standards referenced. Given that decimal32 starts to see gaps in the continuous dollars.cents range at about $100,000, I don't expect it has many real-world applications (except legacy compatibility).
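To see why: decimal32 carries 7 significant decimal digits, so with two cent digits exactness runs out just below $100,000. A sketch using Python's decimal module with the precision pinned to 7 (a stand-in for decimal32's significand, not the real IEEE 754 decimal32 type):

```python
from decimal import Decimal, getcontext

# Emulate decimal32's 7-digit significand (this ignores decimal32's
# exponent range; it only models the precision limit).
getcontext().prec = 7

ok = Decimal('99999.98') + Decimal('0.01')    # 7 sig. digits: exact
gap = Decimal('100000.00') + Decimal('0.01')  # 8 sig. digits needed

print(ok)   # 99999.99
print(gap)  # 100000.0 -- rounded, the cent has vanished
```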

For general applications, I prefer to use Rational representation for decimals (Rat or FatRat) and Allomorphs (RatStr, FatRatStr) to maintain a literal representation like this https://raku.land/zef:librasteve/FatRatStr.


Do you encounter things where fixed point isn't good enough? (I.e. always store cents or millis etc?)

Or is the flexibility of not having to choose an advantage?


I always use fixed-point decimals for accounting. Floating point is an approximation of decimals, which is the exact opposite of what you want in accounting.

Useful for say, simulating aerodynamics or weather, not useful for precision tasks.
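The classic float surprise, next to the store-cents-as-integers fix (the price and quantity names are invented for the example):

```python
# Binary floats only approximate decimal fractions:
print(0.1 + 0.2 == 0.3)   # False -- the sum is 0.30000000000000004

# Fixed point: hold money as integer cents, format only at the edges.
price_cents = 1999
quantity = 3
total_cents = price_cents * quantity                     # exact: 5997
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $59.97
```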


In recent years, simplistic languages such as Python and Go have “made the case” that complexity is bad, period. But when humans communicate expertly in English (Shakespeare, JK Rowling, etc.) they use its vast wealth of nuance, shading and subtlety to create a better product. Sure, you have to learn all the corners to have full command of the language, to wield all that expressive power (and newcomers to English are limited to the shallow end of the pool). But writing and reading are asymmetrical, and a more expressive language used well can expose the code patterns and algorithms in a way that is easier for multiple maintainers to read and comprehend. We need to match the impedance of the tool to the problem. [I paraphrase Larry Wall, inventor of the gloriously expressive https://raku.org]


Not sure how I feel about Shakespeare and JK Rowling living in the same parenthesis!

Computer languages are the opposite of natural languages - they are for formalising and limiting thought, the exact opposite of literature. These two things are not comparable.

If natural language was so good for programs, we’d be using it - many many people have tried from literate programming onward.


Natural languages are ambiguous, and that's a feature. Computer languages must be unambiguous.

I don't see a case for "complex" vs "simple" in the comparison with natural languages.


I fully accept that formalism is an important factor in programming language design. But all HLLs (well, even ASM) are a compromise between machine speak (https://youtu.be/CTjolEUj00g?si=79zMVRl0oMQo4Tby) and human speak. My case is that the current fashion is to draw the line at an overly simple level, and that there are ways to wrap the formalism in more natural constructs that trigger the parts of the brain that have evolved to handle language (nouns, verbs, adverbs, prepositions and so on).

Here's a very simple lexical declaration made more human-friendly by use of the possessive pronoun `my` (or `our` if it is package-scoped)...

  my $x = 42;


How is that snippet any better than:

x := 42

Or

let x = 42

Or

x = 42

It seems like a regression from modern languages.


"my" is 33% shorter than "let"

Example 1 and 3 are not declarations, so apples ↔ oranges


Example 1 is a declaration in Go. Example 3 is a declaration in Python.

my $x = 42;

let x = 42

Well, when you add in the '$' and ';' tokens the "let" example is still shorter. Also as another person replied to you, those other two examples are declarations in other languages. So 0 for 3 there.


Have you looked at all the previous attempts?

Your example is not compelling, I'm afraid, but you should try building a language to see. Also read up on literate programming if you haven't already.


Literate programming is not about programming in natural languages: it's about integrating code (i.e. the formal description in some DSL) with the meta-code such as comments, background information, specs, tests, etc.

BTW, one side benefit of LP is freedom from arbitrary structure of DSLs. A standard practice in LP is to declare and define objects in the spot in which they are being used; LP tools will parse them out and distribute to the syntactically correct places.


Well I think the ambition was to have as much as possible in natural language, with macros calling out to ‘hidden’ code intended for machines. So I do think there is a good link with later attempts to write using natural language and make computer languages more human-friendly and he was one of the first to have this idea.

Neither strategy has had much success IMO.


Exactly. I mean, think about the programming languages used in aircraft and such. There are reasons. It all depends on what people are willing to tolerate.


>But writing and reading are asymmetrical and a more expressive language used well can expose the code patterns and algorithms in a way that is easier for multiple maintainers to read and comprehend.

It's exactly the opposite. Writing and reading are asymmetrical, and that's why it's important to write code that is as simple as possible.

It's easy to introduce a lot of complexity and clever hacks, because as the author you understand it. But good code is readable for people, and that's why very expressive languages like perl are abhorred.


> Writing and reading are asymmetrical, and that's why it's important to write code that is as simple as possible.

I 100% agree with your statement. My case is that a simple language does not necessarily result in simpler and more readable code. You need a language that fits the problem domain and that does not require a lot of boilerplate to handle more complex structures. If you are shoehorning a problem into an overly simplistic language, then you are fighting your tool. OO for OO. FP for FP. and so on.

I fear that the current fashion for very simple languages is a result of confusing these aspects, and of enforcing certain corporate behaviours on coders. Perhaps that has its place, e.g. Go in Google - but the presumption that one size fits all is quite a big limitation for many areas.

The corollary of this is that richness places a burden of responsibility on the coder not to write code golf. But tbh you can write bad code in any language if you put your mind to it.

Perhaps many find richness and expressivity abhorrent - but to those of us who like Larry's thinking it is a really nice, addictive feeling when the compiler gets out of the way. Don't knock it until you give it a fair try!


Then you should write assembly only. Like `MOV`, `ADD`... can't really get simpler than that.

Problem is, that makes every small part of the program simple, but it increases the number of parts (and/or their interaction). And ultimately, if you need to understand the whole thing it's suddenly much harder.

Surely you can write the same behaviour in a "clever" (when did that become a negative attribute?) or "good" way in assembly. You are correct. But that's a different matter.


Perlis's 10th epigram feels germane:

> Get into a rut early: Do the same process the same way. Accumulate idioms. Standardize. The only difference(!) between Shakespeare and you was the size of his idiom list - not the size of his vocabulary.


Well sure - being in a rut is good. But the language is the medium in which you cast your idiom, right?

Here's a Python rut:

  n = 20  # how many numbers to generate
  a, b = 0, 1
  for _ in range(n):
    print(a, end=" ")
    a, b = b, a + b
  print()
Here's that rut in Raku:

  (0,1,*+*...*)[^20]
I am claiming that this is a nicer rut.


  seq = [0,1]
  while len(seq) < 20:
      seq.append(sum(seq[-2:]))
  print(' '.join(str(x) for x in seq))
> I am claiming that (0,1,*+*...*)[^20] is a nicer rut.

If it's so fantastic, then why on earth do you go out of your way to add extra lines and complexity to the Python?


Complexity-wise, this version is more complicated (mixing different styles and paradigms) and it's barely fewer tokens. Lines of code don't matter anyway; cognitive load does.

Even though I barely know Raku (but I do have experience with FP), it took way less time to intuitively grasp what the Raku was doing, vs. both the Python versions. If you're only used to imperative code, then yeah, maybe the Python looks more familiar, though then... how about riding some new bicycles for the mind.


> Complexity-wise, this version is more complicated (mixing different styles and paradigms)

Really? In the other Python version the author went out of his way to keep two variables, and shit out intermediate results as you went. The raku version generates a sequence that doesn't even actually get output if you're executing inside a program, but that can be used later as a sequence, if you bind it to something.

I kept my version to the same behavior as that Python version, but that's different than the raku version, and not in a good way.

You should actually ignore the print in the python, since the raku wasn't doing it anyway. So how is "create a sequence, then while it is not as long as you like, append the sum of the last two elements" a terrible mix of styles and paradigms, anyway? Where do you get off writing that?

> Lines of code don't matter anyway, cognitive load does.

I agree, and the raku line of code imposes a fairly large cognitive load.

If you prefer "for" to "while" for whatever reason, here's a similar Python to the raku.

  seq = [0,1]
  seq.extend(sum(seq[-2:]) for _ in range(18))
The differences are that it's a named sequence, and it doesn't go on forever and then take a slice. No asterisks that don't mean multiply, no carets that don't mean bitwise exclusive or.

> If you're only used to imperative code, then yeah, maybe the Python looks more familiar, though then... how about riding some new bicycles for the mind.

It's not (in my case, anyway) actually about imperative vs functional. It's about twisty stupid special symbol meanings.

Raku is perl 6 and it shows. Some people like it and that's fine. Some people don't and that's fine, too. What's not fine is to make up bogus comparisons and bogus implications about the people who don't like it.


Reminds me a bit of the fish anecdote told by DFW... they've only swam in water their entire life, so they don't even understand what water is.

Here are the mixed paradigms/styles in these Python snippets:

- Statements vs. expressions

- Eager list comprehensions vs. lazy generator expressions

- Mutable vs. immutable data structures / imperative reference vs. functional semantics

(note that the Raku version only picks _one_ side of those)

> seq.extend(sum(seq[-2:]) for _ in range(18))

I mean, this is the worst Python code yet. To explain what this does to a beginner, or even intermediate programmer.... oooooh boy.

You have the hidden inner iteration loop inside the `.extend` standard library method driving the lazy generator expression with _unspecified_ one-step-at-a-time semantics, which causes `seq[-2:]` to be evaluated at exactly the right time, and then `seq` is extended even _before_ the `.extend` finishes (which is very surprising!), causing the next generator iteration to read a _partially_ updated `seq`...

This is almost all the footguns of standard imperative programming condensed into a single expression. Like ~half of the "programming"-type bugs I see in code reviews are related to tricky temporal (execution order) logic, combined with mutability, that depend on unclearly specified semantics.
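For the record, the behaviour described above is real and observable in CPython, where `list.extend` pulls items from the generator one at a time, so each `seq[-2:]` sees the partially extended list (this timing is an implementation detail, which is exactly the complaint):

```python
seq = [0, 1]
# Each generator step reads seq[-2:] *while* extend is still appending,
# so the self-reference yields successive Fibonacci numbers (in CPython).
seq.extend(sum(seq[-2:]) for _ in range(18))
print(seq[:6], seq[-1])   # [0, 1, 1, 2, 3, 5] 4181
```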

> It's about twisty stupid special symbol meanings.

Some people program in APL/J/K/Q just fine, and they prefer their symbols. Calling it "stupid" is showing your prejudice. (I don't and can't write APL but still respect it)

> What's not fine is to make up bogus comparisons and bogus implications about the people who don't like it.

That's a quite irrational take. I didn't make any bogus comparisons. I justified or can justify all my points. I did not imply anything about people who don't like Raku. I don't even use Raku myself...


> You have the hidden inner iteration loop inside the `.extend` standard library method driving the lazy generator expression with _unspecified_ one-step-at-a-time semantics

That's why it wasn't the first thing I wrote.

> To explain what this does to a beginner, or even intermediate programmer.... oooooh boy.

As if the raku were better in that respect, lol.

> Some people program in APL/J/K/Q just fine, and they prefer their symbols.

APL originally had a lot of its own symbols with very little reuse, and clear rules. Learning the symbols was one thing, but the usage rules were minimal and simple. I'm not a major fan of too many different symbols, but I really hate reuse in any context where how things will be parsed is unclear. In the raku example, what if the elements were to be multiplied?

> Calling it "stupid" is showing your prejudice. (I don't and can't write APL but still respect it)

> Reminds me a bit of the fish anecdote told by DFW...

Yeah, for some reason, it's not OK for me to insult a language, but it's OK for you to insult a person.

But you apparently missed that the "twisty" part was about the multiple meanings. Because both those symbols are used in Python (the * in multiple contexts even) but the rules on parsing them are very simple.

perl and its successor raku are not about simple parsing. You are right to worry about the semantics of execution, but that starts with the semantics of how the language is parsed.

In any case, sure, if you want to be anal about paradigm purity, take my first example, and (1) ignore the print statement because the raku version wasn't doing that anyway, although the OP's python version was, and (2) change the accumulation.

  seq = [0,1]
  while len(seq) < 20:
    seq = seq + [seq[-2] + seq[-1]]
But that won't get you very far in a shop that cares about pythonicity and coding standards.

And...

You can claim all you want that the original was "pure" but that's literally because it did nothing. Not only did it have no side effects, but, unless it was assigned or had something else done with it, the result was null and void.

Purity only gets you so far.


You're getting more and more irrational.

> it's OK for you to insult a person.

I made an analogy which just means that it's hard to understand what the different styles and paradigms are when those are the things you constantly use.

You're apparently taking that as an insult...

> But you apparently missed that the "twisty" part

I didn't miss anything. You just didn't explain it. "twisty" does not mean "ambiguous" or "hard to parse". Can't miss what you don't write.


> In the raku example, what if the elements were to be multiplied?

  $ raku -e 'say (0, 1, 2, * × * ... *)[^10]'    # for readability
  (0 1 2 2 4 8 32 256 8192 2097152)

  $ raku -e 'say (0, 1, 2, * * * ... *)[^10]'    # for typeability
  (0 1 2 2 4 8 32 256 8192 2097152)


Yeah, no thanks.

My instincts about raku were always that perl was too fiddly, so why would I want perl 6, and this isn't doing anything to dissuade me from that position.


err - I cut and pasted the Python directly from ChatGPT ;-)


But it doesn't do the same thing at all as the raku.

It doesn't build a list, rather it dumps it as it goes.

It has an explicit print.

It uses a named constant for 20 rather than a literal.

etc, etc...


Why is the sky black?

- at night (of course)

- there are ~1 septillion stars that are all shiny


If the universe was infinite and eternal you’d expect the night sky to be white - all the gaps between stars would be filled in with stars further away.
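That is Olbers' paradox, and the shell argument behind it is simple arithmetic: star count in a shell grows as r² while per-star brightness falls as 1/r², so every shell contributes the same flux. A sketch (unit star density and luminosity assumed):

```python
import math

def shell_flux(r, dr=1.0, density=1.0):
    n_stars = 4 * math.pi * r**2 * dr * density  # stars in a thin shell
    flux_per_star = 1.0 / r**2                   # inverse-square dimming
    return n_stars * flux_per_star               # r**2 cancels: constant

# Every shell contributes the same flux, so infinitely many shells
# would sum to an infinitely bright (white) sky.
print(shell_flux(10), shell_flux(1000))   # both 4*pi, ~12.566
```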


this! I guess this is the classic argument that the universe can't be both infinite and eternal


errr, market monopoly forces are doing their thing … the point is that only a govt can force e.g. an OS + app anticompetitive monopoly provider to split up into multiple companies.


this

