
libuv? libevent?



I thought that would work, but I only see "Please enable JS and disable any ad blocker"

It's not the same source, but this works:

https://www.msn.com/en-us/money/careersandeducation/palantir...


Yours doesn't work in other countries. MSN switches to the country you live in, location off or not.


I generally agree, but max line length being so high you have to horizontally scroll while reading code is very detrimental to productivity.


Formatters eliminating long lines is a pet peeve of mine.

About once every other project, some portion of the source benefits from source code being arranged in a tabular format. Long lines which are juxtaposed help make dissimilar values stand out. The following table is not unlike code I have written:

  setup_spi(&adc,    mode=SPI_01, rate=15, cs_control=CS_MUXED,  cs=0x01);
  setup_spi(&eeprom, mode=SPI_10, rate=13, cs_control=CS_MUXED,  cs=0x02);
  setup_spi(&mram,   mode=SPI_10, rate=50, cs_control=CS_DIRECT, cs=0x08);

Even if we add 4-5 more operational parameters, I find this arrangement much more readable than the short-line equivalent:

  setup_spi(&adc,
      mode=SPI_01,
      rate=15,
      cs_control=CS_MUXED,
      cs=0x01);
  setup_spi(&eeprom,
      mode=SPI_10,
      rate=13,
      cs_control=CS_MUXED,
      cs=0x02);
  setup_spi(&mram,
      mode=SPI_10,
      rate=50,
      cs_control=CS_DIRECT,
      cs=0x08);

Or worse, the formatter may keep the long lines but normalize the spaces, ruining the tabular alignment:

  setup_spi(&adc, mode=SPI_01, rate=15, cs_control=CS_MUXED, cs=0x01);
  setup_spi(&eeprom, mode=SPI_10, rate=13, cs_control=CS_MUXED, cs=0x02);
  setup_spi(&mram, mode=SPI_10, rate=50, cs_control=CS_DIRECT, cs=0x08);


Sometimes a neat, human-maintained block of 200 character lines brings order to chaos, even if you have to scroll a little.


The worst is when you have lines in a similar pattern across your formatter's line length boundary and you end up with

  setup_spi(&adc, mode=SPI_01, rate=15, cs_control=CS_MUXED, cs=0x01);
  setup_spi(&eeprom,
      mode=SPI_10,
      rate=13,
      cs_control=CS_MUXED,
      cs=0x02);
  setup_spi(&mram, mode=SPI_10, rate=50, cs_control=CS_DIRECT, cs=0x08);


I think with the Black formatter you can force the multiline version by adding a trailing comma to the arguments.

The pain point you describe is real, which is why that was intentionally added as a feature.

Of course it requires a language that allows trailing commas, and a formatter that uses that convention.
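For illustration, a sketch of how Black's "magic trailing comma" behaves (setup_spi here is a hypothetical Python stand-in for the thread's example, not a real API):

```python
# Hypothetical stand-in for the thread's setup_spi example.
def setup_spi(dev, mode, rate, cs_control, cs):
    return (dev, mode, rate, cs_control, cs)

# No trailing comma: Black collapses the call onto one line if it fits.
adc = setup_spi("adc", mode="SPI_01", rate=15, cs_control="CS_MUXED", cs=0x01)

# Trailing comma after the last argument: Black keeps the call exploded,
# one argument per line, even though it would fit on one.
eeprom = setup_spi(
    "eeprom",
    mode="SPI_10",
    rate=13,
    cs_control="CS_MUXED",
    cs=0x02,
)
```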


A similar tip: As far as I can tell, clang-format doesn't reflow across comments, so to force a linebreak you can add a // end-of-line comment.


Yes, so much this!

I've often wished that formatters had some threshold for similarity between adjacent lines. If some X% of the characters on the line match the character right above, then it might be tabular and it could do something to maintain the tabular layout.

Bonus points if it's able to do something like diff the adjacent lines to detect table-like layouts, figure out if something nudged a field or two out of alignment, and then insert spaces to fix the table layout.
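A rough sketch of such a similarity heuristic (the function name and 50% threshold are made up, not from any real formatter):

```python
def looks_tabular(prev, curr, threshold=0.5):
    # Fraction of character positions where this line matches the one above.
    if not prev or not curr:
        return False
    matches = sum(1 for a, b in zip(prev, curr) if a == b)
    return matches / max(len(prev), len(curr)) >= threshold

# Two rows from the thread's aligned table match at most positions:
row1 = "setup_spi(&adc,    mode=SPI_01, rate=15, cs_control=CS_MUXED,  cs=0x01);"
row2 = "setup_spi(&eeprom, mode=SPI_10, rate=13, cs_control=CS_MUXED,  cs=0x02);"
```

A formatter could then leave alignment alone (or repair it) within any run of consecutive lines that pass the check.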


I believe some formatters have an option where you can specify a "do not reformat" block (or override formatting settings) via specific comments. As an exception, I'm okay with that. Most code (but I'm thinking business applications, not kernel drivers) benefits from default code formatting rules though.

And sometimes, if the code doesn't look good after automatic formatting, the code itself needs to be fixed. I'm specifically thinking about e.g. long or nested ternary statements; as soon as the auto formatter spreads it over multiple lines, you should probably refactor it.


I'm used to things like `// clang-format off` and on pairs to bracket such blocks, and adding empty trailing `//` comments to prevent re-flowing, and I use them when I must.

This was more about lamenting the need for such things. Clang-format can already somewhat tabularize code by aligning equals signs in consecutive cases. I was just wishing it had an option to detect and align other kinds of code to make or keep it more table like. (Destroying table-like structuring being the main places I tend to disagree with its formatting.)


I get what you're saying, and used to think that way, but changed my mind because:

1) Horizontal scrolling sucks

2) Changing values easily requires manually realigning all the other rows, which is not productive developer time

3) When you make a change to one small value, git shows the whole line changing

And I ultimately concluded code files are not the place for aligned tabular data. If the data is small enough it belongs in a code file rather than a CSV you import then great, but bothering with alignment just isn't worth it. Just stick to the short-line equivalent. It's the easiest to edit and maintain, which is ultimately what matters most.


This comes up in testing a lot. I want testing data included in test source files to look tabular. I want it to be indented such that I can spot order of magnitude differences.


Those kinds of tables improve readability right until someone hits a length constraint and has to either touch every line to fix the alignment, causing weird conflicts in VCS, or ignore the alignment, at which point its slow decay into a mess begins.


It's not an either/or though. Tables are readable and this looks very much like tabular data. Length constraints should not be fixed if you have code like this, and it won't be "a slow decay into a mess" if escaping the line length rules is limited to data tables like these.


By length constraint I meant that one of the fields grows longer than originally planned rather than bypassing the linter.


So you're basically saying "look, this is neat and I like it, but since we cannot prevent some future chap from coming along and making a mess of it, let's stop this nonsense now and throw our hands up in the air—thoughts and prayers is what I say!"?


At best I'd say it's ok to use it sparingly, in places where it really does make an improvement in readability. I've seen people use it just to align the right hand side of a list of assignments, even when there is no tabular nature to what they are assigning.


I agree; I'm very much against any line length constraint. It's arbitrary, and word wrapping exists.


The first line should be readable enough, but in case it's longer than that, I way prefer the style of

  setup_spi(&adc, mode=SPI_01, rate=15, cs_control=CS_MUXED,  
            cs=0x01);
  setup_spi(&eeprom, mode=SPI_10, rate=13, cs_control=CS_MUXED,  
            cs=0x02);
  setup_spi(&mram, mode=SPI_10, rate=50, cs_control=CS_DIRECT, 
            cs=0x08);
over the short-line alternative presented.

I like short lines in general, as having a bunch of short lines (which tend to be the norm in code) and then suddenly a very long one is terrible for readability. But everything has exceptions. It's also very dependent on the programming language.


People have already outlined all the reasons why the long line might be less than optimal, but I will note that really you are using formatting to do styling.

In a post-modern editor (by which I mean any modern editor that takes this kind of thing into consideration, which I don't think any do yet), it should be possible for the editor to determine similarity between lines and achieve a tabular layout, perhaps also with styling for dissimilar values in cases where the table has a higher degree of similarity than the one above. Perhaps also with collapsing of tables, with some indicator that what is collapsed is not just a sub-tree but a table.


It is an obvious example of where an automatic formatter fails.

But are there more examples? Maybe it's not a high price to pay. I use either the second or third approach for my code and I've never had many issues. Yes, the first example is pretty, but it's not a huge deal for me.


Another issue with fixed line lengths is that it requires tab stops to have a defined width instead of everyone being able to choose their desired indentation level in their editor config.


I think you have that backward. Allowing everyone to choose their desired indentation in their editor config is the issue. That's insane!


Another issue with everyone being able to choose their desired indentation level in their editor config is unbounded line length.


//nolint


/* clang-format off */


  setup_spi(
    &adc,
    mode=SPI_01,
    rate=15,
    cs_control=CS_MUXED,
    cs=0x01
  );
  setup_spi(
    &eeprom,
    mode=SPI_10,
    rate=13,
    cs_control=CS_MUXED,
    cs=0x02
  );
  setup_spi(
    &mram,
    mode=SPI_10,
    rate=50,
    cs_control=CS_DIRECT,
    cs=0x08
  );
ftfy


This is good, and objectively better than letting the random unbounded length of the function name define and inflate and randomize the indentation. It also makes it easier to use long descriptive function names without fucking up the indentation.

  setup_spi(&adc,
            mode=SPI_01,
            rate=15,
            cs_control=CS_MUXED,
            cs=0x01
  );
  setup_spoo(&adc,
             mode=SPI_01,
             rate=15,
             cs_control=CS_MUXED,
             cs=0x01
  );
  setup_s(&adc,
          mode=SPI_01,
          rate=15,
          cs_control=CS_MUXED,
          cs=0x01
  );
  validate_and_register_spi_spoo_s(&adc,
                                   mode=SPI_01,
                                   rate=15,
                                   cs_control=CS_MUXED,
                                   cs=0x01
  );


Here, fixed it for you:

    setup_spi(
      &adc,
      mode        = SPI_01,
      rate        = 15,
      cs_control  = CS_MUXED,
      cs          = 0x01 );
    setup_spoo(
      &adc,
      mode        = SPI_01,
      rate        = 15,
      cs_control  = CS_MUXED,
      cs          = 0x01 );
    setup_s(
      &adc,
      mode        = SPI_01,
      rate        = 15,
      cs_control  = CS_MUXED,
      cs          = 0x01 );
    validate_and_register_spi_spoo_s(
      &adc,
      mode        = SPI_01,
      rate        = 15,
      cs_control  = CS_MUXED,
      cs          = 0x01 );


That is harder to read than the long line version.

However, it is the formatting I adopt when forced to bow down to line length formatters.


Err... I find the short-line version easier to read, especially if you need to horizontally scroll.

This is why a Big Dictator should just make a standard. Everyone who doesn't like the standard approach just gets used to it.


To you. To me, it reads nicely, and thus the issue -- editors should have built-in formatters that don't actually edit source code, but offer a view.


To me, that reads fine, but it has lost the property elevation wanted, which was that it's easy to compare the values assigned to any particular parameter across multiple calls. In your version you can only read one call at a time.


I'm surprised. I find the short-line version to be much better.


Devs have screens with different pixel counts. Your table wrapped for me. The short-line equivalent looks best on my screen.

Thus 80 or perhaps 120 char line lengths!


So fix your setup? Why should others with wider screens leave space on their screen empty for your sake?

Especially 80 characters is a ridiculously low limit that encourages people to name their variables and functions some abbreviated shit like mbstowcs instead of something more descriptive.


My main machine is an ultrawide, but I usually have multiple files open, and text reads best top-down so I stack files side-by-side. If someone has like, a 240 character long line, that is annoying. My editor will soft wrap and indicate this in the fringe of course but it's still a little obnoxious.

80 is probably too low these days but it's nice for git commit header length at least.


Do you guys never read code as side by side diffs in the browser?


Never mind in a browser, this is how I review a ton of code, either in magit or lazygit or in multiple terminals.


> So fix your setup? Why should others with wider screens leave space on their screen empty for your sake?

What a terrible attitude to have when working with other people.

"Oh, I'm the only one who writes Python? Fix your setup. why should I, who know python, not write it for your sake?"

"Oh, I'm the only one who speaks German? Fix your setup. Why should I, who know German, not speak it for your sake?"

How about doing it because your colleagues, who you presumably like collaborating with to reach a goal, asks you to?


What do you do about the "oh, I'm the only one who cares about [???]? should I just fucking kill myself then?" Many such cases.

>How about doing it because your colleagues, who you presumably like collaborating with to reach a goal, asks you to?

If someone wants me to do a certain thing in a certain way, they simply have to state it in terms of:

- some benefit they want to achieve

- some drawback they want to avoid

- even as little as an acknowledged, unexamined preference like "hey, I personally feel more comfortable with approach X, how about we try that instead"

I'm happy to learn from their perspective, and will gladly go out of my way to accommodate them. Sometimes even against my better judgment, but hell, I still prefer to err on the side of being considerate. Just like you say, I like to work with people in terms of a shared goal, and just like you do, in every scenario I prefer to assume that's what's going on.

If, however, someone insists on certain approaches while never going deeper in their explanations than arbitrary non-falsifiable qualifiers such as "best practice", "modern", "clean", etc., then I know they haven't actually examined those choices that they now insist others should comply with. They're just parroting whatever version they imagine of industry-wide consensus describes their accidental comfort zone. And then boy do they hate my "make your setup assume less! it's the only way to be sure!". But no, I ain't reifying their meme instead of what I've seen work with my own two.


> If, however, someone insists on certain approaches while never going deeper in their explanations than arbitrary non-falsifiable qualifiers such as "best practice", "modern", "clean"

You're moving the goalposts of this discussion. The guy I was responding to said "fix your setup" to another person saying "Your table wrapped for me. The short line equivalent looks best on my screen." That's a stated preference based on a benefit he'd like to achieve.

We are not discussing "best practice" type arguments here.


"Best practice" type arguments are the universal excuse for remaining inconsiderate of the fact that different people interact with code differently, but fair enough I guess


Yes, I don't think we should discourage people from using Python or German just because you don't want to learn those particular languages either.

Working together with others should not mean having to limit everyone to the lowest common denominator, especially when there are better options for helping those with limitations that don't impact everyone else.


So haul your wide monitor around with your laptop, you mean? No.

Just use descriptive variable names, and break your lines up logically and consistently. They are not mutually exclusive, and your code will be much easier for you and other people to read and edit and maintain, and git diffs will be much more succinct and precise.


I softwrap, so I don't care about line length myself, but I read code on a phone a lot, so people who hardwrap at larger columns are a little more annoying.


> Why should others with wider screens leave space on their screen empty for your sake?

Because "I" might be older or sight-impaired, and have "my" font at size 32, and it actually fills "my" (wider than yours) screen completely?

Would you advise me to "fix my eyes" too? I'd love to!

"Why should I accommodate others" is a terrible take.


I would advise you to buy one of these: https://www.dell.com/en-ca/shop/dell-ultrasharp-49-curved-us...

80-column line lengths is a pretty severe ask.


Living in the 80's XD


I am at the opposite end. Having any line length constraints whatsoever seems like a massive waste of time every time I've seen it. Let the lines be as long as I need them, and accept that your colleagues will not be idiots. A guideline for newer colleagues is great, but auto-formatters messing with line lengths is a source of significant annoyance.


> auto-formatters messing with line lengths is a source of significant annoyance.

Unless they have been a thing since the start of a project; existing code should never be affected by formatters, that's unnecessary churn. If a formatter is introduced later on in a project (or a formatting rule changed), it should be applied to all code in one go and no new code accepted if it hasn't passed through the formatter.

I think nobody should have to think about code formatting, and no diff should contain "just" formatting changes unless there's also an updated formatting rule in there. But also, you should be able to escape the automatic formatting if there is a specific use case for it, like the data table mentioned earlier.


Define high? I think 120 is pretty reasonable. Maybe even as high as 140.

Log statements, however, I think have an effectively unbounded length. There's nothing I hate more than a stupid linter turning a sprinkling of logs into 7-line monsters. cargo fmt is especially bad about this. It's so bad.


I still prefer 80. I won’t (publicly) scoff at 100 though. IMO 120 is reasonable for HTML and Java, but that’s about it.

Sent from my 49” G9 Ultrawide.


Ugh. 80 is the worst. For C++ it’s entirely unreasonable. I definitely can not reconcile “linters make code easier to read” and “80 width is good”. Those are mutually exclusive imho.

What I actually want from a linter is “120, unless the trailing bits aren’t interesting in which case 140+ is fine”. The ideal rule isn’t hard and fast! It’s not pure science. There’s an art to it.



Give a try to 132 mode, maybe? It was the standard paper width for printouts since, well, forever.


That's actually just weirdly specific enough to be worth a shot.


The printing industry hasn't been around for anything close to forever; even writing is relatively novel compared to spoken human languages.

All that said, I'm intrigued by this 132 number; where does it come from?


"Since forever" as in, "since the start of electronic computing"; we started printing programs out on paper almost immediately. The 132 columns come from IBM's ancient line printers (circa 1957); most other manufacturers followed suit, and even the glass ttys routinely had a 132-column mode (for the VT100 you had to buy a RAM extension; for later models it was just there, I believe). My point is, most people understood, even back in the sixties, that an 80-column-wide screen is tiny, especially for reading source code.


Printers aside, the VT220 terminal from DEC had a 132-column mode. Probably it was aping a standard printer column count. Most of the time we used the 80-column mode, as it was far more readable on what was quite a small screen.


Not only a small screen by modern standards, but the hardware lacked the needed resolution. The marketing brochure claims a 10x10 dot matrix, which will be for the 80-column mode. That works out to a respectable 800 pixels horizontally, but a barely sufficient 6x10 pixels per character in 132-column mode. There was even a double-high, double-width mode for easier reading ;-)

Interesting here, perhaps, is that even back then it was recognized that for different situations, different display modes were of advantage.


> There was even a double-high, double-width mode for easier reading

I'd forgotten that; now that was a fugly font. I don't think anyone ever used it (aside from the "Setup" banner on the settings screen).

I think the low pixel count was rather mitigated by the persistence of the phosphor, though; there are reproductions of the fonts that had to take this into account. See the stuff about font stretching here: https://vt100.net/dec/vt220/glyphs


The IBM 1403 line printer, apparently.


That's literally my setup everywhere: 120 for HTML/Java/JavaScript and 80 elsewhere.

It really suits each language IMO. Although I could probably get away with 80, the habit of using Tailwind classes can get messy compared to 120.


Caveat, my personal experience is mainly limited to JS/TS, Java, and associated languages. 120 is fine for most use cases; I've only seen 80 work in Go, but that one also has unwritten rules that prefer reducing indentation as much as possible; "line-of-sight programming", no object-oriented programming (which gives almost everything a layer of indentation already), but also it has no ternary statements, no try/catch blocks, etc. It's a very left-aligned language, which is great for not unnecessarily using up that 80 column "budget".


But a 49" ultrawide is just two 27" monitors side by side. :-)


Better yet, its three monitors with more reasonable aspect ratios side by side.

16:9 is rarely what you want for anything that is mainly text.


It’s tricky to find an objective optimum. Personally I’ve been happy with up to 100 chars per line (aim for 80 but some lines are just more readable without wrapping).

But someone will always have to either scroll horizontally or wrap the text. I’m speaking as someone who often views code on my phone, with a ~40 characters wide screen.

In typography, it’s well accepted that an average of ~66 chars per line increases readability of bulk text, with the theory being that short lines require you to mentally «jump» to the beginning of the next line frequently which interrupts flow, but long lines make it harder to mentally keep track of where you are in each line. There is however a difference between newspapers and books, since shorter ~40-char columns allows rapid skimming by moving your eyes down a column instead of zigzagging through the text.

But I don’t think these numbers translate directly to code, which is usually written with most lines indented (on the left) and most lines shorter than the maximum (few statements are so long). Depending on language, I could easily imagine a line length of 100 leading to an average of ~66 chars per line.


> the theory being that short lines require you to mentally «jump» to the beginning of the next line frequently which interrupts flow, but long lines make it harder to mentally keep track of where you are in each line.

In my experience, with programming you rarely have lines of 140 printable characters. A lot of it is indentation. So it’s probably rarely a problem to find your way back on the next line.


I don’t think code is comparable. Reading code is far more stochastic than reading a novel.

For C/C++ headers I absolutely despise verbose doxygen bullshit comments spreading relatively straightforward functions across 10 lines of comments and args.

I want to be able to quickly skim function names and then read arguments only if deemed relevant. I don’t want to read every single word.


100 is the sweet spot, IMO.

I like splitting long text as in log statements into appropriate source lines, just like you would a Markdown paragraph. As in:

    logger.info(
        "I like splitting long text as in log statements " +
        "into " + suitableAdjective + " source lines, " +
        "just like you would a Markdown paragraph. " +
        "As in: " + quine);
I agree that many formatters are bad about this, like introducing an indent for all but the first content line, or putting the concatenation operator at the front instead of the back, thereby also causing non-uniform alignment of the text content.


This makes it really annoying to grep for log messages. I can't control what you do in your codebase but I will always argue against this the ones I work on.


I haven’t found this to be a problem in practice. You generally can’t grep for the complete message anyway due to inserted arguments. Picking a distinctive formulation from the log message virtually always does the trick. I do take care to not place line breaks in the middle of a semantic unit if possible.


Yes, I find the part of the message that doesn't have interpolated arguments in it. The problem is that the literal part of the string might be broken up across lines.


And to add to this, you rarely need to read a log message when just visually scanning code; it's fine going off the screen.


Splitting log messages across lines like that is pure evil. Your punishment is death by brazen Bull. Sorry I don’t make the rules, just how it is. :(


Nitpick: this looks like Python. You don't need + to concatenate string literals. This is the type of thing a linter can catch.
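A minimal sketch of what that linter rule relies on: in Python, adjacent string literals are joined by the parser at compile time, so no `+` is needed (the text here is just placeholder content):

```python
# Adjacent string literals are concatenated by the parser; no "+" needed.
msg = (
    "I like splitting long text as in log statements "
    "into suitable source lines."
)
```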


IMO, implicit string concatenation is a bug, not a feature.

I once made a stupid mistake of having a list of directories to delete:

    directories_to_delete = (
        "/some/dir"
        "/some/other/dir"
    )
    for dir in directories_to_delete:
        shutil.rmtree(dir)
Can you spot the error? I somehow forgot the comma in the list. That meant that rather than creating a tuple of directories, I created a single string. So when the `for` loop ran, it iterated on individual characters of the string. What was the first character? "/" of course.

I essentially did an `rm -rf /` because of the implicit concatenation.
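A minimal sketch of the pitfall and the fix (the paths are placeholders; nothing is actually deleted here):

```python
# Buggy: no commas, so Python concatenates the literals into ONE string,
# and iterating over it yields single characters ("/", "s", "o", ...).
buggy = (
    "/some/dir"
    "/some/other/dir"
)

# Fixed: commas (including a trailing one) make it a real two-element tuple,
# and the trailing comma also lets a formatter keep one entry per line.
fixed = (
    "/some/dir",
    "/some/other/dir",
)
```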


It’s actually Java, where the “+” is necessary.


Every editor can wrap text these days. Good ones will even indent the wrapped text properly.


That's a slippery slope towards storing semantics and displaying locally preferred syntax ;)


I prefer storing plain text and displaying locally preferred syntax, to a degree.

With some expressions, like lookup tables or bit strings, hand wrapping and careful white space use is the difference between “understandable and intuitive” and “completely meaningless”. In JS world, `// prettier-ignore` above such an expression preserves it but ideally there’s a more universal way to express this.


And that's fine, as long as whatever ends up in version control is standardized. Locally you can tweak your settings to have / have not word wrapping, 2-8 space indentation, etc.

But that's the core of this article, too; it's since been normalized to store the plain-text source code in git and share it, but the article mentions a code- and formatting-agnostic storage format, where it's down to people's editors (and diff tools, etc.) to render the code. It's not actually unusual, since things like images are also unreadable if you look at their source, but tools like GitHub will render them in a human-digestible format.


And the bikeshedding has begun...


Who’s going to be bikeshedding (about formatting) when everyone can individually configure their own formatting rules without affecting anyone else?


What's the nuclear reactor in this analogy?


That the values could have been extracted to an array of structs and iterated over in a small loop that calls the function for each set of values.
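Sketched in Python (setup_spi and the constant names are stand-ins borrowed from the thread's example, not a real API):

```python
# Data-driven alternative: keep the per-device parameters in one table
# and loop over it, instead of repeating the call per device.
def setup_spi(dev, mode, rate, cs_control, cs):  # hypothetical stand-in
    return (dev, mode, rate, cs_control, cs)

SPI_DEVICES = [
    # device,    mode,     rate, cs_control,  cs
    ("adc",      "SPI_01", 15,   "CS_MUXED",  0x01),
    ("eeprom",   "SPI_10", 13,   "CS_MUXED",  0x02),
    ("mram",     "SPI_10", 50,   "CS_DIRECT", 0x08),
]

configured = [setup_spi(dev, mode=m, rate=r, cs_control=c, cs=n)
              for dev, m, r, c, n in SPI_DEVICES]
```

The table keeps the alignment the original poster wanted, while the call site shrinks to one line that no formatter will touch.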


Was going to say the same thing.

Boy, that was fast.


That's why Python should have gone all-in on significant spaces: tabs for blocks, spaces after tabs for line continuation


Mixing spaces and tabs is a surefire way to ruin everything.


Is this a subtle pro-tab pitch?


You still have to minimize the wrapping that happens, because wrapped lines of code tend to be continuous instead of being properly spaced so as to make its parts individually readable.


> every editor can wrap text these days.

Could. Yesterday Notepad (Win 10) just plainly refused.


Windows is so weird


I forget there are people who don’t configure softwrap in their text editor.

Some languages (java) really need the extra horizontal space if you can afford it and aren’t too hard to read when softwrapped.


I’d agree with you except for the trend over the last 10 years or so to set limits back to the Stone Age. For a while there we seemed to be settling on somewhere around 150 characters and yet these days we’re back to the 80-100 range.



> What caused the drop in popularity in RoR?

Async/await. JavaScript and all other modern languages and frameworks have a great concurrency story. Rails still hasn't (but it's coming next year, it's been coming next year for a decade).


The concurrency story in Ruby is fine. We've been using multi-process Ruby scripts in production for over a decade. The pre 2.7 ruby had some issues, but it's been solid for years. The async/await programming paradigm is painful by comparison. Sure, there are languages out there that have been designed from the ground up with concurrency in mind, that have an even better concurrency story, but those do not put developer happiness(™) front and center.


I disagree. Hiding async makes reasoning about code harder and not easier. I want to know whether disposal is async and potentially affected by network outages, etc.


Look into how React Suspense hides asynchrony (by using fibers). It's very commingled with Next.js, but the original ideas behind why React Suspense doesn't use promises (sebmarkbage had a GitHub issue about it) are very compelling.


Compelling? It's freaking terrible: instead of pausing execution to be resumed when a promise is resolved, they throw away the execution, and when the promise resolves the whole render runs again, potentially throwing again if it hits another promise, and so on. It's a hacked solution due to the use of a global to keep track of the rendering context to associate with hook calls, so it all needs to happen synchronously. If they had passed a context value along with props to the function components, they could have had async/await and generator components.


This is fallacious. You could use the same logic to argue that we should encode the type of every argument and return value of a function into the function signature, and have to explicitly write it out by hand at every call site, for the same reason:

x = number:foo(number:x, string:y)

It's absurd. The type system should be responsible for keeping track of the async status of the function, and you should get that when hovering over the function in your IDE. It does not belong in the syntax any more than the above does, and it's an absolutely terrible reason to duplicate all of your functions and introduce these huge headaches.


I think you should appreciate more how much the tens of billions of dollars Google has invested in Chrome has benefited the web and open source in general. Some examples:

Webrtc. Google’s implementation is super widely used in all sorts of communications software.

V8. Lots of innovation on the interpreter and JIT has made JS pretty fast and is reused in lots of other software like nodejs, electron etc.

Sandboxing. Chrome did a lot of new things here like site isolation and Firefox took a while to catch up.

Codecs. VP8/9 and AV1 broke the mpeg alliance monopoly and made non patented state of the art video compression possible.

SPDY/QUIC. Thanks to Google we have zero RTT TLS handshakes and no head of line blocking HTTP with header compression, etc now and H3 has mandatory encryption.


> Codecs. VP8/9 and AV1 broke the mpeg alliance monopoly and made non patented state of the art video compression possible.

Not really. That was done more by the greed of the MPEG alliance.

Back in the days when <video> was first proposed, VP8 was required to be supported as a codec by all browsers. This was removed as a requirement after Apple stated they were never going to support it, but the other browsers still implemented VP8 because it was royalty-free. Then Google implemented H.264 in Chrome. Mozilla only implemented H.264 in Firefox after it became clear that Google's announcement that they were going to rip H.264 out of Chrome was a bald-faced lie, making H.264 a de facto codec requirement for web browsers.

Having won, the MPEG Alliance got greedy with their next version. H.265 upped the prices on its license agreement, and additionally demanded a cut of all streaming revenue. It got worse: the alliance fragmented, so you had to pay multiple consortia the royalties for the codec (although only one of them had the per-video demand).

It was in response to this greed that the Alliance for Open Media was created, which brought us AV1. I don't know how important Google is to the AOM, but I will note that, at launch, it did contain everybody important to the web video space except for Apple (which, as noted earlier, is the entity that previously torpedoed the attempt to mandate royalty-free codecs for web video).


Not supporting H.264 was arguably what caused the downfall of Firefox usage. Unfortunately Mozilla didn't listen.


The finer point is where these tens of billions came from.

All of it was ad money, and a lot of these innovations were also targeted at better dealing with ads (Flash died because of how taxing it was, mobile browsers just couldn't do it. JavaScript perf allowed these ads to come back full force)

The net balance of how much web technology advanced vs how much ad ecosystems developed is pretty near 0 to me, if not slightly negative.


Isn't webrtc broken in Chrome? Or did they finally fix that? It used to be that everyone supported Chrome's broken implementation, leaving Firefox users with the correct implementation out in the cold.


If you are referring to the standards-based "Unified Plan" vs. the Google proprietary "Plan B" for handling multiple media tracks in SDP, I believe that "Plan B" was finally phased out in 2022.


> VP8/9 and AV1 broke the mpeg alliance monopoly and

and paved the way for a Google monopoly. They literally threatened to pull their support from devices whose manufacturers didn't implement AV1 in hardware.


And refuse to support JPEG-XL

They are now no different to Microsoft with Windows Media.


You raise some good points but re: codecs, I was quite unimpressed with how they handled JPEG-XL.


No, there isn't a need for appreciation. We all cheered back when Google was building a great JavaScript engine and a browser around it. But in hindsight it is clear that Google was just running the old embrace, extend, extinguish playbook on a scale that we were unable to comprehend. We would've been just fine with Firefox, WebKit, and maybe Microsoft would have made Internet Explorer somehow not total shit. Google captured the whole web as a market, and we used the opportunity to build endless JS frameworks on top and went wild with all the VC and advertising money.


Let's play devil's advocate:

> Webrtc. Google’s implementation is super widely used in all sorts of communications software.

Webrtc uses the user's bandwidth without permission or notification and it used to prevent system sleep on macs without any user visible indication.

> V8. Lots of innovation on the interpreter and JIT has made JS pretty fast and is reused in lots of other software like nodejs, electron etc.

No matter how efficient they made it, javascript "applications" are still bloatware that needlessly waste the user's resources compared to native code.

> Sandboxing. Chrome did a lot of new things here like site isolation and Firefox took a while to catch up.

That's useful, but only because of the bloatware above. If you didn't give code running in the browser that much power you wouldn't need sandboxing.

> Codecs. VP8/9 and AV1 broke the mpeg alliance monopoly and made non patented state of the art video compression possible.

Could agree. Not sure of Google's real contribution to those.

> SPDY/QUIC. Thanks to Google we have zero RTT TLS handshakes and no head of line blocking HTTP with header compression, etc now and H3 has mandatory encryption.

It's also a binary protocol that cannot be debugged/tested via plain telnet, which places a barrier to entry for development. Perhaps it enhances Google's market domination by requiring their libraries and via their control of the standard.
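To illustrate the telnet point: HTTP/1.1 is plain text, so you can hand-type a request over any raw TCP connection, whereas an HTTP/2 or HTTP/3 exchange begins with binary framing (and, for H3, QUIC over UDP plus TLS), which rules out that style of debugging. A rough Python sketch against a throwaway local server:

```python
import http.server
import socket
import threading

# Spin up a throwaway HTTP/1.1 server on an ephemeral local port.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Speak the protocol by hand over a raw socket -- exactly what you
# could also type into `telnet 127.0.0.1 <port>`.
with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

server.shutdown()
# First line is a human-readable status line such as "HTTP/1.0 200 OK".
print(response.split(b"\r\n")[0].decode())
```

An equivalent HTTP/2 request would instead begin with the binary connection preface and HPACK-encoded header frames, which is why curl or a Wireshark dissector replaces telnet there.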


> > Codecs. VP8/9 and AV1 broke the mpeg alliance monopoly and made non patented state of the art video compression possible.

> Could agree. Not sure of Google's real contribution to those.

They were not the only contributor (I was the technical lead for Mozilla's efforts in this space), but they were by far the largest contributor, in both dollars and engineering hours.


> No matter how efficient they made it, javascript "applications" are still bloatware that needlessly waste the user's resources compared to native code.

Well that's just biased. Calling an application bloated (which is often not true) describes the result of an entire ecosystem; blaming it on the interpreter is ridiculous. Any qualified software engineer can see the fault in such a comment. You probably know that as well.

So I consider your comment trolling.


I have to agree, to be honest. Whoever decided to run JavaScript in the backend should be committed to a mental institution. JavaScript is a nightmare. But you can't tell a man something his paycheck depends on him not knowing.


>Webrtc uses the user's bandwidth without permission or notification and it used to prevent system sleep on macs without any user visible indication.

>No matter how efficient they made it, javascript "applications" are still bloatware that needlessly waste the user's resources compared to native code.


So should we not deliver advanced sandboxed cross platform applications for any platform, and instead deliver unsandboxed native code for all possible platforms? ActiveX called, it wants to say thanks for the endorsement and that it told you so.

And no more zoom meetings because somebody's Mac might not go to sleep? I'm with you on that one, brother!


> ActiveX called

You do not need to "deliver" inside a bloated VM you know.

Just to spell it out, a web browser is a bloated VM these days.

> And no more zoom meetings

Yes please. No more zoom meetings. Ever.


>You do not need to "deliver" inside a bloated VM you know.

>Just to spell it out, a web browser is a bloated VM these days.

Then Java applets? Oops, that's a bloated VM too.

And how is an M4 emulating x86 code or jitting WASM code not also a bloated VM? Bloated VMs are here to stay.

>> And no more zoom meetings

>Yes please. No more zoom meetings. Ever.

Yay, we've found common ground! Want to chat about it on zoom? ;)


I can read and write just fine, thank you. Want to chat about it on IRC? :)


IRC and other simple tech are the real losers in the modern tech ecosystem.


If the protocol could add reactions and replies, it would enable the clients to make it as engaging as its modern contemporaries.


webrtc is awful, though


And then they removed

"Don't be evil."

At some point they stopped improving the browser for the users and changed to improving the browser for Google.


Maybe they were actually lying when they originally said "Don't be evil," and removing it was only being more truthful?


Their actions back then fit the "Don't be evil" motto.

That’s what mattered.


>Their actions back then fit the "Don't be evil" motto.

Disagree with that. All the privacy issues people have a problem with now were already a problem in 2007. But being the media darling, along with submarine PR, Google didn't get much bad press.

There were lots of other things too, including their site breaking Firefox as well as Chrome, their promise not to make another browser.


They never removed it.


You are right; they moved it from the preface to the end.

Seems they don’t read to the end.


> V8

Great we have fifty bloated front-end frameworks powered by ten bloated back-ends written by novice devs who need to use left-pad dependencies


Of all the things you've mentioned, the only one that genuinely stands out to me as a positive contribution from Google—something that wouldn’t have happened had Chrome never existed—is the codec situation. They leveraged their scale and influence for good in that instance.

That said, it’s not as if other browsers weren’t already making independent strides in optimisation and innovation. In fact, I sometimes wonder whether Chrome has actually steered the browser ecosystem in the wrong direction, while simultaneously eroding a lot of the diversity that once existed.


> That said, it’s not as if other browsers weren’t already making independent strides in optimisation and innovation

Honestly I can't believe that anyone who was around when Chrome came out would say this. IE7 was around, and terrible. Firefox was trying hard, as was Opera, but web tech has become infinitely better with Chrome around, and Google funding it. Without Google funding Firefox as well, Firefox would be nowhere near what it is today.


Aside: it’s impressive how the whole blog post does not mention a single detail of what they actually did to achieve these performance improvements. Code changes, really?


There is also the question of cause and effect. Did Instagram grow to what it is today because of a decade of investments from Meta?


The point of deterrence is that you spend the money so the chance of Russia doing something like a land invasion of Europe is decreased.

The much more likely scenario that Europeans are wanting to prevent is a limited invasion in the Baltics or Balkans in order to politically divide and damage western democracies.

