Also potentially relevant: in the 00s, the performance gap between gzip and bzip2 wasn't quite as wide - gzip has benefited far more from modern CPU optimizations - and slow networks / small disks made a higher compression ratio more valuable.
RAR/ACE/etc. used continuous compression - all files were concatenated and compressed as if they were one single large file, much like what is done with .tar.bz2. bzip2 on Windows did not do that; there was no equivalent of .tar.bz2 on Windows.
You can bzip2 -9 each file in some source code directory and tar the resulting .bz2 files. That would be more or less equivalent to creating a ZIP archive with a BWT compression method. Then compare the result with tarring the same source directory and running bzip2 -9 on the resulting .tar.
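The same comparison can be sketched in Python with the built-in bz2 and tarfile modules (the file names and data here are made up for illustration; real source trees will vary):

```python
import bz2
import io
import tarfile

def solid_size(files):
    # files: list of (name, data) pairs.
    # Tar everything into one stream, then bzip2 the whole stream --
    # the .tar.bz2 / RAR "solid" approach: one compression context
    # shared across all members.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return len(bz2.compress(buf.getvalue(), 9))

def per_file_size(files):
    # bzip2 each file separately and sum the sizes -- the classic
    # ZIP approach: a fresh compression context per member, so
    # redundancy *between* files is never exploited.
    return sum(len(bz2.compress(data, 9)) for _, data in files)

# Ten similar, repetitive "source files": solid mode should win
# because the cross-file redundancy is compressed away.
files = [("f%d.c" % i, (b"static int value_%d = %d;\n" % (i, i)) * 200)
         for i in range(10)]
print(solid_size(files), per_file_size(files))
```

For many small, similar files (like a source tree), the solid number typically comes out well below the per-file sum, which is exactly the effect the RAR/tar.bz2 approach exploits.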
The continuous mode in RAR was really something back then, exactly because RAR had a long LZ77 window and compressed files as one continuous stream.
'Solid compression' (as WinRAR calls it) is still optional with RAR. I recall the default is 'off'. At the time, that mode was still pretty good compared to bzip2.
In fact, collusion with the likes of YieldStar is the name of the game. Everyone sets prices based on what the algorithm tells them to set, and they all benefit from that rise in prices because there's basically no competition driving the price down.
There's also been a steady consolidation of ownership of rental units which also artificially increases prices.
There's a reason nowhere in the country at this point has affordable housing.
Yup. Development companies contract when the housing market contracts. They aren't building houses for the fun of it; they are building them because they believe the 100 houses they build in a hot market will ultimately pay back the cost of the land purchase. They will never build so many houses as to decrease the cost of a home.
I actually got my home from a developer right after the housing bubble. They confided in me that they were giving away these homes pretty much at cost and that they had to fire a huge portion of their staff because the market was just crap at the time.
Really, the only way to actually achieve lower housing prices is through state ownership and build-out. The state could also spend a premium on building homes that it sells at a loss or rents at lower rates. But that would be pretty unpopular with the general public.
Yep — or aggressive subsidization of the inputs of housing production, or some other cost management of an input (such as a high LVT that discourages speculation and withholding of valuable land from use)
> Plus it is not literally putting money in people's hands which is often unpopular with some demographic groups
I'd be really opposed to this. It'd only be ok if we nationalized the industries where we set these rules and rates. Otherwise, this ends up being a simple handout to private industries.
For example, let's say we guarantee x liters of water. Well, who decides how much those x liters cost? If it's a private company and the government is guaranteeing payment, you can bet water (which is relatively cheap where I live) will end up being the most expensive resource imaginable. And that may actually be true depending on the location, but it'd also be true in non-desert areas with plenty of water.
We've effectively had that here with the ACA, where the government has decided that it will cover the first $800 or so of your health insurance. What happened? Magically, the cost of health insurance increased by $800. Private industries aren't stupid; they'll always charge the maximum price the market will bear. And when we start talking about captured industries like your data provider, power provider, or water provider... well, that's where we can trust private industry the least, as they literally have the public over a barrel. Utility boards are an OK solution, but the better solution is to turn these into public institutions instead of private ones.
> We've effectively had that here with the ACA, where the government has decided that it will cover the first $800 or so dollars of your health insurance. What happened? Magically, the cost of health insurance increased by $800.
I don’t think that’s an accurate description of ACA [1], it didn’t lead to a dollar to dollar increase in premiums (share a citation if otherwise), and it’s a bit misleading to say it led to an increase in premiums because plans pre-ACA were effectively inaccessible to and lacking in benefits for impoverished people or people with pre-existing conditions.
[1] Here’s a brief description of ACA from Wikipedia:
> The act largely retained the existing structure of Medicare, Medicaid, and the employer market, but individual markets were radically overhauled.[1][11] Insurers were made to accept all applicants without charging based on pre-existing conditions or demographic status (except age). To combat the resultant adverse selection, the act mandated that individuals buy insurance (or pay a monetary penalty) and that insurers cover a list of "essential health benefits". Young people were allowed to stay on their parents' insurance plans until they were 26 years old.
There will never be a cited reason for increases, but here's 2023, where basically all insurers filed for a 10% increase in premiums. [1]
Since the 2022 COVID bill, which significantly increased the subsidization of premiums, health insurers have found various reasons to increase their premiums by inflation-beating numbers.
That's obviously a "the market will bear it" situation.
The ACA was a big bill that did a lot. I'm not talking about all of it, but rather the premium subsidization along with the COVID premium increase, both of which expired in 2026.
Look, the premiums expiring was bad. IDK if that was clear from my earlier comment. But there's a fundamentally unaddressed issue with insurers in general, where they charge not based on competition or the cost of service, but based on what consumers can bear. Profit incentives for healthcare in the US are completely misaligned with providing good general healthcare. The ACA premiums are a band-aid over an artery laceration. Better than nothing, but that thing is going to very quickly start bleeding through. You can keep slapping on band-aids, but ultimately you'll be looking at more damage if you don't just address the issue.
In America, "utilities" refers to services tied to ideas of basic needs for survival, so they are often public infrastructure with private operators; but in the case of some things, like the internet, it's purely privatized.
> Would this is safe to do on a sunny warm weather? Would body heat plus the sun ruin the cream?
It's fairly safe. You can leave dairy products unrefrigerated for an uncomfortable amount of time :) Butter, in particular, can last for days outside a fridge.
The bacteria that tend to infest dairy products will usually (but not always) turn it into something tasty like yogurt.
Don't get me wrong, you can definitely get sick from spoiled dairy products, but it's not a 100% thing.
> Butter, in particular, can last for days outside a fridge.
I live in Ireland, and once we take butter out of the fridge (to replace the one that's now gone), it doesn't go back in, whatever the weather. All butter here is basically of Kerrygold quality (I'm talking real butter of course).
That's basically how we treated butter while I grew up. So long as it's salted, it rarely goes bad outside the fridge. We had a butter dish and that was about it. The cover keeps the butter from turning a darker yellow and drying out. But we'd still eat it even when that happened.
Gotta be honest, though, I'm not a fan of grassy dairy products :). I had dairy cows growing up and in the spring their milk definitely took on a distinct grassy flavor. I personally preferred it more when it was primarily hay flavored. Store milk tastes like basically nothing in particular.
Yes, also in Ireland and while I wouldn't leave homemade butter out for more than a day or two, Kerrygold salted will last two weeks at 19C without issue.
Yeah we keep butter in a butter dish in the cupboard, refill from the fridge as it is used up. I never knew this wasn’t what everyone did until my roommate in college was blown away about how good the butter was this way.
If I could change one thing in computing, it'd be how SQL handles NULL. But if I got a second thing, it'd be how IEEE handles NaN. I probably wouldn't even allow NaN as a representation. If some mathematical operation results in what would be NaN, I'd rather force the programming language to throw some sort of interrupt or exception. Much like what happens when you divide an integer by 0. Heck, I'd probably even stop infinity from being represented with floats. If someone did 1/0 or 0/0, I'd interrupt rather than generating an INF or NaN.
In my experience, INF and NaN are almost always an indicator of programming error.
If someone wants to programmatically represent those concepts, they could do it on top of and to the side of the floating-point specification, not inside it.
People who criticize the IEEE standard typically have never read it.
Nobody forces you to use NaNs or to have partially-ordered floating-point numbers.
The default settings that have been recommended since the beginning by the standard were to not use NaNs and to have totally-ordered FP numbers.
For this choice, the invalid operation exception must not be masked.
However, it turns out that most programmers are lazy and do not want to bother to write an exception handler, so the standard libraries of C and of most other languages have chosen to mask by default this exception, which causes the generation of NaNs.
This bad default configuration, in combination with the bad feature that most programming languages have only the 6 relational operators sufficient for a total order instead of the 14 operators required for a partial order, makes frequent tests necessary to detect whether a value is a NaN, to avoid surprising behavior in relational expressions.
I think that in most cases the extra checks for NaNs add up to much more work than writing an exception handler, so the better solution is to unmask the invalid operation exception, which makes the NaN problem go away.
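A quick illustration of the partial-order problem in Python (whose float arithmetic, like most C environments, leaves the invalid-operation exception masked, so NaNs propagate silently; `safe_max` is a made-up name):

```python
import math

nan = float("nan")

# Every ordinary comparison against NaN is False, including equality --
# the usual 6 relational operators cannot distinguish "less than"
# from "unordered", which is exactly the partial-order trap.
print(nan == nan)            # False
print(nan < 1.0, nan > 1.0)  # False False

# So callers end up writing explicit NaN checks everywhere. Raising
# immediately is the software analogue of unmasking the invalid
# operation exception: the NaN never escapes into later comparisons.
def safe_max(xs):
    if any(math.isnan(x) for x in xs):
        raise ValueError("NaN in input")
    return max(xs)
```

Without such a check, `max` and `sorted` give order-dependent results on NaN-containing data, because every comparison involving the NaN silently answers False.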
> What actual useful skill do you think the gas station keeper could learn?
I mean, it's possible there are useful skills they could learn, but there's no interest or desire to learn those skills. It's completely possible that person is perfectly content doing that work.
That labor cheapness is enabled by a low cost of living. Those things all tend to feed into each other.
> I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
It's useful labor. Yes you could do it yourself, but it gives them a job which they can ultimately use to afford food and where they live.
I mostly only feel bad for kids doing that sort of labor, as it means they aren't getting an education. But for an adult? It speaks to something a bit right about their economic situation that they can stay afloat by merely fetching items in a store.
I wish it were possible in the US for someone to make a living doing DoorDash or Instacart.
> what specifically should they have done differently?
Kamala squandered a lot of good will and enthusiasm when she needed it the most. When Biden dropped out there was a lot of real excitement about something different.
It really wouldn't have been hard for her to spend time touting some of the best parts of the Biden admin like Lina Khan. But that sort of messaging was unpopular with the donors.
Putting forward actual policies to make things better would also have helped, even if they were just carbon copies of the Biden policies. The way she campaigned was, frankly, really weak. Giving a tax break to homeowners and copying Trump's "No tax on tips" line really did not look good.
It was also pretty apparent that while Walz was doing a pretty good job making Trump and Vance look bad, the Kamala team pulled him in for being too alienating. Kamala distanced herself from her own VP pick and instead decided to campaign with Liz Cheney, a well-known Republican whose father was good ole war-crimes Cheney. Neither is particularly popular with either Democrats or Republicans.
The Kamala campaign spent a large amount of time trying to win over disaffected Trump voters. That was a disaster. No amount of "I'm tough on transnational criminals" would convince a crowd that's currently cheering on ICE to cheer on Kamala.
In the end, she did a lot to kill the enthusiasm of the base. She spent just too much of the limited time she had trying to make the case that she is appealing to republicans. Who, of course, all thought she was a super woke radical leftist (she was not).
Gaza was another huge issue that Kamala's campaign ignored and never addressed. A lot of people believe this is why the DNC autopsy hasn't been released as it likely played a large role in the depressed voter turnout for Kamala.
In the end, the problem with her and her campaign is she ran the Hillary Clinton campaign playbook. Far too much time trying to remind people that Trump is bad and far too little time making the case for why she's better.
This isn't all her fault. Biden is a big asshole for running for a second term. There have been leaks that his staff knew full well that he was a train-wreck and that his polling was really bad. I think they thought the early debate would ultimately prove he was capable of winning, which, as we all know, turned out to be one of the biggest train-wrecks of a modern presidential campaign. But also, there's absolutely no chance that Biden didn't know he was dealing with cancer going into 2024. That's not something a President is unaware of, especially not once it's gotten to stage 4. My conspiracy theory is that a major reason he disappeared towards the end of his term is that he was dealing with cancer therapy. It wouldn't shock me to learn that he had chemo brain while debating Trump.
This ultimately is what shapes my view of what a good test is vs a bad test.
An issue I have with a lot of unit tests is that they are too strongly coupled to the implementation, which means any change to the implementation forces you to change the tests.
IMO, good tests are relatively immutable. You should be able to have multiple valid implementations. You should add new tests to describe the new functionality of that implementation, however, the old tests should remain relatively untouched.
If it turns out that a single change to an implementation requires you to change and update 20 tests, those are bad tests.
What I want as a dev is to immediately think "I must have broken something" when a test fails, not "I need to go fix 20 tests".
For example, let's say you have a method which sorts data.
A bad test checks "did you call this `swap` function 5 times?". A good test says "I gave the method this unsorted data set; is the data set now sorted?". Heck, a good test can even say something like "was this large data set sorted in under x time?". That's trickier to do well, but it's still a better test than "did you call swap the right number of times", or, even worse, "did you invoke this exact sequence of swap calls".
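A sketch of the good kind of test in Python (the function names are hypothetical; any correct sort implementation should pass it unchanged):

```python
# Behavior-level test: it asserts on inputs and outputs only, so it
# survives swapping the implementation entirely.
def check_sorts_correctly(sort_fn):
    data = [5, 3, 1, 4, 2, 2]
    assert sort_fn(list(data)) == sorted(data)

# Two very different implementations satisfy the same contract.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)   # never asserted on directly -- no swap-counting
    return out

check_sorts_correctly(insertion_sort)
check_sorts_correctly(sorted)  # the builtin passes the exact same test
```

Note that the test never mentions `insert`, `swap`, or comparison counts; replacing insertion sort with quicksort (or the builtin) requires no test changes at all.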
> IMO, good tests are relatively immutable. You should be able to have multiple valid implementations. You should add new tests to describe the new functionality of that implementation, however, the old tests should remain relatively untouched.
Taken to extreme this would mean getting rid of unit tests altogether in favor of functional and/or end-to-end testing. Which is... a strategy. I don't know if it is a good or bad strategy, but I can see it being viable for some projects.
If you can't tell, I actually think functional tests have a lot more value than most unit tests :)
Kent C. Dodds agrees with me. [1]
This isn't to say I see no value in unit tests, just that they should tend towards describing the function of the code under test, not the implementation.
> Taken to extreme this would mean getting rid of unit tests altogether in favor of functional and/or end-to-end testing.
The dirty little secret in CS is that unit, functional, and end-to-end tests are all the exact same thing. Watch next time someone tries to come up with definitions to separate them and you'll soon notice that they didn't actually find a difference or they invent some kind of imagined way of testing that serves no purpose and nobody would ever do.
Regardless, even if you want to believe there is a difference, the advice above isn't invalidated by any of them. It is only saying test the visible, public interface. In fact, the good testing frameworks out there even enforce that — producing compiler errors if you try to violate it.
Yep, the 'unit' is whatever size one chooses. The exact same thing happens when discussing microservices vs. monoliths.
Really, it all comes down to agreeing on what terms mean within the context of a conversation. Unit, functional, and end-to-end are all weasel words unless defined concretely, and should raise an eyebrow when someone uses them.
> The dirty little secret in CS is that unit, functional, and end-to-end tests are all the exact same thing.
I agree that the boundaries may be blurred in practice, but I still think that there is distinction.
> visible, public interface
Visible to whom? A class can have public methods available to other classes, a module can have public members available to other modules, a service can have a public API that other services can call over the network, etc.
I think that the difference is the level of abstraction we operate on:
unit -> functional -> integration -> e2e
Unit is the lowest level of abstraction and e2e is the highest.
The user. Your tests are your contract with the user. Any time there is a user, you need to establish the contract with the user so that it is clear to all parties what is provided and what will not randomly change in the future. This is what testing is for.
Yes, that does mean any of classes, network services, graphical user interfaces, etc. All of those things can have users.
> Unit is the lowest level of abstraction and e2e is the highest.
There is only one 'abstraction' that I can see: Feed inputs and evaluate outputs. How does that turn into higher or lower levels?
It took me a bit of time (and two or three different views) to finally get this. That is mostly why I hardcode my values in tests: it makes them simpler. If something fails, either the values are wrong or the algorithm of the implementation is wrong.
Comparing actual outputs against expected ones is the ideal situation, IMHO. My own preference is for property-checking; but hard-coding a few well-chosen values is also fine.
That's made easier when writing (mostly) pure code, since the output is all we have (we're not mutating anything, or triggering other processes, etc. that would need extra checking).
I also think it's important to make sure we're checking the values we actually care about; since those might not be the literal return value of the "function under test". For example, if we're testing that some function correctly populates a table cell, I would avoid comparing the function's result against a hard-coded table, since that's prone to change over time in ways that are irrelevant. Instead, I would compare that cell of the result against a hard-coded value. (Rather than thinking about the individual values, I like to think of such assertions as relating one piece of code to another, e.g. that the "get_total" function is related to the "populate_total" function, in this way...).
The reason I find this important, is that breaking a test requires us to figure out what it's actually trying to test, and hence whether it should have broken or not; i.e. is it a useful signal that requires us to change our approach (the table should look like that!), or is it noise that needs its incidental details updated (all those other bits don't matter!). That can be hard to work out many years after the test was written!
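A sketch of that "assert only the cell you care about" idea in Python (all of the names here, including `build_report` and the report layout, are made up for illustration):

```python
# Hypothetical report builder; the layout contains incidental detail
# that is likely to change over time.
def build_report(items):
    return {
        "title": "Invoice",                  # incidental: wording may change
        "rows": items,                       # incidental: may gain columns
        "total": sum(p for _, p in items),   # the value actually under test
    }

def test_total_matches_items():
    items = [("apples", 3), ("pears", 4)]
    report = build_report(items)
    # Brittle: asserting the whole structure breaks whenever the title
    # or row format changes, even though this test is about the total.
    #   assert report == {"title": "Invoice", "rows": items, "total": 7}
    # Robust: relate the one cell under test back to its inputs.
    assert report["total"] == sum(p for _, p in items)

test_total_matches_items()
```

When this test breaks, it can only be because the total is wrong, so the signal is unambiguous; the incidental parts of the report can churn freely without generating noise.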
It was long ago surpassed by LZMA and zstd.
But back in roughly the 00s, it was the best standard for compression, because the competition was DEFLATE/gzip.