Hacker News | thephyber's comments

Your links say that Apple complies with the laws of major countries. Which companies don’t do that?

Signal is one example. Their values are simply not compatible with what the Chinese government wants (local data storage, key access, etc.). Instead of complying and putting their users' privacy at risk, they accepted the ban.

Google, out of all companies, also decided to partially walk away from the Chinese market in 2010 over censorship concerns [1].

Nobody is forcing Apple to do business in China or the UK. They actively choose to do so, and in doing so they put themselves in a position where they have to comply with these laws, presumably because it makes them more money.

[1] https://www.nytimes.com/2010/03/23/technology/23google.html


Signal responds to warrants with all the data they keep.

ProtonMail / ProtonVPN responds to the vast majority of warrants with the data they keep.

Apple iCloud always responded to iCloud warrants with whatever data they had (e.g., if the user didn’t enable end-to-end encryption). They shouldn’t have removed end-to-end encryption for the UK, but they have thousands of employees in that country and millions of customers.

Sometimes it’s not the company that is the problem, but the country / legislators.


Google also chooses to be a US company, even though the US is supporting a genocide and waging an illegal war against a foreign country (again).

You could argue Signal is the most "moral" here, but even then they don't really allow self-hosted backends and refuse to open-source their setup


Google of all companies didn't. And we all know how much they care about privacy.

I’m less impressed with Zuck every time I hear something new about him.

Apple has made incredible progress in the last 20 years, but almost none of it has been a brand-new product. It has all been evolving the existing products, building the world’s best supply chain, and winning incredible market share from Windows. To be clear, AirPods are a much bigger market than Nike shoes. Those, plus Apple Watch, iPad, and Vision Pro, are new in the last 20 years.

In the past 20 years, the Facebook website has evolved, but all of the other major investments by the company have been acquisitions: Instagram, WhatsApp, Oculus. Diem (or whatever that proprietary cryptocurrency was called) and the Metaverse were massive failures. I don’t know what to credit Meta for in the AI era except some of the Llama tooling and some open-weights LLMs. CZI is doing cool things, but that’s Zuck’s private science company, not part of Meta.


Anything groundbreaking in advertising? Meta ad tools are pretty granular.

(Looking for anything here!)


Facebook “allowed” Cambridge Analytica to hoover up a massive amount of psychometric data on US voters before the 2016 election (in scare quotes because they “allowed” it in the sense that they forbade it by policy but did nothing significant to deter the data collection at scale).

I would argue that FB’s bigger wins have been being the first app / website to get perhaps 50% of the world population using it, and the Herculean effort it took to moderate that volume of content (whether or not you agree that the moderation was the right choice or successful).


Profiting off of a genocide is a first in the social media world, pretty groundbreaking. Especially when you find out that workers and management knew about it while it was ongoing but did nothing in response. Truly such innovative people; surely they deserve all the money and none of the responsibility. It is a meritocracy, after all.

Marketing and finance are also very large components of cost and value, respectively.

It was a short pithy sentence, but it does have a kernel of truth to it.


That’s what they said about Enron.

Skepticism is an incredibly useful tool, even in excess.


> Security ONLY through obscurity is bad (Kerckhoffs's Principle).

This is the crux of the article.

(1) Kerckhoffs's Principle doesn’t say that. It says to design the system AS IF the adversary has all of the info about it except the secrets (encryption key, certificates, etc).

(2) This rule is okay if you are the solo maintainer of a WordPress installation. It’s a problem if you work at a large company where part of the company knows the full intent of the measure, while the rest of the company doesn’t learn about the other layers of security BECAUSE of the obscurity layer. That’s why it’s important to communicate that this is only a layer and shouldn’t replace any other security decisions.


Kerckhoffs's principle is not about security in general; it is about the design of cryptography. Assume your opponent knows everything about how your cryptosystem works. Your security then lies in the keys, not in knowledge of the method.

More broadly, anything that raises the cost of an attack helps security. Whether it is worth investing your defensive effort in that vs on more actual security is a different matter.


If it does not obscure your own view of the security or reasoning about the security stance.

For instance, with respect to URL parameters, I have seen people being told they have an Insecure Direct Object Reference, then applying base64 encoding to the parameter to obscure what is going on. To QA it looks like junk, so they don't notice it; it is obscure, but base64-encoded parameters are catnip to hackers.

So in this case, the obscurity made the system worse over time.

Heck, there's the most cringeworthy phrase of all, "Base64 encryption", which I have heard many, many times.
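The futility is easy to demonstrate in a couple of lines; a minimal sketch (the parameter value is hypothetical):

```python
import base64

# Hypothetical "obscured" direct object reference: base64-encoding
# the raw ID, as described above, hides nothing from an attacker.
user_id = "1042"
token = base64.b64encode(user_id.encode()).decode()
print(token)  # MTA0Mg==

# Anyone who recognizes base64 recovers the ID in one line:
recovered = base64.b64decode(token).decode()
print(recovered)  # 1042
```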


I love this nuance!

But I think it's covered by your immediate parent comment

> Whether it is worth investing your defensive effort in that vs on more actual security is a different matter.

So the base64 introduces a marginal security gain, but in addition to the implementation effort, it increases the cost of other efforts (which is the case for almost all features). With a fixed QA budget (which is, again, always the case), the quality of the QA (pardon the redundancy) will be the parameter that suffers.

So yes, if the security gain is very minimal, then the comparative cost of the feature will likely be so great that it will not only hurt all the other parameters, like ease of use, but the negative indirect impact on security will also be greater than the marginal positive direct impact on security.

Many such cases.


A nice point!

I agree that anything that raises the cost of an attack may be worth doing. But most “obscurity”-related practices do not meaningfully raise the cost of an attack beyond a certain threshold. Physical locks are not a great analogy.

"Security through obscurity" can help in the reverse (for a time) — if they have your keys but haven’t found the locks.

Might give you enough time to change the locks. But not provably — which can matter to a lot of people.


The example in the article is more likely. Changing the name of a DB table from the default helps because any low-quality probe script will break as soon as its assumption of default names fails. It means that low-effort, low-tech, low-talent attacks will fail. This is not a bad thing, because these are likely the most common kinds of attacks.

Again, I'm not opposed to simple tricks like this to “buy some time”, so long as they don’t PREVENT the deeper layers of security from being implemented. But if a company has scarce resources and a choice between patching unpatched software or changing DB names from the defaults, the former actually improves security, and the latter should only be done once the staff has solved all of the higher-risk items.
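A toy illustration of the point, using sqlite3 in memory (both table names are made up):

```python
import sqlite3

# A table renamed away from a well-known default: any canned probe
# script that hard-codes the default name will now error out.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users_x9f2 (id INTEGER, login TEXT)")  # non-default name

try:
    db.execute("SELECT login FROM wp_users")  # low-effort probe assumes the default
except sqlite3.OperationalError as e:
    print("probe failed:", e)  # no such table: wp_users
```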


Do not track WHEN?

This flag is sent by my browser when I connect to SOMEONE ELSE’s SERVER.

The internet only took off because of its primary business model, which ran on ads and on the derivative information that servers collect about their users.

It’s not fun. It’s not private or secure. It’s not illegal (in most jurisdictions for most industries). The flag exists as a response to the de facto and de jure state of the world, not some fairytale scenario.


> The internet only took off because of its primary business model, which ran on ads

No? It took off before advertising was widespread as a primary or sole funding model. Also, there's literally nothing about advertising that requires data collection about users. Sure, they love to do it, and they might even believe that it helps their profits in some way. But it's not inherent; they got along just fine with billboards and newspaper classifieds. TV ads never required personal information. Nor did pre-roll cinema ads or radio adverts. Nobody was bemoaning in the streets that they couldn't possibly find anything to buy.


> The internet only took off because of its primary business model, which ran on ads and on the derivative information that servers collect about their users.

quite the opposite I would argue:

https://nickyreinert.de/2020/2020-10-24-marketing-killed-the...


> This flag is sent by my browser when I connect to SOMEONE ELSE’s SERVER.

No, it's set in your command shell (e.g. bash) and tells CLI programs that support it to not connect to a server. It has nothing to do with browsers or ads. This is all very clear in the article.
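The convention can be sketched in a few lines. The DO_NOT_TRACK variable is the one being discussed; the helper function is a hypothetical example of how a CLI tool might honor it:

```python
import os

# Opting out of CLI telemetry for tools that honor the convention:
os.environ["DO_NOT_TRACK"] = "1"

def telemetry_enabled() -> bool:
    # Hypothetical check a CLI tool might perform before phoning home:
    # treat any value other than unset/empty/"0" as an opt-out.
    return os.environ.get("DO_NOT_TRACK", "0") in ("", "0")

print(telemetry_enabled())  # False
```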


You’re confusing the Internet with Google.

You can have ads without tracking.

The article quite literally talks about tracking in CLI tools you run on your own computer, half of which exist to pilot products that you pay for with your own money.

Get off your high horse.


I would advocate for not getting your horse high to begin with, or hide your stash better.

Wow, I guess I grew up too close to actual cowboys that this is an interpretation I just never considered. Not sure why though as it's right there for the taking.

The article is about local desktop / CLI tools that collect telemetry, not the web browser "do not track" standard.

You can serve ads without tracking

> The internet only took off because of its primary business model, which ran on ads and on the derivative information that servers collect about their users.

Arguable; on the other hand, it did kill the internet. (Or almost, so far; we'll see whether we rebound after decades of enshittification.)


> This flag is sent by my browser when I connect to SOMEONE ELSE’s SERVER.

...and promptly, thoroughly ignored.


Who is investing in NFTs today?

Who is building their company using a permissionless blockchain as the database? The average person still uses a bank checking account rather than replacing it with a crypto account.

I haven’t heard of any progress on tokens in the Governance direction.

Stablecoins without a public audit trail have so far stayed relevant, but several of them are suspiciously reminiscent of the mistakes that SBF made.

We all see the transfer of funds and the ostensible store of wealth when it comes to buying influence or presidential pardons. Those of us not wearing crypto-colored glasses don’t see the promise that VCs sold us on the industry 5-10 years ago.


I never spoke about NFTs nor do I have to speak about them, not today and not ever, so save your bait. It's in the same way that you didn't speak about bank bailouts, so I won't bait you into it.

Most people obviously use multiple accounts of different types. Those who have crypto wallets will never reveal them to you in the interest of their privacy.

Stablecoin firms make so much cash via interest that they're easily over-capitalized.

If you're foolish enough to be manipulated by VC interests, that's your own fault. I would focus on the tech, not on what VCs want you to believe. This applies generally, irrespective of the sector. I don't know why this is hard to understand.


NFTs are stupid. But I have a feeling as governments default on their debt and economies collapse in the next few decades cryptocurrencies will be of increasing importance.

Cryptocurrencies are now useless, considering that OpenAI and similar companies have enough compute to hijack them, and the AI thing might not work out at all…

(1) Capability is not the same as action. Every police officer in my city COULD murder me with their department issued gun at pretty much any time, but they haven’t. There are multiple reasons why, not the least of which is that _actions have consequences_. Worrying about that scenario is futile.

(2) The major cryptocurrencies aren’t as vulnerable to a malicious majority as you seem to think. All of the BTC ATMs, PoS providers, crypto exchanges, etc have strong incentive to ban malicious peers and they can do this soon after they identify the threat. The malicious majority would not be sufficient - they would also have to continually mine their own blocks faster than the rest of the network does.

(3) There would be a forked blockchain, but only naive nodes that trust by default would continue with the illegitimate fork. If the nodes that actually USE the cryptocurrency don’t agree with the malicious majority, it will be difficult to get the coins / tokens out of exchanges.

(4) The effect of any hijacked hardware lasts only for the duration of the attack. Once the AI GPUs stop the attack and return to responding to LLM prompts, the legitimate blockchain returns to being the longest one, so the whole network returns to trusting the legitimate fork.

(5) The BTC network is governed by a protocol agreed to by consensus. If the illegitimate fork stays longer than the legitimate one, the participants in the market can agree on a protocol change that hard-codes the illegitimate fork out of the picture (this happened with Ethereum after The DAO exploit in the network’s early days).
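Point (4) can be illustrated with toy numbers (all rates and durations are made up):

```python
# Honest network vs. a temporary majority attacker, in blocks/hour.
honest_rate = 6
attacker_rate = 9      # majority hash power, but only while the attack lasts
attack_hours = 24

honest_blocks = honest_rate * attack_hours      # 144
attacker_blocks = attacker_rate * attack_hours  # 216: attacker fork is longer

# Once the attacker's GPUs go back to serving LLM prompts, their fork
# stops growing, and the honest chain erases the deficit in:
deficit = attacker_blocks - honest_blocks       # 72 blocks
hours_to_overtake = deficit / honest_rate
print(hours_to_overtake)  # 12.0
```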


I want the drugs you’re on :D

Thanks for your deep and clearly thought out reply.

That's 100% nonsense.

Thanks for putting in the time to articulate why the AI GPUs cannot possibly be used to fork the blockchain and obtain a majority.

Because they're needed to run AI. Newer hardware is increasingly specialized for AI too. Moreover, if funds start disappearing, the price will crash, negating the point.

They're raising the prices for AI, so demand will inevitably lower, freeing up capacity.

You're speaking out of your a$$. AI demand is booming; it will keep booming ~10x more every year, so ~1000x more in three years.

If the prices are booming, it's to have a more sustainable pricing model.


> it will keep booming ~10x more every year, so ~1000x more in three years

LOL. That's some absurd extrapolation.


Scary = “if I have no future prospects for work”

It’s the combination of AI changing the workplace, the large techs shedding double digit headcount, recruiting / hiring departments being so broken by the AI arms race hitting job applications, and the macro business environment generally being on the downward slope at the moment.


Not all AI companies are the same.

Some are piling on masses of debt to build capacity (e.g. Oracle). Others are just reinvesting the profits from the rest of their company (e.g. Google, Meta).

Anthropic’s moat is their best tool, Claude Code.

OpenAI’s moat is the brand of ChatGPT, once the fastest growing app in the history of the world.

It’s possible that open weight models keep pace, but it’s also possible that the investment to train them becomes prohibitively expensive and open weight models cease to keep pace with the large foundation model companies.


I really don't think open models will lose. I think they are cheaper to train because they have to be more efficient than the monstrosities we have now.

There is no theory that says the capabilities of the current frontier models cannot exist in models with 1/100th the compute waste ;). When we start trending in that direction, and oh wow we truly are, there will be no reason for these services. You could run them on your own hardware without serious investment.

The moat OpenAI and Anthropic have is that they, among others, have attempted to buy up all of the compute hardware for the next two years. That's intentional. They know the only existential threat to them is someone coming up with a way to do this better than they do. It's already happened, and it's going to become more and more divergent.


I’m interested in learning more about your theory that these models can be trained more cheaply. Is anyone doing it from scratch, rather than adversarial distillation?

It is a lot cheaper to train a 27B model such as qwen3.6, which you can even vibe-code or agentic-code with, than it is to train a 1T+ parameter model. It runs on a single commodity GPU, for goodness' sake.

It's not a theory. These smaller models that are coming out are huge advances for the field.

I can't comment on companies' training practices. That would be proprietary stuff, I guess. I think the claims that the advances being made are due to distillation alone are completely unfair. The advances are not just data.


It almost doesn’t matter if it’s trained using adversarial distillation - if it’s nearly as good, and one-hundredth the cost, the choice is obvious.

Open weight models will keep pace because capable open-weight models are China's strategy for preventing a closed takeover of AI by the West.

US megatechs stole copyrighted data to train their hyper-expensive models.

Chinese megatechs stole copyrighted data AND trained their models on derivative / synthetic data that came from the US foundation models.

I’m happy Chinese foundation model trainers were able to use Huawei (homegrown) hardware to train their models (also because having Nvidia dominate that sector is terrible for competition), but if Chinese megatech companies are just deriving their open weights models from US companies, then this is just an IP theft exercise.


There will always be jobs for private security, firefighters, and utility repairmen to protect / restore the data centers when people inevitably attack them.

There will be a period of rapid change. If we are lucky, the political class will see and adjust policy quickly. Otherwise we will see US urban areas gutted like the Rust Belt was after NAFTA / WTO. They are making the same mistakes but in a different industry.


Why will there always be these jobs, if the technofascists are right? They're creating enslaved sentience. Even the class traitor police want a union, fight for more pay.

What's uniquely un-automate-able about those jobs in their dream future?


Never underestimate the capabilities of a desperate human.

I don't think you understood my question.
