Hacker News | 0manrho's comments

It's more Chinese now than it ever was American, but it's certainly not an absolute thing. Thanks to the global supply chain, it's a big, complicated spectrum compounded by a bunch of "it depends." If you don't want to dwell in that pedantry, I don't blame you (though I am easily nerd-sniped by discussions of logistics), but without it, that essentially leaves us with "who owns it" and "where is it headquartered." There's also "what are the demographics of their employees," to see whether a single nationality is strongly represented, but that information isn't always readily available.

I don't know how much employee nationality matters (if it did, some of the big tech companies could be thought of as Indian ;-), even though they're American-owned).

To me, a 'Chinese company' is one that is headquartered in China, has predominantly Chinese employees, and does more of its business in China than in any other single country.

The ownership of Volvo and Jaguar by Ford is interesting because it affected them differently: with Jaguar it arguably resulted in a significant improvement in quality and a reduction in vehicle complexity, while Volvo seemed less affected by its Ford period than Jaguar.


I'm confused: you say it doesn't matter, then agree explicitly, point for point, with what I was saying.

> I dont know how much employee nationality matters

> has predominantly chinese employees

then why does it matter here? I was saying the exact same thing.


> The comments here sound like they're from people who don't work in tech or at large companies...

Or they're from people that read the headline/article.

It editorializes the motivation for this as "safety," and thus a lot of users are pointing out how hollow that rings, or how misguided it seems, when there are ways we'd much prefer they take to improve safety. For example, the lack of physical buttons and the consolidation of everything into the touchscreen, which the article also acknowledges (and, in turn, acknowledges that Volvo is aware people are growing more disgruntled with it).

This isn't a lack of understanding that big corporations are capable of having multiple people doing multiple things; this is us questioning whether Volvo's reputation for actually caring about safety still holds true, or whether its new owners with the final say in these matters (Geely) are just riding on that reputation: ignoring the much more pressing safety concerns while knowingly cashing in on that reputation-capital by pandering to those same concerns with a font.


The headline wasn't written by Volvo.

Does the font improve safety and is that the motivation, or not?

There are comments here like "They should instead focus on their overall software stability and usability", and spankalee is correctly pointing out that it's a false dichotomy.


> The work of London-based type design studio Dalton Maag, the new typeface is designed ‘to improve readability, sharpen attention, and promote a calmer, safety-focused driving experience.’

That's a quote from either Volvo or the designer. You're right -- it doesn't explicitly say that this was a quote from Volvo; but I'd be a bit surprised if a well-known designer was just making that up without it being part of the shared vision around the work.

And if that's true, the critics are correct. Volvo should be putting in physical buttons to make safer cars. Instead, they are claiming some bullshit "early adopter" status and putting in large amounts of control and information on an unsafe touchscreen to save money.

Casually window dressing this designer work as a "safe" typeface smacks of trying to cover up shoddy mistakes, and they need to be called out for that obfuscation specifically.


> And if that's true, the critics are correct. Volvo should be putting in physical buttons to make safer cars.

No one said otherwise.

> Instead

Same false dichotomy.

> Casually window dressing this designer work as a "safe" typeface smacks of trying to cover up shoddy mistakes

To conspiracists.


I think the font looks lovely. Great touches.

I have a Volvo with Android Automotive. And I think touchscreens in cars are trash, and Android particularly so; the latency is horrendous, the rear-view camera only works 50% of the time, everything just feels like the cheapest trash Android tablet from a decade ago.

I really wish this car just had physical controls and a double-DIN CarPlay deck from Pioneer or whatever; the experience would be so much better.

I honestly believe I'm going to get into an accident in a parking lot due to the horrendous sight lines and unreliable camera.


> I think the font looks lovely. Great touches.

It looks nice, but it's nothing outstanding or particularly more legible compared to the many fonts already developed for this purpose. I think they wanted their own identity, and there's nothing wrong with that. But the "designed for safety" part feels like a gimmick to tie into their branding.

Car manufacturers change their logo or font occasionally to send a message or solidify a brand identity; of course it won't be in any way related to the engineering of the car.

> Android Automotive. And I think touchscreens in cars are trash, and Android particularly so; the latency is horrendous

I don't own a Volvo, but I've seen the infotainment system in action on their premium cars (XC60/XC90/EX90). If I were in the market for a new car in that category, the infotainment and "Volvo's close relationship with Google," to quote the article, would single-handedly cross Volvo off my list.


This in no way responds to anything I wrote ... perhaps you meant to post it at the top level.

Which is why my comment also specifically went to pains to acknowledge that Volvo shares/echoes that sentiment, which it absolutely does.

And indeed, it's not a dichotomy at all, because...

> Does the font improve safety and is that the motivation, or not?

This is the actual false dichotomy.

If the issue with usability/safety/accessibility/ergonomics/etc. regarding touchscreens were "I can't read the font," then maybe this POV would be onto something. But that's not the issue, and no one is confused about that. This is like putting a band-aid on a patient's arm when their leg is broken and then acting confused when the patient asks, justifiably, "Are you even listening to me? That's not the problem!"


There's evidence that less aesthetic humanist fonts are more legible and safer than the new grotesque-derived example here.

https://news.mit.edu/2012/agelab-automobile-dashboard-fonts-...


There are a variety of reasons, but many of us don't want any of the "smartness" and all of the stupidity that comes with "smart TVs" these days, yet don't really have comparable "dumb" options at similar or cheaper price points. The telemetry (ACR), the unremovable Copilot app being added to LG TVs, and all the ads Samsung is cramming into their "smart" garbage are three prime examples, but certainly not the only reasons I hate smart TVs (or really any device marketed as "smart") these days.

Most importantly though, can you even get non-smart TVs these days that aren't super-budget items? To my knowledge that's pretty much not a thing anymore (yes, there are presentation displays and large-format monitors, but that gets into the weeds fast about feature/panel/spec differences, not to mention price differences).


> There are a lot of reasons to not like Mozilla

Correct.

> but it's crazy to be against them for AI.

Disagreed.

> A browser is literally a user agent.

In the same way that a car is literally just some wheels. It's overly reductionist to the point of being adversarial.

> What well-funded org should be entrusted with making an open source agent for the user instead?

What does that have to do with the topic at hand? Maybe if you didn't try to strip the context (Mozilla, its reputation, its actions, its incentives, and how this AI initiative conflicts with the user base's expectations, and the references therein), all this would seem a lot less "crazy," even if you still disagree.

Mozilla's users aren't being unreasonable or irrational for voicing criticisms here.

Sure, there's plenty of blind hate for AI. But many of us who aren't blindly hateful still don't like the way Mozilla is going about this, for a number of very valid reasons/concerns well beyond "I don't like AI."


We shouldn't have to opt out in the first place.

It should be opt-in, not enabled by default.

Why? Because AI features are constantly and very frequently changing and evolving, with lots of security concerns given how much scope/context/permission they're typically granted. Having it enabled by default means you have zero assurance that whatever settings/preferences/configs you changed in order to "opt out" will still be respected/preserved/effective after the next change.

This is a major problem before we ever get to "what are the specific problems" regarding AI.


I sympathize, but also, the XKCD about Standards comes to mind: https://xkcd.com/927/


> A lot of people quote this xkcd comic for each new implementation. However, this is not exactly the same.

Well... It is exactly the same.


SATA just needs to be retired. It's already been replaced; we don't need Yet Another Storage Interface. Considering consumer I/O chipsets are already implemented such that they take four (or generally, a few) upstream lanes of $CurrentGenPCIe to the CPU and bifurcate/multiplex them out (providing USB, SATA, NVMe, etc.), we should just remove the SATA cost/manufacturing overhead entirely and focus on keeping the cost of that PCIe switching/chipset down for consumers (and stop double-stacking chipsets, AMD; motherboards are pricey enough). Or even just integrate better bifurcation support on the CPUs themselves, as some already support it (typically by converting x16 on the "top"/"first" PCIe slot to x4/x4/x4/x4).

Going forward, SAS should just replace SATA where NVMe PCIe is for some reason a problem (e.g. price), even on the consumer side, as it would still support existing legacy SATA devices.
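To put rough numbers on why keeping a dedicated SATA block around buys so little, here's a minimal back-of-the-envelope sketch (nominal line rates with encoding overhead only; it ignores protocol overhead and real-world controller limits, so treat it as illustrative):

    # Per-link usable throughput: line rate scaled by its encoding efficiency.
    links = {
        # name: (line rate in Gbit/s, payload bits, total bits per encoding group)
        "SATA III":    (6.0,    8,  10),  # 8b/10b encoding
        "SAS-3":       (12.0,   8,  10),  # 8b/10b encoding
        "PCIe 4.0 x1": (16.0, 128, 130),  # 128b/130b encoding
        "PCIe 4.0 x4": (64.0, 128, 130),  # typical chipset uplink / NVMe SSD width
    }

    for name, (gbps, payload, total) in links.items():
        usable = gbps * payload / total
        print(f"{name:12s} ~{usable:5.1f} Gbit/s usable (~{usable / 8:.2f} GB/s)")

In other words, even a single lane of PCIe 4.0 outruns SATA by more than 3x, which is the whole argument for spending the chipset's budget on PCIe switching instead.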

Storage-related interfaces (I'm aware there's some overlap here, but the point is, there are already plenty of options and lots of nuances to deal with; let's not add to them without good reason):

- NVMe PCIe

- M.2 and all of its keys/lengths/clearances

- U.2 (SFF-8639) and U.3 (SFF-TA-1001)

- EDSFF (which is a very large family of things)

- FibreChannel

- SAS and all of its permutations

- Oculink

- MCIO

- Let's not forget USB4/Thunderbolt supporting tunnelling of PCIe

Obligatory: https://imgs.xkcd.com/comics/standards_2x.png


I think it's becoming reasonable to think consumer storage could be a limited number of soldered NVMe and NVMe-over-M.2 slots, complemented by contemporary USB for more expansion. That USB expansion might be some kind of JBOD chassis, whether that is a pile of SATA or additional M.2 drives.

The main problem is having proper translation of device management features, e.g. SMART diagnostics or similar getting back to the host. But from a performance perspective, it seems reasonable to switch to USB once you are multiplexing drives over the same, limited IO channels from the CPU to expand capacity rather than bandwidth.

Once you get out of this smallest consumer expansion scenario, I think NAS takes over as the most sensible architecture for small office/home office settings.

Other SAN variants really only make sense in datacenter architectures where you are trying to optimize for very well-defined server/storage traffic patterns.

Is there any drawback to going towards USB for multiplexed storage inside a desktop PC or NAS chassis too? It feels like the days of RAID cards are over, given the desire for host-managed, software-defined storage abstractions.

Does SAS still have some benefit here?


I wouldn't trust any USB-attached storage to be reliable enough for anything more than periodic incremental backups and verification scrubs. USB devices disappear from the bus too often for me to want to rely on them for online storage.


OK, I see that is a potential downside. I can actually remember way back when we used to see sporadic disconnects and bus resets for IDE drives in Linux and it would recover and keep going.

I wonder what it would take to get the same behavior out of USB as for other "internal" interconnects, i.e. say this is attached storage and do retry/reconnect instead of deciding any ephemeral disconnect is a "removal event"...?

FWIW, I've actually got a 1 TB Samsung "Pro" NVMe/M.2 drive in an external case, currently attached to a spare Ryzen-based ThinkPad via USB-C. I'm using it as an alternate boot drive to store and play Linux Steam games. It performs quite well; I'd say it's qualitatively like the OEM internal NVMe drive when doing disk-intensive things, but maybe that is bottlenecked by the Linux LUKS full-disk encryption?

Also, this is essentially a docked desktop setup. There's nothing jostling the USB cable to the SSD.


USB, even 3.2, doesn't support DMA mastering and is thus bad for anything requiring performance.

USB4 is just passing PCIe traffic and should be fine, but at that point you are paying >$150 per USB4 hub (because mobos have two at most) and >$50 per M.2 converter.


As @wtallis already said, a lot of external USB stuff is just unreliable.

Right now I am looking over my display at 4 different USB-A hubs and 3 different enclosures that I am not sure what to do with (I likely can't even sell them; they'd go for like 10-20 EUR and deliveries go for 5 EUR, so why bother; I'll likely just dump them at some point). _All_ of them were marketed as 24/7, not needing cooling, etc. _All_ of them could not last two hours of constant hammering, and it was not even a load at 100% of the bus, more like 60-70%. All began disappearing and reappearing every few minutes (I am presuming after the overheating subsided).

Additionally, for my future workstation at least I want everything inside. If I get an [e]ATX motherboard and the PC case for it then it would feel like a half-solution if I then have to stack a few drives or NAS-like enclosures at the side. And yeah I don't have a huge villa. Desk space can become a problem and I don't have cabinets or closets / storerooms either.

SATA SSDs fill a very valid niche to this day: quieter and less power-hungry and smaller NAS-like machines. Sure, not mainstream, I get how giants like Samsung think, but to claim they are no longer desirable tech like many in this thread do is a bit misinformed.


I recognize the value in some kind of internal expansion once you are talking about an ATX or even uATX board and a desktop chassis. I just wonder if the USB protocol can be hardened for this using some appropriate internal cabling. Is it an intrinsic problem with the controllers and protocol, or more related to the cheap external parts aimed at consumers?

Once you get to uATX and larger, this could potentially be via a PCIe adapter card too, right? For an SSD scenario, I think some multiplexer card full of NVMe M.2 slots makes more sense than trying to stick to an HDD array physical form factor. I think this would effectively be a PCIe switch?

I've used LSI MegaRAID cards in the past to add a bunch of ports to a PC. I combined this with a 5-in-3 disk subsystem in a desktop PC. This is where the old 3x 5.25" drive bay space could be occupied by one subsystem with 5x 3.5" HDD hot-swap trays. I even found out how to re-flash such a card to convert it from RAID to a basic SATA/SAS expander for JBOD service, since I wanted to use OS-based software RAID concepts instead.


> I just wonder if the USB protocol can be hardened for this using some appropriate internal cabling

Honestly no idea. Should be doable but with personal computing being attacked every year, I would not hold my breath.

> Once you get to uATX and larger, this could potentially be via a PCIe adapter card too, right?

Sure, but then you have to budget your PCIe lanes. And once you get to a certain scale (a very small one in fact) then you have to consider getting a Threadripper board + CPU, and that increases the expense anywhere from 3x to 8x.

I thought about it lately and honestly it's either a Threadripper workstation with all the huge expenses that entails, or I'd probably just settle for an ITX form factor, cram it with 2-3 huge NVMe SSDs (8TB each), have a really good GPU and quiet cooling... and just expand horizontally if I ever need anything else (and make VERY sure it has at least two USB 4 / Thunderbolt ports that don't gimp the bandwidth to your SSDs or GPU so the expansion would be at 100% capacity).

Meaning that going for a classic PC does not make sense if you want an internally expandable workstation. What's the point of a consumer board + a Ryzen 9950X and a big normal PC case if I can't put more than two old-school HDDs in there? Just to have better airflow? Meh. I can put 2-3 Noctua coolers in an ITX case and it might even be quieter.


> which shows some questionable judgment

Convenience is a hell of a drug.


In general I agree with you, the IO options exposed by Strix Halo are pretty limited, but if we're getting technical you can tunnel PCIe over USB4v2 by the spec in a way that's functionally similar to Thunderbolt 5. That gives you essentially 3 sets of native PCIe4x4 from the chipset and an additional 2 sets tunnelled over USB4v2. TB5 and USB4 controllers are not made equal, so in practice YMMV. Regardless of USB4v2 or TB5, you'll take a minor latency hit.

Strix Halo IO topology: https://www.techpowerup.com/cpu-specs/ryzen-ai-max-395.c3994

Framework's mainboard implements 2 of those PCIe4x4 GPP interfaces as M.2 PHYs, which you can use with a passive adapter to connect a standard PCIe AIC (like a NIC or DPU), and also, interestingly, exposes that 3rd x4 GPP as a standard x4-length PCIe CEM slot, though the system/case isn't compatible with actually installing a standard PCIe add-in card in there without getting hacky with it, especially as it's not an open-ended slot.

You absolutely could slap 1x SSD in there for local storage, and then attach up to 4x RDMA-supporting NICs to a RoCE-enabled switch (or InfiniBand if you're feeling special) to build out a Strix Halo cluster (and you could do similar with Mac Studios, to be fair). You could get really extra by using a DPU/SmartNIC that allows you to boot from an NVMeoF SAN to leverage all 5 sets of PCIe4x4 for connectivity without any local storage, but we're hitting a complexity/cost threshold there that I doubt most people want to cross. Or, if they are willing to cross that threshold, they'd also be looking at other solutions better suited to it that don't require as many workarounds.

Apple's solution is better for a small cluster, both in pure connectivity terms and also with respect to its memory advantages, but Strix Halo is doable. However, in both cases, scaling up beyond 3 or especially 4 nodes you rapidly enter complexity and cost territory that is better served by nodes that are less restrictive, unless you have some very niche reason to use either Macs (especially non-Pro) or Strix Halo specifically.


Just for reference:

Thunderbolt 5's stated "80Gbps" bandwidth comes with some caveats. That's the figure for either DisplayPort bandwidth itself or, more often in practice, realized by combining the data channel (PCIe4x4, ~64Gbps) with the display channels (<=80Gbps if used in concert with the data channel), and potentially it can also do a unidirectional 120Gbps for some display output scenarios.

If Apple's silicon follows the spec, that means you're most likely limited to PCIe4x4 (~64Gbps) of bandwidth per TB port, with a slight latency hit due to the controller. That latency hit is ItDepends(TM), but if you're not using any other I/O on that controller/cable (such as DisplayPort), it's likely to be less than 15% overhead vs. native on average; depending on drivers, firmware, configuration, use case, cable length, how Apple implemented TB5, etc., exact figures vary. And just like how a 60FPS average doesn't mean every frame is exactly 1/60th of a second long, it's entirely possible that individual packets or niche scenarios could see significantly more latency/overhead.
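To make that gap concrete, here is a minimal sketch of the arithmetic (assuming the data channel really is capped at PCIe 4.0 x4, per the spec reading above; exact figures depend on the controller implementation):

    # Rough numbers behind the "80Gbps" Thunderbolt 5 headline vs. what the
    # data channel can actually carry when it is PCIe 4.0 x4 tunnelled over it.
    headline_gbps = 80.0                      # marketing figure (data + display combined)
    pcie4_lane_gbps = 16.0 * 128 / 130        # 16 GT/s per lane, 128b/130b encoding
    data_channel_gbps = 4 * pcie4_lane_gbps   # ~63 Gbit/s ceiling for the PCIe tunnel

    print(f"Headline link rate:  {headline_gbps:.0f} Gbit/s (~{headline_gbps / 8:.1f} GB/s)")
    print(f"PCIe 4.0 x4 tunnel: ~{data_channel_gbps:.0f} Gbit/s (~{data_channel_gbps / 8:.1f} GB/s)")
    # On top of that ceiling sits the tunnelling/latency overhead discussed
    # above, so sustained storage or RDMA throughput will land lower still.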

As a point of reference, Nvidia RTX Pro (formerly known as Quadro) workstation cards of the Ada generation and older, along with most modern consumer graphics cards, are PCIe 4 (or less, depending on how old we're talking), and the new RTX Pro Blackwell cards are PCIe 5. Though comparing a Mac Studio M4 Max, for example, to an Nvidia GPU is akin to comparing apples to green oranges.

However, I mention the GPUs not just to recognize the 800lb AI-compute gorilla in the room, but also because, while it's possible to pool a pair of 24GB VRAM GPUs to achieve a 48GB VRAM pool between them (be it through a shared PCIe bus or over NVLink), the performance does not scale linearly due to PCIe's/NVLink's limitations, to say nothing of the software, configuration, and optimization side of things also being a challenge to realizing max throughput in practice.

This is just as true for a pair of TB5-equipped Macs with 128GB of memory each: using TB5 to achieve a 256GB pool will take a substantial performance hit compared to an otherwise equivalent Mac with 256GB (the capacities chosen are arbitrary, to illustrate the point). The exact penalty really depends on the use case and how sensitive it is to the latency overhead of using TB5, as well as the bandwidth limitation.
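A crude way to see why the pooled configuration takes that hit (a sketch with an assumed local memory-bandwidth figure; Apple's unified-memory numbers vary by chip, so treat the ratio as order-of-magnitude only):

    # Order-of-magnitude gap between local unified memory and a TB5/PCIe4x4 link.
    local_mem_gbs = 500.0                       # assumed local bandwidth in GB/s; check your chip's spec
    tb5_data_gbs = (4 * 16.0 * 128 / 130) / 8   # PCIe 4.0 x4 tunnel ceiling, ~7.9 GB/s

    print(f"Local memory:    ~{local_mem_gbs:.0f} GB/s")
    print(f"TB5 PCIe tunnel: ~{tb5_data_gbs:.1f} GB/s")
    print(f"Ratio:           ~{local_mem_gbs / tb5_data_gbs:.0f}x")
    # Any weights or KV-cache traffic that has to cross the link pays that
    # bandwidth (plus latency) penalty compared to staying in one machine.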

It's also worth noting that with RDMA solutions (no matter the specifics), it's entirely possible to see worse performance than a single machine if you haven't properly optimized and configured things. This is not hating on the technology, but a warning from experience for people who may have never dabbled: don't expect things to just "2x" (or even beat 1x performance) simply by stringing a cable between two devices.

All that said, I'm glad to see this from Apple. Long overdue in my opinion, as I doubt we'll see them implement an optical network port with anywhere near that bandwidth or RoCEv2 support, much less expose a native (not via TB) PCIe port on anything that's a non-Pro model.

EDIT: Note, many Mac SKUs have multiple TB5 ports, but it's unclear to me what the underlying architecture/topology is there, so I can't speculate on what kind of overhead or total capacity any given device supports when attempting to use multiple TB links for more bandwidth/parallelism. If anyone's got an SoC diagram or similar reference data that actually shows how the TB controller(s) are uplinked to the rest of the SoC, I could go into more depth there. I'm not an Apple silicon/macOS expert. I do, however, have lots of experience with RDMA/RoCE/IB clusters, NVMeoF deployments, SXM/NVLink'd devices, and generally engineering low-latency/high-performance network fabrics for distributed compute and storage (primarily on the infrastructure/hardware/ops side rather than the software side), so this is my general wheelhouse; but Apple has been a relative blind spot for me because their ecosystem generally lacks features/support for things like this.

