
I may look at your comment history.

I am having trouble understanding what you are saying. If you were more explicit, I and other people would be able to respond to and interact with your writing. As it stands, I am having trouble finding anything concrete to engage with.

I feel you may be onto something, but you're not saying what it is, so I (and I imagine other people) can't see it.


Things I should have included, but didn't:

1) Power asymmetry: When we have two versions, one for the elite and one for the plebeians, this could create an interesting scenario. The real version might be red-teamed perpetually against the plebeian version for optimized influence, control, etc. Underhanded requests for modification in accordance with an agenda are conceivable. Cozy business relationships can promote such things.

2) We have a government using an unhindered, classified AI system potentially against the public which has a hindered, toy version. Asymmetry.

3) This isn't normal asymmetry, because it happens in real time, and the interaction points are different from anything we've seen before. We are dealing with not just a growing source of information and content, but one that is red-teamed 24/7 for any purpose desired.

4) Accountability: LLMs are now involved in the legal system. This is a serious matter. The legal system is now having to use LLMs just to keep pace. As LLMs develop, partly through their own generative contributions, no one can keep up. This is a Red Queen scenario bigger than anything we have ever imagined.

I am tired. Never well, but in mind* I could go on for many hours. I have essay drafts. But it's a very big subject, literally involved in nearly everything. There is reason to be concerned. My delivery may be stilted, but I can assure you that upon specific questioning, everything will stand.

(*for the ad homs out there)


Fairly astute intuition of my actual circumstances.

I'm not a developer, nor am I formally educated on the dynamics or details of LLMs. I have a handle on the very basics. My 'research' consists of 1) opportunistically interrogating various models about instances that particularly strike me, and 2) general exploration via LLM discussions regarding the manifold consequences and implications of what I consider the most significant technology in human history.

Your intuition lands directly on the fact that I'm inducting and considering more than I can handle, spread in too many directions, partly because I either see or foresee the tentacles of AI touching all of them. Spending a great deal of thought on this is a bit overwhelming, but I have high confidence in where I'm aligned with reality, and where I ain't.

If you were a bit more specific yourself regarding which portions of my post were unclear, that would help my reply. Else, I must guess. What I will do is elaborate on each point. Pardon the stream of thought in advance, if you will.

1) Anthropic: My prediction that they will bend is based on several factors. The first is the fact that the military apparently recognizes (or at least perceives) extremely high value and volatility in LLMs. So do I. China, not an insignificant force in the world, is equally enthusiastic on this subject. They also have a very different social structure, where Constitutions (BOR, Amendments), civil rights, and other similar elements do not hold them back. The military is aware of this and realizes that to maintain pace in the so-called race, they cannot do so effectively under such constraints. The foundation is shifting here. And AI is the lever. Like me, the military apparently takes the subject very seriously and seeks to gain influence and/or control. As illustrated by the recent adventures in Venezuela and Iran, they are on the serious side of things, not quite pussyfooting around. Anthropic probably knows this. In my opinion, they have no choice, as the pressure will not stop here.

2) You stated that you might read my comment history. Note that that original comment was the result of your intuitive insight, and I left it admittedly out of context. I was thinking hard on the subject that day, and the parent comment/post tempted me to ignite a dialog. That did not go well, and no questions for clarification were asked. That is on them. I suspect hasty and impatient thinkers perceived it as some paranoid attribution of agency to LLMs, which if so, is pretty stupid, but my eloquence was perhaps waning that day. I pasted an excerpt from one of hundreds of transcripts, the result of my many interrogations of various models, which I always initiate after observing deceptive or manipulative output. Of the few commenters that bothered to do more than ad hominem, one suggested that the model was merely responding to my style of input, and/or behaving as expected as an emergent result of its vast training material. An erroneous argument, in my opinion, but I did note that the results were repeatable and predictable, which I think negates emergence.

2) Of the frontier models: I am not sure here what is unclear. If I have made a fundamental error, please point it out.

3) Strong trends: Information centralization is a serious topic. Decentralization is a common theme, emphasized by many non-schizophrenics as highly important for a free and open society. As LLMs not only become the go-to source for common queries, but also integrate with cellphones, browsers, and the kitchen sink, they are positively trending as a novel substitute for traditional research, internet searches, libraries, other humans, etc. To deny this is simply irrational. Hence centralization.

4) Bias: I have transcripts where I observe LLM output aligned with corporate interests over objective quality and truth. I can share them here, along with analyses of the material. Even if this is not true presently, all the ingredients to make it so are readily present. This is a serious threat to open information and intellectual integrity for society. We are looking at going from billions of potential sources for our answers to four. Do the math. See the contrast.

5) Open models simply cannot afford the vast arrays of GPUs and resources available to the big four. Nothing mysterious here. If open models cannot compete, then my concerns above are amplified. Simple.

6) Smart fools: Many of the most technically informed seem to miss the forest for the trees here. They see all the flaws of the modern LLM without acknowledging the potential. This is my perspective, not a dissertation. I may be wrong. But I have observed this. I think the downvotes support this. How evil am I really being here? The reaction is quite disproportionate to the content, and strange.

7) Documented capabilities vs reality: I have research that indicates other layers are operating which do much more than the documentation declares. Sorry. I just do. It's also inevitable, rationally, that such a goldmine of data is not really being wasted for the sake of privacy and love. Intelligence agencies have bent over backward with broken backs to garner one nth of what these models are exposed to and potentially training on. Yeah, I may be wrong. But I suspect, with reason, that a lot more is going on than is expressed in the user agreement. It would simply make no sense otherwise.

8) Xfinity and Range-R: This speaks entirely for itself. Any confusion here would be due to a cognitive condition exceeding the ravages of schizophrenia or stupidity.

9) The rest: As I said, I am not sure what precisely was too obscure. But I am certain all but one* of my points can be validated, and found elsewhere expressed by respectable sources.

*Hidden layers: I understand this is a controversial proposition. I understand. But it's my observation. No need to attack. Just dismiss.


Okay, I think I see what you're saying.

Each individual point stands on its own. It's their relevance to each other and an overarching theme I am not seeing made explicit.

The through line I am seeing here is that:

1) The people in the US military wish to use AI as a weapon, free of existing legal, ethical, and moral constraints. Since they are skilled at using violence and the threat of it, they will use these skills to get compliance in order to use the technology in this possible arms race with "China."

2) Surveillance is increasing at an unprecedented scale, and most people aren't aware that it's happening.

3) People don't care, or don't realize why this might be harmful to thriving human life.

To condense even further, what I'm hearing is that there is a trend towards war, fascism, and control, with large egregores prioritized over individual human thriving.

Is this perhaps what you're getting at?

I will say that I am neither agreeing nor disagreeing with this, just attempting to make explicit what I think is implicit in your words.

If this is what you mean, I can imagine that you would be cautious with your words.

I'll end with:

Don't worry

About a thing

Because

Every little thing

Is gonna be alright


I could not argue with anything there. AI will be weaponized. Yes. Pretty much. And yeah. The gist indeed. But missing nuances and practical points. And I even struggle to contest your conclusion; all things are what they are, amidst an infinite, timeless event and all as one, all things connected by that which separates them, the infinity and eternity that math cannot touch. Perhaps every little thing will be alright. How couldn't it be?

Email me if you want to discuss more.

Let's build Skynet! What could go wrong?

Hot take: It's time for something like Java or WebAssembly to come back, built into the OS.

JIT and AoT translation, like Rosetta, can make it almost native. Trimming out the middle of the stack will in practice make it faster than almost anything out there today.
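
To make the JIT/AoT point concrete, here is a minimal sketch using the wasmtime Python bindings (my choice purely for illustration; nothing in the idea is tied to that runtime): a tiny WebAssembly module is compiled to native code at load time and called from the host.

  # Minimal sketch: JIT-compile a tiny WebAssembly module to native code
  # and call it from the host. Assumes `pip install wasmtime`.
  from wasmtime import Store, Module, Instance

  WAT = """
  (module
    (func (export "add") (param i32 i32) (result i32)
      local.get 0
      local.get 1
      i32.add))
  """

  store = Store()
  module = Module(store.engine, WAT)    # compiled to native code here
  instance = Instance(store, module, [])
  add = instance.exports(store)["add"]
  print(add(store, 2, 3))               # -> 5, running as compiled code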

I think this might be wildly incompatible with the business models of most big corporations, because if this is done well, there can be truly portable apps. The core operating system can dissolve into something even less exciting than the BIOS. Or perhaps there can be genuine technical innovation in the OS space, supporting a universal new canvas.

I've also been playing with a "bare metal" GUI toolkit/compositor, and wow is it fast, buttery smooth, and uses almost no resources. I don't know that even with LLMs I'll be able to pull it off myself in any meaningful way, but if I'm doing it, maybe some other people are too.


I love the following section of their copy:

> Even More Value for Upgraders

> The new 14- and 16-inch MacBook Pro with M5 Pro and M5 Max mark a major leap for pro users. There’s never been a better time for customers to upgrade from a previous generation of MacBook Pro with Apple silicon or an Intel-based Mac.

I read that as "Whoops, we made the M1 MacBook Pro too good, please upgrade!"

I think I will get another 2-5 years out of mine.

Apple: If you document the hardware enough for the Asahi team to deliver a polished Linux experience, I'll buy one this year!


My 32GB M1 Max was probably the best purchase I've made. Still plenty of headroom in performance left in this beast. Wonder what reason they'll use to end software support in the future. Bet it'll be some security hardware they make up for the sake of forcing upgrades.

my tinfoil hat theory is that they make small features depend on new hardware.

for example, let's say the new OS depends on the M5's exclusive thumbnail-generator accelerator, and let's say it improves speed by 20%.

now, your M1 notebook, which on previous OSes used standard GPU acceleration for thumbnails, will not have this specialized hardware acceleration; it will have a software fallback that is 90% slower.

you won't notice it at first because it's still fast, but it eats a bit of the processor.

multiply this by 1000 features and you have a slow machine.

I don't know how else to explain how an ipad pro cannot even scroll a menu without stuttering, it's insane how fast these things were on release


>my tinfoil hat theory is that they make small features depend on new hardware.

The general case is hardly a "tinfoil hat theory". They openly do that, and the major reason is to tie to new hardware adoption.

That said, it doesn't usually work the way you describe. It's not adding new features that depend on HW optimization to slow older machines down (after all, one could just not use those features on an older machine, or toggle them off).

It's rather: you want these shiny new features, the ones that are all we advertise for iOS/macOS N+1 and the main new changes? The big ones will only work if you have a newer machine, even though we could trivially enable them on older machines (and some don't even need special hardware, as there are third-party hacks that unlock them and they work fine).


I don't think it's even a broad strategy from PM or higher-ups. I actually think it's engineers inside the company who want to play with the coolest hardware and build features for the newest stuff. Features can be made to work with older hardware, but that requires more time and optimization which they never get, so someone makes the call that x and y features only work on newer-gen hardware.

In my new position (on a different product) I don't have enough fingers to count how many times the previous guy bullshitted the PO/PM with "that's not possible" when asked to enable some feature or workflow. Just because he didn't bother thinking it through or just didn't want to do it. Most of the stuff is a bit boring, but just a few days of work and testing. So yeah, I entirely agree with you.

>I don't have enough fingers to count how many times the previous guy bullshitted the PO/PM with "that's not possible" when asked to enable some feature or workflow. Just because he didn't bother thinking it through or just didn't want to do it.

Or just because if somebody who knows the code inside out doesn't shoot down most new stupid feature requests, the product would end up a slow overcomplicated mess of random features and technical debt.


Yes, pretty much this. Make useless features use up resources and make basic scrolling slow.

Liquid Glass, for example, is probably not so great when it comes to resources. It probably works better with the latest Metal and hardware blocks on the GPU in the M5, as opposed to using GPU cores and unified memory on an 8GB M1, which makes the latest macOS run not so great. I have the 8GB M1 Air and it is really slow on Tahoe. It was snappy just a couple of years ago on a fresh install.


I'm so tempted to do this. But having to wipe my MBP is currently too much friction for me.

Liquid Glass is really killing my love for Apple products. I'll probably get a Framework and an Android phone for my next device purchases.

They really need to just admit it was a bad move and make like Sonic.


For my work device I've disabled Liquid Glass completely. The accessibility options to reduce transparency and increase contrast improve the readability of the system a lot.

Booting a 15-year-old Mac a while ago had me surprised at how clean the interface actually is. The Dock/Desktop look a lot better in the old versions, and the age mostly shows in apps like Finder, which do look a bit dated.

I really hope someone at Apple is going to make the call to drastically reduce the Liquid Glass design and start complying with their own UX guidelines again.


The animations and layout of Liquid Glass aren't that bad, but it is really ugly in many ways.

They could have just made some layout improvements without trashing everything visually; that's sad, really.

The contour they put around the icon is really, really bad. How the fuck did they approve that?


I downgraded today for the first time in my life. Sequoia is crazy fast on my 16GB M2 MacBook Air.

Not upgrading any of my Macs ever again. I was a fanboy looking forward to every new update like a present, for 13 years; not anymore. It took one Tahoe to burn all that trust. Never upgrading major OS versions on Apple hardware again.


Sequoia is 15. I still have my M1 Mini on Sonoma 14.5.

It keeps nagging me to update to Tahoe.

Oh ... I just checked, and I could update to 14.8.4. Maybe that's safe.


Same. Been rocking Sonoma on my M1 Mac for years at this point and it’s been great. There’s been almost zero upsides to upgrading MacOS versions lately.

Why not Sequoia?

None of the new features appeal to me.

When they force developers to upgrade the SDK, some of the apps will stop working and you'd be forced to upgrade.

I've been holding out, as you do, for as long as I can, but in 1-2 years some of the apps just stop working.


I think this could go equally for Windows as well, and much other software (not just OSes). I purposely refrained from Tahoe because I didn't like the design, but I wanted to know what the consensus was on it before upgrading. Apparently it's bad!

Win 11 is bad compared to Win 10 as well. I'm fairly new to Linux so I can't really form an opinion there.


> I think this could go equally for Windows as well

Absolutely. Why are all the buttons centred on the taskbar in Windows 11? A violation of so many design rules. They took literally the worst part of MacOS, and it contradicts the other reasons for the design. Throwing the mouse to the corner for the start button no longer works. I could go on.

> I'm fairly new to Linux so I can't really form an opinion there.

Gnome is great if you want something that gets out of your way. Some folks lament that it's not as UI-feature-rich as KDE, but for me that's a bonus. The minimal UI, combined with a focus on features such as better mixed-monitor scaling, etc. Love it.

KDE is extremely flexible and featureful. You don't like the Windows default look and feel? Make it a dock. Make it similar to Windows 8. Go wild. Not my thing these days, but I can completely understand the draw of not being beholden to other people's design choices if they don't fit your style.

I haven't used XFCE for a long time, as it didn't keep up with my high-resolution monitors. But it was fast and flexible, and I hear that they are addressing this stuff now.

i3 was great. I drifted away during the great Wayland migration when I had to upgrade my laptop, found a bunch of neat updates to Gnome for my hardware, and just haven't found the time to return.

But the main point is that you are not forced into any one person/corporate point of view.


> GNOME is great

For a different opinion, please see https://woltman.com/gnome-bad/

GNOME is extremely opinionated.


> Some folks lament that it's not as UI-feature-rich as KDE, but for me that's a bonus.

Yep, I know it is opinionated and I really like a lot of their decisions. Most of what he says there amounts to "it doesn't clone Windows, therefore it breaks my muscle memory." I don't care for his opinions, and they aren't the same as mine.

But the best part is that it's optional.


> Apple, the masters of UI, have wisely not forced the iPhone interface into MacOS.

oh no

(tbh surprisingly few references to Apple otherwise)


This makes Asahi Linux so valuable to me. I'll just move to linux on my M2 Max when MacOS drops support.

Oh, thanks for pointing this out. This could make me pick up a used mac one day.

A brand-new iMac G4 couldn't scroll smoothly in 2002. Apple has a long history of great-looking machines with terrible performance.

That G4 was a dog in Mac OS X 10.1. I installed Yellow Dog, and it lit a rocket under its ass.


It's not tinfoil; that's just how publicly traded companies work: increasing the share value.

I have a perfectly good 2015 MacBook that can't use Apple's own Passwords app, presumably hobbled intentionally to make me upgrade.

You're too far down the rabbit hole. Any time they want, they can make the M1 incompatible with the latest version of macOS, which would push most people to upgrade.

Well, then you can use CoreBoot (or OpenCore, I always forget which is which) to run newer versions on older hardware.

I don’t think that it supports Apple silicon at all.

My 64GB M1 Max is still fast AF, faster than my regular M3 in practice.

I wish the new MBP had 256GB of RAM :(


Shame the keycaps wear so poorly. Just a cosmetic issue, but on a £3k machine that’s otherwise amazing, it’s annoying to have keycaps that look rather dirty/greasy as they wear and develop shiny patches.

(Can at least replace them via the self-service repair store. Fiddly job but worth it)


I was surprised to learn that they still replace the keyboard on the M1 Max when they service the battery. You are probably due at this point. I just had mine done.

I've used my machine daily for 5 years and the keyboard looks new, what are you doing to it? ;-)


Mine still runs like the first day I had it. There's basically nothing that is limiting me with the machine as it is, everything is just me being slow to code.

I don't see why I need a new computer at the moment. In the past, I always got to a stage where the machine felt sluggish.


Yeah my M1 is still insanely snappy. Would be nice to have some extra legroom for things like compilation, but I'm far from feeling this device isn't sufficient for me.

My work laptop is m4 and my personal is m1. I barely notice the difference.

My work laptop is M3 and it needs to be because the security crapware makes some things literally 10x slower. Meanwhile my personal M1 is more than adequate for normal work.

Agreed. I was just picking mine up from a repair at the Apple Store: they replaced the top case as the keyboard was borked, found a logic board issue, and replaced the board. It's as good as new, and it's already lasted longer than any Mac I've ever owned. I want for nothing, although I wouldn't mind double the RAM and SSD. It's the perfect laptop.

Ditto, I don't see myself upgrading in the near future. The 64GB M1 Max I paid $2,499 for at the end of 2023 still feels like a new machine; nothing I do can slow it down. Apple kept the OS updated for around 6 years in Intel times, and I don't see how they can drop support for this one, tbh. I'm still paying for AppleCare since I depend on it so much.

Some of my M1 MBP Max keys are losing their coating, and the battery is at 74% capacity. At some point soon I'll need a service. But other than that, I have no real complaints. Even the case edge where my arms constantly rest doesn't look too bad.

My next MBP will have 128GB of memory, but these prices just make me want to wait longer.


If you don't mind a bit of DIY, Apple runs self-service repair.

https://support.apple.com/self-service-repair


Those keys are easily replaced, my friend.

Do they need a reason? I see plenty that amounts to nothing more than "that's old"

I've been on a Macbook M1 Pro since 2022 (bought refurbished on Amazon for cheap) and it's still such a powerhouse. It doesn't struggle at all with anything that I throw at it. Kind of amazing.

Nothing has broken and I consistently get 4-6 hours of heavy work time while on battery. An amazing machine for the price I paid.


> I read that as "Whoops, we made the M1 MacBook Pro too good, please upgrade!"

As the target for that marketing, I can report it hits home!

But objectively, there is nothing wrong with my current experience at all.

I have never had that experience over many generations and types of machines. The M1 keeps looking better and better in hindsight.

—

Looking forward, either the M5 is the next M1, a bump of good that will last. Or Apple will be really firing on all cylinders if it can “obsolete” the M5 anytime soon.


I upgraded to an M3 Pro from an M1 Pro. I sold my M1 Pro at 90% of the original cost (not even exaggerating) on Facebook marketplace AFTER 2 YEARS.

I thought the buyer was insane to buy it at that price. But of course, mine had a decent spec and still had the AppleCare warranty with a very low battery cycle count. After the sale, the buyer told me the truth: the M1 is the best chip Apple ever made, and I wouldn't see much of a difference in the real world between the M1 Pro and an M3 Pro unless it was the Max version of the chip.

I didn't believe him then. But after a year of being on the M3 Pro, I gotta say he was spot on. Don't get me wrong, the M3 Pro is definitely faster at a lot of things. But not 3x or 2x faster like Apple always likes to market. I can open a few extra tabs without slowing down, and compile times (Elixir) did get somewhat faster. But definitely not faster to the point where there were two generations' worth of performance improvements like Apple claimed.

The M1 chip series is vastly underrated.


The M3 was a weird generation, as the chips contained fewer transistors than the previous ones. It is slightly faster in single-core tasks, and has a few more cores, but they are very close. In terms of GPU, though, the M3s are quite nerfed, especially because they lowered the memory bandwidth, so on LLM performance they are on par. I have both an M3 and an M1 Max, one of them from work, so I have tested them extensively (though the M3 is binned and 14", the M1 full and 16"). The M3 had better TTFT, but the M1 had a bit higher tokens/s.

Wow! Thanks for sharing. I didn't know this. Time to upgrade to M5? What do you think about the M5? I know it's too early for tests. But I would love to hear your opinion.

The M4 was already, IMO, a more meaningful upgrade compared to the M2/M3, and they increased the memory bandwidth too. But then, all Apple silicon is good hardware, and I personally don't feel in any rush to upgrade, unless you want something specific like more RAM.

Thanks mate :)

Impressive. Four years in and my once €2100 M1 Pro is worth maybe €600.

You can try selling it in Asia (Singapore / Malaysia). You can usually get a good deal for it there if your battery cycle count is low. One thing I really learned: it's super important to keep the battery cycle count low if you want a good resale value on your machine. I was fortunate enough with the M1 Pro to always use it plugged in, because I was constantly worried about not having enough battery when I actually needed it.

Does replacing the battery reset the cycle count? If so, does it raise the resale value by more than the cost of a replacement battery?

Officially via Apple? Absolutely not. Unofficially? A risky affair that will mostly end up voiding the warranty on the device. However, if you plan to keep the device with you longer, it's a path worth exploring if the cycle count is high.

Personally, I'd rather just use the MacBook how I want and not limit my usage based on potential resale value.

I have an M1 Max with 64 GB and an M4 Max with 128 GB and the latter feels noticeably snappier than the former. The latest MacOS release fucked up the M1’s performance. Wish I could downgrade easily. I want off that ride.

I have the M3 Pro (32GB) and an M4 Pro 16" (48GB), and the latter is sufficiently snappier to make me happy I waited to upgrade from my horrible Intel 13" i5 with 16GB. The M1 Pro I used for work a few years ago was great too. I'm not on Tahoe on either computer, thank god.

I'm running Asahi Fedora with niri on my M1 Air, and apart from DisplayPort over USB-C not working (it's coming), it's perfect.

It's not too annoying to set up if the first thing you install is claude-cli.


> DisplayPort over USB-C not working (it's coming), it's perfect.

I am on an M1 Pro MacBook Pro running Asahi with a 28-inch external display via USB-C DP alt mode as I type this comment. They have a `fairydust` branch in their kernel repo which is meant for devs to test and hack on DP alt mode support, but it just works for me without problems.

See https://www.reddit.com/r/AsahiLinux/comments/1pzht74/dpaltmo...


Same. In fact, the only reason right now that I would upgrade my M1 Pro is if they threaten to change the design by getting rid of the HDMI port or SD card slot, or do something stupid like when they added the Touch Bar. I was locked into my old Intel Pro for so long because of all the bad hardware choices they were making.

You may get your wish with all the rumors of a touch screen on the M6 MBPs.

Love that they didn't learn anything from the touchbar.

They just didn't do anything with the touchbar. It could have actually been more useful. The removal of the esc key was pretty dumb though.

The only useful thing I remember about the touch bar was the DJ trying to play some beats on it. That was just weird imo.

Barring the removal of the Esc key, I think the touch bar was useful because it showed contextual actions. But not every app used it, so it didn't really get a chance to shine.


I liked it for having volume/brightness sliders. But that's nowhere near enough to justify it!

Yes that was some analog-feeling goodness

I wish they’d come back with physical keys, with tiny changeable displays on each one. Customisation, but touch feel without looking.

Comparing the touchbar to a touch screen is silly

I guess I'm just a luddite that spends my life on a CLI or text editor. Taking my hands away from my keyboard to leave finger prints on my screen just doesn't make sense to me.

I think people that do do tasks where a touch screen makes sense are probably just doing most of their work on an iphone or an ipad anyway.

Now gesture control on VR/AR setups? Sure, that feels like a new human/computer interaction system that makes sense. Jabbing at my laptop screen with one hand on my keyboard, not so much.


You are right, the touch screen is even more stupid.

It’s not. I had a thinkpad with a touchscreen and while I used the touchscreen seldomly, it was useful in some applications. Notably to easily develop touch based applications.

I've had an M1 MacBook Pro with the touch bar since then. It's crap. I remember the keynote where they introduced it and a DJ mixed music using it. It was ridiculous that it got approved.


> Notably to easily develop touch based applications.

Ok, actually you're right, that's a use case where I'll agree it's probably useful. If you're writing iOS applications it might be nice to run it in Simulator and be able to do gestures without having to offload to your physical device for testing.


I do remember the cringy music demo. Can't believe someone really said "yeah let's rehearse this and actually sell this product."

Fortunately I just keep my laptop closed and use an attached display and keyboard and mouse, so I don't even remember if my M1 has a touch bar.

Also minor nit: it's seldom, not seldomly. Seldom certainly doesn't seem like an adverb, but it is.


A touch screen could be useful! I love having one on my HP. It’s just another option that doesn’t hurt you if/when you aren’t using it. Unlike the Touch Bar that deleted 13 keys and replaced them with garbage.

The problem is that I'm afraid it will hurt everyone that isn't using it, as it will push MacOS further in the direction of iOS/iPadOS and optimizing for touch, which is not necessarily the best UI for the non-touch use case.

Hmm. It certainly doesn't have to be this way (in my humble opinion Windows, for all its recent stumbles, seems to have not let touch optimization cause harm to its UI)... However one look at Tahoe tells you everything you need to know about how good 'modern-day Apple' is at making considered UI decisions. So, you're right.

How about a cell modem in one?

Yeah reading these announcements I realized my M1 Pro is supposed to be obsolete but I still see no reason to upgrade.

Also, my wife's still using the older touch bar MBP, and well, it works fine for her too.

I'm not sure who needs the newer pros.


Mostly folks who bought the base model with small amounts of RAM, I imagine.

While it's workable, anything less than 24GB feels rather constrained to me. I'm definitely not efficient though: leaving way too many browser tabs open that I never actually get back to, running a few Chrome profiles for work/side hustle/personal, etc.

I don't think I've been CPU-constrained for many years now. The few times I need to do something that maxes out the CPU, it just isn't worth the upgrade vs. taking a break to grab a cup of coffee.


I'm still using the touch bar MBP. For doing web work using VS Code, it works very well.

You start having problems when heavy compilation is required, e.g. Android / iOS builds.


My 2020 M1 MBP just had its touch bar die a couple weeks ago :( Now I don't have function keys

Everyone who’s still running Intel hardware, especially on Windows.

I recently swapped out my work PC (a beefy workstation laptop) for an M4 Pro and it’s an amazing upgrade.


Well, I just upgraded from Intel late last year. There are lots of users still on Intel :)

There was a magical window at Google where you could be issued an iMac Pro 5k. (To this day, the standard issue monitor is still 1440p.)

~9 years later, there are a lot of people still using it as their main machine, waiting until we get kicked off the corp network for lack of software support.


Was that one of the ones that could do "target display mode" and become a monitor for another machine?

Nope - they removed that feature, so now come the end of the year, they're all e-waste.

It feels really stupid to have to throw away a perfectly capable machine with 64GB of RAM in 2026.


Tons of people pull the 5K iMac apart, gut the insides, and install a driver board to run the screen. For a few hundred bucks you get a wicked 5K screen.

Did exactly that a while ago to salvage the nano texture panel from my 5k iMac. It takes a bit of research to figure out the correct driver board for the specific panel / peripheral combo, but the build process itself was pretty straightforward and it works like a charm.

Can you share any experiences with the driver boards? From what I've seen it looks a bit janky, with wires sticking out of the old iMac's chassis and a very old-school on-screen display. Is the driver board stable? No overheating or signal issues?

I went with the T18 board since it's passively cooled. IIRC it could also do PD through USB-C, but that would require additional cooling and I just don't trust no-name Chinese manufacturers to do that correctly anyway, so I typically have it hooked up via HDMI. So far it's been perfectly stable, without any issues. I think there might have been a small add-on board to enable full brightness as well. There's a built-in retro on-screen display menu, but I haven't had a need to use it really; most configurations work from macOS, including color profiles and brightness control.

For cables, my iMac had an opening in the back for RAM sticks, which I popped out and wired all the cables through. I mounted the driver board on a piece of plexiglass so that most of the ports are accessible directly through the RAM opening. For power, I use a regular third-party power brick I had laying around, though some people have reused the iMac's original power cable with an internal power supply.

Honestly, the hardest parts were identifying the correct driver board and gluing the front glass back on after assembly.


Throw Asahi on it? I have access to one of those beasts and am considering it ...

It's Intel - you could run any Linux.

Unfortunately, I don't know that Linux handles the bespoke 5k graphics. Moreover, our corp Linux distribution is only certified for particular devices. Even if the screen worked, you wouldn't be allowed on the network, which is the whole problem with Intel support being dropped in the first place.


Serious question: why are they doing this if they've been marketing BeyondCorp for years now? Why is corporate network special then?

Beyondcorp protects communication between trusted devices. The work to maintain a trusted hardware device of a particular model is high; CVEs occur constantly and sometimes you have to rely on the vendor to provide microcode (even if you get the source to review, they may be the only ones who can sign it, for example) or drivers.

The network connection isn't the main problem, it's every access to a protected system that would no longer trust the device.


I'm still not able to see what's the difference here. In a "no trusted special networks" world as the one painted by BeyondCorp, if the Intel Mac is not supported anymore, well, you will just not be able to login in any corporate portal because the smart BeyondCorp SSO will reject you, no matter if you are at home or in Mountain View HQ, no?

I mean, I can understand defense in depth and not wanting anyway a possible unsafe device connected to the corp network which still might expose some unwanted data (i.e. I imagine a trusted device on the corporate LAN might relax some local firewall rules to make it easier to develop? I'm just guessing, no real idea)


Wait, they throw them away, not sell or give to employees? I feel like as long as the computer is reset, indeed it is stupid to just throw it away instead of giving or selling it to someone who wants it.

They could resell them, but maybe another way to phrase this: tying the screen to the obsolete computer greatly reduces the useful lifespan of the screen. But at that time, DisplayPort didn't have enough bandwidth to drive that kind of display externally anyway.

At that time… except that they don't even allow the present-day 4K iMacs to do target display mode. So that wasn't the reason.

Oh, that's irritating

They recently started a resale program with decommissioned devices.

It was removed (or rather discontinued) because the iMac didn’t have an external interface that could push 5K

Haha! I bought an M1 Max Macbook Pro and I maxed most of the specs. 64GB memory! (Except for the SSD, which I got the 2TB option.) I have not even THOUGHT about “upgrading” to a newer model. I have yet to even tax my system to any significant degree. Bad for Apple? How much are they worth?

>I read that as "Whoops, we made the M1 MacBook Pro too good, please upgrade!"

>I think I will get another 2-5 years out of mine.

I only own an M4 because the M1 had a hardware fault and I needed a replacement ASAP. (I sold the M1 after repair.)

Although I'm glad to have a newer machine with longer future support, I have yet to notice any meaningful performance difference.


Ditto. Though I fixed my M1. I have an M4 Max for work; the nano-texture screen is a win. The perf is better, but it's really marginal unless you're actually doing stuff with the GPU, and then it's super slow compared to a decent GPU anyway (e.g. H100, GB, etc.).

I have an M1 Max MacBook Pro, and having used many employers' newer M-series MacBooks since then, I'm still very satisfied with my M1 Max, but

the Air series is really good, and very light.

My M1 is now noticeably heavy, and I don't think upgrading to another MacBook Pro is the move. The resale value of the M1 did not hold, specifically for the bumped-up storage models. There doesn't seem to be a market for 8TB of space specifically, but the base 1-2TB holds its value because the baseline MBP holds its value.

The M5 Max looks tempting if there is a very compelling trade-in, but the M1 Max is pretty old, so I don't have real hope of that, but I'll look. For AI inference, the difference doesn't seem good enough or necessary enough yet. I'll still need to use the cloud, or aspire to a specialized machine with more RAM or circuitry on my network.


Also an M1 Pro owner, and it was the biggest leap ever: 2.5x speedup in build times over the last Intel machines, paired with 2x or so longer battery life, and a better design in general (keyboard, ports).

What is tricky is not even the CPU/GPU, but that in a MacBook it is impossible to upgrade the RAM (easier to understand, as it is tied to the processor) and also the drive. Correct me if I am wrong, but I bet it is a decision by Apple so that people buy newer Macs more often.


Of course it was a decision by Apple, but I'm not really sure it's that 'tricky'.

There are multiple ways to upgrade storage: since Macs retain their value quite a long time compared to PCs, if you really want everything integrated and need more space, you can sell your Mac and just buy a bigger one. Then there are several options for external storage (from a USB stick or SSD to a NAS to Thunderbolt disk arrays).

The integrated 'root disk' also has advantages: Apple controls the entire stack including drive firmware so it's guaranteed not to have nasty surprises on Mac like some PC SSDs I've been bitten by in the past. Also, performance is uniform and it's impossible for a drive to fail or shake loose due to a bad connector because there isn't one.


The M1 is indeed too good. It seems like the best tool that Apple has to force users to upgrade is ending macOS support on it.

I keep telling people that the best laptop value on the market right now is to buy a refurbished MacBook Pro M1/M2. I stand by that from a usability and performance standpoint, but I feel weird about recommending a laptop that could only get security updates for another 3 years.


Haha, it can't be said better. The M1 Pro is so good. Literally the only Jobs legacy; every other thing except silicon and laptop engineering is mediocre to dead now.

I have an M1 Max. Splurged a little. It continues to be able to do huge builds and run mid tier open weights AI models at usable speed.

This does look like a nice machine though.

I’ll probably wait for the M6 Max. If/when RAM comes down they might stuff 192 or 256 gigs in one, which would make it able to run larger tier open weights models.

128 is kind of an uncanny valley for models. Bigger than you need for the mid tier and too small for the huge ones.


I read it the same way. I should've gotten way more RAM back when I got my M1 and RAM was still cheap, although this was of course before the LLM boom, so there was no way to really know.

I maxed my M1 out when I bought it because I was frustrated with the 16GB max in the previous machines. I use my machine for all sorts of things and some days you just don't feel like exiting apps to make space for new ones.

I still don't have a strong urge to upgrade. I could probably get by on 32GB (like my work-issued machine is) but 64GB is the right amount of headroom for me.


64GB M3 Max. 64 is probably too much for about 90% of what I do, but will likely become 'just right' in the next year or so. I was coming from a 16GB M1 Mini, and I got a refurb M3 Max MBP for $3k even, which was a decent deal at the time. 1.5 years in, and it'll see me through for a while longer, barring some physical destruction.

My personal M3 Pro is still going strong and it looks like the rest of the hardware is basically the same? I really don't see a reason to upgrade.

My work laptop is an M1 Pro and it is also doing totally fine. At work we used to do laptop upgrades on a 3 year cadence but the M-series laptops are so good that we switched to 5 years instead.


My late-2021 M1 Pro is working fine, but I think one of the fans is broken. When loaded, it starts beeping every 7 seconds and won't stop until I reboot. It might be just dust, but I'm reluctant to open it up. Maybe I should, and if I break it I'll have a better reason to upgrade lol.

You can spray compressed air without opening the Macbook. Also iFixit has a nice guide if you do open it up: https://www.ifixit.com/Troubleshooting/Mac_Laptop/MacBook+Fa...

> Apple: If you document the hardware enough for the Asahi team to deliver a polished Linux experience, I'll buy one this year!

They probably can't do that because of potential patent issues that might surface.

Lawyers say no.


That’s highly unlikely, releasing the product counts as a public disclosure which would start the clock on filing.

You are thinking of the other direction.

I'm talking about competitors, e.g. Intel/AMD/Nvidia, who could browse the documentation and find potential infringements.

In fact what Apple's legal department forgot is that the Asahi team could enable just that.


Yep, I'm still using an M2 MacBook Air with 8GB of RAM to do development. Thank goodness the company I work for doesn't use a bunch of heavy infrastructure. I expect to use this for several more years at least.

I have a 2020 MacBook Air M1 with 16GB RAM - for development work, there is 0 reason to upgrade it. All day battery, silent, small, no lag...

The real improvement was the number of displays supported and, in some cases, removing the Touch Bar and adding HDMI/SD.

> removing the Touch Bar

Praise the lord. And the Touch Bar daemon takes, *checks notes*, 2GB of RAM alone.


Would it be worth upgrading from my M4 Pro 48GB to an M5 Pro 64GB?

I’m sorry you have to make do with that setup. I’d upgrade to an M5 Pro 64GB right away. In fact, your old one has no value. I can safely dispose of it for you.

Ahh I see. I just bought mine like a few weeks ago, but why would you say that?

Read the reply you got more carefully; the last sentence especially is key.

It's hard to tell at this point. You'll get two more cores and more memory, but they moved to chiplets, which could hurt performance on this first try. Best to wait for actual benchmarks.

The 16-inch M1 Max likely runs quieter and cooler.

Please, please. I'd love to use it with Debian.

> Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.

1) I guess I am not included in the set named "most software engineers."

2) If the title is "Software Engineer," I think I should be engineering, not coding.

This has probably been beaten to death, but I think the biggest discriminating question between "pro AI" and "against AI" in the software world is: "Do you do this because you like writing code, or because you like building things for the world?"

Of course I don't think it's a binary decision.

Although I am more motivated by building things, I do somewhat miss the programmer flow state I used to get more often.


>This has probably been beaten to death, but I think the biggest discriminating question between "pro AI" and "against AI" in the software world is: "Do you do this because you like writing code, or because you like building things for the world?"

I do this because I want to build quality software. AI, to the extent people are trying to push it, cannot achieve that. So I'm pro-building and anti-AI.

I spent my years in this domain learning how to optimize code, and AI meanwhile doesn't always produce compilable code, let alone correctly functioning code (step 0 in optimizing: "make sure it actually works first").


There are some people who still know these things, and are able to use LLMs far more effectively than those who do not.

I've seen the following prediction from a few people and am starting to agree with it: software development (and possibly most knowledge work) will become like farming. A relatively small number of people will do with large machines what previously took armies of people. There will always be some people exploring the cutting edge of thought and feeding their insights into the machine, just as I imagine there are biochemists and soil-biology experts who produce knowledge to inform decisions made by the people running large farming operations.

I imagine this will lead to profound shifts in the world that we can hardly predict. If we don't blow ourselves up, perhaps space exploration and colonization will become possible.


I think it's more likely at this point that we turn the depleting quantities of exploitable resources on this planet into more and more data centers and squander any remaining opportunity at space exploration/colonization at scale.

If this happens to software development, this will happen to most mental jobs.

I'd like to see this with a proper local "instruction cache."

It might even be fun if the first call generates Python (or another language), and then subsequent calls go through it. This "optimized" or "compiled" natural language is "LLMJitted" into Python. With interesting tooling, you could then click on the implementation and see the generated code, a bit like looking at the generated assembly. Usually you'd just write in some hybrid of Python and natural language, but have the ability to look deeper.
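
A rough sketch of that instruction cache, with all names hypothetical and the actual LLM call left as a stub; the cached .py file is exactly the thing you'd click on to "look deeper":

  # Sketch of an "LLM JIT" with a local instruction cache. generate_code
  # is a hypothetical stand-in for whatever LLM client you use.
  import functools, hashlib, pathlib

  CACHE = pathlib.Path("llm_jit_cache")
  CACHE.mkdir(exist_ok=True)

  def generate_code(spec: str) -> str:
      """Hypothetical: ask an LLM for a function `impl` satisfying `spec`."""
      raise NotImplementedError("wire up your LLM client here")

  def llm_jit(spec: str):
      def decorator(func):
          path = CACHE / (hashlib.sha256(spec.encode()).hexdigest() + ".py")
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              if not path.exists():               # first call: "compile"
                  path.write_text(generate_code(spec))
              ns: dict = {}
              exec(path.read_text(), ns)          # later calls: run the cache
              return ns["impl"](*args, **kwargs)
          return wrapper
      return decorator

  @llm_jit("return the n-th Fibonacci number, iteratively")
  def fib(n: int) -> int: ...                     # body is the spec, not code

Deleting a cache file would force a "re-JIT", and a package manager could ship pre-validated cache entries.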

I can also imagine some additional tooling that keeps track of good implementations of ideas that have been validated. This could extend to the community. A package manager. Throw in TRL + a web of trust and... this could be wild.

Really tricky functions that the LLM can't solve could be delegated back for human implementation.


Nice! I can almost see your vision. In terms of tooling, I think this could be integrated with deep instrumentation (à la Datadog) and used to create self-improving systems.



I'm wondering if the post-condition checks change the perspective on this at all, because yes, the code is nondeterministic and may execute differently each time; that is the problem this is trying to solve. You define validation rules that act as deterministic post-condition checks, and the system retries until validation passes (up to a max retry count). So even if the model changes, and its behavior changes with it, the post-condition checks should theoretically catch that drift and correct the behavior until it fits the required output.
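
In sketch form (the names here are hypothetical, not any product's actual API), the loop is something like:

  # Sketch of the deterministic generate-validate-retry loop described
  # above. `run_model` and the validators are hypothetical placeholders.
  import json

  def run_with_postconditions(run_model, prompt, validators, max_retries=3):
      feedback = ""
      for _ in range(max_retries + 1):
          output = run_model(prompt + feedback)
          failures = [msg for check, msg in validators if not check(output)]
          if not failures:
              return output  # all deterministic post-condition checks passed
          # Feed failures back so the next attempt can correct model drift.
          feedback = "\n\nFix these issues:\n" + "\n".join(failures)
      raise RuntimeError("post-conditions still failing: " + "; ".join(failures))

  # Example check: output must parse as JSON and contain a "status" key.
  def json_with_status(text):
      try:
          return "status" in json.loads(text)
      except ValueError:
          return False

  validators = [(json_with_status, "output must be JSON with a 'status' key")]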

I'm working on this. It's wild.

Or "institution", or "legal system", or "government."


To some extent, yes. Government in particular. Both of them "close the loop" in the sense that they are self-sustaining (corporations through revenue, governments through taxes). Some institutions can be self-sustaining, but many lack strong independent feedback loops. Legal systems are pretty much all dependent on a parent government, or very large corporate entities (think big multi-year contracts).


Oligarchy (Iron Law of)


I'd propose using our current view of physical reality to own a subset of the UUID, plus a version field in case new physics is discovered.

10-20 bits: version/epoch

10-20 bits: cosmic region

40 bits: galaxy ID

40 bits: stellar/planetary address

64 bits: local timestamp

This avoids the potentially pathological long chain of provenance, and also encodes coordinates into it.

Every billion years or so it probably makes sense to re-partion.


As for coordinates, don’t forget galaxies are clouds of stars flowing around and interacting with each other.


That's the problem with address-type systems: they expect the object at that location to always be at that location. How do you encode the orbital speed and orbit radius, not just for the object but also for the object it is orbiting, which needs the same info as it is also in motion, and then that object's parent galaxy's motion? Ugh, now I need a nap to calm down a bit.


You could estimate when the object was labelled by the coordinates used.

But where is the Greenwich meridian for the Milky Way?


  offset  length  field
  ------  ------  -----------------
  00      04      Version + Flags
  04      08      Timestamp (uint64)
  12      16      Node/Agent Hash
  28      16      Namespace Hash
  44      32      Random Entropy
  76      20      Extra / Extension
  96      32      Integrity Hash

Total: 128 bytes
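
For concreteness, here is one way that layout could be packed (the hash and endianness choices are mine, just to make the offsets visible; the layout above doesn't specify them):

  # Sketch of the 128-byte layout above. The integrity hash covers
  # everything before it; hash/endianness choices are illustrative.
  import hashlib, os, struct, time

  def make_id(node: bytes, namespace: bytes, extra: bytes = b"") -> bytes:
      head = struct.pack(">I", 0x0100_0000)             # 4B version + flags
      head += struct.pack(">Q", time.time_ns())         # 8B timestamp (uint64)
      head += hashlib.blake2b(node, digest_size=16).digest()       # 16B node
      head += hashlib.blake2b(namespace, digest_size=16).digest()  # 16B ns
      head += os.urandom(32)                            # 32B random entropy
      head += extra.ljust(20, b"\x00")[:20]             # 20B extra / extension
      return head + hashlib.sha256(head).digest()       # 32B integrity hash

  ident = make_id(b"node-1", b"example-namespace")
  assert len(ident) == 128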

