The dream never dies, possibly because people remember when class time was supplanted by a movie. Anyone remember "I Am Joe's Heart"? Those movies showed that you could just sit and watch passively like TV, and you'd learn quite a bit, with professional diagrams and animations to help.
Yet your comment is true. Perhaps the difference is that science is inherently interesting because nature is confined to things that are consistent and make sense, while the latest security model for version 3.0 of this-or-that web service protocol, vs. version 2.0, is basically arbitrary and resists effective visual diagramming. Learning software (not computer science) is an exercise in memorizing things that aren't inherently interesting.
According to demos, AI coding tools are letting neophytes instantly create working apps and websites from mere descriptions of what they want. According to devs, they're 10x as productive because time-consuming tasks like writing unit tests, code reviews, and refactoring and clean-up get condensed. So we're to assume that in an age when the typical App Store offers a million apps we'll never be interested in, that number will soon be a billion.
In comes Wanderfugl: a travel tool I will never need, where just figuring out what it does took more time than I wanted to spend on it. Now with AI, there will be several shiny new travel apps like Wanderfugl for you to learn and choose from literally every time you go on another vacation.
Wanderfugl may be wonderful, and an achievement. But the reaction of this Seattleite is "What's the point anymore?" This is why I am uninterested in the AI coding trend. It's just a part of a lot of new stuff I don't need.
Example: the new Yahoo! Mail AI summaries helpfully added to the top of each email. Thanks, now I get to read each email twice! With the original text now placed in a variable location on the screen.
Unfortunately it's the coders who are most excited to put themselves out of business with incredible code-generation facilities. The techies that remain employed will be the feature vibers with 6-figure salaries supplied by the efforts of the now-unemployed programmers. The cycle will thus continue.
Reliving the days when the possibilities were endless and we weren't already captured by an entrenched computing path is important. 50 years ago, every marketer intuited that a home computer would be used for storing recipes. It never happened. Why not? (Reasons aren't hard to come up with, but the process of doing so draws our imagination toward what computer interfaces could have been and should still be.)
> "[...] every marketer intuited that a home computer would be used for storing recipes. It never happened."
Storing recipes "never happened"? Rubbish! Even famous cook Casey Ryback used his Apple Newton to store recipes, as evidenced in the 1995 documentary Under Siege 2 [1].
Yeah, easy to say, but that's because they're the elite. They have a Newton; do you? I don't! Time for a Newton 2. I mean, they're doing an iPod sock 2, so why not a Newton 2?
I would love to see a new Newton with the same spirit of innovation but current tech. Current phones are so boring. No innovation, just slow evolution.
It really was way ahead of its time. I remember the handwriting recognition being excellent for the time, too. Meanwhile, Palm forced its users to write each letter one at a time in a tiny box, and required a specific stroke sequence for each letter too.
The Newton had a modem module you could plug in, and third parties had written web browsers for it. It was basically the first smartphone, just without the phone.
Trying to imagine that level of innovation, but starting from present day tech, is very interesting.
I had the MessagePad 100 and a MessagePad 120. My handwriting improved, and its recognition also improved. It was brilliant. I stored shopping lists and recipes on it. Although a lot of fun was made of the handwriting recognition, it was surprisingly good, and got better with use.
Hey, I store recipes on my home computer! Having a portable handheld terminal that can view the recipes makes it much more practical than it would have been in the 80s.
I keep my bread recipes in Google Keep on my phone. It's extremely useful, since the phone takes up much less room on the counter than the laptop does.
It didn't? Who knows how many copies of Americas_test_kitchen.pdf are floating around out there, or how many recipes are in Apple Notes or Google Keep. Sure, you might just Google for "banana bread recipe" and get lost on a tangent about technology, and the smartphone isn't the personal computer of yore, but recipes existing in a digital format has happened.
I think in the context of the GP's comment, 'never' means it never (or hardly ever) happened on the products it was expected to happen on (home computers, as understood circa late 70s/early 80s). Yes, it has happened on very different devices decades later.
Spot on. The home computer never became accessible from the kitchen, and the storage system most anyone uses for recipes, if not paper, is the web or some other internet-accessible source. (Don't know for sure but I'd bet photos of recipes found online, viewed on a phone, is the most common.)
RE ".... a home computer would be used for storing recipes...."
No doubt, some home computers where used for this purpose, However, (QUICKLY) much more interesting applications where discovered, for example games and educational applications, business applications, engineering applications including spreadsheets ... Look at old software catalogs of software around 1980 (say) .. to verify this range of available applications or CD application archive CDs .....
Some are Word documents that I view on my phone (via OneDrive).
My turkey recipe is a PDF that I print out two copies of when I prepare the turkey. My hands get messy (and unsanitary) when I follow it, so I don't want to handle my phone.
(In case you're curious: When I called up a BBS for the first time as a teenager, I didn't realize that I had to make up a name for myself. My GWBasic manual was sitting in front of me, so I just called myself GWBasic and really liked the name. At the time, GWBasic was very obsolete, so most people didn't get the reference.)
I always pick them up. Every penny buys enough pasta to keep you alive for another 15 minutes. So in case I ever go broke, I've staved off my eventual starvation by 15 minutes.
It was always just an estimate but today I verified it with a chatbot before I made any claim about the time. Caught it in a mistake too, which I untangled by reminding it that food calories are actually kilocalories.
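(For what it's worth, the back-of-envelope version looks something like the sketch below, where every input, from pasta price to calorie density to daily burn rate, is my own rough assumption rather than anything from the thread.)

    # Back-of-envelope check; all inputs are rough assumptions:
    # how long does a penny's worth of dry pasta keep you going?
    PASTA_PRICE_PER_LB = 1.00   # USD, cheap store-brand dry pasta
    PASTA_KCAL_PER_LB = 1600    # dry pasta is roughly 3.5 kcal per gram
    BURN_KCAL_PER_DAY = 2000    # typical adult daily energy use

    penny_kcal = (0.01 / PASTA_PRICE_PER_LB) * PASTA_KCAL_PER_LB  # kcal per penny
    burn_per_min = BURN_KCAL_PER_DAY / (24 * 60)                  # ~1.4 kcal/min
    minutes = penny_kcal / burn_per_min

    print(f"One penny buys ~{penny_kcal:.0f} kcal, or ~{minutes:.0f} minutes of life")
    # With these numbers: ~16 kcal, ~12 minutes; cheaper pasta gets you closer to 15.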
Enlightened take. For similar reasons I often say that going meta and fussing about your own happiness--literally basing your happiness on whether you are happy--is a doom spiral. If you're asking yourself "Am I happy?" I can give you the answer: No.
The post asked a hypothetical question about human motivation. "Why would I..." It came as a question but presumed that the answer was so obvious that the point would be clear without an explicit answer: with such a policy, people won't bother creating the products that lead to unicorns.
My peeve: electric cars that make noise to make sure that blind people around them can hear them like they hear gas-powered cars--except they make it 10x as loud as any gas-powered car.
I love the basic point. Timing-based association is fundamental to thinking, across species. How does the bunny know that you're stalking it? Because your eyes move when it moves. I had no idea that LLMs missed all this. Plus the political reference is priceless.
Glad the little political wink landed with at least one reader!
You’re right: Stripping away all ambient context is both a bug and a feature. It lets us rebuild “senses” one at a time—clean interfaces instead of the tangled wiring in our own heads.
Pauses are the first step, but I’m eager to experiment with other low‑bandwidth signals:
• where the user is (desk vs. train)
• weather/mood cues (“rainy Sunday coding”)
• typing vs. speech (and maybe sentiment from voice)
• upcoming calendar deadlines
If you could give an LLM just one extra sense, what would you pick—and why?
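(For concreteness, here is a minimal Python sketch of what folding a couple of those signals into a prompt might look like. The signal names and the rendering are purely hypothetical, not anything I have actually built.)

    # Hypothetical sketch: render a few ambient "senses" into a prompt preamble.
    # The signal names (location, weather, input_mode, next_deadline) are
    # illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AmbientContext:
        location: Optional[str] = None       # e.g. "desk" or "train"
        weather: Optional[str] = None        # e.g. "rainy Sunday"
        input_mode: Optional[str] = None     # "typing" or "speech"
        next_deadline: Optional[str] = None  # e.g. "demo in 2 hours"

    def build_system_prompt(ctx: AmbientContext) -> str:
        """Turn whichever signals are present into a short system-prompt preamble."""
        parts = ["You are a coding assistant."]
        if ctx.location:
            parts.append(f"The user is currently at: {ctx.location}.")
        if ctx.weather:
            parts.append(f"Ambient conditions: {ctx.weather}.")
        if ctx.input_mode:
            parts.append(f"The user is communicating by {ctx.input_mode}.")
        if ctx.next_deadline:
            parts.append(f"Upcoming deadline: {ctx.next_deadline}.")
        return " ".join(parts)

    print(build_system_prompt(AmbientContext(location="train",
                                             weather="rainy Sunday coding",
                                             input_mode="speech")))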