This is what I don't really understand. It's a bit difficult to take "wait x months" at face value because I've been hearing it for so long. Wait x months for what? Why hasn't it happened yet?
Things seem to be getting better from December 2022 (chatgpt launch), sure, but is there a ceiling we don't see?
"Self-driving cars" and fusion power also come to mind. With the advent of photography, it was widely believed that drawing and painting would vanish as art forms. Radio would make newspapers obsolete, only to be made obsolete itself by television, and so on. Don't believe the hype.
And your ability to go your own way is only temporary and due to inertia. Today, for a while, you can still buy a vehicle that requires a driver and doesn't look and perform exactly like every other Waymo.
But that's only because self-driving cars are still new and incomplete. It's still the transition period.
I already can't buy the car I want with a manual transmission. There are still a few cars that I could get with one, but the number is both already small and getting smaller every year. And none of those few are the one I want, even though it was available previously.
I already can't buy any new car (let alone the particular one I want) that doesn't have a permanent internet connection, with data collection and remote control by people who don't own the car, even though I pay full cash without even financing. (For now, I can at least break the onboard internet connection after I buy the car without disabling the whole car, but that is just a trivial software change away, in software I don't get to see or edit.)
It's hardly unreasonable to suggest that in some time you won't be able to avoid having a car that drives itself, and may even be legally compelled to let the car drive itself because you can't afford the insurance, the legal risk, or straight-up fines.
And forget customizing or personalizing. That's right out.
Waymos require a highly mapped environment to function safely. Not to take away from what Waymo has accomplished, but it's a far more bounded problem than what the "self-driving" promise has been.
Um... Claude Code has been out less than a YEAR, and the lift in capability in that year has been dramatic.
It does seem probable, based on progress so far, that in 1-2 more model generations there will be little need to hand code in almost any domain. Personally I already don't hand code AT ALL, but there are certainly domains/languages that are underperforming right now.
Right now with the changes this week (Opus 4.6 and "teams mode") it already is another step function up in capability.
Teams mode is probably only good for greenfield or "green module" development, but I'm watching a team of five AIs collaborating and building out an application module by module. This is net new capability for the tool THIS WEEK (yes, I am aware of earlier examples).
I don't understand how people can look at this and then be dismissive of future progress, but human psychology is a rich and non-logical landscape.
Because then you won't be important; the model will be. And then everyone will have to use their model, which is their dream. Why isn't that your nightmare too? Why will you be special if it can just code whatever it needs to code? Then Anthropic can just employ all the programmers that will ever be needed, just to review new skills and modules of code.

It was predicted early on that there would be a need for about six big computers worldwide. Well, now we'll just need six AI shepherds. And then literally everyone else will forget how anything works, because it will be a solved problem. People already treat computers like magic; it will literally become a dark art. And I guess it's fine, what can we do, right? Go with the flow, I guess. "If I don't, someone else will. Maybe I can be one of those six real people at Anthropic."
> ...it's not just about saving costs – it's about saving the planet
There's something that doesn't sit right with me about this statement, and I'm not sure what it is. Are you sure you didn't just join for the money? (edit: cool problems, too)
Probably because "making the world a better place" has been used as a trope so much in the industry that it made it into a TV show [1]. It's fine to be passionate about your job. It's fine to be paid well. You don't need to make us believe that you're Mother Teresa on top of it.
Reminds me of when I was younger and thought of companies like Google and Tesla as a force for good that will create and use technology to make people's lives better. Surely OpenAI and these LLM companies will change the world for the better, right? They wouldn't burn down our planet for short-term monetary gain, right?
I've learned over the years that I was naive and it's a coincidence if the tech giants make people's lives better. That's not their goal.
Could US tech companies stop making the world a better place? Like how Airbnb made housing markets "better" and Facebook made politics "better"? We barely have anything left as regular people while our new feudal lords capture everything they can.
Airbnb made a very constrained market more efficient; the downsides are classic NIMBY factors (which are important, but also nothing has been solved in cities that outlawed Airbnb).
on the other hand Facebook made the internet hate machine more efficient :(
because constrained supply leads to shitty solutions, and people spend their money on land rent instead of on actual service (like making sure there are no hidden fees and cameras)
that said, those are not new problems, and they are not caused (nor exacerbated) by Airbnb (likely the only tangentially significant factor is the decentralization of the hospitality industry thanks to online marketplaces, barriers to building more hotels, and of course people's preference for renting something more unique, or simply something remote or secluded)
I saw this video where this gen z girl was saying she preferred working for boomers who just wanted her to show up on time, get her work done, and maybe stay a little late from time to time. She said it was exhausting working for millennials who wanted her to think her job was "saving the world."
The greatness of human accomplishment has always been measured by size. The bigger, the better. Until now. Nanotech. Smart cars. Small is the new big. In the coming months, Hooli will deliver Nucleus, the most sophisticated compression software platform the world has ever seen. Because if we can make your audio and video files smaller, we can make cancer smaller. And hunger. And AIDS.
For me, it just sounds like a ChatGPT-generated sentence. In particular, it likes to write constructions like "it's not just about X – it's about Y," which sound legit at first but don't really make much sense once you start to think about them.
The HBO “Silicon Valley” version of the “making the world a better place” nonsense. The blog article has fallen for OpenAI's marketing. OpenAI is making the world a worse place by inflating the cost of RAM and even driving RAM chip providers out of the consumer space. Not to mention all the power wasted on compute for all sorts of meaningless tasks. At least with something like Claude I am saving months if not years of engineering effort and resources in a few hours.
Right? Like what an incredibly naive thing to think, that BG is going to contain power consumption lmao. OpenAI is always going to run their hardware hot. If BG frees up compute, a new workload will just fill it.
Sure you might argue "well if they can do more with less they won't need as many data centers." But who is going to believe that a company that can squeeze more money from their investment won't grow?
Tangentially, I am looking forward to learning about the new innovations that come from this problem space. [Self-righteous] BG certainly is exceptional at presenting hard topics in an approachable and digestible manner. And now it seems he has unlimited funds to get creative.
Sam Altman is pro-extinctionist like most of the surveillance capitalist ghouls. He literally invests in mind uploading companies and believes only the rich deserve to "survive" the singularity he believes it is his job to bring about.
Sure, humans going extinct is good for the planet, I guess, but be up front about what you are really supporting.
If you reduce energy consumption of training a new model by 25%, OpenAI will just buy more hardware and try to churn out a new model 25% faster. The total consumption will be exactly the same.
And they're 100% justified to do so, until they hit another bottleneck (when there is literally not that much Nvidia hardware to buy, for example.)
Not only that, every optimization gain makes it more attractive and creates even more demand; i.e., effective energy usage will not decrease or stay the same - it will increase (the Jevons paradox).
It's like with electric cars: making them more efficient doesn't mean less electricity will be used, but more, because they become more attractive and more people switch to them.
There's no gain to be had there at all. Any optimizations that reduce resource usage per output will be gobbled up by just making more output.
OpenAI released an open source model only because they are capped on growth right now by the amount of hardware they have. Improve resource efficiency and you'd better believe they'll just crank up use of said resources until they're capped again.
I imagine there's a lot more to be gained than that via algorithmic improvements. But at least in the short term, the more you cut costs (and prices), the more usage will increase.
If you’re going to hold datacenter operators to blame for the waste associated with non-optimized computation, then it would seem to follow that they get some credit for optimizing.
Interesting disconnect from AI people saying AI is inevitable, but if we’re involved we can mitigate the harshest negative outcomes. Not sure that follows and I don’t think there’s any reason to believe that.
It’s just a kind of lazy fatalistic nihilism. I worry about the world and think that these adult children who were told “no” during the pandemic have let their worst misanthropic tendencies flourish. They are indoctrinated in a belief that “you can just do things” for “you can just do things” sake.
If you trust what the executives of OpenAI and Anthropic say about their respective projects, it's a die roll as to whether or not they will totally destroy the world. A theme of the last 5-10 years has been tech dropping the whitewashing of their reputation and embracing the idea that what they are doing is incredibly sociopathic and still somehow cool (to them, I guess). Guess not everyone got the memo.
Firstly, you would do well to read the guidelines about avoiding snark, and then actually say whatever it is you’re trying to say rather than make insinuations. As is, this response comes across as a very shallow read. It’s hard to get to the root of what you’re actually saying in your post other than it quotes two paragraphs about how it’s not fun to push through the bureaucracy of a large organisation, which - I would agree. Probably most people who’ve worked at a big company would.
So why does that make him a “big shot”? Are you perhaps envious of him?
Why does openAI deserve him or anyone? Hard to say.
LinkedIn has been employing a lot of strange dark patterns recently:
* Overriding scroll speed on Firefox Web. Not sure why.
* Opening a profile on mobile web, then pressing back to go to the last page, takes me to the LinkedIn homepage every time.
* One of their analytics URLs is a randomly generated path on www.linkedin.com, supposedly to make it harder to block. Regex rules in uBlock Origin sufficiently stop this.
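For anyone who hasn't written one before, uBlock Origin treats a filter wrapped in forward slashes as a literal regular expression. A rule along these lines works; note the path pattern and token length here are purely illustrative guesses, not LinkedIn's actual scheme:

```
! Hypothetical uBO static filter: block XHRs to a bare random-token path
! on www.linkedin.com. Tune the character class and length to what you
! actually observe in the network log.
/^https:\/\/www\.linkedin\.com\/[a-zA-Z0-9]{18,24}$/$xhr,domain=linkedin.com
```

The `$xhr,domain=` options keep the regex from being tested against every request on every site, which matters because regex filters are slower than plain pattern filters.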
Giving them the benefit of the doubt here obviously, I know they're in an all out war with the contact database industry. Going from websoup to agents dialing out to rent-a-human services requires different tactics.
- scroll speed - unsure of ulterior motives, but i've seen this even on some foss things. i think some people just think it looks cool/modern/"responsive"/whatever
- back - hijacking it seems fairly common on malicious/dark-pattern sites to try to trap you on them. not sure why, since you can just leave, and it seems like it would obviously piss someone off
- analytics paths - not everyone may know about/how to use regex rules for it or may use something else that doesn't support it (the stripped down ublock for chrome? i don't know if it can or not). sites seem to do this with malicious js code as well, presumably to prevent blocking
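On the scroll-speed point: a hedged sketch of how this is commonly done. I have no idea what LinkedIn's actual code looks like; the function name and scaling factor here are made up, but the pattern (cancel the native wheel event, re-dispatch a scaled smooth scroll) is the usual one:

```javascript
// Pure helper: the "smooth scroll" scaling many sites apply.
// Factor < 1 makes scrolling feel sluggish; > 1 makes it jumpy.
function scaleDelta(delta, factor) {
  return delta * factor;
}

// Guarded so this sketch also loads outside a browser.
if (typeof window !== "undefined") {
  window.addEventListener(
    "wheel",
    (e) => {
      e.preventDefault(); // cancel native scrolling entirely
      window.scrollBy({
        top: scaleDelta(e.deltaY, 0.5),
        behavior: "smooth", // browser animates toward the target
      });
    },
    { passive: false } // required, or preventDefault() is ignored for wheel
  );
}
```

Because the movement comes from `scrollBy` rather than CSS, inspecting scroll-related CSS (as a sibling comment tried) finds nothing; you'd have to look for a `wheel` listener instead.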
I've been wondering why my scroll speed was off on LinkedIn; I inspected scroll-related CSS without finding an answer and figured it was a bug. Anyone know what property does this? I might try to fix it with uBO scripts.
I think they want you to feel disoriented.
Why do they do all this bs and not fix the bug that happens when you insert Unicode U+202E in your name?
I've been having loads of fun with that but it's never been fixed. Anyone tagging me in a comment makes their input right-to-left unless they backspace the tag or insert newline. It also jumbles notification text because your name is concatenated to the notification static text.
You can also create an inverted link but it isn't clickable, just like other unicode links which aren't punycode-encoded on LinkedIn but aren't clickable (on the clients I've tried).
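For anyone who wants to see the mechanics: the character in question is U+202E, the Unicode RIGHT-TO-LEFT OVERRIDE. A quick sketch (the name and notification text are made up; this just demonstrates why concatenation jumbles the rendering):

```javascript
// U+202E (RIGHT-TO-LEFT OVERRIDE) is an invisible bidi control character.
const RLO = "\u202E";

// A display name with a trailing RLO, as described above.
const name = "Alice" + RLO;

// When the server concatenates the name into static notification text,
// the override is never closed (no U+202C POP DIRECTIONAL FORMATTING),
// so a bidi-aware renderer displays everything after it right-to-left.
const notification = name + " mentioned you in a comment";

// The string is unchanged in memory; only the visual ordering flips.
console.log(JSON.stringify(notification));
```

A server-side fix is as simple as stripping or escaping the bidi control range (U+202A–U+202E, U+2066–U+2069) from display names, which is presumably why it's surprising the bug has lived this long.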
It could very much be confirmation bias, but I do feel like most "please use our app" popups appear after a mobile site breaks or refuses to load something
It actually does, or at least did until a few years ago. When you opened the audio mixer (alsamixer or PulseAudio control?) on XFCE, you could still see MS Teams labeled as Skype there. Not sure how it is now, because I only ever use MS Teams isolated in a separate Ungoogled Chromium browser, and have given up on the client for GNU/Linux.
>In an interview with Robert Wright in 2003, Dyson referred to his paper on the search for Dyson spheres as "a little joke" and commented that "you get to be famous only for the things you don't think are serious" [...]
To be fair, he later added this:
>in a later interview with students from The University of Edinburgh in 2018, he referred to the premise of the Dyson sphere as being "correct and uncontroversial".[13] In other interviews, while lamenting the naming of the object, Dyson commented that "the idea was a good one", and referred to his contribution to a paper on disassembling planets as a means of constructing one.
Thanks for pointing out those follow ups. Interesting stuff!
> correct and uncontroversial
From the original quote it is clear he was referring to the idea of aliens being detectable by infrared because they will absorb all of their sun's energy. Later in the same paragraph he says:
> Unfortunately I went on to speculate about possible ways of building a shell, for example by using the mass of Jupiter...
> These remarks about building a shell were only order-of-magnitude estimates, but were misunderstood by journalists and science-fiction writers as describing real objects. The essential idea of an advanced civilization emitting infrared radiation was already published by Olaf Stapledon in his science fiction novel Star Maker in 1937.
So the Dyson Sphere is a rhetorical vehicle to make an order-of-magnitude estimate, not a description of a thing that he thought could physically exist.
Full quote from the video cited before "the idea was a good one":
> science fiction writers got hold of this phrase and imagined it then to be a spherical rigid object. And the aliens would be living on some kind of artificial shell. a rigid structure surrounding a star. which wasn't exactly what I had in mind, but then in any case, that's become then a favorite object of science fiction writers. They call it the Dyson sphere, which was a name I don't altogether approve of, but anyway, I mean that's I'm stuck with it. But the idea was a good one.
Again he explicitly says this "wasn't exactly what I had in mind." This one hedges a bit more and could be interpreted as his saying the idea of a Dyson Sphere is a good one. He may have meant that in the sense of it being a good science fiction idea though, and he subsequently goes on to talk about that.
The Dyson Sphere is good for order-of-magnitude calculations about hypothetical aliens, and also for selling vapourware to the types of people who uncritically think that vapourware is real.
Have you read the paper itself, not just summaries of the idea? It's obvious from the way he wrote it, dripping in sarcasm. Talking about "Malthusian principles" and "Lebensraum", while hand waving away any common sense questions about how the mass of Jupiter would even be smeared into a sphere around the sun, just saying that he can conceive of it and therefore we should spend public money looking for it. He's having a lark.
Also, he literally said it was a joke, and was miffed that he was best known for something he didn't take seriously.
He thought SETI listening for space radio waves was dumb, so he wrote what was essentially a satirical paper saying we should look for heat instead, because "an advanced civilization would be using these shells to capture all the star's energy, so we could only see the heat"
The "dyson sphere" was a made up and entirely unfounded claim, without justification.
It's because they don't have a substantive response to it, so they resort to ad hominems.
I've worked extensively in the AI space and believe it is extremely useful, but these weird claims (even from people I respect a lot) that "something big and mysterious is happening, I just can't show you yet!" set off my alarms.
When sensible questions are met with ad hominems by supporters, it further sets off alarm bells.