Can anyone really blame the students? If I were in their shoes, I probably wouldn't bother studying CS right now. From their perspective, it doesn't really matter whether AI is bullshit in any capacity; it matters whether businesses who are buying the AI hype are going to hire you or not.
Hell, I should probably be studying how to be a carpenter given the level at which companies are pushing vibe coding on their engineers.
I guess it depends on what you ask an LLM to teach you. For certain subjects, I've found them to be a pain in the ass to get right.
For instance, I was hoping that I could use GPT to help me learn to fly a B737-800. This is actually less challenging than people think... if you just want to get in the air and skip all proper procedure and safety checks! If you want to fly a commercial plane like a real pilot, there is a ton of procedure and instruments to understand. There is actually quite a bit of material on this available online via flight crew operations manuals, as well as an old (but still relevant) manual straight from Boeing. So why rely on GPT? It's a bit hard to explain without rambling, but those manuals are designed for pilots with a lot of prior knowledge, not some goofball with X-Plane and a joystick. It would be nice to distill that information down for someone who just wants an idiot's guide to preflight procedure, setting the flight computer, taxiing, taking off, and performing an ILS landing.
Sadly, it turned out I really had to hold the LLM's hand along the way, even when I provided it two PDFs of everything it needed to know, because it would skip many steps and get them out of order, or not be able to correctly specify where a particular instrument or switch was located. It was almost a waste of time, and I actually still have more to do because it's that inefficient.
That said, I still think LLMs can be unreasonably good for learning about very specific subjects so long as you don't blindly believe them. I kinda hate how I have to say that, but I see people all the time believing anything Grok says. :facepalm: GPT has been a big help in learning things about finance, chemistry, and electronics. Not sure I would assume it could create a full-blown course, but who knows. I bet it'd be pretty solid at coming up with exam questions.
Modern society totally devalues anything considered even slightly old. I used to notice it as a real lack of intergenerational knowledge transfer, but it's gotten so bad that it seems like more and more people react with "how do you know so much?" and "why would you do that?" over very basic knowledge that isn't even that old. For all the reading the average person claims to do, they sure don't seem to know very much outside of a 10-year window unless they happen to have studied history in college or whatever.
But I don't necessarily blame said people, at least in the proximal sense. The technological industrial complex continuously refines its understanding of the desire for novelty that's always been there and seeks to exploit it; and they've gotten unreasonably good at that. It doesn't matter if your intellectual property is just as relevant as ever, perhaps more so, if there's some hip new alternative. Udemy and of course social media sites know this, and I think there's a feedback loop that goes beyond mere exploitation of the human psyche, but in the actual training of the human psyche to have blindness towards the past.
The only answer right now, besides hosting your own courses (with hookers and blackjack), might be to periodically recreate your online presence from scratch in order to exploit the algorithm back. If your courses on Udemy aren't seeing the traffic they deserve, close your account, and create a new one... assuming that's feasible and they don't check too hard. With the current state of AI, this may just be a cat and mouse game that can't be sustained.
Yep, you're definitely not wrong. We see it all the time on GitHub. If a project hasn't gotten a new commit in 2 days then the project is claimed dead.
The same thing happens with blogs in general. A post could be popular and ranked highly in 2020, but in 2025 it's not even ranked on a search engine, even if the content is still highly relevant and fully working. It's bad because you could have a 10+ year old site with 500+ posts and nothing old ranks anymore; there's no ranking bonus on new stuff from the snowball effect of previously highly ranked stuff in the same category.
Sites like StackOverflow sometimes show old things from 2017 because there's a bunch of recent comments. For a blog, even if you change the "updated at" date to something new, it doesn't matter and rewriting the post with different words makes no sense because the original content is still accurate.
> If your courses on Udemy aren't seeing the traffic they deserve, close your account, and create a new one... assuming that's feasible and they don't check too hard
Creating a separate account likely wouldn't work, at least not in the US. To get paid you have to fill out tax forms, which have your Social Security number and other personal info tied to you as one human.
> Yep, you're definitely not wrong. We see it all the time on GitHub. If a project hasn't gotten a new commit in 2 days then the project is claimed dead.
There is a difference between being dead and not actively maintained. If a popular FLOSS package isn't touched for many moons, do you think it just means it's done?
I wish the underlying platforms would also consider providing a "done" version now and again, instead of bundling bug/security fixes with new problems. You can rarely just write something and call it done on something like Android. Anything other than a simple crud/webview wrapper will usually need to use permissions of some sort to do useful things, and as soon as you start using anything like that, your code is probably obsolete in a day or two.
There's nuance here. Is it not being maintained because it does what it is supposed to do and there aren't any issues with it? Or is there a growing pile of issues being opened with no activity around them?
> If a project hasn't gotten a new commit in 2 days then the project is claimed dead.
That is certainly true, those projects are effectively dead. They lack security updates, lack integrations with new platforms, lack support for new HW architectures, lack newer privacy guarantees, etc., etc.
I suspect that CVE inflation has poisoned the minds of many developers.
A DB driver may have an issue with unsanitized user input when run against SQLite; you only ever use it with Oracle and sanitize input anyway, yet it still shows up as a 9.1 critical deployment blocker for corporate employees.
Unexploitable CVEs with inflated ratings make using any open source software a pain in the butt at BigCo.
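To make the class of issue concrete, here's a minimal sketch in Python with the standard library's sqlite3 (the table and the payload are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query.
rows = conn.execute(
    f"SELECT id FROM users WHERE name = '{user_input}'"
).fetchall()
print(rows)  # [(1,), (2,)] -- every user leaks

# Safe: a parameterized query treats the payload as plain data.
rows = conn.execute(
    "SELECT id FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named that
```

Whether the vulnerable pattern is actually reachable depends entirely on how the driver is deployed, which is exactly what a raw CVSS number can't tell you.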
Very few projects update dependencies that often, and only very big ones are found with security issues that often.
> lack integrations with new platforms
You don't need a new integration _every 2 days_, not to mention that many projects don't need such integrations at all. Moreover, some popular and actively updated projects lack such integrations despite having a lot of commits.
> lack support for new HW architectures
This is something that many projects get for free. But also, you don't get a new HW architecture every 2 days.
> lack newer privacy guarantees
What more privacy guarantees do I need from projects that don't communicate with external services or store data at all?
People who are earnestly engaging with their point have assumed “two days” was hyperbole so that they can instead respond to the greater idea, yet you have not: you’re stuck on an unserious detail like it’s the lynchpin of their claim.
Certainly that depends on the nature of the software. For instance, I don't expect some header-only library that does what it's supposed to do to ever need updating.
> Certainly that depends on the nature of the software. For instance, I don't expect some header-only library that does what it's supposed to do to ever need updating.
If it's a header-only library in a language such as C++ and the project is not dead, then the very least anyone would expect from it is being updated to support the modern C++ versions.
Also, if the project is actively maintained then there is always a multitude of low-priority issues and features to address.
Being FLOSS also means anyone in the whole world is able to contribute anything. If no one bothers to do so, that is aligned with the fact the project is indeed dead.
> If it's a header-only library in a language such as C++ and the project is not dead, then the very least anyone would expect from it is being updated to support the modern C++ versions.
Did I miss a new C++ version released <2 days ago perhaps?
If you somehow believe this kind of work is done in a couple of days, that's a good way to explain to the world how oblivious you are about the topic you are discussing.
> If you somehow believe this kind of work is done in a couple of days, that's a good way to explain to the world how oblivious you are about the topic you are discussing.
And, in turn, you appear to be oblivious to the point - the release cadence of this best-case scenario still means like a decade between updates to the project.
C++26 was released 4 months ago; it's pointless to update until compilers and deps are updated. So, best case, maybe you'll have complete, bug-tested support in the supported compilers in 2030.
If we're looking at 2035-ish for the next release, we're still only looking at 2040 before you update.
You still have to take into account that updating might not even be necessary. It's not like C++ < C++26 suddenly doesn't work.
> And, in turn, you appear to be oblivious to the point - the release cadence of this best-case scenario still means like a decade between updates to the project.
It doesn't seem you are managing to think the issue all the way through. Even if you believe you can claim that release cadence is a factor, C++26 is the latest release in a process that outputs a new version every three years. Therefore, your argument would lead you to agree that there is a greater need for maintenance as there are more releases still evolving.
> C++26 was released 4months ago; pointless to update it until compilers and deps are updated.
This is a silly argument to make. At best you are trying to argue that you somehow believe maintenance needs aren't as urgent. Except urgency is irrelevant to the discussion, and the whole argument is derived from specious reasoning to begin with.
It sounds like you are fully invested in contrarianism and not invested at all in thinking about the issue you are trying to discuss. This is not the best use of anyone's time.
> I refuted what arguments you tried to put together by pointing the specious reasoning behind them.
You're confusing me with the person at the top of the thread. My response to you was to point out, in a subtle way, that you are attacking people, not arguments.
Read through your posts - the bulk of your "refutation" is basically calling the other person ignorant.
> If it's a headers-only library in a language such as C++, if the project is not dead then the very least anyone would expect from it is being updated to support any of the modern C++ versions.
C++ versions are backward compatible. You don't need to modify code that works just to use recent languages features that you don't need.
There's a Python course on Pluralsight that I loved; it's quite the deep dive into the language and I learned a lot.
One day I was looking for it and couldn't find it - turns out it had been archived and was ranking much lower in search because it was a handful of years old. But apart from some syntactic sugar, the new Python versions don't change the course that much!
I found it: Python Fundamentals + Beyond the Basics + Advanced, all retired. Highly recommended for someone who wants to get proficient in the language. And a big thanks to Robert; these videos really helped me become "the Python expert" at my last 2 companies.
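For instance, here's a toy sketch of the kind of sugar I mean (my own examples, not the course's):

```python
name, score = "Ada", 0.9734

# The older idiom a course would have taught (still works fine today):
print("{} scored {:.1%}".format(name, score))

# Newer sugar: f-strings (3.6) and structural pattern matching (3.10).
# Nice to have, but nothing that invalidates the fundamentals.
print(f"{name} scored {score:.1%}")

match {"kind": "student", "name": name}:
    case {"kind": "student", "name": n}:
        print(f"hello, {n}")
```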
> but it's gotten so bad that it seems like more and more people react with "how do you know so much?"
I had very similar experiences. I had some incredibly wild "successes" in fixing some company systems even though I had just joined the company and I was not familiar with such systems prior to joining.
My "secret" is that I just read the service's documentation (the fine manual) and did what the documentation described.
It's wild how some people nowadays go around and around mindlessly trying stuff that the LLM of the day suggested, without actually learning enough to *reason* about the internals of services and systems.
> Modern society totally devalues anything considered even slightly old.
Mild counterpoint: our profession (all things IT) moves bloody fast.
If I were looking for info on cooking, baking, or knitting, sure... but for IT stuff, I opine many of us seek the latest info because of the breakneck speed this profession is known for.
> Mild counterpoint. Our professions(all things IT) moves bloody fast.
Some areas do, some areas not so much.
I have a colleague that's incredibly strong with databases (we use a mix of MySQL and PostgreSQL) and he's living off the learning he did 20 years ago when he was a junior Oracle consultant.
I live off the learning I did in Linux now that I administer Kubernetes clusters for a living. Once you get past the "cloud native" abstractions (and other BS) it's penguins all the way down, and I get to reuse most of my core Linux competencies I learned 10+ years ago (eg: I do tcpdump in prod, and it's quicker and more effective than many of the modern shiny tools).
> databases (we use a mix of MySQL and PostgreSQL) and he's living off the learning he did 20 years ago when he was a junior Oracle consultant
And there's been a lot of change here, e.g. vector stores, all the different query engine improvements, PostgreSQL I/O improvements, etc., and they all may impact your job. Your optimal query back then might not be optimal now. Living off the old learnings is like taking a 50% discount on your max potential.
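For example, a rough sketch of the kind of thing that simply didn't exist back then; this assumes a PostgreSQL server with the pgvector extension available, psycopg2 for the connection, and a made-up items table:

```python
import psycopg2

conn = psycopg2.connect("dbname=shop")  # hypothetical database
cur = conn.cursor()

# pgvector adds a vector column type plus distance operators
# (<-> is Euclidean distance), so nearest-neighbour search is plain SQL.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute(
    "CREATE TABLE IF NOT EXISTS items (id serial PRIMARY KEY, embedding vector(3))"
)
cur.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[2,2,2]')")

cur.execute(
    "SELECT id FROM items ORDER BY embedding <-> %s::vector LIMIT 1",
    ("[1,2,2]",),
)
print(cur.fetchone())
conn.commit()
```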
> I live off the learning I did in Linux now that I administer Kubernetes clusters for a living.
And these have had changes consistently too, e.g. io_uring and the Gateway API. You can only be in legacy for so long.
> can learn these things pretty fast within few months.
This totally misses the point. You can learn anything pretty fast in some ways. The comment I replied to was about not learning at all: learning something some years ago and letting it sit like interest in a bank without investing further.
True, but sometimes you need a cheap course about some seasonal garbage.
A few years ago I happily spent 12€ on a course about GitLab CI that saved me quite a bit of time reading documentation and got me started within an afternoon. I had no previous experience with GitLab, but I was well versed in git.
Yes. The Autodesk Fusion course on Udemy that I learned 3D-printing design from had a bunch of instructions for UI elements that had moved in the application.
It wasn't a big deal, but I would still have appreciated it if the author had inserted some newly recorded segments or re-recorded some content to make up for it.
> Does it though? I mean I'm still teaching thread-safety and recursion to my interns... a solid foundation is a solid foundation.
I think you are confusing interiorizing some fundamentals with things moving fast. There are languages and frameworks rolling out higher level support for features covering concurrency and parallelism. While you focus on thread-safety, a framework you already use can and often does roll out features that outright eliminate those concerns with idiomatic approaches that are far easier to maintain. Wouldn't you classify that as moving fast?
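A small Python illustration of the shift I mean (stdlib only, not any particular framework): the executor idiom makes the lock-and-shared-list bookkeeping disappear.

```python
from concurrent.futures import ThreadPoolExecutor
import threading

words = ["alpha", "beta", "gamma", "delta"]

# The raw-threads way: shared mutable state guarded by a lock --
# thread-safety is entirely your problem.
results, lock = [], threading.Lock()

def work(w):
    with lock:
        results.append(w.upper())

threads = [threading.Thread(target=work, args=(w,)) for w in words]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The higher-level idiom: the executor owns the threads and collects
# results in order, so there is no shared state for you to get wrong.
with ThreadPoolExecutor() as pool:
    results2 = list(pool.map(str.upper, words))

print(sorted(results), results2)
```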
But are you teaching the basics of programming with 30 year old textbooks? Can you learn the principles of web dev by building like they did 30 years ago? Sure. But it will be a pain in the ass vs using something that is up to date.
See you in ten years! We're a hop, skip and a jump from one-click automated conversion of every legacy Java app to web- and Electron-desktop-compatible code, and then we can just retire Java entirely. In 2025, Java is not the most performant. It does not run in the most places. It is not the easiest to write or reason about. Its advantage over anything else is momentum, and it's losing that too.
React is just a formalization of a UI update pattern that exists in every app ever made, except the ones that are bad.
Source: I've written a lot of Java, and nobody is currently paying enough to make it worth doing again.
I don't know what argument you think you are making. React was released in 2013, whereas AngularJS was released in 2010 and Angular 2+, what we actually call Angular, in 2016.
So your counterexamples of popularity are projects that started out before or around the same time as React but, unlike React, wound down in popularity.
After over a decade, React is not only the most popular framework by far but also the supporting framework for a few of the top 10 frameworks.
So what point did you think you were making? That React managed to become the dominant framework whereas your examples didn't?
JS frameworks and chasing AI fads, perhaps. But fundamentals? Engineering principles? How CPUs work? Linux, networking, x86? Stuff that is decades old still applies.
Some people in tech love relearning the same lessons over and over with new tools, instead of realizing that there's a lot that is transferable, and that new technologies would be better implemented, and sooner, if we understood what had been done to date.
“why is I/O in docker slow, and how would you improve it” is pretty esoteric knowledge now, but would have been considered basic knowledge (for other applications, not specifically just docker) only 12 years ago.
I have had people working with me who don't understand in the slightest how a filesystem works, so taking it a step further is impossible.
When I tune things I am asked how I know, but everything is just built from the basics, and the basics don’t make you feel productive, so they’re always skipped when possible.
12 years ago I certainly did not know why a server's I/O would be slow, short of the physical storage just being slow. I think you might just be overestimating how much stuff people knew, rather than the whole population having forgotten how filesystem and I/O internals work.
you hadn’t heard of RAID, readahead, write-back/write-through, stride or even just the concept of fragmentation?
Even if you didn’t, I doubt you didn’t have someone on staff who did know about these things and would help out randomly with troubleshooting and avoiding footguns.
The people who knew about those things back then know modern infrastructure today. I'm sure if you asked the average web dev 12 years ago what write-back io is they wouldn't have any idea.
Perhaps the only trend is more companies not hiring anyone who specialises in infrastructure and just leaving it as a side task for React devs to look at once every few months.
I knew about RAID and fragmentation, but I haven't had to work with it since I went from tech support to backend, it just never came up so it's easy to forget.
> “why is I/O in docker slow, and how would you improve it” is pretty esoteric knowledge now, but would have been considered basic knowledge (for other applications, not specifically just docker) only 12 years ago.
You could've used Docker for 12 years and never hit it if you used it on Linux and followed sensible practices (mount the data dir from outside so it can be reattached to an upgraded version of the container).
And in fairness to the mobile-device thing of abstracting away file systems: when it comes to discoverability and organizing files or documents, a rigid hierarchy of nested sub-folders is far inferior to a single directory with tagging or other metadata properties you can use to filter, essentially building custom directories on the fly.
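A tiny Python sketch of that idea (tags and filenames invented for illustration): treat tags as a query and materialize a "directory" on demand instead of baking one hierarchy into paths.

```python
# Each file carries a set of tags instead of living in a single folder.
tags = {
    "taxes-2024.pdf": {"finance", "2024", "pdf"},
    "tickets.pdf":    {"travel", "2024", "pdf"},
    "soup-recipe.md": {"cooking", "notes"},
}

def virtual_dir(*wanted):
    """Build a 'folder' on the fly: every file carrying all the given tags."""
    return sorted(f for f, t in tags.items() if set(wanted) <= t)

print(virtual_dir("2024", "pdf"))  # ['taxes-2024.pdf', 'tickets.pdf']
print(virtual_dir("cooking"))      # ['soup-recipe.md']
```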
> “why is I/O in docker slow, and how would you improve it” is pretty esoteric knowledge now, but would have been considered basic knowledge only 12 years ago.
Yes and no. The world has also changed all these years. Why something is slow 10+ years ago might not be today or at least for the same reason. E.g. Docker on Mac especially with Apple silicon has undergone major changes the last few years.
Keeping tech fast, if my worldview holds. One reason I left frontend work was that none of my colleagues seemed to care that we shipped MBs of code to the client. I also tire of APIs with multi-second response times, often because no one seems to bother with database indexes or JOIN optimisation. This should be banal, everyday stuff.
Maybe we have too many layers of abstraction. Or there's just too much work to do now that businesses combine many roles into one?
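On the index point, this really is banal; e.g. with the standard library's sqlite3 (table and column made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"

# Without an index: full table scan.
print(conn.execute(query).fetchall())  # ... 'SCAN orders'

# One line later: index lookup.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
print(conn.execute(query).fetchall())
# ... 'SEARCH orders USING INDEX idx_orders_customer (customer_id=?)'
```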
Re-planning in isolation is not a failure. I think re-planning is a failure when nothing is learned from previous assumptions not being correct, which usually leads to serial re-planning. I've seen this pattern form at practically every company I've worked for, in particular whenever there's a merger or a private equity buy-out. More often than not, this is ironically caused by leadership failing to learn from previous assumptions because they're too busy chasing the shiny new bobble of the week, but the blame still gets diffused across the organization.
> I think re-planning is a failure when nothing is learned from previous assumptions not being correct
This is the exact problem, and why it is so extremely difficult to communicate about the subject.
Another way to say "learn from previous [failures]" is "bureaucracy". We did something wrong last time, so we need a rule about that failure.
It SHOULD NOT be seen as a failure, we SHOULD NOT add bureaucracy, and simple rules CANNOT fix the issue.
What should happen: Line managers and developers should be able to communicate about what they are learning as they do the work of software engineering. Developers should be able to re-plan their approach to problem solutions.
This simply will not work if there is not strong trust between upper management/[the money] and the engineering/execution side of the organization.
----
The meta-problem is that you have to see things go wrong many times in many ways before you really understand what is going on, and by that point ... well everything gets very tiresome.
You may be hanging with the wrong crowds in the sense that your people are out there somewhere and you just haven't found them yet, but your people are still a minority. One would hope that tech would have more genuine and curious people, but I swear most of us are hustlers who bought a shovel for a particular gold rush.
In my experience, you'll have the best luck finding likeminded people at hacker spaces and conferences.
I agree, though I shudder to imagine how cringey the switchover would be. A significant number of students already had poor diction and linguistic skills when I was in college, and recent evidence shows this situation has likely become worse.
Senior software engineer with 13+ years of experience in full stack web development looking for a full-time remote position; excels in improving performance and efficiency in both software and developer workflows.
How is your browsing experience with that stuff? I used to go nuts with anti-tracking measures, but enough of my browsing experience kept breaking that it just didn't feel worth it.
My experience with uMatrix: most sites work right away. Others require fiddling with the matrix of media, scripts, XHR, frames, and the third parties serving them. After a while it's easy to remember which ones must be temporarily enabled and which ones don't need to be. Sites with videos are a little more difficult. Sites with payments require care. I whitelist the minimum set of scripts that make the sites I use often work; there are usually many scripts that can be left out. If everything fails and it's a one-shot site, I start Chrome.
It's fine. Sometimes I get annoyed by websites which require JavaScript to show static text (apparently HTML is too difficult?) or which block me with a 'please unblock challenges.cloudflare.com to proceed' (that second one seriously pisses me off when I see it on, for example, the website of the Belgian railways), but by and large I'm fine with just saying 'if it breaks I don't need it'. But I handle my e-mail with isync, mu, and mu4e; and as far as I understand e-mail tends to be a sticking point for those who care for their digital rights. I also don't have Xitter or Facebook or any of that nonsense.
If there's one thing I don't like, it's the fact that NoScript doesn't integrate with Multi-Account Containers. It would be neat if, instead of having to temporarily allow GitHub JavaScript and re-disable it when I'm done, I could just allow GH JS in a GitHub or Microsoft container and have it enabled only in that container.