prhn's comments | Hacker News

You can still be a programmer and identify with and participate in that group. AI hasn't eliminated programmers or programming, and it never will.

However, my best advice as someone with many distinct interests is to avoid tying any one of these external things to your identity. Not a Buddhist, but I think that's the correct approach.

He sort of comes to this conclusion in the final "So then, who am I?" section. The answer is you are many things and you are nothing. You can live deeply in many groups and circles without making your identity dependent on them.

If you're a programmer, what happens when programming isn't needed anymore?

If you're a runner, what happens if you get injured?

It's always been helpful personally to remind myself that

I am not a programmer. I am a person who programs.

I am not a runner. I am a person who runs.


I align with everything in this post except the excerpt below. I think it's important to be a lot of things and nothing at the same time, and to tie fulfilment to internal metrics rather than externalities.

> AI hasn't eliminated programmers or programming, and it never will.

It might not fully eliminate them tomorrow, but this technology is being pitched as at least displacing a lot of them and probably putting downward pressure on their wages, which is really just as harmful to the profession. AI as it's being pushed is a direct attack on white-collar CS jobs. There will always be winners and losers, but this is a field that will change in many ways in the not-so-distant future because of this technology - and most current CS prospects will probably not be happy with the direction the overall field goes.

Even if you do not personally believe this, you should be concerned all the same because this is the narrative major CEOs are pushing and we know that they can remain crazy longer than we can remain solvent, so to speak.


I don't dispute the shortening of attention spans, which seems to be directly related to the new forms of entertainment young people consume.

However. Films across the generations are very different in terms of how they lay out a narrative. Watch any film before 1980 and you'll start to see a pattern that the pacing and evolution of the narrative is generally very, very slow.

Art is highly contextualized by the period it's created in. I don't really think it's fair to expect people to appreciate art when it's taken completely out of its context.

Lawrence of Arabia, for example. What a brilliant, brilliant film. Beautiful, influential, impressively produced. And really, really boring and slow a lot of the time.

If I were a film professor today, hell even 20 years ago, I would not expect a modern film student to sit through that whole thing. I think it's my job as a professor to understand the context of the period, highlight the influential/important scenes, and get students to focus on those, instead of having them watch 4 hours of slowly paced filmmaking and possibly miss the important stuff.


> If I were a film professor today, hell even 20 years ago, I would not expect a modern film student to sit through that whole thing.

Our local cinematheque has just had a 70mm festival, where they of course screened Lawrence of Arabia. All screenings were sold out. My mom went to see another screening at the same time, and commented on how many young people were going to see Lawrence. The past couple of years there has been a strong uptick[1] here of younger people flocking to see older films.

[1]: https://www.nrk.no/kultur/analog-film-trender-blant-unge-1.1...


> And really, really boring and slow a lot of the time

If you only watch the story-driven scenes in Lawrence of Arabia, and skip the prolonged shots of the desert, you would miss out on feeling the same vastness and heat Lawrence is feeling.

There is a limit to how much a film can make you think or feel. Films that reach the highest limits need "boring" voids in-between the primary scenes. These voids are not to ingest more, but to help digest what has been ingested in previous scenes, with subliminal scenes and silence that let the right thoughts and feelings grow.


> And really, really boring and slow a lot of the time.

It's not boring on a giant display with the original 6-track mix playing just a tad too loud all around you. I've seen it in 70mm at the AFI in Silver Spring, MD; candy for the eyes and ears.

It would likely be boring if played at a quiet volume on a small display. This is because movies are, in part, spectacle. Cirque du Soleil would likely be boring too if viewed very, very far away.


The pacing is irrelevant in this context. For a student, the main point of watching these movies is not entertainment.

Although I will say it's pretty amazing that someone who supposedly has an interest in film would not be able to watch The Conversation or an even slower film like 2001.


No, you're wrong. It's not about the era. Matt Damon talked about this on the Joe Rogan podcast recently. He was asked by Netflix to create a big action sequence in the first 5 minutes so that people on their phones would get hooked into watching the entire movie. He was also asked to restate the plot several times throughout the movie, because people on their phones tend to miss plot details and it helped keep them engaged.

This is not about how movies are paced, it's about the way phones have changed attention spans.


No, he's right; there is definitely a difference in pacing for films throughout the decades.

Much of the content that Netflix produces, however, is not made to be shown in a cinema-like setting - it's something that people put on while doing something else, like TV. So whatever Damon was saying on a podcast makes sense in that context; it's not, however, indicative of a whole generation of movies - there are still plenty of films being made that require full attention for an extended period of time, many of which are also on Netflix. One could argue that there has never been a time in history when more excellent, deep, and complex content was being made.

One other part is that traditional TV (which arguably also never required full attention) has been replaced by new mediums. Personally, I have never owned a TV in my life.

The whole "phone bad" argument is a bit lazy IMO and doesn't take into account the nuance that a serious discussion would require.


There is different pacing throughout film history, but that's not what the original article is about. The original article is about how film students can't sit through movies, and that's because of attention spans and phones.


I think part of the problem, too, is that there is so much slop content now that isn't worth paying 100% attention to, which primes people to use their phone while watching, for example, Netflix. And there is no differentiation between "higher quality, pay attention to me" content that loses value when you don't pay attention, and "quick and dirty, low budget background movie" content where nothing is gained by paying attention to anything beyond the few key moments that carry the basic story line.


> If I were a film professor today, hell even 20 years ago, I would not expect a modern film student to sit through that whole thing.

Sorry, but this to me sounds completely insane. We're not even talking about the general population here, but people who are ostensibly serious about the art and craft of film making. And the bar is being set at literally just watching the movie, and not even some obscure marathon of a film that takes a degree to be appreciated, but a major mass-released picture that has already been enjoyed by countless people.


What seems to be missing here, for me at least, is that I doubt I would have done well being assigned entire films on top of a regular course load's worth of studies.

Paying attention to a film closely enough to emotionally connect with the content, take notes, and synthesize an academic understanding of subtle things like the use of lighting, sound, camera work, etc., while also doing the several hours' worth of homework from my other classes, would be pretty daunting.

Much easier to get the CliffsNotes from the Internet and fake it... though I had CS, math, and Mandarin courses, which were way, way heavier on the homework side of things than most other classes I took, so maybe I'm overthinking it.


I like a lot of long films, but at nearly 4 hours, Lawrence of Arabia is a marathon of a film. I've not seen it; I did order a copy recently, but the order was cancelled, and I missed the Fathom screening for one reason or another. I'll see it eventually; I like long movies and movies involving sand, so it seems like an easy win.

I would think a film studies class might not want to spend so much time on a single film, so maybe several scenes would be more appropriate.


>And really, really boring and slow a lot of the time.

At no point was it "boring"


I guess that might be a modern interpretation. But I do disagree as well. I actually prefer older films because of the pacing, and fortunately live close enough to the TIFF cinema that I can see such films every other week.


We're not talking about random people pulled off the street and asked to watch Lawrence of Arabia. We're talking about film students. So I don't see how your post is relevant at all. It's like excusing poor literature students because your brother-in-law struggled with Moby Dick.


I imagine a film student watching the baptism scene of Michael Corleone and thinking it is boring.


> Watch any film before 1980 and you'll start to see a pattern that the pacing and evolution of the narrative is generally very, very slow.

Star Wars, Enter the Dragon, Game of Death, Mad Max, and many Bond films are fun counterexamples.


In fairness, Lawrence's own book on which the movie is based, Seven Pillars of Wisdom, is a disjointed, rambling, and usually boring book. The high points are really good, but you slog through a lot to get there.


Technically, yes it is still burglary.

It's an odd position to take, that a crime was not committed or the offense isn't as bad if the difficulties of committing the crime have been removed or reduced.


> odd position [...] offense isn't as bad if the difficulties of committing the crime have been removed or reduced

Not really, intent is a part of the crime. If the barrier for crime is extremely small, the crime itself is less egregious.

Planning a robbery is not the same as picking up a wallet on the sidewalk. This is a feature, not a bug.


This. 1000x this.

Yes, it’s still wrong to take things, but the guy should get something like community service teaching white hat techniques. The CEO should be charged with gross negligence, fraud, and any HIPAA/medical records laws he violated - per capita. Meaning he should face 1M+ counts of …


What does "the crime is less egregious" even mean?

Morally, you burglarized a home.

Legally, at least in CA, the charge and sentencing are equivalent.

If someone also commits a murder while burglarizing you could argue the crime is more severe, but my response would be that they've committed two crimes, and the severity of the burglary in isolation is equivalent.


Now, how do we apply that to today’s current events?

Is it still a crime if the roadblocks to commit the crime are removed? Even applauded by some? What happens when the chief of police is telling you to go out and commit said crimes?

Law and order is dictated by the ruling party. What was a crime yesterday may not be a crime today.

So if all you did was turn a key and now you’re a burglar going to prison, when the CEO of the house spent months setting up the perfect crime scene, shouldn’t the CEO at least get an accomplice charge? Insurance fraud starts the same way…


It's a common attitude with people from low-trust societies. "I'm not a scammer - I'm clever. If you don't want us to scam your system why do you make it so easy?"


The Internet is the ultimate low-trust society. Your virtual doorstep is right next to ~8 billion other people's doorsteps. And attributing attacks and enforcing consequences is extremely difficult and rather unusual.

When people from high-trust societies move to a low-trust society, they either adapt to their new environment and take an appropriately defensive posture or they will get robbed, scammed, etc.

Those naïfs from high-trust societies may not be morally at fault, but they must be blamed, because they aren't just putting themselves at risk. They must make at least reasonable efforts to secure the data in their custody.

It's been like this for decades. It's time to let go of our attachment to heaping all the culpability on attackers. Entities holding user data in custody must take the blame when they don't adequately secure that data, because that incentivizes an improved security posture.

And an improved security posture is the only credible path to a future with fewer and smaller data breaches.

See also: https://news.ycombinator.com/item?id=25574200


We can start by stopping the use of posture like you’re squirming in your seat. I’ve heard that term for the last 10 years and never has it been useful. Policy yes, Practice if you must, Mandate absolutely, Governance required.

Using posture is akin to modeling or showing off clothes, the likes of which will never see the streets. Let's all start agreeing that the term is a rug that covers whatever security wants it to be, without checks and balances.

If your posture is having your rear end exposed and up in public then…


It's a generic, albeit somewhat euphemistic term. I agree we could do with some better messaging. Dirty and direct is usually more effective. How about this framing?

The Internet is a dark street in rural India and your dumbass company is a pretty young white woman walking around naked and alone at 2AM. It's not your fault morally if someone rapes you, but objectively you're an idiot if you do not expect it. Now, you getting raped doesn't just hurt you; it primarily hurts people your company stores data about. Those rapists aren't going away, so we need you to take basic precautions against getting raped and we're gonna hold you accountable for doing dumb shit that predictably leads you to getting raped.

> If your posture is having your rear end exposed and up in public then…

Right, that is most companies' current security posture: Naked butt waving in the air. "Improving your security posture" is just a euphemism for "pull your pants up and put your butt down".

> Using posture is akin to modeling or showing off clothes, the likes of which will never see the streets. Let's all start agreeing that the term is a rug that covers whatever security wants it to be, without checks and balances.

No, I will not agree with that; that's ridiculous. "Improve [y]our security posture" is not some magic talisman used to seize unchecked power within an organization. It's basically just the Obama Doctrine brought to computer security: "Don't do stupid shit".


“Improve [y]our security posture” absolutely is without a definition of posture. Does that mean more monitoring? More security team members?

Posture is no replacement for a plan.

Originally it was “how we follow our plan”, but that has since been thrown out the window. Now, posture is a code word for cover.

I don’t mean to vent; it’s just tiring having to deal with varying degrees of posturing where everyone is just haphazardly lying on a couch watching TV.


Welcome to America


Powerful.


This is surprisingly basic knowledge for ending up on the front page.

It’s a good intro, but I’d love to read more about when to know it’s time to replace my synchronous inter-service HTTP requests with a queue. What metrics should I consider, and what are the trade-offs? I’ve learned some answers to this question over time, but these guys are theoretically message queue experts. I’d love to learn about more things to look out for.

There are also different types of queues/exchanges, and this is critical depending on the type of consumer or consumers you have. Should I use direct, fanout, etc.?
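To make the difference concrete, here's a rough sketch using pika (the exchange and queue names are made up purely for illustration). A fanout exchange broadcasts every message to all bound queues; a direct exchange routes each message only to queues bound with a matching routing key.

    import pika

    # Assumes a RabbitMQ broker running locally.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Fanout: every queue bound to this exchange gets a copy of each message.
    channel.exchange_declare(exchange="events", exchange_type="fanout")

    # Direct: messages go only to queues bound with a matching routing key.
    channel.exchange_declare(exchange="tasks", exchange_type="direct")
    channel.queue_declare(queue="email_tasks", durable=True)
    channel.queue_bind(queue="email_tasks", exchange="tasks", routing_key="email")

    channel.basic_publish(exchange="tasks", routing_key="email", body=b"send welcome email")
    connection.close()

Roughly: fanout fits "many consumers all need to see everything", while direct (or topic) fits "route this job to exactly one kind of worker".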

The next interesting question is when should I use a stream instead of a queue, which RabbitMQ also supports.

My advice, having just migrated a set of message queues and streams from AWS (ActiveMQ) to RabbitMQ, is to think long and hard before you add one. They become a black box of sorts and are way harder to debug than simple HTTP requests.

Also, as others have pointed out, there are other important use cases for queues which come way before microservice comms. Async processing to free up servers is one. I’m surprised none of these were mentioned.


> This is surprisingly basic knowledge for ending up on the front page.

Nothing wrong with that! Hacker News has a large audience of all skill levels. Well written explainers are always good to share, even for basic concepts.


In principle, I agree, but “a message queue is… a medium through which data flows from a source system to a destination system” feels like a truism.


For me, I've realized I often cannot possibly learn something if I can't compare it to something prior first.

In this case, as another user mentioned, the decoupling use case is a great one. Instead of two processes/APIs talking directly, having an intermediate "buffer" process/API can save you headaches.


To add to this,

The concept of connascence, not coupling, is what I find more useful for trade-off analysis.

Synchronous connascence means that you only have a single architectural quantum, under Neal Ford's terminology.

As Ford is less religious and more respectful of real-world trade-offs, I find his writings more useful for real-world problems.

I encourage people to check out his books and see if they are useful. It was always hard to mention connascence, as it has a reputation of being ivory-tower architect jargon, but in a distributed-systems world it is very pragmatic.


Agree! In fact, I would appreciate more well-written articles explaining basic concepts on the front page of Hacker News. It is always good to revisit basic concepts, but it is even better to relearn them. I am surprised by how often I realize that my definition of a concept is wrong or just superficial.


Also it's nice to have a set of well-written explainers for when someone asks about a concept.


This has more depth on System V/POSIX IPC, and a YouTube video.

https://www.softprayog.in/programming/interprocess-communica...

Fun fact: IPC was introduced in "Columbus UNIX."

https://en.wikipedia.org/wiki/CB_UNIX


> when to know it's time to replace my synchronous inter-service HTTP requests with a queue

I've found that once a request is inconveniently long for a synchronous client-side call, it's less about performance or metrics and more about reasoning. Some things are queue-shaped, or async-job-shaped. The worker -> main app communication pattern can even remain synchronous HTTP calls or not (callback-based or something), but if you have something with high variance in timing, or something that is really a background task, just kick it off to workers.

I'd also say start simple, and only go to Kafka or some other high-dev-time-overhead solution when you start seeing Redis/Rabbit stop being sufficient. Odds are you can make the simple solution work.
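For what it's worth, the "simple solution" can be as small as a Redis list plus a worker process. A rough sketch with redis-py (the key name and job payload are made up for illustration):

    import json
    import redis  # assumes a local Redis instance and the redis-py client

    r = redis.Redis(host="localhost", port=6379)

    # Producer side: the request handler enqueues a job and returns immediately.
    def enqueue_job(kind, **params):
        r.lpush("jobs", json.dumps({"kind": kind, "params": params}))

    # Worker side: a separate process pulls jobs off the list and does the slow work.
    def worker_loop():
        while True:
            _, raw = r.brpop("jobs")  # blocks until a job is available
            job = json.loads(raw)
            print("processing", job["kind"], job["params"])

    # enqueue_job("thumbnail", video_id=42)

No acks, retries, or dead-lettering here, of course; the point where those start to matter is roughly the point to reach for Rabbit/SQS/streams.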


I think the article would be a little more useful to non-beginners if it included an update on the modern landscape of MQs. Are people still using Apache Kafka, lol?

it is a fine enough article as it is though!


Kafka is a distributed log system. Yes, people use Kafka as a message queue, but it's often the wrong tool for the job; it wasn't designed for that.


> but I’d love to read more about when to know it’s time to replace my synchronous inter-service HTTP requests with a queue. What metrics should I consider, and what are the trade-offs? I’ve learned some answers to this question over time, but these guys are theoretically message queue experts. I’d love to learn about more things to look out for.

Not OP but I have some background on this.

An Erlang loss system is like a set of phone lines. Imagine a special call center where you have N operators, each of whom takes a call, talks for some time (serving the customer), and hangs up. Unlike many call centers, however, they don't keep you in line. Therefore, if all operators are busy, the system hangs up on you and you have to explicitly call again. This is somewhat similar to a server with N threads.

Let's assume N=3.

Under common mathematical assumptions (Poisson arrivals at a constant rate, i.e. exponentially distributed times between arrivals, and exponentially distributed service times) you can define:

1) “traffic intensity” (rho) is the ratio of the arrival rate to the service rate (intuitively, how “heavy” arrivals are with respect to “departures”)

2) the blocking probability is given by the Erlang B formula (written out just below) for parameters N (number of threads) and rho (traffic intensity). Basically, if traffic intensity = 1 (arrival rate = service rate), the blocking probability is 6.25%. If the service rate is twice the arrival rate, this drops to approximately 1%. If the service rate is 1/10 of the arrival rate, the blocking probability is roughly 73%.
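For reference, the standard form of the Erlang B formula (in LaTeX, since plain text mangles it; N is the number of servers/threads, rho the traffic intensity):

    B(N, \rho) = \frac{\rho^N / N!}{\sum_{k=0}^{N} \rho^k / k!}

Plugging in N = 3 and rho = 1 gives (1/6) / (1 + 1 + 1/2 + 1/6) = 0.0625, i.e. the 6.25% above.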

I will try to write down part 2 when I find some time.

EDIT - Adding part 2

So, let's add a buffer. We said we have three threads, right? Let's say the system can handle up to 6 requests before dropping: 1 processed by each thread plus an additional 3 buffered requests. Under the same distribution assumptions, this is known as an M/M/3/6 queue.

Some math crunching under the previous service and arrival rate scenarios:

- if the service rate = the arrival rate, the blocking probability drops to about 0.2%. Of course there is now a non-zero wait probability (close to 9%).

- if the service rate is twice the arrival rate, the blocking probability is 0.006% and there is roughly a 1.5% wait probability.

- if the service rate is 1/10 of the arrival rate, the blocking probability is 70% and the waiting probability is 29%.

This means that a buffer reduces request drops due to busy resources, but also introduces a waiting probability. Pretty obvious. Another obvious thing is that you need additional memory for that queue length. Assuming queue length = 3, and 1 KB messages, you need 3 KB of additional memory.
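If you want to check these numbers yourself, here is a quick sketch of the M/M/c/K calculation (just the standard textbook formula, nothing specific to any particular system):

    from math import factorial

    def mmck_probs(a, c, k):
        # Stationary probabilities of an M/M/c/K queue; a = arrival rate / service rate.
        weights = []
        for n in range(k + 1):
            if n <= c:
                weights.append(a**n / factorial(n))
            else:
                weights.append(a**n / (factorial(c) * c**(n - c)))
        total = sum(weights)
        return [w / total for w in weights]

    for a in (1.0, 0.5, 10.0):       # the three scenarios above
        p = mmck_probs(a, c=3, k=6)
        blocking = p[6]              # arriving request dropped: system is full
        waiting = sum(p[3:6])        # all threads busy, but a buffer slot is free
        print(f"a={a}: blocking={blocking:.3%}, waiting={waiting:.3%}")

The percentages it prints should line up with the figures above, modulo rounding.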

A less obvious thing is that you are adding a new component. Assuming "in series" behavior, i.e. requests cannot be processed when the buffer system is down, this decreases overall availability if the queue is not properly sized. What I mean is that, if the system crashes when more than 4 KB of memory are used by the process, but you allow queue sizes up to 3 (3 KB + 3 KB = 6 KB), availability is not 100%, because in some cases the system accepts more requests than it can actually handle.

An even less obvious thing is that things, in terms of availability, change if you consider server and buffer as having distinct "size" (memory) thresholds. Things get even more complicated if server and buffer are connected by a link which itself doesn't have 100% availability, because you also have to take into account the link unavailability.


I only really ever play one game, so that's not a blocker for me.

I would have switched by now, but film and audio production software, including VSTs, doesn't seem to be well supported on Linux. I'd love to hear from someone who is successfully doing this.


> I only really ever play one game, so that's not a blocker for me.

I play loads of games; it's mainly AAA multiplayer titles that aren't able to run on Linux due to kernel anti-cheat - nearly everything else runs well with minimal effort using Proton via Steam (either installed via Steam or imported as a non-Steam game).


Music production is indeed still a blocker. I used to use Windows for that; I am now on macOS for work and music (much better than Windows in every way! I use an old trashcan Mac Pro with Monterey for my studio computer) and Debian for my personal machines.


I'd say less than .00000001 percent of the world has the same use case as you.


I vibe coded a windows shell extension that renders thumbnails for 10-bit videos. Windows does not do this out of the box.

I also built a Preview Pane Handler for 10-bit videos.

The installers (WIX) were vibe coded as well.

So were the product website and Stripe integration. I created a bespoke license generation system on checkout.

I don’t think I wrote a single line of C++ code, although the WIX installers and website did receive minimal manual adjustments.

Started with Claude but then at some point during development Codex got really good so I used only that.

https://ruptureware.com


Netflix on Apple TV has an issue where, if "Match Content" is off, it will constantly downgrade the video stream to a lower bitrate unnecessarily.

Even after fixing that issue, the video quality is never great compared to other services.


I just launched a 10-Bit Video Thumbnail Provider for Windows.

Windows does not natively support rendering thumbnails for 10-bit videos, which are commonly produced by cameras like the Sony A7IV.

When I started working on a short film the video clips were piling up on my hard drive. Opening them one by one to find what I was looking for was tedious.

I could not find a reputable solution to this problem, so I started a company and built one. I went through the process of EV Certification to have the installer and executable code signed.

I hope to be in the Microsoft Store soon.

I'm also building other utilities with similar purpose.

https://ruptureware.com/thumbprovider


Let's not conflate the two things that were said.

It is absolutely true that companies were rushing to rewrite their code every few years when the new shiny JS library or framework came out. I was there for it. There was a quick transition from [nothing / mootools?] to jQuery to Backbone to React, with a short Angular detour about 13 years ago. If you had experience with the "new" framework you could pretty much get a front-end gig anywhere with little friction. I rewrote many codebases across multiple companies to Backbone during that time per the request of engineering management.

Now, is React underappreciated? In the past 10 years or so I've started to see a pattern of lack of appreciation for what it brings to the table and the problems it solved. It is used near universally because it was such a drastic improvement over previous options that it was instantly adopted. But as we know, adoption does not mean appreciation.

> React is used near universally, despite there being alternatives that are better in almost every way.

Good example of under-appreciation.


Having worked in both over the years, the main technical thing React had going for it over Vue, in my humble opinion, was much better TypeScript support. Otherwise they are so similar that it comes down to personal preference.

However, none of the TypeScript projects (front end and back end) I've worked on (unless I was there when they started) used strict mode, so the TypeScript support was effectively wasted.


No, I was also around when React was new, moving to it from tangles of jQuery and Backbone. I absolutely know React brought several lasting innovations, in particular the component model, and I do appreciate that step change in front-end development. But other frameworks have taken those ideas and iterated on them to make them more performant, less removed from the platform, and generally nicer to work with. That is where we are today.

I agree that there was a period when many organizations did rewrite their apps from scratch, many of them switching to React, but I think very few did it ”every couple of years”, and I think very few are doing it at all today (at least not because of hype - of course there might always be other reasons to do a big rewrite). We should not confuse excitement about new technologies with widespread adoption, especially not in replacing existing code in working codebases.


I read parent's comment as an assertion that the current "fast-moving JavaScript world" expects everyone to rewrite their app. Personally I've never seen this, but since React became popular ~13+ years ago, I struggle to believe this has actually been true for others in any meaningful way.


MooTools is still around! "Copyright © 2006-2025". I don't know anyone who uses it, but glad to see it's still going.

https://mootools.net/core


MooTools also features in the infamous SmooshGate:

https://developer.chrome.com/blog/smooshgate



You can also just grab the same piece from Substack: https://thenoosphere.substack.com/p/just-how-many-more-succe...

