Hacker News | ipnon's comments

I used to work in healthtech. Health information that can be used to identify a person is regulated in America under the Health Insurance Portability and Accountability Act (HIPAA). These regulations are much stricter than the free-for-all that constitutes usage of information at companies dependent on ad networks. They are strict and enforceable: a healthcare company will be fined for failing to protect HIPAA-covered data. OpenAI isn't a healthcare provider yet, but I'm guessing this is the framework they're basing their data retention and protection practices on for this new app.

Modal is great for infrequent batches.

Chinese diplomacy doesn't work very well because they don't have power projection, so they can't attack the Netherlands. They don't control global finance, so they can't sanction anyone; there is hardly any Dutch money in China. And they have no moral high ground because they constantly perform industrial espionage and corporate nationalization on friends and foes alike. So all of this just makes me laugh.

I don't feel like I ever learned how to code. The first program I wrote was in Flash, it moved a little triangle spaceship around a small frame and shot lasers. I had no idea what was going on. Later in college I wrote a lot of Python, like Django and simple algorithms. Still had no idea what was going on. After college I wrote lots of JavaScript targeting Web APIs and DOMs and strange frameworks. Still confused. We were serving on the order of 100,000,000 people a day, kind of just made it up as I went along. Now I write Elixir, and still only have a fuzzy idea of what I'm doing, but eventually it starts to work with the features and reliability I want.

I suppose all of this is to say I still feel like I'm learning! If I ever feel like I have finished the learning process it will probably be on my death bed!


$1bn seems low but $100bn seems high.

The university pipeline seems to me totally broken as a way to gain employment. It's still effective for prestige. You should stay in school as long as you're still climbing the world university rankings, but once you start falling down this ladder leave and join industry. You will get paid more and do more interesting and valuable work.

As far as I know, the data says otherwise: A college degree leads to much higher lifetime income.

> do more interesting and valuable work

It depends what you find interesting. Research is very interesting to a lot of people.


Pretty sure they’re talking about graduate degrees and academia as an occupation, not getting a bachelor’s in order to join the white collar workforce.

> they’re talking about graduate degrees and academia as an occupation

PhDs have the lowest unemployment rate of any education bracket, and roughly match the earnings of professional-degree holders (e.g. MBAs) [1].

[1] https://www.bls.gov/careeroutlook/2025/data-on-display/educa...


Yes, but that's not the relevant datum, because of selection effects. The relevant question is how well employed the person who had a choice to do a Ph.D. is compared to the counterfactual person who made the opposite choice.

As an example, an Ivy graduate makes more than a state school graduate on average, but there was a study showing that those offered Ivy admission who decided to go to a state school anyway earned just as much (that study setup has its own selection bias issues, but hopefully it gives an idea of what I mean).
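The selection effect can be made concrete with a toy simulation (all numbers here are hypothetical, not from any study): if ability drives both the choice to get a degree and later earnings, the raw earnings gap between the two groups overstates the degree's causal effect.

```python
import random

random.seed(0)

# Toy model (hypothetical numbers): 'ability' drives both the choice to
# pursue a degree and later earnings, so the raw gap between group means
# overstates the causal effect of the degree itself.
ability = [random.gauss(0, 1) for _ in range(100_000)]
degree = [a for a in ability if a > 0.5]       # high-ability people self-select in
no_degree = [a for a in ability if a <= 0.5]

TRUE_EFFECT = 5  # assumed causal earnings bump from the degree, in $k/yr

def earnings(a, has_degree):
    return 50 + 10 * a + (TRUE_EFFECT if has_degree else 0)

observed_gap = (sum(earnings(a, True) for a in degree) / len(degree)
                - sum(earnings(a, False) for a in no_degree) / len(no_degree))
print(round(observed_gap, 1))  # much larger than TRUE_EFFECT
```

The naive comparison attributes the whole gap to the degree, even though most of it comes from who chooses to enroll.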


> because of selection effects

We're literally measuring a selection effect: that of pursuing a graduate degree.

> there was a study showing that those offered Ivy admission but deciding to go to a state school made just as much

Source?

I'm not rejecting the hypothesis that this is a measurement error. But it's been observed across multiple countries for several generations. The burden of proof is on the hot take that graduate degrees in general are a bad economic bet. (Note: I don't have a PhD. I went to a state school. So your hypothesis is tempting to believe, hence my scepticism.)


> graduate degrees and academia as an occupation

Yes, that's what I meant by 'doing research': people really have a deep passion for knowledge, for being at its frontier and generating new knowledge.


Good luck with this!

De Bello Gallico is much more intimidating than it is difficult. Caesar wrote using pretty simple language. Once you learn the main geographic and military terms it's a fun read!

GPU programming is definitely not beginner friendly. The learning curve is much steeper than for most open source projects. To learn basic Python you need to know about functions, loops, and variables, but to write a useful CUDA kernel you need maybe an order of magnitude more concepts. It's just not worth the time to cater to people who won't RTFM; the README would be twice as long and redundant to the library's target audience.


That's the whole problem. I had to "R" multiple "FMs" before one of them bothered to define the acronym.

Stop carrying water for poor documentation practice.


It's kind of like if the Django README explained how SQL works, the structure of HTTP requests, best practices for HTML, and so on. If you don't know what MLIR is, you might not be the target audience for this library. Nvidia in general doesn't prioritize developer experience as much as companies like Meta do for open source projects like React.

HTTP and HTML are very common acronyms; nobody should be getting out of high school these days without knowing them, and if they somehow managed to do so, they're darned sure not reading HN. Even SQL is pretty hard to avoid if you've been in an IT-adjacent industry for a while.

However, MLIR is a highly-specialized term. The problem with failing to define a term like that is that I don't know up front if I'm the target audience for the article. I had to Google it, and when I did that, all I found at first were yet more articles that failed to define it.

Wikipedia gets the job done, but these days, Wikipedia is often a long way down the Google search results list. I think they downranked it when they started force-feeding AI answers (which also didn't help).


Use the AI prompt to pinprick learn.

Just say to the AI, "Explain THIS".


HN: "Learning is good"

Just say to the AI, "Explain THIS".

Also HN: "Not like that"


ChatGPT told me MLIR stands for "Modern Life Is Rubbish"

YMMV

This is a great example of how basic napkin math eliminates many classes of errors.
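For illustration, this is the kind of napkin math meant (all the numbers below are made up): a quick order-of-magnitude check on whether a single server can plausibly handle a service's load, done before writing any code.

```python
# Hypothetical napkin math: order-of-magnitude load estimate for a service.
users_per_day = 1_000_000
requests_per_user = 50
seconds_per_day = 86_400

avg_rps = users_per_day * requests_per_user / seconds_per_day
peak_rps = avg_rps * 10  # assume a 10x peak-to-average factor

print(f"avg ~{avg_rps:.0f} req/s, peak ~{peak_rps:.0f} req/s")
```

If the answer lands orders of magnitude away from what one box can serve, the design is wrong before a line of real code exists.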

And we will most certainly see alien code. Speedrunning already shows a pattern of "ahuman" behavior, where a learner optimized for a well-defined system loses the implicit beauty of the system that draws humans to it. Before RL models became feasible for speedrunning, top runs were apexes of human performance: the tightest possible lines, the utmost precision. But RL speedruns use strategies like nonstop frame-perfect tricks that no hands could execute, and they lose a great deal of beauty in this manner, at least to some people.

Perhaps the greatest lesson we can derive from this example is that the improvements are still marginal compared to top players. A casual player might finish a track in 1:10, the human world record might be 1:00, and the RL record might be 0:50. So we see real yet undeniably marginal improvements in performance.
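With the made-up times above, the relative gains can be checked directly; the RL gain over the record is on the same order as the record's gain over a casual run, which is the sense in which it is "marginal":

```python
# Hypothetical times in seconds: casual run (1:10), human record (1:00), RL (0:50).
casual, wr, rl = 70.0, 60.0, 50.0

human_gain = (casual - wr) / casual  # record vs. casual run
rl_gain = (wr - rl) / wr             # RL vs. human record

print(f"record vs casual: {human_gain:.1%}, RL vs record: {rl_gain:.1%}")
```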

I suppose soon enough we will have experimental evidence for all these ideas!


