Hacker News | MasterScrat's comments

> Have you ever noticed that the food graphics in Super Smash Bros. and Kirby Air Riders is flat “billboarded” stock images of food?

I have not... A video showing what it's talking about would have been a good addition to this article.



Is there a reliable service that plots hourly price per GPU per cloud through time?


I’ve been working on computeprices.com as a side project for the last year to do just that.


Is there a graph view that charts all GPU prices on one graph?

If not, I think the landing page should be exactly that, with checkbox filters for all GPUs on the left that you can easily toggle on/off to show/hide each GPU's line on the graph.


I was not expecting prices to be going down. It makes sense as the hardware gets older, but I always assumed prices must be inflated given how much competition there is to build new datacenters.


Yes, I was surprised too. I think it's mostly newer models pushing older ones down. There's also a lot of competitive pressure in this market, and the GPU shortage is not really a thing anymore.


This is cool!

Would it be possible to add a "Best Value" / "best average performance per dollar" type of thing?


Good idea! I'll noodle on how to define that.
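
One rough way to sketch it, assuming some benchmark score per GPU (the names, scores, and prices below are placeholders, not real data):

    # Toy sketch: rank GPUs by benchmark score per dollar-hour.
    # Scores and prices are made-up placeholders, not real data.
    gpus = {
        "H100": {"score": 100.0, "usd_per_hour": 3.50},
        "A100": {"score": 60.0, "usd_per_hour": 1.80},
        "L40S": {"score": 45.0, "usd_per_hour": 1.10},
    }

    ranked = sorted(
        gpus.items(),
        key=lambda kv: kv[1]["score"] / kv[1]["usd_per_hour"],
        reverse=True,
    )

    for name, info in ranked:
        print(f"{name}: {info['score'] / info['usd_per_hour']:.1f} score per $/hr")

The hard part is of course picking the benchmark behind "score"; the division itself is trivial.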


Not sure, but as far as I know, AWS has historically never raised prices on specific instance type usage like this. It makes sense that this would be the first attempt, since it’s for apparently guaranteed capacity (vs the normal model of “if we’re out of capacity, too bad for you”).

That said, the real disturbing part of this is not so much the raising of the price for an extremely high-demand resource, but the utter lack of communication about it.


They have been doing that for a while now (the last 2-3 years) for EBS (gp3) and EC2 (gen 8) instances.


Just so you know, this is already very much a thing on TikTok: AI-generated movie summaries with a narrator explaining the plot while showing only the major beats, reducing a 2h movie to shorts totaling 10 min.

It’s honestly not the worst AI content out there! There are lots of movies I wouldn’t consider watching but that I’m curious enough about to see summarized (e.g. a series where only the first title was good but two more were still published).


Then why bother at all? If you do not find 2h of watching entertaining, then just do not watch it. It is like reading a wiki summary of a good book, or licking a good burger because you do not want to chew.


One reason could be previous season or movie recaps. I know I'll go look for recaps to refresh myself on a story before launching into a new season.


I just read the plot on wiki


Exactly - the best-case scenario is the Reddit/HN front page with a cool project you enjoyed working on, some nice conversations there, a few hundred stars that look good on your CV, and that’s it.

If you expect more long-term support, you had better be paying me for my time.


Maybe that is the answer. Put a note in your readme setting out your terms.

1) If you want to send a PR, we charge $1000 to review it. This is no guarantee that we will merge it. We will quote any further costs needed after a review.

2) Feature requests should be accompanied by a $250 quotation fee. Then we will scope and quote what is required to implement your feature. If your feature would compromise the project, or is impractical as stated, we retain the right to refuse to quote or to suggest more work is required. The $250 fee is non-refundable.

Something like that?


> Without world models, you cannot achieve reliability. And without reliability, profits are limited.

Surprising to simultaneously announce the end of the road yet point to the road ahead


It is not a road, it is a runway.


How much AI did you use to write up this article? It tripped up my "fake AI-written article" detector a few times despite being interesting enough to read to the end.


I used Claude to polish the draft and tighten sentences. The thinking, analysis, and examples are all mine and based on personal experience. I spent the weekend reflecting on my past experiences with Claude Code and actually digging into why Claude Code feels the way it does. Curious to know what tripped your detector.


Adding to this: too many negatives before making a point, which AI text is prone to do in order to give surface level emphasis to random points in an argument. For example: "I sat there for a second. It didn't lose the thread. It didn't panic. It prioritized like a real engineer would." Then there is the fact that the paragraph ends in just about the same way, which also activates one's AI-voice-detector, so to speak: "This wasn't autocomplete. This was collaboration."

In my opinion, to write is to think. And to write is also to express oneself, not only to create a "communication object," let's put it that way. I would rather read an imperfect human voice than a machine's attempt to fix it. I think it's worth facing the frustration that comes with writing, because the end goal of refining your own argument and your delivery is that much sweeter. Let your human voice shine through.


Lots of things - typical LLM em-dash situations, although using a regular dash. Lists of three after a colon where the three items aren't great. Short sentences for "impact" that sound kind of like a high school essay, e.g. "God level engineer... Zero ego."

I cannot at all understand writing an essay and then having an LLM "tighten up the sentences", which instead just makes it sound like slop generated from a list of bullets.


“Here’s the thing” “The best part?”


"It's not just X, it's Y"

I find it really hard to read articles that use AI slop aphorisms. Please use your own words, it matters.


What if I no good in English?

Jokes aside, my English is passable and I'm fine with it when writing comments, but I'm very aware that some of it doesn't sound native due to me, well, not being a native speaker.

I use AI to make it sound more fluent when writing for my blog.


As long as your bullet points+prompt are shorter than the output, couldn't you post that instead? The only time I think an LLM might be ethically acceptable for something a human has to read is if you ask it to make it shorter.


I write the full article in my Czenglish (English influenced by Czech sentence structure). Then I let it rewrite it in proper English.

So it's me doing the writing and GPT making it sound more English.


> What if I no good in English?

It would still sound more human coming from you.


Yeah it’s hard to keep interest when there’s no voice, just the same AI feel that you see everywhere else.


Well, actually, what if my own words make me come across as a raging pedantic asshole, you feckless moron!? I don't actually think you're a feckless moron, but sometimes I'll get emotional about this or that, and run my words through an LLM to reword it so that "it's not assholey, it's nice". I may know better than to use the phrase "well actually" seriously these days, but when the point is effective communication, yeah I don't want my readers to be put off by AI-isms, but I also don't want them to get put off by my words being assholey or condescending or too snarky or smug or any number of things that detract from my point. And fwiw, I didn't run this comment through an LLM.


> The start is slow as well, skipping to generation 42168M is recomended.

I picture entities playing with our universe, "it starts slow but check it out at the 13.8B mark"


Philosophically and depending on what schools of thought you follow, reality is just a really complex GoL simulation. I'm sure I read about it once, but if we were living in a simulation, would we be able to know?


I enjoy the [GoL -> our “reality” -> outside-the-simulation] comparison. It really drives home how unlikely we would be to understand the outside-the-simulation world.

Of course, there are other variants (see qntm's https://qntm.org/responsibility) where the simulation _is_ a simulation of the world outside. And we have GoL in GoL :-)


Always a fun read :) They turned it into a Futurama episode.


I think of reality as GoL but 3D, with more states than just 0 and 1, and conservative (it follows conservation laws; no relation to politics).


The universe could be a probability-based GoL simulation; a basic Turing machine cannot handle that.
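
For illustration only, one toy way to read "probability-based GoL" (my interpretation, with a pseudorandom generator standing in for true randomness): each cell follows the usual birth/survival rule with probability p, and otherwise keeps its previous state.

    # Toy sketch of a "probabilistic" Game of Life step on a wrapping grid.
    import random

    def step(grid, p=0.95):
        rows, cols = len(grid), len(grid[0])
        nxt = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                neighbors = sum(
                    grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)
                )
                rule = 1 if neighbors == 3 or (grid[r][c] == 1 and neighbors == 2) else 0
                # With probability p follow the rule, otherwise keep the old state.
                nxt[r][c] = rule if random.random() < p else grid[r][c]
        return nxt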


HN has a hatred of K8s? That’s new to me


K8s is used in many situations it shouldn't be, and a lot of HNers (including me) are bitter about having to deal with the resulting messes


This is a site for startups. They have no business running k8s, in fact, many of the lessons learned get passed on from graybeards to the younger generation along those lines. Perhaps I'm wrong! I'd love to talk shop somewhere.


And even for "out of distribution" code, you can still ask questions about how to do the same thing but more optimized, whether a library could help with this, why a piece of code is giving unexpected output, etc.


I think the concern of "blog comments" is best left to external platforms, e.g. HN, Reddit, etc.

What would be more useful would be an automated list of places where the post has been discussed (and maybe pull the top comments from there through an API?)
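
A rough sketch of what I mean, assuming the Algolia HN search API (the blog URL below is a placeholder); the top comments of each thread could then be pulled from /api/v1/items/{id}:

    # Toy sketch: find HN threads discussing a given post URL via the
    # Algolia HN search API and list them.
    import json
    import urllib.parse
    import urllib.request

    post_url = "https://example.com/my-post"  # placeholder
    params = urllib.parse.urlencode({
        "query": post_url,
        "restrictSearchableAttributes": "url",
        "tags": "story",
    })
    with urllib.request.urlopen(f"https://hn.algolia.com/api/v1/search?{params}") as resp:
        hits = json.load(resp)["hits"]

    for hit in hits[:5]:
        print(hit["title"], hit["num_comments"],
              f"https://news.ycombinator.com/item?id={hit['objectID']}")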


There used to be a time when comments were attached to the posts, where anyone could come, leave their name and a comment, and let the author know about any edits or misspellings, or how they liked the article.

Social media ruined that. Everyone is now on their own soap box posting comments of drivel from their sub-optimal self-conscious parroting asinine talking points about how one characterized group of statistics ruined it for everyone else. Bots, drivel, linkbacks, social media, stupid laws, and an aversion to independence - we have what we have today. Large platforms that trick humans into use because they have the largest arenas.

Also, the author’s experience with seeing scammy ads on their site doesn’t mean that others are seeing the same ads. Because they ran ad-free for so long, it’s possible their token in the AdTech ecosystem is stale, in which case it hasn’t been put into any buckets yet. Ergo, you get the smoking/drinking/scamming/doesn’t-fit category.

A “token” is a device or identity signature used to identify a viewer or user so that advertisers can tabulate impressions, build personas, categorize your shopping habits, track the sites you visit, and link your token with others in your proximity.


> Also, the author’s experience with seeing scammy ads on their site doesn’t mean that others are seeing the same ads...

Well, so they may see worse ads.


> Social media ruined that. Everyone is now on their own soap box posting comments of drivel from their sub-optimal self-conscious parroting asinine talking points about how one characterized group of statistics ruined it for everyone else.

Partially agree, partially disagree. Blog comments were already dead when SEO fraudsters discovered that "linkbacks" could be abused for spam even more easily than comments were.


Correct. Site owners moved their communities over to social media pages because they couldn't handle moderating the waves of spam comments that littered every single post on their site. They figured, let Facebook/Twitter handle moderation. Then FB closed the gates, de-emphasized posts with outbound links and now site owners are screwed.


Yep. That's 99% of the culprit.


