Hacker News | ignu's comments

I'd gladly suffer the indignity of stepping on a few leaves if I didn't have to listen to an hour of blaring leaf-blower every Tuesday morning.


Leaf litter is also where juvenile lightning bugs live. Less leaf litter = less lightning bugs.


Is that a worse problem than hours of leaf blower noise?

I googled and apparently it's another name for fireflies, and fireflies don't make my list of insects that cause problems.


Insect populations are dying out and it’s a huge problem we don’t talk enough about.

People are calling out fireflies because they are (were) very visible, but it's a problem for every species. We are replacing native plants with low-pile outdoor carpet in the suburbs and dousing everything else in pesticide.


I mean to discourage people from blowing away their leaves.

Less leaf blowing may also result in more lightning bugs / fireflies, which many people enjoy.


> Leaf litter is also where juvenile lightning bugs live

Also various species of bumblebee and fledgling birds, from magpies to owls.


AI art is like dreams. I'm amused by my own but never want to hear about anyone else's.


AI art is like a photoshop drawing. If it's done by someone who sucks, which are most users if the tool is accessible enough, you will just think "That's a bad photoshop drawing". You will recognize the standard tools, the standard brushes, bad masking – all the stuff that is easy to do and that everyone will do.

That's not a tool issue. It just means that working on a raised floor is not the same as being able to reach a higher ceiling.


I love this analogy.


Your real poetry on the other hand, pretty good!


> AI art is like dreams. I'm amused by my own but never want to hear about anyone else's.

I don't know. Scrolling the Sora image generations feed is pretty fun.

It's got trendy memes, lots of mashups, and cool art. They've also managed to capture social media outrage bait on the platform: conservatives vs. liberals, Christians vs. atheists, and a whole other host of divisive issues that are interspersed throughout the feed. I think they have a social media play in the bag if they pursue it.

It feels like Sora could replace Instagram.


Meta's already starting to fluff up their content with AI slop. How is it better when someone else does the same?


Honestly, this is the kind of anti-AI argument that makes me care. It also acknowledges just why those of us who like it are so passionate.


I've seen some prank calls (a YouTuber cloned Tucker Carlson's voice and called Alex Jones), but the caller just had a sound bank with a few pre-generated lines and it fell apart pretty quickly.

At least for now there's too much lag to do a real time conversation with a cloned voice.

Speech to Text > LLM Response > Generate Audio

If that time can shrink to subsecond, I think there'll be madness. (Specifically thinking of romance scammers)
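That loop could be sketched like this (the three stage functions are empty stand-ins, not real models; the names are invented for illustration). The point is just that the total lag is the sum of the three stages, so each one has to be fast for a believable real-time conversation:

```python
import time

# Stand-ins for the three stages. A real system would call an STT model,
# an LLM, and a voice-cloning TTS engine here, each adding its own latency.
def speech_to_text(audio: bytes) -> str:
    return "hello, who is calling?"

def llm_response(text: str) -> str:
    return "hi, it's me, sorry for the bad connection"

def text_to_speech(text: str) -> bytes:
    return text.encode()  # pretend this is synthesized audio

def round_trip(audio: bytes):
    """Run one conversational turn and measure end-to-end lag."""
    start = time.monotonic()
    reply = text_to_speech(llm_response(speech_to_text(audio)))
    return reply, time.monotonic() - start
```

With stand-ins the round trip is effectively instant; with real models each stage currently adds hundreds of milliseconds or more, which is the lag the parent comment is describing.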


At last summer's WeAreDevelopers World Congress in Berlin, one of the talks I went to was by someone who did this with their own voice, to better respond to (really long?) WhatsApp messages they kept getting.

It worked a bit too well, as it could parse the sound file and generate a complete response faster than real-time, leading people to ask if he'd actually listened to the messages they sent him.

Also they had trouble believing him when he told them how he'd done it.


Awful, bots on their own having real conversations with people with the voice of a loved one. Scamming on steroids


You don't need an LLM Response


I've tried giving it languages like Turbo Pascal or pointing it at obscure frameworks like fp-ts, and while sometimes it's not syntactically perfect, I can guarantee it's not just scraping code snippets from somewhere.


I don't want to get in a pedantic argument over the definition of "intelligence" but ChatGPT got a 1020 on the SAT

https://twitter.com/davidtsong/status/1598767389390573569


ChatGPT got a 1020 on SAT Practice Test #10 (https://satsuite.collegeboard.org/media/pdf/sat-practice-tes...), a sample exam whose questions have been extensively discussed online since at least 2019 (eg https://youtube.com/watch?v=18oahUy6cGA). The exact text of the questions, with the corresponding correct answers, almost certainly appeared in ChatGPT's training set.


Prompt: Would a chatbot scoring 1020 on an SAT practice test whose questions and answers were part of its training data be evidence of general understanding?

ChatGPT Response: A chatbot scoring 1020 on an SAT practice test whose questions and answers were part of its training data would not necessarily be evidence of general understanding. Scoring well on a standardized test like the SAT requires a specific set of skills and knowledge, but it does not necessarily indicate that the chatbot has a general understanding of the world or the ability to think and reason like a human.

Standardized tests like the SAT are designed to measure specific abilities and knowledge, and a chatbot that has been trained on the questions and answers from the test would be able to perform well on the test by simply recognizing and reproducing the correct answers. This type of performance does not necessarily indicate that the chatbot has a deep understanding of the material or the ability to apply it in a broader context.


I do wonder how output like this would fare on the essay section. All the spelling and grammar is correct, and ChatGPT can presumably generate as many paragraphs of it as you want. (And I think that's what SAT graders mostly look for in practice.) But it also repeats some generic sentence about "a standardized test like the SAT requiring specific skills and knowledge" twice with superficial avoiding-a-plagiarism-checker alterations, takes until the second paragraph for it to figure out what point it's making, and uses the wrong conjunction when it's making that point.


It's like it knows it's not conscious. Maybe it's wrong about that, though.


Maybe it's trying not to think too hard about it.

https://toldby.ai/HdnuUiTuME2


This is like listening to that distant cousin who’s done too many drugs.


That cousin is just ChatGPT imagining itself as stuck in a loop.


> The exact text of the questions, with the corresponding correct answers, almost certainly appeared in ChatGPT's training set.

This seems like it would be easy to check for, so I’m sure it will come to light fairly quickly if so?


OpenAI doesn't release its training datasets, AFAIK, but we know they're based on sources like Common Crawl that scrape websites the same way search engines do. So here's an experiment you can try at home: type "juan purchased an" into Google Search and look at the auto-complete suggestions. If it suggests the word "antique", that's thanks to question 18 on the no-calculator math section of this exam. (Similarly if you type "jake buys" and it jumps in with "a bag of popcorn", that's question 3 on the calculator section.)


And yet it can't play an extremely simple game: https://news.ycombinator.com/item?id=33850306


What’s that mean relative to other scores?


52nd percentile, according to a reply.


yeah, exactly.

also the skeptics are really hung up on semantics of "intelligence" and not addressing the model's output, much less where this is going to be in upcoming years.

like, take-home coder test is probably just dead. today.

i mean: https://twitter.com/emollick/status/1598745129837281280


I don't think the sceptics have tried it. It's true it gets things wrong sometimes but that's a different problem to synthesising novel solutions.


That's very false in my experience, and I did 100% TDD for years (have a more nuanced approach currently in TypeScript/React)

As a programmer with a problem, your first instinct is to start at the solution.

TDD pulls you back, and you first have to write the api and decide how you verify it.

If you're doing it well, you make both of those things as simple as possible, first. Reduce dependencies and inputs, etc.

That's actually the most important part, an easily consumable api that's easy to verify, not the implementation, and TDD forces you to do it first.

Also, a pet peeve of mine is seeing a line or more of code that doesn't do anything. With TDD, such detritus is impossible as you can't write production code unless it's making a red test turn green.
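A minimal sketch of that red-then-green rhythm (the `slugify` function and its behavior are invented for illustration): the test exists before the implementation, so every line of the implementation has to earn its keep by turning the test green.

```python
import re

# Red first: the test pins down the api (name, input, output) before any
# implementation exists. Running it now would fail with a NameError.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Green: the simplest implementation that makes the red test pass.
def slugify(title: str) -> str:
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # any line in slugify not needed for this is unjustified detritus
```

The design pressure is in the test: it forced a one-argument, pure, easily verified api before a single line of implementation was written.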


Even a string length api like len(s) is not exhaustively testable; you can't prove it correct with blackbox tests.

TDD has refactoring steps which only have to preserve passing tests; refactoring can easily be the vector that introduces dead code as well as changes behavior for untested input combinations.

I suspect that a lot of code developed TDD is actually deployed on input combinations that are not covered in the TDD test suite.

A string length function developed by TDD will still work on len("supercalifragilisticexpialidocious") even though that exact string never appeared as a test case, and the consumers of that function will use it on all sorts of untested inputs all the time.
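A toy illustration of that point (`str_len` is a hypothetical stand-in for a hand-rolled length function): a few example-based tests drive the implementation, and the result happens to generalize to inputs the suite never covered, though nothing in TDD proves that it must.

```python
def str_len(s: str) -> int:
    # Count characters one by one; implementation driven by the tests below.
    count = 0
    for _ in s:
        count += 1
    return count

# The TDD test cases: a handful of examples, nowhere near exhaustive.
assert str_len("") == 0
assert str_len("abc") == 3

# It still works on an input that never appeared in the test suite:
assert str_len("supercalifragilisticexpialidocious") == 34
```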


> Even a string length api like len(s) is not exhaustively testable;

But this has nothing to do with what GP said. That isn't what GP is using TDD for. GP is using TDD for,

1. starting with api of a function vs code.

2. avoiding unused lines of code

3. reducing dependencies; it's hard to TDD something that has lots of deps, so it drives you to refactor it into something more testable and thus more readable/maintainable.

4. simplest set of input/output.


A very good explanation. Yet what I think is confusing a lot of (very?) good programmers (not me) is that they already do that intuitively by not letting methods become longer than 3 or 4 lines, and whenever implementing a new method, by writing the return statement first (i.e. returning a mock object or similar) and working backwards, i.e. only writing enough code to be able to return the desired value.

If one then goes and tries to write tests for these kind of methods, it feels like superfluous busywork.


jfc, a coinbase recruiter emailed me just six days ago.


I used to live on a semi-busy street.

About once an hour a motorcycle would drive by so loud it would set off multiple car alarms that would then blare for ten to fifteen minutes.

This might sound very annoying.

But after countless months of this your mortal coil just dissolves into an entity of pure hatred and you completely transcend the material plane.


This, but despite my own noise sensitivity it’s how it torments my pup that really gets me. Poor girl is terrified of our busy road, tucks tail just at the hint of large loud vehicles. Sometimes we have to bail out and try to wait just so she can find a place to potty in peace. I absolutely hate that for her, and hate that it’s a daily occurrence that I have to coax her out to take care of normal bodily functions.


I stayed in Chicago for a short time in a 100-something year old building right next to one of the oldest sections of elevated rail in the city. I could glance out the window on the second floor and the wheels would be right there, seeming close enough to touch.

Within 2 days the screeching and grinding metal cacophony every few minutes had faded into the background and I had to mentally tune in to hear them again.


When Apple does this, it's really hard to oversell the "reliable" aspect of it.

The responsiveness and web experience of the first iPhone was just a generational leap.

Same thing with Universal Control. I've used Synergy, ShareMouse and basically every clone, and I've never seen close to this same precision as Universal Control which Just Works out of the box.

I'm actually upgrading to a new Mac right now, and Universal Control is an absolute game changer for that workflow.

