Insect populations are dying out and it’s a huge problem we don’t talk enough about.
People are singling out fireflies because they are (were) very visible, but it's a problem for every species. We are replacing native plants with low-pile outdoor carpet in the suburbs and dousing everything else in pesticide.
AI art is like a Photoshop drawing. If it's done by someone who sucks, which most users are if the tool is accessible enough, you will just think "That's a bad Photoshop drawing". You will recognize the standard tools, the standard brushes, bad masking – all the stuff that is easy to do and that everyone will do.
That's not a tool issue. It just means that working on a raised floor is not the same as being able to reach a higher ceiling.
> AI art is like dreams. I'm amused by my own but never want to hear about anyone else's.
I don't know. Scrolling the Sora image generations feed is pretty fun.
It's got trendy memes, lots of mashups, and cool art. They've also managed to capture social media outrage bait on the platform: conservatives vs. liberals, Christians vs. atheists, and a whole host of other divisive issues interspersed throughout the feed. I think they have a social media play in the bag if they pursue it.
I've seen some prank calls (a YouTuber cloned Tucker Carlson's voice and called Alex Jones) but he just had a sound bank with a few pre-generated lines and it fell apart pretty quickly.
At least for now there's too much lag to do a real time conversation with a cloned voice.
Speech to Text > LLM Response > Generate Audio
If that time can shrink to subsecond, I think there'll be madness. (Specifically thinking of romance scammers)
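For concreteness, a minimal sketch of that loop in TypeScript; all three stage functions are made-up stubs, not any real vendor's API:

```typescript
// Sketch of the STT -> LLM -> TTS round trip. The three stage
// functions below are hypothetical stand-ins, not a real provider API.
async function speechToText(audio: ArrayBuffer): Promise<string> {
  return "transcribed caller audio"; // stand-in for a streaming ASR call
}

async function llmRespond(prompt: string): Promise<string> {
  return `scripted reply to: ${prompt}`; // stand-in for a chat-completion call
}

async function textToSpeech(text: string): Promise<ArrayBuffer> {
  return new ArrayBuffer(0); // stand-in for cloned-voice TTS
}

// Each await adds latency; the whole loop has to come in well under
// a second before a live back-and-forth stops feeling like lag.
async function replyInClonedVoice(caller: ArrayBuffer): Promise<ArrayBuffer> {
  const text = await speechToText(caller);
  const reply = await llmRespond(text);
  return textToSpeech(reply);
}
```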
At last summer's WeAreDevelopers World Congress in Berlin, one of the talks I went to was by someone who did this with their own voice, to better respond to (really long?) WhatsApp messages they kept getting.
It worked a bit too well, as it could parse the sound file and generate a complete response faster than real-time, leading people to ask if he'd actually listened to the messages they sent him.
Also they had trouble believing him when he told them how he'd done it.
I've tried giving it languages like Turbo Pascal or telling it to use obscure frameworks like fp-ts, and while sometimes the output isn't syntactically perfect, I can guarantee it's not just scraping code snippets from somewhere.
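(For anyone who hasn't seen it, fp-ts code looks roughly like this — a toy example I wrote myself, not model output:)

```typescript
import { pipe } from "fp-ts/function";
import * as O from "fp-ts/Option";

// Toy fp-ts: parse a number out of a string, double it, default to 0.
const doubleOrZero = (input: string): number =>
  pipe(
    O.some(input),
    O.map(parseFloat),
    O.filter((n) => !Number.isNaN(n)),
    O.map((n) => n * 2),
    O.getOrElse(() => 0)
  );
```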
Prompt: Would a chatbot scoring 1020 on an SAT practice test whose questions and answers were part of its training data be evidence of general understanding?
ChatGPT Response: A chatbot scoring 1020 on an SAT practice test whose questions and answers were part of its training data would not necessarily be evidence of general understanding. Scoring well on a standardized test like the SAT requires a specific set of skills and knowledge, but it does not necessarily indicate that the chatbot has a general understanding of the world or the ability to think and reason like a human.
Standardized tests like the SAT are designed to measure specific abilities and knowledge, and a chatbot that has been trained on the questions and answers from the test would be able to perform well on the test by simply recognizing and reproducing the correct answers. This type of performance does not necessarily indicate that the chatbot has a deep understanding of the material or the ability to apply it in a broader context.
I do wonder how output like this would fare on the essay section. All the spelling and grammar is correct, and ChatGPT can presumably generate as many paragraphs of it as you want. (And I think that's what SAT graders mostly look for in practice.) But it also repeats some generic sentence about "a standardized test like the SAT requiring specific skills and knowledge" twice with superficial avoiding-a-plagiarism-checker alterations, takes until the second paragraph for it to figure out what point it's making, and uses the wrong conjunction when it's making that point.
OpenAI doesn't release its training datasets, AFAIK, but we know they're based on sources like Common Crawl that scrape websites the same way search engines do. So here's an experiment you can try at home: type "juan purchased an" into Google Search and look at the auto-complete suggestions. If it suggests the word "antique", that's thanks to question 18 on the no-calculator math section of this exam. (Similarly if you type "jake buys" and it jumps in with "a bag of popcorn", that's question 3 on the calculator section.)
also the skeptics are really hung up on the semantics of "intelligence" and not addressing the model's output, much less where this is going to be in the coming years.
like, the take-home coding test is probably just dead. Today.
That's very false in my experience, and I did 100% TDD for years (I have a more nuanced approach currently in TypeScript/React).
As a programmer with a problem, your first instinct is to start at the solution.
TDD pulls you back, and you first have to write the api and decide how you verify it.
If you're doing it well, you make both of those things as simple as possible, first. Reduce dependencies and inputs, etc.
That's actually the most important part, an easily consumable api that's easy to verify, not the implementation, and TDD forces you to do it first.
Also, a pet peeve of mine is seeing a line or more of code that doesn't do anything. With TDD, such detritus is impossible as you can't write production code unless it's making a red test turn green.
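A minimal red-green cycle, sketched in TypeScript with Jest (all names made up):

```typescript
// Red: the test comes first and fails, which forces the api
// (name, inputs, output) to be decided before any implementation.
import { slugify } from "./slugify"; // hypothetical module under test

test("slugify lowercases and hyphenates", () => {
  expect(slugify("Hello World")).toBe("hello-world");
});

// Green: in slugify.ts, write only enough production code to pass.
// Any line beyond this would be detritus the cycle doesn't allow.
export function slugify(title: string): string {
  return title.toLowerCase().replace(/\s+/g, "-");
}
```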
Even a string length api like len(s) is not exhaustively testable; you can't prove it correct with blackbox tests.
TDD has refactoring steps which only have to preserve passing tests; refactoring can easily be the vector that introduces dead code as well as changes behavior for untested input combinations.
I suspect that a lot of code developed TDD is actually deployed on input combinations that are not covered in the TDD test suite.
A string length function developed by TDD will still work on len("supercalifragilisticexpealidocious") even though that exact string never appeared as a test case, and the consumers of that function will use it on all sorts of untested inputs all the time.
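One partial answer here (not something GP claimed to use) is property-based testing, which generates inputs rather than enumerating them; with fast-check it looks like this:

```typescript
import fc from "fast-check";

// Property-based test: assert an invariant over generated strings
// instead of hand-picked examples. Still not a proof of correctness,
// but it exercises far more of the input space than a fixed TDD list.
test("length is non-negative and additive over concatenation", () => {
  fc.assert(
    fc.property(fc.string(), fc.string(), (a, b) =>
      (a + b).length === a.length + b.length && a.length >= 0
    )
  );
});
```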
> Even a string length api like len(s) is not exhaustively testable;
But this has nothing to do with what GP said. That isn't what GP is using TDD for. GP is using TDD for:
1. starting with the api of a function vs. the code.
2. avoiding unused lines of code.
3. reducing dependencies; it's hard to TDD something that has a lot of deps, so it drives you to refactor it into something more testable and thus more readable/maintainable too.
A very good explanation. Yet I think what confuses a lot of (very?) good programmers (not me) is that they already do this intuitively: they don't let methods grow longer than 3 or 4 lines, and whenever they implement a new method they write the return statement first (i.e. returning a mock object or similar) and work backwards, writing only enough code to be able to return the desired value.
If one then goes and tries to write tests for these kinds of methods, it feels like superfluous busywork.
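Roughly this pattern, in a made-up TypeScript example: fix the return first, then work backwards until the value is real.

```typescript
import { readFileSync } from "node:fs";

type Config = { retries: number; verbose: boolean };

// Pass 1: write the return statement first with a mock value, so the
// method's shape is fixed and callers compile before any logic exists.
function loadConfigStub(path: string): Config {
  return { retries: 3, verbose: false }; // placeholder return
}

// Pass 2: work backwards, writing only enough code to make that
// returned value real.
function loadConfig(path: string): Config {
  const raw = JSON.parse(readFileSync(path, "utf8"));
  return { retries: raw.retries ?? 3, verbose: raw.verbose ?? false };
}
```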
This, but despite my own noise sensitivity it’s how it torments my pup that really gets me. Poor girl is terrified of our busy road, tucks tail just at the hint of large loud vehicles. Sometimes we have to bail out and try to wait just so she can find a place to potty in peace. I absolutely hate that for her, and hate that it’s a daily occurrence that I have to coax her out to take care of normal bodily functions.
I stayed in Chicago for a short time in a 100-something year old building right next to one of the oldest sections of elevated rail in the city. I could glance out the window on the second floor and the wheels would be right there, seeming close enough to touch.
Within 2 days the screeching and grinding metal cacophony every few minutes had faded into the background and I had to mentally tune in to hear them again.
When Apple does this, it's really hard to oversell the "reliable" aspect of it.
The responsiveness and web experience of the first iPhone were just a generational leap.
Same thing with Universal Control. I've used Synergy, ShareMouse and basically every clone, and I've never seen anything close to the precision of Universal Control, which Just Works out of the box.
I'm actually upgrading to a new Mac right now, and Universal Control is an absolute game changer for that workflow.