Hacker Newsnew | past | comments | ask | show | jobs | submit | ronbenton's commentslogin

I just pasted the first paragraph in an "AI detector" app and it indeed came back as 100% AI. But I heard those things are unreliable. How did you determine this was LLM-generated? The same way?

Apart from the style of the prose, which is my subjective evaluation: This blog post is "a view from nowhere." Tiger Data is a company that sells postgres in some way (don't know, but it doesn't matter for the following): they could speak as themselves, and compare themselves to companies that sell other open source databases. Or they could showcase benchmarks _they ran_.

Them saying: "What you get: pgvectorscale uses the DiskANN algorithm (from Microsoft Research), achieving 28x lower p95 latency and 16x higher throughput than Pinecone at 99% recall" is marketing unless they give how you'd replicate those numbers.

Point being: this could have been written by an LLM, because it doesn't represent any work-done by Tiger Data.


For what it's worth, TigerData is the company that develops TimescaleDB, a very popular and performant time series database provided as a Postgres extension. I'm surprised that the fact that TigerData is behind it is not mentioned anywhere in the blog post. (Though, TimescaleDB is mentioned 14 times on the page).

The cynical take is: the AI doesn't know you-the-blog-post-author made TimescaleDB unless you tell it!

I don't understand your example: pgvectorscale was built and is maintained by Tiger Data

In terms of that example: they should link to how they got those numbers, and it should state the benchmark used, the machines used, what they controlled for etc.

Just using LLMs enough I've developed a sense for the flavor of writing. Surely it could be hidden with enough work, but most of the time it's pretty blatant.

Sometimes I get an "uncanny valley" vibe when reading AI-generated text. It can be pretty unnerving.

It’s got that LLM flow to it. Also liberal use of formatting. It’s like it cannot possibly emphasize enough. Tries to make every word hit as hard as possible. Theres no filler, nothing slightly tangential or off topic to add color. Just many vapid points rapid fire, as if they’re the hardest hitting truth of the decade lol

ChatGPT has a pretty obvious writing style at the moment. It's not a question of some nebulous "AI detector" gotcha, it's more just basic human pattern matching. The abundant bullet points, copious bold text, pithy one line summarizing assertions ("In the AI era, simplicity isn’t just elegant. It’s essential."). There are so many more in just how it structures its writing (eg "Let’s address this head-on."). Hard to enumerate everything, frankly.

I agree this is a result of monopolization or more specifically "lock-in." I spent some time at Microsoft making products in their 365 suite and it was frankly known that enterprise contracts with us were a form of lock-in--the idea of moving a 100K employee workforce off of Microsoft 365 and onto something else is nearly impossible. So management didn't worry much about quality.

We are on the darkest of paths. It’s like the current US administration is using our collective greatest fears about data privacy as a playbook.


Hm that is a weird way to ask for the feds to stop shooting innocent people


You can tell the message was written by committee, or by a large language model prompted to respect "both sides". No human with a soul could honestly characterize the situation as passively as this: "The recent challenges facing our state have created widespread disruption and tragic loss of life."


It was almost certainly written by lawyers. The letter basically says, “The current situation is disrupting our bottom line. This cannot continue.”


Reads like a prayer to summon a nostalgic past.


It's hard to imagine a joint letter that says less than this one does.


Do you have some examples of letters that say more? Some ltter signed by 30 CEOs or more.


It seems you can always without fail count on this administration to do the wrong thing


You really can. Even when they superficially appear to have a good idea, or a middling idea with a potentially good side effect, they consistently find a way to mess up the details and dodge any potential good outcomes.


look at their court briefings, they try do the same thing illegal in as many ways as possible, the goal is to break as much as they can, the constitution being a primary target

#Project2025


This is usually a smart crowd. I’m utterly mystified at the number of comments in this thread confidently stating that the US must go back to paper ballots when 99% of the country already uses them. It just takes a quick google search to know this.


Drives me nuts how many people don’t understand we are already using paper ballots. Electronically tabulated using risk-limiting audits.

Why are so many people convinced we don’t use paper ballots? Disinformation?


Yes


Telling uses to “watch out for prompt injections” is insane. Less than 1% of the population knows what that even means.

Not to mention these agents are commonly used to summarize things people haven’t read.

This is more than unreasonable, it’s negligent


We will have tv shows with hackers “prompt injecting” before that number goes beyond 1%


https://www.pcloadletter.dev

Some articles have been well-received here and certainly resulted in good discussion!


This falls under "lmao even" right? Like, come on, the entire business model of most generative AI plays right now hinges on IP theft.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: