You convey tone through word choice and sentence structure - trying to convey tone through casing or other means is unnecessary and often just jarring.
Like look at the sentence "it has felt to me like all threads of conversation have veered towards the extreme and indefensible." The casing actually conflicts with the tone of the sentence. It's not written like a casual text - if the sentence were "ppl talking about this are crazy" then sure, the casing would match the tone. But the stodgy sentence structure and the use of more precise vocabulary like "veered" indicate that more effort has gone into this than the casing suggests.
Fair play if the author just wants to have a style like this. It's his prerogative to do so, just as anyone can choose to communicate exclusively in leetspeak, or use all caps everywhere, or write everything like script dialogue, whatever. Or if it's a tool to signal that he's part of an in-group with certain people who do the same, great. But he is sacrificing readability by ignoring conventions.
What matters is the content, not the communication. They could build a platform to chat with each other, but WhatsApp or text or email already covers that. What they can't build is a platform with an infinite stream of targeted content (until AI generates the content, I guess).
Not if there's no reputation. If you see someone liked your post and then you go check out their posts, or if people recognize commenters and remember things about them, then it's social. Think engaging with friends on Facebook or participating in a hobby forum. But there's nothing social about engaging with a popular Reddit post or some celebrity's Twitter feed.
Yeah that's really the issue with all social media. If you restrict yourself to just checking what friends post on Facebook, or what people you subscribe to post on YouTube, those platforms are pretty healthy too. It's when you go to the infinite content feed that sites become an issue.
How are you using it? I'm curious whether you hit the limit so quickly because you're running it with Claude Code, so it's loading your whole project into its context, making tons of iterations, etc., or whether you're using the chat, just asking focused questions and having it build out small functions or validate the code quality of a file, and still hitting the limit with that.
Not because I think either way is better - just because I personally work well with AI in the latter capacity and have been considering subscribing to Claude, but I don't know how restrictive the usage limits are.
I'm not sure I agree - on the one hand yes, it's trivial to generate pages stuffed with keywords. But on the other hand Google is already interpreting search intent, and while this is okay for some things it is extraordinarily frustrating when trying to look for something specific.
Often I do want exact matches, and Google refuses to show them no matter what special characters I use to try to modify the search behaviour.
Personally I'd rather search engines continue to return exact matches and just de-rank content that has poor reputation, and if I want to have a more free-form experience I'll use LLMs instead.
Something that's been on my mind for a while now is shared moderation - instead of having a few moderators who deal with everything, distribute the moderation load across all users. Every user might only have to review a couple of posts a day or whatever, so it should be a negligible burden, and each post that requires moderation goes to multiple users so that any disagreement can be escalated to more senior/trusted users.
This is specifically in the context of a niche hobby website where the rules are simple and identifying rule-breaking content is easy. I'm not sure it would work on something with universal scope like Reddit or Facebook, but I'd rather we see more focused communities anyway.
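To make that concrete, here's a rough sketch of how the assignment and escalation could work, purely as an illustration - all the names here (Post, assign_reviewers, resolve, the example user lists) are invented, not from any real system:

```python
import random
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: int
    votes: dict = field(default_factory=dict)  # reviewer -> "ok" / "remove"

def assign_reviewers(post, regular_users, reviewers_per_post=3):
    # Spread the load: each flagged post goes to a few randomly chosen users.
    return random.sample(regular_users, k=reviewers_per_post)

def resolve(post, trusted_users):
    # Unanimous verdicts are final; any disagreement escalates upward.
    tally = Counter(post.votes.values())
    verdict, count = tally.most_common(1)[0]
    if count == len(post.votes):
        return verdict
    senior = random.choice(trusted_users)
    return f"escalated to {senior}"

# Example: three regular users review post 42; if anyone disagrees, it escalates.
regulars = ["alice", "bob", "carol", "dave", "erin"]
trusted = ["mod_frank", "mod_grace"]

post = Post(post_id=42)
for user in assign_reviewers(post, regulars):
    post.votes[user] = random.choice(["ok", "remove"])

print(post.votes)
print(resolve(post, trusted))
```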
I don't know if it's true or not, but I remember reading about a person who did community cheating reviews for a game like CS or something. They had numerous bot accounts and spent an hour a day on it, set up so that whenever they reviewed a video the bots would submit the same verdict.
But all the while they were doing legitimate reviewing, and when they came across their own real cheating account they'd report it as not cheating. Supposedly this person got away with it for years because they had a good reputation as a community reviewer with high alignment scores.
I know one exception doesn't mean it's not worth it, but we have to acknowledge the potential for abuse. I'd still rather have one occasionally ambitious abuser than countless low-effort ones.
Yeah I can definitely see that being a threat model. In the gaming case I think it's harder because it's more of a general reputation system and it's based on how people feel while playing with you, whereas for a website every post can be reviewed by multiple parties and the evidence is right there. But certainly I would still expect some people to try to maximize their reputation and use that to push through content that should be more heavily moderated, and in the degenerate case the bad actors comprise so much of the userbase that they peer review their own content.
I see this kind of testing as more for regression prevention than anything. The tests pass if the code handles all possible return values of the dependencies correctly, so if someone changes your code in a way that makes the tests fail, they either have to fix the errors they've introduced or update the tests if the desired functionality has genuinely changed.
These tests won't detect if a dependency has changed, but that's not what they're meant for. You want infrastructure to monitor that as well.
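To illustrate what I mean, here's a small pytest-style sketch - apply_discount and the rate service are made-up examples, not from any particular codebase:

```python
from unittest.mock import Mock
import pytest

# Hypothetical unit under test: applies a discount fetched from a rate service.
def apply_discount(price, rate_service):
    rate = rate_service.get_rate()
    if rate is None:  # service had no rate for this customer
        return price
    return round(price * (1 - rate), 2)

# Enumerate the return values the dependency can produce. If a later change
# breaks the handling of any of them, that case fails and whoever made the
# change has to either fix the regression or consciously update the test.
@pytest.mark.parametrize("rate, expected", [
    (None, 100.00),  # missing rate -> price unchanged
    (0.0, 100.00),   # zero discount
    (0.25, 75.00),   # normal discount
    (1.0, 0.00),     # full discount
])
def test_apply_discount_handles_all_rates(rate, expected):
    service = Mock()
    service.get_rate.return_value = rate
    assert apply_discount(100.00, service) == expected
```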
Mass-market SaaS will generally just use other products to handle this stuff. And if there does happen to be a leak, they just say sorry and move on; there are very few consequences for security failures.
I see tests as more of a test of the programmer's understanding of their project than anything. If you deeply understand the project requirements, API surface, failure modes, etc. you will write tests that enforce correct behaviour. If you don't really understand the project, your tests will likely not catch all regressions.
AI can write good test boilerplate, but it cannot understand your project for you. If you just tell it to write tests for some code, it will likely fail you. If you use it to scaffold out mocks or test data or boilerplate code for tests which you already know need to exist, it's fantastic.
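For example, this is the kind of mechanical scaffolding I'd happily let AI generate - the fixtures and mock setup - while the test itself still encodes a requirement I already know needs to hold. Everything here, notify included, is invented for illustration:

```python
from unittest.mock import Mock
import pytest

@pytest.fixture
def user_record():
    # Representative test data; tedious to type out, easy to generate.
    return {"id": 1, "name": "Test User", "email": "test@example.com", "active": True}

@pytest.fixture
def mailer():
    # Mocked mail-sending dependency so tests never touch a real SMTP server.
    m = Mock()
    m.send.return_value = True
    return m

# Hypothetical function under test.
def notify(user, mailer):
    if not user["active"]:
        return False
    return mailer.send(user["email"], "hello")

# The assertion below is the part that comes from understanding the
# requirements: deactivated users must never be emailed.
def test_inactive_users_are_not_emailed(user_record, mailer):
    user_record["active"] = False
    assert notify(user_record, mailer) is False
    mailer.send.assert_not_called()
```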