Fellow bootstrapper here, roughly in your ballpark - €4M+ revenue, team of 18, bootstrapped for 12 years.
Only bootstrappers understand the bootstrap hustle ;) But what an amazing business you have built there - be proud, you deserve it.
Let me share a personal founder story if I may: after 12 years of building the company, I decided to step down as CEO, moved on and spent the last 6 months working on different projects, learned A LOT about AI coding, went to Iceland, Texas. Had a great time. Yet after only 6 months I experienced the strongest "pull" you can imagine, back to my bootstrapped company of 12 years. And here I am - December has been an amazing time, getting back to work. And next year we have ambitious plans ahead!
They leave for Germany, of all places. Germany is one of the European states with the most arrests for posting entries on social media. I guess they will pack their stuff and move on again in 1-2 years from now.
Germany has a big alt-rising in the form of AFD, and consequently, they do track social media heavily. There is also a non-insignificant fundamentalist Muslim population.
For things like troll posts or just general hate speech, most of the time the police visit your house, ask you questions, and give you a stern warning. And remember, police in the EU aren't like police in the US - when you get visited by police in the EU, you aren't afraid that you're going to get shot or thrown on the ground and tased if you did nothing wrong.
In extreme cases where you are calling for things like beheading, yea they def arrest for that.
Source: a close friend who lives in Germany and works for a company that does business with the German government. I don't know first hand, but he is pretty aware of the politics in the EU and I have no reason to believe he would be exaggerating.
On another note, German policing is quite progressive actually. For example, if you run, you don't get charged with evading/eluding - it's actually legal to run from the police because "desire for freedom is a human right".
In France, discriminatory identity checks are a striking illustration of this. Police disproportionately target certain citizens on the basis of their skin color or presumed origin, particularly young people perceived to be Black or Arab, including children. These abusive controls can often lead to more serious police violence, including with fatal outcomes.
> "big alt-rising in the form of AFD, and consequently, they do track social media heavily. There is also a non-insignificant fundamentalist Muslim population"
They are not, though. Alt-right movements all work on shifting the blame. They always find a scapegoat (Jewish people in WW2) for material conditions instead of attacking the root causes.
It seems reasonable to be concerned about a government that wants the power to reveal Internet users, but I couldn’t say on what basis Proton expects legal protection to continue after the move.
Neither of your links mention arrests, one specifically says "None of the suspects were detained". They don't seem to back up the original claim about Germany arresting the most people based on social media posts.
That’s an important distinction. Thank you for referring back to the original wording. They were investigated for violating the criminal code, searched, interrogated, and had devices seized in a number of cases, but seemingly not arrested.
as a german i can confirm that this happens very frequently (way more often than you think). usually it's politicians who file police reports which get prosecuted most of the time. i believe the last government (left wing coalition) built up massive infrastructure to prosecute such offenses. politicians in germany get special protection in terms of speech laws. §188 StGB allows the state to prosecute you severely, even without a private complaint from the politician in some cases.
I just tried "analyze this audio file recording of a meeting and notes along with a transcript labeling all the speakers" (using the language from the parent's comment) and indeed Gemini 3 was significantly better than 2.5 Pro.
3 created a great "Executive Summary", identified the speakers' names, and then gave me a second by second transcript:
[00:00] Greg: Hello.
[00:01] X: You great?
[00:02] Greg: Hi.
[00:03] X: I'm X.
[00:04] Y: I'm Y.
...
I made a simple webpage to grab text from YouTube videos:
https://summynews.com
Great for this kind of testing?
(want to expand to other sources in the long run)
I saw that the keyboard is operated by the left hand only, and here is my (totally personal and somewhat adjacent) problem with it.
My left hand is the one which has suffered the most from the many hours of using a keyboard over the last ~25 years. While the right hand has the occasional break from the keyboard when using the mouse, the left hand is constantly glued to the keyboard.
It also has a much tougher job - all the cmd, ctrl, alt and shift combinations are mostly done using the left hand - e.g. on Mac, when you cmd+shift+select text with the arrows, you must use the left hand - so it ends up doing so much more work.
I wonder if there are other people with the same problem. My right hand never hurts after many hours of computer work - but the left hand does. It hurts even now that I am typing and I haven't even spent more than an hour doing it.
Please do your hands a favor and get yourself an ergonomic keyboard! Thumb keys especially alleviate the issues with modifiers that you're describing.
I use a Glove80 as my daily driver right now, although the price tag to build quality ratio is not amazing, so idk if I would recommend it particularly. But there's a massive world of ergo keyboards out there--surely the right one for you exists somewhere!
I'm at the point where I need to redefine cmd-z, x, c, v because my left thumb doesn't want to do that dance anymore. It's been painful for a year, and I finally got to the point of redefining it a couple weeks ago. And the muscle memory is so ingrained that I changed it to option ', 1, 2, 3 and never thought about the idea that my right hand could do it.
They started with the left hand as requested, but made right hand version as well.
I wish these were also commercially available... I'd love to pay for one of these... I know it's open sources, but I don't know the language nor do I have the skills to construct one myself.
I was getting hand pain, switched to a Totem keyboard. 38 keys, 6 thumb keys. Column splay & never reaching for number row has greatly helped. 20g actuation means little force needed
Rust has been such a "pain" to learn - at least compared to other, more straight-forward languages. But boy does it feel good when you know that after a few back and forths with the compiler, the code compiles and you know, there is not much that is going to go wrong anymore.
Of course, I am exaggerating a bit - and I am not even that experienced with Rust.
But after coding with Ruby, JS/TS and Python - it feels refreshing to know that as long as your code compiles, it probably is 80-90% there.
Rust is the most defect-free language I have ever used.
I'd wager my production Rust code has 100x fewer errors than comparable Javascript, Python, or even Java code.
The way Result<T,E>, Option<T>, match, if let, `?`, and the rest of the error handling and type system operate, it's very difficult to write incorrect code.
The language's design objective was to make it hard to write bugs. I'd say it succeeded with flying colors.
Now try an actual functional programming language. I like Rust too but those features all come from FP, and FP languages have even more features like that that Rust doesn't yet or can't have.
> Rust has been such a "pain" to learn - at least compared to other, more straight-forward languages. But boy does it feel good when you know that after a few back and forths with the compiler, the code compiles and you know, there is not much that is going to go wrong anymore.
I found that at some point, the rust way kinda took over in my head, and I stopped fighting with the compiler and started working with the compiler.
One big source of bugs in TS is structural sharing. Like, imagine you have some complex object that needs to be accessed from multiple places. The obvious, high performance way to share that object is to just pass around references wherever you need them. But this is dangerous. It’s easy to later forget that the object is shared, and mutate it in one place without considering the implications for other parts of your code.
I’ve made this mistake in TS more times than I’d like to admit. It gives rise to some bugs that are very tricky to track down. The obvious ways to avoid this bug are by making everything deeply immutable. Or by cloning instead of sharing. Both of these options aren’t well supported by the language. And they can both be very expensive from a performance pov. I don’t want to pay that cost when it’s not necessary.
Typescript is pretty good. But it’s very normal for a TS program to type check but still contain bugs. In my experience, far fewer bugs slip past the rust compiler.
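A minimal sketch of the shared-reference trap described above (the object shape is illustrative, not from any real codebase):

```typescript
// A shallow copy via spread shares nested data: both copies point
// at the same underlying array.
interface Config { tags: string[] }

const shared: Config = { tags: ["base"] };
const a = { ...shared };
const b = { ...shared };

a.tags.push("added-by-a");
// b silently sees the mutation, because a.tags and b.tags are the same array:
console.log(b.tags); // ["base", "added-by-a"]

// Mitigation: clone the nested data instead of sharing it,
// at the copying cost mentioned above.
const c: Config = { tags: [...shared.tags] };
c.tags.push("added-by-c");
console.log(shared.tags.includes("added-by-c")); // false
```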
Appreciate it, that makes a lot of sense. I feel like I've been trained to favor immutability so much in every language that I sometimes forget about these things.
Yes, immutability is great for safety. But the copies you have to make to keep everything immutable exact a price in copying and garbage collection.
Rust is advertised as having fearless concurrency. That's true, but not that important as concurrency is not that common. What's important to everyday programming is Rust provides fearless mutability. The fearless concurrency you get with that is just a bonus.
Fearless mutability gives Rust the same safety as a functional language, without the speed or space cost. IMO, it's Rust's true secret sauce.
Similar. I mostly design my code around something like pipes and lifetimes. The longer something needs to live, the closer it is to the start of the program. If I need to mutate it, I take care that the actual mutation happens in one place, so I can differentiate between read and write access. For anything else, I clone and update. It may not be efficient and you need to track memory usage, but the logic is far simpler.
Not parent comment, but TS is generally safe if you have types correct at system borders, but very scary when you don't. Some of the most impactful bugs I've seen are because a type for an HTTP call did not match the structure of real data.
Also, many built-in functions do not have sufficient type safety - Object.entries() for instance.
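A small illustration of the Object.entries() gap (the types and values here are mine):

```typescript
interface Point { x: number; y: number }

// Structural typing lets a wider object in through a Point-typed door...
const wider = { x: 1, y: 2, z: 3 };
const p: Point = wider; // compiles fine

// ...and Object.entries() types keys as plain string, not "x" | "y",
// so the extra "z" entry shows up at runtime with no compiler warning:
console.log(Object.entries(p).length); // 3
for (const [key, value] of Object.entries(p)) {
  console.log(key, value); // key: string, so key typos go unchecked
}
```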
That is an issue with how TS works, but it can be significantly improved upon by using a library to verify the structure of deserialized data. zod is one example, or you could use protobufs. Fundamentally, this is an issue with any programming language. But having your base "struct"-like type be a hashmap leads to more mistakes as it will accept any keys and any values.
I disagree that this is an issue in every language - the problem is that in other languages the validation against some schema is more or less required for unmarshalling, and it's optional in TS.
Seeing a deserialization error immediately clues you in that your borders are not safe. Contrast that with TypeScript, where this kind of issue can lead to an insidious downstream runtime issue that might seem completely unrelated. This second scenario is very rare in other languages.
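A hand-rolled sketch of that kind of border check (zod and friends do this more ergonomically; the `User` shape here is just an example):

```typescript
interface User { id: number; name: string }

// Validate untrusted JSON at the system border instead of casting and hoping.
function parseUser(json: string): User {
  const raw: unknown = JSON.parse(json);
  if (
    typeof raw === "object" && raw !== null &&
    typeof (raw as Record<string, unknown>).id === "number" &&
    typeof (raw as Record<string, unknown>).name === "string"
  ) {
    const r = raw as { id: number; name: string };
    return { id: r.id, name: r.name };
  }
  // Fails loudly here, at the border, not three modules downstream.
  throw new Error("payload does not match User shape");
}

console.log(parseUser('{"id": 1, "name": "Ada"}').name); // "Ada"
```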
I don't know Rust, and I'm genuinely curious: How does it improve over that problem?
When you call a REST API (or SQL query for that matter), how does it ensure that the data coming back matches the types?
TS allows you to parse the JSON, cast it into your target type, done (hiding correctness bugs, unless you use runtime verification of the object shape, see sibling comment). Does Rust enforce this?
It validates the object shape at runtime, much like you can do in Typescript with a library like Zod. The key difference in this case is that Rust makes it scary to not validate data while Typescript will gladly let you YOLO it and blow your legs off, even in strict mode.
It’s not. And trying to just be a transformation of the source to JS without its own standard library (mostly, some old stuff doesn’t follow this) means it really isn’t possible with just TS alone.
That’s OK with me. I use TS because I like it and hate the total lack of safety in JS. I have to use JS on the web, so TS it is.
If I don’t need it to run on a webpage, I wouldn’t be writing it in TS. I like other languages more overall.
If you type correctly at the borders of your system, then TS will be very close to a formal verification of your code. This won't catch all bugs, but even broad categories for your data are helpful. If you know your input is a non-null string, then it will warn you of every non-string usage. It won't catch whether it's a name or an email, but knowing when someone tries to divide it by zero is helpful.
Typescript doesn't even support notions like "unsigned integer". It is not a serious attempt at type-safety; its main claim to fame is "better than raw Javascript" which is not saying much.
I’ve never been a JS on the server person, I was used to other languages when that was developed.
Well I think I would prefer python, but simply because it’s “more traditional“ and I realize that’s specious reasoning, I prefer to use strongly typed languages whenever possible.
I would generally reach for Java since it’s the language I’m most proficient in due to my career. There’s also Go, which I played with long ago, or maybe I’d try Rust.
This is only for anything important. If I was just toying with something locally I’d probably do whatever was fastest. In that case Python or JS might be my choice for a very tiny script.
For the longest time I had been using GPT-5 Pro and Deep Research. Then I tried Gemini's 2.5 Pro Deep Research. And boy oh boy is Gemini superior. The results of Gemini go deep, are thoughtful and make sense. GPT-5's results feel like vomiting a lot of text that looks interesting on the surface, but has no real depth.
I don't know what has happened, is GPT-5's Deep Research badly prompted? Or is Gemini's extensive search across hundreds of sources giving it the edge?
I’ve been using `Gemini 2.5 Pro Deep Research` extensively.
( To be clear, I’m referring to the Deep Research feature at gemini.google.com/deepresearch , which I access through my `Gemini AI Pro` subscription on one.google.com/ai . )
I’m interested in how this compares with the newer `2.5 Pro Deep Think` offering that runs on the Gemini AI Ultra tier.
For quick look‑ups (i.e., non‑deep‑research queries), I’ve found xAI’s Grok‑4‑Fast ( available at x.com/i/grok ) to be exceptionally fast, precise, and reliable.
Because the $250 per‑month price for Gemini’s deep‑research tier is hard to justify right now, I’ve started experimenting with Parallel AI’s `Deep Research` task ( platform.parallel.ai/play/deep-research ) using the `ultra8x` processor ( see docs.parallel.ai/task‑api/guides/choose-a-processor ). So far, the results look promising.
I don't know about Gemini pro super duper whatever, but the freely available Gemini is as sycophantic as ChatGPT, always congratulates you for being able to ask a question.
And worse, on every answer it offers to elaborate on related topics. To maintain engagement i suppose.
The ChatGPT API offers a verbosity toggle, which is likely a magic string they prefix the prompt with, similar to the "juice" parameter that controls reasoning effort.
Ruby has a lot of these hidden gems (pun intended).
I wouldn't be as much in love with programming, if it wasn't for Ruby. And although I use many other programming languages these days, Ruby will forever have a special place in my heart.
Ruby, and Ruby on Rails is a treasure trove of little handy bits you can use if you just know where to look. I really miss some aspects of ruby (I just don't have a chance to use it these days).
I'm worried LLMs will make people ignore what's already there and auto generate useless functions instead of using what's there in Ruby/Rails. I've been using Rails for almost 20yrs (on and off) and I can't count the number of times I did something only to find out it was either natively supported in a recent version... or at least a new best practice in modern Rails.
You find the same thing with JS to an even higher degree, but there's always 10 options in NPM and they all need to be updated every year otherwise the other 20+ packages you depend on can't be upgraded. There's a stark contrast in maintenance overhead and DX between frontend and server side.
> I'm worried LLMs will make people ignore what's already there and auto generate useless functions instead of using what's there in Ruby/Rails.
I think you're probably right. But fwiw, as a non-Rubyist who values good style and is recently working in a one-off Ruby codebase, I've found it easy to use LLMs to write Ruby code that is idiomatic and leverages built-ins well. I use ChatGPT 5 Thinking for this, and I don't let it generate any code that I use directly. I ask it about ways to do things, including stuff built-in to the stdlib, and sometimes have it generate 3 or 4 implementations. Then I compare them, consult the language docs or stdlib API docs or the pickaxe book, and choose what seems the most stylish or composable. I then write it out by hand, bearing in mind what I just learned, and see what Rubocop has to say about its style.
I wouldn't say LLMs have been essential in this, but they've been pretty convenient. Because Ruby has relatively a lot of special syntax, it's also nice to be able to inquire directly about the meaning of some syntax in a bit of generated code-- especially when it involves characters that are (for some reason) ungoogleable, like an ampersand.
I think people who use LLMs to generate code and people that embrace agentic coding AIs and "vibe coding" will absolutely fall into the pattern you describe. But RTFMers and developers who care about craftsmanship will probably use LLMs as another discovery mechanism for the stdlib, popular Gems, and popular style conventions.
I gave it a try! I tried to get the odd-ball perfectly circular cut and square dimensions, but I'm mostly just eyeballing it. Haven't tried printing it yet, but I have some nice red filament that I think is going to look good!
What a phenomenal launch it has been! Thanks a lot to everyone, for the many ideas and feedback. It has really made me push harder to make qqqa even cooler.
Since I launched it yesterday, I added a few new features - check out the latest version on Github!
Here is what we have now:
* added support for OpenRouter
* added support for local LLMs (Ollama)
* qqqa can be installed via Homebrew, to avoid signing issues on macOS
* qq/qa can ingest piped input from stdin
* qa now preserves ANSI colors and TTY behavior
* hardened the agent sandbox - execute_command can't escape the working directory anymore
* history is disabled by default - can be enabled at --init, via config or flag
* qq --init refuses to override an existing .qq/config.json
Thanks. I guess it all depends on the perspective. I do not see how editing the command is a good tradeoff here in terms of complexity+UI. Once you get the command suggested by the LLM, you can quickly copy and modify it, before running it.
qqqa uses history - although in a very limited fashion for privacy reasons.
I am taking note of these ideas though, never say never!
> Once you get the command suggested by the LLM, you can quickly copy and modify it, before running it.
Copying and pasting tends to be a very tedious operation in the shell, which usually requires moving your hands away from the keyboard to the mouse (there are terminals which allow you to quick-select and insert lines but they are still more tedious than simply pressing enter to have the command on the line editor). Maybe try using llm-cmd-comp for a while.
> I do not see how editing the command is a good tradeoff here in terms of complexity+UI.
I don't find it a tradeoff, I think it's strictly superior in every way including complexity. llm-cmd-comp is probably the way I most often interface with llms (maybe second to basic search-engine-replacement) and I almost always either 1. don't have the file glob or the file names themselves ready (they may not exist yet!) at the time when I want to start writing the command or they are easier to enter using a fuzzy selector like fzf 2. don't want the llm to do weird things with globs when I pass them directly and having the shell expand them is usually difficult because the prompt is not a command (so the completion system won't do the right thing).
But even in your own demo it is faster to use llm-cmd-comp and you also get the benefit that the command goes into the history and you can optionally edit it if you want or further revise the prompt! It does require pressing enter twice instead of "y" but I don't find that a huge inconvenience especially since I almost always edit the command anyway.
Again, try installing llm-cmd-comp and try out your demo case.