Or similarly the difference between reading/listening to a foreign language vs. writing/speaking one. Knowing how to read code, or learn algorithms or design, is different from actually writing it. It's the difference between theory and practice.
> "Underlying ideas" means cherry-picking opinions that suit your fancy while ignoring those that don't.
Yes, that is how terminology evolves: it stops matching a rigid definition that was written down in a different era of best-practice coding beliefs. I'll admit I had trouble mapping the DDD OO concepts from the original book(s) to the systems I work on now, but there are more recent resources that use the spirit of DDD, Domain Separation, and Domain Modeling outside of OO contexts. You're right that there is no single recipe - take the good ideas and practices from DDD and apply them as appropriate.
And if the response is "that's not DDD", well you're fighting uphill against others that have co-opted the buzzword as well.
You can remove JSON fields, at the cost of breaking clients at runtime that expect those fields. Of course the same can happen with any deserialization library, but protobufs at least make the contract more explicit - and you may also have an easier time tracking down consumers still on older versions.
For the missing-field case, whenever I use JSON, I always start with a struct of sane defaults, then overwrite those with the externally provided values. If a field is missing, it gets handled reasonably.
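A minimal sketch of that defaults-then-overwrite pattern in Python (the `ClientConfig` record and its fields are made up for illustration):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ClientConfig:
    # Sane defaults live on the struct itself.
    timeout_s: float = 30.0
    retries: int = 3
    endpoint: str = "https://api.example.com"

def load_config(raw: str) -> ClientConfig:
    defaults = asdict(ClientConfig())
    # Only take keys the payload actually provides (and that we know about);
    # a missing field silently keeps its default instead of blowing up.
    provided = {k: v for k, v in json.loads(raw).items() if k in defaults}
    return ClientConfig(**{**defaults, **provided})

cfg = load_config('{"retries": 5}')
print(cfg.timeout_s, cfg.retries)  # 30.0 5
```

The nice property is that adding a new field with a default is backward-compatible with every payload already in the wild.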
The request / data fetching is interesting in how "easy" it is to write. I did a basic perusal of the examples, but I'd be interested to see what it looks like with rate-limited endpoints and concurrent requests.
Another tangentially related project is https://steampipe.io/ though it is for exposing APIs via Postgres tables and the clients are written using Go code and shared through a marketplace.
Great question! Rate limiting and concurrency are absolutely critical for production API integrations. Here's how Sequor handles these challenges:
Rate Limiting:
* Built-in rate limiting controls at the source level (requests per second/minute/hour): each http_request operation refers to an http source that is defined separately
* Automatic backoff and retry logic with delays
* There is an option for per-endpoint rate limit configuration, since different API calls can have different limits
* Because it is at the source level, it works properly even for parallel requests to the same source.
The key idea is that rate limits are handled by the engine - the user never has to handle them explicitly.
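To illustrate why source-level limiting works under parallelism: a single token bucket shared by all requests to one source throttles them collectively, no matter how many threads are firing. This is a generic sketch of the idea, not Sequor's actual implementation:

```python
import threading
import time

class SourceRateLimiter:
    """One token bucket per source, shared by all requests to that source,
    so the limit holds even when requests run in parallel threads."""

    def __init__(self, rate_per_sec: float, burst: int = 1):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self) -> None:
        while True:
            with self.lock:
                now = time.monotonic()
                # Refill tokens based on elapsed time, capped at capacity.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)  # sleep outside the lock so other threads can refill/check

# Every request to the same source shares one limiter instance.
limiter = SourceRateLimiter(rate_per_sec=100)
start = time.monotonic()
for _ in range(5):
    limiter.acquire()
elapsed = time.monotonic() - start  # roughly 4 token-waits at 100/s
```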
Concurrency, on the other hand, is controlled explicitly by the user:
* Inter-operation parallelism is activated by adding begin_parallel_block and end_parallel_block - all operations between those two are executed in parallel
* Intra-operation parallelism: many operations have parameters to partition the input data and run on the partitions in parallel. For example, http_request takes an input table containing the data to be updated via the API, and you can partition that table by key columns into a specified number of partitions.
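The partition-by-key-columns idea above can be sketched generically: hash the key columns so every row with the same key lands in the same partition, then hand each partition to an independent worker. This is an illustrative sketch, not Sequor's code:

```python
from collections import defaultdict

def partition_rows(rows: list[dict], key_columns: list[str],
                   num_partitions: int) -> list[list[dict]]:
    """Hash-partition rows by key columns; rows sharing a key always land in
    the same partition, so per-key ordering is preserved within a worker."""
    parts: dict[int, list[dict]] = defaultdict(list)
    for row in rows:
        key = tuple(row[c] for c in key_columns)
        parts[hash(key) % num_partitions].append(row)
    return [parts[i] for i in range(num_partitions)]

# Hypothetical input table: 10 customers to update via an API.
rows = [{"customer_id": i, "email": f"u{i}@example.com"} for i in range(10)]
partitions = partition_rows(rows, ["customer_id"], 4)
# Each of the 4 partitions could now be processed by its own worker thread.
```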
Thanks for the Steampipe reference! That's a really interesting approach - exposing APIs as Postgres tables is clever, and I'm definitely going to play with it.
I felt the same - having to relearn/look up everything every time I went back to a project, or wanting to do operations that are simple to describe in SQL but that I couldn't wrap my mind around, e.g. using multi-indexed dataframes & aggregations properly. These days I always jump to Polars instead of Pandas - a much more intuitive and consistent API. Tons of props to Pandas for all that they did (and continue to do) in the data space, but their API did not evolve very well IMO.
I've also been wanting to play with Ibis[1] recently, but Polars has been sufficient for me.
I do the same, though my muscle memory is `1=1` instead of `true`.
Of course then you get editors/linters/coworkers that always point out that the 'true' is unnecessary. The trick also doesn't work with ORs (just swap to false), but in practice it seems it's always ANDs being chained.
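For anyone who hasn't seen the trick: starting the WHERE clause with `1=1` (or `true`) means every dynamically appended condition can be prefixed with `AND` uniformly, with no special case for the first one. A small sketch (the `orders` table and filters are made up):

```python
def build_query(filters: dict) -> tuple[str, list]:
    """Build a SELECT with a dynamic WHERE clause; 1=1 lets every
    condition be appended the same way."""
    sql = "SELECT * FROM orders WHERE 1=1"
    params = []
    for column, value in filters.items():
        # Column names assumed to come from trusted code, not user input;
        # values are parameterized to avoid SQL injection.
        sql += f" AND {column} = ?"
        params.append(value)
    return sql, params

sql, params = build_query({"status": "shipped", "region": "EU"})
# "SELECT * FROM orders WHERE 1=1 AND status = ? AND region = ?"
```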
Sharding, pre-allocating leases of blocks of tickets across available resources, and eventual consistency. You don't need to keep the UX transactionally correct; you can say "0 tickets remaining" and then a minute or hour or day later say "100 tickets remaining". For something as popular as Taylor Swift, the fans will keep checking.
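A toy sketch of the pre-allocated-lease idea, under the assumption of a fixed shard count and round-robin routing (a real system would reclaim and rebalance leases):

```python
import itertools

class TicketShard:
    """Each shard leases a block of tickets up front and sells from it
    locally - no cross-shard coordination on the hot path."""

    def __init__(self, lease: int):
        self.remaining = lease

    def sell(self) -> bool:
        if self.remaining == 0:
            # This shard reports "sold out", which may be only temporarily
            # true globally - eventual consistency, not transactional truth.
            return False
        self.remaining -= 1
        return True

total_tickets = 100
shards = [TicketShard(total_tickets // 4) for _ in range(4)]

# Route 100 customers round-robin across shards; each shard's counter is
# updated independently of the others.
sold = sum(1 for shard in itertools.islice(itertools.cycle(shards), 100)
           if shard.sell())
```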
I would also say that could account for the download count differences between the projects. Django may still be used for more monolithic applications, whereas Flask and FastAPI may be the choices for smaller-scoped microservices, resulting in 10x the downloads.
No negative connotation is intended here for "monolithic". On the contrary, if the above assumption is at all true, it highlights an overhead cost of individual microservices.