Hacker News | chriscbr's comments

Here's my site! https://rybicki.io/


On my current team we run a centralized task scheduler used by other products in our company that manages on the order of ~30M schedules. To that end, it's a home-grown distributed system built on top of Postgres and Cassandra with a whole control plane and data plane. It's been pretty fun to work on.

There are two main differences between our system and the one in the post:

- In our scheduler, the actual cron (aka recurrence rule) is stored along with the task information. That is, you specify a period (like "every 5 minutes" or "every second Tuesday at 2am") and the task will run according to that schedule. We try to support most of the RRule specification. [1] If you want a task to just run one time in the future, you can totally do that too, but that's not our most common use case internally.

- Our scheduler doesn't perform a wide variety of tasks. To maximize flexibility and system throughput, it does just one thing: when a schedule is "due", it puts a message onto a queue. (Internally we have two queueing systems it interops with -- an older one built on top of Redis, and a newer one built on PG + S3.) Other teams consume from those queues and do the real work (sending emails, generating reports, etc). The queueing systems offer a number of delivery options (delayed messages, TTLs, retries, dead-letter queues), so the scheduling system doesn't have to handle them.
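At its core, that data-plane loop is just "pop due schedules, enqueue a message, re-arm". A toy sketch in Python (the names and in-memory structures are my own invention, not our actual system, and it only handles fixed periods rather than full RRULEs):

```python
import heapq
from datetime import datetime, timedelta, timezone

def next_fire(after: datetime, period: timedelta) -> datetime:
    """Next occurrence of a simple fixed-period recurrence."""
    return after + period

def tick(schedule_heap: list, queue: list, now: datetime) -> None:
    """Pop every schedule that is due, emit one message per firing, re-arm."""
    while schedule_heap and schedule_heap[0][0] <= now:
        due, schedule_id, period = heapq.heappop(schedule_heap)
        # The scheduler does just one thing: put a message on a queue.
        queue.append({"schedule_id": schedule_id, "fired_at": due})
        # Re-arm the schedule at its next occurrence.
        heapq.heappush(schedule_heap, (next_fire(due, period), schedule_id, period))

# Example: one schedule firing every 5 minutes.
start = datetime(2024, 1, 1, tzinfo=timezone.utc)
heap = [(start, "report-123", timedelta(minutes=5))]
queue = []
tick(heap, queue, start + timedelta(minutes=11))
print([m["fired_at"].minute for m in queue])  # fires at :00, :05, :10
```

The real system persists schedules durably and shards the heap, but the contract is the same: downstream consumers only ever see queue messages.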

Ironically, because supporting a high throughput of scheduled jobs has been our biggest priority, visibility into individual task executions is a bit limited in our system today. For example, our API doesn't expose data about when a schedule last ran, but it's something on our longer-term roadmap.

[1] https://icalendar.org/iCalendar-RFC-5545/3-8-5-3-recurrence-...


I liked how this piece cuts through nostalgia and maps out (or at least gives a sketch of) where the valley actually is today. A few highlights that stood out to me:

- The sharp contrast between the '90s "Jeffersonian", hacker-libertarian spirit and today's more "Hamiltonian", state-capacity mindset, and how they're tied to the geopolitical and technical shifts that nurtured each era.

- The clear taxonomy of the new "tribes" (EA, abundance, American dynamism, New Right, tech ethicists, network-staters) and the axes they vary across -- human nature, progress vs preservation, role of the state. Nadia did a similar breakdown of climate activist tribes in "Mapping out the tribes of climate" a few years ago that I thought was interesting as well.

- There are some data nuggets that upend cliches, e.g. nine VCs controlling ~50% of 2024 fundraising, and tech elites being less anti-government than either party’s base.

- The insight that valley thinking now shapes national politics (e.g. J.D. Vance), showing tech culture's reach beyond product launches.


Same story with the Mojo language, unfortunately.

To me this raises the question of whether this is a growing trend, or whether staying closed source simply tends to be a death sentence for a language in the long term.


I only see around three .cpp files in the entire project?


Look at other FB projects.


I really appreciate this essay.

I've never been a [traditional] artist, but I reckon that those working in the arts, and even areas of the programming world where experimentation is more fundamental (indie game development, perhaps?), would intuit the importance of discovery coding.

Even when you're writing code for hairy business problems with huge numbers of constraints and edge cases, it's entirely possible to support programmers who prefer discovery coding. The key is fast iteration loops. The ability to run the entire application, and all of its dependencies, locally on your own machine. In my opinion, that's the biggest line in the sand. Once your program has to be deployed to a testing environment in order to be tested, it becomes an order of magnitude harder to use a debugger, or intercept network traffic, or inspect profilers, or do test-driven development. It's like sketching someone with a pencil and eraser, but with a 5-10 second delay between when you lift your pencil and when the line appears.

Unfortunately, it seems like many big tech companies, even ones that otherwise seem to use very modern development tooling, still tend to make local development a second-class citizen. And so, discovery coders are second-class citizens as well.


Yea, TIL I'm a discovery coder. Always found planning early in greenfield projects kinda pointless. Planning is almost step 3 or 4. I almost always prototype the most difficult/opaque parts, build operations around testing and revising (how do you know something is good enough?), and then plan out the rest.


Hard agree on local development. I always make apps run locally and include a readme that describes all the steps for someone else to run it locally as well.

Ideally that should be as simple as adding a local app settings file (described in the readme so people don't have to read the code to figure out what to put in it) for secrets and other local config (make sure the app isn't trying to send emails locally, etc.), and running docker compose up. If there are significantly more steps than that, there had better be good reasons for them.
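As a rough sketch of that two-step setup (service names, ports, and the settings file name are made up for illustration):

```yaml
services:
  app:
    build: .
    ports:
      - "8080:8080"
    env_file:
      - appsettings.local.env   # local secrets/config, documented in the readme
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: localdev   # local-only credentials, never used in prod
```

Copy the settings file from the readme, run docker compose up, and the whole app plus its dependencies is running locally.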


Neat concept.

I had a similar idea a few months ago about whether it's possible to achieve some form of comptime with TypeScript. I didn't get too far unfortunately, but I think the implementation would need to interact with the TypeScript compiler API in some way.


Going off of the example on the home page, the language reminds me a lot of Alloy, a model checking language. Alloy lets you describe facts about some discrete system and check for the existence (or nonexistence) of properties within those systems. If you expect some property to hold and it doesn't, Alloy will automatically produce a counter-example for you. Here's an example of a program modeling a file system:

  sig FSObject { parent: lone Dir }
  sig Dir extends FSObject { contents: set FSObject }
  sig File extends FSObject { }

  // A directory is the parent of its contents
  fact { all d: Dir, o: d.contents | o.parent = d }

  // All file system objects are either files or directories
  fact { File + Dir = FSObject }

  // There exists a root
  one sig Root extends Dir { } { no parent }

  // File system is connected
  fact { FSObject in Root.*contents }

  // Every fs object is in at most one directory
  assert oneLocation { all o: FSObject | lone d: Dir | o in d.contents }
I initially thought these model checking languages were purely academic in nature. But then a curious problem came up when I was working at AWS: folks were complaining that IAM policies generated by our library were sometimes growing too large (the limit is usually a few KB) -- often due to redundant statements.

To solve this, a coworker implemented some code for merging IAM policies -- though the merging process wasn't trivial, because IAM policies can have both "Resource" and "NotResource", "Action" and "NotAction", "Principal" and "NotPrincipal", etc. So to prove the algorithm was correct, he wrote up a short Alloy specification[1] (roughly mapping to the library code) showing that merging two policy statements wouldn't change the security posture. As a new engineer on the team, I'll just say it blew my mind that this was possible -- actually using proofs to achieve goals in industry.
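To give a flavor of the property being proved, here's a toy version of the merge condition in Python (my own simplification, not the actual CDK code: plain Allow statements only, no Not* fields or Conditions):

```python
FIELDS = ("Action", "Resource", "Principal")

def mergeable(a: dict, b: dict) -> bool:
    """An Allow statement grants the cross product Action x Resource x Principal.
    Two statements can be safely merged only if they differ in at most one
    field: then the union on that field grants exactly the union of the two
    grants. If two fields differ, the merged statement would grant
    combinations that neither original statement allowed."""
    differing = [f for f in FIELDS if set(a[f]) != set(b[f])]
    return len(differing) <= 1

def merge(a: dict, b: dict) -> dict:
    """Union each field; only valid when mergeable(a, b) holds."""
    return {f: sorted(set(a[f]) | set(b[f])) for f in FIELDS}

a = {"Action": ["s3:GetObject"], "Resource": ["arn:aws:s3:::bucket/*"], "Principal": ["user1"]}
b = {"Action": ["s3:PutObject"], "Resource": ["arn:aws:s3:::bucket/*"], "Principal": ["user1"]}
assert mergeable(a, b)
print(merge(a, b)["Action"])  # ['s3:GetObject', 's3:PutObject']
```

The Alloy spec checks the analogous claim over all statements (including the Not* variants), rather than just the happy path shown here.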

Needless to say, I'm curious to dive into Quint's differences and what kinds of models/specifications it excels with.

[1] https://github.com/aws/aws-cdk/blob/main/packages/aws-cdk-li...


Protip: instead of writing

    (a.resource in b.resource and a.action in b.action and a.principal in b.principal) or
You can write

    {
     a.resource in b.resource
     a.action in b.action
     a.principal in b.principal
    } or // ...
(Also instead of `(some principal) iff not (some notPrincipal)` you can write `some principal <=> no notPrincipal`. Alloy has a lot of cool syntactic sugar!)


> describe facts about some discrete system and check for the existence (or nonexistence) of properties

To me this sounds like Logic Programming and I immediately think of Prolog. Is it fair to compare them?


Yes, but the implementation is very different. These model checkers aren't Turing-complete, and because of that they can give some strong guarantees about what they can and cannot do. Prolog? Shift some things around and watch your program suddenly run forever, or so slowly as to be useless.

If you want to mess around with something very Prolog-like but built on similar underlying tech to these model checkers, try playing with ASP solvers like Clingo/Clasp or DLV.


The links to your Terraform / Kubernetes / GitHub support in the Open Source tab don't work.


Totally fair. The choice to name it "bring" instead of "import" or "use" was mainly to add some flavor to the language, and make it easier to distinguish from the top of the file that "ah, this is a Wing code snippet, not a Zig/TypeScript/Python code snippet".

