
What I’ve never understood about this approach is that it claims to be “network transparent,” but you need to add client and server annotations. So it’s very much network-aware.


The author says somewhere that, paraphrasing, where something runs in a distributed system is essential complexity. I think the transparent part is where/when the communication between front end and back end happens. The "transparency", whether or not it's the right term, is relative to doing software development with full awareness of the network interactions, versus just calling out what runs where. No adding endpoints for everything, no managing HTTP calls or webhooks, etc.


In the video Dustin also makes the point that in v3 all the client and server annotations can now be kept fully isolated from the parts of the code that model the essential complexity (i.e. any dynamic scoping or function parameters used by a 'pure' function are also network transparent), and this dramatically reduces the amount of global coupling across the codebase.


The site macros (e/client & e/server) let the programmer declare what site an effect must run on. The network is not explicit, only implied. For example, platform calls like (query-database) or (.createTextNode js/document) or (check-password) are inherently sited. Siting is essential complexity (arguably the essence of a distributed system), and consequently we as programmers are hyper-aware of where (at which site) our effects must run, and we require perfect control over their placement.
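To illustrate, a minimal sketch of what siting looks like in Electric Clojure. This is not from the thread; the component, `query-todos`, and `db` are assumed names for illustration, and the exact `dom/*` API may differ by version:

```clojure
;; Hypothetical Electric Clojure component. The e/server and e/client
;; macros declare the site; Electric manages the network hop implicitly.
(e/defn TodoList []
  (e/client
    (dom/ul
      (e/server
        ;; query-todos is an assumed server-sited platform call,
        ;; so this loop must run on the server, near the data
        (e/for [todo (query-todos db)]
          (e/client
            ;; DOM effects are inherently client-sited
            (dom/li
              (dom/text (:text todo)))))))))
```

Note there is no endpoint, HTTP call, or serialization code anywhere: the only network-related annotations are the site declarations themselves.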


How does this implicit, "bottom up" definition of the network boundary between client and server cope with version incompatibilities? You, as the developer, control what version of your code is running on the server, but you don't have direct control over what version is running in the browser.

In approaches with an explicit API, you can explicitly maintain backward compatibility for a period of time until you believe that enough browsers have "caught up" to newer versions of the software running on the server to allow you to retire that backward compatibility.


Haven't personally encountered this yet – it seems keeping some stale servers around and routing to them would be sufficient? Long term, over-the-air upgrades (hot-migrating connected clients) seem within reach, if a bit researchy – but the durable-workflow projects seem to be making headway on the problem.



