The site macros (e/client & e/server) let the programmer declare at which site an effect must run; the network boundary is never explicit, only implied. For example, platform calls like (query-database), (.createTextNode js/document), or (check-password) are inherently sited. Siting is essential complexity (arguably the essence of a distributed system), so we as programmers are hyper-aware of where, i.e. at which site, our effects must run, and we require precise control over their placement.
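For concreteness, here is a minimal sketch of what siting looks like, assuming Electric v2-style namespaces; `query-database` stands in for the hypothetical sited platform call mentioned above:

```clojure
(ns app.todo-count
  (:require [hyperfiddle.electric :as e]
            [hyperfiddle.electric-dom2 :as dom]))

;; The e/server block runs on the server; everything else here runs in
;; the browser. The network hop between the two sites is implied by the
;; siting, never written out explicitly.
(e/defn TodoCount [db]
  (e/client
    (dom/div
      (dom/text
        (str "todos: "
             (e/server (query-database db)))))))  ; hypothetical sited query
```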
How does this implicit, "bottom-up" definition of the network boundary between client and server cope with version incompatibilities? You, as the developer, control which version of your code runs on the server, but you have no direct control over which version runs in the browser.
In approaches with an explicit API, you can deliberately maintain backward compatibility for a window of time, retiring it once you believe enough browsers have "caught up" to the newer software running on the server.
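As a hedged sketch of that strategy (plain Ring-style handlers; the routes and payloads are hypothetical), both versions of an endpoint stay mounted until you judge that stale browsers have drained away:

```clojure
;; Two versions of the same endpoint, served side by side.
(defn todos-v1 [_req] {:status 200 :body "[...old response shape...]"})
(defn todos-v2 [_req] {:status 200 :body "[...new response shape...]"})

(def routes
  {"/api/v1/todos" todos-v1   ; kept alive for clients running old code
   "/api/v2/todos" todos-v2}) ; current contract

(defn app [req]
  (if-let [handler (get routes (:uri req))]
    (handler req)
    {:status 404 :body "not found"}))
```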
I haven't personally encountered this yet; keeping some stale server builds around and routing old clients to them seems sufficient (see the sketch below). Longer term, over-the-air upgrades (hot-migrating connected clients) seem within reach, if a bit researchy, but the durable workflows projects seem to be making headway on the problem.
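One way the "route to stale servers" idea could look, as a hedged sketch: each client reports the build it was compiled against, and a front door pins it to a matching server build until that build drains. The header name, build ids, and handlers below are all hypothetical.

```clojure
;; Stub handlers standing in for two deployed server builds.
(defn old-app [_req] {:status 200 :body "served by the previous build"})
(defn new-app [_req] {:status 200 :body "served by the current build"})

(def builds
  {"build-abc123" old-app    ; still serving clients loaded before the deploy
   "build-def456" new-app})  ; current build

(defn front-door [req]
  (let [client-build (get-in req [:headers "x-client-build"])]
    (if-let [app (get builds client-build)]
      (app req)
      ;; Unknown or retired build: ask the client to reload and pick up new code.
      {:status 426 :body "client out of date, please reload"})))
```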