Hacker News | mikelehen's comments

Good question, and to answer this well we should probably do a blog post or something. In the meantime you could dig into the code since the clients are all open source. :-)

But basically, sync is split into two halves: writes and listens. Clients store pending writes locally until they're flushed to the backend (which could be a long time if the app is running offline). While online, listen results are streamed from the backend and persisted in a local client cache so that the results will also be visible while offline (and any pending writes are merged into this offline view). When a client comes back online, it flushes its pending writes to the backend, where they are executed in a last-write-wins manner (see my answer above to ibdknox for more details on this). To resume listens, the client can use a "resume token" which allows the backend to quickly get the client back up-to-date without needing to re-send already-retrieved results (there are some nuances here depending on how old the resume token is, etc.).
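The write half of the model described above can be sketched roughly as follows. This is an illustrative toy, not the actual Firestore client code; all class and method names are invented for the example.

```python
# Sketch: a client queues pending writes locally and overlays them on its
# cached server snapshot, so reads reflect local edits while offline. On
# reconnect, pending writes are flushed and the backend applies them
# last-write-wins (i.e. the last write to reach the backend sticks).

class Backend:
    def __init__(self):
        self.docs = {}

    def apply(self, doc_id, fields):
        # Last write to arrive wins, field by field.
        self.docs.setdefault(doc_id, {}).update(fields)


class OfflineClient:
    def __init__(self):
        self.server_cache = {}   # last snapshot streamed from the backend
        self.pending = []        # writes not yet acknowledged by the backend
        self.resume_token = None # opaque cursor for resuming listens

    def write(self, doc_id, fields):
        # Queue locally; flushed whenever we are online.
        self.pending.append((doc_id, fields))

    def read(self, doc_id):
        # Offline view = cached server snapshot + pending writes on top.
        doc = dict(self.server_cache.get(doc_id, {}))
        for pending_id, fields in self.pending:
            if pending_id == doc_id:
                doc.update(fields)
        return doc

    def flush(self, backend):
        # On reconnect, send pending writes in order.
        for doc_id, fields in self.pending:
            backend.apply(doc_id, fields)
        self.pending.clear()
```

A quick walkthrough: an offline `write` is immediately visible via `read` (merged over the cache) but doesn't touch the backend until `flush` runs.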


Thank you!


This works similarly to the Realtime Database in that it's last-write-wins (where in the offline case, "last" means the last client to come back online and send its write to the backend). This model is very easy for developers to understand and directly solves many use cases, especially since we allow very granular writes, which reduces the risk of conflicts. But for more complex use cases, you can get clever and implement things like OT conflict resolution as a layer on top of last-write-wins, e.g. similar to how we implemented collaborative editing with www.firepad.io on the Realtime Database.

PS: Hi Chris! :-)


Hey Michael! Congrats on the launch :)

Providing a one-size-fits-all solution here is probably impossible, but it seems like it would be nice to provide some mechanism to be notified that you're making edits based on stale information. If such a mechanism existed, it would be easy to add a bunch of canned merge strategies. In doing so you can probably teach people a little bit about the pitfalls they're likely to run into (these sorts of bugs are insanely difficult to track down), while not really making them do much work.

The approach we've taken in Eve is that we can't solve all these problems for you, but we can at least let you know that things can go sideways and prompt you to make a deliberate decision about what should happen. It's amazing how helpful that ends up being.
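The idea floated above, surfacing the fact that an edit was based on stale information and letting the caller pick a canned strategy, could be sketched like this. This is a hypothetical illustration; none of these names come from Eve or Firestore.

```python
# Sketch: track a per-document version; an edit carries the version its
# author last saw. If that base version is stale, either reject the edit,
# or apply it last-write-wins and report the staleness so the caller can
# prompt the user / log the conflict.

class ConflictError(Exception):
    pass


def apply_edit(doc, edit, base_version, strategy="last_write_wins"):
    """doc: {'version': int, 'fields': dict}; edit: dict of field updates.

    Returns (doc, stale) where `stale` tells the caller the edit was made
    against out-of-date data.
    """
    stale = base_version != doc["version"]
    if stale and strategy == "reject":
        raise ConflictError(
            f"edit was based on version {base_version}, doc is at {doc['version']}"
        )
    # Default strategy: apply anyway (last write wins), but surface `stale`.
    doc["fields"].update(edit)
    doc["version"] += 1
    return doc, stale
```

Even the minimal "apply anyway but tell me it was stale" return value is enough to build the deliberate-decision prompt described above.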


Thanks for the feedback. I think you're right and we're interested in exploring what we can do to help people more in the future. One of the really nice things about Cloud Firestore is that documents are versioned with timestamps in such a way that we could definitely detect and expose conflicts and let you decide how to deal with them... It's mostly a matter of identifying the common use cases and then figuring out the right API to make them possible without going too far into the deep end of conflict resolution.


It looks to me like Firestore's API doesn't include a "default" way to upload user edits to documents. Conflict detection is possible using transactions - https://cloud.google.com/firestore/docs/manage-data/update-d... - you can do something like HTTP's PUT If-Unmodified-Since (or PUT If-Match).
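The If-Match analogy can be made concrete with a small sketch: an update succeeds only if the document's version still matches the one the client read, which is essentially what a read-modify-write transaction gives you. Invented names throughout.

```python
# Sketch: compare-and-set on a per-document version, analogous to HTTP PUT
# with If-Match. A version mismatch means someone else wrote in between, so
# we fail loudly instead of silently overwriting their edit.

class PreconditionFailed(Exception):
    pass


class Store:
    def __init__(self):
        self.docs = {}      # doc_id -> fields
        self.versions = {}  # doc_id -> monotonically increasing version

    def get(self, doc_id):
        return self.docs.get(doc_id), self.versions.get(doc_id, 0)

    def put_if_match(self, doc_id, fields, expected_version):
        # Reject the write if the doc changed since the client read it.
        if self.versions.get(doc_id, 0) != expected_version:
            raise PreconditionFailed(doc_id)
        self.docs[doc_id] = fields
        self.versions[doc_id] = expected_version + 1
```

On `PreconditionFailed`, the client re-reads the document, re-applies its change, and retries, i.e. the usual read-modify-write loop.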


Good point. Read-modify-write transactions are a good way to detect conflicts and get a chance to handle them, but unfortunately they only work while the client is online. If the client is offline, the transaction will fail, so they're not useful for general conflict resolution. This was an intentional decision because there's no straightforward way to preserve the intent of the transaction across app restarts. But there may be options for adding some sort of conflict resolution strategy in the future that leverages the same underlying primitives transactions use today.


> OT conflict resolution as a layer on top of last-write wins

Can you link to somewhere where this layering is explained?

The www.firepad.io site has documentation on how to use the editor, but I'm interested in how "OT on top of last-write wins" is achieved.


Great question. This is a very real pain point with dynamic content in today's world of bots / crawlers. Many sites right now are completely or partially invisible to crawlers.

As you point out, pre-rendering content is the prescribed way to solve this and there are some existing solutions (prerender.io, brombone, etc.) that are a good start, but this is still a confusing / hard problem for people to solve when they'd like to focus on building their app instead.

So we're keenly looking into how we can best integrate with these sorts of services or provide our own solution as part of our hosting offering. Stay tuned!


Ok. Great. Glad you are aware of the problem and evaluating solutions. Thanks!


Thanks! We worked hard to try to make a tutorial that communicated the simplicity and power of the API. Glad you liked it!


This is a big question, and I'm biased by working at Firebase, so I'd welcome somebody from the community chiming in with their experiences. But one key differentiator worth mentioning is the realtime aspect of Firebase.

We believe that modern apps should be client-side apps that update in realtime as changes happen, without having to refresh the page or continually poll the server for updates. So this is baked into the core of Firebase. All of our features and APIs (and our new Hosting service!) were designed around this concept of how modern apps should be built.


In short, both. :-) This was a commonly-voiced pain point for our existing customers and fits very well with our vision to make Firebase the best platform for building modern apps.

But when we do something, we like to do it "right," and so we also think Firebase Hosting comes with a very compelling feature set: simple deploy/rollback, automatically provisioned SSL, and a global CDN. So we're optimistic it'll also attract new developers to the Firebase platform.


Thanks for the feedback! We'd love to know what benchmarks would make you feel more comfortable. Internally, we have a lot of monitoring and diagnostics to make sure everything is running optimally. Downtime like yesterday's is rare and will become even rarer as we continue to advance our infrastructure.

In general, I agree with your point though. That's why I'd recommend using third-party monitoring / measurement, even if we did expose more benchmarks for you. It's important to understand your external dependencies and verify they meet the service level you require.


Yes. We think our hosting service and realtime backend complement each other nicely for building modern web apps, but you certainly don't have to use both. :-)


For context, 2000 concurrent connections would be quite a large site. If you're hitting 2000 concurrents, $500 probably wouldn't be an issue for you. It's also worth noting that Firebase employs burstable billing at the 95th percentile, so only sustained overuse within the monthly billing period will result in a surcharge.

As for why we charge for connections in general, they do tend to be the most expensive thing to scale. They're also useful as a proxy for how "big" a site is (in terms of users). They're kind of the analog to "page views" in today's world of single-page apps that update in real-time.
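"Burstable billing at the 95th percentile" can be illustrated with a short sketch: sample the concurrent-connection count over the month, discard the top 5% of samples, and bill on the highest remaining value. The sampling details and the nearest-rank percentile method here are assumptions for illustration, not Firebase's actual billing code.

```python
import math


def billable_concurrents(samples):
    """samples: concurrent-connection counts sampled over the billing month.

    Returns the 95th-percentile sample (nearest-rank method), so short
    spikes in the top 5% of samples don't raise the bill.
    """
    ordered = sorted(samples)
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[idx]
```

For example, a month that sits at 1500 concurrents with spikes to 3000 in under 5% of samples would bill at 1500; only sustained overage moves the billable number.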


Thanks for the feedback! Now that we've got the core deploy / rollback tooling in place, we're definitely looking for ways to plug into other common workflows (git, Dropbox, etc.). Stay tuned!


A git deployment option would be great, similar to Heroku I guess.


Git integration is on our roadmap. However, you can also set this up yourself now by adding your own git hook that calls 'firebase deploy' on push. For more information, see the documentation for the command-line tool here: https://www.firebase.com/docs/command-line-tool.html
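A hook along the lines suggested above could look like this (git hooks can be any executable, so Python works; save it as e.g. `.git/hooks/post-receive` and mark it executable). The `deploy` wrapper and its default command are illustrative assumptions.

```python
#!/usr/bin/env python3
# Hypothetical git hook: run `firebase deploy` after each push.

import subprocess
import sys


def deploy(cmd=("firebase", "deploy")):
    """Run the deploy command and return its exit code.

    The command is parameterized so it can be swapped out, e.g. to add
    --project flags or to test the hook with a stub command.
    """
    result = subprocess.run(cmd)
    return result.returncode


if __name__ == "__main__":
    sys.exit(deploy())
```

Propagating the command's exit code means a failed deploy makes the hook itself fail, which is visible in the push output.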

