yes, like i said, throw an overlay on that motherfucker and ignore the fact that a customer request enters the network at the cloud provider and is then proxied off to the final destination, possibly with multiple hops along the way.
you can't just slap an overlay on and expect everything to work in a reliable and performant manner. yes, it'll work for your initial tests, but then shit gets real when you find that the route from datacenter a to datacenter b is asymmetric and/or shifts between providers, altering site-to-site performance on a regular basis.
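to make that concrete, here's the kind of probe you'd run from each site to catch it; a minimal sketch in python, with made-up hostnames. rtt on its own is a round trip, so it can't tell you which direction shifted, but tracking the spread over time (and pairing it with traceroutes from both ends) is how you catch the provider path changing under you:

```python
# minimal cross-site latency probe; run from each datacenter against
# the others. hostnames/ports are hypothetical placeholders.
import socket
import statistics
import time

PEERS = [("dc-b.example.internal", 443), ("dc-c.example.internal", 443)]
SAMPLES = 20

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """time a full tcp handshake; a rough proxy for path rtt."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

for host, port in PEERS:
    rtts = []
    for _ in range(SAMPLES):
        try:
            rtts.append(tcp_rtt_ms(host, port))
        except OSError:
            pass  # count failures separately in a real probe
        time.sleep(0.5)
    if rtts:
        # the median-vs-p95 spread is what moves when the route flips
        # between providers; rtt won't isolate the direction, so pair
        # this with traceroutes from both ends.
        p95 = sorted(rtts)[int(len(rtts) * 0.95)]
        print(f"{host}: median={statistics.median(rtts):.1f}ms p95={p95:.1f}ms")
    else:
        print(f"{host}: unreachable")
```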
the concept of bursting into on-prem is the most offensive bit of the original comment. at the exact moment your site traffic peaks, you're adding an extra network hop and a proxy into the mix, with a subset of your traffic getting shipped off to another datacenter over internet-quality links.
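back-of-the-envelope on what that hop costs you (every number here is invented for illustration):

```python
# effect of bursting 20% of peak traffic to a remote dc over a path
# that adds ~80ms round trip. all numbers are illustrative.
local_p50_ms = 25.0     # typical in-cloud request latency
extra_hop_ms = 80.0     # added round trip to the burst site
burst_fraction = 0.20   # share of requests shipped off at peak

burst_p50_ms = local_p50_ms + extra_hop_ms
mean_ms = (1 - burst_fraction) * local_p50_ms + burst_fraction * burst_p50_ms
print(f"mean latency at peak: {mean_ms:.0f}ms")  # 41ms vs a 25ms baseline
# worse than the mean: that slow 20% becomes your tail, so p95/p99
# jump to the remote path's latency exactly when traffic peaks.
```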
a) Not every Kubernetes cluster is customer-facing.
b) You should be architecting your platform to accommodate these very common networking scenarios, e.g. with edge caching (see the sketch after this list), since slow backends can be caused by a range of non-networking issues as well.
c) Many cloud providers (even large ones like AWS) are hosted in or have special peering relationships with third-party DCs, e.g. [1]. So there are no "internet-quality links" if you host your equipment in one of the major DCs.
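For (b), a minimal sketch of the edge-caching idea, assuming a generic TTL cache in front of a potentially slow or remote origin (names and TTLs are illustrative, not any particular stack):

```python
# Serve from a local TTL cache and only fall through to the origin on
# a miss, so cross-DC (or slow-backend) latency is paid once per TTL
# window instead of once per request.
import time
from typing import Callable, Optional

class TtlCache:
    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, bytes]] = {}

    def get(self, key: str) -> Optional[bytes]:
        hit = self._store.get(key)
        if hit is None:
            return None
        stored_at, value = hit
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired; treat as a miss
            return None
        return value

    def put(self, key: str, value: bytes) -> None:
        self._store[key] = (time.monotonic(), value)

def edge_fetch(key: str, cache: TtlCache,
               origin: Callable[[str], bytes]) -> bytes:
    """Return cached bytes when fresh; otherwise hit the origin once
    and repopulate."""
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = origin(key)  # the only call that crosses the slow path
    cache.put(key, value)
    return value
```

The same reasoning holds whether the origin is slow because of an inter-DC hop or because of an unindexed query, which is the point of (b).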