Yes, TriMap also does quite well, though PaCMAP is faster. This paper [1] (linked from the GitHub repo you mentioned) goes into a fantastic amount of detail comparing UMAP, t-SNE, PaCMAP, and TriMap.
I found it really refreshing that they report a bunch of things they tried that didn't work, in a way that clarifies the problem and gives a lot of insight into the strengths and limitations of both their final method and the leading alternatives.
The big takeaway is that, despite these limitations, UMAP is still useful.
I've used UMAP in the past, and it's not quite as bad as you're suggesting. Points 2, 3, and 4 are things you'd want to verify quantitatively anyway. It's still a fine way to throw points up and start exploring - just don't use it as the end-all, be-all.
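For what it's worth, the exploratory use really is just a few lines. A minimal sketch, assuming the umap-learn package and using sklearn's digits dataset as a stand-in for your own data:

    import matplotlib.pyplot as plt
    import umap  # pip install umap-learn
    from sklearn.datasets import load_digits

    X, y = load_digits(return_X_y=True)  # stand-in for your own data

    # Project to 2D and eyeball the structure; n_neighbors and min_dist
    # are the library defaults, written out so they're easy to play with.
    reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=42)
    embedding = reducer.fit_transform(X)

    plt.scatter(embedding[:, 0], embedding[:, 1], c=y, s=5, cmap="Spectral")
    plt.title("UMAP projection of the digits dataset")
    plt.show()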
I think a common problem is that these techniques get repurposed to solve problems they weren't meant to solve. I've seen multiple people fall into the trap of using these visualizations to guess whether a dataset can be classified with high accuracy. I'm talking about cases where a label already exists - but visualization is used as a cheap preliminary step to decide whether classification is worth attempting at all, whether to pick a weak or a strong classifier, etc.
The problem, of course, is that the insights from visualization are "one-sided": if instances from different classes look separated, then you know a decent classifier would do the job well. But if they don't look separated, you can't conclude they're unclassifiable: for all you know, you just don't have the right hyperparameters. On top of that, projecting d-dimensional data down to 2D/3D is heavily lossy, so even with the right hyperparameters there's a chance you won't see clear separation. If you want to classify, just classify.
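To make the "just classify" point concrete, here's a minimal sketch: a cross-validated baseline in the original d-dimensional space costs about as much compute as the visualization and gives you an actual number. The dataset and model choices below are placeholders, not anything from the paper:

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_digits(return_X_y=True)  # stand-in for your labeled data

    # Cheap baseline in the ORIGINAL feature space - no lossy 2D projection.
    # High accuracy here means the classes are separable even if a UMAP or
    # t-SNE plot of the same data looks like one overlapping blob.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")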
Yes, the visualizations are wonderful. It must have taken quite some time to generate the data that lets you play with the hyperparameters so smoothly in some of the examples.