> You’re not a mega company
Right, I'm asking why Steam, HTC and Meta did this, which are in the top <x> tech companies by market cap and spend billions on this stuff. Totally get why a small VR device dev would not do this.
> OS, drivers and libraries for (3D) graphics up and running
Right, except that I am not saying one should build a kernel, and WebGL, CUDA, etc. are kernel-dependent, not OS-dependent
> assurance that you still can get hardware to deploy your stuff on X years from now
I am unaware of any Android-specific hardware features to date (Linux-specific, maybe, but not Android). You can literally add any bootloader, load a Linux kernel, and get drivers running on most phones, even without Android; where "Android" is necessary, it's simply an artifact of the company maintaining the phone having built the drivers into their distro and not released them separately.
> You also don’t want to order a million units up front.
Again, this seems unrelated. I'm talking about building the VR hardware, i.e. not ordering off the shelf and white-labeling (unless your claim here is that Meta, HTC, Steam, etc. are doing white labeling, which doesn't seem to be the case).
Right, I'm saying for a VR device, not a mobile phone. It makes perfect sense why Android would be the choice for a mobile phone. I'm rather confused about what you are replying to.
Expand? The design constraints seem very dissimilar:
- weight is important, form factor isn't
- any app is either doing 3D rendering or integrated into a windowing system à la a traditional desktop
- they are not constantly-on, they are used for long periods with breaks
- they are limited by processing power not by UX
- the peripherals are very complex, to the extent of requiring access to hand and/or controller movement at a very fine level
If anything they are similar to laptops, but overall they are their own device class.
If "similar to mobile phones" means "similar chips and batteries"... sure, but so are laptops built after 2020. Beyond that I don't quite understand the comparison.
Did add a quick explanation about methods at the end ... I forgot that when I say "you can read the science around this topic to see how" and provide some links to examples of papers, people don't actually do it.
It's more like, in a world with thousands of oncologists that treat pancreatic cancer, saying: you should go to one. I'm not going to recommend mine in particular, but I went to one and she cured my cancer, so maybe you should give it a shot.
This seems like an insane stance to have, it's like saying businesses should ship their own stock, using their own drivers, and their in-house made cars and planes and in-house trained pilots.
Heck, why stop at having servers on-site? Cast your own silicon wafers, after all you don't want Spectre exploits.
Because you are worse at it. If even the specialist is this bad, and the market is fully open, then it's because the problem is hard.
AWS has fewer outages in one zone alone than the best self-hosted institutions, your Facebooks and Pentagons. In-house servers would lead to an insane amount of outages.
And guess what? AWS (and all other IaaS providers) will beg you to use multiple regions because of this. The team/person that has millions of dollars a day staked on a single AWS region is an idiot and could not be entrusted to order a gaming PC from Newegg, let alone run an in-house datacenter.
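The multi-region advice boils down to "have a second place to send traffic and actually fail over to it." A minimal sketch of that failover logic (endpoint names and the health-check predicate are made up for illustration, not any real API):

```python
def first_healthy(endpoints, is_healthy):
    """Return the first endpoint whose health check passes, else None."""
    for url in endpoints:
        try:
            if is_healthy(url):
                return url
        except Exception:
            continue  # region unreachable: fall through to the next one
    return None

# Hypothetical endpoints in two regions (illustrative names only).
ENDPOINTS = [
    "https://api.us-east-1.example.com",
    "https://api.us-west-2.example.com",
]

# In practice is_healthy would be an HTTP GET against a /health route;
# a stub here shows the failover behavior when us-east-1 is down.
print(first_healthy(ENDPOINTS, lambda url: "us-east-1" not in url))
# -> https://api.us-west-2.example.com
```

Real deployments usually push this decision down into DNS (e.g. health-checked routing records) rather than client code, but the shape of the logic is the same.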
edit: I will add that AWS specifically is meh and I wouldn't use it myself; there are better IaaS providers. But it's insanity to even imagine self-hosted is more reliable than using even the shittiest of IaaS providers.
> This seems like an insane stance to have, it's like saying businesses should ship their own stock, using their own drivers, and their in-house made cars and planes and in-house trained pilots.
> Heck, why stop at having servers on-site? Cast your own silicon wafers, after all you don't want Spectre exploits.
That's an overblown argument. Nobody is saying that, but it's clear that businesses that maintain their own infrastructure would've avoided today's AWS' outage. So just avoiding a single level of abstraction would've kept your company running today.
> Because you are worse at it. If even the specialist is this bad, and the market is fully open, then it's because the problem is hard.
The problem is hard mostly because of scale. If you're a small business running a few websites with a few million hits per month, it might be cheaper and easier to colocate a few servers and hire a few DevOps or old-school sysadmins to administer the infrastructure. The tooling is there, and is not much more difficult to manage than a hundred different AWS products. I'm actually more worried about the DevOps trend where engineers are trained purely on cloud infrastructure and don't understand low-level tooling these systems are built on.
> AWS has fewer outages in one zone alone than the best self-hosted institutions, your Facebooks and Pentagons. In-house servers would lead to an insane amount of outages.
That's anecdotal and would depend on the capability of your DevOps team and your in-house / colocation facility.
> And guess what? AWS (and all other IaaS providers) will beg you to use multiple regions because of this. The team/person that has millions of dollars a day staked on a single AWS region is an idiot and could not be entrusted to order a gaming PC from Newegg, let alone run an in-house datacenter.
Oh great, so the solution is to put even more of our eggs in a single provider's basket? The real solution would be having failover to a different cloud provider, and the infrastructure changes needed for that are _far_ from trivial. Even then, there are only three major cloud providers you can pick from. Again, colocation in a trusted datacenter would've avoided all of this.
>, but it's clear that businesses that maintain their own infrastructure would've avoided today's AWS' outage.
When Netflix was running its own datacenters in 2008, they had a 3-day outage from database corruption and couldn't ship DVDs to customers. That was the disaster that pushed CEO Reed Hastings to get out of managing his own datacenters and migrate to AWS.
The flaw in the reasoning that running your own hardware would avoid today's outage is that it doesn't also consider the extra unplanned outages on other days because your homegrown IT team (especially at non-tech companies) isn't as skilled as the engineers working at AWS/GCP/Azure.
> it's clear that businesses that maintain their own infrastructure would've avoided today's AWS' outage.
Sure, that's trivially obvious. But how many other outages would they have had instead because they aren't as experienced at running this sort of infrastructure as AWS is?
You seem to be arguing from the a priori assumption that rolling your own is inherently more stable than renting infra from AWS, without actually providing any justification for that assumption.
You also seem to be under the assumption that any amount of downtime is always unacceptable, and worth spending large amounts of time and effort to avoid. For a lot of businesses, systems going down for a few hours every once in a while just isn't a big deal, and is much preferable to spending thousands more on cloud bills, or hiring more full-time staff to ensure X 9s of uptime.
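It helps to put numbers on the "X 9s" trade-off being argued about here. A quick back-of-the-envelope (not anyone's actual SLA terms):

```python
def allowed_downtime_minutes_per_year(nines):
    """Minutes/year of downtime permitted by an 'N nines' availability target."""
    unavailability = 10 ** (-nines)           # e.g. 3 nines -> 0.001
    return unavailability * 365 * 24 * 60     # minutes in a (non-leap) year

# 3 nines (99.9%)   -> ~525.6 min/year (~8.8 hours)
# 4 nines (99.99%)  -> ~52.6 min/year
# 5 nines (99.999%) -> ~5.3 min/year
print(round(allowed_downtime_minutes_per_year(3), 1))  # -> 525.6
```

Each extra nine cuts the allowed downtime by 10x, which is roughly why the cost of chasing it grows so steeply.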
You and GP are making the same assumption that my DevOps engineers _aren't_ as experienced as AWS' are. There are plenty of engineers capable of maintaining an in-house infrastructure running X 9s because, again, the complexity comes from the scale AWS operates at. So we're both arguing with an a priori assumption that the grass is greener on our side.
To be fair, I'm not saying never use cloud providers. If your systems require the complexity cloud providers simplify, and you operate at a scale where it would be prohibitively expensive to maintain yourself, by all means go with a cloud provider. But it's clear that not many companies are prepared for this type of failure, and protecting against it is not trivial to accomplish. Not to mention the conceptual overhead and knowledge required to deal with the provider's specific products, APIs, etc., whereas the skills to maintain these systems yourself are transferable across any datacenter.
This feels like a discussion that could sorely use some numbers.
What are good examples of
>a small business running a few websites with a few million hits per month, it might be cheaper and easier to colocate a few servers and hire a few DevOps or old-school sysadmins to administer the infrastructure.
Depends, I guess. I am running an on-prem workstation for our DWH. So far in 2 years it went down for minutes at a time, and only when I decided to take it down for hardware updates.
I have no idea where this narrative came from, but usually hardware you have is very reliable and doesn't turn off every 15 minutes.
Heck, I use an old T430 for my home server and still it doesn't go down on completely random occasions (but that's a very simplified example, I know).
The one at work, yes, but for an internal network, as we are not exposed to the internet. To be honest, though, we are probably one of the few companies that make it a priority that there is always electricity and internet in the office (with a UPS, an electricity generator, and multiple internet providers).
No idea what are the standards for other companies.
There are at least 6 cloud providers I can name, which I've used, that run their own data centers with capabilities similar to AWS's core products (EC2, Route 53, S3, CloudWatch, RDS):
OVH, Scaleway, Online.net, Azure, GCP, AWS
Those are the ones I've used in production; I've heard of a dozen more, including big names like HP and IBM, and I assume they can match AWS for the most part.
...
That being said, I agree multi-tenant is the way to go for reliability. But I was pointing out that in this case even the simple solution of multi-region on one provider was not implemented by those affected.
...
As for running your own data center as a small company. I have done it, buying components building servers and all.
Expenses and ISP issues aside, I can't imagine running in-house without at least a few outages a year for anywhere near the price of hiring a DevOps person to build a MT solution for you.
If you think you can you've either never tried doing it OR you are being severely underpaid for your job.
Competent teams that build and run reliable in-house infrastructure exist, and they can get you an SLA similar to multi-region AWS or GCP (aka 100% over the last 5 years)... but the price tag has 7 to 8 figures in it.
This is the right answer, I recall studying for the solutions architect professional certification and reading this countless times: outages will happen and you should plan for them by using multi-region if you care about downtime.
It's not AWS's fault here; it's the companies', which assume that it will never be down. In-house servers also have outages; it's a very naive assumption to think that it'd be all better if all of those services were using their own servers.
Facebook doesn't use AWS and they were down for several hours a couple weeks ago, and that's with way better engineers than the average company working exclusively on their infrastructure.
If all you wanted to do was vacuum the floor you would not have gotten that particular vacuum cleaner.
Clearly you wanted to do more than just vacuum the floor and something like this happening should be weighed with the purchase of the vacuum.
> AWS (and all other IAAS providers) will beg you to use multiple region
will they? because AWS still puts new stuff in us-east-1 before anywhere else, and there is often a LONG delay before those things go to other regions. there are many other examples of why people use us-east-1 so often, but it all boils down to this: AWS encourages everyone to use us-east-1 and discourages the use of other regions, for the same reasons.
if they want to change how and where people deploy, they should change how they encourage their customers to deploy.
my employer uses multi-region deployments where possible, and we can't do that anywhere near as much as we'd like because of limitations that AWS has chosen to have.
so if cloud providers want to encourage multi-region adoption, they need to stop discouraging and outright preventing it, first.
> AWS still puts new stuff in us-east-1 before anywhere else, and there is often a LONG delay before those things go to other regions.
Come to think of it (far down the second page of comments): Why east?
Amazon is still mainly in Seattle, right? And Silicon Valley is in California. So one would have thought the high-tech hub both of Amazon and of the USA in general is still in the west, not east. So why us-east-1 before anywhere else, and not us-west-1?
Most features roll out to IAD second, third, or fourth. PDX and CMH are good candidates for earlier feature rollout, and usually it's tested in a small region first. I use PDX (us-west-2) for almost everything these days.
I also think that they've been making a lot of the default region dropdowns and such point to CMH (us-east-2) to get folks to migrate away from IAD. Your contention that they're encouraging people to use that region just doesn't ring true to me.
It works really well imo. All the people who want to use new stuff at the expense of stability choose us-east-1; those who want stability at the expense of new stuff run multi-region (usually not in us-east-1 )
This argument seems rather contrived. Which feature available in only one region for a very long time has specifically impacted you? And what was the solution?
Quick follow-up. I once used an IaaS provider (Hyperstreet) that was terrible. Long story short, the provider ended up closing shop and the owner of the company now sells real estate in California.
Was a nightmare recovering data. Even when the service was operational it was subpar.
Just saying perhaps the “shittiest” providers may not be more reliable.
> In-house servers would lead to an insane amount of outage.
That might be true, but the effects of any given outage would be felt much less widely. If Disney has an outage, I can just find a movie on Netflix to watch instead. But now if one provider goes down, it can take down everything. To me, the problem isn't the cloud per se, it's one player's dominance in the space. We've taken the inherently distributed structure of the internet and re-centralized it, losing some robustness along the way.
> That might be true, but the effects of any given outage would be felt much less widely.
If my system has an hour of downtime every year and the dozen other systems it interacts with and depends on each have an hour of downtime every year, it can be better that those tend to be correlated rather than independent.
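This correlation point can be made concrete with a toy model (the numbers are illustrative, and it assumes outages are either perfectly correlated or fully independent, with nothing in between):

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def expected_hours_any_down(n_deps, down_hours_each, correlated):
    """Expected hours/year in which at least one of n dependencies is down."""
    p = down_hours_each / HOURS_PER_YEAR  # chance a dependency is down in a given hour
    if correlated:
        p_any = p                         # all outages coincide
    else:
        p_any = 1 - (1 - p) ** n_deps     # 1 - P(every dependency is up)
    return p_any * HOURS_PER_YEAR

# 12 dependencies at 1 hour/year each:
# correlated  -> 1 hour/year of combined impact
# independent -> just under 12 hours/year
print(expected_hours_any_down(12, 1, correlated=True))
print(expected_hours_any_down(12, 1, correlated=False))
```

In other words, if every dependency breaks during the same AWS outage, the union of all that downtime is one window instead of a dozen scattered ones.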
I think you're missing the point of the comment. It's not "don't use cloud". It's "be prepared for when cloud goes down". Because it will, despite many companies either thinking it won't, or not planning for it.
> AWS has fewer outages in one zone alone than the best self-hosted institutions, your Facebooks and Pentagons. In-house servers would lead to an insane amount of outages.
> they usually beg you to use multiple availability zones though
Doesn't help you if what goes down is AWS global services on which you depend directly, or on which other AWS services depend (these tend to be tied to us-east-1).
The market needs to open up for companies selling NFTs representing IPv4 address blocks. Not to provide any right to their usage or anything, just for, you know, the bragging rights.
You may not be the target audience, though; the target audience is probably people who aren't yet "bought into" the Google ecosystem, so they are unaware of this axing policy. Anyone logging into Chromium, using Gmail and Google Calendar and an Android device is already "theirs".
What they want is to increase the share of people that are in the MS ecosystem (and presumably also OSX ecosystem, but that's mostly a status signaling thing, so the strategy there is probably different).
You will be surprised how many people are indeed aware of Google’s axing policy, if not even consciously.
The reality is that so many of them have been burnt by Google. Picasa was an extremely popular photo management platform used by many. Google Play Music was extremely popular. Google Plus was not exactly popular but touched nearly every Google user, was highly promoted, and then disappeared. And then you come to the massive list of chat and calling apps that Google has arbitrarily spun up, promoted, and then killed.
But even if we go with the idea that regular people are unaware of Google's tendency, even subconsciously, there's also the fact that the way most products become popular is through a smaller subset of influencers. And tech influencers, almost necessarily, are almost certainly aware of Google's tendencies.
This one irritates me most. GPM was mature and required...what? Minimal maintenance at this point?
But no, how about we completely rebuild a music service on top of YouTube, miss a bunch of minor, simple features that every streaming service offers (you know, like saving the current playlist/radio as a playlist) and FORCE every user over to this far inferior service.
At least with transitions like MOG->Beats->Apple Music, it made sense, as the entire corporate entity changed. But Google just... literally can't invest in anything that takes a small ounce of manpower to manage.
At this point, OP is correct. The Google “curse” is well known and few people trust their whim-products, no matter their investment/marketing budget.
The perfect music service and discovery engine. Axed for no cause.
I don't like Spotify or Pandora, and I'm filled with seething hatred for Youtube Music. I keep trying to use it, but it's horrible and doesn't play what I want to listen to. How can you design an app that's so bad that it actively does what you don't want?
YouTube Music plays meme videos [1, 2] in my alternative music stream. And anime music I listened to ten years ago in the middle of my EDM. Seriously what the fuck, Google? I never asked to mix my YouTube viewing experience with my music tastes.
For the first time in my life, I've stopped listening to music. I want to go back to managing my own highly curated playlists, but it's too much work to set up.
>For the first time in my life, I've stopped listening to music. I want to go back to managing my own highly curated playlists, but it's too much work to set up.
I'm sorry, I'm sympathetic to your main point but this reads to me as... silly, to put it lightly. Google made you give up on music? You could do nothing but play CDs and still have absurdly more access to music than anyone in history. Music has literally never been more abundant, discoverable, and obtainable than it is today. The technology to replay it has never sounded better for a given price, and never been more ubiquitous. Neither has the tech to create it - there is an incredible Cambrian explosion of musical styles happening right now, as more people than ever before have access to studios and are using the internet to borrow and remix each other's work in interesting ways. This is an incredible golden age for music. And you can't be bothered because Google axed a product? Can't even be bothered to do it the old way?
This was the one digital service I was happy to pay google for and caused me a 100% ban on their consumer services going forward.
Google Music hadn't changed in a few years, so of course we need to get a promotion by destroying it and replacing it with something that might grow faster but definitely won't.
Microsoft gets a lot of flak for their endless rebrands, but it's not like they rewrote Lync from scratch when they renamed it to Skype for Business. Google seems to have adopted a similar marketing-driven rebrand culture ("Google Play is a confusing brand and right now consumer confidence is in the YouTube brand, so we should move our media streaming holdings to the YouTube brand") but confusingly adopted it as yet another excuse to generally rewrite the apple pie from scratch instead of just renaming things that aren't broken.
> but confusingly adopted it as yet another excuse to generally rewrite the apple pie from scratch instead of just renaming things that aren't broken.
I think that inverts history. The YT Music implementation and brand existed long before the decision to replace GPM with it. The branding decision followed most of the reimplementation, it didn't provide an excuse for it.
Google has a strong tendency to have multiple parallel offerings in a field for a while before consolidating them (and, also, a history of botching the consolidation.)
My impression from second- and third-hand sources was still that the YT Music implementation and brand came after GPM was asked to "code freeze" and the team was directed to other projects (including some to YTM). That the products stood side by side for so long is only a further indictment of the rewrite-the-world approach: it took them so long to reach enough "feature parity" that they felt comfortable sunsetting GPM, well after the writing had been on the wall, development had ended, and the marketing decision to change brands had been handed down.
The multiple parallel offerings thing is of course its own problem that seems to often indicate communications issues up/down the decision chains, but specifically with reference to GPM/YTM I heard it was more a symptom of a rewrite than one of those communication breakdowns. Though again, that's only from impressions I got from scuttlebutt I heard second and third hand.
> But no, how about we completely rebuild a music service on top of YouTube, miss a bunch of minor simple features that every streaming service offers (you know, like save current playlist/radio as Playlist) and FORCE every user over to this far inferior service.
I still can't work out how I'm supposed to listen to an mp3 on my phone now. YT Music has a "Device Only" button which I thought was simple enough, but then it just refuses to actually play anything I select.
I was surprised when my Dad, someone completely detached from the tech world, brought up Google's tendency to cancel stuff. He was a Google Music subscriber and still uses Chromecast Audio. Last year he tried buying another Audio device to discover it was discontinued, and then over the summer got notifications related to Music being cancelled. He was super disappointed!
In January he started looking into getting some IoT cameras for his house (a doorbell and something for the backyard, nothing crazy). He was looking at Nest, but the moment I mentioned Nest is owned by Google, he stopped looking at their products.
Cancelling products and services that people use and rely on leaves a bad taste in people's mouths. I used to excuse it, but in the last 5-6 years Google has done away with about 4-5 services that I used daily. It's honestly too much and I won't put the time into using their products anymore.
My father was so proud of having uploaded all his music to Google Play. It was the single largest source of loyalty for him to Android over iPhone. With the iPhone's missing back button being second.
And since Hangouts chat used to be integrated with Gmail, many a Gmail user has been seeing deprecation notices for ages now. And terribly unhelpful deprecation notices at that. They basically said something like "I'm going to stop working soon. You should do something about that."
> tech influencers, almost necessarily, are almost certainly aware of Google's tendencies.
This is huge, bigger than most acknowledge. Most people don't understand the tech and why should they?
"none of my nerd friends use it" vs "all of my nerd friends are on it" is a massive, massive signal about whether something is good or incredibly dangerous, dishonest and a foul trap. There's plenty of the latter about so we're talking about degrees of badness for most. The thinking resembles:
"I'm not super happy with any of this tech and definitely not something that my niece the computer-hacker and cousin the IT guy aren't using. I'll definitely draw a line there."
vs
"Have you heard about Signal? It's WhatsApp but no Facebook tracking." From a family member, friend, or acquaintance who you know knows more than you about this stuff.
I honestly think that this attitude to shutting things down, plus their piss-poor customer service, is killing any chance that Google Cloud has of succeeding. I'm just not willing to invest any time into the Google ecosystem where there's a high chance that the service will be discontinued or I'll get locked out of my account.
We still use Picasa on Windows 7 & 10 and Mac OS X. It works easily on 10.13 (High Sierra) and with some annoyance on 10.14 (Mojave) (requires clicking 5 times on complaining popup when starting, then runs fine).
As far as I can tell, MADlib is not AutoML; it just provides various statistical analysis and "classical" ML algorithms as functions/macros in various databases, and the integration seems to be quite different from the way we do it (and I'd say more complex for the user, but maybe that's just my bias talking).
So I don't think there's a lot of overlap there. But if you think otherwise and work on it or know someone that does, I'd be quite excited to have a chat, just to share experiences and tips if nothing else.
Deep learning, including automated model selection, is under development in MADlib. See for example https://madlib.apache.org/docs/latest/group__grp__keras__run...
I guess in the next couple of releases it will probably stabilize and be promoted out of "early development".
I don't work on it or know anyone who does, but it is one of the most established open projects for ML in the database, as far as I know.