A) this is absolutely not gonna happen; fed contracts are huge, sticky money
B) Everything can be used for good and bad. "AI" can help fighter pilots make decisions during a mission just as much as "AI" can help Air Force researchers find the next breakthrough in materials science.
C) Defense has absolutely insane amounts of data. (Satellite images, for example.) This is great for the advancement of AI in general.
Wondering... why does this happen only at Google (Alphabet)? Aren't employees bound by the employment contracts they accepted? Why and how is it that MS or Oracle or Amazon never have these problems?
It’s their company culture and Google’s heavy tolerance of radical political speech in the workplace. This can make Google a very toxic place to work.
When I was at Google I saw an email that went out to a large number of employees calling all of us vets murderers. The employee who sent it was not fired. The other companies you listed would bring that person up to HR and likely start an investigation, with some percentage of offenders fired.
Being against military contracts isn't really politically radical except I guess in the US.
But being called a murderer is a pretty predictable outcome of joining an organization built for and dedicated to killing. I'm sorry if you didn't understand that before joining and/or that your feelings were hurt.
You'll notice I'm not making the claim that they are the same, just that being called a murderer is a predictable outcome of joining the US military. It may be a naive belief but so is the expectation that your coworkers will be righteously fired for expressing it.
"Just doing their job" isn't really a great reason to do anything. The term "veteran" treats the "country and honor" folks the same as the "just doing my duty" folks. That's how the law gets to treat it, but if "murder" isn't one single thing, then neither is any fact of military service.
If the military didn't use the economically desperate and every auxiliary member to sanitize its image, more people would be willing to call at least the infantry participants, if not all "veterans", by other names.
See the Google search summary below. Describing US vets as murderers, and thereby creating a hostile workplace environment for them, is likely a violation of federal law. Google could be sued.
>Two federal laws prohibit discrimination in employment based on your status as a service member or veteran: The Uniformed Services Employment and Reemployment Rights Act (USERRA) prohibits civilian employers from discriminating against you based on your present, past, and future military service.
>> But as the AI race heated up, DeepMind was drawn more tightly into Google proper. A bid by the lab’s leaders in 2021 to secure more autonomy failed, and in 2023 it merged with Google’s other AI team—Google Brain—bringing it closer to the heart of the tech giant
The way I understand it, what happened was that Reinforcement Learning (RL) went out of fashion at the same time that LLMs became wildly popular. DeepMind was all about RL, so its needs and wants were basically sidelined in favour of the new Big Thing in AI™.
The reason, of course, that RL "fell out of fashion", as I put it, is the continuing failure of RL approaches to work convincingly and reliably in the real world. RL (basically deep RL, since that's all anyone's doing these days) works great in simulation, but there are two big problems with it.
The first one is generalisation, or the lack thereof. RL doesn't generalise. You can train an RL agent in one environment and it will learn to solve that environment perfectly, if sometimes awkwardly, but if you take the same agent and put it in a different environment, even one from the same domain, it will basically die [1,2].
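To make the generalisation failure concrete, here is a minimal toy sketch (my own, not from the thread, and much simpler than the deep RL setups the cited papers test): a tabular Q-learning agent is trained on a 1-D corridor with the goal at one end, then evaluated greedily on the "same domain" with the goal moved. The learned policy just marches toward where the goal used to be.

```python
import random

def train(goal, size=10, episodes=500, seed=0):
    """Tabular Q-learning on a 1-D corridor; actions move left (-1) or right (+1)."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(size) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            # epsilon-greedy action selection
            if random.random() < 0.1:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), size - 1)      # walls clamp movement
            r = 1.0 if s2 == goal else -0.01       # small step cost, goal reward
            q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if s == goal:
                break
    return q

def evaluate(q, goal, size=10):
    """Greedy rollout from state 0; True iff the agent reaches the goal."""
    s = 0
    for _ in range(50):
        a = max((-1, 1), key=lambda act: q[(s, act)])
        s = min(max(s + a, 0), size - 1)
        if s == goal:
            return True
    return False

q = train(goal=9)
print(evaluate(q, goal=9))  # solves the environment it was trained on
print(evaluate(q, goal=0))  # same corridor, goal moved: the greedy policy fails
```

The point isn't that tabular Q-learning is state of the art, only that the learned policy encodes "solve this exact environment" rather than "find the goal", which is the same brittleness the papers report for deep RL agents.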
The second problem is that model-based RL agents rely on a model of the dynamics of the environment, and those are not easy to come by: so far only humans are able to create robust, useful models of real-world environments. There are of course model-free RL approaches that skip the model and learn purely by interacting with the environment, but those only work in virtual environments, for the simple reason that you can't learn real-world dynamics by trial-and-error interaction with the physical world without dying many thousands of times.
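The model-based/model-free trade-off above can be shown on a deliberately trivial MDP (toy numbers of my own, purely illustrative): with the dynamics in hand you can plan offline with zero real-world interactions, while a model-free learner must pay for every single update with a real transition, which is exactly what the physical world won't let you do thousands of times.

```python
# A one-transition MDP: state 0, action 'go' -> terminal state 1, reward 1.
P = {(0, 'go'): 1}     # known transition function (the "model")
R = {(0, 'go'): 1.0}   # known reward function

# Model-based value iteration: plans using P and R, no environment steps at all.
V = {0: 0.0, 1: 0.0}
for _ in range(10):
    V[0] = R[(0, 'go')] + 0.9 * V[P[(0, 'go')]]  # V[1] is terminal, stays 0

# Model-free Q-learning: never touches P or R directly; every update
# consumes one sampled transition from the (here simulated) live environment.
interactions = 0
Q = {(0, 'go'): 0.0}
for _ in range(100):
    s, a = 0, 'go'
    s2, r = P[(s, a)], R[(s, a)]  # in reality this step happens in the physical world
    interactions += 1
    Q[(s, a)] += 0.1 * (r + 0.9 * 0.0 - Q[(s, a)])  # s2 is terminal, value 0

print(V[0], round(Q[(0, 'go')], 3), interactions)
```

Both methods converge to the same value here, but the model-free one needed 100 live transitions to get there; scale that to a robot or an aircraft and "dying many thousands of times" stops being a figure of speech.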
So it looks like it's RL out, LLMs in, at Google as everywhere else, and I guess we'll see what the Next Big Thing in AI™ will be after LLMs, and who will make their fortune with it.
[2] I can't find that paper, if it was a paper, but there was a story about moving the paddle in Breakout a few pixels away and thereby causing an RL agent to fail.
Only a child brings their activism to work. Do the job you were hired to do; it is your livelihood and how you survive. Their priorities are all mixed up, but I can't fault them too much. Such people were raised by others who told them that their feelings matter more than objective reality.
Heavily disagree. We are not robots or tools that simply do what we are told; we are people with free will and morals. It's only a child that will simply do what they are told by "authority" without asking any questions, raising any objections, or forming an opinion of their own.
> Do the job you were hired to do
That's the motto of many war criminals, notably the Nazis, who did horrendous things to other humans while simply following orders. We each have a brain to think with and morality to guide us, and there is nothing wrong with people banding together to change what their job requires of them.
That's how we got the 40-hour work week and the weekend. People gathered together and collectively pressured their employers to meet their demands. And now you have your Saturday and Sunday off. https://en.wikipedia.org/wiki/Amalgamated_Clothing_Workers_o...
> it is your livelihood and how you survive
You are forgetting that workers also have rights and can also make demands of their employers. It's not as one-sided as you make it sound; in fact, it's employers who need their workers, not the other way around.
> Such people were raised by others who told them that their feelings matter
Not sure if this is trolling, but people's feelings do matter, and those people seem to have been raised right, with a moral spine.
> than objective reality.
Objective reality is that the workers are pushing the company to drop the military contracts, and, as in many other cases where workers have organized, they might actually succeed. I wish them luck!