May I suggest that the political divide is extremely harmful? I don’t understand why [other camp] is so hateful, socially excluding people at the first sign of our political leaning, etc.
I agree we also need to organize activities, but when social circles are occupied by the other camp, with a witch-hunt on top, it is discouraging to try. And that recursively encourages political extremism.
Incoming comment: “Not our problem, don’t be a Nazi.”
How do you suggest we deal with Gemini? It’s extremely useful for understanding whether something is worrying or not. Whether we like it or not, it’s a major participant in the discussion.
Apparently we should hire the Guardian to evaluate LLM output accuracy?
Why are these products being put out there for these kinds of things with no attempt to quantify accuracy?
In many areas AI has become this toy that we use because it looks real enough.
It sometimes works for some things in math and science because we test its output, but overall you don't go to Gemini and have it say "there's an 80% chance this is correct". At least then you could evaluate that claim.
There's a kind of task LLMs aren't well suited to because there's no intrinsic empirical verifiability, for lack of a better way of putting it.
This "random output machine" is already in large use in medicine so why exactly not? Should I trust the young doctor fresh out of the Uni more by default or should I take advises from both of them with a grain of salt? I had failures and successes with both of them but lately I found Gemini to be extremely good at what it does.
The "well we already have a bunch of people doing this and it would be difficult to introduce guardrails that are consistently effective so fuck it we ball" is one of the most toxic belief systems in the tech industry.
> This "random output machine" is already in large use in medicine
By doctors. It's like handling dangerous chemicals. If you know what you're doing you get some good results, otherwise you just melt your face off.
> Should I trust the young doctor fresh out of the Uni
You trust the process that got the doctor there: the knowledge they absorbed, the checks they passed. The doctor doesn't operate in a vacuum; there's a structure in place to validate critical decisions. Anyway, you wouldn't blindly trust one young doctor; if it's important, you get a second opinion from another qualified doctor.
In the fields I know a lot about, LLMs fail spectacularly so, so often. Having that experience and knowing how badly they fail, I have no reason to trust them in any critical field where I cannot personally verify the output. A medical AI could enhance a trained doctor, or give false confidence to an inexperienced one, but on its own it's just dangerous.
No, I'm not asking you to spend $150, I'm providing the evidence you're looking for. Mayo Clinic, probably one of the most prominent private clinics in the US, is using transformers in its workflow, and there are many other similar links you could find online, but you choose to remain ignorant. Congratulations.
The existence of a course on this topic is NOT evidence of "large use". The contents of the course might contain such evidence, or they might contain evidence that LLM use is practically non-existent at this point (the flowery language used to describe the course is used for almost any course tangentially related to new technology in the business context, so that's not evidence either).
But your focus on the existence of this course as your only piece of evidence is evidence enough for me.
Focus? You asked me for evidence. I provided it, and evidence that carries a lot of weight at that. If that's the focus you're looking for, then sure. Take it as you will; I'm not here to convince anyone of anything. Look back at how transformers solved long-standing problems nobody believed were tractable up to that point.
An LLM is just a tool, and how the tool is used is also an important question. People vibe code these days, sometimes without proper review, but do you want them to vibe code a nuclear reactor controller without reviewing the code?
In principle we could let anyone use an LLM for medical advice, provided they know LLMs are not reliable. But LLMs are engineered to sound reliable, and people often just believe their output. And there have been cases showing this can have severe consequences...
There's a difference between a doctor (an expert in their field) using AI (specialising in medicine) and you (a lay person) using it to diagnose and treat yourself. In the US, it takes at least 10 years of studying (and interning) to become a doctor.
Even so, it's rather common for doctors not to be able to diagnose correctly. It's a guessing game for them too. I don't know so much about the US, but it's a real problem in large parts of the world. As the comment stated, I would take anything a doctor says with a pinch of salt. Particularly so when the problem is not obvious.
This is really not that far off from the argument that "well, people make mistakes a lot, too, so really, LLMs are just like people, and they're probably conscious too!"
Yes, doctors make mistakes. Yes, some doctors make a lot of mistakes. Yes, some patients get misdiagnosed a bunch (because they have something unusual, or because they are a member of a group—like women, people of color, overweight people, or some combination—that American doctors have a tendency to disbelieve).
None of that means that it's a good idea to replace those human doctors with LLMs that can make up brand-new diseases that don't exist occasionally.
It takes 10 years of hard work to become a proficient engineer too, yet that doesn't stop us from missing things. That argument cannot hold. AI is already widespread in medical treatment.
An engineer is not a doctor, nor a doctor an engineer. Yes, AI is being used in medicine - as a tool for the professional - and that's the right use for it. Helping a radiologist read an X-ray, MRI scan or CT scan, helping a doctor create an effective treatment plan, warning a pharmacologist about unsafe combinations (dangerous drug interactions) when different medications are prescribed, etc. are all areas where AI can make the professional's job easier and better, and also help create better AI.
Nobody can (or should) stop you from learning and educating yourself. It doesn't mean, however, that just because you can use Google or AI, you can consider yourself a doctor.
Educating a user about their illness and treatment is a legitimate use case for AI, but acting on its advice to treat yourself or self-medicate would be plain stupidity. (Thankfully, self-medicating isn't that easy, because most medications require a prescription. However, so-called "alternative" medicines are often a grey area, even with regulations, for example in India.)
With robust fines based on a percentage of revenue whenever it breaks the law, would be my preference. I'm not here to attempt solutions to Google's self-inflicted business-model challenges.
If it's giving out medical advice without a license, it should be banned from giving medical advice and the parent company fined or forced to retire it.
As a certified electrical engineer, the number of times Google's LLM has suggested something that would have, at minimum, started a fire is staggering.
I have the capacity to know when it is wrong, but I teach this at university level. What worries me are the people at the starting end of the Dunning-Kruger curve who take that wrong advice and start "fixing" things in spaces where this might become a danger to human life.
No information is superior to wrong information presented in a convincing way.
Ollama! Why didn’t they just run Ollama and a public model? They’ve spent the last 10 years with a Siri that doesn’t know any contact named Chronometer, only to now require the best-in-class LLM?
The other day I was trying to navigate to a Costco in my car. So I opened Google Maps on Android Auto on the screen in my car and pressed the search box. My car won't allow me to type even while parked... so I have to speak to the Google voice assistant.
I was in the map search, so I just said "Costco" and it said "I can't help with that right now, please try again later" or something of the sort. I tried a couple more times until I switched to saying "Navigate me to Costco", at which point it finally did the search in the text box and found it for me.
Obviously this isn't the same thing as Gemini, but the experience with Android Auto becomes more and more garbage as time passes, and I'm concerned that now we're going to have two Google voice assistants.
Also, tbh, Gemini was great a month ago but since then it's become total garbage. Maybe it passes benchmarks or whatever but interacting with it is awful. It takes more time to interact with than to just do stuff yourself at this point.
I tried Google Maps AI last night and, wow. The experience was about as garbage as you can imagine.
I'm genuinely curious about this too. If you really only need the language and common sense parts of an LLM -- not deep factual knowledge of every technical and cultural domain -- then aren't the public models great? Just exactly what you need? Nobody's using Siri for coding.
Are there licensing issues regarding commercial use at scale or something?
Pure speculation, but I’d guess that an arrangement with Google comes with all sorts of ancillary support that will help things go smoothly: managed fine tuning/post-training, access to updated models as they become available, safety/content-related guarantees, reliability/availability terms so the whole thing doesn’t fall flat on launch day etc.
Probably repeatability and privacy guarantees around infrastructure and training too. Google already have very defined splits between their Gemma and in-house models, with engineers and researchers rarely communicating directly.
> Not only that but Markdown use the conventions people already used in text files
So why not markup? At the time, everyone was using wiki markup, because Wikipedia was in wikimarkUP, with # for numbered lists, {} for macros and === to denote titles. The === still works in Markdown, but the # doesn’t. Funny heritage: Confluence shortcuts were also expressed in that markup because it was the trend at the time, but they changed the shortcuts when they went to the Cloud.
MediaWiki syntax was its own odd duck. It used '''bold''' and ''italics'', and [https://example.com/ external links like this] - almost nothing else followed their lead.
And for a long time MediaWiki didn't have a proper parser for that markup, just a bunch of regexes that would transform it into HTML. I don't know if they have a proper parser now, but for reasons of backwards compatibility it must be lenient/forgiving, which means that converting all of Wikipedia to markdown is basically impossible now. So MediaWiki markup will stay with us for as long as there are MediaWiki wikis.
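For a sense of what that regex-based approach looks like, here's a minimal, purely illustrative sketch in Python (not MediaWiki's actual PHP code) covering a few of the wikitext conventions mentioned above; real wikitext has templates, tables and nesting that this kind of pass handles badly, which is part of why a faithful conversion is so hard:

    # Illustrative toy only: a regex pass in the spirit of the early
    # converter described above, NOT MediaWiki's real implementation.
    import re

    def wikitext_to_html(text: str) -> str:
        # '''bold''' -> <b>bold</b>  (must run before the italics rule)
        text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)
        # ''italics'' -> <i>italics</i>
        text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
        # [https://example.com/ label] -> <a href="https://example.com/">label</a>
        text = re.sub(r"\[(\S+) ([^\]]+)\]", r'<a href="\1">\2</a>', text)
        # == Heading == -> <h2>Heading</h2>
        text = re.sub(r"^==\s*(.+?)\s*==\s*$", r"<h2>\1</h2>",
                      text, flags=re.MULTILINE)
        return text

    print(wikitext_to_html(
        "== Intro ==\n'''Bold''', ''italic'', [https://example.com/ a link]"))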
Observation bias. Those who manage to push themselves to make the effort are already out of the depression. Try doing 7 hours of sport a week and then falling into depression: you won’t have the energy. It’s not sport that gets you out of depression, but it’s certainly a stage on the way back, and you’ve got to keep giving it a chance.
This is spot on. "To stop being sick you just have to do all the stuff healthy people do, and not do the stuff sick people do! I have never seen a healthy person being sick, so it must work!".
True. All internet packets are REST API packets - there's no other type of packet. And all cell radio traffic is internet packets (which are REST API packets).