Hacker News | druskacik's comments

I usually love names by Mistral (Mixtral, Ministral, Pixtral), but this one just sucks. Not sure about Clistral though :D

It's not just you, I hear this often, but I am always surprised people can read for so long in bed. No matter how interesting a book is, I can rarely read more than 20-30 minutes before the urge to fall asleep becomes too strong.

But I can sometimes code until like 4AM. Weird.


Reading is usually more passive than coding. I'm almost never sleepy if I'm actively coding something late at night, but reading a book (no matter how engaging) or watching a TV show can very easily make me sleepy. That said, everyone's brain works very differently.


This is just so weird. In general, coding won't keep me from falling asleep, but a book absolutely will: I can never sleep until I finish it.

I also find the idea of "forcing" yourself to read rather peculiar, but we're all different people. I wonder if there's genuinely something different in how the brain reacts.


Well you are sitting in front of a relatively bright lamp when coding.


Funny coincidence: these are the exact sci-fi books I read this year and last, in the exact order I read them (with some non-sci-fi books in between so as not to get overwhelmed). I finished Project Hail Mary literally an hour ago. All the books were great, but the Remembrance of Earth's Past series was life-changing, truly a masterpiece.

I'm guessing you plan to read Dune next? ;) I plan to start with it during Christmas break.


This is my experience as well. Mistral models may not be the best according to benchmarks, and I don't use them for personal chats or coding, but for simple tasks with a pre-defined scope (such as categorization, summarization, etc.) they are the option I choose. I use mistral-small with the batch API, and it's probably the most cost-efficient option out there.
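For what it's worth, most of the work in that batch flow is just building a JSONL file of requests. A minimal sketch of a categorization batch (the field names, model id, and category list here are my assumptions for illustration, not Mistral's documented schema; check the batch API docs before relying on them):

```python
import json

# Hypothetical categories for a support-ticket classification task.
CATEGORIES = ["billing", "bug report", "feature request", "other"]

def build_batch_lines(texts, model="mistral-small-latest"):
    """Turn raw texts into one JSONL request line per text.

    The custom_id/body shape mirrors common batch-API formats; the
    exact schema is an assumption here.
    """
    lines = []
    for i, text in enumerate(texts):
        request = {
            "custom_id": f"req-{i}",
            "body": {
                "model": model,
                "messages": [
                    {"role": "system",
                     "content": "Classify the message into exactly one of: "
                                + ", ".join(CATEGORIES)
                                + ". Reply with the category only."},
                    {"role": "user", "content": text},
                ],
            },
        }
        lines.append(json.dumps(request))
    return "\n".join(lines)

batch_jsonl = build_batch_lines(
    ["My invoice is wrong.", "The app crashes on login."]
)
print(batch_jsonl.count("\n") + 1)  # number of request lines written
```

You'd then upload the resulting file as a batch job and collect results keyed by `custom_id` once it completes.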


Did you compare it to gemini-2.0-flash-lite?


Answering my own question:

Artificial Analysis [0] ranks them close in terms of price (both 0.3 USD per 1M tokens) and intelligence (27 / 29 for gemini / mistral), but ranks gemini-2.0-flash-lite higher in terms of speed (189 tokens/s vs. 130).

So they should be interchangeable. Looking forward to testing this.

[0] https://artificialanalysis.ai/?models=o3%2Cgemini-2-5-pro%2C...


I did some vibe-evals only and it seemed slightly worse for my use case, so I didn't change it.


I like this quote:

'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.'


To me that seems as pointless as saying "everything a person sees is a hallucination, it's just some of those hallucinations are true". Sure, technically whenever we see anything it's actually our brain interpreting how light bounces off stuff and combining that with the mental models we have of the world to produce an image in our mind of what we're looking at... but if we start calling everything we see a hallucination, there's no longer any purpose in having that word.

So instead of being that pedantic, we decided that "hallucination" only applies to when what our brain thinks we see does not match reality, so now hallucination is actually a useful word to use. Equally with LLMs, when people talk about hallucinations part of the definition includes that the output be incorrect in some way. If you just go with your quote's way of thinking about it, then once again the word loses all purpose and we can just scrap it since it now means exactly the same thing as "all LLM output".


> everything a person sees is a hallucination, it's just some of those hallucinations are true

Except it's not. People can have hallucinations that happen to be true (dreams), but most perception isn't generated by your brain; it comes from the outside.


Is it because the model is not good enough at following the prompt, or because the prompt is unclear?

Something similar has been the case with text models. People write vague instructions and are dissatisfied when the model does not correctly guess their intentions. With image models, it's even harder for the model to guess right without enough detail.


Remember that in image editing, the source image itself is a huge part of the prompt, and that's often the source of the ambiguity. The model may clearly understand your prompt to change the color of a shirt, but struggle to understand the boundaries of the shirt. I was just struggling to use AI to edit an image where the model really wanted the hat in the image to be the hair of the person wearing it. My guess is that this bias comes from it having been trained on more faces without hats than with them.


No, my prompts are very, very clear. It just won't follow them sometimes. Also this model seems to prefer shorter prompts, in my experience.


> BDSM has been a thing for decades

But decades ago it was not possible to reach content like that in a few seconds, using a magical device we carry 24/7.


I always wondered about the idea posed in this short story. Does the "everything that can happen does happen" theory apply to free will? If there really are infinite universes, is there one where I'm walking down a street full of people and out of nowhere we all start singing Ode to Joy in perfect unison? Or get naked and have a massive orgy? No law of physics rules this out.

(Sorry, I'm a layman.)


I've always wondered if this (kinda widespread?) theory stems from most people thinking that "infinity" includes every possible option, which is not true.

(I'm a layman, too)


Mathematician here, so educated layman on the physics but expert on infinity if you like.

Mathematically, "infinity" doesn't imply every possible option. But in terms of quantum physics, yes, it kind of does include every possible option. There's a kind of joke classroom exercise in quantum physics class: calculate the probability that a piano would instantaneously rematerialize a meter away from its previously observed location. It's 10^-[ridiculous number], but still, that's not zero.

The physical reconfiguration of a person's brain needed to make them break out singing is a much smaller deviation, so it's comparatively likely. Call it 10^-[somewhat less ridiculous number].
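The joke calculation can be sketched as a crude WKB-style tunneling estimate, P ~ exp(-2·sqrt(2mE)·d / ħ). Every number below is a made-up, order-of-magnitude assumption, just to show how the "ridiculous number" in the exponent arises:

```python
import math

# Crude WKB-style tunneling estimate for a macroscopic object.
# All inputs are arbitrary order-of-magnitude assumptions, not a
# serious calculation.
hbar = 1.05e-34   # reduced Planck constant, J*s
m = 400.0         # kg: roughly a piano
E = 1.0           # J: an arbitrary barrier energy scale
d = 1.0           # m: the displacement in the joke

exponent = -2.0 * math.sqrt(2.0 * m * E) * d / hbar
log10_p = exponent / math.log(10.0)
print(f"P ~ 10^({log10_p:.3g})")  # a ridiculously small, but nonzero, number
```

Shrinking the mass and distance to neuron scales shrinks the exponent enormously, which is the sense in which the brain-reconfiguration event is "comparatively likely" while still being absurdly improbable.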


The bigger issue with all those non-zero probabilities is that they're meaningless while you still experience actual time as a human... but they become pretty damn significant when you experience no time after you die.

So tiny probabilities become essentially guarantees unless the heat death of the universe is so thorough as to erase the slight probability that the whole thing pops back into existence.


Isn't it cold death of the universe?


This is related to the question of whether a system (or the universe) is ergodic, among other properties involving how energy and space change.


What are examples of things that are NOT ergodic?


I think an example would be the two-body problem. The orbit stays at a fixed eccentricity, so it does not explore different eccentricities, even though orbits of different eccentricity can have the same total energy.

(But I just looked that up too, because this concept is mostly used/assumed in statistical physics.)
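A toy illustration of the distinction (my own example, not from statistical physics proper): a process that flips a fair coin once and then outputs that value forever is not ergodic. The ensemble average across many trajectories is near 0, but the time average along any single trajectory is exactly +1 or -1, so time averages never match the ensemble average:

```python
import random

def trajectory(seed, steps=1000):
    """Non-ergodic toy process: one coin flip at t=0, then constant forever."""
    rng = random.Random(seed)
    value = rng.choice([-1, 1])  # decided once, never revisited
    return [value] * steps

def time_average(traj):
    return sum(traj) / len(traj)

def ensemble_average(seeds, t=0):
    """Average over many independent trajectories at a fixed time t."""
    return sum(trajectory(s)[t] for s in seeds) / len(seeds)

# Every single trajectory time-averages to exactly +1 or -1...
avgs = {time_average(trajectory(s)) for s in range(200)}
# ...while the ensemble average over many trajectories sits near 0.
ens = ensemble_average(range(2000))
print(avgs, round(ens, 2))
```

In an ergodic system, by contrast, a single long trajectory would eventually visit both values and its time average would converge to the ensemble average.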


Doesn't infinity include every possible option (possible meaning it can happen within the rules of physics)? If the model of the universe is one where events happen with some probability, then if the probability is nonzero and the number of universes is infinite, the event should happen in some of the universes.

(Still a layman, though.)
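That intuition matches basic probability for independent trials: if an event has fixed probability p > 0 per universe, the chance it happens in none of n universes is (1-p)^n, which vanishes as n grows. A quick numeric sketch (the value of p is arbitrary):

```python
# P(event never happens in n independent trials) = (1 - p)**n,
# which goes to 0 as n grows, for any fixed p > 0.
p = 1e-6  # arbitrary tiny per-universe probability

for n in [10**6, 10**8, 10**10]:
    p_never = (1 - p) ** n
    print(n, p_never)

# In the limit n -> infinity, (1 - p)**n -> 0, so the event occurs
# in at least one trial with probability approaching 1.
```

Whether the many-worlds-style "infinite universes" picture actually gives you independent trials with fixed nonzero probability is the physics question; the math above only covers the idealized case.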


evil.example.com can be a legitimate-looking website (e.g. a new tool a person might want to try). If it has a login with an email code, it can try to get the code from a different website (e.g. the aforementioned Shopify).

For the username + password hack to work, evil.example.com would have to look like Shopify, which is definitely more suspicious than being just a random legitimate-looking website.


Intentionality of the path is a good prerequisite for the object being technological, and its hostility is a possibility if the Dark Forest resolution is true (which we can neither prove nor disprove). The sentence sounds a bit sensationalist, but it seems scientifically valid to me, considering this is an area where we have little more than a bunch of unprovable hypotheses.


My favorite aspect of Dark Forest is that simply coming up with the concept also provides a resolution to the Fermi Paradox.


It isn't a good resolution, because it assumes all intelligent species in the universe must think and act according to the same rationale. But the one example of an intelligent species we're aware of (humanity) doesn't think and act this way: we've been blindly sending signals and probes out for decades now, and anyone observing our planet would probably notice obvious technosignatures.


The ones who behave that way don't last long enough to be witnessed by new civilizations like humanity; hence the darkness.

