Compete for talent. You can never compete with big tech salaries, and often you can't compete with their benefits either. But you can still compete in creative ways. The most obvious one, which almost no one does, is to promote people into fewer hours worked: instead of a pay raise, you give them every Friday off, for example. There are a lot of people out there motivated by things other than money.
This is kind of a weird example to begin with on a forum mostly populated by software engineers, because I'd find it very weird if a manager ever objected to someone using their personal phone at a SWE or similar office job. My guess is that the set of jobs where a boss would object to someone using their phone during work (while still getting their work done) and the set of jobs that would plausibly issue a company phone are mostly disjoint. A complete prohibition on using your phone seems like an entry-level retail rule, excepting corner cases like a very high-security facility where you wouldn't be allowed to bring in any outside electronics at all.
Software is a bit isolated from this (there are computers for "research" regardless, after all), but phone policies can be very strict in most other sectors. Phones are seen as a distraction at worst and unprofessional at best. A teacher couldn't just get away with having their phone out during class unless there's an emergency.
The color red is the usual example. A human can experience 'red', but 'red' does not exist out in the universe somewhere. 'Red' doesn't exist outside of someone experiencing 'red'.
I think philosophers are just using the word 'qualia' to label this 'experiencing' of inputs.
But it is still just a way to try to describe the process of processing inputs from the world.
It isn't metaphysical, because it can be measured.
I might have said 'unknowable' a little flippantly.
I just meant that, in these arguments, some people start using 'qualia' to mean some pretty extreme things, like the claim that our minds create the universe.
Can someone who's never seen red hallucinate something and assume it to be red? What if that hallucinated red is exactly the red they would see if they actually saw red?
Can you reproduce this feeling in someone by doing something to their physical body without showing them red?
If so, how does it differ from the latent encoding you'd get by uploading an all-red PDF to your favorite multimodal model?
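To make that concrete, here's a minimal sketch of what such a latent encoding is, using CLIP via Hugging Face transformers as a stand-in for "your favorite multimodal model" (the model choice is my assumption, and a solid red image stands in for the all-red PDF):

  # Sketch: the "latent encoding" of an all-red input, using CLIP as one
  # example of a multimodal encoder.
  import torch
  from PIL import Image
  from transformers import CLIPModel, CLIPProcessor

  model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
  processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

  red = Image.new("RGB", (224, 224), (255, 0, 0))  # the all-red input
  inputs = processor(images=red, return_tensors="pt")
  with torch.no_grad():
      emb = model.get_image_features(**inputs)  # one latent vector
  print(emb.shape)  # torch.Size([1, 512])

Whatever "experiencing red" means, on the model side it bottoms out in a vector like this one.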
Instead of doing that Socratic BS you see a lot here, I'll be more direct:
Until there are some useful lines that can be drawn to predict things, I won't accept using a fuzzy concept to make statements about classification; it's an ever-shifting goalpost.
There are answers to my legitimate questions above that would make me consider qualia useful, but when I first learned about the concept, it seemed fuzzy to the point of being empirically useless. It seems like a secular attempt at a soul.
Now, obviously, if you're trying to describe something as having experience, it needs some actual memory and processing of sensory input. Current generative AI doesn't have a continuity of experience that would imply whatever qualia could mean, but I find it hard to say definitively that their encodings of image-related input aren't qualia if we don't have hard lines for what qualia are.
I can feel an object and say 'it's hot' on a scale of 1-10. The actual temperature is known.
And I can do that multiple times, with some 1-10 scale, to get a sample.
Then do that with multiple people.
You can then get a distribution of what people think is 'hot' versus 'cold'.
What is icy versus bearable.
When you go to a doctor's office and they ask you to rate pain on a scale, do you think that is completely bogus?
It isn't exact, but you can correlate between people. Yes, redheads feel more pain; there are outliers.
But it's a far cry from metaphysical.
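A minimal sketch of that procedure in Python (all the ratings here are made up for illustration):

  # Estimate where "hot" begins from 1-10 ratings at known temperatures.
  from collections import defaultdict
  from statistics import mean, stdev

  # (person, temperature in C, rating 1-10) -- hypothetical samples
  samples = [
      ("a", 10, 2), ("a", 25, 4), ("a", 40, 7), ("a", 55, 9),
      ("b", 10, 1), ("b", 25, 5), ("b", 40, 8), ("b", 55, 10),
      ("c", 10, 3), ("c", 25, 4), ("c", 40, 6), ("c", 55, 9),
  ]

  by_person = defaultdict(list)
  for person, temp, rating in samples:
      by_person[person].append((temp, rating))

  # Per person: the lowest temperature rated >= 7, i.e. where "hot" starts.
  thresholds = [
      min(t for t, r in obs if r >= 7)
      for obs in by_person.values()
      if any(r >= 7 for _, r in obs)
  ]
  print(f"'hot' starts around {mean(thresholds):.0f}C "
        f"(+/- {stdev(thresholds):.0f}C across people)")

Subjective ratings, but a perfectly ordinary distribution you can compare across people.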
The problem here is the word 'qualia'. It's just too fuzzy a term.
Qualia are phenomenal properties of experience. A soul is something some religions claim exists outside of measurable physical reality and represents the "essence" of an organism, implying that consciousness is some divine process and conveniently letting us draw lines around whom and what we can and can't morally kill.
Qualia can be an entirely physical phenomenon, and the concept isn't loaded with theological baggage.
If they're entirely physical, what's the argument that multimodal models don't have them? Is it continuity of experience? Do they not encode their input into something that has a latent space? What makes this differ from experience?
They can be physical, but I'm not claiming to know definitively. The lines are extremely blurry, and I'll agree that current models have at least some of the necessary components for qualia, but again lack a sensory feedback loop. In another comment [0] I quote myself as saying:
As an independent organism, my system is a culmination of a great many different kinds of 'kins', which can usually be broken down into simple rules, such as the activation potential of a neuron in my brain being a straightforward non-linear response to the amount of voltage it receives from other neurons, as well as 'non-kins', such as a protein "walking" across a cell, i.e. continuously "falling" into the lowest energy state. Thus I do not gain any conscious perception from such proteins, but I do gain it from the total network effect of all my brain's neuronal structures making simple calculations based on sensory input.
which attempts to address why physically based qualia don't imply panpsychism.
I do think AI will have them. Nothing says they can't.
And we'll have just as hard a time defining it as we do with humans; we'll argue over how to measure it and whether it's real, just like with humans.
I don't know if LLMs will. But there are lots of AI models, and when someone puts one on a continuous learning loop with goals, it will be hard to argue it isn't experiencing something.
How do LLMs do on things that are common confusions? Do they specifically have to be trained against them?
I'm imagining a Monty Hall variant that isn't in the training set tripping them up the same way a full wine glass does.
You can make it work if you spoof the VM's SMBIOS strings and rename some device objects so it's not obvious you're running in a VM. There are plenty of guides on how to do this, e.g.: https://github.com/Scrut1ny/Hypervisor-Phantom
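As a minimal sketch of the SMBIOS part with QEMU (the vendor/product strings below are made-up examples, and real detection checks look at much more than this):

  # hide the hypervisor CPUID bit and present made-up vendor SMBIOS tables
  qemu-system-x86_64 \
    -cpu host,hypervisor=off \
    -smbios type=0,vendor="American Megatrends",version="2.17" \
    -smbios type=1,manufacturer="Dell Inc.",product="OptiPlex 7050",serial="ABC1234"

The device-object renaming is a separate step; guides like the linked repo walk through it.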