No, there isn't a plug-and-play one yet, but I've had great success with Home Assistant and the Home Assistant Voice Preview Edition, and its goal is pretty much to get rid of Alexa.
I'd imagine you'd have a bunch of cheap ones in the house that are all WiFi + Mic + Speakers, streaming back to your actual voice processing box (which would cost a wee bit more, but also have local access to all the data it needs).
You can see quite quickly that this becomes just another program running on a host, so if you use a slightly beefier machine and chuck a WiFi card in as well you've got your WiFi extenders.
> but I've had great success with Home Assistant and the Home Assistant Voice Preview Edition
As compared to Alexa? I bought their preview hardware (and had a home-rolled ESP32 version before that even), and things are getting closer; I can see the future where this works, but we aren't there today IMHO. HA Voice (the current hardware) does not do well enough in the mic or speaker [0] department when compared to the Echos. My Echo can hear me over just about anything and I can hear it back; the HA Voice hardware is too quiet, and the mic does not pick up my voice from the same distances or noise levels as the Echo.
I _love_ my HA setup and run everything through it. I'd like nothing more than to trash all my Echos; I came close to ordering multiple preview devices but convinced myself to get just one to test (glad I did).
Bottom line: I think HA Voice is the future (for me), but it's not ready yet; it doesn't compare to the Echos. I wish so much that my Sonos speakers could integrate with HA Voice, since I already have those everywhere and I know they sound good.
[0] I use Sonos for all my music/audio listening in my house so I only care about the speaker for hearing it talk back to me, I don't need high-end audiophile speakers.
I've not had any issues with the audio pickup, but it's in the living room rather than the kitchen. I have Alexas in most rooms. I don't play music through it, which I do through the Alexa. TBH I think the mic and the speakers will be fine once the rest of the 'product' is sorted.
I failed to mention I have Claude connected to it rather than their default assistant. To us, this just beats Alexa hands down. I have the default assistant on another wake word and Mistral on the last; they're about as good as Alexa, but I rarely use them.
Interesting; well, I'm glad it's working well for you all. I tested with local, HA Cloud, and ChatGPT/Claude, and that wasn't the sticking point; it was getting the hardware to hear me, or for me to hear it.
I will say, while it was too slow (today) with my local inference hardware (CPU on an older computer, and a little on my newer MBP), it was magical to talk to and hear back from HA all locally. I look forward to a future where I can do that at the same speed/quality as the cloud models. Yes, I know cloud models will continue to get better, but turning on/off my fans/lights/etc. doesn't need the best model available; it just needs to be reliable and fast. I'm even fine with it "shelling out" to the cloud if I ask for something outside of the basics, though I doubt I'll care to do that.
> Yes, I know cloud models will continue to get better, but turning on/off my fans/lights/etc. doesn't need the best model available; it just needs to be reliable and fast. I'm even fine with it "shelling out" to the cloud if I ask for something outside of the basics, though I doubt I'll care to do that.
This is exactly how I feel. It's also why I like the multiple wake words: one for remote and one for local.
One of the amazing things I've found with the LLM-powered voice assistants is being able to 'recover' from mistakes. E.g. when cooking and forgetting to set the next timer, I can recover by asking about another event, like when the last timer ended or when I turned off the bedroom light. It's annoying that you can't do that with Alexa. This 'complexity' doesn't need a huge or SOTA model to resolve! I also enjoy being able to ask for a song by half title and half description: my wife was trying to play Ghost by Au/Ra, which we just can't get the Alexa to do, and which I can't reasonably get my local LLMs to fail at.
After your comment earlier I took the Preview Edition into the kitchen, where it did perform a lot worse with the multiple sources of white noise and the odd room shape.
I had the same experience. eBay suggests that I'll have a Jabra speakerphone in my mailbox tomorrow to try moving everything to a better audio setup. The software seems good, but the audio performance is miserable on the preview device; you essentially have to be talking directly at the microphone from no more than a few feet away for anything to be recognized.
Sadly, the Jabra (or any USB audio device) means I'll need to shift over to an rPi, which comes with its own lifecycle challenges.
I've been on Linux since 2014; I'm an occasional user of Windows, booting into it with much regret to deal with clients' issues. I generally dislike working with macOS... but for someone used to macOS I see no meaningful degradation of the kind there is with Windows; your time is better spent earning/buying/setting up an M-series MacBook Air.
> I only really ever play one game, so that's not a blocker for me.
I play loads of games; it's mainly AAA multiplayer titles that aren't able to run on Linux due to kernel anti-cheat. Nearly everything else runs well with minimal effort using Proton via Steam (either installed via Steam or imported as a non-Steam game).
When I'm coding I have about six instances of VSCode on the go at once, each with its own worktree, and the terminal is a dangerous cc (Claude Code) in Docker. Most of the time they are sitting waiting for me. Generally a few are doing spec work/reporting for me to understand something, sometimes with issue context; these are used to plan or redirect my attention if I might've missed something. A few will be just hacking on issues with little to no oversight: I just want it to iterate tests + code + screenshots to come up with a way to do a thing / fix a thing, and I'll likely not use the code it generates directly. Then one or two are actually doing work that I'll end up PR'ing, or if I'm reviewing they'll be helping me do the review, either mechanically (hey Claude, give me a script to launch n instances with a configuration that would show X ... ok, launch them ... ok, change to this ... grab X from the db ... etc.) or insight-based (hey Claude, check issue X against code Y: does the code reflect their comments; look up the docs for A and compare to the usage in B, give me references).
I've TL'd and PM'd as well as IC'd. Now my IC work feels a lot more like a cross between being a TL and being a senior with a handful of exuberant, reasonably competent juniors: lots of reviewing, but still having to get into the weeds quickly and then get out of their way.
>I've TL'd and PM'd as well as IC'd. Now my IC work feels a lot more like a cross between being a TL
Interesting... I've been in management for a few years now and have recently been doing some AI coding work. I've found my skills as a manager/TL are far more transferable to getting the best out of AI agents than my skills as a coder.
Same. I was a very average dev coming out of CS, and a PM before this. I find that my product training has been more useful, especially with prototypes, but I do leave nearly all of the hard systems, infra, and backend work to my much, much more competent engineering teammates.
Being able to partially index into JSON has made this much more straightforward than ever before, but historically pre-creating empty indexed custom columns was somewhat common (leading to hard limits like a max of 20 custom tags), as was EAV (which is arguably an inner-platform effect).
There are more solutions than these, but until you're at truly custom DB scale with a specific problem here, these will solve it for you.
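As a minimal sketch of the "index into JSON" approach, here's what an expression index over a JSON field looks like using SQLite from Python (Postgres offers the same idea with indexes over jsonb expressions). The table and the "color" tag are hypothetical, purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, tags TEXT)")
conn.executemany(
    "INSERT INTO items (tags) VALUES (?)",
    [('{"color": "red", "size": 3}',), ('{"color": "blue"}',)],
)

# Index the extracted JSON field directly: no pre-created empty custom
# columns, no EAV side table.
conn.execute(
    "CREATE INDEX idx_items_color ON items (json_extract(tags, '$.color'))"
)

# Queries that filter on the same expression can use the index.
rows = conn.execute(
    "SELECT id FROM items WHERE json_extract(tags, '$.color') = 'red'"
).fetchall()
print(rows)  # -> [(1,)]
```

The key point is that customers can add arbitrary tags without schema changes, and only the tags you actually query need an index.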
Yeah, generally instancing a table per customer is an old smell, indicating either a permissions issue (no RLS) or a DB that doesn't support partial indexes (which basically everything does now).
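For the partial-index alternative to table-per-customer, a sketch with SQLite (which, like Postgres, supports `CREATE INDEX ... WHERE`); the schema here is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One shared table with a customer_id column, instead of a table per customer.
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, "
    "status TEXT)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, status) VALUES (?, ?)",
    [(1, "open"), (1, "closed"), (2, "open")],
)

# A partial index covers only the rows a hot query touches (open orders),
# keeping the index small without fragmenting the data per customer.
conn.execute(
    "CREATE INDEX idx_open_orders ON orders (customer_id) "
    "WHERE status = 'open'"
)

rows = conn.execute(
    "SELECT id FROM orders WHERE customer_id = 1 AND status = 'open'"
).fetchall()
print(rows)  # -> [(1,)]
```

Row-level security (in databases that support it, like Postgres) then handles the permissions concern that per-customer tables were papering over.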
The institutional advice would struggle to directly recommend any other action, as it can only be seen to create an unconstrained liability: if not legal, then social. That does not mean they wouldn't be amenable to a system of placing people who were vetted and managed, if they were sufficiently convinced it would resolve the issue (at institutional scale).
It's worth noting that the homeless person in this situation was in fact known to those who provided the home, and not as casually as the first para suggests.
As the multiple sibling comments say, let it sit. Some desiccant next to it to suck moisture out of the air will help (rice is famously OK for this); no need to put it in the desiccant. A little bit of airflow is also good.
You may also find that rotating it into different positions accelerates drying.
I'm using an OLED X1 Carbon right now in the UK. I use it all the time in low light.
I just turned all the lights off (even the Christmas tree) and ran through a handful of usage situations, and couldn't see any issues. I turned some lights on and did the same; I couldn't see any issues. I asked Claude and got told to do the finger test, and that is barely perceptible. I then used my phone to record the screen, and yes: I can confirm there is an effect that my Pixel 9a's camera picks up, barely noticeable at 240Hz and definitely noticeable at 480Hz.
Maybe the guy is particularly sensitive, but from the framing of the rest of the article I think he's blowing a few things out of proportion.
I probably should've done a better job of clarifying this, but my issue with OLEDs isn't just that (at least historically) they tend to be too bright even at lower brightness settings, but also the other issues they come with, such as burn-in and text potentially looking less pleasant compared to IPS displays. Burn-in is probably my biggest concern here, especially since it really seems to be a case of winning the lottery or not (i.e. for some it's fine for years, others get burn-in after just a few months).
Basically I just trust IPS more than any other technology :)
Burn-in probably depends on the model rather than being a lottery, but it shouldn't be a major concern for typical usage patterns on recent models. The text issue is caused by a PenTile subpixel layout, which is no longer common. I love OLED for low-light evening usage because IPS displays always have some backlight bleed, whereas OLEDs can display true blacks/pure warm tones, which I find much more pleasant in the evenings. IMO power consumption is the only major downside of OLED displays for general-purpose laptops and phones.
I've only recently bought OLED laptops, so I can't speak to burn-in, but of the three I've tested, they have a lower minimum brightness than my other IPS laptops.
In terms of text clarity, "2k" OLEDs (1920x1200) are a bit blurry; IPS panels and 3k OLEDs are noticeably sharper, with not much difference between the two.