blensor's comments (Hacker News)

So you are basically looking at "fMRI" of the "brain" while it's doing a wide range of things and cutting out the things that stay dark the most?


Oh that's a good analogy! Yes that sounds right!


Not an AI researcher here, so this is probably common knowledge for people in the field, but I saw a video about quantization recently and wondered exactly about that: whether it's possible to compress a net by using more precision where it counts and less where it's not important. And also wondered how one would go about deciding which parts count and which don't.

Great to know that this is already a thing, and I assume model "compression" is going to be the next hot topic.


Yes, you're thinking about it exactly right! We shouldn't quantize a model naively to 2-bit or 4-bit; we should do it smartly!


How do you pick which one should be 2, which one should be 4, etc.? Is this secret sauce, or something open?


Oh I wrote about it here: https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs We might provide some scripts for them in the future!


Thanks! But, I can't find any details on how you "intelligently adjust quantization for every possible layer" from that page. I assume this is a secret?

I am wondering about the possibility that different use cases might require different "intelligent quantization", i.e., quantization for an LLM for financial analysis might differ from that for an LLM for code generation. I am currently doing a postdoc in this. Interested in doing research together?


Oh, we haven't published about it yet! I talk about it in bits and pieces - we might do a larger blog on it!

Yes, different use cases will be different - oh interesting! Sorry, I doubt I can be of much help in your research - I'm mainly an engineering guy, so less research focused!
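Unsloth's exact method isn't published, but a common approach in the quantization literature is sensitivity-based mixed precision: start every layer at the lowest bit width, then greedily upgrade the layers whose quantized weights deviate most, until a bit budget is spent. This is a generic sketch of that idea, not Unsloth's actual algorithm; the layer names and budget are made up.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight matrix to the given bit width."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def assign_bits(layers, budget_bits, choices=(2, 4, 8)):
    """Greedy mixed-precision assignment: everything starts at the lowest
    precision; the layer with the worst relative quantization error gets
    upgraded until the parameter-weighted average hits the bit budget."""
    bits = {name: choices[0] for name in layers}
    total = sum(w.size for w in layers.values())

    def avg_bits():
        return sum(bits[n] * layers[n].size for n in layers) / total

    while avg_bits() < budget_bits:
        # Relative error each layer suffers at its current bit width.
        errors = {
            n: np.linalg.norm(w - quantize(w, bits[n])) / np.linalg.norm(w)
            for n, w in layers.items() if bits[n] < choices[-1]
        }
        if not errors:
            break  # every layer is already at the highest precision
        worst = max(errors, key=errors.get)
        bits[worst] = choices[choices.index(bits[worst]) + 1]
    return bits

rng = np.random.default_rng(0)
layers = {f"layer{i}": rng.normal(size=(64, 64)) * (1 + i) for i in range(4)}
print(assign_bits(layers, budget_bits=4.0))
```

Real schemes (GPTQ, AWQ, Unsloth's dynamic GGUFs) use activation statistics from calibration data rather than raw weight error, but the allocate-bits-where-they-hurt-least structure is the same.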


I recently (a few months ago) finally caved and bought DaVinci Resolve for Linux, and despite that I almost exclusively use Kdenlive instead every time I quickly want to make a video.

This of course is most likely a skill issue on my part, but it really feels to me like Kdenlive gets in the way of my flow much less than DaVinci.


I really tried to onboard on Davinci Resolve but it's just too complex for the basic editing I need. It does have brilliant color grading options that are not available in Kdenlive but I'm using premade LUTs for 99% of my edits and Kdenlive supports LUTs really well these days!


I started using Resolve last year, and while I get it now and my hands know the shortcuts, I don't feel it's a very intuitive editor.

Very nice color options though, that part is hard to beat


Not really that much of a game, but for falling notes with standard MIDI support there's Synthesia.

https://synthesiagame.com/


Just yesterday I was thinking about whether we need a code comment system that separates intentional comments from AI notes/thoughts when working in the same files.

I don't want to delete all the AI's thoughts right away, as they make it easier for the AI to continue, but I also don't want to weed through endless superfluous comments.
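One lightweight way to get this without new tooling is a naming convention: AI-generated notes carry a distinctive tag after the comment marker, and a tiny filter strips them before review or commit. The `AI:` tag here is purely a hypothetical convention, not an existing standard.

```python
import re

# Hypothetical convention: transient AI notes look like "# AI: ...".
AI_COMMENT = re.compile(r"^\s*#\s*AI:")

def strip_ai_comments(source: str) -> str:
    """Drop full-line AI-tagged comments; keep intentional human comments."""
    kept = [line for line in source.splitlines() if not AI_COMMENT.match(line)]
    return "\n".join(kept)

code = """\
# Compute the running total (human comment, kept)
total = 0
# AI: considered itertools.accumulate, left as a loop for clarity
for x in values:
    total += x
"""
print(strip_ai_comments(code))
```

The same regex could power an editor fold rule or a pre-commit hook, so the AI keeps its breadcrumbs in the working tree while reviewers never see them.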


GGWave is a really great tool and supports both audible and inaudible versions.

We are using it in XRWorkout to automatically sync up in-game recordings with external recordings. We use the audible version instead of the ultrasound version so a human can sync them up too if they are using a regular video editor instead of doing it automatically in our own tools.

Here is an example of how that sounds: https://xrworkout.nyc3.digitaloceanspaces.com/data/video/036...
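Once both recordings contain the same audible marker, the alignment step itself is simple: cross-correlate the two tracks and read the offset off the peak. This is a minimal sketch of that step, not XRWorkout's actual tooling; random noise stands in for the GGWave chirp.

```python
import numpy as np

def sync_offset(ref: np.ndarray, other: np.ndarray) -> int:
    """Return how many samples `other` lags behind `ref`, found by
    locating the peak of the cross-correlation: the shared audio
    marker lines up where the correlation is maximal."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

rng = np.random.default_rng(1)
marker = rng.normal(size=200)  # stand-in for the GGWave marker sound
ref = np.concatenate([np.zeros(50), marker, np.zeros(300)])
other = np.concatenate([np.zeros(170), marker, np.zeros(180)])
print(sync_offset(ref, other))  # marker starts 120 samples later in `other`
```

Dividing the returned sample offset by the sample rate gives the shift in seconds to apply in the video editor.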


This seems like such a fun way to work out but not with a VR helmet on.


Tangential thought, but someone should make a library for freely available AI generated videos and songs.

There is so much energy wasted on generating, and most of the time you aren't getting exactly what you wanted anyway.


So you want a free, legal source of art, which was generated with tools that scraped other people's art without their consent, without respecting its licensing, and without their knowledge? :)

Doesn't make sense to me - the result has questionable legal status at best.


Yes

For non commercial purposes


It’s called TikTok.


I meant as a resource for others to use, like Shutterstock and Artlist, but solely for AI-generated B-roll to be used by others.

If you generate videos that then sit in some walled garden, why not make them available for others to use?

I am pretty sure there are endless hours of similar videos across all accounts, and instead of recreating them from scratch every single time, why not check a public database first to see if there is already something for you to use.


Probably quicker to recreate it from scratch than to spend lots of time scrolling through that public database looking for the perfect clip that's more likely than not indexed incorrectly and thus effectively unfindable.


If the prompt is saved alongside it, then it should be manageable.

Or at least if platforms would show similar existing results you could use before generating something new.

For all the first-frame/last-frame video creations it's probably not feasible, but for pure text-to-video it could work.
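The check-before-generating idea can be sketched with nothing more than token overlap between the new prompt and the stored ones: if a saved prompt is similar enough, return its clip instead of generating. All prompts, filenames, and the threshold below are illustrative; a real service would likely use embedding similarity rather than word overlap.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two prompts, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_reusable(prompt: str, library: dict, threshold: float = 0.6):
    """Return (clip, score) for the closest already-generated video,
    or None if nothing in the library is similar enough."""
    if not library:
        return None
    best_prompt = max(library, key=lambda p: jaccard(prompt, p))
    score = jaccard(prompt, best_prompt)
    return (library[best_prompt], score) if score >= threshold else None

# Hypothetical prompt-indexed library of past generations.
library = {
    "drone shot of a foggy pine forest at sunrise": "clip_0141.mp4",
    "close up of coffee being poured in slow motion": "clip_0862.mp4",
}
print(find_reusable("foggy pine forest drone shot at sunrise", library))
```

A reworded but equivalent prompt still matches, which is exactly the case where regenerating would waste energy for a near-identical result.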


> check with a public database first if there is already something for you to use

That's from the old world. Think of it like, why would the kids post-Napster flip through an album store? They wouldn't and they didn't. We had to flip through databases of stock images because we couldn't conjure up whatever we wanted instantly before.


When it becomes free and much more energy efficient, sure, but look at how much Veo 3 costs.


Take a look at Google Whisk, it doesn't cost much at all.


I think the reason they suit bigger heads as well is that the hinge on the stems can go slightly beyond 90°.


So far I haven't had any issues, but I haven't done long tests yet. I'm using BredOS, not their official Orange Pi images, though.


If I had an XReal I could look into that too; it's probably pretty similar to how it works for the Viture.

Regarding HDMI on the Quest 3, I believe it was sometime last year when they made that possible: https://www.meta.com/help/quest/1350679463006443


Yeah I bet it's real similar. But I understand.

Thanks for the heads up about that Quest thing, I actually have some of those adapters so I will give it a try.

