The description alone sounds like something I would love to give a try.
As someone who has never opened TikTok, how does the app handle that? Do people even upload recipes on TikTok? I thought TikTok posts are mostly videos. So does that mean it uses some AI model?
Great question! TikTok recipes are a mixed bag honestly.
When you paste a TikTok link, we pull the video caption and try to extract ingredients/instructions from the text. If the creator wrote out the recipe in the description (or linked to their blog), it works really well.
But you're right - a lot of TikTok recipes are just "watch me cook" with no written recipe. In those cases, we save the video embed so you can still watch it directly in the app, add your own tags, favourite it, and keep it organised with your other recipes. Not as ideal as having the full recipe extracted, but at least it's not lost in your TikTok likes anymore.
For recipe cards shown visually in videos, you can screenshot and use our photo import feature - that one does use GPT-4 Vision to OCR the text.
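Roughly, the caption path looks like this - a simplified sketch, not exactly what we ship (the oEmbed call, model choice, and prompt are illustrative):

```python
import json
import requests
from openai import OpenAI

client = OpenAI()

def fetch_caption(tiktok_url: str) -> str:
    # TikTok's public oEmbed endpoint returns the post caption as "title".
    resp = requests.get(
        "https://www.tiktok.com/oembed", params={"url": tiktok_url}, timeout=10
    )
    resp.raise_for_status()
    return resp.json().get("title", "")

def extract_recipe(caption: str) -> dict | None:
    # Ask the model for structured JSON; bail out if the caption has no real recipe.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract a recipe from this TikTok caption as JSON with keys "
                    "'title', 'ingredients' (list of strings), 'steps' (list of strings). "
                    'If there is no usable recipe, return {"no_recipe": true}.'
                ),
            },
            {"role": "user", "content": caption},
        ],
    )
    data = json.loads(completion.choices[0].message.content)
    return None if data.get("no_recipe") else data

recipe = extract_recipe(fetch_caption("https://www.tiktok.com/@someone/video/123"))
# None -> fall back to saving just the video embed, tags, and favourites.
```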
I referred to this paper a lot back when it was hyped, when people still cared about the architectural decisions of neural networks. That was also the year I started studying neural networks.
I think the idea still holds. Although interest has shifted towards test-time scaling and thinking, researchers still care about architectures, e.g. the recently published Nemotron 3.
Can anyone share more updates on this direction of research, i.e. more recent papers?
I will wait for actual reviews from users, but I've lost faith in Intel chips.
I was at CES 2024 and saw a Snapdragon X Elite chip running a local LLM (Llama, I believe). How did it turn out? Users cannot use that laptop except for running an LLM. They had no plans for a translation layer like Apple's Rosetta. Intel would be different in that regard for sure, but I just don't think it will fly against Ryzen AI chips or Apple silicon.
Isn't it a bit of an exaggeration to say that users cannot use Snapdragon laptops except for running LLMs? Qualcomm and Microsoft already have a translation layer named Prism (not as good as Rosetta, but pretty good nevertheless): https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-x8...
> Isn't it a bit of an exaggeration to say that users cannot use Snapdragon laptops except for running LLMs?
I think maybe what OP meant was that the memory occupied by the model means you can't do anything alongside inferencing, e.g. have a compile job or whatever running (unless you unload the model once you're done asking it questions).
To be honest, we could really do with RAM abundance. Imagine if 128GB of RAM became as common as 8GB is today - that would normalize local LLM inferencing (or at least make a decent attempt at it).
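Rough back-of-the-envelope for the weights alone (ignoring KV cache, activations, and whatever else is running on the box; the model sizes here are just illustrative):

```python
# Approximate memory for model weights: parameters x bytes per parameter.
def weight_gib(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * (bits_per_param / 8) / 2**30

for params, bits in [(8, 4), (8, 16), (70, 4), (70, 16)]:
    print(f"{params}B model @ {bits}-bit ~ {weight_gib(params, bits):.0f} GiB")
# 8B @ 4-bit ~ 4 GiB, 8B @ 16-bit ~ 15 GiB,
# 70B @ 4-bit ~ 33 GiB, 70B @ 16-bit ~ 130 GiB
```

So on today's 8-16GB laptops even a small quantized model crowds out everything else, while at 128GB you could keep a decent quantized model resident and barely notice.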
Lost faith from what? On mobile x86, Lunar Lake chips are clearly the best for battery life at the moment, and mobile Arrow Lake is competitive with AMD's offerings. The only thing they're missing is a Strix Halo equivalent, but AMD messed that one up and there are like 2 laptops with it.
The new Intel node seems to be somewhat weaker than TSMC's going by the frequency numbers of the CPUs, but what matters most in a laptop is real battery life anyway.
Lunar Lake throttles a lot. It can lose 50% of its performance on battery. It's not the same as Apple Silicon, where the performance is exactly the same plugged in or not.
Lunar Lake is also very slow in ST and MT compared to Apple.
Qualcomm's X Elite 2 SoCs have a much better chance of duplicating the MacBook experience.
Nobody is duplicating the MacBook experience, because Apple integrates both hardware and OS, while everyone else is fighting Windows and OEMs that are horrible at firmware.
LNL should only power throttle when you go to power saver modes. Battery life will suffer when you let it boost high on all cores, but you're not getting great battery life doing heavy all-core loads either way. Overall, MT should be better on Panther Lake with the unified architecture, as AFAIK LNL's main problem was being too expensive, so higher-end, high-core-count SKUs were served by mobile Arrow Lake.
And we're also getting what seems to be a very good iGPU, while AMD's iGPUs outside of Strix Halo are barely worth talking about.
ST is about the same as AMD's. Apple being ahead is nothing out of the ordinary since their ARM switch: there's the node advantage, what I mentioned about the OS, and simply a better architecture, as they plainly have the best people working on it at the moment.
> LNL throttles heavily even on the default profile, not just power saver modes.
This does also show it not changing in other benchmarks, but I don't have an LNL laptop to test things myself, just going off of what people I know have tested. It's also still set to Balanced, so the Best Performance power plan would, I assume, push it to use its cores normally - on Windows laptops I've owned this could be done with a hotkey.
> Lunar Lake uses TSMC N3 for compute tile. There is no node advantage.
LNL is N3B, Apple is on N3E, which is a slight improvement for efficiency.
> Yet, M4 is 42% faster in ST and M5 is 50% faster based on Geekbench 6 ST.
Like I said, they simply have a better architecture at the moment, one that's also more focused on the client workloads that GB benchmarks, because their use cases are narrower.
If you compare something like optimized SIMD, Intel/AMD will come out on top in perf/watt.
And I'm not sure why being behind the market leader would make one lose faith in Intel; their most recent client fuckup was the Raptor Lake instability, and I'd say that was handled decently. For now there's nothing else that'd indicate Windows on ARM getting to Apple-level battery performance without all of the vertical integration.
ETA: looking at things, the throttling behaviour seems to be very much OEM-dependent, though the tradeoffs will always remain the same.
> This does also show it not changing in other benchmarks, but I don't have an LNL laptop to test things myself, just going off of what people I know have tested. It's also still set to Balanced, so the Best Performance power plan would, I assume, push it to use its cores normally - on Windows laptops I've owned this could be done with a hotkey.
It literally throttles in every benchmark shown. Some more than others. It throttles even more than the older Intel SoC LNL replaced.
> LNL is N3B, Apple is on N3E, which is a slight improvement for efficiency.
Still the same family. The difference is tiny. Not nearly enough to make up the vast difference between LNL and M4. Note that N3B actually has higher density than N3E.
> Like I said, they simply have a better architecture at the moment, one that's also more focused on the client workloads that GB benchmarks, because their use cases are narrower. If you compare something like optimized SIMD, Intel/AMD will come out on top in perf/watt.
It is so inspiring. Recently, I've been thinking of making a side project using LLMs for learning new languages too. Transformers were originally designed for machine translation, and now we have much better ones. My idea is to write a mobile app, which I have zero experience with.
I actually believe that the era of lives will come. But I can't wait for UBI - nobody can. Even if the era of jobs ends, for those of us living through this era, the event is not going to be a period but an ellipsis. I can't wait to see how it unfolds, yet I doubt I will see the end happen in my lifetime.
What struck me most were the requirements. I've been self-hosting LibreChat on a Raspberry Pi and was hoping to test Onyx on it, but no way - it requires 4 vCPUs and 10GB of RAM? I wish I could strip out the resource-hungry components and keep only the basics so that I could serve it on a less demanding server.
As much as I am excited by the price, the features they call "advanced tool use"[1] look really useful to me: tool search, programmatic tool calling (like HF's smolagents.CodeAgent), and tool use examples (in-context learning).
They said they have seen 134K tokens for tool definitions alone. That is insane. I also really liked the puzzle game video.
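To put that 134K number in perspective: each tool definition is a JSON schema that rides along in the prompt on every request, so a big toolbox burns context before the model does any work. A rough sketch of measuring that with the token-counting endpoint (the tool and model id below are made up for illustration):

```python
import anthropic

client = anthropic.Anthropic()

# One hypothetical tool definition. Real MCP servers can expose dozens of these,
# each with its own JSON schema, and all of them ship with every request.
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    },
}

# Pretend we wired up 50 distinct tools of similar size.
tools = [{**weather_tool, "name": f"get_weather_{i}"} for i in range(50)]

count = client.messages.count_tokens(
    model="claude-sonnet-4-5",  # illustrative model id
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
)
print(count.input_tokens)  # grows with every tool you add - hence tool search
```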