radicality's comments | Hacker News

I don’t know about Google Wallet, but for iOS Wallet it is not possible to create a new entry yourself as a normal user. Passes have to be signed with a certificate from the $99/yr Apple Developer Program, so this thing does the signing for you. The utility is that whatever you created now lives in one place with the rest of your passes.
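
For anyone curious what the signing actually involves: an Apple Wallet pass (.pkpass) is basically a zip archive containing a pass.json, a manifest.json of SHA-1 hashes, and a detached PKCS#7 signature of that manifest made with the Apple-issued pass type certificate. A rough Python sketch of the bundle layout (field values are made up, icons are omitted, and the signing step is only indicated in a comment since it needs the paid certificate):

    # Sketch of assembling a Wallet pass bundle (.pkpass); values are illustrative.
    import hashlib
    import json
    import zipfile

    pass_json = {
        "formatVersion": 1,
        "passTypeIdentifier": "pass.example.loyalty",  # hypothetical identifier
        "serialNumber": "0001",
        "teamIdentifier": "TEAMID1234",                # hypothetical team ID
        "organizationName": "Example Coffee",
        "description": "Loyalty card",
        "barcode": {
            "format": "PKBarcodeFormatQR",
            "message": "1234567890",
            "messageEncoding": "iso-8859-1",
        },
        "storeCard": {},
    }

    files = {"pass.json": json.dumps(pass_json).encode()}

    # manifest.json maps every file in the bundle to its SHA-1 hash
    manifest = {name: hashlib.sha1(data).hexdigest() for name, data in files.items()}
    files["manifest.json"] = json.dumps(manifest).encode()

    # "signature" must be a detached PKCS#7 signature of manifest.json made with
    # the pass type certificate from the $99/yr developer program (not shown here).
    # files["signature"] = sign_manifest(files["manifest.json"])

    with zipfile.ZipFile("loyalty.pkpass", "w") as z:
        for name, data in files.items():
            z.writestr(name, data)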

Oh, okay, thanks.

So yeah, in Google Wallet you can just add the loyalty card like that (scan the QR code/barcode or type the number), and then have it synchronised to your account (to have it available on your other phone, for example).

Sure, not every kind of pass can be added like this (not movie tickets or boarding passes), but it covers everything that matters.


> and they are accessible without unlocking your device.

Yep, same with Google Wallet. Display a boarding pass, lock the device, wake the phone without unlocking, and it's right there.

How do I get Gemini to be more proactive about doing searches and double-checking itself against new, real-world information?

For that reason I still find ChatGPT way better for me: for many things I ask, it first goes off to do online research and has up-to-date information, which is surprising, as you would expect Google to be way better at this. For example, I recently asked Gemini 3 Pro how to do something with an “RTX 6000 Blackwell 96GB” card, and it told me this card doesn’t exist and that I probably meant the RTX 6000 Ada… Just today I asked about something on macOS 26.2, and it told me to be cautious as it’s a beta release (it’s not). With ChatGPT I trust the final output more, since it very often goes off to find live sources and information.


Gemini is bad at this sort of thing, but I find all models tend to do this to some degree. You have to know this could be coming, give the model indicators that its training data is going to be out of date, and tell it that it must web-search the latest information as of today or this month. They aren’t taught to ask themselves “is my understanding of this topic based on info that is likely out of date?” up front; they only understand after the fact. I usually just get annoyed and low-key condescend to it for assuming its old-ass training data is sufficient grounding for correcting me.
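
A minimal sketch of what that standing instruction can look like, here via the OpenAI Python SDK (the model name and wording are placeholders, and the prompt by itself doesn't add a browsing tool; the same text also works as a custom instruction in the chat apps):

    # Sketch: pin a "your training data is stale" instruction up front.
    # Model name and wording are placeholders; actual live search still
    # depends on a browsing/search tool being available to the model.
    from openai import OpenAI

    GROUNDING = (
        "Assume your training data is out of date. For anything involving current "
        "products, versions, releases, or prices, state what needs verification and "
        "check a live web source before correcting the user."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": GROUNDING},
            {"role": "user", "content": "Does an RTX 6000 Blackwell 96GB card exist?"},
        ],
    )
    print(resp.choices[0].message.content)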

That epistemic calibration is something they are capable of thinking through if you point it out. But they aren’t trained to stop and check themselves on how confident they have a right to be. This is a metacognitive interrupt that is socialized into girls between 6 and 9 and into boys between 11 and 13, whereas the metacognitive interrupt to calibrate to appropriate confidence levels of knowledge is a skill that models aren’t taught and humans learn socially by pissing off other humans. It’s why we get pissed off at models when they correct us with old, bad data. Our anger is the training tool to stop that behavior; it’s just that they can’t take in that training signal at inference time.


Yeah any time I mention GPT-5, the other models start having panic attacks and correcting it to GPT-4. Even if it's a model name in source code!

They think GPT-5 won't be released until the distant future, but what they don't realize is we have already arrived ;)


Oh wow, I had no idea that “virtual location” was even a thing. Imo it shouldn’t be; I don’t even see a use case for it, it just seems like straight-up lying about the traffic’s exit location. Glad to see the provider I occasionally use, Mullvad, passed the test.


Many providers in the list, such as PIA, warn the user when a virtual location is chosen. The point is to get a wider range of countries. Most websites, such as YouTube and Netflix, are fooled by the virtual locations, so it works!


Yeah, I'm really not seeing how a "virtual location" is any different from outright fraud.


It depends on whether the VPN is lying to you. Proton, for example, makes them quite explicit in the software and even lists them for you here: https://protonvpn.com/support/how-smart-routing-works and it seems NordVPN also has a page explaining this.


I used a VPN with a virtual China location for a while, which avoided ads on some websites: China blocks those sites, so they don't serve any ads to visitors who appear to be in China, but since the VPN exit wasn't actually in China it could still reach the sites fine.


Or try the very sarcastic and nihilistic ‘Monday’ GPT, which, surprisingly, is an official OpenAI GPT.

edit, add link: https://chatgpt.com/g/g-67ec3b4988f8819184c5454e18f5e84b-mon...


Thanks for the link! I didn’t know Monday existed. I laughed so hard at its output. But I fear that using it regularly would poison my soul…


I actually had Monday help me write a system prompt that replicates its behavior. I vastly prefer Monday; it feels much more grounded compared to the base model. It was also a big learning moment for me about how LLMs work.


I think the article should also mention how OpenAI is likely responsible for this. Here's a good article I found in another thread here yesterday: https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram...


Yes. On the Moore's Law Is Dead podcast they were talking about rumors that some 'AI enterprise company's representatives' were trying to buy memory in bulk from brick-and-mortar stores; in some cases OpenAI was mentioned. Crazy if true. Also interesting considering none of that memory would be ECC, which is what you would opt for in a commercial server.


The author does mention:

> The reason for all this, of course, is AI datacenter buildouts.

This bubble can't pop fast enough. I'm curious to see which actually useful AIs remain after the burst, and how expensive they are once the cash-burning subsides.


Similarly for TVs - I got a Samsung OLED a few years ago, and while the hardware seems great, I do wish I had gone with LG, as their TVs seem more open to installing custom firmware. (I pretty much just use Apple TV and Fire TV devices plugged into the Samsung, but still, the main TV UI is pretty abysmal.)


Photos is definitely not great, though I still try to deal with it for the easy iCloud syncing. Some examples of Photos crappiness off the top of my head, all on my top-of-the-range 128GB MacBook M4 Max:

- Doing Cmd-R (rotate) on a standard few-megabyte image might beachball the app for a few seconds. Rotating a small image file...

- Rotating a video seems to re-encode the whole video, instead of just setting some metadata flags (a metadata-only approach is sketched after this list). Imagine you have, say, a 20GB video recording and rotate it. That will now be a separate new 20GB file on your Mac's drive.

- If I view the album of a specific person who has many pictures with location metadata and scroll to the bottom where the map is, the app almost immediately starts allocating >100GB of memory, beachballs, starts paging gigabytes to disk, and you have to kill it ASAP.
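
For the rotation point, this is roughly the metadata-only alternative I'd expect (a sketch calling ffmpeg from Python; it assumes ffmpeg 6.0 or newer, which added the -display_rotation input option that sets the display matrix so the streams are copied rather than re-encoded; it still remuxes into a new file rather than editing in place):

    # Sketch: lossless rotation by setting the display matrix instead of re-encoding.
    # Assumes ffmpeg 6.0+ on PATH (for the -display_rotation input option).
    import subprocess

    def rotate_without_reencode(src: str, dst: str, degrees: int = 90) -> None:
        subprocess.run(
            [
                "ffmpeg",
                "-display_rotation", str(degrees),  # rotation metadata for the input
                "-i", src,
                "-c", "copy",                       # stream copy: no re-encode
                dst,
            ],
            check=True,
        )

    rotate_without_reencode("recording.mp4", "recording_rotated.mp4")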


Easy syncing? Kinda. It works, generally, eventually.

No progress, no indicators. And what little status you get has no connection to reality.

Will it sync? When? Who knows? You’re on WiFi with a full battery and charging? So? Might be a minute, might be an hour. Oh, you restarted Photos? Who cares? Not Photos.

They'll get there. Sometime.


I just checked how much I paid around 12 months ago for a Crucial 96GB kit (2x48GB DDR5-5600 SO-DIMM). It was $224; the same kit today I see listed at $592, wild :/


This is insane!

I got 2 sticks of 16GB DDR4 SODIMM for €65.98 back in February. The same two sticks in the same store now cost €186


Same, bought in August for $250 (EU), now it's ~$840. I ended up returning the laptop I'd bought it for and thought 'why hold on to the RAM, it'll only depreciate in value,' so I returned that too. Better hold on to my PS5, I guess.


I bought 2x 32GB DDR5 in August for $150. Now it's $440. I dodged a HUGE bullet.


I did buy 384GB worth of Samsung DDR5-4800 sticks for my homelab a few months ago. I was wondering at the time if I really needed it; well, I ended up using it anyway, and it turns out I dodged a bullet big time.


Just bought that exact kit for my Minisforum 790S7 build at an eye-watering $592... Kicking myself, as I was starting to contemplate it in early October but wasn't yet seriously looking.


96GB 6400, 380€ 2023-11


Do you know if it’s supported on Mac too, with platform-specific optimizations like running it on the GPU / with MPS?


You mean Vulkan? In the blog post there is a reference to all Vulkan-supported platforms.

If you mean an ffmpeg build with Whisper: from memory, I didn't see ffmpeg builds for Mac, so you will probably need to compile it yourself.


> iPhone Pocket features a singular 3D-knitted construction

What does that mean? What would be an example of a 2D-knitted construction?


Imagine being the copywriters tasked with making this sound cool.

