
Rivian's R1* max battery offers an EPA range of over 400 miles; anecdotal experience has taught me it's actually around 325-350 on the highway with heating/AC, but still.


Yes! It would be great to support a generic "custom" CLI option. I'm just trying to cover the main ones that I use initially.


Superposition is a way to access Claude Code (and other CLIs) running on your laptop from anywhere, with multiple sessions and workspace isolation (thanks to git worktrees).

Since my last check-in, I've made quite a few improvements to Superposition, including:

- Gateway (docker image included) to access your laptop from anywhere without needing to open your ports

- Custom CLI command support

- Local git repos (no need for github)

- Automatic updates for the runner process (simply restart the main binary)

I've been using this every day to do a large portion of my own development, and it's proven to be very useful. Let me know what you think!


Super exciting news -- Shuttle has been a real breath of fresh air after dealing with containers and such. I write a lot of Rust-based backends, and it was a pain that even for a simple service on fly.io I still had to manually write Dockerfiles... Shuttle made that a LOT easier. Love to see them moving forward!


Same. Used Shuttle for https://endler.dev/2022/zerocal/ lately and was super happy with the experience. I no longer have to worry about hosting and can focus on the product instead.


Interesting you mention it. I've actually been working on a fairly fully featured app in Flutter, and was shocked at how well it performed on iOS. Coming from the React Native world, it's miles ahead in terms of perf, at least in our case.

Shameless plug, the app is https://droppod.gg, an app to find time to play video games with your friends!


I like the idea so I just installed it. In the sign up screen I can swipe back and get to a second sign up screen, then swipe back again and I go to a loading screen forever.

I don't know if that's flutter or you, but it doesn't fill me with confidence.


I tried it - there's some jank here and there, but the general experience is smooth. Image loading while scrolling through your game feed seems to cause skipped frames, but that's not necessarily the toolkit's fault.


This is a great idea, I can't wait to see how it turns out. Really a tool that the internet has been lacking as a whole.


creator here, feel free to ask questions/give constructive criticism!


"The AngularJS framework for HTML5 is relatively new, and a valid concern is whether it will be replaced by a new framework in the future." I'd say AngularJS is pretty battle tested at this point... Not to mention that it's now considered old for the industry and is not necessarily performant.

I get that this is put out by Qt and is pretty obviously biased, but it seems like they should have given HTML5 a stronger chance...


Angular 2 was released in 2016. QML was released in 2009.

So what he said was true... but misleading. A fairer comparison would be to compare QML to HTML5's release date of 2014, or HTML 2.0's release date of 1995.

Sure Angular may get replaced, but in 20 years HTML is going to still be around.


Interesting take! It might also be worth mentioning the difference between the all-in-one machine and updates. Chances are an update to the all-in-one won't break functionality, whereas if each piece is made by a separate manufacturer, updates to some pieces might disable or invalidate others. I've often seen this point made in the React vs Angular debate.


I don't have a cardboard to properly see if the image really is "3D", but it seems like it's just a photosphere and therefore doesn't really have depth information. Is that right?


Reminds me of Quicktime VR. Does anyone remember Quicktime VR? I mean...we had this tech... in the early 90s.


Now I have the nightmares again. (I got to implement spherical projections in realtime on a Pentium 90. It's... an interesting problem. Yeah. That's what I'll call it)

But yes, we had it. We had VR helmets, too. Consumer-grade. Remember the VFX-1? (https://en.wikipedia.org/wiki/VFX1_Headgear)

But this time, VR is going to be different. Honest.


That is correct.

Sometime in the future, consumers will have software that can turn 2D images into simulated 3D [0]. It's still in dev mode right now, though.

[0] https://www.youtube.com/watch?v=Oie1ZXWceqM


Yeah, and there are some other in-dev camera rigs that use multiple cameras plus software interpolation to simulate 3d as well.

The thing that's most intriguing to me are the experiments using multiple depth cameras set up around a space and using software to build a live, 3d model and overlay the video data as a texture on top of the models. It's all very rudimentary and low-res at the moment but it's the sort of thing that can eventually become 3d/VR telepresence and that just strikes me as awesome.


That video is extremely impressive! Colour me excited.


I'm very new to this sort of thing, but it seems like "stereoscopic video" isn't nearly as common as I expected it to be. 2D panoramas seem to be very much the norm. I'm not sure if this is down to production difficulty, delivery difficulty, or stereoscopy just not being impressive enough.

One thing is that it seems to me that anything that's recorded by a camera (rather than rendered in realtime) is going to be "wrong" once you tilt your head, as your eyes are now on top of each other rather than next to each other.
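The head-tilt problem above can be put in rough geometric terms: only the horizontal component of the viewer's interocular baseline lines up with the left/right disparity baked into the recording. A minimal sketch of that falloff (assuming a typical ~63 mm interpupillary distance, and treating head roll as a simple rotation of the eye axis):

```python
import math

def effective_baseline(ipd_mm: float, roll_deg: float) -> float:
    """Horizontal component of the interocular baseline after a head
    roll, i.e. how much of the pre-recorded left/right disparity still
    corresponds to the viewer's actual eye separation."""
    return ipd_mm * math.cos(math.radians(roll_deg))

# Upright: the full baseline matches the recording.
print(effective_baseline(63.0, 0))             # 63.0
# Head rolled 90°: eyes are stacked vertically, so the recorded
# horizontal disparity no longer matches any eye separation.
print(round(effective_baseline(63.0, 90), 6))  # 0.0
```

At 90° of roll the recorded parallax is entirely "sideways" relative to the eyes, which is exactly the "wrong once you tilt your head" effect described above; a real-time render can re-derive disparity per frame, but a fixed camera recording cannot.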


Yup. The Ricoh Theta S captures spherical panoramas. The VR terminology is just overselling it. Neat idea though. It would be cool if there were a sonic component.


I think you're on to something. Say we record monaural audio with directional mics on/beside each cam, then encode and compress each stream, allowing for realtime stereo mixing during playback determined by view angle. Add a compass, accelerometer, and gyro to track orientation. Couldn't we then achieve the desired effect, and even simulate spatial audio effects, 6DoF movement in the scene, blended UI sounds, and 3D sound in the environment? AR, anyone? With a small peripheral you could emit a few chirps at different frequencies and measure them using the same mic rig to build a virtual map of the environment's acoustic characteristics, then use it to render sound effects for composite elements, generated UI, and nav feedback, similar to the way image-based lighting is used today to make artificially generated objects appear as if they were really present in the scene.
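The "realtime stereo mixing determined by view angle" part can be sketched simply: weight each directional mic stream by how closely it faces the current view direction. This is just one plausible weighting (a cosine falloff floored at zero, then normalized), not a spec of any particular rig; mic angles and the falloff curve are assumptions for illustration:

```python
import math

def mic_gains(view_deg: float, mic_angles_deg: list[float]) -> list[float]:
    """Per-mic mixing gains for a given view angle: each mic is weighted
    by the cosine of the angle between its facing and the view direction
    (clamped to zero), then the weights are normalized to sum to 1."""
    raw = [max(0.0, math.cos(math.radians(view_deg - a))) for a in mic_angles_deg]
    total = sum(raw) or 1.0  # avoid divide-by-zero if no mic faces the view
    return [g / total for g in raw]

# Four mics at the compass points; looking straight at the 0° mic.
print([round(g, 3) for g in mic_gains(0.0, [0, 90, 180, 270])])   # [1.0, 0.0, 0.0, 0.0]
# Looking halfway between the 0° and 90° mics: an even blend.
print([round(g, 3) for g in mic_gains(45.0, [0, 90, 180, 270])])  # [0.5, 0.5, 0.0, 0.0]
```

The same gain function can be reapplied every frame as the orientation sensors update the view angle, which is what makes the mix feel attached to the scene rather than to the listener's head.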

Sounds like a good open hardware/software project but I'm short on cameras and mics for something like that. Anyone see potential there?

Combine with laser rangers and filters for their wavelength on the cams, and you can sample 3d point cloud data too and render the environment as a 3D (4D) scene, use it for composite reference, or slap a small LiDAR scanner under the whole thing for precise measurement.

