ibrarmalik's comments | Hacker News

How is Plenoxels a direct predecessor of Gaussian splatting?


They both emerged from the pursuit of a more efficient alternative to NeRF, whose main cost is expensive ray marching with an MLP call for every sample. Before Gaussian splatting showed up, grid-based representations such as Plenoxels were all the rage. Gaussian splatting here refers to the paper "3D Gaussian Splatting for Real-Time Radiance Field Rendering".
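To make the cost difference concrete, here's a toy sketch (my own illustration, not code from either paper; the layer sizes and grid resolution are made up) of why per-sample MLP calls hurt and why a voxel grid like Plenoxels is cheaper:

    import torch

    n_rays, n_samples = 4096, 128
    pts = torch.rand(n_rays, n_samples, 3)   # sample points along each ray

    # NeRF-style: every sample goes through an MLP,
    # so ~0.5M network evaluations per frame here.
    mlp = torch.nn.Sequential(torch.nn.Linear(3, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, 4))        # -> (r, g, b, sigma)
    nerf_out = mlp(pts.reshape(-1, 3))

    # Plenoxels-style: values live in a voxel grid; each sample is just a
    # trilinear interpolation, no network at all.
    grid = torch.rand(1, 4, 64, 64, 64)                        # toy 64^3 grid
    coords = pts.reshape(1, n_rays, n_samples, 1, 3) * 2 - 1   # map to [-1, 1]
    grid_out = torch.nn.functional.grid_sample(grid, coords, align_corners=True)

(The actual Plenoxels paper stores densities and spherical-harmonic coefficients on a sparse grid, but the "lookup instead of MLP" trade-off is the point.)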


By output, do you mean the extracted surface geometry? Or are you directly rendering NeRFs in VR?


Given the scale, it wouldn't be wise to render them directly. There's also the issue of recording in real life without the scene changing while you do it.

I should've clarified, but yes, I was talking about the extracted surface geometry.


tmux works out of the box for me.

I tried Zellij and couldn't get the Alt key to work on macOS. And then when ssh'ing into a server I couldn't see some of the icons because it required a special patched font.


There's an entry in the FAQ for the Alt key on macOS: https://zellij.dev/documentation/faq#i-am-a-macos-user-how-c... It fixed it for me in Alacritty.


I’m assuming this is taking that into account. Otherwise why would it compute a route?


You're asking under the right paper for this. Instead of one big model, they train several smaller ones for regions of the scene, which keeps rendering fast for large scenes (see the toy sketch after the links below).

This is similar to Block-NeRF [0]; their project page has some videos showing what you're asking about.

As for an easy way of doing this, there's nothing out-of-the-box. You can keep an eye on nerfstudio [1], and if you feel brave you could implement this paper and make a PR!

[0] https://waymo.com/intl/es/research/block-nerf/

[1] https://github.com/nerfstudio-project/nerfstudio
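If you want to play with the idea before anything lands in a library, here's a toy sketch of the region routing (my own illustration, not code from the paper; all names are made up):

    import numpy as np

    class TiledScene:
        """Route each query point to the small sub-model that owns its tile."""
        def __init__(self, models, tile_size):
            self.models = models          # dict: (i, j) tile index -> sub-model
            self.tile_size = tile_size

        def query(self, xyz):
            i = int(xyz[0] // self.tile_size)
            j = int(xyz[1] // self.tile_size)
            model = self.models.get((i, j))
            if model is None:
                return np.zeros(4)        # nothing trained here: treat as empty space
            return model(xyz)             # only one small model is ever evaluated

    # usage: two dummy "sub-models" covering neighbouring 100 m tiles
    scene = TiledScene({(0, 0): lambda p: np.ones(4),
                        (1, 0): lambda p: np.full(4, 0.5)}, tile_size=100.0)
    print(scene.query(np.array([150.0, 20.0, 5.0])))   # handled by tile (1, 0)

The win is that rendering cost stays roughly constant as the scene grows, since each ray only touches the few sub-models whose regions it passes through.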


Oooh fun. I'm glad it seems possible nowadays. I might take a swing at putting together an out of the box tool at some point if nobody beats me to it first.



Be careful with this one! Luma's offering requires that the camera follow the recorded video path. Our method lets the camera go wherever you desire!


The new models and data would stay at OpenAI. You can have thousands of researchers and compute, but if you don’t have “it”, you are behind (ask Google).

At Microsoft he still has access to the models, and that's all he needs to execute his ideas.



Same. I like it better than the VS Code Vim emulation, which is more "strict" and switches to visual mode whenever you select anything with the cursor, which I personally think is the worst part of Vim.


Where does the model come from? It's hard to trust the license when we have no idea what data it was trained on.

