
I think it boils down to the alternate view of rotations as two successive reflections.

You can then use a Householder matrix [0] to avoid trigonometry.

These geometric math tricks are sometimes useful for efficient computations.

For example, you can improve the Vector-Quantization Variational AutoEncoder (VQ-VAE) using a rotation trick, and compute it efficiently without trigonometry by using a Householder matrix to find the optimal rotation that maps one vector onto the other. See section 4.2 of [1].
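
As a concrete illustration, here is a minimal NumPy sketch (my own, not code from the paper) of the two-reflections construction: a proper rotation mapping a unit vector a onto a unit vector b, built from two Householder reflections and no trigonometry (it assumes a is not opposite to b, since the bisector is then undefined):

    import numpy as np

    def rotation_from_to(a, b):
        # Proper rotation R with R @ a == b, built as two Householder reflections.
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        u = a + b                                # bisector direction (assumes a != -b)
        u = u / np.linalg.norm(u)
        n = a.shape[0]
        H_u = np.eye(n) - 2.0 * np.outer(u, u)   # reflection sending a to -b
        H_b = np.eye(n) - 2.0 * np.outer(b, b)   # reflection sending -b to b
        return H_b @ H_u                         # two reflections = one rotation (det = +1)

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
    R = rotation_from_to(a, b)
    assert np.allclose(R @ a, b)
    assert np.isclose(np.linalg.det(R), 1.0)

In practice you would apply the two rank-1 reflections directly to vectors instead of materializing the n x n matrices.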

Why someone would avoid trigonometry instead of embracing it is another question. Trigonometry [2] is the study of triangles, and it connects naturally to the notion of rotation.

Rotations [3] are a very rich concept related to exponentiation (Multiplication is repeated addition, Exponentiation is repeated multiplication).

As doing things repeatedly tends to diverge, rotations are self-stabilizing, which makes them good candidates as building blocks for the universe [4].

Because those operations are non-commutative, tremendous complexity emerges just from the order in which the simple operations are repeated, yet it's stable by construction [5][6].

[0]https://en.wikipedia.org/wiki/Householder_transformation

[1]https://arxiv.org/abs/2410.06424

[2]https://en.wikipedia.org/wiki/Trigonometry

[3]https://en.wikipedia.org/wiki/Matrix_exponential

[4]https://en.wikipedia.org/wiki/Exponential_map_(Lie_theory)

[5]https://en.wikipedia.org/wiki/Geometric_algebra

[6]https://en.wikipedia.org/wiki/Clifford_algebra


citing the Wikipedia page for trigonometry makes this feel a lot like you just told an LLM the expected comment format and told it to write insightful comments

I had to check the precise definition of trigonometry while writing my comment, found it interesting, so I added a reference.

As with many subjects we learn early in school, it's often interesting to revisit them as adults to perceive additional layers of depth by casting a new look at them.

We tend to associate trigonometry with the circle, but fundamentally it's the study of tri-angles.

What is interesting is that the whole theory is "relative". I would reference the Wikipedia page for angle but it may make me look like an LLM. The triangle doesn't have position and orientation baked in; what matters is the lengths of the sides and the angles between them.

The theory by definition becomes translation- and rotation-invariant, and from this symmetry emerges the concept of rotations.

What is also interesting about the concept of angle is that it is a scalar, whereas the original objects, like lines, live in a higher dimension. To avoid losing information you therefore need several of these scalars to fully describe the scene.

But there is a degree of redundancy, because the angles of a triangle sum to pi. And from this degree of freedom result multiple paths to do the computations. But with this liberty comes the risk of not making progress and going in circles. It's also harder to see whether two points coming from different paths are the same or not, and that's why you have "identities".

Often, for doing the computation, it's useful to break the symmetry by picking a center, even though all points could be centers (but you pick one, and that has made all the difference).

A similar situation arises in Elliptic Curve Cryptography, where all points could play the same role, but you pick one as your generator. The same goes for the concept of gauge invariance in physics.


Interesting work. It's a nice introduction to the use of holography.

We can etch the inside of a photosensitive material by focusing a laser at a specific point and moving that point of focus. That's what is done in [3][4].

But here instead of doing this sequentially they print all points simultaneously using holography.

Here they use holography to volumetrically illuminate some photosensitive resin, in a similar fashion to what used to be done for volumetric displays (in [2] you can find a figure using an agarose gel tank as a display for a volumetric hologram).

They just put a new "spin" on it, by spinning a mirror around the resin tank to project from all directions and reach the back of objects. The technique, called "Digital Incoherent Synthesis of Holographic Light Fields", paints the resin bath with a 3D paintbrush, sequentially from multiple directions. It's called incoherent because each angle is treated independently from the others, and their light doesn't need to interfere (in the wave sense).

The natural extension of using a conical mirror instead of a spinning mirror would also need to account for the interference of light from nearby angles, making the inverse problem harder to compute, and would need higher resolution, but it would avoid moving parts.

Here the holography is a fancy way of focusing the light where we want the resin to cure. It needs a resin whose optical properties don't change once cured, otherwise the light behind the cured resin won't focus where it should, even though curing should happen everywhere simultaneously. Unfocused light is still absorbed by the resin and contributes to the curing, but photosensitive resins are nonlinear, meaning nothing happens until you cross a threshold.

To do this holography, they use a Digital Micromirror Device (DMD): a chip with an array of micro-mirror pixels [0].

Although these mirrors are on-off only, a technique from the 1970s allows you to control the shape of the light field in amplitude, phase and polarization [1].
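
To give a rough idea of how such a binary encoding works, here is a hedged NumPy sketch of a Lee-type binary hologram (my own toy illustration; the exact normalization is an assumption, see [1] for the real thing): the fringe position encodes the target phase, the fringe width encodes the target amplitude, and the downstream filter isolates the diffraction order carrying the desired complex field.

    import numpy as np

    def lee_binary_hologram(target_amp, target_phase, carrier_period=8):
        # Binary (on/off) DMD pattern encoding a complex field in one
        # diffraction order: phase shifts the fringes, amplitude widens them.
        ny, nx = target_amp.shape
        x = np.arange(nx)[None, :]
        amp = np.clip(target_amp / target_amp.max(), 0.0, 1.0)
        w = np.arcsin(amp) / np.pi                   # fringe half-width from amplitude
        carrier = 2.0 * np.pi * x / carrier_period   # tilted carrier wave
        return (np.cos(carrier + target_phase) > np.cos(np.pi * w)).astype(np.uint8)

    # Example: a Gaussian spot with a quadratic ("lens") phase
    yy, xx = np.mgrid[-128:128, -128:128]
    amp = np.exp(-(xx**2 + yy**2) / (2.0 * 60.0**2))
    phase = 2.0 * np.pi * (xx**2 + yy**2) / 5000.0
    pattern = lee_binary_hologram(amp, phase)        # 0/1 array displayed on the DMD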

A DMD is the chip inside the widely available Digital Light Projector (DLP) technology. DMDs are used as displays, and that's how you control the mirrors: you use them as a screen over the HDMI interface to display the right pattern. Then you just have to bounce a laser off it to get a "structured light" beam, from which a few lenses and a pinhole (in a 4f arrangement) extract the "mode" you are interested in, whose light will refocus itself at the right 3D points.

The limit of this technique comes from the resolution of the DMD (as explained in [2]): the smaller the pixel size, the better. But here this limit is mitigated by integrating over time and angle, because what matters is the resin's exposure time.

[0] "Structuring Light with Digital Micromirror Devices (Photonics West 2021)" https://www.youtube.com/watch?v=vurtdU0FRm4

[1] Binary amplitude holograms for shaping complex light fields with digital micromirror devices https://www.institut-langevin.espci.fr/biblio/2025/1/18/2280...

[2] Holographic video display using digital micromirrors (Invited Paper) [the raven has the key]

[3] "how to put 3D images into glass or crystal objects 3d crystal Inside carving" https://www.youtube.com/watch?v=dkK6c45U6EU

[4] "What is Sub-surface Laser Engraving or a 'Bubblegram'? Technology Explained" https://www.youtube.com/watch?v=sOrby692Uag


It's even worse than that.

The positive outcomes are structurally being closed off. The race to the bottom means that you can't even profit from it.

Even if you release something that has plenty of positive aspects, it can be, and is, immediately corrupted and turned against you.

At the same time, you have created desperate people/companies and given them huge capabilities for very low cost, along with the necessity to stir things up.

So for every good door that someone opens, it pushes ten other companies/people to either open random, potentially bad doors or die.

Regulating is also out of the question because otherwise either people who don't respect regulations get ahead or the regulators win and we are under their control.

If you still see some positive doors, I don't think sharing them would lead to good outcomes. But at the same time the bad doors are being shared and therefore enjoy network effects. There is some silent threshold, which probably has already been crossed, that drastically changes the sign of the expected return of the technology.


I like the game up until 7 or 8 notes, but it keeps adding notes.

I couldn't find a setting to freeze the difficulty where it's comfortable and where the melody can still be construed to make sense.

Adding more notes breaks the flow and turns pitch training into a memory game for Rain Man, even more so when you make a mistake and must partially redo the melody.


Hmm - I was hoping that the Practice Mode would cover this, but I think there's probably some room to add an option where you can freeze the difficulty level - I'll see if I can add this later in the evening.

The "construed melody" is a harder problem. I've been playing around with the idea of using a markov model or even borrowing what CPU Bach did to try to create more coherent melodies over time.

Thanks for the feedback!


Transformers are nice. You can train a very minimal network that can output reasonable sequences very easily. They won't be high quality, or too pretty, but they will "make sense" way more than randomness and (usually) change keys in a coherent way.


Hey viraptor! I haven't even thought about using transformers but that sounds like a great idea. The current generator is just a standard random walk across the major/minor intervals and could definitely use some TLC!
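
For anyone curious, a random walk over scale degrees fits in a few lines; here is a hedged Python sketch of the idea (my own illustration, not the site's actual generator):

    import random

    MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]              # whole/half steps of a major scale

    def random_walk_melody(root_midi=60, length=8, max_step=2):
        scale = [root_midi]
        for step in MAJOR_STEPS * 2:                 # two octaves of scale degrees
            scale.append(scale[-1] + step)
        idx = 0                                      # start on the tonic
        melody = [scale[idx]]
        for _ in range(length - 1):
            idx += random.randint(-max_step, max_step)
            idx = max(0, min(idx, len(scale) - 1))   # clamp to the scale
            melody.append(scale[idx])
        return melody

    print(random_walk_melody())                      # e.g. [60, 64, 62, 62, 65, ...]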


Have you checked his physical keyboard?

My laptop is getting old, and some keys need to be pressed harder and more accurately for them to register properly. It also breaks the flow and muscle memory for things like passwords. It also leads to letter inversions, because the stress needs to be put on the letters that require a firmer press rather than on the first letter of the word. It's driving me crazy, but unfortunately computers are too expensive for now (and it's probably only getting worse).



Lasers in space are fun! We[1] are actually doing this for real, but automated and inverted -- launching a satellite with a laser to beam data down to Earth. Like these searchlights, but from orbit!

[1] A bunch of students at https://satlab.agh.edu.pl


If it's not done properly, and at any point in the chain you put black blocks on a compressed image (and PDFs do compress internal images), you are leaking some bits of information in the shadow cast by the compression algorithm (self-plug: https://github.com/unrealwill/jpguncrop ).
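
Here is a hedged toy demonstration of the effect with Pillow (my own sketch, not the jpguncrop code): JPEG quantizes 8x8 blocks as a whole, so a black box that covers only part of a block leaves artifacts in the uncovered pixels of that block that still depend on what was hidden.

    import io
    import numpy as np
    from PIL import Image

    def redacted_after_jpeg(secret_value):
        img = np.full((8, 8), 128, dtype=np.uint8)
        img[:, :4] = secret_value          # the "secret" lives in the left half of one block
        buf = io.BytesIO()
        Image.fromarray(img).save(buf, format="JPEG", quality=75)
        decoded = np.array(Image.open(buf))
        decoded[:, :4] = 0                 # redact the secret half with black pixels
        return decoded

    a = redacted_after_jpeg(secret_value=40)
    b = redacted_after_jpeg(secret_value=220)
    # The *unredacted* right halves still differ: compression smeared the secret into them.
    print(np.abs(a[:, 4:].astype(int) - b[:, 4:].astype(int)).max())  # > 0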


And that's just in the non-adversarial simple case.

If you don't know the provenance of the images you are putting black boxes on (for example because a rogue employee intentionally wants to leak them, or because the image sensor of your target has been compromised by another team to leak some info), your redaction can be rendered ineffective, as some images can be made uncroppable by construction.

(Self-plug : https://github.com/unrealwill/uncroppable )

And also be aware that compression is hiding everywhere : https://en.wikipedia.org/wiki/Compressed_sensing


>Let's crop it anyway

That is not cropping.

https://en.wikipedia.org/wiki/Cropping_(image)

>Cropping is the removal of unwanted _outer_ areas from a photographic or illustrated image.


Please forgive my outside-the-box use of the word.

I used it at the time as a reference to the "PNG aCropalypse" ( https://news.ycombinator.com/item?id=35208721 where I originally shared it in a comment).

The algorithm also works if you remove the outer areas of the photo.


Right, using stenography to encode some parity bits into an image so that lost information can be reconstructed seems like an obvious approach - all sorts of approaches you could use, akin to FEC. Haven't looked at your site yet, will be interested to see what you've built :)

Edit: I checked it out, nice, I like the lower res stenography approach, can work very nicely with good upscaling filters - gave it a star :)


steganography — stenography is courtroom transcription


People protect their secrets from stenographers with steganography.


Somewhat related, I once sent a FOI request to a government agency that decided the most secure way to redact documents was to print them, use a permanent marker, and then scan them. Unfortunately they used dye based markers over laser print, so simply throwing the document into Photoshop and turning up the contrast made it readable.


I remember noticing that a teacher in high school had used white-out to hide the marks for the correct multiple-choice answers on final exam practice questions before copying them. Then she literally cut-and-pasted questions from the practice questions for the final. I did a mediocre job on the essay, but got the highest score in the class on the multiple-choice questions, because I could see little black dots where the white-out was used.


I thought I understood what was going on, but then I came to the image showing the diff, and I don't understand at all how that diff can unredact anything.


It's not that you can unredact them from scratch (you could never get the blue circle back from this software). It's that you can tell which of the redacted images corresponds to which of the original images. Investigative teams often find themselves in a situation where they have all four images but need to work out which redacted files are which of the originals. Take, for example, a case where headed paper is otherwise entirely redacted.

So with this technique, you can definitively say "Redacted-file-A is definitely a redacted version of Origin-file-A". Super useful for identifying forgeries in a stack of otherwise legitimate files.

Also good for saying "the date on origin-file-B is 1993, and the file you've presented as evidence is provably origin-file-B, so you definitely knew of [whatever event] in 1993".


Ok thanks. That sounds reasonable.

>... and therefore you can unredact them

from that readme is just not true then I guess?


I mean, even the "crop" isn't used at all correctly, is it?

I think the word should be "redact".


I'm trying to understand this cause it sounds fascinating but I don't get it. I don't have an advanced understanding of compression so that might be part of why.

If you compare an image to another image, you could guess by compression what is under the blocked part; that makes some sense to me conceptually. What I don't get is, for the PDF specifically, why does compressing the black boxes I put on carry any risk? Is it compressing the internal image, which is just the black box part? Or are you saying the whole screenshot is an internal image?


The problem of computers is the problem of time: how to obtain a consistent causal chain!

The classical naive way of obtaining a consistent causal chain, is to put the links one after the other following the order defined by the simulation time.

The funnier question is: can it be done another way? With the advance of generative AI and things like diffusion models, it's proven to be theoretically possible (universal distribution approximation). It's not so much simulating a timeline as sampling the whole timeline while enforcing its physics-law self-consistency from both directions of the causal graph.

In toy models like the Game of Life, we can even have recursive simulation (https://news.ycombinator.com/item?id=33978978), unlike section 7.3 of this paper, where the computers of the lower simulations are started in ordered time.

In other toy models, you can use a diffusion model to learn and map the chaotic distribution of all possible three-body-problem trajectories.

Although sampling can be simulated, doing it efficiently necessitates exploring all possible universes simultaneously, like in QM (which we can do by exploring only a finite number of them, while bounding the neighboring-universe region according to the question we are trying to answer, using the Lipschitz continuity property).

Sampling allows you to bound the maximal computational usage and be sure to reach your end-time target, but at the risk of not being perfectly physically consistent. Simulating, on the other hand, presents the risk of the lower simulations siphoning the computational resources and preventing simulation time from reaching its end-time target, but what you do manage to compute is guaranteed consistent.

Sampled bottled universes are ideal for answering questions like how many years a universe must have before life can emerge, while simulated bottled universes are like a box of chocolates: you never know what you're going to get.

The question being: can you tell which bottle you are currently in, and which bottle would you rather get?


Causality also is not a universal thing. Some things just coexist and obey some laws.

Does the potential cause the current? No, they coexist.


I’m not sure Einstein would allow your concept of “simulation time”. Events are only partially ordered.


What they need is not so much memory as memory bandwidth.

For training, their models need a certain amount of memory to store the parameters, and this memory is touched for every example of every iteration. Big models have 10^12 (>1T) parameters, and with typical values of 10^3 examples per batch and 10^6 iterations, they need ~10^21 memory accesses per run. And they want to do multiple runs.

DDR5 RAM bandwidth is 100 GB/s = 10^11 B/s; graphics RAM (HBM) is 1 TB/s = 10^12 B/s. By buying the wafers, they get to choose which types of memory they get.

10^21 / 10^12 = 10^9 s ≈ 30 years of memory accesses just to update the model weights; you also need to add a factor of 10^1-10^3 to account for the memory accesses needed for the model computation.

But the good news is that it parallelizes extremely well. If you replicate your 1T parameters 10^3 times, your run time is brought down to 10^6 s = 12 days. But you then need 10^3 * 10^12 = 10^15 bytes of RAM per run for the weight updates and 10^18 for the computation (your 120 billion gigabytes is 10^20, so not so far off).
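
A hedged back-of-the-envelope sketch of the arithmetic above (same rough assumptions as in the text, not measurements):

    params = 1e12            # ~1T parameters
    batch = 1e3              # examples per batch
    iters = 1e6              # training iterations
    hbm_bandwidth = 1e12     # ~1 TB/s
    seconds_per_year = 3.15e7

    accesses = params * batch * iters            # weights touched per example per iteration
    serial_seconds = accesses / hbm_bandwidth    # through a single memory system
    print(serial_seconds / seconds_per_year)     # ~30 years
    parallelism = 1e3
    print(serial_seconds / parallelism / 86400)  # ~12 days
    print(parallelism * params)                  # ~1e15 bytes of RAM for the weight copies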

Are all these memory accesses technically required? No, if you use other algorithms, but more compute and memory is better if money is not a problem.

Is it strategically good to deprive your competitors of access to memory? In a very short-sighted way, yes.

It's a textbook cornering of the computing market to prevent the emergence of local models, because customers won't be able to buy even the minimal RAM necessary to run the models locally, even just for inference (not training). Basically a war on people, where little Timmy won't be able to get a RAM stick to play computer games at Xmas.


Thanks - but this seems like fairly extreme speculation.

> if money is not a problem.

Money is a problem, even for them.


In the video, in the continuous version, the game never ends and highlights the "loser" strategy.

When you are behind, the optimal play is to gamble, which will most likely leave you even worse off. From the naive winning side, it seems the loser is just following a stupid strategy of not playing the optimal dichotomy, and that's why they are losing. But in fact they are a "player" doing not only their best, but the best that can be done.

The infinite sum of ever-smaller probabilities, like in Zeno's paradox, converges towards a finite value. The inevitable result is that, a large fraction of the time, you are playing catch-up and will never escape.

You are losing while playing optimally, slowly realising the probability that you are a loser, as evidenced by the score, which will most likely go down even more next round. Most likely the entire future is an endless sequence of more and more desperate-looking losing bets, just hoping to strike it big once, which will most likely never happen.

In economics such things are called "traps"; for example, the poverty trap exhibits similar mechanics, where even though you display incredible ingenuity by playing the optimal strategy, most of the time you will never escape, and you will need to take even more desperate measures in the future. That's separating the wheat from the chaff, from the chaff's perspective, or how you make good villains: because, like Bane in Batman, there are some times (the probability is slim but finite) where the gamble pays off and you escape the hellhole you were born in and become a legend.

If you don't play this optimal strategy, you will lose more slowly but even more surely. The optimal strategy is to bet just enough to go from your current situation to the winning side. It's also important not to overshoot: this is not always taking moonshots, but betting just enough to escape the hole, because once out, the probabilities play in your favor.
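
To make that concrete, here is a hedged Monte Carlo sketch (my own toy model, not the game from the video): a subfair coin-flip game where you start behind, comparing "bet just enough to reach the target" with timid one-unit bets.

    import random

    def play(bankroll, target, bet_fn, p_win=0.45, max_rounds=10_000):
        for _ in range(max_rounds):
            if bankroll <= 0:
                return False
            if bankroll >= target:
                return True
            bet = bet_fn(bankroll, target)
            bankroll += bet if random.random() < p_win else -bet
        return False

    bold = lambda b, t: min(b, t - b)   # bet just enough to reach the target, no overshoot
    timid = lambda b, t: 1              # grind one unit at a time

    trials = 20_000
    print("bold :", sum(play(30, 100, bold) for _ in range(trials)) / trials)
    print("timid:", sum(play(30, 100, timid) for _ in range(trials)) / trials)

When the odds are against you, the bold strategy wins far more often than the timid one, even though both still lose most of the time.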

