Yes, you can relax logic gates into continuous versions, which makes the system differentiable. An AND gate can be constructed with the function x*y and NOT with 1-x (on inputs in the range [0,1]). From there you can construct a NAND gate, which is universal and can be used to construct all other gates. A sigmoid can be used to squash the inputs into [0,1] if necessary.
This paper lists out all 16 possible logic gates in Table 1 if you're interested in this sort of thing: https://arxiv.org/abs/2210.08277
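If it helps to see the construction concretely, here's a small self-contained sketch of those relaxations (Rust, with function names of my own invention; nothing here is taken from the paper): at the Boolean corners the functions reproduce the ordinary truth tables, and everywhere in between they're smooth, which is what lets gradients flow through them.

```rust
// Minimal sketch of continuous ("soft") logic gates on inputs in [0, 1].
// At the corners (inputs of exactly 0 or 1) these reproduce the Boolean
// truth tables; in between they are smooth, so gradients can flow.

fn soft_and(x: f64, y: f64) -> f64 {
    x * y
}

fn soft_not(x: f64) -> f64 {
    1.0 - x
}

// NAND = NOT(AND). NAND is universal, so every other gate can be built
// by composing this one function.
fn soft_nand(x: f64, y: f64) -> f64 {
    soft_not(soft_and(x, y))
}

// OR via De Morgan: OR(x, y) = NOT(AND(NOT x, NOT y)) = 1 - (1 - x)(1 - y).
fn soft_or(x: f64, y: f64) -> f64 {
    soft_not(soft_and(soft_not(x), soft_not(y)))
}

// Logistic squashing for raw inputs that fall outside [0, 1].
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

fn main() {
    for &(x, y) in &[(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.3, 0.8)] {
        println!(
            "x={x}, y={y}: AND={:.2} NAND={:.2} OR={:.2}",
            soft_and(x, y),
            soft_nand(x, y),
            soft_or(x, y)
        );
    }
    println!("sigmoid(2.0) = {:.3}", sigmoid(2.0));
}
```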
I have aphantasia and have been practicing meditation with the goal of improving the condition for a couple of years. I have seen some minor improvements - when I'm in a pretty relaxed state I can see some visuals, but I'm not able to control the stream of images.
I haven't been working on this quite as much recently, since the meditation seems to be connected with triggering ocular migraines with aura.
In my experience the main benefit of functional programming is function purity. I am completely fine with mutation inside of a function, since all of the mutation logic is self-contained in a single small block of text.
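To illustrate what I mean (a made-up example, not from any particular codebase): the function below mutates local state freely, but from the caller's point of view it's pure - same input, same output, no visible side effects.

```rust
// Illustrative example: a function that is pure from the outside
// (no side effects, output depends only on input) even though it
// mutates local state internally.
fn running_totals(values: &[i64]) -> Vec<i64> {
    let mut total = 0;                              // local mutation, invisible to callers
    let mut out = Vec::with_capacity(values.len()); // local accumulator
    for &v in values {
        total += v;
        out.push(total);
    }
    out
}

fn main() {
    let data = [3, 1, 4, 1, 5];
    // Calling it twice with the same input always yields the same result.
    assert_eq!(running_totals(&data), running_totals(&data));
    println!("{:?}", running_totals(&data)); // [3, 4, 8, 9, 14]
}
```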
I think everyone should take a shot at writing a non-trivial functional program to see the benefit. Once you understand what makes it great, you can apply what you've learned to the majority of OOP/impure languages.
I'm not so certain that non-desk jobs will be safe either. What makes the current LLMs great at programming is the vast amount of training data. There might be some other breakthrough for typical jobs - some combination of reinforcement learning, training on videos of people doing things, LLMs and old-fashioned AI.
I do networked game development on Windows and I've found the clumsy program to be very valuable to simulate adverse network conditions. You can set it up to simulate arbitrary network latency, packet loss and so forth.
Perhaps the reason modern programs use so much memory vs what I remember from the Windows XP era is precisely because we went to 64 bits. Imagine how many pointers are used in the average program. When we switched over to 64 bits, the memory used by all those pointers instantly doubled. It's clear that 32 bits wasn't enough, but maybe some intermediate number between 32 and 64 would have added sufficient capacity without wasting a ton of extra space.
> Imagine how many pointers are used in the average program. When we switched over to 64 bits, the memory used by all those pointers instantly doubled.
This is a very real issue (not just on the Windows platform, either) but well-coded software can recover much of that space by using arena allocation and storing indexes instead of general pointers. It would also be nice if we could easily restrict the system allocator to staying within some arbitrary fraction of the program's virtual address space - then we could simply go back to 4-byte general pointers (provided that all library code was updated in due course to support this too) and not even need to mess with arenas.
(We need this anyway to support programs that assume a 48-bit virtual address space on newer systems with 56-bit virtual addresses. Might as well deal with the 32-bit case too.)
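As a rough illustration of the index-instead-of-pointer idea (Rust, with everything made up for the example): the tree below stores child links as u32 indexes into a Vec-backed arena, so each link costs 4 bytes instead of the 8 bytes a general pointer takes on a 64-bit target.

```rust
// Rough illustration: storing 4-byte indexes into an arena instead of
// 8-byte pointers. NIL marks "no child" in place of a null pointer.
use std::mem::size_of;

const NIL: u32 = u32::MAX;

// A binary-tree node whose child links are indexes into the arena below.
struct Node {
    value: u64,
    left: u32,  // index of left child in Arena::nodes, or NIL
    right: u32, // index of right child in Arena::nodes, or NIL
}

// The arena is just a growable Vec; index N refers to nodes[N].
struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    fn new() -> Self {
        Arena { nodes: Vec::new() }
    }

    // Push a node and hand back its index; callers store the index
    // wherever they would otherwise have stored a pointer.
    fn alloc(&mut self, value: u64) -> u32 {
        let idx = self.nodes.len() as u32;
        self.nodes.push(Node { value, left: NIL, right: NIL });
        idx
    }
}

fn main() {
    let mut arena = Arena::new();
    let root = arena.alloc(1);
    let child = arena.alloc(2);
    arena.nodes[root as usize].left = child;

    let left = arena.nodes[root as usize].left;
    println!("root -> left child value = {}", arena.nodes[left as usize].value);

    // The per-link saving on a 64-bit target:
    println!("index size:   {} bytes", size_of::<u32>());         // 4
    println!("pointer size: {} bytes", size_of::<*const Node>()); // 8
}
```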
I agree that's wasteful, but if software were only 2x bigger, we'd be in really good shape now. Unfortunately there are still one or two more orders of magnitude to account for somehow.
SGI used three ABIs for their 64-bit computers: O32, N32, and N64. N32 was 64-bit except for pointers, which were still 32 bits for exactly that reason - to avoid doubling the memory needed for storing pointers.
I think the most likely path forward for commercialization/widespread use is to use AI as a post-processing filter for low poly games. Imagine if you could take low quality/low poly assets, run them through a game engine to add some basic lighting, then pass the result through AI to get a photo-realistic image. This solves the most egregious cases of world inconsistency and still allows for creative human fine-tuning. The trick will be getting the post-processor to run at a reasonable frame rate.
Don’t we already have upscalers which are frequently used in games for this purpose? Maybe they could go further and get better but I’d expect a model specifically designed to improve the quality of an existing image to be better/more efficient at doing so than an image generation model retrofitted to this purpose.
As a videogame developer, I've always thought this take was just silly. I couldn't even imagine putting the time and effort into integrating someone else's assets into my game and keeping things balanced. The closest thing we will get to this is something like Fortnite or Roblox, which will limit the type of games and creative choices that can be made.
I wrote a blog post [1] a couple of years ago about this solution - it turns out it is possible to use timestamp authority servers in combination with hashing functions to create a verified edit history. Like the other comment said, it merely starts an arms race where the AI side is likely to win, which is why I haven't pursued this further.
For something like digital art creation, verifying the edit history is much more fruitful, since the diffusion process is nothing like how humans create art.
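For the curious, here's a rough sketch of the chaining idea (Rust using the sha2 crate; the timestamp-authority call is stubbed out, since the RFC 3161 round trip is the part a real implementation would have to fill in): each edit snapshot is hashed together with the previous digest, and each digest is what you'd submit to the timestamp authority, so the sequence of signed timestamps shows the edits existed, in order, at particular points in time.

```rust
// Rough sketch of a hash-chained edit history. Each snapshot of the
// document is hashed together with the previous link's digest, and that
// digest is what gets sent to a timestamp authority (stubbed here).
// A verifier can later replay the snapshots, recompute the chain, and
// check each digest against its signed timestamp.
//
// Cargo.toml: sha2 = "0.10"
use sha2::{Digest, Sha256};

struct Link {
    digest: Vec<u8>,   // hash of (previous digest || snapshot bytes)
    snapshot: Vec<u8>, // the document state at this edit
}

fn hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

// Placeholder for the real timestamp-authority round trip; here it just
// prints what would be submitted.
fn submit_to_timestamp_authority(digest: &[u8]) {
    println!("would timestamp digest {}", hex(digest));
}

fn append_edit(chain: &mut Vec<Link>, snapshot: &[u8]) {
    let mut hasher = Sha256::new();
    if let Some(prev) = chain.last() {
        hasher.update(&prev.digest); // chain each link to the one before it
    }
    hasher.update(snapshot);
    let digest = hasher.finalize().to_vec();

    submit_to_timestamp_authority(&digest);
    chain.push(Link { digest, snapshot: snapshot.to_vec() });
}

fn main() {
    let mut chain = Vec::new();
    append_edit(&mut chain, b"blank canvas");
    append_edit(&mut chain, b"blank canvas + rough sketch layer");
    append_edit(&mut chain, b"rough sketch + color pass");

    for (i, link) in chain.iter().enumerate() {
        println!("edit {i}: {} ({} bytes)", hex(&link.digest), link.snapshot.len());
    }
}
```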