Firefox should be the browser that respects your privacy (the only one...). Integrating AI undermines the effort to make it the privacy-oriented browser.
For now the AI is forced on users and ridiculously complicated to disable (with new options popping up in about:config with each new version).
The promised "disable all AI features" option is still just a promise.
Years ago our company consolidated on Firefox because we could rely on it not to send our information to remote servers. At that time other browsers made it hard to disable telemetry. Firefox was then the only browser that could forward Kerberos tickets to remote servers, for highly secure two-factor authentication and single sign-on.
I'm personally sad that we now have to consider banning Firefox for company use, because it's hard to verify that we've disabled every AI "feature" that might funnel our data to remote servers.
Seems extremely dangerous to be doing those kinds of things with software from someone politically hostile. Perhaps the EU should be weaning itself off that too?
> And how do we fight terrorists, CSAM and political opponents without Palantir?
By doing police legwork and prevention work (e.g. offer help to pedophiles before they offend, don't go and wreck MENA countries for funsies, but invest in helping the civilian populations).
> Phillip Torrone had warned [...] Arduino’s users were now “explicitly forbidden from reverse engineering or even attempting to understand how the platform works unless Arduino gives permission.”

So this citation is basically fake news and FUD. The *now* part is false, and it hides the fact that the “platform” is only the SaaS.
This can fool someone from one location and only in one direction (if you are near Somalia and expect a 10 ms latency, a VPN can't reduce latency to simulate being in Somalia).
So it has to be dynamic to fool multiple locations and stay plausible.
But anyway, *you can't fake the last-hop latency* (unless you control it, and you can't control all of it); it's basically impossible to fool that.
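To make the physical bound concrete, here's a rough sketch (the ~2/3-c signal speed in fiber and the haversine distance are standard figures; the coordinates and the check itself are simplified for illustration):

from math import radians, sin, cos, asin, sqrt

C_FIBER_KM_S = 200_000  # signal speed in fiber, roughly 2/3 of c

def min_rtt_ms(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance, then the hard lower bound on
    # the round-trip time: no VPN trickery can go below this.
    p1, l1, p2, l2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((p2 - p1) / 2) ** 2 + cos(p1) * cos(p2) * sin((l2 - l1) / 2) ** 2
    dist_km = 2 * 6371 * asin(sqrt(h))
    return 2 * dist_km / C_FIBER_KM_S * 1000

# A peer in Paris claiming to be in Mogadishu can never show ~10 ms:
# the physical floor alone is about 66 ms.
print(round(min_rtt_ms(48.86, 2.35, 2.05, 45.32)))  # -> 66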
Yeah... I came here to talk about that. It should have been:
for i in range(0, 2**8, 2):
    print("    if (number == " + str(i) + ")")
    print("        printf(\"even\\n\");")
    print("    if (number == " + str(i + 1) + ")")
    print("        printf(\"odd\\n\");")
or
for i in range(0, 2**8, 2):
    print(f"""    if (number == {i})
        puts("even");
    if (number == {i + 1})
        puts("odd");""")
I embedded a chess engine in an SVG image of a chess board (https://github.com/jnykopp/svg-embedded-chess) so that the engine moved the pieces automatically and played against itself, just by viewing the SVG.
This was done for a friend of mine who made an art installation that projected something like a 50x20 grid (can’t remember exactly) of these images on a wall, for perpetual chess madness.
The number of chess SVGs a laptop’s browser was able to run simultaneously did feel surprisingly low, but luckily it was enough for that particular piece of art.
Sadly, it seems there is not. But the web page the artist used for the installation is still up: https://heikkihumberg.com/chess/
He said he used iPads as renderers. And even then the grid may have looked different back in the day than that page does now, as the font might be different. The SVG just uses system fonts, and the chess pieces are just Unicode characters.
Is there a way to control the speed? When I load a single SVG into the browser, it runs through the whole game in a flash. (Edge shows the animation; Chrome and Firefox show a static image for me.)
You can increase COMP_MOVE_TIMEOUT (which is now 1 millisecond) to, say, 100 milliseconds.
RESET_TIMEOUT defines how long the game is paused after it finishes, so the viewer can see the result, and NEW_GAME_START_TIMEOUT defines how long to wait before making the first move when a new game starts.
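If you'd rather script the change than edit by hand, something like this would do; the file name and the exact way the constant is defined are assumptions here, so check the repo source:

src = open("chess.svg").read()  # hypothetical file name
# Bump the per-move delay from 1 ms to 100 ms; adjust the pattern if
# the repo declares the constant differently.
open("chess.svg", "w").write(src.replace("COMP_MOVE_TIMEOUT = 1", "COMP_MOVE_TIMEOUT = 100"))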
The static image may be because of some browser security mechanism; served raw from GitHub, the SVG is not animated for me either in Firefox, but when I download the SVG and view it from the local drive in Firefox, it works. (It did work when served from GitHub at some point in history, though.)
Is embedding intelligent logic inside SVGs for animation a common thing? It feels very novel to me. Kudos for the idea and execution!
I am wondering if it is possible to push it even further and bring in more and more creative logic -- say, to create unique patterns / designs that render differently each time. Say, a swirling-ripples animation that keeps changing over time but never feels like it is "pre-recorded".
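For example, even a sketch this small (standalone, nothing to do with the chess repo) gives a never-repeating animation: a few lines of Python writing out an SVG whose embedded script computes the motion on every tick, with a random phase per load, so no two viewings look alike.

svg = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <circle id="dot" cx="100" cy="100" r="6"/>
  <script><![CDATA[
    var dot = document.getElementById("dot");
    var t = Math.random() * 1000;  // different starting phase each load
    setInterval(function () {
      t += 0.05;
      // Lissajous-style swirl, computed live rather than pre-recorded.
      dot.setAttribute("cx", 100 + 80 * Math.cos(t));
      dot.setAttribute("cy", 100 + 80 * Math.sin(1.7 * t));
    }, 30);
  ]]></script>
</svg>"""
with open("swirl.svg", "w") as f:
    f.write(svg)

Same caveat as with the chess board, though: the script only runs when the SVG is opened as a document, not when it's referenced via an img tag.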
Also, can animated SVGs be embedded in PowerPoint and the like -- so we get crisp vector animated design elements in a compact portable format?
I do worry that this can also open up some possible attacks -- a malicious URL in a dynamically generated QR code, for example.
Yeah; I've built a map viewer in SVG+JS for my small browser game, and it works quite well for that purpose, but when I tried to repurpose the underlying code for a different game, with a much higher object density, it became quite unmanageably slow. (I rebuilt the map for that game using canvas, but it does lose me some functionality.)
Looks very fake. Self-published (Anima-Core is NOT a journal), no academic track record, very strong claims, no peer review, no public history of technical work. Did I mention the use of GitHub via the web interface only?
At the same time, it's possible, since these are only classification tasks.
I mean, the method explained is technically plausible; a lot of people have thought about it, we just never managed to find a method that works.
Did you not see the author's note about being an outsider to academia? Not everyone has the background to pull all that off. This is an earnest attempt to come as close as possible and they even invite feedback that would help it become a real academic submission.
I mean, the process should have been to contact some local academics to discuss the matter. If I say it works (or it doesn't), I'm adding next to nothing to the claim, as I'm not an academic myself.
Big claims like this need clear and solid work. Here it just looks LLM-generated.
Have you run the walk-through to reproduce it? They provide a highly detailed step-by-step document, and they welcome an issue being raised if reproduction doesn't yield the claimed results within 2%.
It's OK to call out fake claims. But that requires going through the process when doing so is reasonable, and here it apparently takes only a couple of hours to find out.
The fake claim here is the compression. The results in the repo are likely real, but they're produced by running the full transformer teacher model every time. That doesn't achieve anything novel.
That's not how the method works... The full transformer is only needed once to extract the activation fields. That step can even be done offline. Then the teacher can be discarded entirely. The compression result refers to the size of the learned field representation and the small student head that operates directly on it. Simple. No fake claim there. Inference with the student does not involve the transformer at all.
If you look at the student-only scripts in the repo, those runs never load the teacher. That's the novel part.
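To illustrate the shape of that pipeline, here is a generic knowledge-distillation sketch (made-up names and a standard soft-label loss, not code from the repo): the teacher runs exactly once to record its outputs, and the student trains and predicts without the teacher ever being loaded again.

import torch
import torch.nn as nn
import torch.nn.functional as F

def record_teacher_logits(teacher, xs, path):
    # Step 1 (offline): the only time the teacher is ever loaded.
    teacher.eval()
    with torch.no_grad():
        torch.save(teacher(xs), path)

def train_student(xs, path, in_dim, n_classes, epochs=100):
    # Step 2: a tiny student learns to mimic the saved teacher logits.
    t_logits = torch.load(path)
    student = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                            nn.Linear(64, n_classes))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        # Match the teacher's output distribution (soft labels).
        loss = F.kl_div(F.log_softmax(student(xs), dim=-1),
                        F.softmax(t_logits, dim=-1), reduction="batchmean")
        loss.backward()
        opt.step()
    return student

# Step 3 (inference): the student alone; the teacher never loads again.
# preds = student(new_xs).argmax(dim=-1)

The whole dispute is whether inference really stops at step 3 or quietly re-runs the teacher.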
Can you please share the relevant code that has the training of such a tiny student model that can operate independently of the big teacher model after training? The repository has no such code.
> You can't know 200 people, but you can know 10 people who each know 10 people
You are still 100 people short of knowing 200 people, but I get the idea.
The 100-people limit is already known by most teachers. With more than 3 classes, it is mostly impossible (very hard) to have a "deep" follow-up of each student.
With more than 6 classes, it is strictly impossible to follow them, even in the best conditions.
Ideally working with toad to experiment with it.