
The WebAssembly standard's design did consider binary size; the format itself is quite compact. But porting native code to Wasm often pulls in many large existing libraries, and all that code makes the binary large.

The native ecosystem never paid attention to binary size optimization, but the JS ecosystem paid attention to code size from the very beginning.


The fact is that Wasm in a browser provides very little. Anything you do there requires bringing along a bunch of things that JavaScript simply has access to out of the box. That includes both things like the Fetch API and things like a garbage collector.
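
For instance, with wasm-bindgen even fetch has to be declared as an import and wired up through JS glue. A minimal sketch (assumes the wasm-bindgen and js-sys crates; not taken from any particular project):

    use wasm_bindgen::prelude::*;

    // Nothing is ambient inside Wasm: even the global `fetch` must be
    // declared as an import, which the JS glue code then provides.
    #[wasm_bindgen]
    extern "C" {
        fn fetch(input: &str) -> js_sys::Promise;
    }

    #[wasm_bindgen]
    pub fn get(url: &str) -> js_sys::Promise {
        fetch(url)
    }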

> The native ecosystem never paid attention to binary size optimization, but the JS ecosystem paid attention to code size from the very beginning.

That's not true at all. The native ecosystem stopped paying attention to it outside of embedded once storage/memory got relatively cheap (i.e. we went from kilobytes and megabytes to gigabytes). Native also gets to use shared libraries... sometimes.

The JavaScript ecosystem is like a pendulum: sometimes they care, sometimes they don't.

I have high hopes for components, both for reducing size and for avoiding JavaScript.


Yes, because compilers and linkers have had optimize-for-size switches since the 1980s just for fun.

They originally used a JS Realm polyfill, which is not a real JS Realm. The polyfill had some security holes. Now they've switched to a JS interpreter in Wasm:

https://www.figma.com/blog/an-update-on-plugin-security/


Thanks!

I've written about the limitations of WebAssembly: https://qouteall.fun/qouteall-blog/2025/WebAsembly%20Limitat...

WebAssembly still doesn't provide a way to release memory back to the browser (unless you use Wasm GC). The linear memory can only grow.
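
A minimal sketch of the grow-only behavior using Rust's core::arch::wasm32 intrinsics (the build setup is assumed):

    // Build with: cargo build --target wasm32-unknown-unknown
    #[cfg(target_arch = "wasm32")]
    pub fn observe_grow_only() -> usize {
        use core::arch::wasm32::{memory_grow, memory_size};

        let before = memory_size(0); // current linear memory, in 64 KiB pages
        // Request one more page; returns the previous size in pages,
        // or usize::MAX on failure.
        let _prev = memory_grow(0, 1);
        // There is no memory_shrink counterpart: even after the allocator
        // frees everything, the pages stay reserved from the browser's
        // point of view.
        memory_size(0) - before
    }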

Wasm GC restricts memory layout and doesn't yet support multithreading.

Wasm multithreading has many limitations, such as not being able to block on the main thread, not being able to share the function table, etc. And web workers have an "impedance mismatch" with native threads.

And the tooling is also immature (debugging often comes down to print debugging).


Honestly, the lack of true multithreading (without the Web Worker hack) is the biggest downside for me. Every major project I work on needs the concept of a main thread for UI and a separate thread for processing.

One limitation of Rust macros is that they can only access code tokens, not actual type information.

When a macro sees a type `X`, it can never be sure whether `X` is `&str`, because Rust allows type aliases. And even if `X` is not literally `&str`, it may be a struct that contains a `&str`; the macro still cannot know.
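
A minimal illustration (the `Text` alias and `Wrapper` struct are made up):

    // A proc macro applied to `takes_both` would receive only the tokens
    // `Text<'a>` and `Wrapper<'a>`; it cannot ask the compiler what they
    // resolve to.
    type Text<'a> = &'a str; // token-level inspection can't see through this alias

    struct Wrapper<'a> {
        inner: &'a str, // nor can a macro see that this struct holds a &str
    }

    fn takes_both<'a>(a: Text<'a>, b: Wrapper<'a>) {
        let _ = (a, b.inner);
    }

    fn main() {
        takes_both("hello", Wrapper { inner: "world" });
    }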

wasm-bindgen works around this issue by generating "fake functions", which its CLI then reads and removes:

https://wasm-bindgen.github.io/wasm-bindgen/contributing/des...

Zig's comptime allows full access to type information. This is one advantage of Zig.


While this is true, it is kind of orthogonal to the issue here, since the macro correctly guessed that `&str` referred to a borrowed string and generated code for it.

The issue was that the deserializer could produce non-borrowable strings at runtime, which cannot be used to create an instance of this type and hence caused a runtime panic.

Catching this should be possible (even without precise type information), but I'm not sure it can be done without breaking changes to the Deserializer trait. It would also suffer from false positives, since sometimes you know that the inputs you will deserialize only include strings that can be deserialized in this way.
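
For illustration, a minimal reproduction of that failure mode (assumes serde and serde_json as dependencies):

    use serde::Deserialize;

    #[derive(Debug, Deserialize)]
    struct Record<'a> {
        name: &'a str, // the derive generates borrowing code, as intended
    }

    fn main() {
        // Fine: the substring can be borrowed directly from the input buffer.
        let ok: Record = serde_json::from_str(r#"{"name":"plain"}"#).unwrap();
        println!("{ok:?}");

        // Fails at runtime: "\n" has to be unescaped into an owned String,
        // which a &str field cannot borrow from.
        let err = serde_json::from_str::<Record>(r#"{"name":"with\nescape"}"#);
        println!("{err:?}");
    }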


Real-world business requirements often need to read some data, then touch other data based on the result of that read.

That violates the "every transaction can only be in one shard" constraint.

For a specific business requirement, it's possible to design clever sharding that makes the transaction fit into one shard. However, new business requirements can emerge and invalidate it.

"Every transaction can only be in one shard" only works for simple business logics.


I talk about these problems in the "How hard can sharding be?" section of the article — long story short, not all business requirements can be dealt with easily, but surprisingly many can if you choose a smart sharding key.

You can also still do optimistic concurrency across shards! That covers most of the remaining ones. Anything more complex — sagas, 2PC, etc. — is relatively rare, and at scale, a traditional SQL OLTP database will also struggle with those.
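
A minimal sketch of the read-many/write-one case, with a hypothetical version-column CAS standing in for a single-shard transaction:

    use std::cell::Cell;

    // Hypothetical single-row shard with a version column; stands in for
    // a real shard that supports one-shard transactions.
    struct Shard { value: Cell<i64>, version: Cell<u64> }

    impl Shard {
        fn read(&self) -> (i64, u64) { (self.value.get(), self.version.get()) }
        // Single-shard compare-and-set: commit only if the version matches.
        fn cas(&self, expected_version: u64, new_value: i64) -> bool {
            if self.version.get() == expected_version {
                self.value.set(new_value);
                self.version.set(expected_version + 1);
                true
            } else { false }
        }
    }

    // Read from shard `a`, write to shard `b` based on what was read.
    // The write is a one-shard transaction; `a` is validated by re-checking
    // its version afterwards, retrying if anything moved in between.
    fn read_a_write_b(a: &Shard, b: &Shard) {
        loop {
            let (price, a_ver) = a.read();
            let (_, b_ver) = b.read();
            if b.cas(b_ver, price * 3) && a.read().1 == a_ver {
                return;
            }
        }
    }

    fn main() {
        let a = Shard { value: Cell::new(10), version: Cell::new(0) };
        let b = Shard { value: Cell::new(0), version: Cell::new(0) };
        read_a_write_b(&a, &b);
        assert_eq!(b.read().0, 30);
    }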


Thanks for the reply.

So in my understanding:

- Transactions that only touch one shard are simple

- Transactions that read multiple shards but write only one shard can use simple optimistic concurrency control

- Transactions that write (and read) multiple shards stay complex. They can be avoided by designing a smart sharding key (hard to do if the business requirements are complex)


Optimistic concurrency control that reads multiple shards cannot use a simple CAS. It probably needs to do something like two-phase commit.
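
A rough sketch of the shape of two-phase commit (hypothetical Participant trait; real 2PC also needs durable logs and crash recovery, which this omits):

    trait Participant {
        fn prepare(&mut self) -> bool; // vote: "can you commit?"
        fn commit(&mut self);
        fn abort(&mut self);
    }

    // Phase 1: every shard votes. Phase 2: commit only if all voted yes.
    fn two_phase_commit(participants: &mut [Box<dyn Participant>]) -> bool {
        let all_prepared = participants.iter_mut().all(|p| p.prepare());
        if all_prepared {
            for p in participants.iter_mut() { p.commit(); }
        } else {
            for p in participants.iter_mut() { p.abort(); }
        }
        all_prepared
    }

    struct AlwaysYes;
    impl Participant for AlwaysYes {
        fn prepare(&mut self) -> bool { true }
        fn commit(&mut self) { println!("committed"); }
        fn abort(&mut self) { println!("aborted"); }
    }

    fn main() {
        let mut shards: Vec<Box<dyn Participant>> =
            vec![Box::new(AlwaysYes), Box::new(AlwaysYes)];
        assert!(two_phase_commit(&mut shards));
    }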


That's right!

If you anticipate that you will encounter the third type a lot, and you don't anticipate that you will need to shard anyway, then what I'm talking about here makes no sense for you.


Business people have a nasty habit of identifying two independent pieces of data you have and finding ideas to combine them to do something new. They aren't happy until every piece of data is combined with every other piece, and then they still aren't happy, because now everything is horrible: everything is coupled to everything.


In my experience, on most backends I have worked on, people don't use the facilities of their database. They simply hit the database two or more times. But that doesn't mean it's not possible to do better if you actually put more care into your queries. Most of the time, multiple transactions can be eliminated. So I don't agree that this is a business-requirement-complexity problem. It's an "it works, so it's good enough" problem, or a "lazy developer" problem, depending on how you want to frame it.


This (along with N+1 queries) is somewhat encouraged in business applications by the prevalence of the repository pattern.
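
A sketch of the query-count difference (hypothetical repository API; the point is round trips, not any specific library):

    // Hypothetical repository; imagine each method call is one database
    // round trip.
    struct OrderRepo;

    impl OrderRepo {
        fn order_ids_for(&self, _user: u64) -> Vec<u64> { vec![1, 2, 3] }
        fn load_order(&self, id: u64) -> String { format!("order {id}") }
        // One batched query, e.g. `... WHERE id IN (...)`.
        fn load_orders(&self, ids: &[u64]) -> Vec<String> {
            ids.iter().map(|id| format!("order {id}")).collect()
        }
    }

    fn main() {
        let repo = OrderRepo;
        let ids = repo.order_ids_for(42);

        // N+1: one query for the ids, then one more query per order.
        let slow: Vec<String> = ids.iter().map(|&id| repo.load_order(id)).collect();

        // Better: a single batched query for all orders.
        let fast = repo.load_orders(&ids);
        assert_eq!(slow, fast);
    }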


Give each business or customer its own schema and you almost never need sharding.


Yes, but you could also flip it the other way around — make the business or customer your sharding key, and you'll only need to manage one schema!


This video criticizes Rust using the perfect-solution fallacy: criticizing a useful thing just because it's imperfect.

https://en.wikipedia.org/wiki/Nirvana_fallacy


I'm going to disagree: it definitely felt self-aware without being full-on satire (and there were more than a few obscure in-jokes in there too).


We are in the down season.


There are two kinds of slowness. One is trying hard while getting no visible result. The other is procrastination. The article refers to the first.


And the challenge is making management understand the difference!


But here's the question: how do you tell the difference? Especially when hard work without visible results / legible failure is hard to communicate...


The better phrase is "if it compiles, then many possible Heisenbugs vanish":

https://qouteall.fun/qouteall-blog/2025/How%20to%20Avoid%20F...


I want to add the Contagious Borrow Issue: https://qouteall.fun/qouteall-blog/2025/How%20to%20Avoid%20F...

The contagious borrow issue is a common problem for beginners.
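
For readers who haven't hit it, a minimal example of one common form (names made up):

    struct App {
        items: Vec<String>,
        history: Vec<String>,
    }

    impl App {
        // Returning a reference borrows all of `self`, not just `items`.
        fn first_item(&self) -> Option<&String> {
            self.items.first()
        }

        fn append_log(&mut self, msg: &str) {
            self.history.push(msg.to_string());
        }
    }

    fn main() {
        let mut app = App { items: vec!["a".into()], history: Vec::new() };
        if let Some(item) = app.first_item() {
            // app.append_log(item); // ERROR: `app` is still borrowed by
            // `item`, so the borrow from `first_item` spreads to every
            // later use of `app`.
            let owned = item.clone(); // common workaround: clone to end the borrow
            app.append_log(&owned);
        }
    }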

