Hacker News

I believe they are thinking about the cost of building the next NumPy/PyTorch/TensorFlow.


PyTorch supports fusion and CPU/GPU codegen, https://towardsdatascience.com/how-pytorch-2-0-accelerates-d...

Taichi already delivers impressive speedups and supports all the major backends (including Metal): https://www.taichi-lang.org/ The fact that they didn't mention Taichi is a glaring omission.

JAX is already that next version of TensorFlow: https://jax.readthedocs.io/en/latest/notebooks/quickstart.ht...
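A sketch of the JAX approach from the linked quickstart (assumes jax is installed): NumPy-style code is JIT-compiled through XLA, which fuses the elementwise ops into one kernel.

```python
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    # sin^2 + cos^2 == 1 elementwise; XLA compiles and fuses these ops
    return jnp.sin(x) ** 2 + jnp.cos(x) ** 2

out = f(jnp.arange(4.0))  # each element is ~1.0
```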

This looks like a reboot of Numba with more resources devoted to it, or a juiced-up Shedskin. https://shedskin.github.io/

I think Mojo is a minor mistake, and I would caution against adoption: it is a superset fork of Python instead of being a performance subset. Subsets always decay back to the host language; supersets fork the base language. I would rather see Mojo's improvements integrated into Python than adopt and extend a new language.

I have a theory about why Mojo exists. The team was reading a lot of PyTorch and TensorFlow code, getting frustrated with what a pain C++ and MLIR are for their model-backend retargeting codegen, so they created their perfect C++/Python mashup rather than use TVM. They nerd-sniped themselves: rather than just using Python 3.12 + mypy, they made a whole new language based on Python.



