If you feel that TypeScript, or hell, even JavaScript, is becoming more like C#, that is actually deliberate on Microsoft's part, to benefit their own ecosystem. In this interview they mentioned they had internal demands to convert/transpile C# into JavaScript or TypeScript. So making these target languages more like C# directly serves that need. But I don't think this should be the driving force in designing ECMAScript. When they push a language feature, they have an unspoken internal goal, every choice they make is aimed at making JS/TS look more like C#, and they are more likely to dismiss proposals that prevent them from delivering that goal. There's likely a bit of a conflict of interest there.
As a long-term observer: definitely not a goal. But you have to be clear here: JavaScript and C# are both OO languages, both have origin stories in Java/C++, both face the same niche (system development), the same challenges (processor counts, ...), and so on. And then you put teams on them who look left and right when they face a problem, and you wonder that they reuse what they like?
The C# language team is also really good. They have not made a lot of mistakes in 25+ years, and they are a very valid source of OO and OO-hybrid concepts. It is not only TS/JS but also Java and C++ that often look to C#.
The story was not to transform existing C# code into JS, but to write the code in C# in the first place and transpile it — not for the sake of .NET usage, but for the sake of having a good IDE.
> They did not do a lot of mistakes in the 25+ years
If my memory serves, .NET and WinFS were the two major forces that sunk Longhorn, and both have been given their walking papers after the reset [1].
.NET and C# have grown to be mature and well-engineered projects, but the road there was certainly not without bumps. It's just that a lot of the bad parts haven't spilled outside of Microsoft, thankfully.
Not only that; they went as far as mixing project issues into language design. A massive rewrite combined with massive feature changes is always a tricky thing, no matter the language.
.NET was already a going concern before Longhorn even started. What sank Longhorn was the fact that writing an OS from scratch is hard and maintaining compatibility with existing OSes in the process is even harder, especially when you're adopting a completely new architecture. Longhorn would have been a microkernel running 100% on the .NET runtime, mainline Windows is a monolithic kernel written in C++. I don't know how it would have ever worked, whether .NET was "perfect" or not.
Android still runs on a monolithic kernel written in a memory-unsafe language. I'm finding it surprisingly difficult to find information on Midori, other than that it runs .NET DLLs as user-space applications; there's nothing about the structure of the kernel.
Longhorn was going to be more than that. Microsoft did have Singularity/Midori projects, started around the middle of Longhorn/Vista, and continued much longer after Vista released to build out the managed microkernel concept. It's been about a decade since they've put any work into it, though.
The article presents some things mixed up. It had nothing to do with the language or the framework: WinFS was a database product, over-engineered and abstract.
.NET and C# were researched a lot for operating system usage (Midori, Singularity) but that was after Longhorn.
The operating system group's UI toolkits were a further problem, and they pivoted there a dozen times over the years. Particularly for a C++-based OS group.
But the death of Longhorn was ultimately about Bill Gates's security reset.
The Windows team is a C++ kingdom, and those devs will not adopt .NET even at gunpoint.
They redid Longhorn with COM and called it WinRT. Irony of ironies: WinRT applications run slower than .NET, with COM reference counting all over the place.
Google has shown those folks what happens when everyone plays on the same team, and now it owns the mobile phone market: a managed userspace with 70% of the world market.
Not at all. Before the use of TypeScript exploded, two features were brought into it from C#: namespaces and enums, both of which are amazingly good features. For the first one, no one knew what the right choice was back then. We had almost a dozen different module systems, and TypeScript had gone out of its way to support all of them; namespaces were its own solution to the mess (remember, they were trying to solve their own problems at first — it wasn't an attempt to dominate anything). I personally used namespaces, and I could run only the TypeScript compiler, producing a single JS file for rapid development, without the burden of the then very slow webpack.
And for enums: using strings as enums was never a very efficient idea. I think JavaScript introduced Symbols for locked/hidden properties but also meant for them to be used as enums. That never worked either, and then TypeScript's sum types (union types) led the whole community to keep using strings as enums. This is still a very bad idea: it is not ergonomic, it is prone to many problems, and comparing strings is far less efficient than comparing integers. TypeScript tried to fix the problem, almost everyone rejected it, and so enum is now effectively discontinued.
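To make the contrast concrete, here is a rough sketch of the two styles (the `Direction` names are made up for illustration):

```typescript
// A classic TypeScript numeric enum: members compile to integers,
// so comparisons are integer comparisons.
enum Direction { Up, Down, Left, Right }

function isVertical(d: Direction): boolean {
  return d === Direction.Up || d === Direction.Down; // integer compare
}

// The union-of-string-literals style the community settled on instead:
type DirectionU = "up" | "down" | "left" | "right";

function isVerticalU(d: DirectionU): boolean {
  return d === "up" || d === "down"; // string compare on every call
}
```

The union style is type-checked just as strictly at compile time; the complaint above is about runtime cost and ergonomics, not type safety.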
The rest of the changes to TypeScript came from almost any language but C#; probably the biggest changes ever to happen to JavaScript came directly from CoffeeScript. And then I personally watched each of these new changes arrive in C#, one by one. From what I have seen firsthand reading the TC39 proposals, each feature came from a different community and a different programming language (think of the non-null/optional operators !/?., the nullish coalescing ??, the incoming pipe operator, fat arrows and lambdas, mixins). Since JavaScript is the one language everyone has to use, it has benefited everyone to have a language with all the great things from all the other languages.
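For readers who haven't met these operators, a small sketch of the ones mentioned above (the `Config` shape is made up for illustration):

```typescript
interface Config {
  retries?: number;
  logger?: { level?: string };
}

const cfg: Config = { retries: 0 };

// ?? only falls back on null/undefined, so it keeps 0, "" and false —
// unlike ||, which treats all falsy values as "missing":
const retries = cfg.retries ?? 3;   // 0
const retriesOr = cfg.retries || 3; // 3 — the footgun that ?? fixes

// ?. short-circuits to undefined instead of throwing on a missing object:
const level = cfg.logger?.level ?? "info"; // "info"

// Fat arrows are just concise lambdas:
const double = (n: number) => n * 2;
```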
Well, for one, benefiting Microsoft's ecosystem does not imply being detrimental to other ecosystems per se.
Furthermore, couldn't the convergence of TypeScript towards C# be simply a result of shared goals and values of the two languages, especially considering they have the same principal designer?
The sequence of Turbo Pascal / Delphi / C# / TypeScript, which brought us LSP as a sidekick (!), IMHO has benefited the whole industry at least as much as "transpile C# to ECMAScript via TypeScript". No: much, much, much more.
I do not see a problem with MS also having an internal use case.
You know, I wouldn't stop using Python "because" Guido now works at MS...
Python has an elected steering council and core team. The governance process explicitly tries to avoid conflict of interest by disallowing more than two steering council members working for the same employer. See PEP 13 [1].
By contrast, .NET is controlled by Microsoft (with veto over board decisions [2] and code changes [3]), integrates Microsoft's telemetry to send your data to Microsoft by default [4] and deliberately hobbles features to benefit Microsoft [5].
The complaint above was that JS was becoming too much like C#, so the steering committee of .NET isn't the one of the original concern. (Also, as pointed out, that "deliberate hobbling" case was litigated in the public square on HN at the time and then revised and "unhobbled" after the outcry.)
As far as the other direction, JS has a somewhat similar (but rather more complex) situation to Python with its steering committee being Ecma International's TC39 (Technical Committee 39).
Ecma International has similar by-laws and rules designed to manage conflicts of interest and prevent too much power consolidating in a single employer of committee members. Ecma is maybe even a little "stricter" than Python, because its rules consider the companies themselves to be the members, and a company only gets one vote no matter how many of its employees interact with the process.
Does implementing algebraic effects require stack-switching support? If so, I wonder what runtime cost we must pay when heavily using algebraic effects. Is there any empirical study on the performance of algebraic effects implementations?
In OCaml 5, we’ve made it quite fast: https://kcsrk.info/papers/drafts/retro-concurrency.pdf. For us, the goal is to implement concurrent programming, for which a stack-switching implementation works well. If you use OCaml effect handlers to implement a state effect, it is going to be slower than using mutable state directly. And that’s fine. We’re not aiming to simulate all effects using effect handlers, only non-local control flow primitives like concurrency, generators, etc.
Suppose one of your effects is `read()`, and you want to be able to drop in an asynchronous implementation. Then you'll either require something equivalent to stack switching, or one of the restrictions on asynchronicity that lets you get away without stack shenanigans -- though practical algorithms usually end up requiring stack switching.
You can get a lot of mileage out of algebraic effects without allowing such ideas, though. Language constructs like allocation, logging, PRNG, database sessions, authorization, deterministic multithreaded PRNG, etc. are all fairly naturally described as functions+data you would like to have in scope (runtime scope -- foo() called bar() -- as opposed to lexical scope), potentially refining or changing them for child scopes. That's a weaker effect system than you would get with the usual AE languages, but there are enough concepts you might feasibly want to build on such a system that it's still potentially worthwhile.
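The weaker "capabilities in runtime scope" idea can be sketched without any language support at all, by threading a capability record through calls and refining it for child scopes (all names here are made up for illustration):

```typescript
// A record of capabilities available in the current (runtime) scope.
interface Caps {
  log: (msg: string) => void;
  rand: () => number;
}

function child(caps: Caps): number {
  caps.log("child running");
  return caps.rand();
}

function parent(caps: Caps): number {
  // Refine the capability for the child scope only:
  // here, swap in a deterministic PRNG.
  return child({ ...caps, rand: () => 0.5 });
}
```

This gives you the scoping and override behavior but none of the control-flow power (no suspending, no resuming) of full effect handlers.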
Netflix implements "imgsub"[1]: it actually delivers a zipped archive of transparent images to the player. So technically they can pre-render positioned, typeset subtitles on the server and render them as an image overlay, as long as there are no animated text effects.
In general, streaming services have to ensure maximum compatibility when playing their content on all kinds of devices, high-end and low-end. On a low-end device, rendering typeset subtitles could be very resource-intensive. There are also platforms where all video playback has to be managed by the platform's system frameworks, with limited format support, and streaming services can't do much about that.
The priority of a streaming service is extending its market reach, and I think Crunchyroll itself faces the same challenge.
I think the right solution is getting typeset subtitles, and the end-to-end workflow — creation, packaging, delivery, and rendering with adaptation (device capabilities, user preferences, localization, etc.) — standardized. A more efficient workflow is needed, so that a single subtitle source can generate a set of renditions suited to different players' rendering capabilities. Crunchyroll should actively participate in these standards bodies and push for adoption of more features and support across the streaming industry.
Unfortunately, as the link describes, Netflix only makes this available for a very limited set of languages, while everyone else is stuck with the extremely limited text-based standards.
Frankly, those text-based subtitle standards are quite maddening on their own. Netflix's text-based subtitle rendering seems to support a much wider set of TTML features than what it actually allows subtitle providers to use — so if those restrictions were slightly relaxed, providers could start offering better subtitles for anime immediately, with no additional effort from Netflix.
What Netflix supports on their main website might not be what they care about, though; you used to be able to watch Netflix on the Nintendo Wii, and they probably still have some users on stupidly old smart TVs.
Fast forward to 2025, and the BBC's streaming app on Apple TV only just added subtitles; vastly more powerful hardware, but so many restrictions from Apple on how developers use it.
In 2008 I was watching fansubbed anime with decent typesetting on a netbook with a shitass-even-for-the-time Atom processor, so I don't buy for one second that this is a device capabilities issue.
> In general, streaming services have to ensure maximum compatibility when playing their content on all kinds of devices, high-end and low-end. On a low-end device, rendering typeset subtitles could be very resource-intensive. There are also platforms where all video playback has to be managed by the platform's system frameworks, with limited format support, and streaming services can't do much about that.
Surely if my mid-range phone from 2015 supported everything .ASS has to offer, they could do it too?
In any case... I don’t believe the problem is that Netflix and Crunchyroll have to support low-end devices, it’s that they don’t want to pay $$$ for typesetting. They are big enough now that they don’t have to care, so they don’t – just another example of enshittification.
I wouldn't bet that every smart TV Crunchyroll wants to be available on has more processing power than your phone from 2015 (some of those TVs might be older than that), but yes, it's probably less about hardware capabilities than about platform limitations that make the usual solution of compiling libass into a blob and integrating it into the player not so easy to implement.
I remember arguing with Ron on the TC39 disposable proposal that I think Go's `defer` is a better pattern than C#'s `using`, and he tried to convince me otherwise.
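For context, the two patterns differ in where cleanup is attached: the TC39 proposal ties disposal to a declaration (like C#'s `using`), whereas Go's `defer` registers cleanup imperatively at any point and runs it in reverse order on exit. A tiny defer-style helper — a sketch, not anything from the actual proposal — shows the Go flavor in TypeScript:

```typescript
// Run `body`, collecting cleanup callbacks via the provided `defer`
// function, then run them in reverse (LIFO) order on the way out,
// even if `body` throws.
function withDefers<T>(body: (defer: (fn: () => void) => void) => T): T {
  const cleanups: Array<() => void> = [];
  try {
    return body((defer) => cleanups.push(defer));
  } finally {
    for (const fn of cleanups.reverse()) fn();
  }
}

// Usage: cleanup is registered right next to the acquisition,
// but can be made conditional or loop-driven, unlike a declaration.
const order: string[] = [];
withDefers((defer) => {
  defer(() => order.push("closed"));
  order.push("work");
});
// order is now ["work", "closed"]
```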
I was surprised to see they chose Go instead of C# for the TypeScript compiler port. Microsoft has been trying to make ECMAScript look more like C#, and their Windows Universal SDK has made a lot of effort to provide a seamless transition for developers porting code between C# and TypeScript. And yet they still think porting the TypeScript compiler to Go is easier than porting it to C#.
Despite my technical disagreements with Ron, I appreciate and respect the great work he has done for TypeScript and ECMAScript, and I wish him the best in his next adventure.
Hejlsberg's arguments for choosing Go over C# sounded well-founded and very pragmatic to me, though, with things like battle-tested AOT compilation (.NET's current iteration is very promising, but still relatively nascent) and the type system being a stronger fit (I really miss TS's structural typing sometimes in C# :) ).
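The structural-typing point in a nutshell (a made-up example): in TypeScript, any value with the right shape satisfies an interface, with no explicit `implements` declaration, whereas C#'s nominal typing would require the type to name the interface it implements.

```typescript
interface Named {
  name: string;
}

function greet(x: Named): string {
  return `hello, ${x.name}`;
}

// `point` never mentions Named, but it has a `name: string`,
// so it is assignable to Named structurally.
const point = { name: "origin", x: 0, y: 0 };
```

Go sits in between: its interfaces are also satisfied structurally (implicitly), which is presumably part of why it felt like a closer fit for porting the compiler.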
As someone who has a lot of .NET projects at work it's a bit of a bummer since the dogfooding would have been a huge benefit for .NET, but I honestly can't argue with their choice.
They should really rebrand their home server to another name, so that the Matrix name unambiguously refers to the protocol.