Hacker News | mrsmrtss's comments

I don't know why this is downvoted, because the statement is not wrong (https://benchmarksgame-team.pages.debian.net/benchmarksgame/...). Times have changed, modern .NET is very fast and is getting faster still (https://devblogs.microsoft.com/dotnet/performance-improvemen...).

I have nearly a decade of experience building .NET C# solutions on Linux and lately also on Mac, with almost everything hosted on Linux via Docker. I’m not sure what’s still missing for it to be accepted into the "first class citizen" club by the Linux elite.

The goalposts keep being moved. I constantly hear “there is no GUI story”… well, that's not a core part of the language!

There actually is a great GUI story for Linux today as well, enabled by the excellent Avalonia (https://avaloniaui.net).

I have worked with AWS, Google and Azure. Google Cloud has the worst UI of the three: it's slow, broken and just horrible. The UI in AWS may be faster than Azure's, but the overall layout and organization feel a lot better in Azure. I would strongly recommend clearly separating builds from deployments if you don't want bad surprises. In the age of containers there should really be no difference in how, where or what you deploy.

Why would C# be the worst choice? Do you gave any real arguments or is it just your biased opinion?


Sorry, made a typo with 'gave' -> 'have'. But the point stands: why would C# be (one of) the worst choices here (when C# has small AOT binaries, hot reload, etc.)?
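
For context, Native AOT publishing in current .NET is a single project property; a minimal sketch (the InvariantGlobalization line is an optional size tweak, not required):

    <!-- .csproj: opt in to Native AOT (.NET 7+);
         `dotnet publish -c Release` then emits a self-contained native binary -->
    <PropertyGroup>
      <PublishAot>true</PublishAot>
      <!-- optional: drop ICU culture data for a smaller binary -->
      <InvariantGlobalization>true</InvariantGlobalization>
    </PropertyGroup>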


Nowadays C# has good AOT support, but that wasn't the case when Flutter was in its infancy.


> when C# has small AOT binaries, hot reload, etc.

In 2015?


>Most games today are "made" in C#, which is also a language with GC and Go beats it in every aspect in performance.

That is just nonsense. Go may beat C# in memory consumption, but even that is a close call if you use AOT. C# is arguably the fastest GC-based language there is: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... This was already the case before .NET 10, which brought many more optimizations: https://devblogs.microsoft.com/dotnet/performance-improvemen...


Regarding GC pauses, there is an interesting alternative GC for .NET with ultra-low pauses called Satori. It's primarily discussed here: https://github.com/dotnet/runtime/discussions/115627, and the GC itself can be found here: https://github.com/VSadov/Satori


And the Linux kernel is written in C etc., so by this logic you don't even need memory safety. There is no good excuse for designing a language in modern times (this century) with every object nullable by default. C# at least has mostly solved this design mistake later by introducing nullable reference types (https://learn.microsoft.com/en-us/dotnet/csharp/nullable-references). Then again, Go's designers insisted that generics were also unnecessary, until they changed their minds.
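
For readers unfamiliar with the feature, a minimal sketch of what nullable reference types change (shown with a file-level `#nullable enable`; projects normally set `<Nullable>enable</Nullable>` instead):

    #nullable enable

    string name = null;    // warning CS8600: null assigned to a non-nullable reference
    string? maybe = null;  // fine: the type explicitly admits null

    // The compiler makes you handle null before dereferencing.
    Console.WriteLine(SafeLength(maybe)); // prints 0

    static int SafeLength(string? s) => s?.Length ?? 0;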


On the contrary: we have 40 years of security exploits to prove otherwise, and the Linux kernel has plenty of CVEs.

The C# solution doesn't work; most projects never adopted it, because it is a mess to use with third-party libraries that never bothered to add the required annotations, which is why it is still a warning and optional to this day.


I’m not sure which .NET libraries you are referring to, but all the ones we use have nullable reference types enabled. If you configure warnings as errors (as you should), then it works exceptionally well. Even if you were to use a library where nullable reference types are not enabled, you only need to check for null once during the library call, rather than everywhere in your codebase.
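
A hedged sketch of that once-at-the-boundary check, with `ILegacyUserStore` standing in for a hypothetical un-annotated library:

    // Hypothetical un-annotated third-party API and a toy model type.
    public interface ILegacyUserStore { User? FindUser(int id); }
    public record User(int Id, string Name);

    public static class UserBoundary
    {
        // Null is handled once, at the library call site; everything
        // downstream works with a guaranteed non-null User.
        public static User GetRequiredUser(ILegacyUserStore store, int id) =>
            store.FindUser(id)
                ?? throw new InvalidOperationException($"No user with id {id}.");
    }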


What? NRTs are used everywhere, with WarningsAsErrors:nullable also gaining popularity. Whatever environment you are dealing with C# in, if the opposite is true there I suggest getting away from it ASAP.
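
For reference, the project-level opt-in being described is just two properties in the csproj (a minimal sketch):

    <PropertyGroup>
      <Nullable>enable</Nullable>
      <!-- promote only nullability warnings to build errors -->
      <WarningsAsErrors>nullable</WarningsAsErrors>
    </PropertyGroup>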


Sidenote: just a heads-up that I tried emailing you recently to let you know that you might want to contact the HN mods to find out why all your comments get set to dead/hidden automatically.

Your account might have triggered some flag sometime back and relies on users vouching for your comments so they can become visible again.



Ah, thank you for the context.


I saw the email, and thanks. This is okay - I did not exercise good impulse control (nor should anyone have to) when dealing with bad-faith arguments, which inevitably led to an account ban. Either way, Merry Christmas!


Agree, mapping libraries make things only more complicated and harder to debug.


Auto mappers sincerely need to go away. They work kind of fine initially, but at the first custom field mapping or nested field extraction you have to invest hours in the mostly complete failures that these unnecessary DSLs are, in order to do something that is extremely trivial in basic C# - and often it is impossible to shoehorn the necessary mapping into place at all. Then you have to deal with package upgrades that regularly force you to rewrite custom mapping logic, and to be safe you have to write additional tests just to hand-hold the mapper. With multi-caret editors and regex there is no need for auto mappers: you can write a mapping once and forget about it, as in the sketch below.
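
A hand-written mapping is just a plain method; a minimal sketch with made-up `Order`/`OrderDto` types:

    // Hypothetical domain and DTO types for illustration.
    public record Customer(string Name);
    public record Order(int Id, Customer Customer, decimal Total);
    public record OrderDto(int Id, string CustomerName, decimal Total);

    public static class OrderMappings
    {
        // Explicit, debuggable, refactor-safe: nested field extraction
        // ("flattening") is ordinary property access, no DSL required.
        public static OrderDto ToDto(this Order order) =>
            new(order.Id, order.Customer.Name, order.Total);
    }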


>so preoccupied with whether or not they could, they didn't stop to think if they should

This describes more than half of .NET community packages and patterns. So much stuff is driven by chasing the "oh, that's clever" high - forgetting that clever code is miserable to support and maintain in prod. That's true even when it's your own code, but when it's third-party libs, it's just asking for weekend debugging sessions and all-nighters two months past the initial delivery date. At some point you just get too old for that shit.


The C# non-SIMD version (naive, non-optimized) is in the same ballpark as other similar GC languages. The Nim version is not some naive version either; it seems rather specially crafted so that it can be vectorized, and it still loses to C# SIMD.


Loses? My comparison is regarding GP's metric perf/lines_of_code. Let m := perf/lines_of_code = 1/(t × lines_of_code) [higher is better], or, to make comparison simpler*, m' := 1/m = t × lines_of_code [lower is better]. Then**:

   Nim          1672
   Julia        3012
   D            3479
   C# (SIMD)    5853
   C#           8919
>The Nim version is not some naive version

It's a direct translation of the formula, using `mod` rather than `x = -x`.

*Rather than comparing numbers << 1. **Excluding blank/comment lines, counted the way cloc and similar tools do.


Nim "cheats" in a similar way C and C++ submissions do: -fno-signed-zeros -fno-trapping-math

Although arguably these flags are more reasonable than allowing the use of -march=native.

Also consider the inherent advantage popular languages have: you don't need to break out a completely niche language to achieve high performance. That said, this microbenchmark is naive and does not showcase the realistic bottlenecks applications face: how well-optimized the standard library and popular frameworks are, whether the compiler deals with complexity and abstractions well, whether there are issues with multi-threaded scaling, etc. You can tell this from the performance of the dynamically typed languages - since all the data is defined in the scope of a single function, the compiler needs to do very little work, which can hide the true cost of using something like Lua (LuaJIT).


> Nim "cheats" in a similar way C and C++ submissions do: -fno-signed-zeros -fno-trapping-math

I don't see these flags in the Nim compilation config. The only extra option used is "-march=native"[0].

[0] https://github.com/niklas-heer/speed-comparison/blob/9681e8e...



Per the rules[0]: "Use idiomatic code for the language. Compiler optimizations flags are fine."

Agree with the rest of your comment.

[0]: https://github.com/niklas-heer/speed-comparison#rules


When using a switch expression in C#, they are a lot more similar:

    public int Fib(int n) => n switch
    {
        <= 1 => n,
        _    => Fib(n - 1) + Fib(n - 2)
    };
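
For comparison, the same function in the classic statement form it replaces:

    public int Fib(int n)
    {
        // Base case, then the usual naive recursion.
        if (n <= 1) return n;
        return Fib(n - 1) + Fib(n - 2);
    }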

