> I bet the compile times would improve if it wasn't header only.
If a hypothetical compilation-time problem concerns you, then rest assured that C++ lets you develop submodules that wrap and explicitly instantiate your templates, eliminating the need to recompile them on every single build.
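A minimal sketch of what I mean, using explicit instantiation (all names here are made up): the header declares the common instantiations as `extern template`, and one dedicated translation unit compiles them once.

```cpp
// big_template.h -- a hypothetical template-heavy header
template <class T>
struct BigTemplate {
    T compute(T x) const { return x + x; }  // imagine lots of code here
};

// Tell every includer that these instantiations are compiled elsewhere,
// so their code is not regenerated in each translation unit:
extern template struct BigTemplate<int>;
extern template struct BigTemplate<double>;
```

```cpp
// big_template_instances.cpp -- compiled once, in its own submodule
#include "big_template.h"

template struct BigTemplate<int>;     // explicit instantiations
template struct BigTemplate<double>;
```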
IMHO returning status 400 is only reasonable if the request itself is malformed. A request to check whether a CC is valid does not become malformed depending on whether the CC is valid or not.
> It is ridiculous that header-only is considered an advantage.
Why do you believe it's ridiculous? Being able to integrate a third-party library by just adding a few source files to your source tree is as simple as it gets.
> The state of C++ build tools is very poor, and it is harming the language as a whole.
This assertion makes no sense, particularly in light of this discussion. Installing a header-only library is a solved problem, and even template-heavy libraries such as Eigen are already distributed and installed quite easily with standard Linux package managers.
I should clarify. Being able to add headers to a project in C++ is easy but adding translation units is not (usually). This encourages header-only libraries even when they are not really appropriate, increasing compilation times etc.
Other things that become a huge albatross and are thus avoided:
- Adding compile-time build steps, e.g. for code generation.
- Adding dependencies of your own, even on, say, a tiny library of helper functions – unless you want to copy and paste it into your header.
With a package manager, those things 'just work'. Package managers also make it much easier to update to newer versions of the library as they're released.
I don't see your point. Adding custom build steps is a basic feature that pretty much every popular build system has supported for decades now, just like adding your own dependencies. Heck, CMake even lets users configure a project to download packages from the web and integrate them into the build, with custom build steps if needed. Even if we ignore this fact, there are also package management tools such as Conan which handle this case quite nicely and also support cross-platform deployments.
And let's not pretend otherwise: pretty much every single popular Linux distro already packages and distributes C++ libraries, and offers packaging tools and package repository services to distribute any dependency.
I'm starting to suspect that those who complain about these sorts of issues have little to no experience with C++.
If that were sufficient, we wouldn't see so many header-only libraries.
> Even if we ignore this fact, there are also package management tools such as Conan which handle this case quite nicely and also support cross-platform deployments.
Conan would be a decent solution if everyone used it. I tried it briefly last year; I ended up not using it for that project because it didn't support certain features I wanted (namely, compiler toolchain management), plus Bintray was having issues at the time. But my overall impression was that it was... fine. I may end up using it in the future.
For now, though, as a library author, most of your users won't have Conan set up; as a library consumer, most of the libraries you use won't be on Conan; and in either position, most people building your software won't know how to use Conan. Until that changes, it doesn't really solve the library friction problem.
> And let's not pretend otherwise: pretty much every single popular Linux distro already packages and distributes C++ libraries, and offers packaging tools and package repository services to distribute any dependency.
If you have a library which is popular, or a dependency of something that's popular, then yes, the N different Linux distributions and the package managers for other operating systems will all handle packaging your library themselves. If you just want to be able to upload some code and have it be immediately reusable by the wider world, well, it's not particularly feasible to make packages for N different distributions yourself. And even if you do, the Linux distros most people use are months to years behind the bleeding edge of software releases, so you'll have a while to wait before those packages actually reach people.
Sometimes I wonder why projects are either "single header only" or tens to hundreds of separate modules. One header and one source module would be a nice compromise regarding compile time vs. ease of use.
Yes, I realize that some header-only libraries support using the header as either header or implementation, but this still blows up compile time.
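For reference, that pattern (stb-style, with a hypothetical library name) looks roughly like this; the implementation compiles only in whichever translation unit defines the macro:

```cpp
// mylib.h -- a hypothetical single-header library
#ifndef MYLIB_H
#define MYLIB_H

int mylib_add(int a, int b);  // cheap declarations for every includer

#endif // MYLIB_H

#ifdef MYLIB_IMPLEMENTATION
// Heavy definitions, compiled only where the macro is defined:
int mylib_add(int a, int b) { return a + b; }
#endif
```

Exactly one .cpp in the project does `#define MYLIB_IMPLEMENTATION` before the include; every other file just includes the declarations.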
Does it really blow up compile time? Per-translation-unit overhead and optimization are usually where compile time is spent. Having fewer but fatter translation units helps compile times tremendously, since in LLVM most of the time per translation unit is spent in stages after the source itself is compiled.
Boost is really the only example I can think of where small utility comes at the expense of huge compile time increases.
> I should clarify. Being able to add headers to a project in C++ is easy but adding translation units is not (usually).
Where do you see a difference?
> This encourages header-only libraries even when they are not really appropriate, increasing compilation times etc.
This assertion makes no sense. Headers only declare interfaces, and you only require header-only libraries if you're deep in template and template-metaprogramming land. Even so, it's quite trivial to package and distribute those libraries just like any other library.
Being realistic and admitting that C++ has tooling and packaging issues isn't being ignorant. Quite the opposite.
The fact that Header Only is one of the five bolded features of this project indicates that "modern C++ development" includes basically giving up on many desirable features of software projects. It's probably fair enough, but I wouldn't necessarily hold it up over other less "modern" designs like providing a C ABI, which you can still do with C++ using C++17, following basically all of CppCoreGuidelines in the implementation, using state of the art tooling, etc.
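Sketching what that looks like (hypothetical names throughout): the public surface is a handful of extern "C" functions over an opaque handle, while the implementation behind them is free to use modern C++.

```cpp
// widget.h -- the stable, narrow C interface
extern "C" {
    typedef struct widget widget;   // opaque handle; layout never exposed
    widget* widget_create(void);
    int     widget_poke(widget* w, int level);
    void    widget_destroy(widget* w);
}

// widget.cpp -- free to use any C++17 internally
#include <memory>

struct widget {
    std::unique_ptr<int> state = std::make_unique<int>(0);
};

extern "C" widget* widget_create(void) { return new widget; }
extern "C" int widget_poke(widget* w, int level) { return *w->state += level; }
extern "C" void widget_destroy(widget* w) { delete w; }
```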
> giving up on many desirable features of software projects
Some C++ libraries are header only because they were designed for the highest possible performance.
A classic example is std::sort versus qsort; C++ usually wins because the comparator gets inlined.
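The difference is easy to see side by side: qsort goes through a function pointer for every comparison, while std::sort is instantiated for the exact comparator type.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// qsort calls this through a function pointer on every comparison,
// which the optimizer usually cannot inline across the call:
static int cmp_int(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

void sort_both_ways(std::vector<int>& v) {
    std::qsort(v.data(), v.size(), sizeof(int), cmp_int);

    // std::sort is instantiated for this lambda's type, so the
    // comparison inlines down into the sorting loop itself:
    std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
}
```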
My favorite example is Eigen, which uses template metaprogramming to save RAM traffic. When you write x=a+b+c for matrices or vectors, the library doesn't compute an intermediate a+b vector. The C++ expression returns a placeholder object of a weird type, and the CPU computes a+b+c in a single loop over the operands, reading a, b, c and writing to x.
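A toy sketch of the idea (nothing like Eigen's real types): operator+ returns a lightweight placeholder, and assignment evaluates the whole expression element by element in one pass.

```cpp
#include <cstddef>
#include <vector>

template <class L, class R>
struct Sum;  // forward declaration for Vec's operator+ below

struct Vec {
    std::vector<double> data;

    double operator[](std::size_t i) const { return data[i]; }

    template <class R>
    Sum<Vec, R> operator+(const R& r) const;

    // Assignment walks the expression tree once per element: one loop
    // reads a, b, c and writes x, with no intermediate vectors.
    template <class Expr>
    Vec& operator=(const Expr& e) {
        for (std::size_t i = 0; i < data.size(); ++i) data[i] = e[i];
        return *this;
    }
};

template <class L, class R>
struct Sum {  // the "weird placeholder type": just two references
    const L& l;
    const R& r;

    double operator[](std::size_t i) const { return l[i] + r[i]; }

    template <class R2>
    Sum<Sum, R2> operator+(const R2& r2) const { return {*this, r2}; }
};

template <class R>
Sum<Vec, R> Vec::operator+(const R& r) const { return {*this, r}; }

// x = a + b + c;  // builds Sum<Sum<Vec,Vec>,Vec> on the stack, then
//                 // Vec::operator= runs the one and only loop
```

(In this toy, x must already have the right size; a real library tracks dimensions in the expression types.)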
Features are header only for performance. Entire libraries are rarely header only for that reason. Typically it's more for ease of consumption, but there are other major drawbacks to that approach, like having the widest possible API and ABI, making long term maintenance harder.
For instance, on some platforms system headers define preprocessor macros called "major" and "minor". If your code is in source files, you have strong control over whether you care. If your code is all in headers, you don't, since all your code is affected by unrelated textual inclusions.
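Concretely, on glibc (to pick one example):

```cpp
// <sys/sysmacros.h> (historically pulled in via <sys/types.h>) defines
// function-like macros named major() and minor():
#include <sys/sysmacros.h>

struct Version {
    int major;  // OK: the macros only expand when followed by '('
    int minor;
    // int major() const;   // would macro-expand and fail to compile
};
```

In a .cpp you can reorder includes or #undef the macros once; in a header-only library, every consumer's include set becomes your problem.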
> having the widest possible API and ABI, making long term maintenance harder.
For cases where it matters, C++ can do that just fine.
Look at Direct3D, DirectWrite, Media Foundation. They're written in OO C++, and they use IUnknown ABI from COM. They're not COM objects, the ABI is the only feature they took from there.
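The shape of that ABI is simple enough to sketch (a hypothetical interface, not an actual DirectX one): a pure-virtual class whose vtable layout is the entire binary contract, handed out through a C factory function.

```cpp
#include <cstdint>

// The vtable layout of this interface is the whole ABI; the
// implementation behind it can change freely between releases.
struct IRenderer {
    virtual uint32_t AddRef()  = 0;   // IUnknown-style lifetime management
    virtual uint32_t Release() = 0;
    virtual int32_t  Draw(const void* scene) = 0;
};

// Callers obtain the object through a plain C entry point and only
// ever see a pointer; no class layout or templates cross the boundary.
extern "C" IRenderer* CreateRenderer();
```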
Apparently, C++ library developers don't care enough.
> you basically give up on a stable narrow interface when shipping header only.
I agree.
But in some cases, you also gain performance not achievable with stable interfaces.
It’s about tradeoffs.
I think modern C++ needs stable ABI and well-isolated modules less than it did 10-20 years ago. 20 years ago people wrote everything in C++. Now C++ lost the majority of the market to higher level and safer languages, it’s only used for components where performance really matters. For such components, runtime overhead of stable ABIs sometimes matters.
> Taking advantage of so many extra cores by a single process is not all that easy or common.
Browsers are both multiprocess and multithreaded. The ability to run a few webapps without having your system drag to a halt is a feature that's important to essentially everyone.
> The ability to run a few webapps without having your system drag to a halt is a feature that's important to essentially everyone.
Webapps such as? I have a 4-core CPU from 2014 (i7-4790K) and I can't recall that ever happening to me. Primarily because any modern OS will throttle crazy runaway threads to ensure UI responsiveness, so the system doesn't 'drag to a halt' as you claim.
Also, honestly... how many people are looking at CPU benchmarks to run browsers better? I'd wager a twenty that it's mostly gaming nerds who are obsessed with CPU benchmarks. Then... it's also a question of knowing your audience. I'm sure they have a better idea of who their audience is than you or I do.
Gmail is the #1 culprit for completely locking some of my lesser systems (i5-7200U). I also have the fun thing sometimes where opening a large PDF or Google Doc can take an unreasonable amount of resources from the rest of the system for processing. I had Facebook do it constantly for a period where I made the bad decision to use it for a while. I find a good bit of information from benchmarks for non-desktop processors/whole system configurations, especially when a new feature set/generation swings the difference between an i3 and an i5 for instance.
"Primarily because any modern OS will throttle crazy runaway threads to ensure UI responsiveness" seems like you have a particular OS in mind, and I would be interested in hearing more. I do not observe that behaviour on Debian 9 (and other Linux distros), Mac OS X, and Windows 7/8. I regularly bring any of those to UI stuttering/freeze from various workloads. Webapps only really breaks the lesser ones singlehandedly though (most of my other systems are 4+ core with 32GB+ RAM).
Well... I don't know what to say; I guess you should report those bugs to the appropriate vendors then. I don't believe a memory leak means we tell people to buy more memory :)
>"Primarily because any modern OS will throttle crazy runaway threads to ensure UI responsiveness" seems like you have a particular OS in mind, and I would be interested in hearing more.
Sure. You should read up on thread scheduling and how an OS scheduler works. I don't think I can explain that in a comment, and I'd do a poor job anyway.
> I do not observe that behaviour on Debian 9 (and other Linux distros), Mac OS X, and Windows 7/8. I regularly bring any of those to UI stuttering/freeze from various workloads. Webapps only really break the lesser ones single-handedly, though (most of my other systems are 4+ core with 32GB+ RAM).
CPU utilization is an extremely poor indicator of whether or not your system will feel stuttery or frozen. What makes your system freeze is having more tasks to complete than can reasonably be scheduled in a time frame so as to appear continuous/realtime. Most OS schedulers that I am aware of do not give special treatment to UI processes; since most modern UIs require many programs to respond in a timely fashion to provide a user interface, they do not assign priority unless specifically instructed. Most provide (relatively) even priority, with high-priority interrupts like Windows' Ctrl-Alt-Delete and higher priority for kernel threads.
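For what it's worth, on Linux "specifically instructed" looks something like this; a background thread yields to everything else only if the program opts in itself:

```cpp
#include <pthread.h>
#include <sched.h>
#include <thread>

// Linux-specific sketch: the scheduler won't deprioritize this worker
// on its own; the program has to request SCHED_IDLE explicitly.
void start_background_worker() {
    std::thread worker([] { /* heavy background work */ });
    sched_param param{};  // SCHED_IDLE requires sched_priority == 0
    pthread_setschedparam(worker.native_handle(), SCHED_IDLE, &param);
    worker.detach();
}
```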
In Linux, actual scenarios for stuttering/freezing are generally represented by a load average > 1; stuttering for me generally starts to happen when I get above 3, and I assure you no Linux system will run without observable stutter once you get into the 20+ load-average range.
In earnest, I have no idea why you tried to use a JavaScript-based benchmark to support your point. Even while observing it, the number of nonvoluntary context switches barely registered above baseline, probably because of the variety of internal threading strategies browsers use. I could not see how to change that JavaScript benchmark to make it provide an interesting load on my system, so I'll just leave it at that.
I have no idea what happens under the hood in Linux, but I have never observed my Linux desktop 'crawling to a halt' because of some silly webapp. This is normal, expected behavior. I know for a fact that ensuring responsiveness of GUI apps has been in the NT scheduler for decades. Look up dynamic priority groups. There is really no point rehashing and arguing over basic design issues that anyone can look up, and as such this thread is not really productive for either of us. Hope you have a nice day, goodbye!
Smart pointers are strangely absent from this comment, even though handling raw owning pointers ceased to make sense and became a pungent code smell with the arrival of C++11.
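What that looks like in practice since C++11: ownership lives in the type, and raw pointers survive only as non-owning observers.

```cpp
#include <memory>

struct Widget { int value = 0; };

void demo() {
    auto owned  = std::make_unique<Widget>();  // sole owner
    auto shared = std::make_shared<Widget>();  // reference-counted owner

    Widget* observer = owned.get();  // raw pointer = non-owning view
    observer->value = 42;
}  // no delete anywhere; both objects are released automatically here
```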
> these days it's even considered "disruptive" if you offer a couple of scooters for rent. Whom are you disrupting exactly?
Public transportation, and how people move around a city?
Sure, it's scooters/bikes/cars/helicopters for rent. But being ubiquitous and affordable makes it usable, and in some cases even preferable to established solutions. That changes a lot of stuff. Heck, Airbnb is just a middleman in rental deals, right?
> Perhaps there is a reason for the industry not to like him since he's competition
You're commenting on a man who accused a cave rescuer of being a pedophile after the rescuer criticised Mr. Musk's brain-dead proposal to rescue the stranded kids.
The problem does not lie in straw men such as this silly idea that the industry does not like him.
Comparatively, service-per-VM approaches are very wasteful and inefficient, more so if a container orchestration system is used to manage the deployment. It makes no sense to fine-tune VMs just to match the resource requirements of a single process, particularly as those requirements change over time, and as that approach leaves you with a collection of custom-tailored VMs that are needlessly cumbersome to manage and scale.
Meanwhile, containers enable you to run multiple services on the same VM, scale them horizontally as needed within the same predetermined amount of resources, use blue/green deployments to spread your services throughout your VMs, and achieve all of this automatically and effortlessly.