Not to take away from your point, but in my experience the vast majority of C++ build pipelines, even at major companies, can still be improved. Few people enjoy 'improving the build'; it often touches everything and requires discipline to keep working. Most of the projects I've worked on have been larger than Chrome. I've seen the compile time for BioShock Infinite go from 2 hours down to 15 minutes with serious work on header use, precompiled headers, and all the other tricks people use. Epic's build system is a pretty good example. There is even an older book, Large-Scale C++ Software Design, that is specifically about this point.
Starting with a full build that initially takes hours and shrinking it to under 15-20 minutes seems pretty par for the course for truly large C++ projects. You don't get a fast build process for free, but if the team makes it a priority, a lot can be done.
EDIT: The times mentioned were for a full build. You rarely do a full build; incremental builds should be the majority. Places that don't make incremental builds 100% reliable drive me crazy and waste so much developer time. This is common, but it's a lame excuse. Just do the work and fix it.
I worked on exactly this problem for Chrome! I agree with all your major points -- in particular, optimizing incremental builds is the most important thing for developer sanity.
Doom 3 actually used SCons across all the OSes (~2004). At the time, it was so nice to have a Python build system. I sort of hoped it was the future, but it died off as it failed to scale. I've seen a few home-brewed Python build systems work well, but typically we're back to CMake/Make.
Meson is strongly typed; it goes beyond just having a notion of "paths" and tracks what kind of object a path points to and what kind of resource a string names. This is invaluable, because it means you get feedback when you accidentally pass an object file instead of a library name, or any number of other confusions.
Personally, this meant the error messages I got were helpful enough that my first Meson-built project was working within half an hour of my deciding to port it over, despite using several system libraries and doing compile-time code generation.
Meson's language is not Turing-complete, so it's easy to analyze for errors. Unlike CMake and autotools, Meson's language looks like a real (pythonish) programming language, and it isn't string-oriented; dances of escaping, substitution, and unescaping are uncommon.
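As a small illustration of the typed-objects point (a made-up `meson.build`, not from any real project):

```meson
project('demo', 'c')

# dependency() returns a Dependency object; static_library() returns a
# library object. Neither is a plain string, so they can't be confused.
zdep = dependency('zlib')
util = static_library('util', 'util.c', dependencies : zdep)

executable('demo', 'main.c',
           link_with : util,       # expects a library object
           dependencies : zdep)    # expects a Dependency object
```

Passing `util` where `dependencies` expects a Dependency (or a bare string where a library object is expected) fails at configure time with a clear type error, instead of surfacing later as a cryptic linker failure.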
Compared to autotools or hand-rolled Makefiles, CMake is a step in the right direction; meson is a leap.
How happy have you been with Meson in complicated projects with multiple directories, especially where things are complex and different options are used in different places? Make, in spite of all its craziness, would be a good tool if it had any sane kind of support for this.
CMake tries hard to do better, but then introduces its own layers of craziness. So it's fine as long as I am not doing anything unusual, but as soon as I need to understand what is going on, I find a dizzying array of barely working moving parts beneath me.
I would expect kernels to be quite small (file count, line count) compared to major applications like Unreal games and Civilization 5. I've never worked on Chrome, but I can safely say the amount of source code in a few Unreal games and Civilization 5 dwarfs the drivers and OS code I've worked on. Take Unreal, then add a team of developers building onto it for multiple years through multiple releases. Then add all the middleware (Havok, audio engines, NaturalMotion).
An OS is much larger than its kernel; I'd guess all the driver code exceeds the actual kernel.
People always think their code base is large, but having built most of the Call of Duty titles and many Unreal games, all the OS code I've worked on is trivial in size by comparison. There is probably something bigger, but games seem bigger than many major apps in my experience.
For reference, the kernel has ~15 million LoC, and according to a not exactly reliable or verifiable infographic on Reddit, BioShock Infinite contains 631 miles of code, which would be between 3 and 10 million LoC.
A lot of time has been spent on optimizing Chrome's build:
- Ninja build system will perfectly parallelize the build without overloading resources (modulo this OS bug)
- Meta-build system was recently completely replaced (gyp → gn) to improve builds
- Lots of work on clang-cl to allow compiling Chrome for Windows without using Microsoft's compiler
- Distributed build system to further increase parallelism
So, lots of work has been done to deal with the build times, and probably not a lot of low-hanging fruit remains. But still more work is being done: support is being added for 'jumbo' builds (aka unity builds, where multiple translation units are #included into one), which is helping a bit with compile and link times.
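To make the 'jumbo' idea concrete, here is a hedged sketch of what such a generator does (invented names, not Chrome's actual gn code):

```python
def make_jumbo_files(sources, batch_size=50):
    """Group source files into 'jumbo'/unity translation units.

    Each returned string is the contents of one generated .cc file that
    simply #includes a batch of real sources, so common headers are
    parsed once per batch instead of once per source file.
    """
    jumbos = []
    for i in range(0, len(sources), batch_size):
        batch = sources[i:i + batch_size]
        jumbos.append("\n".join(f'#include "{src}"' for src in batch))
    return jumbos
```

The trade-off is exactly the incremental-build tension mentioned above: fewer, bigger translation units compile and link faster from clean, but touching any one member source forces the whole batch to recompile.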
13% CPU usage at the lowest c-state is also very different from 13% at an elevated c-state. I've recently spent a lot of time analyzing c-states/p-states and the power-management modes of the GPU. After learning more about the complexity behind the clocks, bus speeds, etc. underlying each state, whenever I hear someone quote a utilization number for a minor workload, I want to know at what power state it was measured.
Not to take away from the author's point, just an aside that utilization numbers can be a lot more complicated when there are dozens of energy states, and the utilization quoted might be utilization at a particular state rather than utilization at maximum power.
You mean p-states in your first sentence, right? Anything but c0 represents different levels of 'retiring no instructions' (totally idle). The rest of your comment seems accurate though.
You're correct that I just meant a lower power state.
The author mentioned the blinking cursor, which reminded me of graphics issues. A more efficient CPU state can actually slow an app down due to CPU-GPU sync points: a CPU blocked in an energy-efficient state wakes more slowly on GPU-done notifications, so FPS drops. So both c-states and p-states can affect performance. The general point was just that utilization may not be utilization at max power.
I've worked on problems where utilization was 15% at lower power and it was a problem; compared against other workloads, that same work would be < 1% at max power.
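A back-of-envelope way to state that point in code (my own sketch; real p-state behavior with turbo, voltage scaling, and memory-bound work is more complicated):

```python
def work_normalized_utilization(busy_fraction, cur_freq_mhz, max_freq_mhz):
    """Scale a utilization sample by the frequency it was measured at.

    Rough approximation only: 13% busy at 800 MHz represents about a
    quarter of the work of 13% busy at 3200 MHz, so comparing raw
    utilization numbers across power states is misleading.
    """
    return busy_fraction * (cur_freq_mhz / max_freq_mhz)
```

Under this approximation, the 15%-at-low-power workload above really is a small fraction of a percent of the machine's peak capacity, which is why quoting utilization without the power state is so easy to misread.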
That's actually even worse: it will prevent the CPU from entering deep sleep states and saving more power.
A while ago I was trying to minimize the power usage of my laptop (yeah, slow day) to maximize my battery time.
Armed with powertop, I removed every undesired process until I was left with an otherwise idle Emacs (less than 1% CPU) as the last major source of wakeups. Sure enough, disabling the blinking cursor brought that down to nothing, allowing the CPU to stay in deep sleep states much longer.
Hey Ben, those were fun days! For others: Metrowerks hit the wall that many Mac developers hit and continue to hit. The Mac market simply isn't big enough for a third-party developer, so they tried other platforms, and as Metrowerks pursued many embedded markets, the Mac product suffered. Also, since only Metrowerks had a PowerPC compiler, Apple was very dependent on Metrowerks even as Metrowerks was heading toward Chapter 11. The tension between the execs ('give me more money/support or else') created a 'complicated' relationship.
Motorola rescued Metrowerks from the brink of Chapter 11 with the Apple/Metrowerks relationship already tense. Motorola also gave many long-time employees an excuse to leave (pre-dotcom crash). PalmOS took so many people that lawyers were involved at the HR level between Motorola and PalmOS.
In 1995, we watched these 1979 Smalltalk videos in my college programming-languages course, which focused on Scheme.
I vividly remember the reaction of my fellow students. Given the mockery and jokes, you'd think they were watching a bad sci-fi movie. Most discounted everything they saw: 'real men and real programmers use C'. I remember being so disheartened that it seemed we'd evolved so little in tools and languages from 1979 to 1995.
At the time, everything was Unix and C programming (DEC Alphas were just being installed on campus; Windows 95 had just been released). There were a lot of reasons Unix/C succeeded; there is a great classic paper, Richard Gabriel's 'Worse is Better', about why C beat Lisp, and I agree with the author.
However, what always troubled me is how my fellow students completely ignored any potential lessons from those videos. In many ways, those early Smalltalk programs were far more impressive than anything they had created, but they just wrote them off.
At GDC 2014, a post-mortem was presented on the classic game Myst, which was written entirely in HyperCard.
The prejudice of programmers is one of the biggest hindrances to technological advancement in computing, AFAIC.
Think about it: currently, functional programming is, finally, getting some well deserved recognition in the wider programming world.
Yet almost everything it presents has been present in programming for as much as 45 years. The original paper on Hindley's type system was published in 1969. Milner's theory went to print in 1978. Scheme first appeared in 1975 and was already building off functional ideas that had been spawned by earlier Lisps. Guy Steele designed an actual Scheme CPU in hardware in 1979.
And yet even today, a non-trivial number of programmers react with absolute horror at the idea of a Lisp (usually based solely on ignorant trivialities like the paren syntax), more or less exactly as your C-programming classmates did in 1995. And while FP is starting to make major inroads in some spheres, others dismiss the whole field as wank, and Java and C remain kings unlikely to be unseated for another decade at a minimum, if ever.
We remain utterly bound to one model of hardware, one model of programming, and, largely, only a couple of models of operating system, after decades of development, because so many programmers react with horror at anything they're unfamiliar with or that deviates from the perceived norm, be it in features, syntax, or focus.
And God forbid you make anything that might actually be easy for non-programmers to learn. It will be more or less met with instant and persistent scorn, and its users derided and outcast, simply because they didn't use a 'Real Programming Language' like C. Go ask a BASIC coder what life has been like for the last 40 years, or a game dev who worked in AGS or GameMaker prior to the last half decade or so. Hell, I have a friend who still sneers at visual layout designers.
The divide described in the article is very much culturally enforced as much as economically.
> Think about it: currently, functional programming is, finally, getting some well deserved recognition in the wider programming world.
People assume that functional wasn't used because of programmer prejudice.
Why don't people assume that functional wasn't used because there were good reasons not to use it?
Imperative programming comes from a world where CPU cycles and memory are scarce. Gee, once CPU and memory became abundant and free, people started using functional programming. Go figure.
> The divide described in the article is very much culturally enforced as much as economically.
Statements of this kind trouble me every time I see them, last time yesterday in the discussion about Greece.
I guess Marx's ideas aren't very popular over here. But implying that economy and culture need to be mentioned separately seems at least a little naive.
More fundamentally, you're ignoring the huge difference in power between consumers (in this case the programmers) and the people that create the tools and take them to market.
Actually, what I mean by that is rather the difference in power between the guys doing the coding, and the guys making the business decisions.
Both tend to enforce that divide, but for very different reasons; the coders for cliquish reasons as I described, and the business guys like Apple for the reasons in the original link.
The coders can't really enforce anything. We have opinions and fall into groupthink, indeed. But the mere fact that there are different tools is obviously caused by the people and organizations that create them.
Bologna. Plenty of programmers are constantly searching for the new new thing, obviously (ahem, HN). It's mostly the maintenance types who are prejudiced, and they aren't just programmers.
It's not just a language issue; there is a divide among platforms too. You can use the same language and still find people who will talk down to you or ignore your points because of your platform or language choice.
> And God forbid you make anything that might actually be easy for non-programmers to learn. It will be more or less met with instant and persistent scorn, and its users derided and outcast, simply because they didn't use a 'Real Programming Language' like C.
Prejudice doesn't seem to have anything to do with it. Functional programmers think differently, and what's obvious to the Functionals isn't to the Statefuls. And the Statefuls are currently most of the world. I've flip-flopped myself, because while I love the elegance of being a Functional, being a Stateful is just so much more productive. There are a few reasons for this. If I want to make a game, there's no good functional framework. If I want to write a script to get something done, like download a webpage, my go-to language is Python, because I know for a fact that its libraries work and that its documentation is almost always stellar. Contrast that with Lisp, where you can spend at least a day just getting the environment set up in a way that ASDF doesn't hate, especially on Windows. (Yes, if you want to make games, Windows needs to support your dev environment.) My info about ASDF is a couple years out of date, because to be honest I haven't felt inclined to look into it again after some bad experiences.
Haskell could be wonderful. Never tried it. Will someday. Until then, I'd love some sort of competition where a Haskell programmer and myself are given a task, like "write a script to X," where X is some real-world relevant task and not an academic puzzle, and see who finishes it first. It would be illuminating, since I'd give myself about a 30% chance of finishing first, but it would reveal what I'm lacking.
Arc had potential. It really did. Everyone just gave up on it, and it never attracted the kind of heroic programmers that Clojure did.
So the wildcard seems to be Clojure. It's a decent mix of performant, practical, and documented.
I'm out of time to pursue this comment further, but the main point is just that FP's problems have very little to do with societal acceptance or scorn. If you're running into that, you're probably running with a bad crowd anyway. It's mostly that imperative languages are popular, so network effects mean they'll just keep getting better. If FP wants to chip away at that, it'll need to start off better and stay better. "Better" is many things, but it includes performance, cross-platform support (yes, Windows is necessary), documentation, and practicality: the ability to quickly accomplish random tasks without a huge prior time investment. Python seems to be the best at this so far.
> Haskell could be wonderful. Never tried it. Will someday. Until then, I'd love some sort of competition where a Haskell programmer and myself are given a task, like "write a script to X," where X is some real-world relevant task and not an academic puzzle, and see who finishes it first. It would be illuminating, since I'd give myself about a 30% chance of finishing first, but it would reveal what I'm lacking.
I think one of Haskell's biggest marketing problems is that its strong points (strong static types + separation of side effects) aren't all that important in scripts (or any program that's small enough to fit in someone's head in its entirety), which makes it difficult to convince people of its merits in reasonably-sized examples.
What Haskell gives you are good, solid abstraction boundaries that you cannot accidentally break, and the ability to refactor code with a high degree of confidence that it's still going to work fine afterwards.
Neither of those is particularly helpful for any program that you might write in a competition, but both are incredibly important in day-to-day software development.
My apologies, that portion of my point was meant to be independent of the FP comparison. It is absolutely true that, save for the occasional grudging exception, the 'easy languages' have largely been scorned and shunned throughout programming history. Hell, I've been guilty of it myself where HyperTalk is concerned despite loving the concept.
As for "getting work done" in Lisps/FP, I earnestly recommend checking out Racket. The developers have said that they've aimed to create 'the Python of Lisps' and by and large they've succeeded at exactly that. The documentation is thorough, the functional tools live alongside OOP and imperative ones quite nicely, the standard library is enormous, and DrRacket makes for a very good 'just open up the damn editor and start writing' tool.
Really, most of the FP languages are much more multi-paradigm friendly than Haskell, to the extent that I wouldn't even consider CL a functional-first language at all. CL's problems there are more a lifetime of neglect + Emacs loyalty, though Allegro and LispWorks offer more 'everyone else' friendly options. F# is fantastic as well, and integrates nicely with the rest of .NET and allows for a mixed paradigm while still getting all the functional tools and the power of type inference on top.
No no, I'd never claim something as foolish as FP is inherently less productive. It's the opposite in my experience. Once you have a bunch of libraries and utility functions that you know how to use, you'll wipe the floor with any stateful programmer. The hard part is getting to that point. I'll elaborate more in an edit when I'm home in like 30 minutes.
I'm short on time, but my main point was just that if you're already a stateful programmer, it's difficult to become productive when switching to functional programming due to a combination of factors. FP isn't inherently more difficult than stateful programming. It's simply that stateful programmers have already spent the time to become productive, whereas switching to FP requires another time investment in terms of learning a new mindset, getting a programming environment set up, learning how to exploit the environment (how to be "fiercely effective"), learning which libraries to use and how to find them and set them up, and most importantly: deployment. Deploying native apps in stateful languages has seemed easier for some reason, whether it's python or C or whatever else. I tried to use lisp for gamedev, but it became a nightmare to figure out how to actually deploy everything including libraries along with all dependencies. The end result of deployment needs to be a program that's launchable by double clicking on an icon, which seemed difficult. (Again, my info is out of date, so maybe things are much better now.)
So it's just a combination of mindset change plus a different "context" for programmers to get used to. That, along with deployment problems and sometimes lack of documentation, leads to a lack of incentive for anyone to make the switch. Since stateful is more popular, it's almost always better for productivity to start and stay a stateful programmer. However, for personal skill level, everyone needs to become a functional programmer for at least several months and try to develop production apps with it. Who knows, you'll probably get much further than I did, and figure out a way to convince all your friends to make the switch too.
Very recently, computers got to the point where we can afford to write slow code; that is why the higher-level people are coming out of the woodwork.
Yeah, everything pretty much fell apart in 1958 with that stinking McCarthy. Since then, trying to write any performant code has meant dealing with an inane chatter of closures and inheritance and parametric polymorphism. Maybe, if we're lucky, computers will get slow again and performance will matter.
Coincidentally, I recently spoke at length with a grad student who develops software for the RoboCup challenge, in which autonomous robots compete. In this particular league, Nao robots are supplied to each team as the standard hardware, which puts the performance emphasis on the software that runs on the robot.
So I asked the developer: does he use the famous ROS software for running the robot? He said no way. Because the robot has only an onboard Intel Atom of relatively low performance, all the complex multi-threaded behavior had to be written in low-level C; there was no space nor execution time for anything more.
I'll bet you a nickel that grad student didn't learn about complex multi-threaded behavior in low-level C. His challenge was to take a high-level concept and express it in a resource-constrained environment.
Of course there is a need for freestanding C. But for every one of him there are a hundred developers using C++ or Java or JavaScript. They may not appreciate the power of working in such high-level languages, but the vast majority of developers work in HLLs.
Sorry, it's just bemusing thinking of 19-year-old college students bragging about being real men.
But this is the problem with our profession, it's baked in at a deep level that old == bad. So every new generation reinvents the wheel, worse than the last. The truth is all the big problems in software were solved in the 1970s, if only we would pay attention. The only actual progress in computing is that the hardware guys make everything faster.
With regard to only this point: probably half, if not more, of the top 15 grossing games on the Mac App Store games page today are either Carbon apps or at minimum require the Carbon framework.
And a bunch of those are using Wine, which requires mapping code at the zero page, so they're compiled with a special load command that allows them to map the zero page.
On Yosemite, 64-bit binaries aren't allowed to do this.
Whoa, seriously? I didn't realize Wine for OS X was real enough to be usable for anything, let alone be admitted to the app store and make it to the top. That's pretty awesome.
Although every time I go "I didn't realize X on Y worked", it seems like games are the rationale, and not very surprisingly, since they exercise a relatively small part of the API surface apart from OpenGL. (Mono, some HTML 5 things as mobile apps, and Humble Bundle's asm.js collection all come to mind.)
Can OS X take a cue from iOS and require a special Apple-signed entitlement to do this (unless root overrides it in a config file), or is it not worth the trouble?
Also, does Wine actually require this in general? My understanding was that Wine needed this for legacy Windows apps that themselves map the zero page, but recent-ish apps designed for XP or later shouldn't do that, right?
Wine on OS X has been stable enough for gaming for years! The Sims 3 used it, as does EVE Online, and quite a few other major titles, mostly from EA. They all use TransGaming's Cider fork though; I'm not sure whether TransGaming ever contributes back to the Wine community these days.
CodeWeavers' CrossOver is quite good as well. It can play Skyrim on my 2012 MBP with decent quality. I mostly use it for older games though, RollerCoaster Tycoon and the like.
While the theory sounds good, the Steam Box never evolved into a list of supported configs. Valve stated that Nvidia, AMD, and Intel would all be supported, yet displayed great hesitancy in providing any specifics. Rich's earlier post on the state of graphics drivers remains mostly valid.
In addition, the release of a few Linux games, such as Civ 5 and Borderlands, has provided those companies with sales figures on whether the investment was worthwhile.
Many companies considered Linux primarily because the Steam Box made it an interesting market. Some companies were already taking a wait-and-see attitude, and Valve's slow play of the Steam Box only makes more people do the same.
The OpenGL mess of "4.4 is good enough", while both AMD and Intel failed to deliver good drivers, is further muddled by OpenGL Next: basically, an admission that the OpenGL API needs a rewrite and will become more Mantle/Metal/DX12-like. So will Intel and AMD deliver good-enough 4.4 drivers, or do we all wait for OpenGL Next? Don't forget that AMD just did a 7% layoff; resources are limited.
In Rich's source article, he questions the future of SteamOS, and the partners involved in some of the ports are questioning it too. Some people have started using the past tense. I have no idea what Valve is planning, but my guess is that it's a long-term play with limited resources, and that some of the interest inside Valve has waned.
Remember, SteamOS was born when Win8 was announced and Valve was scared of where MS was going. Steam on Windows now seems pretty safe; the pressure is off.
> Remember, SteamOS was born when Win8 was announced and Valve was scared of where MS was going. Steam on Windows now seems pretty safe; the pressure is off.
It may be off, but Microsoft may become more of a games distributor in the future through its own app store, which could eat into some of Valve's revenue because the MS app store is included by default while Steam is not.
I nod my head in agreement with most of these 'OpenGL is broken' articles. I've worked on the OpenGL versions of Call of Duty, Civilization, and more for the Mac. I think Timothy misses the real point on driver quality.