I'd imagine I'm pretty similar to the grandparent poster in this respect. If I find something interesting, I save it, even if it's seemingly arbitrary or not immediately useful. With powerful search and tagging, it's nice to run a query against (or update) a personal archive to refresh my memory, browse when bored, catalogue sources, wishfully think about future study, notice patterns in my interests over time, do meta-analysis, etc.
More cynically, I've noticed that it's a relatively benign form of hoarding: I get the quick dopamine rush of "oh, this is interesting, now I have it" without, say, cluttering up my living space with trinkets. With an abundance of storage that is essentially invisible to me when I don't want to think about it, I can keep what I want, when I want, often while fully aware that I'll never look at most items individually again. I think of it as a kludgey form of external memory / internet butterfly collecting. The only downsides I've thought of are: time frittered away revisiting archives, the (small) transaction cost of tagging and filing items into the archive, and the externalized costs of maintaining the hardware.
Imagine a general with a finite supply of artillery shells and a howitzer on the fritz (i.e. the intended trajectories of the shots are somewhat off the mark today). Given the choice of where to direct the artillery, the general may choose to concentrate fire on the narrow beachhead landing rather than on a widely scattered formation of units approaching across a vast plain.
The artillery that lands on the plain may strike an advancing unit, or it may fall (possibly harmlessly) between a set of advancing units. The artillery that lands on the narrow beachhead is more likely to hit a unit.
This analogy is far from perfect: sometimes mutations are good, which is one primary driver of evolution. Non-coding regions and/or "baggage to be refactored" (paraphrased great-great-gp comment) in DNA (the regions of the plain/beach not occupied by an advancing unit) can absorb "errors". Also, there are other types of mutations (insertions, deletions, ...), aside from the single point mutations that this analogy was attempting to help convey.
The point is: it's like concentrating a lot of important things into a few points of failure. If you increase the "genetic surface area", you lower the chance of any one important thing getting hit.
On evolutionary scales, viable DNA has been selected with a lot of non-coding (and sometimes useful) regions; we know that if we pare that down, we become more susceptible to fatal mutations in coding regions (e.g. a region that codes for a vital protein).
I think that's not the best analogy. You're imagining a constant amount of mutations (artillery shells) spreading over the size of the genome (the beach). It doesn't quite work like that, which is why mutation rates are usually measured in errors per base pair per generation.
In fact, copying DNA is more like downloading a large file over an unreliable network. There's a certain chance that each individual bit is flipped and the file becomes useless. You can reduce that chance by sending it multiple times, or introducing checksums, both of which add redundant data. But simply adding an extra TB of junk bytes to your download won't help preserve the integrity of the original file.
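For what it's worth, here's a minimal sketch of that per-base-pair view (all numbers are made up for illustration): under this model, padding the genome with junk increases the total number of mutations, but it doesn't change the expected number of hits to the bases that actually matter.

    # Toy model, illustrative numbers only: mutations arrive independently
    # per base pair per generation, so adding junk does not reduce the
    # expected number of hits to the coding region.
    mu = 1e-8            # assumed per-base-pair mutation rate per generation
    coding = 30_000_000  # hypothetical number of bases that matter

    for junk in (0, 1_000_000_000):
        genome = coding + junk
        expected_coding_hits = coding * mu  # independent of junk size
        print(f"genome {genome:>13,}: expected coding hits/generation = {expected_coding_hits:.2f}")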
Does the number of mutations in a genome increase with the size of the genome, or with the length of time the genome is "in use"? Let's assume that the mutations are evenly distributed throughout the genome. If the genetic mutation count depends on the size of the genome, then it fits your "unreliable transmission" model.
However, if genetic mutation count is time-dependent but totally independent of the size of the genome, then having a larger genome actually does protect you from individual mutations, and it would do so exactly using the mechanisms described previously.
Think of two genomes, one large and one small, both existing through the same span of time. Both will accumulate a similar number of mutations from time-dependent mutagenic processes like radiation exposure.
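A minimal sketch of that assumption (illustrative numbers only): if a fixed number of time-driven hits lands uniformly at random on the genome, then extra non-coding sequence really does dilute the chance that any given hit falls in the coding region.

    import random

    # Toy model, illustrative numbers only: a fixed number of hits per time
    # interval (set by elapsed time, not genome size) lands uniformly on the
    # genome, so extra non-coding sequence dilutes the coding-region hits.
    random.seed(0)
    coding = 1_000           # hypothetical coding bases
    hits_per_interval = 10   # fixed by time, not by genome size
    trials = 10_000

    for junk in (0, 9_000):
        genome = coding + junk
        coding_hits = sum(
            1
            for _ in range(trials)
            for _ in range(hits_per_interval)
            if random.randrange(genome) < coding
        )
        print(f"genome {genome:>6}: avg coding hits per interval = {coding_hits / trials:.2f}")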
I was more trying to explain the intuition of why single nucleotide mutations (resulting in non-viability or undesired effects) are probably more likely in a strand of DNA with regions removed versus a strand of DNA left as-is.
Basically, I was trying to give an example of the grandparent's point (i.e. fewer nucleotides to be flipped -> more likely that an important one will be). I agree that it was a poorly executed analogy. The metaphor I was trying to make is that on the 'vast field' a random single point mutation will probably land on an individually unimportant nucleotide, while on the 'narrow beach' (the smaller strand, with higher genetic-information density) an individually important nucleotide is more likely to be hit. I'm still probably not articulating my point well, sorry.
But I think your analogy is better suited to a subtly different point: describing how DNA replication works in a system where strands can be selected out, errors corrected, and genetic information preserved at a systemic level.
This actually kind of reinforces (or at least points to) one of the central ideas of the essay: that we cannot know what it is to be a bat, because we would always contextualize or speculate about the bat's experience in human terms.
If you accept this, then no matter how good we make this simulation (of objective physical phenomena and our best theorized non-human perceptions of those phenomena), the game is always false.
(For what it's worth, I'm not sure yet if I hold any firm viewpoints on this. The essay (at least partially) is an argument against reductionism, but I'm no philosopher; to be honest I'm still wrapping my head around the entire argument. I just think it's interesting to think about.)
For what it's worth [], I'm pretty sure CERN(/LHC), ESRF, SLAC Nat. Accelerator Lab, Lawrence Livermore, Lawrence Berkeley, Sandia, Princeton Plasma Phys. Lab, Brookhaven, Fermilab, Argonne Nat. Lab, TRIUMF, Sudbury Neutrino Observatory, Oak Ridge, Research Triangle Park, various institutions within the Max Planck Society, Institute for Advanced Study, Bell Labs, Xerox PARC, RAND Corporation, (and likely many others...), all have/had facilities within (at most) 100 km of a city with a population of at least 150,000 people. Several of these institutions are within commuting distance of major world cities.
Among nation-state-funded research facilities (e.g. those funded by the US Dept. of Energy, European Research Council, etc.), I'm willing to bet that most are relatively near cities (greater than 150K population). Many (high-energy / high-capital-cost) facilities employ on the order of 1-10K staff alone.
[] (... and that Basic Science, High Energy Physics, Computational Science, Basic Tech, Corporate Research, and/or Public Policy/Economic/GeoPol research are the hard problem spaces you are alluding to...)
> have/had facilities within (at most) 100 km of a city with a population of at least 150,000 people.
I mean, yeah, everywhere is basically within walking distance if you change the definition around enough.
I've actually spent a bit of time in towns near big labs, and for the most part these are not the kinds of big problem-solving cities the article is describing. They're sleepy little towns, many of which barely have a building over 2 stories. In fact, I live 20 minutes from a major biomed lab. I definitely do not live in a major city. 20 years ago people would have called this entire area "farms".
The article is positing that Cities, BIG Cities, are the places hard problems get solved. Not places within an hour's drive of a big city. CERN, ESRF, SLAC, etc. aren't in major cities of their respective countries. 150,000 people is not a big city. If the supposition were correct, it would make sense to site these kinds of hard-problem solvers in your densest, most populated cities. NYC would be thick with labs; Paris, Tokyo, and London would all be centers of major research facilities.
I think the Soviets got it right by not being coy about what these places are: наукогра́д, "science city". For example, Tri-Cities, WA is a "farm" town that only exists because one of the biggest U.S. nuke labs was built there. It was built there because it's out in the middle of nowhere. ORNL is a 2.5 hour drive from Nashville; Brookhaven, 1.5 hours.
My point is that cities don't solve hard problems. Cities merely bring efficiencies that can be useful tools in aiding the process. It'd be harder and more expensive to build something like the LHC without good infrastructure in place to move all the equipment in, so of course you're going to have good transit links, communications infrastructure, etc. But the LHC wasn't built in downtown Paris, was it?
> [] (... and that Basic Science, High Energy Physics, Computational Science, Basic Tech, Corporate Research, and/or Public Policy/Economic/GeoPol research are the hard problem spaces you are alluding to...)
Yeah, basically. Let's be real honest here: the kinds of "problems" the Bay Area seems to spend most of its time solving appear to be mostly figuring out how to extract the maximum amount of money out of fairly uninteresting technical implementations designed to let people chat and share cat pictures. In other words, money for engineering, not problem solving.
> But selfish in a good, weird way - working 80 hours a week so someday I don't have to work 40, so I can take my wife and kids to Hawaii for the summer someday, or whatever.
Epochs are not fungible, especially when raising children. Missing a toddler's first play at the beach cannot be reclaimed by sharing their first scuba lesson.
I disagree with the "firsts" argument. Trying to be there for the firsts of everything is silly, and clinging to them is even sillier. It's area under the curve; height and width both matter.
I also disagree with the 'firsts' argument, to a certain extent. I didn't articulate my point well enough.
I was responding to the 'I'll trade this time now, for more time later' line of reasoning (which I believe is a false transaction). Not only should one recognize that each epoch is unique (in a child's life, in a relationship, in a career, etc.) and therefore not interchangeable with later epochs; but one should also recognize that "selfish" (OP's word choice) actions now are usually not offset by generous actions later.
I suppose one can recognize those tradeoffs and traverse the "borrowing time" reasoning with a bit more self-awareness, though not many do. Especially those who subscribe to the 'compress your 40-year career into a 4-10 year startup' meme without reading the fine print.
If you've got about an hour and a half to spend, and you enjoyed this article, I'd like to recommend a lecture given by Will Wright in 2003.
He covers his background, early computer games, and the origins of SimCity and The Sims, as well as more abstract/high-level discussion of game design and simulation.
Some high-level themes of the talk: using the computer as a modeling tool, simulation design, Will Wright's game design principles and philosophy, programming for two dynamic processes ("the software" and "the player's model of the system"), and traversal of possibility spaces in game design and simulation.
Will's talks and interviews are all fascinating and well worth watching, taking notes on, and studying. It's interesting to see how the projects he was talking about at the time turned out years later. I'll also link to an excellent interview with Chris Trottier, one of the designers on The Sims: an absolutely brilliant designer who worked with Will on The Sims and Spore, who according to Will can manipulate the very fabric of Time and Space, and is a pretty good designer, for a girl: https://web.archive.org/web/20131117041434/http://pickleodeo...
A summary of Will Wright's talk to Terry Winograd's User Interface Class at Stanford, in 1996.
Written by Don Hopkins.
Will Wright, the designer of SimCity, SimEarth, SimAnt, and other popular games from Maxis, gave a talk at Terry Winograd's user interface class at Stanford in 1996 (before the release of The Sims in 2000). At the end of the talk, he demonstrated an early version of The Sims, called Dollhouse at the time. I attended the talk and took notes, on which this article elaborates. I was fascinated by Dollhouse, and subsequently went to work with Will Wright at Maxis for three years. We finally released it as The Sims in 2000, after several name changes: TDS (Tactical Domestic Simulator), Project-X (everybody has one of those), Jefferson (after the president, not the sitcom), happy fun house (or some other forgettable Japanese placism).
At the talk, he reflected on the design of simulators and user interfaces in SimCity, SimEarth, and SimAnt. He demonstrated several of his games, including his current project, Dollhouse.
Here are some important points Will Wright made at this and other talks. I've elaborated on some of his ideas with my own comments, based on my experiences playing lots of SimCity, talking with Will, studying the source code and porting it to Unix, reworking the user interface, and adding multi-player support.
The Future of Content - Will Wright's Spore Demo at GDC 3/11/2005
What I learned about content from the Sims.
...and why it's driven me to procedural methods.
...And what I now plan to do with them.
Will Wright
Game Developers Conference
3/11/2005
Notes taken by Don Hopkins at the talk, and from other discussions with Will Wright.
Sims Designer Chris Trottier on Tuned Emergence and Design by Accretion
The Armchair Empire interviewed Chris Trottier, one of the designers of The Sims and The Sims Online. She touches on some important ideas, including "Tuned Emergence" and "Design by Accretion".
Chris' honest analysis of how and why "the gameplay didn't come together until the months before the ship" is right on the mark, and that's the secret to the success of games like The Sims and SimCity.
The essential element that was missing until the last minute was tuning. The approach to game design that Maxis brought to the table is called "Tuned Emergence" and "Design by Accretion". Before it was tuned, The Sims wasn't missing any structure or content; it just wasn't balanced yet. But that's OK, because that's how it's supposed to work!
In justifying their approach to The Sims, Maxis had to explain to EA that SimCity 2000 was not fun until 6 weeks before it shipped. But EA was not comfortable with that approach, which went against every rule in their playbook. It took Will Wright's tremendous stamina to convince EA not to cancel The Sims, because according to EA's formula, it would never work.
If a game isn't tuned, it's a drag, and you can't stand to play it for an hour. The Sims and SimCity were "designed by accretion": incrementally assembled out of "a mass of separate components", like a planet forming out of a cloud of dust orbiting around a star. They had to reach critical mass before they could even start down the road towards "Tuned Emergence", like life finally taking hold on the planet's surface. Even then, they weren't fun until they were carefully tuned just before they shipped, like the renaissance of a civilization suddenly developing science and technology. Before it was properly tuned, The Sims was called "the toilet game", for the obvious reason that there wasn't much else to do!
IIRC, what you are referring to is in the text epilogue/credits of the film. It said something along the lines of 'the Camorra even invested in the reconstruction of the Twin Towers.' Agreed that it was a somewhat bizarre 2-3 seconds of information. But I don't remember the film explicitly connecting them (or insinuating a connection) to the attacks, just to the reconstruction investment.
It's excellent otherwise, especially in that it doesn't romanticize "the lifestyle", which is a rubbish cliché endemic to many criminal-enterprise films. Other films that I think succeed in this sense are "City of God" (Portuguese title: "Cidade de Deus") and "Maria Full of Grace" (Spanish title: "María llena eres de gracia").
https://en.wikipedia.org/wiki/MIM-104_Patriot#Failure_at_Dha...
Whether or not this can be attributed to technical debt, I don't know. But it is an instance of a software failure that resulted in death.
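For what it's worth, here's a rough sketch of the arithmetic usually cited for that incident (the exact register layout is reported differently in different accounts, so treat the constants as assumptions): the system counted time in 0.1-second ticks, 0.1 has no exact binary representation, and the tiny per-tick truncation error accumulated over roughly 100 hours of continuous operation.

    from fractions import Fraction

    # Rough reconstruction of the commonly cited analysis (constants are
    # assumptions): 0.1 s is truncated to a fixed number of binary fraction
    # bits, and the per-tick error accumulates over 100 hours of uptime.
    BITS = 23  # fractional bits kept, per the usual account of the bug
    stored_tenth = Fraction(int(Fraction(1, 10) * 2**BITS), 2**BITS)
    error_per_tick = Fraction(1, 10) - stored_tenth  # ~9.5e-8 s

    ticks = 100 * 60 * 60 * 10  # 100 hours of 0.1 s ticks
    print(f"error per tick:      {float(error_per_tick):.3g} s")
    print(f"drift after 100 hrs: {float(error_per_tick * ticks):.2f} s")  # ~0.34 s

A drift of about a third of a second is enough for a tracking window to miss a fast-moving target, which is roughly the mechanism the linked article describes.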