myffical's comments

Some DeepMind researchers used mechanistic interpretability techniques to find concepts in AlphaZero and teach them to human chess Grandmasters: https://www.pnas.org/doi/10.1073/pnas.2406675122


Another appropriate analogy would be HeLa / Henrietta Lacks: https://en.wikipedia.org/wiki/HeLa


https://www.federalreserve.gov/publications/2024-economic-we...

> When faced with a hypothetical expense of $400, 63 percent of all adults in 2023 said they would have covered it exclusively using cash, savings, or a credit card paid off at the next statement [...] 37 percent of adults [...] would not have covered a $400 expense completely with cash or its equivalent


Thanks, very helpful! I am still puzzled as to how this data reconciles with the median net worth figures that are also produced by the federal government (too lazy to google again).


Thanks for the link; this post is indeed a dupe of that one.


Note that the OP of the tweet has since successfully gotten their Facebook profile unblocked: https://twitter.com/KikiDoodleTweet/status/14064498487699415...


Add musician Jonathan Coulton to that list: he makes ~$500,000 a year without ever having signed to a major label.


True, in indie music that's possible, though that also predates the internet; e.g. Fugazi made millions in the 90s with self-distribution (Ian MacKaye's estimated net worth is something like $20 million these days). Would be curious if it's accelerated, decelerated, or remained constant in frequency.


This is a good lesson for everyone who builds online communities: Some design decisions that are acceptable to the majority, like the real-name policy, harm groups of people who are already marginalized.


The Kelvin problem: what is the optimum way to partition 3D space into identical cells, such that the surface area between them is minimized? Lord Kelvin conjectured that the Kelvin cell gave this optimum partitioning. This conjecture stood for 100 years until Denis Weaire and Robert Phelan found a better cell in 1993.

http://en.wikipedia.org/wiki/Weaire%E2%80%93Phelan_structure


Maybe.

One of TED's purposes is to bring together people with ideas and people with the capital and influence to make them happen.


You need to massage your data to get more meaningful results.

It might be interesting to compare your word counts with the word counts from a general-purpose word corpus, then pick out words that appear more frequently by a statistically significant amount. Something like Amazon's statistically improbable phrases algorithm.
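A minimal sketch of that comparison, assuming you already have word counts for both corpora (the corpora, the `min_count` cutoff, and the smoothing are all illustrative choices, not anyone's actual implementation):

```python
from collections import Counter

def distinctive_words(target_counts, reference_counts, min_count=5):
    """Rank words by how over-represented they are in the target
    corpus relative to a general-purpose reference corpus."""
    target_total = sum(target_counts.values())
    ref_total = sum(reference_counts.values())
    scores = {}
    for word, count in target_counts.items():
        if count < min_count:
            continue  # too rare to say anything statistically meaningful
        p = count / target_total
        # add-one smoothing so words absent from the reference corpus
        # don't cause a division by zero
        q = (reference_counts.get(word, 0) + 1) / (ref_total + len(target_counts))
        scores[word] = p / q  # frequency ratio: >1 means over-represented
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# toy corpora for illustration (hypothetical data)
hn = Counter("show hn ask hn startup python rust startup launch".split())
general = Counter("the a of and show ask python the and of launch news".split())
ranking = distinctive_words(hn, general, min_count=1)
```

With real data you would want a much larger reference corpus and a higher `min_count` so that one-off words don't dominate the ranking.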


I'd suggest, as a simple heuristic for ranking words for improbability/relevance, contribution to K-L divergence from the frequencies in the general-purpose word corpus:

P * ln(P/Q)

where P is the frequency of the word in the narrow corpus (HN titles)

and Q is the frequency of the word in the general-purpose corpus

(The formula doesn't work if Q is ever zero; this won't happen if the broader corpus includes the narrower one, as it should, but as a practical fix, just set Q := (1-a)*Q + a*P for a small positive a to simulate merging the smaller corpus into the larger.)

http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_diverg...

Anybody with more time than I have at the moment want to code this up?

