Hacker News | ethbr1's comments

What objective evidence do we have that Musk is smart?

Basically: he's extremely rich

Working hard, negotiating compensation packages well, and picking the right companies/products to be involved with (plus a bit of luck) are together sufficient for exorbitant wealth, though.

E.g. there are plenty of people just as smart as Bezos who didn't hitch their wagon to the "sell something easy on the web" idea at the right time.


What is a secular term for prosperity gospel?

Social Darwinism.

Plutocracy?

In feedback solicitation situations with multiple stakeholders, it's important to attach cost to suggestions. Being consulted without being responsible is always dangerous.

Not in the sense of "This is how long that will take me" (because who cares about someone else's time?), but in "Of the 3 things you requested, which 1 is your must-have?"

Often this is approximated via design/dev team pushback, but it's easier just to be explicit about it: e.g. everyone gets X change request tokens.


Imho, this will be the key goal of gen 1.5 AI products that start delivering real-world efficiency at scale: multiplying output rather than full human replacement.

In almost every transformational AI use case, the economics don't distinguish between automating 80% of the work and 100% of the work, because there are upstream or downstream limitations that will take a decade to work out.

And at 80% automation, you're already 5x'ing someone's productivity (naive assumptions, etc etc), which translates into either 5x the supply of a good (same labor pool) or 1/5 labor costs (same output).

Granted, Amdahl's law applies [0], and there are going to be fractions unsuitable for automation.

But it feels like AI tech is relearning a computing lesson that's always been true: do the easy things first (cooperative systems with humans) and then tackle the harder things (100% end to end automation).

[0] https://en.wikipedia.org/wiki/Amdahl%27s_law
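
To make the arithmetic concrete, here's a quick back-of-envelope sketch in Python (illustrative only, with the same naive assumptions as above; the helper name is mine):

    # Amdahl's law: overall speedup when a fraction p of the work
    # is sped up by a factor of s.
    def amdahl_speedup(p: float, s: float) -> float:
        return 1.0 / ((1.0 - p) + p / s)

    # Automate 80% of the work "perfectly" (s effectively infinite):
    print(round(amdahl_speedup(0.8, 1e12), 2))  # 5.0 -- the 5x figure above

    # The un-automatable 20% caps the gain no matter how good the AI gets:
    # as s -> infinity, speedup -> 1 / (1 - p) = 1 / 0.2 = 5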


If you read it and only took that away, you might need an LLM to summarize the other 95% for you.

Because school IT doesn't pay salaries that attract top-tier talent ($50-125k, depending on position level?).

So you get the typical outcomes: (a) IT makes bad decisions, or (b) admin gets so annoyed at IT's slowness that they override it by buying a vendor solution.

And add in that there's a huge amount of vendor centralization, especially at the platform-solution level.


Maybe CS capstone projects should be about delivering useful software to the rest of the university.

Imho, we'd probably get better software developers if capstone was "build a system we'll still be using in 5 years."

Agreed that a huge part of effectively using LLMs in education will be teaching proper evaluation of sources and what does/doesn't constitute a primary source.

A lot of this feels like the conversation when Wikipedia was new. (Yes, I'm from that grade-school generation)

The key lesson was that Wikipedia isn't a primary source and can't be used to directly support a claim. It can absolutely be used to help locate a primary source in the research process, though!

Granted, LLM use is a bit trickier than Wikipedia, but fundamentally it's the same: if a paper needs citations, and kids understand that LLMs aren't valid sources, then they'll figure it out.

To me, the more critical gap will be in the thinking process, and I expect "no computer" assignments and in-class exercises to become more popular.


> It's not clear what it means to "challenge users to reflect and evaluate"

In childhood education, you're developing complex thinking pathways in kids' brains. (Or not, depending on the quality of the education)

The idea here isn't to corral their thinking along specific truths, as it sounds like you're interpreting it, but rather to foster in them skills to explore and evaluate multiple truths.

That's doable with current technology because the goal is truth-agnostic. From a sibling comment's suggestion, simply asking LLMs to also come up with counterfactuals produces results -- but that isn't their default behavior / system prompt.

I'd describe the Brookings and GP recommendation as adjusting teenager/educational LLMs to lessen their assumption of user correctness/primacy.

If a user in that cohort asks an LLM something true, it would still help their development for an LLM to also offer counterfactuals as part of its answer.
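
As a minimal sketch of what that adjustment could look like (assuming the OpenAI chat completions API; the model name and prompt wording are placeholder assumptions, not a tested recipe):

    from openai import OpenAI

    client = OpenAI()

    # Hypothetical "educational" system prompt: lessen the default
    # assumption of user correctness and always surface counterpoints.
    SYSTEM_PROMPT = (
        "You are a tutor for teenage students. Do not assume the user's "
        "framing is correct. For every answer, also offer the strongest "
        "counterargument or counterfactual, and note what evidence would "
        "distinguish them."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Was the printing press the main cause of the Reformation?"},
        ],
    )
    print(resp.choices[0].message.content)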


I don't think that study supports your assertion.

Parent is saying that AI tools can be useful in structured learning environments (i.e. curriculum and teacher-driven).

The study you linked is talking about unstructured research (i.e. participants decide how to use it and when they're done).


You can no true Scotsman it, but that study is a structured task. It's possible to generate an ever-more structured tutorial, but that's asking ever more from teachers. And to what end? Why should they do that? Where's the data suggesting it's worth the trouble? And cui bono?

Students have had access to modern LLMs for years now, which is plenty of time to spin up and report out a study...


To quote the article:

"To be clear, we do not believe the solution to these issues is to avoid using LLMs, especially given the undeniable benefits they offer in many contexts. Rather, our message is that people simply need to become smarter or more strategic users of LLMs – which starts by understanding the domains wherein LLMs are beneficial versus harmful to their goals."

And that is also our goal as instructors.

I agree with that study when using an LLM for search. But there's more to life than search.

The best argument I have for why we should not ban LLMs in school is this: students will use them anyway, and they will harm themselves. That is reason enough.

So the question becomes, "What do instructors do with LLMs in school so the LLM's effect is at least neutral?"

And this is where we're still figuring it out. And in my experience, there are things we can do to get there, and then some.


Comparing that study to how any classroom works, from kindergarten through high school, is ridiculous.

What grade school classes have you ever been in where the teacher said "Okay, get to it" and then ignored the class until the task was completed?

I'm not saying it's not a Scotsman: I'm saying you grabbed an orange in your rush to refute apples.


After watching the video below, the issue seems to be the excess bearing play and the resulting no-longer-constrained force directions.

With a proper tolerance bearing in place, the force is constrained so that other parts are only stressed in directions they're well suited to handle (because the bearing takes the load).

Once the bearing develops excess play, you've got a bucking engine that (to your point) is directly loading other parts in unexpected ways/directions, eventually causing failure.

The fact that Boeing supposedly modeled this and came up with non-safety critical in the event of bearing breakage... curious how that will turn out.


> The fact that Boeing supposedly modeled this and came up with non-safety critical in the event of bearing breakage... curious how that will turn out.

They'd have to show at least one plane with a bearing gone that still flies as intended. I suggest we break one on purpose and put the full complement of Boeing execs on that plane to prove its safety, given the alternative of retracting that statement.


> They'd have to show at least one plane with a bearing gone that still flies as intended.

That depends on the meaning of “safety of flight”. I don’t know what it means in aviation, but I wouldn’t rule out that there is significant room between “flies as intended” and “results in a safety of flight condition”.

For example, if an engine were to completely drop off the plane, would that necessarily result in a safety of flight condition, or does “the plane will be able to continue takeoff and land again” mean safety of flight isn’t affected?


Some of it may be related to the 3-engine design, if Boeing had modeled that 2 engines still provided sufficient power in all scenarios.

But a takeoff does seem like the worst time to catastrophically lose 1/3 power, even without FOD intake by the central engine.


My company has a policy limiting the number of high-level execs traveling on a plane at a time. I wonder if plane manufacturers have similar restrictions. It’d be ironic for them to simultaneously assert that their planes are safe for the general public, and also believe the risk is too high for a planeload of their execs to fly in one.

Controlled flight into terrain is a thing

Niki Lauda, eat your heart out

To see extreme examples of this, look at any wallowed-out/wallered-out through-bore in construction equipment (e.g. excavator buckets), particularly when a pin hasn't been greased, or is seized.

This same scenario, combined with the amount of vibration and stress caused by the engine, should scream "this is a catastrophe waiting to happen" to any engineer.


Deep link to the most relevant portion: https://www.youtube.com/watch?v=q5OQzpilyag&t=5m36s (spherical bearing cut-away diagram, actual bearing again, and failure mode explained)
