rndmio's comments

That was a terrible experience. For a start, that site has an expired certificate, as do many of the pages it suggested, and of the pages that did work, most were from people who dipped a toe in a few years ago and never came back, or had some other broken functionality.


You're welcome, I guess? xD


Sounds trite but my surefire method is to stop thinking. I literally lie down, close my eyes, and stop thinking. Sleep comes quickly.


My mom always told me to do this as a child. I’ve never understood this, nor any other visualization technique; I have never found it to be possible. At best I actively think about not thinking, which is counterproductive, but even then, within about 10 seconds my mind has already wandered. And if I’m not actively trying, I’ll always have some other thoughts popping in no matter what.


It might be because it wasn’t a technique as such. I don’t visualise not thinking, I just stop thinking about things, but I also don’t have a constant inner voice talking to/with me as I understand many people do.


" I just stop thinking about things" is the part that I can't comprehend. I've never been able to do this. If I try, I immediately start thinking about things w/o intending to do so.


Something that works for me is to count from 10 to 1 slowly. Concentrate on getting to 1. If another thought enters, start the count again. Repeat. This is like counting the proverbial sheep.


That's in the same category of techniques that will only work for a very brief time for me. At some point I realize I have background thoughts, and then those take over.


This happens to me as well, but when I become aware of other unwelcome thoughts, I just start counting again. /shrug


In the UK it was “Attack Henry Cooper, outside his shop, on a Tuesday”. No idea why the random violence, but I never forgot it.


For me (UK, posh school, 1980s) it was just "sohcahtoa" - easy enough to be its own mnemonic. No need to gild the lily.

Your order is cosine, sine, tangent - CST. A quick look at the other examples here seems to prefer SCT, as do I, but only because that is what I was taught.

I also note that your mnemonic is very different from the one I learned, in that it puts the function name last: AHC vs CAH.

There is no right or wrong here but I'm sure we can agree that there are loads of mnemonics for these basic trig formulae and nationality isn't involved.
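
For anyone who only remembers the mnemonic and not what it stands for, this is what "sohcahtoa" unpacks to:

    sin(θ) = opposite / hypotenuse   (SOH)
    cos(θ) = adjacent / hypotenuse   (CAH)
    tan(θ) = opposite / adjacent     (TOA)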


UK, state school, late 90s/early 2000s, also just "sohcahtoa" - pronounced as a single word mostly. It never felt like it needed more than that?


It seems we have an agreement on this. There is no need to gild the lily!

I also went to a lot more schools than normal, thanks to living in multiple countries and my dad (army) moving every 18 months or so!

sohcahtoa is nearly a word.


Allegedly your grandpa, armed with his slide rule, has even more random violence:

"Spitfire or Hurricane come and hurry to our aid"

This works for me as the order of the functions matches the order shown on my trusty FX82A. Your version is kind of messed up.

I am giving this AI thing a wide berth; however, could we ask an LLM to invent a new aide-memoire for this? We have got the silent generation and the boomers covered, but is there something we can do for kids today? Maybe it references Cinnamoroll, Hello Kitty or Octonauts characters that actual kids know, without it being ultra-violent.


UK, state school: “some officers have coaches and horses to order about”


For us it was: "two old angels skipped over heaven carrying a harp"


The repetition of adjectives in adjacent sentences is much more of a tell than using the word delve, imo.


This is a good call-out, and one that might be fixable by setting a repetition penalty at generation time, or better still, in training.
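
A rough sketch of the generation-time version, assuming the Hugging Face transformers library (GPT-2 and the penalty value are just illustrative stand-ins, not recommendations):

    # Minimal sketch: penalise repeated tokens at generation time.
    # Assumes the `transformers` package; model choice and penalty value
    # are illustrative, not tuned recommendations.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The report was thorough. The report was", return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        repetition_penalty=1.3,   # >1.0 down-weights tokens that have already appeared
        no_repeat_ngram_size=3,   # hard-blocks exact 3-gram repeats
    )
    print(tok.decode(out[0], skip_special_tokens=True))

Note this only penalises token-level repetition; the repeated-adjective tell is a higher-level pattern, which is probably why fixing it in training is the better option.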


I recall, in the 1980s, style-analyzing software reporting word counts for words with high frequency in the text but low frequency in general usage - jargon. The suggestion was to re-word, and it was often a relevant clue to re-think, by examining why one word was doing so much work. Detecting that sounds like more than repetition, but it's still feasible probabilistically, and relevant at both the usage and conceptual levels.
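
That kind of check is easy to approximate today. A rough sketch in Python, assuming the third-party wordfreq package for the "general usage" baseline (the file name and the cut-off are hypothetical):

    # Rough sketch: flag words that are frequent in this text but rare in
    # general English. Assumes `pip install wordfreq`; "draft.txt" and the
    # top_n cut-off are placeholders, not part of any real tool.
    import re
    from collections import Counter
    from wordfreq import word_frequency

    def jargon_candidates(text, top_n=10):
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        total = sum(counts.values())
        scored = []
        for word, count in counts.items():
            in_text = count / total                          # frequency within this document
            in_general = word_frequency(word, "en") or 1e-9  # frequency in general English
            scored.append((in_text / in_general, word, count))
        return sorted(scored, reverse=True)[:top_n]

    for ratio, word, count in jargon_candidates(open("draft.txt").read()):
        print(f"{word}: used {count}x, about {ratio:,.0f}x its general-usage rate")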


Eh, it’s not a _great_ one, because it’s also common in very bad human writing, particularly fiction.


I don't know; maybe the 10 prior weeks of sleeping in hypoxic tents had some effect too. If they'd walked in off the street, huffed some Xenon and then blasted up Everest, that'd be amazing, but it sounds like people are overfocusing on the Xenon bit.


How are you supposed to know in advance if it is going to be able to usefully answer your question or will just make up something?


Training people is a cost, an investment; if everyone does it, the cost is amortized across the industry. If I can cut that cost by using AI, I'm now at a competitive advantage, and everyone will look to cut that cost, because the downside of paying to train juniors who may leave is worse for a company than the downside of the whole industry no longer training juniors at all.


That's true in the short term.

Fast forward half a decade, and there are no juniors or mid-level developers to onboard, no one with the generic expertise that can be fine-tuned to your product.


But they don't learn from that; they turn the crank of the AI tooling, and once they have something that works, it goes in and they move on. I've seen this directly: you can't shortcut actual understanding.


I disagree. LLM assisted coding is yet another level of abstraction. It’s the same thing as an assembly programmer saying that OOP programmers don’t learn from OOP coding.

Today’s juniors still learn the same core skill, the abstract thinking and formalization of a problem is still there. It’s just done on a higher level of abstraction and in an explicitly formulated natural language instead of a formal one for the first time ever. They’re not even leaving out the formal one completely, because they need to integrate and fine tune the code for it to work as needed.

Does it introduce new problems? Sure. Does it mean that today’s juniors will be less capable compared to today’s seniors once they have the same amount of experience? I really doubt it.


>It’s the same thing as an assembly programmer saying that OOP programmers don’t learn from OOP coding.

Not the same. Whether you are writing Assembly or Java, you have to determine and express your intent with high accuracy - that is a learned skill. Providing problem details to an LLM until it produces something that looks correct is not the same type of skill.

If that is the skill that the business world demands in 5 years, so be it, but arguing that it's simply the next step in the Assembly, C, Java progression makes no sense.


> Providing problem details to an LLM until it produces something that looks correct

If you're using LLMs to code like this, you're using them incorrectly. You need to specify your intent precisely from the start, which is a skill you learn. You can use other tools, like OOP languages, incorrectly with "working" results as well; that's why code quality, clean code and best practices are a thing.

I'm sure many junior developers use LLMs the way you described but maybe that's exactly what the universities and seniors need to teach them?


I disagree. I use LLMs to help me like a teacher would.


You might, just as, ages ago, when people were complaining about juniors copy-pasting Stack Overflow answers, you might have said you used those to learn from. LLMs are a turbo-charged version of the same problem: rather than copying a code fragment from Stack Overflow, you can now have an LLM spit out a working solution. There is no need to understand what you're doing to be productive, and if you don't understand it you have no model or reasoning to apply to other problems in the future. Maybe AI will save you there too for a while, but eventually it won't, and you'll find you've built your career on sand.

Or maybe I'm wrong and we're all headed for a future of being prompt engineers.


Generally the understanding happens fairly quickly just by glancing through the code.

I was a sceptic up until recently, when I failed to create a solution myself.

Since I am mostly a hobbyist programmer (for 25 years and counting) I often find I don’t have the time to sit down for prolonged periods to reason about my code.

ChatGPT helps my brain, tired after a day of work, develop my code 10x quicker and more easily, by removing roadblocks for me.

I find it an excellent tool.


> fairly quickly just by glancing through the code.

This is a senior-level view. Juniors don't have this skill yet, and it is one of the things we're concerned they won't pick up.


The difference is that most teachers know what they are talking about. LLMs don't necessarily.


The why is "because we can", but damn am I finding the tools being built with AI, or having it tacked on, depressing. Is this a small glimpse of the future we're building for ourselves? Communication is valuable because thought and effort went into it; lowering the bar on producing content doesn't mean more choice, it means lower quality. Already I see a reaction against this amongst some peers when they find out that something they were asked to review was AI-generated: why should they put effort in if the other person didn't?


If we find the "thought and effort" part of communication valuable, we'll keep it. If not, we won't.


That would be a fine posture to take, very naturally-selective, but I find it discomforting because I've seen so many different ways that humans act that don't benefit them (individually or on the whole). It isn't always out of self-destruction or lack of self-preservation. More often than not, the choice was based on what's easiest -- a tendency towards the path of least resistance. This technology looks a lot like trading off intention, and attention, for quick and good-enough(?) results. Enough so that I can understand GP's concern for our communication skills as a society.

Or that even those who carry on with it, despite it not being easier, get broadly labeled as bots because their writing failed the vibe check. I think we could find ourselves losing the "thought and effort" in spite of it being more valuable, because many people find it easier.

I have confidence that there will always be small communities that continue holding this as valuable, but that maintaining it in a community setting will become more strained as the general zeitgeist drifts in the direction of regarding output higher than effort.


I think you're ignoring the externalities of supporting a business/country that is doing things you don't agree with. If ACo can sell me a widget cheaper than BCo because they dump polluting manufacturing waste in a local river, it's perfectly rational to decide not to deal with them. The intent is to make them change their behavior by impacting their business. Should we ignore the effects of our purchases just because we can get something cheaper or better?

