Hacker News | xtracto's comments

I've done a mix of SOC 2, ISO 27001, and PCI L1 for 3 different startups, 2 of them B2B. All certified and fully compliant.

The problem with the current frameworks is that the "controls" are so asinine and the auditors so hard-headed that getting certified becomes a matter of "checking the box".

In particular, most of those frameworks REQUIRE maintaining so much paper red tape that it makes a 10-person startup want to kill itself. And on top of that, the costs are stupid high for startups that are just "starting up".

On the flip side, how many large companies have we seen that have all the SOCs, ISOs, and whatnot certifications, and still get pwn3d and their data stolen or exposed?

It tells you that a place being certified doesn't guarantee shit.

The reality is that large companies ask for certs as a CYA mechanism: the "security" department of LargeCo asks for the compliance cert so that when shit hits the fan, they can say "not my fault, they told me they were compliant".

The good thing is that with the new bullshit generators (LLMs), this certification/compliance process will collapse.


Well, yes, but that's the point of many contracts: they are often designed to shift risk to the parties better equipped to handle it. We run our app on GCP because, as a 20-person company, I don't want to be responsible for physical security and a million other risks.

With ISO27001 or SOC 2, I have more information about the other party's ability to manage those risks than just taking their word for it. I'm trusting a third party auditor to vouch for them.

Fraud undermines all kinds of relationships, and yes, LLMs make it worse. For the last job we opened, I got hundreds of perfect cover letters asserting the candidates met all of the criteria. Bah.

My perhaps naive hope is that a few of the companies involved will face criminal fraud charges, and we will start to develop new reflexes as a society: even though LLMs make lying very, very easy, there are still consequences.


> With ISO27001 or SOC 2, I have more information about the other party's ability to

... spend time and money to emulate the asinine requirements of outdated standards instead of actually making the product better and more secure.

> I'm trusting a third party auditor to vouch for them.

Like Delve?


The standards are very sensible. If you can't be bothered to provide even simple evidence that your employees use basic hard-drive encryption and password managers, and that your product has backups in place, I don't want to do business with you.

And Delve isn't an auditor. Though they were apparently in cahoots with equally criminal third-party auditors. So I guess I'm going to be looking more closely at exactly who is auditing our vendors in the future...


X11 was started in 1984 at MIT. That means, when Wayland was first conceived in 2008, there had been 24 years of X development.

I guess Kristian grossly underestimated the effort required to write a full-featured display server.

FWIW, in my career, the times I've had to make very impactful changes in software, I've always started from the current codebase and removed/simplified stuff.

As an example, I was once at a company that had built a huge Ruby monolith which was not scaling at all. It had APIs for everything, including "high frequency trading", in the same codebase server, on a metal AWS instance (that's how they scaled).

What we did initially was simply copy the repo N times (sign-up, compliance, risk, trading, etc.), spin up copies of the same server, and use a load balancer to route APIs to the different boxes.

Then we started removing unused stuff from each repository to specialize them. Finally, we simplified the complexity of each separate codebase.

I would have approached the X11 codebase similarly.


I also use macOS, but I've used Linux since 1997 (way too many distros), so I hold it close to my heart.

For me the Wayland story is a great example of https://www.joelonsoftware.com/2000/04/06/things-you-should-...

They started by saying "let's rewrite from scratch, X is too complicated"; 17 years later, they have realized the reasons for all the complexity that was written over 25 years (since 1984, at MIT).

I guess in around 8 years we will have 2 implementations of X.


Indeed. Clean sheets of paper don’t stay clean very long in the real world. That said, in the manner of “plan to throw one away,” we can learn from our mistakes and do better the next time around. Though perhaps X10 was the one that got thrown away before X11.

Or as a good friend told me when I was starting my PhD: "those of us that finish our PhDs are not the most intelligent, but the most stubborn".

Or lucky! I had a great time during mine because my advisor was amazing. However, many of my cohort mates, whom I'd say are much smarter than I am, got stuck with terrible mentors.

Ha! You just made me remember how much I used JabRef (an open-source BibTeX reference app) back in 2004 when I did my PhD.

It was the best/worst 4 years of my life. I studied overseas (UK), met my future wife, and got a PhD that really wasn't useful for much to me. Fortunately, it was under a scholarship.


The other day I (well, the AI) wrote a Rust app to merge two huge tables (GBs of data) by discovering columns with data in common, based on text distance (Levenshtein and Dice). It worked beautifully.

And I have NEVER written a line of Rust.
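The original Rust code isn't shown, but the core column-matching idea is easy to sketch. Below is a minimal, hypothetical Python version: score pairs of columns by Sørensen–Dice similarity over character bigrams, so that columns holding the same kind of data rank high even when the values have typos or formatting differences. All names and the averaging strategy are illustrative assumptions, not the actual app.

```python
# Hypothetical sketch of matching columns by text distance.
# dice() is the Sørensen-Dice coefficient over character bigrams.

def bigrams(s: str) -> set[str]:
    """Character bigrams of a string, e.g. 'abc' -> {'ab', 'bc'}."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a: str, b: str) -> float:
    """Sørensen-Dice similarity between two strings, 0.0 .. 1.0."""
    ba, bb = bigrams(a.lower()), bigrams(b.lower())
    if not ba and not bb:
        return 1.0  # two empty/one-char strings: treat as identical
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def column_similarity(col_a: list[str], col_b: list[str]) -> float:
    """Average best-match Dice score of col_a's values against col_b."""
    return sum(max(dice(x, y) for y in col_b) for x in col_a) / len(col_a)

# Columns with data in common score high despite small differences:
left = ["Acme Corp", "Globex", "Initech"]
right = ["ACME Corp.", "Globex Inc", "Initech LLC"]
```

To discover join candidates across two tables, one would compute `column_similarity` for every column pair and keep the pairs above some threshold.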

I don't understand the naysayers; to me, the state of gen AI is like the Simpsons quote: "worst day so far". Look where we are within 5 years of the first real GPT/LLM. The next 5 years are going to be crazy exciting.

The "programmer" position will become a "builder" position. When we've got LLMs that generate Opus-quality text at 100x speed (think ASIC-based models), things will get crazy.


Human minds are built to find patterns, and you should be careful not to assume the rate of improvement will continue forever based on nothing but a pattern.

Just the fact that even retail-quality hardware is still improving significantly at running local LLMs is a great sign. If AI quality remained the same and the cost of local hardware dropped to $1000, it would still be the greatest thing since the internet, IMO. So even if the worst happens and all progress stops, I'm still very happy with what we got.

>I'm still very happy with what we got

"One person's slop is another person's treasure"

I'm not all that impressed with "AI". I often "race" the AI by giving it a task to do, and then I start coding my own solution in parallel. I often beat the AI, or deliver a better result.

Artificial Intelligence is like artificial flavoring. It's cheap and tastes passable to most people, but real flavors are far better in every way even if it costs more.


At their current stage, this feels like the wrong way to use them. I use them fully supervised (despite the fact that it feels like I'm fighting the tools), which is kind of the best of both worlds. I review every line of code before I allow the edit, and if something is wrong, I tell it to fix it. It learns over time, especially as I set rules in memories, and so the process has sped up, to the point that this goes way faster than if I had done it myself. Not all tasks are appropriate for LLMs, but when they are, this supervised mode is quite fast. I don't believe the output to be slop, and anyway I feel like I still own every line of code.

The happy path for me is with Erlang: due to the concurrency model, the blast radius of an error is exceptionally small, so the programming style is to let things crash when they go wrong. So really you are writing happy-path code only (most of the time). Combine this approach with some very robust tests (does this thing pass the tests / behave how we need it to?) and you're close to the point of not really caring about the implementation at all.

Of course, I still do, but I could see not caring becoming possible down the road with such architectures.


Homemade food is better than anything you can buy, too. I'm 40, but I still drive 30 minutes to my parents' once a week for dinner, because the food they make feels like the elixir of life compared to the slop I can buy at Trader Joe's, Costco, or most restaurants.

But I'm pretty glad Trader Joe's exists too.


Trust me, Trader Joe's is real food compared to a lot of the toxic waste being sold as food out there.

That crap will fill your belly but it won't keep you healthy. Your brain is like a muscle, if you stop flexing it, you'll end up weaker.


The overall trend in AI performance will still be up and to the right, like everything else in computing over the past 50 years; improvement doesn't have to be linear.

Assuming newer, more efficient architectures are discovered.

Because if you don't know the language or problem space, there are footguns in there that you can't find; you won't know what to look for. Only when you try to actually use this in a production environment will the issues become evident. At that point, you'll have to either know how to read and diagnose the code, or keep prompting till you fix it, which may introduce another footgun that you didn't know that you didn't know.

This is what gets me. The tools can be powerful, but my job has become a thankless effort in pointing out people's ignorance. Time and again, people prompt something in a language or problem space they don't understand, it "works", and then it hits a snag because the AI muddled over a very important detail. Then we're back to the drawing board, because that snag turned out to be an architectural blunder that didn't scale past "it worked in my very controlled, perfect-circumstances test run."

It is getting really frustrating seeing this happen on repeat. Instead of people realizing they need to get their hands dirty, they just keep prompting more and more slop, making my job more tedious. I am basically at the point where I'm looking for new avenues of work. I say let the industry run rampant with these tools; I suspect I'll be getting a lot of job offers a few years from now, as everything falls apart and their $10k-a-day prompting fixes one bug only to cause multiple regressions elsewhere. I hope you're all keeping your skills sharp for the energy crisis.


Before LLMs, I watched in horror as colleagues immediately copy-paste-ran Stack Overflow solutions in the terminal, without even reading them.

LLM agents are basically the same, except now everyone is doing it. They copy-paste-run lots of code without meaningfully reviewing it.

My fear is that some colleagues are getting more skilled at prompting but less skilled at coding and writing. And the prompting skills may not generalize much outside of certain LLMs.


The less you know about a domain/language, the better AI seems to be :)

> The next 5 years are going to be crazy exciting.

I don't want exciting. I want a stable, well-paying job that allows me to put food on the table, raise a family with a sense of security and hope, and have free time.


I seem to remember doing that in SQL (EDIT_DISTANCE) 20-ish years ago. While I wouldn't say it worked beautifully, I also didn't need to write a single line of Rust :) and no more than 2 lines of SQL were needed.

EDIT_DISTANCE uses pure Levenshtein, which is quadratic, so for tables of 500k rows with 20+ columns each it slows to a crawl. Without going into a lot of detail, I needed this to work for datasets of that size, so a lot of "trick" optimization and pre-processing had to be done.

Otherwise, simple merges in pandas or SQL/DuckDB would have sufficed.
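The comment above doesn't spell out which optimizations were used, but a standard trick for this problem is *blocking*: bucket values by a cheap key first, and only run the expensive quadratic edit-distance comparison within a bucket. A minimal, illustrative Python sketch (not the original Rust, and the blocking key is an assumption):

```python
# Blocking to keep fuzzy matching tractable on large tables:
# compare only values that share a cheap blocking key, and run the
# O(m*n) Levenshtein DP only on those survivors.

from collections import defaultdict

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance, O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def blocked_matches(left, right, max_dist=2):
    """Pairs within max_dist edits, comparing only within blocks.

    The blocking key (first char + coarse length bucket) trades a
    little recall for a huge reduction in comparisons.
    """
    buckets = defaultdict(list)
    for r in right:
        buckets[(r[:1].lower(), len(r) // 4)].append(r)
    out = []
    for l in left:
        for r in buckets.get((l[:1].lower(), len(l) // 4), []):
            # cheap length bound before the expensive DP
            if abs(len(l) - len(r)) <= max_dist and levenshtein(l, r) <= max_dist:
                out.append((l, r))
    return out
```

With blocking, each value is compared against a handful of candidates instead of all 500k rows, which is the difference between minutes and days at that scale.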


And how many years of experience did you need to know what to write? And how does that time compare with how long prompting takes?

It's an interesting question.

Years of school (reading, calculus, etc.) to get to the point of learning the basics of set theory. One day to learn basic SQL based on understanding set theory. Maybe a few weeks of using SQL at work for ad hoc queries to become proficient enough (the query itself wasn't really complex).

For the domain itself, I consulted experts to see what mattered.

I'm not sure the time it would take to know what to prompt and to verify the results is much different.

Fun fact: management decided the SQL solution wasn't enterprisey enough, so they hired external consultants to build a system doing essentially that, but in Java, and formed an 8-person internal team to guide them. I heard they finished 2 years later with a lot of manual matching.


Let me explain the naysayers: they know "programmer" has always meant "builder", and just because search is better and you can copy and paste faster doesn't mean you've built anything. First, people need to realize that no proprietary code is in those databases, and using AI will ultimately just get you regurgitated things people don't really care about. Use it all you want; you won't be able to do anything interesting, because they aren't giving you valuable things for free. Anything of value will still take time and knowledge. The marketing hype is there to reduce wages and prevent competition. Go for it.

I want a "reddit" like discussion board where:

- Users don't have to pay to post links/stories
- Users have to pay to comment on links/stories
- Users have to pay to "upvote" comments. Downvotes don't exist
- Each link "lives" a certain amount of time before it is locked
- After lock time, users who posted the link get "paid" a % of the money collected from comments/upvotes. Comments that are upvoted also earn money proportionally to their upvotes
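The settlement rule described above is simple enough to sketch. A minimal, hypothetical Python version (the poster's percentage cut is an assumption; the comment doesn't specify one, nor how payouts are rounded):

```python
# Hypothetical payout settlement for a locked link: the poster takes
# a fixed cut of the pot collected from paid comments/upvotes, and the
# remainder is split among comments in proportion to their upvotes.
# The 20% poster cut and integer-cents rounding are assumptions.

def settle(pot_cents: int, upvotes_by_comment: dict[str, int],
           poster_cut: float = 0.20) -> dict[str, int]:
    """Return payouts in cents, keyed by 'poster' and by comment id."""
    poster_share = int(pot_cents * poster_cut)
    remainder = pot_cents - poster_share
    total_votes = sum(upvotes_by_comment.values())
    payouts = {"poster": poster_share}
    for cid, votes in upvotes_by_comment.items():
        # floor division; leftover fractional cents stay with the platform
        payouts[cid] = remainder * votes // total_votes if total_votes else 0
    return payouts
```

For example, a $10.00 pot with comments at 3 and 1 upvotes would pay the poster $2.00 and the commenters $6.00 and $2.00 respectively under these assumptions.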

Hashcash was conceived to fight automated email spam. Participating in a discussion must cost something; that's the only way bots and spam will get even partially stopped. Or, if they start optimizing to get "the most votes", then so be it: their content will increase in quality.


Paying users for their posts is what killed YouTube, Twitter, Facebook, Instagram... You will only get shitty ragebait comments. Not to mention that you have to link a bank account with your full name, etc.

This sounds like a platform that has no appeal to the average person, and an incredible appeal to people wishing to launder money or use money to run an influence campaign. Deliberately determining popularity proportionally to the amount of money spent is little different than advertising, but this would be under the false premise of "someone thought this was important/valuable enough to pay money to suggest I see it".

If this were to exist today, I know I would be incredibly critical of it.


Makes me think of how prediction markets have a Republican bias because some rich people just gotta bet on their tribe every time

https://aaltodoc.aalto.fi/server/api/core/bitstreams/4176474...

Every election I see internet-connected gym machines have the leaderboards spammed with right wing messages because some people don’t have to work and just spin all day.


I’m missing something. What’s the incentive for people to pay to upvote or comment?

It seems like that would lead to a proliferation of ragebait, deliberately controversial posts, and overly simplistic articles to attract the greatest amount of comments. I frequently see deeply technical high-value posts on HN with very few comments but each thread about politics ends up getting hundreds of comments.

What's stopping you from building it yourself?

You could build this on ATProto.

+1 let's make this

What a time to be a mouse!

No? I have recommended Freestyle sugar-free soda as a way to replace heavy Coca-Cola consumption. Here in Mexico it's a big problem, and it helped me get out of the addiction. (Add allulose to the soda for sweetness.)

Dr. Shore's device has been decades in development. It's been all the rage in r/tinnitus, r/tinnitusresearch, and tinnitus Facebook groups. Still, according to people who have tried it, it's no silver bullet.

I've had tinnitus for 25+ years and have followed a lot of the science. At some point, some Brazilian researchers found a drug that reduced tinnitus volume as a secondary effect. They wrote papers about it, but unfortunately, nothing came of it.

