bilalq's comments (Hacker News)

Look into git reflog. If the changes were committed, it was almost certainly possible to still restore them, even if the commit is no longer in your branch.
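A minimal sketch of the recovery flow in a throwaway repo (file names and commit messages are invented for illustration):

```shell
# Demo: a commit dropped by `git reset --hard` is still reachable via the reflog.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "base"
echo "important work" > notes.txt
git add notes.txt
git -c user.email=you@example.com -c user.name=you commit -q -m "work I care about"
git reset -q --hard HEAD~1                # the commit vanishes from the branch...
git reflog | head -3                      # ...but the reflog still records it
git checkout -q 'HEAD@{1}' -- notes.txt   # restore the file from the reflog entry
cat notes.txt                             # prints: important work
```

The same idea works with `git cherry-pick` or `git reset --hard HEAD@{1}` if you want the whole commit back rather than one file.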

There are probably other tools like this that keep version history based on filesystem events, independent of the project's git repository:

https://www.jetbrains.com/help/idea/local-history.html


This has its own risk factors. If your domain renewal lapses due to credit card expiry or something and you fail to notice, it's catastrophic. This is just not realistic advice for the average person.


You can usually purchase 10 years up front. But then you should set a reminder for every 3 years or so to keep topping up, or else you'll forget how to even sign into the registrar.

You're right that having a vanity domain for your primary email address isn't for the faint of heart. There isn't any realistic advice for the average person because it's not for the average person.


Not really? You just jump in and renew the domain. You have 75 days before a lapsed domain is released into general availability.

Sure, you'll likely miss some emails, but otherwise it's safe.


Xcode and Pages being a delight in comparison to VSCode and Notion is certainly one of the takes of all time.

Xcode is usually the first example that comes to mind of a terrible native app, in comparison to the much nicer VSCode.


These were absolutely incredible when they first opened, right up until covid. The Blue Apron-style meal kits they had were actually really tasty, and the gimmicky Alexa integration that told you the next step in the recipe was genuinely useful when you were busy stirring a pot or cutting something and couldn't pull out the recipe card. It was like a 7-Eleven, but with the prices of a normal grocery store and higher quality prepared food. Not needing to deal with checkout felt freeing. I substituted many grocery store runs with a quick walk over to the original Amazon Go back in the day.

After covid, it was never the same: open for shorter windows, closed on Sundays, reduced selection, no more meal kits, etc.

I had many friends who worked on Amazon Go, so it's a bit sad to see that work come to an end.


> I had many friends who worked on Amazon Go, so it's a bit sad to see that work come to an end.

What did they do?


This question is surprising to me, because I consider AI code review the single most valuable aspect of AI-assisted software development today. It's ahead of line/next-edit tab completion, agentic task completion, etc.

AI code review does not replace human review. But AI reviewers will often notice little things that a human may miss. Sometimes the things they flag are false positives, but it's still worth checking in on them. If even one logical error or edge case gets caught by an AI reviewer that would've otherwise made it to production with just human review, it's a win.

Some AI reviewers will also factor in context of related files not visible in the diff. Humans can do this, but it's time consuming, and many don't.

AI reviews are also a great place to put "lint"-like rules that would be complicated to express in standard linting tools like ESLint.

We currently run 3-4 AI reviewers on our PRs. The biggest problem I run into is outdated knowledge. We've had AI reviewers leave comments based on limitations of DynamoDB or whatever that haven't been true for the last year or two. And of course it feels tedious when 3 bots all leave similar comments on the same line, but even that is useful as reinforcement of a signal.


(And in my experience, if GPT-5 misunderstands something, then that thing is either async Python, modern F#, or in need of better comments.)


I have one and a half decades of muscle memory burned in with inoremap jj <Esc>`^

It's not something I can just shift away from.


Definitely let your keybinding keep you from trying out and using a fantastic tool.


I already use Graphite today on top of git. Others are using alternatives like Sapling, etc.

To go back to your question around why people still use these workarounds on top of git, it's because the CLI is just one piece of it. With Graphite, I also get a stack-aware merge queue and review dashboard.


This is really exciting. Step Functions were a big improvement over SWF and the Flow framework, but declarative workflow authoring sucks from a type-safety standpoint. Workflows-as-code is the way to go, and that was missing from AWS. Can't wait to build on top of this.


You're probably already planning this, but please set up an alarm that fires if a new package release is published that isn't correlated with a CI/CD run.


Or require manual intervention to publish a new package. I'm not sure why we need to have a fully automated pipeline here to go from CI/CD to public package release. It seems like having some kind of manual user interaction to push a new version of a library would be a good thing.


The basic issue with manual interaction is a question of authority: a pretty common problem for companies (and open source groups) is when $EARLY_EMPLOYEE/$EARLY_CONTRIBUTOR creates and owns the entire publishing process for a key package, and then leaves without performing a proper transfer of responsibility. This essentially locks the company/group out of its own work, and increases support load on community maintained indices to essentially adjudicate rightful ownership of the package name.

(There are a variety of ways to solve this, but the one I like best is automated publishing à la Trusted Publishing with environment-mediated manual signoffs. GitHub and other CI/CD providers enable this.)
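A rough sketch of what this can look like on GitHub Actions (workflow and environment names are placeholders, and it assumes the package has a trusted publisher configured on npmjs.com and a recent npm CLI with OIDC support): the `release` environment is configured in repo settings to require a manual reviewer, and the short-lived OIDC token replaces any stored long-lived npm credential.

```yaml
# Hypothetical release workflow: Trusted Publishing via OIDC,
# gated on an environment that requires human approval.
name: release
on:
  push:
    tags: ['v*']
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: release   # configure "required reviewers" on this environment
    permissions:
      id-token: write      # short-lived OIDC token; no npm secret in the repo
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm publish   # authenticates via the OIDC trust relationship
```

The human signoff happens when a reviewer approves the environment deployment, so there is no user-owned credential for an attacker to exfiltrate and replay later.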


I don’t buy this. I mean, I’m sympathetic to the issue. It’s easy for things to get jumbled when companies are young. But we’re talking about libraries that are used by your customers. To many, this is the externally visible interface to the company.

What you describe sounds like a process problem to me. If an $EARLY_EMPLOYEE is the only one with the deploy keys for what is a product of the company, then that's a problem. If a deployment of that key library can be made without anyone approving it, that's also a problem. But those are both people problems… and you can't solve a people problem with a technical solution.


> But those are both people problems… and you can’t solve a people problem with a technical solution.

I don’t think it is a people problem in this case: the only reason there’s a person involved at all is because we’ve decided to introduce one as an intermediating party. A more misuse-resistant scheme disintermediates the human, because the human was never actually a mandatory part of the scheme.


What person do you want to remove from the process?


The person who intermediates the trust relationship between the index and the source repository. There’s no reason for the credential that links those two parties to be intermediated by a human; they’re two machine services talking.

(You obviously can’t disintermediate the human from maintenance or development!)


But that’s the person I think is mandatory.

You're saying that whatever is in the source repository should be uploaded to the npm index, right? If the code is tagged as a release, the built artifact is automatically uploaded to npm. Is that what you're proposing?

That's exactly what got PostHog into this position. The npm publishing keys were available such that an engineer, or GitHub itself, could push a malware build to npm automatically. This isn't a technical issue… it's a process issue. I don't see the problem as the keys being misused. I see the problem as it being possible to misuse the keys at all. Why do you need that process to be automatic? How often are you pushing new updates?

I would argue that those npm assets/libraries are your work product. That is what your customer needs to use your service. It is a published product from your company. It is too important to allow a new version to be published out to the public without a human in the loop to approve it.

When you have a fully automatic publishing cycle, you’re trading security for convenience. It’s all about how much risk you’re willing to accept. For me, that’s too much of a risk to the reputation to the company. I also think the balance shifts if you’re talking about purely internal assets, having completely automatic ci/cd makes perfect sense for most companies. For me, it is about who is hurt if there is an issue (and you should expect for there to be an issue).

Putting a person in the loop for releasing a product is one way to solve this. It’s not perfect, but at the moment, I think it’s the most secure (for the public).


> You’re saying that whatever is in the source repository should be uploaded in the npm index, right? If the code is tagged as release, the built artifact is automatically uploaded to npm. Is that what you’re proposing?

No, I'm saying that the source repository should act as an authentication principal itself. A human should still initiate the release process, but the authentication process that connects the source repository (more precisely CI/CD) to the index should not involve a credential that's implicitly bound to a human identity (because the human's role within a project or company is ephemeral).

As far as I can tell, what got PostHog into this situation wasn't a fully automated release process (very few companies/groups have fully automated processes), but the fact that they had user-created long-lived credentials that an attacker could store and weaponize at a time most convenient to them. That's a problem regardless of whether there's normally a human in the loop or not, because the long-lived credential itself was sufficient for publishing.

(In other words, we basically agree about human approval being good; what I'm saying is that we should formalize human approval without making the authentication scheme explicitly require an intermediating party who doesn't inherently represent the actual principal, i.e. the source repository.)


I think we agree more than we don’t and the rest are personal preferences and policy differences. But we largely agree in principle.

I like the idea of having a person whose job is approving releases. Kind of like a QC tag — this release was approved by XX. I saw the issue as PostHog having a credential available to the CI/CD that had the authority to push releases automatically. When a new GitHub action was added, that credential was abused to push a bad update to npm. I might be wrong, I don’t deal with npm that much.

There are many ways to fix this.


You can't "require" manual intervention. Sure, you can say that the keys stay on, say, 2 developers' laptops, but personal devices have even more surface area for key leaks than a CI/CD pipeline. It wouldn't have prevented attacks like this one in any case where the binary just searches for keys across the system.

One alternative is to do the signing on an air-gapped system stored in a physically safe but accessible location, but I guess that's just way too much inconvenience.


As someone else mentioned, the easiest way would be to have some kind of MFA in the loop. It’s not perfect, but better than what we have now.


I get that it can be useful sometimes. But requiring physical MFA to make a package available to the general public seems like a no-brainer to me.

Users who really want to could opt in to the bleeding edge.


This is orthogonal to the issue at hand. The problem is a malicious actor cutting a release outside of the normal release process. It doesn't matter if the normal process is automated or manual.


It could have eliminated an attack surface where they steal the credentials from the CI/CD...

...But then, if I understand npm publishing correctly, you would still have the credentials lying around on someone's computer? I guess you could always revoke the tokens after publishing? It's all balancing convenience and security, with some options being bad at both?


This is built into npm. You can get an email on every package publish.

Sure, it might be a little bit of noise, but if you get a notice @ 3am of an unexpected publishing, you can jump on unpublishing it.


Very nice way of putting it, kudos!


I did a double-take when I read that as well. I went and checked the license under rubygems, and sure enough, it's standard MIT with no warranties.

https://github.com/rubygems/rubygems/blob/master/LICENSE.txt


I'm willing to bet the people who published that have no idea what they just said, and probably don't understand what the MIT license contains.


They are talking about the rubygems.org package hosting service...


Right, take a look at Section 8 of the Terms of Service (https://rubygems.org/policies/terms-of-service):

THE SERVICE IS PROVIDED STRICTLY ON AN “AS IS” AND “AS AVAILABLE” BASIS, AND PROVIDER MAKES NO WARRANTY THAT THE SERVICE IS COMPLETE, SUITABLE FOR YOUR PURPOSE, RELIABLE, USEFUL, OR ACCURATE.

What warranty are they providing exactly?


It comes with a 100,000 mile drive train warranty.


Yeah we've been meaning to talk to you about that warranty actually


What warranty does it come with?



Yep right there in the TOS:

a. THE SERVICE IS PROVIDED STRICTLY ON AN “AS IS” AND “AS AVAILABLE” BASIS, AND PROVIDER MAKES NO WARRANTY THAT THE SERVICE IS COMPLETE, SUITABLE FOR YOUR PURPOSE, RELIABLE, USEFUL, OR ACCURATE.


That's not the only thing though. E.g., they collect PII, so I assume there are regulations to abide by, etc.


They're storing PII in the github repo they kicked the core maintainers out of?

Just stop trying to make excuses for these people. They screwed up, and based on this press release, don't seem to have any interest in actually correcting those mistakes.


The license is not an accurate way to check if there is a warranty or not.


I'm not a lawyer, so maybe a silly question: is it possible the software license is different from service warranty? And I guess another thing that comes to mind is that maybe they didn't mean _legal_ warranty, but something that was used colloquially?


The MIT license is a copyright license. The developer is free to offer a warranty or any other contract they want.


This is an excellent write-up of the problem. New hires out of college/bootcamps often have no awareness of the risks here at all. Sometimes even engineers with years of experience but no operational mentorship in their career.

The kitchen sink example in particular is one that trips people up. Without knowing the specifics of how a library may deal with failure edge cases, it can catch you off guard (e.g., axios errors including API key headers).

A lot of these problems come from architectures where secrets go over the wire instead of just using signatures/ids. But in cases where you have to use some third party platform, there's often no choice.

