
The "client cert" requirements were specifically not a CABF rule because that would rule it out for everyone complying with those rules, which is much broader than just the CAs included in Chrome.

Some CAs will continue to run PKIs which support client certs, for use outside of Chrome.

In general, the "baseline requirements" are intended to be just that: A shared baseline that is met by everyone. All the major root programs today have requirements which are unique to their program.


Thanks for chiming in! I remember now that you also said this on the LE community forum.

Right, that explains it. So the use would be for things other than websites or for websites that don't need to support Chrome (and also need clientAuth)?

I guess I find it hard to wrap my head around this because I don't have experience with any applications where this plus a publicly trusted certificate makes sense. But I suppose they must exist, otherwise there would've been an effort to vote it into the BRs.

If you or someone else here knows more about these use cases, then I'd like to hear about it to better understand this.


Are you asking why an HTTPS server would need to use client auth outside of the browser? The answer is mTLS. If you want to use one cert for your one domain to serve both "normal" browser content and HTTPS APIs with mTLS, your cert needs to be able to do it all.


The server that wants to authenticate clients via mTLS doesn't need the clientAuth EKU on its certificate, only the clients do.

Most of the time you set up mTLS by creating your own self-signed certificate and verifying that the client has that cert (or one that chains up to it). I'm wondering what systems exist that need a publicly trusted cert with clientAuth.
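
For illustration, here's a minimal Go sketch of that typical private-CA mTLS setup (the file names are hypothetical; the server's own cert could be publicly trusted while the client CA is your own):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Trust only clients whose certs chain to our private CA.
        caPEM, err := os.ReadFile("client-ca.pem") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            log.Fatal("failed to parse client CA certificate")
        }

        server := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                ClientCAs:  pool,
                ClientAuth: tls.RequireAndVerifyClientCert, // reject clients without a valid cert
            },
        }
        // server.pem / server-key.pem: the server's own certificate, which
        // needs only the serverAuth EKU for this to work.
        log.Fatal(server.ListenAndServeTLS("server.pem", "server-key.pem"))
    }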

The only thing I've heard of so far is XMPP server-to-server auth, but there are alternative auth methods it supports.


This is a two-sided solution: one significant reason for shorter certificate lifetimes is that they help make revocation work better.


I chose 160 hours.

The CA/B Forum defines a "short-lived" certificate as one valid for at most 7 days, which comes with some reduced revocation requirements that we want. That 7-day figure, in turn, was chosen based on previous requirements on OCSP responses.

We chose a value that's under the maximum, which we do in general, to make sure we have some wiggle room. https://bugzilla.mozilla.org/show_bug.cgi?id=1715455 is one example of why.

Those are based on the rough idea that responding to any incident (an outage, etc.) might take a day or two. Assuming renewal of a certificate or OCSP response midway through its lifetime, you need at least 2 days for incident response plus another day to re-sign everything, so the lifetime needs to be at least 6 days; the requirement was then rounded up another day (to allow the wiggle room previously mentioned).

Plus, in general, we don't want to align to things like days or weeks or months, or else you can get "resonant frequency" type problems.

We've always struggled with people doing things like renewing on a cronjob at midnight on the 1st monday of the month, which leads to huge traffic surges. I spend more time than I'd like convincing people to update their cronjobs to run at a randomized time.
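
A hedged sketch of what that advice looks like in code, in Go (the two-thirds target and the jitter width are arbitrary choices for illustration, not anything Let's Encrypt prescribes):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // renewAt picks a renewal time roughly two-thirds through the cert's
    // lifetime, plus random jitter in [-lifetime/24, +lifetime/24), so a
    // fleet of clients doesn't renew in lockstep.
    func renewAt(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        target := notBefore.Add(lifetime * 2 / 3)
        jitter := time.Duration(rand.Int63n(int64(lifetime / 12)))
        return target.Add(jitter - lifetime/24)
    }

    func main() {
        issued := time.Now()
        expires := issued.Add(160 * time.Hour) // the lifetime discussed above
        fmt.Println("renew at:", renewAt(issued, expires))
    }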


I have always been a bit puzzled by this. By issuing fixed-lifetime certificates you practically guarantee oscillation. If you have a massive traffic spike from, say, a CDN mass-reissuing after a data breach, you are guaranteed to have the same spike [160 - $renewal_buffer] hours later.

Fuzzing the lifetime of certificates would smooth out traffic and discourage hardcoded values, and most importantly, statistical analysis of CT logs could add confidence that these validity windows are not carefully selected to further a cryptographic or practical attack.

A https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number if you will.


There is a solution for smoothing out the traffic: RFC 9773, the ACME Renewal Information (ARI) extension.

https://datatracker.ietf.org/doc/rfc9773/
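
To sketch what an ARI-aware client does (Go, with a hypothetical renewalInfo URL and a placeholder certificate ID; real clients derive the ID from the certificate's Authority Key Identifier and serial number, and get the base URL from the ACME directory):

    package main

    import (
        "encoding/json"
        "fmt"
        "math/rand"
        "net/http"
        "time"
    )

    // renewalInfo mirrors the ARI response body: a server-suggested
    // renewal window, plus an optional explanation URL.
    type renewalInfo struct {
        SuggestedWindow struct {
            Start time.Time `json:"start"`
            End   time.Time `json:"end"`
        } `json:"suggestedWindow"`
        ExplanationURL string `json:"explanationURL,omitempty"`
    }

    func main() {
        // Hypothetical endpoint and cert ID, for illustration only.
        resp, err := http.Get("https://acme.example.org/renewal-info/" + "CERT_ID_GOES_HERE")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var info renewalInfo
        if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
            panic(err)
        }

        // Picking a uniformly random instant inside the suggested window
        // is what spreads renewals out instead of stampeding.
        window := info.SuggestedWindow.End.Sub(info.SuggestedWindow.Start)
        if window <= 0 {
            panic("empty suggested window")
        }
        renew := info.SuggestedWindow.Start.Add(time.Duration(rand.Int63n(int64(window))))
        fmt.Println("renew at:", renew)
    }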


That only addresses half the problem, and it's just a suggestion rather than something clients can't ignore.


Some ACME clients that I think currently support IP addresses are acme.sh, lego, traefik, acmez, caddy, and cert-manager. Certbot support should hopefully land pretty soon.


cert-manager maintainer chiming in to say that yes, cert-manager should support IP address certs - if anyone finds any bugs, we'd love to hear from you!

We also support ACME profiles (required for short-lived certs) as of v1.18, which is our oldest currently supported[1] version.

We've got some basic docs[2] available. Profiles are set on a per-issuer basis, so it's easy to have two separate ACME issuers, one issuing longer lived certs and one issuing shorter, allowing for a gradual migration to shorter certs.

[1]: https://cert-manager.io/docs/releases/ [2]: https://cert-manager.io/docs/configuration/acme/#acme-certif...


I'm sure this is a difference-of-learning or whatever, but I'm usually unwilling to try a product until I can understand what it is and how it works from the documentation.


Understandable. Our current take is that there's not really much to know, and that the people this will really light up are good with that. Of course, we'll flesh out documentation!

I'm really jazzed about this particular product as a product (I just really enjoy using it), but the post is mostly about how we built it, and deliberately not much about how best to use it.


You can look at who the "Stratum 2" servers are, in the NTP.org pool and otherwise. Those are servers that sync from Stratum 1 servers, such as NIST's.

Anyone can join the NTP.org pool so it's hard to make blanket statements about it. I believe there's some monitoring of servers in the pool but I don't know the details.

For example, Ubuntu systems point to their Stratum 2 timeservers by default, and I'd have to imagine that NIST is probably one of their upstreams.

An NTP server usually has multiple upstream sources and can steer its clock to minimize the error across them, as well as detect misbehaving servers and reject them ("falsetickers"). Different NTP server implementations may do this a bit differently.
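
As a toy illustration of combining multiple sources (using the third-party github.com/beevik/ntp package; real daemons like chrony and ntpd are far more sophisticated):

    package main

    import (
        "fmt"
        "sort"
        "time"

        "github.com/beevik/ntp"
    )

    func main() {
        servers := []string{"0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"}
        var offsets []time.Duration
        for _, s := range servers {
            resp, err := ntp.Query(s)
            if err != nil {
                continue // unreachable servers are simply skipped
            }
            offsets = append(offsets, resp.ClockOffset)
        }
        if len(offsets) == 0 {
            fmt.Println("no NTP servers reachable")
            return
        }
        // Taking the median offset is robust against a single
        // "falseticker" reporting a wildly wrong time.
        sort.Slice(offsets, func(i, j int) bool { return offsets[i] < offsets[j] })
        fmt.Println("median clock offset:", offsets[len(offsets)/2])
    }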


I've been running servers for the pool for years. They are checked regularly for accuracy and uptime, or their score goes down in the pool and they eventually get removed. I sync from 5 Stratum 1 servers and use chrony these days.

Facebook had a really interesting engineering blog about building their own timeservers: https://engineering.fb.com/2020/03/18/production-engineering...

Really well written for anyone who is interested.


Hm, it's supposed to be https://letsencrypt.org/docs/integration-guide/ - but it looks like the link is broken. I'll fix it.


It was actually outside of my small apartment with bad lighting


Let’s Encrypt does operate CT logs. I wrote a blog post about our current-generation logs at https://letsencrypt.org/2024/03/14/introducing-sunlight


Let’s Encrypt currently has a single primary with a handful of replicas, split across a primary and backup DC.

We’re in progress of adopting Vitess to shard into a handful of smaller instances, as our single big database is getting unwieldy.


Let’s Encrypt is an incredible project and the internet is better off for it. If you ever have questions about vitess or need help please let me know.


Thanks. Would love to see a tech blog post once you get Vitess implemented.


We’ve already started drafting it :)

