DNS-Persist-01: A New Model for DNS-Based Challenge Validation (letsencrypt.org)
jcalvinowens 1 days ago [-]
Really happy to see this.

In the meantime, if you use bind as your authoritative nameserver, you can limit an hmac-secret to one TXT record, so each webserver that uses rfc2136 for certificate renewals is only capable of updating its specific record:

  key "bob.acme." {
    algorithm hmac-sha512;
    secret "blahblahblah";
  };
  
  key "joe.acme." {
    algorithm hmac-sha512;
    secret "blahblahblah2";
  };

  zone "example.com" IN {
   type master;
   file "/var/lib/bind/example.com.zone";
   update-policy {
    grant bob.acme. name _acme-challenge.bob.acme.example.com. TXT;
    grant joe.acme. name _acme-challenge.joe.acme.example.com. TXT;
   };
   key-directory "/var/lib/bind/keys-acme.example.com";
   dnssec-policy "acme";
   inline-signing yes;
  };
I like this because it means an attacker who compromises "bob" can only get certs for "bob". The server part looks like this:

  export LE_CONFIG_HOME="/etc/acme-sh/"
  export NSUPDATE_SERVER="${YOUR_NS_ADDR}"
  export NSUPDATE_KEY="/var/lib/bob-nsupdate.key"
  export NSUPDATE_KEY_NAME="bob.acme."
  export NSUPDATE_ZONE="acme.example.com."

  acme.sh --issue --server letsencrypt -d 'bob.example.com' \
        --certificate-profile shortlived \
        --days 6 \
        --dns dns_nsupdate
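One piece is implicit above: the cert is issued for bob.example.com while the grant covers _acme-challenge.bob.acme.example.com, so presumably a static CNAME in the main zone bridges the two. A sketch of that alias (my inference, not part of the original config):

```
; Assumed alias: validation lookups for bob.example.com land on the
; record that bob's hmac key is allowed to update.
_acme-challenge.bob.example.com. IN CNAME _acme-challenge.bob.acme.example.com.
```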
xav0989 11 hours ago [-]
If you're not using something like bind but are willing to run a dedicated DNS server for ACME challenges, acme-dns offers something similar. When you generate a new account, it is given a unique subdomain. You then CNAME the challenge domain to the acme-dns subdomain, and the account can only affect the associated subdomain.
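The acme-dns flow boils down to one static CNAME per domain. A sketch, with auth.example.org standing in for your acme-dns instance and the subdomain being whatever registration returned:

```
; Registering an acme-dns account returns a random subdomain; the
; account's credentials can only update TXT records under it.
_acme-challenge.example.com. IN CNAME d420c923-bbd7-4056-ab64-c3ca54c9b3cf.auth.example.org.
```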
TrueDuality 1 days ago [-]
I think this is solving a real operational pain point, definitely one that I've experienced. My biggest hesitation here is the direct exposure of the managing account identity, not the need to protect the account's key material; I already have to do that.

While "usernames" are not generally protected to the same degree as credentials, they do matter: they're an important gate an attacker has to get past before a real attack can commence. Exposure also makes it possible to associate randomly found credentials with the sites you can now issue certificates for, if they're using the same account. That's free scope expansion for any breach that occurs.

I guarantee sites like Shodan will start indexing these IDs on all domains they look at to provide those reverse lookup services.

liambigelow 1 days ago [-]
CAA records that include an accounturi already expose the account identity in the same manner, so I feel like that ship has already sailed somewhat (and I would prefer that the CAA and persist record formats match).
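For reference, an accounturi-pinned CAA record (RFC 8657) looks roughly like this, with the account URL being illustrative:

```
; Only Let's Encrypt, and only this ACME account, may issue for example.com.
example.com. IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345"
```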
Bender 4 hours ago [-]
I think the difference is that, using the existing DNS method, listing the account is entirely optional. I have left it out on domains that I don't want correlated for that very reason.
krunck 1 days ago [-]
Exactly. They should provide the user with a list of UUIDs (or other random-ish IDs tied to the actual account) that can be used in the accounturi URL for these operations.
gsich 1 days ago [-]
The account is the same as you create in any acme client. I don't see potential for a reverse lookup.
Ayesh 1 days ago [-]
I think the previous post is talking about a search that will find the sibling domain names that have obtained certificates with the same account ID. That is a strong indication that those domains are in the same certificate renewal pipeline, most likely on the same physical/virtual server.
mschuster91 1 days ago [-]
Run the ACME client inside a Docker container, one instance (and one set of credentials) per domain name. It doesn't consume many resources. The real problem is IP addresses anyway; CT logs "thankfully" feed information to every bad actor in real time, which makes data mining trivially easy.
cortesoft 23 hours ago [-]
You don't even need a Docker container to do that.
mschuster91 23 hours ago [-]
Agreed, that's just a personal preference of mine. Harder to mess up and easier to route.
solatic 14 hours ago [-]
I'm surprised that this doesn't require DNSSEC or at the very least actively encourage configuring DNSSEC. While I used to be fully in the camp that DNSSEC was way more trouble than it was worth, in particular when access was de-facto secured by trusted CA certificates, more and more DNS record types (CAA, CERT, SSHFP, these TXT records) are responsible for storing information that can be manipulated in MITM attacks to seize control of a root of trust.

Of course, this has little applicability to anyone small enough not to have nation-state-level actors in their threat model. But when I look behind the curtain of even Fortune 100 companies that really ought to have such actors in their threat model, too often you find people who are just not operating at that level, or who are swamped with unrelated work. So I'm coming around to the opinion that guidance should change here and at the very least become a documented recommendation: if it's not encouraged down the organizational size scale, too often it's not applied further up where it's needed.

ajnin 12 hours ago [-]
DNSSEC is encouraged ("SHOULD" wording) in the RFC draft: https://datatracker.ietf.org/doc/html/draft-ietf-acme-dns-pe...
solatic 11 hours ago [-]
The RFC wording is a little weird. If the zone has DNSSEC configured, the wording should be stronger and use MUST, not imply that CAs will be compliant if they choose to skip verifying signatures despite their presence. Likewise, these dns-persist-01 TXT records ideally "SHOULD NOT" be deployed when DNSSEC is not configured.
pepdar 3 hours ago [-]
An open PR on the draft (#35) adds exactly this language: if a CA performs DNSSEC validation and it fails (expired signatures, broken chain of trust), the CA MUST treat it as a challenge failure and MUST NOT use the record. The rationale is that dns-persist-01 records are long-lived, so a DNSSEC failure has more severe consequences than it would for a transient challenge.
Ajedi32 8 hours ago [-]
DNS has always been a single-point-of-failure for TLS cert issuance. The threat is real, but not at all unique to this validation method.

(For example, an attacker with control of DNS could switch the A record to their server and use that to pass HTTP-01 or TLS-ALPN-01 validation, or update the _acme-challenge TXT record and use that to pass DNS-01.)

paulnpace 9 hours ago [-]
When it comes to DNSSEC, I wish it were TLSA, but for whatever reason that was pretty much not supported anywhere, most notably by browsers.
Ajedi32 1 days ago [-]
This is going to make it way easier to get publicly trusted certs for LAN servers that aren't internet facing.

I'm looking forward to every admin UI out there being able to generate a string you can just paste into a DNS record to instantly get a Let's Encrypt cert.

kami23 22 hours ago [-]
Just experienced this with my heavily networked-off openclaw setup. I gave up and will do manual renewals until I have more time to figure out a good way of doing it. I was trying to get a cert for some headscale MagicDNS setups, but I think that's way more complicated than I thought it would be.
CaliforniaKarl 22 hours ago [-]
For folks who use certbot, here is where they are tracking work on support for this feature: https://github.com/certbot/certbot/issues/10549
bob1029 1 days ago [-]
I've changed my mind about the short-lived cert stuff after seeing what's enabled by IP address certificates with the HTTP-01 validation method. I don't even bother writing the cert to disk anymore. A background thread checks whether the current instance of the cert is null or older than 24h. The cert selector in aspnetcore just looks at this reference and blocks until it's not null.

Being able to distribute self-hostable software to users that can be deployed onto a VM and made operational literally within 5 minutes is a big selling point. Domain registration & DNS are a massive pain to deal with at the novice end of the spectrum. You can combine this with things like https://checkip.amazonaws.com to build properly turnkey solutions.
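The in-memory pattern described above can be sketched generically like this (not aspnetcore; `issue` stands in for whatever performs the real ACME order):

```python
import threading
import time

CERT_MAX_AGE = 24 * 3600  # reissue once the in-memory cert is a day old

class CertHolder:
    """Holds the current cert in memory only; it is never written to disk."""
    def __init__(self, issue):
        self._issue = issue          # callable performing the ACME issuance
        self._cert = None
        self._born = 0.0
        self._ready = threading.Event()

    def refresh_loop(self, poll=1.0):
        # Background thread: reissue when the cert is missing or stale.
        while True:
            if self._cert is None or time.monotonic() - self._born > CERT_MAX_AGE:
                self._cert = self._issue()
                self._born = time.monotonic()
                self._ready.set()
            time.sleep(poll)

    def current(self):
        # The TLS selector blocks here only until the first issuance lands;
        # after that it just reads the latest reference.
        self._ready.wait()
        return self._cert
```

In a real server you'd add error handling and jitter around the reissue check, but the shape — a swap-in-place reference plus a blocking first read — is the whole trick.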

cube00 1 days ago [-]
Pretty risky, given that Let's Encrypt's rate limits are non-negotiable, with no choice but to wait them out.
muvlon 1 days ago [-]
They are quite literally negotiable: https://isrg.formstack.com/forms/rate_limit_adjustment_reque...

There are also a bunch of rate limit exemptions that automatically apply whenever you "renew" a cert (https://letsencrypt.org/docs/rate-limits/#non-ari-renewals), i.e. whenever you request a cert and there is already an issued certificate for the same set of identifiers.

dextercd 23 hours ago [-]
Your comment is 100% correct, but I just want to point out that this doesn't negate the risks of bob's approach here.

LE wouldn't see this as a legitimate reason to raise rate limits, and such a request takes weeks to handle anyway.

Indeed, some rate limits don't apply for renewals but some still do.

inahga 1 days ago [-]
You should persist certs somewhere. Otherwise your availability is heavily tied to LE’s uptime.
tialaramex 23 hours ago [-]
Technically, because Let's Encrypt always publishes all requested certificates to the logs (this isn't mandatory, it's just easier for most people, so Let's Encrypt always does it), your tool can go look in the logs to get the certificate. You do need to know your private key; nobody else ever knew that, so if you don't have it you're done.
plagiat0r 6 hours ago [-]
X509 certificates published in CT logs are "pre-certificates". They contain a poison extension, so you can't use them with your private key.

The final certificate (without the poison extension, and with the SCT proof) is usually not published in any CT log, but you can submit it yourself if you wish.

tialaramex 1 hours ago [-]
Although the poisoned pre-certificates† are logged as a necessary part of offering the least-hassle product, which is the business Let's Encrypt are in, they, like most CAs, also log the finished certificate shortly after.

Here's the pre-certificate for this web site's current certificate:

https://crt.sh/?id=23696530376

and here, just a few entries later in the log, is the finished certificate:

https://crt.sh/?id=23696528656

This is good practice, but it's also just easier. Things sometimes do go wrong, and when the trust store says "hey, please provide all certificates you issued with these properties", if you've logged them they are right there, published for everybody to see: no bother, no risk. If you haven't, you need your own storage and had better hope there weren't any mistakes. I'm sure LE do have their own copies if they needed them, but it sure is nice to know that's not what you're betting on.

† Poisoned pre-certificates are a "temporary" hack so that the certificate logging system can be demonstrated. If we ever really wanted this of course we'd develop a proper solution instead, right? Right? Every experienced software engineer knows that "temporary" usually means permanent in practice and so nobody was surprised by how this turned out.

xyzzy_plugh 20 hours ago [-]
Now you depend on the CT log providers' uptime, which as far as I can tell is worse than LE's.
pests 15 hours ago [-]
Google, Cloudflare, Let's Encrypt, DigiCert, GlobalSign and others run logs. Here is the list Chrome checks: https://www.gstatic.com/ct/log_list/v3/log_list.json

While they do not have formal SLAs, they still have to comply with rules enforced by the browser vendors, who will remove a log from CT checks and mark it retired/untrusted (you can find some examples in the above list).

Those rules mean 99% uptime on a 90-day rolling average and a 1-minute update frequency for new entries (24 hours in an older RFC), plus no split views, strict append-only, sharding by year, etc.

I think OP's original idea would work.

plagiat0r 6 hours ago [-]
As in my other comment: certificates published in CT logs are "pre-certificates" containing a poison extension, so you can't use them with your private key, and the final certificate (without the poison extension, with the SCT proof) is usually not published in the logs.

So OP's idea won't work unless OP submits the final certificate to the CT logs himself.

zamadatix 1 days ago [-]
Yeessss! This should finally make certificates for internal-only web services actually easier to orchestrate than they were before ACME. This closes probably the biggest operational pain point I've had with letsencrypt/modern web certificates.

Thank you so much to all involved!

jmholla 1 days ago [-]
There's a missing part here, and that's validating your ACME account ownership.

I think most users depend on automation that creates their accounts, so they never have to deal with it. But now, you need to propagate some credential to validate your account ownership to the ACME provider. I would have liked to see some conversation about that in this announcement.

I'm not familiar with Let's Encrypt's authentication model, but if they don't have token creation that can be scoped to a target domain, I expect you'll need to create separate accounts for each of your target domains, or else anything holding that secret can get a cert for any domain your account controls.

mschuster91 1 days ago [-]
> There's a missing part here, and that's validating your ACME account ownership.

Why? ACME accounts have credentials so that the ACME client can authenticate against the certificate issuer, and ACME providers require the placement of a DNS record or a .well-known HTTP endpoint to verify that the account is authorized to act upon the demands of whoever owns the domain.

If either your ACME credentials leak out or, even worse, someone manages to place DNS records or hijack your .well-known endpoint, you got far bigger problems at hand than someone being able to mis-issue SSL certificates under your domain name.

jmholla 16 hours ago [-]
> Why? ACME accounts have credentials so that the ACME client can authenticate against the certificate issuer, and ACME providers require the placement of a DNS record or a .well-known HTTP endpoint to verify that the account is authorized to act upon the demands of whoever owns the domain.

That describes the previous models. With DNS-Persist-01, the record is permanent and never changes, so to prove that a request is valid they need to authenticate it in some other manner. Otherwise, once you created that persistent record, anybody could request a cert for your domain.

Edit: Spivak explains the flow differences better in their comment: https://news.ycombinator.com/item?id=47065821

stego-tech 7 hours ago [-]
This is a blessing for us dynamic DNS folks whose DNS providers demand a static IP for changes to come from (e.g., Namecheap). In theory, it means we can set this up once (or on a schedule that works for our needs) and trust that renewals will happen without continued maintenance or involvement.

Eager to give this a try as I modernize the homelab.

IgorPartola 23 hours ago [-]
Am I just stupidly missing something or does this in theory allow anyone who controls a DNS server for my domain or anyone who controls traffic between LE and the DNS server for my domain to get a TLS certificate they can use to impersonate my domain?

I suppose the same is true for DNS-01, but this would make it even easier, because the attacker can just put their own LE account, instead of mine, into the DNS response and get a certificate.

At this point why not just put my public cert into a DNS record and be done with it?

gurjeet 23 hours ago [-]
If you don't trust your DNS provider to _not_ do malicious acts against you, you shouldn't be in that relationship.

If someone can perform a MITM attack between Let's Encrypt and a DNS server, we've got bigger problems than just certificate issuance.

pests 15 hours ago [-]
This is why the big names pay MarkMonitor $250-$1000 per domain, with a minimum $10,000/yr spend.

They have a good reputation, lock down the domain technically at all levels, and have the connections and people/social skills to take care of any domain issues involving person-to-person contact.

Which is not easy, I recall spending months like a decade ago on email/phone attempting (successfully) to get my personal domain out of expiry hell (made more complicated by wrong records).

bombcar 23 hours ago [-]
Yes, anyone who controls your DNS can get a TLS certificate from anyone who offers them - because, uh, they control your DNS!

Try to figure out a way to block me from getting a TLS certificate if I can modify your DNS.

IgorPartola 23 hours ago [-]
That’s fair, but I also have to trust every provider between my DNS server and LE’s servers not to intercept DNS responses. Since DNS isn’t encrypted, anyone anywhere between them can modify the traffic and get a certificate, if I understand correctly.
mcpherrinm 23 hours ago [-]
Two current mitigations and one future:

DNSSEC prevents any modification of records, but isn’t widely deployed.

We query authoritative nameservers directly from at least four places, over a diverse set of network connections, from multiple parts of the world. This (called MPIC) makes interception more difficult.

We are also working on DNS over secure transports to authoritative nameservers, for cases where DNSSEC isn’t or won’t be deployed.

IgorPartola 22 hours ago [-]
Ah, that makes sense. I was wondering why I haven’t heard of cases of successful attacks like this. Thank you for the info!
echoangle 23 hours ago [-]
If I control your DNS, I can also just pass the HTTP ACME challenge. Whoever controls the DNS basically owns the domain anyway.
msmith 21 hours ago [-]
To mitigate the threat from an attacker who controls the network between the cert issuer and the DNS server, CAs will check the DNS records from multiple vantage points.

Let's Encrypt has been doing this for several years, and it's a requirement for all CAs as of 2024 [1].

[1] https://cabforum.org/2024/08/05/ballot-sc067v3-require-domai...

itintheory 1 days ago [-]
I'm really excited for this. We moved 120+ hand-renewed certs to ACME, but still manually validate the domains annually. Many of them are on private/internal load balancers (no HTTP-01 challenge possible), and our DNS host doesn't support automation (no DNS-01 challenges either). While manually renewing the DCV for ~30 domains once a year isn't too bad, when the lifetime of that validation shrinks, ultimately to 9 days, it'd become a full-time job. I just hope Sectigo implements this as quickly as LE.
arccy 12 hours ago [-]
Note that you can delegate the _acme-challenge subdomain to a validation-specific server or zone (that is, a different server that does support automation) if you can't or don't want to change your main DNS provider.

https://letsencrypt.org/docs/challenge-types/#:~:text=This%2...
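The delegation amounts to a single static CNAME in the main zone; a sketch with illustrative names:

```
; In the zone at the provider without an API (set once):
_acme-challenge.app.example.com. IN CNAME app.acme-auth.example.net.

; acme-auth.example.net lives on a DNS server you can automate;
; the ACME client writes the per-issuance TXT record there.
```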

9dev 1 days ago [-]
For the love of god, switch to a DNS provider with an API. Whatever legacy behemoth you’re working with doesn’t justify a gap this wide.
eichin 20 hours ago [-]
What open source DNS servers have an API? (I saw someone elsewhere in the thread talking about doing this with dnsmasq, but it sounded like they'd cobbled something together, rather than the software handling it.)
aragilar 12 hours ago [-]
I personally wouldn't use dnsmasq for this (it's far more suited to being a recursive server and DHCP provider with some basic authoritative records than an authoritative-only server), but every open source authoritative DNS server worth using has RFC 2136 support.
quicksilver03 12 hours ago [-]
PowerDNS has an API which works pretty well; I've been using it to generate ACME certificates for a few years, and I also built a DNS hosting service around it.
amluto 23 hours ago [-]
Name one that doesn’t have an AWS-style per-query cost.

(There might well be a nice one, but I haven’t found it yet.)

zufallsheld 16 hours ago [-]
Hetzner does not charge any money for their dns service and they have an api.
Hetzner_OL 12 hours ago [-]
Hi there, Hetzner here. Thanks for mentioning us. For anyone who is interested:
  • https://www.hetzner.com/dns/
  • https://docs.hetzner.com/networking/dns
  • https://docs.hetzner.cloud/reference/cloud#tag/zones
--Katie
amluto 9 hours ago [-]
Are there docs for how to create and configure the API keys?
toast0 23 hours ago [-]
If it's for a business, I would contact them to see if they have a commercial offering, but I think the Hurricane Electric Free DNS might actually fit.

https://dns.he.net/

amluto 19 hours ago [-]
Interestingly, HE’s commercial offerings are in some respects excellent, but their login system is every bit as primitive as the free stuff.
nfredericks 23 hours ago [-]
Might be obvious, but Cloudflare
amluto 23 hours ago [-]
No. Cloudflare will give a key scoped to an entire administrative domain in the Cloudflare sense like “a.com”. They will not give you a key scoped to a single entry within that domain. (That entry would be a domain in the RFC 9499 sense, but do you really expect anyone to agree on the terminology?)

In particular, there is no support for getting a key scoped to _acme-challenge.a.b.c or, even better, to a particular RR.

Maybe if you have an enterprise plan you can very awkwardly fudge it using lots of CNAMEs and subdomains.

Some DNS hosts that support old-school dynamic DNS can do this. dns.he.net is an example, but their login system is very much stuck in the nineties.

dboreham 21 hours ago [-]
Cloudflare DNS isn't fully functional (at least for me). Can't be used for general purpose DNS hosting imho.
radiator 23 hours ago [-]
Hetzner DNS
jcgl 9 hours ago [-]
desec.io
gerdesj 20 hours ago [-]
My LE experience (HTTP-01 at first, now DNS-01): it's a bit of a palaver. I don't have to open port 80, which is nice for ... security audits, but gains zero security benefit.

I have a PowerDNS server running locally with a static IPv4 address via NAT and I have created a DNS domain and enabled dynamic DNS updates from certain IPv4 addresses with a pre-shared key.

For each cert you need a DNS CNAME, in a specific format, pointing to my DNS domain. Then we have to get to grips with software to do the deed: acme.sh is superb for !Windows; simple-acme is fine for Windows. I still set up each one by hand instead of using ansible/Zenworks/whatever because I'm a sucker for punishment and still small enough, for now.

DNS-Persist-01 is not something I think I will ever need but clearly someone does.

mscdex 1 days ago [-]
After having to deal with VM hosts that do GeoIP blocking, which unintentionally blocks Let's Encrypt and others from properly verifying domains via http-01/tls-alpn-01, I settled on a DIY solution that uses CNAME redirects and a custom, minimal DNS server for handling the redirected dns-01 challenges. It's essentially a greatly simplified version of the acme-dns project tailored to my project's needs (and written in node.js instead of Go).

Unfortunately, with dns-persist-01 including account information in the DNS record itself, that's a bit of a showstopper for me. If/when account information changes, DNS records need changing, and getting clients to update their DNS records (for any reason) has long been a pain.

pepdar 3 hours ago [-]
Key rotation doesn't change the account URI — ACME key rollover (RFC 8555 §7.3.5) replaces the key pair but keeps the same account URL, which is one of the reasons the draft uses account URI rather than a public key. Your DNS record stays unchanged through key rotations.

The only case that requires a DNS update is creating an entirely new account, and that's deliberate — the record binds a specific account to the domain so a stolen record can't be used by someone else.

For your setup with CNAME delegation to a custom DNS server, this should actually be simpler than dns-01. You would point _validation-persist instead of _acme-challenge, and the target record is static. No per-issuance dynamic updates at all.
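As a sketch of that delegation (the _validation-persist label is from this thread; the TXT payload below is schematic, since the draft's exact grammar also carries a timestamp field — check the spec before deploying):

```
; Set up once in the main zone:
_validation-persist.example.com. IN CNAME persist.auth.example.net.

; Long-lived record on the delegated server binding the domain to
; one ACME account (account URL illustrative):
persist.auth.example.net. IN TXT "accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345"
```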

basilikum 1 days ago [-]
> The timestamp is expressed as UTC seconds since 1970-01-01

That should be TAI, right? Is that really correct, or do they actually mean unix timestamps (which shift with leap seconds, unlike TAI, which is actually just the number of seconds that have passed since 1970-01-01T00:00:00Z)?

wtallis 1 days ago [-]
Do leap seconds even matter here? Doing anything involving DNS or certificates in a way that requires clock synchronization down to the second would seem to be asking for trouble.
tialaramex 22 hours ago [-]
Abolition of the Leap Second is basically a done deal. So, the differences caused by leap seconds will become frozen as arbitrary offsets, GPS time versus UTC for example.

Basically when it was invented leap seconds seemed like a good idea because we assumed the inconvenience versus value was a good trade, but in practice we've discovered the value is negligible and the inconvenience more than we expected, so, bye bye leap seconds.

The body responsible has formal treaty promises to make UTC track the Earth's spin, and replacing those treaties is a huge pain. So the "hack" proposed is to imagine into existence a leap minute or even a leap hour that could correct for the spin; in practice those will never be used either, because they're even less convenient than a leap second. But by the time anyone is asked to set a date for these hypothetical changes, the signatory countries likely won't exist and their successors can just sign a revised treaty. Countries only tend to last a few hundred years; look at the poor US, which is preparing 250th anniversary celebrations while also approaching civil war.

aragilar 12 hours ago [-]
Which is how we get the October revolution in November.
imtringued 13 hours ago [-]
I thought about it, and leap seconds on their own don't seem to be a real problem. The problem is that leap seconds, minutes, hours, days, years, etc. are a human-interface concept and therefore only make sense to humans, but we've decided to force machines to deal with these human-interface concepts as the primary way of handling time, when only the presentation layer should even know what a leap second is.
tialaramex 7 hours ago [-]
Leap seconds are not a human interface concept. Humans don't care. People who haven't thought very hard about this tend to believe humans care but they don't.

If humans cared the existing systems couldn't exist. For more than a century we've all lived with time "zones" which are thousands of seconds wide and we're not bothered by that. Many of us have civil time systems which shift twice per year by 3600 seconds for really no good reason, and while that's annoying it's barely worth a brief mention on TV news or in small talk. Leap seconds are 3600 times smaller and happen way less often, they're entirely negligible.

They existed because we thought we cared, and we actually don't care, and we thought it was pretty easy to do, and it actually wasn't very easy after all.

toast0 23 hours ago [-]
Probably, yeah; seconds don't really matter here. You'd have to work hard for the 27-second difference to be material. But precision is nice.

Unix time is almost certainly what the standard means, but it is not the count of UTC seconds since 1970: unix time is the number of seconds since 1970 as if all days had 86400 seconds. UTC, TAI, and GPS seconds are all the same length, and the same number of them have elapsed since 1970, but TAI reads 37 seconds ahead of UTC because every TAI day has 86400 seconds while some UTC days have 86401, and TAI was already about 10 seconds ahead of UTC when leap seconds began in 1972. Unix time and UTC stay in sync because unix time lets some days encompass 86401 UTC seconds while only counting 86400 of them.
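The distinction can be made concrete in a few lines (27 being the number of leap seconds inserted between 1972 and 2017):

```python
import datetime

# Unix time pretends every day has exactly 86400 seconds, so it can be
# computed from the calendar date alone, ignoring leap seconds entirely.
def unix_time(dt: datetime.datetime) -> int:
    epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
    return (dt - epoch) // datetime.timedelta(seconds=1)

# 2017-01-01T00:00:00Z: unix time reports 1483228800 seconds since the
# epoch, but 27 leap seconds had been inserted by then, so a count of
# actually elapsed seconds would be 27 higher (plus the ~10 s offset
# already present when leap-second UTC began in 1972).
ts = unix_time(datetime.datetime(2017, 1, 1, tzinfo=datetime.timezone.utc))
print(ts)  # 1483228800
```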

newsoftheday 1 days ago [-]
Today I do the following:

/usr/bin/letsencrypt renew -n --agree-tos --email me@example.com --keep-until-expiring

Will I need to change that? Will I need to manually add custom DNS entries to all my domains?

PS To add, compared to dealing with some paid certificate services, LetsEncrypt has been a dream.

dextercd 1 days ago [-]
This adds a new validation method that people can use if they want. The existing validation methods (https://letsencrypt.org/docs/challenge-types/) aren't going away, so your current setup will keep working.
jsheard 1 days ago [-]
And to elaborate, the reasons you might want to use a DNS challenge are to acquire wildcard certificates, or to acquire regular certificates on a machine or domain which isn't directly internet-facing. If neither of those apply to you then the regular HTTP/TLS methods are fine.
newsoftheday 1 days ago [-]
OK I was sort of thinking that might be the case but wanted to make sure in case I had to start prepping now, thanks. We use no wildcard domains today, maybe down the road.
bombcar 23 hours ago [-]
Wildcard certs are a great way to get certs for all your "internal systems" while only having to expose one name (or a bit of one, in DNS) to the Internet at large.

This is going to greatly simplify some of my scripts.

newsoftheday 1 days ago [-]
This is good news. I'm not sure I got that from reading the article, but even if I had to do it, it wouldn't be the end of the world, I guess.
rmoriz 21 hours ago [-]
I would have loved to see mandatory DNSSEC requirements
micw 1 days ago [-]
I wonder why they switched from a super-secure, super-complex (in terms of operations) way of doing DNS auth to a super-simple, no-cryptography-involved method that just relies on the account id.

Why not use some public/private key auth, where the DNS contains a public key and the requesting server uses the private key to sign the cert request? This would decouple the authorization from the actual account. It would not reveal the account's identity, and it could be used with multiple accounts (useful for a wildcard in the DNS plus several independent systems requesting certs for subdomains).

tptacek 1 days ago [-]
The most common vector for DNS-based attacks on issuance is compromised registrar accounts, and no matter how complicated you make the cryptography, if you're layering it onto the DNS, those attacks will preempt the cryptography.
raquuk 15 hours ago [-]
Using the account identifier in the record, with LE mapping the identifier to a public key internally, enables key rotation without touching the record again.
Spivak 1 days ago [-]
Because LE keeps a mapping of account ids to emails and public keys. You have to have the private key to the ACME account to issue a cert. The cryptography is still there but the dance is done by certbot behind the scenes.

Prior to this, accounts were nearly pointless, since proof of control was checked every time, so people (rightfully) just threw away the account key LE generated for them. Now, if you use PERSIST, you have to keep it around and deploy it to the servers that should be able to issue certs.

qwertox 1 days ago [-]
This will make things so much easier.

Here, certbot runs in Docker in the intranet, and on a VPS I have a custom-built nameserver to which all the _acme-challenge are redirected to via NS records.

The system in the intranet starts certbot, which receives the token-domain pairs from letsencrypt; it then sends those pairs to the nameserver, which attaches each token to a TXT record for its domain so the DNS reply can present it to letsencrypt on request.

All that will be gone and I thank you for that! You add as much value to the internet as Wikipedia or OpenStreetMap.

tripdout 19 hours ago [-]
For local services, I don't see the benefit of using DNS challenges and a Let's Encrypt certificate over running my own CA and generating my own certificates. It's not that much work to trust my root certificate on each device, and then I don't need an internet connection to verify local service certificates.
sebiw 19 hours ago [-]
> It's not that much work to trust my root certificate on each device

Sure, but is trusting your homebrewed CA on all your devices for essentially everything really a good idea?

When your homebrewed CA somehow gets compromised, all your devices are effectively compromised and not only for local connections, but everything that uses PKIX.

NewJazz 18 hours ago [-]
8organicbits 9 hours ago [-]
Make sure all the TLS clients you use have support for name constraints. When I evaluated this in 2023, Chrome was in the process of adding support. I'd love to see a caniuse style analysis of TLS features, people assume they work but support varies.
Hamuko 16 hours ago [-]
I can either add a Cloudflare API key and Certbot on my NAS, or I could generate a root certificate and add it to my desktop computers, laptop, tablet, phones, Apple TV, etc.

Doesn't seem that tough of a choice. I guess in the future I could even forego the Cloudflare API key and just have the persistent DNS record there once.

csense 1 days ago [-]
To get a Let's Encrypt wildcard cert, I ended up running my own DNS server with dnsmasq and delegating the _acme-challenge subdomain to it.
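For reference, that kind of delegation needs only a couple of records in the parent zone (names and address here are illustrative, not from the comment):

```
; Parent zone: hand the challenge label off to a dedicated nameserver
_acme-challenge.example.com.  IN  NS  acme-ns.example.com.
acme-ns.example.com.          IN  A   203.0.113.10
```

The dedicated server then only ever answers for the challenge label, so its compromise can't affect the rest of the zone.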

Pasting a challenge string once and letting its continued presence prove continued ownership of a domain is a great step forward. But I agree with others that there is absolutely no reason to expose account numbers; it should be a random ID associated with the account in Let's Encrypt's database.

As a workaround, you should probably make a new account for each domain.

bombcar 23 hours ago [-]
Your account ID is exposed in the certificate generated; what's the real difference?
pepdar 5 hours ago [-]
The ACME account URI does not appear in issued certificates. X.509 certs contain the subject, issuer, SANs, validity period, SCTs, etc., but no ACME account identifier. You can verify this by inspecting any Let's Encrypt certificate. What CT logs do reveal is which CA issued certs for which domain(s), and multi-domain certs group SANs together, so some correlation is possible. But the account URI itself is not exposed — dns-persist-01 records in DNS would be a new exposure surface for that identifier. That's a real tradeoff, which is why the draft supports using separate accounts per domain if isolation matters to you.
Spivak 1 days ago [-]
You bothered to manage your LE accounts? I only say that because, with the other two challenge types in most deployment scenarios, you were generating a new account per cert, so your account ID was just a string of random numbers.
mmh0000 1 days ago [-]
I really like and hate this at the same time.

Years ago, I had a really fubar shell script for generating the DNS-01 records on my own (non-cloud) authoritative nameserver. It "worked," but its reliability was highly questionable.

I like that DNS-PERSIST fixes that.

But I don't understand why they chose to include the account as a plain-text string in the DNS record. Seems they could have just as easily used a randomly generated key that wouldn't mean anything to anyone outside Let's Encrypt, and without exposing my account to every privacy-invasive bot and hacker.

Ajedi32 1 days ago [-]
> they could have just as easily used a randomly generated key

Isn't that pretty much what an accounturi is in the context of ACME? Who goes around manually creating Let's Encrypt accounts and re-using them on every server they manage?

ragall 1 days ago [-]
Those who choose to use DNS-PERSIST-01 should fully commit to automation and create one LetsEncrypt account per FQDN (or at least per loadbalancer), using a UUID as username.
mcpherrinm 1 days ago [-]
There is no username in ACME besides the account URI, so the UUID you’re suggesting isn’t needed. The account URIs themselves just contain a number (a database primary key).

If you’re worried about correlating between domains, then yes just make multiple accounts.

There is an email field in ACME account registration but we don’t persist that since we dropped sending expiry emails.

9dev 1 days ago [-]
It’s still a valid point IMHO - why not just use the public key directly? It seems like the account URI just adds problems instead of resolving any.
mcpherrinm 23 hours ago [-]
It has these primary advantages:

1. It matches what the CAA accounturi field has

2. It’s consistent across an account, making it easier to set up new domains without needing to make any API calls

3. It doesn’t pin a users key, so they can rotate it without needing to update DNS records - which this method assumes is nontrivial, otherwise you’d use the classic DNS validation method

glzone1 1 days ago [-]
Interesting.

I didn't realize the email field wasn't persisted. I assumed it could be used in some type of account recovery scenario.

bflesch 22 hours ago [-]
> But I don't understand why they chose to include the account as a plain-text string in the DNS record.

Simple: it's for tracking. Someone paid for that.

Ayesh 1 days ago [-]
I'm surprised the ballot passed, unanimously even! I get that storing the DNS credentials in the certificate renewal pipeline is risky, but many DNS providers have granular API access controls, so it is already possible to limit the surface area in case the keys get leaked. Plus, you can revoke the keys easily.

The ACME account credentials are also accessible by the same renewal pipelines that have the DNS API credentials, so this does not provide any new isolation.

~It's also not quite clear how to revoke this challenge, or how domain expiration interacts with it. The DNS record contents should have been at least an HMAC of the account key, the FQDN, and something that invalidates if the domain is transferred somewhere else. The leaf DNSSEC key would have been perfect, but DNSSEC key rotation is also quite broken, so it wouldn't play nice.~

Is there a way to limit the challenge types with CAA records? You can limit it by account number, and I believe that is the tightest control available so far.

---

Edit: thanks to the replies to this comment, I learned that this would provide invalidation simply by removing the DNS record, and that the DNS records are checked at renewal time with a much shorter validation TTL.

amluto 1 days ago [-]
> but many DNS providers have granular API access controls

And many providers don't. (Even big ones that are supposedly competent like Cloudflare.)

And basically everyone who uses granular API keys is storing a cleartext key, which is no better, and possibly worse, than storing a credential for an ACME account.

agwa 1 days ago [-]
> It's also not quite clear how to revoke this challenge, or how domain expiration interacts with it

CAs can cache the record lookup for no longer than 10 days. After 10 days, they have to check it again. If the record is gone, which would be expected if the domain has expired or been transferred, then the authorization is no longer valid.

(I would have preferred a much shorter limit, like 8 hours, but 10 days is a lot better than the current 398 day limit for the original ACME DNS validation method.)
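The recheck rule described above reduces to a simple time comparison. A CA-side sketch, assuming the 10-day maximum from the ballot (the constant and function are illustrative, not from any CA's codebase):

```python
# Sketch: a cached dns-persist-01 lookup is only reusable within the
# allowed window; after that the CA must re-query the record, and a
# missing record (expired/transferred domain) ends the authorization.
from datetime import datetime, timedelta, timezone

REUSE_LIMIT = timedelta(days=10)  # CA/B ballot maximum per this thread

def must_recheck(validated_at: datetime, now: datetime) -> bool:
    """True once the cached validation is older than the reuse limit."""
    return now - validated_at > REUSE_LIMIT

print(must_recheck(datetime(2025, 1, 1, tzinfo=timezone.utc),
                   datetime(2025, 1, 12, tzinfo=timezone.utc)))  # True
```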

mcpherrinm 1 days ago [-]
We (Let’s Encrypt) also agree 10 days seems too long, so we are migrating to 7 hours, aligning with the restrictions on CAA records.
mcpherrinm 1 days ago [-]
Yes, you can limit both challenge types and account URIs in CAA records.

To revoke the record, delete it from DNS. Let’s Encrypt queries authoritative nameservers with caches capped at 1 minute. Authorizations that have succeeded will soon be capped at 7 hours, though that’s independent of this challenge.
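Concretely, such a CAA record might look like this (the account number is a placeholder; the accounturi and validationmethods parameters follow RFC 8657):

```
; Restrict issuance to one ACME account and one validation method
example.com.  IN  CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/123456789; validationmethods=dns-persist-01"
```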

mcpherrinm 1 days ago [-]
This wasn’t the first version of the ballot, so there was substantial work to get consensus on a ballot before the vote.

CAs were already doing something like this (CNAME to a dns server controlled by the CA), so there was interest from everyone involved to standardize and decide on what the rules should be.

UltraSane 22 hours ago [-]
I use AWS Route53, and you can get incredibly granular with API permissions.

Condition keys for this purpose include:

    route53:ChangeResourceRecordSetsActions: Limits actions to CREATE, UPSERT, or DELETE.

    route53:ChangeResourceRecordSetsRecordTypes: Limits actions to specific DNS record types (e.g., A, CNAME, TXT).

    route53:ChangeResourceRecordSetsRecordValues: Limits actions based on the specific value of the DNS record.

    route53:ChangeResourceRecordSetsResourceRecords: For more complex scenarios, this can be used to control access based on the full record set details.
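A policy using these keys might look roughly like this (a sketch, not a tested policy: the hosted-zone ID is a placeholder, and the exact condition-key names and matching semantics should be verified against the Route53 IAM documentation):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "route53:ChangeResourceRecordSets",
    "Resource": "arn:aws:route53:::hostedzone/Z0EXAMPLE",
    "Condition": {
      "ForAllValues:StringEquals": {
        "route53:ChangeResourceRecordSetsRecordTypes": ["TXT"],
        "route53:ChangeResourceRecordSetsActions": ["CREATE", "UPSERT", "DELETE"]
      }
    }
  }]
}
```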
chaz6 1 days ago [-]
Is it possible to create an ACME account without requesting a certificate? AFAICT it is not, so you cannot use this method unless you have first requested a certificate via some other method. I hope I am wrong!
dextercd 1 days ago [-]
An account needs to be created before you can request a certificate. Some ACME clients might create the account for you implicitly when you request the first certificate, but in the background it still needs to start by registering an account.

`certbot register` followed by `certbot show_account` is how you'd do this with certbot.

chaz6 1 days ago [-]
Great, thank you!
plagiat0r 6 hours ago [-]
Most acmev2 clients create the account on certificate request.

That is precisely why I wrote this: https://github.com/pawlakus/acmecli

This small tool will allow you to just create, rekey and deactivate your acmev2 account(s).

Havoc 1 days ago [-]
Interesting. Think a lot of the security headaches went away for me when I discovered providers like CF can restrict the scope of tokens to a single domain and lock it to my IP.
amluto 1 days ago [-]
Even CF cannot restrict the scope of a token to a single host.
cube00 1 days ago [-]
Or a single DNS record.
blahgeek 19 hours ago [-]
Given the name of DNS-01, you would think it would be called DNS-02...
pepdar 10 hours ago [-]
I'm one of the draft authors. Several questions here touch real design tradeoffs — addressing the main threads:

Why account URI instead of a public key in the record? (micw, 9dev, csense)

Three reasons:

1. Key rotation without DNS changes. dns-persist-01 exists because DNS updates are expensive. Embedding a public key forces a DNS update on every key rotation — the exact problem this method solves. The account URI survives key rotation (RFC 8555 §7.3.5).

2. CAA alignment. The accounturi parameter matches CAA record syntax (RFC 8657 §3). Domain owners use the same identifier in validation and policy records.

3. Simplicity. Matching uses simple string comparison — no key encoding, no signature verification, no algorithm negotiation. The cryptographic binding between account URI and key pair happens inside ACME, where it belongs.
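The string comparison in point 3 can be sketched in a few lines of Python. The `accounturi=` record shape follows the description in this thread; treat the parsing details as illustrative, not as the draft's normative grammar:

```python
# Illustrative sketch of dns-persist-01 matching: the CA parses the TXT
# record's parameters and compares accounturi against the requesting
# account's URI with a plain string comparison. No key decoding or
# signature verification happens at this layer.

def parse_persist_record(txt: str) -> dict:
    """Parse 'key=value; key=value' parameters from a TXT record string."""
    params = {}
    for part in txt.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:
            params[key.strip()] = value.strip()
    return params

def record_authorizes(txt: str, account_uri: str) -> bool:
    """True iff the record's accounturi exactly matches the requesting account."""
    return parse_persist_record(txt).get("accounturi") == account_uri

record = "accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/123456789"
print(record_authorizes(record, "https://acme-v02.api.letsencrypt.org/acme/acct/123456789"))  # True
print(record_authorizes(record, "https://acme-v02.api.letsencrypt.org/acme/acct/42"))         # False
```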

"Exposing account numbers" / privacy (csense, mmh0000, bflesch)

The account URI is opaque — a URL containing a database key, like https://acme-v02.api.letsencrypt.org/acme/acct/123456789. No email, no name. The privacy exposure is modest: it reveals which CA account controls the domain, similar to what CT logs already show about CA-domain relationships, but with explicit account-level grouping. If you want isolation between domains, use separate accounts.

The accounturi binds validation to a specific account so a stolen DNS record can't be used by a different subscriber. An open PR (#35) adds accounturi to the challenge object so clients can verify it before provisioning.

10-day reuse limit (agwa)

The 10-day maximum comes from the CA/Browser Forum ballot (SC-088), not the IETF draft. The draft defers reuse period to CA policy and caps it at the DNS TTL (see "Validation Data Reuse and TTL Handling" in the Security Considerations). Let's Encrypt is migrating to 7 hours. The TTL cap lets domain owners enforce shorter windows directly.
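The interaction between the policy maximum and the TTL cap reduces to taking the smaller of the two. A sketch (values in seconds; the 7-hour figure is simply Let's Encrypt's stated target):

```python
# Effective validation-reuse window: the CA's policy maximum, further
# capped by the dns-persist record's own TTL, so a domain owner can
# shorten the window directly by publishing a short TTL.

def reuse_window(policy_max_s: int, record_ttl_s: int) -> int:
    return min(policy_max_s, record_ttl_s)

print(reuse_window(7 * 3600, 86400))  # long TTL: policy wins -> 25200
print(reuse_window(7 * 3600, 3600))   # 1-hour TTL shortens the window -> 3600
```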

Mandatory DNSSEC (rmoriz)

Requiring DNSSEC would exclude most domains and block adoption. The current draft specifies DNSSEC validation as SHOULD. An open PR (#35) tightens this: if a CA performs DNSSEC validation and it fails — expired signatures, broken chain of trust — the CA MUST reject the record. This is stricter than general ACME guidance because dns-persist-01 records are long-lived. MPIC (multi-perspective validation) provides the primary defense against on-path attacks regardless of DNSSEC.

Unencrypted DNS queries (1vuio0pswjnm7)

Yes, standard DNS queries are unencrypted. The threat model relies on MPIC — querying from multiple vantage points — not transport encryption. DNSSEC adds an integrity layer where available.

CAA interaction (Ayesh)

Yes. A CAA record with validationmethods=dns-persist-01 combined with accounturi restricts who can validate using this method.

Name-constrained intermediate CAs (infogulch)

Separate problem. dns-persist-01 reduces operational cost of leaf cert issuance by eliminating per-issuance DNS interaction. Delegated intermediates shift the trust model. Both could coexist.

Draft: <https://github.com/ietf-wg-acme/draft-ietf-acme-dns-persist> (PR #35 is an open pull request on the draft with several of the improvements mentioned above.)

plagiat0r 6 hours ago [-]
Thank you, this draft is literally perfect and I wish we had this years ago. Most people don't know about acmev2 account rekeying either. It is great you decided to use account uri instead of public key thumbprint.

Recently I wrote a simple acmev2 tool specifically for manual upfront acmev2 account creation, rekeying, and printing TXT records on stdout for dns-persist-01:

https://github.com/pawlakus/acmecli

It also helps with stateless http-01 by printing the account thumbprint...

8organicbits 8 hours ago [-]
> Requiring DNSSEC would exclude most domains and block adoption.

I think this is a good call. For the web, the CAB sets CA requirements and they could choose to require DNSSEC at a later date. It would be a breaking change, but the CAB can, and has, made breaking changes to the BR. The IETF draft seems flexible enough that we could end up with a DNSSEC MUST for the web, in practice, based on the CAB's discretion.

ocdtrekkie 1 days ago [-]
This might be the first time in ten years that a certificate proposal intends to make issuing certificates more reasonable and not less. More of this, less of 7-day-lifetime stupidity.
infogulch 24 hours ago [-]
This is a nice increment in ACME usability.

Once again I would like to ask CA/B to permit name constrained, short lifespan, automatically issued intermediate CAs. Last year's request: https://news.ycombinator.com/item?id=43563676

dangoodmanUT 21 hours ago [-]
Love this, such a better method
1vuio0pswjnm7 21 hours ago [-]
Is it false that DNS requests sent from LE to authoritative nameservers are unencrypted?
1vuio0pswjnm7 6 hours ago [-]
Answer: "Yes, standard DNS queries are unencrypted. The threat model relies on MPIC - querying from multiple vantage points - not transport encryption. DNSSEC adds an integrity layer where available."

https://news.ycombinator.com/item?id=47073054

CqtGLRGcukpy 1 days ago [-]
"Support for the draft specification is available now in Pebble, a miniature version of Boulder, our production CA software. Work is also in progress on a lego-cli client implementation to make it easier for subscribers to experiment with and adopt. Staging rollout is planned for late Q1 2026, with a production rollout targeted for some time in Q2 2026."
aaomidi 1 days ago [-]
This is significantly better than my draft of DNS-ACCOUNT-01. Thank you Let's Encrypt team!
cyberax 1 days ago [-]
Ah, the next step towards True DANE!

We then can just staple the Persist DNS key to the certificate itself.

And then we just need to cut out the middleman and add a new IETF standard for browsers to directly validate the certificates, as long as they confirm the DNS response using DNSSEC.

tptacek 1 days ago [-]
This decreases the salience of DANE/DNSSEC by taking DNS queries off the per-issuance critical path. Attackers targeting multitenant platforms get only a small number of bites at the apple in this model.
NoahZuniga 1 days ago [-]
DNS queries are still part of the critical path, as Let's Encrypt needs to check that the account is still allowed to receive a cert before each issuance.
cyberax 1 days ago [-]
Sure. It's yet another advantage of doing True DANE. But it still requires DNS to be reliable for the certificate issuance to work, there's no way around it.

So why not cut out the middleman?

(And the answer right now is "legacy compatibility")

tptacek 1 days ago [-]
I mean, the reason not to do DANE is that nobody will DNSSEC-sign, because DNSSEC signing is dangerous.
cyberax 1 days ago [-]
Come on. It's not dangerous, it's just inconvenient and clumsy. So nobody is really using it.
akerl_ 1 days ago [-]
Ok, it's inconvenient and clumsy in ways that make it easy to shoot oneself in the foot. But that's not dangerous?
cyberax 1 days ago [-]
When you shoot yourself in the foot with DNSSEC, you typically end up with a non-working setup.

The biggest problem is that DNS replies are often cached, so fixes for the mistakes can take a while to propagate. With Let's Encrypt you typically can fix stuff right away if something fails.

tptacek 24 hours ago [-]
When you shoot yourself in the foot with DNSSEC, your entire domain falls off the Internet, as if it had never existed in the first place. It's basically the worst possible failure mode, and it's happened to multiple large shops; Slack being the most notorious recent example.
cyberax 23 hours ago [-]
Yes, and it'd be great if DNSSEC added an "advisory" signature level. So it can be deployed without doing a leap of faith.

But let's not pretend that WebPKI is perfect. More than one large service failed at some point because of a forgotten TLS certificate renewal. And more than one service was pwned because a signing key leaked. Or a wildcard certificate turned out to be more wildcard than expected.

I understand the failures of DNSSEC and DNS in general. And we need to do something about it because it's really showing signs of its age as we continue to pile on functionality onto it.

I don't have an idea for a good solution for everything, but I just can't imagine us piling EVERYTHING onto WebPKI either.

akerl_ 22 hours ago [-]
> But let's not pretend that WebPKI is perfect.

You're commenting on a post about LetsEncrypt working with other entities in the industry to make improvements to WebPKI. It's safe to say that nobody's claiming it's perfect.

But you can't go from ~"WebPKI isn't perfect" and ~"DNSSEC/DANE exist" and draw a magic path where using DNSSEC or DANE is actually a good thing for people to roll out. They'd need to be actually a good fit, and for DANE we have direct evidence that it isn't: a rollout was attempted and it was walked back due to multiple issues.

tptacek 23 hours ago [-]
I don't really understand most of this comment but you opened up this subthread with "Come on. It's not dangerous", and, as you're acknowledging here, it clearly is quite dangerous.
cyberax 20 hours ago [-]
DNSSEC is not dangerous. Pretty much the worst thing is breakage, not an accidental compromise.

It's also more secure, compared to ACME. An on-path attacker can impersonate the site operator and get credentials. DNSSEC is immune to that.

tptacek 20 hours ago [-]
This is a very strange definition of "dangerous".
cyberax 17 hours ago [-]
I'm mostly thinking about dangerous from the security point of view. I agree that it might not be the best from the operational point of view. DNSSEC in its current state makes DNS updates even more risky than they are, I agree with that.
tptacek 16 hours ago [-]
You remember what CIA stands for, right?
Borealid 15 hours ago [-]
In order for an attacker to reduce a site's Availability via DNS they must alter the records received by resolvers.

If they can do that, they can just refuse to send the records at all (or mangle them such that they are ignored). DNSSEC makes the situation no worse.

It does, however, increase Integrity.

For the record, the 'A' in CIA refers to resilience against some party's purposeful attempt to make something unavailable. It does not stand for Areliability or Asimplicity.

akerl_ 10 hours ago [-]
> For the record, the 'A' in CIA refers to resilience against some party's purposeful attempt to make something unavailable.

That’s pretty clearly not correct.

Borealid 3 hours ago [-]
Care to explain what you think is correct, if that is incorrect?

CIA is about security. It's not about some kind of operational best practices.

Supporting example: creating a system where someone failing to enter their password correctly one time locks them out for a day is problematic, because that system can be made unavailable by an attacker. This is not an Available system, and thus not as secure as one that has a more flexible lockout policy.

Supporting example: creating a system where an application is only available from one IP address is problematic, because an attacker can take out one ISP and knock that IP address off the Internet. Making the system more Available by allowing users to access it from other IPs improves the overall security posture.

akerl_ 2 hours ago [-]
I'm not sure why you're trying to build up what CIA means by inventing scenarios.

https://en.wikipedia.org/wiki/Information_security#CIA_triad

> For any information system to serve its purpose, the information must be available when it is needed.[78] This means the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly.[79] High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades.[80] Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down.[81]

https://www.fortinet.com/resources/cyberglossary/cia-triad

> If, for example, there is a power outage and there is no disaster recovery system in place to help users regain access to critical systems, availability will be compromised. Also, a natural disaster like a flood or even a severe snowstorm may prevent users from getting to the office, which can interrupt the availability of their workstations and other devices that provide business-critical information or applications. Availability can also be compromised through deliberate acts of sabotage, such as the use of denial-of-service (DoS) attacks or ransomware.

https://online.utulsa.edu/blog/what-is-the-cia-triad/

> Software bugs or misconfigurations. Incorrect software configurations or glitches can cause system outages.
