OpenSSL 4.0.0 (github.com)
capitol_ 1 day ago [-]
Finally encrypted client hello support \o/
bombcar 1 day ago [-]
Is this something that we can enable "today" or is it going to take 12 years for browsers and servers to support?
arcfour 1 day ago [-]
Cloudflare has supported it since 2023 (https://blog.cloudflare.com/announcing-encrypted-client-hell...) and Firefox has had it enabled by default since version 119 (https://support.mozilla.org/en-US/kb/faq-encrypted-client-he...), so you can use it today.
1vuio0pswjnm7 21 hours ago [-]
"... so you can use it today."

What if he wanted to use it for requesting blog.cloudflare.com?

   ;; ANSWER SECTION:
   blog.cloudflare.com. 300 IN HTTPS 1 . alpn="h3,h2" ipv4hint=104.18.28.7,104.18.29.7 ipv6hint=2606:4700::6812:1c07,2606:4700::6812:1d07
Where are the ECH keys?

For example,

   ;; ANSWER SECTION:
   test.defo.ie. 300 IN HTTPS 1 . ech="AEb+DQBCqQAgACBlm7cfDx/gKuUAwRTe+Y9MExbIyuLpLcgTORIdi69uewAEAAEAAQATcHVibGljLnRlc3QuZGVmby5pZQAA"
or

   ;; ANSWER SECTION:
   cloudflare-ech.com. 300 IN HTTPS 1 . alpn="h3,h2" ipv4hint=104.18.10.118,104.18.11.118 ech="AEX+DQBBpQAgACB/RU5hAC5mXe3uOZtNY58Bc8UU1cd4QBxQzqirMlWZeQAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA=" ipv6hint=2606:4700::6812:a76,2606:4700::6812:b76
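(For the curious: the ech value in those records is a base64-encoded ECHConfigList. A minimal Python sketch of my own, assuming the 0xfe0d wire version shown above and reading only the first config, can recover the outer public_name:)

```python
import base64
import struct

def ech_public_name(b64: str) -> str:
    """Extract public_name from the first ECHConfig in a base64 ECHConfigList."""
    data = base64.b64decode(b64)
    off = 2                                    # skip 2-byte ECHConfigList length
    version, _cfg_len = struct.unpack_from(">HH", data, off)
    assert version == 0xFE0D, "only the ECH draft-13 wire version is handled here"
    off += 4
    off += 1                                   # config_id
    off += 2                                   # kem_id
    (pk_len,) = struct.unpack_from(">H", data, off)
    off += 2 + pk_len                          # HPKE public key
    (cs_len,) = struct.unpack_from(">H", data, off)
    off += 2 + cs_len                          # cipher suites
    off += 1                                   # maximum_name_length
    name_len = data[off]
    off += 1
    return data[off:off + name_len].decode("ascii")

print(ech_public_name(
    "AEb+DQBCqQAgACBlm7cfDx/gKuUAwRTe+Y9MExbIyuLpLcgTORIdi69uewAEAAEAAQATcHVibGljLnRlc3QuZGVmby5pZQAA"
))  # prints: public.test.defo.ie
```

The cloudflare-ech.com record above parses the same way.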
It's true one can "use it today". One could have used it for the past several years as well; the software has been around for a while.

But ECH has never been consistently enabled for the general public beyond a small number of sites that exist only for testing ECH.

bombcar 1 day ago [-]
https://tls-ech.dev indicates that Safari doesn't support it, but Chrome does.
altairprime 1 day ago [-]
That's likely because iOS/macOS don't yet support it as enabled-by-default in production; there's an experimental opt-in flag at the OS level, but Safari apparently hasn't (yet) added a dev feature switch for it.

https://developer.apple.com/documentation/security/sec_proto...

Presumably anyone besides Safari can opt in to that testing today, but I wouldn't ship it worldwide and expect nice outcomes until (I suspect) after this fall's 27 releases. Maybe someone could PR the WebKit team to add that feature flag in the meantime?

kro 1 day ago [-]
Nginx mainline 1.29.x supports it, so once you have that plus a new enough OpenSSL on your system, you're good to go. Likely too late for Ubuntu 26.04; maybe in Debian 14 next year, or of course in rolling-release distros / containers.

But on a personal / single-website server, ECH does not really add privacy: adversaries can still observe the IP metadata and look up what's hosted there. The real benefits are on huge cloud hosting platforms.

Bender 24 hours ago [-]
FWIW, Nginx 1.30 [1] was just released and supports it, so most distributions will have support as soon as those responsible for builds and testing push it forward.

"Nginx 1.30 incorporates all of the changes from the Nginx 1.29.x mainline branch to provide a lot of new functionality like Multipath TCP (MPTCP)."

"Nginx 1.30 also adds HTTP/2 to backend and Encrypted Client Hello (ECH), sticky sessions support for upstreams, and the default proxy HTTP version being set to HTTP/1.1 with Keep-Alive enabled."

> But, in a personal/single website server, ech does not really add privacy, adversaries can still observe the IP metadata and compare what's hosted there

I don't quite follow. I have dozens of throw-away silly hobby domains; I can use any of them as the outer SNI. How is someone observing the traffic going to know the inner-SNI domain, unless someone builds a massive database of all known inner+outer combinations, which can be changed on a whim? ECH requires DoH, so unless the ISP has tricked the user into using their DoH endpoint, they can't see the HTTPS resource record.

[1] - https://news.ycombinator.com/item?id=47770007

ameliaquining 23 hours ago [-]
It's not that adversaries can directly see the domain name; this doesn't have anything to do with domain fronting. The issue is that ECH doesn't hide the server's IP address, so it's mostly useless for privacy if that IP address uniquely identifies that server. The situation where it helps is if the server shares that IP address with lots of other people, i.e., if it's behind a big cloud CDN that supports ECH (AFAIK that's currently just Cloudflare). But if that's the case, it doesn't matter whether Nginx or whatever other web server you run supports ECH, because your users' TLS negotiations aren't with that server, they're with Cloudflare.
Bender 22 hours ago [-]
I can't speak for anyone else, but I think I can work around that by moving the site around to different VPS nodes from time to time. I get bored with my silly hobby sites all the time and nuke the VMs, then fire them up later, which gives them a new IP. I don't know what others might do, if anything.

If I had a long-running site I could do the same thing by having multiple front-end caching nodes using HAProxy or Nginx that come and go, but I acknowledge others may not have the time to do that and most probably would not.

duskwuff 21 hours ago [-]
That's not quite it. The issue is that there's no other traffic bound to that IP - ECH doesn't buy you any security, because an observer doesn't even need to look at the content of the traffic to know where it's headed.
Bender 20 hours ago [-]
Maybe it will be more useful for outbound from NGinx or HAProxy to the origin server using ECH so the destination ISP has no idea what sites are on the origin assuming that traffic is not passing over a VPN already.
ameliaquining 21 hours ago [-]
Anyone who wants to track your users can just follow the IP changes as they occur in real time.
Bender 21 hours ago [-]
> Anyone who wants to track your users can just follow the IP changes as they occur in real time.

That's cool. I only make my own mini-CDN's.

There is always the option to put sites on a .onion domain but I don't host anything nearly exciting or controversial enough. For text that's probably a good option. I don't know if Tor is fast enough for binary or streaming sites yet. No idea how many here even know how to access a .onion site.

I will test out your theory and see if anyone bothers to track my IP addresses and does anything with them. I probably need to come up with something edgy that people would want to block. Ideas for something edgy?

bombcar 19 hours ago [-]
Tor is completely usable at reasonable speeds by even normies via Brave.
Bender 19 hours ago [-]
That's kind of what I suspected, but I have not kept up with it.
throw_a_grenade 14 hours ago [-]
Doesn't matter; I (not OP, but also operating a VPS) still want to support this, so that clients can eventually assume all correctly configured servers support it.
tialaramex 24 hours ago [-]
TLS (the IETF working group, not the protocol family named for it) has long experience with the following fact: if you specify how B is compatible with A based on how you specified A, and then ship B, what you did won't work, because the middleboxes are all cost-optimized and don't implement what you specified but instead whatever got the sale for the least investment.

So, e.g., they'd work for exactly the way TLS 1.0 was used in the Netscape 4 web browser, which was popular when the middlebox was first marketed; or maybe they cope with exactly the features used in Safari, but since Safari never sets this one bit flag, they reject all connections with that flag set.

What TLS learned is summarized as "have one joint and keep it well oiled", and they invented a technique to provide that oiling for the one working joint in TLS: GREASE, Generate Random Extensions And Sustain Extensibility. The idea of GREASE is that if a popular client (say, the Chrome web browser) just insists on uttering random nonsense extensions, then to survive in a world where that happens you must not freak out when there are extensions you do not understand. If your middlebox firmware freaks out when it sees this, your customers say "This middlebox I bought last week is broken, I want my money back", so you have to spend a few cents more to never do that.

But, since random nonsense is now OK, we can ship a new feature and the middleboxes won't freak out, so long as our feature looks similar enough to GREASE.
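For reference, the GREASE code points RFC 8701 reserves are easy to generate and recognize: the sixteen 16-bit values whose two bytes are identical and end in the nibble 0xA. A small sketch (the helper names are mine, not from any TLS library):

```python
import random

# RFC 8701 reserves sixteen 16-bit GREASE values: 0x0A0A, 0x1A1A, ..., 0xFAFA.
# Both bytes are identical, and each byte's low nibble is 0xA.
GREASE_VALUES = [((n << 4) | 0x0A) * 0x0101 for n in range(16)]

def is_grease(value: int) -> bool:
    """True for the reserved GREASE code points, False for real extension numbers."""
    return (value >> 8) == (value & 0xFF) and (value & 0x0F) == 0x0A

def pick_grease() -> int:
    """What a GREASE-ing client does: pick one value to advertise as a fake extension."""
    return random.choice(GREASE_VALUES)

assert is_grease(0x3A3A)       # a reserved GREASE value
assert not is_grease(0x002B)   # 0x002B is the real supported_versions extension
```

A server or middlebox that tolerates these unknown values will, by construction, also tolerate genuinely new extensions.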

ECH leans on the same idea. When a participating client connects to a server which, as far as the client knows, does not support ECH, it acts exactly as it would for ECH, except that, since it has neither a "real" name to hide nor a key to encrypt that name with, it fills the space where those would go with random gibberish. As a server, you get an ECH extension you don't understand, filled with random gibberish you also don't understand; this seems fine, because you didn't understand any of it (or maybe you've switched it off; either way it's not relevant to you).

But a middlebox now can't tell whether you're doing ECH. So either it rejects every client which could do ECH, which again is how you get a bunch of angry customers, or it accepts such clients, and so ECH works.

ekr____ 23 hours ago [-]
Even if the browsers and servers don't support it, you could still enable it because the system is designed to be backward compatible.
philipnee 20 hours ago [-]
And QUIC.
kybishop 15 hours ago [-]
Wasn't QUIC all done in the 3.x versions? Is there something in this release related to QUIC support?
ocdtrekkie 1 day ago [-]
Just be aware any reasonable network will block this.
Bender 24 hours ago [-]
> Just be aware any reasonable network will block this.

Russia blocked it for Cloudflare because the outer SNI was obviously just for ECH, but that won't stop anyone from using generic or throw-away domains as the outer SNI. As for "reasonable", I don't quite follow: only censorious countries or ISPs would do such a thing.

I can foresee firewall vendors possibly adding a category for known outer-SNI domains used for ECH, but at some point that list would become quite cumbersome and may run into the same problems as blocking CDN IP addresses.

kstrauser 23 hours ago [-]
Once upon a time, "reasonable networks" blocked ICMP, too.

They were wrong then, of course, and they're still wrong now.

ocdtrekkie 23 hours ago [-]
Once upon a time, like today? ICMP is most definitely only allowed situationally through firewalls today.
tredre3 22 hours ago [-]
I'd say that ICMP is only situationally blocked by firewalls, not the other way around.

Because I can ping almost any public server on the internet and they will reply. I can ping your website just fine and it replies to me!

ocdtrekkie 20 hours ago [-]
You'd say incorrectly, firewalls have an implicit deny rule, so any case ICMP traverses a firewall, someone wanted it to. Obviously large hosting providers tend to find value in ICMP being enabled.

But for example, our firewall at work responds to ICMP but all of the endpoints which aren't meant for public use do not. That is less because ICMP is a problem and more because everything works fine without it and least privilege is good design.

ICMP is also more than just ping, and some parts of ICMP are considered a vulnerability if exposed to the public internet by some scanning services.

AtNightWeCode 12 hours ago [-]
The normal behavior is that firewalls and proxies respond to ICMP requests instead of forwarding them, though...
kstrauser 21 hours ago [-]
That kind of cargo-culted tradition is how you end up with weird packet loss and VPNs that flat-out refuse to work.

I could be convinced to block inbound pings. Anything past that and I'd want solid evidence that it wouldn't break anything, with the expectation that it would.

tolciho 17 hours ago [-]
address-mask-request, redirect, and timestamp-request for IPv4 might be problematic to allow inbound from who knows where. echo-request might well be rate limited so remote hosts can ping certain servers (but not random client-host IPs), and not with too many pings per second.
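A sketch of that policy in nftables terms (assuming an existing `inet filter` table with a default-drop `input` chain; the rate and names are illustrative, not from the comment above):

```shell
# Accept bounded ping, drop the excess.
nft add rule inet filter input icmp type echo-request limit rate 5/second accept
nft add rule inet filter input icmp type echo-request drop
# Keep the ICMP errors that path-MTU discovery and traceroute rely on.
nft add rule inet filter input icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
# redirect, timestamp-request, and address-mask-request never match an
# accept rule, so the chain's drop policy handles them.
```
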
quantummagic 1 day ago [-]
Why is it "reasonable" to block it?
vman81 1 day ago [-]
Well, I may want to have a say in what websites the employees at work access in their browsers. For example.
altairprime 1 day ago [-]
That’s not a meaningful issue here. Either snoop competently or snoop wire traffic, pick one.

In the snooping-mandatory scenario, either you have a mandatory outbound PAC with an SSL-terminating proxy that refuses CONNECT traffic or allows only what it can MITM with its root CA, or you have a self-signed root CA MITM'ing all encrypted connections it recognizes. The former will continue functioning just fine with no issues; the latter will likely already be having issues with certificate-pinned apps and operating system components, not to mention likely being completely unaware of 443/udp (QUIC), and should be scheduled for replacement by a solution that's actually effective during your next capital budgeting interval.

kccqzy 1 day ago [-]
That’s usually done not on the network side but through the device itself. Think MDM and endpoint management.
ocdtrekkie 1 day ago [-]
A good solution is tackling it on both. At work we have network-level firewalls with separate policies for internal and guest networks, and our managed PCs sync a filter policy as well (though primarily for when those devices are not on our network). The network level is more efficient, easier to manage and troubleshoot, and works on appliances, rogue hardware, and other things that happen not to have client management.
ekr____ 23 hours ago [-]
Well, if you have MDM you should be able to just disable ECH.
ocdtrekkie 23 hours ago [-]
This is also indeed done on both. Browser policies.
miladyincontrol 15 hours ago [-]
Any "reasonable" network just sees a regular Client Hello, the rest is encrypted. They designed it with your very concern in mind to obscure that the ECH even happens.
hypeatei 1 day ago [-]
Procrastinators. FTFY.

Eventually these blocks won't be viable when big sites only support ECH. It's a stopgap solution that's delaying the inevitable death of SNI filtering.

ocdtrekkie 1 day ago [-]
This will never happen, because between enterprise networks and countries with laws, ECH will end up blocked in a lot of places.

Big sites care about money more than your privacy, and forcing ECH is bad business.

And sure, kill SNI filtering; most places that block ECH will be happy to require DPI instead while you're busy shooting yourself in the foot. I don't want to see all of the data you transmit to every web provider over my networks, but if you remove SNI, I really don't have another option.

kelnos 20 hours ago [-]
> I don't want to see all of the data you transmit to every web provider over my networks

Then don't look. I'm serious. This idea that corporations need to snoop on everything their employees do is disgusting.

ocdtrekkie 19 hours ago [-]
So, if you are not at minimum inspecting SNI, you are not meaningfully providing security for your network. Where I work, we do not really pay attention to what people are doing with their computers (that is an HR problem, not an IT problem), but the prevalence of ransomware almost certainly comes down to people not making rational network security decisions, which starts with filtering. We also remove the ads. =)
hypeatei 23 hours ago [-]
> Because between enterprise networks

> require DPI

Enterprises own the device that I'm connected to the network with, I don't see how you can get any more invasive than that.

> countries with laws

1) What countries do national-level SNI filtering, and 2) why are you using a hypothetical authoritarian, privacy-invading state actor as a good reason to keep plaintext SNI?

> Big sites care about money

Yes, and you could say that overbearing, antiquated network operators stop them from making more money with things like SNI filtering.

georgthegreat 1 day ago [-]
https://www.haproxy.com/blog/state-of-ssl-stacks

According to this, one should not be using v3 at all...

danudey 1 day ago [-]
Nice that OpenSSL finally relented and provided an API that developers can use to implement QUIC support (last year, apparently).

For those not familiar: until OpenSSL 3.4.1, if you wanted to use OpenSSL and wanted to implement HTTP/3, which uses QUIC as the underlying protocol, you had to use OpenSSL's entire QUIC stack; you couldn't bring your own QUIC implementation and use OpenSSL only for the encryption parts.

QUIC, for those not familiar, is basically "what if we re-implemented TCP's functionality on top of UDP, but could throw out all the old legacy crap". Complicated but interesting, except that if OpenSSL's implementation didn't do what you wanted, or didn't do it well, you either had to put up with it or go use some other SSL library entirely. That meant that if you were using, e.g., curl built against OpenSSL, then curl also inherently had to use OpenSSL's QUIC implementation even if better ones were available.

Daniel Stenberg from curl wrote a great blog post about how bad and dumb that was, if anyone is interested: https://daniel.haxx.se/blog/2026/01/17/more-http-3-focus-one...

caycep 1 day ago [-]
How is OpenSSL these days? I vaguely remember the big ruckus a while back (was it Heartbleed?) when everyone realized, to their horror, that maybe 1 or 2 people were trying to maintain OpenSSL, and the OpenBSD people then threw manpower at it to clear up a lot of old outstanding bugs. It seems like it is on firmer / more organized footing these days?
tptacek 1 day ago [-]
The security side of OpenSSL has improved significantly since Heartbleed, which was a galvanizing moment for the project's maintenance practices. It doesn't hurt that OpenSSL is now one of the most actively researched software security targets on the Internet.

The software-quality side of OpenSSL has, paradoxically, probably regressed since Heartbleed: there's a rough consensus that the design of OpenSSL 3.0 was a major step backwards, not least for performance, and more than one large project (most notably pyca/cryptography) is actively considering moving away from OpenSSL entirely as a result. Again: while security concerns might be an ancillary issue in those potential migrations, the core issue is just that OpenSSL sucks to work with now.

ImJasonH 24 hours ago [-]
On this topic, there was a great episode of a little-known podcast about Python cryptography and OpenSSL that was really eye-opening: https://securitycryptographywhatever.buzzsprout.com/1822302/...

:)

dadrian 22 hours ago [-]
I dunno, they'll let anybody get on the Internet and start a podcast.
ignoramous 22 hours ago [-]
> ... the core issue is just that OpenSSL sucks to work with now.

The Node.js working group doesn't seem happy working with OpenSSL, either. There's been some indication Node may move off of it (though I remain sceptical):

  I'd actually like us to consider the possibility of switching entirely to BoringSSL and away from OpenSSL. While BoringSSL does not carry the same Long Term Support guarantees that OpenSSL does, and has a much more constrained set of algorithms/options -- meaning it would absolutely be a breaking change -- the model they follow echoes that approach that v8 takes and we've been able to deal with that just fine.
Update on QUIC, https://github.com/nodejs/node/issues/57281 (2025).
kccqzy 1 day ago [-]
It's still terrible. There was a brief period immediately after Heartbleed when it was rapidly improving, but OpenSSL 3 as a whole was a huge disappointment to anyone who cared about performance, complexity, and developer experience (ergonomics). Core operations in OpenSSL 3 are still much, much slower than in OpenSSL 1.1.1.

The HAProxy people wrote a very good blog post on the state of SSL stacks: https://www.haproxy.com/blog/state-of-ssl-stacks And the Python cryptography people wrote an even more damning indictment: https://cryptography.io/en/latest/statements/state-of-openss...

Here are some juicy quotes:

> With OpenSSL 3.0, an important goal was apparently to make the library much more dynamic, with a lot of previously constant elements (e.g., algorithm identifiers, etc.) becoming dynamic and having to be looked up in a list instead of being fixed at compile-time. Since the new design allows anyone to update that list at runtime, locks were placed everywhere when accessing the list to ensure consistency.

> After everything imaginable was done, the performance of OpenSSL 3.x remains highly inferior to that of OpenSSL 1.1.1. The ratio is hard to predict, as it depends heavily on the workload, but losses from 10% to 99% were reported.

> OpenSSL 3 started the process of substantially changing its APIs — it introduced OSSL_PARAM and has been using those for all new API surfaces (including those for post-quantum cryptographic algorithms). In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. This reduces performance, reduces compile-time verification, increases verbosity, and makes code less readable.

awongh 21 hours ago [-]
Wow, also this:

> The OpenSSL project does not sufficiently prioritize testing. [... ]the project was [...] reliant on the community to report regressions experienced during the extended alpha and beta period [...], because their own tests were insufficient to catch unintended real-world breakages. Despite the known gaps in OpenSSL’s test coverage, it’s still common for bug fixes to land without an accompanying regression test.

I don't know anything about these libraries, but this makes their process sound pretty bad.

kccqzy 20 hours ago [-]
This quote about testing is way worse:

> OpenSSL’s CI is exceptionally flaky, and the OpenSSL project has grown to tolerate this flakiness, which masks serious bugs. OpenSSL 3.0.4 contained a critical buffer overflow in the RSA implementation on AVX-512-capable CPUs. This bug was actually caught by CI — but because the crash only occurred when the CI runner happened to have an AVX-512 CPU (not all did), the failures were apparently dismissed as flakiness.

wavemode 17 hours ago [-]
OpenSSL is (famously) an extremely terrible codebase.

It's likely that over the long-term the tech industry will replace it with something else, but for now there's too much infrastructure relying on it.

gavinray 1 day ago [-]

  > In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. 
Ah yes, the ole' "fn(args: Map<String, Any>)" approach. Highly auditable, and Very Safe.
wahern 24 hours ago [-]
I think one of the main motivators was supporting the new module framework that replaced engines. The FIPS module specifically is OpenSSL's gravy train, and at the time, FIPS certification and compliance mandates effectively required the ability to maintain ABI compatibility of a compiled FIPS module across multiple major OpenSSL releases, so that end users could easily upgrade OpenSSL for bug fixes and otherwise stay current. But OpenSSL also didn't want that requirement to inhibit the evolution of its internal and external APIs and ABIs.

Though, while the binary certification issue nominally remains, there's much more wiggle room today when it comes to compliance and auditing. You can typically maintain compliance when using modules built from updated sources of a previously certified module which are in the pipeline for re-certification, so the ABI dilemma is arguably less onerous today than it was when the OSSL_PARAM architecture took shape. Today, like with Go, you can lean on process, i.e. constant cycling of the implementation through the certification pipeline, more than on technical solutions. The real unforced error was committing to OSSL_PARAMs for the public application APIs, letting the backend design choices (flexibility, etc.) bleed through to the frontend. The temptation is understandable, but the ergonomics are horrible. I think the performance problems are less a consequence of OSSL_PARAMs per se, and more about the architecture of state management between the library and module contexts.

PunchyHamster 22 hours ago [-]
The sensible way would be dropping the FIPS security theatre entirely and letting it rot in the stupid corner companies dug themselves into, but of course it's OpenSSL's main income source...

I really wish the Linux Foundation or some other big OSS player funded a complete replacement for it, and then wrote a shim that translates ABI calls from it to an OpenSSL 1.1 lookalike.

parliament32 4 hours ago [-]
I see you've never had to manage multiple products coming out of less-than-good development teams. FIPS mode is a godsend.
nulltrace 22 hours ago [-]
Fair, but from the user side it still hurts. Setting up an Ed25519 signing context used to be maybe ten lines. Now you're constructing OSSL_PARAM arrays, looking up providers by string name, and hoping you got the key type right because nothing checks at compile time.
wahern 21 hours ago [-]
Yeah. Some of the more complex EVP interfaces from before and around the time of the forks had design flaws, and with PQC that problem is only going to grow. Capturing the semantics of complex modes is difficult, and maybe that figured into the motivations. But OSSL_PARAMs on the frontend feel more like a punt than a solution, and to maintain API compatibility you still end up with all the same cruft in both the library and the application; it's just more opaque and confusing to figure out which textual parameter names to use and not use, when to refactor, etc. You can't tag a string parameter key with __attribute__((deprecated)). With the module interface decoupled and a faster release cadence, exploring and iterating on more strongly typed and structured EVP interfaces should be easier, I would think. That's what the forks seem to do. There are incompatibilities across BoringSSL, LibreSSL, etc., but also cross-pollination and communication, and over time interfaces are refined and unified.
omcnoe 14 hours ago [-]
This is a hilarious, and also terrible, reason.

Why can't we let the FIPS people play in their own weird corner while not compromising the whole internet's security for their sake? OpenSSL is too important to get distracted by a weird US-specific security standard. I'm not convinced FIPS is a path to actual computer security. Ah well, it's the way the world goes, I suppose.

selfmodruntime 22 hours ago [-]
There are few other options. Ring is not for production use; WolfSSL lags behind a bit in features. BoringSSL and AWS-LC are the best we have.
thayne 17 hours ago [-]
BoringSSL has an unstable API, and Google specifically recommends against using it[1].

AWS-LC is OK, but AFAICT there aren't really any pre-built binaries available, so you need to compile it yourself, and it's a little difficult to use if you aren't using C/C++ or Rust. (The same is largely true of BoringSSL.)

[1]: https://github.com/google/boringssl?tab=readme-ov-file#borin...

semiquaver 18 hours ago [-]
This is incredible, and damning. What do the OpenSSL maintainers say in response to these criticisms?
ibrahimhossain 14 hours ago [-]
Manual opt-out processes are becoming a major friction point. It's interesting how these tools only improve their defaults after community backlash. Trust is so hard to build but so easy to burn in this space.
rwmj 1 day ago [-]
Compared to the OpenSSL 3 transition, this one has been very smooth. Only the dropping of "engines" was a problem at all, and in Fedora most of those dependencies have been changed.
ge96 1 day ago [-]
Just in time for the suckerpinch video
yjftsjthsd-h 1 day ago [-]
As a complete non-expert:

On the one hand, looks like decent cleanup. (IIRC, engines in particular will not be missed).

On the other hand, breaking compatibility is always a tradeoff, and I still remember 3.x being... not universally loved.

moralestapia 1 day ago [-]
That's why it is version 4.
semiquaver 18 hours ago [-]
Major version bump? I wonder how much slower it will get now.
sudo_cowsay 14 hours ago [-]
Actually, it doesn't get that much slower. I ran benchmarks and the average increase was 10%. Compared to other things on the internet, it doesn't cause that much of a slowdown.
Neywiny 20 hours ago [-]
Good to see const more prevalent. Too often I have to add it in to libraries for embedded work. Possibly I believe in const-by-default, but it is what it is at this point.
GZGavinZhao 18 hours ago [-]
*Linux distro package maintainers scream*
bensyverson 1 day ago [-]
I just updated to 3.5.x to get PQ support. Anything that might tempt me to upgrade to 4.0?
altairprime 1 day ago [-]
The top feature, "Support for Encrypted Client Hello (ECH, RFC 9849)", is of prime importance to those operating Internet-accessible servers or clients; hopefully your Postgres server is not one such!
bensyverson 23 hours ago [-]
It's a web server (pg / post-quantum, not pg / Postgres), but that's a great feature!
altairprime 18 hours ago [-]
gqgq

Er, your first acronym is pg, not pq. (I had to font-test above to be sure!) But point taken! You might care, then; I saw various elliptic-curve changes and I assume it's got PQ advancements somewhere in it.

snvzz 12 hours ago [-]
Kind reminder that we should be using LibreSSL.
jmclnx 1 day ago [-]
I wonder how hard it is to move from 3.x to 4.0.0 ?

From what I remember hearing, the move from 2 to 3 was hard.

georgthegreat 1 day ago [-]
That's because there was no version 2...
some_furry 1 day ago [-]
Yes there was!

But, *thousand-yard stare*, it was the version for the FIPS patches to 1.0.2.

cookiengineer 18 hours ago [-]
> libcrypto no longer cleans up globally allocated data via atexit().

> OPENSSL_cleanup() now runs in a global destructor, or not at all by default.

Oh oh. Heartbleed 2.0 incoming.

I really do hope that the breaking API changes specifically throw errors or surface race conditions so that devs are forced to clean up. Otherwise this is going to be a nightmare to track down in terms of maintenance and audits.

I mean, it's a new major release, so it's a valid design change. But I hope they're thinking of providing a migration/update guide or a checklist to reduce usage errata.

(I'm heavily in favor of deprecating the fixed version method names)

theowaway 21 hours ago [-]
oh no not another breaking ABI change
pixel_popping 23 hours ago [-]
Mythos is coming for yaaaaa (just kidding).