OpenSSL 3 performance issues cause a scalability and security dilemma for organizations
One of the great things about open source is the availability of diverse approaches. This article examines the critical importance of performance scalability in SSL/TLS libraries, using recent shifts in the OpenSSL ecosystem as a case study. We are grateful for open source projects and their contributions to the world.
Director of Technical Marketing at HAProxy Technologies.
Imagine your car suddenly slowing to a crawl, no matter how hard you press the gas pedal. Now imagine this happening to many vehicles on the road at the same time. That's essentially what happened with OpenSSL when version 3.0 was released four years ago.
OpenSSL, the most widely used SSL/TLS library in operating systems, experienced a severe performance regression with its 3.0 release. Although the new architecture was designed to enhance security and modularity, it scaled poorly in multi-threaded environments.
Specifically, throughput plateaus at around 24 threads, and adding more threads yields no further gain.
It is important to note that the newer 3.5 LTS release fixes many of the performance problems in 3.0. However, while 3.5 performs much better than 3.0, it is still not as fast as 1.1.1.
The situation has stabilized, with the current performance levels representing a new baseline for OpenSSL, though ongoing developments are anticipated. Let's keep an eye on 3.6, which is due in October (we haven't tested it yet).
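Given the differences between the 1.1.1, 3.0, and 3.5 lines, a practical first step is simply knowing which OpenSSL build your own stack links against. As an illustrative sketch (not part of the original analysis), Python's standard ssl module exposes this for its runtime:

```python
import ssl

# Report which OpenSSL build this Python runtime is linked against.
# A quick way to tell whether you are on the 1.1.1, 3.0, or 3.5 line
# before worrying about scaling behavior.
print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 3.0.13 30 Jan 2024"

major, minor = ssl.OPENSSL_VERSION_INFO[:2]
if (major, minor) >= (3, 5):
    print("3.5+ LTS: many known 3.0 regressions addressed")
elif major >= 3:
    print("Early 3.x: benchmark multi-threaded workloads carefully")
else:
    print("Pre-3.0: fast, but check that security fixes are still available")
```

The same check applies to any language or server that dynamically links OpenSSL; the version string tells you which performance profile to expect.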
Despite these performance challenges, OpenSSL remains a cornerstone of secure internet communications, and its evolution reflects the complex trade-offs inherent in large-scale open-source projects.
However, this is actually bigger than OpenSSL. The reality is that the performance issues noted in OpenSSL 3.x highlight a potential risk in the tools we use to connect the digital world.
This presents a tangible challenge for the industry, as this "silent threat to scalability" may remain undetected until projects attempt to scale, resulting in unforeseen bottlenecks and drastically increased infrastructure expenses.
Understanding this issue (and how to fix it) is vital to the success of any project that needs to serve more than basic levels of traffic.
The role of the SSL/TLS layer
Consider how ubiquitous SSL (Secure Sockets Layer), or more accurately TLS (Transport Layer Security), is today. It is a security protocol that establishes an encrypted link between a web server and a browser. It is the foundation of secure internet communications and a critical component of most internet requests.
This link ensures that all data transmitted remains private and secure, which is why a significant performance issue with SSL/TLS can impact nearly everything online.
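As a minimal sketch of what establishing that link involves, here is the client side configured with Python's standard ssl module; create_default_context() enables the certificate verification and hostname checking that make the encrypted channel trustworthy:

```python
import ssl

# Configure the client side of a TLS link. create_default_context()
# turns on certificate verification and hostname checking by default,
# which is what makes the encrypted channel trustworthy.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

# Sanity-check the secure defaults before wrapping a socket with ctx.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

Whichever SSL/TLS library sits underneath, every HTTPS request your application makes passes through a handshake and encryption path like this one, which is why library performance matters so broadly.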
When selecting an SSL/TLS library, organizations must consider several key aspects:
Functional requirements: The library must implement modern, secure cryptographic protocols correctly.
Maintenance requirements: The library needs a predictable support cycle, especially for Long-Term Support (LTS) versions, which are crucial for stable, long-term deployments.
Performance considerations: Beyond raw speed, a key performance indicator is the library's ability to scale with additional processing power. In modern multi-core systems, performance is expected to increase as more processors or cores are added.
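The scaling property in that last point can be checked with a small experiment. The following is an illustrative micro-benchmark pattern, not a TLS benchmark: it uses Python's hashlib (which releases the GIL for large buffers) to show the kind of measurement to run. Throughput should grow as threads are added; a plateau signals the sort of scalability ceiling described above.

```python
import hashlib
import time
from concurrent.futures import ThreadPoolExecutor

DATA = b"x" * (1 << 20)  # 1 MiB; hashlib releases the GIL for large buffers

def hash_many(n):
    """Perform n SHA-256 digests over DATA."""
    for _ in range(n):
        hashlib.sha256(DATA).digest()

def throughput(threads, per_thread=50):
    """Return total hashes per second achieved with the given thread count."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(hash_many, per_thread) for _ in range(threads)]
        for f in futures:
            f.result()
    elapsed = time.perf_counter() - start
    return threads * per_thread / elapsed

# On a healthy multi-core system, throughput should rise with thread count.
for t in (1, 2, 4):
    print(f"{t} thread(s): {throughput(t):,.0f} SHA-256/s")
```

Running the same shape of test against a crypto library at 1, 8, 24, and 48 threads is how a plateau like OpenSSL 3.0's becomes visible before it reaches production.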
Performance problems in SSL/TLS libraries can force organizations into a tough spot: prioritize security by adopting the latest but performance-hindered version, or preserve performance with the older, now-unsupported version and risk critical security vulnerabilities.
This situation is even more concerning because it's a systemic issue. SSL/TLS isn't just another piece of software; it's the backbone of secure communication for most internet-connected systems.
From web servers to IoT devices, SSL/TLS is everywhere. So, any performance problems in these libraries aren't isolated incidents but rather a threat to the stability and security of the broader internet infrastructure.
The OpenSSL changes actually provide a great opportunity to examine this part of the stack and understand how architectural choices often involve trade-offs. By understanding the situation from four years ago, and how the project has adapted since, we can learn how to respond to changes in the landscape.
A case study in performance scaling
The release of OpenSSL 3.0 introduced a tangible challenge for the industry. The 3.x line was designed to be more dynamic and flexible, benefiting developers and many of the standard use cases it serves.
However, this new design had an unintended consequence for performance, making it less suitable for performance-critical workloads. The hit had serious real-world consequences: systems that used to handle thousands of requests per second suddenly struggled.
Some organizations needed up to 42 times more hardware just to maintain the same level of service. This was a major blow for anyone relying on multi-threaded environments.
Testing scenarios, including server-mode full TLS handshakes and end-to-end connections with session resumption, revealed a significant performance drop compared to its predecessor, version 1.1.1. The core issue was not just a slowdown but a failure to scale effectively.
In some cases, the slowdown gets worse as you add more processing power. This is the opposite of what you'd expect with modern multi-core systems.
This behavior stems from fundamental design changes, including runtime lookups, excessive locking mechanisms, and an over-reliance on atomic operations, which created significant performance bottlenecks.
As noted above, OpenSSL has responded by improving performance on the 3.x version, with the current 3.5 LTS version showing much improvement.
We don't want to harp on past problems. In fact, for many people, OpenSSL continues to be a great choice that meets their needs, with its dynamic design offering advantages over earlier versions. This is a common pattern: optimizing one area (in this case, flexibility) often leads to trade-offs in another.
However, this is an important case study on how easy it is to become complacent about widely used and standardized components in our tech stack. It's also a reminder that a more diverse ecosystem is usually safer and better for everyone.
Navigating the SSL library ecosystem
New performance challenges force organizations to make a difficult choice. One path is to accept the performance penalty, which can lead to overprovisioning hardware. For many, this is not a viable long-term strategy.
This leads to an alternative path: finding a replacement. The idea of switching core cryptographic libraries is daunting, and it's true that no alternative offers an entirely seamless, drop-in solution. However, strong alternatives like wolfSSL and AWS-LC exist, each with strengths.
These libraries may require careful testing to integrate, but they often offer compelling performance advantages. In fact, AWS-LC achieved even higher multi-threaded performance than OpenSSL 1.1.1, which has long been the gold standard.
The challenge lies not in a lack of viable options, but in the effort required to transition from a deeply entrenched standard. However, these situations can actually lead to increased performance, a more diverse ecosystem, and awareness of alternative options that may be more suitable for specific scenarios.
The path forward: A focus on future needs
The performance of an SSL/TLS library is a critical factor that can remain undetected until a project attempts to scale, resulting in unforeseen bottlenecks and increased infrastructure expenses. As the industry navigates this challenge, projects must remain proactive about the performance implications of their chosen SSL library.
The most promising path forward involves a broader consideration of the available tools. Organizations are encouraged to familiarize themselves with alternatives that are not necessarily replacements for all use cases but viable options for specific, performance-sensitive scenarios.
Knowing the alternatives will help you decide when, and whether, you need to replace OpenSSL. We can mitigate these risks by actively monitoring performance, evaluating alternative libraries, and engaging with the open-source community.
Above all, we should remind ourselves not to become complacent about our tech stack. Changes can bring benefits or problems, but they are often also opportunities.
For a comprehensive analysis of OpenSSL 3.x performance, including detailed benchmarks, profiling results, and a comparison of alternative libraries, refer to the in-depth blog post "The State of SSL Stacks". Note that this comes from internal testing from a year ago, and has limited coverage of more recent versions.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Ron Northcutt is Director of Technical Marketing at HAProxy Technologies.