https://www.haproxy.com/blog/announcing-haproxy-2-7/

Announcing HAProxy 2.7

Nick Ramirez | Dec 1, 2022 | LUA, NEWS

HAProxy 2.7 is now available! Register for the webinar HAProxy 2.7 Feature Roundup to learn more about this release and participate in a live Q&A with our experts.

Once again, the latest HAProxy update features improvements across the board, upgrading old features and introducing some new ones. New elements in this release include:

* the debut of traffic shaping to control client upload and download speeds
* an improvement to health check performance to reduce CPU load
* updated layer 7 retries that reuse idle HTTP connections even for first client requests
* stick table locking efficiency improvements
* the introduction of stick table data shards to accelerate the processing of large datasets
* a range of new converter and Runtime API command additions
* as well as other small updates to Lua script passing and Master CLI control

What a list!
As always, these improvements are only possible thanks to the support of the incredible HAProxy Community, from discussions over the mailing list to lively debate on the HAProxy GitHub project. Each community member is invaluable in providing code for new functionality and bug fixes, QA testing, documentation updates, bug reports, advice and suggestions, and much more. The project would not exist without you! If you would like to join this vibrant community, you can find it on GitHub, Slack, Discourse, and the HAProxy mailing list.

New feature: Traffic shaping

HAProxy has a new traffic shaping feature that lets you limit the speed at which clients can upload or download data. For example, you can limit the maximum download speed of a file to 5 Mbps even for clients that have faster connections, or, conversely, slow a client's upload speed. Through traffic shaping, you can apply a bandwidth limit to each individual HTTP stream, meaning that each stream gets its own bandwidth allotment, or set a limit that applies to a particular client's IP address or collectively to all clients accessing a backend.

The new filter bwlim-out directive and http-response set-bandwidth-limit action together set download speeds, while filter bwlim-in and http-request set-bandwidth-limit set upload speeds. The filters can specify a stick table to enforce limits based on the keys in the table, such as a client's IP address or the ID of a backend. A nice thing about these filters is that the bandwidth limits are not necessarily fixed constants in the configuration; you can define them based on data collected from your traffic. For example, a video service could use the contents of an HTTP header provided by the server to set the appropriate bandwidth limit for a given video, preventing agents that prefetch large parts of the content from consuming too much network bandwidth. Read more about Traffic Shaping.

Overcoming the 64 threads barrier

Modern, massively multi-core CPUs allow us to build a product that packs a lot of features inside a single process, which validates the choice made years ago to adopt a one-thread-per-core model to take advantage of those cores. However, because of the fast, atomic operations involved in many places, HAProxy was previously limited to 64 threads, and therefore 64 CPU cores, on 64-bit machines. The introduction of thread groups raises this limit to 4096 threads.

A thread group, which you create with the thread-group directive in the global section of your configuration, lets you assign a range of threads, for example 1-64, to a group and then use that group on a bind line in your configuration. You can define up to 64 groups of 64 threads. In addition to taking better advantage of available threads, thread groups help to limit the number of threads that compete to handle incoming connections, thereby reducing contention. Thread groups also deal much better with non-uniform memory access (NUMA) machines that have multiple CPU sockets or processors with uneven access to the L3 cache, where performance gains of up to 4x were observed in the lab.
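A minimal sketch of how thread groups might be configured, assuming a 128-core machine; the thread counts, group numbers, addresses, and ports are illustrative, not taken from the article:

    global
        # Two groups of 64 threads each; counts are illustrative.
        nbthread 128
        thread-groups 2
        thread-group 1 1-64      # threads 1-64 form group 1
        thread-group 2 65-128    # threads 65-128 form group 2

    frontend fe_threads
        # Each bind line is served by a single thread group, which limits
        # how many threads compete for the same incoming connections.
        bind :80 thread 1/all
        bind :80 thread 2/all
        default_backend be_threads

    backend be_threads
        server s1 192.168.0.10:80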
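Returning to the traffic shaping feature described above, here is a hedged sketch of the bandwidth-limit filters; the rates, table parameters, names, and addresses are illustrative:

    frontend fe_downloads
        bind :80
        # Per-stream limit: each HTTP response stream is capped at roughly
        # 5 Mbps (625 kilobytes per one-second period).
        filter bwlim-out stream-dl default-limit 625k default-period 1s
        http-response set-bandwidth-limit stream-dl
        default_backend be_media

    frontend fe_uploads
        bind :8080
        # Shared limit keyed on the client IP address; the stick table must
        # store bytes_in_rate so the filter can track upload throughput.
        stick-table type ip size 100k expire 1h store bytes_in_rate(1s)
        filter bwlim-in client-ul limit 1m key src
        http-request set-bandwidth-limit client-ul
        default_backend be_media

    backend be_media
        server s1 192.168.0.10:80

Because set-bandwidth-limit also accepts an expression for the limit, the cap could instead come from, say, a response header set by the server, as the video-service example above suggests.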
Better performing health checks

Server health checks became more efficient with this release. You will recall that HAProxy traditionally checks its connectivity to servers at a defined interval. Previously, when HAProxy completed a check, it placed the next scheduled health check into a queue for any thread to pick up. This, combined with the increase in thread count, had been causing a thundering herd problem in which many threads awoke to compete for the task. Now, to reduce contention, HAProxy keeps the recurring work on the same thread. As a failsafe to prevent a thread from becoming overloaded, before starting the next health check the thread checks whether there is another, less busy thread available; if so, it hands the task over to that thread. Overall, allowing one thread to own a health check has reduced CPU load and latency.

Revisiting HTTP reuse with L7 retries

Since HAProxy introduced layer 7 retries in version 2.0, it can repeat its attempt to send an HTTP request to a server when its connection to that server breaks mid-communication. That makes it possible to use idle connections more aggressively, comfortable in the knowledge that if an idle connection suddenly closes, HAProxy can retry the request. This release capitalizes on that by changing the http-reuse safe mode to reuse idle connections even for a client's first request, as long as retries are enabled for broken connections (that is, the backend's retry-on directive is set to conn-failure, empty-response, and response-timeout).

QUIC and HTTP/3

The QUIC stack in HAProxy continues to evolve and has received numerous fixes and improvements to remain future-proof, such as support for QUICv2, compliance with the QUIC Compatible Version Negotiation draft-08, the CUBIC congestion control algorithm, and much more (252 commits in total). All these improvements and fixes were progressively backported to 2.6 as they stabilized. Many more are coming, and with 2.7 released, much less will be backported to 2.6, which will now focus mostly on stability fixes.

Stick tables use more efficient locking

Given that reads are more common than writes, HAProxy now uses an rwlock when accessing a stick table, which allows multiple threads to read from the table simultaneously but only a single thread at a time to write. This replaces the spinlock, which had enforced exclusive access for both reads and writes. The change unlocks performance that had previously been lost to threads waiting to acquire a lock: gains of up to 11 times the initial request rate were observed on a 24-core system making intense use of stick tables and track-sc rules.

Sharding stick table data sent to peers

While many of you are aware that you can use a peers section in an HAProxy configuration to share stick table data between load balancers in an active-standby setup, did you know that you can also use it to share data with agents that process the data? When using an external agent that collects and processes stick table data, the volume of that data can be a challenge. You can now split a stick table's data into subsets, called shards, before distributing the shards among different stick table peers. This helps divide the work of processing a large dataset.

The new shards directive sets the number of shards to create, while the shard argument on a peer line in the peers section indicates which shard of the data that peer receives. All stick tables associated with the peers section are affected. You can use the Runtime API's show table command to view the contents of a stick table. In the example below, data is split into two shards so that half of the data goes to the first peer and half goes to the other.
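A hedged reconstruction of such a two-shard peers configuration follows; the peer names, addresses, and stick table definition are illustrative rather than taken from the article:

    peers mypeers
        # Distribute stick table entries across two shards, based on a hash
        # of each entry's key.
        shards 2
        # The local load balancer (its name must match this instance's
        # local peer name).
        peer lb1 192.168.50.3:10000
        # External agents, each receiving one shard of the data.
        peer agent1 192.168.50.21:10000 shard 1
        peer agent2 192.168.50.22:10000 shard 2

    backend be_tracked
        # Every stick table attached to this peers section is sharded.
        stick-table type ip size 1m expire 30m store http_req_rate(10s) peers mypeers
        server s1 192.168.50.10:80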
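And, tying back to the L7 retries section above, a minimal backend sketch in which idle connection reuse is combined with retries on broken connections; the backend name, server names, and addresses are illustrative:

    backend be_api
        mode http
        # Reuse idle server connections, now including for a client's first
        # request, because retry-on covers connections that break mid-request.
        http-reuse safe
        retries 3
        retry-on conn-failure empty-response response-timeout
        server s1 192.168.0.11:80
        server s2 192.168.0.12:80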
SSL usability improvements

HAProxy 2.7 improves two of its bind directive options, ca-ignore-err and crt-ignore-err, which set a list of SSL certificate errors to ignore. Previously, you would define a list of numeric error IDs here; now you can specify their human-readable names instead, for which the OpenSSL site provides a list of error codes. Similarly, the x509_v_err_str() converter converts a numeric error ID to its human-readable constant, which is useful for logs.

Building HAProxy with QUIC relies on an underlying SSL library that supports QUIC. This requirement will become progressively easier to satisfy: LibreSSL 3.6 is now supported with experimental status, and HAProxy will also have initial, but incomplete, support for the WolfSSL library.

Pass arguments to Lua scripts

HAProxy 2.7 supports passing optional arguments to Lua scripts via the lua-load and lua-load-per-thread directives. This facilitates passing initial settings to your scripts from your HAProxy configuration, without needing to modify the script's hardcoded values or pass values via environment variables. In your /etc/haproxy/haproxy.cfg file, you pass arguments to the script on the lua-load line. In your Lua file, you use the table.pack function to retrieve the script's arguments; the three dots passed to table.pack signify that it accepts a variable number of arguments, which are stored in the variable args. As a trivial example, a script can add a new action named http-request lua.log-args that simply prints those arguments to the HAProxy log (e.g. /var/log/haproxy.log) when the action is called; a sketch of both files appears after the converter list below.

New converters

The following converters have been added:

Converter      Description
table_expire   Returns the remaining time before a given key will (
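Here is the Lua argument-passing sketch referenced above; the argument values, file path, and log message are illustrative, while the lua.log-args action name follows the text. First, the haproxy.cfg side:

    global
        # Everything after the script path is handed to the Lua chunk.
        lua-load /etc/haproxy/log-args.lua setting-one setting-two

    frontend fe_lua
        bind :8081
        # Invoke the action registered by the script below.
        http-request lua.log-args
        default_backend be_lua

    backend be_lua
        server s1 192.168.0.13:80

And the Lua file itself:

    -- /etc/haproxy/log-args.lua
    -- table.pack(...) collects the arguments given on the lua-load line
    -- into a table, with the count stored in args.n.
    local args = table.pack(...)

    core.register_action("log-args", { "http-req" }, function(txn)
        for i = 1, args.n do
            -- Write each argument to the HAProxy log.
            core.Info("lua-load argument " .. i .. ": " .. tostring(args[i]))
        end
    end)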