https://blog.min.io/nvme_benchmark/

Updated NVMe Benchmark: 2.6Tbps+ for READS

Harshavardhana on Performance, 4 January 2022

MinIO is a strong believer in transparency and data-driven discussions. That is why we publish our benchmarks and challenge the rest of the industry to do the same. It is also why we develop tools that allow clean, clear measurement of performance and can be easily replicated. We want people to test for themselves.

Further, we run our benchmarks on commodity hardware without tuning. This is fundamentally different from the highly tuned, specialized hardware approaches used by other vendors, which, predictably, have given benchmarks a bad name. We challenge the rest of the industry to follow suit.

We recently updated our benchmark for primary storage. For our customers, primary storage utilizes NVMe drives due to their price/performance characteristics. We will update our HDD benchmark shortly for those customers looking to understand HDD price/performance.

In this post we cover the benchmarking environment, the tools, how to replicate the results on your own, and the detailed numbers. For those looking for a quick take, the 32-node MinIO cluster results can be summarized as follows:

Instance Type: i3en.24xlarge
PUT/Write: 165 GiB/sec
GET/Read: 325 GiB/sec
Parity: EC:4
mc CLI ver.: RELEASE.2021-12-29T06-52-55Z
MinIO ver.: RELEASE.2021-12-29T06-49-06Z

On an aggregate basis this delivers PUT throughput of 1.32 Tbps and GET throughput of 2.6 Tbps. We believe this to be the fastest in the industry.
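As a quick back-of-the-envelope check, the aggregate figures follow from the per-direction throughput using the loose 1 GiB/sec ≈ 8 Gbit/sec conversion the post works with:

$ echo "325 * 8" | bc   # GET: 2600 Gbit/sec ≈ 2.6 Tbps
$ echo "165 * 8" | bc   # PUT: 1320 Gbit/sec ≈ 1.32 Tbps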
Benchmarking Setup

MinIO believes in benchmarking on the same hardware it recommends to its customers. For primary storage, we recommend NVMe. We have followed this recommendation for over a year now, as our customers have shown us that the price/performance characteristics of NVMe represent the sweet spot for these primary storage workloads.

We used standard AWS bare-metal, storage-optimized instances with local NVMe drives and 100 GbE networking. These are the same instances that MinIO recommends to its production clients for use in the AWS cloud.

Instance: Server
# Nodes: 32
AWS Instance Type: i3en.24xlarge
CPU: 96
MEM: 768GB
Storage: 8 x 7500GB NVMe
Network: 100 Gbps

For the software, we used the default Ubuntu 20.04 install on AWS, the latest release of MinIO, and our built-in Speedtest capability.

Server OS: Ubuntu 20.04
MinIO Version: RELEASE.2021-12-29T06-49-06Z
mc CLI Version: RELEASE.2021-12-29T06-52-55Z
Benchmark Tool: mc admin speedtest

Speedtest is built into the MinIO Server and is accessed through the Console UI or the mc admin speedtest command. It requires no special skills or additional software. You can read more about it here.

Measuring Single Drive Performance

The performance of each drive was measured using the dd command. dd is a Unix tool that performs a bit-by-bit copy of data from one file to another, with options to control the block size of each read and write. (A consolidated, hedged command sketch covering all of the measurements in this post appears after the Speedtest results below.)

Here is a sample of a single NVMe drive's write performance with a 16MB block size, the O_DIRECT option, and a total count of 64. Note that we achieved greater than 1.1 GB/sec of write performance for each drive.

[f661cjCeIm]

Here is the output of a single NVMe drive's read performance with a 16MB block size, the O_DIRECT option, and a total count of 64. Note that we achieved greater than 2.3 GB/sec of read performance for each drive.

[DliFHtB9cy]

Measuring JBOD Performance

JBOD performance with O_DIRECT was measured using https://github.com/minio/dperf. dperf is a filesystem benchmark tool that generates and measures filesystem read and write performance. The dperf command operates with 64 parallel threads, a 4MB block size, and O_DIRECT by default.

[Of6nNv8PMN]

Network Performance

The network hardware on these nodes allows a maximum of 100 Gbit/sec. Since 1 GByte equals 8 Gbit, 100 Gbit/sec equates to 12.5 GByte/sec. Therefore, the maximum throughput that can be expected from each of these nodes is 12.5 GByte/sec.

Running the 32-node Distributed MinIO Benchmark

MinIO ran Speedtest in autotune mode. Autotune incrementally increases the load to pinpoint maximum aggregate throughput.

$ mc admin speedtest minio/

The test runs and presents results on screen; it may take anywhere from a few seconds to several minutes to execute depending on your MinIO cluster. The flag -v enables verbose mode.

The user can determine the appropriate erasure code setting. We recommend EC:4 but include EC:2 and EC:3 results below as well.

MINIO_STORAGE_CLASS_STANDARD=EC:2

[4os4GTdlR7]

MINIO_STORAGE_CLASS_STANDARD=EC:3

[AhGOX25E5Y]

MINIO_STORAGE_CLASS_STANDARD=EC:4 (default)

[mnRN7W45rb]
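For anyone replicating the individual measurements above, the following minimal sketch pulls the commands together. The mount points (/mnt/drive1 through /mnt/drive8) and the minio/ alias are assumptions; substitute your own paths and alias.

# Raw write throughput of one NVMe drive: 16MB blocks, O_DIRECT, 64 copies
$ dd if=/dev/zero of=/mnt/drive1/testfile bs=16M count=64 oflag=direct

# Raw read throughput of the same drive, also 16MB blocks with O_DIRECT
$ dd if=/mnt/drive1/testfile of=/dev/null bs=16M count=64 iflag=direct

# JBOD throughput across all eight drives (dperf defaults: 64 threads, 4MB blocks, O_DIRECT)
$ dperf /mnt/drive{1..8}

# Choose the erasure code storage class before starting each server (EC:4 is the default)
$ export MINIO_STORAGE_CLASS_STANDARD=EC:4

# Run the built-in Speedtest against the whole cluster in verbose mode
$ mc admin speedtest -v minio/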
Interpretation of Results

The average network bandwidth utilization was 77 Gbit/sec during the write phase and 84.6 Gbit/sec during the read phase. This represents client traffic as well as internode traffic; the portion of this bandwidth available to clients is about half for both reads and writes. The network was almost entirely saturated during these tests. Higher throughput could be expected if a dedicated network were available for inter-node traffic.

Note that the write benchmark is slower than the read benchmark because benchmark tools do not account for write amplification (the additional internode traffic from parity data generated during writes). In this case, the 100 Gbit network is the bottleneck, as MinIO gets close to hardware performance for both reads and writes.

Conclusion

Based on the results above, we found that MinIO takes complete advantage of the available hardware. Its performance is constrained only by the underlying hardware available to it. This benchmark was run with our recommended configuration for performance workloads and can easily be replicated in an hour for less than $350.

You can download a PDF of the benchmark here. You can download MinIO here. If you have any questions, ping us at hello@min.io or join the Slack community.