This document discusses how Mellanox networks enable high-performance Ceph storage clusters. It notes that Ceph performance and scalability are largely dictated by the performance of the backend cluster network. It provides examples of customers deploying Ceph with Mellanox 40GbE and 10GbE interconnects, and highlights how these networks enable scalable, high-performance storage solutions. Specifically, it shows that 40GbE cluster networks and 40GbE client networks deliver much higher throughput and IOPS than 10GbE. The document concludes by noting how RDMA offloads free CPU cycles for application processing, and how the Accelio library enables high-performance RDMA messaging for Ceph.
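
As a minimal illustration (not taken from the document itself), the separation between the backend cluster network and the client-facing network described above is typically expressed in ceph.conf with the public and cluster network options; the subnets shown here are hypothetical placeholders:

    [global]
    # Front-side (client) traffic, e.g. carried over a 40GbE or 10GbE fabric
    public network = 192.168.10.0/24
    # Backend cluster traffic: replication, recovery, and OSD heartbeats
    cluster network = 192.168.20.0/24

Placing replication and recovery traffic on a separate, faster cluster network is what allows the backend fabric to become the main lever for overall Ceph throughput and scalability, as the document argues.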