
A HP SFS G3 Performance

A.1 Benchmark Platform

Performance data in this appendix is based on HP SFS G3.0-0. Performance analysis of HP SFS
G3.1-0 is not available at the time of this edition. However, HP SFS G3.1-0 performance is expected
to be comparable to HP SFS G3.0-0. Look for updates to performance testing in this document
at http://www.docs.hp.com/en/storage.

HP SFS G3.0-0, based on Lustre File System Software, is designed to provide the performance
and scalability needed for very large high-performance computing clusters. This appendix
presents HP SFS G3.0-0 performance measurements, which can also be used to estimate the I/O
performance of HPC clusters and to specify their performance requirements.

The end-to-end I/O performance of a large cluster depends on many factors, including the disk
drives, storage controllers, storage interconnects, Linux, the Lustre server and client software,
the cluster interconnect network, the server and client hardware, and finally the characteristics
of the I/O load generated by applications. A large number of parameters at various points in the
I/O path interact to determine overall throughput. Use care and caution when attempting to
extrapolate from these measurements to other cluster configurations and other workloads.

Figure A-1 shows the test platform used. Starting on the left, the head node launched the test
jobs on the client nodes, for example IOR processes under the control of mpirun. The head node
also consolidated the results from the clients.

Figure A-1 Benchmark Platform
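
The exact IOR geometry and launch options used for these measurements are not reproduced
here. As a rough illustration of the procedure described above, the following Python sketch shows
how a head node might launch IOR under mpirun across the client blades and extract the aggregate
throughput lines from the IOR report. The host file path, Lustre mount point, block and transfer
sizes, process counts, and the -hostfile form of the mpirun option are assumptions for illustration
only, not the values used in these tests.

#!/usr/bin/env python3
# Illustrative sketch only: launch IOR under mpirun from the head node and
# print the aggregate throughput lines from its report. Host names, file
# paths, and IOR options are hypothetical placeholders.
import re
import subprocess

HOSTFILE = "/etc/ior_clients"          # hypothetical list of the 16 client blades
TESTFILE = "/mnt/lustre/ior_testfile"  # hypothetical Lustre mount point

def run_ior(procs_per_node=8, nodes=16, block="1g", transfer="1m"):
    """Run one IOR write/read pass and return the raw IOR output."""
    cmd = [
        "mpirun",
        "-np", str(procs_per_node * nodes),
        "-hostfile", HOSTFILE,
        "ior",
        "-w", "-r",          # write phase, then read phase
        "-F",                # one file per process
        "-b", block,         # data written per process
        "-t", transfer,      # size of each I/O request
        "-o", TESTFILE,
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def summarize(output):
    """Print the 'Max Write' and 'Max Read' summary lines from the IOR report."""
    for line in output.splitlines():
        if re.match(r"\s*Max (Write|Read):", line):
            print(line.strip())

if __name__ == "__main__":
    summarize(run_ior())

In practice, parameters such as the block size, transfer size, and number of processes per node
are varied from run to run to exercise different points in the I/O path.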

The clients were 16 HP BL460c blades in a c7000 enclosure. Each blade had two quad-core
processors, 16 GB of memory, and a DDR IB HCA. The blades were running HP XC V4.0 BL4
software that included a Lustre 1.6.5 patchless client.
The blade enclosure included a 4X DDR IB switch module with eight uplinks. These uplinks and
the six Lustre servers were connected to a large InfiniBand switch (Voltaire 2012). The Lustre
servers used ConnectX HCAs. This fabric minimized any InfiniBand bottlenecks in our tests.
