White Paper – QLogic 8200 Series: The Value of Full Hardware Offload in a Converged Ethernet Environment


Server virtualization is an ideal candidate for FCoE because the increased
bandwidth of 10GbE can support high throughput from multiple VMs. In
addition, mobility of virtual servers, load balancing, and failover require
similarly high throughput, especially when that bandwidth is shared by
multiple VMs. Herein lies the problem with Open FCoE: decreased scalability,
increased bottlenecks, and hidden costs.

Decreased Scalability

One adapter running a software initiator can easily consume up to one-third
of the CPU’s processing capacity. Adding multiple adapters to a single
system that relies on the CPU to perform their protocol processing only
compounds the problem, as does adding VMs. Offloading that processing onto
a QLogic 8200 Series Converged Network Adapter, however, leaves plenty of
room for scaling to multiple adapters and multiple VMs without impacting
the overall performance of the server.

Bottlenecks

Using a software initiator on a NIC requires that every incoming TCP/IP,
FCoE, and iSCSI packet traverse the PCI bus in the server. Sending packets
back and forth keeps the PCI bus busy and can cause bottlenecks with other
hardware on the bus. The QLogic 8200 Series Converged Network Adapter
offloads all protocol processing (FCoE, TCP, iSCSI, and SCSI digest for
data integrity) onto the adapter. Using a QLogic 8200 Series Converged
Network Adapter therefore reduces bottlenecks and increases the throughput
of application data across the PCI bus.

Hidden Costs

To simplify this, let’s start by assuming you spend $3,000 on a typical
server with one CPU core. With a software initiator, you would be relying
on one-third of the CPU to process FCoE and/or iSCSI requests for one NIC.
Simple math tells us that you are actually paying $1,000 to process your
FCoE traffic. Now, let’s complicate things by looking at a similar solution
with next-generation servers running dual-socket, quad-core CPUs with
server virtualization. Today, typical virtual operating environments (VOEs)
are estimated at running approximately four VMs per CPU core (IDC reports
indicate that the industry is quickly moving toward 10 to 12 VMs per core).
Even in this light VOE, the server is supporting 32 VMs on eight cores, or
one-fourth of a core per VM. In this environment, there simply is not enough
CPU processing power to support protocol processing for all VMs, let alone
headroom for future scaling requirements. This work can be done at a
fraction of the cost with the hardware acceleration offered by the offload
engine of a QLogic 8200 Series Converged Network Adapter, leaving the
CPU to process business applications as intended and leaving plenty of
room to scale.
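
The arithmetic behind these figures is straightforward; the sketch below simply reproduces it, where the server price, CPU share, and VM density are the illustrative assumptions stated above rather than measured values.

# Illustrative cost arithmetic only; figures are the assumptions from the text above.
server_cost = 3000            # assumed price of a single-core server ($)
initiator_cpu_share = 1 / 3   # CPU fraction a software initiator may consume
fcoe_cost = server_cost * initiator_cpu_share
print(f"Cost of CPU spent on FCoE processing: ${fcoe_cost:,.0f}")     # -> $1,000

sockets, cores_per_socket = 2, 4          # assumed dual-socket, quad-core server
cores = sockets * cores_per_socket        # 8 cores
vms = cores * 4                           # four VMs per core (light VOE)
print(f"{vms} VMs on {cores} cores = {cores / vms:.2f} core per VM")  # -> 32 VMs, 0.25 core per VM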

When it comes to cost, the initial offering of Open FCoE and an inexpensive
NIC may sound like you are getting something for free, but do not forget
about the opportunity costs you will miss by not having offload functions.
If you are still unsure about an offload solution, be advised that
offloading functions from the CPU is not a novel or risky concept: dedicated
graphics controllers in the gaming world and dedicated disk controllers in
the server world have become the norm. Offload engines are a better
alternative to software initiators in enterprise servers because
organizations can maximize CPU cycle availability for application or I/O
services, as well as address emerging and future performance and
scalability requirements within enterprise data centers.

What’s the Cost of Data Integrity?

Along with the server efficiencies that the QLogic 8200 Series Converged
Network Adapter can provide, there are also efficiencies and advantages in
the I/O processing of a Converged Network Adapter over an Open FCoE
solution. These advanced features ensure that your data is delivered
accurately and that it is still valid when it reaches the disk.

Reliable Data Delivery

The FCoE protocol adopts data processing mechanisms similar to Fibre
Channel to maintain the same level of data integrity while sending storage
data over Ethernet. These mechanisms analyze storage packet headers and the
transmitted data to ensure its integrity. This integrity checking is a
compute-intensive process that is performed either by the CPU, in solutions
using a software initiator, or by the offload engine of a Converged Network
Adapter, such as a QLogic solution. An offload engine ensures the highest
performance from your Converged Network Adapter and maintains the same
level of data integrity as native Fibre Channel.
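
As a rough illustration of why this checking is compute-intensive, the short sketch below performs a per-frame integrity check in software, the kind of digest work a host CPU must repeat for every storage frame when no offload engine is available. The choice of CRC-32 and the 2KB frame size are illustrative assumptions, not details drawn from this paper.

import zlib

# Minimal sketch of software-side integrity checking (illustrative assumptions only).
def verify_frame(payload: bytes, expected_crc: int) -> bool:
    # Recompute the digest over the payload and compare with the value carried in the frame.
    return (zlib.crc32(payload) & 0xFFFFFFFF) == expected_crc

frame = bytes(2048)                        # one 2KB payload; size is assumed for the example
sent_crc = zlib.crc32(frame) & 0xFFFFFFFF  # digest computed on the sending side
assert verify_frame(frame, sent_crc)       # repeated on the CPU for every frame without offload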

Low-Impact Error Recovery

As traffic increases across an Ethernet network, dropped and out-of-order
data frames will result. With a software initiator, recovering from both of
these issues becomes a significant burden on the CPU and may cripple
overall performance on a 10Gb Ethernet network. A QLogic offload engine,
on the other hand, can reassemble out-of-order frames and complete
