Questions, comments, and bug reports should be emailed to netpipe@lists.bitspjoule.org. If you do not get a response, please subscribe to the NetPIPE mailing list and re-post, since we have been getting a lot of junk mail at that address.
NetPIPE was originally developed at the SCL by
Quinn Snell,
Armin Mikler,
John Gustafson,
and Guy Helmer.
Their IASTED conference paper in
postscript
or html format, along with the
slides
in postscript format, provides a basic description of NetPIPE.
NetPIPE is currently being maintained by Troy Benjegerdes.
From October 2000 to July 2005, the code was developed and maintained by
Dave Turner, with contributions from several past students
(Xuehua Chen, Adam Oline, Bogdan Vasiliu, and Brian Smith).
Modules have been added for PVM, TCGMSG, and the 1-sided
message-passing standards of MPI-2 and SHMEM.
Low-level modules have been developed to evaluate
GM for Myrinet cards,
the GPSHMEM implementation that brings the Cray SHMEM interface to other
machines, the low-level ARMCI library,
and the LAPI interface for IBM SP systems.
Internal testing can be done using a new memcpy module.
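As a rough illustration of what such a module measures, the sketch below times repeated memcpy() calls over a range of block sizes and reports the resulting bandwidth. It is a minimal example of the general technique, not code taken from NetPIPE; the buffer size and repetition counts are arbitrary choices here.

    /* Sketch of a memory-copy bandwidth test: time repeated memcpy() calls
     * between two buffers for a range of block sizes and report MB/s.
     * Illustration of the technique only -- not code taken from NetPIPE. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        const size_t max_size = 8 * 1024 * 1024;     /* largest block tested */
        char *src = malloc(max_size);
        char *dst = malloc(max_size);
        if (!src || !dst) return 1;
        memset(src, 1, max_size);

        for (size_t size = 1; size <= max_size; size *= 2) {
            int reps = (size < 65536) ? 1000 : 100;  /* more reps for small blocks */
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int r = 0; r < reps; r++)
                memcpy(dst, src, size);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
            printf("%10zu bytes  %10.1f MB/s\n", size, size * (double)reps / secs / 1e6);
        }

        printf("(%d)\n", dst[0]);   /* touch dst so the copies are not optimized away */
        free(src);
        free(dst);
        return 0;
    }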
The latest stable version is
NetPIPE-3.7.2.tar.gz, available here,
or follow this link for a list of
old versions.
We have added an InfiniBand module for the Mellanox VAPI,
incorporated an integrity check option (-i), added the ability to
test with and without cache effects, allowed offsets of the source
and destination buffers, and fixed problems with the streaming mode.
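To give a feel for what the integrity check does, the sketch below fills a transmit buffer with a known pattern and counts corrupted bytes on the receiving side, using a plain memcpy as a stand-in for the actual network transfer. It illustrates the concept only; the pattern and buffer size are arbitrary, and this is not NetPIPE's actual code.

    /* Sketch of the idea behind the -i integrity check: rather than timing
     * the transfer, fill the transmit buffer with a known pattern and verify
     * every byte on the receiving side.  Illustration of the concept only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void fill_pattern(unsigned char *buf, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            buf[i] = (unsigned char)(i & 0xff);      /* repeating 0..255 pattern */
    }

    /* Returns the number of bytes that did not survive the transfer intact. */
    static size_t check_pattern(const unsigned char *buf, size_t n)
    {
        size_t errors = 0;
        for (size_t i = 0; i < n; i++)
            if (buf[i] != (unsigned char)(i & 0xff))
                errors++;
        return errors;
    }

    int main(void)
    {
        const size_t n = 1024 * 1024;
        unsigned char *send_buf = malloc(n);
        unsigned char *recv_buf = malloc(n);
        if (!send_buf || !recv_buf) return 1;

        fill_pattern(send_buf, n);
        memcpy(recv_buf, send_buf, n);               /* stand-in for the network transfer */
        printf("%zu corrupted bytes\n", check_pattern(recv_buf, n));

        free(send_buf);
        free(recv_buf);
        return 0;
    }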
Jimmy Hill contributed a uDAPL module for version 3.6.2 that
you can download from the old versions directory.
He is planning to contribute a uDAPL version for NetPIPE 4.x too, which
will be integrated into the official distribution. A version of NetPIPE for Java is also available,
based on the NetPIPE 2.4 C code.
NOTE: Version 3.6.2 fixes the bug causing
segfaults in Red Hat Enterprise
Linux systems (and most likely other distributions as well). Additionally,
a number of portability issues with 64-bit architectures have been fixed.
NOTE: Version 3.6.1 fixes a bug in the InfiniBand module that arose
from changes in the Mellanox VAPI code.
Note to developers: I am planning an incremental stable 3.6.3 release, which will include at least a new OpenIB InfiniBand module and possibly the integration of the uDAPL contribution.
This web page has said for a long time that NetPIPE 4.0 is close to release. Unfortunately, I have not really had time to devote to getting it done. Many new features have been added, and I really need more feedback from people using NetPIPE about how to balance new features against maintaining the integrity of the benchmark.
Some of the features in the 4.x (a.k.a. unstable) branch include a theoretical module,
the ability to measure the workload that the communication system
puts on the CPU, and the ability to run multiple simultaneous ping-pong
tests to measure the performance across a switch. There is a tarball
containing Dr Turner's working copy of NetPIPE 4.x in the
old versions directory. If you are developing
a new module for NetPIPE, please join the
netpipe
mailing list and let us know what you are trying to do.
The message-passing modules have been used to compare the various
implementations of MPI (MPICH, LAM/MPI, MPI/Pro, MP_Lite) with each other
and with PVM and TCGMSG. They are also very useful in determining how
to fully tune the library and OS parameters for optimal performance.
The lower-level modules are useful for determining how much efficiency
is lost in the message-passing layer and for pointing out any idiosyncrasies
in the underlying network hardware.
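To make the measurement itself concrete, the sketch below shows the kind of ping-pong loop the message-passing modules are built around: two processes bounce a message of each size back and forth many times, and the round-trip time yields both the latency and the throughput. This is a simplified illustration written against the standard MPI interface, not NetPIPE's source; the message sizes, repetition counts, and output format are arbitrary choices here.

    /* Simplified MPI ping-pong in the spirit of NetPIPE's message-passing
     * modules: time round trips for a range of message sizes between two
     * ranks.  Illustration only -- this is not NetPIPE source code. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        if (nprocs != 2) {
            if (rank == 0) fprintf(stderr, "Run with exactly 2 processes\n");
            MPI_Finalize();
            return 1;
        }

        const int max_size = 4 * 1024 * 1024;
        char *buf = malloc(max_size);
        memset(buf, 0, max_size);
        int other = 1 - rank;

        for (int size = 1; size <= max_size; size *= 2) {
            int reps = (size < 65536) ? 1000 : 50;   /* more reps for small messages */
            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();
            for (int r = 0; r < reps; r++) {
                if (rank == 0) {                     /* send first, then wait for the echo */
                    MPI_Send(buf, size, MPI_CHAR, other, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, size, MPI_CHAR, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                } else {                             /* echo the message back */
                    MPI_Recv(buf, size, MPI_CHAR, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    MPI_Send(buf, size, MPI_CHAR, other, 0, MPI_COMM_WORLD);
                }
            }
            double t = (MPI_Wtime() - t0) / reps / 2.0;   /* one-way time in seconds */
            if (rank == 0)
                printf("%9d bytes  %9.2f usec  %9.2f Mbps\n",
                       size, t * 1e6, size * 8.0 / t / 1e6);
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }

Run it with exactly two processes (for example, mpirun -np 2 ./pingpong, where the binary name is just whatever you compiled the sketch as).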
A paper, Protocol-Dependent Message-Passing Performance on Linux Clusters, was presented at the Cluster 2002 conference in Chicago on September 25, 2002. The paper and PowerPoint presentation provide some insight into how NetPIPE can be used to investigate the performance of a variety of systems.
A paper, Integrating New Capabilities into NetPIPE, was presented at the
Euro PVM/MPI conference in Venice, Italy, on September 30, 2003.
The paper and
PowerPoint presentation
provide an overview of some of the new capabilities of NetPIPE, including
the ability to test with or without the cache effects that have a dramatic
impact on SMP and InfiniBand communications. Performance examples are given
for Channel-Bonded Gigabit Ethernet, InfiniBand, and memcpy to show how some of
the new modules are being used.
Below is an example of how NetPIPE has been used to measure the performance of a variety of high-speed interconnects. The latencies for each are given in microseconds. InfiniBand is using a 133 MHz PCI bus, delivering 6.5 Gbps out of a possible 8 Gbps. It tops out at 4.5 Gbps when a new buffer is used that has not been registered with the adapter. The experimental ATOLL hardware from the University of Mannheim provides the best results for messages below 10 kB, with SCI and Myrinet not far behind. The Intel 10 Gigabit Ethernet performance is poor, but several groups have newer 10 Gigabit Ethernet cards out that we have not tested yet. Gigabit Ethernet delivers around 900 Mbps with latencies of 25-62 microseconds from 64-bit 66 MHz PCI buses. Channel bonding two Gigabit Ethernet cards can double the throughput, but this requires the MP_Lite message-passing library at this point (Linux kernel channel bonding currently doesn't work at Gigabit speeds).