Perftest

InfiniBand Verbs Performance Tests

     Open Fabrics Enterprise Distribution (OFED)
            	Performance Tests README

===============================================================================
Table of Contents
===============================================================================

  1. Overview
  2. Installation
  3. Notes on Testing Methodology
  4. Test Descriptions
  5. Running Tests
  6. Known Issues

===============================================================================

1. Overview
===============================================================================

This is a collection of tests written over uverbs, intended for use as a performance micro-benchmark. The tests may be used for HW or SW tuning as well as for functional testing.

The collection contains a set of bandwidth and latency benchmarks, such as:

* Send        - ib_send_bw and ib_send_lat
* RDMA Read   - ib_read_bw and ib_read_lat
* RDMA Write  - ib_write_bw and ib_write_lat
* RDMA Atomic - ib_atomic_bw and ib_atomic_lat
* Native Ethernet (when working with MOFED2) - raw_ethernet_bw, raw_ethernet_lat

Please post results/observations to the openib-general mailing list. See "Contact Us" at http://openib.org/mailman/listinfo/openib-general and http://www.openib.org.

===============================================================================
2. Installation
===============================================================================

After cloning the repository, a perftest directory will appear in your current directory:

	git clone https://github.com/linux-rdma/perftest.git

After cloning, run the following commands:

	cd perftest/
	./autogen.sh
	./configure
	make
	make install

Note: To install into a specific directory, pass the optional --prefix flag to configure, e.g. ./configure --prefix=<directory path>.

All of the tests will appear in the perftest directory and in the install directory.

===============================================================================
3. Notes on Testing Methodology
===============================================================================
  • The benchmarks use the CPU cycle counter to get time stamps without a context switch. Some CPU architectures (e.g., Intel's 80486 or older PPC) do not have such a capability.

  • The latency benchmarks measure round-trip time but report half of that as one-way latency. This means that the results may not be accurate for asymmetrical configurations.

  • On all unidirectional bandwidth benchmarks, the client measures the bandwidth. On bidirectional bandwidth benchmarks, each side measures the bandwidth of the traffic it initiates, and at the end of the measurement period, the server reports the result to the client, who combines them together.
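
    As a sketch of how the two sides' results combine (the bandwidth figures below are made-up sample values, not output from a real run):

    ```shell
    # Hypothetical per-side results (MB/sec) reported at the end of a
    # bidirectional run: each side measured the traffic it initiated.
    client_bw=5812.3
    server_bw=5790.1

    # The client combines both sides into the reported bidirectional bandwidth.
    total=$(awk -v a="$client_bw" -v b="$server_bw" 'BEGIN { printf "%.1f", a + b }')
    echo "bidirectional: $total MB/sec"
    ```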

  • Latency tests report minimum, median and maximum latency results. The median latency is typically less sensitive to high latency variations, compared to average latency measurement. Typically, the first value measured is the maximum value, due to warmup effects.
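
    A quick way to see why the median is reported: the pipeline below (with made-up latency samples in usec) picks min/median/max the way a sorted report would, and a single warmup outlier moves the maximum while leaving the median unchanged:

    ```shell
    # Made-up one-way latency samples in usec; 55.0 plays the warmup outlier.
    printf '%s\n' 12.4 9.8 10.1 55.0 10.3 | sort -n | awk '
      { v[NR] = $1 }
      END { printf "min=%s median=%s max=%s\n", v[1], v[int((NR + 1) / 2)], v[NR] }'
    # prints: min=9.8 median=10.3 max=55.0
    ```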

  • Long sampling periods have very limited impact on measurement accuracy. The default value of 1000 iterations is pretty good. Note that the program keeps data structures with a memory footprint proportional to the number of iterations. Setting a very high number of iterations may have a negative impact on the measured performance that is not related to the devices under test. If a high number of iterations is strictly necessary, it is recommended to use the -N flag (No Peak).

  • Bandwidth benchmarks may be run for a number of iterations, or for a fixed duration. Use the -D flag to instruct the test to run for the specified number of seconds. The --run_infinitely flag instructs the program to run until interrupted by the user, and print the measured bandwidth every 5 seconds.

  • The "-H" option in latency benchmarks dumps a histogram of the results. See xgraph, ygraph, r-base (http://www.r-project.org/), PSPP, or other statistical analysis programs.
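
    If no plotting tool is at hand, a rough text histogram can be built from the dumped values with standard tools (the sample values here are invented, not real -H output):

    ```shell
    # Invented latency samples standing in for a -H dump (one value per line).
    # sort -n groups equal values; uniq -c counts occurrences per distinct value.
    printf '%s\n' 1.2 1.3 1.2 2.7 1.3 1.3 |
      sort -n | uniq -c
    ```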

    *** IMPORTANT NOTE: When running the benchmarks over an InfiniBand fabric, a Subnet Manager must run on the switch or on one of the nodes in your fabric, prior to starting the benchmarks.

Architectures tested: i686, x86_64, ia64

===============================================================================
4. Test Descriptions
===============================================================================

The benchmarks generate a synthetic stream of operations, which is very useful for hardware and software benchmarking and analysis. The benchmarks are not designed to emulate any real application traffic. Real application traffic may be affected by many parameters, and hence might not be predictable based only on the results of those benchmarks.

ib_send_lat      latency test with send transactions
ib_send_bw       bandwidth test with send transactions
ib_write_lat     latency test with RDMA write transactions
ib_write_bw      bandwidth test with RDMA write transactions
ib_read_lat      latency test with RDMA read transactions
ib_read_bw       bandwidth test with RDMA read transactions
ib_atomic_lat    latency test with atomic transactions
ib_atomic_bw     bandwidth test with atomic transactions

Raw Ethernet interface benchmarks:

raw_ethernet_send_lat    latency test over a raw Ethernet interface
raw_ethernet_send_bw     bandwidth test over a raw Ethernet interface

===============================================================================
5. Running Tests
===============================================================================

Prerequisites:

	kernel 2.6
	(kernel module) matches libibverbs
	(kernel module) matches librdmacm
	(kernel module) matches libibumad
	(kernel module) matches libmath (lm)
	pciutils (lspci)

Server:	./<test name> <options>
Client:	./<test name> <options> <server IP address>

	o  <server address> is an IPv4 or IPv6 address. You can use the IPoIB
	   address if IPoIB is configured.
	o  --help lists the available <options>

*** IMPORTANT NOTE: The SAME OPTIONS must be passed to both server and client.
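
    One way to keep the two sides in sync is to build the option string once and reuse it on both. A minimal sketch, where the device name, message size, iteration count, and server address are placeholder values, not figures from this README:

    ```shell
    # Shared options, passed identically to both sides (placeholder values).
    opts="-d mlx5_0 -s 65536 -n 1000"

    server_cmd="ib_send_bw $opts"                 # start this on the server first
    client_cmd="ib_send_bw $opts 192.168.1.10"    # then this on the client, pointing at the server
    echo "$server_cmd"
    echo "$client_cmd"
    ```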

Common Options to all tests:

-h, --help                  Display this help message screen
-p, --port=<port>           Listen on/connect to port <port> (default: 18515)
-R, --rdma_cm               Connect QPs with rdma_cm and run test on those QPs
-z, --comm_rdma_cm          Communicate with rdma_cm module to exchange data - use regular QPs
-m, --mtu=<mtu>             QP MTU size (default: active_mtu from ibv_devinfo)
-c, --connection=<type>     Connection type RC/UC/UD/XRC/DC/SRD (default: RC)
-d, --ib-dev=<dev>          Use IB device <dev> (default: first device found)
-i, --ib-port=<port>        Use network port <port> of IB device (default: 1)
-s, --size=<size>           Size of message to exchange (default: 1)
-a, --all                   Run sizes from 2 till 2^23
-n, --iters=<iters>         Number of exchanges (at least 100, default: 1000)
-x, --gid-index=<index>     Test uses GID with GID index taken from the command line
-V, --version               Display version number
-e, --events                Sleep on CQ events (default: poll)
-F, --CPU-freq              Do not fail even if the cpufreq_ondemand module is loaded
-I, --inline_size=<size>    Max size of message to be sent in inline mode
-u, --qp-timeout=<timeout>  QP timeout = (4 uSec)*(2^timeout) (default: 14)
-S, --sl=<sl>               Service Level (default: 0)
-r, --rx-depth=<dep>        Receive queue depth (default: 600)

Options for latency tests:

-C, --report-cycles       Report times in CPU cycle units
-H, --report-histogram    Print out all results (default: summary only)
-U, --report-unsorted     Print out unsorted results (default: sorted)

Options for BW tests:

-b, --bidirectional        Measure bidirectional bandwidth (default: unidirectional)
-N, --no peak-bw           Cancel peak-bw calculation (default: with peak-bw)
-Q, --cq-mod               Generate a CQE only after <cq-mod> completions
-t, --tx-depth=<dep>       Size of tx queue (default: 128)
-O, --dualport             Run test in dual-port mode (2 QPs). Both ports must be active (default: OFF)
-D, --duration=<sec>       Run test for <sec> seconds
-f, --margin=<sec>         When in duration mode, measure results within margins (default: 2)
-l, --post_list=<size>     Post list of send WQEs of <size> size (instead of a single post)
--recv_post_list=<size>    Post list of receive WQEs of <size> size (instead of a single post)
-q, --qp=<num>             Number of QPs running in the process (default: 1)
--run_infinitely           Run test until interrupted by the user; print results every 5 seconds

SEND tests (ib_send_lat or ib_send_bw) flags:

-r, --rx-depth=<dep>          Size of receive queue (default: 512 in BW test)
-g, --mcg=<num_of_qps>        Send messages to a multicast group with <num_of_qps> QPs attached to it
-M, --MGID=<multicast_gid>    In multicast, use <multicast_gid> as the group MGID

WRITE latency (ib_write_lat) flags:

--write_with_imm Use write-with-immediate verb instead of write

ATOMIC tests (ib_atomic_lat or ib_atomic_bw) flags:

-A, --atomic_type=<type>    Type of atomic operation, from {CMP_AND_SWAP, FETCH_AND_ADD}
-o, --outs=<num>            Number of outstanding read/atomic requests - also on READ tests

Options for raw_ethernet_send_bw:

-B, --source_mac      Source MAC address in the format XX:XX:XX:XX:XX:XX (default: taken from the GID)
-E, --dest_mac        Destination MAC address in the format XX:XX:XX:XX:XX:XX; MUST be entered
-J, --server_ip       Server IP address in the format X.X.X.X (used to send packets with an IP header)
-j, --client_ip       Client IP address in the format X.X.X.X (used to send packets with an IP header)
-K, --server_port     Server UDP port number (used to send packets with a UDP header)
