How do I benchmark network throughput between Amazon EC2 Linux instances in the same Amazon VPC?


I want to measure the network bandwidth between Amazon Elastic Compute Cloud (Amazon EC2) Linux instances in the same Amazon Virtual Private Cloud (Amazon VPC).

Short description

There are several factors that might affect Amazon EC2 network performance when the instances are in the same Amazon VPC:

  • The physical proximity of the EC2 instances: Instances in the same Availability Zone are geographically closest to each other. In the following scenarios, instances are progressively farther away from each other:
    Instances in different Availability Zones in the same AWS Region
    Instances in different Regions on the same continent
    Instances in different Regions on different continents
  • The EC2 instance maximum transmission unit (MTU): The MTU of a network connection is the largest permissible packet size (in bytes) that the connection can pass. All EC2 instance types support an MTU of 1500. All current generation Amazon EC2 instances support jumbo frames, as do the previous generation C3, G2, I2, M3, and R3 instance types. Jumbo frames allow an MTU larger than 1500. However, there are scenarios where your instance is limited to an MTU of 1500 even with jumbo frames. For more information, see Jumbo frames (9001 MTU). To check or set the MTU on your instances, see the example commands after this list.
  • The size of your EC2 instance: Larger instance sizes for an instance type typically provide better network performance than smaller instance sizes of the same type. For more information, see Amazon EC2 instance types.
  • Amazon EC2 enhanced networking support for Linux: Enhanced networking support significantly improves network performance for instance types other than T2 and M3. For more information, see Enhanced networking on Linux. For information on enhanced networking on your instance, see How do I turn on and configure enhanced networking on my EC2 instances?
  • Amazon EC2 high performance computing (HPC) support that uses placement groups: HPC provides full-bisection bandwidth and low latency, and supports network speeds of up to 100 Gbps, depending on the instance type. To review network performance for each instance type, see Amazon Linux AMI instance type matrix. For more information, see Placement groups.
  • The instance uses a network I/O credit mechanism to allocate network bandwidth: Instances designated with a symbol in the Network performance column in General purpose instances - Network performance can reach the designated maximum network performance. However, these instances use a network I/O credit mechanism to allocate bandwidth to instances based on average bandwidth utilization. So, network performance varies for these instances.
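
To check the MTU that an instance currently uses, or to turn on jumbo frames, you can use standard Linux tooling. The following is a minimal sketch; it assumes the primary network interface is named eth0 (on Nitro-based instances it might be ens5 instead) and uses the example server address that appears later in this article:

$ ip link show eth0 | grep mtu        # print the interface line, which includes the current MTU
$ tracepath 172.31.30.41              # report the path MTU (pmtu) to the other instance
$ sudo ip link set dev eth0 mtu 9001  # turn on jumbo frames if the instance type supports them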

Because of these factors, there can be significant performance differences between cloud environments. It's a best practice to regularly evaluate and baseline your environment's network performance to improve application performance. Testing network performance provides insight to determine the EC2 instance types, sizes, and configurations that suit your needs. You can run network performance tests on any combination of instances you choose.

For more information, open an AWS Support case and ask for additional network performance specifications for the specific instance types that you're interested in.

Resolution

Before you begin benchmark tests, launch and configure your EC2 Linux instances:

  1. Launch two Linux instances that you can run network performance testing from.
  2. Verify that the instances support enhanced networking for Linux, and that they are in the same Amazon VPC. An example driver check appears after this list.
  3. (Optional) If you perform network testing between instances that don't support jumbo frames, then follow the steps in Network maximum transmission unit (MTU) for your EC2 instance.
  4. Connect to the instances to verify that you can access the instances.
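
A quick way to verify enhanced networking is to check which driver the network interface uses. This sketch assumes that ethtool is installed and that the interface is named eth0 (it might be ens5 on Nitro-based instances):

$ ethtool -i eth0 | grep driver   # "driver: ena" or "driver: ixgbevf" indicates enhanced networking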

Install the iperf network benchmark tool on both instances

In some distros, such as Amazon Linux, iperf is part of the Extra Packages for Enterprise Linux (EPEL) repository. To turn on the EPEL repository, see How do I turn on the EPEL repository for my Amazon EC2 instance that runs CentOS, RHEL, or Amazon Linux?

Note: The command iperf refers to version 2.x. The command iperf3 refers to version 3.x. Because version 2.x provides multi-thread support, use it to benchmark EC2 instances with high throughput. Version 3.x also supports parallel streams with the -P flag, but it's single-threaded and limited by a single CPU. Because of this, version 3.x requires multiple processes run in parallel to drive the necessary throughput on larger EC2 instances. For more information, see iperf2/iperf3 on the ESnet website.
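
If you do use iperf3 on a larger instance, one way to work around the single-CPU limit is to run several server processes on different ports and pair each with its own client process. The following is a minimal sketch; the ports 5201 and 5202 are arbitrary choices, and 172.31.30.41 is the example server address used later in this article.

On the server:

$ iperf3 -s -p 5201 &
$ iperf3 -s -p 5202 &

On the client:

$ iperf3 -c 172.31.30.41 -p 5201 -P 8 -t 30 &
$ iperf3 -c 172.31.30.41 -p 5202 -P 8 -t 30 &

Sum the bandwidth that each client process reports to estimate the total throughput.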

Connect to your Linux instances, and then run the following commands to install iperf.

To install iperf on RHEL 6 Linux hosts, run the following command:

# yum -y install https://dl.fedoraproject.org/pub/archive/epel/6/x86_64/epel-release-6-8.noarch.rpm && yum -y install iperf

To install iperf on RHEL 7 Linux hosts, run the following command:

# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum -y install iperf

To install iperf on Debian/Ubuntu hosts, run the following command:

# apt-get install -y iperf

To install iperf on CentOS 6/7 hosts, run the following command:

# yum -y install epel-release && yum -y install iperf
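
To install iperf on Amazon Linux 2 hosts, you can enable the EPEL repository with the amazon-linux-extras tool and then install the package. This is a sketch that assumes the default Amazon Linux 2 repositories are reachable from your instance:

# amazon-linux-extras install epel -y && yum -y install iperf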

Amazon Linux 2023

Because Amazon Linux 2023 (AL2023) doesn't support EPEL, you can't download the iperf utility through the EPEL repository. For more information, see Extra Packages for Enterprise Linux (EPEL).

However, you can download and install iperf for AL2023 manually:

  1. Install development tools and git:

    sudo yum groupinstall "Development Tools"
    sudo yum install git
  2. Clone the iperf code:

    cd /usr/local/
    sudo git clone https://git.code.sf.net/p/iperf2/code iperf2-code
  3. Build and install the package:

    cd /usr/local/iperf2-code
    sudo ./configure
    sudo make
    sudo make install
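  4. (Optional) Confirm that the iperf binary is on your path and check the installed version (in iperf 2, the -v flag prints the version string):

    which iperf
    iperf -v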

Test TCP network performance between the instances

By default, iperf communicates over port 5001 when it tests TCP performance. However, you can configure that port with the -p switch. Be sure to configure your security groups to allow communication over the port that iperf uses.
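
For example, you can use the AWS CLI to allow inbound TCP traffic on port 5001 from within your VPC. The security group ID and CIDR range in this sketch are placeholders; substitute the values for your own environment:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5001 --cidr 172.31.0.0/16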

  1. Configure one instance as a server to listen on the default port, or specify an alternate listener port with the -p switch. Replace 5001 with your port, if different:

    $ sudo iperf -s [-p 5001]
  2. Configure a second instance as a client, and run a test against the server with the desired parameters. For example, the following command initiates a TCP test against the specified server instance with 40 parallel connections:

    $ iperf -c 172.31.30.41 --parallel 40 -i 1 -t 2

     
    Note: For a bidirectional test with iperf (version 2), use the -r option on the client side.
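
    For example, a bidirectional run against the same server instance could use the flags from the previous command plus -r, which tests each direction in turn:

    $ iperf -c 172.31.30.41 --parallel 40 -i 1 -t 2 -r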

The output shows the interval, the data transferred, and the bandwidth for each client stream. The following iperf output shows test results for two c5n.18xlarge EC2 Linux instances launched in a cluster placement group. The total transmitted bandwidth across all connections is 97.6 Gbits/second:

------------------------------------------------------------------------------------
Client connecting to 172.31.30.41, TCP port 5001
TCP window size:  975 KByte (default)
------------------------------------------------------------------------------------
[  8] local 172.31.20.27 port 49498 connected with 172.31.30.41 port 5001
[ 38] local 172.31.20.27 port 49560 connected with 172.31.30.41 port 5001
[ 33] local 172.31.20.27 port 49548 connected with 172.31.30.41 port 5001
[ 40] local 172.31.20.27 port 49558 connected with 172.31.30.41 port 5001
[ 36] local 172.31.20.27 port 49554 connected with 172.31.30.41 port 5001
[ 39] local 172.31.20.27 port 49562 connected with 172.31.30.41 port 5001
...
[SUM]  0.0- 2.0 sec  22.8 GBytes  97.6 Gbits/sec

Test UDP network performance between the instances

By default, iperf communicates over port 5001 when it tests UDP performance. However, you can use the -p switch to configure your port. Be sure to configure your security groups to allow communication over the port that iperf uses.

Note: The default for UDP is 1 Mbit per second unless you specify a different bandwidth.

  1. Configure one instance as a server to listen on the default UDP port. Or, specify an alternate listener port with the -p switch. Replace 5001 with your port, if different:

    $ sudo iperf -s -u [-p 5001]
  2. Configure a second instance as a client. Then, run a test against the server with the desired parameters. The following example runs a UDP test against the specified server instance with the -b parameter set to 5g. The -b parameter changes the bandwidth from the default of 1 Mbit per second to 5 Gbits per second, which is the maximum network performance that a c5n.18xlarge instance can provide for a single traffic flow within a VPC.
    Note: UDP is connectionless and doesn't have the congestion control algorithms that TCP has. When you test with iperf, the bandwidth obtained with UDP might be lower than the bandwidth obtained with TCP.

    # iperf -c 172.31.1.152 -u -b 5g

The following is an example output from the UDP test:

$ iperf -c 172.31.30.41 -u -b 5g
------------------------------------------------------------------------------------
Client connecting to 172.31.30.41, UDP port 5001
Sending 1470 byte datagrams, IPG target: 2.35 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------------------------------
[  3] local 172.31.20.27 port 39022 connected with 172.31.30.41 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3] 0.0-10.0 sec  5.82 GBytes  5.00 Gbits/sec
[  3] Sent 4251700 datagrams
[  3] Server Report:
[  3] 0.0-10.0 sec  5.82 GBytes  5.00 Gbits/sec   0.003 ms 1911/4251700 (0.045%)
[  3] 0.00-10.00 sec  1 datagrams received out-of-order

This output shows the following values:

  • Interval (time)
  • Amount of data transferred
  • Bandwidth achieved
  • Jitter (the deviation in time for the periodic arrival of datagrams)
  • Loss and total of UDP datagrams

Related information

Disk Testing using iperf3 on the ESnet website

Network tuning on the ESnet website

Throughput tool comparison on the ESnet website

iperf2 on the SourceForge website
