InfiLINK 2x2 / InfiMAN 2x2: Initial Link Configuration and Installation

06 - Link performance, traffic separation and traffic shaping

Throughput

To check the capacity of the radio link, we'll use the Performance Tests tool available in the Web GUI. It performs link throughput tests for the configured channel bandwidth by generating test traffic between the two nodes (Master and Slave) and displays the channel throughput (in Kbps) for the traffic with the chosen priority. The Performance Tests tool displays the full channel throughput available under the current settings for each bitrate, and indicates errors (if any) with a red stripe.

Task 13
  • Let's go to the "Device Status" section and select the active link in Links Statistics. 
  • Check Performance Tests and click the "OK" button. 
  • Check the Bidirectional test type, set Priority to ‘0’ to occupy the entire channel with the test traffic and click the "Run Tests" button.

Figure - Accessing Performance Test

Figure - Performance Test Results

Traffic separation

The purpose of the tasks in this section is to check the capability of the wireless link to handle different traffic types. We'll build the following scenario:

Figure - Traffic separation


NOTE

Traffic can be separated by protocol (e.g. ether, ip, arp, rarp, tcp, udp, etc.) or by type (e.g. host, net, port, portrange, vlan, mpls, pppoe, etc.). In this section, we will separate the traffic by network subnet, as described below.
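For illustration, a subnet-based filter like the one configured below would typically be written with the "net" keyword, e.g. net 172.16.0.0/16 (the keywords listed above follow the usual pcap-style filter syntax; the exact rule format used in the GUI is shown in the screenshots further down).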


Two traffic flows will pass through the wireless link:

  1. The traffic between PC1 and PC2 within network 172.16.0.0/16, which passes through Switch Group #1; this group allows only packets from this network.
  2. The traffic between PC1 and PC2 within network 10.10.10.0/24, which passes through Switch Group #2; this group allows any other traffic type because it has no packet filtering rule defined.
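For example, a packet sent from PC2 to 172.16.10.100 matches the 172.16.0.0/16 filter and is therefore forwarded through Switch Group #1, while a packet sent to 10.10.10.100 does not match that filter and is handled by Switch Group #2 instead.
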
Task 14
  • On PC1: let's set IP 172.16.10.100 with mask 255.255.0.0, IP 10.10.10.100 with mask 255.255.255.0 and IP 10.10.20.100 with mask 255.255.255.0 on the network interface.
  • On PC2: let's set IP 172.16.10.110 with mask 255.255.0.0, IP 10.10.10.110 with mask 255.255.255.0 and IP 10.10.20.110 with mask 255.255.255.0 on the network interface.

Figure - Multiple IP addresses allocation in Microsoft Windows 10
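As an alternative to the GUI dialog shown above, the same addresses can be added from an elevated Command Prompt with the netsh utility (a minimal sketch; "Ethernet" is an assumed adapter name, adjust it to match the actual network interface). For example, on PC1:

netsh interface ipv4 add address "Ethernet" 172.16.10.100 255.255.0.0
netsh interface ipv4 add address "Ethernet" 10.10.10.100 255.255.255.0
netsh interface ipv4 add address "Ethernet" 10.10.20.100 255.255.255.0

On PC2, repeat the commands with the corresponding .110 addresses listed above.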

Task 15

On both units, Master (which is directly connected to PC1 and accessed at 10.10.20.1/24) and Slave (which is directly connected to PC2 and accessed at 10.10.20.2/24):

  • Delete the default configuration in the MAC Switch section by clicking the "Remove Management" and "Remove Group" buttons.
  • Create Switch Group #1 adding eth0 and rf5.0 physical ports to it and add a rule to filter the traffic based on the network address (172.16.0.0/16).
  • Create Switch Group #2 adding eth0 and rf5.0 physical ports to it; no rule should be defined for this Switch Group.
  • In the "Network Settings" section, create Switch Virtual Interface #2, assign IP 10.10.10.10/24 to it at the Master unit and IP 10.10.10.11/24 at the Slave unit.

NOTE

Make sure that the order of the two Switch Groups is the same as shown in the screenshot below.


Figure - Configurations in MAC Switch section

Figure - Configurations in Network Settings section for Master unit

Figure - Configurations in Network Settings section for Slave unit


NOTE

After saving the configuration above, use IP 10.10.10.10 for accessing the Web GUI of the Master unit and 10.10.10.11 for accessing the Web GUI of the Slave unit.
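As a quick check before moving on, the new addresses can be verified from the PCs, for example from PC1:

ping 10.10.10.10 -n 4
ping 10.10.10.11 -n 4

Both units should reply, since this traffic belongs to the 10.10.10.0/24 network and is carried by Switch Group #2, which has no filtering rule.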


Task 16

Let's verify that the data traffic sent to IP 172.16.10.100 passes through Switch Group #1 and that the data traffic sent to IP 10.10.10.100 passes through Switch Group #2:

  • Open Command Prompt in Microsoft Windows on both PCs and go to the location where iperf.exe is saved.
  • Execute the following Server command on PC1: iperf -s -u -f k (this will start the iPerf server on PC1).
  • Execute the following Client command on PC2: iperf -c 172.16.10.100 -u -i 1 -l 256B -f m -b 50M -t 60 -T 1 (this will generate 50 Mbps of UDP traffic from PC2 to PC1, with 256-byte packets).
  • Check the Switch Statistics section in Device Status page.
  • Once the traffic towards 172.16.10.100 has stopped, execute the following command on PC2: iperf -c 10.10.10.100 -u -i 1 -l 256B -f m -b 50M -t 60 -T 1.
  • Check the Switch Statistics.

The number of unicast packets should increment in Switch Group #1 after executing the command: iperf -c 172.16.10.100 -u -i 1 -l 256B -f m -b 50M -t 60 -T 1.

The number of unicast packets should increment in Switch Group #2 after executing the command: iperf -c 10.10.10.100 -u -i 1 -l 256B -f m -b 50M -t 60 -T 1.

Figure - Switch Statistics
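As a rough cross-check, 50 Mbps of UDP traffic with 256-byte payloads corresponds to roughly 24,000 datagrams per second, so over the 60-second test the unicast counter of the matching Switch Group should increase by more than a million packets, while the counters of the other Switch Group should remain practically unchanged.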

Traffic shaping

The purpose of the tasks in this section is to limit the total traffic that passes through the wireless link and, furthermore, to limit the traffic that passes through each Switch Group.

Task 17

Keeping all the configurations performed so far on both the Master and Slave units (to separate the two data flows), let's add the following settings in the Traffic Shaping section of both units:

  • Create a Class which limits the traffic assigned to it to a value lower than the maximum rate of the Ethernet port (100 Mbps when only the eth0 port of the Smn(c)/Lmn(c) models is used) or of the air protocol (280 Mbps for the Mmx/Omx models), e.g. 80 Mbps.
  • Create two QM channels within this Class: Channel 1 must limit the traffic assigned to it to a value lower than that of the entire Class (for instance 80%) and Channel 2 must limit the traffic assigned to it to the remaining percentage (for instance 20%).
  • Create Rule 1 to assign to Channel 1 all traffic arriving at the eth0 interface from the 172.16.0.0/16 network.
  • Create Rule 2 to assign to Channel 2 all traffic arriving at the eth0 interface from the 10.10.10.0/24 network.
  • Save the configuration.

Figure - Traffic shaping
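As a worked example, with the Class limit set to 80 Mbps and the 80%/20% split suggested above, Channel 1 allows up to about 64 Mbps for traffic from the 172.16.0.0/16 network and Channel 2 up to about 16 Mbps for traffic from the 10.10.10.0/24 network.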

Task 18

Let's verify that the data traffic sent to IP 172.16.10.100 is limited to the set value and that the data traffic sent to IP 10.10.10.100 is also limited to the set value:

  • Open Command Prompt in Microsoft Windows on both PCs and go to the location where iperf.exe is saved.
  • Execute the following Server command on PC1: iperf -s -u -f k (this will start the iPerf server on PC1).
  • Execute the following Client command on PC2: iperf -c 172.16.10.100 -u -i 1 -l 256B -f m -b 50M -t 60 -T 1 (this will generate 50 Mbps of UDP traffic from PC2 to PC1, with 256-byte packets).
  • Check the Switch Statistics section in Device Status page.
  • Once the traffic towards 172.16.10.100 has stopped, execute the following command on PC2: iperf -c 10.10.10.100 -u -i 1 -l 256B -f m -b 50M -t 60 -T 1.
  • Check the Switch Statistics.

In the Device Status page of the Slave unit, the rate indicated for the eth0 interface must correspond to the value set for Class 1. The rate indicated for the rf5.0 interface must correspond to the value set for Channel 1 when iPerf generates traffic towards 172.16.10.100, and to the value set for Channel 2 when iPerf generates traffic towards 10.10.10.100.
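With the example values above (Class limit 80 Mbps, Channel 1 at 64 Mbps, Channel 2 at 16 Mbps), the 50 Mbps stream towards 172.16.10.100 fits within the Channel 1 limit and should pass almost unshaped, while the 50 Mbps stream towards 10.10.10.100 should be capped at roughly 16 Mbps by Channel 2.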

Round trip latency

The final goal of this training course is to create a network configuration that highlights the system's capabilities regarding latency. The traffic generated with iPerf through Switch Group #1 passes simultaneously with the ICMP traffic generated between PC1 and PC2 through Switch Group #2, which will show the latency of the system.
The traffic generated with iPerf after executing the command iperf -c 172.16.10.100 -P 10 -i 1 -p 5001 -w 256k -f m -b 8M -t 50 -T 1 consists of 10 parallel streams of 8 Mbps each from PC2 to PC1, when Smn(c)/Lmn(c) models are used. When Mmx/Omx models are used, execute the same command with -b 25M (it will generate 10 parallel streams of 25 Mbps each). The reason for passing this traffic through Switch Group #1 is to demonstrate that the wireless link adds low latency even under a heavy data processing load, as long as the ICMP traffic has a guaranteed bandwidth (in our case 20% of the total traffic processed at the eth0 port, according to the settings from the Traffic shaping tasks). In addition, add an encryption key for the link in the Link Settings section to increase the data processing load even further.

Task 19

Let's test the average latency between the 2 PCs:

  • Open Command Prompt in Microsoft Windows on both PCs and go to the location where iperf.exe is saved.
  • Execute the following Server command on PC1: iperf -s -w 256k -l 256k (this will start the iPerf server on PC1).
  • Execute the following Client command on PC2 in case of using Smn(c)/Lmn(c) models: iperf -c 172.16.10.100 -P 10 -i 1 -p 5001 -w 256k -f m -b 8M -t 50 -T 1.
  • Execute the following Client command on PC2 in case of using Mmx/Omx models: iperf -c 172.16.10.100 -P 10 -i 1 -p 5001 -w 256k -f m -b 25M -t 50 -T 1, and make sure that Class 1 limits the ingress traffic at the eth0 interface to 250 Mbps (Class1 max=250000).
  • Open a new Command Prompt in Microsoft Windows on PC2 and execute the command ping 10.10.10.100 -n 10.
  • Record the average value displayed after the 10 ICMP packets have been sent (make sure that the iPerf traffic is still running when the ICMP test finishes, so that the latency is measured under load).
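In Microsoft Windows, the average round trip time is printed on the last line of the ping statistics (Minimum, Maximum and Average); this Average figure, measured while the iPerf streams are still running, is the latency value to record.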

Provided that the link availability is at least 99.99%, the average end-to-end latency should be below 8 ms.
