A TCP-Based Unreliable, Congestion-Controlled Transport Protocol

In order to maintain the stability of the Internet, multimedia flows should be congestion-controlled. While TCP provides congestion control, it also ensures reliability via persistent Automatic Repeat reQuest (ARQ). But persistent ARQ is not necessary in multimedia streaming, which tolerates a certain level of packet loss. TCP Urel is a new option for TCP that allows congestion-controlled but unreliable data streaming. TCP Urel sends fresh data in every data segment, regardless of whether the segment was originally constructed to carry fresh data or a retransmission. With less than 750 lines of extra code in the TCP stack of FreeBSD 5.4, it removes ARQ while retaining other functionality, including Additive Increase Multiplicative Decrease (AIMD) congestion control. By not changing congestion control, TCP Urel retains exactly the same rate-adaptation behavior as current TCP. Our tests show that adding the UREL option to TCP SACK, NewReno, or Reno does not change their TCP friendliness. The independence of congestion control from retransmission allows TCP Urel to remain friendly with possible new TCP versions in the future.

Source Code Release 0.1, 1 October, 2007. For FreeBSD 5.4.
  • Modified TCP Stack
    The TCP code available here is based on the TCP stack in FreeBSD 5.4. To try it out:
    1) Download the file and unpack it.
    	       [user@urels ~]$ su root
    	       [root@urels ~]$ tar -xzf urel.tgz

    2) Replace the original netinet directory in your FreeBSD 5.4 system with the netinet directory from the unpacked package, and recompile the kernel. The urel/urel_scripts/ directory provides helper scripts, including necessary instructions on how to recompile a kernel with Urel support. For more information on how to compile a kernel, a quick reference is here. An example:
    	       [root@urels ~]# cp -rf /usr/src/sys/netinet netinet_backup
    	       [root@urels ~]# cp -rf urel/netinet /usr/src/sys/. 
    	       [root@urels ~]# cp urel/netinet/tcp.h /usr/include/netinet/tcp.h
    	       [root@urels ~]# cp urel/netinet/tcp_var.h /usr/include/netinet/tcp_var.h
    	       [root@urels ~]# bash urel/urel_scripts/ 0

    3) The above script reboots your system automatically. After rebooting, TCP Urel is ready to use, with other TCP variants unaffected. To check the running kernel:
    	       [root@urels ~]# uname -a

    The output should be similar to:
    	       FreeBSD 5.4-RELEASE #0: Sun May  8 10:21:06 UTC 2005  

  • Iperf Testing Package
    We have modified Iperf 2.02 to test TCP Urel. New command-line options are provided to Iperf to enable the UREL option for streaming. Assuming you have downloaded and unpacked the urel package, install Iperf as follows:
    	       [root@urels ~]# cd urel/iperf-urel
    	       [root@urels ~]# ./configure; make; make install;

    If the installation is successful, the following command should not complain about an invalid option; otherwise, make sure iperf is installed correctly and that its path is right.
    To start a server using TCP Urel:
    	       [root@urels ~]# iperf -s -E

  • "TCP Urel, a TCP Option for Unreliable Streaming",
    Lin Ma, Xiuchao Wu, Wei Tsang Ooi.
    Under Submission.
How to Test

Once the kernel is recompiled and reloaded and Iperf is installed, we can carry out the following test to examine whether TCP Urel is fair to other TCP versions. Here we take TCP SACK as an example; friendliness to other TCP versions can be tested with a similar setup.

  • Testbed
    Figure 1: Testbed for fairness

    The figure above shows 3 senders on the left and 3 receivers on the right. tcps and tcpr are the sender and receiver of normal TCP flows. urels and urelr are the pair of nodes with TCP Urel installed. For this test, we only need urels and urelr -- remember, normal TCP is still available on urels and urelr, as TCP Urel does not impair the functionality of the normal TCP protocol.
  • Ipfw Setting
    phoebe is the node we use to simulate the bottleneck. It has Ipfw and dummynet installed to set the bandwidth, delay, queue type, and queue length of the bottleneck. For more information on Ipfw and dummynet, please see this page by Luigi Rizzo, the author.

    Before testing, we set up the bottleneck. The following commands set up a FIFO queue with 1 Mbit/s bandwidth, a queue length of 20 packets, and 20 ms one-way delay (10 ms at each interface) on phoebe:
    	       [root@phoebe ~]# ipfw pipe 0 config bw 1Mbit/s queue 20 delay 10ms  # configure the bottleneck
    	       [root@phoebe ~]# ipfw add 100 pipe 0 ip from urels to urelr         # apply the pipe to the urels-urelr pair
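    For reference, the dummynet state can be inspected and cleaned up once testing is done. The following is a sketch using standard ipfw subcommands; rule number 100 and pipe 0 are the ones configured above:

    ```shell
    # Inspect the configured pipe and the rule that applies it.
    [root@phoebe ~]# ipfw pipe show
    [root@phoebe ~]# ipfw list

    # Tear down: delete the rule first, then the pipe.
    [root@phoebe ~]# ipfw delete 100
    [root@phoebe ~]# ipfw pipe 0 delete
    ```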

  • TCP Setting
    The TCP version should be set to Sack before the test. The following commands should be typed on both urels and urelr:
    	       [root@urels(r) ~]# sysctl -w net.inet.tcp.sack.enable=1; 
    	       [root@urels(r) ~]# sysctl -w net.inet.tcp.newreno=1; 

    If the above two variables are set to 0 and 1 respectively, TCP is configured as NewReno. If both are set to 0, TCP Reno is used. For testing purposes, we also disable the TCP inflight-estimation feature and the delayed-ACK feature (note: there is currently an unknown bug related to the delayed-ACK feature in TCP Urel: when delayed ACK is enabled, TCP Urel does not function normally). The following commands do so:
    	       [root@urels(r) ~]# sysctl -w net.inet.tcp.inflight.enable=0; 
    	       [root@urels(r) ~]# sysctl -w net.inet.tcp.delayed_ack=0; 
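    To summarize, the variant selected by the two congestion-control sysctl variables follows a simple precedence, which the helper below encodes. This is a sketch for illustration only: tcp_variant is our own name, and it assumes, per the settings above, that sack.enable takes precedence over newreno.

    ```shell
    # Print which TCP variant a given pair of sysctl values selects.
    # Usage: tcp_variant <net.inet.tcp.sack.enable> <net.inet.tcp.newreno>
    tcp_variant() {
        if [ "$1" -eq 1 ]; then
            echo "Sack"       # SACK enabled (the setting used in this test)
        elif [ "$2" -eq 1 ]; then
            echo "NewReno"    # SACK off, NewReno on
        else
            echo "Reno"       # both off: plain Reno
        fi
    }

    tcp_variant 1 1   # prints "Sack"
    ```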

  • Data Collection
    We use tcpdump to log the whole stream for later analysis. The following command, typed on the sender urels, creates a log called mylog:
    	       [root@urels ~]# tcpdump -w mylog & 

  • Start Streaming
    Now we can start two flows: one TCP Sack and one TCP Urel. The following command, run on urelr, starts two Iperf servers: one (with -E) as the TCP Urel receiver on port 5001, the other as the TCP Sack receiver on port 4999:
    	       [root@urelr ~]# iperf -s -E -p5001 & iperf -s -p4999 & 

    Then we start the two senders on urels. Note that TCP Urel is started 10 seconds before TCP Sack, and each flow lasts 60 seconds in this case.
    	       [root@urels ~]# iperf -c urelr -t60 -E -p5001 & sleep 10; iperf -c urelr -t60 -p4999; 

    After the sessions finish, about 70 seconds later, type the following command to kill tcpdump:
    	       [root@urels ~]# killall tcpdump;

    Now the file mylog should contain the trace data for analysis.
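    The sequence above can be collected into one small script. The sketch below is a dry run: instead of executing anything, a helper prints each command together with the host it belongs on, so the ordering, hosts, and ports stay explicit. Replace the echo with real remote execution (e.g. via ssh) to run it for real.

    ```shell
    #!/bin/sh
    # Dry run of the streaming test: print each command and the host it runs on.
    run() { host=$1; shift; echo "[$host] $*"; }

    # Receivers on urelr: TCP Urel server (-E) on 5001, TCP Sack server on 4999.
    run urelr "iperf -s -E -p5001 &"
    run urelr "iperf -s -p4999 &"

    # On urels: start logging, then the Urel sender 10 s before the Sack sender.
    run urels "tcpdump -w mylog &"
    run urels "iperf -c urelr -t60 -E -p5001 &"
    run urels "sleep 10"
    run urels "iperf -c urelr -t60 -p4999"
    run urels "killall tcpdump"
    ```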
  • Testing in One Bash Script
    The above testing procedures, including the bottleneck setup, environment setup, and streaming test, are batched in a Bash script that allows you to carry out the whole process with one command. Assuming you have unpacked the downloaded package, and TCP Urel and Iperf are both installed on urels and urelr:
    	       [root@mypc ~]# cd urel/urel_scripts/fairness;
    	       [root@mypc ~]# ./ 

    In order to allow the above scripts to work properly, you must:
    • Run this script on a third machine, which we call mypc, that has connections to all the nodes in the testbed.
    • Enable the SSH agent so that you do not need to type a password when logging on to nodes in the testbed; the script uses SSH to run commands remotely on those nodes. For how to set up the SSH agent, please refer to the tutorial here. Please note that this setting makes your testbed less secure; remove the configuration after testing.
    • Edit the file to configure the correct host names, i.e., the sender, receiver, bottleneck, and the third machine.
    There are other scripts, e.g., for data analysis. You need to read them before using them.
  • Results
    The following plot is generated from data collected in the above test. It shows that TCP Sack and TCP Urel are friendly to each other.
    Figure 2: Friendly competition between TCP Sack and TCP Urel