Extending TFWC
towards Higher Throughput

Midori Kato

Faculty of Environmental Information
Keio University
5322 Endo, Fujisawa Kanagawa 252-8520 JAPAN

Submitted in partial fulfillment of the requirements
for the degree of Bachelor of Science
Advisors:
Hideyuki Tokuda
Jun Murai
Osamu Nakamura
Kazunori Takashio
Jin Nakazawa
Rodney D. Van Meter III
Keisuke Uehara
Jin Mitsugi
Hiroyuki Kusumoto
Keishi Takeda
Copyright © 2011 Midori Kato
Abstract of Bachelor’s Thesis
Extending TFWC towards Higher Throughput

Multimedia streaming over TCP is common in the Internet. Under conditions of scarce network resources, the rate oscillation due to congestion control degrades the user's experience. In addition, the data transmission of multimedia services in a congested network needs to consider the priority of services depending on the application type. While TFWC supports smooth congestion control for real-time multimedia streaming, it cannot provide a high enough sending rate for services that require high throughput rather than smoothness. In this thesis, I propose Tunable TFWC (TTFWC), an extension to TFWC that achieves variable throughput behavior with regard to the trade-off between smoothness and the sending rate, based on application or user requirements. Unlike many existing works, this proposal is suitable for both throughput-sensitive and smoothness-sensitive applications through a single tuning parameter. I observe that TTFWC with a large tuning parameter achieves around 10% higher throughput than the original congestion control algorithm in a low-speed network, and over 4% higher throughput than the original in a high-speed network.

Midori Kato
Faculty of Environmental Information
Keio University

Abstract of Bachelor's Thesis

Extending TFWC towards Higher Throughput

With the recent development of Internet technology, multimedia streaming over TCP is becoming widespread. However, in environments with scarce network resources such as mobile networks, TCP congestion control causes large fluctuations in the transfer rate and makes content playback unstable. In heavily congested networks, it is also necessary to transmit data per service type while taking service quality into account. TFWC is a congestion control algorithm proposed for real-time multimedia streaming services, but it cannot be used by services that value the magnitude of the transfer rate over its smoothness. In this thesis, I extend TFWC and propose Tunable TFWC (TTFWC), which makes the trade-off between the magnitude and the smoothness of the transfer rate adjustable according to application and user demands. Unlike many existing works, the congestion control algorithm proposed here can be tuned, by setting a tuning parameter, for applications that prioritize either the magnitude or the smoothness of the transfer rate. Experiments show a throughput improvement of more than 10% over the existing congestion control algorithm in low-speed networks and of around 4% in high-speed networks.

Faculty of Environmental Information, Keio University
Midori Kato

Table of Contents

1 Introduction
  1.1 Motivation
  1.2 Challenges and Contribution
  1.3 Structure of Thesis

2 Problem Definition
  2.1 Related work
    2.1.1 AIMD-based Congestion Control Algorithm
    2.1.2 Formula-based Congestion Control Algorithm
    2.1.3 CBR-based Congestion Control Algorithm
    2.1.4 New Metric-based Congestion Control Algorithm
  2.2 TCP-Friendly Window-based Congestion Control

3 Tunable TCP-Friendly Congestion Control
  3.1 TTFWC Overview
  3.2 Base cwnd Update at the Beginning of the Increasing cwnd Period
  3.3 Change of the Number of Loss Intervals in the ALI Calculation
  3.4 RENO Operation when cwnd is Less than two

4 Implementation
  4.1 Slow Start and Congestion Avoidance
  4.2 TTFWC Loss Recovery Behavior

5 TTFWC Performance Evaluation
  5.1 Simulation Results
    5.1.1 Responsiveness
    5.1.2 Throughput and TCP-Friendliness
    5.1.3 Smoothness
  5.2 Experimental Results
    5.2.1 Fairness
    5.2.2 Throughput and TCP-Friendliness with different RTT
    5.2.3 Protocol Friendliness and Throughput Performance
    5.2.4 CPU Processing Time

6 Conclusion and Future Work
List of Figures

2.1 Window behavior in TFWC
3.1 TCP and TFWC cwnd behavior
3.2 TTFWC cwnd behavior (when Eq. 3.1 is true)
3.3 TTFWC cwnd behavior (when Eq. 3.1 is false)
4.1 The TCP procedure diagram
4.2 ttfwc structure
4.3 The TTFWC procedures' definitions
4.4 ttfwc_cong_avoid function
4.5 ttfwc_recalc_ssthresh function
4.6 ttfwc_alpha_cwnd function
5.1 TTFWC(0.0) on constant CBR flows
5.2 TTFWC(0.4) on constant CBR flows
5.3 TTFWC(0.9) on constant CBR flows
5.4 The packet loss rate in the experiments of Fig. 5.1 - 5.3
5.5 Simulation Topology
5.6 TTFWC(0.0) and TCP throughput
5.7 TTFWC(0.4) and TCP throughput
5.8 TTFWC(0.9) and TCP throughput
5.9 Throughput behavior of TCP, TTFWC(0.0), TTFWC(0.4) and TTFWC(0.9)
5.10 Experimental setup of all tests
5.11 Fairness with six active flows: three flows with 120 ms delay and the other three flows with varying delays (10 Mbps and 250 Mbps bottleneck bandwidth)
5.12 Fairness of six active flows under symmetric conditions (10 Mbps and 250 Mbps bottleneck bandwidth)
5.13 TTFWC(0.0) and TCP throughput (Bottleneck Link 250 Mbps)
5.14 TTFWC(0.4) and TCP throughput (Bottleneck Link 250 Mbps)
5.15 TTFWC(0.9) and TCP throughput (Bottleneck Link 250 Mbps)
5.16 TTFWC(0.0) and TCP throughput (Bottleneck Link 10 Mbps)
5.17 TTFWC(0.4) and TCP throughput (Bottleneck Link 10 Mbps)
5.18 TTFWC(0.9) and TCP throughput (Bottleneck Link 10 Mbps)
5.19 Normalized Throughput (Bottleneck Link 250 Mbps)
5.20 Normalized Throughput (Bottleneck Link 10 Mbps)
List of Tables

5.1 CPU processing time for congestion avoidance procedure
5.2 CPU processing time for loss recovery procedure
Chapter 1
Introduction
This chapter describes the background of this research topic. I also describe the problems I am concerned with and my approach to attacking them.

1.1 Motivation

The proliferation of high-speed broadband networks and mobile networks has made multimedia streaming popular. The growth of multimedia streaming increases the demand to distribute high-quality video and audio data in real-time streaming mode. As multimedia streaming requires more bandwidth than normal web browsing or email services, the design of congestion control for multimedia streaming is an important issue in the network, even atop non-congestion-controlled protocols such as UDP.
Congestion control for multimedia streaming is designed to meet two goals. First, it must be TCP-Friendly, sharing bandwidth equally with competing TCP flows at the bottleneck. TCP transmits the majority of bytes on the Internet, so compatibility with TCP congestion control has been adopted as a reasonable metric to evaluate congestion control algorithms [9]. Second, its throughput must be smooth so that the application achieves a constant sending rate, whereas TCP throughput behavior is sawtooth-shaped. [16] shows that bursty transmission does not provide a good user experience. Smoothness is the functionality that overcomes this drawback of TCP congestion control.
TCP-Friendly Rate Control (TFRC) [10] is the Internet standard congestion control for streaming media applications. TFRC achieves a smooth sending rate without compromising TCP-friendliness. However, TFRC transmits at an excessively high sending rate at short Round Trip Times (RTT), depending on the timer granularity of the end system. TCP-Friendly Window Control (TFWC) [6] is a window-based variant of TFRC, so it does not require accurate RTT calculation. TFWC also achieves a smoother sending rate, and is more TCP-Friendly, than TFRC. In addition, TFWC can be implemented in various transport protocols, such as TCP and SCTP [20], because they also perform window-based congestion control.

1.2 Challenges and Contribution

Smoothness and a high sending rate are in a trade-off, because a congestion control algorithm for smooth throughput prevents the sending rate from increasing suddenly, even if more bandwidth is available. Indeed, in interactive applications such as VoIP and video conferencing, data have to be delivered smoothly, because they cannot buffer large amounts of sending and receiving data. However, applications such as on-demand video and audio streaming, and non-interactive live streaming, can buffer some data. They require a sending rate as high as possible and a certain level of smoothness, rather than the perfect smoothness that TFRC, TFWC and the other TFRC extensions strive to achieve. In other words, the sending rate is more important than smoothness in such applications.
In this thesis, I propose Tunable TFWC (TTFWC), a TCP-Friendly congestion control algorithm that can tune the balance between the sending rate and smoothness. TTFWC supports a wide range of multimedia applications, including those that desire as high a sending rate as possible along with a certain level of smoothness. I define one tuning parameter to control the balance between the sending rate and smoothness. In my experiments, TTFWC achieves variable throughput behavior by changing the tuning parameter value. For example, TTFWC achieves around 10% higher throughput than TFWC in the low-speed network and over 4% higher in the high-speed network. I also demonstrate that TTFWC is TCP-Friendly over a wide range of tuning parameter values.

1.3 Structure of Thesis

The remainder of this thesis is organized as follows. Chapter 2 describes related work and gives an overview of TFWC. I describe the design of the TTFWC window calculation in Chapter 3. Chapter 4 explains my implementation. I present the simulation results and the results of experiments with the implementation over dummynet in Chapter 5, evaluating TTFWC in terms of responsiveness, throughput, TCP-Friendliness, fairness and smoothness; that chapter also includes a discussion of the network conditions TTFWC requires. The thesis concludes in Chapter 6.

Chapter 2
Problem Definition
This chapter describes existing proposals that target smooth congestion control and compares these works with my approach.

2.1 Related work

Congestion control for multimedia streaming falls into three categories: Additive Increase Multiplicative Decrease (AIMD)-based [22, 3], formula-based [19, 10, 6, 13] and Constant Bit Rate (CBR)-based [8]. At the end of this section, I also describe another approach.

2.1.1 AIMD-based Congestion Control Algorithm

AIMD-based congestion control algorithms are extensions of RENO. In AIMD-based congestion control algorithms, flows achieve lower throughput than competing TCP flows when congestion occurs frequently. In a congested network, because the sender decreases its window size only a little after a packet loss, it achieves lower throughput compared to other types of congestion control. In contrast, TTFWC provides a high sending rate without the large amplitude of saw-toothed bandwidth increase and decrease.

2.1.2 Formula-based Congestion Control Algorithm

Formula-based congestion control uses the TCP throughput equation [14] to estimate cwnd. SIMD [19] and TEAR [13] are representatives of this category. They maintain a high sending rate under frequent congestion. However, SIMD has a slightly higher packet loss rate than other congestion control algorithms, so its overall performance is poor. Although TEAR achieves a smooth sending rate by using a sliding window controlled at the receiver, its response to congestion is quite slow. TTFWC overcomes these problems, as shown in Section 5.1.

2.1.3 CBR-based Congestion Control Algorithm

The CBR-based approach [8] proposes a congestion control algorithm that achieves a smooth sending rate under a new definition of TCP-Friendliness. Since multimedia streaming historically uses UDP, CBR-based congestion control extends existing work toward smooth throughput behavior. Flows employing this algorithm are stochastically TCP-Friendly, that is, they are fair to aggregates of competing TCP flows. However, they can cause the throughput of one or several of the competing TCP flows to drop drastically. In addition, while TTFWC is designed to meet the current definition of TCP-Friendliness, this new definition has not been thoroughly vetted.

2.1.4 New Metric-based Congestion Control Algorithm

STCP [21] also aims at smooth throughput behavior by using counters such as the number of timeouts, fast retransmits and so on. However, its TCP-friendliness has not been evaluated and analyzed. In addition, it uses extra ICMP packets to detect congestion, which can be affected by middleboxes or routers [17].

Figure 2.1: Window behavior in TFWC

2.2 TCP-Friendly Window-based Congestion Control

The TFWC sender detects packet loss from the ACK packets sent by the receiver. The TFWC sender calculates the congestion window size cwnd from the packet loss event rate p based on the TCP throughput equation (Equation 2.1) [14].

cwnd = \frac{1}{\sqrt{2p/3} + 12\sqrt{3p/8}\, p\, (1 + 32p^2)}    (2.1)

When the sender detects a packet loss event (i.e., the loss of one or more packets within the window) from ACK packets, it calculates the loss-event interval, which is the number of packets successfully arriving at the receiver between two loss events. The record of loss-event intervals is called the "loss-event history". From the loss-event history, the sender calculates the Average Loss Interval (ALI), which is converted to the loss event rate by Equation 2.2.
\text{Loss Event Rate } (p) = \frac{1}{\text{Average Loss Interval}}    (2.2)

The calculation of the ALI is important because it impacts the next cwnd. When the sender updates its loss event history, two kinds of ALIs are calculated [5]. The first is calculated with an Exponentially Weighted Moving Average (EWMA) over the most recent sixteen loss intervals, the last of which is the number of packets transmitted since the last loss event. The other is calculated with an EWMA over sixteen loss intervals that does not count the packets transmitted since the last loss event (i.e., it uses one older loss interval than the first). The sender adopts the larger one as the estimated ALI to derive cwnd. We call the interval in which the first one is chosen a "flat cwnd period", and the interval in which the other one is chosen an "increasing cwnd period".
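To make the calculation concrete, the following is a minimal user-space sketch of how a TFWC-style sender could derive cwnd from its loss-event history: the weighted ALI gives the loss event rate p (Equation 2.2), which Equation 2.1 maps to a TCP-Friendly window. The history length of eight, the weight table and the function names are illustrative assumptions, not the thesis implementation.

#include <math.h>

#define NUM_INTERVALS 8
/* EWMA-style weights: recent loss intervals count more than old ones
 * (an assumed table; the real TFWC weights may differ). */
static const double weight[NUM_INTERVALS] = {
    1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2
};

/* history[0] is the most recent loss interval, i.e., the number of
 * packets delivered between two consecutive loss events. */
static double avg_loss_interval(const double *history)
{
    double num = 0.0, den = 0.0;
    int i;

    for (i = 0; i < NUM_INTERVALS; i++) {
        num += weight[i] * history[i];
        den += weight[i];
    }
    return num / den;                                 /* weighted ALI */
}

static double tfwc_cwnd(const double *history)
{
    double p = 1.0 / avg_loss_interval(history);      /* Equation 2.2 */

    return 1.0 / (sqrt(2.0 * p / 3.0) +
                  12.0 * sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p * p)); /* Equation 2.1 */
}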
Fig. 2.1 illustrates the TFWC cwnd behavior. Loss events happen at times t0 and t2. Between t0 and t1, the flat cwnd period applies. Between t1 and t2, the increasing cwnd period applies, so the cwnd increases little by little.

Chapter 3
Tunable TCP-Friendly Congestion
Control
This chapter describes my proposed congestion control algorithm for multimedia streaming, named TTFWC, which is composed of three algorithms.

3.1 TTFWC Overview

In this section, I explain the design of TTFWC and the effect of the tuning parameter tp, which varies between 0.0 and 1.0. I call the range of tp below 0.5 the "smooth mode" and the range of 0.5 and above the "responsive mode". This is because the former gives relatively high priority to smoothness, whereas the latter gives it to throughput. More importantly, the two modes use different window calculations and different numbers of loss intervals. I make three modifications to the TFWC cwnd calculation to incorporate the tuning parameter: first, the sender increases its cwnd more aggressively at the beginning of the increasing cwnd period; second, the sender changes the number of loss intervals in the ALI calculation based on this value; and third, the cwnd calculation changes at low bandwidth.

Figure 3.1: TCP and TFWC cwnd behavior

Figure 3.2: TTFWC cwnd behavior (When Eq.3.1 is true)

Figure 3.3: TTFWC cwnd behavior (when Eq. 3.1 is false)

3.2 Base cwnd Update at the Beginning of the Increasing cwnd Period

In order to achieve higher throughput than the original TFWC, TTFWC increases the base cwnd at the beginning of the increasing cwnd period. The rationale is that TFWC does not increase the congestion window quickly even when the calculated TCP-Friendly congestion window is larger, in order to keep the sending rate smooth, as shown as δ in Fig. 3.1. This is also the reason why TFWC achieves lower throughput than TCP (see Sec. 5.1 with Fig. 5.6 for simulation results).
In the responsive mode (i.e., 0.5 ≤ tp ≤ 1.0), TTFWC increases the base cwnd by tp × δ, where tp is the tuning parameter and δ is the difference between the current TCP cwnd and the calculated TCP-Friendly cwnd (Fig. 3.2 illustrates δ at time t1). Hence, the responsive mode achieves higher throughput according to the tuning parameter. In the smooth mode (i.e., 0 ≤ tp < 0.5), TTFWC increases the base cwnd by tp² × δ. Since the smooth mode is sensitive to the throughput oscillation caused by transient congestion, I increase the base cwnd conservatively.
In Fig. 3.2, the integrals of the TCP cwnd and the TFWC cwnd are equal over the time between t0 and t1. When a packet is lost before time t1, the TFWC sender achieves higher throughput than TCP. Hence, I limit the opportunity to increase the base cwnd to prevent unfair utilization of the bottleneck link. I increase the base cwnd only when Equation 3.1 is true in the ALI calculation:

β ≥ \frac{1}{\sum_{i=1}^{n} weight_i}    (3.1)

Let β be the cwnd decrease fraction after the packet loss event, as illustrated in Fig. 3.1, and weight_i be the i-th EWMA weight. Fig. 3.2 and 3.3 show the TTFWC cwnd behavior when Equation 3.1 is true and false, respectively. If Equation 3.1 is not true after a packet loss, the sender uses its window size as in the original TFWC cwnd behavior.
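As a concrete illustration of this rule, the following is a minimal user-space sketch, under my own naming, of how the additional window could be derived from the tuning parameter and δ; the kernel code that actually does this appears later in Fig. 4.6, and the gating by Equation 3.1 is reduced here to a boolean argument.

/* delta = (current TCP cwnd) - (calculated TCP-Friendly cwnd), as in Fig. 3.2.
 * eq31_holds reflects whether Equation 3.1 is true after the loss event. */
static unsigned int extra_cwnd(double tp, unsigned int delta, int eq31_holds)
{
    if (!eq31_holds || delta == 0)
        return 0;                              /* behave like the original TFWC */

    if (tp >= 0.5)                             /* responsive mode: tp * delta */
        return (unsigned int)(tp * delta);

    return (unsigned int)(tp * tp * delta);    /* smooth mode: tp^2 * delta */
}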

3.3 Change of the Number of Loss Intervals in the ALI Calculation

I change the number of loss intervals from sixteen to eight in the responsive mode. A larger number of loss intervals makes cwnd decrease more slowly. Hence, if I used the same number of loss intervals as the original TFWC, the TTFWC flow would become overly aggressive, because it increases the base cwnd at the beginning of the increasing cwnd period. When the number of loss intervals is halved, the sender converges to the ideal window size up to twice as fast. Since the total EWMA weight in an ALI calculation is halved, the sender decreases its cwnd by up to half after it experiences an extremely short loss interval, and it increases its cwnd by up to a factor of two after an extremely long loss interval. I note that the throughput of responsive-mode TTFWC, which uses eight loss intervals in its ALI calculation, is still much smoother than that of TCP (see Fig. 5.9 in Sec. 5.1).
I adopt the same number of loss intervals as the original TFWC in the smooth mode. If I applied the tuning parameter in the same way as in the responsive mode, the smooth-mode TTFWC flow would obviously become overly aggressive as the tuning parameter grows. Hence, I have to apply the tuning parameter to the base cwnd conservatively. My design choice is to apply the tuning parameter to the base cwnd exponentially (i.e., as tp²).

3.4 RENO Operation when cwnd is Less than two

TTFWC performs RENO operation [7, 11] when cwnd falls below two. While the original TFWC uses rate control in this regime, I adopt RENO operation to prevent the high CPU load that results from the cwnd calculation. I describe the CPU processing time of the main functions in the evaluation.
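For reference, the RENO behavior that TTFWC falls back to is, roughly, an additive increase of one segment per window of ACKs and halving on a loss event; the following user-space sketch is my own simplification, not the kernel code, and serves only to illustrate that fallback.

struct reno_state {
    unsigned int cwnd;      /* congestion window, in segments */
    unsigned int cwnd_cnt;  /* ACK counter used for additive increase */
};

static void reno_on_ack(struct reno_state *s)
{
    if (++s->cwnd_cnt >= s->cwnd) {              /* a full window has been ACKed */
        s->cwnd++;                               /* grow by one segment per RTT */
        s->cwnd_cnt = 0;
    }
}

static void reno_on_loss(struct reno_state *s)
{
    s->cwnd = (s->cwnd > 1) ? s->cwnd / 2 : 1;   /* multiplicative decrease */
    s->cwnd_cnt = 0;
}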

Chapter 4
Implementation
This chapter describes the TTFWC implementation. I implemented TTFWC as a TCP congestion control algorithm in the Linux 3.0.0 kernel. The implementation consists of around 900 lines of code, structured as a loadable kernel module, so applications and system administrators can enable TTFWC with ease. As TTFWC is a sender-only algorithm, no modification to the receiver is needed as long as the receiver behaves like regular TCP. TTFWC is enabled by a sysctl parameter. The subsequent sections describe how TTFWC is mapped onto the congestion avoidance, slow start, and loss recovery behavior.
Fig. 4.1 shows the logic of a TCP socket on a sender after the connection is established. The TCP option structure must be initialized at this time, and a call to tcp_register_congestion_control is made. Then, every time the machine receives an ACK, it checks the state of the socket in the function tcp_ack(). When the function tcp_ack_is_dubious() indicates that the state is normal, the machine enters the congestion avoidance phase and the socket calls the congestion control function cong_avoid(). In the Linux kernel, when the machine encounters a loss event, three types of states are used for loss recovery. TCP_CA_Recovery is set for fast retransmission. TCP_CA_CWR means that the machine has detected congestion locally (e.g., inside the NIC) or via ECN. The machine handles loss recovery depending on the loss event type.
Figure 4.1: The TCP procedure diagram
In the Linux implementation, the function tcp_fastretrans_alert() separates fast retransmission from the other recovery procedures.
Fig. 4.2 shows the variables in the TTFWC structure with their definitions, and Fig. 4.3 shows the TTFWC procedure definitions. The TTFWC structure and procedure definitions are used only if the kernel is compiled with TTFWC support enabled.
struct ttfwc {
    u32 tcp_cwnd;               // TCP cwnd calculated by the TTFWC procedures
    u32 is_reno;                // flag set while in the RENO phase
    u32 reno_cwnd;              // RENO cwnd estimation
    u32 alpha_cwnd;             // additional cwnd depending on the tuning parameter
    u32 li_dif;                 // number of packets until the ALI is exceeded
    u32 li_counter;             // counter of loss intervals
    u32 li_mean;                // the weighted ALI
    struct ttfwc_loss_hist *lh; // array storing the loss intervals
};
Figure 4.2: ttfwc structure

static struct tcp_congestion_ops tcp_ttfwc __read_mostly = {
    .init       = ttfwc_init,              // initialize the variables
    .ssthresh   = ttfwc_recalc_ssthresh,   // ssthresh calculation
    .cong_avoid = ttfwc_cong_avoid,        // cwnd update
    .undo_cwnd  = ttfwc_undo_cwnd,         // a false loss was detected
    .owner      = THIS_MODULE,
    .name       = "ttfwc",
};
Figure 4.3: the TTFWC procedures’ definitions
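For completeness, a loadable congestion control module of this kind is typically registered and unregistered with the kernel as sketched below, using the tcp_ttfwc ops structure of Fig. 4.3; these exact init/exit functions and module macros are my assumption of the surrounding boilerplate rather than code quoted from the thesis module.

#include <linux/module.h>
#include <net/tcp.h>

static int __init ttfwc_module_init(void)
{
    /* make "ttfwc" selectable via the usual congestion control sysctls */
    return tcp_register_congestion_control(&tcp_ttfwc);
}

static void __exit ttfwc_module_exit(void)
{
    tcp_unregister_congestion_control(&tcp_ttfwc);
}

module_init(ttfwc_module_init);
module_exit(ttfwc_module_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Tunable TCP-Friendly Window-based Congestion Control (TTFWC)");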

4.1 Slow Start and Congestion Avoidance

ttfwc_cong_avoid() updates the cwnd every time a packet is acknowledged. Fig. 4.4 displays the congestion avoidance phase of TTFWC. The TTFWC slow start and timeout behaviors are the same as RENO; the sender applies the TTFWC operation after the slow start phase. Upon receiving an ACK from the receiver, the TTFWC sender performs the following update. The sender uses RENO operation in the slow start phase or when cwnd is less than two (Lines 9-16). When the sender is out of RENO mode, it estimates the RENO cwnd (Lines 17-26); in this scope, the estimation is based on the window behavior described in Chapter 3. After the number of packets exceeds the ALI, the sender increases cwnd (Lines 27 onward). At this time, the TTFWC sender adds the extra cwnd to the base cwnd when the tuning parameter is set (Lines 14-17).
After the increasing cwnd period, the TTFWC sender starts to calculate cwnd by using the ALI. When the ALI exceeds the estimate of the next loss interval, the sender adds the extra cwnd if the tuning parameter is set above zero (Lines 29-30). I adopt an approximate ALI estimation (Lines 32 and 34). With this substitute calculation, the smooth mode is a little smoother than with the accurate ALI calculation, because the accurate ALI is increased when the sender receives 5.3 ACKs. Likewise, the responsive mode is more aggressive than with the accurate ALI calculation, because the accurate ALI is increased when the sender receives 2.8 ACKs.
I find that the exact ALI calculation wastes processing time. The cwnd update with the accurate ALI calculation sometimes stalls data transmission due to its overhead when the TTFWC algorithm runs in a virtual machine (see Section 5.2.4). This experience shows that limited machine resources affect data transmission. I therefore recommend the approximate ALI calculation to minimize the overhead in the congestion avoidance phase. I also comment on the ALI calculation in Section 5.2.4.
I explain the specific functions below. tcp_is_cwnd_limited(), shown in Lines 7-8, prevents bursty data transmission when the flow is not limited by the current cwnd. ttfwc_invert_imean() returns the inverse of the ALI (Line 38).

4.2 TTFWC Loss Recovery Behavior

The TTFWC sender enters ttfwc_recalc_ssthresh() when the SACK or ACK information indicates a loss. Fig. 4.5 shows the procedure of TTFWC loss recovery.
After a loss event, TTFWC performs the following operation. When its cwnd is less than two, the TTFWC sender skips the rest of the operation and remains in RENO operation (Lines 7-10). When the sender detects the first loss event, it calculates a pseudo-loss interval, the loss event rate and the number of packets until the increasing cwnd period by using Equation 2.1 (Lines 11-17); in this scope, the sender estimates the loss interval from Equation 2.1 and sets the number of packets until the ALI is exceeded as li_dif. After the first loss event, the sender updates the loss intervals and calculates the ALI and the next cwnd. When the calculated cwnd is more than two, the sender returns to TTFWC operation (Lines 19-22). Then, the sender returns the current cwnd as ssthresh (Line 16).
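To illustrate what such a pseudo-loss-interval estimate could look like, the sketch below numerically inverts Equation 2.1: given the current cwnd, it searches for the loss event rate p that maps to that window and returns 1/p as the interval (Equation 2.2). The kernel function pseudo_li_calc() in Fig. 4.5 presumably uses integer arithmetic; this floating-point, user-space version is only an assumption-laden illustration.

#include <math.h>

static double eq21_cwnd(double p)
{
    return 1.0 / (sqrt(2.0 * p / 3.0) +
                  12.0 * sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p * p));
}

static double pseudo_loss_interval(double cwnd)
{
    double lo = 1e-8, hi = 1.0, mid = 0.0;
    int i;

    /* eq21_cwnd() is monotonically decreasing in p, so bisect. */
    for (i = 0; i < 60; i++) {
        mid = 0.5 * (lo + hi);
        if (eq21_cwnd(mid) > cwnd)
            lo = mid;          /* window still too big: need a larger p */
        else
            hi = mid;
    }
    return 1.0 / mid;          /* loss interval = 1 / p  (Equation 2.2) */
}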
Fig. 4.6 indicates how to estimate the RENO cwnd and how to calculate the additional cwnd in the function ttfwc_alpha_cwnd. The sender halves the RENO cwnd and compares it to the next TTFWC cwnd (Line 4). Lines 13-14 handle an exception for the estimation: the sender enters this scope when its state is TCP_CA_CWR, because the cwnd is then set to 1. When the next TTFWC cwnd exceeds the halved RENO cwnd, the sender checks Equation 3.1 (Lines 10-12). Then, if Equation 3.1 is true, as checked in Lines 17 and 19, the sender calculates the additional cwnd. If not, the additional cwnd is reset.

 1 static void ttfwc_cong_avoid(struct sock *sk, u32 ack, u32 in_flight)
 2 {
 3     struct tcp_sock *tp = tcp_sk(sk);
 4     struct ttfwc *ca = inet_csk_ca(sk);
 5     u32 li_mean, p, cwnd;
 6
 7     if (!tcp_is_cwnd_limited(sk, in_flight))
 8         return;
 9     if (ca->is_reno || ttfwc_lh_length(ca) < 1) {
10         ca->lh->li_length[0]++;
11         if (tp->snd_cwnd <= tp->snd_ssthresh)
12             tcp_slow_start(tp);
13         else
14             tcp_cong_avoid_ai(tp, tp->snd_cwnd);
15         return;
16     }
17     if (ca->lh->li_length[0] <= ca->li_dif/2) {
18         if (ca->lh->li_length[0] == ca->li_dif/2)
19             ca->reno_cwnd = ca->tcp_cwnd;
20         if (tp->snd_cwnd_cnt >= ca->reno_cwnd) {
21             if (ca->reno_cwnd < tp->snd_cwnd_clamp)
22                 ca->reno_cwnd++;
23             tp->snd_cwnd_cnt = 0;
24         } else
25             tp->snd_cwnd_cnt++;
26     }
27     if (++ca->lh->li_length[0] > ca->li_dif) {
28         u32 count = ca->lh->li_length[0] - ca->li_dif;
29         if (count == 0 && sysctl_ttfwc_tuning_p > 0)
30             tp->snd_cwnd += ca->alpha_cwnd;
31         if (sysctl_ttfwc_tuning_p >= 5) {
32             if (count % 6 == 0) goto common;
33         } else {
34             if (count % 3 == 0) goto common;
35         }
36     }
37 common:
38     p = ttfwc_invert_imean(++ca->li_mean);
39     cwnd = ttfwc_calc_x(p);
40     if (cwnd + ca->alpha_cwnd > tp->snd_cwnd)
41         tp->snd_cwnd++;
42 }

Figure 4.4: ttfwc_cong_avoid function
 1 static inline u32 ttfwc_recalc_ssthresh(struct sock *sk)
 2 {
 3     const struct tcp_sock *tp = tcp_sk(sk);
 4     struct ttfwc *ca = inet_csk_ca(sk);
 5     u32 next_cwnd, p;
 6
 7     if (tp->snd_cwnd < 2) {
 8         next_cwnd = tp->snd_cwnd;
 9         goto out;
10     }
11     if (++ca->li_counter == 1) {
12         ca->tcp_cwnd = tp->snd_cwnd >> 1;
13         ca->lh->li_length[1] = ca->li_mean = pseudo_li_calc(ca->tcp_cwnd);
14         ca->lh->li_length[0] = 1;
15         ca->li_dif = ca->li_mean/100;
16         ca->li_counter++;
17         goto out;
18     } else {
19         if (ca->is_reno > 0)
20             ca->is_reno = 0;
21         ttfwc_loss_hist_update(ca);
22         ca->li_mean = ttfwc_avg_loss_interval(ca);
23         if (ca->li_mean > 0) {
24             p = ttfwc_invert_imean(ca->li_mean);
25             next_cwnd = ttfwc_calc_x(p);
26         } else
27             next_cwnd = 0;
28         if (sysctl_ttfwc_tuning_p == 0) goto out;
29         else {
30             if (ca->tcp_cwnd <= ca->reno_cwnd)
31                 ttfwc_alpha_cwnd(ca, ca->tcp_cwnd, next_cwnd);
32         }
33     }
34 out:
35     ca->tcp_cwnd = next_cwnd;
36     if (next_cwnd <= 2 && ca->is_reno == 0) {
37         next_cwnd = 2;
38         ca->li_dif = 0;
39         ca->is_reno = 1;
40     }
41     return max(ca->tcp_cwnd, 2U);
42 }

Figure 4.5: ttfwc_recalc_ssthresh function
 1 static inline void ttfwc_alpha_cwnd
 2     (struct ttfwc *ca, u32 cwnd, u32 next_cwnd)
 3 {
 4     ca->reno_cwnd = ca->reno_cwnd >> 1;
 5     if (ca->reno_cwnd > next_cwnd)
 6         ca->alpha_cwnd = 0;
 7     else {
 8         u32 dthresh;
 9         if (ca->alpha_cwnd > 0)
10             dthresh = (u32) next_cwnd*100/(cwnd - ca->alpha_cwnd);
11         else
12             dthresh = (u32) next_cwnd*100/cwnd;
13         if (dthresh >= 100)
14             ca->alpha_cwnd = 0;
15         else {
16             u32 thresh = next_cwnd - ca->reno_cwnd;
17             if (sysctl_ttfwc_tuning_p >= 5 && dthresh < 72)
18                 ca->alpha_cwnd = (u32) thresh*sysctl_ttfwc_tuning_p/10;
19             else if (sysctl_ttfwc_tuning_p > 0 && dthresh < 81)
20                 ca->alpha_cwnd = (u32) thresh*pow(sysctl_ttfwc_tuning_p, 2)/100;
21             else
22                 ca->alpha_cwnd = 0;
23         }
24     }
25 }

Figure 4.6: ttfwc_alpha_cwnd function
Chapter 5
TTFWC Performance Evaluation
This chapter evaluates the efficiency of TTFWC via simulation and via experiments with the implementation. First, I describe the simulation testbed and results. Second, I evaluate my implementation in the Linux kernel.

5.1 Simulation Results

I evaluate TTFWC in terms of responsiveness, throughput and TCP-Friendliness, and smoothness through ns-2 simulation [18]. Throughout all experiments, I use three values of the tuning parameter: 0.0, 0.4 and 0.9. All TCP flows used in the experiments are SACK TCP [12]. The tuning parameter values 0.4 and 0.9 represent the smoothness-sensitive and the throughput-sensitive settings of TTFWC, respectively. When the tuning parameter is 0.0, TTFWC is equivalent to the original TFWC (see Equation 3.1). I denote TTFWC with tuning parameter a as TTFWC(a).

5.1.1 Responsiveness

I run TTFWC flows in an environment with dynamic changes of the available bandwidth to evaluate responsiveness. In the simulation, I run two TTFWC flows over a 15 Mbps bottleneck link. The RTTs of the two TTFWC flows are 8.4 and 8.8 ms. At time x seconds, 10 Mbps CBR flows are injected, remain for x seconds, and leave. Each trial repeats this interval for 200 seconds, and x varies between 0.05 and 51.2.
Figure 5.1: TTFWC(0.0) on constant CBR flows

Figure 5.2: TTFWC(0.4) on constant CBR flows

Figure 5.3: TTFWC(0.9) on constant CBR flows

Figure 5.4: The packet loss rate in experiments in Fig. 5.1 - 5.3
For comparison, I conduct equivalent experiments with TCP instead of TTFWC. These scenarios are called SlowCC [4] and are widely used to evaluate congestion control behavior with ON/OFF CBR flows.
Fig. 5.1 - 5.3 show the utilization of the idle bandwidth by TTFWC and TCP flows. The horizontal axis and the vertical axis represent x and the utilization of the idle bandwidth, respectively. Values are normalized, so 1 means the idle bandwidth is fully utilized by the TTFWC or TCP flows. TTFWC(0.0) (i.e., original TFWC) and TTFWC(0.4) flows achieve up to approximately 20% lower utilization of the idle bandwidth than TCP flows. On the other hand, TTFWC(0.9) flows, which are more responsive by design, utilize almost as much idle bandwidth as TCP flows. I conclude from these results that TTFWC with a greater tuning parameter responds to changes in the idle bandwidth more quickly; as a result, it achieves higher throughput than TTFWC flows with a lower tuning parameter. I observed a higher packet loss rate with the greater tuning parameter than with the smaller one. Fig. 5.4 plots the packet loss rate during the experiments of Fig. 5.1 - 5.3. I believe the reason is that the TTFWC mode toggles frequently between the flat cwnd period and the increasing cwnd period. However, I believe this problem is minor, because it happens in a very small range of congestion patterns.

Figure 5.5: Simulation Topology

5.1.2 Throughput and TCP-Friendliness

I evaluate the TCP-Friendliness of TTFWC. I use the dumbbell topology shown in Fig. 5.5 in the simulation. I set the same number of TCP and TTFWC flows, which varies from 1 to 64. The bottleneck bandwidth is fixed at 15 Mbps with a 20 ms delay and droptail queuing. The sender-side access links have 100 Mbps bandwidth with 1 ms delay. The receiver-side access links have 100 Mbps bandwidth with 0 - 2.6 ms delay.
Fig. 5.6 - 5.8 show the throughput of the TTFWC and TCP flows. The horizontal axis represents the number of TTFWC and TCP flows. The vertical axis represents the normalized throughput. If the TCP flows' values and the TTFWC flows' values overlap, TCP flows and TTFWC flows achieve equal throughput. The bottom of each figure is the packet loss rate in each trial. From these results, TTFWC is TCP-Friendly regardless of the tuning parameter value, although TTFWC flows achieve slightly lower throughput than TCP flows. I observed that TTFWC flows with a larger tuning parameter achieve higher throughput than
Figure 5.6: TTFWC(0.0) and TCP throughput

Figure 5.7: TTFWC(0.4) and TCP throughput

Figure 5.8: TTFWC(0.9) and TCP throughput

those with a smaller tuning parameter (i.e., closer to the original TFWC). TTFWC(0.4) flows achieve approximately 2% higher throughput than TTFWC(0.0), i.e., the original TFWC. TTFWC(0.9) flows achieve approximately 10% higher throughput than TTFWC(0.0). Therefore, I conclude that TTFWC with a larger tuning parameter certainly achieves higher throughput than the original TFWC.

5.1.3 Smoothness

Finally, I evaluate TTFWC smoothness. I adopt the same dumbbell topology as in the evaluation of Section 5.1.2. The number of TTFWC and TCP flows is set to eight. I adopt this value to observe the steady-state behavior of both TCP and TTFWC flows while avoiding an unacceptable number of timeouts.
Fig. 5.9 shows the throughput behavior of these eight TTFWC flows. The horizontal axis represents time, and the vertical axis represents throughput. The bottom graph shows the behavior of the TCP flows competing with the TTFWC(0.9) flows, for comparison with the TTFWC behavior.
I observe almost the same throughput behavior for TTFWC(0.0) and TTFWC(0.4), as the difference caused by the tuning parameter value is small. On the other hand, I observe that TTFWC(0.9) flows exhibit larger oscillation, as they respond to packet loss more aggressively. However, their behavior is much smoother than that of the TCP flows (see the bottom graph). From these results, I conclude that TTFWC behaves as intended in its design: it achieves higher throughput, with larger oscillation, when the tuning parameter value is greater.

Figure 5.9: Throughput behavior of TCP, TTFWC(0.0), TTFWC(0.4) and TTFWC(0.9).

Figure 5.10: Experimental setup of all tests

5.2 Experimental Results

In this section, I evaluate TTFWC by using the implementation. Fig. 5.10 shows the experimental setup. I use four machines: two sender nodes marked A and C and two receiver nodes
marked B and D. These senders share a bottleneck link that is regulated over dummynet.
One of the sender nodes is equipped with an Intel Core 2 Duo 1.6 Ghz CPU and 2GB memory. The other nodes are equipped with an Intel Core i5-2520M CPU and 2GB of memory.
Throughout all the experiments in this section, I set the bandwidth of the bottleneck link
to 10 Mbps and 250 Mbps to emulate relatively low- and high-speed networks.
I measure TTFWC and TCP throughput by iperf [2] in the experiments except for subsection 5.2.4. The sender continues to send 16Kbytes data during 200 seconds with the
configuration that produces five-second average of its throughput. I observe processing time
by using ftrace tool [1] in subsection 5.2.4.

5.2.1 Fairness

I analyze the protocol fairness between competing TTFWC flows from two viewpoints. First, I measure TTFWC throughput with different RTTs. When flows with different RTTs compete, more bandwidth is unfairly allocated to the flow with the smaller RTT; thus, a user with a longer RTT may not be able to obtain sufficient bandwidth under the congestion control. Second, I measure TTFWC throughput on a network where flows of the same protocol with the same RTT compete. In this test, we check the fair allocation of bandwidth for each protocol when all the flows on the link use the same congestion control. In these experiments, I inject three parallel flows from each machine and set delays of 16 ms, 32 ms, 120 ms, 240 ms and 320 ms.
I explain the fairness index below. Let x_i(k) be the average throughput of flow i in the five-second period k. If the same composition of flows persists on the link during k = 1, ..., m and changes at k = m + 1, I define [1, m] as the maximum-length interval over a fixed link state. Suppose there are n active flows in this interval, indexed by i = 1, ..., n, and let
\bar{x}_i = \frac{1}{m} \sum_{k=1}^{m} x_i(k)    (5.1)

Jain's fairness index [15] for the interval [1, m] is defined as

F = \frac{\left( \sum_{i=1}^{n} \bar{x}_i \right)^2}{n \sum_{i=1}^{n} \bar{x}_i^2}    (5.2)
In this case, F = 1 indicates the ideal condition for the network.
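As a small illustration (not code from the thesis), the helper below computes Jain's fairness index of Equation 5.2 from the per-flow average throughputs; the sample values in main() are made up.

#include <stdio.h>

static double jain_index(const double *x, int n)
{
    double sum = 0.0, sum_sq = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        sum += x[i];
        sum_sq += x[i] * x[i];
    }
    return (sum * sum) / (n * sum_sq);   /* Equation 5.2 */
}

int main(void)
{
    /* six flows sharing a bottleneck; equal shares would give F = 1 */
    double xbar[6] = { 1.6, 1.7, 1.7, 1.6, 1.8, 1.6 };

    printf("F = %.3f\n", jain_index(xbar, 6));
    return 0;
}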
Fairness with Different RTT
I measure the protocol unfairness between competing flows with different RTTs. The scenario is as follows. Node A transmits three TTFWC flows to node B; the RTT between nodes A and B is fixed at 120 ms. Node C also transmits three TTFWC flows, to node D. To measure the RTT unfairness of TTFWC, I vary the RTT between nodes C and D and observe the throughput of the flows transmitted from nodes A and C. I perform equivalent tests with RENO and CUBIC instead of TTFWC, and then compare their fairness by using Equation 5.2.
Fig. 5.11 plots the results of these experiments. The horizontal axis shows the RTT in the network and the vertical axis shows the fairness. TTFWC achieves throughput inversely proportional to the RTT, and the results mostly support this across the range of TTFWC tuning parameter values.
From the figures, we can see that in the low-speed network the fairness of TTFWC differs little across parameter values and from the other congestion control algorithms. This result shows that TTFWC remains fair in the low-speed network.
In the high-speed network, TTFWC shows the same trend. Interestingly, TTFWC(0.9) over short RTTs is significantly fairer than RENO and CUBIC. This is because the TTFWC window behavior in responsive mode is aggressive in taking idle bandwidth: TTFWC with a larger tuning parameter reacts more quickly to the idle bandwidth left by TTFWC flows in the flat cwnd period than TTFWC with a smaller tuning parameter value.
Fairness with the Same Protocol
In this subsection, I evaluate the fairness between active TTFWC flows. I run three TTFWC flows from each sender node into a 10 Mbps bottleneck link. The competing flows are configured with the same congestion control and the same RTTs. Then, I change the bandwidth of the bottleneck link to 250 Mbps for the equivalent experiment.
Fig. 5.12 plots the protocol fairness in the low- and high-speed networks. The horizontal axis and the vertical axis show the RTT in the network and the fairness index, respectively. Looking at the fairness of TTFWC in the low-speed network, TTFWC achieves lower fairness than CUBIC or RENO when the propagation delay is long. I assume that, when the two kinds of TTFWC window behavior are mixed on the link, the throughput difference between competing flows grows. On the other hand, when the propagation delay is short, the fairness of TTFWC is the same as that of the others.
The same observation holds in the high-speed network as in the low-speed network. We do not observe a difference between tuning parameter values, and TTFWC keeps fairness only when the propagation delay is short.
Figure 5.11: Fairness with six active flows: three flows with 120 ms delay and the other three flows with varying delays. Results are shown for 10 Mbps (top) and 250 Mbps (bottom) bottleneck bandwidth.

Figure 5.12: Fairness of six active flows under symmetric conditions (same propagation delay, shared bottleneck link, same congestion control algorithm). Results are shown for 10 Mbps (top) and 250 Mbps (bottom) bottleneck bandwidth.

Therefore, I consider that the TTFWC design does not work well without RENO flows.

5.2.2 Throughput and TCP-Friendliness with different RTT

In this subsection, I describe TTFWC throughput and TCP-Friendliness. Unlike in the simulations, I set up three TTFWC flows and three RENO flows on the link with different RTTs. Fig. 5.13 - 5.15 show the results when the bandwidth of the bottleneck link is 250 Mbps, and Fig. 5.16 - 5.18 show the results when the bandwidth is 10 Mbps. The horizontal axis and the vertical axis represent the RTT and the normalized throughput, respectively.
TTFWC achieves throughput in proportion to the tuning parameter. Computing the average throughput, TTFWC with a large tuning parameter value achieves 4% higher throughput than the original TFWC in the low-speed network and 10% higher in the high-speed network.
From the viewpoint of TCP-Friendliness, TTFWC remains fully TCP-Friendly in the low-speed network and acceptably TCP-Friendly in the high-speed network. Although TTFWC throughput is proportional to the tuning parameter value, the difference between TCP and TTFWC throughput is small. On the other hand, in the high-speed network, TTFWC with a large parameter value achieves high throughput compared to TCP; but as the propagation delay grows, we no longer observe a difference between TCP and TTFWC throughput. I give detailed observations and considerations below.
Fig. 5.13 - 5.15 show that TTFWC reacts to the latency of the link in the high-speed network. In the high-speed network with short delay, TTFWC(0.9) is allocated more bandwidth than TCP. I believe this is due to the temporary idle bandwidth generated by TCP: in the high-speed network, TCP's sawtooth-shaped throughput behavior leaves idle bandwidth between competing flows. As the TTFWC window behavior does not leave such idle bandwidth, this causes the throughput difference between the TCP and TTFWC flows.
Figure 5.13: TTFWC(0.0) and TCP throughput (Bottleneck Link 250Mbps)

Figure 5.14: TTFWC(0.4) and TCP throughput (Bottleneck Link 250Mbps)
A similar effect can be seen in the low-speed network. When the available bandwidth is limited, TCP leaves very little idle bandwidth with its saw-tooth shaped throughput behavior. Therefore, TTFWC throughput is basically as high as TCP's in Fig. 5.16 - 5.18.

5.2.3 Protocol Friendliness and Throughput Performance

In this subsection, I evaluate the protocol friendliness and throughput performance on a network shared by flows of the same protocol. I use the experiment data of subsection 5.2.1.
Figure 5.15: TTFWC(0.9) and TCP throughput (Bottleneck Link 250Mbps)

Figure 5.16: TTFWC(0.0) and TCP throughput (Bottleneck Link 10Mbps)

Figure 5.17: TTFWC(0.4) and TCP throughput (Bottleneck Link 10Mbps)

Figure 5.18: TTFWC(0.9) and TCP throughput (Bottleneck Link 10Mbps)

Figure 5.19: Normalized Throughput (bottleneck Link 250 Mbps)
Fig. 5.19 and 5.20 plot the normalized throughput of each protocol's flows with different RTTs. From the figures, we find that TTFWC throughput is not proportional to the parameter value on a symmetric network. Considering the protocol fairness results, I think that the throughput difference of TTFWC between competing flows is not fair. However, we observe that TTFWC remains friendly to flows of the same protocol only when the propagation delay is short; in that case, TTFWC flows achieve higher throughput than RENO. Overall, I find that the protocol friendliness of TTFWC across different parameter values is not good. We can also see from the figures that TTFWC is affected by the propagation delay, whereas CUBIC keeps relatively stable throughput regardless of the delay. I will consider protocol friendliness and throughput as future work, as mentioned above.

Figure 5.20: Normalized Throughput (bottleneck Link 10 Mbps)

5.2.4 CPU Processing Time

I measure the processing time of the congestion avoidance procedure and of the ssthresh recalculation with ftrace. The ftrace tool traces procedures inside the kernel, so I can use it from user space for debugging or for analyzing latencies and performance issues. For the analysis, I measure the congestion avoidance procedure over 200,000 calls and the ssthresh procedure over more than 1,500 calls.
My questions are: "How long does it take to process TTFWC in comparison with other congestion control algorithms? Is TTFWC impractical for real machines?" As TTFWC calculates cwnd from the loss event history, it unavoidably takes more processing time than RENO. I compare the TTFWC processing time with that of RENO and CUBIC in this subsection.
Table 5.1 shows that TTFWC takes about nine times the processing time of RENO; however, TTFWC takes about as long as CUBIC. Recall the notes about the approximate ALI calculation in Section 4.1.
Table 5.1: CPU processing time for congestion avoidance procedure

Protocol      Average Time   Standard Deviation
TTFWC(0.0)    1.770 µs       0.271 µs
TTFWC(0.4)    1.951 µs       0.380 µs
TTFWC(0.9)    1.975 µs       0.400 µs
CUBIC         1.771 µs       1.069 µs
RENO          0.240 µs       0.015 µs
When I use the accurate ALI calculation in the congestion avoidance phase, I find that it adds about 0.78 µs to the current processing time of TTFWC. Although TTFWC does not call the ALI calculation while the number of packets is below the estimate of the next loss interval, I find that this procedure burdens a machine whose resources are limited. In order to reduce the processing time, I adopt the approximate ALI calculation. As a result of this modification, the TTFWC processing time is reasonable, because optional congestion control procedures, including CUBIC, cannot avoid somewhat higher overhead than RENO. In addition, since CUBIC is used as the default congestion control of Linux, the processing time of TTFWC relative to RENO is not so important.
I find that there is only a small difference in processing time between different values of the tuning parameter. The difference corresponds to the processing time of the RENO cwnd estimation and the additional cwnd procedure. Also, I observe that the standard deviation of TTFWC is proportional to the tuning parameter value; this reflects the conditional branching that adds the extra base cwnd.
Table 5.2 shows the processing time to calculate ssthresh for each protocol. In this procedure, the TTFWC sender updates the loss histories, then calculates the next window size and the additional cwnd. Therefore, given the high overhead of TTFWC, I compare how much its processing time differs from the others. I observe that the TTFWC procedure takes twice as long as CUBIC or RENO, and that the standard deviation of the TTFWC processing time depends on the parameter value, as the tables show.
Table 5.2: CPU processing time for loss recovery procedure

Protocol      Average Time   Standard Deviation
TTFWC(0.0)    2.200 µs       0.052 µs
TTFWC(0.4)    2.258 µs       0.508 µs
TTFWC(0.9)    2.159 µs       0.439 µs
CUBIC         0.189 µs       0.00 µs
RENO          0.225 µs       0.00 µs
Now, I return to the first question. Although the answer is not definitive, I believe that the total processing time of TTFWC is not much different from the others. For example, CUBIC has two other procedures: one controls its parameters upon receiving ACKs, and the other runs when the TCP congestion state changes. The average processing time of the former is 1.307 µs and that of the latter is 0.100 µs. In addition, the former procedure is called more frequently than the ssthresh recalculation. When I consider the total processing time of all the procedures, I can see that CUBIC costs about the same as TTFWC.

Chapter 6
Conclusion and Future Work
In this thesis, I have proposed and examined TTFWC for multimedia streaming, which allows us to tune the balance between smoothness and the sending rate. The motivation is to improve TFWC throughput for throughput-sensitive multimedia services. As TTFWC throughput is controlled through the tuning parameter, I can choose whether or not to use smooth congestion control based on the application type.
I find, through the simulation results and real-world experiments, that TTFWC with a large tuning parameter value keeps a high sending rate even under frequent bandwidth changes. This is a significant contribution, because such services have become common with the deployment of Content Delivery Networks (CDNs).
In future work, I plan to address three issues: the first is dynamic adaptation of the tuning parameter, the second is convergence between competing TTFWC flows, and the third is adapting TTFWC for use in high-speed networks. The first issue is very difficult because the number of frames is not predictable, especially in video applications. Although I can know the number of packets stored in the queue, the data rate generated for encoded video varies second by second, making flexible selection of the tuning parameter hard. I will research the internal structure of multimedia applications and consider approaches for dynamic control.

The second issue is a limitation of the TTFWC design. When we compare throughput-sensitive TTFWC to the original TFWC, the modification of TFWC towards higher throughput affects protocol friendliness and the throughput of competing flows.
The third problem is a significant concern for future congestion control algorithms. The limitation of my proposal is that it is a formula-based congestion control algorithm, which means that TTFWC throughput does not exceed RENO throughput. Today, it is important to have not only TCP-Friendliness but also high throughput with fast convergence in high-speed networks. For example, CUBIC is used as the Linux default congestion control algorithm despite not being TCP-Friendly on long, fat pipes. I think that one of the messages of this research is the requirement to satisfy both stability and high transmission rates. Thus, I will consider what approaches are suitable for multimedia streaming in high-speed networks.

Acknowledgment
I really appreciate the many people around me and their great support. I list them below.
I express my appreciation to Hideyuki Tokuda for his professional advice, guidance and encouragement. Thanks to his support and kindness, I could take on the challenge of writing this thesis and got chances to join the international community. I am also very grateful to Rodney D. Van Meter. Thanks to his great help and kindness, I was able to improve my thesis.
I would like to thank the professors, associate professors and assistant professors at Keio University: Jun Murai, Osamu Nakamura, Hiroyuki Kusumoto, Kazunori Takashio, Keisuke Uehara, Jin Mitsugi, Keishi Takeda and Jin Nakazawa. Without meeting them and receiving their support, I would not be who I am. Their classes and their personal characters interested me, and that is why I study computer science now. I thank Shoko Miyagawa, too. She took me to the Grace Hopper Celebration of Women in Computing. Through the conference, I made friends and communicated with great students from all over the world. I was also able to expand my world through these conferences, and these experiences gave me fresh inspiration.
I am deeply grateful to my advisor, Michio Honda. Although it took much time to pick
the topic and to work out the proposal, he always pointed out my errors and kept my
motivation high. Without his support, I would not have been able to get my paper accepted or to get
chances to attend international conferences. He is a good friend of mine as well as a good
advisor. I sometimes consulted him about my private concerns and my life. Thank you
for supporting me beyond the role of an advisor.

I also gratefully acknowledge Hideaki Yoshifuji. When I was at a loss over a kernel bug, he
taught me what knowledge I needed. His guidance and professional advice also helped me
evaluate my implementation.
I appreciate all the members of my laboratory and my friends, too. Without them, I would
not have been able to build my skills and overcome hardships.
Of course, I am thankful to my family, too. Although it takes more than two hours to travel from home
to SFC, my family watches over my daily life. My mother has to wake up early when
I have a first-period class. My father often stays up until I come back home safely.
My grandparents always encouraged me when I was disappointed by the limits of my
skills. My younger sister and younger brother always make me laugh and lift my
spirits. They never complain about my lifestyle. Thanks to their support, I have had a happy
school life.
January 21, 2012
Midori Kato

Bibliography
[1] ftrace. http://lwn.net/Articles/370423/.
[2] iperf. http://iperf.sourceforge.net/.
[3] A. Lahanas and V. Tsaoussidis. Exploiting the efficiency and fairness potential of AIMD-based congestion avoidance and control. Computer Networks, 43(2), October 2003.
[4] D. Bansal, H. Balakrishnan, S. Floyd, and S. Shenker. Dynamic Behavior of Slowly-Responsive Congestion Control Algorithms. In Proc. ACM SIGCOMM, 2001.
[5] S. Choi. On studying TFWC protocol smoothness. April 2008.
[6] S. Choi and M. Handley. Designing TCP-Friendly window-based congestion control for
real-time multimedia applications. Proc. PFLDNeT, May 2009.
[7] K. Fall and S. Floyd. Simulation-based comparisons of Tahoe, Reno and SACK TCP.
ACM SIGCOMM Computer Communication Review, July 1996.
[8] J. Feng and L. Xu. TCP-Friendly CBR-Like Rate Control. In Proc. IEEE ICNP,
October 2008.
[9] S. Floyd. Congestion Control Principles. RFC 2914, Sep. 2000.
[10] S. Floyd, M. Handley, J. Padhye, and J. Widmer. Equation-based congestion control for
unicast applications. ACM Computer Communication Review, 30(4):43–56, Oct. 2000.
[11] S. Floyd and T. Henderson. The NewReno Modification to TCP’s Fast Recovery Algorithm. RFC 2582, April 1999.
[12] S. Floyd and A. Romanow. TCP Selective Acknowledgment Options. RFC 2018, October 1996.
[13] I. Rhee, V. Ozdemir and Y. Yi. TEAR: TCP Emulation At Receivers: flow control for
multimedia streaming. Technical Report, Department of Computer Science, 2000.
[14] J. Padhye, V. Firoiu, D. Towsley and J. Kurose. Modeling TCP throughput: a simple
model and its empirical validation. In Proc. ACM SIGCOMM, August 1998.
[15] R. Jain. The art of computer systems performance analysis: techniques for experimental
design, measurement, simulation and modeling. John Wiley and Sons Inc., 1991.
[16] K. Chen, C. Huang, P. Huang and C. Lei. Quantifying Skype user satisfaction. In Proc. ACM
SIGCOMM, September 2006.
[17] A. Medina, M. Allman, and S. Floyd. Measuring the Evolution of Transport Protocols
in the Internet. ACM CCR, 35(2):37–52, 2005.
[18] The network simulator - ns-2. http://www.isi.edu/nsnam/ns/.
[19] S. Jin, L. Guo, I. Matta and A. Bestavros. A spectrum of TCP-Friendly window-based
congestion control algorithms. IEEE/ACM Transactions on Networking, June 2003.
[20] R. Stewart. Stream Control Transmission Protocol. RFC 4960, September 2007.
[21] E. Vieira and M. Bauer. Round-trip Time Variation in SmoothTCP in the Face of
Spurious Errors. Journal of Communications, 35(2):48–56, 2006.
[22] Y. Yang and S. Lam. General AIMD congestion control. In Proc. IEEE ICNP, November
2000.

Published Papers Related to this thesis
• M. Kato, M. Honda and H. Tokuda
“Extending TFWC towards Higher Throughput”,
IEEE Consumer Communications and Networking Conference (CCNC), Jan 2012, Las
Vegas, U.S.A.

Posters and Demos
• M. Kato, M. Honda and H. Tokuda
“A practical congestion control algorithm for real-time multimedia streaming”, Grace
Hopper Conference (GHC), Nov 2011, Portland, U.S.A.
• M. Kato, M. Honda and H. Tokuda
“Implementation of congestion control for multimedia streaming” ,
IEEE International Conference on Embedded and Real-Time Computing Systems and
Applications (RTCSA), Aug 2011, Toyama, Japan

49

Weitere ähnliche Inhalte

Was ist angesagt?

Tcp congestion control (1)
Tcp congestion control (1)Tcp congestion control (1)
Tcp congestion control (1)
Abdo sayed
 
TCP-FIT: An Improved TCP Congestion Control Algorithm and its Performance
TCP-FIT: An Improved TCP Congestion Control Algorithm and its PerformanceTCP-FIT: An Improved TCP Congestion Control Algorithm and its Performance
TCP-FIT: An Improved TCP Congestion Control Algorithm and its Performance
Kevin Tong
 
Computer network (13)
Computer network (13)Computer network (13)
Computer network (13)
NYversity
 
Tcp congestion control
Tcp congestion controlTcp congestion control
Tcp congestion control
Abdo sayed
 

Was ist angesagt? (19)

Tcp congestion control (1)
Tcp congestion control (1)Tcp congestion control (1)
Tcp congestion control (1)
 
Pushing the limits of CAN - Scheduling frames with offsets provides a major p...
Pushing the limits of CAN - Scheduling frames with offsets provides a major p...Pushing the limits of CAN - Scheduling frames with offsets provides a major p...
Pushing the limits of CAN - Scheduling frames with offsets provides a major p...
 
Lect9
Lect9Lect9
Lect9
 
Cubic
CubicCubic
Cubic
 
TCP-FIT: An Improved TCP Congestion Control Algorithm and its Performance
TCP-FIT: An Improved TCP Congestion Control Algorithm and its PerformanceTCP-FIT: An Improved TCP Congestion Control Algorithm and its Performance
TCP-FIT: An Improved TCP Congestion Control Algorithm and its Performance
 
Computer network (13)
Computer network (13)Computer network (13)
Computer network (13)
 
Micazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project reportMicazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project report
 
Tcp congestion control
Tcp congestion controlTcp congestion control
Tcp congestion control
 
Tcp congestion avoidance algorithm identification
Tcp congestion avoidance algorithm identificationTcp congestion avoidance algorithm identification
Tcp congestion avoidance algorithm identification
 
Master Teset Specification SRCP
Master Teset Specification SRCPMaster Teset Specification SRCP
Master Teset Specification SRCP
 
TCP Congestion Control By Owais Jara
TCP Congestion Control By Owais JaraTCP Congestion Control By Owais Jara
TCP Congestion Control By Owais Jara
 
A Baker's dozen of TCP
A Baker's dozen of TCPA Baker's dozen of TCP
A Baker's dozen of TCP
 
Can We Multiplex ACKs without Harming the Performance of TCP?
Can We Multiplex ACKs without Harming the Performance of TCP?Can We Multiplex ACKs without Harming the Performance of TCP?
Can We Multiplex ACKs without Harming the Performance of TCP?
 
TCP congestion control
TCP congestion controlTCP congestion control
TCP congestion control
 
TCP Westwood
TCP WestwoodTCP Westwood
TCP Westwood
 
XPDS13: On Paravirualizing TCP - Congestion Control on Xen VMs - Luwei Cheng,...
XPDS13: On Paravirualizing TCP - Congestion Control on Xen VMs - Luwei Cheng,...XPDS13: On Paravirualizing TCP - Congestion Control on Xen VMs - Luwei Cheng,...
XPDS13: On Paravirualizing TCP - Congestion Control on Xen VMs - Luwei Cheng,...
 
Secure shell performance testing
Secure shell performance testingSecure shell performance testing
Secure shell performance testing
 
Tcp performance simulationsusingns2
Tcp performance simulationsusingns2Tcp performance simulationsusingns2
Tcp performance simulationsusingns2
 
TCP Congestion Control
TCP Congestion ControlTCP Congestion Control
TCP Congestion Control
 

Andere mochten auch (12)

El nacionalismo musical por solange vàsconez
El nacionalismo musical por solange vàsconezEl nacionalismo musical por solange vàsconez
El nacionalismo musical por solange vàsconez
 
Música por Michelle Folleco
Música por Michelle Folleco Música por Michelle Folleco
Música por Michelle Folleco
 
El nacionalismo musical
El nacionalismo musicalEl nacionalismo musical
El nacionalismo musical
 
Historia de la música clásica méxicana
Historia de la música clásica méxicanaHistoria de la música clásica méxicana
Historia de la música clásica méxicana
 
Música Mexicana
Música MexicanaMúsica Mexicana
Música Mexicana
 
NACIONALISMO MUSICAL EN ESPAÑA
NACIONALISMO MUSICAL EN ESPAÑANACIONALISMO MUSICAL EN ESPAÑA
NACIONALISMO MUSICAL EN ESPAÑA
 
MUSICA ESCENICA EN ESPAÑA: TONADILLA Y ZARZUELA
MUSICA ESCENICA EN ESPAÑA: TONADILLA Y ZARZUELAMUSICA ESCENICA EN ESPAÑA: TONADILLA Y ZARZUELA
MUSICA ESCENICA EN ESPAÑA: TONADILLA Y ZARZUELA
 
EXPRESIONISMO MUSICAL
EXPRESIONISMO MUSICALEXPRESIONISMO MUSICAL
EXPRESIONISMO MUSICAL
 
MUSICA PROGRAMATICA Y POEMA SINFONICO
MUSICA PROGRAMATICA Y POEMA SINFONICOMUSICA PROGRAMATICA Y POEMA SINFONICO
MUSICA PROGRAMATICA Y POEMA SINFONICO
 
MUSICA EN EL ROMANTICISMO: PIANO Y LIED
MUSICA EN EL ROMANTICISMO: PIANO Y LIEDMUSICA EN EL ROMANTICISMO: PIANO Y LIED
MUSICA EN EL ROMANTICISMO: PIANO Y LIED
 
Nacionalismo Musical
Nacionalismo MusicalNacionalismo Musical
Nacionalismo Musical
 
Romanticismo y nacionalismo
Romanticismo y nacionalismo Romanticismo y nacionalismo
Romanticismo y nacionalismo
 

Ähnlich wie Extending TFWC towards Higher Throughput

High_Speed_TCP_for_Large_Congestion_Windows.pdf
High_Speed_TCP_for_Large_Congestion_Windows.pdfHigh_Speed_TCP_for_Large_Congestion_Windows.pdf
High_Speed_TCP_for_Large_Congestion_Windows.pdf
SHIKHAARYA26
 
Machine-Type-Communication in 5G Cellular System-Li_Yue_PhD_2018.pdf
Machine-Type-Communication in 5G Cellular System-Li_Yue_PhD_2018.pdfMachine-Type-Communication in 5G Cellular System-Li_Yue_PhD_2018.pdf
Machine-Type-Communication in 5G Cellular System-Li_Yue_PhD_2018.pdf
YAAKOVSOLOMON1
 
disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1
Pavel Prochazka
 
Improved kernel based port-knocking in linux
Improved kernel based port-knocking in linuxImproved kernel based port-knocking in linux
Improved kernel based port-knocking in linux
dinomasch
 
Teletraffic engineering handbook
Teletraffic engineering handbookTeletraffic engineering handbook
Teletraffic engineering handbook
Deepak Singh
 
bachelors_thesis_stephensen1987
bachelors_thesis_stephensen1987bachelors_thesis_stephensen1987
bachelors_thesis_stephensen1987
Hans Jacob Teglbj
 
DCMPL_PROJECT_BOOK_SHAY_ITAMAR
DCMPL_PROJECT_BOOK_SHAY_ITAMARDCMPL_PROJECT_BOOK_SHAY_ITAMAR
DCMPL_PROJECT_BOOK_SHAY_ITAMAR
shay rubinstain
 

Ähnlich wie Extending TFWC towards Higher Throughput (20)

High_Speed_TCP_for_Large_Congestion_Windows.pdf
High_Speed_TCP_for_Large_Congestion_Windows.pdfHigh_Speed_TCP_for_Large_Congestion_Windows.pdf
High_Speed_TCP_for_Large_Congestion_Windows.pdf
 
High_Speed_TCP_for_Large_Congestion_Windows.pdf
High_Speed_TCP_for_Large_Congestion_Windows.pdfHigh_Speed_TCP_for_Large_Congestion_Windows.pdf
High_Speed_TCP_for_Large_Congestion_Windows.pdf
 
A Push-pull based Application Multicast Layer for P2P live video streaming.pdf
A Push-pull based Application Multicast Layer for P2P live video streaming.pdfA Push-pull based Application Multicast Layer for P2P live video streaming.pdf
A Push-pull based Application Multicast Layer for P2P live video streaming.pdf
 
comparison_of_scada_protocols_and_implementation_of_iec_104_and_mqtt.pdf
comparison_of_scada_protocols_and_implementation_of_iec_104_and_mqtt.pdfcomparison_of_scada_protocols_and_implementation_of_iec_104_and_mqtt.pdf
comparison_of_scada_protocols_and_implementation_of_iec_104_and_mqtt.pdf
 
Distributed Traffic management framework
Distributed Traffic management frameworkDistributed Traffic management framework
Distributed Traffic management framework
 
LTE_from_Theory_to_Practise.pdf
LTE_from_Theory_to_Practise.pdfLTE_from_Theory_to_Practise.pdf
LTE_from_Theory_to_Practise.pdf
 
trex_astf.pdf
trex_astf.pdftrex_astf.pdf
trex_astf.pdf
 
Performance assessment of the MASQUE extension for proxying scenarios in the ...
Performance assessment of the MASQUE extension for proxying scenarios in the ...Performance assessment of the MASQUE extension for proxying scenarios in the ...
Performance assessment of the MASQUE extension for proxying scenarios in the ...
 
Machine-Type-Communication in 5G Cellular System-Li_Yue_PhD_2018.pdf
Machine-Type-Communication in 5G Cellular System-Li_Yue_PhD_2018.pdfMachine-Type-Communication in 5G Cellular System-Li_Yue_PhD_2018.pdf
Machine-Type-Communication in 5G Cellular System-Li_Yue_PhD_2018.pdf
 
etd7288_MHamidirad
etd7288_MHamidiradetd7288_MHamidirad
etd7288_MHamidirad
 
Hfc
HfcHfc
Hfc
 
Thesis
ThesisThesis
Thesis
 
On-Line Presentation of Data from PMU Stations
On-Line Presentation of Data from PMU StationsOn-Line Presentation of Data from PMU Stations
On-Line Presentation of Data from PMU Stations
 
disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1
 
Improved kernel based port-knocking in linux
Improved kernel based port-knocking in linuxImproved kernel based port-knocking in linux
Improved kernel based port-knocking in linux
 
T401
T401T401
T401
 
Teletraffic engineering handbook
Teletraffic engineering handbookTeletraffic engineering handbook
Teletraffic engineering handbook
 
bachelors_thesis_stephensen1987
bachelors_thesis_stephensen1987bachelors_thesis_stephensen1987
bachelors_thesis_stephensen1987
 
DCMPL_PROJECT_BOOK_SHAY_ITAMAR
DCMPL_PROJECT_BOOK_SHAY_ITAMARDCMPL_PROJECT_BOOK_SHAY_ITAMAR
DCMPL_PROJECT_BOOK_SHAY_ITAMAR
 
Report on TCP vulnerabilities
Report on TCP vulnerabilitiesReport on TCP vulnerabilities
Report on TCP vulnerabilities
 

Mehr von stucon

ビアマジ!美女
ビアマジ!美女ビアマジ!美女
ビアマジ!美女
stucon
 
サークル対抗テニス大会!
サークル対抗テニス大会!サークル対抗テニス大会!
サークル対抗テニス大会!
stucon
 

Mehr von stucon (20)

ロボットによるサッカー対決
ロボットによるサッカー対決ロボットによるサッカー対決
ロボットによるサッカー対決
 
ロボットによるサッカー対決
ロボットによるサッカー対決ロボットによるサッカー対決
ロボットによるサッカー対決
 
ビアマジ!美女
ビアマジ!美女ビアマジ!美女
ビアマジ!美女
 
テストプラン
テストプランテストプラン
テストプラン
 
test
testtest
test
 
サークル対抗テニス大会!
サークル対抗テニス大会!サークル対抗テニス大会!
サークル対抗テニス大会!
 
バズ・オブ・ビアマジ21
バズ・オブ・ビアマジ21バズ・オブ・ビアマジ21
バズ・オブ・ビアマジ21
 
「ビアマジ21」 ~大学生目線で認知度UP~
「ビアマジ21」
~大学生目線で認知度UP~「ビアマジ21」
~大学生目線で認知度UP~
「ビアマジ21」 ~大学生目線で認知度UP~
 
流行のSNSを活用したPR方法
流行のSNSを活用したPR方法流行のSNSを活用したPR方法
流行のSNSを活用したPR方法
 
ビアマジ!21認知企画
ビアマジ!21認知企画ビアマジ!21認知企画
ビアマジ!21認知企画
 
ビアマジ!21認知企画
ビアマジ!21認知企画ビアマジ!21認知企画
ビアマジ!21認知企画
 
ビアマジが学校にやってくる!プロジェクト
ビアマジが学校にやってくる!プロジェクトビアマジが学校にやってくる!プロジェクト
ビアマジが学校にやってくる!プロジェクト
 
エンジニアレンタルサービス
エンジニアレンタルサービスエンジニアレンタルサービス
エンジニアレンタルサービス
 
エンジニアレンタルサービス
エンジニアレンタルサービスエンジニアレンタルサービス
エンジニアレンタルサービス
 
子供の心を掴め!親にねだらせるPR方法
子供の心を掴め!親にねだらせるPR方法子供の心を掴め!親にねだらせるPR方法
子供の心を掴め!親にねだらせるPR方法
 
ビアメシプロジェクト
ビアメシプロジェクトビアメシプロジェクト
ビアメシプロジェクト
 
課題発見とビール一杯無料以外からの誘引施策
課題発見とビール一杯無料以外からの誘引施策課題発見とビール一杯無料以外からの誘引施策
課題発見とビール一杯無料以外からの誘引施策
 
ビアマジ!21 暇人マッチングサービス
ビアマジ!21 暇人マッチングサービスビアマジ!21 暇人マッチングサービス
ビアマジ!21 暇人マッチングサービス
 
暇人マッチングサービス
暇人マッチングサービス暇人マッチングサービス
暇人マッチングサービス
 
暇人マッチングサービス
暇人マッチングサービス暇人マッチングサービス
暇人マッチングサービス
 

Kürzlich hochgeladen

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 

Kürzlich hochgeladen (20)

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 

Extending TFWC towards Higher Throughput

  • 1. Extending TFWC towards Higher Throughput Midori Kato Faculty of Environmental Information Keio University 5322 Endo, Fujisawa Kanagawa 252-8520 JAPAN Submitted in partial fulfillment of the requirements for the degree of Bachelor of Science Advisors: Hideyuki Tokuda Jun Murai Osamu Nakamura Kazunori Takashio Jin Nakazawa Rodney D. Van Meter III Keisuke Uehara Jin Mitsugi Hiroyuki Kusumoto Keishi Takeda c Copyright2011 Midori Kato
  • 2. Abstract of Bachelor’s Thesis Extending TFWC towards Higher Throughput Multimedia streaming over TCP is common in the Internet. Under condition of scarce network resource, the rate oscillation due to congestion control degrades the user’s experience. In addition, the data transmisson of multimedia services under the congested network needs to consider the priority of services depending on the application type. While TFWC supports smooth congestion control for real-time multimedia streaming, it cannot provide a high-enough sending rate to the services that require high throughput rather than smoothness. In this thesis, I propose Tunable TFWC (TTFWC), an extension to TFWC to achieve variable throughput behavior with regard to trade-off between smoothness and the sending rate based on application or user requirements. Unlike many existing works, this proposal is suitable for both throughput-sensive applications and smoothness-sensitive application by using a tuning parameter. I observe that TTFWC with large tuning parameter improves around 10% higher throughput than the original congestion control algorithm in a low-speed network. In a high-speed network, TTFWC throughput with large value of the tuning parameter over 4% compared with the original. Midori Kato Faculty of Environmental Information Keio University i
  • 3. Abstract of Bachelor’s Thesis Thesis Extending TFWC towards Higher Throughput 高スループット実現のための TFWC の拡張 近年のインターネット技術の発展に伴い,TCP を利用したマルチメディアストリーミングが普及しつつ ある.しかし,ネットワーク資源が乏しいモバイルネットワーク環境などでは,TCP による輻輳制御が転送 レートの変動を大きくしてしまい,コンテンツ再生が不安定になる.またネットワークの輻輳が激しい環境で は,サービスの品質を考慮した上でサービスタイプ別にデータ通信を行うことが求められる.リアルタイムマ ルチメディアストリーミングサービス向けに提案された TFWC という輻輳制御アルゴリズムがあるが,転送 レートの滑らかさよりも大きさを重視するサービスには利用できない.本研究では, TFWC の拡張を行い, アプリケーション,ユーザの需要にあわせて転送レートの大きさと滑らかさのトレードオフを調節可能にした Tunable TFWC (TTFWC) を提案する.多くの既存研究とは異なり,本研究で提案する輻輳制御アルゴリズ ムはチューニングパラメータを設定することで,転送レートの大きさと滑らかさのどちらを優先するアプリ ケーションにも調節できる.実験の結果,既存の輻輳制御アルゴリズムと比較して,低速なネットワークでは 10%以上のスループットの向上,高速なネットワークでは 4%程度のスループットの向上が観測できた. 慶應義塾大学 環境情報学部 加藤 碧 ii
  • 4. Table of Contents 1 Introduction 1 1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Challenges and Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3 Structure of Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 2 Problem Definition 2.1 4 4 2.1.1 AIMD-based Congestion Control Algorithm . . . . . . . . . . . . . . 4 2.1.2 Formula-based Congestion Control Algorithm . . . . . . . . . . . . . 5 2.1.3 CBR-based Congestion Control Algorithm . . . . . . . . . . . . . . . 5 2.1.4 2.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . New Metric-based Congestion Control Algorithm . . . . . . . . . . . 5 TCP-Friendly Window-based Congestion Control . . . . . . . . . . . . . . . 6 3 Tunable TCP-Friendly Congestion Control 8 3.1 TTFWC Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 3.2 Base cwnd Update at the Beginning of the Increasing cwnd Period . . . . . . 10 3.3 Change of the Number of Loss Intervals in the ALI Calculation . . . . . . . 11 3.4 RENO Operation when cwnd is Less than two . . . . . . . . . . . . . . . . . 12 4 Implementation 13 iii
  • 5. 4.1 Slow Start and Congestion Avoidance . . . . . . . . . . . . . . . . . . . . . . 15 4.2 TTFWC Loss Recovery Behavior . . . . . . . . . . . . . . . . . . . . . . . . 16 5 TTFWC Performance Evaluation 5.1 21 21 5.1.1 Responsiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 5.1.2 Throughput and TCP-Friendliness . . . . . . . . . . . . . . . . . . . 24 5.1.3 Smoothness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 5.2.1 Fairness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 5.2.2 Throughput and TCP-Friendliness with different RTT . . . . . . . . 35 5.2.3 Protocol Friendliness and Throughput Performance . . . . . . . . . . 36 5.2.4 5.2 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . CPU Processing Time . . . . . . . . . . . . . . . . . . . . . . . . . . 40 6 Conclusion and Future Work 43 iv
  • 6. List of Figures 2.1 Window behavior in TFWC . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 3.1 TCP and TFWC cwnd behavior . . . . . . . . . . . . . . . . . . . . . . . . . 9 3.2 TTFWC cwnd behavior (When Eq.3.1 is true) . . . . . . . . . . . . . . . . . 9 3.3 TTFWC cwnd behavior when (When Eq.3.1 is false) . . . . . . . . . . . . . . 10 4.1 The TCP procedure diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 4.2 ttfwc structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 4.3 the TTFWC procedures’ definitions . . . . . . . . . . . . . . . . . . . . . . . 15 4.4 ttfwc cong avoid function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 4.5 ttfwc recalc ssthresh function . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 4.6 ttfwc alpha cwnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 5.1 TTFWC(0.0) on constant CBR flows . . . . . . . . . . . . . . . . . . . . . . 22 5.2 TTFWC(0.4) on constant CBR flows . . . . . . . . . . . . . . . . . . . . . . 22 5.3 TTFWC(0.9) on constant CBR flows . . . . . . . . . . . . . . . . . . . . . . 22 5.4 The packet loss rate in experiments in Fig. 5.1 - 5.3 . . . . . . . . . . . . . . 23 5.5 Simulation Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 5.6 TTFWC(0.0) and TCP throughput . . . . . . . . . . . . . . . . . . . . . . . 25 5.7 TTFWC(0.4) and TCP throughput . . . . . . . . . . . . . . . . . . . . . . . 26 5.8 TTFWC(0.9) and TCP throughput . . . . . . . . . . . . . . . . . . . . . . . 27 v
  • 7. 5.9 Throughput behavior of TCP, TTFWC(0.0), TTFWC(0.4) and TTFWC(0.9) 29 5.10 Experimental setup of all tests . . . . . . . . . . . . . . . . . . . . . . . . . . 30 5.11 Fairness with six active flows three flows with 120 ms delay and the other three flows with varying delays. Results are shown for 10 Mbps (top) or 250 Mbps (bottom) bottleneck bandwidth. . . . . . . . . . . . . . . . . . . . . . 33 5.12 Fairness of active six flows under symmetric conditions (same propagation delay, shared bottleneck link, same congestion control algorithm). Results are shown for 10Mbps (top) or 250Mbps (bottom) bottleneck bandwidth. . . . . 34 5.13 TTFWC(0.0) and TCP throughput (Bottleneck Link 250Mbps) . . . . . . . 36 5.14 TTFWC(0.4) and TCP throughput (Bottleneck Link 250Mbps) . . . . . . . 36 5.15 TTFWC(0.9) and TCP throughput (Bottleneck Link 250Mbps) . . . . . . . 37 5.16 TTFWC(0.0) and TCP throughput (Bottleneck Link 10Mbps) . . . . . . . . 37 5.17 TTFWC(0.4) and TCP throughput (Bottleneck Link 10Mbps) . . . . . . . . 38 5.18 TTFWC(0.9) and TCP throughput (Bottleneck Link 10Mbps) . . . . . . . . 38 5.19 Normalized Throughput (bottleneck Link 250 Mbps) . . . . . . . . . . . . . 39 5.20 Normalized Throughput (bottleneck Link 10 Mbps) . . . . . . . . . . . . . . 40 vi
  • 8. List of Tables 5.1 CPU processing time for congestion avoidance procedure . . . . . . . . . . . 41 5.2 CPU processing time for loss recovery procedure . . . . . . . . . . . . . . . . 42 vii
  • 9. Chapter 1 Introduction This chapter describes the background on this research topic. Also, I describe the problems I are concerned with, and the approach to attacking the issues. 1.1 Motivation The proliferation of high-speed, broadband networks and mobile networks has made multimedia streaming popular. The growth of multimedia streaming increases the demand to distribute high-quality video and audio data in real-time streaming mode. As the usage of multimedia streaming requires more bandwidth than normal web browsing or email services, the design of congestion control multimedia streaming is an important issue in the network, even atop non-congestion-controlled protocols such as UDP. Congestion control for multimedia streaming is designed to meet two goals; first, it must be TCP-Friendly, sharing the equally bandwidth with competing TCP flows at the bottleneck. TCP transmits the majority of bytes on the Internet, thus compatibility with TCP congestion control is adopted as a reasonable metric to evaluate congestion control algorithms [9]. Second, its throughput must be smooth so that the application achieves constant sending rate, whereas TCP throughput behavior is sawtooth-shaped. [16] shows that burst transmission does not provide a good user experience. Smoothness is the new functionality that 1
  • 10. overcomes the drawbacks of TCP congestion control. TCP-Friendly Rate Control (TFRC) [10] is the Internet standard congestion control for streaming media applications. TFRC achieves a smooth sending rate without compromising TCP-friendliness. However, TFRC transmits excessively high sending rate at short Round Trip Time (RTT), depending on timer granularity of the end system. TCP-Friendly Window Control (TFWC) [6] is a window-based variant of TFRC, thus it does not require accurate RTT calculation. TFWC also achieves a smoother sending rate, and is more TCP-Friendly than TFRC. In addition, TFWC can be implemented in various transport protocols, such as TCP and SCTP [20], because they also perform window-based congestion control. 1.2 Challenges and Contribution Smoothness and high sending rate are trade-off, because the congestion control algorithm for smooth throughput pressures the sending rate not to suddenly increase, even if more bandwidth is available. Indeed, in interactive applications, such as VoIP and video conferencing, data have to be delivered smoothly, because they cannot buffer large amounts of sending and receiving data. However, applications such as on-demand video and audio streaming, and non-interactive live streaming can buffer some data. They require a sending rate as high as possible and a certain level of smoothness, rather than the perfect smoothness that TFRC, TFWC and the other TFRC extensions strive to achieve. In other words, sending rate is more important than smoothness in such applications. In this thesis, I propose Tunable TFWC (TTFWC), a TCP-Friendly congestion control algorithm that can tune the balance between the sending rate and smoothness. TTFWC supports a wide range of multimedia applications, including those that desire as high sending rate as possible and certain level of smoothness. I define one tuning parameter to control the balance between the sending rate and smoothness. In my experiments, TTFWC 2
  • 11. achieves variable throughput behavior by changing the tuning parameter value. For example, TTFWC achieves around 10% higher throughput than TFWC in the low-speed network and over 4% higher in the high-speed network. I also demonstrate TTFWC is TCP-Friendly over a wide range of the tuning parameter value. 1.3 Structure of Thesis The remainder of this thesis is organized as follows: Chapter 2 describe related work and TFWC overview. I describe the design of TTFWC window calculation in Chapter 3. Chapter 4 explains my implementation. I present the results of implementation, over dummynet and the simulation results in Chapter 5. I evaluate TTFWC in terms of responsiveness, throughput, TCP-Friendliness, fairness and smoothness. In this Chapter, I include the discussion for the TTFWC required network. Then, the thesis concludes in Chapter 6. 3
  • 12. Chapter 2 Problem Definition This chapter describes existing proposals which target smooth congestion control and compare these works with my approach. 2.1 Related work Congestion control for multimedia streaming falls into three categories: Additive Increase Multiple Decrease (AIMD)-based [22, 3], formula-based [19, 10, 6, 13] and Constant Bit Rate (CBR)-based [8]. At the end of this section, I show you another approach as well. 2.1.1 AIMD-based Congestion Control Algorithm AIMD-based congestion control is RENO extension. In AIMD-based congestion control algorithms, flows achieve lower throughput than competing TCP flows when congestion frequently occurs. In congested network, as the sender decrease little window size after a packet loss, it achieves lower throughput comparing to other types of congestion control. On the other hand, TTFWC provides high sending rate without the amplitude of saw-toothed bandwidth change and decrease. 4
  • 13. 2.1.2 Formula-based Congestion Control Algorithm Formula-based congestion control uses TCP throughput equation [14] to estimate cwnd. SIMD [19] and TEAR [13] are representatives of the second category. They maintain a high sending rate against frequent congestion. However, SIMD has a slightly higher packet loss rate than other congestion control algorithm, thus the overall performance is poor. Although TEAR achieves a smooth sending rate by using sliding window with the receiver control, its response to congestion is quite slow. TTFWC overcomes these problems, and as shown it in Section 5.1. 2.1.3 CBR-based Congestion Control Algorithm The CBR-based approach [8] proposes a congestion control algorithm that achieves a smooth sending rate with a new definition of TCP-Friendliness. Since multimedia streaming historically uses UDP, CBR-based congestion control extends existing works to smooth throughput behavior. Flows employing this algorithm are stochastically TCP-Friendly, that is, these flows are fair to aggregates of competing TCP flows. However, these flows cause the throughput of one or some of the competing TCP flows to drastically reduce. In addition, although TTFWC is designed to meet the current definition of TCP-Friendliness, this approach has not been throughly vetted. 2.1.4 New Metric-based Congestion Control Algorithm STCP [21] also aims at smooth throughput behavior by using some counters such as timeout, fast retransmit and so on. However, its TCP-friendliness is not evaluated and analyzed. In addition, this uses extra ICMP packets to detect congestion, which will be affected by middleboxes or routers [17]. 5
  • 14. $%' (* () !#$!#$%%'()* (+ !# +,-'.(,/$!#$%%'()* Figure 2.1: Window behavior in TFWC 2.2 TCP-Friendly Window-based Congestion Control The TFWC sender detects packet loss from ACK packets sent from the receiver. The TFWC sender calculates the congestion window size cwnd from the packet loss event rate p based on TCP throughput equation (Equation 2.1) [14]. cwnd = 2p 3 1 3p (2.1) + 12 8 p (1 + 32p2 ) When the sender detects a packet loss event (i.e., loss of one or more packets in the window size) from ACK packets, the sender calculates the loss-event interval, which is the number of packets successfully arriving at the receiver between two loss events. The track of loss-event intervals is called “loss-event history”. From the loss-event history, the sender calculates the Average Loss Interval (ALI) converted to the calculated loss event rate by Equation 2.2. 6
  • 15. Loss Event Rate (p) = 1 Average Loss Interval (2.2) The calculation of an ALI is important because it impacts the next cwnd. When the sender updates its loss event history, two kinds of ALIs are calculated [5]. The first one is calculated with Exponentially Weighted Moving Average (EWMA), which uses the most recent sixteen loss intervals, the last one of which is the number of transmitted packets after the last loss event. The other one is calculated with EWMA which uses sixteen loss intervals, not counting the number of transmitted packets after the last loss event in (i.e., use an older loss interval than the first one). The sender adopts the larger one as the estimated ALI to derive cwnd. We call the interval that the first one is chosen a “flat cwnd period”, and the interval that the other one is chosen an “increasing cwnd period”. Fig. 2.1 illustrates TFWC cwnd behavior. A loss event happens at time t0 and t2 . Between time t0 and t1 , the flat cwnd interval is applied. Between time t1 and t2 , the increasing cwnd interval is applied, thus the cwnd increases little by little. 7
  • 16. Chapter 3 Tunable TCP-Friendly Congestion Control This chapter describes my proposal congestion control algorithm for multimedia streaming named TTFWC, which is composed of three algorithms. 3.1 TTFWC Overview In this section, I explain the design of TTFWC and the effect of the tuning parameter tp that varies between 0.0 and 1.0. I call the range of tp less than 0.5 as the “smooth mode” and more than or be equal to 0.5 as the “responsive mode”. This is because, the former gives relatively high priority to smoothness, whereas the latter gives that to throughput. More importantly, TFWC uses the different window calculation and the number of loss intervals. I make three modifications in TFWC cwnd calculation to set the tuning parameter: First is that the sender increases its cwnd more aggressively at the increasing cwnd period, second is that the sender changes the number of loss interval in the ALI calculation from this value, and third is cwnd calculation in low bandwidth. 8
  • 17. 0123) - , $% $. $/ !# '() *+' Figure 3.1: TCP and TFWC cwnd behavior 15#)6789!:);2)=)+?8# $%' ()*)+, +3 +2 +4 !# -./) -01. Figure 3.2: TTFWC cwnd behavior (When Eq.3.1 is true) 9
  • 18. ()#*+,-./!0*+12%+34+*0$+$5.#2 :;*+ $ $% $' !# 678+ 69(7 Figure 3.3: TTFWC cwnd behavior when (When Eq.3.1 is false) 3.2 Base cwnd Update at the Beginning of the Increasing cwnd Period In order to achieve higher throughput than the original TFWC, TTFWC increases the base cwnd at the beginning of the increasing cwnd period. The rationale is that TFWC does not increase the congestion window quickly when the calculated TCP-Friendly congestion window is larger to keep the smooth sending rate as shown as δ in Fig. 3.1. This is also the reason why TFWC achieves lower throughput than TCP (see Sec. 5.1 with Fig. 5.6 for simulation results). In the responsive mode (i.e., 0.5 ≤ tp 1.0), TTFWC increases the base cwnd by tp × δ, where tp is the tuning parameter and δ is the difference between the current TCP cwnd and the calculated TCP-Friendly cwnd (Fig. 3.2 illustrates δ at time t1 ). Hence, the responsive mode achieves the higher throughput based on the tuning parameter. In the smooth mode (i.e., 0 ≤ tp 0.5), TTFWC increases the base cwnd by tp2 × δ. Since 10
  • 19. the smooth mode is sensitive to throughput oscillation caused by the transient congestion, I conservatively increase the base cwnd. In Fig. 3.2, the integral of TCP cwnd and TFWC cwnd are equal through the time between t0 and t1 . When a packet is lost before the time t1 , the TFWC sender receives higher throughput than TCP. Hence, I limit the opportunity to increase the base cwnd to prevent unfair utilization of the bottleneck link. I increase the base cwnd only when Equation 3.1 is true on the ALI calculation: 1 i=1 weighti n (3.1) Let be the cwnd decrease fraction after the packet loss event, as illustrated in Fig. 3.1, and weighti be the i-th EWMA weight. Fig. 3.2 and 3.3 show TTFWC cwnd behavior when Equation 3.1 is true and not true, respectively. If the value of Equation 3.1 is not true after a packet loss, the sender uses its window size as is the case with TFWC cwnd behavior. 3.3 Change of the Number of Loss Intervals in the ALI Calculation I change the number of loss intervals from sixteen to eight in the responsive mode. The larger number of loss intervals makes cwnd decrease more slowly. Hence, if I use the same number of loss intervals as the original TFWC, the TTFWC flow becomes overly aggressive, because it increases the base cwnd at the beginning of the increasing cwnd period. When the number of loss intervals is halved, the sender receives the convergence to the ideal window size up to twice as fast. Since the total EWMA weight in an ALI calculation is halved, the sender decreases its cwnd by up to half after the sender experiences an extremely short loss interval. After the sender experiences an extremely long loss interval, it increases its cwnd up to twice. I note that throughput in responsive mode TTFWC using 8 loss intervals in its ALI calculation is still much smoother than TCP (See Fig. 5.9 in Sec. 5.1). 11
  • 20. I adopt the same number of loss intervals as the original TFWC in the smooth mode. If I apply the tuning parameter in the same way as in the responsive mode, the smooth mode TTFWC flow is obviously overly aggressive as the tuning parameter is larger. Hence, I have to conservatively apply the tuning parameter to the base cwnd. My design choice is applying the tuning parameter to the base cwnd exponentially. 3.4 RENO Operation when cwnd is Less than two TTFWC performs RENO operation [7, 11] when cwnd falls below two. While the original TFWC uses rate control, I adopt RENO operation to prevent from high CPU load as a result of cwnd calculation. I will describe the CPU processing time for the main functions in the evaluation. 12
  • 21. Chapter 4 Implementation This chapter describes the TTFWC implementation. I implemented TTFWC as a TCP congestion control algorithm in the Linux 3.0.0 kernel. The implementation consists of around 900 total lines of code, structured as a loadable kernel module. Thus, the application and the system administrator can enable TTFWC with ease. As TTFWC is a sender-only algorithm, I do not need modification to the receiver so long as the receiver behaves as well as regular TCP. We enable TTFWC by a sysctl parameter. The subsequent sections describe how to perform TTFWC to match the congestion avoidance, slow start, and loss recovery behavior. Fig. 4.1 shows the logic of a TCP socket after the connection is established on a sender. The TCP option structure must be initialised at this time and a call to TCP register congestion control is made. Then, every time the machine receives ACK, it checks the state of the socket in the function tcp ack(). When the function tcp ack is dubious() returns its state is normal, the machine enters the congestion avoidance phase. At this time, the soceket calls congestion control function cong avoid(). In the Linux kernel, when the machine encounters a loss event, three types of states are prepared for loss recoveries. TCP CA Recovery is set for fast transmission. TCP CA CWR means that the machine raises congestion inside NIC, or ECN and etc. The machine handles loss recovery depending on the loss event 13
  • 22. !#$%'()*' !!#$%#'() 34 Normal state? tcp_ack_is_dubious() +),'#-.'/012' #$%#*+,%-*./() 56 Duplicate ACK? 34 tcp_fastretrans_alert() 7142'281.49$44$6.' 086#/8' 56 01#2#321!4456145! Figure 4.1: The TCP procedure diagram type. In the Linux implementation, the function tcp fastretrans alert() divides into fast retransmission and other procedure. Fig. 4.2 shows the variables in the TTFWC structure with the definitions and Fig. 4.3 shows the TTFWC procedures. The TTFWC structure and TTFWC procedures’ definition are employed only if the kernel is compiled with the TTFWC support enabled. 1 2 3 4 5 6 7 8 9 10 struct ttfwc { u32 tcp_cwnd; // TCP cwnd calculated by TTFWC procedures u32 is_reno; // flag when it is in RENO phase u32 reno_cwnd; // RENO cwnd estimation u32 alpha_cwnd; // additional cwnd depending on a parameter u32 li_dif; // the number of packets until the ALI is over u32 li_counter; // the counter of loss intervals u32 li_mean; // the weighted ALI struct ttfwc_loss_hist *lh; // array to stock loss intervals }; Figure 4.2: ttfwc structure 14
  • 23. 1 static struct tcp_congestion_ops tcp_ttfwc __read_mostly = { 2 .init = ttfwc_init, // variables initiate 3 .ssthresh = ttfwc_recalc_ssthresh, // ssthresh calculation 4 .cong_avoid = ttfwc_cong_avoid, // cwnd update 5 .undo_cwnd = ttfwc_undo_cwnd, // a false loss detected 6 .owner = THIS_MODULE, 7 .name = ttfwc, 8 }; Figure 4.3: the TTFWC procedures’ definitions 4.1 Slow Start and Congestion Avoidance ttfwc cong avoid() updates the cwnd every time a packet is acknowledged. Fig. 4.4 displays congestion avoidance phase of TTFWC. TTFWC slow start and timeout behaviors are the same as RENO. The sender applies TTFWC operation after the slow start phase. Upon receiving an ACK from the receiver, the TTFWC sender performs the following update. The TTFWC sender calls RENO operation in the slow start phase or when cwnd is less than two (Lines 9-16). When the sender is out of RENO mode, it estimates RENO cwnd (Lines 17-26). In this scope, the sender estimates based on the window behavior as described Chapter 3. After the number of packets is over the ALI, it increases cwnd (Lines 27-). In this time, the TTFWC sender adds extra cwnd into base cwnd when a tuning parameter is set (Lines 14-17). The TTFWC sender after increasing cwnd period starts to calculate cwnd by using the ALI. When the ALI is over the estimation of next loss interval, the sender add extra cwnd if the tuning parameter is set more than zero (Line 29-30). I adopt inaccurate ALI estimation (Line 32 and 34). In the substitute calculation, the smooth mode is a little smoother than accurate ALI calculation because the accurate ALI is increased when the sender receives 5.3 ACKs. As well, the responsive mode is more aggressive than accurate ALI calculation because the accurate ALI is increased when the sender receives 2.8 ACKs. 15
  • 24. I find that the exact procedure of the ALI calculation wastes processing time. The cwnd update with accurate ALI calculation sometimes stops data transmission due to the overhead, TTFWC algorithm in the Virtual Machine (See Section 5.2.4). Through the experience, the limitation of the machine resources affects data transmission. I recommend approximate ALI calculation to minimize the overhead in the congestion avoidance phase. I will also comment about ALI calculation in Section . I explain the specific functions as below. tcp is cwnd limited shown in Lines 7-8 prevents burst data transmission if the current cwnd is not much. ttfwc invert imean executes to be the inverse number of the ALI (Line 38). 4.2 TTFWC Loss Recovery Behavior The TTFWC sender enters into ttfwc recalc ssthresh when the SACK or ACK proof contains a loss. Fig. 4.2 shows the procedure of TTFWC loss recovery. For the request after a loss event, TTFWC performs the following operation: When its cwnd is less than two, the TTFWC sender skips the following operation and is in RENO operation (Line 7-10). When the sender detects the first loss event, it calculates a pseudo-loss interval, loss event rate and the number of packets until the increasing cwnd period by using the Equation (Line 11-17). In this scope, the sender estimates loss interval by using Equation 2.1 and sets the number of packets until the ALI is over as li dif. After the first loss event, the sender updates loss intervals and calculates the ALI and the next cwnd. When the calculated cwnd is more than two, the sender reverses TTFWC (Line 19-22). Then, the sender returns the current cwnd as ssthresh (Line 16). Fig. 4.6 indicates how to estimate RENO cwnd and how to calculate the additional cwnd in the function ttfwc alpha cwnd. The sender is halved RENO cwnd and compares to next TTFWC cwnd (line 4). A part of lines 13-14 indicates the exception for the estimation. The 16
  • 25. sender enters this scope when its state is TCP CA CWR because the cwnd is set 1. When next TTFWC cwnd is over the halved RENO cwnd, the sender checks Equation 3.1 (Line 10-12). Then, if Equation 3.1 is true shown in line 17 and 19, the sender calculates additional cwnd. If not, additional cwnd is reset. 17
  • 26. 1 static void ttfwc_cong_avoid(struct sock *sk, u32 ack, u32 in_flight) 2 { 3 struct tcp_sock *tp = tcp_sk(sk); 4 struct ttfwc *ca = inet_csk_ca(sk); 5 u32 li_mean, p, cwnd; 6 7 if (!tcp_is_cwnd_limited(sk, in_flight)) 8 return; 9 if (ca-is_reno || ttfwc_lh_length(ca) 1){ 10 ca-lh-li_length[0]++; 11 if (tp-snd_cwnd = tp-snd_ssthresh) 12 tcp_slow_start(tp); 13 else 14 tcp_cong_avoid_ai(tp, tp-snd_cwnd); 15 return; 16 } 17 if (ca-lh-li_length[0] = ca-li_dif/2) { 18 if (ca-lh-li_length[0] == ca-li_dif/2) 19 ca-reno_cwnd = ca-tcp_cwnd; 20 if (tp-snd_cwnd_cnt = ca-reno_cwnd) { 21 if (ca-reno_cwnd tp-snd_cwnd_clamp) 22 ca-reno_cwnd++; 23 tp-snd_cwnd_cnt = 0; 24 } else 25 tp-snd_cwnd_cnt++; 26 } 27 if (++ca-lh-li_length[0] ca-li_dif) { 28 u32 count = ca-lh-li_length[0] - ca-li_dif; 29 if (count == 0 sysctl_ttfwc_tuning_p 0) 30 tp-snd_cwnd += ca-alpha_cwnd; 31 if (sysctl_ttfwc_tuning_p = 5){ 32 if (count%6 == 0) goto common; 33 } else { 34 if (count%3 == 0) goto common; 35 } 36 } 37 common: 38 p = ttfwc_invert_imean(++ca-li_mean); 39 cwnd = ttfwc_calc_x(p); 40 if (cwnd + ca-alpha_cwnd tp-snd_cwnd) 41 tp-snd_cwnd++; 42 } Figure 4.4: ttfwc cong avoid function 18
  • 27. 1 static inline u32 ttfwc_recalc_ssthresh(struct sock *sk) 2 { 3 const struct tcp_sock *tp = tcp_sk(sk); 4 struct ttfwc *ca = inet_csk_ca(sk); 5 u32 next_cwnd, p; 6 7 if (tp-snd_cwnd 2){ 8 next_cwnd = tp-snd_cwnd; 9 goto out; 10 } 11 if (++ca-li_counter == 1) { 12 ca-tcp_cwnd = tp-snd_cwnd 1; 13 ca-lh-li_length[1] = ca-li_mean = pseudo_li_calc(ca-tcp_cwnd); 14 ca-lh-li_length[0] = 1; 15 ca-li_dif = ca-li_mean/100; 16 ca-li_counter++; 17 goto out; 18 } else { 19 if (ca-is_reno 0) 20 ca-is_reno = 0; 21 ttfwc_loss_hist_update(ca); 22 ca-li_mean = ttfwc_avg_loss_interval(ca); 23 if(ca-li_mean 0) { 24 p = ttfwc_invert_imean(ca-li_mean); 25 next_cwnd = ttfwc_calc_x(p); 26 } else 27 next_cwnd = 0; 28 if (sysctl_ttfwc_tuning_p == 0) goto out; 29 else{ 30 if (ca-tcp_cwnd = ca-reno_cwnd) 31 ttfwc_alpha_cwnd(ca, ca-tcp_cwnd, next_cwnd); 32 } 33 } 34 out: 35 ca-tcp_cwnd = next_cwnd; 36 if (cwnd = 2 ca-is_reno == 0) { 37 cwnd = 2; 38 ca-li_dif = 0; 39 ca-is_reno = 1; 40 } 41 return max(ca-tcp_cwnd, 2U); 42 } Figure 4.5: ttfwc recalc ssthresh function 19
  • 28. 1 static inline void ttfwc_alpha_cwnd 2 (struct ttfwc *ca, u32 cwnd, u32 next_cwnd) 3 { 4 ca-reno_cwnd = ca-reno_cwnd 1; 5 if (ca-reno_cwnd next_cwnd) 6 ca-alpha_cwnd = 0; 7 else{ 8 u32 dthresh; 9 if (ca-alpha_cwnd 0) 10 dthresh = (u32) next_cwnd*100/(cwnd-ca-alpha_cwnd); 11 else 12 dthresh = (u32) next_cwnd*100/cwnd; 13 if (dthresh = 100) 14 ca-alpha_cwnd = 0; 15 else { 16 u32 thresh = next_cwnd - ca-reno_cwnd; 17 if (sysctl_ttfwc_tuning_p = 5 dthresh 72) 18 ca-alpha_cwnd = (u32) thresh*sysctl_ttfwc_tuning_p/10; 19 else if (sysctl_ttfwc_tuning_p 0 dthresh 81) 20 ca-alpha_cwnd = (u32) thresh*pow(sysctl_ttfwc_tuning_p, 2)/100; 21 else 22 ca-alpha_cwnd = 0; 23 } 24 } 25 } Figure 4.6: ttfwc alpha cwnd 20
  • 29. Chapter 5 TTFWC Performance Evaluation This chapter evaluates the efficiency of TTFWC via simulation and experiment with the implementation. First, I describe simulation testbed and results. Second, I evaluate my implementation in the Linux kernel. 5.1 Simulation Results I evaluate TTFWC in terms of responsiveness, throughput and TCP-Friendliness, and smoothness through ns-2 simulation [18]. Through all experiments, I execute using three values of the tuning parameter value 0.0, 0.4 and 0.9. All TCP flows leveraged in experiments are SACK TCP [12]. The tuning parameter value 0.4 and 0.9 are representatives of the smoothness-sensitive and the throughput-sensitive node of TTFWC. When the tuning parameter is 0.0, TTFWC is equivalent to the original TFWC shown in Equation 3.1. I denote TTFWC with the tuning parameter a as TTFWC(a). 5.1.1 Responsiveness I run TTFWC flows on a environment with dynamic changes of available bandwidth to evaluate responsiveness. In the simulation, I run two TTFWC flows over the 15 Mbps bottleneck link. RTTs of the two TFWC flows are 8.4 and 8.8 ms. At time x seconds, 10 Mbps CBR flows are injected, remain for x seconds, and leave. Each trial repeats this 21
  • 30. Average utilization of idle Bandwidth (normalized) 1 0.8 0.6 0.4 TTFWC(0.0) TCP 0.2 0 0.01 0.1 1 10 100 Length of low/high bandwidth (15Mb/5Mb) period Average utilization of idle Bandwidth (normalized) Figure 5.1: TTFWC(0.0) on constant CBR flows 1 0.8 0.6 0.4 TTFWC(0.4) TCP 0.2 0 0.01 0.1 1 10 100 Length of low/high bandwidth (15Mb/5Mb) period Average utilization of idle Bandwidth (normalized) Figure 5.2: TTFWC(0.4) on constant CBR flows 1 0.8 0.6 0.4 TTFWC(0.9) TCP 0.2 0 0.01 0.1 1 10 100 Length of low/high bandwidth (15Mb/5Mb) period Figure 5.3: TTFWC(0.9) on constant CBR flows 22
  • 31. 0.05 TCP TTFWC(0.0) TTFWC(0.4) TTFWC(0.9) Droprate 0.04 0.03 0.02 0.01 0 0.01 0.1 1 10 Length of low/high bandwidth (15Mb/5Mb) period 100 Figure 5.4: The packet loss rate in experiments in Fig. 5.1 - 5.3 interval for 200 seconds, and x varies between 0.05 and 51.2. For comparison, I conduct the equivalent experiments on TCP instead of TTFWC. These scenarios are called SlowCC [4], being widely used to evaluate congestion control behavior on ON/OFF CBR flows. Fig. 5.1 - 5.3 show utilization of the idle bandwidth of TTFWC and TCP flows. The horizontal axis and the vertical axis represent x and utilization of the idle bandwidth, respectively. Values are normalized, thus 1 means idle bandwidth is fully utilized by TTFWC or TCP flows. TTFWC(0.0) (i.e., original TFWC) and TTFWC(0.4) flows are up to approximately 20% less utilization of the idle bandwidth than TCP flows. On the other hand, TTFWC(0.9) flows, more responsive in the TTFWC design, utilize almost equal idle bandwidth to TCP flows. I conclude from these results that the TTFWC with the greater tuning parameter responds to change of the idle bandwidth more quickly. As the result, they achieved higher throughput than TTFWC flows with the lower tuning parameter. I observed higher packet loss rate with the greater tuning parameter than with the smaller tuning parameter. Fig. 5.4 plots the packet loss rate during the experiments in Fig. 5.1 - 5.3. I believe the reason is that the TTFWC mode toggles between the flat cwnd period and cwnd increase period frequently. However, I believe this problem is minor, because it happens in a very small range 23
  • 32. !!!!!! !!!!!! #$%# ''()*% )'*+%,-.. /00 '#$%# +#()*% !!!!!! #$%# ''()*% #%!(# '#,#!-#$%# ''()*% !!!!!! 123.#'0(,# '#,#!-#$%# ''()*% !#$%$'%# 11452.#'0(,# Figure 5.5: Simulation Topology of congestion patterns. 5.1.2 Throughput and TCP-Friendliness I evaluate the TCP-Friendliness of TTFWC. I use the dumbbell topology in the simulation shown in Fig. 5.5. I set the same number of TCP and TTFWC flows, which varies from 1 to 64. The bottleneck bandwidth fixes 15 Mbps with a 20 ms delay and droptail queuing. The sender-side access links have 100 Mbps bandwidth with 1 ms delay. The receiver-side access links have 100 Mbps bandwidth with 0 - 2.6 ms delay. Fig. 5.6 - 5.8 show the throughput of TTFWC and TCP flows. The horizontal axis represents the number of TTFWC and TCP flows. The vertical axis represents the normalized throughput. If the TCP flow’s value and TTFWC flow’s value overwrap, it means that TCP flows and TTFWC flows achieve equal throughput. The bottom of each figure is the packet loss rate on each trial. From these result, TTFWC is TCP-Friendly regardless of the tuning parameter value, although TTFWC flows achieve a little lower throughput than TCP flows. I observed TTFWC flows with the larger tuning parameter achieve higher throughput than 24
[Figure 5.6: TTFWC(0.0) and TCP throughput; normalized throughput and loss rate (%) vs. number of TCP and TTFWC(0.0) flows, 15 Mb/s DropTail bottleneck]
[Figure 5.7: TTFWC(0.4) and TCP throughput; normalized throughput and loss rate (%) vs. number of TCP and TTFWC(0.4) flows, 15 Mb/s DropTail bottleneck]
[Figure 5.8: TTFWC(0.9) and TCP throughput; normalized throughput and loss rate (%) vs. number of TCP and TTFWC(0.9) flows, 15 Mb/s DropTail bottleneck]
I observe that TTFWC flows with the larger tuning parameter achieve higher throughput than those with the smaller tuning parameter (i.e., closer to the original TFWC). TTFWC(0.4) flows achieve approximately 2% higher throughput than TTFWC(0.0), i.e., the original TFWC. TTFWC(0.9) flows achieve approximately 10% higher throughput than TTFWC(0.0). Therefore, I conclude that TTFWC with a larger tuning parameter indeed achieves higher throughput than the original TFWC.

5.1.3 Smoothness

Finally, I evaluate TTFWC smoothness. I adopt the same dumbbell topology as in Section 5.1.2. The number of TTFWC and TCP flows is set to eight. I adopt this value to observe the steady-state behavior of both TCP and TTFWC flows while avoiding an unacceptable number of timeouts. Fig. 5.9 shows the throughput behavior of these eight TTFWC flows. The horizontal axis represents time, and the vertical axis represents throughput. The bottom graph shows the behavior of the TCP flows competing with TTFWC(0.9) flows, for comparison with the TTFWC behavior. I observe almost the same throughput behavior for TTFWC(0.0) and TTFWC(0.4), as the difference in the tuning parameter value is small. On the other hand, TTFWC(0.9) flows exhibit larger oscillation, as they respond to packet loss more aggressively. However, their behavior is still much smoother than that of the TCP flows (see the bottom graph of Fig. 5.9). From these results I conclude that TTFWC behaves as intended in its design: it achieves higher throughput with larger oscillation when the tuning parameter value is greater.
[Figure 5.9: Throughput behavior of TCP, TTFWC(0.0), TTFWC(0.4) and TTFWC(0.9); panels show Throughput (Kb/s) vs. Time (sec), with the TCP throughput oscillation in the bottom panel]
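The smoothness comparison above is visual. As a hedged aside, one common way to put a number on such oscillation, though not necessarily the metric intended in this thesis, is the coefficient of variation of the per-interval throughput samples:

```python
# Illustrative only: coefficient of variation (CoV) of throughput samples as a
# rough smoothness indicator. A lower CoV means a smoother sending rate. This is
# a common metric in the congestion-control literature, not one defined here.
from statistics import mean, pstdev

def coefficient_of_variation(samples_kbps):
    """samples_kbps: list of per-interval throughput samples for one flow."""
    avg = mean(samples_kbps)
    return pstdev(samples_kbps) / avg if avg > 0 else float("inf")

# Example: a smooth flow vs. a saw-tooth-like flow with a similar average rate.
smooth = [200, 205, 198, 202, 199]
sawtooth = [120, 280, 130, 270, 204]
print(coefficient_of_variation(smooth), coefficient_of_variation(sawtooth))
```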
[Figure 5.10: Experimental setup of all tests (labels not recoverable from the extracted text)]

5.2 Experimental Results

In this section, I evaluate TTFWC using the implementation. Fig. 5.10 shows the experimental setup. I use four machines: two sender nodes marked A and C, and two receiver nodes marked B and D. The senders share a bottleneck link that is regulated by dummynet. One of the sender nodes is equipped with an Intel Core 2 Duo 1.6 GHz CPU and 2 GB of memory. The other nodes are equipped with an Intel Core i5-2520M CPU and 2 GB of memory. Throughout all the experiments in this section, I set the bandwidth of the bottleneck link to 10 Mbps or 250 Mbps to emulate relatively low- and high-speed networks, respectively. I measure TTFWC and TCP throughput with iperf [2] in all experiments except those of subsection 5.2.4. The sender keeps sending 16 Kbyte blocks of data for 200 seconds, and iperf is configured to report five-second averages of the throughput. I observe processing time using the ftrace tool [1] in subsection 5.2.4.

5.2.1 Fairness

I analyze the protocol fairness between competing TTFWC flows from two viewpoints. First, I measure TTFWC throughput with different RTTs. When flows with different RTTs compete, more bandwidth is unfairly allocated to the flow with the smaller RTT, so a user with a longer RTT may not be able to obtain sufficient bandwidth under the congestion control.
Second, I measure TTFWC throughput on a network in which flows of the same protocol compete with the same RTT. In this test, I check whether the bandwidth is allocated fairly to each flow when all the flows on the link use the same congestion control. In these experiments, I inject three parallel flows from each machine and set their RTTs to 16 ms, 32 ms, 120 ms, 240 ms and 320 ms.

The fairness index is defined as follows. Let $x_i(k)$ be the average throughput of flow $i$ in the five-second period $k$. If the same composition of flows persists on the link during $k = 1, \ldots, m$ and changes at $k = m + 1$, I define $[1, m]$ as the maximum-length interval over a fixed link state. Suppose there are $n$ active flows in this interval, indexed by $i = 1, \ldots, n$, and let

\bar{x}_i = \frac{1}{m} \sum_{k=1}^{m} x_i(k)    (5.1)

Jain's fairness index [15] for the interval $[1, m]$ is then defined as

F = \frac{\left( \sum_{i=1}^{n} \bar{x}_i \right)^2}{n \sum_{i=1}^{n} \bar{x}_i^{\,2}}    (5.2)

In this case, F = 1 indicates the ideal condition for the network.
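As a concrete illustration of Equations 5.1 and 5.2, a minimal sketch of the fairness computation could look like the following; the per-flow sample lists are illustrative, not measured data from this thesis.

```python
# Minimal sketch of the fairness computation in Equations 5.1 and 5.2.
# samples[i] holds the five-second throughput averages x_i(k) of flow i over a
# maximum-length interval with a fixed set of flows; the values below are
# illustrative only.

def jain_fairness(samples):
    # Equation 5.1: per-flow mean throughput over the interval.
    means = [sum(flow) / len(flow) for flow in samples]
    # Equation 5.2: Jain's fairness index over the n per-flow means.
    n = len(means)
    return sum(means) ** 2 / (n * sum(x * x for x in means))

flows = [
    [3.1, 3.0, 3.2, 3.1],   # flow 1 (Mbps per five-second period)
    [2.9, 3.1, 3.0, 3.0],   # flow 2
    [1.6, 1.7, 1.5, 1.6],   # flow 3 (e.g., a longer-RTT flow)
]
print(jain_fairness(flows))  # 1.0 would indicate perfectly fair allocation
```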
Fairness with Different RTT

I measure the unfairness between competing flows with different RTTs. The scenario is as follows. Node A transmits three TTFWC flows to node B; the RTT between nodes A and B is fixed at 120 ms. Node C also transmits three TTFWC flows to node D. To measure the RTT unfairness of TTFWC, I vary the RTT between nodes C and D and observe the throughput of the flows transmitted from nodes A and C. I perform equivalent tests with RENO and CUBIC instead of TTFWC, and compare the fairness of each using Equation 5.2.

Fig. 5.11 plots the results of these experiments. The horizontal axis shows the RTT in the network and the vertical axis shows the fairness index. In principle, throughput is inversely proportional to RTT, and the results mostly support this across the range of TTFWC tuning parameter values. From the figures, we can see that in the low-speed network the fairness of TTFWC differs little across parameter values or from the other congestion controls; TTFWC thus remains fair in the low-speed network. In the high-speed network, TTFWC shows the same result. Interestingly, TTFWC(0.9) over short RTTs is noticeably fairer than RENO and CUBIC. This is because the TTFWC window behavior in responsive mode is aggressive towards idle bandwidth: TTFWC with a larger tuning parameter reacts to the idle bandwidth left by TTFWC flows in the flat cwnd period more quickly than TTFWC with a smaller tuning parameter value.

Fairness with the same protocol

In this subsection, I evaluate the fairness between active TTFWC flows. I run three TTFWC flows from each sender node over the 10 Mbps bottleneck link. The competing flows are configured with the same congestion control and the same RTTs. I then change the bandwidth of the bottleneck link to 250 Mbps and repeat the experiment. Fig. 5.12 plots the protocol fairness in the low- and high-speed networks. The horizontal axis and the vertical axis show the RTT in the network and the fairness index, respectively. In the low-speed network, TTFWC shows lower fairness than CUBIC or RENO when the propagation delay is long. I assume that when the two kinds of TTFWC window behavior are mixed on the link, the throughput difference between competing flows grows. On the other hand, when the propagation delay is short, the fairness of TTFWC is the same as that of the others. The same observation holds in the high-speed network as in the low-speed network. We do not observe a difference between tuning parameter values, and TTFWC keeps fairness only when the propagation delay is short.
[Figure 5.11: Fairness with six active flows: three flows with 120 ms delay and the other three with varying delays; fairness vs. RTT (msec), shown for 10 Mbps (top) and 250 Mbps (bottom) bottleneck bandwidth]
[Figure 5.12: Fairness of six active flows under symmetric conditions (same propagation delay, shared bottleneck link, same congestion control algorithm); fairness vs. RTT (msec), shown for 10 Mbps (top) and 250 Mbps (bottom) bottleneck bandwidth]
Therefore, I consider that the TTFWC design does not work well without competing RENO flows.

5.2.2 Throughput and TCP-Friendliness with different RTT

In this subsection, I describe TTFWC throughput and TCP-Friendliness. Unlike the simulations, I set three TTFWC flows and three RENO flows on the link with different RTTs. Fig. 5.13 - 5.15 show the results when the bandwidth of the bottleneck link is 250 Mbps, and Fig. 5.16 - 5.18 show the results when the bandwidth is 10 Mbps. The horizontal axis and the vertical axis represent RTT and normalized throughput, respectively. TTFWC throughput grows with the tuning parameter. Computing average throughput, TTFWC with a large tuning parameter value achieves about 4% higher throughput than the original TFWC in the low-speed network and about 10% higher throughput in the high-speed network. From the viewpoint of TCP-Friendliness, TTFWC remains fully TCP-Friendly in the low-speed network and acceptably TCP-Friendly in the high-speed network. Although TTFWC throughput is proportional to the tuning parameter value, the difference between TCP and TTFWC throughput is small. In the high-speed network, on the other hand, TTFWC with a large parameter value obtains higher throughput than TCP. But the longer the propagation delay of the network, the smaller the observed difference between TCP and TTFWC throughput. Detailed observations and considerations follow. Fig. 5.13 - 5.15 show that TTFWC reacts to the latency of the link in the high-speed network. In the high-speed network with short delay, TTFWC(0.9) is allocated more bandwidth than TCP. I believe this is the influence of temporary idle bandwidth generated by TCP: in the high-speed network, the saw-tooth shaped throughput behavior leaves idle bandwidth between competing flows. As the TTFWC window behavior does not leave idle bandwidth, this causes the throughput difference between TCP and TTFWC flows.
[Figure 5.13: TTFWC(0.0) and TCP throughput; normalized throughput vs. RTT (Bottleneck Link 250 Mbps)]
[Figure 5.14: TTFWC(0.4) and TCP throughput; normalized throughput vs. RTT (Bottleneck Link 250 Mbps)]

The same mechanism explains the result in the low-speed network: when the available bandwidth is limited, TCP's saw-tooth shaped throughput behavior leaves very little idle bandwidth. Therefore, as Fig. 5.16 - 5.18 show, TTFWC throughput is essentially the same as TCP's.

5.2.3 Protocol Friendliness and Throughput Performance

In this subsection, I evaluate the protocol friendliness and throughput performance when the network is shared by flows of the same protocol. I reuse the experiment data of subsection 5.2.1.
[Figure 5.15: TTFWC(0.9) and TCP throughput; normalized throughput vs. RTT (Bottleneck Link 250 Mbps)]
[Figure 5.16: TTFWC(0.0) and TCP throughput; normalized throughput vs. RTT (Bottleneck Link 10 Mbps)]
[Figure 5.17: TTFWC(0.4) and TCP throughput; normalized throughput vs. RTT (Bottleneck Link 10 Mbps)]
[Figure 5.18: TTFWC(0.9) and TCP throughput; normalized throughput vs. RTT (Bottleneck Link 10 Mbps)]
[Figure 5.19: Normalized Throughput (bottleneck link 250 Mbps); panels for 16, 32, 120, 240 and 320 msec RTT, comparing RENO, CUBIC, TTFWC(0.0), TTFWC(0.4) and TTFWC(0.9)]

Fig. 5.19 and 5.20 plot the normalized throughput of each protocol's flows with different RTTs. From the figures, we find that TTFWC throughput is not proportional to the parameter value on a symmetric network. Considering the protocol fairness results, I conclude that the throughput is not shared fairly among competing TTFWC flows. We do observe, however, that TTFWC stays friendly to its own protocol when the propagation delay is short, and in that case it achieves higher throughput than RENO. Overall, the protocol friendliness of TTFWC across the different parameter values is not good. We can also see from the figures that TTFWC is affected by the propagation delay, whereas CUBIC keeps relatively stable throughput across delays. As mentioned above, I leave protocol friendliness and throughput in this setting as future work.
[Figure 5.20: Normalized Throughput (bottleneck link 10 Mbps); panels for 16, 32, 120, 240 and 320 msec RTT, comparing RENO, CUBIC, TTFWC(0.0), TTFWC(0.4) and TTFWC(0.9)]

5.2.4 CPU Processing Time

I measure the processing time of the congestion avoidance procedure and of the ssthresh recalculation with ftrace. The ftrace tool traces procedures inside the kernel, so it can be used from user space for debugging or for analyzing latencies and performance issues. For the analysis, I measure the congestion avoidance procedure over 200,000 calls and the ssthresh procedure over more than 1,500 calls. My question is: "How long does it take to process TTFWC in comparison with other congestion control algorithms? Is TTFWC impractical in practice?" As TTFWC calculates cwnd from the loss event history, it unavoidably takes more processing time than RENO. I therefore compare the TTFWC processing time with RENO and CUBIC in this subsection. Table 5.1 shows that TTFWC takes about nine times more processing time than RENO; however, it takes about as long as CUBIC. Recall the notes about the inaccurate ALI calculation in Section 4.1.
Table 5.1: CPU processing time for the congestion avoidance procedure

  Protocol      Average Time   Standard Deviation
  TTFWC(0.0)    1.770 µs       0.271 µs
  TTFWC(0.4)    1.951 µs       0.380 µs
  TTFWC(0.9)    1.975 µs       0.400 µs
  CUBIC         1.771 µs       1.069 µs
  RENO          0.240 µs       0.015 µs

When the accurate ALI calculation is used in the congestion avoidance phase, I find that it adds about 0.78 µs to the current processing time of TTFWC. Although TTFWC does not call the ALI calculation while the number of packets is below the estimated next loss interval, this procedure is still a burden for a machine with limited resources. In order to reduce the processing time, I adopt the inaccurate ALI calculation. As a result of this modification, the TTFWC processing time is reasonable, because optional congestion control modules, including CUBIC, cannot avoid somewhat higher overhead than RENO. In addition, since CUBIC is the default congestion control in Linux, the comparison of TTFWC's processing time against RENO is not so important. I find that there is only a small difference in processing time between different values of the tuning parameter. The difference corresponds to the processing time of the RENO cwnd estimation and of the additional cwnd procedure. Also, I observe that the standard deviation of the TTFWC processing time grows with the tuning parameter value; this reflects the conditional branch that adds the base cwnd. Table 5.2 shows the processing time to calculate ssthresh for each protocol. In this procedure, the TTFWC sender updates the loss histories, then calculates the next window size and the additional cwnd. Therefore, given the higher overhead of TTFWC, I compare how much its processing time differs from the others.
Table 5.2: CPU processing time for the loss recovery procedure

  Protocol      Average Time   Standard Deviation
  TTFWC(0.0)    2.200 µs       0.052 µs
  TTFWC(0.4)    2.258 µs       0.508 µs
  TTFWC(0.9)    2.159 µs       0.439 µs
  CUBIC         0.189 µs       0.00 µs
  RENO          0.225 µs       0.00 µs

I observe that the TTFWC procedure takes twice as long as CUBIC or RENO, and that the standard deviation of the TTFWC processing time varies with the parameter value, as the tables show. Now, I return to the first question. Although the answer is not definitive, I believe that the total processing time of TTFWC is not substantially different from the others. For example, CUBIC has two further procedures: one controls its parameters upon receiving ACKs and the other runs when the TCP congestion state changes. The average processing time of the former is 1.307 µs and that of the latter is 0.100 µs. In addition, the former procedure is called more frequently than the ssthresh recalculation. When I consider the total processing time of these procedures, CUBIC is about the same as TTFWC.
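The per-call statistics in Tables 5.1 and 5.2 can in principle be reproduced by aggregating ftrace output. The sketch below assumes the function_graph tracer's textual format and a placeholder function name, both of which are assumptions rather than details given in this thesis; reading the trace file also typically requires debugfs to be mounted and root privileges.

```python
# Hypothetical post-processing sketch for per-call latency numbers like those in
# Tables 5.1 and 5.2. It assumes ftrace's function_graph tracer output, where
# completed leaf calls appear as lines such as:
#   " 1)   1.234 us   |  some_cong_avoid_function();"
# "ttfwc_cong_avoid" below is a placeholder, not the actual TTFWC symbol name.
import re
from statistics import mean, pstdev

DURATION = re.compile(r"([\d.]+)\s*us\s*\|\s*(\S+)")

def per_call_durations(trace_path, func_name):
    """Collect the duration (in microseconds) of every completed call to func_name."""
    durations = []
    with open(trace_path) as trace:
        for line in trace:
            m = DURATION.search(line)
            if m and func_name in m.group(2):
                durations.append(float(m.group(1)))
    return durations

calls = per_call_durations("/sys/kernel/debug/tracing/trace", "ttfwc_cong_avoid")
if calls:
    print(f"{len(calls)} calls: avg {mean(calls):.3f} us, stddev {pstdev(calls):.3f} us")
```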
Chapter 6
Conclusion and Future Work

In this thesis, I have proposed and examined TTFWC for multimedia streaming, which allows us to tune the balance between smoothness and the sending rate. The motivation is to improve TFWC throughput for throughput-sensitive multimedia services. Since TTFWC throughput is controlled by a tuning parameter, one can choose whether or not to use smooth congestion control depending on the application type. Through the simulation results and real-world experiments, I find that TTFWC with a large tuning parameter value keeps a high sending rate even under frequent bandwidth changes. This is a significant contribution, because such services have become common with the deployment of Content Delivery Networks (CDNs).

In future work, I plan to address three issues: the first is the dynamic adaptation of the tuning parameter, the second is the convergence between competing TTFWC flows, and the third is adapting TTFWC for use in high-speed networks. The first issue is difficult because the number of frames is not predictable, especially for video applications. Even if the number of packets stored in the queue is known, the data rate generated by encoded video varies second by second, making flexible selection of the tuning parameter hard. I will study the internal structure of multimedia applications and consider approaches for dynamic control.
The second issue is a limitation of the TTFWC design. When throughput-sensitive TTFWC is compared to the original TFWC, the modification of TFWC towards higher throughput affects protocol friendliness and the throughput sharing between competing flows. The third issue is a significant concern for future congestion control algorithms. A limitation of my proposal is that it is a formula-based congestion control algorithm, which means that TTFWC throughput does not exceed RENO throughput. Today, it is important to provide not only TCP-Friendliness but also high throughput with fast convergence in high-speed networks. For example, CUBIC is used as the Linux default congestion control algorithm despite not being TCP-Friendly over long, fat pipes. One message of this research is the requirement to satisfy both stability and high transmission rates. Thus, I will consider what approaches are suitable for multimedia streaming in high-speed networks.
Acknowledgment

I deeply appreciate the many people and the great support around me. I would like to acknowledge them below.

I express my appreciation to Hideyuki Tokuda for his professional advice, guidance and encouragement. Thanks to his support and kindness, I was able to take on the challenge of writing this thesis and got chances to join the international community. I am very grateful to Rodney D. Van Meter; thanks to his great help and kindness, I was able to improve my thesis. I would like to thank the professors, associate professors and assistant professors at Keio University: Jun Murai, Osamu Nakamura, Hiroyuki Kusumoto, Kazunori Takashio, Keisuke Uehara, Jin Mitsugi, Keishi Takeda and Jin Nakazawa. Without meeting them and receiving their support, I would not be who I am. Their classes and their personal character drew my interest, and that is why I study computer science now. I also thank Shoko Miyagawa. She took me to the Grace Hopper Celebration of Women in Computing. Through the conference, I made friends and communicated with great students from all over the world. These experiences expanded my world and gave me fresh inspiration. I am very grateful to my advisor, Michio Honda. Although it took much time to pick the topic and to shape the proposal, he always pointed out my errors and kept my motivation high. Without his support, I would not have had my paper accepted or had chances to attend international conferences. He is a good friend of mine as well as a good advisor; I sometimes consulted him about my private concerns and my life. Thank you for supporting me beyond the role of an advisor.
I also gratefully acknowledge Hideaki Yoshifuji. When I was at a loss over a kernel bug, he taught me what knowledge I needed, and his guidance and professional advice also helped me evaluate my implementation. I appreciate all the members of my laboratory and my friends, too. Without them, I would not have been able to build my skills and overcome the hardships. Of course, I am thankful to my family as well. Although it takes more than two hours from home to SFC, my family keeps an eye on my personal life. My mother has to wake up early when I have a first-period class. My father often stays up until I come back home safely. My grandparents always encouraged me when I was disappointed by the limitations of my skills. My younger sister and younger brother always make me laugh and revive my heart, and they never complain about my lifestyle. Thanks to their support, I have had a happy school life.

January 21, 2012
Midori Kato
Bibliography

[1] ftrace. http://lwn.net/Articles/370423/.
[2] iperf. http://iperf.sourceforge.net/.
[3] A. Lahanas and V. Tsaoussidis. Exploiting the efficiency and fairness potential of AIMD-based congestion avoidance and control. Computer Networks, Volume 43, Issue 2, October 2003.
[4] D. Bansal, H. Balakrishnan, S. Floyd, and S. Shenker. Dynamic Behavior of Slowly-Responsive Congestion Control Algorithms. In Proc. ACM SIGCOMM, 2001.
[5] S. Choi. On studying TFWC protocol smoothness. April 2008.
[6] S. Choi and M. Handley. Designing TCP-Friendly window-based congestion control for real-time multimedia applications. In Proc. PFLDNeT, May 2009.
[7] K. Fall and S. Floyd. Simulation-based comparisons of Tahoe, Reno and SACK TCP. SIGCOMM, July 1996.
[8] J. Feng and L. Xu. TCP-Friendly CBR-Like Rate Control. In Proc. IEEE ICNP, October 2008.
[9] S. Floyd. Congestion Control Principles. RFC 2914, September 2000.
[10] S. Floyd, M. Handley, J. Padhye, and J. Widmer. Equation-based congestion control for unicast applications. ACM Computer Communication Review, 30(4):43–56, October 2000.
[11] S. Floyd and T. Henderson. The NewReno Modification to TCP's Fast Recovery Algorithm. RFC 2582, April 1999.
[12] S. Floyd and A. Romanow. TCP Selective Acknowledgment Options. RFC 2018, October 1996.
[13] I. Rhee, V. Ozdemir, and Y. Yi. TEAR: TCP Emulation At Receivers: flow control for multimedia streaming. Technical Report, Department of Computer Science, 2000.
[14] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling TCP throughput: a simple model and its empirical validation. August 1998.
[15] R. Jain. The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation and Modeling. John Wiley and Sons Inc., 1991.
[16] K. Chen, C. Huang, P. Huang, and C. Lei. Quantifying Skype user satisfaction. In ACM SIGCOMM, September 2006.
[17] A. Medina, M. Allman, and S. Floyd. Measuring the Evolution of Transport Protocols in the Internet. ACM CCR, 35(2):37–52, 2005.
[18] The network simulator - ns-2. http://www.isi.edu/nsnam/ns/.
[19] S. Jin, L. Guo, I. Matta, and A. Bestavros. A spectrum of TCP-Friendly window-based congestion control algorithms. IEEE/ACM Transactions on Networking, June 2003.
[20] R. Stewart. Stream Control Transmission Protocol. RFC 4960, September 2007.
[21] E. Vieira and M. Bauer. Round-trip Time Variation in SmoothTCP in the Face of Spurious Errors. Journal of Communications, 35(2):48–56, 2006.
[22] Y. Yang and S. Lam. General AIMD congestion control. In Proc. IEEE ICNP, November 2000.
Published Papers

Related to this thesis
• M. Kato, M. Honda and H. Tokuda, "Extending TFWC towards Higher Throughput", IEEE Consumer Communications and Networking Conference (CCNC), Jan 2012, Las Vegas, U.S.A.

Posters and Demos
• M. Kato, M. Honda and H. Tokuda, "A practical congestion control algorithm for real-time multimedia streaming", Grace Hopper Celebration (GHC), Nov 2011, Portland, U.S.A.
• M. Kato, M. Honda and H. Tokuda, "Implementation of congestion control for multimedia streaming", IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), Aug 2011, Toyama, Japan.