multiple sessions may reduce the time required to fail over.
However, as shown in the experimental results in Section 5,
current VPN switching is fast enough to hide VPN failures
from OSs and maintain TCP connections.
4.2. Packet Relay System
Our packet relay system implementation performs two operations: IP encapsulation and NAT. IP encapsulation allows servers to identify clients and hides IP address changes from the guest OS. NAT hides IP address changes from servers. Figure 6 shows these operations when an IP packet is sent from a client to a server.
The packet relay system uses a simple IP header, as in IP over IP (IPIP) [5], to encapsulate IP packets. When a packet is sent from a client to a server, the outer IP header has the relay client IP as the source and the relay server IP as the destination (see Figure 6). The relay client IP is assigned by VPN gateways and differs from the IP address assigned to the guest OS (the guest IP). We currently assume that the guest IP is unique within a cloud and can therefore be used as a client ID. When there are many clients, we can append a large (e.g., 128-bit) unique identifier after the outer IP header.
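As a concrete illustration of this encapsulation, the following minimal sketch prepends an outer IPv4 header whose source is the relay client IP and whose destination is the relay server IP. The header layout follows RFC 791, but the structure and function names (ipv4_hdr, ip_checksum, encapsulate) are illustrative and not taken from the BitVisor module.

/* Minimal sketch of the IP-in-IP encapsulation described above.
 * Names and layout are illustrative, not taken from the BitVisor module. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <arpa/inet.h>

struct ipv4_hdr {
    uint8_t  ver_ihl;      /* version (4) and header length in 32-bit words */
    uint8_t  tos;
    uint16_t total_len;
    uint16_t id;
    uint16_t frag_off;
    uint8_t  ttl;
    uint8_t  protocol;     /* 4 = IP-in-IP */
    uint16_t checksum;
    uint32_t saddr;        /* relay client IP (assigned by the VPN gateway) */
    uint32_t daddr;        /* relay server IP */
} __attribute__((packed));

/* Standard one's-complement sum over the header (RFC 791). */
static uint16_t ip_checksum(const void *hdr, size_t len)
{
    const uint16_t *p = hdr;
    uint32_t sum = 0;
    for (; len > 1; len -= 2)
        sum += *p++;
    if (len)
        sum += *(const uint8_t *)p;
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Prepend the outer header: source = relay client IP, destination = relay
 * server IP.  The inner packet, whose source is the fixed guest IP, is
 * carried unchanged, so the guest OS never observes the address change. */
size_t encapsulate(uint8_t *out, const uint8_t *inner, size_t inner_len,
                   uint32_t relay_client_ip, uint32_t relay_server_ip)
{
    struct ipv4_hdr outer;
    memset(&outer, 0, sizeof(outer));
    outer.ver_ihl   = 0x45;                       /* IPv4, 20-byte header */
    outer.total_len = htons((uint16_t)(sizeof(outer) + inner_len));
    outer.ttl       = 64;
    outer.protocol  = 4;                          /* IPPROTO_IPIP */
    outer.saddr     = relay_client_ip;
    outer.daddr     = relay_server_ip;
    outer.checksum  = ip_checksum(&outer, sizeof(outer));

    memcpy(out, &outer, sizeof(outer));
    memcpy(out + sizeof(outer), inner, inner_len);
    return sizeof(outer) + inner_len;
}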
The relay server manages the mapping between guest and gateway IPs. In Figure 6, it records the relationship between "Guest IP" and "VPN GW IP", allowing the relay server to determine the appropriate gateway for each client. The relay server updates this mapping table every time it receives an IP packet from a client.
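A rough sketch of this bookkeeping is shown below: a guest-IP-to-gateway-IP table that is refreshed on every packet received from a client and consulted when forwarding replies. The fixed-size array and the names map_update and map_lookup are hypothetical simplifications, not the actual relay server data structure.

/* Sketch of the relay server's guest-IP -> gateway-IP mapping table.
 * A fixed-size array with linear search; purely illustrative. */
#include <stdint.h>
#include <stddef.h>

#define MAP_SIZE 1024

struct map_entry {
    uint32_t guest_ip;   /* client ID: IP address assigned to the guest OS */
    uint32_t gw_ip;      /* relay client IP currently assigned by a VPN GW */
    int      valid;
};

static struct map_entry relay_map[MAP_SIZE];

/* Called for every packet received from a client: (re)record which
 * gateway-side address the guest is currently reachable through. */
void map_update(uint32_t guest_ip, uint32_t gw_ip)
{
    size_t i, free_slot = MAP_SIZE;
    for (i = 0; i < MAP_SIZE; i++) {
        if (relay_map[i].valid && relay_map[i].guest_ip == guest_ip) {
            relay_map[i].gw_ip = gw_ip;   /* refresh after a VPN switch */
            return;
        }
        if (!relay_map[i].valid && free_slot == MAP_SIZE)
            free_slot = i;
    }
    if (free_slot < MAP_SIZE) {
        relay_map[free_slot].guest_ip = guest_ip;
        relay_map[free_slot].gw_ip    = gw_ip;
        relay_map[free_slot].valid    = 1;
    }
}

/* Called when forwarding a reply back to a client: find the gateway
 * address recorded for this guest.  Returns 0 if the guest is unknown. */
uint32_t map_lookup(uint32_t guest_ip)
{
    size_t i;
    for (i = 0; i < MAP_SIZE; i++)
        if (relay_map[i].valid && relay_map[i].guest_ip == guest_ip)
            return relay_map[i].gw_ip;
    return 0;
}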
The relay server also translates the source address of the original IP header to its own IP address. This allows the server to keep communicating with the same IP address even when the relay client IP changes. The relay server also recalculates the IP header checksum. When the server sends an IP packet back to the client, the packet follows the reverse path.
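The text does not specify how the checksum is recalculated; one standard option, sketched below, is the incremental update of RFC 1624 applied when the source address is rewritten (a full recomputation over the 20-byte header is equally valid). The function names and offsets here are illustrative.

/* Sketch of the NAT step: rewrite the inner packet's source address to the
 * relay server's own address and patch the IP header checksum using the
 * RFC 1624 incremental update, HC' = ~(~HC + ~m + m'). */
#include <stdint.h>
#include <string.h>

static uint16_t csum_replace16(uint16_t check, uint16_t old, uint16_t new_)
{
    uint32_t sum = (uint16_t)~check + (uint16_t)~old + new_;
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Rewrite the source address of a raw IPv4 header (network byte order) to
 * the relay server's address and update the header checksum in place.
 * Offsets: checksum at bytes 10-11, source address at bytes 12-15. */
void nat_rewrite_source(uint8_t *iphdr, uint32_t new_saddr_be)
{
    uint16_t check, old_hi, old_lo, new_hi, new_lo;

    memcpy(&check,  iphdr + 10, 2);
    memcpy(&old_hi, iphdr + 12, 2);
    memcpy(&old_lo, iphdr + 14, 2);
    memcpy(&new_hi, &new_saddr_be, 2);
    memcpy(&new_lo, (const uint8_t *)&new_saddr_be + 2, 2);

    check = csum_replace16(check, old_hi, new_hi);
    check = csum_replace16(check, old_lo, new_lo);

    memcpy(iphdr + 12, &new_saddr_be, 4);   /* server now sees this address */
    memcpy(iphdr + 10, &check, 2);
    /* TCP/UDP checksums cover a pseudo-header that includes the source
     * address, so they need the same kind of update. */
}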
The packet relay client is implemented as a module running in BitVisor, and the packet relay server is implemented as a user-level process running on the server.
5. Experiments
This section presents the experimental evaluation of our scheme. We first show the results of measuring the failover time, and then the performance overhead of our virtualization layer.
5.1. Setup
We conducted the experiments in a wide-area distributed Internet environment in Japan. We placed a client at Tsukuba and connected it to VPN gateways in data centers in Tokyo, Yokohama, and Fukuoka.
Figure 7. Transition of Latency to Data Centers (latency [msec] to Tokyo, Yokohama, and Fukuoka over elapsed time [sec]).
Figure 8. Throughput Transition over Failure (VPN throughput [Mbit/sec] over elapsed time [sec]; failure occurred point at 15.1 s, failure recovered point at 19.2 s).
The straight-line distances from Tsukuba are approximately 56 km to Tokyo, 84 km to Yokohama, and 926 km to Fukuoka. These data centers are connected via leased lines with a maximum speed of 100 Mbps. The leased lines are actually implemented with yet another VPN provided by an ISP.
The client machine at Tsukuba was a PC equipped with an Intel Core 2 Duo E8600 (3.33 GHz), 4 GB of PC2-6400 memory, and a 40 GB Intel X25-V SSD. The base VMM was BitVisor 1.1, available from its SourceForge site, and the guest OS was Windows XP. The server machines were HP ProLiant BL280c blade servers, each equipped with a Xeon E5502 (1.86 GHz), 2 GB of memory, and a 120 GB 5400 rpm 2.5-inch HDD. We used Kernel-based Virtual Machine (KVM) and CentOS 5.4 for both the guest and host OSs. The cloud server and the packet relay server each ran on a virtual machine.
5.2. Failover Time
The VPN failover time consists of the time to (1) detect a VPN failure, (2) switch VPN gateways, and (3) restart TCP transmission. In our scheme, the time to detect a VPN failure is expected to be (n + 1) × RTO, where RTO is calculated by Jacobson's algorithm and n is the number of retries. To verify the estimated RTO, we first measured the transition of the network latency to each data center. Figure 7 shows the results. The latency to both Tokyo and Yokohama was around 15 msec, and the latency to Fukuoka was around 35 msec. With these latencies, the estimated RTO for Tokyo was about 1 s.
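As a reference for how this RTO is obtained, the following sketch implements Jacobson's estimator with the standard constants later codified in RFC 6298 (alpha = 1/8, beta = 1/4, and a 1-second lower bound); the constants and the retry count n are assumptions, not values taken from the implementation. With round-trip times of roughly 15 msec the computed RTO is dominated by the 1-second floor, which is consistent with the roughly 1 s estimate for Tokyo.

/* Jacobson-style RTO estimation (RFC 6298) and the failure-detection
 * time (n + 1) x RTO used in the scheme.  Parameter values follow the
 * RFC; they are not taken from the BitVisor implementation. */
#include <stdio.h>

#define ALPHA   0.125   /* gain for the smoothed RTT            */
#define BETA    0.25    /* gain for the RTT variance            */
#define MIN_RTO 1.0     /* RFC 6298 recommends a 1-second floor */

struct rto_state {
    double srtt;        /* smoothed round-trip time [s]         */
    double rttvar;      /* round-trip time variation [s]        */
    int    initialized;
};

static double update_rto(struct rto_state *s, double rtt_sample)
{
    if (!s->initialized) {
        s->srtt   = rtt_sample;
        s->rttvar = rtt_sample / 2.0;
        s->initialized = 1;
    } else {
        double err = s->srtt - rtt_sample;
        if (err < 0)
            err = -err;
        s->rttvar = (1.0 - BETA) * s->rttvar + BETA * err;
        s->srtt   = (1.0 - ALPHA) * s->srtt + ALPHA * rtt_sample;
    }
    double rto = s->srtt + 4.0 * s->rttvar;
    return rto < MIN_RTO ? MIN_RTO : rto;
}

int main(void)
{
    struct rto_state s = {0};
    double rto = 0.0;
    int i, n = 2;                         /* hypothetical retry count  */

    for (i = 0; i < 10; i++)              /* ~15 ms RTT, as measured   */
        rto = update_rto(&s, 0.015);      /* to Tokyo and Yokohama     */

    /* Expected time to detect a VPN failure: (n + 1) x RTO. */
    printf("RTO = %.3f s, detection time = %.3f s\n", rto, (n + 1) * rto);
    return 0;
}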
We then measured the failover time. We intentionally
caused a VPN failure and measured the transition of TCP