Shibata, N., Yasumoto, K., and Mori, M.: P2P Video Broadcast based on Per-Peer Transcoding and its Evaluation on PlanetLab, Proc. of 19th IASTED Int'l. Conf. on Parallel and Distributed Computing and Systems (PDCS2007), (November 2007).
http://ito-lab.naist.jp/themes/pdffiles/071121.shibata.pdcs2007.pdf
(Slides) P2P video broadcast based on per-peer transcoding and its evaluation on PlanetLab
1. 1
P2P video broadcast based on per-peer transcoding
and its evaluation on PlanetLab
Naoki Shibata, †Keiichi Yasumoto, Masaaki Mori
Shiga University, †Nara Institute of Science and Technology
2. 2
Motivation
Watching TV on various devices
Screen resolution of a mobile phone: from 96×64
Screen resolution of a plasma TV: up to 1920×1080
A video delivery method must handle a wide variety of devices, which differ in:
Screen resolution
Computing power
Available bandwidth to Internet
Popularization of P2P video delivery
Joost
Zattoo
3. 3
Overview of this presentation
Improvement to our previously proposed video
delivery method named MTcast
Features of (previous) MTcast
Video delivery method based on P2P video streaming
Serves requests with different video qualities
Scalable with the number of users
New improvement
Reduced backbone bandwidth for further scalability
Evaluation of performance on PlanetLab
Implemented in Java
Evaluated in the PlanetLab environment
5. 5
Multiversion method
Minimum delay
No need of transcoding
Low user satisfaction : # of served video qualities = # of versions
High network load
G. Conklin, G. Greenbaum, K. Lillevold and A. Lippman: "Video Coding for Streaming Media Delivery on the Internet," IEEE Transactions on Circuits and Systems for Video Technology, 11(3), 2001.
[Figure: the server holds pre-encoded versions at several bit rates (300k, 500k, …); a node requesting 200k is delivered the 300k version, a node requesting 400k the 500k version]
6. 6
Online transcoding method
S. Jacobs and A. Eleftheriadis: "Streaming Video using Dynamic Rate Shaping and TCP Flow Control," Visual Communication and Image Representation Journal, 1998.
Higher user satisfaction
Additional cost for proxies
# of qualities is restricted by capacity of proxies
[Figure: the server streams 1000k video to proxies, which transcode it to 300k, 500k, and 700k for clients]
7. 7
Layered multicast method
Low computation load on server
User satisfaction depends on the # of layers
The # of layers is limited: decoding many layers requires high CPU usage
J. Liu, B. Li and Y.-Q. Zhang: "An End-to-End Adaptation Protocol for Layered Video Multicast Using Optimal Rate Allocation," IEEE Transactions on Multimedia, 2004.
[Figure: a 200k base layer plus enhancement layers (+300k, +500k, …); receivers subscribe to 200k, 200k+300k, or 200k+300k+500k depending on capacity]
9. 9
Service provided by MTcast
Network environment
Wide area network across multiple domains
Number of users
500 to 100,000
Kind of contents
Simultaneous broadcast of video (same as TV broadcast)
A new user can join and receive the video from the scene currently being broadcast
Kind of request by users
Each user can specify the bit rate of the video
We assume resolution and frame rate are determined by the bit rate
11. 11
Building transcode tree
[Figure: user nodes with bit-rate requests of 2000k, 800k, 300k, 1200k, 1500k, … are sorted into descending order: 2000k, 2000k, 1920k, 1850k, 1830k, …]
Transcode tree is video delivery tree
12. 12
Building transcode tree
[Figure: sorted requests: 2000k, 2000k, 1920k, 1850k, 1830k, 1800k, 1800k, 1780k, …]
• Make groups of k user nodes from the top
• Each group is called a layer
• The minimum bit rate requested within a layer is the rate actually delivered to that layer
• k is a constant value decided for each video broadcast
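The layering step can be sketched in Python (a minimal illustration; the function and field names are mine, not taken from the MTcast implementation):

```python
def make_layers(requests, k):
    """Group bit-rate requests (kbps) into layers of k nodes each.

    Requests are sorted in descending order; each layer is delivered
    the minimum bit rate requested within it, so no node receives
    a higher rate than it asked for.
    """
    ordered = sorted(requests, reverse=True)
    layers = []
    for i in range(0, len(ordered), k):
        group = ordered[i:i + k]
        layers.append({"requests": group, "delivered": min(group)})
    return layers

# The requests from the slide, grouped with k = 4:
layers = make_layers([2000, 2000, 1920, 1850, 1830, 1800, 1800, 1780], k=4)
# → the first layer is delivered 1850k, the second 1780k
```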
13. 13
Building transcode tree
• Place each layer at a node position of a binary tree
• In depth-first-search order
• The result is a modified binary tree
[Figure: the video server feeds the 2000k root layer; its children are 1800k and 1100k; 1500k and 1300k hang under 1800k; 900k and 700k under 1100k]
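The depth-first placement can be sketched as follows (illustrative code, assuming "depth-first-search order" means preorder, which reproduces the tree shown on the slide):

```python
class Node:
    """One layer's position in the transcode tree."""
    def __init__(self):
        self.bitrate = None
        self.left = None
        self.right = None

def complete_tree(n):
    """Build a heap-shaped (complete) binary tree with n positions."""
    nodes = [Node() for _ in range(n)]
    for i in range(n):
        if 2 * i + 1 < n:
            nodes[i].left = nodes[2 * i + 1]
        if 2 * i + 2 < n:
            nodes[i].right = nodes[2 * i + 2]
    return nodes[0]

def assign_preorder(root, rates):
    """Assign layer bit rates (sorted descending) to tree positions in
    preorder, so every parent carries a rate no lower than its children
    and each node has to transcode the stream down at most once."""
    it = iter(rates)
    def visit(node):
        if node is None:
            return
        node.bitrate = next(it)
        visit(node.left)
        visit(node.right)
    visit(root)
    return root

# The seven layers from the slide:
root = assign_preorder(complete_tree(7), [2000, 1800, 1500, 1300, 1100, 900, 700])
# → root: 2000k; children: 1800k and 1100k; leaves: 1500k, 1300k, 900k, 700k
```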
14. 14
Advantages of building tree in this manner
Videos in many qualities can be served
Number of qualities = Number of layers
Each node is required to perform only one transcoding
Length of video delivery delay is O(log(# of nodes))
Tolerant to node failures
15. 15
Recovery from node failure
No increase in the number of transcoding operations on any node
• The degree of failure tolerance depends on:
• The number of nodes in each layer
A layer with many nodes tolerates more failures
• The available bandwidth on each node
• Buffered video data is played back during recovery
• Users never notice node failures
16. 17
Extension for real world usage
Each link is an overlay link
Traffic may go back and forth between ASs many times
Precious inter-AS bandwidth is consumed
Nodes in service provider A
Nodes in service provider B
Nodes in service provider C
Idea of extension
Nodes in the same AS are connected preferentially
Connection priority is decided by hop count and available bandwidth
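A sketch of the connection-priority rule (the dictionary fields are illustrative; the exact metric combining hop count and bandwidth is not specified on the slide):

```python
def rank_candidates(my_as, candidates):
    """Order candidate parent peers: peers in the same AS first,
    then fewer physical hops, then more available bandwidth."""
    return sorted(
        candidates,
        key=lambda c: (c["as"] != my_as, c["hops"], -c["bandwidth"]),
    )

peers = [
    {"as": "B", "hops": 5, "bandwidth": 2000},
    {"as": "A", "hops": 3, "bandwidth": 500},
    {"as": "A", "hops": 2, "bandwidth": 800},
]
best = rank_candidates("A", peers)[0]
# → the same-AS peer two hops away is tried first
```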
18. 19
Design policy
Usable on PlanetLab
Usable in many similar projects
Easily modifiable
Good performance, if possible
Why not use JMF?
It’s not maintained
Huge buffering delay
19. 20
Modular design
We designed many classes for video delivery
Transcoder
Transmitter and receiver to/from network
Buffer
etc.
Each node is a set of instances of these classes
Each node instantiates these classes and connects the instances according to commands from a central server
The behavior of each node can be changed flexibly by changing the commands from the central server
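The wiring could look like this sketch (class names follow the slide; the data flow is purely illustrative, not the actual Java implementation):

```python
class Component:
    """Base class for pipeline components; a central server would
    decide which instances to create and how to connect them."""
    def __init__(self):
        self.downstream = []
    def connect(self, other):
        self.downstream.append(other)
        return other          # allow chained connect() calls
    def push(self, data):
        for c in self.downstream:
            c.receive(data)
    def receive(self, data):  # default behavior: pass data through
        self.push(data)

class Receiver(Component):
    pass

class Buffer(Component):
    def __init__(self):
        super().__init__()
        self.queue = []       # retained data can be played back during recovery
    def receive(self, data):
        self.queue.append(data)
        self.push(data)

class Transcoder(Component):
    def __init__(self, rate):
        super().__init__()
        self.rate = rate
    def receive(self, data):
        self.push((data, self.rate))  # stand-in for real transcoding

# Wire receiver -> buffer -> transcoder, as a server command might direct:
recv, buf, tx = Receiver(), Buffer(), Transcoder(500)
recv.connect(buf).connect(tx)
recv.push("frame-1")
```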
21. 23
Results of evaluation published in [9]
Computation load of transcoding
Measured computation load when video playback and
transcoding are simultaneously executed
Measured on desktop PC, notebook PC and PDA
Result : All processing can be performed in real time
Computation load of making transcode tree
1.5 secs of computation on Pentium 4 2.4GHz
Time complexity: O( n log n )
Network load: practical if the node computing the transcode tree has enough bandwidth
User satisfaction
Satisfaction degree is defined as in [3]
Generated a 6000-node network using Inet 3.0
Satisfaction with our method was at least 6% higher than with the layered multicast method
Satisfaction improves as the number of nodes increases
22. 24
Video quality degradation by transcoding
Video quality may degrade when a video is transcoded multiple times
We measured the PSNR of video transcoded with our method
We compared:
A video transcoded only once
A video transcoded multiple times
23. 25
Effectiveness of bandwidth reduction(1/2)
Compared physical hop count in transcode tree
By our method
By randomly selecting node to connect
Comparison by simulation
Number of user nodes: 1000
333 nodes have bandwidth between 100 and 500 kbps
333 nodes have bandwidth between 2 and 5 Mbps
334 nodes have bandwidth between 10 and 20 Mbps
Result of simulation
Hop count with the random method: 4088
Hop count with our method: 3121
About a 24% reduction in hop count with our method
24. 26
Effectiveness of bandwidth reduction(2/2)
Compared physical hop count in transcode tree
By our method
By randomly selecting node to connect
Comparison on PlanetLab
20 user nodes in 7 countries
Result
Random selection: hop counts 343, 361, 335
Our method: hop counts 314, 280, 277
16% reduction in hop count with our method
25. 27
Time to start up the system on PlanetLab
Measured the time from system start-up until:
All nodes complete establishing connections
All nodes receive the first byte of data
Comparison on PlanetLab
20 user nodes in 7 countries
Nodes are cascaded, not connected in a tree
Result of evaluation
Observation
Most of the time is spent establishing connections
All operations are performed in parallel, so the observed time is that of the slowest node to establish its connection
26. 28
Time to recover from node failure on PlanetLab
Measured the time from a node failure until:
A connection to a new node is established
Data is received from the new node
Comparison on PlanetLab
20 user nodes in 7 countries
Nodes are cascaded, not connected in a tree
Result of evaluation
Observation
These are practical values
During the recovery time, buffered data is played back, so users never notice the node failure
27. 29
Conclusion
Improved MTcast
Bandwidth usage between ASs is reduced
Made a prototype system in Java
Evaluation on PlanetLab
Ongoing work includes:
Serving requests with more parameters, including picture size, frame rate and audio channels
Further reduction of bandwidth between nodes
28. 30
Shibata, N., Yasumoto, K. and Mori, M.: P2P Video Broadcast based on Per-Peer Transcoding and its Evaluation on PlanetLab, Proc. of 19th IASTED Int'l Conf. on Parallel and Distributed Computing and Systems (PDCS2007), pp. 478-483.
Sun, T., Tamai, M., Yasumoto, K., Shibata, N., Ito, M. and Mori, M.: MTcast: Robust and Efficient P2P-based Video Delivery for Heterogeneous Users, Proc. of 9th Int'l Conf. on Principles of Distributed Systems (OPODIS2005), pp. 176-190. DOI: 10.1007/11795490_15