Impact of Video Encoding Parameters on Dynamic Video Transcoding

           Vidyut Samanta, Ricardo Oliveira, Advait Dixit, Parixit Aghera, Petros Zerfos, Songwu Lu
                                                     University of California Los Angeles
                                                        Computer Science Department
                                             {vids,rveloso,advait,parixit,pzerfos,slu}@cs.ucla.edu




Abstract

Currently there is a wide variety of devices with different screen resolutions, color support, processing power, and network connectivity, all capable of receiving streaming video from the Internet. In order to serve multimedia content to all these devices, middleware proxy servers which transcode content to fit different types of devices are becoming popular. In this paper, we present the results of several tests conducted with different combinations of video encoding parameters and show their impact on the network, CPU, and energy resource usage of a transcoding server. For this purpose, we implemented a dynamic video transcoding server, which enables dynamic changes to different transcoding parameters during a video streaming session.

Categories and Subject Descriptors: C.2.m [Computer-Communication Networks]: Miscellaneous

General Terms: Design, Experimentation, Measurement, Performance.

Keywords: Dynamic video transcoding, middleware, network bandwidth, processor load, streaming video

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. COMSWARE'06, January 8-12, 2006, New Delhi, India. Copyright (c) 2006 IEEE.

1. Introduction

Today there is a wide range of devices with varying features such as screen size, color depth, and processing power. These devices may also differ in how they connect to the Internet: some connect through wired 100 Mbps links, some use wireless technologies like CDMA, 1xEV-DO, EDGE, and GPRS, while others use 802.11b or 802.11g. Hence, the available bandwidth per connection also varies. These conditions make it impractical for content servers to provide different formats of the same content to all the different types of devices. Therefore, middleware proxy servers capable of transcoding content for various devices are becoming popular [15, 10, 9, 8].

Due to the increase in bandwidth availability, video streaming is becoming a popular method of delivering video content to Internet-enabled devices. Popular video streaming services such as Yahoo Launchcast, MSN Video, and the MSNBC news web-cast are a few such examples. Video streaming can also be used for web-casting lectures or presentations to a number of online attendees at the same time. Client device heterogeneity in terms of screen size, processing capabilities, and network connectivity should be handled by a video transcoding server. Moreover, a video transcoding server must be able to address the following scenarios:

1. Switching between different devices: Some ubiquitous computing applications allow transferring a video streaming session from a small PDA to a big-screen TV and vice versa. In this case, the transcoding server is required to change the frame size of the video stream dynamically, which in turn changes the network and CPU load on the server. Changes in frame size can be disconcerting to the viewer, so these changes would not be continuous but would occur only at isolated discrete moments.
2. Varying network bandwidth at a wireless client: For devices connected in wireless environments, the perceived bandwidth changes continuously due to the varying characteristics of wireless channels, e.g. fading and interference. This should be handled by dynamically changing the transcoding of the content on the server, which results in an increase or decrease of the CPU load on the server.

3. Varying content access patterns: Access patterns to Internet web sites are highly unpredictable and can have spikes, which may be caused by a particular event like an important sports match or breaking news. For example, during a crisis a large number of clients will access streaming clips from a news website. This scenario also results in a drastic change in the CPU and network load on the transcoding server.

The above scenarios indicate that a transcoding server is subject to varying network and CPU loads under different circumstances. Since a transcoding server has a fixed amount of computing resources and network bandwidth, it should use its dynamic transcoding capability to balance the required network and CPU resources against the available ones, without disconnecting existing clients or denying access to new clients. Work in [4, 7] also indicates that energy consumption in server farms is becoming more important as the number of servers increases, so the relationship between transcoding parameters and the energy consumed at the transcoding server is also interesting to study. One can design a transcoding control algorithm which optimizes the transcoding server resources while serving the maximum number of clients at an acceptable video stream quality. This requires establishing the relationship between the different transcoding parameters of a video stream and the network, CPU, and energy resources of the transcoding server.

In our work, we perform a systematic evaluation of a transcoding server system by measuring the impact of different transcoding parameters, such as Q-scale, frame size, and color depth, on CPU, network, and energy resource consumption at the transcoding server, and we compare the transcoding of different video content. We also describe the design and implementation of the dynamic video transcoding proxy that we used for this evaluation.

The rest of the paper is organized as follows. Section 2 discusses related work. Section 3 describes the design and implementation of the dynamic video transcoding proxy we have used. Section 4 presents experimental results, which show the effect of different transcoding parameters on the consumption of transcoding server resources. We conclude in Section 5.

2. Related Work

Substantial work has been done in the area of middleware servers. In [5], the authors present a middleware architecture to handle heterogeneous mobile client problems. [3] presents a middleware-oriented solution to the application session handoff problem. Middleware servers have been used to improve scalability in [14]. Seminal work on proxy-based content adaptation is the BARWAN project [9], where the argument for adapting data on the fly was made. A middleware architecture specifically for dynamic video transcoding was presented in [22]. They present a control algorithm which adapts transcoding to the varying network bandwidth of a 3G-network device. Their algorithm relies on an experimental relationship between encoding bit rate and frame rate for a given quantization scale, but lacks data on proxy resource consumption when determining the desired transcoding.

We use the cascaded pixel domain transcoding (CPDT) algorithm that was proposed in [16]. The authors of [2] propose a frequency domain transcoding algorithm which significantly reduces computational complexity compared to the CPDT approach. [6] proposes a video transcoding algorithm that integrates error control schemes into the transcoding; they present a two-pass video transcoding scheme that allocates bits between source coding and channel coding.

The effect of transcoding on visual quality has also been studied in the literature. A number of studies [11] show that the intuition that sequences with lots of motion require a higher frame rate is wrong. Another study [21] shows that large quantization steps should be avoided.

Our work presents experimental measurements of a real transcoding server and can be used in conjunction with these results for designing algorithms for the module that controls the transcoding parameters.

3. Design and Implementation

In this section, we describe the design and implementation of an open-source-based dynamic video transcoding proxy. The design of the video transcoding proxy system leverages the middleware architecture previously used in many transcoding proxy designs. The following is an overview of the transcoding proxy design.
Figure 1. Transcoding architecture: the client device receives the stream from the transcoding proxy (a decoder feeding an encoder with frame size, Q-scale, and color parameters), the proxy pulls content from the content server, and a controller sets the encoding parameters using feedback from the client device.


3.1 Design Overview

The design of our video transcoding proxy is illustrated in Figure 1. The system consists of the following three components.

1. The content server provides high-quality video streams to the transcoding proxy. The content server is typically a network of file servers with load-balancing mechanisms. Examples of such content servers are the CNBC News web-cast server or the MSN video server.

2. The transcoding proxy is responsible for performing dynamic transcoding of the video content according to the client preferences and the transcoding parameters supplied by a control mechanism. The proxy uses the Cascaded Pixel Domain Transcoder (CPDT) scheme initially proposed in [16], which consists of an encoder connected to a decoder. This particular transcoding scheme is flexible since each frame is fully decoded to raw pixels before re-encoding, which makes dynamic changes in the transcoding parameters easier. The controller module controls transcoding by changing the frame size, Q-scale (quantization scale), and color transcoding parameters of the encoder. These parameters can be changed using a control algorithm which takes the client device's profile, the user's preferences, the proxy server's resource usage, and the network bandwidth available at the client device into consideration.
   The controller can obtain the client device's profile and user preferences from a user database, or it can get that information along with the content request. The controller module may get the client's available network bandwidth information directly from the client device or by monitoring network conditions. The dotted line in Figure 1 between the client device and the controller is an optional communication link between those modules. Note that the design of the control logic, as well as of the control channel communication, is a very complex issue and is not addressed in this paper. We study the relationship between different transcoding parameters and their impact on transcoding server resource usage, which can be utilized in the design of the transcoding control algorithm.

3. The client device is used by the end user to view the video stream and is characterized by its screen size, color depth, processing power, and network connectivity. Laptops, PDAs, and cellphones are examples of such client devices.

3.2 Transcoding Parameters

The following is a description of the transcoding parameters that can be changed dynamically in our current design.

3.2.1 Frame Size

This is the size of the video frame. It can be initially specified based on the client's screen size. In Section 4, we take measurements of streaming video with different frame sizes, and the results show that smaller frame sizes take up considerably less network bandwidth. If the client is experiencing network congestion while streaming a video, this parameter can be changed dynamically on the transcoding proxy so that the size of the frame is reduced and the client receives a continuous stream, but with a smaller frame size (Figure 2).
If the network conditions improve, the frame size can be converted back to the size the session started with. One could extend the proxy further to support handoffs between devices, where the dynamic scaling of the frame size would be very useful for devices with different screen sizes. The MPEG standard requires a fresh restart of the video stream in order to change the frame size. To circumvent this limitation, we keep the original frame size but insert a black band around the video content, as shown in Figure 2.

Figure 2. Frame scaling of width and height by 50%.
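To make the black-band approach concrete, the sketch below shrinks the luma (Y) plane of a raw frame and centers it inside an output plane that keeps the original dimensions, filling the border with black (Y = 16 for limited-range video). This is only a minimal illustration with point sampling; the actual proxy performs this step with an image resize function from the FFmpeg libraries, as described in Section 4.3, and the chroma planes would be handled the same way at quarter resolution.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative helper: shrink an 8-bit Y plane by `scale` (e.g. 0.5) and
     * center it in an output plane of the original width/height; the
     * surrounding band is filled with black. */
    static void letterbox_scale_y(const uint8_t *src, uint8_t *dst,
                                  int width, int height, double scale)
    {
        int new_w = (int)(width * scale);
        int new_h = (int)(height * scale);
        int off_x = (width - new_w) / 2;
        int off_y = (height - new_h) / 2;

        memset(dst, 16, (size_t)width * height);        /* black border */

        for (int y = 0; y < new_h; y++) {
            const uint8_t *src_row = src + (int)(y / scale) * width;
            uint8_t *dst_row = dst + (off_y + y) * width + off_x;
            for (int x = 0; x < new_w; x++)
                dst_row[x] = src_row[(int)(x / scale)];
        }
    }

Because the inserted band is constant black, the macroblocks it covers compress to almost nothing, which is why the measurements in Section 4.3 show both CPU and bandwidth savings.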
3.2.2 Color Depth

The MPEG standard specifies a single pixel format in which each pixel is represented in the YUV scheme, where Y is the luminance and U and V specify the chrominance [12]. Because each pixel is always represented with 12 bits (e.g. the 4:2:0 planar format), we expect only minimal performance improvements from changing the video to gray scale.

3.2.3 Q-scale

The Q-scale is a parameter used to compress a given block of a video frame. Each block is an 8 x 8 pixel square to which a DCT (Discrete Cosine Transform) is applied (similar to JPEG encoding). The coefficients C_{ij} of each block are then divided by a quantization matrix M_{ij} and a quantization scale q:

    C_{ij} = \frac{C_{ij}}{q \cdot M_{ij}}    (1)

The encoder quantizes to zero the values of C_{ij} that are below a certain threshold. Each block is then fed into an RLE (Run Length Encoder) module that performs zig-zag encoding to achieve better compression. A very high Q-scale gives a "pixelated" look to a frame (Figure 3) due to the loss of high-frequency components in each frame block. The Q-scale value can be changed from 1 to 31, where 1 indicates the highest quality video and 31 indicates the lowest quality video.
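As a concrete illustration of Equation (1), the sketch below quantizes an 8 x 8 block of DCT coefficients and counts how many of them collapse to zero: a larger q produces longer zero runs, which the zig-zag/RLE stage then compresses very efficiently. This is a simplified model of the quantizer, with a synthetic coefficient block and a flat matrix M, not the actual codec code.

    #include <stdio.h>

    /* Quantize an 8x8 block of DCT coefficients as in Equation (1):
     * out[i][j] = round(C[i][j] / (q * M[i][j])). Returns the number of
     * coefficients that become zero (candidates for run-length encoding). */
    static int quantize_block(const double C[8][8], const int M[8][8],
                              int q, int out[8][8])
    {
        int zeros = 0;
        for (int i = 0; i < 8; i++)
            for (int j = 0; j < 8; j++) {
                double v = C[i][j] / (q * M[i][j]);
                out[i][j] = (int)(v >= 0 ? v + 0.5 : v - 0.5);
                if (out[i][j] == 0)
                    zeros++;
            }
        return zeros;
    }

    int main(void)
    {
        double C[8][8];
        int M[8][8], out[8][8];

        /* Synthetic block: energy concentrated in the low frequencies. */
        for (int i = 0; i < 8; i++)
            for (int j = 0; j < 8; j++) {
                C[i][j] = 400.0 / (1 + i + j);
                M[i][j] = 16;
            }

        printf("zeros at q=2:  %d\n", quantize_block(C, M, 2, out));
        printf("zeros at q=31: %d\n", quantize_block(C, M, 31, out));
        return 0;
    }

With this synthetic block, q=2 leaves every coefficient non-zero while q=31 zeroes out all but the DC term, which is the effect behind the CPU and throughput trends reported in Section 4.2.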
Figure 3. Transcoding operation with a high Q-scale.

3.3 Implementation Details

The video transcoding proxy was implemented using Tinyproxy [20] and FFserver [17]. Tinyproxy is a lightweight HTTP proxy server licensed under the GPL. FFserver is a streaming server for both audio and video licensed under the LGPL.

FFmpeg provides two libraries (libavcodec and libavformat) that can be used to encode and decode MPEG video. We modified Tinyproxy and integrated it with the FFmpeg libraries so that it can decode and encode a video stream.

The transcoding process is initiated by the client, which sends its device specification along with the request for a video to the proxy. If transcoding is required, the proxy starts fetching the requested video stream, decodes each frame, re-encodes it applying the parameters specified by the client, and streams it to the client device. We implemented Q-scaling, frame scaling, and altering of the color depth while re-encoding each frame at the proxy. The transcoding then works as follows: Tinyproxy has one parent process, which spawns multiple HTTP server processes that serve incoming HTTP requests from clients. We modified Tinyproxy so that it additionally spawns one more child, which serves as a dynamic transcoding request handler. The controller shown in Figure 1 connects to this request handler on the proxy. The controller can dynamically change the video transcoding parameters while streaming.
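The paper does not show code for this request handler, so the following is only a rough sketch under the assumption that the extra child process accepts a TCP connection from the controller and reads simple newline-terminated "parameter=value" commands; the actual wire format in the modified Tinyproxy may differ, and propagating the new values to the encoding process (for example via shared memory or a pipe) is omitted here.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical control-channel handler run in a child process forked by
     * the proxy: it listens for the controller and parses "q_scale=<n>",
     * "frame_scale=<pct>" and "gray=<0|1>" commands, one per line. */
    static void control_handler(int port)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons((unsigned short)port);

        if (srv < 0 || bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(srv, 1) < 0) {
            perror("control channel");
            exit(1);
        }

        for (;;) {
            int conn = accept(srv, NULL, NULL);
            if (conn < 0)
                continue;
            FILE *in = fdopen(conn, "r");
            char line[128];
            int q, fs, gray;
            while (in && fgets(line, sizeof(line), in)) {
                if (sscanf(line, "q_scale=%d", &q) == 1)
                    printf("controller set q_scale=%d\n", q);       /* update encoder */
                else if (sscanf(line, "frame_scale=%d", &fs) == 1)
                    printf("controller set frame_scale=%d%%\n", fs);
                else if (sscanf(line, "gray=%d", &gray) == 1)
                    printf("controller set gray=%d\n", gray);
            }
            if (in)
                fclose(in);
        }
    }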
The algorithm for the transcoder is shown in the flowchart in Figure 4. If an input frame already satisfies the transcoding requirements, it is sent directly to the output buffer without being decoded and re-encoded. Otherwise, the frame is decoded. The codec is then reconfigured by the transcoder if the transcoding parameters have changed since the last frame was decoded. The frame is then re-encoded with the current codec and stored in the output buffer, from where it is streamed to the client.
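The per-frame logic of Figure 4 can be summarized in code. The sketch below is a paraphrase of the flowchart with hypothetical decode/encode/reconfigure helpers standing in for the FFmpeg library calls used by the real proxy; it is not the actual implementation, and it assumes the codec has been opened with the initial parameters before the loop starts.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical types and helpers standing in for the FFmpeg-based code. */
    typedef struct Frame Frame;
    typedef struct Params { int q_scale; int frame_scale; bool gray; } Params;

    extern Frame *fetch_frame(void);                   /* NULL at end of stream */
    extern bool   needs_transcoding(const Frame *, const Params *);
    extern bool   params_changed(const Params *cur, const Params *last);
    extern void   reopen_codec(const Params *);        /* "change codec" step   */
    extern Frame *decode(Frame *);
    extern Frame *encode(Frame *, const Params *);
    extern void   store_in_output_buffer(Frame *);     /* streamed to client    */

    void transcode_stream(const Params *requested)
    {
        Params last = *requested;

        for (Frame *f = fetch_frame(); f != NULL; f = fetch_frame()) {
            if (!needs_transcoding(f, requested)) {
                /* Frame already satisfies the requirements: pass it through. */
                store_in_output_buffer(f);
                continue;
            }
            Frame *raw = decode(f);
            if (params_changed(requested, &last)) {    /* controller updated them */
                reopen_codec(requested);
                last = *requested;
            }
            store_in_output_buffer(encode(raw, requested));
        }
    }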
Figure 4. Transcoding algorithm (flowchart: fetch a frame from the input buffer; if it needs transcoding, decode it, change the codec if the transcoding parameters have changed, and re-encode it; store the frame in the output buffer; repeat until the end of the stream).

4. Measurements

To understand how the transcoding system reacts to different configuration parameters, we set up a testbed and took the following measurements:

1. CPU usage,
2. Bandwidth usage, and
3. Energy consumption.

4.1 Testbed Setup

For each of the above measurements we used different transcoding parameters (Q-scale, video frame size, and gray/color scale). All the measurements were taken at the proxy server. At the client side, we used MPlayer [13] to play the video streams. CPU usage was computed as the CPU time spent by the transcoder during the entire video stream. This CPU time has two components: user time and system time. The user time is the CPU time allocated to the main user-level process and all its children, while the system time is the CPU time allocated to kernel processing (i.e., disk I/O, network access) during the user process's execution; we used the C function getrusage() to obtain the CPU usage values. All CPU measurements presented in this paper refer to the total time (user + system).
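For reference, the user plus system CPU time described above can be read via getrusage() roughly as follows; this is a generic POSIX snippet, not the exact instrumentation used in the proxy.

    #include <stdio.h>
    #include <sys/resource.h>

    /* Return total CPU seconds (user + system) consumed so far by the calling
     * process and its waited-for children, as reported by getrusage(). */
    static double cpu_seconds_used(void)
    {
        struct rusage self, children;
        getrusage(RUSAGE_SELF, &self);
        getrusage(RUSAGE_CHILDREN, &children);

        double user = (self.ru_utime.tv_sec + children.ru_utime.tv_sec) +
                      (self.ru_utime.tv_usec + children.ru_utime.tv_usec) / 1e6;
        double sys  = (self.ru_stime.tv_sec + children.ru_stime.tv_sec) +
                      (self.ru_stime.tv_usec + children.ru_stime.tv_usec) / 1e6;
        return user + sys;
    }

    int main(void)
    {
        /* ... run the transcoding workload here ... */
        printf("CPU time used: %.2f s\n", cpu_seconds_used());
        return 0;
    }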
We used tcpdump [18] to trace the TCP connections to the server and computed the throughput of these connections using tcptrace [19].

For the energy measurements, we used a laptop with a 1.8 GHz AMD processor and 512 MB of RAM configured as the proxy. Since energy measurements are very sensitive to other processes running in the system at the same time as the transcoder, we turned off all unnecessary processes, leaving only the transcoding proxy server and some kernel processes running. The ACPI [1] system of the laptop allowed us to read the voltage levels and the current drawn from the battery. ACPI can be integrated into the Linux kernel and is mapped to the /proc file system, where the current battery status can be accessed. Since ACPI takes measurements directly from the battery, we unplugged the power supply to avoid its constant recharge. We took periodic measurements of the voltage level and current intensity from the battery as the proxy was processing the video stream. We then computed the energy as follows:

    E(T) = \int_0^T v(t) \cdot i(t) \, dt \approx \sum_{n=1}^{N} v(n\Delta t) \cdot i(n\Delta t) \cdot \Delta t    (2)

where v(t) and i(t) are the instantaneous voltage and current drawn from the battery, T is the total measurement time, N is the number of samples taken during T, and Δt = T/N.
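The discrete sum in Equation (2) amounts to the following computation over the sampled readings; the sketch assumes the voltage (V) and current (A) samples have already been collected from the ACPI battery interface at a fixed interval, and the example numbers are made up purely for illustration.

    #include <stdio.h>

    /* Approximate E(T) = integral of v(t)*i(t) dt by the rectangle rule of
     * Equation (2), given n_samples readings taken every delta_t seconds. */
    static double energy_joules(const double *volts, const double *amps,
                                int n_samples, double delta_t)
    {
        double energy = 0.0;
        for (int n = 0; n < n_samples; n++)
            energy += volts[n] * amps[n] * delta_t;   /* v(n*dt) * i(n*dt) * dt */
        return energy;
    }

    int main(void)
    {
        /* Example: 4 samples taken 60 s apart while transcoding. */
        double v[] = { 12.3, 12.2, 12.2, 12.1 };      /* battery voltage (V)   */
        double i[] = { 1.10, 1.15, 1.12, 1.08 };      /* discharge current (A) */
        printf("energy: %.1f J\n", energy_joules(v, i, 4, 60.0));
        return 0;
    }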
For the following measurements we used an MPEG-4 encoded video of length 3.5 minutes, with an initial frame size of 320x240 at 24 frames per second.

4.2 Q-scale Measurements

For the CPU measurements with different Q-scales we used a 4-minute video clip in AVI format (the video stream was encoded in MPEG-2). We took two measurements for each allowed value of the Q-scale (1-31) and averaged them. The results are shown in Figure 5.

Figure 5. CPU usage for different values of Q-scale.

As can be seen from the figure, the total CPU usage decays linearly for Q-scale < 5 and keeps a decreasing trend as we go up in the Q-scale. This was expected, since MPEG encoding performs lossy compression of the video image and the amount of compression/loss is greatly affected by the Q-scale parameter. As the value of the Q-scale increases, the quantized blocks contain smaller DCT components, which are eventually quantized to zero. Since this is followed by RLE (Run Length Encoding) in each block of the image, a sequence of consecutive zeros is greatly compressed, thus reducing the time to encode a sequence of video frames (containing B and P frames).

The irregularity of the curves (ups and downs) is due to the fact that we were using a precision of seconds to measure the CPU time, which introduces some noise into the measurements (at least a one-second error margin).

The throughput measurements of Figure 6 were taken using the same video file as above, also with two samples per Q-scale.

Figure 6. Throughput for different Q-scales.

Figure 6 clearly shows that the throughput drops very fast for Q < 5 and then converges to a stable value as we increase the Q-scale. This was also expected, since as the Q-scale increases, the number of bits necessary to encode the blocks of each video frame is reduced, thus reducing the amount of data to be transmitted over the network.

4.3 Frame Scale Measurements

MPlayer uses AVI decoder functions supplied by the libavcodec library; however, the decoder does not support frame size changes during the playback of the video stream. To overcome this limitation we used an image resize function provided by the libavformat library. This function resizes the video image and inserts black strips around the resized image (in the case of a size reduction) but keeps the original height and width of the video frame (Figure 2). Black blocks in a video frame have a high degree of compression (~100%) due to the MPEG image compression scheme (similar to JPEG). In addition, the constant black blocks in the image take advantage of the MPEG motion prediction algorithm to increase the compression level of the video stream and obtain both CPU and bandwidth savings.

Figures 7 and 8 show the CPU usage (at the proxy) and the throughput (at the client) for different frame scales. As an example, after applying a 50% frame scale to a 640x480 frame, the size becomes 320x240, i.e., the scaling applies to both frame width and height. As observed in Figure 7, the CPU usage increases as we approach the original scale of the frame. Indeed, CPU cycles are saved as the original video frame is shrunk and the black portion of the frame increases.
Figure 7. CPU usage for different frame sizes.

The encoding is faster because of:

1. MPEG motion prediction working in the time domain, and
2. MPEG image compression working in the spatial domain.

The same reasoning applies to the throughput curve in Figure 8.

Notice that both the CPU usage and the throughput curves drop at 100%. This is because at 90% scale the MPEG encoder still has to encode a tiny black strip on the bottom/right sides, whereas this strip is removed when the frame scaling is set to 100%. The black strip is a discontinuity in the spatial domain (high frequencies in the DCT) and needs additional bits to be encoded (in comparison to 100% scaling).

Figure 8. Throughput for different frame scales.

4.4 Gray Scale Measurements

The MPEG standard uses the YUV scheme instead of the traditional RGB scheme to encode each pixel. Y is the luminance component, while U and V are the blue-difference and red-difference chrominance components, respectively. The most common format is the YUV 4:2:0 planar format, which uses three planes (Y, U, V) to encode a given frame. By using gray scale, we effectively use only the Y plane, since we discard the color information. However, the MPEG standards require that all three YUV planes exist in the video stream, which means that each pixel is still represented by 12-bit YUV as in the color case. Most of the image information is in the luminance (Y) plane. Therefore, the throughput and CPU usage are not significantly improved by using gray scale, which was confirmed by our tests.
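To illustrate why gray scale saves so little, the sketch below converts a planar YUV 4:2:0 frame to gray simply by overwriting the two chroma planes with the neutral value 128: the Y plane, which carries most of the information, is untouched, and the frame still occupies the same 12 bits per pixel. This is a minimal stand-in for the operation, not the proxy's actual code path.

    #include <stdint.h>
    #include <string.h>

    /* Convert a planar YUV 4:2:0 frame to gray scale in place. The Y plane is
     * width*height bytes; U and V are (width/2)*(height/2) bytes each, so the
     * frame size (12 bits per pixel) does not change. */
    static void yuv420_to_gray(uint8_t *y, uint8_t *u, uint8_t *v,
                               int width, int height)
    {
        (void)y;                                   /* luminance kept as-is */
        size_t chroma = (size_t)(width / 2) * (height / 2);
        memset(u, 128, chroma);                    /* neutral chrominance  */
        memset(v, 128, chroma);
    }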
4.5 Energy Measurements

We took several energy measurements at the proxy, under the conditions described previously in the testbed setup. By changing one transcoding parameter at a time, we could effectively identify the impact of each parameter on the overall energy consumption. Moreover, we were not so much interested in observing the absolute values of the measurements as in comparing the energy consumption of different configurations. These measurements can also extend to other systems like desktops and server farms [7, 4]. Proxy energy consumption is an important parameter in production middleware systems and large data centers, and needs to be minimized as much as possible.

Figure 9. Energy consumption (kJ) for different transcoder parameters (Q:30, Q:20, Q:15, Q:10, Q:5, FS:25, FS:50, FS:75, Full).

Energy was computed according to (2) over a 4-minute video. The frame scale (FS) tests were performed with a Q-scale of 1, and the Full test was done without any transcoding at all.
Figure 9 shows a behavior similar to that of the CPU usage for different transcoding parameters. Indeed, great savings can be achieved by reducing the frame scale or by using high Q-scale values.

4.6 Different Video Contents

The impact of the transcoding parameters may also depend on the content of the video streams, such as different textures, colors, and movement. We performed experiments with four types of videos that provide a range of different levels of color, texture, and motion:

1. Cartoon: video clip with flat colors and few textures.
2. News: interview between a newscaster and a reporter; the background behind both the newscaster and the reporter is static and there is very little movement in each frame.
3. Sports: Formula One race with fast-moving cars; there is a lot of movement between frames.
4. Music clip: video with static parts and parts with lots of movement, with frames containing flat colors as well as heavy textures.

All the videos were MPEG-4 encoded and each clip was 45 seconds long, with an initial frame size of 640x480 at 30 frames per second. For each video, we compared the performance in terms of CPU and throughput between an encoding with Q-scale=2 and an encoding with Q-scale=31. We expect to observe a decrease in both CPU and throughput usage when going from Q-scale=2 to Q-scale=31, but would this decrease be the same for all videos?

Figure 10. CPU usage for different videos (News, Cartoon, Music, Sports) at Q-scale 2 and 31.

Figure 10 shows the results for CPU usage: the numbers above the bars represent the percentage of CPU usage of a transcoding with Q-scale=31 compared to the values achieved for Q-scale=2; e.g., a bar with 80% means that transcoding the video with Q-scale=31 takes 80% of the CPU usage of the transcoding with Q-scale=2.

Note that the values are approximately the same for all videos (between 72% and 80%), indicating that the effect of varying the encoding parameters is not impacted by the content of the video.

A similar reasoning can be applied to Figure 11, where we show the throughput measured at the server for the different videos.

Figure 11. Throughput for different videos (News, Cartoon, Music, Sports) at Q-scale 2 and 31.

5. Conclusion

We presented the design and implementation of an open-source dynamic video transcoding proxy server based on a three-tier transcoding proxy architecture. The video transcoding proxy provides the flexibility to apply different transcoding control algorithms without modifying the other components of the system.

We conducted a systematic evaluation of the system parameters Q-scale, color depth, and frame size with respect to the CPU usage, bandwidth usage, and energy consumption of the transcoding proxy.

Our measurements show that by reducing the image quality down to a Q-scale of 10, one can dramatically decrease proxy CPU cycles and streaming throughput. Furthermore, video frame size reduction results in a significant decrease of the perceived throughput at the client side. Our measurements also show that a 50% scaling of both the width and the height of the video frame effectively reduces the throughput to approximately 50%. In addition, the energy measurements show that by using a Q-scale greater than 5, it is possible to reduce the energy consumption at the proxy server by more than 50%, in comparison to the no-transcoding scenario.
We also studied the impact of transcoding different video content and found that the effect of varying these encoding parameters is not impacted by the content of the video.

We plan to develop an automatic controller module based on our measurement results. The automatic control module would enable us to explore different feedback mechanisms to estimate the client's perceived throughput and experience during the transcoding process; this remains future work.

References

[1] ACPI at Intel. http://developer.intel.com/technology/iapc.
[2] P. A. A. Assuncao and M. Ghanbari. A frequency-domain video transcoder for dynamic bit-rate reduction of MPEG-2 bit streams. IEEE Transactions on Circuits and Systems for Video Technology, 8(8):953-967, December 1998.
[3] R. Bagrodia, S. Bhattacharyya, F. Cheng, S. Gerding, G. Glazer, R. Guy, Z. Ji, J. Lin, T. Phan, E. Skow, M. Varshney, and G. Zorpas. iMASH: Interactive mobile application session handoff, 2003.
[4] G. Banga, P. Druschel, and J. Mogul. Resource containers: A new facility for resource management in server systems. In Proceedings of the Third Symposium on Operating Systems Design and Implementation (OSDI '99), February 1999.
[5] A. Bond, M. Gallagher, and J. Indulska. An information model for nomadic environments. In DEXA Workshop, pages 400-405, 1998.
[6] J. Cai and C. W. Chen. A high-performance and low-complexity video transcoding scheme for video streaming over wireless links. In Proceedings of the Wireless Communications and Networking Conference 2002, March 2002.
[7] J. Chase, D. Anderson, P. Thakur, and A. Vahdat. Managing energy and server resources in hosting centers. In Proceedings of the 18th Symposium on Operating Systems Principles (SOSP '01), October 2001.
[8] A. Fox, S. D. Gribble, E. A. Brewer, and E. Amir. Adapting to network and client variability via on-demand dynamic distillation. In Proceedings of the Seventh International Conference on Architectural Support for Programming Languages and Operating Systems, pages 160-170, Cambridge, MA, October 1996.
[9] A. Fox, S. D. Gribble, Y. Chawathe, E. A. Brewer, and P. Gauthier. Cluster-based scalable network services. In Proceedings of the 16th Symposium on Operating Systems Principles (SOSP-97), volume 31(5) of Operating Systems Review, pages 78-91, New York, October 5-8, 1997. ACM Press.
[10] R. Han, P. Bhagwat, R. LaMaire, T. Mummert, V. Perret, and J. Rubas. Dynamic adaptation in an image transcoding proxy for mobile WWW browsing. IEEE Personal Communications, 5(6):8-17, December 1998.
[11] J. D. McCarthy, M. A. Sasse, and D. Miras. Sharp or smooth? Comparing the effects of quantization vs. frame rate for streamed video. In CHI '04: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 535-542, New York, NY, USA, 2004. ACM Press.
[12] MPEG Home Page. http://www.chiariglione.org/mpeg/.
[13] MPlayer. http://www.mplayerhq.hu.
[14] T. Phan, R. G. Guy, and R. Bagrodia. A scalable, distributed middleware service architecture to support mobile internet applications. In Wireless Mobile Internet, pages 27-33, 2001.
[15] A. Singh, A. Trivedi, K. Ramamritham, and P. J. Shenoy. PTC: Proxies that transcode and cache in heterogeneous web client environments. World Wide Web, 7(1):7-28, 2004.
[16] H. Sun, W. Kwok, and J. Zdepski. Architectures for MPEG compressed bitstream scaling. IEEE Transactions on Circuits and Systems for Video Technology, April 1996.
[17] FFmpeg. http://ffmpeg.sourceforge.net.
[18] tcpdump / libpcap. http://www.tcpdump.org/.
[19] tcptrace. http://www.tcptrace.org/.
[20] Tinyproxy. http://tinyproxy.sourceforge.net/.
[21] D. Wang, F. Speranza, A. Vincent, T. Martin, and P. Blanchfield. Toward optimal rate control: a study of the impact of spatial resolution, frame rate, and quantization on subjective video quality and bit rate. In VCIP, pages 198-209, 2003.
[22] A. Warabino, S. Ota, D. Morikawa, M. Ohashi, H. Nakamura, H. Iwashita, and F. Watanabe. Video transcoding proxy for 3G wireless mobile internet access. IEEE Communications Magazine, pages 66-71, October 2000.
Using Bandwidth Aggregation to Improve the Performance of Video Quality- Adap...paperpublications3
 
Analyzing Video Streaming Quality by Using Various Error Correction Methods o...
Analyzing Video Streaming Quality by Using Various Error Correction Methods o...Analyzing Video Streaming Quality by Using Various Error Correction Methods o...
Analyzing Video Streaming Quality by Using Various Error Correction Methods o...IJERA Editor
 
A Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
A Benchmark to Evaluate Mobile Video Upload to Cloud InfrastructuresA Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
A Benchmark to Evaluate Mobile Video Upload to Cloud InfrastructuresUniversity of Southern California
 
Optimizing cloud resources for delivering iptv services through virtualization
Optimizing cloud resources for delivering iptv services through virtualizationOptimizing cloud resources for delivering iptv services through virtualization
Optimizing cloud resources for delivering iptv services through virtualizationJPINFOTECH JAYAPRAKASH
 
A FRAMEWORK FOR MOBILE VIDEO STREAMING AND VIDEO SHARING IN CLOUD
A FRAMEWORK FOR MOBILE VIDEO STREAMING AND VIDEO SHARING IN CLOUDA FRAMEWORK FOR MOBILE VIDEO STREAMING AND VIDEO SHARING IN CLOUD
A FRAMEWORK FOR MOBILE VIDEO STREAMING AND VIDEO SHARING IN CLOUDJournal For Research
 
An Adaptive Remote Display Framework to Improve Power Efficiency
An Adaptive Remote Display Framework to Improve Power Efficiency An Adaptive Remote Display Framework to Improve Power Efficiency
An Adaptive Remote Display Framework to Improve Power Efficiency csandit
 

Ähnlich wie Impact of Video Encoding Parameters on Dynamic Video Transcoding (20)

An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...
An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...
An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...
 
An Overview on Multimedia Transcoding Techniques on Streaming Digital Contents
An Overview on Multimedia Transcoding Techniques on Streaming Digital ContentsAn Overview on Multimedia Transcoding Techniques on Streaming Digital Contents
An Overview on Multimedia Transcoding Techniques on Streaming Digital Contents
 
B044060814
B044060814B044060814
B044060814
 
Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...
Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...
Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...
 
A Real-Time Adaptive Algorithm for Video Streaming over Multiple Wireless Acc...
A Real-Time Adaptive Algorithm for Video Streaming over Multiple Wireless Acc...A Real-Time Adaptive Algorithm for Video Streaming over Multiple Wireless Acc...
A Real-Time Adaptive Algorithm for Video Streaming over Multiple Wireless Acc...
 
A real time adaptive algorithm for video streaming over multiple wireless acc...
A real time adaptive algorithm for video streaming over multiple wireless acc...A real time adaptive algorithm for video streaming over multiple wireless acc...
A real time adaptive algorithm for video streaming over multiple wireless acc...
 
Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding f...
Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding  f...Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding  f...
Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding f...
 
Iaetsd adaptive and well-organized mobile video streaming public
Iaetsd adaptive and well-organized mobile video streaming publicIaetsd adaptive and well-organized mobile video streaming public
Iaetsd adaptive and well-organized mobile video streaming public
 
Simulation Study of Video Streaming in Multi-Hop Network
Simulation Study of Video Streaming in Multi-Hop NetworkSimulation Study of Video Streaming in Multi-Hop Network
Simulation Study of Video Streaming in Multi-Hop Network
 
F04024549
F04024549F04024549
F04024549
 
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...Building Cloud-ready Video Transcoding System for Content Delivery Networks (...
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...
 
Using Bandwidth Aggregation to Improve the Performance of Video Quality- Adap...
Using Bandwidth Aggregation to Improve the Performance of Video Quality- Adap...Using Bandwidth Aggregation to Improve the Performance of Video Quality- Adap...
Using Bandwidth Aggregation to Improve the Performance of Video Quality- Adap...
 
Analyzing Video Streaming Quality by Using Various Error Correction Methods o...
Analyzing Video Streaming Quality by Using Various Error Correction Methods o...Analyzing Video Streaming Quality by Using Various Error Correction Methods o...
Analyzing Video Streaming Quality by Using Various Error Correction Methods o...
 
A Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
A Benchmark to Evaluate Mobile Video Upload to Cloud InfrastructuresA Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
A Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
 
Cg25492495
Cg25492495Cg25492495
Cg25492495
 
Ijcatr04061005
Ijcatr04061005Ijcatr04061005
Ijcatr04061005
 
[32]
[32][32]
[32]
 
Optimizing cloud resources for delivering iptv services through virtualization
Optimizing cloud resources for delivering iptv services through virtualizationOptimizing cloud resources for delivering iptv services through virtualization
Optimizing cloud resources for delivering iptv services through virtualization
 
A FRAMEWORK FOR MOBILE VIDEO STREAMING AND VIDEO SHARING IN CLOUD
A FRAMEWORK FOR MOBILE VIDEO STREAMING AND VIDEO SHARING IN CLOUDA FRAMEWORK FOR MOBILE VIDEO STREAMING AND VIDEO SHARING IN CLOUD
A FRAMEWORK FOR MOBILE VIDEO STREAMING AND VIDEO SHARING IN CLOUD
 
An Adaptive Remote Display Framework to Improve Power Efficiency
An Adaptive Remote Display Framework to Improve Power Efficiency An Adaptive Remote Display Framework to Improve Power Efficiency
An Adaptive Remote Display Framework to Improve Power Efficiency
 

Mehr von Videoguy

Free-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer NetworksFree-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer NetworksVideoguy
 
Video Streaming
Video StreamingVideo Streaming
Video StreamingVideoguy
 
Reaching a Broader Audience
Reaching a Broader AudienceReaching a Broader Audience
Reaching a Broader AudienceVideoguy
 
Impact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video StreamingImpact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video StreamingVideoguy
 
Video Streaming Services – Stage 1
Video Streaming Services – Stage 1Video Streaming Services – Stage 1
Video Streaming Services – Stage 1Videoguy
 
Flash Live Video Streaming Software
Flash Live Video Streaming SoftwareFlash Live Video Streaming Software
Flash Live Video Streaming SoftwareVideoguy
 
Videoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions CookbookVideoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions CookbookVideoguy
 
Streaming Video Formaten
Streaming Video FormatenStreaming Video Formaten
Streaming Video FormatenVideoguy
 
iPhone Live Video Streaming Software
iPhone Live Video Streaming SoftwareiPhone Live Video Streaming Software
iPhone Live Video Streaming SoftwareVideoguy
 
Glow: Video streaming training guide - Firefox
Glow: Video streaming training guide - FirefoxGlow: Video streaming training guide - Firefox
Glow: Video streaming training guide - FirefoxVideoguy
 
Video and Streaming in Nokia Phones v1.0
Video and Streaming in Nokia Phones v1.0Video and Streaming in Nokia Phones v1.0
Video and Streaming in Nokia Phones v1.0Videoguy
 
Video Streaming across wide area networks
Video Streaming across wide area networksVideo Streaming across wide area networks
Video Streaming across wide area networksVideoguy
 
University Information Systems Product Service Offering
University Information Systems Product Service OfferingUniversity Information Systems Product Service Offering
University Information Systems Product Service OfferingVideoguy
 
Video Communications and Video Streaming
Video Communications and Video StreamingVideo Communications and Video Streaming
Video Communications and Video StreamingVideoguy
 
Mixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless NetworksMixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless NetworksVideoguy
 
A Streaming Media Primer
A Streaming Media PrimerA Streaming Media Primer
A Streaming Media PrimerVideoguy
 
Video streaming
Video streamingVideo streaming
Video streamingVideoguy
 
Streaming Video
Streaming VideoStreaming Video
Streaming VideoVideoguy
 
Video_Streaming
Video_StreamingVideo_Streaming
Video_StreamingVideoguy
 

Mehr von Videoguy (20)

Adobe
AdobeAdobe
Adobe
 
Free-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer NetworksFree-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer Networks
 
Video Streaming
Video StreamingVideo Streaming
Video Streaming
 
Reaching a Broader Audience
Reaching a Broader AudienceReaching a Broader Audience
Reaching a Broader Audience
 
Impact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video StreamingImpact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video Streaming
 
Video Streaming Services – Stage 1
Video Streaming Services – Stage 1Video Streaming Services – Stage 1
Video Streaming Services – Stage 1
 
Flash Live Video Streaming Software
Flash Live Video Streaming SoftwareFlash Live Video Streaming Software
Flash Live Video Streaming Software
 
Videoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions CookbookVideoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions Cookbook
 
Streaming Video Formaten
Streaming Video FormatenStreaming Video Formaten
Streaming Video Formaten
 
iPhone Live Video Streaming Software
iPhone Live Video Streaming SoftwareiPhone Live Video Streaming Software
iPhone Live Video Streaming Software
 
Glow: Video streaming training guide - Firefox
Glow: Video streaming training guide - FirefoxGlow: Video streaming training guide - Firefox
Glow: Video streaming training guide - Firefox
 
Video and Streaming in Nokia Phones v1.0
Video and Streaming in Nokia Phones v1.0Video and Streaming in Nokia Phones v1.0
Video and Streaming in Nokia Phones v1.0
 
Video Streaming across wide area networks
Video Streaming across wide area networksVideo Streaming across wide area networks
Video Streaming across wide area networks
 
University Information Systems Product Service Offering
University Information Systems Product Service OfferingUniversity Information Systems Product Service Offering
University Information Systems Product Service Offering
 
Video Communications and Video Streaming
Video Communications and Video StreamingVideo Communications and Video Streaming
Video Communications and Video Streaming
 
Mixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless NetworksMixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless Networks
 
A Streaming Media Primer
A Streaming Media PrimerA Streaming Media Primer
A Streaming Media Primer
 
Video streaming
Video streamingVideo streaming
Video streaming
 
Streaming Video
Streaming VideoStreaming Video
Streaming Video
 
Video_Streaming
Video_StreamingVideo_Streaming
Video_Streaming
 

Impact of Video Encoding Parameters on Dynamic Video Transcoding

network bandwidth, processor load, streaming video

COMSWARE'06, January 8–12, 2006, New Delhi, India. Copyright © 2006 IEEE.

1. Introduction
Today there is a wide range of devices with varying features such as screen size, color depth, and processing power. In particular, a video transcoding server must be able to address the following scenarios:

1. Switching between different devices: Some ubiquitous computing applications allow a video streaming session to be transferred from a small PDA to a big-screen TV and vice versa. In this case, the transcoding server is required to change the frame size of the video stream dynamically, which in turn changes the network and CPU load on the server. Changes in frame size can be disconcerting to the viewer, so they would not be continuous but would occur only at isolated, discrete moments.
2. Varying network bandwidth at wireless clients: For devices connected in wireless environments, the perceived bandwidth changes continuously due to the varying characteristics of wireless channels, e.g. fading and interference. This should be handled by dynamically changing the transcoding of the content on the server, which increases or decreases the CPU load on the server.

3. Varying content access patterns: Access patterns to Internet web sites are highly unpredictable and can have spikes caused by a particular event such as an important sports match or breaking news. For example, during a crisis a large number of clients will access streaming clips from a news website. This scenario also results in drastic changes in the CPU and network load on the transcoding server.

The above scenarios indicate that a transcoding server is subject to varying network and CPU loads under different circumstances. Since a transcoding server has a fixed amount of computing resources and network bandwidth, it should use its dynamic transcoding capability to balance the required network and CPU resources against the available ones, without disconnecting existing clients or denying access to new clients. Work in [4, 7] also indicates that energy consumption in server farms is becoming more important as the number of servers increases, which makes the energy consumed at the transcoding server for different transcoding parameters an interesting relationship to study. One can design a transcoding control algorithm that optimizes the transcoding server resources while serving the maximum number of clients at an acceptable video stream quality. This requires establishing the relationship between the different transcoding parameters of a video stream and the network, CPU and energy resources of the transcoding server.
In our work, we perform a systematic evaluation of a transcoding server system by measuring the impact of different transcoding parameters such as Q-scale, frame size, and color depth on the CPU, network and energy resource consumption at the transcoding server, and we compare the transcoding of different video content. We also describe the design and implementation of the dynamic video transcoding proxy that we used for this evaluation.
The rest of the paper is organized as follows. Section 2 discusses related work. Section 3 describes the design and implementation of the dynamic video transcoding proxy we have used. Section 4 presents experimental results, which show the effect of different transcoding parameters on the consumption of transcoding server resources. We conclude in Section 5.

2. Related Work
Substantial work has been done in the area of middleware servers. In [5], the authors present a middleware architecture to handle the problems of heterogeneous mobile clients. [3] presents a middleware-oriented solution to the application session handoff problem. Middleware servers have been used to improve scalability in [14]. Seminal work on proxy-based content adaptation is the BARWAN project [9], where the argument for adapting data on the fly was made. A middleware architecture specifically for dynamic video transcoding was presented in [22]. They present a control algorithm that adapts transcoding to the varying network bandwidth of a 3G-network device. Their algorithm relies on an experimental relationship between encoding bit rate and frame rate for a given quantization scale, but lacks data on proxy resource consumption for determining the desired transcoding.
We use the cascaded pixel domain transcoding (CPDT) algorithm that was proposed in [16]. The authors of [2] propose a frequency-domain transcoding algorithm which significantly reduces computational complexity compared to the CPDT approach. [6] proposes a video transcoding algorithm that integrates error control schemes into the transcoding; they present a two-pass video transcoding scheme that allocates bits between source coding and channel coding.
The effect of transcoding on visual quality has also been studied in the literature. A number of studies [11] show that the intuition that sequences with lots of motion require a higher frame rate is wrong. Another study [21] shows that large quantization steps should be avoided. Our work presents experimental measurements of a real transcoding server and can be used in conjunction with these results for designing algorithms for the module that controls the transcoding parameters.
3. Design and Implementation
In this section, we describe the design and implementation of an open-source-based dynamic video transcoding proxy. The design of the video transcoding proxy system leverages the middleware architecture previously used in many transcoding proxy designs. The following is an overview of the transcoding proxy design.

[Figure 1. Transcoding architecture: the content server feeds the transcoding proxy, where a decoder and an encoder (controlled for frame size, Q-scale and color) re-encode the stream for the client device; a feedback controller drives the proxy and may optionally communicate with the client.]

3.1 Design Overview
The design of our video transcoding proxy is illustrated in Figure 1. The system consists of the following three components.

1. The content server provides high-quality video streams to the transcoding proxy. The content server is typically a network of file servers with load-balancing mechanisms. Examples of such content servers are the CNBC News web-cast server or the MSN video server.

2. The transcoding proxy is responsible for performing dynamic transcoding of the video content as per the client preferences and the different transcoding parameters supplied by a control mechanism. The proxy uses the Cascaded Pixel Domain Transcoder (CPDT) scheme initially proposed in [16], which consists of an encoder connected to a decoder. This particular transcoding scheme is flexible since the frame is fully decoded to raw pixels before re-encoding, which makes dynamic changes in transcoding parameters easier. The controller module controls transcoding by changing the frame size, Q-scale (or quantization scale) and color transcoding parameters for the encoder (a minimal sketch of such a parameter set is given after this list). These parameters can be changed using a control algorithm that takes the client device's profile, the user's preferences, the proxy server's resource usage, and the network bandwidth available at the client device into consideration. The controller can have access to the client device's profile and user preferences from a user database, or it can get that information along with the content request. The controller module may get the client's available network bandwidth information directly from the client device or by monitoring network conditions. The dotted line in Figure 1 between the client device and the controller is an optional communication link between those modules. Note that the design of the control logic as well as the control channel communication is a very complex issue and is not addressed in this paper. We are studying the relationship between different transcoding parameters and their impact on transcoding server resource usage, which can be utilized in the design of the transcoding control algorithm.

3. The client device is used by the end user to view the video stream and is characterized by its screen size, color depth, processing power and network connectivity. Laptops, PDAs and cellphones are examples of such client devices.
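As a concrete illustration (not taken from the implementation described in this paper; the class and field names below are hypothetical), the set of dynamically tunable settings that the controller pushes to the proxy could be grouped and range-checked as follows:

from dataclasses import dataclass

@dataclass
class TranscodingParams:
    """Hypothetical container for the three dynamically tunable settings."""
    frame_scale: float = 1.0   # fraction of the original width/height (0 < scale <= 1)
    q_scale: int = 1           # MPEG quantization scale, 1 (best) .. 31 (worst)
    gray_scale: bool = False   # drop chrominance information if True

    def validate(self) -> None:
        # Reject values outside the ranges discussed in Section 3.2.
        if not 0.0 < self.frame_scale <= 1.0:
            raise ValueError("frame_scale must be in (0, 1]")
        if not 1 <= self.q_scale <= 31:
            raise ValueError("q_scale must be between 1 and 31")

# Example: a controller reacting to reduced client bandwidth might request
# half-size frames and a coarser quantization scale.
params = TranscodingParams(frame_scale=0.5, q_scale=10)
params.validate()

A real control algorithm would fill these values from the client profile, user preferences and measured bandwidth, as described above.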
3.2 Transcoding Parameters
The following is a description of the transcoding parameters that can be changed dynamically in our current design.

3.2.1 Frame Size
This is the size of the video frame. It can initially be specified based on the client's screen size. In Section 4, we present measurements of streaming video with different frame sizes; the results show that smaller frame sizes can take up considerably less network bandwidth. If the client is experiencing network congestion while streaming a video, this parameter on the transcoding proxy can be dynamically changed so that the size of the frame is reduced and the client receives continuous streaming, but with a smaller frame size (Figure 2). If the network conditions improve, the frame size can be converted back to the size the session started with. One could extend the proxy further to support handoffs between devices, where dynamic scaling of the frame size would be very useful for devices with different screen sizes. The MPEG standard requires a fresh restart of the video stream in order to change the frame size. To circumvent this limitation we keep the original frame size, but insert a black band around the video content as shown in Figure 2.

[Figure 2. Frame scaling for width and height by 50%.]

3.2.2 Color Depth
The MPEG standard specifies a single pixel format where each pixel is represented in the YUV scheme, where Y is the luminance and U and V specify the chrominance [12]. Because each pixel is always represented with 12 bits (e.g. the 4:2:0 planar format), we expect only minimal performance improvements from changing the video to gray scale.

3.2.3 Q-scale
The Q-scale is a parameter used to compress a given block of a video frame. Each block is an 8 × 8 pixel square to which a DCT (Discrete Cosine Transform) is applied (similar to JPEG encoding). The coefficients $C_{ij}$ of each block are then divided by a quantization matrix $M_{ij}$ and a quantization scale $q$:

$C'_{ij} = \dfrac{C_{ij}}{q \cdot M_{ij}}$    (1)

The encoder quantizes to zero the values of $C'_{ij}$ that are below a certain threshold. Each block is then fed into an RLE module (Run Length Encoder) that does zigzag encoding to achieve better compression. A very high Q-scale gives a "pixelated" look to a frame (Figure 3) due to the loss of high-frequency components in each frame block. The Q-scale value can be changed from 1 to 31, where 1 indicates the highest quality video and 31 indicates the lowest quality video.

[Figure 3. Transcoding operation with a high Q-scale.]
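To illustrate equation (1), the following NumPy sketch (illustrative only, not the proxy's C code) quantizes a toy 8 × 8 block of DCT coefficients with a flat quantization matrix and shows how a larger q drives more coefficients to zero, which is exactly what the zigzag/RLE stage then compresses into short runs:

import numpy as np

def quantize_block(dct_block: np.ndarray, quant_matrix: np.ndarray, q: int) -> np.ndarray:
    """Apply equation (1): divide each DCT coefficient by q * M_ij and round.

    Coefficients whose scaled magnitude falls below 0.5 are rounded to zero,
    producing the long zero runs exploited by the RLE stage.
    """
    return np.round(dct_block / (q * quant_matrix)).astype(int)

# Toy example with random coefficients and a flat quantization matrix.
rng = np.random.default_rng(0)
block = rng.normal(scale=50.0, size=(8, 8))
M = np.ones((8, 8))

for q in (1, 5, 15, 31):
    zeros = int(np.count_nonzero(quantize_block(block, M, q) == 0))
    print(f"q-scale {q:2d}: {zeros}/64 coefficients quantized to zero")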
3.3 Implementation Details
The video transcoding proxy was implemented using Tinyproxy [20] and FFserver [17]. Tinyproxy is a lightweight HTTP proxy server licensed under the GPL. FFserver is a streaming server for both audio and video, licensed under the LGPL. FFserver provides two codec libraries (libavcodec and libavformat) that can be used to encode and decode MPEG video. We modified Tinyproxy and integrated it with the FFmpeg libraries so that it would be able to decode and encode a video stream.
The transcoding process is initiated by the client by sending its device specification, along with the request for a video, to the proxy. If transcoding is required, the proxy begins to fetch the requested video stream, decodes each frame, re-encodes it applying the parameters specified by the client, and streams it to the client device. We implemented Q-scaling, frame scaling, and altering the color depth while re-encoding the frame at the proxy. The transcoding then works as follows: Tinyproxy has one parent process, which spawns multiple HTTP server processes that serve incoming HTTP requests from clients. We modified Tinyproxy so that it additionally spawns a child process which serves as a dynamic transcoding request handler. The controller shown in Figure 1 connects to this request handler on the proxy and can dynamically change the video transcoding parameters while streaming.
The algorithm for the transcoder is shown in the flowchart in Figure 4. If the input frame satisfies the transcoding requirements, it is sent directly to the output buffer without being decoded and re-encoded. Otherwise, the frame is decoded. The codec is then changed by the transcoder if the transcoding parameters have changed since the last frame was decoded. The frame is then re-encoded with the current codec and stored in the output buffer, from where it is streamed to the client.

[Figure 4. Transcoding algorithm: fetch a frame from the input buffer; stop at end of stream; if the frame does not need transcoding, store it directly in the output buffer; otherwise decode it, change the codec if the transcoding parameters have changed, encode it, and store it in the output buffer.]
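The control flow of Figure 4 can be summarized in a few lines of Python. The sketch below only mirrors that flow; the decode/encode callables stand in for the libavcodec calls of the real proxy and are not the actual FFmpeg API:

from typing import Callable, Iterable, List

def transcode_stream(frames: Iterable[bytes],
                     needs_transcoding: Callable[[bytes], bool],
                     decode: Callable[[bytes], bytes],
                     make_encoder: Callable[[dict], Callable[[bytes], bytes]],
                     current_params: Callable[[], dict]) -> List[bytes]:
    """Per-frame loop mirroring the flowchart in Figure 4 (sketch, not the C code)."""
    output: List[bytes] = []
    params = current_params()
    encode = make_encoder(params)              # build the initial encoder context
    for frame in frames:                       # stops when the input stream ends
        if not needs_transcoding(frame):       # frame already matches the target?
            output.append(frame)               # send to the output buffer untouched
            continue
        raw = decode(frame)                    # CPDT: fully decode to raw pixels
        latest = current_params()
        if latest != params:                   # the controller changed a parameter
            encode = make_encoder(latest)      # switch to a new codec context
            params = latest
        output.append(encode(raw))             # re-encode and store for streaming
    return output

# Trivial usage with identity stand-ins, just to show the control flow.
result = transcode_stream(
    frames=[b"frame1", b"frame2"],
    needs_transcoding=lambda f: True,
    decode=lambda f: f,
    make_encoder=lambda p: (lambda raw: raw),
    current_params=lambda: {"q_scale": 10, "frame_scale": 0.5},
)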
4. Measurements
To understand how the transcoding system reacts to different configuration parameters, we set up a testbed and took the following measurements:
1. CPU usage,
2. bandwidth usage, and
3. energy consumption.

4.1 Testbed Setup
For each of the above measurements we used different transcoding parameters (Q-scale, video frame size and gray/color scale). All the measurements were done at the proxy server. At the client side, we used MPlayer [13] to play the video streams. CPU usage was computed as the CPU time spent by the transcoder during the entire video stream. This CPU time has two components: user time and system time. The user time is the CPU time allocated to the main user-level process and all its children, while the system time is the CPU time allocated to kernel processes (i.e., disk I/O, network access) during the user process's execution (we used the C function getrusage() to obtain these values). All CPU measurements presented in this paper refer to the total time (user + system).
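The same user + system accounting can be reproduced with Python's resource module, which wraps the getrusage() call mentioned above (POSIX-only, matching the Linux testbed); the busy loop below is just a stand-in for a transcoding run:

import resource

def cpu_seconds() -> float:
    """Total CPU time (user + system) of this process and its waited-for children."""
    self_usage = resource.getrusage(resource.RUSAGE_SELF)
    child_usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    return (self_usage.ru_utime + self_usage.ru_stime +
            child_usage.ru_utime + child_usage.ru_stime)

before = cpu_seconds()
sum(i * i for i in range(10**6))   # stand-in for transcoding a video stream
print(f"CPU time spent: {cpu_seconds() - before:.2f} s (user + system)")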
We used tcpdump [18] to trace the TCP connections to the server and computed the throughput of these connections using tcptrace [19].
For the energy measurements, we used a laptop with a 1.8 GHz AMD processor and 512 MB of RAM configured as the proxy. Since energy measurements are very sensitive to other processes running in the system at the same time as the transcoder, we turned off all unnecessary processes, leaving only the transcoding proxy server and some kernel processes running. The ACPI [1] system of the laptop allowed us to get readings of the voltage levels and the current drawn from the battery. ACPI can be integrated in the Linux kernel and has a mapping to the /proc file system where the current battery status can be accessed. Since ACPI takes measurements directly from the battery, we unplugged the power supply to avoid its constant recharge. We took periodic measurements of the voltage level and current intensity from the battery as the proxy was processing the video stream. We then computed the energy as follows:

$E(T) = \int_0^T v(t)\, i(t)\, dt \approx \sum_{n=1}^{N} v(n\Delta t)\, i(n\Delta t)\, \Delta t$    (2)

where $v(t)$ and $i(t)$ are the instantaneous voltage and current drawn from the battery, $T$ is the total measurement time, $N$ is the number of samples taken during $T$, and $\Delta t = T/N$.
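Equation (2) is a plain Riemann sum over periodic battery samples. A minimal sketch of that computation is shown below; the voltage and current values are made up for illustration, and reading them from the ACPI /proc interface is not shown:

from typing import Sequence

def energy_joules(voltages_v: Sequence[float],
                  currents_a: Sequence[float],
                  dt_s: float) -> float:
    """Approximate E(T) = integral of v(t)*i(t) dt by the sum in equation (2)."""
    if len(voltages_v) != len(currents_a):
        raise ValueError("need one current sample per voltage sample")
    return sum(v * i for v, i in zip(voltages_v, currents_a)) * dt_s

# Example: hypothetical samples taken every 5 seconds while the proxy transcodes a clip.
volts = [12.3, 12.2, 12.2, 12.1]   # battery voltage readings (V)
amps = [1.9, 2.1, 2.0, 2.2]        # current draw readings (A)
print(f"Estimated energy: {energy_joules(volts, amps, dt_s=5.0):.1f} J")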
For the following measurements we used an MPEG-4 encoded video of length 3.5 minutes, with an initial frame size of 320x240 at 24 frames per second.

4.2 Q-scale Measurements
For the CPU measurements with different Q-scales we used a 4-minute video clip file in AVI format (the video stream was encoded in MPEG-2). We took two measurements for each allowed value of the Q-scale (1-31) and averaged them. The results are shown in Figure 5.

[Figure 5. CPU usage for different values of Q-scale.]

As can be seen from the figure, the total CPU usage has a linear decay for Q-scale < 5 and a descending trend as we go up in the Q-scale. This was expected, since MPEG encoding performs a lossy compression of the video image and the amount of compression/loss is greatly affected by the Q-scale parameter. As the value of the Q-scale increases, the quantized blocks contain smaller DCT components, which eventually are quantized to zero. Since this is followed by an RLE (Run Length Encoding) pass in each block of the image, a sequence of consecutive zeros is greatly compressed, thus reducing the time to encode a sequence of video frames (containing B and P frames). The irregularity of the curves (ups and downs) is due to the fact that we were using a precision of seconds to measure the CPU time, which introduces some level of noise in the measurements (at least a one-second error margin).
The throughput measurements of Figure 6 were taken using the same video file as above, also with two samples per Q-scale.

[Figure 6. Throughput for different Q-scales.]

Figure 6 clearly shows that the throughput drops very fast for Q < 5 and then converges to a stable value as we increase the Q-scale. This was also expected, since as the Q-scale increases, the number of bits necessary to encode the blocks of each video frame is reduced, thus reducing the amount of data to be transmitted over the network.

4.3 Frame Scale Measurements
MPlayer uses AVI decoder functions supplied by the libavcodec library; however, the decoder does not support frame size changes during the playback of the video stream. To overcome this limitation we used an image resize function provided by the libavformat library. This function resizes the video image and inserts black strips around the resized image (in the case of a size reduction), but keeps the original height and width of the video frame (Figure 2). Black blocks in a video frame have a high degree of compression (~100%) due to the MPEG image compression scheme (similar to JPEG). In addition, the constant black blocks in the image take advantage of the MPEG motion prediction algorithm to increase the compression level of the video stream and obtain both CPU and bandwidth savings.
Figures 7 and 8 show the CPU usage (at the proxy) and the throughput (at the client) for different frame scales. As an example, after applying a 50% frame scale to a 640x480 frame, the size becomes 320x240, i.e., the scaling applies to both frame width and height. As observed in Figure 7, the CPU usage increases as we approach the original scale of the frame. Indeed, CPU cycles are saved as the original video frame is shrunk and the black portion of the frame increases. The encoding is faster because of:
1. MPEG motion prediction working in the time domain, and
2. MPEG image compression working in the spatial domain.
The same reasoning applies to the throughput curve in Figure 8. Notice that both the CPU usage and throughput curves drop at 100%. This is because with a 90% scale, the MPEG encoder still has to encode a tiny black strip on the bottom/right sides; this strip is removed when the frame scaling is set to 100%. The black strip is a discontinuity in the spatial domain (high frequencies in the DCT) and needs additional bits to be encoded (in comparison to 100% scaling).

[Figure 7. CPU usage for different frame sizes.]
[Figure 8. Throughput for different frame scales.]
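The resize-and-pad behaviour described above (shrink the picture but keep the original canvas, filling the border with black, as in Figure 2) can be mimicked on a single luminance plane with a short NumPy sketch. The real proxy uses libavformat's resize function, so the nearest-neighbour resize below is only illustrative:

import numpy as np

def scale_into_black_canvas(y_plane: np.ndarray, scale: float) -> np.ndarray:
    """Shrink a luminance plane by `scale` and centre it on a black canvas
    of the original size (nearest-neighbour resize for brevity)."""
    h, w = y_plane.shape
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    rows = (np.arange(new_h) / scale).astype(int)
    cols = (np.arange(new_w) / scale).astype(int)
    shrunk = y_plane[rows][:, cols]                 # nearest-neighbour downscale
    canvas = np.zeros((h, w), dtype=y_plane.dtype)  # black background
    top, left = (h - new_h) // 2, (w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = shrunk
    return canvas

frame = np.full((480, 640), 128, dtype=np.uint8)   # flat grey 640x480 test frame
half = scale_into_black_canvas(frame, 0.5)          # content 320x240, canvas still 640x480
print(half.shape, int((half == 0).mean() * 100), "% of the canvas is black")

The large uniform black area is what the encoder compresses almost for free, which is why both CPU usage and throughput fall as the scale is reduced.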
4.4 Gray Scale Measurements
The MPEG standard uses the YUV scheme instead of the traditional RGB scheme to encode each pixel. Y is the luminance component, while U and V are the blue and red chrominance components, respectively. The most common format is the YUV 4:2:0 planar format, which uses three planes (Y, U, V) to encode a given frame. By using gray scale, we are effectively using only the Y plane, since we discard the color information. However, the MPEG standard requires that all three YUV planes exist in the video stream, which means that each pixel is still represented by 12-bit YUV as in the color case. Most of the image information is in the luminance plane (Y plane). Therefore, the throughput and CPU usage are not significantly improved by using gray scale, which was confirmed by our tests.
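Because the 4:2:0 layout always carries all three planes, converting to gray scale amounts to overwriting the two chroma planes with the neutral value 128 while the buffer size stays exactly the same. The sketch below illustrates this on a raw YUV 4:2:0 frame (illustrative only; the plane sizes follow from the format):

import numpy as np

def to_gray_yuv420(frame: np.ndarray, width: int, height: int) -> np.ndarray:
    """Neutralize the U and V planes of a raw YUV 4:2:0 frame.

    The frame is a flat byte array: Y (w*h bytes) followed by U and V
    (w*h/4 bytes each), so the total size of 1.5 bytes/pixel is unchanged.
    """
    y_size = width * height
    gray = frame.copy()
    gray[y_size:] = 128          # 128 is the "no colour" value for U and V
    return gray

w, h = 320, 240
frame = np.random.default_rng(1).integers(0, 256, size=w * h * 3 // 2, dtype=np.uint8)
gray = to_gray_yuv420(frame, w, h)
print(len(frame) == len(gray), "-> same 12 bits/pixel before and after")

The unchanged buffer size is the reason the gray-scale tests showed no significant CPU or throughput savings.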
4.5 Energy Measurements
We performed several energy measurements at the proxy, under the conditions described previously in the testbed setup. By changing one transcoding parameter at a time, we could effectively identify the impact of each parameter on the overall energy consumption. Moreover, we were not so much interested in observing the absolute values of the measurements as in comparing the energy consumption of different configurations. These measurements can also extend to other systems such as desktops and server farms [7, 4]. Proxy energy consumption is an important parameter in production middleware systems and large data centers, and needs to be minimized as much as possible.

[Figure 9. Energy consumption for different transcoder parameters (Q:30, Q:20, Q:15, Q:10, Q:5, FS:25, FS:50, FS:75, Full).]

Energy was computed according to (2) over a 4-minute video. The frame scale (FS) tests were performed with a Q-scale of 1, and the "Full" test was done without any transcoding at all. Figure 9 shows a behavior similar to that of the CPU usage for different transcoding parameters. Indeed, great savings can be achieved by using frame scale reduction or high Q-scale values.

4.6 Different Video Contents
The impact of the transcoding parameters may also depend on the content of the video streams, such as different textures, colors, and movement. We did experiments with four different types of videos that provide a range of different levels of color, texture and motion:
1. Cartoon: a video clip with flat colors and few textures.
2. News: an interview between a news caster and a reporter; the background behind both is static and there is very little movement in each frame.
3. Sports: a Formula One race with fast-moving cars; there is a lot of movement between frames.
4. Music clip: a video with static parts and parts with lots of movement, and with frames that have flat colors as well as heavy textures.
All the videos were MPEG-4 encoded and each clip was 45 seconds long, with an initial frame size of 640x480 at 30 frames per second. For each video, we compared the performance in terms of CPU and throughput between an encoding with Q-scale=2 and an encoding with Q-scale=31. We expected to observe a decrease in both CPU and throughput usage when going from Q-scale=2 to Q-scale=31, but would this decrease be the same for all videos?

[Figure 10. CPU usage for different videos (News, Cartoon, Music, Sports) at Q:2 and Q:31.]
[Figure 11. Throughput for different videos (News, Cartoon, Music, Sports) at Q:2 and Q:31.]

Figure 10 shows the results for CPU usage: the numbers above the bars represent the percentage of CPU usage of a transcoding with Q-scale=31 compared to the values achieved for Q=2; e.g., a bar with 80% means that transcoding the video with Q-scale=31 takes 80% of the CPU usage of the transcoding with Q=2. Note that the values are approximately the same for all videos (between 72% and 80%), indicating that the effect of varying the encoding parameters is not impacted by the content of the video. A similar reasoning can be applied to Figure 11, where we show the throughput measured at the server for the different videos.

5. Conclusion
We presented the design and implementation of an open-source dynamic video transcoding proxy server based on a three-tier transcoding proxy architecture. The video transcoding proxy provides the flexibility of applying different transcoding control algorithms without modifying the other components of the system. We conducted a systematic evaluation of the impact of the system parameters Q-scale, color depth, and frame size on the CPU usage, bandwidth usage, and energy consumption of the transcoding proxy.
Our measurements show that by reducing the image quality down to a Q-scale of 10, one can dramatically decrease proxy CPU cycles and streaming throughput. Furthermore, video frame size reduction results in a significant decrease of the perceived throughput at the client side; a 50% scaling of both the width and the height of the video frame effectively reduces the throughput to approximately 50%. In addition, the energy measurements show that by using a Q-scale greater than 5, it is possible to reduce the energy consumption at the proxy server by more than 50% in comparison to the no-transcoding scenario. We also studied the impact of transcoding different video content and found that the effect of varying these encoding parameters is not impacted by the content of the video.
We plan to develop an automatic controller module based on our measurement results. The automatic control module would enable us to explore different feedback mechanisms to estimate the client's perceived throughput and experience during the transcoding process; this remains future work.
References
[1] ACPI at Intel. http://developer.intel.com/technology/iapc.
[2] P. A. A. Assuncao and M. Ghanbari. A frequency-domain video transcoder for dynamic bit-rate reduction of MPEG-2 bit streams. IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, no. 8, pp. 953–967, December 1998.
[3] R. Bagrodia, S. Bhattacharyya, F. Cheng, S. Gerding, G. Glazer, R. Guy, Z. Ji, J. Lin, T. Phan, E. Skow, M. Varshney, and G. Zorpas. iMASH: Interactive mobile application session handoff, 2003.
[4] G. Banga, P. Druschel, and J. Mogul. Resource containers: A new facility for resource management in server systems. In Proceedings of the Third Symposium on Operating System Design and Implementation (OSDI '99), February 1999.
[5] A. Bond, M. Gallagher, and J. Indulska. An information model for nomadic environments. In DEXA Workshop, pages 400–405, 1998.
[6] J. Cai and C. W. Chen. A high-performance and low-complexity video transcoding scheme for video streaming over wireless links. In Proceedings of the Wireless Communications and Networking Conference 2002, March 2002.
[7] J. Chase, D. Anderson, P. Thakur, and A. Vahdat. Managing energy and server resources in hosting centers. In Proceedings of the 18th Symposium on Operating Systems Principles (SOSP '01), October 2001.
[8] A. Fox, S. D. Gribble, E. A. Brewer, and E. Amir. Adapting to network and client variability via on-demand dynamic distillation. In Proceedings of the Seventh International Conference on Architectural Support for Programming Languages and Operating Systems, pages 160–170, Cambridge, MA, October 1996.
[9] A. Fox, S. D. Gribble, Y. Chawathe, E. A. Brewer, and P. Gauthier. Cluster-based scalable network services. In Proceedings of the 16th Symposium on Operating Systems Principles (SOSP-97), volume 31,5 of Operating Systems Review, pages 78–91, New York, Oct. 5–8, 1997. ACM Press.
[10] R. Han, P. Bhagwat, R. LaMaire, T. Mummert, V. Perret, and J. Rubas. Dynamic adaptation in an image transcoding proxy for mobile WWW browsing. IEEE Personal Communications, 5(6):8–17, December 1998.
[11] J. D. McCarthy, M. A. Sasse, and D. Miras. Sharp or smooth? Comparing the effects of quantization vs. frame rate for streamed video. In CHI '04: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 535–542, New York, NY, USA, 2004. ACM Press.
[12] MPEG Home Page. http://www.chiariglione.org/mpeg/.
[13] MPlayer. http://www.mplayerhq.hu.
[14] T. Phan, R. G. Guy, and R. Bagrodia. A scalable, distributed middleware service architecture to support mobile internet applications. In Wireless Mobile Internet, pages 27–33, 2001.
[15] A. Singh, A. Trivedi, K. Ramamritham, and P. J. Shenoy. PTC: Proxies that transcode and cache in heterogeneous web client environments. World Wide Web, 7(1):7–28, 2004.
[16] H. Sun, W. Kwok, and J. Zdepski. Architectures for MPEG compressed bitstream scaling. IEEE Transactions on Circuits and Systems for Video Technology, April 1996.
[17] FFmpeg. http://ffmpeg.sourceforge.net.
[18] tcpdump / libpcap. http://www.tcpdump.org/.
[19] tcptrace. http://www.tcptrace.org/.
[20] Tinyproxy. http://tinyproxy.sourceforge.net/.
[21] D. Wang, F. Speranza, A. Vincent, T. Martin, and P. Blanchfield. Toward optimal rate control: A study of the impact of spatial resolution, frame rate, and quantization on subjective video quality and bit rate. In VCIP, pages 198–209, 2003.
[22] A. Warabino, S. Ota, D. Morikawa, M. Ohashi, H. Nakamura, H. Iwashita, and F. Watanabe. Video transcoding proxy for 3G wireless mobile internet access. IEEE Communications Magazine, pages 66–71, October 2000.