2. Target: Distributed Load
"To provide a cost-effective and easy-to-use
feature in order to distribute load across
multiple servers, located in different
geographic regions, according to their
availability, resources, and the client's proximity"
3. Sample Architecture Diagram
In the left part of the diagram, the WebStream Clients are fed with
live and on-demand content. The WebStream platform, depicted
in the central part of the diagram, provides the core features, such
as source streaming and platform management, including:
● a satellite polling component.
● a URL-redirection algorithm based on client location and
satellite subsystem availability.
4. Location Components
Each location features a subsystem made up of the following
components:
● Helix Universal Server.
● Helix Session Manager.
● Middleware Adapter.
Each subsystem communicates two ways with the core
system:
● from core network to satellite network:
○ on-demand and live streaming.
○ server remote-management.
○ alerting and performance monitoring.
● from satellite network to core network:
○ authorization and access control.
○ URL generation and redirection.
○ logging and reporting data.
5. Helix Satellite Servers
The satellite Helix Servers are configured as receivers of the core Helix Servers, which
act as transmitters. Communication between the core and satellite servers uses a single
unicast stream for each content item (live or on-demand).
The satellite Helix Servers also perform the required re-packetization (e.g. from MP4
to Flash, or to iOS) for both on-demand and live streaming.
The above process allows substantial bandwidth optimization:
● on-demand clips are cached locally, therefore ideally only one transmission of an
on-demand clip takes place between the core and satellite subsystems,
independently of how many clients request that clip (e.g. 10 clients requesting
clip xyz generate traffic for just 1 clip between the core and satellite networks).
Additionally, future requests for the same clip are served directly from the cache
(when available) without further bandwidth usage.
● live clips are broadcast from the satellite servers; between the core and satellite
networks only one flow is activated, independently of the number of clients (e.g.
10 clients requesting live stream xyz generate traffic for just 1 live stream
between the core and satellite networks).
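The on-demand case above can be sketched as a simple cache-miss rule: a satellite subsystem fetches a clip over the core link only the first time it is requested. The class and method names below are illustrative, not the actual Helix implementation:

```python
class SatelliteCache:
    """Hypothetical sketch of the local caching behaviour described above."""

    def __init__(self):
        self._cache = {}          # clip id -> cached clip data
        self.core_transfers = 0   # transfers on the core <-> satellite link

    def _fetch_from_core(self, clip_id):
        # A cache miss costs exactly one core-to-satellite transmission.
        self.core_transfers += 1
        return f"<data for {clip_id}>"

    def serve(self, clip_id):
        if clip_id not in self._cache:                    # miss: fetch once
            self._cache[clip_id] = self._fetch_from_core(clip_id)
        return self._cache[clip_id]                       # hit: served locally

satellite = SatelliteCache()
for _ in range(10):                # 10 clients request the same clip...
    satellite.serve("clip-xyz")
print(satellite.core_transfers)    # ...but only one core transfer occurs
```

This mirrors the "10 clients, 1 clip of traffic" example: the number of core-link transfers depends on the number of distinct clips, not the number of clients.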
7. Application Workflow
The previous diagram depicts the typical workflow when a user accesses streaming
content from a web front-end. The web front-end operates server-side to request the
streaming URLs from the middleware; the request also includes the requesting client's
IP address.
The middleware performs, in sequence, the following steps:
● verify that the requested clip (on-demand or live) exists.
● verify that the access rules for the clip are satisfied.
● run the redirection decision-maker algorithm to determine the list of
potential URLs ordered by priority.
● exclude those URLs that might be unavailable (according to the data
collected by the satellite polling component from the middleware
adapters installed at the satellite locations).
● return the resulting URLs to the requesting client.
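The check sequence above can be sketched as a small pipeline. Every name and signature here is an assumption for illustration; the real middleware interface is not described in this document:

```python
def resolve_stream_urls(clip_id, client_ip, catalog, access_rules,
                        decision_maker, available_servers):
    """Hypothetical sketch of the middleware's sequential checks."""
    # 1. The requested clip (on-demand or live) must exist.
    if clip_id not in catalog:
        raise LookupError(f"unknown clip: {clip_id}")
    # 2. Access rules for the clip must be satisfied.
    if not access_rules(clip_id, client_ip):
        raise PermissionError(f"access denied for clip: {clip_id}")
    # 3. Run the decision-maker to get URLs ordered by priority.
    urls = decision_maker(client_ip)
    # 4. Exclude servers reported unavailable by the polling component.
    urls = [u for u in urls if u in available_servers]
    # 5. Return the resulting URLs to the requesting client.
    return urls

urls = resolve_stream_urls(
    "clip-xyz", "10.0.0.7",
    catalog={"clip-xyz"},
    access_rules=lambda clip, ip: True,
    decision_maker=lambda ip: ["satellite-1.example.org",
                               "satellite-2.example.org",
                               "core.example.org"],
    available_servers={"satellite-1.example.org", "core.example.org"})
print(urls)   # satellite-2 dropped as unavailable; priority order kept
```

Note that the availability filter preserves the priority ordering produced by the decision-maker, so the client always tries the highest-priority reachable server first.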
8. Application Workflow
The web front-end loads the embedded media player with the provided URLs, which
the player then requests.
The media player contacts the satellite servers identified by the URLs. The requests
are handled by the satellite Helix Servers, which forward them to the Session
Manager, which in turn sends a Session Start request to the middleware.
The middleware performs access checks and grants or denies access to the clip
accordingly.
When access is granted, the media player starts playing the streaming URLs from the
local system. When the player is stopped, the satellite Helix Server forwards the
Session Stop event via the Session Manager to the middleware, which records the log
for report generation.
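The Session Start/Stop exchange above can be sketched as follows. The class and method names are illustrative placeholders, not the actual Helix Session Manager API:

```python
import time

class Middleware:
    """Hypothetical middleware stub: authorizes sessions and records logs."""

    def __init__(self):
        self.logs = []   # (clip id, client ip, session duration) entries

    def authorize(self, clip_id, client_ip):
        return True      # grant everything in this sketch

    def record_log(self, clip_id, client_ip, duration):
        self.logs.append((clip_id, client_ip, duration))

class SessionManager:
    """Hypothetical sketch of the Session Start/Stop flow described above."""

    def __init__(self, middleware):
        self.middleware = middleware
        self.sessions = {}   # session id -> (clip id, client ip, start time)

    def start(self, session_id, clip_id, client_ip):
        # Forward Session Start to the middleware, which performs the
        # access check and grants or denies access accordingly.
        if not self.middleware.authorize(clip_id, client_ip):
            return False
        self.sessions[session_id] = (clip_id, client_ip, time.time())
        return True

    def stop(self, session_id):
        # Forward Session Stop to the middleware, which records the log
        # entry used for report generation.
        clip_id, client_ip, started = self.sessions.pop(session_id)
        self.middleware.record_log(clip_id, client_ip, time.time() - started)

mw = Middleware()
sm = SessionManager(mw)
sm.start("s1", "clip-xyz", "10.0.0.7")
sm.stop("s1")
print(len(mw.logs))   # one log entry recorded for report generation
```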
9. Streamer Decision Algorithm
In order to choose the best available server for a client, the middleware will be
upgraded to support configuration of decision rules. Rules are based on network
addresses and servers, so that each network will have a list of servers defined and
configured along with their priority, e.g.:
● subnet a.b.c.d/16
○ priority 10: satellite-1.example.org
○ priority 20: satellite-2.example.org
○ priority 100: core.example.org
● subnet e.f.g.h/16
○ priority 10: satellite-2.example.org
○ priority 20: satellite-1.example.org
○ priority 100: core.example.org
● default (for undefined subnets): core.example.org
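A minimal sketch of these decision rules using Python's standard `ipaddress` module is shown below. The subnet values are hypothetical placeholders (the document's a.b.c.d/16 and e.f.g.h/16 are not concrete addresses); only the server hostnames and priorities follow the example configuration:

```python
import ipaddress

# Per-subnet rule table: (priority, server) pairs; a lower priority
# value means a more preferred server. Subnets here are placeholders.
RULES = {
    ipaddress.ip_network("10.1.0.0/16"): [       # stands in for a.b.c.d/16
        (10, "satellite-1.example.org"),
        (20, "satellite-2.example.org"),
        (100, "core.example.org"),
    ],
    ipaddress.ip_network("10.2.0.0/16"): [       # stands in for e.f.g.h/16
        (10, "satellite-2.example.org"),
        (20, "satellite-1.example.org"),
        (100, "core.example.org"),
    ],
}
DEFAULT = ["core.example.org"]                   # for undefined subnets

def servers_for(client_ip):
    """Return the priority-ordered server list for a client address."""
    addr = ipaddress.ip_address(client_ip)
    for net, entries in RULES.items():
        if addr in net:
            return [host for _, host in sorted(entries)]
    return DEFAULT

print(servers_for("10.1.4.2"))    # satellite-1 preferred for this subnet
print(servers_for("192.0.2.9"))   # undefined subnet falls back to core
```

The middleware would feed this ordered list into the availability filter described in the Application Workflow, so the client receives only reachable servers, highest priority first.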