In this presentation, we’ll take a look at some of the challenges that the emerging IoT network creates for validating the quality of end-to-end services that are offered across traditional and IoT networks. We’ll look at some of the particular characteristics of the emerging IoT network, and how those characteristics create additional challenges for the already difficult task of validating end-to-end service quality. Then we’ll see why traditional approaches to service validation typically fall short for complex emerging technologies like IoT, and lastly cover some best practices for creating a validation framework that can effectively address the challenges of validating end-to-end IoT services.
Characterizing and Validating QoS in the Emerging IoT Network
QualiSystems Proprietary & Confidential
Friday, April 17, 2015
Characterizing and Validating QoS in the
Emerging IoT Network
Hans Ashlock, Technical Marketing
Global Software Company
• Established 2004, privately held
• North America HQ: Santa Clara, CA
• R&D Center: Tel Aviv, Israel
Market-leading supplier of automation solutions for:
• DevOps cloud and network orchestration
• Physical and virtual lab management
• Test and continuous integration automation
Mature, proven technology:
• Hundreds of customer deployments
• Millions of infrastructure elements managed
• $Billions in infrastructure managed
Quali Company Overview
State of IoT Validation/Test Today
(Architecture/Framework · Certification Program · Compliance Testing · Interop Testing · Third-Party Validation Labs · Formal Testbeds)
• AllSeen Alliance – AllJoyn; Certification: AllJoyn Self-Guided Certification; Compliance, Interop, Third-Party Validation Labs: Mid 2015
• Thread Group – Thread / 6LoWPAN; Certification, Compliance, Interop, Third-Party Validation Labs: Mid 2015 (UL & Granite River)
• OIC – IoTivity; Certification: IoTivity, Mid 2015
• IIC – Reference Architecture; Formal Testbeds: Smart Tooling, Microgrids
• IPSO – Interop Testing: Interop showcases
• IEEE – IEEE P2413 in development
What is the state of IoT service validation today?
[Maturity chart: stages of test/validation maturity plotted against industry maturity – Incubation / R&D Labs → Standards / Architectural Specs → Certification Programs → Formalized Test and Interop → Vendor and Provider Solution Ecosystems – with Traditional, Emerging (SDN/NFV), and IoT marked along the axis]
IoT Use Case Complexity
• Isolated / Non-Critical: Smart Thermostat
• Isolated / Critical: Vehicle Control System
• Ubiquitous / Non-Critical: Traffic Monitoring, Smart Home, Smart Retail
• Ubiquitous / Critical: Pervasive Health Care, Structural Monitoring, Self-Driving Car
Where We’re Headed…
Explosion of Device Count and Data
• 50 billion connected devices by 2020
Network Heterogeneity
• Technologies, protocols, devices, architectures, local, global
Dynamic Service Composition
• Creating services on the fly
• Unpredictable nature of underlying network
Compound Service Composition
• Repurposing things for different services
• Multiple applications co-existing
Nodes function as end-points and routers
• QoS must account for demand of intermediary nodes
Emerging IoT Network Challenges
VLANS?
Autonomous
• IoT network isolated as “smart network”
• Single gateway “smart connector”
Ubiquitous
• IoT devices integrated into internet
• Multi-hop, multi-access (radio), shifting topology
Application Overlay
• NFV enables overlay network
• In-network data processing reduces congestion
Service Oriented Network
• Network functions as services
• Networks composed on the fly (literally)
IoT Network Architecture Diversity
Traditional Validation Approaches
Requirements
• Resource pool
• Modularity
• Reusability
• Scalability
Script based
• No inherent notion of the infrastructure resource pool
• Often a multi-step list of commands
• Often poorly documented and unusable beyond the original developer
• Often hard coded; not architected to scale
Inventory objects:
• Actual inventory including physical, virtual, NFV, apps, tools, subnets, etc.
• Abstracted meta-model
• Enables reservation/multi-tenancy
Provisioning objects
• Resource level interfaces
Process task objects
• Test automation (functional, sanity, regression)
• Continuous Integration
Build and Maintain an Object Layer
Pre-packaged libraries (of course)
Independent interface creation:
• Integrate and “objectize” APIs
• Utilize existing scripts (Tcl, Python, etc.)— no “starting from zero”
• Capture and objectize CLI, SNMP, terminal interactions
Make them small and maintainable
Overcomes interfacing obstacles
Removes roadmap dependencies
Helps integrate legacy/special infrastructure
OOTB & DIY Integration Approach
Visual environment/topology modelling
• Inventory-based modeling
• Enables reservation of entire environments
• Model any arbitrary network topology
• Abstract design to maximize utilization
• Dramatically scale service creation
Visual workflow authoring
• Hide syntax from users
• Abstract service test flows
• Continuous integration, test automation
Self-Service, Transparent End User Access
• End user modelling of IoT validation environment
• Repeatability
• Multi-tenant
De-Couple Modeling from Automation
Our conception of IoT has changed dramatically since its inception. What we once conceived as meshed networks of passive RFID chips tagged on everything has grown to incorporate multiple dimensions, including both sensor networks that provide real-world intelligence and networks of goal-oriented distributed smart objects, with services that span from local meshed networks across global interconnections via the internet. This is important to acknowledge because we need to recognize that the scope is in fact so huge that IoT can look different depending on how we approach it; but ultimately, from a characterization and validation point of view, we’ll have to contend with the whole elephant.
As a blogger recently described: “To see where IoT is headed, we need to imagine a world of disposable endpoints, scattered like grains of sand. Like $5 Internet-connected LED blinky-wands at a concert synchronized by IP and turning a crowd into a giant, human Jumbotron.” This is an apt and helpful description because it gives us a sense of the absurd scale of IoT.
We also want to clarify that quality of service refers not only to the traditional notion of network metrics (latency, jitter, etc.) but also to the true end-to-end quality that applies at the application level, which must account for device and environment constraints like energy, speed, quality, resolution, and efficiency. This is important because to fully understand the complexity and challenges of the emerging IoT network, we must consider the entire end-to-end service and its applications. IoT will bring challenges at both the network-metric level and the end-to-end platform level.
Here is a quick snapshot of where we stand today with regard to IoT service validation and characterization, using a sample of the primary IoT industry initiatives mapped against key test and validation milestones. This does not include the pervasive academic work that continues to thrive, because it is progress at the industry level that marks the real progression of the technology.
AllSeen / AllJoyn – The Qualcomm-led consortium currently has an active self-certification program in place for validating against a single DUT for compliance with the AllJoyn IoT framework API; it’s a compliance test only. AllSeen has announced that compliance, interoperability, and third-party lab validation will be in place by September 2015.
Thread – Google’s Nest-led initiative does not have any certification in place, but plans to have certification, compliance, and interoperability initiatives in place by mid 2015, as well as a joint third-party lab with UL and Granite River Labs.
Open Interconnect Consortium / IoTivity – The OIC group plans to have certification in place by mid 2015, however it is unclear what their plans are for interoperability and third-party labs.
Industrial Internet Consortium – The IIC focuses on developing a reference architecture, use cases, and test beds (rather than focusing on standards). The use cases are specific technology case studies that define requirements, performance, QoS, etc., which will feed the requirements of the reference architecture. Test beds will function to verify functionality and interoperability against a specific industry use case in real-world conditions. Two formal test beds were defined as of Feb 2015: Track and Trace (smart tooling) and Microgrids; these are not shared open labs, but closed experiments with participating organizations (Bosch, Cisco, NI, Tech Mahindra, Southern California Edison, etc.). It is interesting to note that the OIC is entering into a partnership with the IIC, in which the IIC will ensure that the IoTivity framework complies with the IIC’s findings and architecture requirements.
IPSO – IPSO is not pushing a standard or framework, but just promoting the IP protocol for smart devices; I added them only to highlight that they’re actively running interop tests and plugfests, often with the partnership of ETSI, so this is significant in terms of moving the technology forward.
Lastly IEEE is currently developing IEEE P2413, which defines an Architectural Framework for the IoT.
IoT is clearly in the emerging, adolescent growing-pains stage, but we are in fact seeing real industry activity. Here we highlight that emerging technology adoption tends to follow a general maturity process with regard to test and validation.
The first stage is the development of incubation and R&D labs, which tend to be purely academic, or academic/industry partnerships that are academic in character.
The second stage is the beginning of the development of standards and specifications, and often, as with IoT, we’ll see a horse race with industry out in front of standards bodies like the IEEE in order to get a head start. That’s definitely what we’re seeing.
Next is the development of certification programs, which typically are offered by vendors and alliances to promote product and market consolidation.
As standards and frameworks become more mature formalized test and interop programs are put in place, with a sure sign of market maturity being that third-party validation labs and interoperability events are utilized. For example, ONF has an entire ecosystem of certification labs.
Now – as the market really begins to take shape we’ll see vendors and providers build out partner ecosystems for developing solutions on… for example, HP and Cisco have both created self-service labs for partner solution development on actual real-world end-to-end hardware; service providers like AT&T and Comcast have initiatives like this as well.
Now the interesting thing to note here is that the incubation, test, interop, and provider solution initiatives will all benefit from next generation self-service validation approaches like these to really accelerate adoption; and I’ll talk about this later.
So here we’ve mapped IoT’s progress, and we can see that it’s still in its initial phases of R&D incubation and architectural development; but that means there is a lot of opportunity to take an approach to service validation that will accelerate the adoption of IoT technologies.
As mentioned, the IoT of tomorrow is not the IoT we see today. To give a sense of the complexity of tomorrow’s IoT network, we’ll look at its evolution across two axes: isolated vs. ubiquitous and non-critical vs. critical. The most complex and most challenging services to validate and characterize will be those that are ubiquitous and critical.
Right now, the Internet of Things is really just traditional products repurposed to connect to the cloud; the thermostat has its cloud, the refrigerator has its cloud. These systems are isolated and non-critical.
Now, an example of a more complex, critical system might be vehicle control. Lots of *things* – sensors, systems, etc. – but isolated, so the challenge of ensuring the required QoS is an isolated, relatively static engineering problem.
Progressing further, a ubiquitous non-critical system might be a traffic monitoring system, smart retail, or a smart home, where multiple systems and devices are interconnected with each other and via the internet to enable dynamic, goal-oriented services and applications.
Lastly, the ubiquitous and mission-critical application is the most complex and challenging. Consider the application of pervasive healthcare. An at-home patient fall-detection application that alerts caregivers within a certain amount of time needs to operate at a certain level of reliability, and it needs to dynamically adapt and compose the service based on where the patient is. As the patient moves through the home and into other buildings, the service needs to be dynamically composed of the appropriate devices – perhaps accelerometers, ceiling cameras, floor sensors, etc. – and meet both the functional and QoS requirements. And this all needs to happen on the fly.
Or the ultimate example of the self-driving car – which is the incorporation of the vehicle control system, traffic monitoring, and other systems all within a constantly moving system. How do we ensure and validate the required Quality of Service for these kinds of systems?
So thinking in these terms let’s articulate some of the specific characteristics of emerging IoT that pose a challenge:
First the IoT will constitute an explosion of devices and data
Second, services across both traditional and IoT networks will be extremely heterogeneous, traversing multiple protocols, technologies, network architectures, and devices.
Third, IoT services will be dynamic by nature. The arrival of a customer in a store, the initiation of an emergency response, the mobile nature of subscribers, and the unpredictable nature of the underlying network, which changes and morphs as its constituent sensors and things constantly move, will all pose significant challenges to ensuring service quality.
Fourth, IoT services will be composed of other services and multiple services and applications will co-exist simultaneously.
Fifth, sensor network nodes will have to function as both end-points and routers, so QoS must account for the requirements of intermediary nodes, since those intermediary nodes may in fact be functioning as their own end points for other services and applications.
These five characteristics are unique to the emerging IoT network, and represent a new and distinct challenge to the already difficult task of ensuring consistent and reliable service quality.
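The fifth point in particular lends itself to a concrete sketch. Below is a minimal, illustrative Python model (the hop fields and numbers are invented for illustration, not from this talk) of why an end-to-end QoS budget must account for intermediary nodes: each hop contributes link latency plus a queuing delay that grows with the load the node is carrying for other services.

```python
# Illustrative sketch: an end-to-end latency budget across a multi-hop IoT
# path, where intermediary nodes add queuing delay because they also act as
# endpoints and routers for other applications. All names/numbers are invented.

def path_latency_ms(hops):
    """Sum per-hop link latency plus each node's load-dependent queuing delay."""
    total = 0.0
    for hop in hops:
        total += hop["link_ms"]                # transmission latency of the link
        total += hop["node_ms"] * hop["load"]  # queuing delay grows with node load
    return total

def meets_qos(hops, budget_ms):
    return path_latency_ms(hops) <= budget_ms

path = [
    {"link_ms": 5.0, "node_ms": 2.0, "load": 1.0},   # sensor -> relay
    {"link_ms": 8.0, "node_ms": 2.0, "load": 3.0},   # relay also routing 3 services
    {"link_ms": 20.0, "node_ms": 1.0, "load": 1.0},  # gateway -> cloud
]

print(path_latency_ms(path))   # 42.0
print(meets_qos(path, 50.0))   # True
```

The point of the sketch: raise the middle relay's `load` (because another application starts using it as an endpoint) and the same path can silently blow the budget, even though no link changed.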
To emphasize the level of complexity of the technologies, transport mechanisms, and protocols involved in IoT, you can get a sense of the vast matrix of characterization and validation that is required to ensure the quality of an end-to-end service. Today’s networks are relatively static and fixed compared to the IoT of tomorrow, and even today service providers, equipment manufacturers, and even enterprises are having to invest heavily in their approaches to validating services, like, for example, Triple Play cable services. The infographic here gives a good idea of the range of technologies involved. Here is a brief description of some of the primary protocols that are vying for space within the emerging IoT ecosystem:
MQTT - Is primarily implemented in remote monitoring applications such as energy use and equipment maintenance. Facebook, for example, is a well-known user of MQTT for its Facebook Messenger application as it is able to function with limited battery power and data bandwidth
CoAP - Used by IoTivity; an application layer protocol for simple devices to communicate over internet (IP); eg. WSN nodes; multi-cast support, low overhead, and simplicity
DDS - Protocol enables device-to-device communications through transmitting data collected. This type of communication is typically used for high-performance systems such as medical devices, transportation, smart cities, and military devices that require instant connectivity. NASA, for example, has used DDS middleware to support human-to-robot communications from earth to space
XMPP - Serves for person-to-person communication, enabling personal control of IoT-enabled devices through users’ smartphones. It is generally used for consumer devices, and applications such as Google Talk have used the XMPP protocol
AMQP - Is a protocol that facilitates server-to-server communication and enables a secure and reliable connection for control or analysis of data collected. AMQP was developed in the banking industry and is used most often in business messaging to send messages between servers using a tracking mechanism that ensures secure delivery
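To make one of these concrete: MQTT routes publications through hierarchical topics, and subscribers use `+` (single-level) and `#` (multi-level) wildcards. The following is a minimal, self-contained sketch of that matching rule, to illustrate the semantics only; it is not a real MQTT client.

```python
# Sketch of MQTT-style topic filter matching ('+' matches exactly one level,
# '#' matches the remainder of the topic, including the parent level).

def topic_matches(subscription, topic):
    sub = subscription.split("/")
    top = topic.split("/")
    for i, part in enumerate(sub):
        if part == "#":                      # '#' swallows the rest of the topic
            return True
        if i >= len(top):                    # topic ran out of levels
            return False
        if part != "+" and part != top[i]:   # '+' matches any single level
            return False
    return len(sub) == len(top)              # no trailing unmatched levels

print(topic_matches("home/+/temperature", "home/kitchen/temperature"))  # True
print(topic_matches("home/#", "home/kitchen/humidity"))                 # True
print(topic_matches("home/+/temperature", "home/kitchen/humidity"))     # False
```

This hierarchy is one reason MQTT suits constrained monitoring networks: a broker can fan out one publication to many subscribers without the publisher knowing who they are.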
We also want to consider the various kinds of network architectures that may come into play in the emerging IoT network. Again – this is part of the complexity and challenge of validating end-to-end services which may span multiple types of networks.
The first is the autonomous network, where the IoT network is relatively isolated and connected to the internet via a single gateway and seen as a single entity.
The second is the ubiquitous architecture in which individual IoT devices are integrated into the internet and end-to-end networks are composed of multi-hop, multi-access subnets and where the underlying topology of the network or portions of the network may be constantly shifting and changing. Validating and simulating this kind of scenario is extremely challenging because it requires a high level of sophistication with provisioning automation.
The next is the application overlay approach, which will likely be employed throughout the emerging IoT network. Here, network function virtualization enables creating an overlay network that also performs in-network data processing to, for example, reduce congestion. Take as an example a traffic monitoring system: the data in this kind of IoT network will normally be distributed across the system as a whole, but data-sink events like a traffic jam will create an exponential increase in the data feed and possibly create rapid network congestion. An application overlay network that can adjust and perform in-network processing to limit the feed to a single cluster node or group of cluster nodes can dynamically adapt to these kinds of situations.
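As an illustrative sketch (the field names are hypothetical, not from this talk), in-network processing at a cluster node can collapse a burst of raw sensor readings into a single aggregate record before forwarding upstream, turning a data-sink event into a constant per-window feed:

```python
# Sketch of in-network aggregation at an overlay cluster head: during a
# data-sink event (e.g. a traffic jam), forward one summary per time window
# instead of every raw reading.

def aggregate_window(readings):
    """Collapse a burst of per-sensor readings into a single summary record."""
    speeds = [r["speed_kmh"] for r in readings]
    return {
        "sensors": len(readings),
        "min_kmh": min(speeds),
        "avg_kmh": sum(speeds) / len(speeds),
    }

burst = [{"speed_kmh": s} for s in (12, 8, 10, 6)]  # jammed road segment
summary = aggregate_window(burst)
print(summary)  # 4 raw readings reduced to 1 record upstream
```

The upstream feed now grows with the number of cluster heads, not with the number of sensors in the congested area.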
And lastly, the service oriented network approach sees the emerging IoT network as an opportunity to develop new network architectures that treat network functions as services, and in which networks – meaning the entire network stack – is literally composed on the fly.
Now that we have a sense of the complexity of IoT, let’s take a look at some of the key requirements for validating a service across a complex IoT network, and why they might cause a traditional validation approach to fail. Typically, network services are validated using test benches and validation automation that is fundamentally script based.
The first requirement is having access to the massive pool of diverse infrastructure resources we mentioned; this includes applications, services, protocols, and devices.
Second, because of the dynamic, and heterogeneous nature of the emerging IoT network, an automation framework needs to be modular - able to handle a variety of environments, combinations of resources, and adapt to changes in requirements in an agile fashion.
Third, because of the highly customized and composed nature of services across an IoT network, a validation framework needs to have a repository of reusable components.
Lastly, because of the vast number of devices and amount of data that typifies an IoT network, a validation framework needs to be able to easily scale to adapt to real-world scale-out situations.
If we look at how more traditional script based approaches line up with these requirements we see that they are likely to fall short.
They tend not to have intrinsic awareness of the resource pool,
they are most often not modular but rather based on multi-step lists of commands, and
they’re often poorly documented and unusable by anyone beyond the original developer.
(4) They often have no underlying architectural design but rather consist of performance scripts, so scaling is difficult if not impossible.
So if our goal is validating end to end service quality with traditional automation approaches, we’ll find that, like this poster says, it is a race that has no finish line, so technically it’s more like a death march.
So how do we break out of a death march?
Here we’ll discuss three key principles that we’ve come to advocate for achieving agile and effective service validation on complex network topologies.
The first principle is to build and maintain an object layer.
This means capturing all automation elements such as the inventory and interfaces to network infrastructure components, provisioning actions such as spinning up a service or NFV, and testing tasks for validating service quality, as limited scope, reusable, building block objects.
When we say limited scope, we mean that one object may simply be to log into a particular vendor’s equipment. Another object might be to run a ping test. Another object might be to load a particular OS image. Another object might be to bring up a virtual machine in a hypervisor. Because they are small in scope, they are easy to maintain, update, and make new versions of.
These objects are parameterized, given searchable attributes, and live in an organized library, which promotes a high level of re-use. Objects can be assembled into higher-order objects. For example, VMs, virtual storage devices, and SAN switch ports can be assembled as a test topology. Further, objects can be structured to create abstractions of underlying resources, allowing resources to be interchangeable.
Now we’ll come back to scripting. An object-oriented architecture is the ideal way to leverage scripting. TCL, Python, and even Puppet can be used to create the objects here, but rather than the commands living in long, hard to maintain and difficult to reuse scripts, they live as easy to maintain and highly reusable objects.
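A minimal sketch of what such an object layer could look like (class and method names here are purely illustrative, not QualiSystems’ actual API): each automation element is a small, parameterized, reusable object, and higher-order objects are assembled from the building blocks.

```python
# Illustrative object layer: limited-scope building blocks with searchable
# attributes, composable into higher-order automation objects.

class AutomationObject:
    def __init__(self, name, **attrs):
        self.name = name
        self.attrs = attrs              # parameterized, searchable attributes
    def run(self, log):
        raise NotImplementedError

class LoadImage(AutomationObject):      # small scope: load one OS image
    def run(self, log):
        log.append(f"load {self.attrs['image']}")
        return log

class PingTest(AutomationObject):       # small scope: run one ping test
    def run(self, log):
        log.append(f"ping {self.attrs['target']}")
        return log

class Sequence(AutomationObject):
    """Higher-order object assembled from smaller building blocks."""
    def __init__(self, name, steps):
        super().__init__(name)
        self.steps = steps
    def run(self, log):
        for step in self.steps:
            step.run(log)
        return log

sanity = Sequence("sanity", [LoadImage("os", image="v2.1"),
                             PingTest("reach", target="gw")])
print(sanity.run([]))  # ['load v2.1', 'ping gw']
```

Because each block is tiny and parameterized, swapping vendors or images means changing attributes, not rewriting a monolithic script.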
The second principle is to pursue a balanced OOTB and DIY approach to automation. Often organizations find themselves having to choose between one or the other - either “build it all myself” or buy an OOTB solution. The problem with the former is that organizations like service providers should be focusing on what they do best, not on building automation frameworks, and the problem with the latter is that OOTB solutions invariably lead to the siloed nature of automation tools that is so common today.
Ideally you want to be able to independently create interfaces with APIs, and common equipment/service interactions like SNMP, as well as existing scripts, and create reusable automation objects around these integrations, but also leverage any out of the box libraries and integrations that are offered by tools and objectize those as well into the common automation object library.
Finding the right balance helps remove product roadmap dependencies, and helps integrate with legacy/special infrastructure.
Lastly, what we want to be able to do is model the test bed environment using resources that have been modelled in a generic way, so that a single test flow or set of test flows can consume and operate within those environments without having to be rewritten.
So the third best practice that we advocate is to de-couple the modelling of the test bed from the test flow and use an object based model to allow the test flow to reference the resources in the test bed. For example, a model based structure can allow different types of IoT resources to be considered part of a common class with a common interface and attributes so that the test flow can refer to them in a generic fashion.
Now once we de-couple the modelling of our IoT infrastructure from the automation, and if we’ve “objectized” our automation components to a granular enough scale, we can begin to scale and change our test automation and provisioning just by changing the infrastructure model.
And ideally you want both sets of tools visual based.
And then for automation authoring, you really want any developers you have to be creating the reusable automation components we talked about. To be “objectizing” the resource interactions, existing scripts, etc. and serving those up to non-programmers who can take those components and create higher level automation flows… hides syntax, allows the ability to scale out service creation, as well as higher level orchestration like test automation and continuous validation…
And lastly, you want to be able to present the test environment models and test automation to end users as self-service, transparent, environments that they can consume to test and validate services and applications on. Ideally you want end users to be able to interact with the network topologies – modify them and save their configurations for repeatability.
I want to close with an example of a vendor who’s doing just what we described. Cisco has created a self-service shared cloud lab for their global developer community to develop and validate SDN, NFV, and next-generation services on top of a real Cisco infrastructure stack.
End users can model their own complex network and data center topologies, and run test validation flows to certify and perform interoperability and conformance tests on their own applications and services running on Cisco hardware. This is an example of where industry and vendors are moving; HP, for example, has similar initiatives for enabling partner development on their NFV and SDN platforms, as do providers like AT&T and Comcast, who are moving to similar models in response to agile business transformation initiatives.
So the industry is moving to incorporate an extremely sophisticated approach to validating services on next generation technologies, and this kind of approach will clearly speed service characterization and validation, as well as benefit the adoption and development of the emerging IoT network.
Further Reading:
http://standards.ieee.org/innovate/iot/study.html
http://www.grifs-project.eu/data/File/CERP-IoT%20SRA_IoT_v11.pdf
http://www.iiconsortium.org/test-beds.htm
http://netlab.cs.ucla.edu/medhoc2011/papers/p171-hellbruck.pdf
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6582811
https://hal.archives-ouvertes.fr/inria-00630092/document
http://mpc.ece.utexas.edu/Papers/SESENA2011.pdf
http://ieeexplore.ieee.org/stamp/stamApp.jsp?tp=&arnumber=6381043