The document discusses connectivity frameworks and standards for industrial IoT systems. It provides an overview of the Industrial Internet Consortium's Connectivity Framework, which defines different levels of interoperability and analyzes popular connectivity technologies and standards. The framework guides practitioners in assessing their requirements and choosing the appropriate connectivity standard for their system based on factors like data usage, latency needs, device interchangeability requirements, and integration with other systems and software. Popular standards discussed include DDS, OPC-UA, oneM2M, MQTT, and HTTP.
7. Industrial Internet Consortium: 270+ Companies, 30+ Countries
IIC Founding and Contributing Members
The World’s Largest IoT Consortium
The IIC created the IIoT market
9. IICF Goals
Clarity: a guide map to the rich but often confusing landscape of IIoT connectivity.
Guidance: useful, practical, tangible guidance for requirements assessment, technology evaluation and selection.
Foundation: sets a stable long-term foundation for IIoT interoperability.
13. Semantic Interoperability Example
Messages become Data (State, Events, Streams), which becomes Information (Data in Context).
Two data-objects share the same TempType structure but carry different context:
TempType (Room): value = 72.4, units = DEGREES_F, id = 202
TempType (Patient): value = 37.1, units = DEGREES_C, id = Breitenbach
Which interpretation applies? Share data-objects in context.
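The ambiguity above can be sketched in code. This is a minimal Python sketch, not an IICF API: the TempType fields mirror the slide, while the class and the converter function are hypothetical. The two readings are syntactically identical structures, but only shared context tells a consumer whether an id names a room or a patient.

```python
from dataclasses import dataclass

# Hypothetical type mirroring the slide's TempType example.
@dataclass
class TempType:
    value: float
    units: str   # e.g. "DEGREES_F" or "DEGREES_C"
    id: str

# Syntactically identical structures...
room = TempType(value=72.4, units="DEGREES_F", id="202")
patient = TempType(value=37.1, units="DEGREES_C", id="Breitenbach")

def to_celsius(t: TempType) -> float:
    """Unit conversion is syntactic; deciding what the reading is
    *about* is semantic and needs out-of-band shared context."""
    if t.units == "DEGREES_F":
        return round((t.value - 32) * 5 / 9, 1)
    return t.value

# ...but the meaning differs: one id names a room, the other a patient.
# Without a shared semantic model, a consumer cannot tell whether to
# page a nurse or adjust a thermostat.
print(to_celsius(room))     # 22.4
print(to_celsius(patient))  # 37.1
```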
14. Evolution of the IIoT Connectivity Stack
Timeline: 4-Layer Internet Stack Model (1989), 7-Layer OSI Stack Model (1994), Industrial IoT (2014), IIoT Connectivity Stack Model (2017).
IIoT Connectivity Stack layers, top to bottom: Distributed Data Interoperability and Management, Framework, Transport, Network, Link, Physical.
15. IIoT Connectivity Stack Model
Each participant (X, Y) runs the full stack. The layers group into Information (Distributed Data Interoperability and Management), Connectivity (Framework, Transport), and Networking (Network, Link, Physical).
What is exchanged at each layer:
• Distributed Data Interoperability and Management: Information (Data in Context)
• Framework: Data (State, Events, Streams)
• Transport: Messages
• Network: Packets
• Link: Frames
• Physical: Bits
Interoperability levels map onto the stack: Technical Interoperability (bytes) at the Transport layer, Syntactic Interoperability (data structures) at the Framework layer, Semantic Interoperability (data context) above.
IICF Focus: the Connectivity layers (Framework and Transport).
16. Connectivity Transport Layer
• Above: Technical Interoperability
– Share byte sequences
– Opaque data
• Below: Byte protocol
– May observe byte flows & optimize byte sequence sharing and delivery
• Any computing platform
(Stack diagram as on slide 15.)
17. Connectivity Transport Layer
Connectivity Transport functions (providing Technical Interoperability):
• Messaging Protocol
• Endpoint Addressing
• Connectedness
• Prioritization
• Timing & Synchronization
• Security
• Communication Modes
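The transport layer's job, moving opaque byte sequences, can be sketched with a plain socket pair. This is illustrative only; the payload contents and encoding are made up for the example, and a real transport standard adds the functions listed above.

```python
import socket

# A transport endpoint moves opaque byte sequences; it neither knows
# nor cares that these bytes happen to encode a temperature reading.
a, b = socket.socketpair()

payload = b'{"value": 72.4, "units": "DEGREES_F"}'  # opaque to the transport
a.sendall(payload)
received = b.recv(1024)

# Technical interoperability: the same bytes arrive at the other end.
# Interpreting them as a structured datatype is the framework layer's job.
assert received == payload
a.close()
b.close()
```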
18. Connectivity Framework Layer
• Above: Syntactic Interoperability
– Share structured datatypes
– Common and unambiguous data format
• Below: Opaque Data
– May observe data flows & optimize datatype sharing and delivery
• Any computing platform
• Any programming environment
(Stack diagram as on slide 15.)
19. Connectivity Framework Layer
Connectivity Framework functions (providing Syntactic Interoperability):
• Data Resource Model
• Id and Addressing
• Data Type System
• Lifecycle (CRUD)
• Publish-Subscribe
• Request-Reply
• Discovery
• Exception Handling
• Quality of Service
• State Management
• Security
• API Governance
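A toy in-process publish-subscribe bus can illustrate what syntactic interoperability adds over raw byte exchange: both sides share a structured datatype, not just bytes. This sketch is hypothetical and omits nearly everything a real framework standard such as DDS provides (QoS, discovery, security, a wire protocol).

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class Temperature:       # a shared, structured datatype
    value: float
    units: str

class Bus:
    """Minimal in-process publish-subscribe bus (illustrative only)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, dtype: type, cb: Callable):
        self._subs[(topic, dtype)].append(cb)

    def publish(self, topic: str, sample):
        # Typed delivery: syntactic interoperability means both sides
        # agree on the datatype, not just on raw bytes.
        for cb in self._subs[(topic, type(sample))]:
            cb(sample)

bus = Bus()
seen = []
bus.subscribe("room/202", Temperature, seen.append)
bus.publish("room/202", Temperature(72.4, "DEGREES_F"))
print(seen[0].value)  # 72.4
```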
20. Fundamental N² Connectivity Challenge
Reality check: accept that there will be multiple connectivity technologies.
• Brownfield: existing technologies; may be specific to verticals
• Greenfield: innovation
Bridging them pairwise is an O(N²) problem.
• Connectivity Core Standards
– Provide syntactic interoperability
– Stable, deployed, open standard
– Standard Core Gateways to all other core standards
• Domain-Specific Connectivity Technologies
– Connect via a non-standard gateway to any connectivity core standard
Result: a few core standards, standard core gateways, and many domain technologies.
23. Assessment Template
• Which layer(s) of the Connectivity Stack does it provide?
• What Core Functions does it provide?
• How does it rank against the Typical Considerations (of the layers spanned)?
• How does it impact system Architectural Qualities?
• Does it fit the Connectivity Core Standard Criteria?
28. IICF Architecture Process
1. Use the assessment template worksheet to determine your system requirements.
2. Pick the potential connectivity core standard best aligned with your system requirements.
3. Build a gateway for other domain-specific connectivity technologies.
29. Connectivity Standards Landscape
(Figure: connectivity standards mapped onto the IIoT Connectivity Stack, grouped by vertical of origin; origins labeled include Manufacturing, e.g. TSN/Ethernet and DDS, and Telecommunications.)
• Framework layer: DDS, OPC-UA, oneM2M, Web Services
• Transport layer: DDSI-RTPS, OPC-UA Bin, CoAP, MQTT, HTTP (over UDP and TCP)
• Network layer: Internet Protocol (IP)
• Link & Physical layers: TSN/Ethernet (802.1, 802.3), Wireless PAN (802.15), Wireless LAN (802.11 Wi-Fi), Wireless 2G/3G/LTE (3GPP), Wireless Wide Area (802.16)
Verticals served: Healthcare, Transportation, Manufacturing, Energy & Utilities, …
40. MQTT: Collect Device Data
Message Queuing Telemetry Transport (MQTT): collect data from sensors, for the cloud.
MQTT does not qualify as a core connectivity standard because it has no standard type system.
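The "no standard type system" point can be illustrated without a broker: an MQTT payload is just an opaque byte string, so producer and consumer must agree out of band on an encoding. JSON is an arbitrary choice here; nothing in MQTT mandates it, and the helper functions below are hypothetical.

```python
import json

# MQTT carries each message payload as opaque bytes: the protocol itself
# defines no type system. Producer and consumer must agree out of band
# on an encoding (JSON here, chosen arbitrarily for illustration).
def make_payload(value: float, units: str) -> bytes:
    return json.dumps({"value": value, "units": units}).encode()

def parse_payload(raw: bytes) -> dict:
    # If the producer had used CBOR, XML, or a custom binary format,
    # this consumer would break: nothing in MQTT itself tells it how
    # to decode the bytes.
    return json.loads(raw.decode())

raw = make_payload(21.5, "DEGREES_C")
assert parse_payload(raw) == {"value": 21.5, "units": "DEGREES_C"}
```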
The IIC Vertical Taxonomy
The IIoT is much larger than the Internet (which only really changed a few industries, such as banking and travel agencies).
Its economic impact will be larger than the Internet, mobile phones & apps, and the cloud combined.
It's big. Really big.
Many things are missing: unmanned military vehicles, Hyperloop, medical imaging, emergency medicine & transport, radiation therapy.
There are some huge numbers out there: Cisco's $19T estimate, 50B devices, 2/3 of the world economy.
The IoT may be hyped, but this isn't just a technology like 3D printing or even Big Data. This is the revolution of our lives. It will change everything.
Sensor to Cloud, Road to Hospital, Power to Plant
Usable, performant, reliable, scalable.
IIC is building the infrastructure that enables the IIoT to deliver on its promise. That inspires entire ecosystems.
It's really a guide map
addressing key questions, core functions, and typical considerations around IIoT connectivity
May speak different languages in the different lands.
IICF: Accelerating IIoT
Clarity: It is a guide map to the rich but often confusing landscape of IIoT connectivity.
Foundation: It defines the minimum expectations from IIoT connectivity and sets up a stable long term foundation for interoperability, critical for the next generation of IIoT capabilities.
Guidance: It provides practical, useful, tangible guidance for IIoT connectivity requirements assessment, technology evaluation and selection.
IIRA defines: Connectivity = Infrastructure for Communications Interoperability
Example: Two people are integrable (level 1) if both are able to speak and listen; interoperable (level 2) if they speak the same language; and composable (level 3+) if they share a similar educational background that enables them to collaborate on specific tasks.
Syntactic interoperability: make it easy to share types, evolve those types, and be type-safe.
Semantic interoperability: select an existing or industry-specific model and use it. Generic data modeling is hard. Mandate all lower levels.
Stop here. Above this is human work
Level 0: Stand-alone systems. No Interoperability.
Level 1: Technical Interoperability. A communication protocol exists for exchanging data between participating systems. On this level, a communication infrastructure is established allowing systems to exchange bits and bytes, and the underlying networks and protocols are unambiguously defined.
Level 2: Syntactic Interoperability. A common structure to exchange information; i.e., a common data format is used. On this level, a common protocol to structure the data is used; the format of the information exchange is unambiguously defined. This layer defines structure.
Level 3: Semantic Interoperability. The meaning of the data is shared; the content of the information exchange requests are unambiguously defined. This layer defines (word) meaning. There is a related but slightly different interpretation of the phrase semantic interoperability, which is closer to what is here termed Conceptual Interoperability, i.e. information in a form whose meaning is independent of the application generating or using it.
Level 4: Pragmatic Interoperability is reached when the interoperating systems are aware of the methods and procedures that each system is employing. In other words, the use of the data – or the context of its application – is understood by the participating systems; the context in which the information is exchanged is unambiguously defined. This layer puts the (word) meaning into context.
Level 5: Dynamic Interoperability: As a system operates on data over time, the state of that system will change, and this includes the assumptions and constraints that affect its data interchange....
Level 6: Conceptual Interoperability …
This is a real on-the-wire trace!
(using off the shelf tools)
This is a real data-object trace!
(using off the shelf tools)
This is a hypothetical example of two possible semantic interpretations of the data.
Framework & Transport Layer
How to get interoperable data?
What is the lingua franca?
Get all parties to communicate!
Assume the networking layer is already set up.
People know how to do this
Recognize the other layers
Role | Purpose
Timing & Sync: PTP, NTP, TSN, etc.
Data Resource Model: meta-model for modeling application data/resources.
Exception Handling: e.g. disconnects, or when QoS cannot be met; the system detects changes that cannot be healed.
Connecting two systems requires matching many aspects, including protocol, data model, communication pattern, and “quality of service” parameters like reliability, data rate, or timing deadlines.
Connect with a special-purpose “bridge”. Connecting N systems together in general requires on the order of N² bridges (N(N-1)/2 bidirectional ones).
Note: this is not scalability of the protocols themselves. This is translation between standards.
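The arithmetic behind the core-gateway argument can be sketched directly. The function names are made up for illustration; the counts follow from the pairwise-bridge reasoning above and the core standards architecture of slide 21.

```python
# Bridging N incompatible connectivity technologies pairwise needs a
# bridge per pair: N*(N-1)/2 of them.
def pairwise_bridges(n: int) -> int:
    return n * (n - 1) // 2

# With K core standards plus standard core gateways, the K cores
# interconnect among themselves, and each of the remaining n - k
# domain-specific technologies needs only one gateway to a core.
def with_core_standards(n: int, k: int) -> int:
    return pairwise_bridges(k) + (n - k)

print(pairwise_bridges(10))        # 45 bridges without core standards
print(with_core_standards(10, 3))  # 10 with three core standards
```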
42 of them?
Determine the right prescription
Determine how a connectivity standard fits system requirements.
Determine its alignment with the connectivity core standards criteria.
The Assessment Template Worksheet is organized using IIRA Viewpoints.
Shows two sides of the coin:
Reference model (hourglass)
Implementation (standards, technologies)
Why this list?
Process:
Start with a proposed connectivity core standard for the transport and framework layers
Pick from what exists and is used broadly across IIoT systems
Use the IIoT connectivity considerations and requirements as a guide
And then expand out to organize the other standards
By vertical/use-case/applications/technology
Scope it to standards relevant to IIoT systems
Don’t try to be exhaustive in covering all the applicable standards
Could become an intractable task with little practical utility
Can be built up over time by the community using the assessment template
IIoT Connectivity Core Standards Criteria applied to key connectivity framework standards.
Look at the detailed assessment sheets for the specific details
Out in interstellar space? Bring your towel!
We got the biggest box!
What about MQTT? Not a connectivity framework: no data model. Used only for collection
What’s wrong with all of these? They are all about messages. The data is an afterthought.
But distributed IIoT systems are all about the data.
Data-centric DDS is a fourth-generation, disruptive networking technology.
Point-to-Point. Doesn’t branch.
Telephone, TCP
Client/Server. Branches, but choke points
File systems, Database, RPC, CORBA, DCOM
Good if information is naturally centralized
Single point failure, performance bottlenecks, slow failover
Publish/Subscribe Messaging. Branches, no bottlenecks. No data model.
Magazines, Newspaper, TV
Excels at many-to-many communications
Excels at distributing time-critical information
Queuing
Extends the pub/sub concept to abstract receptors that can sequence messages between endpoints for load balancing, etc.
Publish/Subscribe Data Replication
Libraries, Distributed databases
Excels at managing state and integration
Loosely coupled: low integration & lifecycle costs
Supports edge-to-enterprise, embedded
The interface is the data.
There are no artificial wrappers or blockers to that interface like messages, or objects, or files, or access patterns.
The infrastructure understands that data.
This enables filtering/searching, tools, & selectivity. It decouples applications from the data and thereby removes much of the complexity from the applications.
The system manages the data and imposes rules on how applications exchange data.
This provides a notion of "truth". It enables data lifetimes, data model matching, CRUD interfaces, etc.
Databases are data-centric storage. They fit all the above:
a) Data tables directly describe data. Unstructured databases also provide direct data access. Applications interact only with the table, not with the location of the data in a file.
b) The database provides an environment to search, match, and manage data. I can access data, or write a tool that manipulates the data, without talking to whoever put the data there. SQL is a language that leverages this power.
c) The database imposes ACID properties. The data won't get corrupted, etc. In a system, the data in the database is Truth. (What's the right value of this data?)
DDS is data-centric communication: a "databus". It also fits all of the above:
a) IDL defines data types, and then you interact with those types.
b) You can filter data, write tools, & select what you want directly. We also leverage an SQL-like capability.
c) The databus imposes QoS rules. The notion of truth is very well defined. It includes timing (how old is this data? how fast will I get it?)
In summary, data-centric systems manage state directly. Applications interact by directly accessing that state. Note that the method of storage or the pattern of interaction is orthogonal.
If you want a sound bite, I had one in my 2012 webinar on "The single most important decision in designing your distributed system": Will the apps manage truth & the infrastructure exchange messages? Or will the infrastructure manage truth & the apps exchange state?
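The "infrastructure manages truth" idea can be sketched as a keyed store that applications read and write. This toy DataBus is illustrative only; it stands in for what DDS does with typed topics, keyed instances, history, and QoS enforcement.

```python
class DataBus:
    """Toy data-centric bus: the infrastructure holds the current state
    ("truth") per keyed instance, and applications access that state
    rather than exchanging messages."""
    def __init__(self):
        self._state = {}   # key -> latest sample

    def write(self, key, sample):
        self._state[key] = sample   # infrastructure manages truth

    def read(self, key):
        return self._state.get(key)  # apps exchange state, not messages

bus = DataBus()
bus.write("pump-7/temperature", {"value": 88.0, "units": "DEGREES_C"})
bus.write("pump-7/temperature", {"value": 90.5, "units": "DEGREES_C"})

# A late-joining reader sees current truth, not a backlog of messages.
assert bus.read("pump-7/temperature")["value"] == 90.5
```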
Reliability: Easy redundancy, no servers
Performance: Peer-peer, ms or µs delivery
Scale: Integrate software modules & teams
Data control: Right data, right time, right place
Architecture: Next generation IIoT, 3+ yrs lifetime
Telemetry is the highly automated communications process by which measurements are made and other data collected at remote or inaccessible points and transmitted to receiving equipment for monitoring. (wikipedia)
A hub-and-spoke architecture is natural for MQTT. All the devices connect to a data concentrator server, like IBM's new MessageSight appliance.
You don't want to lose data, so the protocol works on top of TCP, which provides a simple, reliable stream. Since the data is used by the IT infrastructure, the entire system is designed to easily transport data into enterprise technologies like ActiveMQ and ESBs.
Only 3 QoS settings.