1. Big Data Architecture and Deployment
Sean McKeown – Technical Solutions Architect
In partnership with:
2. Thank you for attending Cisco Connect Toronto 2015; here are a few housekeeping notes to ensure we all enjoy the session today.
• Please ensure your cellphones/laptops are set to silent so that no one is disturbed during the session
• A power bar is available under each desk in case you need to charge your laptop (Labs only)
Housekeeping Notes
3. • Big Data Concepts and Overview
– Enterprise data management and big data
– Infrastructure attributes
– Hadoop, NoSQL and MPP architecture concepts
• Hadoop and the Network
– Network behaviour, FAQs
• Cisco UCS for Big Data
– Building a big data cluster with the UCS Common Platform Architecture (CPA)
– UCS networking, management, and scaling for big data
• Q & A
Agenda
5. "More data usually beats better algorithms."
– Anand Rajaraman, SVP @WalmartLabs
6. The Explosion of Unstructured Data
6
• More than 90% is unstructured data
• Approx. 500 quadrillion files
• Quantity doubles every 2 years
• Most unstructured data is neither stored nor analysed!
1.8 trillion gigabytes of data was created in 2011…
[Chart: GB of data (in billions), 0–10,000, structured vs. unstructured data, 2005–2015]
Source: Cloudera
7. When the size of the data itself is part of the problem.
What is Big Data?
7
8. What isn't Big Data?
• Usually not blade servers (not enough local storage)
• Usually not virtualised (hypervisor only adds overhead)
• Usually not highly oversubscribed (significant east-west traffic)
• Usually not SAN/NAS
9. Classic NAS/SAN vs. New Scale-out DAS
Traditional – separate compute from storage (bottlenecks, $$$)
New – move the compute to the storage: low-cost, DAS-based, scale-out clustered filesystem
9
11. Three Common Big Data Architectures
11
NoSQL – fast key-value store/retrieve in real time
Hadoop – distributed batch, query, and processing platform
MPP Relational Database – scale-out BI/DW
12. Hadoop: A Closer Look
• Hadoop is a distributed, fault-tolerant framework for storing and analysing data
• Its two primary components are the Hadoop Distributed File System (HDFS) and the MapReduce application engine
[Diagram: Pig, Hive, and HBase running on MapReduce over the Hadoop Distributed File System (HDFS)]
12
13. Hadoop: A Closer Look
• Hadoop 2.0 (with YARN) adds the ability to run additional distributed application engines concurrently on the same underlying filesystem
[Diagram: Pig, Hive, HBase, Impala, and Spark running alongside MapReduce on YARN (Resource Negotiator) over the Hadoop Distributed File System (HDFS)]
13
14. Hadoop
MapReduce Example: Word Count
14
Input: "the quick brown fox" / "the fox ate the mouse" / "how now brown cow"
Map: each mapper emits (word, 1) pairs, e.g. (the, 1), (quick, 1), (brown, 1), (fox, 1)
Shuffle & Sort: pairs are grouped by key and routed to reducers
Reduce: each reducer sums the counts per word
Output: brown 2, fox 2, how 1, now 1, the 3, ate 1, cow 1, mouse 1, quick 1
Input → Map → Shuffle & Sort → Reduce → Output (analogous to: cat | grep | sort | uniq)
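To make the flow above concrete, here is a minimal word-count sketch in the style of Hadoop Streaming (mapper and reducer as plain Python scripts reading stdin/stdout). The file names mapper.py and reducer.py are illustrative, not from the deck.

```python
#!/usr/bin/env python
# mapper.py - the Map phase: emit (word, 1) for every word read from stdin
import sys

def run_mapper():
    for line in sys.stdin:
        for word in line.strip().lower().split():
            print("%s\t1" % word)

# reducer.py - the Reduce phase: input arrives sorted by key (Shuffle & Sort),
# so counts for the same word are adjacent and can be summed in one pass
def run_reducer():
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word == current:
            count += int(value)
        else:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, int(value)
    if current is not None:
        print("%s\t%d" % (current, count))
```

In practice the two functions live in separate scripts submitted with the Hadoop Streaming jar (input/output paths are placeholders).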
15. Hadoop Distributed File System
• Scalable & fault tolerant
• Filesystem is distributed, stored across all data nodes in the cluster
• Files are divided into multiple large blocks – 64MB default, typically 128MB–512MB
• Data is stored reliably; each block is replicated 3 times by default
• Types of nodes:
– Name Node – manages HDFS
– Job Tracker – manages MapReduce jobs
– Data Node/Task Tracker – stores blocks/does work
[Diagram: a file split into blocks 1–6, distributed across data nodes 1–13 under three ToR FEX/switches, with a Name Node and Job Tracker]
Hadoop
15
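To illustrate the block-size and replication figures above, here is a minimal sketch of how a file's logical size translates into stored blocks and raw capacity; the 1 GB file size is just an example.

```python
# Minimal sketch: HDFS footprint for one file, using the slide's defaults
# (128 MB blocks, replication factor 3).
import math

BLOCK_SIZE_MB = 128
REPLICATION = 3

def hdfs_footprint(file_size_mb):
    blocks = math.ceil(file_size_mb / float(BLOCK_SIZE_MB))
    replicas = blocks * REPLICATION            # block replicas spread across data nodes
    raw_mb = file_size_mb * REPLICATION        # raw capacity consumed on disk
    return blocks, replicas, raw_mb

blocks, replicas, raw = hdfs_footprint(1024)   # a 1 GB file
print("%d blocks, %d replicas, ~%d MB raw" % (blocks, replicas, raw))
# -> 8 blocks, 24 replicas, ~3072 MB raw
```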
17. "Failure is the defining difference between distributed and local programming"
– Ken Arnold, CORBA designer
17
Hadoop
18. HDFS Architecture
[Diagram: the Name Node's metadata maps files to blocks (e.g. /usr/sean/foo.txt: blk_1, blk_2; /usr/jacob/bar.txt: blk_3, blk_4) and tracks which data node holds each block (e.g. data node 1: blk_1; data node 2: blk_2, blk_3; data node 3: blk_3). Blocks 1–4 are each replicated three times across data nodes 1–15 spread over three ToR FEX/switches.]
18
Hadoop
23. Hadoop Network Design
• The network is the fabric – the "bus" – of the "supercomputer"
• Big data clusters often create high east-west, any-to-any traffic flows compared to traditional DC networks
• Hadoop networks are typically isolated/dedicated; simple leaf-spine designs are ideal
• 10GE typical from server to ToR, low oversubscription from ToR to spine
• With Hadoop 2.0, clusters will likely have heterogeneous, multi-workload behaviour
23
24. Hadoop Network Traffic Types
24
Small Flows/Messaging (admin related, heartbeats, keep-alive, delay-sensitive application messaging)
Small – Medium Incast (Hadoop shuffle)
Large Flows (HDFS egress)
Large Pipeline (Hadoop replication)
26. Typical Hadoop Job Patterns
Different workloads can have widely varying network impact
26
[Diagram: Map (input/read) → Shuffle (network) → Reduce (output/write) data volumes for each pattern – Analyse (1:0.25), Transform (1:1), Explode (1:1.2)]
27. Analyse Workload
Wordcount on 200K copies of the complete works of Shakespeare
27
[Network graph of all traffic received on a single node (80-node run): markers show Maps Start, Maps Finish, Reducers Start, and Job Complete; the symbols represent individual nodes sending traffic to hpc064, and the red line is the total traffic received by hpc064]
Note: due to the combination of the length of the Map phase and the reduced data set being shuffled, the network is utilised throughout the job, but by a limited amount.
28. Transform Workload (1TB Terasort)
[Network graph of all traffic received on a single node (80-node run): markers show Maps Start, Maps Finish, Reducers Start, and Job Complete; the symbols represent individual nodes sending traffic to hpc064, and the red line is the total traffic received by hpc064]
29. Transform Workload (With Output Replication)
• Replication of 3 enabled (1 copy stored locally, 2 stored remotely)
• Each reduce output is now replicated instead of just stored locally
Note: if output replication is enabled, then at the end of the job HDFS must store additional copies. For a 1TB sort, an additional 2TB must be replicated across the network.
29
[Network graph of all traffic received on a single node (80-node run)]
30. Job Patterns – Summary
Job patterns have varying impact on network utilisation
Analyse – simulated with Shakespeare wordcount
Extract Transform Load (ETL) – simulated with Yahoo TeraSort
Extract Transform Load (ETL) – simulated with Yahoo TeraSort with output replication
30
31. Data Locality in Hadoop
The ability to process data where it is locally stored
31
[Network graph with markers for Maps Start, Maps Finish, Reducers Start, and Job Complete]
Observations:
• Notice the initial spike in RX traffic occurs before the reducers kick in.
• It represents data that each map task needs but that is not local.
• Looking at the spike, it is mainly data from only a few nodes.
Map tasks: initial spike for non-local data. Sometimes a task may be scheduled on a node that does not have the data available locally.
33. Can Hadoop Really Use 10GE?
Definitely, so tune for it!
• Analytic workloads tend to be lighter on the network
• Transform workloads tend to be heavier on the network
• Hadoop has numerous parameters which affect the network
• Take advantage of 10GE (see the sketch below):
– mapred.reduce.slowstart.completed.maps
– dfs.balance.bandwidthPerSec
– mapred.reduce.parallel.copies
– mapred.reduce.tasks
– mapred.tasktracker.reduce.tasks.maximum
– mapred.compress.map.output
33
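As one hedged illustration of these knobs, the sketch below passes the per-job mapred.* parameters as -D overrides to a Hadoop Streaming job; the values are placeholders, not recommendations from the deck, and the property names follow the older mapred.* scheme used on the slide.

```python
# Minimal sketch (illustrative values only): per-job 10GE-related overrides.
# dfs.balance.bandwidthPerSec and mapred.tasktracker.reduce.tasks.maximum are
# cluster/daemon-side settings (hdfs-site.xml / mapred-site.xml), so they are
# not included in the per-job -D list here.
import subprocess

job_tuning = {
    "mapred.reduce.slowstart.completed.maps": "0.80",  # delay reducer start
    "mapred.reduce.parallel.copies": "20",             # more parallel shuffle fetches
    "mapred.reduce.tasks": "64",
    "mapred.compress.map.output": "true",              # compress shuffle traffic
}

cmd = ["hadoop", "jar", "hadoop-streaming.jar"]
for key, value in job_tuning.items():
    cmd += ["-D", "%s=%s" % (key, value)]              # generic options go first
cmd += ["-files", "mapper.py,reducer.py",              # ship the scripts with the job
        "-mapper", "mapper.py", "-reducer", "reducer.py",
        "-input", "/data/in", "-output", "/data/out"]  # placeholder paths
subprocess.check_call(cmd)
```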
34. Can QoS Help?
An example with HBase
34
[Diagram: MapReduce shuffle and HDFS output-replication traffic between Map and Reducer tasks sharing the network with HBase Region Servers handling client reads/writes, updates, and major compaction]
36. ACI Fabric Load Balancing
Flowlet Switching
[Diagram: a TCP flow between hosts H1 and H2 split across multiple fabric paths]
• Flowlet switching routes bursts of packets from the same flow independently, based on measured congestion of both external wires and internal ASICs
• Allows packets from the same flow to take different paths
• Maintains packet ordering
• Better path utilisation
• Transparent – nothing to modify at the host/app level
37. ACI Fabric Load Balancing
Dynamic Packet Prioritisation
Real traffic is a mix of large (elephant) and small (mice) flows.
Standard (single priority): large flows severely impact performance (latency & loss) for small flows.
Dynamic Flow Prioritisation: the fabric automatically gives a higher priority to small flows.
Key idea: the fabric detects the initial few flowlets of each flow and assigns them to a high-priority class.
[Diagram: flows F1–F3 queued in standard vs. high-priority classes]
38. Dynamic Packet Prioritisation
Helping heterogeneous workloads
[Chart: read queries/sec (in millions, 0–3) for MemSQL only, MemSQL + Hadoop on a traditional network, and MemSQL + Hadoop + Dynamic Packet Prioritisation]
• 80-node test cluster
• MemSQL used to generate heavy numbers of small flows – mice
• Large file copy workload unleashes elephant flows that trample MemSQL performance
• DPP enabled, helping to "protect" the mice from the elephants
2x improvement in reads per second
39. Network Summary
• The network is the "system bus" of the Hadoop "supercomputer"
• Analytic- and ETL-style workloads can behave very differently on the network
• Minimise oversubscription, leverage QoS and DPP, and tune Hadoop to take advantage of 10GE
39
41. "Life is unfair, and the unfairness is distributed unfairly."
– Russian proverb
42. Hadoop Server Hardware Evolving
Typical 2009 Hadoop node:
• 1RU server
• 4 x 1TB 3.5" spindles
• 2 x 4-core CPU
• 1 x GE
• 24 GB RAM
• Single PSU
• Running Apache
• $
Typical 2015 Hadoop node:
• 2RU server
• 12 x 4TB 3.5" or 24 x 1TB 2.5" spindles
• 2 x 6-12 core CPU
• 2 x 10GE
• 128-256 GB RAM
• Dual PSU
• Running commercial/licensed distribution
• $$$
Economics favor "fat" nodes:
• 6x-9x more data/node
• 3x-6x more IOPS/node
• Saturated gigabit, 10GE on the rise
• Fewer total nodes lowers licensing/support costs
• Increased significance of node and switch failure
42
43. Cisco UCS Common Platform Architecture
Building Blocks for Big Data
UCS 6200 Series Fabric Interconnects
Nexus 2232 Fabric Extenders (optional)
UCS Manager
UCS C220/C240 M4 Servers
LAN, SAN, Management
43
44. UCS Reference Configurations for Big Data
Quarter-Rack UCS Solution for MPP, NoSQL – High Performance:
2 x UCS 6248, 8 x C220 M4 (SFF), 2 x E5-2680v3, 256GB, 6 x 400GB SAS SSD
Full Rack UCS Solution for Hadoop – Capacity-Optimised:
2 x UCS 6296, 16 x C240 M4 (LFF), 2 x E5-2620v3, 128GB, 12 x 4TB 7.2K SATA
Full Rack UCS Solution for Hadoop, NoSQL – Balanced:
2 x UCS 6296, 16 x C240 M4 (SFF), 2 x E5-2680v3, 256GB, 24 x 1.2TB 10K SAS
46. Hadoop and JBOD
Why not use RAID-5?
• It hurts performance:
– RAID-5 turns parallel sequential reads into slower random reads
– RAID-5 means speed is limited to the slowest device in the group
• It's wasteful: Hadoop already replicates data, so there is no need for more replication
– Hadoop block copies serve two purposes: 1) redundancy and 2) performance (more copies available increases the data locality % for map tasks)
[Diagram: independent parallel reads across JBOD drives vs. reads across a RAID-5 group]
46
47. Can I Virtualise?
Yes you can (easy with UCS), but should you?
• Hadoop and most big data architectures can run virtualised
• However, this is typically not recommended for performance reasons
– Virtualised data nodes will contend for storage and network I/O
– The hypervisor adds overhead, typically without benefit
• Some customers are running master/admin nodes (e.g. Name Node, Job Tracker, ZooKeeper, gateways, etc.) in VMs, but consider the single point of failure
• UCS is ideal for virtualisation if you go this route
47
48. Does HDFS Support Storage Tiering?
An archiving example with Hortonworks
• HDP 2.2 uses the concept of storage types – SSD, DISK, ARCHIVE (other distros have similar features)
• The flag is set at a volume level
• ARCHIVE has three associated policies for files (see the sketch below):
– HOT: all replicas on DISK
– WARM: one replica on DISK, others on ARCHIVE
– COLD: all replicas on ARCHIVE
48
Hadoop
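A minimal sketch of applying one of these policies, assuming the hdfs CLI is on the PATH; the directory path is a placeholder, and the exact subcommand varies by release ("hdfs dfsadmin -setStoragePolicy" in the HDP 2.2 / Hadoop 2.6 timeframe, "hdfs storagepolicies -setStoragePolicy" in later releases), so check your distro's documentation.

```python
# Minimal sketch (placeholder path, release-dependent CLI): tag an HDFS
# directory with the COLD policy so all replicas land on ARCHIVE volumes.
import subprocess

ARCHIVE_PATH = "/data/archive/2014"   # placeholder directory

subprocess.check_call(
    ["hdfs", "dfsadmin", "-setStoragePolicy", ARCHIVE_PATH, "COLD"])

# Existing blocks are not rewritten automatically; the HDFS mover tool
# migrates them to match the new policy.
print(subprocess.check_output(
    ["hdfs", "dfsadmin", "-getStoragePolicy", ARCHIVE_PATH]).decode())
```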
50. UCS C3160 Dense Storage Rack Server
Up to 360TB in 4RU
50
Server node: 2 x E5-2600 v2 CPUs, 128/256GB RAM, 1GB/4GB RAID cache
Optional disk expansion: 4 x hot-swappable, rear-load LFF 4TB/6TB HDD
HDD: 4 rows of hot-swappable 4TB/6TB HDD, total top load: 56 drives
Two 120GB SSDs (OS/Boot)
52. Cisco UCS: Physical Architecture
[Diagram: a clustered pair of 6200 fabric switches (Fabric A and B) with uplink ports to ETH 1/2, SAN A/B, and OOB management; server ports connect to chassis fabric extenders (FEX A/B) carrying half/full-width B200 compute blades with virtualised (VIC) adapters, and to rack-mount C240 servers via optional FEX A/B for scalability]
52
53. CPA: Single-connect Topology
Single wire for data and management, no oversubscription
53
2 x 10GE links per server for all traffic, data and management
New (cheaper) bare-metal port licensing available
54. CPA: FEX Topology (Optional, For Scalability)
Single wire for data and management
8 x 10GE uplinks per FEX = 2:1 oversubscription (16 servers/rack), no port-channel (static pinning)
2 x 10GE links per server for all traffic, data and management
54
55. CPA Recommended FEX Connectivity
55
• 2232 FEX has 4 buffer groups: ports 1-8, 9-16, 17-24, 25-32
• Distribute servers across port groups to maximise buffer performance and predictably distribute static pinning on uplinks
56. Virtualise the Physical Network Pipe
What you see: a physical cable from the server adapter through FEX A to the 6200-A. What you get: virtual cables (VN-Tag) presenting each server vNIC/vHBA as a vEth/vFC interface on the 6200-A, defined in the service profile.
✓ Dynamic, rapid provisioning
✓ State abstraction
✓ Location independence
✓ Blade or rack
56
57. "NIC bonding is one of Cloudera's highest case drivers for misconfigurations."
http://blog.cloudera.com/blog/2015/01/how-to-deploy-apache-hadoop-clusters-like-a-boss/
58. UCS Fabric Failover
• Fabric provides NIC failover capabilities, chosen when defining a service profile
• Avoids traditional NIC bonding in the OS
• Provides failover for both unicast and multicast traffic
• Works for any OS on bare metal
• (Also works for any hypervisor-based servers)
[Diagram: a single vNIC on a Cisco VIC 1225 presented to the OS/hypervisor/VM, with its virtual cable able to fail over between vEth 1 on 6200-A and 6200-B through the FEXes; the fabric interconnects are clustered over L1/L2 links]
58
59. UCS Networking with Hadoop
59
• vNIC 1 on Fabric A with fabric failover to B (internal cluster traffic)
• vNIC 2 on Fabric B with fabric failover to A (external data)
• No OS bonding required
• vNIC 0 (management) wiring not shown for clarity (primary on Fabric B, failover to A)
Note: cluster traffic will flow northbound in the event of a vNIC 1 failover. Ensure appropriate bandwidth/topology.
[Diagram: data nodes 1 and 2 dual-homed to 6200 A and B, which uplink in end-host mode (EHM) to L2/L3 switching for data ingress/egress]
60. Create QoS Policies
Leverage simplicity of UCS Service Profiles
60
[Screenshots: Best Effort policy for the management VLAN; Platinum policy for the cluster VLAN]
61. Enable Jumbo Frames for Cluster VLAN
1. Select the LAN tab in the left pane in the UCSM GUI.
2. Select LAN Cloud > QoS System Class.
3. In the right pane, select the General tab.
4. In the Platinum row, enter 9000 for MTU.
5. Check the Enabled checkbox next to Platinum.
6. Click Save Changes.
7. Click OK.
61
63. Cluster Scalability
A general characteristic of an optimally configured cluster is a linear relationship between data set sizes and job completion times
63
64. Sizing
Part science, part art
• Start with the current storage requirement
– Factor in replication (typically 3x) and compression (varies by data set)
– Factor in 20-30% free space for temp (Hadoop) or up to 50% for some NoSQL systems
– Factor in the average daily/weekly data ingest rate
– Factor in the expected growth rate (i.e. increase in ingest rate over time)
• If the I/O requirement is known, use the next table for guidance
• Most big data architectures are very linear, so more nodes = more capacity and better performance
• Strike a balance between price/performance of individual nodes vs. total # of nodes
(A rough worked example follows.)
64
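To make the arithmetic concrete, here is a minimal sizing sketch following the factors above; every input figure (current data set, ingest rate, compression ratio, growth rate) is a placeholder, not Cisco guidance.

```python
# Minimal sizing sketch: raw HDFS capacity for a given usable data set,
# following the slide's factors. All inputs are illustrative placeholders.
REPLICATION = 3.0          # typical HDFS replication
COMPRESSION_RATIO = 0.5    # varies by data set; 0.5 = data compresses to half
TEMP_HEADROOM = 0.25       # 20-30% free space for temp/shuffle
GROWTH_PER_YEAR = 0.40     # assumed growth in ingest rate

def raw_capacity_tb(current_tb, daily_ingest_tb, years):
    total, ingest = current_tb, daily_ingest_tb
    for _ in range(years):                              # project the data set forward
        total += ingest * 365
        ingest *= 1 + GROWTH_PER_YEAR
    stored = total * COMPRESSION_RATIO * REPLICATION    # on disk after compression + replicas
    return stored / (1 - TEMP_HEADROOM)                 # keep temp headroom free

print("Raw HDFS capacity needed: %.0f TB" % raw_capacity_tb(100, 0.5, 3))
```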
66. CPA Sizing and Application Guidelines
Per server:
– CPU: 2 x E5-2680v3 | 2 x E5-2680v3 | 2 x E5-2620v3
– Memory (GB): 256 | 256 | 128
– Disk drives: 6 x 400GB SSD | 24 x 1.2TB 10K SFF | 12 x 4TB 7.2K LFF
– IO bandwidth (GB/sec): 2.6 | 2.6 | 1.1
Rack level (32 x C220 or 16 x C240):
– Cores: 768 | 384 | 192
– Memory (TB): 8 | 4 | 2
– Capacity (TB): 64 | 460 | 768
– IO bandwidth (GB/sec): 192 | 42 | 16
Applications: MPP DB, NoSQL | Hadoop, NoSQL | Hadoop
Best Performance ← → Best Price/TB
66
67. Scaling the CPA
Single rack: 16 servers
Single domain: up to 10 racks, 160 servers
Multiple domains: interconnected via L2/L3 switching
67
68. Scaling via Nexus 9K Validated Design
68
Use Nexus 9000 with ACI to scale out multiple UCS CPA domains (1000's of nodes) and/or to connect them to other application systems
Enable ACI's Dynamic Packet Prioritisation and Dynamic Load Balancing to optimise multi-workload traffic flows
69. Scaling the Common Platform Architecture
Consider intra- and inter-domain bandwidth:
Servers per domain (pair of Fabric Interconnects) | Available north-bound 10GE ports (per fabric) | Southbound oversubscription (per fabric) | Northbound oversubscription (per fabric) | Intra-domain server-to-server bandwidth (per fabric, Gbits/sec) | Inter-domain server-to-server bandwidth (per fabric, Gbits/sec)
160 | 16 | 2:1 (FEX) | 5:1 | 5 | 1
128 | 32 | 2:1 (FEX) | 2:1 | 5 | 2.5
80 | 16 | 1:1 (no FEX) | 5:1 | 10 | 2
64 | 32 | 1:1 (no FEX) | 2:1 | 10 | 5
69
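A hedged sketch of the arithmetic behind the table above, assuming only what earlier slides state (2 x 10GE per server, i.e. one 10GE link per fabric, and 2:1 FEX oversubscription with 8 uplinks per 16 servers); it reproduces the table's rows.

```python
# Sketch of the CPA bandwidth math: southbound/northbound oversubscription
# and intra-/inter-domain server-to-server bandwidth per fabric.
def cpa_bandwidth(servers, nb_ports, use_fex):
    link = 10.0                                   # Gbit/s per server per fabric
    south_oversub = 2.0 if use_fex else 1.0       # 2:1 with FEX, 1:1 direct-attach
    north_oversub = servers * link / south_oversub / (nb_ports * link)
    intra = link / south_oversub                  # server-to-server inside the domain
    inter = intra / north_oversub                 # server-to-server across domains
    return south_oversub, north_oversub, intra, inter

for servers, nb_ports, fex in [(160, 16, True), (128, 32, True),
                               (80, 16, False), (64, 32, False)]:
    s, n, intra, inter = cpa_bandwidth(servers, nb_ports, fex)
    print("%3d servers: %g:1 south, %g:1 north, %g / %g Gbit/s intra/inter"
          % (servers, s, n, intra, inter))
```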
70. Rack Awareness
• Rack Awareness provides Hadoop the optional ability to group nodes together in logical "racks"
• Logical "racks" may or may not correspond to physical data centre racks
• Distributes blocks across different "racks" to avoid the failure domain of a single "rack"
• It can also lessen block movement between "racks"
• Can be useful to control block placement and movement in UCSM integrated environments (a topology-script sketch follows)
[Diagram: blocks 1-4 each replicated across "Rack" 1 (data nodes 1-5), "Rack" 2 (data nodes 6-10), and "Rack" 3 (data nodes 11-15)]
70
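Rack Awareness is typically enabled by pointing Hadoop at a topology script via net.topology.script.file.name (topology.script.file.name on Hadoop 1.x). The sketch below maps hosts to one logical "rack" per UCS domain; the subnets and rack names are hypothetical examples, not values from the deck.

```python
#!/usr/bin/env python
# Minimal rack-awareness topology script sketch (hypothetical subnets).
# Hadoop invokes this script with host IPs/names as arguments and expects
# one rack path printed per argument.
import sys

SUBNET_TO_RACK = {
    "10.1.1.": "/ucs-domain-1",   # illustrative: one logical rack per UCS domain
    "10.1.2.": "/ucs-domain-2",
    "10.1.3.": "/ucs-domain-3",
}
DEFAULT_RACK = "/default-rack"

for host in sys.argv[1:]:
    rack = next((r for prefix, r in SUBNET_TO_RACK.items()
                 if host.startswith(prefix)), DEFAULT_RACK)
    print(rack)
```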
71. Recommendations: UCS Domains and Racks
Single domain recommendation – turn Rack Awareness off or enable it at the physical rack level:
• For simplicity and ease of use, leave Rack Awareness off
• Consider turning it on to limit the physical rack-level fault domain (e.g. localised failures due to physical data centre issues – water, power, cooling, etc.)
Multi-domain recommendation – create one Hadoop rack per UCS domain:
• With multiple domains, enable Rack Awareness such that each UCS domain is its own Hadoop rack
• Provides HDFS data protection across domains
• Helps minimise cross-domain traffic
71
72. "The future is here, it's just not evenly distributed."
– William Gibson, author
73. Summary
Leverage UCS and Nexus to integrate big data into your data centre operations
• Think of big data clusters as a single "supercomputer"
• Think of the network as the "system bus" of the supercomputer
• Strive for consistency in your deployments
• The goal is an even distribution of load – distribute fairly
• Cisco Nexus and UCS Common Platform Architecture for Big Data can help!
73
74. dCloud
Customers now get the full dCloud experience!
• Cisco dCloud is a self-service platform that can be accessed via a browser, a high-speed Internet connection, and a cisco.com account
• Customers will have direct access to a subset of dCloud demos and labs
• Restricted content must be brokered by an authorized user (Cisco or Partner) and then shared with the customers (cisco.com user)
• Go to dcloud.cisco.com, select the location closest to you, and log in with your cisco.com credentials
• Review the getting started videos and try Cisco dCloud today: https://dcloud-cms.cisco.com/help