Tim Bell
CERN
@noggin143
OpenStack UK Days
26th September 2017
Understanding the Universe
through Clouds
Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 1
CERN: founded in 1954 by 12 European States
“Science for Peace”
Today: 22 Member States
Member States: Austria, Belgium, Bulgaria, Czech Republic, Denmark, Finland,
France, Germany, Greece, Hungary, Israel, Italy, Netherlands, Norway, Poland,
Portugal, Romania, Slovak Republic, Spain, Sweden, Switzerland and
United Kingdom
Associate Member States: Pakistan, India, Ukraine, Turkey
States in accession to Membership: Cyprus, Serbia
Applications for Membership or Associate Membership:
Brazil, Croatia, Lithuania, Russia, Slovenia
Observers to Council: India, Japan, Russia, United States of America;
European Union, JINR and UNESCO
~ 2300 staff
~ 1400 other paid personnel
~ 12500 scientific users
Budget (2017) ~1000 MCHF
The Large Hadron Collider (LHC)
Data rates from the experiments: ~700 MB/s, ~10 GB/s, >1 GB/s, >1 GB/s
The Worldwide LHC Computing Grid (WLCG)
An international collaboration to distribute and analyse LHC data
Integrates computer centres worldwide that provide computing and storage resources into a single infrastructure accessible by all LHC physicists
- Tier-0 (CERN and Hungary): data recording, reconstruction and distribution
- Tier-1: permanent storage, re-processing, analysis
- Tier-2: simulation, end-user analysis
- >2 million jobs/day
- ~750k CPU cores
- 600 PB of storage
- ~170 sites, 42 countries
- 10-100 Gb links
LHCOne: overlay network spanning Europe, North America, South America and Asia
Allows national network providers to manage HEP traffic on a general purpose network
A big data problem
2016: 49.4 PB LHC data / 58 PB all experiments / 73 PB total
- ALICE: 7.6 PB, ATLAS: 17.4 PB, CMS: 16.0 PB, LHCb: 8.5 PB
- 11 PB in July
- 180 PB on tape, 800 M files
Public Procurement Cycle
Step                                 Time (Days)           Elapsed (Days)
User expresses requirement                                 0
Market Survey prepared               15                    15
Market Survey for possible vendors   30                    45
Specifications prepared              15                    60
Vendor responses                     30                    90
Test systems evaluated               30                    120
Offers adjudicated                   10                    130
Finance committee                    30                    160
Hardware delivered                   90                    250
Burn in and acceptance               30 (380 worst case)   280
Total                                                      280+ days
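As a quick cross-check of the table above, the per-step durations (market survey preparation through burn-in) sum to the quoted 280-day elapsed total:

```shell
# Sum the per-step durations from the procurement table above;
# the result matches the 280-day elapsed total.
echo "15 30 15 30 30 10 30 90 30" | awk '{ for (i = 1; i <= NF; i++) s += $i; print s }'
```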
OpenStack London July 2011 Vinopolis
CERN Tool Chain
CERN OpenStack Service Timeline
(*) Pilot  (?) Trial  Retired

Upstream OpenStack releases and the components deployed at CERN:
- ESSEX (5 April 2012): Nova (*), Glance (*), Horizon (*), Keystone (*)
- FOLSOM (27 September 2012): Nova (*), Glance (*), Horizon (*), Keystone (*), Quantum, Cinder
- GRIZZLY (4 April 2013): Nova, Glance, Horizon, Keystone, Quantum, Cinder, Ceilometer (*)
- HAVANA (17 October 2013): Nova, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer (*), Heat
- ICEHOUSE (17 April 2014): Nova, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer, Heat, Ironic, Trove
- JUNO (16 October 2014): Nova, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer, Heat (*), Rally (*)
- KILO (30 April 2015): Nova, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer, Heat, Rally, Manila
- LIBERTY (15 October 2015): Nova, Glance, Horizon, Keystone, Neutron (*), Cinder, Ceilometer, Heat, Rally, EC2API, Magnum (*), Barbican (*)
- MITAKA (7 April 2016): Nova, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer, Heat, Rally, EC2API, Magnum, Barbican, Ironic (?), Mistral (?), Manila (?)
- NEWTON (6 Oct 2016): Nova, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer, Heat, Rally, EC2API, Magnum, Barbican, Ironic (?), Mistral (?), Manila (?)
- OCATA (22 Feb 2017): Nova, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer, Heat, Rally, EC2API, Magnum, Barbican, Ironic (?), Mistral (?), Manila (*)
- PIKE (28 Aug 2017)

CERN production milestones:
- July 2013: CERN OpenStack Production
- February 2014: CERN OpenStack Havana
- October 2014: CERN OpenStack Icehouse
- March 2015: CERN OpenStack Juno
- September 2015: CERN OpenStack Kilo
- September 2016: CERN OpenStack Liberty
- March 2017: CERN OpenStack Mitaka
- June 2017: CERN OpenStack Newton
Currently >8000 hypervisors, 281K cores running 33,000 VMs
 From ~200 TB total to ~450 TB of RBD + 50 TB RGW (OpenStack Glance + Cinder)
 Example: ~25 puppet masters reading node configurations at up to 40k IOPS
 Scale tests with Ceph Luminous up to 65 PB in a block storage pool
http://ceph.com/community/new-luminous-scalability/
Software Deployment
 Deployment based on CentOS and RDO
- Upstream, only patched where necessary (e.g. nova/neutron for CERN networks)
- A few customizations
- Works well for us
 Puppet for configuration management
- Introduced with the adoption of the Agile Infrastructure (AI) paradigm
 We submit upstream whenever possible
- openstack, openstack-puppet, RDO, …
 Updates done service-by-service over several months
- Running services on dedicated (virtual) servers helps (exception: ceilometer and nova on compute nodes)
- Aim to be around 6-9 months behind trunk
 Upgrade testing done with packstack and devstack
- Depends on service: from simple DB upgrades to full shadow installations
Community Experience
 Open source collaboration sets the model for in-house teams
 External recognition by the community is highly rewarding for contributors
 Reviewing and being reviewed is a constant learning experience
 Valuable for staff in the job market
 Working groups, like the Scientific and Large Deployment teams, discuss a wide range of topics
 Effective knowledge transfer mechanisms consistent with the CERN mission
 Dojos at CERN bring good attendance
- Ceph, CentOS, Elastic, OpenStack CH, …
Scaling Nova
 Top-level cell
- Runs the API service
- Top cell scheduler
 ~50 child cells run
- Compute nodes
- Scheduler
- Conductor
 Decided not to use HA
 Cells version 2 coming
- Default for all
Rally
What’s new? Magnum
 Container Engine as a Service
 Kubernetes, Docker, Mesos…
$ magnum cluster-create --name myswarmcluster --cluster-template swarm --node-count 100
$ magnum cluster-list
+------+----------------+------------+--------------+-----------------+
| uuid | name | node_count | master_count | status |
+------+----------------+------------+--------------+-----------------+
| .... | myswarmcluster | 100 | 1 | CREATE_COMPLETE |
+------+----------------+------------+--------------+-----------------+
$ $(magnum cluster-config myswarmcluster --dir magnum/myswarmcluster)
$ docker info / ps / ...
$ docker run --volume-driver cvmfs -v atlas.cern.ch:/cvmfs/atlas -it centos /bin/bash
[root@32f4cf39128d /]#
Scaling Magnum to 7M req/s
Rally drove the tests
1000 node clusters (4000 cores)
Cluster Size (Nodes)   Concurrency   Deployment Time (min)
2                      50            2.5
16                     10            4
32                     10            4
128                    5             5.5
512                    1             14
1000                   1             23
What’s new? Mistral
 Workflow-as-a-Service used for multi-step actions,
triggered by users or events
 Horizon dashboard for visualising results
 Examples
 Multi-step project creation
 Scheduled snapshot of VMs
 Expire personal resources after 6 months
 Code at https://gitlab.cern.ch/cloud-infrastructure/mistral-workflows
 Some more complex cases coming in the pipeline
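For illustration (not from the slides), workflows like the examples above can be listed and triggered with the Mistral CLI; the workflow names and input below are hypothetical:

```shell
# List the workflows published to Mistral
mistral workflow-list

# Trigger a one-off execution of a (hypothetical) VM-snapshot workflow
mistral execution-create instance_snapshot '{"instance": "myvm"}'

# Scheduled runs, e.g. nightly snapshots, use cron triggers
mistral cron-trigger-create nightly_snapshot instance_snapshot \
    --pattern "0 3 * * *"
```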
Automate provisioning
Automate routine procedures
- Common place for workflows
- Clean web interface
- Scheduled jobs, cron-style
- Traceability and auditing
- Fine-grained access control
- …
Procedures for
- OpenStack project creation
- OpenStack quota changes
- Notifications of VM owners
- Usage and health reports
- …
Example workflow: compute node intervention
- Disable compute node: disable nova-service, switch alarms OFF, update Service-Now ticket
- Notifications: send e-mail to VM owners
- Other tasks: post new message broker, add remote AT job, save intervention details, send calendar invitation
Manila: Overview
• File Share Project in OpenStack
- Provisioning of shared file systems to VMs
- 'Cinder for file shares'
• APIs for tenants to request shares
- Fulfilled by backend drivers
- Accessed from instances
• Support for a variety of NAS protocols
- NFS, CIFS, MapR-FS, GlusterFS, CephFS, …
• Supports the notion of share types
- Map features to backends
• Flow: 1. user requests share → 2. Manila creates share on the backend → 3. Manila provides the handle → 4. user instances access the share
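The request/create/handle/access flow above can be sketched with the Manila CLI. This is an illustrative transcript, not taken from the slides; the share name, size, type and client network are hypothetical:

```shell
# 1. Request a 10 GB NFS share (Manila creates it on a backend via its driver)
manila create NFS 10 --name myshare --share-type default

# Wait for the share to reach status 'available'
manila list

# 3. Obtain the export location (the handle to mount from instances)
manila show myshare

# 4. Allow instances on a (hypothetical) tenant network to access it
manila access-allow myshare ip 192.168.0.0/24
```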
LHC Incident in April 2016
Manila testing: #fouinehammer
[Load-test setup: m-api instances scaled across 1 … 500 nodes and 1 … 10k pods, driving the DB, RabbitMQ, m-sched and the m-share driver]
Commercial Clouds
Development areas going forward
 Spot Market
 Cells V2
 Neutron scaling – no Cells equivalent yet
 Magnum rolling upgrades
 Collaborations with Industry
Operations areas going forward
 Further automate migrations
 Around 5,000 VMs / year
 First campaign in 2016 needed some additional
scripting such as pausing very active VMs
 Newton live migration includes most use cases
 Software Defined Networking
 Nova network to Neutron migration to be completed
 In addition to flat network in use currently
 Introduce higher level functions such as LBaaS
Future Challenges
[Charts: data estimates for the 1st year of HL-LHC (PB, raw and derived) and CPU needs for the 1st year of HL-LHC (kHS06), per experiment: ALICE, ATLAS, CMS, LHCb]
[Timeline: First run, LS1, Second run, LS2, Third run, LS3, HL-LHC, … FCC? (2010 - 2030?)]
CPU:
• x60 from 2016
Data:
• Raw: 2016: 50 PB → 2027: 600 PB
• Derived (1 copy): 2016: 80 PB → 2027: 900 PB
 Raw data volume for LHC increases exponentially, and with it the processing and analysis load
 Technology improvement at ~20%/year will bring x6-10 in 10-11 years
 Estimates of resource needs at HL-LHC are x10 above what is realistic to expect from technology at reasonably constant cost
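The "x6-10 in 10-11 years" technology factor is simply ~20%/year compounded; a one-line check:

```shell
# Compound ~20%/year capacity growth over 10 and 11 years:
# 1.2^10 ≈ 6.2 and 1.2^11 ≈ 7.4, i.e. within the x6-10 range quoted above.
awk 'BEGIN { printf "%.1f %.1f\n", 1.2^10, 1.2^11 }'
```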
Summary
 OpenStack has provided a strong base for scaling resources over the past 4 years without a significant increase in CERN staff
 Additional functionality on top of pure Infrastructure-as-a-Service is now coming to production
 Community and industry collaboration has been productive and inspirational for the CERN team
 Some big computing challenges up ahead…
Further Information
Technical details on the CERN cloud at
http://openstack-in-production.blogspot.fr
Custom CERN code is at https://github.com/cernops
Scientific Working Group at
https://wiki.openstack.org/wiki/Scientific_working_group
Helix Nebula details at http://www.helix-nebula.eu/
http://cern.ch/IT ©CERN CC-BY-SA 4.0
Backup
WLCG MoU Signatures
2017:
- 63 MoUs
- 167 sites; 42 countries
Signatory categories: Partners / Contributors / Associates / Research
How do we monitor?
Data flow: Data Sources (Data Centres, WLCG) → Transport (Kafka) → Processing → Storage/Search → Data Access
Tuning
 Many hypervisors are configured for compute optimisation
- CPU passthrough so the VM sees an identical CPU
- Extended Page Tables so memory page mapping is done in hardware
- Core pinning so the scheduler keeps the cores on the underlying physical cores
- Huge pages to improve memory page cache utilisation
- Flavors are set to be NUMA aware
 Improvements of up to 20% in performance
 Impact is that the VMs cannot be live migrated, so service machines are not configured this way
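The tuning knobs above correspond to standard Nova flavor extra specs. A hedged sketch (the flavor name is hypothetical, and the slides do not state CERN's exact property values):

```shell
# Compute-optimised flavor: pinned vCPUs, huge pages and NUMA awareness.
# These are standard Nova extra specs; the specific values are illustrative.
openstack flavor set m1.compute-optimised \
    --property hw:cpu_policy=dedicated \
    --property hw:mem_page_size=large \
    --property hw:numa_nodes=2
```

Flavors without these properties remain live-migratable, which is why service machines are left on the generic settings.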
Provisioning services
Moving towards an Elastic Hybrid IaaS model:
• In-house resources at full occupation
• Elastic use of commercial & public clouds
• Assume "spot-market" style pricing
[Architecture: end users, CI/CD and experiment pilot factories access OpenStack Resource Provisioning (>1 physical data centre) through APIs, CLIs and GUIs; resources span bare metal and HPC (LSF), VMs and containers, backed by HTCondor, public cloud and volunteer computing, serving IT & experiment services]
Simulating Elasticity
 Deliveries are around 1-2 times per year
 Resources are for
- Batch compute … immediately needed … compute optimised
- Services … needed as projects request quota … support live migration with generic CPU definition
 Elasticity is simulated by
- Creating opportunistic batch projects running on resources available for services in the future
- Draining opportunistic batch as needed
 End result is
- High utilisation of 'spare' resources
- Simulation of an elastic cloud
Pick the interesting events
 40 million per second
- Fast, simple information; hardware trigger in a few microseconds
 100 thousand per second
- Fast algorithms in local computer farm; software trigger in <1 second
 Few 100 per second
- Recorded for study
[Event display: muon tracks and energy deposits]
20141103 cern open_stack_paris_v3
 
Open stack operations feedback loop v1.4
Open stack operations feedback loop v1.4Open stack operations feedback loop v1.4
Open stack operations feedback loop v1.4
 
CERN clouds and culture at GigaOm London 2013
CERN clouds and culture at GigaOm London 2013CERN clouds and culture at GigaOm London 2013
CERN clouds and culture at GigaOm London 2013
 
20130529 openstack cee_day_v6
20130529 openstack cee_day_v620130529 openstack cee_day_v6
20130529 openstack cee_day_v6
 
Academic cloud experiences cern v4
Academic cloud experiences cern v4Academic cloud experiences cern v4
Academic cloud experiences cern v4
 
Ceilometer lsf-intergration-openstack-summit
Ceilometer lsf-intergration-openstack-summitCeilometer lsf-intergration-openstack-summit
Ceilometer lsf-intergration-openstack-summit
 
Havana survey results-final-v2
Havana survey results-final-v2Havana survey results-final-v2
Havana survey results-final-v2
 
Havana survey results-final
Havana survey results-finalHavana survey results-final
Havana survey results-final
 
20121205 open stack_accelerating_science_v3
20121205 open stack_accelerating_science_v320121205 open stack_accelerating_science_v3
20121205 open stack_accelerating_science_v3
 
20121115 open stack_ch_user_group_v1.2
20121115 open stack_ch_user_group_v1.220121115 open stack_ch_user_group_v1.2
20121115 open stack_ch_user_group_v1.2
 
20121017 OpenStack Accelerating Science
20121017 OpenStack Accelerating Science20121017 OpenStack Accelerating Science
20121017 OpenStack Accelerating Science
 
Accelerating science with Puppet
Accelerating science with PuppetAccelerating science with Puppet
Accelerating science with Puppet
 
20120524 cern data centre evolution v2
20120524 cern data centre evolution v220120524 cern data centre evolution v2
20120524 cern data centre evolution v2
 

Kürzlich hochgeladen

Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024Results
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 

Kürzlich hochgeladen (20)

Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 

20170926 cern cloud v4

  • 1. Tim Bell CERN @noggin143 OpenStack UK Days 26th September 2017 Understanding the Universe through Clouds Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 1
  • 2. 2 CERN: founded in 1954: 12 European States “Science for Peace” Today: 22 Member States Member States: Austria, Belgium, Bulgaria, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, Netherlands, Norway, Poland, Portugal, Romania, Slovak Republic, Spain, Sweden, Switzerland and United Kingdom Associate Member States: Pakistan, India, Ukraine, Turkey States in accession to Membership: Cyprus, Serbia Applications for Membership or Associate Membership: Brazil, Croatia, Lithuania, Russia, Slovenia Observers to Council: India, Japan, Russia, United States of America; European Union, JINR and UNESCO ~ 2300 staff ~ 1400 other paid personnel ~ 12500 scientific users Budget (2017) ~1000 MCHF Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 2
  • 3. The Large Hadron Collider (LHC) Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 3 ~700 MB/s ~10 GB/s >1 GB/s >1 GB/s
  • 4. Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 4
  • 5. Tim.Bell@cern.ch 5 Universe and Clouds - 26th September 2017
  • 6. 6 Tier-1: permanent storage, re-processing, analysis Tier-0 (CERN and Hungary): data recording, reconstruction and distribution Tier-2: simulation, end-user analysis > 2 million jobs/day ~750k CPU cores 600 PB of storage ~170 sites, 42 countries 10-100 Gb links WLCG: an international collaboration to distribute and analyse LHC data. It integrates computer centres worldwide that provide computing and storage resources into a single infrastructure accessible by all LHC physicists. The Worldwide LHC Computing Grid Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 7. Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 7 Asia North America South America Europe LHCOne: Overlay network allows national network providers to manage HEP traffic on a general purpose network [chart: monthly traffic volumes, JAN-MAY spanning two years]
  • 8. A big data problem Tim.Bell@cern.ch 8 2016: 49.4 PB LHC data/ 58 PB all experiments/ 73 PB total ALICE: 7.6 PB ATLAS: 17.4 PB CMS: 16.0 PB LHCb: 8.5 PB 11 PB in July 180 PB on tape 800 M files Universe and Clouds - 26th September 2017
  • 9. Public Procurement Cycle
        Step                                 Time (days)          Elapsed (days)
        User expresses requirement           0                    0
        Market Survey prepared               15                   15
        Market Survey for possible vendors   30                   45
        Specifications prepared              15                   60
        Vendor responses                     30                   90
        Test systems evaluated               30                   120
        Offers adjudicated                   10                   130
        Finance committee                    30                   160
        Hardware delivered                   90                   250
        Burn in and acceptance               30 (380 worst case)  280
        Total                                                     280+ days
        Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 9
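The elapsed-day column on this slide is a running sum of the per-step durations; a small sketch (durations copied from the table, variable names illustrative) reproduces the 280-day typical total:

```python
# Typical step durations in days, as in the procurement table on this slide.
steps = [
    ("Market Survey prepared", 15),
    ("Market Survey for possible vendors", 30),
    ("Specifications prepared", 15),
    ("Vendor responses", 30),
    ("Test systems evaluated", 30),
    ("Offers adjudicated", 10),
    ("Finance committee", 30),
    ("Hardware delivered", 90),
    ("Burn in and acceptance", 30),  # 380 days in the worst case
]

elapsed = 0
for _name, days in steps:
    elapsed += days  # running total, matching the "Elapsed (Days)" column

print(elapsed)  # 280 in the typical case; "280+" once delays are included
```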
  • 10. OpenStack London July 2011 Vinopolis Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 10
  • 11. CERN Tool Chain Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 11
  • 12. CERN OpenStack Service Timeline ((*) pilot, (?) trial). Upstream releases: Essex (5 April 2012), Folsom (27 September 2012), Grizzly (4 April 2013), Havana (17 October 2013), Icehouse (17 April 2014), Juno (16 October 2014), Kilo (30 April 2015), Liberty (15 October 2015), Mitaka (7 April 2016), Newton (6 Oct 2016), Ocata (22 Feb 2017), Pike (28 Aug 2017). CERN OpenStack milestones: production in July 2013, Havana February 2014, Icehouse October 2014, Juno March 2015, Kilo September 2015, Liberty September 2016, Mitaka March 2017, Newton June 2017. Services grew from Nova, Glance, Horizon and Keystone pilots in Essex to also include Neutron, Cinder, Ceilometer, Heat, Rally, EC2API, Magnum, Barbican and Manila, with trials of Ironic and Mistral, by Ocata. Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 12
  • 13. Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 13 Currently >8000 hypervisors, 281K cores running 33,000 VMs
  • 14. OpenStack Glance + Cinder • From ~200 TB total to ~450 TB of RBD + 50 TB RGW • Example: ~25 puppet masters reading node configurations at up to 40 kHz IOPS • Scale tests with Ceph Luminous up to 65 PB in a block storage pool http://ceph.com/community/new-luminous-scalability/ 14 Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 15. Software Deployment 15 Tim.Bell@cern.ch  Deployment based on CentOS and RDO - Upstream, only patched where necessary (e.g. nova/neutron for CERN networks) - A few customizations - Works well for us  Puppet for configuration management - Introduced with the adoption of the AI paradigm  We submit upstream whenever possible - openstack, openstack-puppet, RDO, …  Updates done service-by-service over several months - Running services on dedicated (virtual) servers helps (exception: ceilometer and nova on compute nodes) - Aim to be around 6-9 months behind trunk  Upgrade testing done with packstack and devstack - Depends on service: from simple DB upgrades to full shadow installations Universe and Clouds - 26th September 2017
  • 16. Community Experience  Open source collaboration sets the model for in-house teams  External recognition by the community is highly rewarding for contributors  Reviewing and being reviewed is a constant learning experience  Valuable for staff in the job market  Working groups, like the Scientific and Large Deployment teams, discuss a wide range of topics  Effective knowledge transfer mechanisms consistent with the CERN mission  Dojos at CERN bring good attendance  Ceph, CentOS, Elastic, OpenStack CH, … Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 16
  • 17. Top level cell  Runs API service  Top cell scheduler ~50 child cells run  Compute nodes  Scheduler  Conductor  Decided to not use HA Version 2 coming  Default for all Scaling Nova 17Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 18. Rally 18Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 19. What’s new? Magnum  Container Engine as a Service  Kubernetes, Docker, Mesos…
        $ magnum cluster-create --name myswarmcluster --cluster-template swarm --node-count 100
        $ magnum cluster-list
        +------+----------------+------------+--------------+-----------------+
        | uuid | name           | node_count | master_count | status          |
        +------+----------------+------------+--------------+-----------------+
        | .... | myswarmcluster | 100        | 1            | CREATE_COMPLETE |
        +------+----------------+------------+--------------+-----------------+
        $ $(magnum cluster-config myswarmcluster --dir magnum/myswarmcluster)
        $ docker info / ps / ...
        $ docker run --volume-driver cvmfs -v atlas.cern.ch:/cvmfs/atlas -it centos /bin/bash
        [root@32f4cf39128d /]#
        Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 19
  • 20. Scaling Magnum to 7M req/s. Rally drove the tests; 1000-node clusters (4000 cores).
        Cluster Size (Nodes)   Concurrency   Deployment Time (min)
        2                      50            2.5
        16                     10            4
        32                     10            4
        128                    5             5.5
        512                    1             14
        1000                   1             23
        Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 20
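Reading the table another way: the per-node deployment rate improves markedly with cluster size. A quick sketch (numbers copied from the slide; this is my reading, not part of the original test):

```python
# (nodes, concurrency, deployment time in minutes) from the Magnum scale test table.
runs = [(2, 50, 2.5), (16, 10, 4), (32, 10, 4), (128, 5, 5.5), (512, 1, 14), (1000, 1, 23)]

# Nodes deployed per minute for each run; larger clusters deploy more nodes in parallel.
rates = {nodes: nodes / minutes for nodes, _conc, minutes in runs}
for nodes, rate in rates.items():
    print(f"{nodes:5d}-node cluster: {rate:5.1f} nodes/min")
```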
  • 21. What’s new? Mistral  Workflow-as-a-Service used for multi-step actions, triggered by users or events  Horizon dashboard for visualising results  Examples  Multi-step project creation  Scheduled snapshot of VMs  Expire personal resources after 6 months  Code at https://gitlab.cern.ch/cloud-infrastructure/mistral-workflows  Some more complex cases coming in the pipeline Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 21
  • 22. Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 22
  • 23. Automate provisioning 23 Automate routine procedures - Common place for workflows - Clean web interface - Scheduled jobs, cron-style - Traceability and auditing - Fine-grained access control - … Procedures for - OpenStack project creation - OpenStack quota changes - Notifications of VM owners - Usage and health reports - … Disable compute node • Disable nova-service • Switch Alarms OFF • Update Service-Now ticket Notifications • Send e-mail to VM owners Other tasks Post new message broker Add remote AT job Save intervention details Send calendar invitation Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 24. Manila: Overview 24 • File Share Project in OpenStack - Provisioning of shared file systems to VMs - ‘Cinder for file shares’ • APIs for tenants to request shares - Fulfilled by backend drivers - Accessed from instances • Support for a variety of NAS protocols - NFS, CIFS, MapR-FS, GlusterFS, CephFS, … • Supports the notion of share types - Map features to backends [diagram: 1. user requests share from Manila; 2. Manila creates share on backend; 3. Manila provides handle; 4. user instances access share] Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 25. 25 LHC Incident in April 2016 Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 26. Manila testing: #fouinehammer 26 [load-test diagram: m-api instances with DB, m-sched, and m-share with its driver, connected via RabbitMQ; scaled from 1 … 500 nodes and driven by 1 … 10k PODs] Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 27. Commercial Clouds Universe and Clouds - 26th September 2017Tim.Bell@cern.ch 27
  • 28. Development areas going forward  Spot Market  Cells V2  Neutron scaling – no Cells equivalent yet  Magnum rolling upgrades  Collaborations with Industry Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 28
  • 29. Operations areas going forward  Further automate migrations  Around 5,000 VMs / year  First campaign in 2016 needed some additional scripting such as pausing very active VMs  Newton live migration includes most use cases  Software Defined Networking  Nova network to Neutron migration to be completed  In addition to flat network in use currently  Introduce higher level functions such as LBaaS Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 29
  • 30. Future Challenges [charts: "Data estimates for 1st year of HL-LHC (PB)", raw and derived, and "CPU Needs for 1st Year of HL-LHC (kHS06)", both per experiment (ALICE, ATLAS, CMS, LHCb); LHC timeline 2010-2030?: First run, LS1, Second run, LS2, Third run, LS3, HL-LHC … FCC?] CPU: x60 from 2016. Data: Raw 2016: 50 PB → 2027: 600 PB; Derived (1 copy): 2016: 80 PB → 2027: 900 PB. Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 30  Raw data volume for LHC increases exponentially, and with it processing and analysis load  Technology at ~20%/year will bring x6-10 in 10-11 years  Estimates of resource needs at HL-LHC are x10 above what is realistic to expect from technology at reasonably constant cost
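The x6-10 figure on this slide is what ~20%/year compounding yields over a decade; a back-of-the-envelope check (the 20% growth rate is the slide's assumption, not a forecast):

```python
growth = 1.20  # assumed ~20% per year gain in capacity at constant cost

for years in (10, 11):
    factor = growth ** years  # compound growth over the period
    print(f"after {years} years: x{factor:.1f}")
# Roughly x6.2 after 10 years and x7.4 after 11 years, inside the quoted
# x6-10 range but far short of the ~x60 growth in CPU needs.
```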
  • 31. Summary  OpenStack has provided a strong base for scaling resources over the past 4 years without significant increase in CERN staff  Additional functionality on top of pure Infrastructure-as-a-Service is now coming to production  Community and industry collaboration has been productive and inspirational for the CERN team  Some big computing challenges up ahead… Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 31
  • 32. Further Information Technical details on the CERN cloud at http://openstack-in-production.blogspot.fr Custom CERN code is at https://github.com/cernops Scientific Working Group at https://wiki.openstack.org/wiki/Scientific_working_group Helix Nebula details at http://www.helix-nebula.eu/ http://cern.ch/IT ©CERN CC-BY-SA 4.0Universe and Clouds - 19th June 2017 Tim.Bell@cern.ch 32
  • 34. Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 34 WLCG MoU Signatures 2017: - 63 MoU’s - 167 sites; 42 countries
  • 35. Partners Contributors Associates Research Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 35
  • 36. Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 36
  • 37. How do we monitor? 37 [pipeline diagram: Data Sources (Data Centres, WLCG) → Transport (kafka) → Processing → Storage/Search → Data Access] Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 38. Tuning 38  Many hypervisors are configured for compute optimisation  CPU Passthrough so VM sees identical CPU  Extended Page Tables so memory page mapping is done in hardware  Core pinning so scheduler keeps the cores on the underlying physical cores  Huge pages to improve memory page cache utilisation  Flavors are set to be NUMA aware  Improvements of up to 20% in performance  Impact is that the VMs cannot be live migrated so service machines are not configured this way Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
  • 39. Provisioning services. Moving towards an elastic hybrid IaaS model: • In-house resources at full occupation • Elastic use of commercial & public clouds • Assume “spot-market” style pricing [diagram: OpenStack Resource Provisioning (>1 physical data centre) offering Public Cloud VMs, Containers, Bare Metal and HPC (LSF); consumed via HTCondor, Volunteer Computing, IT & Experiment Services, End Users, CI/CD, APIs, CLIs, GUIs and Experiment Pilot Factories] Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 39
  • 40. Simulating Elasticity  Deliveries are around 1-2 times per year  Resources are for  Batch compute … immediately needed … compute optimised  Services … needed as projects request quota ... Support live migration with generic CPU definition  Elasticity is simulated by  Creating opportunistic batch projects running on resources available for services in the future  Draining opportunistic batch as needed  End result is  High utilisation of ‘spare’ resources  Simulation of an elastic cloud Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch 40
  • 41. Pick the interesting events  40 million per second  Fast, simple information  Hardware trigger in a few micro seconds  100 thousand per second  Fast algorithms in local computer farm  Software trigger in <1 second  Few 100 per second  Recorded for study 41 Muon tracks Energy deposits Universe and Clouds - 26th September 2017 Tim.Bell@cern.ch
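The trigger chain on this slide is a staged reduction of the event rate; the rejection factors can be sketched as follows (300 Hz stands in for the slide's "few 100 per second"):

```python
collision_rate = 40_000_000  # events/s produced at the detector
l1_rate = 100_000            # events/s kept by the hardware trigger (a few microseconds)
recorded_rate = 300          # events/s recorded for study ("few 100 per second")

hw_rejection = collision_rate / l1_rate   # hardware trigger keeps 1 in 400
sw_rejection = l1_rate / recorded_rate    # software trigger keeps roughly 1 in 333
overall = collision_rate / recorded_rate  # only ~1 in 130,000 events is recorded

print(hw_rejection, round(sw_rejection), round(overall))
```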

Editor's notes

  1. 2
  2. Largest scientific apparatus ever built: 27 km around. 2 general purpose detectors: huge microscopes, to explore the very small, using a long lever arm. 2 specialized detectors.
  3. Over 1,600 magnets lowered down shafts and cooled to -271 °C to become superconducting. Two beam pipes, with a vacuum 10 times lower in pressure than on the Moon.
  4. However, CERN is a publicly funded body with strict purchasing rules to make sure that the contributions from our member states are also provided back to them: our hardware purchases should be distributed to each of the countries in ratio to their contributions. So, we have a public procurement cycle that takes 280 days in the best case. We define the specifications 6 months before we actually have the hardware available, and that is in the best case; worst case, we find issues when the servers are delivered. We have had cases such as swapping out 7,000 disk drives, where you stop tracking by the drive and instead measure by the pallet of disks. With these constraints, we needed to find an approach that allows us to be flexible for the physicists while still being compliant with the rules.
  5. We started looking at OpenStack in 2011, at an event in London at the Vinopolis, and started pilots. We are gradually expanding the functionality of the CERN cloud through the releases. We experiment with some new technology; some of it makes it to production within a release or so, while other projects, such as Ironic, we have a look at and then come back to a year or so later. The service catalog functions allow us to easily expose selected functions to early users.
  6. Should take around 10-15 minutes to execute the first command.
  7. Should take around 10-15 minutes to execute the first command.