Intesa Sanpaolo is one of the leading banking groups in the Eurozone, with over 12 million customers and 4,600 branches in Italy. With many traditional monolithic applications that are difficult to maintain and evolve, Intesa turned to Docker to help modernize the applications and improve their portability, so that the bank could consider a multi-site architecture across multiple data centers. Using Docker Enterprise Edition (EE), Intesa took the first step to “break the monolith” by containerizing its infrastructure, a self-described “Infrastructure-as-Code” pattern, and now uses Docker EE to orchestrate the applications across sites.
In this talk Diego Braga, Infrastructure System Specialist at Intesa, and Lorenzo Fontana, DevOps Engineer at Kiratech, will share how they implemented Docker EE along with software-defined networking and storage solutions to validate Intesa’s architectural model and to build a geographically distributed multi-data center cluster, all while saving infrastructure costs and remaining compliant with regulations.
They will highlight their CI/CD process using Docker and Jenkins, how the development and operations teams now work together to implement a DevOps methodology, and Intesa’s ROI from using Docker EE. They will also share Intesa’s future plans, including creating mixed Linux/Windows clusters that use the same overlay network, and exploring on-prem/public cloud cluster opportunities.
Building a Secure and Resilient Foundation for Banking at Intesa Sanpaolo with Docker EE
1. Building a Secure and Resilient Foundation for Banking at Intesa Sanpaolo
2. Agenda
1. Intesa Sanpaolo - Who We Are
2. The Actual Needs
3. Proposed Solution: Docker EE
4. Technological Stacks Supported – We Are Legacy!
5. The Business Case – Are There Any ROIs?
6. Architecture Design & Implementation
7. What We Achieved
8. Next Steps
3. Who We Are

The Leader in Italy
• Leader in all segments with a market share of 17% in customer deposits and 16% in customer loans
• Leadership in most attractive products
• Strong capital base and asset quality

Unique Customer Reach
• Largest domestic network: over 3,900 branches, 13%(1) market share and 11.1 million clients
• Best branch footprint making the Group truly nationwide: market share ≥ 12%(1) in 13 out of 20 regions
• High penetration of local markets: market share ≥ 5%(1) in 106 out of 107 provinces
• Particular strength in the wealthiest areas of Italy: strong retail presence covering more than 70% of Italian household wealth

Strategic International Presence
• Selected commercial banking presence in Central and Eastern Europe and Middle Eastern and North African countries, reaching 7.7 million clients in 12 countries through a network of over 1,100 branches
• International network presence in 28 countries in support of cross-border activities of corporate customers

Figures as at 31 March 2017
(1) Bank of Italy criteria; figures as at 31 December 2016
INTESA SANPAOLO
4. Who We Are

Financial Highlights
• Total Assets: euro 739,453 m
• Loans to Customers: euro 366,468 m
• Direct Deposits from Banking Business: euro 383,222 m
• Direct Deposits from Insurance Business and Technical Reserves: euro 146,295 m
• Shareholders’ Equity(1): euro 50,735 m
• 1Q17 Net Income: euro 901 m
• Market Capitalisation(2): euro 44.7 bn

• ~18.8 million customers (~11.1 million in Italy, ~7.7 million abroad)
• 5,075 branches (3,937 in Italy, 1,138 abroad)

Figures as at 31 March 2017
(1) Including Net Income (2) As at 28 April 2017
5. Who We Are

[Chart: banks’ market capitalisation (euro bn), with the Eurozone ranking highlighted — Intesa Sanpaolo among the Eurozone top banks]
• HSBC: 151.6
• Banco Santander: 87.3
• BNP Paribas: 80.8
• UBS: 60.4
• Lloyds Banking Group: 58.8
• ING: 58.0
• Sberbank: 57.5
• BBVA: 49.0
• Nordea Bank: 45.7
• Intesa Sanpaolo: 44.7
• Barclays: 42.9
• Société Générale: 40.6
• Crédit Agricole: 38.8
• Royal B. of Scotland: 37.4
• Deutsche Bank: 34.2
• UniCredit: 33.3
• Danske Bank: 31.2
• Credit Suisse: 29.2
• Standard Chartered: 28.2
• KBC: 27.7
• Svenska Handelsb.: 25.3
• Caixabank: 24.9

Source: Bloomberg
Prices as at 28 April 2017
6. The Actual Needs

Cloud-Ready
We believe that IaaS and PaaS can enable the cloud-readiness of apps, but managing them in hybrid environments can be complex. Infrastructure as Code is a step closer to what we mean by cloud-readiness, but apps aren’t all stateless, especially in legacy companies.

Break the Monolith
Monolithic apps represent the majority of our perimeter, as they are the legacy of a consolidated way of developing code. Change management of monoliths is heavyweight: even the smallest modification of the code requires a complete redeploy.

Infrastructure as Code
Having the same, unchanged infrastructure regardless of the environment in which it is located eliminates human error while deploying the infrastructure, but it forces the developer to also know non-pertinent domains.
7. Proposed Solution: Docker Enterprise Edition

An Enterprise Container-as-a-Service Solution
From «https://europe-2017.dockercon.com/enterprise-summit/»

Existing Application → Convert to a Docker EE container
• Modern Methodologies: integrate into CI/CD and automation systems
• Modern Infrastructure: built on premises, in the cloud, or as part of a hybrid environment
• Ongoing Innovation: add new services or start peeling off services from the monolith code base

The quickest way to cut into that 80%
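As a concrete illustration of the “convert to a Docker EE container” step, wrapping an existing Java web application in a supported base image can be sketched as the following minimal Dockerfile (the base image tag and WAR name are assumptions for illustration, not Intesa’s actual setup):

```dockerfile
# Hypothetical example: containerize an existing WAR without changing its code.
# Base image and artifact name are illustrative.
FROM tomcat:9-jre8

# Copy the archive produced by the existing build into Tomcat's webapps folder
COPY target/legacy-app.war /usr/local/tomcat/webapps/ROOT.war

EXPOSE 8080
CMD ["catalina.sh", "run"]
```

The point of this pattern is that the monolith itself is untouched: only its runtime is packaged, which is what makes it portable across the on-prem and cloud environments the slide describes.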
8. Technological Stack Supported

Microsoft-based stack
• Description: stack that uses the Microsoft product suite; can be used for custom applications or market products on a Windows platform
• Docker compatibility / vendor support: all the stack elements can be run in containers with full support (Supported)

Java-based stack
• Description: stack for Java applications with a relational DB. It is the most widely used ISP platform for critical applications and is based on a Linux platform
• Docker compatibility / vendor support: all the core elements of the stack are available as fully supported containers (Supported)

Open source stack with relational database
• Description: Java application stack that uses open source products and provides a relational database
• Docker compatibility / vendor support: Red Hat makes available only WildFly/JBoss Docker containers without enterprise-level support; the other elements are fully supported (Partially supported)

Open source stack with non-relational database
• Description: Java application stack that uses open source products and provides a non-relational database
• Docker compatibility / vendor support: Red Hat makes available only WildFly/JBoss Docker containers without enterprise-level support; the other elements are fully supported (Partially supported)
9. Business Case and ROIs

Everything is really nice and supported, but... am I saving money?

• Consolidation is the key: with Docker Enterprise Edition you can consolidate more apps on a single physical machine
• There is no big gap between the licence for an ESXi host and a Docker Enterprise engine, so there is no saving based merely on licence subscriptions
• The worst business case is running Docker EE on virtual machines, but a virtual infrastructure relieves us of having to think about high availability and storage availability across data centers
11. What We Got – w00t!

• 5 networking switches configured as an L3 IP Fabric (3 Leaf + 2 Spine)
• 5 management servers: UCP instances and DTRs + ingress services (Infra nodes)
• 4 worker servers for general-purpose workloads (3 CentOS 7.4 + 1 Windows Server 2016)
• 3 storage servers (Elastifile storage nodes)

7x Dell R730xd rack-mount servers (4 Worker + 3 Storage Nodes)
Total requested storage: 22.4 TB raw
• Memory: 768 GB RAM
• CPU: 2x E5-2690v4 (28 cores, 2.6 GHz)
• LAN: 4x 10Gb Ethernet (with SFP)
• Boot: 2x 300 GB SAS
• Disks: 4x 800 GB NVMe 2.5" = 3.2 TB NVMe

5x Dell R430 rack-mount servers (Infra nodes: UCP + DTR + Ingress)
Total requested storage: 4.8 TB raw
• Memory: 64 GB RAM
• CPU: 1x E5-2620v4 (8 cores, 2.1 GHz)
• LAN: 4x 10Gb Ethernet (with SFP)
• Disks: 2x 600 GB SAS 2.5" = 1.2 TB SAS 10k RPM

5x Dell S4048 switches (Spine and Leaf)
• S4048-ON multilayer switch: 48x 10G SFP+ interfaces and 6x 40G QSFP+ interfaces
• Switching capacity: 1.44 Tbps
• Forwarding rate: 1080 Mpps
• Fabric (Spine & Leaf): 40G DAC, 7 m
• 32x SFP+ 10G-SR transceivers for 8 servers
12. Spine Leaf L3 IP Fabric
[Diagram: three racks (Rack 1, Rack 2, Rack 3) connected through a Spine/Leaf L3 IP Fabric, with a WAN link to a remote DC — Leaf: 2x Dell S4048-ON, Spine: 3x Dell S4048-ON]
13. Spine Leaf L3 IP Fabric
[Diagram: two sites, Turin and Parma, each with three racks in a Spine/Leaf fabric, interconnected over the WAN]
14. Spine Leaf L3 IP Fabric
[Diagram: Turin and Parma sites, with internal networking traffic inside each site’s fabric and core networking traffic flowing between the sites over the WAN]
15. Spine Leaf L3 IP Fabric

• The Spine Leaf Layer 3 Fabric design allows container workloads to scale out predictably
• Latency stays constant even while adding rack-mount servers and workload
• It integrates easily into existing core IP network topologies
• Since we will run an SDN on top of this, the core switches need to know nothing but the MAC and ARP entries of the ToR switches
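To give a feel for what the routed underlay of such a fabric looks like, here is a minimal leaf configuration sketch in FRR syntax: each leaf runs eBGP to every spine and advertises only its rack’s local prefix, while the container overlay rides on top. All ASNs, addresses, and prefixes are invented for illustration; the actual Dell S4048 configuration will differ.

```
! Illustrative leaf switch config (FRR syntax, not the actual Dell OS config)
router bgp 65101
 bgp router-id 10.0.0.11
 ! one eBGP session per point-to-point uplink to a spine
 neighbor 10.1.1.0 remote-as 65000
 neighbor 10.1.2.0 remote-as 65000
 address-family ipv4 unicast
  ! advertise only this rack's server subnet into the fabric
  network 10.10.1.0/24
 exit-address-family
```

With this design, adding a rack means adding one leaf and one prefix, which is why the slide can claim predictable scale-out and constant latency.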
16. What About the Software?
[Diagram: the Turin and Parma racks over the WAN, hosting the software components — Avi Networks, UCP, DTR, Elastifile, and Worker nodes]
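As a hypothetical sketch of how workloads could be pinned to one of the two sites in such a Swarm cluster, a Docker stack file can use placement constraints on node labels (the label, image, and service names below are assumptions for illustration, not Intesa’s actual configuration):

```yaml
# Illustrative stack file: pin a service's replicas to nodes labeled with a site,
# e.g. after `docker node update --label-add site=turin <node>`
version: "3.3"
services:
  web:
    image: example/legacy-app:1.0   # hypothetical image name
    deploy:
      replicas: 4
      placement:
        constraints:
          - node.labels.site == turin
    ports:
      - "8080:8080"
```

The same mechanism (different labels per rack or site) is one way a geographically distributed cluster can keep workloads where they belong while still being managed as a single UCP cluster.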
19. Elastifile Design Objectives

Deployed anywhere on common hardware
• On-prem, private enterprise cloud, public cloud
• Works well in noisy & fluctuating public cloud environments

Full stack written from scratch
• Streamlined for Flash/SSD/NVMe (3D XPoint in future)
• No read cache / no write cache (eliminates costly NVRAM)
• Combines metadata and data into a single write, which reduces write amplification and extends flash lifetime (patented)
• Delivers linear scalability with < 2 ms latency in the public cloud

Enterprise-grade features / functionality
• Dynamic data path for directories and files (patented)
• Advanced data services: compression, dedupe, snapshots, async DR
• POSIX semantics
20. Avi’s Web Scale Application Services Fabric

• Scalable network services: separated control and data plane
• Centralized management: manage a single fabric, not many devices
• Visibility & analytics: actionable insights are key to automation
• Hybrid cloud: single solution, any environment

[Diagram: a central Controller manages Service Engines (the data plane) running as appliances on bare metal, virtualized hosts, containers, and public cloud VMs]
21. It’s All About Components
[Diagram: CI/CD pipeline — Version Control feeds CI (Build Apps, Build Images on a Build Cluster), then Integration, Functional Testing, Performance Testing, UAT, and Staging run in non-production environments, with CD promoting to Production; images are stored in Docker Trusted Registry and deployed through Docker Universal Control Plane]
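The abstract mentions that the CI/CD process is built on Jenkins and Docker; a pipeline of this shape could be sketched as the following declarative Jenkinsfile. The DTR hostname, credentials id, image, and service names are invented for illustration, not Intesa’s actual setup.

```groovy
// Hypothetical pipeline sketch: build, push to DTR, deploy through UCP.
pipeline {
    agent any
    environment {
        // dtr.example.com and the repository path are illustrative
        IMAGE = "dtr.example.com/bank/legacy-app:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build App')   { steps { sh 'mvn -B package' } }
        stage('Build Image') { steps { sh 'docker build -t $IMAGE .' } }
        stage('Push to DTR') {
            steps {
                // 'dtr-creds' is an assumed Jenkins credentials id
                withCredentials([usernamePassword(credentialsId: 'dtr-creds',
                        usernameVariable: 'U', passwordVariable: 'P')]) {
                    sh 'echo "$P" | docker login dtr.example.com -u "$U" --password-stdin'
                    sh 'docker push $IMAGE'
                }
            }
        }
        stage('Deploy via UCP') {
            steps {
                // assumes the agent uses a UCP client bundle; service name is illustrative
                sh 'docker service update --image $IMAGE legacy-app'
            }
        }
    }
}
```

Promotion through the non-production environments shown on the slide would be modeled as further stages (or separate pipelines) gated between the testing and production deployments.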
23. Next Steps

Infrastructure sizing based on average traffic
• Peaks can be handled by scaling out the cluster on public cloud
• All the infrastructure components can follow the Docker stack across hybrid on-prem and off-prem infrastructures

Cloud brokering tools
• Evaluate tools that can choose which cloud provider to use in case of bursts
• Handling a peak must not be more expensive than sizing the on-prem infrastructure for peaks