Backing Data Silo Attack:
Alfresco sharding,
SOLR for non-flat objects
Alexey Vasyukov <avasyukov@itdhq.com>
Backing Data Silo Attack:
Alfresco sharding,
SOLR for non-flat objects
Alexey Vasyukov <avasyukov@itdhq.com>
Buzzwords
The Scheme
Repo #1 Repo #2 Repo #3 Repo #4
● Each server is independent
● Each server stores a part of content
The Scheme
Repo #1 Repo #2 Repo #3 Repo #4
Level 7 Switch / Balancer
Share & 3rd party
● Each server is independent
● Each server stores a part of content
● We can query them all with native Alfresco REST API and a single API entry point
● We can even run an unmodified Share or 3rd party app on top of the federation
The Scheme
Repo #1 Repo #2 Repo #3 Repo #4
Level 7 Switch / Balancer
Sharded SOLR Cloud
Share & 3rd party
● External SOLR Cloud: single index of all repos
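To make «native Alfresco REST API and a single API entry point» concrete, below is a minimal sketch of a 3rd-party client talking to the federation through the L7 switch exactly as it would talk to one ordinary Alfresco server. The switch host, port and credentials are illustrative assumptions; the URL path is the stock Alfresco public v1 REST API.

```java
// Minimal sketch: a 3rd-party client calls the federation through the L7
// switch as if it were one ordinary Alfresco server. Host, port and the
// credentials are illustrative assumptions; the URL path is the standard
// Alfresco public v1 REST API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class FederationClient {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://l7-switch:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-/children"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        // The switch decides which backend repos serve this call and merges the answers.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```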
Questions regarding the scheme
• What the hell is going on?
• Are you guys crazy?
• Why do you need this scheme at all?
• It's a reinvented wheel, isn't it?
Background information
• The project started a year ago (Spring 2015)
• The need for a content platform:
– A few billion objects in several years — nobody knows the real size of the repo in 5-10 years
– Several thousand concurrent users
– A few dozen external apps — nobody knows the content structure in several years (but for sure it will be complex)
Background information
• Concerns regarding Alfresco (Spring 2015!)
– uncertainty about repo size scalability (strictly unofficial rumors of 100M nodes as a soft limit for a single repo; internal testing confirms this)
– no SOLR sharding (yes, the customer wants full text for billions of documents — hello, Google)
Background information
• And ... Q4 2015
– Alfresco releases 1B benchmark
– Alfresco supports sharded SOLR (finally!)
• Do we still need our wheel? Yes!
Different architectures & use cases
1B benchmark:
• Powered by repo clustering
• Single DB & content store
• All features
Sharded scheme:
• Powered by L7 switch / balancer
• Separate DBs & content stores
• Limited features
– No native assocs between repos
Source: Alfresco 1B Benchmark
Source: Alfresco Wiki
«Files were spoofed, so not on the filesystem. Alfresco server will generate a consistent text document each time the content is requested by an API, SOLR or Share.»
Key Decision Point: Storage!
• Not addressed in 1B benchmark
Key Decision Point: Storage!
• Not addressed in 1B benchmark
– Reasonable for load testing repo & SOLR
– Not acceptable for infrastructure planning
Key Decision Point: Storage!
• Not addressed in 1B benchmark
– Reasonable for load testing repo & SOLR
– Not acceptable for infrastructure planning
• 1B files x 100 KB x 1 version = 100 TB
1B files x 100 KB x 10 versions = 1 PB
– Clustering requires shared storage
– Terribly expensive, complex in operation
Key Decision Point: Storage!
• Not addressed in 1B benchmark
– Reasonable for load testing repo & SOLR
– Not acceptable for infrastructure planning
• 1B files x 100 KB x 1 version = 100 TB
1B files x 100 KB x 10 versions = 1 PB
– Clustering requires shared storage
– Terribly expensive, complex in operation
• Answer? Sharding lets us use local disks!
– Fast, cheap, simple, vendor neutral
– Hello, Google
External SOLR Cloud
• Single SOLR collection for all federated repos
– Native sorting, paging, etc
– No heavy logic and overhead in L7 switch
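As a hedged illustration of «single SOLR collection, native sorting and paging», the sketch below runs one sorted, paged SolrJ query over the shared collection, so the switch never has to merge per-repo result lists. The ZooKeeper address, collection and field names are assumptions, and the Builder API shown is from a recent SolrJ release — the original project targeted Solr 5.x, where the client is constructed differently.

```java
// Hedged sketch: one paged, sorted query over the shared SOLR Cloud
// collection. ZooKeeper host, collection and field names are assumptions;
// the Builder API is from recent SolrJ (7+), not the Solr 5.x client the
// original project used.
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FederatedSearch {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient solr = new CloudSolrClient.Builder(
                Collections.singletonList("zk1:2181"), Optional.empty()).build()) {
            SolrQuery q = new SolrQuery("cm_name_t:report*");     // metadata query across all repos
            q.addSort("cm_created_dt", SolrQuery.ORDER.desc);     // SOLR sorts globally...
            q.setStart(40);                                       // ...and pages globally,
            q.setRows(20);                                        // so no merge logic in the switch
            QueryResponse rsp = solr.query("alfresco-federation", q);
            rsp.getResults().forEach(doc -> System.out.println(doc.getFieldValue("id")));
        }
    }
}
```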
External SOLR Cloud
• Complex queries on associated objects
• Common approach — a policy to copy properties between related objects:
– Overhead in DB and index
– Your clear content structure goes crazy
– Tons of policies to handle assoc creation / deletion and property updates for all objects
– What about multiple assocs with queries inside child nodes?
– What about full text touching more than one object?
External SOLR Cloud
• Complex queries on associated objects
• Alternative approach — nested SOLR objects:
– No redundancy in DB tables / SOLR index
– Metadata and full-text in a single query
External SOLR Cloud
• Complex queries on associated objects:
• Alternative approach — nested SOLR objects:
– No redundancy in DB tables / SOLR index
– Metadata and full-text in a single query
– Not a silver bullet, not the only option
– Requires a custom indexer implementation
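Below is a hedged sketch of what a custom indexer could push to SOLR for the nested-objects approach, and how one block-join query then matches parents by their children's metadata and full text. The field names, collection and client handle are illustrative assumptions, not the project's actual model; addChildDocument and the {!parent} query parser are standard SOLR features.

```java
// Hedged sketch of the nested-objects idea: the custom indexer writes a
// parent document with its related objects as SOLR child documents, and a
// single block-join query finds parents by child metadata + full text.
// Field names, the collection and the client handle are assumptions.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.common.SolrInputDocument;

public class NestedIndexerSketch {

    // Index one "case" with its attached documents as children — no property
    // copying between associated objects, no redundancy in DB or index.
    static void indexCase(SolrClient solr, String collection) throws Exception {
        SolrInputDocument parent = new SolrInputDocument();
        parent.addField("id", "case-42");
        parent.addField("type_s", "case");
        parent.addField("case_number_s", "42/2016");

        SolrInputDocument child = new SolrInputDocument();
        child.addField("id", "case-42-doc-1");
        child.addField("type_s", "document");
        child.addField("content_t", "full text extracted from the attached file");
        parent.addChildDocument(child);

        solr.add(collection, parent);
        solr.commit(collection);
    }

    // One query: "cases that have an attached document containing this text".
    // Child metadata and full text are matched in the same request.
    static SolrQuery casesWithChildText(String text) {
        return new SolrQuery("{!parent which=\"type_s:case\"}content_t:" + text);
    }
}
```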
L7 Switch Internals
• Separate Spring App
– Yes, we implemented the switch in Java
• Handles requests, communicates with
backend repos and external SOLR Cloud
L7 Switch Internals
• Separate Spring App
– Yes, we implemented the switch in Java
• Handles requests, communicates with
backend repos and external SOLR Cloud
• Stateless, can be scaled out with zero changes
L7 Switch Internals
• Separate Spring App
– Yes, we implemented the switch in Java
• Handles requests, communicates with
backend repos and external SOLR Cloud
• Stateless, can be scaled out with zero changes
• Common Alfresco REST API
– Native API for Share and 3rd party apps
– No heavy logic in the switch
L7 Switch Internals: Deeper Look
• Public APIs from L7 switch point of view:
– Distributed (all repos)
– Replicated (write to all repos, read from any repo)
– Single-node (single repo, content based routing)
– SOLR Cloud calls
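A hedged sketch (not the project's actual switch code) of how this classification could look: each incoming public API call is mapped to one of the four categories, and single-node calls are forwarded to the owning repo. The path patterns, the hash-based node placement and the use of Spring's RestTemplate are illustrative assumptions.

```java
// Hedged sketch (not the actual switch code): classify Alfresco public API
// calls into the four routing categories and forward single-node calls to
// the owning repo. Path patterns, hash-based placement and RestTemplate
// usage are illustrative assumptions; the component is stateless, so several
// instances can run behind a plain load balancer.
import java.util.List;
import org.springframework.web.client.RestTemplate;

public class SwitchRoutingSketch {

    enum Mode { DISTRIBUTED, REPLICATED, SINGLE_NODE, SOLR }

    private final List<String> repos = List.of(
            "http://repo1:8080", "http://repo2:8080",
            "http://repo3:8080", "http://repo4:8080");
    private final RestTemplate rest = new RestTemplate();

    Mode classify(String path) {
        if (path.contains("/search")) return Mode.SOLR;        // answered by external SOLR Cloud
        if (path.contains("/people")) return Mode.REPLICATED;  // write to all repos, read from any
        if (path.contains("/nodes/")) return Mode.SINGLE_NODE; // content-based routing
        return Mode.DISTRIBUTED;                                // fan out to all repos, merge results
    }

    // Content-based routing: pick the repo that owns this node. A real switch
    // might instead encode the owning repo directly in the node reference.
    String repoFor(String nodeId) {
        return repos.get(Math.floorMod(nodeId.hashCode(), repos.size()));
    }

    // Single-node call: plain stateless pass-through to one backend repo.
    String forwardNodeCall(String nodeId, String path) {
        return rest.getForObject(repoFor(nodeId) + path, String.class);
    }
}
```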
L7 Switch Internals: Real Life
• Mapping all repo APIs to the correct switch controllers is hard and time-consuming:
– Test, test, test
– (+ upstream changes: track, track, track)
• However, you do not need all APIs for your
project in real life
– Still test, test, test
Demo
L7 Switch Benchmark
• Concentrates on the L7 switch itself
– Backend repos and SOLR are «fast enough» in the test
See benchmark details
in backup slides
(Appendix A)
L7 Switch Benchmark
• Concentrates on the L7 switch itself
– Backend repos and SOLR are «fast enough» in the test
• 15k users with 30s think time
– 125 browse req/s (federated)
– 125 search req/s (SOLR)
– 250 node access req/s (single repo access)
See benchmark details
in backup slides
(Appendix A)
L7 Switch Benchmark
• Concentrates on the L7 switch itself
– Backend repos and SOLR are «fast enough» in the test
• 15k users with 30s think time
– 125 browse req/s (federated)
– 125 search req/s (SOLR)
– 250 node access req/s (single repo access)
• Switch HW: 8 cores, 8 GB RAM
See benchmark details
in backup slides
(Appendix A)
L7 Switch Benchmark
• Concentrates on the L7 switch itself
– Backend repos and SOLR are «fast enough» in the test
• 15k users with 30s think time
– 125 browse req/s (federated)
– 125 search req/s (SOLR)
– 250 node access req/s (single repo access)
• Switch HW: 8 cores, 8 GB RAM
• Results (user wait time):
– Browse: < 3.9s
– Node access: < 1.2s
– Search: < 9.7s
See benchmark details
in backup slides
(Appendix A)
Distributed system in production
• Docker + Ansible: nice but not enough
• JGroups for auto-discovery
• Pushing configs from the central location
• Reconfigure running nodes without restart when a member joins / leaves the federation
• Basic safety checks against human mistakes (like adding repos with incompatible code versions into the same federation)
See more details
in backup slides
(Appendix B)
Summary
• Yes, you can have sharded Alfresco
• Sharding is an option to solve the storage problem for a really large repository
Summary
• Yes, you can have sharded Alfresco
• Sharding is an option to solve the storage problem for a really large repository
• Sharding is not a silver bullet, there are limitations,
one size does not fit all
Summary
• Yes, you can have sharded Alfresco
• Sharding is an option to solve the storage problem for a really large repository
• Sharding is not a silver bullet, there are limitations,
one size does not fit all
• Sharded Alfresco (Community 5.1.e) on commodity
hardware can scale up to 15k concurrent users
Summary
• Yes, you can have sharded Alfresco
• Sharding is an option to solve the storage problem for a really large repository
• Sharding is not a silver bullet, there are limitations,
one size does not fit all
• Sharded Alfresco (Community 5.1.e) on commodity
hardware can scale up to 15k concurrent users
• We would like to share and re-implement the switch
and indexer from scratch as separate open source
projects, if the community is interested
Thank you for your time
and attention!
Alexey Vasyukov <avasyukov@itdhq.com>
Appendix A:
Benchmark Details
L7 Switch Benchmark: Infra
• 3 repositories
● 1 separate PostgreSQL database server per repo
● sharded content is not indexed by internal Alfresco SOLR, only by external SOLR Cloud
• 3 SOLR Cloud shards
● + 1 ZooKeeper node
• 1 custom indexer server
• 1 custom L7 switch server
L7 Switch Benchmark: Limits
• Synthetic load with just the 'ab' tool
– Share servers exist in the real setup, but they are not included in the benchmark
• Quite simple SOLR queries
– Not a real nested-objects benchmark yet
• Benchmarking the backends is out of scope
– They are «fast enough» to make the L7 switch the bottleneck
– Real system performance data is the customer's property
– The real system has a lot of deep repo customizations, so this data is not relevant for any other use case
L7 Switch Benchmark: Load
• 15k users:
– 25% browse (federated)
– 25% search (SOLR)
– 50% access data (single repo)
• Think time 30 seconds (15k users / 30 s ≈ 500 req/s total):
– 125 browse req/s
– 125 search req/s
– 250 node access req/s
• Load lasts for 1 hour
L7 Switch Benchmark: HW
• L7 switch hardware
– 8 cores
– 8 GB RAM
L7 Switch Benchmark: Results
• Browse (ms):
– 50% 969
– 66% 1181
– 75% 1345
– 80% 1447
– 90% 2074
– 95% 2624
– 98% 2953
– 99% 3319
– 100% 3895 (longest request)
L7 Switch Benchmark: Results
• Node access (ms):
– 50% 423
– 66% 515
– 75% 592
– 80% 662
– 90% 821
– 95% 895
– 98% 962
– 99% 1065
– 100% 1165 (longest request)
L7 Switch Benchmark: Results
• Search (ms):
– 50% 3180
– 66% 3708
– 75% 4059
– 80% 4291
– 90% 5134
– 95% 5659
– 98% 6674
– 99% 7571
– 100% 9638 (longest request)
Appendix B:
Production Considerations
Distributed system in production
• We do use Docker + Ansible
• It's not a silver bullet and it's not enough
– Components should detect and handle each other's restarts and failures
– Adding new nodes into a running federation — too many interconnections and relations, too many configs to edit on each server, and existing nodes have to be reconfigured
– Handling human mistakes — starting a new system while the old one is still running, or starting new repos with the new application version while old repos are running the old one
Distributed system in production
• Auto-discovery with JGroups
– Plus a safety check against multiple federations on the same subnet
• Pushing configs from the L7 switch
– Single config for the sysadmin to rule the federation
– On-the-fly reconfiguration of existing nodes (without restart) when the federation topology changes
• Code version check on joining the federation
– Protection against running incompatible code versions in the same federation
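Finally, a hedged sketch of the JGroups-based auto-discovery described above (not the project's implementation): every member joins a channel whose cluster name embeds the environment and code version, so an incompatible node — or a second federation on the same subnet — never joins the group, and topology changes arrive as view updates that the switch can answer by pushing fresh configs. The cluster-name convention is an assumption; the API is plain JGroups 4.

```java
// Hedged sketch (JGroups 4 API; not the project's actual implementation).
// The cluster name embeds environment + code version, which gives both the
// "multiple federations on one subnet" safety check and the code version
// check for free: incompatible nodes never join the same group.
import org.jgroups.JChannel;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

public class FederationDiscovery {
    private static final String CODE_VERSION = "1.4.2";   // illustrative
    private static final String CLUSTER = "alfresco-federation-prod-" + CODE_VERSION;

    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();                 // default UDP stack, multicast discovery
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void viewAccepted(View view) {
                // Topology changed: a repo joined or left the federation.
                // This is where the L7 switch would push updated configs
                // to all members, without restarting any of them.
                System.out.println("Federation members: " + view.getMembers());
            }
        });
        channel.connect(CLUSTER);
        System.out.println("Joined " + CLUSTER + " as " + channel.getAddress());
        // keep running; call channel.close() on shutdown
    }
}
```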