9. Monolithic Code
public double getQuote(String type) {
    double quote = 0;
    for (Product product : products) {
        quote += product.getValue();
    }
    return quote;
}
N+1 Call Pattern
Works well within one process
10. N+1 Call Pattern
Product Service
Quote Service
1 call to the Quote Service = 44 calls to the Product Service
19. Granularity
Doc Processor → Doc Transformer → Doc Signer → Doc Encryption → Doc Shipment
Document Encryption is carved out as a separate service. That may not be the best option – it might be better not to run it as its own service.
23. WPO (Web Performance Optimization)
taught us to optimize resource dependencies when loading a web page by analyzing Resource Waterfalls
24. Especially useful when page loads get very complex and overloaded: 3rd-party dependencies, non-optimized resources, wrong cache settings, loading too much data too early, …
25. SFPO (Service Flow & Performance Optimization) has to teach us how to optimize (micro)service dependencies through Service Flows
26. Especially useful to identify: inefficient 3rd-party services, recursive call chains, N+1 query patterns, loading too much data, no data caching, … -> sounds very familiar from WPO
45. 26.7s Load Time
5kB Payload
33! Service Calls
99kB service-call payload - 3kB for each call!
171! Total SQL Count
Architecture Violation: direct access to the DB from the frontend service
Single search query end-to-end
46. The fixed end-to-end use case
2.5s (vs 26.7s) Load Time
1! (vs 33!) Service Call
5kB (vs 99kB) Payload!
3! (vs 177!) Total SQL Count
48. Infrastructure Utilization
Is the load equally balanced across the microservices?
When do you scale up/down?
• CPU
• Memory
• Load
Use an automated process to scale up/down
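The scaling rule above can be sketched as a simple threshold check. This is a minimal illustration, not any specific product's autoscaling API; the thresholds and the `decide` method are invented for the example.

```java
// Illustrative threshold-based autoscaling decision, assuming we sample
// CPU and memory utilization (0.0-1.0) per scaling interval.
public class Autoscaler {
    enum Decision { SCALE_UP, SCALE_DOWN, HOLD }

    static Decision decide(double cpuUtil, double memUtil,
                           int instances, int minInstances) {
        // Scale up if either resource is under pressure.
        if (cpuUtil > 0.80 || memUtil > 0.80) return Decision.SCALE_UP;
        // Scale down only when both are idle and we are above the floor.
        if (cpuUtil < 0.20 && memUtil < 0.20 && instances > minInstances)
            return Decision.SCALE_DOWN;
        return Decision.HOLD;
    }
}
```

In practice such a rule would be driven by a monitoring feed and would add hysteresis (cool-down periods) so the fleet does not flap between sizes.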
Dynatrace – enterprise monitoring software vendor
Presentation today – discusses common issues in moving from a monolith to microservices, and how to fix them
Avoid common issues with performance
Best practices in continuous deployment
Leveraging infrastructure efficiently
It’s only one of ten things
Sample application
Application that can spin up multiple microservices on a single host
Not just theories – but have actual examples of the issues
Components:
Controller – spins up multiple processes, one per service instance
Registry service – acts as a router
Service client – example web browser or upstream service
Used Spring Boot as the base
- offers a rich set of technologies and support for multiple stacks
- every one of the services is based on the same binaries
- Spring Boot makes it easy to reconfigure individual service instances
Example of code written to access the service remotely
The anti-patterns shown apply to any type of process, not just Spring Boot
The code loops over a list of products and reads a value from each
Very likely the required values are in the cache of the local process
Works well for a monolith – but what happens if you break that functionality out into its own service?
Transaction flow – topology of the lifecycle of the transaction
The left-hand side shows the web request from the client all the way to the DB on the right
You can see the quote service calls the product service, which goes to the DB
Problems with this
- one request to the quote service spawns 44 calls downstream
Potential solution
- move the summation logic into the product service component instead of the quote service
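The proposed fix can be sketched in code. This is an illustrative contrast, not the talk's actual services; the `ProductService` interface and method names are invented, with an in-memory stand-in playing the remote service.

```java
import java.util.List;
import java.util.Map;

public class QuoteFix {

    // Stand-in for the remote product service (names are illustrative).
    interface ProductService {
        double getValue(String id);              // one value per call
        double getTotalValue(List<String> ids);  // aggregate on the server side
    }

    // Anti-pattern: one remote call per product -> the N+1 call pattern.
    static double quoteNPlusOne(ProductService svc, List<String> ids) {
        double quote = 0;
        for (String id : ids) {
            quote += svc.getValue(id);  // remote call on every iteration
        }
        return quote;
    }

    // Fix: the summation logic lives in the product service; one call total.
    static double quoteAggregated(ProductService svc, List<String> ids) {
        return svc.getTotalValue(ids);
    }

    // In-memory implementation so the sketch is self-contained.
    static class InMemoryProductService implements ProductService {
        private final Map<String, Double> values;
        InMemoryProductService(Map<String, Double> values) { this.values = values; }
        public double getValue(String id) { return values.get(id); }
        public double getTotalValue(List<String> ids) {
            return ids.stream().mapToDouble(values::get).sum();
        }
    }
}
```

Both methods return the same quote, but for 44 products the first makes 44 remote calls while the second makes one.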
Some high-level issues with microservice architectures
- organizations now break up responsibilities across multiple devs or teams
- the logic at one tier impacts the scalability or performance of a downstream component
If you eliminate the N+1 call pattern, this problem gets solved
Use a cache at the service level to eliminate redundant calls
As soon as you split up monolithic applications, revisit your caching strategies
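A service-level cache can be as small as a wrapper around the remote call. This is a minimal sketch under invented names (`CachedLookup`, a TTL-based policy); a production setup would use an established cache library and an eviction strategy.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Caches the result of an expensive downstream call per key, so repeated
// lookups within the TTL do not hit the remote service again.
public class CachedLookup<K, V> {
    private final Function<K, V> remoteCall;  // the expensive downstream call
    private final long ttlMillis;
    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();

    record Entry<T>(T value, long storedAt) {}

    public CachedLookup(Function<K, V> remoteCall, long ttlMillis) {
        this.remoteCall = remoteCall;
        this.ttlMillis = ttlMillis;
    }

    public V get(K key) {
        long now = System.currentTimeMillis();
        Entry<V> e = cache.get(key);
        if (e == null || now - e.storedAt() > ttlMillis) {
            // Miss or stale entry: refresh from the downstream service.
            e = new Entry<>(remoteCall.apply(key), now);
            cache.put(key, e);
        }
        return e.value();
    }
}
```

The point of the slide still applies: what was "free" in-process caching in the monolith has to become an explicit, deliberate caching layer once the call crosses a service boundary.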
Happens when there are multiple steps in a transformation process
Example: a document is created (e.g. in Photoshop); the doc processor node then makes calls to the transformer and the signer
Each step produces the document, then hands it off to the next document processor
A bandwidth limit isn't going to impact internal data-center traffic
But the transition to the cloud, or to multiple infrastructure locations, can cause significant overhead and delay
Split the flow up into a manager-and-distributor relationship
Each component does not get the full document, only the part it requires
Architect the service to do the processing asynchronously
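The two recommendations above can be sketched together: each stage gets only the field it operates on, and independent stages run asynchronously. The stage names and the `Document` record are invented for illustration; the real services would be remote calls, not local methods.

```java
import java.util.concurrent.CompletableFuture;

public class DocPipeline {
    record Document(String body, String signature, String encryptedBody) {}

    // Each stage receives only the part of the document it needs.
    static String transform(String body) { return body.toUpperCase(); }
    static String sign(String body)      { return "sig(" + body.hashCode() + ")"; }
    static String encrypt(String body)   { return "enc:" + body; }

    static CompletableFuture<Document> process(String rawBody) {
        return CompletableFuture
            .supplyAsync(() -> transform(rawBody))      // stage 1
            .thenCompose(body -> {
                // Signing and encryption only depend on the transformed
                // body, so they can run in parallel instead of in sequence.
                CompletableFuture<String> sig =
                    CompletableFuture.supplyAsync(() -> sign(body));
                CompletableFuture<String> enc =
                    CompletableFuture.supplyAsync(() -> encrypt(body));
                return sig.thenCombine(enc,
                    (s, e) -> new Document(body, s, e));
            });
    }
}
```

Passing only the required part of the document between stages is what keeps the payload small when the pipeline spans data centers or cloud regions.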
They had a monolithic app that couldn't scale endlessly. Their popularity caused them to think about re-architecting and allowing developers to make faster changes to their code. They were moving towards a service approach.
Separating frontend logic from the backend (search service). The idea was to also host these services potentially in the public cloud (frontend) and in a dynamic virtual environment (backend) to be able to scale better globally.
On go-live day with the new architecture everything looked good at 7 AM, when not many folks were online yet!
By noon – when the real traffic started to come in – the picture was completely different. User experience across the globe was bad. Response time jumped from 2.5s to 25s, and the bounce rate tripled from 20% to 60%.
The backend service itself was well tested. The problem was that they never looked at what happens under load "end-to-end". It turned out that the frontend had direct access to the database to execute the initial query when somebody ran a search. The returned list of search-result IDs was then iterated over in a loop. For every element, a "micro" service call was made to the backend, which resulted in 33 (!) service invocations for this particular use case, where the search returned 33 items. Lots of wasted traffic and resources, as these key architectural metrics show us.
They fixed the problem by understanding the end-to-end use cases and then defining backend service APIs that provided the data the frontend really needed. This reduced roundtrips, eliminated the architectural regression, and improved performance and scalability.
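The before/after shape of that fix can be sketched as two API styles. The `Backend` interface and its method names are invented for illustration; the counting test double stands in for real remote services.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SearchFix {
    // Hypothetical backend API shapes (names invented for illustration).
    interface Backend {
        List<String> searchIds(String query);      // before: returns only IDs
        String detail(String id);                  // before: one extra call per ID
        List<String> searchDetails(String query);  // after: one call, full data
    }

    // Before the fix: 1 ID query + N detail calls (the N+1 pattern;
    // 33 result items meant 33 backend invocations).
    static List<String> searchBefore(Backend b, String query) {
        List<String> results = new ArrayList<>();
        for (String id : b.searchIds(query)) {
            results.add(b.detail(id));  // remote call per result item
        }
        return results;
    }

    // After the fix: one backend call returns render-ready results.
    static List<String> searchAfter(Backend b, String query) {
        return b.searchDetails(query);
    }

    // Test double that counts remote calls.
    static class CountingBackend implements Backend {
        int calls = 0;
        final Map<String, String> data = new LinkedHashMap<>();
        public List<String> searchIds(String q) { calls++; return new ArrayList<>(data.keySet()); }
        public String detail(String id) { calls++; return data.get(id); }
        public List<String> searchDetails(String q) { calls++; return new ArrayList<>(data.values()); }
    }
}
```

Designing the API around the end-to-end use case, rather than around the data model, is what collapses the call count from N+1 to 1.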