Con-way Case Study - Optimizing Application Integration SDLC
1. DevOps
Con-way Case Study: Optimizing Application
Integration Software Development Lifecycle
Ram Vittal
Con-way
Principal Enterprise Architect
DOX05S
3. App Integration SDLC History
· 10+ years app development
· 100+ apps in production
· Manual testing results in outages
· Test Automation tool selection
· Test Automation Pilot
· Recognizing SDLC Constraints
· Service Virtualization Pilot
4. Agenda
· About Con-way
· IT App Overview
· IT App Integration Overview
· App Layers and SDLC
· App Dev/Test Constraints
· Pilot Project Use Case
· Pilot Project Benefits
· Q & A
11. App Dev/Test Constraints
· Time for test automation
· System availability
· Test data management
· Limited capacity
· High complexity
12. Pilot Project Use Case
· On-demand inspection planning (ODIP)
· Shippers often incorrectly classify shipments, resulting in revenue loss for Con-way.
· The ODIP solution will predict which shipments are most likely to be misclassified and yield additional revenue.
13. Classifying Freight
Freight classification: there are 18 freight classes, determined by:
• Weight
• Length
• Height
• Density
• Ease of handling
• Value
• Liability from theft, damage, breakability or spoilage
[Images: freight class 50 (e.g. a bag of cement) and freight class 500 (e.g. golf balls)]
14. Incorrect Freight Classification
2,000 lb. shipment = 20 CWT (hundredweight)
Classified as 50: sample rate = $47.98/CWT; 20 CWT x $47.98 = $959.60
Should be 500: sample rate = $409.32/CWT; 20 CWT x $409.32 = $8,186.40
Lost revenue: $8,186.40 - $959.60 = $7,226.80
15. ODIP and System Dependencies
[Diagram: ODIP consumes events and services from dependent systems (CIS, LOC, SHM, EQP, Rating, SMART, SCO, CORR, Billing, Linehaul, pickup request) and runs Model X, Model Y and Model Z to score shipments. The Shipment, SCO, Shipments, Linehaul and Pickup interfaces are backed by virtual services (VS).]
16. ODIP System Under Test and System Dependencies
[Diagram: the shipment inspection Java service is the system under test. Its dependencies: shipment events originating from FBES, CORR, SMART and SCO, a shipment Java service, a shipment canonical service, and a shipment event publisher feeding Mobile Navigator. The constrained interfaces (shipment Java service, shipment event, Mobile Navigator) are replaced by virtual services (VS).]
25. Benefits
Saved two months of development/testing
Reduced complexity for development/testing
Identifying and fixing bugs became easier
Provided high-availability for constrained services
Achieved component level performance testing
Eliminated capacity constraints for performance testing
Identified performance issues earlier in SDLC
Repeated performance test several times
Reusable virtual services for other projects
26. ODIP Pilot Project Scorecard

Service Virtualization
• Integration test system availability: Low → High
• Integration test data coverage and accuracy: Low → High
Benefits: direct dependencies were virtualized, providing very high system uptime; test data scenarios were set up in spreadsheets and an Oracle DB, improving coverage and accuracy.

Integration Testing
• Scenarios validated during development: Low → High
• Phase in which all systems get tested: SIT → Development
Benefits: "shift left" of testing; better code quality and fewer bugs; increased developer productivity; released with confidence.

Load and Performance Testing
• Throughput achieved: 100 bills/hour → 50,000 bills/hour
• Number of cycles executed: 1 → 10/on-demand
• Number of issues identified: Small → Large
• SDLC phase in which L&P testing was done: Post-SIT → Development
Benefits: performance issues identified during the development phase; cost savings from resolving issues earlier in the cycle; ability to test various load scenarios; ability to test performance at the component level.
28. Recommended Sessions
SESSION #  TITLE                                                    DATE/TIME
DOT10S     DevOps: A Cultural Transformation, More than Technology  11/11/2014 at 4:15 pm
DOT17S     Moving Forward in Your DevOps Journey                    11/12/2014 at 11:15 am
29. Related Technologies
• Service Virtualization (CA Technologies)
• DevOps Simulation Experience (CA Technologies)
• Parallel Application Development (CA Technologies)
• DevOps Assessment (CA Technologies)
30. Session Evaluation
Please provide your feedback about this session.
Session Name: Con-way Case Study: Optimizing Application Integration Software Development Lifecycle
Access inside the CA World Mobile App: click on SURVEY/SESSION EVALUATION. If your badge was scanned at the entrance to this session, click on the name of this session.
Editor's notes
Good afternoon everyone. I am here today to talk to you about how we optimized our application integration SDLC to improve quality and efficiency. I'd like to start off with a brief history of our SDLC journey.
Optional: My name is Ram Vittal and I am an Enterprise Architect from Con-way. My role as an architect is to help project teams design and develop integration solutions.
At Con-way, we have been developing integration apps for over 10 years and we have 100+ integration apps in production. But the challenge we have is testing all our applications when a change occurs. Since our testing is manual, we inadvertently migrate untested changes to production, causing outages.
A couple of years ago, we looked at several options to address this challenge. We looked at several test automation products, including IBM GreenHat and CA LISA, but we chose CA LISA because it met all our requirements. After choosing CA LISA, we did a test automation pilot for our Salesforce.com integration application. For that test automation pilot we had to call several services on our legacy systems, and those services were not up and running all the time.
We realized that test automation alone will not solve all our problems when there are constraints in our SDLC such as system/service availability, test data issues and capacity issues. So we decided to execute a pilot project that would address those constraints. In fact, I will be sharing that pilot project as a case study with you today.
So how many of you are familiar with Service Virtualization?
Depending on audience response: Some of you are familiar with it. To follow this presentation, you don't really need to know service virtualization.
Here is a quick rundown of our agenda:
A quick intro to who we are and what we do at Con-way and our IT app overview, followed by SDLC constraints. Then we will get into our pilot project use case, share the benefits we have seen, and end by taking your questions.
Con-way is an industry leader in freight and transportation logistics. We have three subsidiaries: Con-way Freight, Menlo Worldwide, and Con-way Truckload.
Con-way Freight is a less-than-truckload carrier with more than 400 operating locations in the United States, Canada and Mexico.
Menlo Worldwide is a global supply chain logistics solution provider that implements 3PL and 4PL solutions.
Con-way Truckload is a full-truckload carrier that ships within the US, Canada and Mexico.
Menlo was founded in 1991. 2013 revenue: $1.54 billion. 4,800 employees. Operates in 20 countries.
Freight was founded in 1983. 2013 revenue: $3.46 billion. 20,300 employees. Over 9,000 trucks and about 2,500 trailers.
Truckload was founded in 1951. 2013 revenue: $637 million. 3,500 employees. Over 2,600 trucks and about 8,000 trailers.
Con-way Freight IT is made up of several third-party apps and home-grown apps. All these applications communicate via an Enterprise Service Bus, and the majority of our business logic and data resides on the mainframe.
We use Salesforce.com for managing sales and marketing activities, PeopleSoft for managing human resources, Oracle Financials for AR/AP, and DMS for document management.
We have built several home-grown apps for Billing, Pricing, Operations and Invoicing business functions.
We use several interfaces to access our applications such as internet, intranet, mobile, iPad and the good old green screen. We still can’t get away from it.
We offer several customer facing apps on the internet such as Tracking, Rate Quote etc.
So how do these applications integrate and talk to each other?
This is how..
We use an event-driven and service-oriented architecture for integrating applications. Our core business components such as customer, shipment, location and operations run on the mainframe and were developed using CA:GEN technology. Those components generate events for any significant business activity and also provide a set of services to maintain the business data.
Those events are published to the TIBCO EMS messaging bus via TIBCO adapters. We use TIBCO BusinessWorks as integration clients that listen to those events and orchestrate a business process, typically using a set of services. Typically, those services are either Java services running on IBM WebSphere or third-party web services such as Salesforce.com.
We use TIBCO BusinessEvents to correlate events and identify opportunities and threats.
We provide a variety of end-user interfaces to access our services, such as web, mobile, EDI, etc.
So this is our integration architecture for an app that is made up of several components and layers.
We just talked about the components and layers in our integration app. These are our app layers from three perspectives: web, mobile and DB. If you start the transaction from the web, you will go through this set of layers: IBM WAS, TIBCO ESB, Oracle DB, CA:GEN proxy and DB2 on the mainframe.
If you start the transaction from mobile or DB2, you will go through the same set of layers, but in a different order.
The web UI is basically HTML/JavaScript pages interacting with Java services running on IBM WebSphere. TIBCO ESB is our enterprise service bus, where we host web services and events and orchestrate business processes. We store some reference data in Oracle and use CA:GEN services to store transactional data in DB2.
A mobile app consumes web services from the TIBCO ESB layer, which in turn depends on the WAS, Oracle, CA:GEN and DB2 layers.
We have seen a little bit about our app layers, so how do we build our apps?
This is our traditional, waterfall-like SDLC process for our app development.
We have 10 dev, 5 QA, 1 load and 1 pre-prod environment.
We develop applications in the DEV environment, and after development is done, we migrate to the QA environment for testing. After QA is done, we may optionally perform load testing, and then go to pre-production and production.
We spend about 70% of the time in development and try to cram all the testing toward the end of the SDLC cycle. The problem with this traditional approach is that testing is done late in the SDLC, resulting in less time to identify and fix defects.
So in our traditional SDLC, one of the significant problems we run into is not having enough time to test our application before it goes to production. I believe there are four major factors that contribute to this time problem: system availability (the service your app is calling is not available), test data management (your app's test data is not set up right), limited capacity (e.g. limited mainframe capacity) and high complexity (several layers and technologies in your app create complexity).
Next I will show you how we addressed these constraints in our pilot project.
Our pilot project use case is On-Demand Inspection Planning. This is a project for reengineering a legacy batch inspection system into a real-time, event-driven system that provides accurate inspection data.
One of the problems we have is that shippers often incorrectly classify shipments, which results in revenue loss for us.
The ODIP solution will fix such problems by predicting which shipments are most likely to be misclassified and yield additional revenue.
A little bit about how we classify freight...
We classify freight using density, weight, length, height, ease of handling, value and liability.
For example, a bag of cement will be class 50 and a bag of golf balls will be class 500, where class 50's average density is 50 and class 500's average density is less than 1.
So if a shipper is shipping a box of golf balls but classifies it as class 50, they will be paying us about $1,000. But it really should be classified as class 500, and they should be paying about $8,000.
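The rate arithmetic behind that example can be checked in a few lines; the weight and the per-CWT sample rates are the ones quoted on slide 14, and the `charge` helper is just for illustration:

```python
# Freight charges are computed per hundredweight (CWT = 100 lb).
# The sample rates below are the ones quoted on slide 14.

def charge(weight_lb: float, rate_per_cwt: float) -> float:
    """Charge in dollars for a shipment at a given rate per CWT."""
    cwt = weight_lb / 100.0
    return round(cwt * rate_per_cwt, 2)

as_class_50 = charge(2000, 47.98)     # misclassified: $959.60
as_class_500 = charge(2000, 409.32)   # correct class: $8,186.40
lost_revenue = round(as_class_500 - as_class_50, 2)  # $7,226.80
```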
The ODIP system uses models for scoring shipments for inspection based on several business rules.
The ODIP system depends on several systems' events and services to predict whether a shipment needs to be inspected for revenue recovery. It runs several models to score shipments for inspection; a shipment scored high by multiple models is ranked higher for inspection.
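In rough terms, that ranking rule (a shipment flagged by more models, with higher scores, ranks higher) can be sketched as below. The two stand-in models and their thresholds are invented for illustration and are not Con-way's actual business rules:

```python
# Each model scores a shipment for inspection; a shipment flagged by more
# models, and with a higher combined score, ranks higher. The two models
# below are illustrative stand-ins, not the real ODIP rules.

def model_x(shipment):  # e.g. density looks too low for the declared class
    return 0.8 if shipment["declared_class"] == 50 and shipment["density"] < 1 else 0.0

def model_y(shipment):  # e.g. shipper has a history of misclassification
    return 0.6 if shipment["shipper_flagged"] else 0.0

MODELS = [model_x, model_y]

def rank_for_inspection(shipments):
    """Order shipments: most model hits first, then combined score."""
    def key(s):
        scores = [m(s) for m in MODELS]
        hits = sum(1 for sc in scores if sc > 0)
        return (hits, sum(scores))
    return sorted(shipments, key=key, reverse=True)

shipments = [
    {"pro": "A", "declared_class": 50, "density": 0.5, "shipper_flagged": True},
    {"pro": "B", "declared_class": 50, "density": 45.0, "shipper_flagged": False},
    {"pro": "C", "declared_class": 50, "density": 0.7, "shipper_flagged": False},
]
ranked = rank_for_inspection(shipments)  # A (two hits), then C (one), then B (none)
```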
Typically, a shipment moves through a lifecycle of events from pickup all the way through delivery. The shipment lifecycle starts with a pickup request; then the shipment gets billed and rated.
Then it gets moved through our network using the SMART and SCO systems and gets delivered. It can also go through corrections for charges, parties, etc.
ODIP needs all these dependent systems to be up and running with the right test data and capacity, and to be able to generate events in high volumes. But these dependent systems would go down often in our test environments, and the test systems did not have the capacity to produce high-volume events and serve high-volume service calls.
Having realized these constraints in our test environment, we decided to virtualize the dependent systems' services needed by ODIP. We developed virtual services using CA LISA that simulate the data and behavior of those services.
We were able to develop and test our critical components early in the SDLC. We were also able to reduce complexity in our development and testing, as we were working with simulated interfaces without needing to understand how the internals of those dependent systems work.
This is a component-level perspective of the ODIP solution...
The shipment inspection CEP is the critical component of ODIP. It takes shipment events as input from various sources such as FBES, CORR, SMART and SCO, qualifies the shipment, scores the shipment for inspection and sends it to a Java service for recording to a database.
There are three constraints: 1. the shipment event publisher, 2. the shipment canonical service, 3. the shipment Java service.
The shipment event publisher is a constraint because producing events requires manual data setup on dependent systems and having those systems' mainframe and TIBCO components up and running. Also, shipment events are published in a canonical format, and that development is in flight.
The shipment canonical service is a constraint because its development is in flight and it requires the TIBCO, WAS and mainframe components to be up and running.
The shipment Java service is a constraint because its development is in flight and it requires the Gen proxy to be in sync and the mainframe components up and running.
Using these virtual services, we can now automate testing for various scenarios in a few days. We can also load test the shipment inspection CEP component with hundreds and thousands of events without loading the mainframe.
This is our ODIP system integration technical architecture..
Here we had three major constraints:
All the mainframe applications need to be set up and running to produce events.
The app server and mainframe need to be set up and running to serve our dependent services.
We need more capacity on the mainframe to support a high volume of events and service calls.
We addressed these three constraints using virtual services.
We had a set of virtual services that simulate the mainframe apps and produce events.
We had virtual services that simulate the TIBCO web service engines.
We had virtual services that simulate the app client engines.
With this setup we were able to develop and test our critical CEP component without constraints.
So essentially our SDLC looks like this with CA LISA...
We extended our DEV/QA/LOAD test environments with VSE environments for constrained services. VSE stands for virtual service environment. The approach we took was to host the constrained services in the VSE, let the other services reside in the test environment, and have the SUT talk to both.
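One way to picture that split is a simple endpoint lookup, where constrained services resolve to the VSE and everything else to the live test environment. The hostnames and service names below are made-up placeholders, not Con-way's actual addresses:

```python
# Resolve a service name to an endpoint: constrained services go to the
# virtual service environment (VSE), the rest to the live test environment.
# Hostnames and service names are illustrative placeholders.

VSE_HOST = "vse.test.example.com"
LIVE_HOST = "qa.test.example.com"

VIRTUALIZED = {"shipment-java-service", "shipment-event-publisher",
               "shipment-canonical-service"}

def resolve(service: str) -> str:
    host = VSE_HOST if service in VIRTUALIZED else LIVE_HOST
    return f"http://{host}/{service}"

# The system under test talks to both environments transparently:
vs_url = resolve("shipment-java-service")   # routed to the VSE
live_url = resolve("rating-service")        # routed to live QA
```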
We did component-level DEV/QA/LOAD testing early in the SDLC. Then we assembled our components and did integration testing in our QA environment. We found far fewer integration issues with this approach. We also used virtual services in QA to simulate scenarios that are hard to set up, e.g. dispatching 1,000 trucks.
Then we did our live end-to-end load test using CA LISA to measure the impact on the mainframe system, and we were able to estimate the additional capacity.
We used LISA Test as well for ODIP: for unit testing, integration testing and performance testing. We used LISA Test in conjunction with virtual services for integration and performance testing, and we also used LISA for performance testing all critical components against the live system.
We developed a custom framework for virtual services that stores data elements in an Oracle table instead of a VSI file. This allows the consumer of the virtual services to set up scenarios on demand instead of relying on a preprogrammed VSI file. It also provides dynamic state management, where consumers can dictate the states for virtual services, e.g. pickup shipment, bill shipment, rate shipment, deliver shipment, etc.
So here is an example virtual service that uses the custom framework...
This virtual service simulates the shipment lifecycle event publication of the Billing, Operations and Corrections systems.
It starts out by initializing itself with configuration information read from a custom Oracle table. This configuration tells the virtual service how to respond to an incoming request, with two pieces of information: 1. the event/message XML template and 2. the event SQL template.
Now the virtual service is ready to serve incoming requests. When a shipment event creation request comes in, it receives the request and selects a response template. It checks whether the shipment already exists; if it exists, it increments the key, otherwise it generates one. Then it marries the incoming event data with the SQL template and builds the insert SQL. It records the shipment event to an Oracle table, as this will later be used by other services to provide information about this shipment. Finally, it publishes the shipment event to the messaging bus and responds to the caller.
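That request flow can be sketched roughly as follows. This is a simplified stand-in, not the actual CA LISA framework: the Oracle tables are dicts, the messaging bus is a list, and the SQL-template step is reduced to the XML template for brevity:

```python
# Simplified sketch of the custom virtual service: response templates come
# from a config table, shipment state lives in a state table, and events
# are published to a message bus. All names here are illustrative.
import string

class ShipmentEventVS:
    def __init__(self, config_rows, db, bus):
        # one config row per operation: the event/message XML template
        self.config = {row["operation"]: row for row in config_rows}
        self.db = db      # dict standing in for the Oracle state table
        self.bus = bus    # list standing in for the messaging bus
        self.next_key = 1

    def handle(self, operation, request):
        cfg = self.config[operation]          # select the response template
        pro = request["pro"]
        if pro in self.db:                    # shipment exists: bump the key
            self.db[pro]["seq"] += 1
        else:                                 # new shipment: generate a key
            self.db[pro] = {"seq": 1, "key": self.next_key}
            self.next_key += 1
        row = {**request, **self.db[pro]}
        event = string.Template(cfg["xml_template"]).substitute(row)
        self.db[pro]["last_event"] = event    # recorded for other services
        self.bus.append(event)                # publish to the bus
        return {"status": "OK", "key": self.db[pro]["key"]}

config = [{"operation": "pickup", "xml_template": "<pickup pro='$pro' seq='$seq'/>"}]
db, bus = {}, []
vs = ShipmentEventVS(config, db, bus)
reply = vs.handle("pickup", {"pro": "123-456"})
```

Because the templates and state live in a table rather than a prerecorded file, a consumer can drive a shipment through states (pickup, bill, rate, deliver) just by issuing further requests.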
Con-way LISA Manager is a small UI tool that we created to support the custom framework. This tool allows us to maintain virtual service configuration across our operating components, by function group and by test environment. It also helps us generate a virtual service client, which provides a sample test data file and SOAP request/reply service signatures with properties for sending request data.
For example, for the shipment event virtual service we looked at earlier, we have an entry here. VS Client lets you generate the client for a LISA Test consumer of this virtual service. VS Copy allows you to copy this VS configuration to any of our 20+ test regions. VS Restart just restarts the virtual service for that test region.
This is how we configure a virtual service, and this gets loaded in the virtual service initialization step.
In the virtual service configuration we specify the operation name, the Oracle table to store the data, and the event XML template with properties. The event SQL is generated by the tool using the Oracle table columns.
This is an example of a functional test with a virtual service. The use case is this: we picked up a shipment from a shipper that is probably misclassified, and the system needs to recommend that shipment for inspection. Here we simulate the shipment pickup by reading data from a CSV file and calling a virtual service that publishes a pickup event. The ODIP system under test processes that event and recommends the shipment for inspection.
We listen to the SUT output channel for that recommendation, validate that we got it, and also verify that the recommendation was recorded in a database.
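A scaled-down sketch of that functional test flow might look like this; the CSV fields, the virtual-service call and the SUT are all trivial stand-ins so the flow is runnable end to end:

```python
# Functional test sketch: read shipment data from CSV, publish a pickup
# event via the virtual service, and assert the SUT recorded an inspection
# recommendation. The SUT and VS below are trivial stand-ins.
import csv, io

CSV_DATA = """pro,declared_class,density
123-456,50,0.5
"""

def publish_pickup_via_vs(row, channel):
    """Stand-in for calling the pickup virtual service."""
    channel.append({"event": "pickup", **row})

def sut_process(channel, recommendations_db):
    """Stand-in for the ODIP system under test."""
    for msg in channel:
        if float(msg["density"]) < 1 and int(msg["declared_class"]) == 50:
            rec = {"pro": msg["pro"], "inspect": True}
            recommendations_db[msg["pro"]] = rec    # recorded to the DB

channel, rec_db = [], {}
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    publish_pickup_via_vs(row, channel)
sut_process(channel, rec_db)

assert rec_db["123-456"]["inspect"]   # the recommendation was recorded
```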
This is the performance test. Here the use case is to stress test the ODIP SUT to ensure it can process 50,000 events per hour. This is really an extension of what we saw for the functional test. Here we randomly generate events using DB input and then call the VS to publish the pickup event. The SUT processes the event and makes a recommendation. If a recommendation is found, we add it to a summary table; if there is no recommendation, we add it to an error table.
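The load flow can be sketched the same way; the event generator, the SUT logic and the run size here are illustrative (the real test pushed 50,000 events per hour):

```python
# Load test sketch: generate random pickup events, run them through a
# stand-in for the SUT, and bucket outcomes: recommendations go to a
# summary table, everything else to an error table. Values are illustrative.
import random

random.seed(42)   # deterministic run for the sketch

def random_event(i):
    return {"pro": f"P{i:06d}",
            "declared_class": random.choice([50, 500]),
            "density": random.choice([0.5, 45.0])}

def sut_recommend(event):
    """Stand-in for the ODIP scoring logic."""
    return event["density"] < 1 and event["declared_class"] == 50

summary, errors = [], []
for i in range(1000):                 # scaled down; the real test sustained
    ev = random_event(i)              # 50,000 events per hour
    (summary if sut_recommend(ev) else errors).append(ev)
```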
This may be the most important part for most of you. So we did all this hard work, and what are the benefits?
The benefits are...
We saved at least two months of development/testing time. I think an even bigger benefit is reduced complexity. For example, on this ODIP project we had a TIBCO/integration developer who knew nothing about how the mainframe works or how to set up data on the mainframe components. Since we virtualized the mainframe interfaces, that developer did not have to worry about learning those mainframe components.
Identifying and fixing bugs became easier. When QA found a problem, the developer turnaround time to fix those bugs was a matter of hours, not days, because we were able to simulate the scenario quickly.
We provided high availability for our constrained services. The virtual services we created had better uptime than our live services, which would go down because of developer changes, configuration or environment issues.
Another huge benefit we saw was being able to do component-level performance testing. This allowed us to focus on performance testing of critical components early in our SDLC.
We also eliminated mainframe capacity constraints for our performance testing: we supported a high volume of events and service calls without loading up the mainframe. As we worked with virtual services during development, we were able to identify performance issues early in the SDLC that otherwise would have been found much later. We also repeated the performance test several times, since it was automated and setup time was very small.
And the virtual services we built for this project are being reused by other projects.