In an increasingly competitive marketplace, speed and business agility are paramount, and integration between customer-facing systems and back-end applications is more crucial than ever.
At this event, you'll learn how open source software built by communities, like Apache Camel, Docker, Kubernetes, OpenShift Origin, and Fabric8, can help organizations integrate services and establish effective continuous integration and delivery (CI/CD) pipelines.
3. Cloud Native Architectures
How are you keeping up with change?
• Trying to incorporate new technology?
• Trying to copy what others (Netflix, Amazon) are doing?
• Tactical automation?
• Created a "DevOps" team?
• Exploring cloud services?
• Build/deploy automation?
• Open source?
• Piecemeal integration?
4. Cloud Native Architectures
• Faster software delivery
• Own database (data)
• Faster innovation
• Scalability
• Right technology for the problem
• Test individual services
• Isolation
• Individual deployments
Microservices helps solve the problem of "how do we decouple our services and teams to move quickly at scale to deliver business value"
5. I'm doing microservices if…
• If my services are isolated at the process level, I'm doing #microservices
• If I use REST/Thrift/ProtoBuf instead of SOAP, I'm doing #microservices
• If I use JSON, I'm doing #microservices
• If I use Docker / Spring Boot / Dropwizard / embedded Jetty, I'm doing #microservices
7. Cloud Native Architectures
Fallacies of distributed computing
• The network is reliable
• Latency is zero
• Bandwidth is infinite
• The network is secure
• Topology doesn't change
• There is one administrator
• Transport cost is zero
• The network is homogeneous
https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing
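Two of these fallacies (the network is reliable, latency is zero) translate directly into a coding rule: never wait on a remote call without a bound. A minimal sketch in plain Java, where the slow `fetchRemote` method is a made-up stand-in for a real network request:

```java
import java.util.concurrent.*;

public class TimeoutDemo {
    // Stand-in for a remote call; real code would hit the network here.
    static String fetchRemote() throws InterruptedException {
        Thread.sleep(5_000); // simulate a slow or hung remote service
        return "response";
    }

    // Wrap the call in a Future so we can bound how long we wait for it.
    static String fetchWithTimeout(long millis) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> f = pool.submit(TimeoutDemo::fetchRemote);
            return f.get(millis, TimeUnit.MILLISECONDS); // TimeoutException if too slow
        } finally {
            pool.shutdownNow(); // interrupt the hung call; don't leak the thread
        }
    }

    public static void main(String[] args) throws Exception {
        try {
            fetchWithTimeout(200);
        } catch (TimeoutException e) {
            System.out.println("remote call timed out; fail fast instead of hanging");
        }
    }
}
```

A timeout alone only converts "hung forever" into "failed fast"; deciding what to do next (retry, fall back, propagate) is the harder design question the rest of this deck circles around.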
12. Cloud Native Architectures
Apache Camel to the rescue!
• Small Java library
• Distributed-systems Swiss Army knife!
• Powerful EIPs (enterprise integration patterns)
• Declarative DSL
• Embeddable in any JVM (EAP, Karaf, Tomcat, Spring Boot, Dropwizard, WildFly Swarm, no container, etc.)
• Very popular (200+ components for "dumb pipes")
13. Dynamic Routing
Apache Camel features an easy-to-use visual editor
• "Smart endpoints, dumb pipes"
• Endpoint does one thing well
• Metadata used for further routing
• Really "dynamic" with a rules engine (e.g., Drools/BRMS)
14. REST DSL
Apache Camel features easy-to-understand config

import org.apache.camel.builder.RouteBuilder;

public class OrderProcessorRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        rest().post("/order/socks")
            .description("New Order for pair of socks")
            .consumes("application/json")
            .route()
                .to("activemq:topic:newOrder")
                .log("received new order ${body.orderId}")
                .to("ibatis:storeOrder?statementType=Insert");
    }
}
16. Cloud Native Architectures
Typical problems developing microservices
• How to run them all locally?
• How to package them (dependency management)
• How to test?
• Vagrant? VirtualBox? VMs?
• Specify configuration
• Process isolation
• Service discovery
• Multiple versions?
17. Cloud Native Architectures
Shared infrastructure platform headaches
• Different teams
• Different rates of change
• VM sprawl
• Configuration drift
• Isolation / multi-tenancy
• Performance
• Real-time vs batch
• Compliance
• Security
• Technology choices
19. Cloud Native Architectures
Immutable infrastructure/deploys
• "We'll just put it back in Ansible"
• Avoid chucking binaries/configs together and hoping!
• Cattle vs. pets
• Don't change it; replace it
• System created fully from automation; avoid drift
• Eliminate manual configuration/intervention
22. OpenShift
• Developer-focused workflow
• Enterprise ready
• Higher-level abstraction above containers for delivering technology and business value
• Build/deployment triggers
• Software-defined networking (SDN)
• Docker-native format/packaging
• CLI/web-based tooling
23. Cloud Native Architectures
Fuse Integration Services for OpenShift
• Set of tools for integration developers
• Build/package your Fuse/Camel services as Docker images
• Run locally on the CDK
• Deploy on top of OpenShift
• Plugs in to your existing build/release ecosystem (Jenkins/Maven/Nexus/GitLab, etc.)
• Manage them with Kubernetes/OpenShift
• Flat-classloader JVMs
• Take advantage of existing investment in Karaf with additional options like "just enough app server" deployments
• Supports Spring, CDI, Blueprint
Red Hat Container Development Kit (CDK)
• Small VM run locally by developers
• Full access to Docker, Kubernetes, OpenShift
• Deploy your suite of microservices with ease!
• Uses Vagrant/VirtualBox
• Getting started on Linux, Mac, or Windows!
http://bit.ly/1U5xU4z
25. RED HAT JBOSS FUSE
• Development and tooling: develop, test, debug, refine, deploy (JBoss Developer Studio)
• Web services framework: web services standards, SOAP, XML/HTTP, RESTful HTTP (Apache CXF)
• Integration framework: transformation, mediation, enterprise integration patterns (Apache Camel)
• Management and monitoring: system and web services metrics, automated discovery, container status, automatic updates (JBoss Operations Network + JBoss Fabric Management Console (hawtio))
• Reliable messaging: JMS/STOMP/NMS/MQTT, publish-subscribe/point-to-point, store and forward (Apache ActiveMQ)
• Container: life cycle management, resource management, dynamic deployment, security and provisioning (Apache Karaf + Fuse Fabric)
• Runs on Red Hat Enterprise Linux; also Windows, UNIX, and other Linux
26. Cloud Native Architectures
Typical problems developing microservices
• How to run them all locally?
• How to package them
• How to test?
• Vagrant? VirtualBox? VMs?
• Specify configuration
• Process isolation
• Service discovery
• Multiple versions?
29. Cloud Native Architectures
How are you keeping up with change?
• Trying to incorporate new technology?
• Trying to copy what others (Netflix, Amazon) are doing?
• Tactical automation?
• Created a "DevOps" team?
• Exploring cloud services?
• Build/deploy automation?
• Open source?
• Piecemeal integration?
30. What if you could do all of this right now with an open-source platform?
• 100% open source, ASL 2.0
• Technology agnostic (Java, Node.js, Python, Golang, etc.)
• Built upon decades of industry practices
• 1-click automation
• Cloud native (on premise, public cloud, hybrid)
• Complex build/deploy pipelines (human workflows, approvals, ChatOps, etc.)
• Comprehensive integration inside/outside the platform
31. fabric8
• Docker native, built on top of the Kubernetes API
• Out-of-the-box CI/CD, management UI
• Logging, metrics
• ChatOps
• API management
• iPaaS/integration
• Chaos Monkey
• Lots and lots of tooling/libraries to make developing cloud-native applications easier
http://fabric8.io
We need to discuss "change" in terms of scaling out our organizations. DevOps and microservices are not a technology choice or a new team; DevOps is a re-org. All of these attempts to "keep up with change" without addressing the organization are not much help.
When creating distributed systems, a lot of what’s old is new again. Just bringing in “new technology” does not solve problems; in fact it probably creates new ones.
Trying to copy others' technology choices is a fool's errand. People try to copy Netflix/Amazon/etc., but as Adrian Cockcroft says, "you're copying a point in time, not the process."
We try to fight the organizational structure with piecemeal automation, creating more siloed "teams" (a "DevOps" team totally misses the point), or even saying we'll just adopt "cloud" or "open source."
Microservices is an approach to distributed systems that focuses on scaling an organization's IT systems and people. It doesn't come without drawbacks, but it does allow us to make decisions quicker, implement functionality faster, and ultimately deliver on business requirements faster to stay competitive. By breaking IT systems and teams down into smaller, autonomous components, we can test things more easily, isolate them properly for failure, change them without impacting the entire system, scale them where needed, etc.
Teams should be small (6-8 people), focus on the service(s) they provide via APIs, be cross-functional (ops/security/DBA/release/devs all on one team, or automate away the pieces where resources are lacking), and be responsible for the systems they create (you build it, you own it).
http://blog.christianposta.com/microservices/the-real-success-story-of-microservices-architectures/
People claim to do microservices without regard for the systems-thinking principles that underlie any successful microservices architecture. If we just "do X" or "use X," then we'll be doing microservices. In the end, they develop the same brittle, constrained architectures they had before, but this time with new tools.
Ultimately, when we dig into the technology and how that aligns with our company structure, we’re talking about building and scaling distributed systems. Building and scaling these systems requires different ways of thinking and cannot ignore the past.
Foremost on our minds when building distributed systems is how they interact with each other: over unreliable networks. A strong corollary is that we must build our systems knowing that things fail and will fail. Second, even if things do not fail, they may appear to fail. Latency is not something we have to deal with in more monolithic systems, but it is easily one of the biggest issues in distributed systems. Did things fail? Are they just slow? Do we retry? What do we do?
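The "did it fail or is it just slow?" ambiguity is usually handled with bounded retries and backoff. A minimal plain-Java sketch (the names are illustrative, not from any particular library), safe only for idempotent operations, since a call that appears to fail may actually have succeeded on the remote side:

```java
import java.util.concurrent.Callable;

public class Retry {
    // Retries the call up to maxAttempts times, doubling the delay each time.
    // Only safe for idempotent operations: a "failed" call may have succeeded.
    public static <T> T withBackoff(Callable<T> call, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff between attempts
                }
            }
        }
        throw last; // all attempts exhausted; surface the final failure
    }
}
```

Usage would look like `Retry.withBackoff(() -> client.call(), 3, 100)`; production code would also cap the total delay and add jitter so many clients don't retry in lockstep.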
Given that systems will be communicating over lossy, unreliable networks, do we need integration? As we build non-trivial systems that interact with partner organizations (external and internal), consume "cloud" services, and require access to legacy applications and databases, it's clear that, by definition, distributed systems will require integration.
People consider integration in the form of legacy ESB or EAI solutions, but as we see in the following slides, integration does not imply those approaches; those approaches arise from our organizational structure. As we explore microservices, integration, and organization further, we'll see that EAI/ESB are not prerequisites.
What about new-fangled “reactive” or event-driven systems? Do we need integration?
YES.
Consuming events and reacting to "what happened in time" requires us not to lose events, to retry when networks are down, and to fail over or retry other "possibly synchronous" systems in order to continue delivering business value. Systems publishing events need access to queues/channels and some mechanism for interacting with them reliably.
When we start to look at systems as disconnected, autonomous agents both from a technology and organizational aspect, we absolutely need reliable integration.
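The "don't lose events, retry on failure" requirement can be sketched in miniature. Below, a stdlib queue stands in for a real message broker, and an event is acknowledged (removed) only after the handler succeeds, so a transient failure causes redelivery rather than loss. All names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

public class AtLeastOnceConsumer {
    private final Deque<String> queue = new ArrayDeque<>();

    public void publish(String event) {
        queue.addLast(event);
    }

    // Drains the queue; an event is acknowledged (removed) only after the
    // handler returns normally. On failure it stays at the head for
    // redelivery, so the handler must be idempotent - it may see the same
    // event more than once.
    public void drain(Consumer<String> handler, int maxRedeliveries) {
        int failures = 0;
        while (!queue.isEmpty() && failures <= maxRedeliveries) {
            String event = queue.peekFirst();
            try {
                handler.accept(event);
                queue.pollFirst(); // ack: processing succeeded, safe to remove
            } catch (RuntimeException e) {
                failures++;        // no ack: event remains for retry
            }
        }
    }
}
```

This at-least-once trade-off (duplicates instead of loss) is exactly what brokers like ActiveMQ make configurable; the idempotent-handler obligation it creates is the same either way.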
Systems will communicate over many non-homogeneous protocols and data formats: messaging (JMS, AMQP, proprietary), file transfer, HTTP (SOAP/REST/other), streaming, etc. These systems will need transformation, reliability, and synchronous and asynchronous communication. Gregor Hohpe's book on integration (Enterprise Integration Patterns) lays out the patterns that are useful in a disconnected environment like this.
Apache Camel brings tried and true experience to the table to tackle some of these distributed-systems integration challenges.
Apache Camel is very well suited for integration in a microservices environment. It’s not an ESB, doesn’t pre-suppose suites of software or servers. It’s a small, lightweight library that can be embedded in your choice of JVM runtime like Spring Boot, Dropwizard, WildFly/Swarm, EAP, Jetty, Tomcat, Karaf, or anything.
Microservices architectures are built around autonomy: being able to change a service without forcing other areas to change along with it. In this scenario a service is part of a choreographed interaction: it knows enough about what it provides, and about its surrounding services, to make its own decisions about which services to engage, when, and for what reason. Apache Camel allows us to build services with smart routing without regard for the technology or "pipes" used to communicate. We can leverage the Dynamic Router EIP, or plug into existing or complementary rules engines like JBoss Drools, to accomplish sophisticated routing requirements and decisions.
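Stripped of Camel specifics, the Dynamic Router idea reduces to a routing decision driven by message metadata. A hypothetical plain-Java sketch of that decision step (the endpoint URIs are made up for illustration; in Camel the decision would live in a dynamicRouter() expression or a rules engine):

```java
import java.util.Map;

public class MetadataRouter {
    // Routing rules keyed on a metadata header value; a rules engine like
    // Drools could supply these decisions instead of a static map.
    private final Map<String, String> rules;
    private final String deadLetter;

    public MetadataRouter(Map<String, String> rules, String deadLetter) {
        this.rules = rules;
        this.deadLetter = deadLetter;
    }

    // Picks the next endpoint from the message's "type" header; messages
    // with no matching rule go to a dead-letter endpoint instead of being lost.
    public String route(Map<String, String> headers) {
        String type = headers.getOrDefault("type", "");
        return rules.getOrDefault(type, deadLetter);
    }
}
```

For example, a router configured with `{"order" -> "activemq:queue:orders"}` sends order messages to that queue and routes everything else to the dead-letter endpoint, so unrecognized messages are inspected rather than silently dropped.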
Apache Camel can enable legacy back ends to participate in a REST-based set of services by quickly exposing a REST service interface using its expressive DSL. The REST DSL plugs right into the rest of the Camel DSL, allowing you to quickly expose a REST endpoint that can describe an API as well as integrate with backend services by mediating, routing, transforming, and otherwise changing the shape or content of a payload with patterns such as content enricher, resequencer, and recipient list.
Even though Apache Camel brings good solutions for implementing integration across distributed systems, why is my head still hurting? Maybe you already use Camel, or you've already incorporated a lightweight integration framework; why are we still running into issues and pain when creating these types of systems?
Developers experience this type of pain…
Operations experiences another type of pain…
When we move to smaller, isolated, autonomous systems at any kind of scale, we need to move away from the "pets" analogy toward the "cattle" analogy, where we build systems that can quickly be delivered and replaced as needed.
https://blog.engineyard.com/2014/pets-vs-cattle
Immutable delivery concepts help us reason about these problems. With immutable delivery, we try to reduce the number of moving pieces into pre-baked images as part of the build process. For example, imagine in your build process you could output a fully baked image with the operating system, the intended version of the JVM, any side-car applications, and all configuration? You could then deploy this in one environment, test it, and migrate it along a delivery pipeline toward production without worrying about "whether the environment or application is configured consistently." If you needed to make a change to your application, you rerun this pipeline which produces a new immutable image of your application and then do a rolling upgrade to deliver it. If it doesn't work, you can rollback by deploying the previous image. No more worrying about configuration or environment drift or whether things were properly restored on a rollback.
Docker came along a few years ago with an elegant solution to immutable delivery. Docker allows us to package our applications with all of the dependencies they need (OS, JVM, other application dependencies, etc.) in a lightweight, layered image format. Additionally, Docker uses these images to run instances which run our applications inside Linux containers with isolated CPU, memory, network, and disk usage. In a way, these containers are a form of "application virtualization" or "process virtualization." They allow a process to execute thinking it's the only thing running (i.e., list processes with `ps` and you see only your application's process), with seemingly full access to the CPUs, memory, disk, network, and other resources, when in reality it doesn't have that; it can only use the resources it's allocated. For example, I can start a Docker container with a slice of CPU, a segment of memory, and limits on how much network IO can be used. From outside the Linux container, on the host, the application just looks like another process. No virtualization of device drivers, operating systems, or network stacks, and no special hypervisors: it's just a process. This also means we can get even more applications running on a single set of hardware for higher density, without the overhead of the additional operating systems and other pieces of a VM that would be required to achieve similar isolation qualities.
Back in 2013 when Docker rocked the technology industry, Google decided it was time to open-source their next-generation successor to Borg, which they named Kubernetes. Today, Kubernetes is a large, open, and rapidly growing community with contributions from Google, Red Hat, CoreOS and many others (including lots of independent individuals!). Kubernetes brings a lot of functionality for running clusters of microservices inside Linux containers at scale. Google has packaged over a decade of experience into Kubernetes, so being able to leverage this knowledge and functionality for our own microservices deployments is game changing. The web-scale companies have been doing this for years and a lot of them (Netflix, Amazon, etc) had to hand build a lot of the primitives that Kubernetes now has baked-in. Kubernetes has a handful of simple primitives that you should understand before we dig into examples. In this chapter, we'll introduce you to these concepts and in the following chapter we'll make use of them for managing a cluster of microservices.
Red Hat OpenShift 3.x is an Apache v2-licensed open-source developer self-service platform (OpenShift Origin: https://github.com/openshift/origin) that has been revamped to use Docker and Kubernetes. OpenShift at one point had its own cluster management and orchestration engine, but with the knowledge, simplicity, and power that Kubernetes brings to container cluster management, it would have been silly to try to re-create yet another one. The broader community is converging around Kubernetes, and Red Hat is all in with Kubernetes.
OpenShift has many features, but one of the most important is that it's still native Kubernetes under the covers, and it supports the features many enterprises need: role-based access control, out-of-the-box software-defined networking, security, logins, developer builds, and many other things.
The RH CDK allows us to develop using the same technology as a world-class PaaS directly on our laptops. We can run our builds locally, test things out, wire up services, and, when we're comfortable, push to a CaaS or PaaS like OpenShift to handle the build pipeline/CI steps, perform validations and security checks, and begin the application lifecycle management steps toward production. We can fit in with existing tooling like Git, Jenkins, and Nexus, and integrate with the OpenShift Docker registry to do build promotions and so forth.
Quick demo of rider-auto-openshift on CDK
https://github.com/christian-posta/rider-auto-openshift/tree/ceposta-add-rest-module
Keeping up with "change" and building an organization to be agile is a challenge in its own right.
From a technology perspective we’d like to give service teams more autonomy, self-service, and responsibility.
Previous versions of fabric8 were built specifically for Java developers and for specific flavors of the JVM. In fabric8 2.0, instead of rebuilding everything that the Docker and Kubernetes communities were building, we've rebased everything on top of the Kubernetes API and can take advantage of its out-of-the-box features. We've also built things like CI/CD with visualization of environments, a Chaos Monkey to help prove out the resilience of our distributed systems, etc.
Playback recording? Or do live demo of fabric8 CI/CD?
Show and talk to this demo:
https://blog.fabric8.io/create-and-explore-continuous-delivery-pipelines-with-fabric8-and-jenkins-on-openshift-661aa82cb45a#.p1apj49e5