If you want to build an ecosystem of streaming data around your Kafka platform, your developers need a much easier way to quickly move data from source systems to your cluster. Better yet, the connectors should be serverless, so they do NOT waste resources while idle, and a trusted partner should manage your Kafka infrastructure for you. In this session, we will show how easy we have made streaming data, with a great user experience and flexible resource management, using our new secret weapon in the Apache Camel project: Kamelets. We'll also demonstrate how Red Hat OpenShift Streams for Apache Kafka simplifies provisioning Kafka deployments in a public cloud, managing the cluster and topics, and configuring secure access to the Kafka cluster for your developers.
Kubernetes connectivity to Cloud Native Kafka | Evan Shortiss and Hugo Guerrero, Red Hat
1. Cloud-native Kafka
connectivity with
Kamelet connectors
Hugo Guerrero
APIs & Messaging Developer Advocate
@hguerreroo
Evan Shortiss
Principal Technical Marketing Manager
@evanshortiss
2. INTRODUCTION
The impact of Apache Kafka
Next-generation messaging: designed for retaining,
integrating, and streaming large amounts of data with low
overhead
Proven open source technology for real-time information processing
Delivers real-time data applications - eliminates the
delays of batch data processing and ELT applications for
better customer experiences
A fundamental building block - simplifies the delivery
of data-driven, microservices based applications
3. INTRODUCTION
Kafka in use today
Digital experiences: delivers real-time experiences with immediate access to information and response times
Microservices applications: loosely couples microservices so development teams can remain agile
Streaming ETL: modernizes applications driven by batch data for real-time performance
Real-time analytics: ingests data from multiple sources for better business insights
Edge & hybrid scenarios: collects data from diverse and disparate devices and systems
5. OPENSHIFT STREAMS FOR APACHE KAFKA
Why use a managed Apache Kafka service?
Reduced complexity
(Diagram: applications, components, and events connecting to self-managed Kafka infrastructure: brokers, load balancers, VMs & networking)
7. OPENSHIFT STREAMS FOR APACHE KAFKA
Why use a managed Apache Kafka service?
Reduced complexity
(Diagram: applications and events connecting directly to Topic A, Topic B, and Topic C in a cloud-native managed Kafka service)
8. RED HAT MANAGEMENT
Hosted & managed service offering
Red Hat cloud services are managed and operated by Red Hat’s Site Reliability Engineers
▸ SREs serve as the cloud provider account owner and cluster
administrator owning the 99.95% SLA
▸ Responsible for the 24x7 support for all managed and hosted
environments
■ Including for building, installing, upgrading, managing
and maintaining every cluster
▸ SRE teams are distributed across 3 regions: APAC, EMEA
and Americas
▸ The team ensures open communication channels,
centralized around the dedicated customer portal
9. PRODUCT OVERVIEW
Red Hat OpenShift Streams for Apache Kafka
Complete solution for stream-based applications
(Diagram: hosted & managed Kafka cluster with brokers and topics, plus metrics & monitoring and configuration management, under a 99.95% SLA)
Streamlined developer experience: a curated solution
with a developer-first, consistent experience
Delivered as a service, managed by Red Hat SRE -
24x7 global support and a 99.95% service-level
agreement (SLA)
Real-time, streaming data broker - Dedicated Apache
Kafka cluster delivered as a service in the cloud and
location of your choice
▸ Access to Kafka brokers, topics, and partitions
▸ Managed ZooKeeper
▸ Metrics and monitoring
▸ Integrated identity & access management
STREAMLINED DEVELOPER EXPERIENCE
UI, CLI, and service binding API
10. PRODUCT OVERVIEW
The value of Red Hat OpenShift Streams for Apache Kafka
Faster application velocity: begin developing immediately and continuously respond to change
Unified experience across all clouds: seamlessly connects applications across public and private clouds
Kafka ecosystem for streams-based applications: delivers a curated set of cloud services to simplify delivery of stream-based applications
12. INTRODUCTION
Red Hat's commitment to Kafka-based products
A history of innovation and success: Kafka innovation since 2018, when Red Hat brought Kafka to Kubernetes and launched Red Hat AMQ Streams (Red Hat Integration); an ecosystem to deliver event-driven solutions; customer success
13. GETTING STARTED
Development Preview *NEW*
Hosted and managed Kafka service for stream-based applications:
● Spin up your own Kafka cluster
● Create your topics and their partitions
● Connect your producers and consumers
● Get started with the quick starts
● Integrate your apps with the service
Managed Kafka cluster (time and resource limited):
● Access for 48 hours
● Limited number of topics & brokers
Sign up:
● Go to red.ht/TryKafka
● Create your own Red Hat account
● Sign in to try the service
18. Efficient, high throughput
Kafka focuses only on receiving, storing, and replicating data; it doesn't want to know what's in it. Everything around that is left to you:
Connect: speak both the client's protocol and the Kafka protocol.
Transform: change the data content by enriching, filtering, or masking it.
Convert: serialize or deserialize various data formats and validate their schemas.
Error handle: what to do when things go wrong? Where to place the problem data?
19. Kamelet - Connect, Stream to Kafka on Kubernetes
What if you had a connector that could connect to almost any system, with support for well-known integration patterns, and that can work on and off the cloud? This is Apache Camel, using a simple language like YAML:
from:
  uri: "telegram:bots"
  steps:
    - to:
        uri: "kafka:{{topic}}"
Multiple data transformation options: HL7, FHIR, CSV, Avro
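The route above can be packaged as reusable connectors and wired together declaratively: with Kamelets, a source and a sink are bound by a KameletBinding custom resource. A minimal sketch, assuming the telegram-source Kamelet from the public catalog; the topic name and token value are placeholders:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: telegram-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: telegram-source          # from the Kamelet catalog
    properties:
      authorizationToken: "my-bot-token"   # placeholder value
  sink:
    uri: kafka:my-topic              # placeholder topic name
```

Applying this resource to a cluster running the Camel K operator materializes the connector; no application code is deployed by hand.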
20. Kafka stays efficient and high-throughput, focused only on receiving, storing, and replicating data. Camel covers the rest:
Connect: 300+ connectors speaking both client and Kafka protocols
Transform: EIPs such as split and filter, and even custom processing
Convert: built-in data format transformations with simple configuration
Error handle: dead letter queues and exception catching
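As a sketch of the dead-letter-queue piece, a KameletBinding can route failed exchanges to a separate topic through its errorHandler section. Topic names are placeholders, and the exact error-handler options vary by Camel K version:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: telegram-to-kafka-with-dlq
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: telegram-source
  sink:
    uri: kafka:my-topic              # placeholder topic
  errorHandler:
    sink:
      endpoint:
        uri: kafka:my-dlq-topic      # failed exchanges land here
```

With this in place, problem data is captured for inspection instead of being silently dropped or blocking the stream.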
21. Kamelet - Connect, Stream to Kafka on Kubernetes
The operational side is handled for you: deploy, scaling, monitoring, managing, repository
23. KAMELET CATALOG
Kamelet - Connect, Stream to Kafka on Kubernetes
Apache Camel K leverages a catalog of connectors that let you create sources or sinks to external systems through a simplified interface configuration, hiding all the low-level details of how those connections are implemented.
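Each catalog entry is itself a Kamelet resource that declares its configurable properties and wraps the underlying Camel route. A minimal, hypothetical source Kamelet might look like this; the name and property are illustrative, not taken from the actual catalog:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: Kamelet
metadata:
  name: example-telegram-source      # hypothetical name
  labels:
    camel.apache.org/kamelet.type: source
spec:
  definition:                        # the simplified interface users see
    title: Example Telegram Source
    required:
      - authorizationToken
    properties:
      authorizationToken:
        title: Token
        description: The Telegram bot token
        type: string
  template:                          # the hidden low-level Camel route
    from:
      uri: telegram:bots
      parameters:
        authorizationToken: "{{authorizationToken}}"
      steps:
        - to: kamelet:sink           # hands data to whatever sink is bound
```

Consumers of the Kamelet only fill in the declared properties; the template stays an implementation detail.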