How to store Nagios/Icinga(2) performance data in InfluxDB and automatically generate Grafana dashboards. Tools used:
- https://github.com/Griesbacher/nagflux
- https://github.com/Griesbacher/histou
Time-series database InfluxDB + Grafana: Operating Your Platform - Celso Crivelaro
The document discusses implementing telemetry on a platform using InfluxDB and Grafana to provide visibility into performance. It describes how telemetry solved the lack of metrics and trend data, allowing developers to spot problems before customers did. It explains how InfluxDB stores and queries time-series data and how Grafana can be used to build charts that represent that data.
IEEE Standard 1588 defines the Precision Time Protocol (PTP) to synchronize clocks over packet networks. PTP is needed for applications that require precise timing such as Time Division Multiplexing (TDM) over IP. PTP uses network messages and timestamps to synchronize slave clocks to a master clock with nanosecond precision. PTP messages include sync, delay request, follow up, and delay response. Hardware time stamping is often required to achieve high precision with low delay and jitter. PTP is a cheaper and more scalable solution than alternatives like GPS or atomic clocks for synchronizing networks to within 50 parts per billion.
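The Sync/Delay_Req exchange described above yields four timestamps from which a slave derives its clock offset and the mean path delay. A minimal sketch of that arithmetic (assuming a symmetric network path; the function name and example values are illustrative, not from the standard text):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute clock offset and mean path delay from one PTP exchange.

    t1: master sends Sync (master clock)
    t2: slave receives Sync (slave clock)
    t3: slave sends Delay_Req (slave clock)
    t4: master receives Delay_Req (master clock)
    Assumes the forward and reverse path delays are equal.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock runs 100 ns ahead; one-way delay is 500 ns.
offset, delay = ptp_offset_and_delay(t1=0, t2=600, t3=1000, t4=1400)
print(offset, delay)  # 100.0 500.0
```

Hardware timestamping matters precisely because any asymmetric delay or jitter in capturing t1 through t4 shows up directly as error in the computed offset.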
Tzu-Li (Gordon) Tai - Stateful Stream Processing with Apache Flink - Ververica
As Apache Flink continues to push the boundaries of stateful stream processing as an integral part of its past releases, increasing numbers of users are starting to realize the potential of stateful stream processing as a promising paradigm for robust and reactive data analytics as well as event-driven applications.
This talk covers the general idea and motivations of stateful stream processing, and how Flink enables it with its powerful set of state management features and programming APIs. In addition, we will take a look at the recent advancements in Flink's state management and large state handling driven by our team at data Artisans in the latest version 1.3 (expected release by end of May / early June).
Library Operating System for Linux #netdev01 - Hajime Tazaki
This document introduces a library operating system approach for using the Linux network stack in userspace. Some key points:
- It describes building the Linux network stack (including components like ARP, TCP/IP, Qdisc, etc) as a library that can be loaded and used in userspace.
- This allows flexible experimentation with and testing of new network stack ideas without modifying the kernel. Code can be added and tested through the library interface.
- Implementations described include directly executing the code (DCE) and integrating it with a network simulator, as well as a Network Stack in Userspace (NUSE) that provides a full-featured POSIX-like platform for the network stack in userspace.
Multi-Tenancy Kafka cluster for LINE services with 250 billion daily messages - LINE Corporation
Yuto Kawamura
LINE / Z Part Team
At LINE we've been operating Apache Kafka to provide the company-wide shared data pipeline for services using it for storing and distributing data.
Kafka underlies many of our services in some way, not only the messaging service but also AD, Blockchain, Pay, Timeline, Cryptocurrency trading and more.
Many services feed data into our cluster, amounting to over 250 billion messages per day and 3.5 GB of incoming bytes per second, one of the largest scales in the world.
At the same time, it must be stable and performant at all times, because many important services use it as a backend.
In this talk I will give an overview of Kafka usage at LINE and how we operate it.
I will also talk about the engineering work we did to maximize its performance and to solve problems caused by hosting huge volumes of data from many services, leveraging advanced techniques like kernel-level dynamic tracing.
Flink Forward San Francisco 2022.
Resource Elasticity is a frequently requested feature in Apache Flink: Users want to be able to easily adjust their clusters to changing workloads for resource efficiency and cost saving reasons. In Flink 1.13, the initial implementation of Reactive Mode was introduced, later releases added more improvements to make the feature production ready. In this talk, we’ll explain scenarios to deploy Reactive Mode to various environments to achieve autoscaling and resource elasticity. We’ll discuss the constraints to consider when planning to use this feature, and also potential improvements from the Flink roadmap. For those interested in the internals of Flink, we’ll also briefly explain how the feature is implemented, and if time permits, conclude with a short demo.
by
Robert Metzger
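As a concrete pointer (taken from the documented Flink 1.13+ configuration, not from the slides themselves), Reactive Mode is enabled through a single scheduler setting in flink-conf.yaml and requires an Application Mode deployment:

```yaml
# flink-conf.yaml: opt the cluster into Reactive Mode (Flink 1.13+).
# The job then rescales automatically whenever TaskManagers join or leave.
scheduler-mode: reactive
```

With this set, adding or removing TaskManagers (for example via a Kubernetes horizontal scaler) causes the running job to restart at the new parallelism, which is the elasticity mechanism the talk discusses.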
The document provides step-by-step instructions for building and running Intel DPDK sample applications on a test environment with 3 virtual machines connected by 10G NICs. It describes compiling and running the helloworld, L2 forwarding, and L3 forwarding applications, as well as using the pktgen tool for packet generation between VMs to test forwarding performance. Key steps include preparing the Linux kernel for DPDK, compiling applications, configuring ports and MAC addresses, and observing packet drops to identify performance bottlenecks.
BPF, the Berkeley Packet Filter mechanism, was first introduced in Linux in 1997, in version 2.1.75. It has seen a number of extensions over the years. Recently, in versions 3.15 - 3.19, it received a major overhaul which drastically expanded its applicability. This talk will cover how the instruction set looks today and why, its architecture, capabilities, interface, and just-in-time compilers. We will also talk about how it is being used in different areas of the kernel, like tracing and networking, and about future plans.
InfluxDB is an open source time series database written in Go that stores metric data and performs real-time analytics. It has no external dependencies. InfluxDB stores data as time series with measurements, tags, and fields. Data is written using a line protocol and can be visualized using Grafana, an open source metrics dashboard.
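The line protocol mentioned above encodes each point as measurement, comma-separated tags, space, fields, space, timestamp. A minimal sketch of rendering a point (no escaping of special characters, and the helper name is my own, not part of any InfluxDB client):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one point in InfluxDB line protocol:
    measurement,tag1=v1,tag2=v2 field1=v1 timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "cpu", {"host": "server01", "region": "us-west"},
    {"usage_idle": 87.2}, 1465839830100400200,
)
print(line)
# cpu,host=server01,region=us-west usage_idle=87.2 1465839830100400200
```

Tags are indexed and always stored as strings, while fields hold the actual values; that split is what makes tag-filtered queries cheap.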
4 matched filters and ambiguity functions for radar signals-2 - Solo Hermelin
Matched filters (Part 2 of 2) maximize the output signal-to-noise ratio for a known radar signal at a predefined time.
For comments please contact me at solo.hermelin@gmail.com.
For more presentations on different subjects visit my website at http://www.solohermelin.com.
Where is my bottleneck? Performance troubleshooting in Flink - Flink Forward
Flink Forward San Francisco 2022.
In this talk, we will cover various topics around performance issues that can arise when running a Flink job and how to troubleshoot them. We'll start with the basics, like understanding what the job is doing and what backpressure is. Next, we will see how to identify bottlenecks and which tools or metrics can be helpful in the process. Finally, we will also discuss potential performance issues during the checkpointing or recovery process, as well as some tips and Flink features that can speed up checkpointing and recovery times.
by
Piotr Nowojski
Embedded Recipes 2018 - Finding sources of Latency In your system - Steven Ro... - Anne Nicolas
Having just an RTOS is not enough for a real-time system. The hardware must be deterministic as well as the applications that run on the system. When you are missing deadlines, the first thing that must be done is to find what is the source of the latency that caused the issue. It could be the hardware, the operating system or the application, or even a combination of the above. This talk will discuss how to determine where the latency is using tools that come with the Linux Kernel, and will explain a few cases that caused issues.
Common issues with Apache Kafka® Producer - confluent
Badai Aqrandista, Confluent, Senior Technical Support Engineer
This session will be about a common issue in the Kafka Producer: producer batch expiry. We will be discussing the Kafka Producer internals, its common causes, such as a slow network or small batching, and how to overcome them. We will also be sharing some examples along the way!
https://www.meetup.com/apache-kafka-sydney/events/279651982/
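The batch-expiry interplay the session describes comes down to a handful of producer settings. The values below are illustrative assumptions (a starting point, not a recommendation), but the configuration keys are the standard Kafka producer ones:

```python
# Hypothetical producer settings relevant to batch expiry; tune per workload.
producer_config = {
    "bootstrap.servers": "broker1:9092",  # placeholder address
    "delivery.timeout.ms": 120000,  # total time to deliver before a batch expires
    "linger.ms": 50,                # wait briefly so batches fill up
    "batch.size": 131072,           # 128 KiB batches amortize per-request overhead
    "request.timeout.ms": 30000,    # per-request wait for a broker response
}

# Kafka requires delivery.timeout.ms >= linger.ms + request.timeout.ms;
# a batch that cannot be delivered within delivery.timeout.ms is expired.
assert (producer_config["delivery.timeout.ms"]
        >= producer_config["linger.ms"] + producer_config["request.timeout.ms"])
```

On a slow network, raising delivery.timeout.ms buys the producer more retry time, while raising linger.ms/batch.size reduces the number of in-flight requests competing for bandwidth.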
How an Open Marine Standard, InfluxDB and Grafana Are Used to Improve Boating... - InfluxData
Steve and Teppo presented a solution for visualizing and analyzing boat data from disparate sources. Their goals were to record all boat data for later analysis, create customizable dashboards to visualize data and trends, and gain access to data from any source. Their solution uses Signal K to connect data sources, writes the data to InfluxDB for storage, and uses Grafana dashboards to visualize trends over time. This has helped users monitor engine performance, electrical systems, storms, and troubleshoot issues remotely. Future work includes more dashboards, alerts, device integrations, and extending the Signal K data model.
Tuning Apache Kafka Connectors for Flink.pptx - Flink Forward
Flink Forward San Francisco 2022.
In normal situations, the default Kafka consumer and producer configuration options work well. But we all know life is not all roses and rainbows, and in this session we'll explore a few knobs that can save the day in atypical scenarios. First, we'll take a detailed look at the parameters available when reading from Kafka. We'll inspect the params that help us quickly spot an application lock or crash, the ones that can significantly improve performance, and the ones to touch with gloves since they could cause more harm than benefit. Moreover, we'll explore the partitioning options and discuss when diverging from the default strategy is needed. Next, we'll discuss the Kafka Sink. After browsing the available options we'll dive deep into how to approach use cases like sinking enormous records, managing spikes, and handling small but frequent updates. If you want to understand how to make your application survive when the sky is dark, this session is for you!
by
Olena Babenko
Tracing MariaDB server with bpftrace - MariaDB Server Fest 2021 - Valeriy Kravchuk
Bpftrace is a relatively new eBPF-based open source tracer for modern Linux versions (kernels 5.x.y) that is useful for analyzing production performance problems and troubleshooting software. Basic usage of the tool, as well as bpftrace one liners and advanced scripts useful for MariaDB DBAs are presented. Problems of MariaDB Server dynamic tracing with bpftrace and some possible solutions and alternative tracing tools are discussed.
Chartbeat measures and monetizes attention on the web. They were experiencing slow load times and TCP retransmissions due to default system settings. Tuning various TCP, NGINX and EC2 ELB settings like increasing buffers, disabling Nagle's algorithm, and enabling HTTP keep-alive resolved the issues and improved performance. These included tuning settings like net.ipv4.tcp_max_syn_backlog, net.core.somaxconn, and nginx listen backlog values.
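The summary names the knobs but not the values Chartbeat chose; a sketch of what such tuning typically looks like (the numbers below are illustrative assumptions, not Chartbeat's actual settings):

```
# /etc/sysctl.d/99-tuning.conf (illustrative values)
# Allow more half-open connections during SYN bursts:
net.ipv4.tcp_max_syn_backlog = 65535
# Larger accept queue for listening sockets:
net.core.somaxconn = 65535
```

Raising net.core.somaxconn alone is not enough: nginx caps its own accept queue with the listen directive's backlog parameter (e.g. `listen 80 backlog=65535;`), so both must be raised together.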
DPDK is a set of drivers and libraries that allow applications to bypass the Linux kernel and access network interface cards directly for very high performance packet processing. It is commonly used for software routers, switches, and other network applications. DPDK can achieve over 11 times higher packet forwarding rates than applications using the Linux kernel network stack alone. While it provides best-in-class performance, DPDK also has disadvantages like reduced security and isolation from standard Linux services.
This document provides an overview and comparison of Informix's streaming technologies: Change Data Capture (CDC), Smart Triggers, Asynchronous Triggers, and V-II Socket Streaming. CDC processes database transaction logs to capture all changes and send them to clients. Smart Triggers use selective triggers and filtering to capture specific data changes. Asynchronous Triggers use post-commit triggers to route data to user-defined routines. V-II Socket Streaming sends triggered data to MQTT brokers but is not officially supported. The document also includes code examples and diagrams demonstrating how these technologies integrate with applications.
A 64Gb/s PAM-4 Transmitter with 4-Tap FFE and 2.26pJ/b Energy Efficiency in 2... - aiclab
University of Pavia and STMicroelectronics present a PAM-4 transmitter with 4-tap FFE in 28nm FDSOI CMOS. The proposed TX leverages a new serializer architecture and output stage to demonstrate 1.2Vppd output swing and the highest reported speed of 64Gb/s. Further, it shows state-of-the-art 2.26pJ/bit energy efficiency while meeting CEI-56G-PAM-4 requirements.
Distributed stream processing is evolving from a technology in the sidelines of Big Data to a key enabler for businesses to provide more scalable, real-time services to their customers. We at Ververica, the company founded by the original creators of Apache Flink, and other prominent players in the Flink community have witnessed this development from the driver's seat. Working with our customers and the wider community we have seen great success stories, and we have seen things going wrong. In this talk, I would like to share anecdotes and hard-learned lessons of adopting distributed stream processing, both Apache Flink specific and across frameworks. Afterwards, you will know how not to model your use cases as a stream processing application, which data structures not to use, how not to deal with failure, how not to approach the topic of monitoring, and much more.
Video: https://www.youtube.com/watch?v=F7HQd3KX2TQ&list=PLDX4T_cnKjD207Aa8b5CsZjc7Z_KRezGz&index=48&t=6s
CETH for XDP [Linux Meetup Santa Clara | July 2016] - IO Visor Project
This document discusses CETH (Common Ethernet Driver Framework), which aims to improve kernel networking performance for virtualization. CETH simplifies NIC drivers by consolidating common functions. It supports various NICs and accelerators. CETH features efficient memory and buffer management, flexible TX/RX scheduling, and a customizable metadata structure. It is being simplified to work with XDP for even higher performance network I/O processing in the kernel. Next steps include further optimizations and measuring performance gains when using CETH with XDP and virtualized environments.
Open vSwitch Offload: Conntrack and the Upstream KernelNetronome
Offloading all or part of the Open vSwitch datapath to SmartNICs has been shown to not only release CPU resources on the server, but improve traffic processing performance. Recently steps have been made to support such offloading in the upstream Linux kernel. This has focused on creating an OVS datapath using the TC flower filter and utilizing the offload hooks already present here. This presentation focuses on how Connection Tracking (Conntrack) may fit into this model. It describes current work being undertaken with the Netfilter community to allow offloading of Conntrack entries. It continues to link this work with the offloading of Conntrack rules within OVS-TC.
Time to say goodbye to your Nagios based setup. Discover all the new cool tools out there to do some more efficient monitoring. A talk made at OSMC 2014.
https://www.youtube.com/watch?v=_BAWi9Zhmic
The open source market is getting overcrowded with different network monitoring solutions, and not without reason: monitoring your infrastructure becomes more important each day, and you have to know what's going on, for your boss, your customers, and for yourself. Nagios started the evolution, but today OpenNMS, Zabbix, Zenoss, GroundWork, Hyperic and various others are showing up in the market. Do you want lightweight, or feature-full? How far do you want to go with your monitoring: just at the OS level, or do you want to dig into your applications? Do you want to know how many queries per second your MySQL database is serving, to know the internal state of your JBoss, or to be alerted when the OOM killer is about to start working? This presentation will guide the audience through the different alternatives, based on our experiences in the field. We will look both at alerting and trending, and at how easy or difficult it is to deploy such an environment.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1LarzvW.
Paul Dix discusses experiences building InfluxDB, an open source distributed time series database, in Go. He talks about what drove the decision to use Go, what's been really great about developing in the language, and a few of the pains that they’ve had along the way. He also digs into what performance characteristics they’ve seen in Go 1.4 vs. Go 1.5, which has a new garbage collector. Filmed at qconsf.com.
Paul Dix is the CEO of InfluxDB. He is the editor for Addison Wesley's "Data and Analytics", the author of "Service Oriented Design with Ruby and Rails" and the founder of the NYC Machine Learning Meetup.
Measure your app internals with InfluxDB and Symfony2 (Corley S.r.l.)
This document discusses using InfluxDB, a time-series database, to measure application internals in Symfony. It describes sending data from a Symfony app to InfluxDB using its PHP client library, and visualizing the data with Grafana dashboards. Key steps include setting up the InfluxDB client via dependency injection, dispatching events from controllers, listening for them to send data to InfluxDB, and building Grafana dashboards to view measurements over time.
This document discusses InfluxDB, an open-source time series database. It stores time stamped numeric data in structures called time series. The document provides an overview of time series data, describes how to install and use InfluxDB, and discusses features like its HTTP API, client libraries, Grafana integration for visualization, and benchmark results showing it has better performance for time series data than other databases.
The document discusses job design and ergonomics. It defines job design as deciding the content, methods, and relationships within an organization. It also outlines factors that affect job design and proposals like cross-training, job enlargement, job enrichment, and team production. The document then defines ergonomics as the scientific study of human capabilities and aims of ergonomics to improve effectiveness, efficiency, and fit the job to the person. It lists benefits of ergonomics as improved safety, comfort, job satisfaction, and quality of life while reducing stress and fatigue.
Introduction to InfluxDB, an Open Source Distributed Time Series Database by ...Hakka Labs
In this presentation, Paul introduces InfluxDB, a distributed time series database that he open sourced based on the backend infrastructure at Errplane. He talks about why you'd want a database specifically for time series and he covers the API and some of the key features of InfluxDB, including:
• Stores metrics (like Graphite) and events (like page views, exceptions, deploys)
• No external dependencies (self contained binary)
• Fast. Handles many thousands of writes per second on a single node
• HTTP API for reading and writing data
• SQL-like query language
• Distributed to scale out to many machines
• Built in aggregate and statistics functions
• Built in downsampling
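Later versions of InfluxDB accept writes over the HTTP API in a plain-text "line protocol" of the form `measurement,tags fields timestamp`. As a minimal Go sketch of encoding one such point (the measurement, tag, and field names here are invented for the example):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildLine encodes a single point in InfluxDB's line protocol:
//   measurement,tag1=v1,tag2=v2 field1=v1,field2=v2 timestamp
// Tags and fields are emitted in sorted key order; escaping of
// special characters is omitted in this sketch.
func buildLine(measurement string, tags map[string]string, fields map[string]float64, ts int64) string {
	var b strings.Builder
	b.WriteString(measurement)
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(&b, ",%s=%s", k, tags[k])
	}
	b.WriteByte(' ')
	fkeys := make([]string, 0, len(fields))
	for k := range fields {
		fkeys = append(fkeys, k)
	}
	sort.Strings(fkeys)
	parts := make([]string, 0, len(fkeys))
	for _, k := range fkeys {
		parts = append(parts, fmt.Sprintf("%s=%g", k, fields[k]))
	}
	b.WriteString(strings.Join(parts, ","))
	fmt.Fprintf(&b, " %d", ts)
	return b.String()
}

func main() {
	line := buildLine("cpu",
		map[string]string{"host": "web01"},
		map[string]float64{"load": 0.64},
		1441239782000000000)
	fmt.Println(line)
	// This line would be POSTed to the /write endpoint of the HTTP API.
}
```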
Beautiful Monitoring With Grafana and InfluxDB (leesjensen)
Query your data streams with the time series database InfluxDB and then visualize the results with stunning Grafana dashboards. Quick and easy to set up. Fully scalable to millions of metrics per second.
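On the read side, a dashboard query against InfluxDB is just an HTTP GET on the /query endpoint carrying an InfluxQL statement. A rough Go sketch of building such a request URL; the host, database, and measurement names below are made up:

```go
package main

import (
	"fmt"
	"net/url"
)

// queryURL builds a request URL for InfluxDB's HTTP /query endpoint.
// The query itself is written in InfluxQL, the SQL-like query language.
func queryURL(host, db, q string) string {
	v := url.Values{}
	v.Set("db", db)
	v.Set("q", q)
	return fmt.Sprintf("http://%s/query?%s", host, v.Encode())
}

func main() {
	u := queryURL("localhost:8086", "nagflux",
		"SELECT mean(value) FROM rta WHERE time > now() - 1h GROUP BY time(5m)")
	fmt.Println(u)
	// A GET on this URL returns the aggregated series as JSON,
	// which a tool like Grafana renders as a dashboard panel.
}
```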
Monitoring Kubernetes with Prometheus (Kubernetes Ireland, 2016) - Brian Brazil
Prometheus is a next-generation monitoring system. Since being publicly announced last year it has seen wide-spread interest and adoption. This talk will look at the concepts behind monitoring with Prometheus, and how to use it with Kubernetes which has direct support for Prometheus.
OSMC 2015: Grafana meets Monitoring - Introducing a Complete Solution by Phili... (NETWAYS)
The key to increase the popularity of monitoring is to facilitate the access and to adapt it to current design conceptions. This talk will introduce an approach in OMD to achieve the latter by replacing the classical RRD graphs of performance data with modern ones. For that Grafana in combination with InfluxDB has been integrated into OMD. In particular every effort has been made to keep the former strong-points of similar systems and to rectify potential weak-points in order to ensure especially the practicability.
IPv6 at PostFinance AG - First Findings from the Preliminary Study (Swiss IPv6 Council)
Contents:
A brief introduction: the IT organization of PostFinance
IPv6 at PostFinance: why we are looking into IPv6
Taking stock: where we stand today
Challenges: our first findings on IPv6
Next steps: gearing up for the future
These are the slides of our webinar on SAP BOPF, which we held on 27.1.2017.
SAP BOPF (Business Object Processing Framework) consists of a set of services and functionality that serves to standardize and modularize ABAP developments.
Besides a theoretical overview and selected live demos, we also shared experiences from two projects.
FMK2016 - Longin Ziegler - Step by Step to Your Own Calendar (Verein FM Konferenz)
At the FileMaker Konferenz 2016 in Salzburg, Longin Ziegler presents a solution for creating calendar entries for iCal or Outlook in FileMaker without plugins.
OSMC 2012 | Corporate IT Monitoring at ING-DiBa AG by Dr. Sven Wolfarth (NETWAYS)
The monitoring infrastructure that has grown over the years in a large company like ING-DiBa is a sensitive and complex construct. To be able to offer our specialist teams a flexible and robust monitoring service, we decided to replace our monitoring tool with Icinga.
This talk will show which path we took and which challenges we faced in this migration project. The project was a full success throughout. But we also learned that the challenges we had to solve were not only technological. With the large number of specialist teams involved, the right communication as well as appropriate risk and expectation management were decisive. Using concrete examples, we will show how we always found the right balance and completed the project on schedule.
Final presentation on the semantically supported free interface composition of the media demonstrator within the SENSE research project (www.sense-projekt.de).
UX Congress 2016: Agile as an Agency - Ideas, Fails and Learnings (Martin Snajdr)
Even in the most conservative industries, "agile" has arrived as a buzzword: everything has to happen in iterations, preferably with this Scrum thing. Yet hardly anyone knows what that actually means and in how many places it affects day-to-day project work. The best example is the agile project at a fixed price.
How we as a UX agency currently meet this demand, and our way there: ideas, approaches, failures and insights.
Talk at UX Congress, 07.10.2016, Frankfurt/Main, Germany, by Martin Snajdr, Director Technology at COBE
TYPO3camp Munich 2018 - Keynote - "Wo woll'n mer denn hin?" (Oliver Hader)
Keynote at TYPO3camp 2018 in Munich ("Where do we want to go?"), looking back at the achievements of the past ten years, giving an overview of important features of TYPO3 v9, and offering an outlook on technological topics and a possible future working mode for the development of the TYPO3 core system.
FMK2017 - Script Programming and Error Handling in FileMaker by Heike Lan... (Verein FM Konferenz)
Script programming is actually quite simple.
The few commands that you could also invoke via menu or keyboard shortcut can quickly be clicked together with the mouse.
And then ... everyday life arrives, or the inexperienced user, or both.
Heike Landschulz shows you that script programming is no black magic, with a particular focus on error handling and on how to help yourself when you do not have FileMaker Advanced.
Slide 3
Overview
• Current standard: PNP4Nagios
• What was/is the goal?
• Tools used
– InfluxDB
– Grafana
• How do you get a graph?
– Backend
– Frontend
• Production use
• Goals achieved?
07.09.2016 Philip Griesbacher - www.consol.de
Slide 5
What was/is the goal?
• A contemporary look and feel
• Keep the old strengths
• Add new features
• Ready for production use
Slide 6
Reasons for the switch
• Fixed time grid
– Loss of information due to normalization of the values
– Finer resolution only for newly arriving data
• Primary use of RRD data: generating RRD graphs
– Other uses limited / only possible via workarounds
Slide 7
Tools used – InfluxDB
• "An open-source distributed time series database with no external dependencies (https://influxdb.com/, 06.11.2015)."
Slide 8
Tools used – Grafana
• "An open source, feature rich metrics dashboard and graph editor for Graphite, InfluxDB & OpenTSDB (https://github.com/grafana/grafana, 06.11.2015)."
Slide 9
How do you get a graph?
Slide 10
Backend – Nagflux
• Connects "Nagios-like" systems to an InfluxDB
• Interfaces:
– Perfdata directory
– Gearman
– Livestatus, to enrich the performance data
– Downtimes
– Notifications
– …
– Data from third-party systems
• Programming language: Go (1.5)
• https://github.com/Griesbacher/nagflux
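A bridge like nagflux essentially has to turn Nagios plugin performance data ("label=value[UOM];warn;crit;min;max ...") into time series points. The following Go snippet is a simplified sketch of that parsing step, not nagflux's actual code; quoting rules, thresholds, and min/max are ignored here:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// PerfValue holds one parsed Nagios performance-data metric.
type PerfValue struct {
	Label string
	Value float64
	Unit  string
}

// parsePerfdata splits a Nagios plugin performance-data string
// ("label=value[UOM];warn;crit;min;max ...") into its metrics.
func parsePerfdata(s string) []PerfValue {
	var out []PerfValue
	for _, tok := range strings.Fields(s) {
		eq := strings.IndexByte(tok, '=')
		if eq < 0 {
			continue
		}
		label := strings.Trim(tok[:eq], "'")
		// keep only the value part, dropping warn/crit/min/max
		rest := strings.SplitN(tok[eq+1:], ";", 2)[0]
		// separate the numeric value from a trailing unit (ms, %, B, ...)
		i := len(rest)
		for i > 0 && !unicode.IsDigit(rune(rest[i-1])) && rest[i-1] != '.' {
			i--
		}
		var v float64
		if _, err := fmt.Sscanf(rest[:i], "%f", &v); err != nil {
			continue
		}
		out = append(out, PerfValue{Label: label, Value: v, Unit: rest[i:]})
	}
	return out
}

func main() {
	for _, p := range parsePerfdata("rta=0.052ms;100;500;0 pl=0%;20;60") {
		fmt.Printf("%s = %g %s\n", p.Label, p.Value, p.Unit)
	}
}
```

Each parsed metric would then be written to InfluxDB, with host, service, and label typically ending up as tags so Grafana can group and filter on them.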