The components of an enterprise IT environment can only work together and communicate with the outside world if the network functions. Switches, routers, firewalls, and load balancers form the backbone of networked systems and are therefore primary targets for monitoring. Until now there was a separate plugin for every vendor and every query type, which meant that Nagios installations ended up running more than ten plugins, each of course with its own command-line syntax. check_nwc_health was written to put an end to this madness. Its goal is to bundle all requirements for monitoring the most common network components into a single plugin.
It is now successfully used in several environments, each with thousands of network nodes (Cisco, Juniper, HP, CheckPoint, F5, Brocade, Bluecoat, and many more), and its feature list is growing steadily.
Gerhard Laußer shows how to set up network monitoring based on check_nwc_health with little effort, and how to extend the plugin for special requirements with just a few lines of code.
Icinga Director and vSphereDB - how they play together - Icinga Camp Zurich 2019 | Icinga
Talk by: Thomas Gelf
While the Icinga Director is the main configuration tool for Icinga, vSphereDB is a completely different beast. Icinga models everything around Hosts and Services, whereas vSphereDB discovers your whole VMware infrastructure and builds a huge and deep inventory.
This talk explains the reasoning behind this, shows what’s possible right now, and outlines where these powerful Icinga components are heading in the near future.
Kernel Recipes 2019 - XDP closer integration with network stack | Anne Nicolas
XDP (eXpress Data Path) is the new programmable in-kernel fast-path, which is placed as a layer before the existing Linux kernel network stack (netstack).
We claim XDP is not kernel bypass, since it sits as a layer in front of netstack and can easily fall through to it. In reality, it can easily be (ab)used to create a kernel-bypass situation in which none of the kernel facilities (in the form of BPF helpers and in-kernel tables) are used. The main disadvantage of kernel bypass is the need to re-implement everything, even basic building blocks like routing tables and ARP protocol handling.
It is part of the concept, and of the speed gain, that XDP allows users to avoid calling parts of the kernel code. Users have the freedom to do kernel bypass and re-implement everything, but the kernel should provide access to more in-kernel tables via BPF helpers, so that users can leverage other parts of the open source ecosystem, such as routing daemons.
This talk is about how XDP can work in concert with netstack, and proposes how we can take this even further. Crazy ideas, like using XDP frames to move SKB allocation out of driver code, will also be presented.
Kafka’s New Control Plane: The Quorum Controller | Colin McCabe, Confluent
Currently, Apache Kafka® uses Apache ZooKeeper™ to store its metadata. Data such as the location of partitions and the configuration of topics are stored outside of Kafka itself, in a separate ZooKeeper cluster. In 2019, we outlined a plan to break this dependency and bring metadata management into Kafka itself through a dynamic service that runs inside the Kafka Cluster. We call this the Quorum Controller.
In this talk, we’ll look at how the Quorum Controller works and how it integrates with other parts of the next-generation Kafka architecture, such as the Raft quorum and snapshotting mechanism. We’ll also explain how the Quorum Controller will simplify operations, improve security, and enhance scalability and performance.
Finally, we’ll look at some of the practicalities, such as how to monitor and run the Quorum Controller yourself. We’ll talk about some of the performance gains we’ve seen, and our plans for the future.
If AMD Adopted OMI in their EPYC Architecture | Allan Cantle
AMD's EPYC architecture has paved the way toward heterogeneous data-centric computing, but it is still limited by its parallel DDR interfaces. This presentation shows the potential of the EPYC architecture if it adopted the Open Memory Interface (OMI) for its near-memory interface.
Computing Performance: On the Horizon (2021) | Brendan Gregg
Talk by Brendan Gregg for USENIX LISA 2021. https://www.youtube.com/watch?v=5nN1wjA_S30 . "The future of computer performance involves clouds with hardware hypervisors and custom processors, servers running a new type of BPF software to allow high-speed applications and kernel customizations, observability of everything in production, new Linux kernel technologies, and more. This talk covers interesting developments in systems and computing performance, their challenges, and where things are headed."
Video and slides synchronized; mp3 and slide download available at http://bit.ly/2msgBlb.
Kavya Joshi explores when and why locks affect performance, delves into Go’s lock implementation as a case study, and discusses strategies one can use when locks are actually a problem. Filmed at qconnewyork.com.
Kavya Joshi works as a software engineer at Samsara, a start-up in San Francisco. She particularly enjoys architecting and building highly concurrent, highly scalable systems.
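Although the talk's case study is Go's lock implementation, the contention pattern it examines is language-agnostic. A minimal sketch in Java (class name and counts are illustrative, not from the talk): every thread must pass through the same lock, so the count stays exact, but the increments execute one at a time — exactly the serialization that makes contended locks a performance problem.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockContention {
    private static final ReentrantLock lock = new ReentrantLock();
    private static long counter = 0;

    public static void main(String[] args) throws InterruptedException {
        int threads = 4;
        int perThread = 100_000;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    lock.lock();          // all threads serialize here
                    try {
                        counter++;
                    } finally {
                        lock.unlock();
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // The lock keeps the count exact, at the cost of serializing every increment.
        System.out.println(counter);  // 400000
    }
}
```

Replacing the single lock with per-thread counters (or a `java.util.concurrent.atomic.LongAdder`) removes the contention at the cost of a slightly more involved read path — one of the standard strategies for when locks actually are the problem.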
Meta/Facebook's database serving social workloads runs on top of MyRocks (MySQL on RocksDB), which means our performance and reliability depend heavily on RocksDB. Beyond MyRocks, we also have other important systems running on top of RocksDB. We have learned many lessons from operating and debugging RocksDB at scale.
In this session, we will offer an overview of RocksDB, key differences from InnoDB, and share a few interesting lessons learned from production.
This document discusses optimizing Ceph latency through hardware design. It finds that CPU frequency has a significant impact on latency, with higher frequencies resulting in lower latencies: testing shows a 4 KB write latency of 2.4 ms at 900 MHz but 694 µs at higher frequencies. The document also discusses how CPU power states that wake slowly, such as C6 with an 85 µs exit latency, can negatively impact latency. Overall, it advocates designing hardware with fast CPUs and avoiding slower cores or dual-socket configurations to minimize latency in Ceph deployments.
This document describes exploiting a use-after-free vulnerability called "Hearthstone" in VMware Workstation to escape from a virtual machine. It begins with background on VMware RPC and the fuzzing framework used. It then explains the Hearthstone vulnerability, how it allows information leakage, and how that leakage can be used to conduct an out-of-bounds write to achieve code execution on the host system. The presentation concludes with a demonstration of the exploitation process and takes questions.
The engineering teams within Splunk have been using several messaging technologies (Kinesis, SQS, RabbitMQ, and Apache Kafka) for enterprise-wide messaging for the past few years, but have recently decided to pivot toward Apache Pulsar, migrating existing use cases and embedding it into new cloud-native service offerings such as the Splunk Data Stream Processor (DSP).
Shared Memory Centric Computing with CXL & OMI | Allan Cantle
Discusses how CXL can be better utilized as a Fabric Cache domain separate from a processor's own Local Cache domain. This is done by leveraging a shared-memory-centric architecture that uses both the Open Memory Interface (OMI) and Compute Express Link (CXL) for the memory ports.
This document provides an overview of Kubernetes 101. It begins by asking why Kubernetes is needed and gives a brief history of the project. It describes containers and container orchestration tools, then covers the main components of the Kubernetes architecture, including pods, replica sets, deployments, services, and ingress. It provides examples of common Kubernetes manifest files and discusses basic Kubernetes primitives. It concludes by discussing DevOps practices after adopting Kubernetes and potential next steps for learning more advanced Kubernetes topics.
FreeRTOS Basics (Real-Time Operating System) | Naren Chandra
A presentation that covers all the basics needed to understand and start working with FreeRTOS. FreeRTOS is compatible with more than 20 controller families and more than 30 supporting tools and IDEs.
FreeRTOS is a market-leading real-time operating system (RTOS) for microcontrollers and small microprocessors. Distributed freely under the MIT open source license, FreeRTOS includes a kernel and a growing set of libraries suitable for use across all industry sectors. FreeRTOS is built with an emphasis on reliability and ease of use.
Tiny, power-saving kernel
Scalable size, with a usable program memory footprint as low as 9 KB. Some architectures include a tick-less power-saving mode.
Support for 40+ architectures
One code base for 40+ MCU architectures and 15+ toolchains, including the latest RISC-V and ARMv8-M (Arm Cortex-M33) microcontrollers
Modular libraries
A growing number of add-on libraries used across all industry sectors, including secure local or cloud connectivity
IoT Reference Integrations
Take advantage of tested examples that include all the libraries essential to securely connect to the cloud
Big Data means big hardware, and the less of it we can use to do the job properly, the better the bottom line. Apache Kafka makes up the core of the data pipelines at many organizations, including LinkedIn, and we are on a perpetual quest to squeeze as much as we can out of our systems, from ZooKeeper to the brokers to the various client applications. This means we need to know how well the system is running; only then can we start turning the knobs to optimize it. In this talk, we will explore how best to monitor Kafka and its clients to ensure they are working well. Then we will dive into how to get the best performance from Kafka, including how to pick hardware and the effect of a variety of configurations in both the broker and clients. We’ll also talk about setting up Kafka for no data loss.
Kernel Recipes 2015: Linux Kernel IO subsystem - How it works and how can I s... | Anne Nicolas
Understanding how the Linux kernel IO subsystem works is key to analyzing a wide variety of issues that occur when running a Linux system. This talk aims to help Linux users understand what is going on and how to get more insight into what is happening.
First we present an overview of the Linux kernel block layer, including the different IO schedulers. We also talk about the new block multiqueue implementation that is being used for more and more devices.
After surveying the basic architecture, we will be prepared to talk about tools for peeking into it. We start with lightweight monitoring like iostat and continue with the heavier blktrace and the variety of tools based on it. We demonstrate the use of these tools on analyses of real-world issues.
Jan Kara, SUSE
Optimizing Kubernetes Resource Requests/Limits for Cost-Efficiency and Latenc... | Henning Jacobs
Kubernetes has the concept of resource requests and limits. Pods get scheduled on the nodes based on their requests and are optionally limited in how much of the resource they can consume. Understanding and optimizing resource requests/limits is crucial both for reducing resource "slack" and for ensuring application performance and low latency. This talk shows our approach to monitoring and optimizing Kubernetes resources across 80+ clusters to achieve cost-efficiency and reduce the impact on latency-critical applications. All tools shown are open source and can be applied to most Kubernetes deployments.
Crimson: Ceph for the Age of NVMe and Persistent Memory | ScyllaDB
Ceph is a mature open source software-defined storage solution that was created over a decade ago.
During that time, new, faster storage technologies have emerged, including NVMe and persistent memory.
The Crimson project's aim is to create a better Ceph OSD that is better suited to these faster devices. The Crimson OSD is built on the Seastar C++ framework and can leverage these devices by minimizing latency, CPU overhead, and cross-core communication. This talk will discuss the project's design, our current status, and our future plans.
QEMU Disk IO: Which performs better, native or threads? | Pradeep Kumar
Pradeep Kumar Surisetty from Red Hat presented a comparison of native and threaded I/O performance in QEMU disk I/O. He outlined KVM I/O architecture, storage transport options in KVM including virtio-blk configurations, and benchmark tools used. Performance testing was done with various disk types, file systems, images and configurations. Native generally outperformed threads for random I/O workloads, while threads sometimes showed better performance for sequential reads, especially with multiple VMs.
The document introduces CompletableFuture in Java, a class that allows asynchronous, non-blocking operations to be performed and chained together. It provides methods to chain dependent tasks without blocking or callback hell. CompletableFuture implements the Future and CompletionStage interfaces and provides various methods for handling results and errors and for chaining and composing asynchronous operations.
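A short sketch of the chaining style described above (the values are arbitrary): each `thenApply` registers a dependent transformation instead of blocking, and `exceptionally` supplies a fallback for a failure anywhere earlier in the chain.

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {
    public static void main(String[] args) {
        // supplyAsync starts work on the common ForkJoinPool;
        // thenApply chains dependent transformations without blocking;
        // exceptionally supplies a fallback if any earlier stage failed.
        CompletableFuture<Integer> pipeline =
            CompletableFuture.supplyAsync(() -> "42")
                .thenApply(Integer::parseInt)
                .thenApply(n -> n * 2)
                .exceptionally(ex -> -1);

        // join() blocks only here, at the very end of the pipeline.
        System.out.println(pipeline.join());  // 84
    }
}
```

The same composition style extends to combining stages (`thenCombine`) and flattening nested futures (`thenCompose`), which is how dependent async tasks are chained without callback hell.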
Apache Kafka lies at the heart of the largest data pipelines, handling trillions of messages and petabytes of data every day. Learn the right approach for getting the most out of Kafka from the experts at LinkedIn and Confluent. Todd Palino and Gwen Shapira demonstrate how to monitor, optimize, and troubleshoot performance of your data pipelines—from producer to consumer, development to production—as they explore some of the common problems that Kafka developers and administrators encounter when they take Apache Kafka from a proof of concept to production usage. Too often, systems are overprovisioned and underutilized and still have trouble meeting reasonable performance agreements.
Topics include:
- What latencies and throughputs you should expect from Kafka
- How to select hardware and size components
- What you should be monitoring
- Design patterns and antipatterns for client applications
- How to go about diagnosing performance bottlenecks
- Which configurations to examine and which ones to avoid
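On the "no data loss" point, the abstract does not enumerate settings, but the producer-side knobs usually examined are the durability configs. A hedged sketch using only `java.util.Properties` (no Kafka dependency; the class name is illustrative), showing the property names as they would be passed to a Kafka producer:

```java
import java.util.Properties;

public class NoDataLossConfig {
    // Producer properties commonly recommended for durability.
    public static Properties producerProps() {
        Properties p = new Properties();
        // Wait until all in-sync replicas have acknowledged each write.
        p.setProperty("acks", "all");
        // Retry transient failures instead of dropping records.
        p.setProperty("retries", Integer.toString(Integer.MAX_VALUE));
        // Idempotence prevents duplicates introduced by those retries.
        p.setProperty("enable.idempotence", "true");
        return p;
    }

    public static void main(String[] args) {
        Properties p = producerProps();
        System.out.println(p.getProperty("acks"));  // all
    }
}
```

On the broker/topic side, `replication.factor` and `min.insync.replicas` complete the picture: `acks=all` only guarantees durability relative to the current in-sync replica set.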
Java Performance Analysis on Linux with Flame Graphs | Brendan Gregg
This document discusses using Linux perf_events (perf) profiling tools to analyze Java performance on Linux. It describes how perf can provide complete visibility into Java, JVM, GC and system code but that Java profilers have limitations. It presents the solution of using perf to collect mixed-mode flame graphs that include Java method names and symbols. It also discusses fixing issues with broken Java stacks and missing symbols on x86 architectures in perf profiles.
This document discusses using IP multicast and layer 2 networking techniques in AWS VPC to enable features like VRRP and software load balancers. It describes how to implement pseudo broadcast/multicast in VPC using IP multicast and packet duplication. Examples are provided showing how to configure LVS and Keepalived for VRRP heartbeat and failover. The document concludes that VPC provides more flexibility and options for architecting compared to EC2-Classic.
In this session, we’ll review how previous efforts, including Netfilter, Berkeley Packet Filter (BPF), Open vSwitch (OVS), and TC, approached the problem of extensibility. We’ll show you an open source solution available within the Red Hat Enterprise Linux kernel, where extending and merging some of the existing concepts leads to an extensible framework that satisfies the networking needs of datacenter and cloud virtualization.
Everything you ever needed to know about Kafka on Kubernetes but were afraid ... | Hosted by Confluent
Kubernetes became the de facto standard for running cloud-native applications, and many users turn to it also to run stateful applications such as Apache Kafka. You can use different tools to deploy Kafka on Kubernetes: write your own YAML files, use Helm charts, or go for one of the available operators. But all of these have one thing in common: you still need very good knowledge of Kubernetes to make sure your Kafka cluster works properly in all situations. This talk will cover different Kubernetes features such as resources, affinity, tolerations, pod disruption budgets, topology spread constraints, and more, and will explain why they are important for Apache Kafka and how to use them. If you are interested in running Kafka on Kubernetes and do not know all of these, this is the talk for you.
OSMC 2014: Monitoring von Netzwerkkomponenten mit check_nwc_health | Gerhard ... | NETWAYS
Die Komponenten einer Unternehmens-IT können nur zusammenarbeiten und mit der Außenwelt kommunizieren, wenn das Netzwerk funktioniert. Switches, Router, Firewalls und Loadbalancer bilden das Rückgrat vernetzter Systeme und sind somit Primärziele für das Monitoring. Bisher gab es für jedes Fabrikat und jeden Abfragetyp ein extra Plug-in. Dies führte dazu, dass in Nagios-Installationen mehr als zehn Plug-ins, natürlich jedes mit seiner eigenen Kommandozeilensyntax, zum Einsatz kamen. Um diesen Irrsinn zu beenden wurde check_nwc_health geschrieben. Es hat sich zum Ziel gesetzt, sämtliche Anforderungen beim Monitoring der gebräuchlichsten Netzwerkkomponenten in einem einzigen Plug-in zu bündeln.
Mittlerweile wird es in mehreren Umgebungen mit jeweils tausenden von Netzknoten (Cisco, Juniper, HP, CheckPoint, F5, Brocade, Bluecoat uvm.) erfolgreich eingesetzt und die Liste der Features wächst stetig.
Gerhard Laußer zeigt, wie mit wenig Aufwand ein Netzwerk-Monitoring auf Basis von check_nwc_health eingerichtet werden kann und wie man das Plug-in mit wenigen Zeilen Code für spezielle Anforderungen aufbohren kann.
This document describes exploiting a use-after-free vulnerability called "Hearthstone" in VMware Workstation to escape from a virtual machine. It begins with background on VMware RPC and the fuzzing framework used. It then explains the Hearthstone vulnerability, how it allows information leakage, and how that leakage can be used to conduct an out-of-bounds write to achieve code execution on the host system. The presentation concludes with a demonstration of the exploitation process and takes questions.
The engineering teams within Splunk have been using several technologies Kinesis, SQS, RabbitMQ and Apache Kafka for enterprise wide messaging for the past few years but have recently made the decision to pivot toward Apache Pulsar, migrating both existing use cases and embedding it into new cloud-native service offerings such as the Splunk Data Stream Processor (DSP).
Shared Memory Centric Computing with CXL & OMIAllan Cantle
Discusses how CXL can be better utilized as a separate Fabric Cache domain to a processors own Local Cache Domain. This is done by leveraging a Shared Memory Centric architectures that utilize both the Open Memory Interface OMI, and Compute eXpress Link, CXL, for the memory ports.
This document provides an overview of Kubernetes 101. It begins with asking why Kubernetes is needed and provides a brief history of the project. It describes containers and container orchestration tools. It then covers the main components of Kubernetes architecture including pods, replica sets, deployments, services, and ingress. It provides examples of common Kubernetes manifest files and discusses basic Kubernetes primitives. It concludes with discussing DevOps practices after adopting Kubernetes and potential next steps to learn more advanced Kubernetes topics.
FreeRTOS basics (Real time Operating System)Naren Chandra
A presentation that covers all the basics needed to understand and start working with FreeRTOS . FreeRTOS is comparable with more than 20 controller families and 30 plus supporting tools and IDEs.
FreeRTOS is a market-leading real-time operating system (RTOS) for microcontrollers and small microprocessors. Distributed freely under the MIT open source license, FreeRTOS includes a kernel and a growing set of libraries suitable for use across all industry sectors. FreeRTOS is built with an emphasis on reliability and ease of use.
Tiny, power-saving kernel
Scalable size, with usable program memory footprint as low as 9KB. Some architectures include a tick-less power saving mode
Support for 40+ architectures
One code base for 40+ MCU architectures and 15+ toolchains, including the latest RISC-V and ARMv8-M (Arm Cortex-M33) microcontrollers
Modular libraries
A growing number of add-on libraries used across all industries sectors, including secure local or cloud connectivity
IoT Reference Integrations
Take advantage of tested examples that include all the libraries essential to securely connect to the cloud
Big Data means big hardware, and the less of it we can use to do the job properly, the better the bottom line. Apache Kafka makes up the core of our data pipelines at many organizations, including LinkedIn, and we are on a perpetual quest to squeeze as much as we can out of our systems, from Zookeeper, to the brokers, to the various client applications. This means we need to know how well the system is running, and only then can we start turning the knobs to optimize it. In this talk, we will explore how best to monitor Kafka and its clients to assure they are working well. Then we will dive into how to get the best performance from Kafka, including how to pick hardware and the effect of a variety of configurations in both the broker and clients. We’ll also talk about setting up Kafka for no data loss.
Kernel Recipes 2015: Linux Kernel IO subsystem - How it works and how can I s...Anne Nicolas
Understanding how Linux kernel IO subsystem works is a key to analysis of a wide variety of issues occurring when running a Linux system. This talk is aimed at helping Linux users understand what is going on and how to get more insight into what is happening.
First we present an overview of Linux kernel block layer including different IO schedulers. We also talk about a new block multiqueue implementation that gets used for more and more devices.
After surveying the basic architecture we will be prepared to talk about tools to peek into it. We start with lightweight monitoring like iostat and continue with more heavy blktrace and variety of tools that are based on it. We demonstrate use of the tools on analysis of real world issues.
Jan Kara, SUSE
Optimizing Kubernetes Resource Requests/Limits for Cost-Efficiency and Latenc...Henning Jacobs
Kubernetes has the concept of resource requests and limits. Pods get scheduled on the nodes based on their requests and optionally limited in how much of the resource they can consume. Understanding and optimizing resource requests/limits is crucial both for reducing resource "slack" and ensuring application performance/low-latency. This talk shows our approach to monitoring and optimizing Kubernetes resources for 80+ clusters to achieve cost-efficiency and reducing impact for latency-critical applications. All shown tools are Open Source and can be applied to most Kubernetes deployments.
Crimson: Ceph for the Age of NVMe and Persistent MemoryScyllaDB
Ceph is a mature open source software-defined storage solution that was created over a decade ago.
During that time new faster storage technologies have emerged including NVMe and Persistent memory.
The crimson project aim is to create a better Ceph OSD that is more well suited to those faster devices. The crimson OSD is built on the Seastar C++ framework and can leverage these devices by minimizing latency, cpu overhead, and cross-core communication. This talk will discuss the project design, our current status, and our future plans.
QEMU Disk IO Which performs Better: Native or threads?Pradeep Kumar
Pradeep Kumar Surisetty from Red Hat presented a comparison of native and threaded I/O performance in QEMU disk I/O. He outlined KVM I/O architecture, storage transport options in KVM including virtio-blk configurations, and benchmark tools used. Performance testing was done with various disk types, file systems, images and configurations. Native generally outperformed threads for random I/O workloads, while threads sometimes showed better performance for sequential reads, especially with multiple VMs.
The document introduces CompletableFuture in Java, which is a library that allows asynchronous and non-blocking operations to be performed and chained together. It provides methods to chain dependent tasks together without blocking or callback hell. CompletableFuture implements Future and CompletionStage interfaces and provides various methods to handle results, errors, chaining and composition of asynchronous operations.
Apache Kafka lies at the heart of the largest data pipelines, handling trillions of messages and petabytes of data every day. Learn the right approach for getting the most out of Kafka from the experts at LinkedIn and Confluent. Todd Palino and Gwen Shapira demonstrate how to monitor, optimize, and troubleshoot performance of your data pipelines—from producer to consumer, development to production—as they explore some of the common problems that Kafka developers and administrators encounter when they take Apache Kafka from a proof of concept to production usage. Too often, systems are overprovisioned and underutilized and still have trouble meeting reasonable performance agreements.
Topics include:
- What latencies and throughputs you should expect from Kafka
- How to select hardware and size components
- What you should be monitoring
- Design patterns and antipatterns for client applications
- How to go about diagnosing performance bottlenecks
- Which configurations to examine and which ones to avoid
Java Performance Analysis on Linux with Flame GraphsBrendan Gregg
This document discusses using Linux perf_events (perf) profiling tools to analyze Java performance on Linux. It describes how perf can provide complete visibility into Java, JVM, GC and system code but that Java profilers have limitations. It presents the solution of using perf to collect mixed-mode flame graphs that include Java method names and symbols. It also discusses fixing issues with broken Java stacks and missing symbols on x86 architectures in perf profiles.
This document discusses using IP multicast and layer 2 networking techniques in AWS VPC to enable features like VRRP and software load balancers. It describes how to implement pseudo broadcast/multicast in VPC using IP multicast and packet duplication. Examples are provided showing how to configure LVS and Keepalived for VRRP heartbeat and failover. The document concludes that VPC provides more flexibility and options for architecting compared to EC2-Classic.
In this session, we’ll review how previous efforts, including Netfilter, Berkley Packet Filter (BPF), Open vSwitch (OVS), and TC, approached the problem of extensibility. We’ll show you an open source solution available within the Red Hat Enterprise Linux kernel, where extending and merging some of the existing concepts leads to an extensible framework that satisfies the networking needs of datacenter and cloud virtualization.
Everything you ever needed to know about Kafka on Kubernetes but were afraid ...HostedbyConfluent
Kubernetes has become the de-facto standard for running cloud-native applications, and many users also turn to it to run stateful applications such as Apache Kafka. You can use different tools to deploy Kafka on Kubernetes - write your own YAML files, use Helm Charts, or go for one of the available operators. But there is one thing all of these have in common: you still need very good knowledge of Kubernetes to make sure your Kafka cluster works properly in all situations. This talk will cover different Kubernetes features such as resources, affinity, tolerations, pod disruption budgets, topology spread constraints and more. It will explain why they are important for Apache Kafka and how to use them. If you are interested in running Kafka on Kubernetes and do not know all of these, this is a talk for you.
OSMC 2014: Monitoring von Netzwerkkomponenten mit check_nwc_health | Gerhard ...NETWAYS
Until now, monitoring SAP with the existing plugins was limited to querying CCMS metrics. But there is much more in an SAP system that can be monitored. check_sap_health is a new plugin written in Perl. It originated in a project in which the runtimes of BAPI calls were to be measured from different locations. Because the plugin is easily extended with self-written Perl elements, arbitrary functions can be called via RFC, allowing company-specific logic to be implemented.
Talk given at the 2014 workshop of the open-source monitoring community in Berlin.
PowerShell Sicherheit in 6 Schritten produktiv absichernAttila Krick
Which security measures are available to administrators to lock down PowerShell for production use while still delegating administrative tasks to non-administrators (JEA)? How can suspicious and unwanted activity be logged (ScriptBlockLogging)? How should the execution policies be configured, and what does that mean for operations? These and other questions are answered here in order to raise the security of PowerShell. From https://attilakrick.com
SNMP Applied - Sicheres Monitoring mit SNMPGerrit Beine
The talk gives Unix users an insight into how Net-SNMP can be used to monitor and control arbitrary applications.
The focus is on securing the SNMP service with SSL/TLS and authentication.
SNMP4J and Net-SNMP serve as examples.
CMAN Reloaded - the Oracle Connection Manager (CMAN) as a firewall for routing database connections
DOAG Security Day - May 2015
With the Oracle Connection Manager (CMAN), the SQL*Net protocol can be routed between different networks.
CMAN has been part of the Oracle installation for a very long time; originally one of its main uses was routing between different network protocols, for example from an SPX/IPX world to TCP/IP.
Today CMAN is very handy for implementing proxy and firewall functionality between different network segments.
With this concept, the administrative or internal network can be cleanly separated from production, while administrators can still work comfortably and securely.
The talk presents the architecture and its use as a gateway in a corporate network.
The goal of this project was to centrally consolidate and monitor administrator access.
Installation and operation on a Linux system are also presented, and the functionality is shown in a demonstration.
This talk gives an overview and clarifies the terminology around CPU/PSU, and shows how to apply the patches and what to watch out for. OPITZ CONSULTING consultant Katja Werner gave this talk at the DOAG SIG Security on March 3, 2011.
OSMC 2008 | Einsatz von check_multi in einfachen bis hochkomplexen Monitoring...NETWAYS
check_multi is one of the first plugins to make extensive use of the new multi-line features of Nagios 3. Whether it is performance problems in large setups, flexible configurations in heterogeneous networks, delegation of the monitoring configuration, business process views or adaptive monitoring: there are few requirements for which check_multi cannot provide a convincing solution.
Key points of the talk:
- check_multi: motivation and history
- The concept
- Prerequisites
- Configuration and testing
- Integration into Nagios and operation
- Performance data
- State evaluation and business process views
- Adaptive monitoring
More information about check_multi can be found on the project homepage: http://www.my-plugin.de/check_multi
In their talk, Daniel Bühlmann and Roman Andres presented helpful tips for error and problem analysis in SCCM. They showed how the most common problems that can occur with SCCM can be identified quickly, and which log files matter in which situations.
2. Why yet another network plugin?
21.11.2014 www.consol.de 2
There are already plenty of plugins:
check_cisco_cpu.sh
check_cisco_mem.pl
check_cisco_fan_1.sh
check_cisco_fan_2.sh
juniper_check_portstatus
check_snmp_int
check_snmp_mem
check_ifoperstatus
fF5_all.pl
check_status_f5.sh
….
4. Download and build
$ wget http://labs.consol.de/download/shinken-nagios-plugins/check_nwc_health-3.2.0.1.tar.gz
$ tar xzf check_nwc_health-3.2.0.1.tar.gz
$ cd check_nwc_health-3.2.0.1
$ ./configure; make
$ cp plugins-scripts/check_nwc_health $OMD_ROOT/local/lib/nagios/plugins
5. Basics - command-line parameters
At a minimum you must specify:
--hostname <IP or hostname>
--community <SNMP v1/v2 community>
--mode <what should the plugin do?>
Optionally:
--timeout (the default is 15 seconds)
--protocol 1 (the default is 2c)
--port (if not 161)
--domain (the default is udp/ipv4; also tcp/ipv4, udp6, udp/ipv6, …)
6. Basics - command-line parameters
SNMP v3 works too:
--protocol 3
--username (securityName)
--authpassword (the corresponding password)
--authprotocol (md5 or sha)
--privpassword (password for authPriv)
--privprotocol (des, aes, aes128, 3des, 3desde)
--contextengineid (10-64 hex characters)
--contextname (the default is "default context")
7. First checks - Uptime
Uptime - detect spontaneous reboots; serves as the head of a service dependency
$ check_nwc_health
--hostname 10.23.4.2 --community abc
--mode uptime
OK - device is up since 103d 13h 26m 24s | 'uptime'=149126;15:;5:;;
Mode uptime works with every device that speaks SNMP.
It uses snmpEngineTime if available. => 64 bit
That is better than the 32-bit value sysUpTime, which wraps after about 497 days.
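The wrap point follows directly from the counter width: sysUpTime is a 32-bit TimeTicks value counting hundredths of a second. A short illustrative Python sketch (the plugin itself is written in Perl) checks that arithmetic and reproduces the uptime formatting shown above:

```python
# sysUpTime is a 32-bit TimeTicks counter in hundredths of a second.
WRAP_TICKS = 2**32

wrap_days = WRAP_TICKS / 100 / 86400   # ticks -> seconds -> days
print(f"sysUpTime wraps after {wrap_days:.0f} days")  # ~497 days

def format_uptime(seconds: int) -> str:
    """Render seconds as 'Dd Hh Mm Ss', like the plugin output above."""
    d, rest = divmod(seconds, 86400)
    h, rest = divmod(rest, 3600)
    m, s = divmod(rest, 60)
    return f"{d}d {h}h {m}m {s}s"

# The example output "103d 13h 26m 24s" corresponds to 8947584 seconds
# (and to the perfdata value 'uptime'=149126, which is minutes).
print(format_uptime(8947584))
```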
8. First checks - CPU
$ check_nwc_health
--hostname 10.23.4.2 --community abc
--mode cpu-load
OK - cpu Chassis PIX 515E Firewall Appliance usage (5 min avg.) is 15.00% |
'cpu_Chassis PIX 515E Firewall Appliance_usage'=15%;80;90;0;100
OK - cpu usage is 27.00% | 'cpu_usage'=27%;80;90;0;100
OK - tmm cpu usage is 0.00% | 'cpu_tmm_usage'=0%;80;90;0;100
OK - cpu 0 is 5.00%, cpu 1 is 3.00%, cpu 2 is 3.00%, cpu 3 is 1.00% |
'cpu_0_usage'=5%;80;90;0;100 'cpu_1_usage'=3%;80;90;0;100
'cpu_2_usage'=3%;80;90;0;100 'cpu_3_usage'=1%;80;90;0;100
Thresholds either come from the device itself or are given with --warning/--critical. Different thresholds for multiple CPUs are also possible.
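The perfdata fields above follow the usual Nagios convention value;warn;crit;min;max. As an illustration (plain Python, not the plugin's Perl code), evaluating a value against such simple thresholds boils down to:

```python
def eval_threshold(value: float, warning: float, critical: float) -> str:
    """Map a metric against simple warn/crit thresholds, Nagios-style."""
    if value >= critical:
        return "CRITICAL"
    if value >= warning:
        return "WARNING"
    return "OK"

# The CPU example above: usage 27% against warn=80, crit=90
print(eval_threshold(27.0, 80, 90))  # OK
print(eval_threshold(85.0, 80, 90))  # WARNING
print(eval_threshold(95.0, 80, 90))  # CRITICAL
```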
10. First checks - Memory
$ check_nwc_health
--hostname 10.23.4.2 --community abc
--mode memory-usage
OK - mempool Processor usage is 13.50%, mempool I/O usage is 52.39% |
'Processor_usage'=13.50%;80;90;0;100 'I/O_usage'=52.39%;80;90;0;100
OK - memory usage is 53.00% | 'memory_usage'=53%;80;90;0;100
OK - storage 1 (Physical RAM) has 45.30% free space left | 'Physical
RAM_free_pct'=45.30%;10:;5:;0;100
OK - mempool Processor usage is 20.71%, mempool Driver text usage is
0.00%, mempool I/O usage is 42.70% |
'Processor_usage'=20.71%;80;90;0;100 'Driver
text_usage'=0.00%;80;90;0;100 'I/O_usage'=42.70%;80;90;0;100
12. First checks - Hardware
$ check_nwc_health
--hostname 10.23.4.2 --community abc
--mode hardware-health
OK - disk 0 usage is 35.00%, environmental hardware working fine |
'sensor_Motherboard temperature 1'=18.70;;;; 'sensor_+12V bus
voltage'=12.13;;;; 'sensor_CPU core voltage'=1.10;;;; 'sensor_CPU +1.8V bus
voltage'=1.81;;;; 'sensor_Motherboard temperature 2'=20.50;;;; 'sensor_CPU
temperature'=28;;;; 'sensor_System Fan 1 speed'=8280;;;; 'sensor_System
Fan 2 speed'=8400;;;; 'sensor_System Fan 3 speed'=9764.80;;;;
'sensor_System Fan 4 speed'=8460;;;; 'sensor_+2.5V bus voltage'=2.51;;;;
'sensor_+5V bus voltage'=5.07;;;; 'disk_0_usage'=35%;60;60;0;100
As much as possible is queried:
power supplies, fans, temperatures, sensors, filesystems, RAID, …
14. First checks - Hardware
$ check_nwc_health
--hostname 10.23.4.2 --community abc
--mode hardware-health
OK - no alarms
If there are no sensors etc., the alert logs are searched instead. Example: ASA
15. Basic checks for every network component
The network team maintains a database or spreadsheet of their devices, and coshsh generates these default services from it.
16. New command-line parameters
check_nwc_health is based on the "new" common foundation of my plugins, GLPlugin.pm and GLPluginSNMP.pm.
It is built the same way as check_tl_health, check_ups_health and check_sap_health.
1. Extended thresholds:
--warning 90 --critical 95
--warningx cpu_1=83 --criticalx cpu_1=91 --warningx cpu_2=60
cpu_1=22;83;91;0;100 cpu_2=23;60;95;0;100
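Conceptually, --warningx/--criticalx are per-metric overrides on top of the global --warning/--critical values. A minimal sketch of the lookup (illustrative Python, not the actual GLPlugin implementation):

```python
def thresholds_for(label, global_warn, global_crit, warningx, criticalx):
    """Per-label overrides (--warningx/--criticalx) win over --warning/--critical."""
    return (warningx.get(label, global_warn), criticalx.get(label, global_crit))

# Overrides from the command line above
warningx = {"cpu_1": 83, "cpu_2": 60}
criticalx = {"cpu_1": 91}

# Reproduces cpu_1=22;83;91;0;100 and cpu_2=23;60;95;0;100 from the slide
print(thresholds_for("cpu_1", 90, 95, warningx, criticalx))  # (83, 91)
print(thresholds_for("cpu_2", 90, 95, warningx, criticalx))  # (60, 95)
print(thresholds_for("cpu_3", 90, 95, warningx, criticalx))  # (90, 95)
```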
18. New command-line parameters
3. Converting the exit code
Instead of
$USER1$/negate --warning=CRITICAL $USER1$/check_nwc_health …
you write
$USER1$/check_nwc_health --negate warning=critical …
This is relevant for installations that only distinguish between OK and not OK.
It saves a fork.
It makes embedded Perl possible.
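The effect of --negate is a simple remapping of Nagios exit codes. Sketched in Python for illustration (the plugin does this internally in Perl):

```python
# Nagios exit codes
CODES = {"ok": 0, "warning": 1, "critical": 2, "unknown": 3}

def negate(exit_code: int, mapping: dict) -> int:
    """Rewrite an exit code, e.g. --negate warning=critical."""
    names = {v: k for k, v in CODES.items()}
    state = names.get(exit_code, "unknown")
    return CODES[mapping.get(state, state)]

print(negate(1, {"warning": "critical"}))  # 2 (WARNING becomes CRITICAL)
print(negate(0, {"warning": "critical"}))  # 0 (OK is untouched)
```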
19. New command-line parameters
4. Softening the exit code
check_nwc_health … --mode interface-status
CRITICAL - GigabitEthernet0/0/1 is admin down
check_nwc_health … --mode interface-status --mitigation warning
WARNING - GigabitEthernet0/0/1 is admin down
check_nwc_health … --mode ha-status
WARNING - ha was not started
check_nwc_health … --mode ha-status --mitigation ok
OK - ha was not started
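--mitigation simply caps the severity of the result. Illustrated in Python (exit-code semantics: 0=OK, 1=WARNING, 2=CRITICAL; an illustrative sketch, not the plugin's code):

```python
CODES = {"ok": 0, "warning": 1, "critical": 2}

def mitigate(exit_code: int, mitigation: str) -> int:
    """Cap the severity at the given level, e.g. --mitigation warning."""
    return min(exit_code, CODES[mitigation])

print(mitigate(2, "warning"))  # 1: CRITICAL is softened to WARNING
print(mitigate(1, "ok"))       # 0: WARNING is softened to OK
print(mitigate(0, "warning"))  # 0: OK stays OK
```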
20. New command-line parameters
5. Blacklisting
check_nwc_health … --mode hardware-health
CRITICAL - celsius sensor 21718 is nonoperational, celsius sensor 21719
is nonoperational | 'sens_celsius_21594'=47;95;105;;
'sens_celsius_21595'=73;105;115;; 'sens_celsius_21596'=72;105;115;;
'sens_celsius_21597'=71;105;115;; 'sens_celsius_21598'=72;105;115;;….
I want to ignore sensors 21718 and 21719; after all, a Nexus 7000 has another 150 of them…
21. New command-line parameters
check_nwc_health … --mode hardware-health -vv
I am a Cisco NX-OS(tm) n7000,
…
[SENSOR_21718]
entPhysicalIndex: 21718
entSensorMeasuredEntity: undef
entSensorPrecision: 0
entSensorScale: units
entSensorStatus: nonoperational
entSensorType: Celsius
entSensorValue: -128
info: celsius sensor 21718 is nonoperational
…
22. New command-line parameters
check_nwc_health … --mode hardware-health --blacklist SENSOR:21718,21719
OK - environmental hardware working fine | 'sens_celsius_21594'=47;95;105;;
'sens_celsius_21595'=73;105;115;; 'sens_celsius_21596'=72;105;115;;
--blacklist SENSOR:21718,21719
or
--blacklist SENSOR_21718,SENSOR_21719
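Both blacklist spellings denote the same set of items. A small Python sketch that normalizes the two forms (illustrative only; the plugin's actual Perl parser may differ):

```python
def expand_blacklist(spec: str) -> set:
    """Normalize 'TYPE:id1,id2' and 'TYPE_id1,TYPE_id2' to one set of item IDs."""
    if ":" in spec:
        typ, ids = spec.split(":", 1)
        return {f"{typ}_{i}" for i in ids.split(",")}
    return set(spec.split(","))

a = expand_blacklist("SENSOR:21718,21719")
b = expand_blacklist("SENSOR_21718,SENSOR_21719")
print(a == b)  # True: both forms blacklist the same sensors

def is_blacklisted(item_id: str, blacklist: set) -> bool:
    """An item is skipped if its ID is on the blacklist."""
    return item_id in blacklist

print(is_blacklisted("SENSOR_21718", a))  # True
print(is_blacklisted("SENSOR_21594", a))  # False
```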
23. Interface checks
First, you can list which interfaces exist at all:
$ check_nwc_health … --mode list-interfaces
000001 Vlan1
000600 Vlan600
002091 Vlan2091
010101 GigabitEthernet0/1
010102 GigabitEthernet0/2
010103 GigabitEthernet0/3
010104 GigabitEthernet0/4
…
010128 GigabitEthernet0/28
010501 Null0
OK - have fun
24. Interface checks - Status
--mode interface-status checks whether an interface is operationally up:
$ check_nwc_health … --mode interface-status --name GigabitEthernet0/2
OK - GigabitEthernet0/2 is up/up
$ check_nwc_health … --mode interface-status --name GigabitEthernet0/4
CRITICAL - GigabitEthernet0/4 is admin down, GigabitEthernet0/4 is
down/down
$ check_nwc_health … --mode interface-status --name GigabitEthernet0/3
CRITICAL - fault condition is presumed to exist on GigabitEthernet0/3,
GigabitEthernet0/3 is down/up
Output format: interface is OperStatus/AdminStatus
25. Interface checks - Status
Individual interfaces are addressed with --name ifDescr.
If you append the parameter --regexp, the argument of --name is interpreted as a regular expression.
$ check_nwc_health … --mode interface-status
--name 'GigabitEthernet0/(1|2)$' --regexp
OK - GigabitEthernet0/1 is up/up, GigabitEthernet0/2 is up/up
This trick can be applied to all interface modes. Usually, though, it makes the most sense to set up a separate service per interface.
26. Interface checks - Bandwidth
--mode interface-usage checks what percentage of the maximum bandwidth the current traffic amounts to:
$ check_nwc_health … --mode interface-usage
--name GigabitEthernet0/1 --units Gbi
OK - interface GigabitEthernet0/1 usage is in:22.76% (0.21GBi/s)
out:36.78% (0.34GBi/s) |
'GigabitEthernet0/1_usage_in'=22.76%;80;90;0;100
'GigabitEthernet0/1_usage_out'=36.78%;80;90;0;100
'GigabitEthernet0/1_traffic_in'=0.21GBi;0.7451;0.8382;0;0.9313
'GigabitEthernet0/1_traffic_out'=0.34GBi;0.7451;0.8382;0;0.9313
--units can be one of: %, B, KB, MB, GB, Bit, KBi, MBi, GBi
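This also explains the perfdata maximum of 0.9313 in the example: a nominal 1 Gbit/s (ifSpeed) expressed in binary gigabits, which is what the "GBi" unit appears to be. A quick Python check of the arithmetic (the binary-unit interpretation is an assumption based on the numbers above):

```python
def usage_percent(bits_per_sec: float, if_speed_bits: float) -> float:
    """Current traffic as a percentage of the interface's nominal speed."""
    return bits_per_sec / if_speed_bits * 100

def to_gibit(bits_per_sec: float) -> float:
    """Convert bit/s to binary gigabits per second (the 'GBi' in the output)."""
    return bits_per_sec / 2**30

if_speed = 1_000_000_000          # 1 Gbit/s nominal (ifSpeed)
traffic_in = 0.2276 * if_speed    # the 22.76% inbound usage from the example

print(round(usage_percent(traffic_in, if_speed), 2))  # 22.76
print(round(to_gibit(traffic_in), 2))                 # 0.21  (the 0.21GBi/s)
print(round(to_gibit(if_speed), 4))                   # 0.9313 (the perfdata max)
```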
27. Interface checks - Bandwidth
If an interface will not reveal how many Gbit/s it can handle, or if the reported value is simply wrong, you can help out with:
--ifspeed
oder
--ifspeedin
--ifspeedout
The arguments are specified in octets/s.
28. Interface checks - Notes
1. There is an interface name-to-index cache.
When you write --name GigabitEthernet0/1, the index for the ifTable is looked up in the cache. Then exactly this one row is read from the ifTable.
This protects against errors caused by indices being reassigned when components are pulled or plugged in.
The cache is renewed hourly, after a reboot, or when the ifTable changes.
2. The ifInOctets/ifOutOctets counter values are saved after every run.
On the next run the current counter value is fetched, the stored value is subtracted, and the delta is divided by the elapsed time.
3. Where possible, the 64-bit counters are used.
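Point 2 can be sketched as follows (illustrative Python, not the plugin's Perl code; the wrap handling is why the 64-bit counters from point 3 are preferable):

```python
def rate(prev_value: int, prev_time: float,
         cur_value: int, cur_time: float, bits: int = 64) -> float:
    """Counter delta divided by elapsed time, allowing for one counter wrap."""
    delta = cur_value - prev_value
    if delta < 0:               # the counter wrapped around between the runs
        delta += 2**bits
    return delta / (cur_time - prev_time)

# 60 s between two runs; the 32-bit ifInOctets counter wrapped in between:
r = rate(4_294_960_000, 100.0, 20_000, 160.0, bits=32)
print(round(r, 2))  # 454.93 octets/s
```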
29. Interface checks - Errors and discards
Errors are caused by loose contacts, bad CRC checksums, …
$ check_nwc_health … --mode interface-errors --name GigabitEthernet0/1
OK - interface GigabitEthernet0/1 errors in:0.00/s out:0.00/s |
'GigabitEthernet0/1_errors_in'=0;1;10;;
'GigabitEthernet0/1_errors_out'=0;1;10;;
Discards are caused by congestion, firewall rules, unwanted VLAN IDs, unwanted MAC addresses, unknown layer-2 protocols, …
$ check_nwc_health --mode interface-discards --name GigabitEthernet0/1
OK - interface GigabitEthernet0/1 discards in:0.00/s out:0.00/s |
'GigabitEthernet0/1_discards_in'=0;1;10;;
'GigabitEthernet0/1_discards_out'=0;1;10;;
30. Interface checks - Link aggregation
--mode link-aggregation-availability --name <aggregation name>,if2,if3
$ check_nwc_health … --mode link-aggregation-availability
--name uplink_rz1,GigabitEthernet0/1,GigabitEthernet0/2
OK - aggregation uplink_rz1 availability is 100.00% (2 of 2) |
'aggr_uplink_rz1_availability'=100%;;;0;100
$ check_nwc_health … --mode link-aggregation-availability
--name uplink_rz1,GigabitEthernet0/1,GigabitEthernet0/2,GigabitEthernet0/4
WARNING - aggregation uplink_rz1 availability is 66.67% (2 of 3) (down:
GigabitEthernet0/4) | 'aggr_uplink_rz1_availability'=66.67%;;;0;100
or, for short: --name 'uplink_rz1,GigabitEthernet0/(1|2|4)$'
31. Interface checks - Free ports in the switch
--mode interface-availability [--lookback 3600*24*30 or similar; the default is 1800]
32. Load Balancer
A load-balancer pool
= one public address + port, e.g. xyz.de:80
Behind it are several "real" servers to which requests (regardless of protocol) are forwarded.
--mode pool-completeness uses a suitable MIB to check whether these backend servers are available:
Warning if one is missing,
Critical if more than half are missing.
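The decision rule just described, as a small illustrative sketch (Python, not the plugin's Perl code):

```python
def pool_state(total: int, up: int) -> str:
    """WARNING if any pool member is down, CRITICAL if more than half are down."""
    down = total - up
    if down > total / 2:
        return "CRITICAL"
    if down >= 1:
        return "WARNING"
    return "OK"

print(pool_state(2, 2))  # OK       (pool complete)
print(pool_state(3, 2))  # WARNING  (1 of 3 members down)
print(pool_state(2, 0))  # CRITICAL (all members down)
```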
33. Load Balancer - Pool completeness
$ check_nwc_health
--mode pool-completeness
--name EXT-WEB
--report html
CRITICAL - vpo EXT-WEB:80 is enabled (0 connections to 2 real ports)
rpo smuc1120:80 is failed
…<html/>…
34. Checkpoint Firewall-1
$ check_nwc_health … --mode ha-role --role standby
Checks whether the specified cluster role (here: standby) matches the actual role.
35. Checkpoint Firewall-1
$ check_nwc_health … --mode fw-policy --name <policy>
Checks whether the policy installed on the firewall matches the given policy name. (The original slide repeated the text of slide 34 here, evidently a copy-paste slip.)
36. What else is there …
There may at some point be something like this:
--mode freeze-interface-status
This saves the current state of a switch (i.e. the up/down states of its ports) in a small file or in a custom macro.
With --mode compare-interface-status the actual state is then compared against the saved one.
This saves a pile of services if all you want to check is the link status. There will be a button for this in Thruk.
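The freeze/compare idea could be sketched like this (a hypothetical Python illustration; the state-file name and JSON format are made up, since the feature was only announced as a possibility):

```python
import json
import os
import tempfile

# Hypothetical location for the frozen port-state snapshot
STATE_FILE = os.path.join(tempfile.gettempdir(), "switch_if_status.json")

def freeze(status: dict) -> None:
    """Persist the current port up/down map (the 'freeze' step)."""
    with open(STATE_FILE, "w") as f:
        json.dump(status, f)

def compare(current: dict):
    """Diff the live state against the frozen one (the 'compare' step)."""
    with open(STATE_FILE) as f:
        frozen = json.load(f)
    changed = [port for port in frozen if current.get(port) != frozen[port]]
    return ("CRITICAL" if changed else "OK", changed)

freeze({"Gi0/1": "up", "Gi0/2": "up"})
print(compare({"Gi0/1": "up", "Gi0/2": "down"}))  # ('CRITICAL', ['Gi0/2'])
print(compare({"Gi0/1": "up", "Gi0/2": "up"}))    # ('OK', [])
```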