The document discusses virtual machines and JavaScript engines. It provides a brief history of virtual machines from the 1970s to today. It then explains how virtual machines work, including the key components of a parser, intermediate representation, interpreter, garbage collection, and optimization techniques. It discusses different approaches to interpretation like switch statements, direct threading, and inline threading. It also covers compiler optimizations and just-in-time compilation that further improve performance.
This document provides an introduction to the version control system Git. It defines key Git concepts like the working tree, repository, commit, and HEAD. It explains that Git is a distributed version control system where the full history of a project is available once cloned. The document outlines Git's history, with it being created by Linus Torvalds to replace the commercial BitKeeper tool. It then lists and briefly describes important Git commands for local and collaboration repositories, including config, add, commit, log, diff, status, branch, checkout, merge, remote, clone, push, and pull. Lastly, it covers installing Git and generating SSH keys on Windows for accessing Git repositories.
Python lambda functions with filter, map & reduce functions (Arvind Pande)
Lambda functions allow the creation of small anonymous functions and can be passed as arguments to other functions. The map() function applies a lambda function to each element of a list and returns a new list. The filter() function filters a list based on the return value of a lambda function. The reduce() function iteratively applies a lambda function to consecutive pairs in a list and returns a single value. User-defined functions in Python can perform tasks like converting between temperature scales, finding max/min/average of lists, generating Fibonacci series, reversing strings, summing digits in numbers, and calculating powers using recursion.
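A minimal sketch of the three building blocks described above (the element values here are illustrative, not taken from the slides):

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# map: apply a lambda to each element, producing a new list
squares = list(map(lambda x: x * x, nums))        # [1, 4, 9, 16, 25]

# filter: keep only elements for which the lambda returns True
evens = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]

# reduce: fold consecutive pairs down to a single value
total = reduce(lambda a, b: a + b, nums)          # 15

print(squares, evens, total)
```

Note that in Python 3, `reduce` lives in `functools` rather than being a builtin.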
Folding Unfolded - Polyglot FP for Fun and Profit - Haskell and Scala - with ... (Philip Schwarz)
(download for perfect quality) - See how recursive functions and structural induction relate to recursive datatypes.
Follow along as the fold abstraction is introduced and explained.
Watch as folding is used to simplify the definition of recursive functions over recursive datatypes.
Part 1 - through the work of Richard Bird and Graham Hutton.
This version corrects the following issues:
slides 7 and 11: fib(0) is 0, rather than 1
slide 23: was supposed to be followed by 2-3 slides recapitulating the definitions of factorial and fibonacci with and without foldr, plus a translation to Scala
slide 36: concat not invoked in concat example
slides 48 and 49: unwanted 'm' in definition of sum
throughout: a couple of typographical errors
throughout: several aesthetic imperfections (wrong font, wrong font colour)
This document provides a summary of Python programming concepts including conditionals, iteration, functions, strings, and lists. It covers topics such as if/else statements, for/while loops, and functions with parameters and return values. String methods and slicing are explained, and lists are discussed as Python's counterpart to arrays. Example programs for square root, GCD, exponentiation, and searching arrays are provided to illustrate the concepts.
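As an illustration of the kind of example program mentioned (the exact code from the slides is not reproduced here), a GCD via Euclid's algorithm might look like:

```python
def gcd(a, b):
    """Greatest common divisor via Euclid's algorithm:
    repeatedly replace (a, b) with (b, a mod b) until b is 0."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```

The loop terminates because `a % b` is strictly smaller than `b` on every iteration.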
Whether running load tests or migrating historical data, loading data directly into Cassandra can be very useful for bypassing the system's write path.
In this webinar, we will look at how data is stored on disk in sstables, how to generate these structures directly, and how to load this data rapidly into your cluster using sstableloader. We'll also review different use cases for when you should and shouldn't use this method.
These days rule engines are often overlooked, possibly because people think that they are only useful inside heavyweight enterprise software products. However, this is not necessarily true. Simply put, a rule engine is just a piece of software that allows you to separate domain and business-specific constraints from the main application flow. I am the project lead of Drools, the rule engine of Red Hat, and my target was to modernize my project and make it ready to be used in serverless environments. In this talk I will explore and make sense of technologies like GraalVM and Quarkus. I will show, with practical use cases taken from my experience with this migration, what is necessary to change in a code base — making extensive use of reflection, dynamic class loading, and other Java sorceries — to make it compatible with those technologies, and demonstrate how this is allowing us to make Drools part of the cloud and serverless revolution.
Getting The Best Performance With PySpark (Spark Summit)
This document provides an overview of techniques for getting the best performance with PySpark. It discusses RDD reuse through caching and checkpointing. It explains how to avoid issues with groupByKey by using reduceByKey or aggregateByKey instead. Spark SQL and DataFrames are presented as alternatives that can improve performance by avoiding serialization costs for Python users. The document also covers mixing Python and Scala code by exposing Scala functions to be callable from Python.
The document provides an introduction to Git and GitHub. It explains that Git is a distributed version control system used mainly to manage software versions, and that GitHub is a hosting service for remote Git repositories. It also defines terms such as commit, branch, fork, and merge, and demonstrates basic commands such as git add, git commit, and git push/pull for working with local and remote repositories.
InfluxDB IOx Tech Talks: Query Engine Design and the Rust-Based DataFusion in... (InfluxData)
The document discusses updates to InfluxDB IOx, a new columnar time series database. It covers changes and improvements to the API, CLI, query capabilities, and path to open sourcing builds. Key points include moving to gRPC for management, adding PostgreSQL string functions to queries, optimizing functions for scalar values and columns, and monitoring internal systems as the first step to releasing open source builds.
Arrays allow storing multiple values in a single variable. There are indexed arrays which use numeric indices and associative arrays which use named keys. Arrays can be defined using the array() function or by directly assigning values. Arrays can be looped through using foreach loops or functions like sizeof() to get the size. Multidimensional arrays store arrays within other arrays.
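The indexed/associative distinction above can be sketched in Python terms (lists play the role of indexed arrays, dicts the role of associative arrays; names and values here are illustrative):

```python
# Indexed "array": numeric indices starting at 0
fruits = ["apple", "banana", "cherry"]
print(fruits[1])      # banana
print(len(fruits))    # 3  (analogous to sizeof() for getting the size)

# Associative "array": named keys
ages = {"Peter": 35, "Ben": 37}
for name, age in ages.items():  # analogous to a foreach loop
    print(name, age)

# Multidimensional: arrays stored within other arrays
matrix = [[1, 2], [3, 4]]
print(matrix[1][0])   # 3
```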
SOS: Optimizing Shuffle I/O with Brian Cho and Ergin Seyfe (Databricks)
The document summarizes the SOS technique for optimizing shuffle I/O in distributed computing frameworks. SOS merges small intermediate data files from map tasks into larger files to reduce the number and fragmentation of shuffle fetch requests. When deployed at Facebook scale, SOS reduced shuffle I/O by 7.5x and disk service time by 2x, while increasing average I/O size by 2.5x. These I/O optimizations translated to an overall 10% reduction in reserved CPU time for jobs.
Apache Spark on K8S and HDFS Security with Ilan Flonenko (Databricks)
This document discusses running Apache Spark jobs on Kubernetes that access data from secure HDFS clusters. It begins with an introduction to Kubernetes and running big data workloads on it. It then demonstrates running a Spark job on Kubernetes that accesses a Kerberized HDFS cluster. The document delves into details of securing HDFS access and running HDFS itself on Kubernetes. It discusses how data locality was broken when running Spark on Kubernetes originally and how it was fixed to improve performance.
Understanding InfluxDB Basics: Tags, Fields and Measurements (InfluxData)
Is it a table? No, it is much more! Finally understand tags, fields and measurements.
In this session, you will learn how to answer your real-life questions with data stored in InfluxDB. You will see that InfluxDB is more than some tables; it is a window to the world of your data. In particular, the usage of tags, fields and measurements enhances the time series database and helps answer your questions in a convenient and fast way, if you know what to do. Discover tips and tricks to use while implementing InfluxDB.
All topics are addressed in the context of IoT monitoring, predictive maintenance and medical applications.
The Eclipse Transformer is an open source project that provides an engine for transforming Java artifacts like classes, manifests, and deployment descriptors. It replaces Java package references to transform artifacts from JavaEE to Jakarta specifications. The transformer is used by server projects like WildFly and TomEE to generate Jakarta versions of their distributions, and by Open Liberty for OSGi bundles and tests to update package references from JavaEE to Jakarta.
Dask is a Python library for parallel computing that allows users to scale existing Python code to larger datasets and clusters. It provides parallelized versions of NumPy, Pandas, and Scikit-Learn that have the same interfaces as the originals. Dask can be used to parallelize existing Python code with minimal changes, and it supports scaling computations from a single multicore machine to large clusters with thousands of nodes. Dask's task-scheduling approach allows it to be more flexible than other parallel frameworks and to support complex computations and real-time workloads.
Git is a free and open source distributed version control system that allows creating local repositories based on remote repositories. GitHub is a web-based hosting service for Git repositories that allows collaboration on open source projects. Visual Studio Code is an advanced code editor that integrates with Git and GitHub, allowing developers to work with source code and repositories locally or on remote servers.
This document discusses Pinot, Uber's real-time analytics platform. It provides an overview of Pinot's architecture and data ingestion process, describes a case study on modeling trip data in Pinot, and benchmarks Pinot's performance on ingesting large volumes of data and answering queries in real-time.
Query optimizers and people have one thing in common: the better they understand their data, the better they can do their jobs. Optimizing queries is hard if you don't have good estimates for the sizes of the intermediate join and aggregate results. Data profiling is a technique that scans data, looking for patterns within the data such as keys, functional dependencies, and correlated columns. These richer statistics can be used in Apache Calcite's query optimizer, and the projects that use it, such as Apache Hive, Phoenix and Drill. We describe how we built a data profiler as a table function in Apache Calcite, review the recent research and algorithms that made it possible, and show how you can use the profiler to improve the quality of your data.
A talk given by Julian Hyde at DataWorks Summit, San Jose, on June 14th 2017.
The document provides an overview of the Python programming language, covering topics such as Python basics, input/output, data types, variables, operators, control flow, functions, object oriented concepts, exception handling, collections, NumPy, Pandas, machine learning, and data processing. It includes descriptions of key Python concepts and many code examples.
Hybrid Apache Spark Architecture with YARN and Kubernetes (Databricks)
Lyft is on a mission to improve people's lives with the world's best transportation. Starting in 2019, Lyft has been running both batch ETL and ML Spark workloads primarily on Kubernetes with the Apache Spark on k8s operator. However, with the increasing scale of workloads in frequency and resource requirements, we started hitting numerous reliability issues related to IP allocation, container images, IAM role assignment, and the Kubernetes Control Plane.
To continue supporting growing Spark usage at Lyft, the team came up with a hybrid architecture optimized for containerized and non-containerized workloads based on Kubernetes and YARN. In this talk, we will also cover a dynamic runtime controller that helps with per-environment config overrides and easy switchover between resource managers.
Building real-time analytics applications using Pinot: A LinkedIn case study (Kishore Gopalakrishna)
This document discusses using real-time analytics applications with LinkedIn activity data and Apache Pinot. It provides three examples of use cases: 1) article analytics to understand reader demographics, 2) feed ranking to improve relevance, and 3) anomaly detection for monitoring metrics and detecting issues. It compares performance of Pinot to other real-time analytics databases and processing engines. Finally, it outlines an architecture for building analytics applications and dashboards using Pinot to enable real-time insights from large-scale activity data.
Sparklens: Understanding the Scalability Limits of Spark Applications with R... (Databricks)
One of the common requests we receive from customers (at Qubole) is debugging slow Spark applications. Usually this process is done with trial and error, which takes time and requires running clusters beyond normal usage (read: wasted resources). Moreover, it doesn't tell us where to look for further improvements. We at Qubole are looking into making this process more self-serve. Towards this goal we have built a tool (OSS, https://github.com/qubole/sparklens) based on the Spark event listener framework.
From a single run of the application, Sparklens provides insights about the scalability limits of a given Spark application. In this talk we will cover what Sparklens does and the theory behind it. We will talk about how the structure of a Spark application puts important constraints on its scalability, how we can find these structural constraints, and how to use them as a guide in solving performance and scalability problems of Spark applications.
This talk will help the audience answer the following questions about their Spark applications: 1) Will the application run faster with more executors? 2) How will cluster utilization change as the number of executors changes? 3) What is the absolute minimum time the application will take even given infinite executors? 4) What is the expected wall clock time for the application once the most important structural limits are fixed? Sparklens makes the ROI of an additional executor extremely obvious for a given application and needs just a single run of the application to determine how it will behave with different executor counts. Specifically, it will help managers take the correct side of the tradeoff between spending developer time optimising applications vs spending money on compute bills.
These are the slides from our webinar on SAP BOPF, which we held on 27 January 2017.
SAP BOPF (Business Object Processing Framework) consists of a set of services and functionalities that serve to standardize and modularize ABAP developments.
Alongside a theoretical overview and selected live demos, we also shared experiences from two projects.
Check your ABAP SQL queries for SAP HANA readiness (Cadaxo GmbH)
"Prüfen Sie Ihre SELECTs auf HANA-Tauglichkeit!" Johann Fößleitner, Geschäftsführer von Cadaxo GmbH, wird Ihnen in diesem einstündigen Tutorial
zeigen, wie Sie mit dem SQL Cockpit mehr aus Ihrem SAP HANA herausholen.
ABAP 7.02 new features - new string functions (Cadaxo GmbH)
The slide deck provides an overview of the new string functions available since ABAP 7.02. All important functions are explained with examples.
ITSS Training | SAP ABAP Foundations course (Charles Aragão)
COURSE OBJECTIVE:
This course aims to enable programmers and systems analysts to deliver solutions in the SAP space using the ABAP language. By covering the ABAP Foundations material in these modules, they will have acquired the basic knowledge needed for SAP applications and an understanding of their fundamentals.
Single Consulting is looking for ABAP IV analyst programmers with at least one year of experience on SAP projects for its offices in Madrid and Barcelona. Single Consulting is a leading consultancy with a presence in Spain and other countries, offering permanent contracts and attractive salaries. Candidates must have ABAP IV programming skills and an intermediate level of English.
1) The document describes how to build a simple two-screen WebDynpro application in ABAP that accepts user input on the first screen and displays it on the second screen.
2) Key steps include creating a WebDynpro component and views, designing the screens with labels, input fields and buttons, mapping attributes and nodes between views, and embedding the views in a window with navigation between them.
3) Testing involves creating a WebDynpro application from the component, saving without changes, and executing to view the input and output screens.
This document provides an overview of ABAP Query and demonstrates how to create an ABAP Query report. It describes ABAP Query as a tool for generating reports without coding by joining tables and selecting fields. It then provides a case example of a purchase order report and walks through the three steps to create the query: 1) defining a user group, 2) creating an infoset by joining relevant tables, and 3) using the infoset to build the query and arrange the fields and layout of the report. Tips are also provided, such as modifying existing queries by accessing the underlying program.
ABAP is the programming language used to develop applications in SAP. An ABAP programmer creates new programs and modifies existing ones to adapt the SAP system to each customer's specific requirements. SAP AG is the German company that makes the SAP system and is considered the largest European producer of business software.
This is the handout of the talk "ABAP Test & Troubleshooting" given by Martin Steinberg at SAP Inside Track Munich 2013.
The handout covers all the topics discussed, and more besides.
The document provides instructions for creating views for tables in SAP. It describes the steps to open transaction SE11, select a table, generate the view, create a transaction for the view in SE93, and configure parameters to access the view.
The document lists icons used in the SAP system, including their names and brief descriptions. The icons shown relate to ABAP routines, analyses, activities, documents, reports, and other system elements.
This document describes commands and functions used in ABAP/4 for developing programs in SAP R/3. The main functions described include commands for reading and writing data, flow control, report creation, and user interaction. Usage examples are provided to clarify the purpose of each command.
Abap 7 02 new features - new string functions — Cadaxo GmbH
The document describes new string functions introduced in ABAP 7.02, including cmax/cmin for character extreme values, condense for condensing strings, concat_lines_of for linking lines from a table, and over a dozen other functions for tasks like escaping characters, inserting/replacing/matching substrings, and comparing string distances. It also provides examples of how each function works.
The document contains a set of multiple choice questions related to various SAP concepts and technologies. Specifically, it tests knowledge on topics like READ with BINARY SEARCH, F1 help functionality, R/3 configuration, background job output, Dynpro flow logic, Idoc process code, GUI components, RFC call types, transaction codes, ABAP Dictionary usage, internal table types, subroutine interfaces, function module parameters, and client-independent objects. It contains 50 questions in total to assess an individual's familiarity with fundamental SAP technical concepts.
This document provides instructions for creating function modules in SAP in three steps: 1) create a function group, specifying its name, short text, package, and request; 2) open transaction SE37 and create the function module, specifying the module, function group, and short text; 3) open the editing screen of the newly created function module.
This document outlines coding standards for developing ABAP programs. It covers standards for functional and technical specifications, the development lifecycle, types of ABAP programs, general coding practices, error handling, naming conventions, program structure, readability, security, performance, internal tables, SAPscript, user exits, logical databases, and documentation requirements. Adherence to these standards helps ensure consistent, readable, and maintainable code.
Manikanta Sai Kumar Karri is an SAP ABAP Associate Consultant with over 3 years of experience programming in SAP ABAP. He has worked on various SAP modules for clients in industries like healthcare, manufacturing, and retail. His responsibilities have included creating reports, forms, and remote-enabled functions for use with SAP UI5 and OData services. He is proficient in technologies like HTML5, CSS, JSON, JavaScript, and SAP Fiori.
Events allow methods in one class to trigger methods in another class without instantiating the other class. To set up an event handler:
1. Create an event in a class.
2. Create a triggering method that raises the event.
3. Create an event handler method for the event in the same or another class.
4. Register the event handler method.
The triggering method calls the event, which executes the event handler method. Examples demonstrate setting up event handlers within the same class and across classes.
These are the slides from our webinar of 29 March 2019 on Modern ABAP.
Topics covered included:
ABAP language versions
SAP Cloud Platform ABAP Environment
ABAP Development Tools
ABAP language elements and SQL expressions
Obsolete language elements
abapGit
CDS Views
ABAP RESTful Programming Model
Code Checks in SAP
Clean Code
Refactoring
These are the slides from our webinar of 25 January 2019 on the release-specific new features available with ABAP 7.51.
Topics covered included:
Enumerations
ABAP Open SQL
ABAP SQL / CDS – built-in functions
ABAP CDS
ABAP Development Tools
ABAP Test Cockpit Checks
ABAP Channels, ABAP Daemons
Fast serialization for RFC
These are the slides from our webinar of 27 March 2020 on the release-specific new features available with ABAP 7.53/7.54.
Topics covered included:
ABAP Dictionary
Internal tables
Assignments
ABAP SQL
AMDP
ABAP CDS
ABAP RESTful Programming Model
Exception handling
ABAP Units
ABAP Development Tools
These are the slides from our webinar on SAP ABAP CDS Views, held on 24 June 2017.
Topics: CDS views, built-in functions, parameters in CDS views, associations, annotations, NetWeaver Gateway integration, authorizations, table functions
This webinar covered ABAP & performance. In detail, we looked at the following topics:
- Skill
- Detect
- Optimize
Skill: Which skills are needed? How do I acquire them? Which platforms and networks are worthwhile?
Detect: Which tools are available in an SAP system?
Optimize: Which performance optimizations are possible and sensible?
Selected Oracle PL/SQL packages from versions 11g and 12c are briefly described and illustrated with examples. Part 2 covers:
DBMS_QOPATCH
DBMS_SPACE
DBMS_SERVICE
DBMS_FLASHBACK_ARCHIVE
In addition, a few basic questions are answered, for example: how do I find specific PL/SQL packages quickly and easily, and which PL/SQL packages are obsolete?
Selected Oracle PL/SQL packages from versions 11g and 12c are briefly described and explained with examples. Part 1 covers the following packages:
DBMS_XDB_CONFIG
DBMS_COMPRESSION
DBMS_REDEFINITION
DBMS_SQL_MONITOR
DBMS_PARALLEL_EXECUTE
What is behind the "Clean Code" hype? Most people think first of source code, but that is only part of the picture. Rules, concepts, and guidelines are part of it too, and they decide whether the result is stable, maintainable programs that contribute to a company's success or to its ruin.
We held a webinar on "Clean Code" on 12 August 2016; these are the accompanying slides.
Lecture on Semantic Web technologies, HTWG Konstanz, winter semester 2009/2010.
Sessions #7+8
Following the RDF(S) frameworks and an exercise unit, the last two lectures dealt with SPARQL, the SPARQL Protocol And RDF Query Language.
The following topics were covered:
* SPARQL Query Language
o Writing simple queries – how query patterns work
o Handling literals and blank nodes
o Grouping patterns, optional patterns, alternative patterns, combinations
o Filters, comparison operators, functions
o Modifiers for sorting, removing duplicate solutions, and slicing result sets
o Query types in SPARQL – SELECT, CONSTRUCT, ASK, and DESCRIBE
o RDF datasets – default and named graphs
* SPARQL Query Result XML Format for SELECT and ASK queries
* SPARQL protocol
* Upcoming SPARQL features
o Aggregate functions – COUNT, SUM, AVG, etc.
o Subqueries
o Negation
o Project expressions
o SPARQL Update – modifying RDF graphs via the query language
o Service descriptions
o Overview of other possible features
These are the slides from our webinar of 29 May 2020 on SAP/ABAP and Microsoft.
Topics covered included:
- ABAP2XLSX
- ABAP SDK for Azure
- Microsoft Graph API (from ABAP!)
Design patterns are proven solution templates for recurring design problems in software development. These patterns can also be used in ABAP.
In this webinar we gave an introduction to design patterns and demonstrated their use in ABAP with three practical examples.
These are the slides from our webinar on SAP Gateway, held on 24 November 2017.
Topics: REST / OData overview, SAP Gateway overview, service generation, CDS -> OData, annotations
Refactoring is changing the internal structure of code without changing its external behavior in order to improve various attributes of the software. It involves techniques like renaming variables and methods for clarity, extracting duplicate code into functions, and restructuring classes and modules to make the design and logic easier to understand. The key benefits of refactoring include improving code quality, maintainability and extensibility which allows adding new functionality more quickly.
The document describes SQL Cockpit, a tool for querying, modifying, and analyzing data in SAP systems. It provides an ABAP Open SQL editor with features like code completion, logging of changes, and auditability confirmed by Ernst & Young. SQL Cockpit is available in Standard and Premium versions, and it supports SQL features through all recent SAP releases. It has advantages over built-in SAP tools like more query flexibility and additional authorizations.
9. ABAP 7.50
Miscellaneous new features
Global temporary tables (GTT)
Open SQL
CDS Views
Expressions and functions
ABAP Units
RFC and ABAP Channels
Outlook on 7.51
Upcoming dates
Agenda
10. Miscellaneous new features ABAP 7.50
Miscellaneous new features
◦ As of 7.50, Unicode only. Non-Unicode code pages are no longer supported
◦ New data type INT8 (8-byte integers)
-9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
◦ ABAP exception classes
New interface IF_T100_DYN_MSG
New MESSAGE addition for RAISE EXCEPTION/THROW
◦ CDS views as replacement objects (Vertreterobjekte) -> hands off!
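The new MESSAGE addition lets an exception carry a T100 message at the point where it is raised. A minimal sketch; the exception class zcx_demo (implementing IF_T100_DYN_MSG) and message class ZDEMO_MSG are hypothetical:

```
" Raise an exception with a dynamically assigned T100 message
" (new MESSAGE addition as of ABAP 7.50).
RAISE EXCEPTION TYPE zcx_demo
  MESSAGE ID 'ZDEMO_MSG' TYPE 'E' NUMBER '001'
  WITH 'Order' '4711'.
```

The caller can then read the message via the IF_T100_DYN_MSG interface of the caught exception object.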
12. Global temporary tables (GTT) ABAP 7.50
Global temporary tables (GTT)
◦ GTTs are special transparent tables intended for the temporary storage of data
◦ They are only available within one database LUW and must be cleared at its end
◦ GTTs can be used (almost) exactly like normal transparent tables.
15. Open SQL ABAP 7.50
Open SQL in release 7.50
◦ Unions
◦ Subquery as data source for INSERT
◦ Host expressions, SQL expressions & SQL functions
◦ CDS views
16. Open SQL - Union ABAP 7.50
UNION [ALL|DISTINCT]
◦ UNION combines the result sets of two SELECTs
◦ Both SELECTs have their own FROM, WHERE, …
◦ Further details:
http://help.sap.com/abapdocu_750/de/index.htm?file=abapunion_clause.htm
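A minimal sketch of UNION in Open SQL, assuming two hypothetical tables zorders_2015 and zorders_2016 with compatible column lists:

```
" Combine the rows of two SELECTs; UNION removes duplicates,
" UNION ALL would keep them. INTO follows the last query.
SELECT order_id, amount
  FROM zorders_2015
UNION
SELECT order_id, amount
  FROM zorders_2016
  INTO TABLE @DATA(lt_orders).
```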
17. Open SQL – INSERT from SELECT ABAP 7.50
INSERT from SELECT
◦ With INSERT, a subquery can now be specified after FROM as the data source
◦ The result rows of the subquery are inserted directly into the target table
INSERT zdb_table FROM
  ( SELECT FROM but020 AS b
      FIELDS b~partner,
             COUNT( * ) AS cnt_addr
      GROUP BY b~partner ).
18. Open SQL – Host Expressions ABAP 7.50
New: host expressions
◦ Wherever host variables can be used, host expressions can now be used as well
◦ Host expressions are ABAP expressions that can be used inside Open SQL
Table expressions
String expressions
Functional methods
…
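A host expression is an ABAP expression embedded in the SQL statement with @( … ). A minimal sketch, assuming a hypothetical table zflights, an internal table lt_dates, and a variable lv_car:

```
" Host expressions in the WHERE clause: a table expression and a
" built-in string function are evaluated in ABAP before the query runs.
SELECT carrid, connid
  FROM zflights
  WHERE fldate = @( lt_dates[ 1 ] )       " table expression
    AND carrid = @( to_upper( lv_car ) )  " functional call
  INTO TABLE @DATA(lt_result).
```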
19. Open SQL – SQL Expressions ABAP 7.50
New: SQL expressions
◦ New places where they can be used
Left-hand side of WHERE, HAVING, ON and CASE
As the operand of CAST
◦ New SQL functions
CONCAT, LPAD, LENGTH, LTRIM, REPLACE, RIGHT, RTRIM, SUBSTRING, ROUND, COALESCE (extended)
23. Open SQL – SQL Expressions & Functions ABAP 7.50
ABAP Open SQL: REPLACE
◦ Replaces occurrences of arg2 in string arg1 with the content of arg3
◦ REPLACE( arg1, arg2, arg3 )
24. Open SQL – SQL Expressions ABAP 7.50
ABAP Open SQL: SUBSTRING
◦ Substring of arg starting at position pos with length len
◦ SUBSTRING( arg, pos, len )
25. Open SQL – SQL Expressions ABAP 7.50
ABAP Open SQL: ROUND
◦ Value of arg rounded at position pos
◦ ROUND( arg, pos )
26. Open SQL – SQL Expressions ABAP 7.50
ABAP Open SQL: COALESCE
◦ Returns the first of the arguments arg1, arg2, … (max. 255) that is not NULL
◦ COALESCE( arg1, arg2, arg3, … )
27. Open SQL – SQL Expressions ABAP 7.50
ABAP Open SQL: LPAD
◦ String of length len with the right-aligned content of arg, padded on the left with src
◦ LPAD( arg, len, src )
28. Open SQL – SQL Expressions ABAP 7.50
ABAP Open SQL: LTRIM
◦ Removes all trailing blanks from arg and all leading characters equal to char
◦ LTRIM( arg, char )
29. Open SQL – SQL Expressions ABAP 7.50
ABAP Open SQL: RIGHT
◦ String of length len containing the rightmost characters of arg
◦ RIGHT( arg, len )
30. Open SQL – SQL Expressions ABAP 7.50
ABAP Open SQL: RTRIM
◦ Removes all trailing blanks from arg and all trailing characters equal to char
◦ RTRIM( arg, char )
31. Open SQL – SQL Expressions ABAP 7.50
ABAP Open SQL functions as of 7.51
◦ DIVISION
◦ LOWER
◦ UPPER
◦ LEFT
◦ CONCAT_WITH_SPACE
◦ INSTR
◦ RPAD
◦ DATS_IS_VALID
◦ DATS_DAYS_BETWEEN
◦ DATS_ADD_DAYS
◦ DATS_ADD_MONTHS
32. Open SQL – Accessing CDS Views ABAP 7.50
Accessing CDS views with Open SQL
◦ CDS entities can now be used alongside database tables and classic views
◦ Accessing a CDS view through its CDS database view is obsolete from now on
33. Open SQL – CDS views with input parameters ABAP 7.50
CDS views with input parameters
◦ Are now supported by all databases
◦ The check with class CL_ABAP_DBFEATURES is no longer necessary
37. ABAP CDS - Table Functions ABAP 7.50
CDS table functions
◦ A CDS table function is a new kind of CDS entity
◦ Such a function is implemented as Native SQL in an AMDP function implementation
For that reason this feature is currently only available on SAP HANA DB
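The DDL side of a CDS table function can be sketched as follows; the function name DEMO_TAB_FUNC, the class ZCL_DEMO_AMDP, and its method GET_CARRIERS are hypothetical:

```
// CDS DDL source: declares the returned columns and names the AMDP
// method (written in SQLScript) that computes them.
define table function DEMO_TAB_FUNC
returns
{
  clnt     : abap.clnt;
  carrid   : s_carr_id;
  carrname : s_carrname;
}
implemented by method zcl_demo_amdp=>get_carriers;
```

The implementing method is declared in the ABAP class with FOR TABLE FUNCTION and its body with BY DATABASE FUNCTION FOR HDB LANGUAGE SQLSCRIPT.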
38. ABAP CDS - Access Control ABAP 7.50
CDS access control
◦ Using PFCG roles and a DCL (Data Control Language) definition, the result set of a CDS view (a CDS entity!) can be restricted
DCL:
@MappingRole: true
define role demo_cds_role_lit_pfcg {
  grant select on demo_cds_auth_lit_pfcg
    where (carrid) =
            aspect pfcg_auth (s_carrid, carrid, actvt='03')
      and currcode = 'EUR';
}
DDL:
@AbapCatalog.sqlViewName: 'DEMO_CDS_LITPFCG'
@AccessControl.authorizationCheck: #CHECK
define view demo_cds_auth_lit_pfcg
  as select from scarr
{
  key carrid,
      carrname,
      currcode,
      url
};
45. ABAP CDS – Expressions ABAP 7.50
ABAP CDS: LTRIM
◦ Removes all trailing blanks from arg and all leading characters equal to char
◦ LTRIM( arg, char )
47. ABAP CDS – Expressions & Functions ABAP 7.50
ABAP CDS: RIGHT
◦ String of length len containing the rightmost characters of arg
◦ RIGHT( arg, len )
48. ABAP CDS – Expressions ABAP 7.50
ABAP CDS: RPAD
◦ String of length len with the left-aligned content of arg, padded on the right with src
◦ RPAD( arg, len, src )
49. ABAP CDS – Expressions ABAP 7.50
ABAP CDS: RTRIM
◦ Removes all trailing blanks from arg and all trailing characters equal to char
◦ RTRIM( arg, char )
53. IS INSTANCE OF / CASE TYPE OF ABAP 7.50
IS INSTANCE OF
◦ Check whether a down cast is possible before assigning
◦ Does <reference_var> hold a reference to <Class>?
◦ Does <reference_var> hold a reference to <Interface>?
CASE TYPE OF
◦ Multiple checks, like IS INSTANCE OF
◦ Direct assignment to a variable
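Both constructs can be sketched as follows; the classes zcl_dog and zcl_cat and their methods bark/meow are hypothetical, lo_animal holds a reference of a common superclass type:

```
" IS INSTANCE OF: test whether the down cast would succeed.
IF lo_animal IS INSTANCE OF zcl_dog.
  DATA(lo_dog) = CAST zcl_dog( lo_animal ).
  lo_dog->bark( ).
ENDIF.

" CASE TYPE OF: test several types at once; INTO assigns the
" already-cast reference directly to the target variable.
CASE TYPE OF lo_animal.
  WHEN TYPE zcl_dog INTO DATA(lo_dog2).
    lo_dog2->bark( ).
  WHEN TYPE zcl_cat INTO DATA(lo_cat).
    lo_cat->meow( ).
  WHEN OTHERS.
    " reference of some other type
ENDCASE.
```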
71. ABAP 7.50 Outlook on 7.51 – Enumerations
Enumerations
◦ An enumerated type is a data type for variables with a finite set of values.
◦ All permitted values are defined with a unique name in the declaration.
TYPES:
  BEGIN OF ENUM developer,
    domi,  " default, value 0
    foess, " value 1
  END OF ENUM developer.
DATA lv_developer TYPE developer.
lv_developer = domi. " allowed
lv_developer = 9.    " syntax / runtime error
72. ABAP 7.50 Outlook on 7.51 – Open SQL / CDS Views
Open SQL / CDS views
◦ New join type: CROSS JOIN
◦ Many new SQL and aggregate functions
◦ DELETE finally allows ORDER BY, OFFSET and UP TO
73. ABAP 7.50 Outlook on 7.51 - CDS view display in SE80
CDS views – display/maintenance
◦ Display has been integrated into SE80
◦ Maintenance is still only possible with Eclipse
78. If you want to stay in touch …
https://twitter.com/domibiglsap
https://www.linkedin.com/in/dominik-bigl-9b98b68b
https://www.xing.com/profile/dominik_bigl
dominik.bigl@cadaxo.com
See you again!
Thank you for participating!
https://twitter.com/foessleitnerj
https://www.linkedin.com/in/johann-fößleitner-a9851b2a
https://www.xing.com/profile/johann_foessleitner
johann.foessleitner@cadaxo.com
- Enumerations – common pattern
- CROSS JOIN – combines the rows of the left and right side into a result set containing every combination of rows.
New SQL and aggregate functions (lower, upper, …)
With Extended Result, the result of an SQL read operation is placed into an object of class CL_OSQL_EXTENDED_RESULT.
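The CROSS JOIN mentioned in the notes can be sketched against the standard flight-model tables scarr and spfli (requires ABAP 7.51):

```
" Every carrier paired with every connection: the result set
" contains one row per combination of rows from both tables.
SELECT a~carrid, a~carrname, b~connid
  FROM scarr AS a
  CROSS JOIN spfli AS b
  INTO TABLE @DATA(lt_cross).
```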