Unity Connect - Getting SQL Spinning with SharePoint - Best Practices for the... by Knut Relbe-Moe [MVP, MCT]
Performance problems in SharePoint are most commonly caused by a poorly configured or ineffectively optimized SQL Server back end. More often than not, the SQL Server is not installed following best-practice guidelines. In this fast-paced session, Chief Technical Architect and international speaker Knut Relbe-Moe will walk you through his top 13 tips for ensuring your SQL back end is perfectly configured and performing well for SharePoint. If you want to ensure that your SharePoint environment performs well, whether it runs in Azure or on premises, this is the session to join.
Implementing SharePoint on Azure, Lessons Learnt from a Real World Project by K. Mohamed Faizal
This session covers Infrastructure as a Service (IaaS) and the features that can be leveraged for hosting a SharePoint 2013 farm. Learn how to set up the environment, things to consider when you configure VPN, Storage, and Cloud Services, and how to set up load-balanced endpoints. The speaker will share his real-world experience and tips and tricks.
202201 AWS Black Belt Online Seminar Apache Spark Performance Tuning for AWS ... by Amazon Web Services Japan
Latest AWS Black Belt Online Seminar content: https://aws.amazon.com/jp/aws-jp-introduction/#new
List of content from past online seminars: https://aws.amazon.com/jp/aws-jp-introduction/aws-jp-webinar-service-cut/
SharePoint 2010 Boost Your Farm Performance! by Brian Culver
Is your farm struggling to serve your organization? How long is it taking between page requests? Where is the bottleneck in your farm? Is your SQL Server tuned properly? Worried about upgrading due to poor performance? We will look at various tools for analyzing and measuring performance of your farm. We will also look at simple SharePoint and IIS configuration options to instantly improve performance.
For our eReader development project, we had to find a persistent store for our JSON documents. After initial scanning we zeroed in on two products: DynamoDB and MongoDB. These slides take a deeper dive into the selection of our JSON data store.
Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with Amazon EC2 instances. In this technical session, we conduct a detailed analysis of the types of Amazon EBS block storage including General Purpose (SSD), Provisioned IOPS (SSD) as well as the new Throughput Optimized HDD and Cold HDD. Along the way, we will share Amazon EBS best practices for performance, management and security.
Did you know that 80% to 90% of the user's page-load time comes from components outside the firewall? Optimizing performance on the front end (i.e., the client side) can enhance the user experience by reducing the response times of your web pages and making them load and render much faster.
AWS re:Invent 2016: Case Study: Librato's Experience Running Cassandra Using ... by Amazon Web Services
At Librato, a SolarWinds company, we run hundreds of Cassandra instances across multiple rings and use Cassandra as our primary data store. In the past year, we embarked on a process to upgrade our fleet of Cassandra Amazon EC2 instances from instance store to instances using Amazon EBS and attached elastic network interfaces (ENIs). We find that running Cassandra on EBS gives us the flexibility to choose the best instances for the best performance of our workload while saving significant infrastructure costs. In this session, we discuss how Librato operates Cassandra on EBS. Topics include how we chose the right instance for our workload, how we use detached EBS volumes and ENI mobility to reduce MTTR, how we mix EBS storage types for the best cost/performance tradeoff, how we debug performance issues, and how we continuously monitor Cassandra to get the most from AWS. We also look at performance tradeoffs made in the implementation of the storage engines of large data systems like Cassandra.
Watch the session recording: https://kr-resources.awscloud.com/data-databases-and-analytics/%EC%A7%80%EA%B8%88-%EB%8B%B9%EC%9E%A5-dynamo-db-%ED%99%9C%EC%9A%A9%ED%95%98%EA%B8%B0-%EA%B0%95%EB%AF%BC%EC%84%9D-aws-database-modernization-day-%EC%98%A8%EB%9D%BC%EC%9D%B8-2
DynamoDB is AWS's NoSQL database service that guarantees fast response times under heavy traffic. This session walks through creating a DynamoDB table, designing the table, inserting, deleting, and updating data, and configuring performance-related settings. After this session, attendees will understand DynamoDB and be able to configure and use it themselves.
MongoDB is one of the fastest growing NoSQL workloads on AWS due to its simplicity and scalability, and recent product additions by the AWS team have only improved those traits. In this session, we’ll talk about various AWS offerings and how they fit together with MongoDB -- including CloudFormation, Elastic MapReduce, Route53, Elastic Beanstalk, Elastic Load Balancing, and more -- and how they can be leveraged to enhance your MongoDB experience.
Learn from our hands-on experience working with Firebase. It is great for building quick POCs (prototypes) of apps that need real-time updates, and for building cross-platform web and mobile products quickly and easily.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We’ll cover how each service might help support your application, how much each service costs, and how to get started.
Day 2 - Amazon RDS - Letting AWS Run Your Low-Admin, High-Performance Database by Amazon Web Services
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and re-sizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. In this webinar we review the different types of Amazon RDS available and how to move your existing databases to Amazon RDS with minimum disruption.
Reasons to attend:
- Learn how Amazon RDS can reduce the overhead of running high performance mission critical databases.
- Learn how to migrate your existing database workloads into Amazon RDS running on the AWS Cloud.
- Learn how to scale up and scale down your Amazon RDS instance and save money with reserved instances.
Streaming Data Analytics with Amazon Redshift and Kinesis Firehose by Amazon Web Services
Evolving your analytics from batch processing to real-time processing can have a major business impact, but ingesting streaming data into your data warehouse requires building complex streaming data pipelines. Amazon Kinesis Firehose solves this problem by making it easy to transform and load streaming data into Amazon Redshift so that you can use existing analytics and business intelligence tools to extract information in near real time and respond promptly. In this session, we will dive deep into using Amazon Kinesis Firehose to load streaming data into Amazon Redshift reliably, scalably, and cost-effectively.
by Darin Briskman, Technical Evangelist, AWS
DynamoDB queries enable consistent low latency at any workload, using the partition key, sort key, local secondary indexes, and global secondary indexes. Amazon Elasticsearch Service enables flexible search, including ranking and aggregation. Adding Elasticsearch to DynamoDB opens new capabilities to combine the power of query and search. Learn how Amazon.com uses this combination and how you can use it, too. Level: 200
Amazon Aurora: Let's Talk About Performance by Danilo Poccia
Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It delivers up to five times the throughput of standard MySQL running on the same hardware. Amazon Aurora is designed to be compatible with MySQL 5.6, so that existing MySQL applications and tools can run without requiring modification.
Amazon Aurora is a fully managed relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It is purpose-built for the cloud using a new architectural model and distributed systems techniques to provide far higher performance, availability and durability than previously possible using conventional monolithic database architectures. Amazon Aurora packs a lot of innovations in the engine and storage layers. In this session, we will do a deep-dive into some of the key innovations behind Amazon Aurora, new improvements to Aurora's performance, availability and cost-effectiveness and discuss best practices and optimal configurations.
It’s been an exciting year for Amazon Aurora, the MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. In this deep dive session, we’ll discuss best practices and explore new features, including high availability options and new integrations with AWS services. We’ll also discuss the recently announced Aurora with PostgreSQL compatibility.
Overview and Best Practices for Amazon Elastic Block Store - September 2016 W... by Amazon Web Services
Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with Amazon EC2 instances. In this technical session, we present the differences between the types of Amazon EBS block storage so that you can best understand which storage type to use for your different application deployments. We discuss how to maximize Amazon EBS performance with a special eye towards low-latency and high-throughput applications. We discuss Amazon EBS encryption and share best practices for Amazon EBS snapshot management. Throughout, we share tips for success.
Learning Objectives:
• Learn about the latest updates to EBS
• Learn about best practices for using EBS
Who Should Attend:
• Application admins, DBAs, database and big data architects
Creating Real-Time Data Mashups with Node.JS and Adobe CQ by iCiDIGITAL
Adobe CQ is great at managing authored content, but it is less adept at handling real-time data. The time it takes to ingest the data and replicate it is too long; by then, the data will have already changed.
Node.JS has a broad and diverse developer community. If you want to build something with Node, chances are someone else has already done the same thing.
Node.js and the MEAN Stack: Building Full-Stack Web Applications by lubnayasminsebl
Welcome to Node.js and the MEAN Stack: Building Full-Stack Web Applications.
Nowadays, picking the best web app development technology is difficult. With so many programming languages, frameworks, and technologies available, it can be challenging for business owners and entrepreneurs to choose the best development tool. Maintaining project efficiency has become crucial in the era of web app development: the longer a project takes, the more it costs your firm. Node.js is a ground-breaking technology for web development with distinctive characteristics. Developers regard it as one of the most successful cross-platform JavaScript environments for building reliable and powerful REST APIs, mobile applications, and web applications.
What is Node.js?
Node.js is a standalone runtime environment, not just a library or framework. It is built on Chrome's V8, a JavaScript engine capable of running application code independently of the operating system or type of browser. Because of this independence, Node.js is regarded as a standalone application on any machine.
Frameworks for web applications
Any Node.js web application will require a web application framework as one of its most crucial components. Although the HTTP module allows you to construct your own, it is strongly advised that you build on the shoulders of those who came before you and use their work. If you haven't already picked a favorite, there are several to choose from. According to a report by Eran Hammer, Express has a higher developer share than all other frameworks combined; Hammer's own Hapi.js came second, with many other frameworks holding smaller market shares. Express is therefore not only the most widely used framework, it also gives you the best chance of being able to pick up most new codebases quickly.
Security
Although web security has always been important, recent breaches and problems have made it absolutely essential. Learn the OWASP Top 10, a periodically updated list of the most significant web security issues; you can use it to audit your application for potential security gaps. Learn how to give your web application secure authentication: Passport is popular middleware for authenticating users with many types of schemes. Learn effective password-handling techniques in Node.js: Bcrypt is a hashing method, and also the name of a popular npm package that implements it. And even if your own code is secure, there is always a chance that one of your dependencies is not.
The front end
Although writing Node.js code for the back end of a website makes up a big portion of the job description for a Node.js web developer, you will probably also need to work on the front end occasionally to build the user interface.
Case Study: Sprinklr Uses Amazon EBS to Maximize Its NoSQL Deployment - DAT33... by Amazon Web Services
Sprinklr delivers a complete social media management system for the enterprise. It also helps the world’s largest brands do marketing, advertising, care, sales, research, and commerce on Facebook, Twitter, LinkedIn, and 21 other channels on a global level. This is all done on a single integrated platform. In this session, you learn about Sprinklr’s journey to the cloud and discover how to optimize your NoSQL database on AWS for cost, efficiency, and scale. We also dive deep into best practices and architectural considerations for designing and managing NoSQL databases, such as Apache Cassandra, MongoDB, Apache CouchDB, and Aerospike, on Amazon EC2 and Amazon EBS. We share best practices for instance and volume selection, provide performance tuning hints, and describe cost optimization techniques throughout.
Continuous Integration and Deployment Best Practices on AWS (ARC307) | AWS re... by Amazon Web Services
With AWS, companies now have the ability to develop and run their applications with speed and flexibility like never before. Working with an infrastructure that can be 100 percent API driven enables businesses to use lean methodologies and realize these benefits. This in turn leads to greater success for those who make use of these practices. In this session, we talk about some key concepts and design patterns for continuous deployment and continuous integration, two elements of lean development of applications and infrastructures.
NoSQL and Spatial Database Capabilities using PostgreSQL by EDB
PostgreSQL is an object-relational database system. NoSQL databases, on the other hand, are non-relational and document-oriented. Learn how PostgreSQL gives you the flexibility to combine NoSQL workloads with relational query power by offering JSON data types. With PostgreSQL, new capabilities can be developed and plugged into the database as required.
Attend this webinar to learn:
- The new features and capabilities in PostgreSQL for new workloads, requiring greater flexibility in the data model
- NoSQL with JSON, Hstore and its performance and features for enterprises
- Spatial SQL - advanced spatial features available through the PostGIS extension
This presentation was given by David Maier (@magicable, @munichnosql) in May 2014. The code can be found at https://github.com/dmaier-couchbase/cbl-android-tasklist
Postgres has been proven to process document database workloads faster than MongoDB in benchmark testing. But there are multiple benefits to using Postgres over a specialized solution for such applications.
Application developers are finding new ways to combine schema-less data with traditional relational tables and deliver innovative applications faster while meeting evolving DevOps strategies and goals. Providing a single, ACID-compliant enterprise-ready database that can manage both structured and unstructured data supports the development process and reduces overall complexity.
Learn what Postgres can help you achieve. This covers:
-- Using JSON/JSONB and HSTORE to combine schema-less data with enterprise information
-- Building on existing skill sets while using web 2.0 development technologies
-- Reducing the complexity that comes with using multiple heterogeneous platforms and disparate application demands
-- New performance benchmark results showing Postgres outperforms MongoDB
Similar to Node.CQ - Creating Real-time Data Mashups with Node.JS and Adobe CQ (20)
GridMate - End-to-end testing is a critical piece to ensure quality and avoid... by ThomasParaiso2
End-to-end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... by DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe by Paige Cruz
Monitoring and observability aren't traditionally taught in software curricula, so many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever happens to be part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still treat monitoring and observability as the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share the foundational concepts to build on.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating the uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools -- libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are the slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Node.CQ - Creating Real-time Data Mashups with Node.JS and Adobe CQ
1. NODE.CQ
CREATING REAL-TIME DATA MASHUPS WITH
NODE.JS AND ADOBE CQ
Joshua Miller
NASCAR Digital Media
jsmiller@nascar.com
@jo5h | www.jo5h.com
2. PROBLEM SCENARIO
We want to mix authored content from Adobe CQ with Real-Time Race Data from our Timing and Scoring system.
Combining Slowly Changing Dimensions such as Driver
Team Name, Vehicle Manufacturer Name, Track Information,
etc. with Constantly Changing Metrics such as Last Lap
Speed, Driver Position, Lap Number, etc.
Adobe CQ is great at managing the authored content, but is
less adept at handling the real-time data. The time it takes to
ingest the data and replicate it is too long – the data will have
already changed.
7. WORKING WITH NODE.JS
SHOULD BE LIKE WORKING
WITH BUILDING BLOCKS
Node.JS has a broad and diverse developer community. If
you want to build something with Node, chances are
someone else has already done the same thing.
Before you start building from scratch, look at the packages
that already exist on NPM (http://npmjs.org)
Using NPM (Node Package Manager), you can install
packages that perform the tasks you need to accomplish.
8. ELEMENTS OF A NODE.JS
APPLICATION
Web Server / Framework
• Express
• Flatiron
Logging Service
• Morgan
• Winston
Configuration
• Nconf
• config
Promise Library
• Q
• promise
Built-In Services
• HTTP / HTTPS
• FileSystem
• Crypto
• Events
• Stream
• Etc.
9. NODE.JS GOTCHAS
Some things about Node.JS are a bit different from working with
other technologies.
• NODE.JS IS ASYNCHRONOUS
Getting familiar with JavaScript Promises and Deferred
Libraries, or understanding and developing very clear callback
chains, is a must for working with Node.JS effectively
• NODE.JS IS A PACKAGE-DRIVEN TECHNOLOGY
Getting comfortable working with a Package Manager (NPM)
is a must for working with Node.JS effectively
• YOUR APPLICATION IS YOUR SERVER
There is no Apache or nginx or IIS to work with. You build
your server, or use a framework like Express or Flatiron
• NODE.JS IS AS FAULT-TOLERANT AS YOU MAKE IT
Building solid functionality with lots of error handling and
good logging is important
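The asynchronous gotcha above can be sketched in a few lines. The `fetchFeed()` function here is a made-up stand-in that simulates an async data fetch; the deck suggests promise libraries like Q, though modern Node ships Promises natively.

```javascript
// Node is asynchronous: I/O calls return before the work completes,
// so results flow through callbacks or Promises.
function fetchFeed(name) {
  // A hypothetical async fetch, simulated with setImmediate.
  return new Promise((resolve) => {
    setImmediate(() => resolve({ feed: name, items: 3 }));
  });
}

// A promise chain keeps the flow readable and gives one place to
// handle errors, instead of nesting callbacks.
fetchFeed('schedule')
  .then((data) => console.log('got', data.items, 'items from', data.feed))
  .catch((err) => console.error('feed failed:', err));
```

The `.catch()` at the end of the chain is the "fault-tolerance" point the last bullet warns about: without it, a failed fetch dies silently or crashes the process.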
10. WTF DID YOU JUST BUILD?
Node.JS is Package-Driven and NPM provides you with a
wealth of resources for working with Node, but be careful
what packages you choose. If you see a package that has
25,000 downloads and a vibrant development
history on GitHub then you’re probably safe.
If you’re the only one that has downloaded this
package this calendar year and the last commit
was made in 2010, you might want to keep
looking for a more popular package.
Just because you have bricks in your bin doesn’t mean you
have to use them all together.
12. USING ADOBE CQ’S REST API
WITH NODE.JS
Adobe CQ is built on top of Apache Sling – a Web Framework
that provides a REST API to CRX - the Java Content
Repository that sits beneath Adobe CQ
You can directly query CRX using simple REST commands
and have the output formatted as JSON
JSON data can be directly consumed by the Node.JS
application independent of your website’s front-end
13. MAKING RESTFUL REQUESTS
TO ADOBE CQ CONTENT
It’s simple enough to extract content using the RESTful API
in Adobe CQ. Take for example Race Data stored at the path:
/content/nascar/lookups/events/sprint-cup-series/2014/
You can easily view this data using the following URL:
http://10.196.135.9:4503/content/nascar/lookups/events/sprint-cup-series/2014.infinity.json
Note the “infinity” selector in the URL – this can be replaced
with a number indicating the node-depth from which you
wish to return data
http://10.196.135.9:4503/content/nascar/lookups/events/sprint-cup-series/2014.2.json
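The URL pattern above is mechanical enough to capture in a tiny helper. The function name below is invented; the host and content path are the ones from the slide.

```javascript
// Build a Sling-style JSON URL for a CQ content path. The depth
// selector is either the string 'infinity' or a number giving the
// node-depth to return.
function buildCqJsonUrl(host, contentPath, depth) {
  return `http://${host}${contentPath}.${depth}.json`;
}

const host = '10.196.135.9:4503';
const racePath = '/content/nascar/lookups/events/sprint-cup-series/2014';

console.log(buildCqJsonUrl(host, racePath, 'infinity'));
console.log(buildCqJsonUrl(host, racePath, 2));
```

A Node application would then fetch that URL with the built-in `http` module (or any HTTP client package) and `JSON.parse` the body.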
14. USING THE NODE-DEPTH SELECTOR WITH ADOBE CQ
USING THE INFINITY NODE-DEPTH SELECTOR: Returns either all child nodes at the given path, or an array of the available numeric node-depth selectors if the structure is deemed too large.
USING A NUMERIC NODE-DEPTH SELECTOR: Returns data from the root path, and all child nodes at the node-depth indicated by the selector.
16. NOW THAT WE HAVE THIS DATA, HOW DO WE USE IT WITH NODE.JS?
17. HOW DO WE USE THIS DATA?
By itself, the data that comes from CQ is only as useful as
the underlying data structure. The power of this data comes
from our ability to use Node.JS to quickly extract it and
then mash it up with other data sources.
Using Node.JS, not only can we query data from CRX, we can
query data from a number of sources and combine our CRX
data with other feeds to create new data sources.
This enables us to mix authored content from CRX with Real-Time data from our Timing and Scoring feed to create a new, single feed that can be used in our Mobile product.
18. HOW IS THE DATA JOINED
INTO A NEW DATA SOURCE?
Creating the feed mashup is not out-of-the-box functionality
for Node.JS – we have to custom-code a method by which to
join feeds together
Node.JS enables us to build an application using the building
blocks we discussed earlier, but also allows us to create new,
custom blocks with which to build
Without too much effort, we have created a package that
allows feeds to be joined together using the same Primary
and Foreign Key relationships you would find in a typical
RDBMS product.
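The custom package itself is not shown in the deck, but the core of the primary/foreign key idea can be sketched in a few lines. The function name and the feed shapes below are invented for illustration.

```javascript
// Toy version of the join idea: index one feed by its key, then merge
// matching records from the other, the way a primary/foreign key join
// works in an RDBMS.
function joinFeeds(left, right, leftKey, rightKey) {
  // Index the right-hand feed by its join key for O(1) lookups.
  const index = new Map(right.map((r) => [r[rightKey], r]));
  return left
    .filter((l) => index.has(l[leftKey]))
    .map((l) => Object.assign({}, l, index.get(l[leftKey])));
}

// Slowly changing dimensions (authored in CQ)...
const drivers = [{ carNumber: 24, team: 'Example Motorsports' }];
// ...joined with constantly changing metrics (Timing and Scoring).
const laps = [{ car: 24, lastLapSpeed: 187.5, position: 1 }];

const merged = joinFeeds(drivers, laps, 'carNumber', 'car');
console.log(merged[0].team, merged[0].position);
```

This is an inner join; the real package would also need the variable replacement, formatting functions, and dependency conditions described on the next slide.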
19. HOW IS THE DATA JOINED
INTO ONE FEED?
• Using simple JSON syntax, we can define a new feed that
is comprised of one or more feeds.
• Each feed has a “join” condition that allows the feed to
be joined to the collection based on a specific JSON node
value.
• Special syntax allows for variable replacement from URL
parameters
• Special syntax allows for values from the new feed to be
used throughout the feed
• Includes custom functions such as Date and String
Formatting
• Includes dependency conditions where field values are
calculated and/or displayed based on the value of other
fields
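The deck does not show the actual configuration syntax, so the object below is purely illustrative: every property name, the `${...}` placeholder style, and the function names are invented to mirror the bullets above, not the real package's DSL.

```javascript
// Hypothetical feed-mashup definition covering the features listed:
// multiple feeds, a join condition, URL-parameter variables, value
// substitution, formatting functions, and dependency conditions.
const feedDefinition = {
  feeds: [
    { name: 'drivers', url: 'http://cq-host:4503/content/nascar/drivers.infinity.json' },
    {
      name: 'liveLaps',
      url: 'http://ts-host/laps?series=${seriesId}', // variable replaced from URL parameters
      join: 'liveLaps.car == drivers.carNumber',     // join on a JSON node value
    },
  ],
  fields: {
    // custom function: format a value from the joined feed
    lapTime: { fn: 'formatDate', args: ['${liveLaps.lapTimestamp}', 'HH:mm:ss'] },
    // dependency condition: only emitted when another field matches
    leaderBadge: { when: '${liveLaps.position} == 1', value: 'LEADER' },
  },
};

console.log(feedDefinition.feeds.length, 'feeds defined');
```

The appeal of this approach, as slide 25 notes, is that adding a new feed means editing JSON configuration rather than compiling code.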
21. GETTING LIVE DATA FROM
THE RACETRACK
During a race, NASCAR vehicles are monitored via
transponders placed in the cars. As the cars cross over fiber
optic sensors in the track, the data is transmitted to a piece
of software called TimeGear.
TimeGear tracks the speed of each car, its position relative to
the other race cars and feeds this data into the Timing and
Scoring system.
Timing and Scoring provides a feed that is consumed by
Apex, our Mobile Cacher application, which streams the
JSON feed out to Akamai where the data is consumed by
internal applications and third-parties such as Yahoo!, Fox
Sports and ESPN.
22. INTEGRATING OUR REAL-TIME DATA FEED
Using the same syntax and the same data providers, we can
query our Real-Time race data directly from Timing and Scoring,
or directly from Akamai to reduce the load on the T&S systems.
Without modifying any code, provided a relationship can be
found in the data, we can now merge any JSON data source into
our feed.
This allows us to merge our Real-Time race statistics right into
our authored CQ content, providing a richer and more in-depth
feed for our Mobile application without the delay of first
ingesting the race data into Adobe CQ.
Now that our data is available in a new format, we can provide a
single stream of data to the NASCAR Mobile application,
reducing the number of calls that need to be made from a
mobile device.
23. EXTENDING OUR DATASET
WITH THIRD-PARTY SERVICES
Given the flexibility of this data aggregator, we can now start
to layer new and powerful data from disparate sources on
top of our existing data without having to store that data in
CQ.
For example, we can pull Real-Time Weather Conditions into
our data based on the zip code of the track. We could pull
track records to note if a driver’s lap speed was the fastest in
the track’s history. We could even pull in Sponsor
information based on the current Race Leader.
We accomplish all of this without the need to add to the
storage requirements of our application, or write custom
aggregators for external content.
25. COULDN’T WE HAVE DONE
THIS USING CQ?
Of course, we could have accomplished the same end-result
using only Adobe CQ and some custom Java code. There are
some real benefits to using Node.JS in this scenario though:
• There is no code to compile and new feeds only require
JSON configuration
• Node.JS is an extremely high-throughput platform. We can
serve hundreds of simultaneous connections per second.
• We reduce the load on our CQ environment by offloading
tasks to an application with fewer hardware requirements
• We don’t use a large, complex web framework to deliver
small streams of data with no user interface requirements
26. IS NODE.JS REALLY THAT
MUCH MORE PERFORMANT?
We have used Node.JS for a number of new tasks here at
NASCAR Digital Media lately and have found it to be
incredibly performant. We recently launched a new RaaS
implementation with Gigya and use Node.JS to authenticate
users.
During our load tests, we found that we could serve, in 10
minutes of sustained load, all of the traffic that we expected
the Node service to experience over the entire race season.
In fact, we have found that our load tests typically max-out
not because of Node’s inability to serve more requests, but
because MySQL starts to queue requests, or Gigya begins to
throttle requests-per-second.