Puppet is an open source tool used to automate server configuration management. It ensures servers are configured and packages installed as defined. Puppet manages configuration through resources like packages, files, users and more. It can install packages, configure files and folders, manage services, create users/groups, and run commands. Puppet applies configurations idempotently so they can be run multiple times without changing the server unless the configuration changes.
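The idempotency described above can be sketched in a few lines. This is illustrative Python, not Puppet's own DSL: a hypothetical `ensure_file` "resource" touches the system only when the actual state differs from the desired state, so repeated runs are safe.

```python
import os
import tempfile

def ensure_file(path, content):
    """Apply a file resource idempotently: write only if state differs.
    Returns True when a change was made, False when already in sync."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False  # desired state already holds; do nothing
    with open(path, "w") as f:
        f.write(content)
    return True  # state was corrected

# First run changes the system; the second run is a no-op.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "motd")
    first = ensure_file(target, "welcome\n")
    second = ensure_file(target, "welcome\n")
```

This is the property that makes repeated Puppet runs harmless: each run converges the system toward the declared state instead of blindly re-executing steps.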
This document outlines an agenda for integrating the Apache FTP Server into a Java project using WebObjects for authentication. It discusses setting up the FTP Server, creating User and UserManager classes to handle authentication through WebObjects, and using FTPLets to customize FTP commands and behavior. The session aims to provide feedback and example code for basic integration without a reusable framework.
The JavaMail API allows Java applications to send and receive email. It includes core classes like Session, Message, Transport, and Store. A Session represents a mail session with an email server. Messages can be composed and sent using Transport, and email can be retrieved from mailboxes using Store and Folder classes. The API supports authentication, sending attachments, and receiving notifications about mail events.
- JavaMail API provides a way to send and receive emails in Java through core classes like Session, Message, InternetAddress, and Transport. It supports SMTP, POP3, and IMAP protocols.
- To send an email, create a Message, set the from/to addresses, and use Transport to send it through an SMTP server. To receive emails, use a POP3 or IMAP store to access messages in a mailbox folder.
- Attachments, HTML content, authentication, and searching emails are also supported through the API classes. Other providers extend JavaMail for features like NNTP and S/MIME encryption.
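For comparison only (this is Python's stdlib `email`/`smtplib`, not JavaMail), the same compose-and-send flow looks like this; the addresses and SMTP host are placeholders, and the actual send is left commented out so the sketch stays self-contained.

```python
import smtplib
from email.message import EmailMessage

# Compose the message (the counterpart of JavaMail's Message/InternetAddress).
msg = EmailMessage()
msg["From"] = "alice@example.com"   # placeholder addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("Plain-text body")

# Sending would go through an SMTP transport, much like Transport.send():
# with smtplib.SMTP("smtp.example.com") as s:   # hypothetical server
#     s.send_message(msg)
```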
Or: how to build a complete system from scratch.
It begins with the requirement for an installation process
that is easy to repeat, documented, and auditable.
Twig is a template engine for PHP that allows developers to create powerful and flexible templates. It provides features like template inheritance, blocks, variables, filters, tags, and loops to integrate dynamic content. Templates can extend a base template, override blocks, and include other templates. Variables passed to templates can be accessed and filtered. Developers can also extend Twig with custom filters and functions.
Fun with containers: Use Ansible to build Docker images
abadger1999
Docker allows deploying applications in isolated containers. Ansible is useful for building Docker images because it provides consistency and portability for configuring containers in the same way as configuring hosts. Ansible roles from Galaxy can be used to try applications before deploying them by building Docker images configured with Ansible plays that include the roles.
Michael Peacock gave a presentation on Symfony components and related libraries. The presentation [1] introduced several Symfony components including routing, event dispatching, forms, validation, security, and HTTP foundation, [2] discussed related libraries like Pimple and Twig, and [3] covered how to install the components using Composer.
Ansible is a configuration management and orchestration tool that is agentless, uses SSH for connections, and is designed to be easy to use. It allows users to define infrastructure by writing playbooks that describe configurations, deployments, and orchestrations. Playbooks can install software, copy files, execute commands, and more on remote servers. Ansible playbooks provide an idempotent and predictable way to configure and manage infrastructure and applications.
PuppetCamp SEA 1 - Version Control with Puppet
Walter Heck
Choon Ming Goh, System Administrator at OnApp Malaysia, gave a presentation on how OnApp implements version control. Since they manage quite a few repositories, the whole setup is puppetised, which makes for a tidy approach to version control.
Raphaël Pinson's talk on "Configuration surgery with Augeas" at PuppetCamp Geneva '12. Video at http://youtu.be/H0MJaIv4bgk
Learn more: www.puppetlabs.com
This document summarizes Phinx, a PHP library for managing database migrations. It allows creating migrations to modify the database schema, rolling back changes if needed. Migrations are stored in version control and can be shared. Phinx provides a table API to create migrations in a database-agnostic way and supports rolling back changes. It works with MySQL, PostgreSQL, SQLite and more.
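Phinx is a PHP library, but the up/down migration model it implements is language-neutral. A minimal analogue using Python's stdlib `sqlite3` (the `posts` table is a made-up example): each migration pairs an "up" that applies a schema change with a "down" that rolls it back.

```python
import sqlite3

def migrate_up(conn):
    # "up": apply the schema change
    conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")

def migrate_down(conn):
    # "down": roll the change back
    conn.execute("DROP TABLE posts")

conn = sqlite3.connect(":memory:")
migrate_up(conn)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
migrate_down(conn)
tables_after = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
```

Because each change ships with its inverse, migrations can be stored in version control and replayed or reverted on any copy of the database.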
This document summarizes the Aura Project for PHP 5.4. The Aura Project provides independent library packages that can be used individually or together to build applications. It includes components like autoloading, routing, database abstraction, and more. Each component is built as a separate package that can be included as needed for applications.
PuppetCamp SEA 1 - Puppet Deployment at OnApp
Walter Heck
Wai Keen Woon, CTO CDN Division OnApp Malaysia, gave an interesting overview of what the Puppet architecture at OnApp looks like. The CDN division at OnApp is a large provider of CDN services, and as such makes a very interesting candidate for a case study.
The document discusses Python virtual environments (virtualenv) and the pip package manager. It introduces virtualenv and pip, explains why they are useful tools for isolating Python environments and managing packages, and provides exercises for creating virtual environments, using pip to install/uninstall packages, creating your own pip packages, and sharing packages on PyPI. The goal is to help users understand and learn to use these tools in 90 minutes.
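On Python 3 the same isolation that virtualenv provides is available from the stdlib `venv` module; a minimal programmatic sketch (the directory name is arbitrary, and `with_pip=False` is used only to keep the example fast):

```python
import os
import tempfile
import venv

# Create an isolated environment; a real environment would normally
# include pip (with_pip=True) so packages can be installed into it.
with tempfile.TemporaryDirectory() as d:
    env_dir = os.path.join(d, "env")
    venv.create(env_dir, with_pip=False)
    # Every venv carries a pyvenv.cfg describing its base interpreter.
    created = os.path.exists(os.path.join(env_dir, "pyvenv.cfg"))
```

From the shell, the equivalent is `python -m venv env` followed by activating the environment and using `pip install` inside it.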
Walter Heck, founder of OlinData, presented a step-by-step guide on how to set up a proper puppet repository, complete with the brand new PuppetDB, exported resources and usage of open source modules.
Puppet can be used effectively and at scale without running as root. In many organizations, particularly large ones, different teams are responsible for different pieces of the infrastructure. In my case, I am on a team responsible for installation, configuration, upkeep, and monitoring of an application, but we are denied root access. Despite this, we have a rich Puppet infrastructure that saves us time and reduces configuration drift. I will present our model for success in this kind of limited environment, including recipes for using Puppet as non-root and some encouraging words and ideas for those who want to implement Puppet but whose organization isn't ready yet.
Spencer Krum
Systems Admin, UTI Worldwide
Spencer is a Linux and application administrator with UTI Worldwide, a shipping and logistics firm. He lives and works in Portland. He has been using Linux and Puppet for years. Spencer is co-authoring (with William Van Hevelingen and Ben Kero) the second edition of Pro Puppet by James Turnbull and Jeff McCune, which should be available from Apress in alpha/beta E-Book in time for Puppet Conf '13. He enjoys hacking, tennis, StarCraft, and Hawaiian food.
The document discusses how immutable infrastructure can be achieved through Puppet by treating systems configuration as code. Puppet allows defining systems in code and enforcing that state through automatic idempotent runs, compensating for inherent system mutability. This brings predictability to infrastructure and allows higher level operations by establishing a foundation of reliable, known states.
This document provides step-by-step instructions for building a blog using Django, including setting up the project structure, models, views, templates, and deploying to Heroku. Key steps include initializing the project with Django, creating models and admin interfaces, writing views, setting up the template directory, and configuring settings, URLs, and static files. The document concludes by walking through deploying the blog to Heroku.
How to Develop Puppet Modules: From Source to the Forge With Zero Clicks
Carlos Sanchez
Puppet Modules are a great way to reuse code, share your development with other people, and take advantage of the hundreds of modules already available in the community. But how do you create, test, and publish them as easily as possible? Now that infrastructure is defined as code, we need to use development best practices to build, test, deploy, and use Puppet modules themselves. Three steps for a fully automated process:
* Continuous Integration of Puppet Modules
* Automatic release and upload to the Puppet Forge
* Deploy to Puppet master
Tornado is a non-blocking, lightweight web server and framework. There have been many introductory talks about it, and it's time to look deeper: not just at what Tornado does, but at how it does it and what we can learn from it when designing our own concurrent systems.
In this talk I go over the following topics. I cover them in two parts: first I present how to use a certain feature or approach in our applications; then, I dig into Tornado's source code to see how it really works.
- Getting Started: quickly get a simple Tornado application up and running. We'll keep digging into, changing and poking this Application for most of the talk.
- An Application Listens: what an Application is, how does Tornado start it and how does it process its requests.
- Application and IOLoop: we'll look at how the IOLoop receives the connections from the users and passes them on to the Applications.
- Scheduled Tasks: we'll see how to schedule tasks and how the IOLoop will run them.
- Generators: we'll learn to use generators to handle the responses of our asynchronous calls, and how they work with the IOLoop.
Advanced:
- Websockets: how to use them and how they work.
- IOStream: how do Tornado's non-blocking sockets work.
- Database: how to use non-blocking sockets to connect to databases.
- Process: how Tornado works with multiple processes.
I presented this talk at Europython 2012 and PyGrunn 2012.
Code examples: https://bitbucket.org/grimborg/tornado-in-depth/src/tip/examples/
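The "Scheduled Tasks" idea above can be sketched with the stdlib `asyncio` event loop (used here instead of Tornado so the example needs no third-party install; Tornado's IOLoop has since converged with asyncio, and `loop.call_later` plays the role of `IOLoop.call_later`):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    results = []
    # Schedule a callback to fire ~10 ms from now; the event loop
    # runs it once the delay has elapsed, without blocking anything.
    loop.call_later(0.01, results.append, "timer fired")
    await asyncio.sleep(0.05)  # yield to the loop so the callback can run
    return results

results = asyncio.run(main())
```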
This document provides instructions for setting up a continuous integration environment for Nuxeo modules using Ubuntu Server 7.10 virtualized with VMware. It describes installing Ubuntu, configuring it with tools like MySQL, Maven, and Subversion. It then covers setting up Continuum and Archiva for continuous integration and artifact deployment. The document demonstrates configuring a sample Nuxeo module project in Eclipse to build with Maven and deploy artifacts to the Archiva repository for continuous integration with each code change.
This document provides instructions for deploying Spark in high availability (HA) mode using Ansible on OpenStack. It begins with an overview of using the OpenStack client and Ansible for infrastructure automation. It then demonstrates hands-on use of the OpenStack client to create and manage resources. The document introduces Ansible concepts like playbooks, modules, roles and Galaxy before explaining how to deploy Spark in HA mode using Ansible roles and providing a link to example code.
Python from zero to hero (Twitter Explorer)
Yuriy Senko
This document outlines steps to build a Twitter explorer application using Python and Flask. It begins with setting up the virtual environment and cloning the GitHub repository. It then walks through steps to add basic functionality like configuration, templates, a database with SQLAlchemy ORM, user authentication with Flask plugins, and finally integrating the Twitter API. Each step includes changes to files, dependencies in requirements.txt, and commands to test and view progress. The goal is to create a full-stack web application to explore tweets from the Twitter API.
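Flask is built on WSGI, so the request/response cycle underneath such an application can be sketched with only the stdlib `wsgiref` (the `/tweets` route and response body here are made up for illustration; Flask itself adds routing, templating, and much more on top of this shape):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # A toy router: one route, plain-text responses.
    if environ["PATH_INFO"] == "/tweets":
        body = b"latest tweets would go here"
        status = "200 OK"
    else:
        body = b"not found"
        status = "404 Not Found"
    start_response(status, [("Content-Type", "text/plain")])
    return [body]

# Exercise the app without starting a real server.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/tweets"
captured = {}
def start_response(status, headers):
    captured["status"] = status
body = b"".join(app(environ, start_response))
```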
This document provides tips and tricks for writing Ansible roles, including:
1. It recommends using 'ansible-galaxy init' to automatically generate the directory structure for a role, rather than manually creating files and folders.
2. It describes how to specify role dependencies in the metadata file and how tags and conditionals also apply to dependent roles.
3. It discusses best practices for creating cross-platform roles by using conditionals, includes, and set_facts to target tasks and variables depending on operating system.
The Coolest Symfony Components you’ve never heard of - DrupalCon 2017
Ryan Weaver
What is Symfony *really*? It's a collection of *35* independent libraries, and
Drupal uses less than *half* of them! That means that there's a *ton* of other
good stuff that you can bring into your project to solve common problems... as
long as you know how, and what those components do!
In this talk, we'll have some fun: taking a tour of the Symfony components, how
to install them (into Drupal, or anywhere) and how to use some of my *favorite*,
lesser-known components. By the end, you'll have a better appreciation of what
Symfony *really* is, and some new tools to use immediately.
Create Development and Production Environments with Vagrant
Brian Hogan
Need a Linux box to test a WordPress site, or a Windows VM to test a web site on IE 10? Creating a virtual machine to test or deploy your software doesn’t have to be a manual process. Bring one up in seconds with Vagrant, software for creating and managing virtual machines. With Vagrant, you can bring up a new virtual machine with the software you need, share directories, copy files, and configure networking using a friendly DSL. You can even use shell scripts or more powerful provisioning tools to set up your software and install your apps. Whether you need a Windows machine for testing an app, or a full-blown production environment for your apps, Vagrant has you covered.
In this talk you’ll learn to script the creation of multiple local virtual machines. Then you’ll use the same strategy to provision production servers in the cloud.
I work with Vagrant, Terraform, Docker, and other provisioning systems daily and am excited to show others how to bring this into their own workflows.
DevOps Hackathon: Session 3 - Test Driven Infrastructure
Antons Kranga
We will assume that you are already familiar with the Vagrant and Chef fundamentals described in sessions 1 and 2. Today we will go through Test Kitchen and ServerSpec. While chef-dk is not yet stable, this is the most reliable path.
Practical activities can be found here:
https://github.com/akranga/devops-hackathon-3
This document provides an overview of Catalyst, an elegant Perl MVC framework. It discusses how to install and set up a Catalyst application, including generating the initial application structure. It then explains the MVC pattern and describes the various components - the Model, View and Controller. The document dives into details about dispatching requests to controller actions in Catalyst and describes the context object ($c) that is passed to actions and provides access to request/response objects, configuration, logging and more.
Catalyst is a web framework for developing dynamic websites using Perl. It follows the model-view-controller (MVC) pattern, separating the application into modules for the model (data), view (presentation), and controller (logic). When an HTTP request is made to a Catalyst application, the controller module processes the request, interacts with the model to retrieve or manipulate data, and forwards the data to the view module to generate a response, such as HTML, that is returned to the browser. Catalyst provides features like database access, form handling, and templating to help build full-featured web applications in Perl.
This document provides instructions for building a Rails API and discusses related topics. It recommends using Rails 3.1 and Ruby 1.9.2 to build the API. It provides steps to generate a MessagesController to handle API requests for messages. It discusses testing the API with curl and RSpec. It also covers building a namespaced and versioned API, authentication, caching responses, hosting on DotCloud, and running background jobs with Delayed Job.
- The document provides step-by-step instructions for installing Bugzilla, including downloading and installing prerequisite software like Bazaar, MySQL, ActiveState Perl, and Apache.
- Key steps include extracting and saving Bugzilla files, creating a MySQL 'bugs' database and user, configuring Apache to run CGI scripts and point to the Bugzilla directory, and running checksetup.pl to configure Bugzilla.
- The instructions conclude by noting the administrator account can now log into Bugzilla and configure the maintainer and URL settings.
Adopt DevOps philosophy on your Symfony projects (Symfony Live 2011)
Fabrice Bernhard
This is the presentation given at the Symfony Live 2011 conference. It is an introduction to the new agile movement spreading in the technical operations community called DevOps and how to adopt it on web development projects, in particular Symfony projects.
Plan of the slides:
- Configuration Management
- Development VM
- Scripted deployment
- Continuous deployment
Tools presented in the slides:
- Puppet
- Vagrant
- Fabric
- Jenkins / Hudson
Pharo is a modern and powerful Smalltalk environment that is open source, supports many platforms, and actively adds new features. Version 5.0 includes performance improvements from the new Spur VM, as well as new debugging tools and a unified FFI. An example web application built with Teapot and PunQLite demonstrates how easily full-stack web applications can now be developed in Pharo.
This document discusses strategies for making Ruby on Rails applications highly available. It covers common architectures using a single server, and moving to distributed systems. Key topics include application modularity, useful gems for asynchronous processing, database replication, session management, application deployment, configuration management, and load balancing. The conclusion emphasizes that porting Rails apps to a highly available environment requires thinking about architecture and distribution early, but is not prohibitively difficult if approached methodically.
The document summarizes the steps taken to set up a Django project called "he" on Ubuntu. It shows commands used to install Python, virtualenv, Django and other dependencies. Database setup with PostgreSQL is also demonstrated. An app called "board" is created, with a Post model defined and admin configured. Templates are added and the development server is run. Authentication and registration are implemented along with forms to add new posts. The project is developed iteratively through multiple versions.
Exploring MySQL Operator for Kubernetes in Python, by Ivan Ma
The document discusses the MySQL Operator for Kubernetes, which allows users to run MySQL clusters on Kubernetes. It provides an overview of how the operator works using the Kopf framework to create Kubernetes custom resources and controllers. It describes how the operator creates deployments, services, and other resources to set up MySQL servers in a stateful set, a replica set for routers, and monitoring. The document also provides instructions for installing the MySQL Operator using Kubernetes manifests or Helm.
SymfonyCon Berlin 2016 - Symfony Plugin for PhpStorm - 3 years later, by Haehnchen
In 2013 the "Symfony Plugin" for PhpStorm was born. Today we see over 1 million downloads and several other plugins for projects like Laravel, Drupal, Shopware, ... that help to improve your productivity.
I will talk about Symfony related features and will give you some tips and tricks. Also, we take a look at the infrastructure behind these plugins and how I maintain all of them.
This document summarizes Olaf Alders' experience building and evolving a personal tracking application using various Perl web frameworks and tools. It describes his initial use of Dancer and later transition to Mojolicious, adoption of Minion for job queueing, migration from MySQL to Postgres, and shift from manual deployment to using Ansible for automation. The key lessons were learning new frameworks like Mojolicious and tools like Minion, Sqitch, and Ansible, as well as adopting practices like SSL and OAuth authentication.
Practical introduction to DevOps with Chef, by LeanDog
The document provides an introduction to DevOps using Chef. It discusses configuration management and deployment automation. It introduces key Chef concepts like nodes, resources, recipes and cookbooks. It demonstrates using Chef recipes to configure a sample Ubuntu application server with Apache, Python, Django and PostgreSQL. The recipes install packages, create a virtualenv, install dependencies and configure the application using Chef resources and Ruby code.
Continuous Delivery with Maven, Puppet and Tomcat - ApacheCon NA 2013, by Carlos Sanchez
Continuous Integration, with Apache Continuum or Jenkins, can be extended to fully manage deployments and production environments, running in Tomcat for instance, in a full Continuous Delivery cycle using infrastructure-as-code tools like Puppet, allowing you to manage multiple servers and their configurations.
Puppet is an infrastructure-as-code tool that allows easy and automated provisioning of servers, defining the packages, configuration, services,... in code. Enabling DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration, and along with continuous integration tools like Apache Continuum or Jenkins, it is a key piece to accomplish repeatability and continuous delivery, automating the operations side during development, QA or production, and enabling testing of systems configuration.
Traditionally a field for system administrators, Puppet can empower developers, allowing both to collaborate on coding the infrastructure needed for their developments, whether it runs on hardware, virtual machines or the cloud. Developers and sysadmins can define which JDK version must be installed, the application server, version, configuration files, war and jar files,... and easily make changes that propagate across all nodes.
Using Vagrant, a command line automation layer for VirtualBox, they can also spin up virtual machines on their local box, built from scratch with the same configuration as the production servers, do development or testing, and tear them down afterwards.
We will show how to install and manage Puppet nodes with JDK, multiple Tomcat instances with installed web applications, database, configuration files and all the supporting services. Including getting up and running with Vagrant and VirtualBox for quickstart and Puppet experiments, as well as setting up automated testing of the Puppet code.
TorqueBox: The beauty of Ruby with the power of JBoss. Presented at Devnexus..., by bobmcwhirter
- Bob McWhirter is the project lead of TorqueBox and a JBoss Fellow.
- TorqueBox allows Ruby web applications to run on the JBoss Application Server using JRuby.
- It provides tight integration with JBoss and allows Ruby applications to take advantage of features like messaging, jobs, and services that are traditionally Java-based.
This document provides an overview of DevOps concepts including containers, Docker, and related tools. It discusses what containers are and the differences between virtual machines and containers. It then covers how containers can be used by developers and systems engineers. Docker is introduced as a tool for running and managing containers. Dockerfiles are described as documents for assembling container images. Docker Compose is presented as a tool for defining and running multi-container applications. Examples are given for creating a simple container with Dockerfile and running it locally and sharing it publicly. Monitoring tools like cAdvisor are mentioned. The document ends with discussing continuous integration/deployment using tools like Gitlab and Jenkins to automate the build and deployment process.
Krux operates a large infrastructure serving thousands of user requests per second. They use Puppet and tools like Cloudkick, Foreman, Boto, and Vagrant to manage their infrastructure in an automated and scalable way. Their Puppet configuration is split into modules, environments, and datacenters. They launch AWS nodes programmatically and configure them with Puppet. Cloudkick is used for monitoring and parallel SSH. Boto allows full Python API access to AWS. Vagrant allows consistently provisioning development machines locally. Automation and external configuration enable their small operations team to manage a large, dynamic infrastructure.
Yet another introduction to the challenges of geocoding and the ways we approach those problems at Lokku/Nestoria. I describe the details of how we tackle geocoding in countries like India, and also plug our new OpenCage Data Geocoder API.
Test::Kit 2.0 (London.pm Technical Meeting July 2014), by Alex Balhatchet
The document describes Test::Kit 2.0, a module that allows creating custom test modules with desired test features. It allows combining behaviors from multiple test modules, excluding or renaming exported subs, and directly passing parameters to module imports. The document provides an example of creating a test kit and using it, and discusses benefits like reduced boilerplate and consistent testing. It also describes improvements made in Test::Kit 2.0 over the previous version.
Perl is a high-level, general purpose programming language that was introduced in 1987 and remains widely used today. It draws inspiration from languages like C, sed, awk, and grep. The document provides an introduction to Perl's history and basics, including variables, conditionals, loops, regular expressions, subroutines and objects. It highlights advantages like the comprehensive CPAN module library, strong Unicode support, testing culture, and job opportunities. The author works at Nestoria, where Perl powers their property search engine, handling tasks like XML parsing, geocoding, and image processing.
App::highlight - a simple grep-like highlighter app, by Alex Balhatchet
App::highlight is a bit like grep, except that it doesn't filter out lines. In exchange for seeing all the output you get a lot more fun highlighting options to play with, and full Perl regex support of course.
I gave this talk at the London.pm technical meeting in July 2013.
App::highlight is available on Github and CPAN.
File::CleanupTask is a CPAN module that the company I work at has opensourced. This is a presentation I gave about it at the London Perl Mongers technical meeting in August 2012.
Introduction to writing readable and maintainable Perl, by Alex Balhatchet
An introduction to writing readable Perl code, for people who write Perl that other people may want to read. Covers the most important lessons from Perl Best Practices, and ends by showing how to use Perl::Critic to test that you are meeting the standards set out.
Given at FOSDEM 2011
Programming Foundation Models with DSPy - Meetup Slides, by Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
5th LF Energy Power Grid Model Meet-up Slides, by DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Building Production Ready Search Pipelines with Spark and Milvus, by Zilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
HCL Notes and Domino License Cost Reduction in the World of DLAU, by panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Fueling AI with Great Data with Airbyte Webinar, by Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Skybuffer SAM4U tool for SAP license adoption, by Tatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of the presentation I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Generating privacy-protected synthetic data using Secludy and Milvus, by Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
TrustArc Webinar - 2024 Global Privacy Survey, by TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Project Management Semester Long Project - Acuity, by jpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack, by shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Webinar: Designing a schema for a Data Warehouse, by Federico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
OpenID AuthZEN Interop Read Out - Authorization, by David Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
2. Your first CPAN module
PAUSE
Perl Authors Upload Server
http://pause.perl.org
Account requests are manually verified and can take weeks.
Sign up early!
3. Getting ready to write some code
% module-starter --module='My::New::Module' --author='me' --email='me@lokku.com' --mb
Learn the CPAN module code layout:
lib/ - Perl modules
t/ - tests
Changes - change log file
META.yml - distribution metadata
LICENSE - legal stuff
README - installation details
MANIFEST - list of files included
Makefile.PL - installation script (autoconf)
Build.PL - installation script (pure Perl)
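If module-starter isn't to hand, the same layout can be sketched by hand. This is a minimal, hand-rolled placeholder of the structure above, not what module-starter itself would emit:

```shell
# Create the standard lib/ and t/ directories for My::New::Module
mkdir -p My-New-Module/lib/My/New My-New-Module/t

# A minimal module: package declaration, $VERSION, and a true return value
cat > My-New-Module/lib/My/New/Module.pm <<'EOF'
package My::New::Module;
use strict;
use warnings;
our $VERSION = '0.001';
1;
EOF

# A first test file using the core Test::More module
cat > My-New-Module/t/00-load.t <<'EOF'
use Test::More tests => 1;
use_ok('My::New::Module');
EOF

# Empty stubs for the metadata files listed above; the real contents
# come from module-starter or Dist::Zilla
touch My-New-Module/Changes My-New-Module/MANIFEST \
      My-New-Module/README My-New-Module/LICENSE
ls -R My-New-Module
```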
4. If you're unsure...
Copy somebody else's code!
There are many different types of CPAN module...
● App::*
● WebService::*
● *::XS
● *::Tiny
● *::Manual
All have different layouts and conventions. When in doubt look
at a few popular examples, or examples by popular authors.
5. Of course you use source control
Go sign up to Github - http://github.com
If you're unfamiliar with Git read Pro Git - http://progit.org/book/
Getting started using Git and Github for your CPAN module is easy
Create "My-New-Module" repository on Github
% cd My-New-Module
% git init
% git remote add origin git@github.com:you/My-New-Module.git
% git push origin master
% vim README.pod
% git commit -a -m "Added README.pod"
% git push origin master
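The same steps can be run end-to-end locally. This sketch omits the Github remote and pushes, and the inline user.name/user.email are placeholder values:

```shell
# Initialise a local repository for the distribution
git init -q My-New-Module

# A minimal README in POD format
printf '=head1 NAME\n\nMy::New::Module - a new module\n' > My-New-Module/README.pod

git -C My-New-Module add README.pod
# git commit needs -m to supply the commit message non-interactively
git -C My-New-Module -c user.name=you -c user.email=you@example.com \
    commit -q -m "Added README.pod"
git -C My-New-Module log --oneline
```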
6. Writing your module
This is the bit you are already familiar with.
Write your module in lib/My/New/Module.pm
Write your tests in t/*.t
Test your code using prove -l and perl -cw
Write good quality code and tests, somebody might read it!
7. Getting your module on CPAN
The manual, non-Dist::Zilla way...
% perl Build.PL # creates Build
% ./Build distmeta # creates Makefile.PL and META.yml
% ./Build manifest # creates MANIFEST
% git diff / git add / git commit as necessary
% ./Build disttest # test the distribution
% ./Build dist # spit out a tarball
Upload your distribution tarball to PAUSE...
https://pause.perl.org/pause/authenquery?ACTION=add_uri
8. Dist::Zilla
Dist::Zilla is a package to help CPAN authors.
% dzil help
Available commands:
commands: list the application's commands
help: display a command's help screen
authordeps: list your distribution's author dependencies
build: build your dist
clean: clean up after build, test, or install
install: install your dist
listdeps: print your distribution's prerequisites
new: mint a new dist
nop: do nothing: initialize dzil, then exit
release: release your dist
run: run stuff in a dir where your dist is built
setup: set up a basic global config file
smoke: smoke your dist
test: test your dist
9. Creating a new Dist::Zilla-based dist
% dzil new My::New::Module
Creates only dist.ini and lib/My/New/Module.pm
The default dist.ini contains...
name = My-New-Module
author = Alex Balhatchet <kaoru@slackwise.net>
license = Perl_5
copyright_holder = Alex Balhatchet
copyright_year = 2010
version = 0.001
[@Basic]
You can find out about @Basic here:
http://search.cpan.org/dist/Dist-Zilla/lib/Dist/Zilla/PluginBundle/Basic.pm
10. Converting a dist to Dist::Zilla
% rm -f Build.PL Makefile.PL MANIFEST META.yml t/pod-*.t
Didn't that feel good? :-)
% vim ~/dzil/config.ini
Global defaults
% vim dist.ini
Distribution-specific config
13. Build, test & release with Dist::Zilla
% dzil test
% dzil release
Yep, that's it :-)
My Dist::Zilla config adds the $VERSION variable, adds a
POD syntax checking test, creates the META.yml and META.json files, and creates the LICENSE, README, MANIFEST
and Makefile.PL files.
Dist::Zilla can also interact with SVN or Git, determine your
dependencies automatically, or Tweet when you release a new
version of your module!
http://search.cpan.org/search?query=Dist::Zilla::Plugin
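A dist.ini doing roughly what is described above might look like this. The plugin list is illustrative, not the author's exact config: PkgVersion, PodSyntaxTests and MetaJSON ship with Dist::Zilla itself, while @Git comes from the separate Dist::Zilla::Plugin::Git distribution.

```ini
name    = My-New-Module
author  = Alex Balhatchet <kaoru@slackwise.net>
license = Perl_5
copyright_holder = Alex Balhatchet
copyright_year   = 2010
version = 0.001

[@Basic]         ; LICENSE, README, MANIFEST, Makefile.PL, META.yml
[PkgVersion]     ; adds the $VERSION variable to each module
[PodSyntaxTests] ; adds a POD syntax checking test
[MetaJSON]       ; adds META.json alongside META.yml
[@Git]           ; check, commit, tag and push when you release
```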
14. The waiting game
After running the dzil release command or uploading your
distribution via the PAUSE web interface, you should get two emails letting you know everything is OK.
After that it takes a few hours for your distribution to be fully
indexed in all the CPAN mirrors. Once it's there it will show up
on http://search.cpan.org/~you/ as you would expect.
Once it's on the web, let people know about it.
15. CPAN Testers
Once you've uploaded your distribution, the CPAN Testers
testing service will start testing it for you.
You will get emails about the results, and you can also check
them online.
For example, http://www.cpantesters.org/distro/N/Number-Format-SouthAsian.html
In the case of Number::Format::SouthAsian the CPAN testers
quickly flagged two important bugs - it was broken on 32bit
systems, and it was broken on Windows.
Version 0.07 has both those bugs fixed. Woohoo!