Behind the scenes of WebLion's Plone hosting service, which uses Debian packages and a custom repository to deliver reliable, unattended updates to a cluster of heterogeneous departmental virtual servers. And it's all available for your own use for free.
7. A scalable solution
To save consulting effort
College of Business
Dairy and Animal Science
The Huck Institutes
Teaching and Learning with Technology
8. A scalable solution
To save consulting effort
(Word cloud of WebLion partner units; legible names include: Penn State Erie, College of Business, Dairy and Animal Science, The Huck Institutes, Teaching and Learning with Technology, College of Education, Agricultural Sciences, College of IST, Department of Meteorology, Chemistry, Digital Library Technologies, Veterinary and Biomedical Sciences, World Campus, ITS, Marketing and Communications, Office of Physical Plant, Outreach, Innovation Park, Alumni Association, Computer Science and Engineering, Population Research Institute, Consulting and Support Services, Office of Human Resources.)
27–31. Buildout
The right tool for the wrong job
Redoes existing work… worse
Every server is a point of failure.
On failure, breaks the site
Package QA is lacking.
“Publishing known good sets of versions is quite painful.”
—Martin Aspeli
Not repeatable
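Aspeli's complaint is concrete in buildout itself: every egg has to be pinned by hand in a [versions] section, and anything left unpinned floats to whatever the package index serves up on the next run. A minimal sketch (package names and version numbers here are illustrative, not a tested known-good set):

```ini
# buildout.cfg -- version-pinning sketch (versions are illustrative)
[buildout]
parts = instance
versions = versions

[versions]
# Every egg must be listed explicitly; one unpinned dependency can pull
# an incompatible release on the next buildout run.
plone.app.blob = 1.0
plone.memoize = 1.0.2
```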
35–39. Advanced Packaging Tool
Or “APT”
We need them anyway.
Excellent QA record
High-level, low-level, and config stuff are close to atomic.
Tolerance of local changes
Reliable. Reliable, reliable, reliable.

Configuration file `/etc/my-bologna-conf.d/firstname'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : background this process to examine the situation
 The default action is to keep your current version.
*** firstname (Y/I/N/O/D/Z) [default=N] ?
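The conffile prompt above is interactive; unattended updates need dpkg told how to answer it. A sketch, assuming a Debian or Ubuntu host (the options are dpkg's own; the schedule is up to you):

```shell
# Upgrade unattended, keeping locally modified conffiles.
# --force-confold answers "keep your currently-installed version" to
# every conffile prompt; --force-confdef takes the maintainer's default
# wherever no local change exists.
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y \
    -o Dpkg::Options::="--force-confdef" \
    -o Dpkg::Options::="--force-confold" \
    upgrade
```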
40. Advanced Packaging Tool
A case study in failing gracefully
1. If a version of the package is already installed, call
old-prerm upgrade new-version
2. If the script runs but exits with a non-zero exit status, dpkg will attempt:
new-prerm failed-upgrade old-version
If this works, the upgrade continues. If this does not work, the error unwind:
old-postinst abort-upgrade new-version
If this works, then the old-version is “Installed”, if not, the old version is in a “Failed-Config” state.
2. If a “conflicting” package is being removed at the same time, or if any package will be broken (due to Breaks):
1. If --auto-deconfigure is specified, call, for each package to be deconfigured due to Breaks:
deconfigured's-prerm deconfigure in-favour package-being-installed version
Error unwind:
deconfigured's-postinst abort-deconfigure in-favour package-being-installed-but-failed version
The deconfigured packages are marked as requiring configuration, so that if --install is used they will be configured again if possible.
2. If any packages depended on a conflicting package being removed and --auto-deconfigure is specified, call, for each such package:
deconfigured's-prerm deconfigure in-favour package-being-installed version removing conflicting-package version
Error unwind:
41. Advanced Packaging Tool
A case study in failing gracefully
2. If this fails, dpkg will attempt:
new-postrm failed-upgrade old-version
If this works, installation continues. If not, Error unwind:
old-preinst abort-upgrade new-version
If this fails, the old version is left in a “Half Installed” state. If it works, dpkg now calls:
new-postrm abort-upgrade old-version
If this fails, the old version is left in a “Half Installed” state. If it works, dpkg now calls:
old-postinst abort-upgrade new-version
If this fails, the old version is in an “Unpacked” state.
This is the point of no return - if dpkg gets this far, it won't back off past this point if an error occurs. This will leave the package in a fairly bad state, which will require a successful re-installation to clear up, but it's when dpkg starts doing things that are irreversible.
6. Any files which were in the old version of the package but not in the new are removed.
7. The new file list replaces the old.
8. The new maintainer scripts replace the old.
9. Any packages all of whose files have been overwritten during the installation, and which aren't required for dependencies, are considered to have been removed. For each such package
1. dpkg calls:
disappearer's-postrm disappear overwriter overwriter-version
2. The package's maintainer scripts are removed.
3. It is noted in the status database as being in a sane state, namely not installed (any conffiles it may have are ignored, rather than being removed by dpkg). Note that disappearing packages do not have their prerm called, because dpkg doesn't know in advance that the package is going to vanish.
10. Any files in the package we're unpacking that are also listed in the file lists of other packages are removed from those lists. (This will lobotomize the file list of the “conflicting” package if there is one.)
11. The backup files made during installation, above, are deleted.
12. The new package's status is now sane, and recorded as “unpacked”.
Here is another point of no return - if the conflicting package's removal fails we do not unwind the rest of the installation; the conflicting package is left in a half-removed limbo.
13. If there was a conflicting package we go and do the removal actions (described below), starting with the removal of the conflicting package's files (any that are also in the package being installed have already been removed from the conflicting package's file list, and so do not get removed now).
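The unwind above is driven entirely by maintainer-script exit statuses. A minimal prerm sketch (illustrative, not WebLion's actual script; the service name is made up):

```shell
#!/bin/sh
# prerm -- dpkg invokes this as "prerm upgrade <new-version>" before
# unpacking the new files; a non-zero exit here is what triggers the
# failed-upgrade / abort-upgrade unwind described above.
set -e

case "$1" in
    upgrade)
        # Stop the service before files change underneath it. If this
        # fails, set -e exits non-zero and dpkg begins its error unwind.
        invoke-rc.d weblion-zeo stop
        ;;
    remove|deconfigure|failed-upgrade)
        : # nothing to do
        ;;
esac

exit 0
```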
58. weblion-apache-config
Crown jewel of config-package-dev-ery
# AUTOMATIC UPDATES MIGHT BREAK YOUR MACHINE if you don't read
# https://weblion.psu.edu/wiki/ConfigPackageOverrides before editing this file.
#
# We intend that you can perform the customizations you need without editing
# this file. Instead, edit any of the files in /etc/weblion-apache-config
# Included herein. This way, we can update this file unattended without paving
# over your work.
#
# If you find you need even more flexibility, please file a ticket, and we'll
# revise the design or advise you to use an entirely custom vhost and include
# what files you can from
# /usr/share/weblion-apache-config/config-snippets/public.

# We don't put this in conf.d because, if dpkg puts a global.conf.dpkg-new or
# something there, Apache will load it, too. This isn't a problem in other
# folders, where Apache is careful to load only files with the extension
# ".conf".
Include /etc/weblion-apache-config/global.conf

<VirtualHost *:80>
Include /etc/weblion-apache-config/servername.conf

# If you want your site to answer to more than one domain (for example,
# www.example.com and example.com), don't use ServerAlias. Instead, make a
# new virtual host, following the directions in
# /usr/share/doc/weblion-apache-config/examples/alias-vhost.

Include /etc/weblion-apache-config/serveradmin.conf
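The comment block above is the whole design: the package owns the vhost file, and local decisions live in the small Include'd files. A site admin's serveradmin.conf, for example, might be a single directive (address illustrative):

```apache
# /etc/weblion-apache-config/serveradmin.conf
# This file belongs to the local admin, so the package-owned vhost can
# be upgraded unattended without touching it.
ServerAdmin webmaster@example.psu.edu
```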
60. weblion-apache-config
Crown jewel of config-package-dev-ery
servername.conf:
# This file should consist of a single
# ServerName directive specifying the
# FQDN of the primary vhost.
ServerName example.psu.edu
62. weblion-apache-config
Crown jewel of config-package-dev-ery

<VirtualHost *:80>
Include /etc/weblion-apache-config/servername.conf

# If you want your site to answer to more than one domain (for example,
# www.example.com and example.com), don't use ServerAlias. Instead, make a
# new virtual host, following the directions in
# /usr/share/doc/weblion-apache-config/examples/alias-vhost.

Include /etc/weblion-apache-config/serveradmin.conf
Include /etc/weblion-apache-config/log.conf
Include /usr/share/weblion-apache-config/config-snippets/public/prepare-to-proxy.conf

# Most of your custom configuration, including rewrites, should go in this
# file and in before-proxy-to-plone-https.conf, below:
Include /etc/weblion-apache-config/before-proxy-to-plone.conf

Include /etc/weblion-apache-config/proxy-to-plone.conf
</VirtualHost>

<VirtualHost *:443>
Include /etc/weblion-apache-config/servername.conf
Include /etc/weblion-apache-config/serveradmin.conf
Include /etc/weblion-apache-config/log.conf

Include /etc/weblion-apache-config/enable-ssl.conf
Include /etc/weblion-apache-config/ssl-certificate-files.conf

# Require authN for SSL access to the Plone site:
<Location />
Include /usr/share/weblion-apache-config/config-snippets/public/require-cosign-auth.conf
Include /etc/weblion-apache-config/cosign-host-parameters.conf
</Location>

Include /usr/share/weblion-apache-config/config-snippets/public/prepare-to-proxy-https.conf

# Most of your custom configuration, including rewrites, should go in this
# file and in before-proxy-to-plone.conf, above:
Include /etc/weblion-apache-config/before-proxy-to-plone-https.conf

Include /etc/weblion-apache-config/proxy-to-plone-https.conf
</VirtualHost>
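As the comments in the listing say, local rewrites belong in before-proxy-to-plone.conf so they run ahead of the package-provided proxy rules. A sketch of what such a file might hold (the rule itself is illustrative):

```apache
# /etc/weblion-apache-config/before-proxy-to-plone.conf
# Runs before the package's proxy-to-plone.conf inside the *:80 vhost.
RewriteEngine On
# Illustrative: redirect a retired path before it reaches Plone.
RewriteRule ^/old-news$ /news [R=301,L]
```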
(Don’t say anything; this is just a splash slide.)
You can think of WL Hosting as…
a Plone hosting appliance
came out of 2 realizations: lots more to a Plone deployment than Zope & Plone. \\ Python, Apache, Squid, cron jobs for DB maint & backups, SNMP for remote monitoring, …. Then kernel, libs, etc.
2nd thing: I realized there’s a strangeness in WebLion’s business model…
clients vs. partners: don’t do stuff for them (except multi-dept usefulnesses) \\ advantages: scalability, distribution of knowledge across the organization, keeping our own team lean and agile.
Didn’t realize: Plone apparently hard to sysadmin
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multiplied our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How do I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How I do all this stuff once instead of repeating it for every partner?”
It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multipled our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out
…cookie cutter sites \\ few enormous servers \\ sharing Zope instances (same Products)
But that ain’t gonna happen. They’re gonna need different things, among them different products (and different versions of those products).
So the question became not “How do we build a gigantic megaserver that can take care of everybody?” but “How do we deploy a bunch of similar-but-not-identical servers?”
Gets the stuff on there. Upgrades?
Puppet & cfengine definitely contenders \\ couple things I didn’t like
Command-&-control philosophy. assume every machine updates in lockstep: cluster-oriented. \\ I want the option of telling control freak sysadmins “Sure, you can use our stuff. Just set up your own box, and hit ‘update’ manually when you see fit.” without running into situations where a config file assumes a certain version of the software and is surprised.
Cross-OS abstraction: major feature: manage Windows & UNIXes & Mac from 1 conf file \\ invent own language \\ we’ll pick 1 OS \\ Don’t need cross-OS abstraction. \\ Don’t pay for another language in learning time. \\ Want people to hack on this system as easily as possible.
Non-concurrent: updates to config not synced with updates to the software it configures, which could conceivably cause problems, for example if a new version of a package changes the meaning of a config directive.
Considered buildout. Popular in Plone because Jim Fulton (of Zope fame) wrote it. It’s for building and configuring Zope instances, but people have extended it to build & config Apache, Squid, Varnish, and cron jobs.
buildout’s a fine development tool. I use it myself all the time. But it doesn’t work in my mass-deployment situation.
Redoes existing work. There are already excellent packages of these, QA’d by thousands of Debian users. And wherever you stop, you’re going to have some kind of dependency impedance mismatch—are you going to repackage the kernel?
At least 3 network points of failure for a default Plone buildout. About half a dozen times a week, I rescue some poor user who can’t run buildout because PyPI is down, plone.org is down, zope.org is down, or PSC is broken. You can mirror it all yourself, but geez.
On failure, breaks the site. If any of the above—or any other kind of error—happens after buildout’s begun to change things, there’s no turning back. You can’t let local admins write to buildout.cfg, because they can make it run arbitrary, crashing code during nightly unattended updates.
Package QA is lacking. There’s no vetting process for putting up new versions either; all the QA is the developer’s responsibility. Martin Aspeli recognizes this problem, saying “publishing known good sets of versions is quite painful”. (Ironically, he solved this problem by introducing yet another network service, good-py, which went down several days later.)
Not truly repeatable. I’ve seen people put up new versions on PyPI with the same version numbers as old ones. So even if you pin your versions, you’re hosed.
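The gap here is that a version string is not a content guarantee; only a checksum is. A minimal shell sketch (hypothetical file contents) of two uploads that share a version number but differ in content:

```shell
# Two "releases" that claim the same version but differ in content
# (hypothetical files; re-uploading under the same number was possible on PyPI).
printf 'def foo(): return 1\n' > pkg-1.0.first
printf 'def foo(): return 2\n' > pkg-1.0.second

# A version pin can't tell these apart, but a checksum can:
sha256sum pkg-1.0.first pkg-1.0.second
```

This is part of why Debian-style repositories verify packages against signed checksum lists rather than trusting version strings alone.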
So buildout wasn’t really suitable for unattended deployment. But what about…
…Debian packages?
We need them anyway to manage the kernel, libraries, and basic services.
Unbeatable QA. Just outrageous. Debian has 3 QA tiers: unstable, testing, stable. Packages land in unstable immediately, migrate to testing after about 10 days, and are released as stable roughly every year and a half. We run stable. Actually, we’re one release behind, but we still get another year of full security support.
Nearly atomic. High-level stuff like Apache gets updated at darn close to the same time as low-level stuff like the libraries it depends on, making for fewer states. And fewer states means fewer unexpected behaviors.
Tolerance of local changes. APT has been around since 1998 and is very mature. It has a sophisticated framework for tolerating local config changes during upgrades: it doesn’t pave over your edits, it asks.
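For unattended runs, that behavior can be made explicit. The conffile flags below are real dpkg options; the invocation is a sketch of how an unattended upgrade might preserve local edits:

```shell
# Unattended upgrade that keeps locally modified config files:
#   --force-confdef  take the package default when one applies
#   --force-confold  otherwise keep the admin's existing file
apt-get -y \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold" \
  upgrade
```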
Reliable. Downloads everything before changing anything. If something’s unreachable, the stuff that depends on it doesn’t happen. And if anything unexpected happens during installation…
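That download-first discipline can also be invoked explicitly, using apt-get’s real --download-only flag to stage everything before committing to the install phase:

```shell
# Stage: fetch all .debs first; nothing is installed or changed yet,
# so an unreachable mirror can't leave the box half-upgraded.
apt-get --download-only -y upgrade

# Commit: with everything already in the local cache, actually upgrade.
apt-get -y upgrade
```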
…there are a whole bunch of bailout points that return things to a working state.
This is a breakdown of how the APT system installs or upgrades a package. Each smiley face marks a point where something might go wrong, and there’s a remediation step to return things to a working state.
And it’s not until way down here at this big red line that you’re committed to the upgrade; it can roll back at any point before that.
Imagine if buildout did this! Imagine how many fewer people we’d have showing up in the #plone channel screaming about how it broke their install!
So, we went with Debian packages \\ University mirror \\ Local repo for our own stuff \\ GPG signed \\ Bootstrapping:
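Wiring a box to such a repository is only a few lines; the URL and key file below are illustrative, not WebLion’s actual ones:

```shell
# Point APT at the local repository (hypothetical URL):
echo 'deb http://repo.example.edu/debian stable main' \
    > /etc/apt/sources.list.d/local-repo.list

# Trust the repository's GPG signing key so package lists verify:
apt-key add local-repo-signing.key

apt-get update
```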
hosting-node: Everything that should be on every box \\ Kerberos, ssh, ntp, kernel upgrader, sudo, snmpd \\ Want something on all the boxes? Add it to this thing’s dependencies.
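A metapackage like this can be sketched with Debian’s equivs tool; the package name and dependency list below are illustrative, not the exact hosting-node contents:

```shell
# Build a dependency-only "hosting-node" style metapackage with equivs.
cat > hosting-node.ctl <<'EOF'
Section: admin
Priority: optional
Package: hosting-node
Version: 1.0
Depends: openssh-server, ntp, sudo, snmpd
Description: everything that should be on every box
 Add a package to Depends and every node pulls it in on the next update.
EOF
equivs-build hosting-node.ctl   # yields hosting-node_1.0_all.deb
```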
auto-update: Don’t want it? Don’t install it. Nightly automatic updates run between 4 and 5am.
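The nightly window can be a plain cron entry. The path and exact schedule here are hypothetical, and the script is assumed to wrap apt-get update/upgrade with the conffile-safe options:

```shell
# Hypothetical /etc/cron.d/auto-update: run in the 4-5am window.
# /etc/cron.d format: min hour day month weekday user command
17 4 * * *  root  /usr/local/sbin/auto-update >> /var/log/auto-update.log 2>&1
```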
plone-3.1-stack: All the rest \\ Packaged Plone.
center: “config packages” \\ shiny new way to package config using framework \\ Tim Abbott @ MIT
massdeploy \\ I mean…
config-package-dev
So, we went with Debian packages \\ University mirror \\ Local repo for our own stuff \\ GPG signed \\ Bootstrapping:
hosting-node: Everything that should be on every box \\ Kerberos, ssh, ntp, kernel upgrader, sudo, snmpd \\ Want something on all the boxes? Add it to this thing’s dependencies.
auto-update: Don’t want it? Don’t install it. Nightly automatics 4-5am.
plone-3.1-stack: All the rest \\ Packaged Plone.
center: “config packages” \\ shiny new way to package config using framework \\ Tim Abbott @ MIT
massdeploy \\ I mean…
config-package-dev
0:20
So, we went with Debian packages \\ University mirror \\ Local repo for our own stuff \\ GPG signed \\ Bootstrapping:
hosting-node: Everything that should be on every box \\ Kerberos, ssh, ntp, kernel upgrader, sudo, snmpd \\ Want something on all the boxes? Add it to this thing’s dependencies.
auto-update: Don’t want it? Don’t install it. Nightly automatics 4-5am.
plone-3.1-stack: All the rest \\ Packaged Plone.
center: “config packages” \\ shiny new way to package config using framework \\ Tim Abbott @ MIT
massdeploy \\ I mean…
config-package-dev
0:20
A framework for building Debian packages that replace existing configuration safely \\ for example, overriding the stock Squid conf \\ divert-and-symlink \\ supports local changes (but you give up auto-updates) \\ even if you try to keep auto-updates, unattended upgrade fails safe \\ The diverted file continues to receive upstream updates from Debian stable, so if we removed a config package, operations would resume with an up-to-date upstream config. \\ In Lenny, at least; a dpkg bug prevented that before.
We wanted a common caretaking system for Plone, Apache, Squid, the kernel, and libraries. \\ Buildout power users worked toward this but couldn’t take it the whole way. \\ config-package-dev brings the final piece.
Frankly, starting with Debian (which already packages everything) and adding Plone is easier than starting with buildout (which packages Plone) and trying to add everything else in the world.
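The divert-and-symlink trick can be simulated on plain files to see the shape of it. This is a hedged sketch: the real config-package-dev drives dpkg-divert from maintainer scripts during package install, and the paths here are made up for illustration.

```shell
# Simulate divert-and-symlink on plain files in a temp dir.
set -e
dir=$(mktemp -d)
echo "upstream squid.conf" > "$dir/squid.conf"          # stock config from the distro package
echo "weblion squid.conf"  > "$dir/squid.conf.weblion"  # replacement shipped in a config package

# 1. Divert: move the stock file aside, so the distro package
#    can keep updating the diverted copy on upgrades.
mv "$dir/squid.conf" "$dir/squid.conf.orig"
# 2. Symlink: point the canonical name at the packaged config.
ln -s "squid.conf.weblion" "$dir/squid.conf"

cat "$dir/squid.conf"       # -> weblion squid.conf
cat "$dir/squid.conf.orig"  # -> upstream squid.conf (still upgradeable)
```

Removing the config package reverses the two steps, which is why operations can resume with an up-to-date upstream config.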
overview of what we use it for
auto-update: screwing with cron-apt
squid-config: one conf file to rule them all
plone-site-config: listen on localhost, hook up to ZEO, restart leaky Zope, pack DB
Not on this diagram:
weblion-krb5-config
weblion-snmpd-config
weblion-ssh-server-config
Crown jewel: apache-config
Not because it uses config-package-dev in some fancy way; it’s a fancy inversion-of-control framework. \\ While Squid and zope.conf stay static, Apache config is custom per box.
The “primary” vhost is full of includes \\ you fill out tiny conffiles that the vhost includes \\ contracts \\ all made out of includes
Example fixes so far: the HTTP_REMOTE_USER hole, routing authenticated traffic through Squid. \\ The pattern has worked really well. Recommended.
additional vhosts \\ alias vhosts
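The “vhost made of includes” pattern above might look roughly like the following. The ServerName and directory names are illustrative guesses, not the actual WebLion layout; the point is that the package owns the vhost skeleton while each site fills in tiny included conffiles.

```
# Package-owned vhost skeleton; sites never edit this file.
<VirtualHost *:80>
    ServerName example.dept.psu.edu
    # Contracts: each include directory is one small, well-defined hook.
    Include /etc/apache2/weblion/servername.d/*.conf
    Include /etc/apache2/weblion/rewrites.d/*.conf
    Include /etc/apache2/weblion/auth.d/*.conf
</VirtualHost>
```

Because the skeleton is owned by the package, a fix like the HTTP_REMOTE_USER hole can be shipped once and land on every vhost on the next update.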
and wait for Zenoss…
…which totally rocks as a monitoring and trend-graphing system, btw \\ to send you a mail screaming about how the servers are down \\ I swear, that thing has ponies everywhere.
Three dists on the server \\ mirroring Debian’s structure, except all Etch
New stuff enters at unstable after as much testing as possible \\ test for a clean upgrade \\ move it to testing \\ testing moves as a whole to stable
When we get to Lenny \\ etch -> lenny-unstable \\ and it works its way up
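If the repository were managed with reprepro (an assumption; the talk doesn’t name the tool, and the repo path and package filename below are invented), the unstable → testing → stable promotion flow could look like:

```
# New package enters unstable after pre-release testing:
reprepro -b /srv/weblion-repo includedeb unstable plone-3.1-stack_3.1.7-1_all.deb

# After verifying a clean upgrade, promote it to testing:
reprepro -b /srv/weblion-repo copy testing unstable plone-3.1-stack

# Periodically, testing moves as a whole to stable
# (shown per-package here; pull rules can batch this):
reprepro -b /srv/weblion-repo copy stable testing plone-3.1-stack
```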
How we manage the project: Trac \\ one milestone per release of stable