CASPUR Tape Dispatcher
Introduction
At CASPUR, we have several automated services that make use of tape drives. These drives are
shared over the fibre channel SAN, and the same physical tape drive is often named differently on
different machines, depending on the system architecture and on the way it was configured on each
host. For instance, on a Linux machine a drive may be accessed as "/dev/st0", while on an AIX
machine the very same drive is called /dev/rmt3, and on a Solaris host it is named "/dev/rmt/2",
and so on.
On top of this, the same drive (or tape) may be claimed by several processes at the same time, and
this has to be handled somehow. Our setup is even more complex, since the same magnetic tape
may be mounted on the same host but on different tape drives, and we have to handle several
different tape types (LTO, STK9840, DLT, AIT).
To address the management issues that arise in this distributed environment, we have introduced the
CASPUR Tape Dispatcher. This application 1) keeps a minimal bookkeeping of tape resources
(both drives and magnetic tapes); 2) handles tape mount requests, i.e. mounts and unmounts
tapes; and 3) takes care of resource "locking", that is, it ensures that only one process at a time
may use a specific resource, such as a tape drive or a magnetic tape.
We believe that our solution may also be of interest to other sites. It can be used to handle
multiple mount requests on one or many tape mounters via site-specific mount commands, and
it is especially useful when several independent and asynchronous tasks, running on the
same or different hosts, may simultaneously require the same tape resource.
Architecture
The CASPUR Tape Dispatcher is a stand-alone server process written in perl and is fully site-
independent. Systems that need access to tape resources communicate with the server by means
of the Tape Dispatcher API (supplied along with the server).
The site-dependent part of the system consists of three components:
1. The system-wide configuration file, /etc/sysconfig/tapedispatcher.conf, which describes all tape
resources (drives and magnetic tapes) of the organization.
2. The current tape usage file (/etc/sysconfig/tapedispatcher.usedtapes), which records the tapes
in use and is dynamically managed by the server process.
3. The tape mounter command (/bin/tapemounter), which each site provides in accordance with
our specification.
The server process also maintains an operation log in /var/log/tapedispatcher.log.
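Purely as an illustration (the exact syntax is defined by the template files shipped with the distribution and is not reproduced here), the two site-dependent files might contain entries along the following lines; the hostnames, device names, VIDs and all keywords other than DRIVE below are hypothetical:

DRIVE  drive01  host=tapesrv1.example.org  device=/dev/st0   type=LTO
DRIVE  drive02  host=aixhost.example.org   device=/dev/rmt3  type=STK9840
TAPE   LTO0042  type=LTO
TAPE   STK0917  type=STK9840

and, in tapedispatcher.usedtapes, one line per tape currently assigned to a service nickname, for example:

LTO0042  backup-service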
Operation
When the server process starts, it parses the configuration file and is immediately ready to serve
requests from client hosts (only hosts mentioned in the "DRIVE" statements are served; requests
from other hosts are silently ignored).
The server controls all available drives simultaneously. For each requested drive, it keeps a "session"
with a client, which grants that client exclusive rights to the device. The session remains active
until the client process terminates it or dies. During the session, the client may
mount one or more different tapes on the reserved device.
There are two ways to establish a session; let us consider them in detail:
1. A session is established by requesting a tape with a certain VID for a service with a certain
nickname (on a specified drive, or on the first unused matching drive).
In this case the server:
o Checks whether the request has come from an authorized host.
o Checks whether the tape VID is present in the central configuration file.
o Checks whether the VID "belongs" to the nicknamed service ("usedtapes" file).
o Checks whether the caller host is mentioned in the central configuration file as
having at least one drive of the right type attached (capable of mounting the requested
VID), or has access to the explicitly specified drive.
o Puts the session request into the queue.
o Assigns the first available (or the explicitly specified) drive of the needed type to the client
and mounts the requested VID on it, thus establishing a session.
o Within the session, serves any subsequent mount/unmount requests. For each mount,
the client receives an acknowledgement, complete with the name of the physical device and
the VID mounted on it.
o Marks the resource as "free" upon completion (or crash) of the client session.
2. A session is established by requesting a brand-new tape of a certain type for a service with
a certain nickname (on a specified drive, or on the first unused matching drive).
In this case the server has to "generate" the VID (in all other respects, it operates exactly as
in case 1). This is how it is done:
o The central configuration file is searched for the first unused tape of the matching
type.
o This tape is then marked as used by the service with the specified nickname in the
"usedtapes" file.
o The new VID thus obtained is then used to create the session.
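To give an idea of how a client interacts with the server, here is a minimal sketch of a session in perl. The module and function names (TapeDispatcher, td_mount_vid, td_mount, td_unmount, td_release) and their signatures are purely illustrative assumptions; the real calls are the perl functions documented in the Tape Dispatcher API README shipped with the distribution.

#!/usr/bin/perl
# Minimal sketch of a client session (hypothetical API names, for illustration only).
use strict;
use warnings;
use TapeDispatcher qw(td_mount_vid td_mount td_unmount td_release);   # assumed module name

# Case 1: request the tape with VID LTO0042 on behalf of the service
# "backup-service", on the first unused matching drive. On success the
# server establishes a session and returns the physical device and the VID.
my ($device, $vid) = td_mount_vid('backup-service', 'LTO0042')
    or die "could not establish a session\n";
print "tape $vid mounted on $device\n";

# ... read from or write to the tape through $device ...

# Within the same session the reserved drive can be reused for other tapes;
# every mount is acknowledged with the device name and the mounted VID.
td_unmount($device);
($device, $vid) = td_mount('LTO0043');

# End the session; the server marks the drive as "free" again.
td_release($device);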
At any moment, the server may be forced to re-read its configuration file by sending a HUP signal to
the server process. All requests to the server and all server operations, both failed and successful, are
logged.
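For example, assuming the server process runs under its default name tapedispatcher, the configuration can be reloaded on a Linux host with:
kill -HUP $(pidof tapedispatcher)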
Download
The code is available under these conditions. The history of updates and the latest downloadable
version of CASPUR Tape Dispatcher are here:
tapedispatcher-CASPUR-0.98f.tgz (18 KB).
Update History.
The distribution contains:
• The ready-to-use server code (tapedispatcher); it requires only perl and will run on any
UNIX/Linux host.
• Template tapedispatcher.conf and tapedispatcher.usedtapes files that are to be placed in
/etc/sysconfig.
• A sample tapemounter front end and some library-specific scripts.
• The Tape Dispatcher API (a set of perl functions, a README file and usage examples).
The installation is straightforward, so we do not supply any installation script:
• Make sure that you have perl and the /var/log and /etc/sysconfig directories on your system.
• Carefully edit and install both tapedispatcher.conf and tapedispatcher.usedtapes
configuration files in /etc/sysconfig.
• Prepare and install your site-specific /bin/tapemounter command (a sketch of such a front end is shown after this list).
• Your system is now configured! Start the server (using the rc.tdisp script that we supply) and,
preferably, make sure that it restarts on reboot. We suggest doing this via inittab. For
instance, if you have unpacked the distribution directory under /etc, you may add the following
line to /etc/inittab:
tdis:2345:respawn:/etc/tapedispatcher-CASPUR-0.98f/rc.tdisp
• For each of the tape-related systems involved, replace any direct tape handling with the
Tape Dispatcher interface.
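The calling convention of /bin/tapemounter is defined by the specification in the distribution; purely as an illustration, and assuming a hypothetical convention of "tapemounter mount VID device" and "tapemounter unmount device", a front end wrapping a generic changer control command might look like the perl sketch below. The mtx invocations, the changer device /dev/sg2 and the VID-to-slot mapping are assumptions for the example, not part of the Tape Dispatcher itself.

#!/usr/bin/perl
# Hypothetical /bin/tapemounter front end (illustration only).
# Assumed calling convention: tapemounter mount <VID> <device>
#                             tapemounter unmount <device>
use strict;
use warnings;

my ($action, @args) = @ARGV;
die "usage: tapemounter mount <VID> <device> | tapemounter unmount <device>\n"
    unless defined $action;

if ($action eq 'mount') {
    my ($vid, $device) = @args;
    # $device is the drive assigned by the dispatcher; this sketch addresses
    # the changer directly and lets mtx default to drive 0.
    my $slot = vid_to_slot($vid);
    system('mtx', '-f', '/dev/sg2', 'load', $slot) == 0 or exit 1;
} elsif ($action eq 'unmount') {
    my ($device) = @args;
    system('mtx', '-f', '/dev/sg2', 'unload') == 0 or exit 1;
} else {
    die "unknown action '$action'\n";
}
exit 0;

# Hypothetical mapping from volume ID to changer slot; each site would
# implement this according to its own library layout.
sub vid_to_slot {
    my ($vid) = @_;
    my %slots = ( LTO0042 => 12, LTO0043 => 13 );   # example values
    return $slots{$vid} // die "unknown VID $vid\n";
}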
Support
We will be happy to answer your questions, and we are interested in any error reports and
suggestions. Please feel free to contact us at tapedispatcher@caspur.it.
CASPUR Tape Dispatcher team:
Ruten Gurin (architecture and implementation),
Andrei Maslennikov (architecture and documentation),
Marco Mililotti (2003-2005 Project Lead),
Andrea Petrucci (developer),
Carolina Roldan Garrido (developer)
Copyright notice