By : Sunil Modi
         Under the supervision of

Mr. D. Nayak, Principal System Analyst
Topics Covered
Installation
Storage & File Systems
Creating RAID
Creating LVM
User Management
Package Management
Cron Files
Networking
Configuring Servers: DHCP, NFS, FTP, Syslog, Squid Proxy, DNS, Apache, Samba
Introduction – What is Linux ?
As we already know, Linux is a freely distributed
implementation of a UNIX-like kernel, the low-level
core of an operating system. Because Linux takes
the UNIX system as its inspiration, Linux and UNIX
programs are very similar.
Linux was developed by Linus Torvalds at the
University of Helsinki, with the help of UNIX
programmers from across the Internet. It began as
a hobby inspired by Andy Tanenbaum’s Minix, a
small UNIX-like system, but has grown to become
a complete system in its own right. The intention is
that the Linux kernel will not incorporate
proprietary code but will contain nothing but freely
distributable code.
Installation – Method
The following installation methods are available:
     DVD/CD-ROM
     Use this method if we have a DVD/CD-ROM drive and the Red Hat
     Enterprise Linux CD-ROMs or DVD.

  For the remaining installation methods, we need a boot CD-ROM
  (use the linux askmethod boot option).
     Hard Drive
     If we have copied the Red Hat Enterprise Linux ISO images to a local
     hard drive.
     NFS
     If we are installing directly from an NFS server, use this method.
     FTP
     If we are installing directly from an FTP server, use this method.
     HTTP
     If we are installing directly from an HTTP (Web) server, use this
     method.
Installation – from DVD/CD-ROMs
To install Red Hat Enterprise Linux from a DVD/CD-ROM,
place the DVD or CD #1 in your DVD/CD-ROM drive and
boot your system from the DVD/CD-ROM.
Just press the Enter key at the boot: prompt for a GUI installation.
Type linux text at the boot: prompt for a text-mode installation.



Welcome screen
for GUI
Installation
Installation – cont’d…




Language Selection
The language we select here will
become the default language for the
operating system once it is installed.


                                         Keyboard configuration
Installation – cont’d…
Disk Partitioning
Setup
Partitioning allows us
to divide our hard drive
into isolated sections,
where each section
behaves as its own
hard drive. Partitioning
is particularly useful if
we run multiple
operating systems.
Installation – cont’d…
If we chose to create a
custom layout, we must
tell the installation
program where to install
Red Hat Enterprise Linux.
This is done by defining
mount points for one or
more disk partitions. We
may also need to create
and/or delete partitions at
this time. The partitioning
tool used by the installation
program is Disk Druid.
Installation – cont’d…
Adding Partitions :
To add a new partition, select
the New button. A dialog box
appears.


Edit Partitions :
To edit a partition, select the
Edit button or double-click on
the existing partition.


Delete Partitions :
To delete a partition, highlight it
in the Partitions section and
click the Delete button. Confirm
the deletion when prompted.
Installation – cont’d…
Boot Loader Configuration :
To boot the system without boot
media, we usually need to install a
boot loader. A boot loader is the
first software program that runs
when a computer starts. It is
responsible for loading and
transferring control to the operating
system kernel software. The kernel,
in turn, initializes the rest of the
operating system.
GRUB (Grand Unified Boot loader),
which is installed by default, is a
very powerful boot loader.
Installation – cont’d…
Boot Loader Installation:
o The Master Boot Record (MBR)
This is the recommended place to install
a boot loader, unless the MBR already
starts another operating system loader.
The MBR is a special area on our hard
drive that is automatically loaded by
computer's BIOS, and is the earliest
point at which the boot loader can take
control of the boot process.


o The First Sector of Boot
  Partition
This is recommended if you are already
using another boot loader on your
system. In this case, your other boot
loader takes control first. You can then
configure that boot loader to start
GRUB, which then boots Red Hat
Enterprise Linux.
Installation – cont’d…
Network Configuration:
The installation program automatically
detects any network devices the system
has and displays them in the Network
Devices list.
Once a network device is selected, click
Edit. From the Edit Interface pop-up
screen, you can choose to configure the
IP address and Netmask (for IPv4 -
Prefix for IPv6) of the device via DHCP
(or manually if DHCP is not selected)
and you can choose to activate the
device at boot time.
Installation – cont’d…
Time Zone Configuration
Set your time zone by
selecting the city closest to
your computer's physical
location. Click on the map
to zoom in to a particular
geographical region of the
world.
Installation – cont’d…
Set Root Password
Setting up a root account
and password is one of
the most important steps
during our installation.
The root account is used
to install packages,
upgrade RPMs, and
perform most system
maintenance.
Logging in as root gives
us complete control over
our system.
Installation – cont’d…
Package Group Selection
To select a component,
click on the checkbox
beside it (see “Package
Group Selection”).
Select the Customize
now option on the
screen. Clicking
Next takes you to the
Package Group
Selection screen.
Installation – cont’d…
Package Group Selection
Select each component you
wish to install.
Once a package group has
been selected, if optional
components are available
you can click on
Optional packages to view
which packages are
installed by default, and to
add or remove optional
packages from that group. If
there are no optional
components this button will
be disabled.
Installation – cont’d…
Prepare to Install
A screen preparing you for the installation of Red Hat Enterprise Linux now
appears.
For your reference, a complete log of your installation can be found in
/root/install.log once you reboot your system.
To cancel the installation process, press your computer's Reset button or use
the Control-Alt-Delete key combination to restart your machine.
Installation – cont’d…
Installation Complete

Congratulations! Your Red Hat Enterprise Linux installation is now
complete!

The installation program prompts you to prepare your system for reboot.
Remember to remove any installation media if it is not ejected automatically
upon reboot.

After your computer's normal power-up sequence has completed, the
graphical boot loader prompt appears at which you can do any of the
following things:

Press Enter — causes the default boot entry to be booted.
File Systems
File system refers to the files and directories stored on
a computer. A file system can have different formats
called file system types. These formats determine how
the information is stored as files and directories. Some
file system types store redundant copies of the data,
while some file system types make hard drive access
faster. This part discusses the ext3, swap, RAID, and
LVM file system types. It also discusses the parted
utility to manage partitions and access control lists
(ACLs) to customize file permissions.
File System Structure
File System Hierarchy Standard (FHS)

Red Hat Enterprise Linux uses the File system Hierarchy Standard (FHS)
file system structure,
which defines the names, locations, and permissions for many file types and
directories.




 bin      etc      usr   home     mnt     var    dev   sbin    boot   root
FHS Organization
FHS Organization
The directories and files noted here are a small subset of those specified by
the FHS document.

Refer to the latest FHS document for the most complete information.

The complete standard is available online at http://www.pathname.com/fhs/

The /boot/ Directory
The /boot/ directory contains static files required to boot the system, such as
the Linux kernel.
These files are essential for the system to boot properly.
FHS Organization
The /dev/ Directory
The /dev/ directory contains device nodes that either represent devices that
are attached to the system or virtual devices that are provided by the kernel.
These device nodes are essential for the system to function properly. The
udev daemon takes care of creating and removing all these device nodes in
/dev/.
/dev/hda - the master device on the primary IDE channel
/dev/hdb - the slave device on the primary IDE channel


The /etc/ Directory
The /etc/ directory is reserved for configuration files that are local to the
machine. No binaries are to be placed in /etc/. Any binaries that were once
located in /etc/ should be placed into /sbin/ or /bin/.
FHS Organization
The /lib/ Directory
The /lib/ directory should contain only those libraries needed to execute the
binaries in /bin/ and /sbin/. These shared library images are particularly
important for booting the system and executing commands within the root
file system.

The /media/ Directory
The /media/ directory contains subdirectories used as mount points for
removable media such as USB storage media, DVDs, CD-ROMs, and Zip
disks.

The /mnt/ Directory
The /mnt/ directory is reserved for temporarily mounted file systems, such
as NFS file system mounts. For all removable media, please use the
/media/ directory. Automatically detected removable media are mounted
in the /media/ directory.
FHS Organization
The /opt/ Directory
The /opt/ directory provides storage for most application software packages.
A package placing files in the /opt/ directory creates a directory bearing the
same name as the package. This directory, in turn, holds files that otherwise
would be scattered throughout the file system, giving the system administrator
an easy way to determine the role of each file within a particular package.

The /proc/ Directory
The /proc/ directory contains special files that either extract information from
or send information to the kernel. Examples include system memory, cpu
information, hardware configuration etc.
A great variety of data is available within /proc/, and this directory can be
used in many ways to communicate with the kernel.
FHS Organization
The /sys/ Directory
The /sys/ directory utilizes the new sysfs virtual file system specific to the
kernel. With the increased support for hot plug hardware devices in the
kernel, the /sys/ directory contains information similarly held in /proc/, but
displays a hierarchical view of specific device information in regards to hot
plug devices.

The /usr/ Directory
The /usr/ directory is for files that can be shared across multiple machines.
The /usr/ directory is often on its own partition and is mounted read-only. At
a minimum, the following directories should be subdirectories of /usr/:
/usr
|- bin/
|- etc/
|- games/
|- include/
|- kerberos/
|- lib/
|- libexec/
|- local/
|- sbin/
|- share/


The /usr/local/ Directory
The /usr/local hierarchy is for use by the system administrator when
installing software locally. It needs to be safe from being overwritten when
the system software is updated. It may be used for programs and data that
are shareable among a group of hosts, but not found in /usr.
FHS Organization
The /sbin/ Directory
The /sbin/ directory stores executables used by the root user. The
executables in /sbin/ are used at boot time, for system administration and to
perform system recovery operations. Of this directory, the FHS says:
/sbin contains binaries essential for booting, restoring, recovering, and/or
repairing the system in addition to the binaries in /bin. Programs executed
after /usr/ is known to be mounted (when there are no problems) are
generally placed into /usr/sbin. Locally-installed system administration
programs should be placed into /usr/local/sbin.
At a minimum, the following programs should be in /sbin/:
arp, clock, halt, init, fsck.*, grub, ifconfig, mingetty, mkfs.*, mkswap, reboot,
route, shutdown, swapoff


The /srv/ Directory
The /srv/ directory contains site-specific data served by your system running
Red Hat Enterprise Linux. This directory gives users the location of data
files for a particular service, such as FTP, WWW, or CVS. Data that only
pertains to a specific user should go in the /home/ directory.
FHS Organization
The /var/ Directory

Since the FHS requires Linux to mount /usr/ as read-only, any programs that
write log files or need spool/ or lock/ directories should write them to the
/var/ directory. The FHS states /var/ is for:
...variable data files. This includes spool directories and files, administrative
and logging data, and transient and temporary files.
Below are some of the directories found within the /var/ directory:

/var
|- account/
|- arpwatch/
|- cache/
|- crash/
|- db/
|- empty/
|- ftp/
|- gdm/
|- kerberos/
|- lib/

System log files, such as messages and lastlog, go in the /var/log/ directory.
The /var/lib/rpm/ directory contains RPM system databases. Lock files go in
the /var/lock/ directory, usually in directories for the program using the file.
The /var/spool/ directory has subdirectories for programs in which data files
are stored.
The ext3 File System
Features of ext3
The ext3 file system is essentially an enhanced version of the ext2 file
system. These improvements provide the following advantages:
Availability
After an unexpected power failure or system crash (also called an unclean
system shutdown),each mounted ext2 file system on the machine must be
checked for consistency by the e2fsck program. This is a time-consuming
process that can delay system boot time significantly, especially with large
volumes containing a large number of files. During this time, any data on the
volumes is unreachable. The journaling provided by the ext3 file system
means that this sort of file system check is no longer necessary after an
unclean system shutdown. The only time a consistency check occurs using
ext3 is in certain rare hardware failure cases, such as hard drive failures.
The time to recover an ext3 file system after an unclean system shutdown
does not depend on the size of the file system or the number of files; rather,
it depends on the size of the journal used to maintain consistency. The
default journal size takes about a second to recover, depending on the
speed of the hardware.
The ext3 File System
Data Integrity
The ext3 file system prevents loss of data integrity in the event that an
unclean system shutdown occurs. The ext3 file system allows you to choose
the type and level of protection that your data receives. By default, ext3
volumes are configured to keep a high level of data consistency with regard
to the state of the file system.
Speed
Despite writing some data more than once, ext3 has a higher throughput in
most cases than ext2 because ext3's journaling optimizes hard drive head
motion. You can choose from three journaling modes to optimize speed, but
doing so means trade-offs in regard to data integrity if the system were to fail.
Easy Transition
It is easy to migrate from ext2 to ext3 and gain the benefits of a robust
journaling file system without reformatting.
Creating an ext3 File System
After installation, it is sometimes necessary to create a new ext3 file system.
For example, if you
add a new disk drive to the system, you may want to partition the drive and
use the ext3 file system.
The steps for creating an ext3 file system are as follows:
1. Format the partition with the ext3 file system using mkfs.
2. Label the partition using e2label.
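The two steps above can be rehearsed safely on a disk image file instead of a real partition; the image name label-demo.img and the label /data below are invented for this sketch, and -F lets mkfs.ext3 accept a regular file:

```shell
# Sketch: practice formatting and labeling on an image file
# (label-demo.img and the /data label are made-up names for this demo).
dd if=/dev/zero of=label-demo.img bs=1M count=64 status=none

# Step 1: format with ext3 (-F forces mkfs to accept a regular file)
mkfs.ext3 -F -q label-demo.img

# Step 2: label the file system, then read the label back
e2label label-demo.img /data
e2label label-demo.img    # prints: /data
```

On real hardware the target would be a device node such as /dev/sda6 rather than an image file.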
Converting to an ext3 File System
The tune2fs utility allows you to convert an ext2 filesystem to ext3.
To convert an ext2 filesystem to ext3, log in as root and type the following
command in a terminal:
/sbin/tune2fs -j <block_device>
where <block_device> contains the ext2 filesystem you wish to convert.
A valid block device could be one of two types of entries:
A mapped device — a logical volume in a volume group, for example,
/dev/mapper/VolGroup00-LogVol02.
A static device — a traditional storage volume, for example, /dev/hdbX,
where hdb is a storage device name and X is the partition number.
Issue the df command to display mounted file systems.
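The conversion can be tried on an image file rather than a live partition; convert-demo.img is an invented name, and dumpe2fs confirms that tune2fs added the has_journal feature:

```shell
# Sketch: convert ext2 -> ext3 on an image file (convert-demo.img is a
# made-up name for this demo; a real conversion targets a block device).
dd if=/dev/zero of=convert-demo.img bs=1M count=64 status=none
mkfs.ext2 -F -q convert-demo.img          # start with a plain ext2 filesystem
tune2fs -j convert-demo.img >/dev/null    # add a journal (ext2 -> ext3)

# The feature list should now include has_journal
dumpe2fs -h convert-demo.img 2>/dev/null | grep 'Filesystem features'
```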
Reverting to an ext2 File System
If you wish to revert a partition from ext3 to ext2 for any reason, you must first
unmount the partition by logging in as root and typing,
umount /dev/mapper/VolGroup00-LogVol02
Next, change the file system type to ext2 by typing the following command as
root:
/sbin/tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02
Check the partition for errors by typing the following command as root:
/sbin/e2fsck -y /dev/mapper/VolGroup00-LogVol02
Then mount the partition again as an ext2 file system by typing:
mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point
In the above command, replace /mount/point with the mount point of the
partition.
Next, remove the .journal file at the root level of the partition by changing to the
directory where it is mounted and typing:
rm -f .journal
You now have an ext2 partition.
If you want to permanently change the partition to ext2, remember to update the
/etc/fstab file.
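The tune2fs/e2fsck part of this sequence can likewise be rehearsed on an image file (revert-demo.img is an invented name); the mount, umount, and .journal cleanup steps only apply to a mounted partition and are omitted here:

```shell
# Sketch: revert ext3 -> ext2 on an image file (revert-demo.img is a
# made-up name), following the tune2fs/e2fsck sequence above.
dd if=/dev/zero of=revert-demo.img bs=1M count=64 status=none
mkfs.ext3 -F -q revert-demo.img

tune2fs -O ^has_journal revert-demo.img >/dev/null   # drop the journal
e2fsck -fy revert-demo.img >/dev/null || true        # check for errors

# has_journal should no longer appear in the feature list
dumpe2fs -h revert-demo.img 2>/dev/null | grep 'Filesystem features'
```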
Swap Space
What is Swap Space?

Swap space in Linux is used when the amount of physical memory (RAM) is
full. If the system needs more memory resources and the RAM is full,
inactive pages in memory are moved to the swap space. While swap space
can help machines with a small amount of RAM, it should not be considered
a replacement for more RAM. Swap space is located on hard drives, which
have a slower access time than physical memory.
Swap space can be a dedicated swap partition (recommended), a swap file,
or a combination of swap partitions and swap files.
Swap should equal 2x physical RAM for up to 2 GB of physical RAM, and
then an additional 1x physical RAM for any amount above 2 GB, but never
less than 32 MB.
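The sizing rule above is simple arithmetic; a small sketch, with an invented function name recommended_swap_mb, computes it (all values in MB):

```shell
# Sketch of the sizing rule: 2x RAM up to 2 GB, then 1x RAM for any
# amount beyond 2 GB, but never less than 32 MB. The function name
# recommended_swap_mb is invented for this demo.
recommended_swap_mb() {
    ram_mb=$1
    if [ "$ram_mb" -le 2048 ]; then
        swap_mb=$((ram_mb * 2))
    else
        # 2x the first 2 GB plus 1x the remainder
        swap_mb=$((2 * 2048 + (ram_mb - 2048)))
    fi
    if [ "$swap_mb" -lt 32 ]; then
        swap_mb=32
    fi
    echo "$swap_mb"
}

recommended_swap_mb 1024    # prints: 2048
recommended_swap_mb 4096    # prints: 6144
```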
Swap Space
Adding Swap Space
Sometimes it is necessary to add more swap space after
installation. For example, you may upgrade the amount of RAM
in your system from 128 MB to 256 MB, but there is only 256
MB of swap space. It might be advantageous to increase the
amount of swap space to 512 MB if you perform memory-
intense operations or run applications that require a large
amount of memory.

Creating Another Swap Space
To create and manipulate swap space, use the mkswap,
swapon, and swapoff commands. mkswap initializes a swap
area on a device (the usual method) or a file. swapon enables
the swap area for use, and swapoff disables the swap space.
Swap Space
1. Create a partition of size 256 MB (here /dev/sda6).
2. Format the new swap space:
# mkswap /dev/sda6
3. Change the label of the swap partition:
# e2label /dev/sda6 /swap-sda6
4. Enable the new swap space:
# swapon /dev/sda6
5. Set the priority:
The original swap has priority 1 by default, and newly added swap areas
get priority -1; we can raise them to 1 in /etc/fstab:
# vi /etc/fstab
/dev/sda3          swap      swap     defaults,pri=1 0 0
/swap-sda6         swap      swap     defaults,pri=1 0 0 [Reboot the System]
6. Display the swap partitions:
# cat /proc/swaps
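A swap file can stand in for a partition; a sketch using a small scratch file (demo.swap is an invented name). swapon and the fstab entry require root, so only the mkswap step is run here:

```shell
# Sketch: initialize a swap file (demo.swap is a made-up name). A real
# swap file would be much larger and enabled with "swapon demo.swap"
# (root only), plus an /etc/fstab entry carrying a pri= option.
dd if=/dev/zero of=demo.swap bs=1M count=4 status=none
chmod 600 demo.swap                 # mkswap warns about looser permissions
mkswap -L demo-swap demo.swap       # writes the swap signature and label

# The SWAPSPACE2 signature confirms the area was initialized
grep -qa SWAPSPACE2 demo.swap && echo "swap signature present"
```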
Expanding Disk Capacity
Introduction
The lack of available disk storage frequently plagues Linux systems
administrators. The most common reasons for this are expanding
databases, increasing numbers of users, and the larger number of tasks
your Linux server is expected to perform until a replacement is found.
This section explores how to add a disk to a Linux system by moving
directories from a full partition to an empty one made available by the new
disk, and then linking the directory structures of the two disks together.
We add a hard disk with only one partition and then explain how to
migrate data from the full disk to the new one.
Expanding Disk Capacity
Determining The Disk Types
Linux stores the names of all known disk partitions in the /proc/partitions file.
The entire hard disk is represented by an entry with a minor number of 0,
and all the partitions on the drive are sequentially numbered after that. In the
example, the system has two hard disks; disk /dev/hda has been partitioned,
but the new disk (/dev/hdb) needs to be prepared to accept data.
     [root@localhost ~]# cat /proc/partitions
     major minor   #blocks   name
        3     0     7334145  hda
        3     1      104391  hda1
        3     2     1052257  hda2
        3     3     2040255  hda3
        3     4           1  hda4
        3     5     3582463  hda5
        3     6      554211  hda6
       22     0    78150744  hdb
     [root@localhost ~]#
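The whole-disk entries can be picked out mechanically, since in this layout they carry minor number 0. A sketch that embeds sample data mirroring the listing above, so it runs anywhere (partitions.sample is an invented file name):

```shell
# Sketch: list whole disks (minor number 0) from /proc/partitions-style
# output; the sample data is embedded so no real /proc is needed.
cat <<'EOF' > partitions.sample
major minor  #blocks   name
   3     0    7334145  hda
   3     1     104391  hda1
  22     0   78150744  hdb
EOF

# Skip the header line; keep rows whose minor number is 0
awk 'NR > 1 && $4 != "" && $2 == 0 { print $4 }' partitions.sample
```

This prints hda and hdb, one per line; on a live system the same awk program reads /proc/partitions directly.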
Expanding Disk Capacity
Preparing Partitions on New Disks
Linux partition preparation is very similar to that in a Windows environment,
because both operating systems share the fdisk partitioning utility. The
steps are:

1) The first step in adding a new disk is to partition it in preparation for
adding a filesystem to it. Type the fdisk command followed by the name of
the disk. You want to run fdisk on the /dev/hdb disk, so the command is:

[root@localhost~]# fdisk /dev/hdb
Command (m for help): p
Disk /dev/hdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders Units = cylinders
of 16065 * 512 = 8225280 bytes
 Device          Boot     Start   End      Blocks   Id      System

Command (m for help):
Expanding Disk Capacity
Command (m for help): n
Command action
e        extended
p       primary partition (1-4)
Partition number (1-4): 1
First cylinder (1-9729, default 1):<RETURN>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-9729, default 9729):
  Run the print (p) command to confirm that you successfully created
the partition.

Command (m for help): p
Disk /dev/hdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot    Start  End     Blocks         Id      System
/dev/hdb1      1      9726    78148161       83      Linux
Command (m for help):
Expanding Disk Capacity
Command (m for help): p
Disk /dev/hda: 7510 MB, 7510164480 bytes
255 heads, 63 sectors/track, 913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot   Start   End    Blocks     Id   System
/dev/hda1   *       1     13     104391    83   Linux
/dev/hda2          14    144    1052257+   83   Linux
/dev/hda3         145    398    2040255    82   Linux swap
/dev/hda4         399    913    4136737+    5   Extended
/dev/hda5         399    844    3582463+   83   Linux
/dev/hda6         845    913     554211    83   Linux
Command (m for help):
Changes won't be made to the disk's partition table until you use the w command to
write, or save, the changes. Do that now, and, when finished, exit with the q
command.
Command (m for help): w
Command (m for help): q
After this is complete you'll need to verify your work and start migrating your data to
the new disk. These steps will be covered next.
Expanding Disk Capacity
Verifying Your New Partition
You can take a look at the /proc/partitions file or use the fdisk -l command to see the
changes to the disk partition structure of your system:
[root@localhost ~]# cat /proc/partitions
major minor   #blocks   name
...
...
  22     0   78150744   hdb
  22     1   78150744   hdb1
[root@localhost ~]# fdisk -l
...
...
Disk /dev/hdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot   Start   End     Blocks     Id   System
/dev/hdb1            1   9729   76051710    83   Linux
Expanding Disk Capacity
Putting A Directory Structure On Your New Partition
You now need to format the partition, giving it a new directory structure by
using the mkfs command.

[root@localhost]# mkfs -t ext3 /dev/hdb1
Next, you must create a special mount point directory to which the new
partition will be attached. Create the directory /home/hdb1 for this purpose.

[root@localhost]# mkdir /home/hdb1
When Linux boots, it searches the /etc/fstab file for a list of all partitions and
their mounting characteristics, and then it mounts the partitions
automatically. You'll have to add an entry for your new partition that looks
like this:
# vi /etc/fstab
---
/dev/hdb1       /home/hdb1        ext3    defaults 1 2
Expanding Disk Capacity
Migrating Data Over To your New Partition
As you remember from investigating with the df -k command,
the /var partition is almost full.

[root@localhost ~]# df -k
Filesystem   1K-blocks     Used   Available  Use%  Mounted on
/dev/hda3       505636   118224      361307   25%  /
/dev/hda1       101089    14281       81589   15%  /boot
none             63028        0       63028    0%  /dev/shm
/dev/hda5       248895     6613      229432    3%  /tmp
/dev/hda7      3304768  2720332      416560   87%  /usr
/dev/hda2      3304768  3300536        4232   99%  /var
[root@localhost ~]#
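Spotting nearly-full file systems in df output can be automated with awk; a sketch over embedded sample data mirroring the listing above (df.sample and the 90% threshold are invented for this demo):

```shell
# Sketch: flag file systems at or above 90% usage from df -k style
# output (df.sample is a made-up file name; the rows mirror the slide).
cat <<'EOF' > df.sample
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 3304768 3300536 4232 99% /var
/dev/hda7 3304768 2720332 416560 87% /usr
/dev/hda1 101089 14281 81589 15% /boot
EOF

# Strip the % sign from the Use% column and compare numerically
awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 90) print $6 " is " $5 "% full" }' df.sample
# prints: /var is 99% full
```

On a live system, `df -k | awk ...` with the same program gives the same report.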


As a solution, the /var partition will be expanded to the new /dev/hdb1 partition mounted
on the /home/hdb1 directory mount point. To migrate the data, use these steps:
1) Back up the data on the partition you are about to work on.
Expanding Disk Capacity
Rename the /var/transactions directory to /var/transactions-save to make
sure you have an easy-to-restore backup of the data, not just the tapes.
# mv /var/transactions /var/transactions-save

Create a new, empty /var/transactions directory; this will later act as a mount
point.
# mkdir /var/transactions

Copy the contents of the /var/transactions-save directory to the root
directory of /dev/hdb1, which is actually /home/hdb1.

# cp -a /var/transactions-save/* /home/hdb1
Unmount the new /dev/hdb1 partition.

# umount /home/hdb1
Expanding Disk Capacity
Edit the /etc/fstab file, removing our previous entry for /dev/hdb1 replacing it
with one using the new mount point.
# vi /etc/fstab
#
#/dev/hdb1    /home/hdb1                          ext3      defaults 1 2
/dev/hdb1    /var/transactions                    ext3      defaults 1 2

Remount /dev/hdb1 on the new mount point using the mount -a command, which reads
/etc/fstab and automatically mounts any entries that are not mounted already.
# mount -a
Test to make sure that the contents of the new /var/transactions directory are identical
to /var/transactions-save.
Make sure your applications are working correctly and delete both the /var/transactions-save
directory and the /home/hdb1 mount point directory at some later date.
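The mv / mkdir / cp -a / verify sequence can be rehearsed with throwaway directories: in this sketch var-demo stands in for /var and newdisk-demo for the mounted /home/hdb1 (both names invented). Note that copying source/. rather than source/* also picks up dot files:

```shell
# Rehearsal of the migration steps with scratch directories:
# var-demo plays /var, newdisk-demo plays the mounted /home/hdb1.
mkdir -p var-demo/transactions newdisk-demo
echo 'sample record' > var-demo/transactions/data.txt

mv var-demo/transactions var-demo/transactions-save   # easy-to-restore backup
mkdir var-demo/transactions                           # future mount point
cp -a var-demo/transactions-save/. newdisk-demo/      # -a keeps perms/times

# No diff output means the copy is identical to the backup
diff -r var-demo/transactions-save newdisk-demo && echo "copies match"
```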
This exercise showed you how to migrate the entire contents of a subdirectory to a new disk.
Linux also allows you to merge partitions together, to create a larger combined one. The
reasons and steps for doing so will be explained next.
Redundant Array of Independent Disks(RAID)
Introduction
The main goals of using redundant arrays of inexpensive disks (RAID) are
to improve disk data performance and provide data redundancy.
RAID can be handled either by the operating system software or it may be
implemented via a purpose built RAID disk controller card without having to
configure the operating system at all. This section will explain how to
configure the software RAID schemes supported by Red Hat.




RAID Types
Whether hardware- or software-based, RAID can be configured using a
variety of standards. Take a look at the most popular.
Redundant Array of Independent Disks(RAID)
Configuring Software RAID
RAID can be configured during the installation process using the Disk
Druid application.
                           Creating the RAID Partitions
These examples use two 9.1 GB SCSI drives (/dev/sda and /dev/sdb) to
illustrate the creation of a simple RAID 1 configuration.
On the Disk Partitioning Setup screen, select
Manually partition with Disk Druid.
Redundant Array of Independent Disks(RAID)
1.   In Disk Druid, choose RAID to enter the software RAID creation screen.
Redundant Array of Independent Disks(RAID)
2. Choose Create a software
RAID partition to create a
RAID partition as shown in the
figure “RAID Partition
Options”. Note that no other
RAID options (such as
entering a mount point) are
available until RAID partitions,
as well as RAID devices, are
created.
Redundant Array of Independent Disks(RAID)
3. A software RAID partition
must be constrained to one
drive. For Allowable Drives,
select the drive to use for
RAID. If you have multiple
drives, by default all drives
are selected and you must
deselect the drives you do
not want.
Redundant Array of Independent Disks(RAID)
Repeat these steps to create
as many partitions as needed
for your RAID setup. Notice
that not all the partitions have
to be RAID partitions. For
example, you can configure
only the /boot/ partition as a
software RAID device, leaving
the root partition (/), /home/,
and swap as regular file
systems. “RAID 1 Partitions
Ready, Pre-Device and Mount
Point Creation” shows
successfully allocated space
for the RAID 1 configuration
(for /boot/), which is now
ready for RAID device and
mount point creation:
Redundant Array of Independent Disks(RAID)
Creating the RAID Devices and Mount Points
Once you create all of your partitions as Software RAID partitions, you must
create the RAID device and mount point.
1. Select the RAID button on the Disk Druid main partitioning screen.
2. “RAID Options” appears. Select Create a RAID device.
Redundant Array of Independent Disks(RAID)
3. Next, “Making a RAID Device
   and Assigning a Mount Point”
   appears, where you can make
   a RAID device and assign a
   mount point.

4. Select a mount point.

5. Choose the file system type for
   the partition, such as a
   traditional static ext2/ext3 file
   system.

6. Select a device name such as
   md0 for the RAID device.

7. Choose your RAID level. You
   can choose from RAID 0, RAID
   1, and RAID 5.
Redundant Array of Independent Disks(RAID)
8. The RAID partitions created appear in the RAID Members list. Select
    which of these partitions should be used to create the RAID device.
9. If configuring RAID 1 or RAID 5, specify the number of spare partitions. If
    a software RAID partition fails, the spare is automatically used as a
    replacement. For each spare you want to specify, you must create an
    additional software RAID partition (in addition to the partitions for the
    RAID device). Select the partitions for the RAID device and the
    partition(s) for the spare(s).
10. After clicking OK, the RAID device appears in the Drive Summary
    list.
11. Repeat this chapter's entire process for configuring additional partitions,
    devices, and mount points, such as the root partition (/), /home/, or swap.
    After completing the entire configuration, the figure as shown below,
    “Final Sample RAID
    Configuration” resembles the default configuration, except for the use of
    RAID.
Redundant Array of Independent Disks(RAID)
       Final Sample RAID Configuration
Redundant Array of Independent Disks(RAID)
Configuring Software Raid After Installation
Only RAID levels 0, 1, and 5 can be implemented using software RAID. In
Linux this can be done using the mdadm command, which administers the
kernel's md (multiple device) arrays.
First of all we have to prepare our disks for the implementation of RAID; for
that we have to make three or more partitions on different disks:

Prepare The Partitions With FDISK

You have to change each partition in the RAID set to be of type fd (Linux
raid autodetect), and you can do this with fdisk. Here is an example using
/dev/sda:
[root@localhost]# fdisk /dev/sda

Command (m for help):
Redundant Array of Independent Disks(RAID)
Command (m for help): m
...
...
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
...
...
Command (m for help):
Set The ID Type To FD
Partition /dev/sda1 is the first partition on disk /dev/sda. Modify its type
using the t command, specifying the partition number and type code. You
can also use the L command to get a full listing of ID types in case you
forget.
Redundant Array of Independent Disks(RAID)
Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): L
...
...
...
16 Hidden FAT16             61        SpeedStor           f2 DOS secondary
17 Hidden HPFS/NTF          63        GNU HURD or Sys     fd Linux raid auto
18 AST SmartSleep           64        Novell Netware      fe LANstep
1b Hidden Win95 FA          65        Novell Netware      ff BBT
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help):

Make Sure The Change Occurred

Use the p command to get the new proposed partition table:
Redundant Array of Independent Disks(RAID)
Command (m for help): p
Disk /dev/sda: 4311 MB, 4311982080 bytes
16 heads, 63 sectors/track, 8355 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Device Boot          Start     End      Blocks     Id       System
/dev/sda1            1         4088     2060320+   83   Linux
/dev/sda2            4089      5713     819000     83   Linux
/dev/sda3            4089      5713     819000     83   Linux
/dev/sda4            6608      8355     880992      5   Extended
/dev/sda5            6608      7500     450040+    fd   Linux raid autodetect
/dev/sda6            7501      8355     430888+    fd   Linux raid autodetect
Command (m for help): w
Use the w command to permanently save the changes to disk /dev/sda
Repeat For The Other Partitions
For the sake of brevity, I won't show the process for the other partitions. It's
enough to know that the steps for changing the IDs for /dev/sda6 and
/dev/sdb5 are very similar.
Redundant Array of Independent Disks(RAID)
[root@localhost ~]# fdisk /dev/sdb

Disk /dev/sdb: 9175 MB, 9175979520 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot    Start      End          Blocks        Id       System
/dev/sdb1 *       1       609          4891761       83       Linux
/dev/sdb2        610      1115         4064445       5        Extended
/dev/sdb5        610       622         104391        fd       Linux raid autodetect

Command (m for help): w
Preparing the RAID Set
Now that the partitions have been prepared, we have to merge them into a new
RAID partition that we'll then have to format and mount. Here's how it's done.
Redundant Array of Independent Disks(RAID)
Create the RAID Set
You use the mdadm command with the --create option to create the RAID
set. In this example we use the --level option to specify RAID 1, and the --
raid-devices option to define the number of partitions to use.
The syntax for creation of raid is :
[root@localhost ~]# mdadm -C /dev/md0 -l1 -n2 /dev/sda5 /dev/sdb5

mdadm: array /dev/md0 started.
-C: Create
-l: RAID level, i.e. 0, 1, 5
-n: Number of disks used
 OR
[root@localhost ~]# mdadm -C /dev/md0 -l1 -n2 missing /dev/sdb5
mdadm: array /dev/md0 started.
missing : tells mdadm to build the array in degraded mode, with that slot
left empty; the missing disk can be added later.
Redundant Array of Independent Disks(RAID)
OR

[root@localhost ~]# mdadm --create /dev/md0 --level=raid1 --raid-devices=2
/dev/sda5 /dev/sdb5
mdadm: array /dev/md0 started.

OR

[root@localhost ~]# mdadm -C /dev/md0 -l1 -n2 /dev/sda5 /dev/sdb5 -x1
/dev/sda6
mdadm: array /dev/md0 started.
-x1: adds a spare disk during raid creation.

Now make the ext3 filesystem for /dev/md0


[root@localhost ~]# mkfs.ext3 /dev/md0
Redundant Array of Independent Disks(RAID)
Confirm RAID Is Correctly Initialized

The /proc/mdstat file provides the current status of all RAID devices.
Confirm that the initialization is finished by inspecting the file and making
sure that there are no initialization related messages. If there are, then wait
until there are none.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]

md0 : active raid1 sda5[1] sdb5[0]
          104320 blocks [2/2] [UU]

unused devices: <none>
Redundant Array of Independent Disks(RAID)
                                   OR
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Fri Jul 13 17:28:13 2007
     Raid Level : raid1
     Array Size : 104320 (101.88 MiB 106.82 MB)
    Device Size : 104320 (101.88 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent
    Update Time : Fri Jul 13 17:28:40 2007
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
     Number   Major   Minor   RaidDevice State
       0       8       23        0      active sync   /dev/sda5
       1       8       24        1      active sync   /dev/sdb5
           UUID : e5d221a1:323fa424:e98e53dc:395326af
         Events : 0.4
[root@localhost ~]#
Redundant Array of Independent Disks(RAID)
Let us make /etc/mdadm.conf and an initial ramdisk with raid support, so
that the kernel can understand the raid at boot time.

[root@localhost ~]# mdadm --detail --scan > /etc/mdadm.conf
[root@localhost ~]# mkinitrd -v --preload=raid1 /boot/initrd-`uname -r`.img.raid `uname -r`


If more raid levels are used then they should also be preloaded, like

[root@localhost ~]# mkinitrd -v --preload=raid0 --preload=raid1 --preload=raid5 /boot/initrd-`uname -r`.img.raid `uname -r`

This creates an initrd image file initrd-2.6.9-11.EL.img.raid. Now make the
necessary changes in the /etc/grub.conf file to instruct grub to load the initrd
image file with raid support at boot time. Add the following line in
grub.conf :
 initrd /initrd-2.6.9-11.EL.img.raid
Redundant Array of Independent Disks(RAID)
• Create A Mount Point For The RAID Set
• The next step is to create a mount point for /dev/md0. In this case we'll
  create one called /raid-data
• [root@localhost]# mkdir /raid-data
• [root@localhost ~]# mount /dev/md0 /raid-data
• Edit The /etc/fstab File
• The /etc/fstab file lists all the partitions that are to be mounted when the
  system boots. Add an entry for the RAID set, the /dev/md0 device.

• /dev/md0       /raid-data        ext3            defaults 1 2
Redundant Array of Independent Disks(RAID)
Raid Failure Testing
Testing after adding an extra disk to the raid :
[root@localhost ~]# mdadm /dev/md0 -a /dev/sda6
mdadm: hot added /dev/sda6
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda6[2](S) sdb5[1] sda5[0]
      104320 blocks [2/2] [UU]
 unused devices: <none>
Testing the failure of one disk
[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb5
mdadm: set /dev/sdb5 faulty in /dev/md0
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda6[2] sdb5[1](F) sda5[0]
      104320 blocks [2/1] [U_]
      [============>........] recovery = 64.7% (67904/104320) finish=0.0min
speed=33952K/sec
unused devices: <none>
Redundant Array of Independent Disks(RAID)
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Fri Jul 13 16:39:06 2007
     Raid Level : raid1
     Array Size : 104320 (101.88 MiB 106.82 MB)
    Device Size : 104320 (101.88 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
     Update Time : Fri Jul 13 16:54:42 2007
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0
     Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sda5
       1       8       22        1      active sync   /dev/sda6
       2       8       37       -1      faulty   /dev/sdb5
           UUID : cd8563c9:d52e18f5:8deb3cc3:6304ce1c
         Events : 0.223
Redundant Array of Independent Disks(RAID)
[root@localhost ~]# mdadm /dev/md0 -r /dev/sdb5
mdadm: hot removed /dev/sdb5
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda6[1] sda5[0]
    104320 blocks [2/2] [UU]

unused devices: <none>
LVM(Logical Volume Manager)
The Logical Volume Manager (LVM) enables you to resize your partitions
without having to modify the partition tables on your hard disk. This is most useful
when you find yourself running out of space on a filesystem and want to expand into
a new disk partition versus migrating all or a part of the filesystem to a new disk.


Physical Volume: A physical volume (PV) is another name for a regular physical
disk partition that is used or will be used by LVM.


Volume Group: Any number of physical volumes (PVs) on different disk drives
can be lumped together into a volume group (VG). Under LVM, volume groups are
analogous to a virtual disk drive.


Logical Volumes: Volume groups must then be subdivided into logical volumes.
Each logical volume can be individually formatted as if it were a regular Linux
partition. A logical volume is, therefore, like a virtual partition on your virtual disk
drive.
Physical Extent: Real disk partitions are divided into chunks of data called physical
extents (PEs) when you add them to a logical volume. PEs are important as you
usually have to specify the size of your volume group not in gigabytes, but as a
number of physical extents.
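For instance, with a 4 MiB extent size (a common default, assumed here), sizing a volume in extents is simple arithmetic; the volume group Song and LV name OldSong follow the examples used in this section:

```shell
# With a 4 MiB physical extent size, a 1000 MiB logical volume
# needs 250 extents; lvcreate's -l takes an extent count, -L a size.
PE_MIB=4
LV_MIB=1000
EXTENTS=$((LV_MIB / PE_MIB))
echo "lvcreate -l $EXTENTS -n OldSong Song"   # equivalent to -L 1000M
```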
LVM(Logical Volume Manager)
The physical volumes are combined into logical volumes, with the exception
of the /boot/ partition.The /boot/ partition cannot be on a logical volume
group because the boot loader cannot read it. If the root (/) partition is on a
logical volume, create a separate /boot/ partition which is
not a part of a volume group.
LVM(Logical Volume Manager)
The volume groups can be divided into logical volumes, which are assigned
mount points, such as /home and / and file system types, such as ext2 or
ext3. When "partitions" reach their full capacity, free space from the volume
group can be added to the logical volume to increase the size of the
partition. When a new hard drive is added to the system, it can be added to
the volume group, and partitions that are logical volumes can be increased
in size.
LVM(Logical Volume Manager)
What is LVM2?
LVM version 2, or LVM2, is the default for Red Hat Enterprise Linux 5, which
uses the device mapper driver contained in the 2.6 kernel. LVM2 can be
upgraded from versions of Red Hat Enterprise Linux running the 2.4 kernel.


Steps required to configure LVM include:

Creating physical volumes from the hard drives.

Creating volume groups from the physical volumes.

Creating logical volumes from the volume groups and assign the logical
volumes mount points.
LVM(Logical Volume Manager)
Basic LVM commands

Initializing disks or disk partitions
To use LVM, partitions and whole disks must first be converted into physical volumes (PVs)
using the pvcreate command.
For example, to convert /dev/hda5 and /dev/hdb5 into PVs use the following command.


#pvcreate /dev/hda5 /dev/hdb5

Initialize the target partitions with the pvcreate command. This wipes out all the data on them
in preparation for the next step.


Creating a volume group

Use the vgcreate command to combine the two physical volumes into a single unit called a
volume group.


          #vgcreate Song /dev/hda5 /dev/hdb5
LVM(Logical Volume Manager)
Creating a logical volume
Now we are ready to partition the volume group into logical volumes with the
lvcreate command. Like hard disks, which are divided into blocks of data, logical
volumes are divided into units called physical extents (PEs). Here we are creating
three logical volumes of 1000 MB each in the volume group Song.
                            We can also give the size as a number of PEs or as a % of
free space/total space available in that volume group.

# lvcreate -L 1000M -n OldSong Song

# lvcreate -L 1000M -n NewSong Song

# lvcreate -L 1000M -n RemixSong Song

Now our logical volumes are created. They can be used further by making a
filesystem on each and mounting it somewhere.
LVM(Logical Volume Manager)
Format the Logical Volume

# mkfs.ext3 /dev/Song/NewSong
# mkfs.ext3 /dev/Song/OldSong
# mkfs.ext3 /dev/Song/RemixSong

Mount The Logical Volume
#mkdir /NewSong
#mkdir /OldSong
#mkdir /RemixSong

#mount /dev/Song/NewSong /NewSong
#mount /dev/Song/OldSong /OldSong
#mount /dev/Song/RemixSong /RemixSong
LVM(Logical Volume Manager)
                                  Or

we can insert the following lines in the /etc/fstab file to mount them at boot
time
         /dev/Song/NewSong          /NewSong          ext3    defaults 1 2
         /dev/Song/OldSong          /OldSong          ext3    defaults 1 2
         /dev/Song/RemixSong        /RemixSong        ext3    defaults 1 2

Extending a logical volume
Suppose our logical volume NewSong becomes full and there is no free
space left in our volume group Song. To expand NewSong, we have to
make a new physical partition, create a PV on it, extend our VG (Song)
onto the newly created partition, and then extend our LV (NewSong) as
per our need.
LVM(Logical Volume Manager)
Suppose the partition /dev/sdb6 is available and we want to extend NewSong
by 1000MB.

Create a physical volume on /dev/sdb6
# pvcreate /dev/sdb6

Extend the volume group onto /dev/sdb6
# vgextend Song /dev/sdb6

Extend the logical volume NewSong by 1000MB
# lvextend -L +1000M /dev/Song/NewSong

Resize the filesystem so NewSong can use the new space
# resize2fs /dev/Song/NewSong
LVM(Logical Volume Manager)
               Logical Volumes

   +-----------+-----------+-------------+
   |  NewSong  |  OldSong  |  RemixSong  |
   |  (1000M)  |  (1000M)  |   (1000M)   |
   +-----------+-----------+-------------+
   |              Song (3GB)             |
   |            (Volume Group)           |
   +------------------+------------------+
   |   sda5 (1.5GB)   |   sdb5 (1.5GB)   |
   |(Physical Volume) |(Physical Volume) |
   +------------------+------------------+
LVM(Logical Volume Manager)
                     Logical Volumes

   +-----------------------+-----------+-----------+
   |        NewSong        |  OldSong  | RemixSong |
   | (1000M) | +1000M ext. |  (1000M)  |  (1000M)  |
   +-----------------------+-----------+-----------+
   |     Song (Volume Group, 3GB + 1GB extended)   |
   +---------------+---------------+---------------+
   | sda5 (1.5GB)  | sdb5 (1.5GB)  |  sdb6 (1GB)   |
   | Physical Vol  | Physical Vol  | Physical Vol  |
   +---------------+---------------+---------------+

   # pvcreate /dev/sdb6
   # vgextend Song /dev/sdb6
   # lvextend -L +1000M /dev/Song/NewSong
   # resize2fs /dev/Song/NewSong
Package Management
Package Management

All software on a Red Hat Enterprise Linux system is divided into RPM
Packages. This Section describes how to manage the RPM packages on a
Red Hat Enterprise Linux system using graphical and command line tools.

RPM has five basic modes of operation:
  installing
  uninstalling
  upgrading
  querying
  verifying.
       For complete details and options try rpm –help.
Package Management
  Installing
RPM packages typically have file names like foo-1.0-1.i386.rpm.
The file name includes the
Package Name (foo)
Version (1.0)
Release (1)
Architecture (i386).
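These four fields can be pulled apart with plain shell parameter expansion (a sketch using the example file name above):

```shell
pkg="foo-1.0-1.i386.rpm"
base="${pkg%.rpm}"       # strip the .rpm suffix -> foo-1.0-1.i386
arch="${base##*.}"       # text after the last dot -> i386
rest="${base%.*}"        # foo-1.0-1
release="${rest##*-}"    # text after the last dash -> 1
rest="${rest%-*}"        # foo-1.0
version="${rest##*-}"    # 1.0
name="${rest%-*}"        # foo
echo "$name $version $release $arch"
```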
Installing a package is as simple as typing the following command at a shell
prompt:
# rpm -ivh foo-1.0-1.i386.rpm
 foo             ####################################
#
As you can see, RPM prints out the name of the package and then prints a
succession of hash marks as a progress meter while the package is installed.
Package Management
Package Already Installed

If the package of the same version is already installed, you will see:

# rpm -ivh foo-1.0-1.i386.rpm
foo                      package foo-1.0-1 is already installed
#

If you want to install the package anyway and the same version you are
trying to install is already installed, you can use the --replacepkgs option,
which tells RPM to ignore the error:

# rpm -ivh --replacepkgs foo-1.0-1.i386.rpm
foo                     ####################################
#
Package Management
Conflicting Files

If you attempt to install a package that contains a file which has already
been installed by another package or an earlier version of the same
package, you'll see:

# rpm -ivh foo-1.0-1.i386.rpm
foo              /usr/bin/foo conflicts with file from bar-1.0-1
#

To make RPM ignore this error, use the --replacefiles option:

# rpm -ivh --replacefiles foo-1.0-1.i386.rpm
foo                       ####################################
#
Package Management
Unresolved Dependency

RPM packages can "depend" on other packages, which means that they
require other packages to be installed in order to run properly. If you try to
install a package which has an unresolved dependency, you'll see:

# rpm -ivh foo-1.0-1.i386.rpm
failed dependencies:
                 bar is needed by foo-1.0-1
#

To handle this error you should install the requested packages. If you want
to force the installation anyway (a bad idea since the package probably will
not run correctly), use the --nodeps option.
         # rpm -ivh --nodeps foo-1.0-1.i386.rpm
Package Management
  Uninstalling
Uninstalling a package is just as simple as installing one. Type the following
command at a shell prompt:
# rpm -e foo
#
You can encounter a dependency error when uninstalling a package if
another installed package depends on the one you are trying to remove. For
example:
# rpm -e foo
 removing these packages would break dependencies:
                 foo is needed by bar-1.0-1
#
To cause RPM to ignore this error and uninstall the package anyway use
the --nodeps option.
Package Management
  Upgrading
Upgrading a package is similar to installing one. Type the following
command at a shell prompt:
# rpm -Uvh foo-2.0-1.i386.rpm
foo             ####################################
#
Upgrading is really a combination of uninstalling and installing, so during an
RPM upgrade you can encounter uninstalling and installing errors, plus one
more. If RPM thinks you are trying to upgrade to a package with an older
version number, you will see:
# rpm -Uvh foo-1.0-1.i386.rpm
foo             package foo-2.0-1 (which is newer) is already installed
#
To cause RPM to "upgrade" anyway, use the --oldpackage option:
         # rpm -Uvh --oldpackage foo-1.0-1.i386.rpm
                foo     #####################################
Package Management
     Querying
Use the rpm -q command to query the database of installed packages. The
rpm -q foo command will print the package name, version, and release
number of the installed package foo:
# rpm -q foo
foo-2.0-1
#

Instead of specifying the package name, you can use the following options
with -q to specify the package(s) you want to query. These are called
Package Specification Options.

-a     queries all currently installed packages.
-f     <file> will query the package which owns <file>. When specifying a
       file, you must specify the full path of the file (for example, /usr/bin/ls)
-p     <packagefile> queries the package <packagefile>.
Package Management
There are a number of ways to specify what information to display about queried
packages. The following options are used to select the type of information for which
you are searching. These are called Information Selection Options.

-i       Displays package information including name, description, release, size,
         build date, install date, vendor, and other miscellaneous information.

-l       Displays the list of files that the package contains.

-s       Displays the state of all the files in the package.

-d       Displays a list of files marked as documentation (man pages, info pages,
         READMEs, etc.).

-c       Displays a list of files marked as configuration files. These are the files you
         change after installation to adapt the package to your system (for example,
         sendmail.cf, passwd, inittab, etc.).
Package Management
Verifying
Verifying a package compares information about files installed from a
package with the same information from the original package. Among other
things, verifying compares the size, MD5 sum, permissions, type, owner,
and group of each file.



The command rpm -V verifies a package. You can use any of the Package
Selection Options listed for querying to specify the packages you wish to
verify. A simple use of verifying is rpm -V foo, which verifies that all the files
in the foo package are as they were when they were originally installed. For
example:
Package Management
• To verify a package containing a particular file:

   # rpm -Vf /bin/vi

• To verify ALL installed packages:

   # rpm -Va

• To verify an installed package against an RPM package file:

   # rpm -Vp foo-1.0-1.i386.rpm

This command can be useful if you suspect that your RPM databases are
corrupt.
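Each line of rpm -V output begins with an attribute string; a dot means that test passed, while a letter flags a difference. A small sketch decoding a hypothetical output line:

```shell
# A hypothetical line of `rpm -V` output: each position flags a
# difference (S size, M mode, 5 MD5 sum, D device, L link,
# U user, G group, T mtime); "." means the test passed.
line="S.5....T c /etc/foo.conf"
flags="${line%% *}"
[ "${flags:0:1}" = "S" ] && echo "size differs"
[ "${flags:2:1}" = "5" ] && echo "MD5 sum differs"
[ "${flags:7:1}" = "T" ] && echo "mtime differs"
```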
User and Group Management
The control of users and groups is a core element of Red
Hat Enterprise Linux system administration.

Users can be either people (meaning accounts tied to physical users) or
accounts which exist for specific applications to use.

Groups are logical expressions of organization, tying users together for a
common purpose. Users within a group can read, write, or execute files
owned by that group.

Each user and group has a unique numerical identification number called a
userid (UID) and a groupid (GID) respectively.
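For example, the root account always has UID and GID 0:

```shell
# root's user ID and primary group ID are always 0 on a Linux system
uid=$(id -u root)
gid=$(id -g root)
echo "root: uid=$uid gid=$gid"
```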
User Management
User information:

The id command prints information for a certain user. Use it like this:
# id username


Create a user

To create a new user:
# useradd -c "My Example User" username
# passwd username

The created user is initially in an inactive state. To activate the user you
have to assign a password with passwd. Some useful useradd options
include the following:
User Management
-c     :sets a comment for the user.

-s     : is used in order to define the user’s default login shell. If not used, then the
       system’s default shell becomes the user’s default login shell.

-r     : creates a user with UID<500 (system account)

-d     : sets the user’s home directory. If not used, the default home directory is
       created (/home/username/)

-M     : the home directory is not created. This is useful when the directory already
       exists.
# useradd -c "This user cannot login to a shell" -s /sbin/nologin
<username>

# passwd <username>
User Management
Change the user’s password
To change a user’s password:

# passwd <username>

If it’s used without specifying a username, then the currently logged in user’s
password is changed.


Add a user to a group

Usermod is used to modify a user account’s settings. Check the man page for all
the available options. One useful use of this command is to add a user to a group:

# usermod -a -G <group1> <username>

The -a option is critical. The user is added to group1 while he continues to be a
member of other groups. If it’s not used, then the user is added only to group1 and
         removed from any other groups. So, take note!
User Management
Lock and Unlock user accounts
Other uses of usermod are to lock and unlock user accounts. To lock out a user:
# usermod -L <username>


To Unlock User
# usermod -U <username>


Delete a user

Userdel is used to delete a user account. If the -r option is used then the
user’s home directory and mail spool are deleted too:

# userdel -r <username>
User Management
Create a new group
To create a new group, issue the command:

# groupadd <groupname>

The -r option can be used to create a group with GID<500 (system).


Change a group’s name

Groupmod can be used to change a group name:

# groupmod -n newgroupname <groupname>
User Management
Delete a group
Groupdel can delete a group:

# groupdel <groupname>

In order to delete a user's primary group (usually this is the group with name
equal to the username), the respective user must be deleted first.
User Management
Dynamic Host
  Configuration Protocol (DHCP)
DHCP is a network protocol that automatically assigns
TCP/IP information to client machines. Each DHCP
client connects to the centrally located DHCP server,
which returns that client's network configuration
(including the IP address, gateway, and DNS servers).

Why Use DHCP?
o DHCP is useful for automatic configuration of client network
  interfaces.
o DHCP is also useful if an administrator wants to change the
  IP addresses of a large number of systems.
DHCP Server Configuration
The daemon which runs on the server is dhcpd and is included in the file
dhcp-<version>.rpm. If dhcpd is not installed in the server, then install it.
# rpm –ivh dhcp*

The DHCP server is controlled by the configuration file /etc/dhcpd.conf. To
create this file, copy the sample file and make the necessary changes as below :
# cp /usr/share/doc/dhcp-3.0.1/dhcpd.conf.sample /etc/dhcpd.conf

Change the parameters in dhcpd.conf as per the requirement; the minimum
changes are as below:
# vi /etc/dhcpd.conf
          subnet 192.168.0.0 netmask 255.255.255.0 {
              range dynamic-bootp 192.168.0.100 192.168.0.200;
          }

Now each of our clients will receive its IP address between 192.168.0.100
and 192.168.0.200, subnet mask, gateway, and broadcast address from
dhcp server.
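A fuller dhcpd.conf sketch that also hands out the gateway and DNS server mentioned above (all addresses and lease times here are illustrative):

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    option routers              192.168.0.1;
    option subnet-mask          255.255.255.0;
    option domain-name-servers  192.168.0.1;
    range dynamic-bootp         192.168.0.100 192.168.0.200;
    default-lease-time          21600;
    max-lease-time              43200;
}
```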
DHCP Server Configuration
Service startup :
# service dhcpd start
        To start the dhcp daemon.
# chkconfig dhcpd on
        To start the daemon on boot time.

Lease Database :
On the DHCP server, the file /var/lib/dhcpd/dhcpd.leases stores the
DHCP client lease database. DHCP lease information for each recently
assigned IP address is automatically stored in the lease database. The
information includes the length of the lease, to whom the IP address has
been assigned, the start and end dates for the lease, and the MAC address
of the network interface card that was used to retrieve the lease. The lease
database is recreated from time to time so that it is not too large.
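A typical (hypothetical, abbreviated) entry in /var/lib/dhcpd/dhcpd.leases looks like:

```
lease 192.168.0.100 {
  starts 5 2007/07/13 17:28:13;
  ends   6 2007/07/14 05:28:13;
  hardware ethernet 00:16:3e:12:34:56;
}
```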
Network File System (NFS)

• NFS is the most common method for
  providing file sharing services on Linux
  and Unix Networks. It is a distributed
  file system that enables local access to
  remote disks and file system.
• NFS uses a standard client/server
  architecture.
NFS – Cont’d…
Red Hat Enterprise Linux uses a combination of kernel-level
support and daemon processes to
provide NFS file sharing. To share or mount NFS file systems,
the following services work together :
• /etc/init.d/nfs
Starts the Network File System service.
• /etc/init.d/portmap
Starts the portmap daemon, called the port mapper; needed by
all programs that use Remote Procedure Call (RPC).
• /etc/init.d/nfslock
It starts the locking daemons lockd and statd; nfsd starts lockd
itself, but statd must be started separately.
NFS Server Configuration
There are three ways to configure an NFS server
   under Red Hat Enterprise Linux:

1. manually editing its configuration file (/etc/exports),
2. using the /usr/sbin/exportfs command.
3. using the NFS Server Configuration Tool
   (system-config-nfs) graphical tool,
NFS Server Configuration – cont’d...
1. Manually editing the configuration file :

Make the following entries in the /etc/exports file
(format: local directory to share    host(options)) :

/data                   192.168.0.6/255.255.255.0(rw,sync)
/usr/local              *.example.com(ro)
/home                   @dev(rw,async)
/var/tmp                192.168.0.11(rw,async)
NFS Server Configuration – cont’d...
•    Here the first line permits any host on the 192.168.0.0/24 network
     (addresses 192.168.0.0 to 192.168.0.255) to access the /data
     directory with read-write permission.
•    The second line permits all host with a name of the format
     somehost.example.com to access the /usr/local directory
     with read only permission.
•    The third line permits any member of the NIS netgroup named dev
     to access the /home directory with read-write permission.
•    The last line permits the only host whose IP address is
     192.168.0.11 to access the /var/tmp directory with read-
     write permission.
NFS Server Configuration – cont’d...
2. Using the exportfs command :
# /usr/sbin/exportfs
          This command writes the currently exported file system in
/var/lib/nfs/xtab and the kernel’s internal table of exported file systems.
# exportfs -a
          Initializes the xtab file, synchronizing it with /etc/exports.
# exportfs -o exp_opts host:directory
          To add a new export to /var/lib/nfs/xtab and to the kernel’s internal table
of NFS exports without editing /etc/exports. For example
          # exportfs -o async,rw 192.168.0.3:/var/tmp
More options with exportfs are
-v : verbose
-u : unexport
-i : ignore
NFS Server Configuration – cont’d...

3. Using the NFS Server Configuration
  Tool (system-config-nfs) :
To start the application, click on System =>
  Administration => Server Settings => NFS. Or
  we can also type the command system-config-nfs
  in a terminal.
NFS Client Configuration
# showmount -e <host/server IP>
# showmount -e 192.168.0.2
Shows the NFS server’s list of exported file systems.
# mkdir /mntdata
# mount 192.168.0.2:/data /mntdata
Mount the exported file system /data to /mntdata to
use it.
           Or make entry in /etc/fstab file:
192.168.0.2:/data /mntdata        nfs defaults 0 0
NFS – Limitation (do & don’t)
Good candidates for NFS exports include any file system that is shared
among a large number of users.
We can export only local file systems and their subdirectories; we can’t
export a file system that is itself an NFS mount.
A subdirectory of an exported file system can’t be exported unless the
subdirectory resides on a different physical disk than its parent.
As for example :
/dev/sda1 /usr/local ext3 defaults 1 2
here if we export /usr/local, we cannot export /usr/local/devtools.
/dev/sda1 /usr/local                    ext3 defaults 1 2
/dev/sdb2 /usr/local/devtools           ext3 defaults 1 2
now we could export both /usr/local as well as /usr/local/devtools.

Conversely, the parent directory of an exported subdirectory cannot be
exported unless the parent directory resides on a different physical
disk.
FTP Server Configuration
Introduction :
The File Transfer Protocol (FTP) is used as one of the most
common means of copying files between servers over the
Internet. Here we’ll see how to convert your Linux box into an
FTP server using the default Very Secure FTP Daemon
(VSFTPD).
FTP relies on a pair of TCP ports to get the job done. It
operates in two connection channels :
FTP Control Channel, TCP Port 21: All commands we send
and the ftp server's responses to those commands will go over
the control connection, but any data sent back (such as "ls"
directory lists or actual file data in either direction) will go over
the data connection.
FTP Data Channel, TCP Port 20: This port is used for all
subsequent data transfers between the client and server.
FTP Server Configuration   – Cont’d…

Types of FTP :
  From a networking
  perspective, the
  two main types of
  FTP are active and
  passive.
• In active FTP, the
  FTP server initiates
  a data transfer
  connection back to
  the client.
• For passive FTP,
  the connection is
  initiated from the
  FTP client.
FTP Server Configuration                              – Cont’d…

The important files and directories are :
• /etc/rc.d/init.d/vsftpd
The initialization script (initscript).
• /etc/vsftpd/vsftpd.conf
The configuration file for vsftpd.
• /etc/vsftpd.ftpusers
A list of users not allowed to log into vsftpd.
• /etc/vsftpd.user_list
This file can be configured to either deny or allow access to the users listed,
depending on whether the userlist_deny directive is set to YES (default) or
NO in /etc/vsftpd/vsftpd.conf. If /etc/vsftpd.user_list is used to grant access
to users, the usernames listed must not appear in /etc/vsftpd.ftpusers.
• /var/ftp/
The directory containing files served by vsftpd. It also contains the
          /var/ftp/pub/ directory for anonymous users.
FTP Server Configuration                                  – Cont’d…

#   service vsftpd start|stop|restart       (to start/stop/restart the FTP service)
#   netstat -a | grep 21                    (to verify the service is listening)
The important parameters to set in the /etc/vsftpd/vsftpd.conf configuration file are :

anonymous_enable   =          yes/no
         Default is yes; allows anonymous users to log in.
local_enable       =          yes/no
         Default is yes; allows local users to log into the system.
userlist_enable    =          yes/no
         Default is no; when enabled, the users listed in the file specified by the
         userlist_file directive are denied access.
userlist_deny      =          yes/no
         Default is yes; when used in conjunction with the userlist_enable directive
         and set to NO, all local users are denied access unless their username is
         listed in the file specified by the userlist_file directive.
FTP Server Configuration                                 – Cont’d…
userlist_file     =      /etc/vsftpd.user_list
    Specifies the file referenced by vsftpd when the userlist_enable directive is
    enabled.
tcp_wrappers      =      yes/no
    When enabled, TCP wrappers are used to grant access to the server.
anon_max_rate =          <value>
    Specifies the maximum data transfer rate for anonymous users in bytes per
    second.
local_max_rate =         <value>
    Specifies the maximum rate data is transferred for local users logged into the
    server in bytes per second.
max_clients       =      <value>
    Specifies the maximum number of simultaneous clients allowed to connect to
    the server when it is running in standalone mode.
max_per_ip        =      <value>
    Specifies the maximum number of clients allowed to connect from the same
    source IP address.
                 (The default <value> is 0, which does not limit connections.)
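
Putting the parameters above together, a minimal /etc/vsftpd/vsftpd.conf might look like this sketch (the rate and connection limits are illustrative values, not recommendations):

```conf
# /etc/vsftpd/vsftpd.conf -- a minimal example
anonymous_enable=NO          # no anonymous logins
local_enable=YES             # allow local system users
userlist_enable=YES          # consult userlist_file ...
userlist_deny=YES            # ... and deny the users listed in it
userlist_file=/etc/vsftpd.user_list
tcp_wrappers=YES             # honour TCP wrappers access control
local_max_rate=512000        # ~500 KB/s per local user
max_clients=50               # at most 50 simultaneous clients
max_per_ip=3                 # at most 3 connections per source IP
```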
FTP Server Configuration                   – Cont’d…

Connect to ftp server (192.168.1.100) :
[root@cipa_nic tmp]# ftp 192.168.1.100
Connected to 192.168.1.100 (192.168.1.100)
220 ready, dude(vsFTPd 1.1.0:beat me,break me)
Name (192.168.1.100:root): user1
331 Please specify the password.
Password:
230 Login successful. Have fun.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>
ftp> put testfile
ftp> get vsftpd-1.1.0-1.i386.rpm
ftp> exit
221 Goodbye.
[root@cipa_nic tmp]#
Syslog Server Configuration
Introduction :
•   Linux applications use the syslog utility to export all their
    errors and status messages to files located in the /var/log
    directory.

•   The main configuration file /etc/syslog.conf determines which
    levels of messages from each service are written to which
    file. By default most messages are written to the
    /var/log/messages file.

•   To send syslog messages to a remote log server, we
    have to do two things :
      –   Configuring the Linux syslog Server.
      –   Configuring the Linux Client.
Syslog                 Server Configuration – cont’d
•   Syslog reserves facilities local0 through local7 for log messages received
    from remote servers and network devices.
•   Syslog Facility and Severity Numbering Scheme for Local Directors
    Facility      FF Value       Severity                               SS Value

    local0           16          System unusable                            0
    local1           17          Immediate action required                  1
    local2           18          Critical condition                         2
    local3           19          Error conditions                           3
    local4           20          Warning conditions                         4
    local5           21          Normal but significant conditions          5
    local6           22          Informational messages                     6
    local7           23          Debugging messages                         7
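
On the log server, messages arriving on one of these local facilities can be routed to their own file with a rule like the following (the file name, and the idea that a router logs at local5, are assumptions for illustration):

```conf
# /etc/syslog.conf on the log server
# write everything a network device sends at facility local5
# (any severity) to its own file
local5.*        /var/log/router.log
```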
Syslog            Server Configuration – cont’d
• Configuring the Linux syslog Server :
  Edit the /etc/sysconfig/syslog file and set the
  SYSLOGD_OPTIONS variable as shown below :
# Options to syslogd
# -m 0 disables 'MARK' messages.
# -r enables logging from remote machines
# -x disables DNS lookups on messages received with -r
# See syslogd(8) for more details

 SYSLOGD_OPTIONS="-m 0 -r -x"

     Now the server will start listening for log messages from remote hosts on
     UDP port 514.
Syslog            Server Configuration – cont’d

Configuring the Linux remote servers :
  Edit the /etc/syslog.conf file and make the necessary
  changes, for example :
*.info;mail.none;authpriv.none;cron.none           @192.168.0.2
*.debug                                            @loghost
*.debug                                            /var/log/messages


 Where loghost is the nickname of the syslog server whose IP
 is 192.168.0.2. We also have to make an entry in the /etc/hosts file :
  192.168.0.2    logserv.at-my-site.com           logserv       loghost

 # service syslog restart
       Restarting the syslog service makes the client start sending its logs to loghost.
Squid Proxy Server
               Configuration
Introduction :
Two important goals of many small businesses are to:
• Reduce Internet bandwidth charges
• Limit access to the Web to only authorized users.
The Squid web caching proxy server can achieve both these
goals fairly easily.
We can configure our web browsers to use the Squid proxy
server instead of going to the web directly. The Squid server
then checks its web cache for the web information requested
by the user. It will return any matching information that it finds in
its cache, and if not, it will go to the web to find it on behalf of
the user. Once it finds the information, it will populate its cache
with it and also forward it to the user's web browser.
Squid Proxy Server Configuration –
                cont’d…
Starting Squid :
Use the chkconfig command to configure Squid to start at
   boot :
[root@cipa_nic tmp]# chkconfig squid on


Use the service command to start, stop, and restart Squid after booting :

[root@cipa_nic tmp]# service squid start
[root@cipa_nic tmp]# service squid stop
[root@cipa_nic tmp]# service squid restart
Squid Proxy Server Configuration –
                  cont’d…
The /etc/squid/squid.conf File :
The Visible Host Name
Squid will fail to start if we don't give our server a hostname. This can be set with the
visible_hostname parameter.

 visible_hostname cipa_nic


Access Control Lists (ACL)
We can limit users' ability to browse the Internet with access control lists (ACLs). Each ACL line
defines a particular type of activity, such as an access time or source network. ACLs are then
linked to http_access statements that tell Squid whether to allow or deny traffic that
matches the ACL.
Squid matches each Web access request it receives by checking the http_access list from top to
bottom. If it finds a match, it enforces the allow or deny statement and stops reading further.
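
Because evaluation stops at the first match, a common practice is to end the http_access list with an explicit catch-all deny, so anything not allowed by an earlier rule is refused (cipa_network here stands for an ACL defined earlier in the file):

```conf
# squid.conf -- http_access is evaluated top to bottom
http_access allow cipa_network    # requests matching this ACL stop here
http_access deny all              # everything else is refused
```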
Squid Proxy Server Configuration –
               cont’d…
/etc/squid/squid.conf :
# Add this to the bottom of the ACL Section
acl cipa_network src 192.168.1.0/24
acl business_hours time M T W H F 09:00-17:00
acl SamirHost src 192.168.1.23


# Add this to the top of the http_access Section
http_access deny SamirHost
http_access allow cipa_network business_hours

 This allows only business-hours access from the CIPA network, while always
 restricting access from host 192.168.1.23 (Samir).
Squid Proxy Server Configuration –
                   cont’d…
    /etc/squid/squid.conf :                         ….cont’d…
•     We can allow access in the morning only
•     Restrict access to particular web sites

    # Add this to the bottom of the ACL Section
    acl morning time 08:00-12:00
    acl DenyHost dst www.restricted.com


    # Add this to the top of the http_access Section
    http_access allow morning
    http_access deny DenyHost
Squid Proxy Server Configuration –
                  cont’d…
/etc/squid/squid.conf :                             ….cont’d…
Squid can also read files containing lists of websites and/or
domains for use in ACLs.
Let's create two files (each containing a list of web sites):
1.   /home/samir/allowed-sites.squid        2. /home/prem/restricted-sites.squid

www.openfree.org                              www.porn.com
Linuxhomenetworking.com                       illegal.com
www.google.co.in                              www.notallowedsites.com
Squid Proxy Server Configuration –
                cont’d…
/etc/squid/squid.conf :               ….cont’d…
# Add this to the bottom of the ACL section of
acl home_network src 192.168.1.0/24
acl business_hours time M T W H F 9:00-17:00
acl GoodSites dstdomain "/home/samir/allowed-sites.squid"
acl BadSites dstdomain "/home/prem/restricted-sites.squid"


# Add this at the top of the http_access section of squid.conf
http_access deny BadSites
http_access allow home_network business_hours GoodSites
Domain Name Service (DNS)
Introduction :
• On most modern networks, including the Internet, users
  locate other computers by name. The most effective way to
  configure a network to allow such name-based connections is
  to set up a Domain Name Service (DNS) or a nameserver,
  which resolves hostnames on the network to numerical
  addresses and vice versa.
• DNS associates hostnames with their respective IP
  addresses, so that when users want to connect to other
  machines on the network, they can refer to them by name,
  without having to remember IP addresses.
• DNS is normally implemented using centralized servers that
  are authoritative for some domains and refer to other DNS
  servers for other domains.
DNS – cont’d…
DNS Domains
  Everyone in the world has a first name and a last, or family, name. The
  same thing is true in the DNS world: a family of Web sites can be loosely
  described as a domain. For example, the domain indiatimes.com has a
  number of children, such as in.indiatimes.com, www.indiatimes.com and
  mail.indiatimes.com for the Web and mail servers.

How DNS Servers find out the site information
• There are 13 root authoritative DNS servers (super duper authorities)
  that all DNS servers query first. These root servers know all the
  authoritative DNS servers for all the main domains - .com, .net, .mil, .edu
  and the rest. This layer of servers keeps track of all the DNS servers that
  Web site systems administrators have assigned for their sub domains.

• For example, when we register our domain my-site.com, we are actually
  inserting a record on the .com DNS servers that point to the authoritative
  DNS servers we assigned for our domain.
DNS – cont’d…
Nameserver Types
There are four primary nameserver configuration types:
1. Master
   Stores original and authoritative zone records for a namespace, and answers
   queries about the namespace from other nameservers.
2. Slave
   Answers queries from other nameservers concerning namespaces for which it is
   considered an authority. However, slave nameservers get their namespace
   information from master nameservers.
3. Caching-only
   Offers name-to-IP resolution services, but is not authoritative for any zones.
   Answers for all resolutions are cached in memory for a fixed period of time, which
   is specified by the retrieved zone record.
4. Forwarding
   Forwards requests to a specific list of nameservers for name resolution. If none of
   the specified nameservers can perform the resolution, the resolution fails.
   A nameserver may be one or more of these types. For example, a
   nameserver can be a master for some zones, a slave for others, and only
   offer forwarding resolutions for others.
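
A caching-only or forwarding nameserver needs no zone files of its own. A minimal named.conf options sketch (the forwarder address 192.168.0.1 is an assumption):

```conf
options {
    directory "/var/named";
    // try the listed nameservers first for every query
    forwarders { 192.168.0.1; };
    // with "forward only;" resolution fails if the forwarders fail;
    // omit it to fall back to normal recursive resolution
    forward only;
};
```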
DNS – cont’d…
BIND as a Nameserver
• Berkeley Internet Name Domain (BIND) performs name
  resolution services through the /usr/sbin/named
  daemon.
• BIND stores its configuration files in the following locations:
  /etc/named.conf
       –   The configuration file for the named daemon.
 /var/named/ directory
       –   The named working directory which stores zone, statistic, and
           cache files.
Note
 If you have installed the bind-chroot package, the BIND service will
 run in the /var/named/chroot environment. All configuration files will
 be moved there. As such, named.conf will be located in
 /var/named/chroot/etc/named.conf, and so on..
DNS – cont’d…
Basic DNS Testing of DNS Resolution
• As we know, DNS resolution maps a fully qualified domain name (FQDN),
  such as www.google.com, to an IP address. This is also known as a forward
  lookup. The reverse is also true: by performing a reverse lookup, DNS can
  determine the fully qualified domain name associated with an IP address.

• Many different Web sites can map to a single IP address, but the reverse
  isn't true; an IP address can map to only one FQDN. This means that forward
  and reverse entries frequently don't match.

 [root@cipa_nic      tmp]#    host www.google.com
 [root@cipa_nic      tmp]#    dig www.yahoo.com
 [root@cipa_nic      tmp]#    dig -x 202.86.4.142
 [root@cipa_nic      tmp]#    nslookup www.google.com
DNS – cont’d…
Configuring NameServer
Step-1: Configure /etc/resolv.conf
        Make the DNS server refer to itself for all DNS queries.
        nameserver               127.0.0.1
Step-2: Creating a /etc/named.conf base configuration file
        The /etc/named.conf file contains the main DNS configuration and tells
        BIND where to find the configuration or zone files for each domain we own.
        This file usually has two zone areas :
        1. Forward zone (to map domains with IP address)
        2. Reverse zone (to map IP address with domains)

        We can get the sample named.conf file from
        /usr/share/doc/bind…./sample/etc/ , copy it to /etc/ and edit it as per our
        need.
DNS – cont’d…
Step-3: Creating zone file reference in /etc/named.conf
       Zone files contain information about a namespace and are stored in
       the named working directory (/var/named/) by default. Each zone
       file is named according to the file option data in the zone statement.
        We can create as many zones as we need.
            options {
                 directory "/var/named";
                 dump-file "/var/named/data";
                 allow-transfer { 192.168.1.200; };   // secondary DNS (slave)
                 forward only;
            };
            zone "jhr.nic" {
                 type master;
                 file "jhr.nic.hosts";
            };
DNS – cont’d…
             Creating zone file
             [root@cipa_nic ~]# cd /var/named
             [root@cipa_nic named]# vi jhr.nic.hosts

  $TTL 604800    ; time to live, measured in seconds
  jhr.nic. IN SOA cipamaster.nic.in. samir.cipamaster.nic.in. (
         2007291105 ; serial no. - use year+month+day+integer
         1D         ; refresh time
         1H         ; retry period
         1W         ; expire time
         1D )       ; minimum ttl period

Time representation : D (day), W (week), H (hours), No suffix (seconds)
The SOA (Start of Authority) record format :
Name Class Type NameServer Email_Address SerialNo Refresh Expiry Minimum-TTL
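
A quick way to generate a serial in the year+month+day+integer convention mentioned above is to let date(1) build the date part (a sketch; the two-digit trailing revision is assumed to start at 01 and be bumped by hand for later edits the same day):

```shell
#!/bin/sh
# Build a DNS SOA serial of the form YYYYMMDDnn,
# where nn is the revision number within the day.
serial="$(date +%Y%m%d)01"
echo "$serial"
```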
DNS – cont’d…
$TTL 1W           ; time to live, measured in seconds
jhr.nic. IN SOA @ deepak.cipamaster.nic.in. (
         2007291106 ; serial no. - use year+month+day+integer
         1D         ; refresh time
         1H         ; retry period
         1W         ; expire time
         1D )       ; minimum ttl period
jhr.nic.      IN          NS        localhost
localhost.jhr.nic.        IN        A      192.168.1.2
www.jhr.nic. IN           A         192.168.1.3
ftp.jhr.nic. IN           A         192.168.1.4
mail.jhr.nic. IN          A         192.168.1.5
www.jhr.nic. IN           A         192.168.1.5
deepak.jhr.nic.           IN        A      192.168.1.6
parishesh.jhr.nic.        IN        A      192.168.1.7
cipaslave.jhr.nic.        IN        A      192.168.1.200

        DNS Resource Records : Name    class    type    data
        IN – Internet class,   A – forward lookup,   PTR – reverse lookup,   NS – Name Server,
        MX – mail exchange,   CNAME – alias,   @ – the current zone origin
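
The PTR type above lives in a reverse zone. A reverse zone file for the 192.168.1.0/24 network used in these examples might look like the following sketch (the file name and choice of hosts mirror the forward zone; named.conf is assumed to contain a matching zone "1.168.192.in-addr.arpa" statement):

```conf
; /var/named/1.168.192.in-addr.arpa.hosts -- reverse lookups
$TTL 604800
@   IN SOA cipamaster.nic.in. samir.cipamaster.nic.in. (
        2007291105 ; serial
        1D 1H 1W 1D )
@   IN NS  localhost.
3   IN PTR www.jhr.nic.     ; 192.168.1.3  -> www.jhr.nic
4   IN PTR ftp.jhr.nic.     ; 192.168.1.4  -> ftp.jhr.nic
5   IN PTR mail.jhr.nic.    ; 192.168.1.5  -> mail.jhr.nic
```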
DNS – cont’d…
# Continue … …

cipa_boys                              IN        A            192.168.1.10

www.jhr.nic.                           IN        CNAME        cipa_boys
ftp.jhr.nic.                           IN        CNAME        cipa_boys
nfs.jhr.nic.                           IN        CNAME        cipa_boys

Creating Secondary (Slave) Server

     options {
          directory "/var/named";
          dump-file "/var/named/data";
          allow-transfer { 192.168.1.200; };   // secondary DNS (slave)
          forward only;
     };
     zone "jhr.nic" {
          type slave;
          masters { 192.168.1.2; };
          file "slaves/jhr.nic.hosts";
     };
DNS – cont’d…
Now we have to restart the named service and check whether it is
functioning properly.

 [root@cipa_nic tmp]# service named restart
 [root@cipa_nic tmp]# named-checkconf
 [root@cipa_nic tmp]# dig www.jhr.nic
 [root@cipa_nic tmp]# dig mail.jhr.nic
Red Hat Training

  • 1. By : Sunil Modi Under the supervision of Mr. D. Nayak, Principal System Analyst
  • 2. Topics Covered Installation Networking Storage & File systems Configuring Serves DHCP Creating RAID NFS Creating LVM FTP Syslog User Management Squid Proxy DNS Package Management Appache Cron File Samba
  • 3. Introduction – What is Linux ? As we already know, Linux is a freely distributed implementation of a UNIX-like kernel, the lowlevel core of an operating system. Because Linux takes the UNIX system as its inspiration, Linux and UNIX programs are very similar. Linux was developed by Linus Torvalds at the University of Helsinki, with the help of UNIX programmers from across the Internet. It began as a hobby inspired by Andy Tanenbaum’s Minix, a small UNIXlike system, but has grown to become a complete system in its own right. The intention is that the Linux kernel will not incorporate proprietary code but will contain nothing but freely distributable code.
  • 4. Installation – Method The following installation methods are available: DVD/CD-ROM If we have a DVD/CD-ROM drive and the Red Hat Enterprise Linux CD-ROMs or DVD. In rest of installation methods, We need a boot CD- ROM (use the linux askmethod boot option). Hard Drive If we have copied the Red Hat Enterprise Linux ISO images to a local hard drive. NFS If we are installing directly from an NFS server, use this method. FTP If you are installing directly from an FTP server, use this method. HTTP If we are installing directly from an HTTP (Web) server, use this method.
  • 5. Installation – from DVD/CD-ROMs To install Red Hat Enterprise Linux from a DVD/CD-ROM, place the DVD or CD #1 in your DVD/CD-ROM drive and boot your system from the DVD/CD-ROM. Just press Enter key at boot: prompt for GUI installation. Type linux text at boot: prompt for text mode installation. Welcome screen for GUI Installation
  • 6. Installation – cont’d… Language Selection The language we select here will become the default language for the operating system once it is installed. Keyboard configuration
  • 7. Installation – cont’d… Disk Partitioning Setup Partitioning allows us to divide our hard drive into isolated sections, where each section behaves as its own hard drive. Partitioning is particularly useful if we run multiple operating systems.
  • 8. Installation – cont’d… If we chose to create a custom layout, we must tell the installation program where to install Red Hat Enterprise Linux. This is done by defining mount points for one or more disk partitions. We may also need to create and/or delete partitions at this time. This partition tool used by the installation pragram is Disk Druid.
  • 9. Installation – cont’d… Adding Partitions : To add a new partition, select the New button. A dialog box appears. Edit Partitions : To edit a partition, select the Edit button or double-click on the existing partition. Delete Partitions : To delete a partition, highlight it in the Partitions section and click the Delete button. Confirm the deletion when prompted.
  • 10. Installation – cont’d… Boot Loader Configuration : To boot the system without boot media, we usually need to install a boot loader. A boot loader is the first software program that runs when a computer starts. It is responsible for loading and transferring control to the operating system kernel software. The kernel, in turns, initializes the rest of operating system. GRUB (Grand Unified Boot loader), which is installed by default, is a very powerful boot loader.
  • 11. Installation – cont’d… Boot Loader Installation: o The Master Boot Record (MBR) This is the recommended place to install a boot loader, unless the MBR already starts another operating system loader. The MBR is a special area on our hard drive that is automatically loaded by computer's BIOS, and is the earliest point at which the boot loader can take control of the boot process. o The First Sector of Boot Partition This is recommended if you are already using another boot loader on your system. In this case, your other boot loader takes control first. You can then configure that boot loader to start GRUB, which then boots Red Hat Enterprise Linux.
  • 12. Installation – cont’d… Network Configuration: The installation program automatically detects any network devices the system have and displays them in the Network Devices list. Once selected a network device, click Edit. From the Edit Interface pop-up screen, you can choose to configure the IP address and Netmask (for IPv4 - Prefix for IPv6) of the device via DHCP (or manually if DHCP is not selected) and you can choose to activate the device at boot time.
  • 13. Installation – cont’d… Time Zone Configuration Set your time zone by selecting the city closest to your computer's physical location. Click on the map to zoom in to a particular geographical region of the world.
  • 14. Installation – cont’d… Set Root Password Setting up a root account and password is one of the most important steps during our installation. The root account is used to install packages, upgrade RPMs, and perform most system maintenance. Logging in as root gives us complete control over our system.
• 15. Installation – cont’d… Package Group Selection Select the Customize now option on the screen. Clicking Next takes you to the Package Group Selection screen. To select a component, click on the checkbox beside it (see “Package Group Selection”).
  • 16. Installation – cont’d… Package Group Selection Select each component you wish to install. Once a package group has been selected, if optional components are available you can click on Optional packages to view which packages are installed by default, and to add or remove optional packages from that group. If there are no optional components this button will be disabled.
  • 17. Installation – cont’d… Prepare to Install A screen preparing you for the installation of Red Hat Enterprise Linux now appears. For your reference, a complete log of your installation can be found in /root/install.log once you reboot your system. To cancel this installation process, press your computer's Reset button or use the Control-Alt-Delete key combination to restart your machine.
  • 18. Installation – cont’d… Installation Complete Congratulations! Your Red Hat Enterprise Linux installation is now complete! The installation program prompts you to prepare your system for reboot. Remember to remove any installation media if it is not ejected automatically upon reboot. After your computer's normal power-up sequence has completed, the graphical boot loader prompt appears at which you can do any of the following things: Press Enter — causes the default boot entry to be booted.
  • 19. File Systems File system refers to the files and directories stored on a computer. A file system can have different formats called file system types. These formats determine how the information is stored as files and directories. Some file system types store redundant copies of the data, while some file system types make hard drive access faster. This part discusses the ext3, swap, RAID, and LVM file system types. It also discusses the parted utility to manage partitions and access control lists (ACLs) to customize file permissions.
• 20. File System Structure Filesystem Hierarchy Standard (FHS) Red Hat Enterprise Linux uses the Filesystem Hierarchy Standard (FHS) file system structure, which defines the names, locations, and permissions for many file types and directories. bin etc usr home mnt var dev sbin boot root
• 21. FHS Organization FHS Organization The directories and files noted here are a small subset of those specified by the FHS document. Refer to the latest FHS document for the most complete information. The complete standard is available online at http://www.pathname.com/fhs/. The /boot/ Directory The /boot/ directory contains static files required to boot the system, such as the Linux kernel. These files are essential for the system to boot properly.
• 22. FHS Organization The /dev/ Directory The /dev/ directory contains device nodes that either represent devices that are attached to the system or virtual devices that are provided by the kernel. These device nodes are essential for the system to function properly. The udev daemon takes care of creating and removing all these device nodes in /dev/. /dev/hda - the master device on the primary IDE channel. /dev/hdb - the slave device on the primary IDE channel. The /etc/ Directory The /etc/ directory is reserved for configuration files that are local to the machine. No binaries are to be placed in /etc/. Any binaries that were once located in /etc/ should be placed into /sbin/ or /bin/.
• 23. FHS Organization The /lib/ Directory The /lib/ directory should contain only those libraries needed to execute the binaries in /bin/ and /sbin/. These shared library images are particularly important for booting the system and executing commands within the root file system. The /media/ Directory The /media/ directory contains subdirectories used as mount points for removable media such as USB storage media, DVDs, CD-ROMs, and Zip disks. The /mnt/ Directory The /mnt/ directory is reserved for temporarily mounted file systems, such as NFS file system mounts. For all removable media, please use the /media/ directory. Automatically detected removable media will be mounted in the /media directory.
• 24. FHS Organization The /opt/ Directory The /opt/ directory provides storage for most application software packages. A package placing files in the /opt/ directory creates a directory bearing the same name as the package. This directory, in turn, holds files that otherwise would be scattered throughout the file system, giving the system administrator an easy way to determine the role of each file within a particular package. The /proc/ Directory The /proc/ directory contains special files that either extract information from or send information to the kernel. Examples include system memory, CPU information, and hardware configuration. A great variety of data is available within /proc/, and this directory can be used to communicate with the kernel in many ways.
• 25. FHS Organization The /sys/ Directory The /sys/ directory utilizes the new sysfs virtual file system specific to the kernel. With the increased support for hot-plug hardware devices in the kernel, the /sys/ directory contains information similar to that held in /proc/, but displays a hierarchical view of device information specific to hot-plug devices. The /usr/ Directory The /usr/ directory is for files that can be shared across multiple machines. The /usr/ directory is often on its own partition and is mounted read-only. At a minimum, the following directories should be subdirectories of /usr/: /usr |- bin/ |- etc/ |- games/ |- include/ |- kerberos/ |- lib/ |- libexec/ |- local/ |- sbin/ |- share/ The /usr/local/ Directory The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable among a group of hosts, but not found in /usr.
• 26. FHS Organization The /sbin/ Directory The /sbin/ directory stores executables used by the root user. The executables in /sbin/ are used at boot time, for system administration, and to perform system recovery operations. Of this directory, the FHS says: /sbin contains binaries essential for booting, restoring, recovering, and/or repairing the system in addition to the binaries in /bin. Programs executed after /usr/ is known to be mounted (when there are no problems) are generally placed into /usr/sbin. Locally-installed system administration programs should be placed into /usr/local/sbin. At a minimum, the following programs should be in /sbin/: arp, clock, halt, init, fsck.*, grub, ifconfig, mingetty, mkfs.*, mkswap, reboot, route, shutdown, swapoff. The /srv/ Directory The /srv/ directory contains site-specific data served by your system running Red Hat Enterprise Linux. This directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data that only pertains to a specific user should go in the /home/ directory.
  • 27. FHS Organization The /var/ Directory Since the FHS requires Linux to mount /usr/ as read-only, any programs that write log files or need spool/ or lock/ directories should write them to the /var/ directory. The FHS states /var/ is for: ...variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files. Below are some of the directories found within the /var/ directory: /var |- account/ |- arpwatch/ |- cache/ |- crash/ |- db/ |- empty/ |- ftp/ |- gdm/ |- kerberos/ |- lib/ |- System log files, such as messages and lastlog, go in the /var/log/ directory. The /var/lib/rpm/ directory contains RPM system databases. Lock files go in the /var/lock/ directory, usually in directories for the program using the file. The /var/spool/ directory has subdirectories for programs in which data files are stored.
• 28. The ext3 File System Features of ext3 The ext3 file system is essentially an enhanced version of the ext2 file system. These improvements provide the following advantages: Availability After an unexpected power failure or system crash (also called an unclean system shutdown), each mounted ext2 file system on the machine must be checked for consistency by the e2fsck program. This is a time-consuming process that can delay system boot time significantly, especially with large volumes containing a large number of files. During this time, any data on the volumes is unreachable. The journaling provided by the ext3 file system means that this sort of file system check is no longer necessary after an unclean system shutdown. The only time a consistency check occurs using ext3 is in certain rare hardware failure cases, such as hard drive failures. The time to recover an ext3 file system after an unclean system shutdown does not depend on the size of the file system or the number of files; rather, it depends on the size of the journal used to maintain consistency. The default journal size takes about a second to recover, depending on the speed of the hardware.
• 29. The ext3 File System Data Integrity The ext3 file system prevents loss of data integrity in the event that an unclean system shutdown occurs. The ext3 file system allows you to choose the type and level of protection that your data receives. By default, ext3 volumes are configured to keep a high level of data consistency with regard to the state of the file system. Speed Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2 because ext3's journaling optimizes hard drive head motion. You can choose from three journaling modes to optimize speed, but doing so means trade-offs in regard to data integrity if the system were to fail. Easy Transition It is easy to migrate from ext2 to ext3 and gain the benefits of a robust journaling file system without reformatting.
• 30. Creating an ext3 File System After installation, it is sometimes necessary to create a new ext3 file system. For example, if you add a new disk drive to the system, you may want to partition the drive and use the ext3 file system. The steps for creating an ext3 file system are as follows: 1. Format the partition with the ext3 file system using mkfs. 2. Label the partition using e2label. Converting to an ext3 File System The tune2fs utility allows you to convert an ext2 filesystem to ext3. To convert an ext2 filesystem to ext3, log in as root and type the following command in a terminal: /sbin/tune2fs -j <block_device> where <block_device> contains the ext2 filesystem you wish to convert. A valid block device could be one of two types of entries: A mapped device — A logical volume in a volume group, for example, /dev/mapper/VolGroup00-LogVol02. A static device — A traditional storage volume, for example, /dev/hdbX, where hdb is a storage device name and X is the partition number. Issue the df command to display mounted file systems.
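The two steps above can be rehearsed without a spare disk by formatting a scratch image file instead of a real partition (the file name and label below are illustrative, not from the slides):

```shell
# Create a small disk-image file and treat it like a partition (no root needed).
dd if=/dev/zero of=/tmp/ext3demo.img bs=1M count=64 status=none
# Step 1: format it with the ext3 file system (-F allows a regular file, -q is quiet).
mkfs.ext3 -F -q /tmp/ext3demo.img
# Step 2: label the filesystem with e2label, then read the label back.
e2label /tmp/ext3demo.img demo_vol
e2label /tmp/ext3demo.img    # prints: demo_vol
```

On a real system the same two commands are simply pointed at a block device such as /dev/sdb1.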
• 31. Reverting to an ext2 File System If you wish to revert a partition from ext3 to ext2 for any reason, you must first unmount the partition by logging in as root and typing: umount /dev/mapper/VolGroup00-LogVol02 Next, change the file system type to ext2 by typing the following command as root: /sbin/tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02 Check the partition for errors by typing the following command as root: /sbin/e2fsck -y /dev/mapper/VolGroup00-LogVol02 Then mount the partition again as an ext2 file system by typing: mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point In the above command, replace /mount/point with the mount point of the partition. Next, remove the .journal file at the root level of the partition by changing to the directory where it is mounted and typing: rm -f .journal You now have an ext2 partition. If you want to permanently change the partition to ext2, remember to update the /etc/fstab file.
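The same revert sequence can be tried on a scratch image file rather than an LVM volume (the paths here are our own stand-ins):

```shell
# Build a throwaway ext3 image, then strip its journal to get ext2.
dd if=/dev/zero of=/tmp/revert.img bs=1M count=64 status=none
mkfs.ext3 -F -q /tmp/revert.img
tune2fs -O ^has_journal /tmp/revert.img > /dev/null 2>&1   # remove the journal feature
e2fsck -fy /tmp/revert.img > /dev/null 2>&1 || true        # check the filesystem
# The has_journal feature should now be gone from the feature list:
tune2fs -l /tmp/revert.img | grep has_journal || echo "journal removed"
```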
  • 32. Swap Space What is Swap Space? Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory. Swap space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files. Swap should equal 2x physical RAM for up to 2 GB of physical RAM, and then an additional 1x physical RAM for any amount above 2 GB, but never less than 32 MB.
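The sizing rule above can be written as a small shell function (the function name is our own, and sizes are in MB):

```shell
# Recommended swap: 2x RAM up to 2 GB, then an additional 1x RAM beyond 2 GB,
# never less than 32 MB.
recommended_swap_mb() {
    ram_mb=$1
    if [ "$ram_mb" -le 2048 ]; then
        swap_mb=$((ram_mb * 2))
    else
        swap_mb=$((4096 + ram_mb - 2048))
    fi
    if [ "$swap_mb" -lt 32 ]; then swap_mb=32; fi
    echo "$swap_mb"
}
recommended_swap_mb 1024    # → 2048
recommended_swap_mb 4096    # → 6144
```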
• 33. Swap Space Adding Swap Space Sometimes it is necessary to add more swap space after installation. For example, you may upgrade the amount of RAM in your system from 128 MB to 256 MB, but there is only 256 MB of swap space. It might be advantageous to increase the amount of swap space to 512 MB if you perform memory-intensive operations or run applications that require a large amount of memory. Creating Another Swap Space To create and manipulate swap space, use the mkswap, swapon, and swapoff commands. mkswap initializes a swap area on a device (the usual method) or a file. swapon enables the swap area for use, and swapoff disables the swap space.
• 34. Swap Space 1. Create a partition of size 256 MB (sda6). 2. Format the new swap space: # mkswap /dev/sda6 (swap file system) 3. Change the label of the swap partition: # e2label /dev/sda6 /swap-sda6 4. Enable the new swap space: # swapon /dev/sda6 5. Set the priority. The existing swap partition has priority 1, while newly added swap areas default to -1; we can raise the new one to 1 in /etc/fstab: #vi /etc/fstab /dev/sda3 swap swap defaults,pri=1 0 0 /swap-sda6 swap swap defaults,pri=1 0 0 [Reboot the System] 6. Display the swap partitions: #cat /proc/swaps
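mkswap can also initialize a regular file, which makes it easy to rehearse step 2 without a spare partition (the path and label below are our own examples; enabling it with swapon still requires root):

```shell
# Create a 16 MB file and stamp a swap signature on it.
dd if=/dev/zero of=/tmp/swapfile.demo bs=1M count=16 status=none
mkswap -L swap-demo /tmp/swapfile.demo > /dev/null
# The SWAPSPACE2 signature confirms the area is initialized.
grep -ac SWAPSPACE2 /tmp/swapfile.demo    # → 1
```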
• 35. Expanding Disk Capacity Introduction The lack of available disk storage frequently plagues Linux systems administrators. The most common reasons for this are expanding databases, increasing numbers of users, and the larger number of tasks your Linux server is expected to perform until a replacement is found. This section explores how to add a disk to a Linux system by moving directories from a full partition to an empty one made available by the new disk, and then linking the directory structures of the two disks together. We add a hard disk with only one partition and then explain how to migrate data from the full disk to the new one.
  • 36. Expanding Disk Capacity Determining The Disk Types Linux stores the names of all known disk partitions in the /proc/partitions file. The entire hard disk is represented by an entry with a minor number of 0, and all the partitions on the drive are sequentially numbered after that. In the example, the system has two hard disks; disk /dev/hda has been partitioned, but the new disk (/dev/hdb) needs to be prepared to accept data. [root@localhost~]# cat /proc/partitions major minor #blocks name 3 0 7334145 hda 3 1 104391 hda1 3 2 1052257 hda2 3 3 2040255 hda3 3 4 1 hda4 3 5 3582463 hda5 3 6 554211 hda6 22 0 78150744 hdb [root@localhost~]#
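Whole disks can be picked out of such a listing mechanically: they are the entries whose minor number is 0. A sketch against a sample file that mirrors the slide's output (parsing the real /proc/partitions works the same way):

```shell
# Sample data shaped like /proc/partitions (header, blank line, then entries).
cat > /tmp/partitions.sample <<'EOF'
major minor  #blocks  name

   3     0    7334145 hda
   3     1     104391 hda1
   3     2    1052257 hda2
  22     0   78150744 hdb
EOF
# Print the name ($4) of every data row whose minor number ($2) is 0.
awk 'NF == 4 && $1 != "major" && $2 == 0 { print $4 }' /tmp/partitions.sample
# prints: hda
#         hdb
```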
• 37. Expanding Disk Capacity Preparing Partitions on New Disks Linux partition preparation is very similar to that in a Windows environment, because both operating systems share the fdisk partitioning utility. The steps are: 1) The first step in adding a new disk is to partition it in preparation for adding a filesystem to it. Type the fdisk command followed by the name of the disk. You want to run fdisk on the /dev/hdb disk, so the command is: [root@localhost~]# fdisk /dev/hdb Command (m for help): p Disk /dev/hdb: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System Command (m for help):
• 38. Expanding Disk Capacity Command (m for help): n Command action e extended p primary partition (1-4) Partition number (1-4): 1 First cylinder (1-9729, default 1):<RETURN> Using default value 1 Last cylinder or +size or +sizeM or +sizeK (1-9729, default 9729): Run the print (p) command to confirm that you successfully created the partition. Command (m for help): p Disk /dev/hdb: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/hdb1 1 9726 78148161 83 Linux Command (m for help):
  • 39. Expanding Disk Capacity Command (m for help): p Disk /dev/hda: 7510 MB, 7510164480 bytes 255 heads, 63 sectors/track, 913 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/hda1 * 1 13 104391 83 Linux /dev/hda2 14 144 1052257+ 83 Linux /dev/hda3 145 398 2040255 82 Linux swap /dev/hda4 399 913 4136737+ 5 Extended /dev/hda5 399 844 3582463+ 83 Linux /dev/hda6 845 913 554211 83 Linux Command (m for help): Changes won't be made to the disk's partition table until you use the w command to write, or save, the changes. Do that now, and, when finished, exit with the q command. Command (m for help): w Command (m for help): q After this is complete you'll need to verify your work and start migrating your data to the new disk. These steps will be covered next.
• 40. Expanding Disk Capacity Verifying Your New Partition You can take a look at the /proc/partitions file or use the fdisk -l command to see the changes to the disk partition structure of your system: [root@localhost~]# cat /proc/partitions major minor #blocks name ... ... 22 0 78150744 hdb 22 1 78150744 hdb1 [root@localhost]# fdisk -l ... ... Disk /dev/hdb: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/hdb1 1 9729 76051710 83 Linux
• 41. Expanding Disk Capacity Putting A Directory Structure On Your New Partition You now need to format the partition, giving it a new directory structure by using the mkfs command. [root@localhost]# mkfs -t ext3 /dev/hdb1 Next, you must create a special mount-point directory, to which the new partition will be attached. Create the directory /home/hdb1 for this purpose. [root@localhost]# mkdir /home/hdb1 When Linux boots, it searches the /etc/fstab file for a list of all partitions and their mounting characteristics, and then it mounts the partitions automatically. You'll have to add an entry for your new partition that looks like this: # vi /etc/fstab --- /dev/hdb1 /home/hdb1 ext3 defaults 1 2
  • 42. Expanding Disk Capacity Migrating Data Over To your New Partition As you remember from investigating with the df -k command, the /var partition is almost full. [root@localhost~]# df -k Filesystem 1K-blocks Used Available Use% Mounted on /dev/hda3 505636 118224 361307 25% / /dev/hda1 101089 14281 81589 15% /boot none 63028 0 63028 0% /dev/shm /dev/hda5 248895 6613 229432 3% /tmp /dev/hda7 3304768 2720332 416560 87% /usr /dev/hda2 3304768 3300536 4232 99% /var [root@localhost~]# As a solution, the /var partition will be expanded to the new /dev/hdb1 partition mounted on the /home/hdb1 directory mount point. To migrate the data, use these steps: 1) Back up the data on the partition you are about to work on.
• 43. Expanding Disk Capacity Rename the /var/transactions directory to /var/transactions-save to make sure you have an easy-to-restore backup of the data, not just the tapes. # mv /var/transactions /var/transactions-save Create a new, empty /var/transactions directory; this will later act as a mount point. # mkdir /var/transactions Copy the contents of the /var/transactions-save directory to the root directory of /dev/hdb1, which is actually /home/hdb1. # cp -a /var/transactions-save/* /home/hdb1 Unmount the new /dev/hdb1 partition. # umount /home/hdb1
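The rename/copy sequence above can be rehearsed with throwaway directories in /tmp standing in for the real partitions (all paths here are stand-ins; the real steps operate on /var and the mounted /home/hdb1):

```shell
# Set up a fake "full partition" containing one data file.
rm -rf /tmp/var /tmp/hdb1
mkdir -p /tmp/var/transactions /tmp/hdb1
echo "record-1" > /tmp/var/transactions/data.txt
# 1) Keep an easy-to-restore backup by renaming the directory.
mv /tmp/var/transactions /tmp/var/transactions-save
# 2) Recreate an empty directory to act as the future mount point.
mkdir /tmp/var/transactions
# 3) Copy the contents to the "new disk", preserving attributes (-a).
cp -a /tmp/var/transactions-save/. /tmp/hdb1/
cat /tmp/hdb1/data.txt    # → record-1
```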
• 44. Expanding Disk Capacity Edit the /etc/fstab file, removing the previous entry for /dev/hdb1 and replacing it with one using the new mount point. # vi /etc/fstab # #/dev/hdb1 /home/hdb1 ext3 defaults 1 2 /dev/hdb1 /var/transactions ext3 defaults 1 2 Remount /dev/hdb1 on the new mount point using the mount -a command, which reads /etc/fstab and automatically mounts any entries that are not mounted already. sh-2.05b# mount -a Test to make sure that the contents of the new /var/transactions directory are identical to /var/transactions-save. sh-2.05b# exit Make sure your applications are working correctly, then delete both the /var/transactions-save directory and the /home/hdb1 mount point directory at some later date. This exercise showed you how to migrate the entire contents of a subdirectory to a new disk. Linux also allows you to merge partitions together, to create a larger combined one. The reasons and steps for doing so will be explained next.
• 45. Redundant Array of Independent Disks(RAID) Introduction The main goals of using redundant arrays of inexpensive disks (RAID) are to improve disk data performance and provide data redundancy. RAID can be handled either by the operating system software or it may be implemented via a purpose-built RAID disk controller card without having to configure the operating system at all. This section will explain how to configure the software RAID schemes supported by Red Hat. RAID Types Whether hardware- or software-based, RAID can be configured using a variety of standards. Take a look at the most popular.
• 46. Redundant Array of Independent Disks(RAID) Configuring Software RAID RAID can be configured during the installation process using the Disk Druid application. Creating the RAID Partitions These examples use two 9.1 GB SCSI drives (/dev/sda and /dev/sdb) to illustrate the creation of a simple RAID 1 configuration using multiple RAID devices. On the Disk Partitioning Setup screen, select Manually partition with Disk Druid.
  • 47. Redundant Array of Independent Disks(RAID) 1. In Disk Druid, choose RAID to enter the software RAID creation screen.
  • 48. Redundant Array of Independent Disks(RAID) 2. Choose Create a software RAID partition to create a RAID partition as shown in Figure “RAID Partition Options”. Note that no other RAID options (such as entering a mount point) are available until RAID partitions, as well as RAID devices, are created.
  • 49. Redundant Array of Independent Disks(RAID) 3. A software RAID partition must be constrained to one drive. For Allowable Drives, select the drive to use for RAID. If you have multiple drives, by default all drives are selected and you must deselect the drives you do not want.
  • 50. Redundant Array of Independent Disks(RAID) Repeat these steps to create as many partitions as needed for your RAID setup. Notice that all the partitions do not have to be RAID partitions. For example, you can configure only the /boot/ partition as a software RAID device, leaving the root partition (/), /home/, and swap as regular file systems. “RAID 1 Partitions Ready, Pre-Device and Mount Point Creation” shows successfully allocated space for the RAID 1 configuration (for /boot/), which is now ready for RAID device and mount point creation:
• 51. Redundant Array of Independent Disks(RAID) Creating the RAID Devices and Mount Points Once you create all of your partitions as Software RAID partitions, you must create the RAID device and mount point. 1. Select the RAID button on the Disk Druid main partitioning screen. 2. The “RAID Options” dialog appears. Select Create a RAID device.
• 52. Redundant Array of Independent Disks(RAID) 3. Next, “Making a RAID Device and Assigning a Mount Point” appears, where you can make a RAID device and assign a mount point. 4. Select a mount point. 5. Choose the file system type for the partition (a traditional static ext2/ext3 file system). 6. Select a device name such as md0 for the RAID device. 7. Choose your RAID level. You can choose from RAID 0, RAID 1, and RAID 5.
  • 53. Redundant Array of Independent Disks(RAID) 8. The RAID partitions created appear in the RAID Members list. Select which of these partitions should be used to create the RAID device. 9. If configuring RAID 1 or RAID 5, specify the number of spare partitions. If a software RAID partition fails, the spare is automatically used as a replacement. For each spare you want to specify, you must create an additional software RAID partition (in addition to the partitions for the RAID device). Select the partitions for the RAID device and the partition(s) for the spare(s). 10. After clicking OK, the RAID device appears in the Drive Summary list. 11. Repeat this chapter's entire process for configuring additional partitions, devices, and mount points, such as the root partition (/), /home/, or swap. After completing the entire configuration, the figure as shown below, “Final Sample RAID Configuration” resembles the default configuration, except for the use of RAID.
  • 54. Redundant Array of Independent Disks(RAID) Final Sample RAID Configuration
• 55. Redundant Array of Independent Disks(RAID) Configuring Software RAID After Installation Only RAID levels 0, 1, and 5 can be implemented using software RAID. In Linux this is done with the mdadm command; mdadm stands for Multiple Disk Administration. First of all we have to prepare our disks for the RAID implementation by creating three or more partitions on different disks. Prepare The Partitions With fdisk You have to change each partition in the RAID set to type FD (Linux raid autodetect), and you can do this with fdisk. Here is an example using /dev/sda: [root@localhost]# fdisk /dev/sda Command (m for help):
• 56. Redundant Array of Independent Disks(RAID) Command (m for help): m ... ... p print the partition table q quit without saving changes s create a new empty Sun disklabel t change a partition's system id ... ... Command (m for help): Set The ID Type To FD Partition /dev/sda1 is the first partition on disk /dev/sda. Modify its type using the t command, and specify the partition number and type code. You can also use the L command to get a full listing of ID types in case you forget.
  • 57. Redundant Array of Independent Disks(RAID) Command (m for help): t Partition number (1-5): 1 Hex code (type L to list codes): L ... ... ... 16 Hidden FAT16 61 SpeedStor f2 DOS secondary 17 Hidden HPFS/NTF 63 GNU HURD or Sys fd Linux raid auto 18 AST SmartSleep 64 Novell Netware fe LANstep 1b Hidden Win95 FA 65 Novell Netware ff BBT Hex code (type L to list codes): fd Changed system type of partition 1 to fd (Linux raid autodetect) Command (m for help): Make Sure The Change Occurred Use the p command to get the new proposed partition table:
  • 58. Redundant Array of Independent Disks(RAID) Command (m for help): p Disk /dev/sda: 4311 MB, 4311982080 bytes 16 heads, 63 sectors/track, 8355 cylinders Units = cylinders of 1008 * 512 = 516096 bytes Device Boot Start End Blocks Id System /dev/sda1 1 4088 2060320+ 83 Linux /dev/sda2 4089 5713 819000 83 Linux /dev/sda3 4089 5713 819000 83 Linux /dev/sda4 6608 8355 880992 5 Extended /dev/sda5 6608 7500 450040+ 83 Linux raid autodetect /dev/sda6 7501 8355 430888+ fd Linux raid autodetect Command (m for help): w Use the w command to permanently save the changes to disk /dev/sda Repeat For The Other Partitions For the sake of brevity, I won't show the process for the other partitions. It's enough to know that the steps for changing the IDs for /dev/sda6 and /dev/sdb5 are very similar.
• 59. Redundant Array of Independent Disks(RAID) [root@localhost ~]# fdisk /dev/sdb Disk /dev/sdb: 9175 MB, 9175979520 bytes 255 heads, 63 sectors/track, 1115 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 * 1 609 4891761 83 Linux /dev/sdb2 610 1115 4064445 5 Extended /dev/sdb5 610 622 104391 fd Linux raid autodetect Command (m for help): w Preparing the RAID Set Now that the partitions have been prepared, we have to merge them into a new RAID partition that we'll then have to format and mount. Here's how it's done.
• 60. Redundant Array of Independent Disks(RAID) Create the RAID Set You use the mdadm command with the --create option to create the RAID set. In this example we use the --level option to specify RAID 1, and the --raid-devices option to define the number of partitions to use. The syntax for RAID creation is: [root@localhost ~]# mdadm -C /dev/md0 -l1 -n2 /dev/sda5 /dev/sdb5 mdadm: array /dev/md0 started. -C: Create -l: RAID level, i.e. 0, 1, 5 -n: Number of disks used OR [root@localhost ~]# mdadm -C /dev/md0 -l1 -n2 missing /dev/sdb5 mdadm: array /dev/md0 started. missing: tells mdadm to create the array with the remaining disks, leaving this slot to be filled in later.
• 61. Redundant Array of Independent Disks(RAID) OR [root@localhost ~]# mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sda5 /dev/sdb5 mdadm: array /dev/md0 started. OR [root@localhost ~]# mdadm -C /dev/md0 -l1 -n2 /dev/sda5 /dev/sdb5 -x1 /dev/sda6 mdadm: array /dev/md0 started. -x1: for adding a spare disk during RAID creation. Now make the ext3 filesystem on /dev/md0 [root@localhost ~]# mkfs.ext3 /dev/md0
• 62. Redundant Array of Independent Disks(RAID) Confirm RAID Is Correctly Initialized The /proc/mdstat file provides the current status of all RAID devices. Confirm that the initialization is finished by inspecting the file and making sure that there are no initialization-related messages. If there are, then wait until there are none. [root@localhost ~]# cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sda5[1] sdb5[0] 104320 blocks [2/2] [UU] unused devices: <none>
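The status field at the end of the blocks line ([UU] means both members are up; an underscore marks a failed member) can also be checked mechanically. A sketch against a saved snapshot (reading the live /proc/mdstat works the same way):

```shell
# Snapshot shaped like the healthy /proc/mdstat output above.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sda5[1] sdb5[0]
      104320 blocks [2/2] [UU]
unused devices: <none>
EOF
# Grab the last field of the "blocks" line, e.g. [UU] or [U_].
status=$(awk '/blocks/ { print $NF }' /tmp/mdstat.sample)
case "$status" in
    *_*) echo "degraded" ;;
    *)   echo "healthy"  ;;   # prints: healthy
esac
```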
  • 63. Redundant Array of Independent Disks(RAID) OR [root@localhost ~]# mdadm --detail /dev/md0 /dev/md0: Version : 00.90.01 Creation Time : Fri Jul 13 17:28:13 2007 Raid Level : raid1 Array Size : 104320 (101.88 MiB 106.82 MB) Device Size : 104320 (101.88 MiB 106.82 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Fri Jul 13 17:28:40 2007 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Number Major Minor RaidDevice State 0 8 23 0 active sync /dev/sda5 1 8 24 1 active sync /dev/sdb5 UUID : e5d221a1:323fa424:e98e53dc:395326af Events : 0.4 [root@localhost ~]#
• 64. Redundant Array of Independent Disks(RAID) Let us make /etc/mdadm.conf and an initial ramdisk with RAID support, so that the kernel can understand the RAID at boot time. [root@localhost ~]# mdadm --detail --scan > /etc/mdadm.conf [root@localhost ~]# mkinitrd -v --preload=raid1 /boot/initrd-`uname -r`.img.raid `uname -r` If more RAID levels are used then they should also be preloaded, for example: [root@localhost ~]# mkinitrd -v --preload=raid0 --preload=raid1 --preload=raid5 /boot/initrd-`uname -r`.img.raid `uname -r` It will create an initrd image file initrd-2.6.9-11.EL.img.raid. Now make the necessary changes in the /etc/grub.conf file to instruct GRUB to load the initrd image file with RAID support at boot time. Add the following line in grub.conf: initrd /initrd-2.6.9-11.EL.img.raid
  • 65. Redundant Array of Independent Disks(RAID) • Create A Mount Point For The RAID Set • The next step is to create a mount point for /dev/md0. In this case we'll create one called /raid-data • [root@localhost]# mkdir /raid-data • [root@localhost ~]# mount /dev/md0 /raid-data • Edit The /etc/fstab File • The /etc/fstab file lists all the partitions that need to mount when the system boots. Add an Entry for the RAID set, the /dev/md0 device. • /dev/md0 /raid-data ext3 defaults 1 2
• 66. Redundant Array of Independent Disks(RAID) RAID Failure Testing Testing after adding an extra disk to the RAID: [root@localhost ~]# mdadm /dev/md0 -a /dev/sda6 mdadm: hot added /dev/sda6 [root@localhost ~]# cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sda6[2] sdb5[1] sda5[0] 104320 blocks [2/2] [UU] unused devices: <none> Testing the failure of one disk: [root@localhost ~]# mdadm /dev/md0 -f /dev/sdb5 mdadm: set /dev/sdb5 faulty in /dev/md0 [root@localhost ~]# cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb6[2] sdb5[1](F) sdb5[0] 104320 blocks [2/1] [U_] [============>........] recovery = 64.7% (67904/104320) finish=0.0min speed=33952K/sec unused devices: <none>
• 67. Redundant Array of Independent Disks(RAID)
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Fri Jul 13 16:39:06 2007
     Raid Level : raid1
     Array Size : 104320 (101.88 MiB 106.82 MB)
    Device Size : 104320 (101.88 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Fri Jul 13 16:54:42 2007
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0
    Number   Major   Minor   RaidDevice State
       0       8      21         0      active sync   /dev/sda5
       1       8      22         1      active sync   /dev/sda6
       2       8      37        -1      faulty        /dev/sdb5
           UUID : cd8563c9:d52e18f5:8deb3cc3:6304ce1c
         Events : 0.223
• 68. Redundant Array of Independent Disks(RAID)
[root@localhost ~]# mdadm /dev/md0 -r /dev/sdb5
mdadm: hot removed /dev/sdb5
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda6[1] sda5[0]
      104320 blocks [2/2] [UU]
unused devices: <none>
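The failure test above can also be scripted: a member marked (F) in /proc/mdstat is one that mdadm has flagged faulty. A minimal sketch, run here against a sample mdstat excerpt rather than the live file so it works anywhere:

```shell
# Sample /proc/mdstat excerpt (stand-in for the live file)
mdstat='md0 : active raid1 sda6[2] sdb5[1](F) sda5[0]
      104320 blocks [2/1] [U_]'

# Extract members flagged (F), i.e. devices mdadm has marked faulty
failed=$(echo "$mdstat" | grep -o '[a-z0-9]*\[[0-9]*\](F)' | cut -d'[' -f1)
echo "failed devices: $failed"
```

A check like this is a common building block for cron-driven RAID monitoring, although mdadm --monitor provides the same service natively.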
  • 69. LVM(Logical Volume Manager) The Logical Volume Manager (LVM) enables you to resize your partitions without having to modify the partition tables on your hard disk. This is most useful when you find yourself running out of space on a filesystem and want to expand into a new disk partition versus migrating all or a part of the filesystem to a new disk. Physical Volume: A physical volume (PV) is another name for a regular physical disk partition that is used or will be used by LVM. Volume Group: Any number of physical volumes (PVs) on different disk drives can be lumped together into a volume group (VG). Under LVM, volume groups are analogous to a virtual disk drive. Logical Volumes: Volume groups must then be subdivided into logical volumes. Each logical volume can be individually formatted as if it were a regular Linux partition. A logical volume is, therefore, like a virtual partition on your virtual disk drive. Physical Extent: Real disk partitions are divided into chunks of data called physical extents (PEs) when you add them to a logical volume. PEs are important as you usually have to specify the size of your volume group not in gigabytes, but as a number of physical extents.
• 70. LVM(Logical Volume Manager) The physical volumes are combined into logical volumes, with the exception of the /boot/ partition. The /boot/ partition cannot be on a logical volume because the boot loader cannot read it. If the root (/) partition is on a logical volume, create a separate /boot/ partition which is not a part of a volume group.
  • 71. LVM(Logical Volume Manager) The volume groups can be divided into logical volumes, which are assigned mount points, such as /home and / and file system types, such as ext2 or ext3. When "partitions" reach their full capacity, free space from the volume group can be added to the logical volume to increase the size of the partition. When a new hard drive is added to the system, it can be added to the volume group, and partitions that are logical volumes can be increased in size.
• 72. LVM(Logical Volume Manager) What is LVM2? LVM version 2, or LVM2, is the default for Red Hat Enterprise Linux 5, which uses the device mapper driver contained in the 2.6 kernel. LVM2 can be upgraded from versions of Red Hat Enterprise Linux running the 2.4 kernel. Steps required to configure LVM include: Creating physical volumes from the hard drives. Creating volume groups from the physical volumes. Creating logical volumes from the volume groups and assigning the logical volumes mount points.
• 73. LVM(Logical Volume Manager) Basic LVM commands Initializing disks or disk partitions To use LVM, partitions and whole disks must first be converted into physical volumes (PVs) with the pvcreate command. This wipes out all the data on them in preparation for the next step. For example, to convert /dev/hda5 and /dev/hdb5 into PVs use the following command: # pvcreate /dev/hda5 /dev/hdb5 Creating a volume group Use the vgcreate command to combine the two physical volumes into a single unit called a volume group: # vgcreate Song /dev/hda5 /dev/hdb5
• 74. LVM(Logical Volume Manager)
Creating a logical volume
Now we are ready to partition the volume group into logical volumes with the lvcreate command. Like hard disks, which are divided into blocks of data, logical volumes are divided into units called physical extents (PEs). Here we are creating three logical volumes of 1000 MB each in the volume group Song. We can also give the size as a number of PEs or as a percentage of the free/total space available in that volume group.
# lvcreate -L 1000M -n OldSong Song
# lvcreate -L 1000M -n NewSong Song
# lvcreate -L 1000M -n RemixSong Song
Now our logical volumes are created. They can be used further by making a filesystem on each and mounting it somewhere.
• 75. LVM(Logical Volume Manager) Format the Logical Volumes # mkfs.ext3 /dev/Song/NewSong # mkfs.ext3 /dev/Song/OldSong # mkfs.ext3 /dev/Song/RemixSong Mount The Logical Volumes # mkdir /NewSong # mkdir /OldSong # mkdir /RemixSong # mount /dev/Song/NewSong /NewSong # mount /dev/Song/OldSong /OldSong # mount /dev/Song/RemixSong /RemixSong
• 76. LVM(Logical Volume Manager)
Or we can insert the following lines in the /etc/fstab file to mount them at boot time:
/dev/Song/NewSong /NewSong ext3 defaults 1 2
/dev/Song/OldSong /OldSong ext3 defaults 1 2
/dev/Song/RemixSong /RemixSong ext3 defaults 1 2
Extending a logical volume
Let us consider that our logical volume NewSong becomes full and there is no space left in our volume group Song. To expand NewSong we have to make a physical partition, create a PV on it, extend our VG (Song) onto the newly created partition, and thereafter extend our LV (NewSong) as per our need.
• 77. LVM(Logical Volume Manager)
Suppose we have the /dev/sdb6 partition available and we have to extend NewSong by 1000MB.
Create a physical volume on /dev/sdb6:
# pvcreate /dev/sdb6
Extend our volume group onto /dev/sdb6:
# vgextend Song /dev/sdb6
Extend the logical volume NewSong by 1000MB:
# lvextend -L +1000M /dev/Song/NewSong
Resize the filesystem of NewSong to use the new space:
# resize2fs /dev/Song/NewSong
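lvcreate and lvextend round the requested size up to a whole number of physical extents (the LVM2 default PE size is 4 MiB; vgdisplay shows the actual value for a volume group). A quick sketch of that arithmetic for the 1000 MB requests used here:

```shell
pe_size=4     # PE size in MiB (LVM2 default; confirm with vgdisplay)
request=1000  # requested logical volume size in MiB
# Round up to a whole number of extents, as LVM does
extents=$(( (request + pe_size - 1) / pe_size ))
actual=$(( extents * pe_size ))
echo "$extents extents, $actual MiB actually allocated"
```

With a request that is not a multiple of the PE size (say 1001 MiB), the allocated size would be rounded up to the next extent boundary.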
• 78. LVM(Logical Volume Manager)
[Diagram: logical volumes OldSong, NewSong, and RemixSong (1000M each) carved out of the volume group Song (3GB), which is built from the physical volumes sda5 (1.5GB) and sdb5 (1.5GB).]
• 79. LVM(Logical Volume Manager)
[Diagram: the same layout after extension. # pvcreate /dev/sdb6 creates a new 1GB physical volume, # vgextend Song /dev/sdb6 grows the volume group by 1GB, and # lvextend -L +1000M /dev/Song/NewSong followed by # resize2fs /dev/Song/NewSong grows NewSong by 1000M.]
• 80. Package Management Package Management All software on a Red Hat Enterprise Linux system is divided into RPM packages. This section describes how to manage the RPM packages on a Red Hat Enterprise Linux system using graphical and command line tools. RPM has five basic modes of operation: installing, uninstalling, upgrading, querying, and verifying. For complete details and options try rpm --help.
  • 81. Package Management Installing RPM packages typically have file names like foo-1.0-1.i386.rpm. The file name includes the Package Name (foo) Version (1.0) Release (1) Architecture (i386). Installing a package is as simple as typing the following command at a shell prompt: # rpm -ivh foo-1.0-1.i386.rpm foo #################################### # As you can see, RPM prints out the name of the package and then prints a succession of hash marks as the package is installed as a progress meter.
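The name-version-release-architecture convention can be pulled apart with plain shell parameter expansion, which is handy in scripts. A sketch using the example file name from above:

```shell
pkg="foo-1.0-1.i386.rpm"
base=${pkg%.rpm}      # foo-1.0-1.i386
arch=${base##*.}      # architecture: i386
nvr=${base%.*}        # foo-1.0-1
release=${nvr##*-}    # release: 1
nv=${nvr%-*}          # foo-1.0
version=${nv##*-}     # version: 1.0
name=${nv%-*}         # package name: foo
echo "$name | $version | $release | $arch"
```

For package names that themselves contain hyphens, rpm's own query format (rpm -qp --queryformat) is more robust than string splitting.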
  • 82. Package Management Package Already Installed If the package of the same version is already installed, you will see: # rpm -ivh foo-1.0-1.i386.rpm foo package foo-1.0-1 is already installed # If you want to install the package anyway and the same version you are trying to install is already installed, you can use the --replacepkgs option, which tells RPM to ignore the error: # rpm -ivh --replacepkgs foo-1.0-1.i386.rpm foo #################################### #
  • 83. Package Management Conflicting Files If you attempt to install a package that contains a file which has already been installed by another package or an earlier version of the same package, you'll see: # rpm -ivh foo-1.0-1.i386.rpm foo /usr/bin/foo conflicts with file from bar-1.0-1 # To make RPM ignore this error, use the --replacefiles option: # rpm -ivh --replacefiles foo-1.0-1.i386.rpm foo #################################### #
  • 84. Package Management Unresolved Dependency RPM packages can "depend" on other packages, which means that they require other packages to be installed in order to run properly. If you try to install a package which has an unresolved dependency, you'll see: # rpm -ivh foo-1.0-1.i386.rpm failed dependencies: bar is needed by foo-1.0-1 # To handle this error you should install the requested packages. If you want to force the installation anyway (a bad idea since the package probably will not run correctly), use the --nodeps option. # rpm -ivh --nodeps foo-1.0-1.i386.rpm
  • 85. Package Management Uninstalling Uninstalling a package is just as simple as installing one. Type the following command at a shell prompt: # rpm -e foo # You can encounter a dependency error when uninstalling a package if another installed package depends on the one you are trying to remove. For example: # rpm -e foo removing these packages would break dependencies: foo is needed by bar-1.0-1 # To cause RPM to ignore this error and uninstall the package anyway use the --nodeps option.
  • 86. Package Management Upgrading Upgrading a package is similar to installing one. Type the following command at a shell prompt: # rpm -Uvh foo-2.0-1.i386.rpm foo #################################### # Upgrading is really a combination of uninstalling and installing, so during an RPM upgrade you can encounter uninstalling and installing errors, plus one more. If RPM thinks you are trying to upgrade to a package with an older version number, you will see: # rpm -Uvh foo-1.0-1.i386.rpm foo package foo-2.0-1 (which is newer) is already installed # To cause RPM to "upgrade" anyway, use the --oldpackage option: # rpm -Uvh --oldpackage foo-1.0-1.i386.rpm foo #####################################
  • 87. Package Management Querying Use the rpm -q command to query the database of installed packages. The rpm -q foo command will print the package name, version, and release number of the installed package foo: # rpm -q foo foo-2.0-1 # Instead of specifying the package name, you can use the following options with -q to specify the package(s) you want to query. These are called Package Specification Options. -a queries all currently installed packages. -f <file> will query the package which owns <file>. When specifying a file, you must specify the full path of the file (for example, /usr/bin/ls) -p <packagefile> queries the package <packagefile>.
  • 88. Package Management There are a number of ways to specify what information to display about queried packages. The following options are used to select the type of information for which you are searching. These are called Information Selection Options. -i Displays package information including name, description, release, size, build date, install date, vendor, and other miscellaneous information. -l Displays the list of files that the package contains. -s Displays the state of all the files in the package. -d Displays a list of files marked as documentation (man pages, info pages, READMEs, etc.). -c Displays a list of files marked as configuration files. These are the files you change after installation to adapt the package to your system (for example, sendmail.cf, passwd, inittab, etc.).
  • 89. Package Management Verifying Verifying a package compares information about files installed from a package with the same information from the original package. Among other things, verifying compares the size, MD5 sum, permissions, type, owner, and group of each file. The command rpm -V verifies a package. You can use any of the Package Selection Options listed for querying to specify the packages you wish to verify. A simple use of verifying is rpm -V foo, which verifies that all the files in the foo package are as they were when they were originally installed. For example:
  • 90. Package Management • To verify a package containing a particular file: # rpm -Vf /bin/vi • To verify ALL installed packages: # rpm -Va • To verify an installed package against an RPM package file: # rpm -Vp foo-1.0-1.i386.rpm This command can be useful if you suspect that your RPM databases are corrupt.
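Each position in the attribute string rpm -V prints (e.g. S.5....T) reports one failed test: S size, M mode, 5 MD5 sum, D device, L link, U user, G group, T mtime; a dot means the test passed. A sketch decoding a sample verify line (the file name is hypothetical):

```shell
# Sample rpm -V output line (hypothetical config file)
line="S.5....T  c /etc/foo.conf"
flags=${line%% *}    # first field: the attribute flags
report=""
case $flags in *S*) report="$report size";; esac
case $flags in *5*) report="$report md5";; esac
case $flags in *M*) report="$report mode";; esac
case $flags in *T*) report="$report mtime";; esac
echo "changed:$report"
```

For a modified configuration file such differences are often expected; for a binary in /usr/bin they can indicate corruption or tampering.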
  • 91. User and Group Management The control of users and groups is a core element of Red Hat Enterprise Linux system administration. Users can be either people (meaning accounts tied to physical users) or accounts which exist for specific applications to use. Groups are logical expressions of organization, tying users together for a common purpose. Users within a group can read, write, or execute files owned by that group. Each user and group has a unique numerical identification number called a userid (UID) and a groupid (GID) respectively.
  • 92. User Management User information: The id command prints information for a certain user. Use it like this: # id username Create a user To create a new user: # useradd -c "My Example User" username # passwd username The created user is initially in an inactive state. To activate the user you have to assign a password with passwd. Some useful useradd options include the following:
  • 93. User Management -c :sets a comment for the user. -s : is used in order to define the user’s default login shell. If not used, then the system’s default shell becomes the user’s default login shell. -r : creates a user with UID<500 (system account) -d : sets the user’s home directory. If not used, the default home directory is created (/home/username/) -M : the home directory is not created. This is useful when the directory already exists. # useradd -c "This user cannot login to a shell" -s /sbin/nologin <username> # passwd <username>
• 94. User Management Change the user’s password To change a user’s password: # passwd <username> If it’s used without specifying a username, then the currently logged in user’s password is changed. Add a user to a group The usermod command is used to modify a user account’s settings. Check the man page for all the available options. One common use of this command is to add a user to a group: # usermod -a -G <group1> <username> The -a option is critical: the user is added to group1 while continuing to be a member of his other groups. If it’s not used, then the user is added only to group1 and removed from any other supplementary groups. So, take note!
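useradd and usermod combine naturally in batch scripts. A sketch that only prints the commands (the usernames and the developers group are made up) so the output can be reviewed before it is ever run as root:

```shell
# Generate, but do not execute, the account-creation commands
cmds=$(for u in alice bob carol; do        # hypothetical usernames
    echo "useradd -c 'Dev team' -s /bin/bash $u"
    echo "usermod -a -G developers $u"     # hypothetical group
done)
echo "$cmds"
```

Piping the reviewed output through sh (as root) would then create the accounts; each one still needs a password set with passwd before it can log in.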
  • 95. User Management Lock and Unlock user accounts usermod uses are to lock and unlock user accounts. To lock out a user: # usermod -L <username> To Unlock User # usermod -U <username> Delete a user Userdel is used to delete a user account. If the -r option is used then the user’s home directory and mail spool are deleted too: # userdel -r <username>
  • 96. User Management Create a new group To create a new group, issue the command: # groupadd <groupname> The -r option can be used to create a group with GID<500 (system). Change a group’s name Groupmod can be used to change a group name: # groupmod -n newgroupname <groupname>
• 97. User Management Delete a group Groupdel can delete a group: # groupdel <groupname> In order to delete a user’s primary group (usually this is the group whose name equals the username), the respective user must be deleted first.
  • 104. Dynamic Host Configuration Protocol (DHCP) DHCP is a network protocol that automatically assigns TCP/IP information to client machines. Each DHCP client connects to the centrally located DHCP server, which returns that client's network configuration (including the IP address, gateway, and DNS servers). Why Use DHCP? o DHCP is useful for automatic configuration of client network interfaces. o DHCP is also useful if an administrator wants to change the IP addresses of a large number of systems.
• 105. DHCP Server Configuration
The daemon which runs on the server is dhcpd and is included in the package dhcp-<version>.rpm. If dhcpd is not installed on the server, then install it:
# rpm -ivh dhcp*
The DHCP server is controlled by the configuration file /etc/dhcpd.conf. To make this file, copy the sample file and make the necessary changes as below:
# cp /usr/share/doc/dhcp-3.0.1/dhcpd.conf.sample /etc/dhcpd.conf
Change the parameters in dhcpd.conf as per the requirement; the minimal changes are as below:
# vi /etc/dhcpd.conf
subnet 192.168.0.0 netmask 255.255.255.0 {
    range dynamic-bootp 192.168.0.100 192.168.0.200;
}
Now each of our clients will receive an IP address between 192.168.0.100 and 192.168.0.200, plus the subnet mask, gateway, and broadcast address, from the dhcp server.
  • 106. DHCP Server Configuration Service startup : # service dhcpd start To start the dhcp daemon. # chkconfig dhcpd on To start the daemon on boot time. Lease Database : On the DHCP server, the file /var/lib/dhcpd/dhcpd.leases stores the DHCP client lease database. DHCP lease information for each recently assigned IP address is automatically stored in the lease database. The information includes the length of the lease, to whom the IP address has been assigned, the start and end dates for the lease, and the MAC address of the network interface card that was used to retrieve the lease. The lease database is recreated from time to time so that it is not too large.
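Entries in /var/lib/dhcpd/dhcpd.leases are plain text, so a lease can be inspected with awk. A sketch run against a sample lease block (all values are made up) rather than the live file:

```shell
# Sample dhcpd.leases entry (made-up values)
lease='lease 192.168.0.103 {
  starts 5 2007/07/13 11:28:13;
  ends 5 2007/07/13 23:28:13;
  hardware ethernet 00:0c:29:ab:cd:ef;
}'
ip=$(echo "$lease" | awk '/^lease/ {print $2}')
mac=$(echo "$lease" | awk '/hardware ethernet/ {print $3}' | tr -d ';')
echo "$ip is leased to $mac"
```

The same awk patterns work on the real lease database, which is useful when tracking down which client holds a given address.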
  • 107. Network File System (NFS) • NFS is the most common method for providing file sharing services on Linux and Unix Networks. It is a distributed file system that enables local access to remote disks and file system. • NFS uses a standard client/server architecture.
  • 108. NFS – Cont’d… Red Hat Enterprise Linux uses a combination of kernel-level support and daemon processes to provide NFS file sharing. To share or mount NFS file systems, the following services work together : • /etc/init.d/nfs Starts the Network File System service. • /etc/init.d/portmap Starts the portmap daemon, called the port mapper; needed by all programs that use Remote Procedure Call (RPC). • /etc/init.d/nfslock It starts locking daemon lockd and statd, although nfsd starts the lockd itself, but we must start the statd separately.
• 109. NFS Server Configuration There are three ways to configure an NFS server under Red Hat Enterprise Linux: 1. manually editing its configuration file (/etc/exports), 2. using the /usr/sbin/exportfs command, 3. using the NFS Server Configuration graphical tool (system-config-nfs).
• 110. NFS Server Configuration – cont’d... 1. Manually editing the configuration file : Make the following entries in the /etc/exports file (local directory to share, then hosts and options):
/data       192.168.0.6/255.255.255.0(rw,sync)
/usr/local  *.example.com(ro)
/home       @dev(rw,async)
/var/tmp    192.168.0.11(rw,async)
• 111. NFS Server Configuration – cont’d...
• Here the first line permits any host on the 192.168.0.0/255.255.255.0 network to access the /data directory with read-write permission.
• The second line permits any host with a name of the format somehost.example.com to access the /usr/local directory with read-only permission.
• The third line permits any member of the NIS netgroup named dev to access the /home directory with read-write permission.
• The last line permits only the host whose IP address is 192.168.0.11 to access the /var/tmp directory with read-write permission.
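Each /etc/exports entry splits cleanly into directory, client, and option list, which shell parameter expansion can demonstrate. A sketch using the first entry above:

```shell
entry='/data 192.168.0.6/255.255.255.0(rw,sync)'
dir=${entry%% *}                  # exported directory
spec=${entry#* }                  # client(options)
client=${spec%%(*}                # host, network, wildcard, or @netgroup
opts=${spec#*(}; opts=${opts%)}   # comma-separated export options
echo "$dir exported to $client with options: $opts"
```

This is the same decomposition exportfs performs when it synchronizes /etc/exports into /var/lib/nfs/xtab.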
• 112. NFS Server Configuration – cont’d...
2. Using the exportfs command :
# /usr/sbin/exportfs
This command writes the currently exported file systems to /var/lib/nfs/xtab and to the kernel’s internal table of exported file systems.
# exportfs -a
Initializes the xtab file, synchronizing it with /etc/exports.
# exportfs -o exp_opts host:directory
Adds a new export to /var/lib/nfs/xtab and to the kernel’s internal table of NFS exports without editing /etc/exports. For example:
# exportfs -o async,rw 192.168.0.3:/var/tmp
More options with exportfs are:
-v : verbose
-u : unexport
-i : ignore /etc/exports
• 113. NFS Server Configuration – cont’d... 3. Using the NFS Server Configuration Tool (system-config-nfs) : To start the application, click on System => Administration => Server Settings => NFS. Or we can also type the command system-config-nfs in a terminal.
• 114. NFS Client Configuration
# showmount -e <host/server IP>
# showmount -e 192.168.0.2
Shows the NFS server’s list of exported file systems.
# mkdir /mntdata
# mount 192.168.0.2:/data /mntdata
Mounts the exported file system /data on /mntdata to use it. Or make an entry in the /etc/fstab file:
192.168.0.2:/data /mntdata nfs defaults 0 0
• 115. NFS – Limitation (do & don’t)
Good candidates for NFS exports include any file system that is shared among a large number of users. We can export only local file systems and their subdirectories; we can’t export a file system that is itself an NFS mount. A subdirectory of an exported file system can’t be exported unless the subdirectory resides on a different physical disk than its parent. For example:
/dev/sda1 /usr/local ext3 defaults 1 2
Here, if we export /usr/local, we cannot export /usr/local/devtools.
/dev/sda1 /usr/local ext3 defaults 1 2
/dev/sdb2 /usr/local/devtools ext3 defaults 1 2
Now we could export both /usr/local as well as /usr/local/devtools. Conversely, the parent directory of an exported subdirectory cannot be exported unless the parent directory resides on a different physical disk.
  • 116. FTP Server Configuration Introduction : The File Transfer Protocol (FTP) is used as one of the most common means of copying files between servers over the Internet. Here we’ll see how to convert your Linux box into an FTP server using the default Very Secure FTP Daemon (VSFTPD). FTP relies on a pair of TCP ports to get the job done. It operates in two connection channels : FTP Control Channel, TCP Port 21: All commands we send and the ftp server's responses to those commands will go over the control connection, but any data sent back (such as "ls" directory lists or actual file data in either direction) will go over the data connection. FTP Data Channel, TCP Port 20: This port is used for all subsequent data transfers between the client and server.
  • 117. FTP Server Configuration – Cont’d… Types of FTP : From a networking perspective, the two main types of FTP are active and passive. • In active FTP, the FTP server initiates a data transfer connection back to the client. • For passive FTP, the connection is initiated from the FTP client.
  • 118. FTP Server Configuration – Cont’d… The important files and directories are : • /etc/rc.d/init.d/vsftpd The initialization script (initscript). • /etc/vsftpd/vsftpd.conf The configuration file for vsftpd. • /etc/vsftpd.ftpusers A list of users not allowed to log into vsftpd. • /etc/vsftpd.user_list This file can be configured to either deny or allow access to the users listed, depending on whether the userlist_deny directive is set to YES (default) or NO in /etc/vsftpd/vsftpd.conf. If /etc/vsftpd.user_list is used to grant access to users, the usernames listed must not appear in /etc/vsftpd.ftpusers. • /var/ftp/ The directory containing files served by vsftpd. It also contains the /var/ftp/pub/ directory for anonymous users.
• 119. FTP Server Configuration – Cont’d…
# service vsftpd start/stop/restart
To start/stop/restart the FTP service.
# netstat -a | grep 21
To verify that the service is listening on port 21.
The important parameters to set in the /etc/vsftpd/vsftpd.conf configuration file are :
anonymous_enable = yes/no default is yes; allows anonymous users to log in.
local_enable = yes/no default is yes; allows local users to log into the system.
userlist_enable = yes/no default is no; when enabled, the users listed in the file specified by the userlist_file directive are denied access.
userlist_deny = yes/no default is yes; when used in conjunction with the userlist_enable directive and set to NO, all local users are denied access unless the username is listed in the file specified by the userlist_file directive.
• 120. FTP Server Configuration – Cont’d… userlist_file = /etc/vsftpd.user_list Specifies the file referenced by vsftpd when the userlist_enable directive is enabled. tcp_wrappers = yes/no When enabled, TCP wrappers are used to grant access to the server. anon_max_rate = <value> Specifies the maximum data transfer rate for anonymous users in bytes per second. local_max_rate = <value> Specifies the maximum data transfer rate for local users logged into the server in bytes per second. max_clients = <value> Specifies the maximum number of simultaneous clients allowed to connect to the server when it is running in standalone mode. max_per_ip = <value> Specifies the maximum number of clients allowed to connect from the same source IP address. (The default <value> is 0, which does not limit connections.)
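Because vsftpd.conf is simple key=value text, a directive's effective value can be checked with awk. A sketch against a sample fragment (a live check would read /etc/vsftpd/vsftpd.conf instead):

```shell
# Sample vsftpd.conf fragment (stand-in for the real file)
conf='# hardening choices
anonymous_enable=NO
local_enable=YES
max_clients=50'
# Pull one directive's value, skipping comment lines
anon=$(echo "$conf" | awk -F= '!/^#/ && $1=="anonymous_enable" {print $2}')
echo "anonymous logins: $anon"
```

Remember that a directive absent from the file falls back to its compiled-in default (yes, for anonymous_enable), so an empty result does not mean the feature is off.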
• 121. FTP Server Configuration – Cont’d…
Connect to the ftp server (192.168.1.100) :
[root@cipa_nic tmp]# ftp 192.168.1.100
Connected to 192.168.1.100 (192.168.1.100)
220 ready, dude (vsFTPd 1.1.0: beat me, break me)
Name (192.168.1.100:root): user1
331 Please specify the password.
Password:
230 Login successful. Have fun.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> put testfile
ftp> get vsftpd-1.1.0-1.i386.rpm
ftp> exit
221 Goodbye.
[root@cipa_nic tmp]#
  • 122. Syslog Server Configuration Introduction : • Linux applications use the syslog utility to export all their errors and status messages to files located in the /var/log directory. • The main configuration file /etc/syslog.conf decides that what level of error messages for the services are to be written in which file. By default most of the messages are written in /var/log/messages file. • Configuring syslog messages to a Remote Log Server, we have to do two things : – Configuring the Linux syslog Server. – Configuring the Linux Client.
• 123. Syslog Server Configuration – cont’d
• Syslog reserves facilities local0 through local7 for log messages received from remote servers and network devices.
• Syslog Facility and Severity Numbering Scheme for Local Directors
Facility   FF Value   Severity                            SS Value
local 0    16         System unusable                     0
local 1    17         Immediate action required           1
local 2    18         Critical condition                  2
local 3    19         Error conditions                    3
local 4    20         Warning conditions                  4
local 5    21         Normal but significant conditions   5
local 6    22         Informational messages              6
local 7    23         Debugging messages                  7
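The two numeric columns combine into the PRI value that prefixes every syslog packet: PRI = facility * 8 + severity. A quick check for local0 (16) at warning level (4), per the table above:

```shell
facility=16   # local0, from the FF Value column
severity=4    # warning conditions, from the SS Value column
pri=$(( facility * 8 + severity ))
echo "<$pri>"   # the message on the wire would start with <132>
```

Going the other way, facility = PRI / 8 and severity = PRI % 8, which is how a log server classifies incoming messages.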
• 124. Syslog Server Configuration – cont’d
• Configuring the Linux syslog Server : Edit the /etc/sysconfig/syslog file and set the SYSLOGD_OPTIONS variable as shown below :
# Options to syslogd
# -m 0 disables 'MARK' messages.
# -r enables logging from remote machines
# -x disables DNS lookups on messages received with -r
# See syslogd(8) for more details
SYSLOGD_OPTIONS="-m 0 -r -x"
Now the server will start listening for log messages from remote machines on UDP port 514.
• 125. Syslog Server Configuration – cont’d
Configuring the Linux remote servers :
Edit the /etc/syslog.conf file and make the necessary changes, for example :
*.info;mail.none;authpriv.none;cron.none @192.168.0.2
*.debug @loghost
*.debug /var/log/messages
Where loghost is the nickname of the syslog server whose IP is 192.168.0.2. We have to make an entry in the /etc/hosts file :
192.168.0.2 logserv.at-my-site.com logserv loghost
# service syslog restart
Restarts the syslog service, which then starts sending the logs to loghost.
• 126. Squid Proxy Server Configuration Introduction : Two important goals of many small businesses are to: • Reduce Internet bandwidth charges • Limit access to the Web to only authorized users. The Squid web caching proxy server can achieve both these goals fairly easily. We can configure our web browsers to use the Squid proxy server instead of going to the web directly. The Squid server then checks its web cache for the web information requested by the user. It returns any matching information it finds in its cache; if there is none, it goes to the web to find it on behalf of the user. Once it finds the information, it populates its cache with it and also forwards it to the user's web browser.
  • 127. Squid Proxy Server Configuration – cont’d… Starting Squid : Use the chkconfig configure Squid to start at boot : [root@cipa_nic tmp]# chkconfig squid on Use the service command to start, stop, and restart Squid after booting : [root@cipa_nic tmp]# service squid start [root@cipa_nic tmp]# service squid stop [root@cipa_nic tmp]# service squid restart
• 128. Squid Proxy Server Configuration – cont’d…
The /etc/squid/squid.conf File :
The Visible Host Name
Squid will fail to start if we don't give our server a hostname. This can be set with the visible_hostname parameter:
visible_hostname cipa_nic
Access Control Lists (ACL)
We can limit users' ability to browse the Internet with access control lists (ACLs). Each ACL line defines a particular type of activity, such as an access time or source network; it is then linked to an http_access statement that tells Squid whether to deny or allow traffic that matches the ACL. Squid matches each Web access request it receives by checking the http_access list from top to bottom. If it finds a match, it enforces the allow or deny statement and stops reading further.
• 129. Squid Proxy Server Configuration – cont’d…
/etc/squid/squid.conf :
# Add this to the bottom of the ACL Section
acl cipa_network src 192.168.1.0/24
acl business_hours time M T W H F 09:00-17:00
acl SamirHost src 192.168.1.23
# Add this to the top of the http_access Section
http_access deny SamirHost
http_access allow cipa_network business_hours
This allows only business-hour access from the CIPA network, while always restricting access from host 192.168.1.23 (Samir).
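The business_hours ACL boils down to an hour-of-day test (Squid also checks the weekday letters M T W H F, omitted here). A sketch of the same decision for a single request, with the hour hard-coded so the run is deterministic; live code would use $(date +%H):

```shell
hour=14   # hypothetical request hour; live: hour=$(date +%H)
# Mirror the 09:00-17:00 window of the business_hours ACL
if [ "$hour" -ge 9 ] && [ "$hour" -lt 17 ]; then
    decision=allow
else
    decision=deny
fi
echo "squid would: $decision"
```

As in squid.conf itself, the order of tests matters: the deny rule for SamirHost sits above the allow rule, so it wins regardless of the time of day.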
  • 130. Squid Proxy Server Configuration – cont’d… /etc/squid/squid.conf : ….cont’d… • We can allow morning access only • Restrict the access to particular web sites # Add this to the bottom of the ACL Section acl morning time 08:00-12:00 acl DenyHost dst www.restricted.com # Add this to the top of the http_access Section http_access allow morning http_access deny DenyHost
• 131. Squid Proxy Server Configuration – cont’d…
/etc/squid/squid.conf : ….cont’d…
Squid is also capable of reading files containing lists of websites and/or domains for use in an ACL. Let’s create two files (each containing a list of web sites):
1. /home/samir/allowed-sites.squid
www.openfree.org
Linuxhomenetworking.com
www.google.co.in
2. /home/prem/restricted-sites.squid
www.porn.com
illegal.com
www.notallowedsites.com
  • 132. Squid Proxy Server Configuration – cont’d… /etc/squid/squid.conf : ….cont’d… # Add this to the bottom of the ACL section of acl home_network src 192.168.1.0/24 acl business_hours time M T W H F 9:00-17:00 acl GoodSites dstdomain "/home/samir/allowed-sites.squid" acl BadSites dstdomain "/home/prem/restricted-sites.squid" # Add this at the top of the http_access section of squid.conf http_access deny BadSites http_access allow home_network business_hours GoodSites
  • 133. Domain Name Service (DNS) Introduction : • On most modern networks, including the Internet, users locate other computers by name. The most effective way to configure a network to allow such name-based connections is to set up a Domain Name Service (DNS) or a nameserver, which resolves hostnames on the network to numerical addresses and vice versa. • DNS associates hostnames with their respective IP addresses, so that when users want to connect to other machines on the network, they can refer to them by name, without having to remember IP addresses. • DNS is normally implemented using centralized servers that are authoritative for some domains and refer to other DNS servers for other domains.
• 134. DNS – cont’d… DNS Domains Everyone in the world has a first name and a last, or family, name. The same thing is true in the DNS world: a family of Web sites can be loosely described as a domain. For example, the domain indiatimes.com has a number of children, such as in.indiatimes.com, www.indiatimes.com and mail.indiatimes.com for the Web and mail servers, respectively. How DNS Servers find out the site information • There are 13 root authoritative DNS servers (super duper authorities) that all DNS servers query first. These root servers know all the authoritative DNS servers for all the main domains - .com, .net, .mil, .edu and the rest. This layer of servers keeps track of all the DNS servers that Web site systems administrators have assigned for their sub domains. • For example, when we register our domain my-site.com, we are actually inserting a record on the .com DNS servers that points to the authoritative DNS servers we assigned for our domain.
• 135. DNS – cont’d…
Nameserver Types
There are four primary nameserver configuration types:
1. Master : Stores original and authoritative zone records for a namespace, and answers queries about the namespace from other nameservers.
2. Slave : Answers queries from other nameservers concerning namespaces for which it is considered an authority. However, slave nameservers get their namespace information from master nameservers.
3. Caching-only : Offers name-to-IP resolution services, but is not authoritative for any zones. Answers for all resolutions are cached in memory for a fixed period of time, which is specified by the retrieved zone record.
4. Forwarding : Forwards requests to a specific list of nameservers for name resolution. If none of the specified nameservers can perform the resolution, the resolution fails.
A nameserver may be one or more of these types. For example, a nameserver can be a master for some zones, a slave for others, and only offer forwarding resolutions for others.
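These four types map directly onto the type option of a zone statement in /etc/named.conf. A hedged sketch of what each looks like (zone names and addresses here are illustrative placeholders, not from the slides):

```
zone "example.com" {                  // master: holds the authoritative file
    type master;
    file "example.com.hosts";
};
zone "example.net" {                  // slave: transfers the zone from a master
    type slave;
    masters { 192.0.2.1; };
    file "slaves/example.net.hosts";
};
zone "." {                            // caching-only setups keep just root hints
    type hint;
    file "named.ca";
};
zone "corp.example" {                 // forwarding: delegate resolution elsewhere
    type forward;
    forwarders { 192.0.2.53; };
};
```

A single named.conf can mix all of these, which is how one server acts as master for some zones and slave or forwarder for others.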
• 136. DNS – cont’d…
BIND as a Nameserver
• Berkeley Internet Name Domain (BIND) performs name resolution services through the /usr/sbin/named daemon.
• BIND stores its configuration files in the following locations:
/etc/named.conf – The configuration file for the named daemon.
/var/named/ directory – The named working directory, which stores zone, statistic, and cache files.
Note: If you have installed the bind-chroot package, the BIND service will run in the /var/named/chroot environment. All configuration files will be moved there. As such, named.conf will be located in /var/named/chroot/etc/named.conf, and so on.
• 137. DNS – cont’d…
Basic Testing of DNS Resolution
• As we know, DNS resolution maps a fully qualified domain name (FQDN), such as www.google.com, to an IP address. This is also known as a forward lookup. The reverse is also true: by performing a reverse lookup, DNS can determine the fully qualified domain name associated with an IP address.
• Many different Web sites can map to a single IP address, but the reverse isn't true: an IP address can map to only one FQDN. This means that forward and reverse entries frequently don't match.
[root@cipa_nic tmp]# host www.google.com
[root@cipa_nic tmp]# dig www.yahoo.com
[root@cipa_nic tmp]# dig -x 202.86.4.142
[root@cipa_nic tmp]# nslookup www.google.com
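Besides the BIND query tools above, a forward lookup can be exercised through the system resolver library itself, which is the path ordinary applications take. A minimal sketch, using localhost so it works even without network access:

```shell
# Resolve a name via NSS (nsswitch.conf order: files, then DNS).
# "localhost" resolves from /etc/hosts, so this works offline;
# swap in any FQDN to exercise the configured nameserver instead.
getent hosts localhost
```

getent prints the resolved address followed by the canonical name, which is handy for checking that /etc/resolv.conf and /etc/nsswitch.conf are wired up before blaming BIND.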
• 138. DNS – cont’d…
Configuring NameServer
Step-1: Configure /etc/resolv.conf
Make the DNS server refer to itself for all DNS queries:
nameserver 127.0.0.1
Step-2: Creating a /etc/named.conf base configuration file
The /etc/named.conf file contains the main DNS configuration and tells BIND where to find the configuration or zone files for each domain we own. This file usually has two zone areas:
1. Forward zone (to map domains to IP addresses)
2. Reverse zone (to map IP addresses to domains)
We can get the sample named.conf file from /usr/share/doc/bind…./sample/etc/ , copy it to /etc/ and edit it as per our need.
• 139. DNS – cont’d…
Step-3: Creating zone file reference in /etc/named.conf
Zone files contain information about a namespace and are stored in the named working directory (/var/named/) by default. Each zone file is named according to the file option data in the zone statement. We can create as many zones as we need.
options {
    directory "/var/named";
    dump-file "/var/named/data";
    allow-transfer { 192.168.1.200; };   // secondary DNS (slave)
    forward only;
};
zone "jhr.nic" {
    type master;
    file "jhr.nic.hosts";
};
• 140. DNS – cont’d…
Creating zone file
[root@cipa_nic ~]# cd /var/named
[root@cipa_nic named]# vi jhr.nic.hosts
$ttl 604800            ; time to live, measured in seconds
jhr.nic. IN SOA cipamaster.nic.in. samir.cipamaster.nic.in. (
    2007291105         ; serial no. use year+month+day+integer
    1D                 ; refresh time
    1H                 ; retry period
    1W                 ; expire time
    1D )               ; minimum ttl period
Time representation : D (day), W (week), H (hours), no suffix (seconds)
The SOA (Start of Authority) record format :
Name Class Type NameServer Email_Address ( SerialNo Refresh Retry Expiry Minimum-TTL )
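The year+month+day+integer serial convention above can be generated mechanically instead of by hand. A small sketch of the common YYYYMMDDnn variant (the trailing 01 as a per-day revision counter is an assumption of this example, not from the slides):

```shell
# Build a YYYYMMDDnn zone serial: date stamp plus a two-digit
# revision counter; bump the last two digits on each further
# edit made the same day, so slaves always see it increase.
serial="$(date +%Y%m%d)01"
echo "$serial"
```

Because slaves only re-transfer a zone when the master's serial is higher, forgetting to bump this number is the classic reason a zone edit never reaches the secondary server.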
• 141. DNS – cont’d…
$ttl 1W                ; time to live, measured in seconds
jhr.nic. IN SOA @ deepak.cipamaster.nic.in. (
    2007291106         ; serial no. use year+month+day+integer
    1D                 ; refresh time
    1H                 ; retry period
    1W                 ; expire time
    1D )               ; minimum ttl period
jhr.nic. IN NS localhost
localhost.jhr.nic. IN A 192.168.1.2
www.jhr.nic. IN A 192.168.1.3
ftp.jhr.nic. IN A 192.168.1.4
mail.jhr.nic. IN A 192.168.1.5
www.jhr.nic. IN A 192.168.1.5
deepak.jhr.nic. IN A 192.168.1.6
parishesh.jhr.nic. IN A 192.168.1.7
cipaslave.jhr.nic. IN A 192.168.1.200
DNS Resource Records : Name class type data
IN – Internet, A – forward lookup (address), PTR – reverse lookup (pointer)
NS – Name Server, MX – mail exchange, CNAME – alias
@ – shorthand for the zone origin (the domain named in the zone statement)
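The slides show only the forward zone; the matching reverse zone mentioned in Step-2 needs its own zone statement and PTR file. A hedged sketch for 192.168.1.0/24, mirroring a few of the A records above (the file name 192.168.1.rev and the record selection are illustrative):

```
// in /etc/named.conf
zone "1.168.192.in-addr.arpa" {
    type master;
    file "192.168.1.rev";
};

; in /var/named/192.168.1.rev
$ttl 1W
1.168.192.in-addr.arpa. IN SOA cipamaster.nic.in. deepak.cipamaster.nic.in. (
    2007291106 ; serial
    1D         ; refresh
    1H         ; retry
    1W         ; expire
    1D )       ; minimum ttl
1.168.192.in-addr.arpa. IN NS localhost.
2 IN PTR localhost.jhr.nic.
3 IN PTR www.jhr.nic.
5 IN PTR mail.jhr.nic.
```

In the reverse zone the leading number of each PTR record is the last octet of the host's address, and each address points back to exactly one FQDN, which is why dig -x returns a single name.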
• 142. DNS – cont’d…
; Continue …
cipa_boys IN A 192.168.1.10
www.jhr.nic. IN CNAME cipa_boys
ftp.jhr.nic. IN CNAME cipa_boys
nfs.jhr.nic. IN CNAME cipa_boys
Creating Secondary (Slave) Server
options {
    directory "/var/named";
    dump-file "/var/named/data";
    allow-transfer { 192.168.1.200; };   // secondary DNS (slave)
    forward only;
};
zone "jhr.nic" {
    type slave;
    masters { 192.168.1.2; };
    file "slaves/jhr.nic.hosts";
};
• 143. DNS – cont’d…
Now we have to restart the named service and check whether it is functioning properly or not:
[root@cipa_nic tmp]# service named restart
[root@cipa_nic tmp]# named-checkconf
[root@cipa_nic tmp]# dig www.jhr.nic
[root@cipa_nic tmp]# dig mail.jhr.nic