Part 1: Installing puppet 2.6.1 on CentOS with YUM/RPM


Installing Puppetmaster 2.6.1

Assuming that, like me, you find the thought of letting rubygems vomit all over your filesystem unpleasant, it isn't very clear how to get the latest Puppet 2.6.1 installed on CentOS 5.5 with yum. Things may differ on other people's systems, but the steps below worked for me.


Set up yum repositories.

Do this on both the client and the server

Add the following files and save them to /etc/yum.repos.d/


puppet.repo
[puppetlabs]
name=Puppet Labs Packages
baseurl=http://yum.puppetlabs.com/base/
enabled=0
gpgcheck=0


epel.repo
[epel]
name=Extra Packages for Enterprise Linux 5 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/5/$basearch
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=$basearch
failovermethod=priority
enabled=0
gpgcheck=0
 
 
[epel-puppet]
name=epel puppet
baseurl=http://tmz.fedorapeople.org/repo/puppet/epel/5/$basearch/
enabled=0
gpgcheck=0


ruby.repo
[ruby]
name=ruby
baseurl=http://repo.premiumhelp.eu/ruby/
gpgcheck=0
enabled=0


Note that we include the ruby and puppetlabs repos because the next steps in this tutorial will be to configure Puppet and install puppet-dashboard. We want to upgrade to Ruby 1.8.6 in order to run puppet-dashboard; doing this now will save you some pain down the line.
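If you are scripting the setup, the same repo files can be dropped in place with a heredoc rather than an editor. A minimal sketch for puppet.repo (REPO_DIR is a stand-in variable so this can be dry-run anywhere; on a real system point it at /etc/yum.repos.d as root):

```shell
# Sketch only: REPO_DIR defaults to a scratch directory for a dry run;
# point it at /etc/yum.repos.d (as root) to install the file for real.
REPO_DIR="${REPO_DIR:-$(mktemp -d)}"

cat > "$REPO_DIR/puppet.repo" <<'EOF'
[puppetlabs]
name=Puppet Labs Packages
baseurl=http://yum.puppetlabs.com/base/
enabled=0
gpgcheck=0
EOF
```

The other repo files can be written the same way.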

Upgrade Ruby to 1.8.6

Do this on both the client and the server

As mentioned above, use the ruby repo to upgrade.

punch# yum --enablerepo="ruby" update ruby
[...]
==============================================================
 Package            Arch          Version               Repository     Size
==============================================================
Updating:
 ruby               i686          1.8.6.111-1           ruby          525 k
Updating for dependencies:
 ruby-libs          i686          1.8.6.111-1           ruby          2.6 M
 
Transaction Summary
===============================================================
Install       0 Package(s)
Upgrade       2 Package(s)
 
Total download size: 3.1 M
Is this ok [y/N]: y
[...]

Install Puppet Server

On your puppetmaster server:
punch# yum --enablerepo=epel,epel-puppet install puppet-server
 
[...]
Installing:
 puppet-server        noarch      2.6.1-0.3.rc3.el5       epel-puppet       20 k
Installing for dependencies:
 facter               noarch      1.5.8-0.2.rc2.el5       epel-puppet       55 k
 libselinux-ruby      i386        1.33.4-5.5.el5          base              60 k
 puppet               noarch      2.6.1-0.3.rc3.el5       epel-puppet      818 k
 ruby-augeas          i386        0.3.0-1.el5             epel              19 k
 ruby-shadow          i386        1.4.1-7.el5             epel             9.5 k
 
Install       6 Package(s)
Upgrade       0 Package(s)
 
Total download size: 981 k
Is this ok [y/N]: y
[...]


On your puppet client
judy# yum --enablerepo="epel,epel-puppet" install puppet
 
[...]
Installing:
 puppet            noarch   2.6.1-0.3.rc3.el5      epel-fedora   818 k
Installing for dependencies:
 facter            noarch   1.5.8-0.2.rc2.el5      epel-fedora    55 k
 libselinux-ruby   i386     1.33.4-5.5.el5         base           60 k
 ruby-augeas       i386     0.3.0-1.el5            epel           19 k
 ruby-shadow       i386     1.4.1-7.el5            epel          9.5 k
Install       5 Package(s)
Upgrade       0 Package(s)
 
Total download size: 961 k
Is this ok [y/N]: y

That's it. In Parts 2 and 3 we will configure the client and server and install Dashboard.


Part 2: Puppet 2.6.1, configure puppetmaster and puppetd

Configure Puppetmaster

For installing puppetmaster 2.6.1 on CentOS please click here for Part 1


In Part 1 we covered installing the puppetmaster and puppetd packages on CentOS 5.5. We will now configure a very basic client/server model to serve the /etc/resolv.conf file to our client. Simple enough!

Create your first module

Our first module will be called networking::resolver; its job will be to push out a resolv.conf file to clients.


Create the directory structure under /etc/puppet
punch# cd /etc/puppet
punch# mkdir modules
punch# mkdir modules/networking
punch# mkdir modules/networking/files
punch# mkdir modules/networking/manifests
punch# mkdir files
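The directory creation above can be collapsed into a single command with mkdir -p, which creates any missing parent directories. A sketch (PUPPET_DIR is a stand-in for /etc/puppet so it can be dry-run anywhere):

```shell
# mkdir -p creates intermediate directories, so one command builds the tree.
# PUPPET_DIR stands in for /etc/puppet; it defaults to a scratch directory.
PUPPET_DIR="${PUPPET_DIR:-$(mktemp -d)}"
mkdir -p "$PUPPET_DIR/modules/networking/files" \
         "$PUPPET_DIR/modules/networking/manifests" \
         "$PUPPET_DIR/files"
```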

Create your resolv.conf file
punch# vi modules/networking/files/resolv.conf
Create your module manifest
punch# vi modules/networking/manifests/init.pp
class networking {
    # Here you can add stuff to be inherited by your networking classes
    # We won't use it in this demonstration; it's just here for show!
}
 
class networking::resolver inherits networking { 
          file { "/etc/resolv.conf": 
              ensure => present,
              source => "puppet:///modules/networking/resolv.conf",
              group  => "root",
              owner  => "root",
              mode   => "0644"
          }
}

Configure your site and nodes

Create a minimal site.pp
punch# vi manifests/site.pp
import "nodes"
import "templates"
 
filebucket { main: server => puppet }


Create a templates file
punch# vi manifests/templates.pp
class baseclass { 
        include networking::resolver
}
 
node default { 
        include baseclass
}

Create your node file


Don't forget to replace judy.craigdunn.org with the FQDN of your client
punch# vi manifests/nodes.pp
node 'basenode' { 
  include baseclass
}
 
node 'judy.craigdunn.org' inherits basenode { 
}

Set up puppetmaster parameters



Create default configuration


This is a minimal puppet.conf file; a more detailed file can be produced with puppetmasterd --genconfig


The autosign option will automatically sign certs for new clients; this is discouraged in a production environment but useful for testing. For information on running puppetmaster without autosign see the puppetca documentation.
punch# vi puppet.conf
[main]
    # The Puppet log directory.
    # The default value is '$vardir/log'.
    logdir = /var/log/puppet
 
    # Where Puppet PID files are kept.
    # The default value is '$vardir/run'.
    rundir = /var/run/puppet
 
    # Where SSL certificates are kept.
    # The default value is '$confdir/ssl'.
    ssldir = $vardir/ssl
 
[agent]
    # The file in which puppetd stores a list of the classes
    # associated with the retrieved configuration.  Can be loaded in
    # the separate ``puppet`` executable using the ``--loadclasses``
    # option.
    # The default value is '$confdir/classes.txt'.
    classfile = $vardir/classes.txt
 
    # Where puppetd caches the local configuration.  An
    # extension indicating the cache format is added automatically.
    # The default value is '$confdir/localconfig'.
    localconfig = $vardir/localconfig
    report = true
 
[master]
    autosign = true

Set permissions for your fileserver.

Note that this allows everything, you should restrict this in a production environment.
punch# vi fileserver.conf
[files]
  path /etc/puppet/files
  allow *
 
[modules]
  allow *
 
[plugins]
  allow *

Start puppetmaster
punch# service puppetmaster start
Starting puppetmaster:                                     [  OK  ]


The puppet client



Configure puppetd
On your client, edit puppet.conf and add the following in the [agent] section, remembering to change punch.craigdunn.org to the fqdn of your Puppetmaster.
judy# vi /etc/puppet/puppet.conf
[agent]
    server = punch.craigdunn.org
    report = true
    listen = true

Allow puppetrunner


Create a file called namespaceauth.conf and add the following. Note that in a production environment this should be restricted to the FQDN of your puppet master.
judy# vi /etc/puppet/namespaceauth.conf
[puppetrunner]
allow *

Start puppetd
judy# service puppet start

View pending changes


Use --test along with --noop to do a dry run and view the changes that puppetd would make
judy# puppetd --noop --test
[...]
notice: /Stage[main]/Networking::Resolver/File[/etc/resolv.conf]/content: is 
{md5}e71a913327efa3ec8dae8c1a6df09b43, should be {md5}24b6444365e7e012e8fdc5f302b56e9c (noop)
[...]
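The noop output above compares MD5 checksums of the file currently on disk ("is") and the catalog's version ("should be"). You can reproduce the "is" side yourself with md5sum; a sketch against a scratch file (the path and contents here are made up for illustration):

```shell
# Compute the checksum puppet reports as the "is {md5}..." value for a file.
# The scratch file and its contents are hypothetical.
printf 'nameserver 192.168.0.1\n' > /tmp/resolv.conf.demo
md5sum /tmp/resolv.conf.demo | awk '{print $1}'   # prints a 32-char hex digest
```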


Now you can run puppetd without --noop to pull in your new resolv.conf file


This is a very basic demonstration of creating a server/client pair with Puppet. There is much more documentation on configuring and managing Puppet here.


Part 3: Installing puppet-dashboard on CentOS / Puppet 2.6.1

Puppet Dashboard

Puppet Dashboard is a fairly new app with loads of future potential and is great for monitoring your Puppet estate. This is a quick guide to getting it running with Puppet 2.6.1. Be sure you have the correct yum repos and Ruby versions installed; see Part 1 and Part 2 for details.



Install the puppet-dashboard package.

punch# yum --enablerepo=puppetlabs,ruby,epel install puppet-dashboard
[...]
Installing for dependencies:
 mysql                        i386               5.0.77-4.el5_5.3            
 ruby-irb                     i686               1.8.6.111-1                 
 ruby-mysql                   i686               2.7.4-1                     
 ruby-rdoc                    i686               1.8.6.111-1                 
 rubygem-rake                 noarch             0.8.7-2.el5                 
 rubygems                     noarch             1.3.1-1.el5                 
Install       7 Package(s)
Upgrade       0 Package(s)
 
Total download size: 11 M
Is this ok [y/N]: y
[...]

Create a MySQL database for puppet-dashboard

Create a database for puppet-dashboard to use and set up a user with all privileges on it. This can be done on a separate host.
mysql> CREATE DATABASE puppetdash;
Query OK, 1 row affected (0.00 sec)
 
mysql> GRANT ALL PRIVILEGES ON puppetdash.* TO puppet@'%' IDENTIFIED BY 'punchandjudy';
Query OK, 0 rows affected (0.00 sec)

Configure database.yml

cd /usr/share/puppet-dashboard
vi config/database.yml
Add your database parameters to the development section. Note that host: can be omitted if you are using local sockets to connect to MySQL.
development:
  host: professor.craigdunn.org
  database: puppetdash
  username: puppet
  password: punchandjudy
  encoding: utf8
  adapter: mysql

Migrate the database
punch# rake RAILS_ENV=development db:migrate
[...]
(in /usr/share/puppet-dashboard)
==  BasicSchema: migrating ====================================================
-- create_table(:assignments, {:force=>true})
   -> 0.0072s
-- create_table(:nodes, {:force=>true})
   -> 0.0030s
-- create_table(:services, {:force=>true})
   -> 0.0026s
==  BasicSchema: migrated (0.0132s) ===========================================
[...]

Copy reports module to site_ruby



I hate doing this but puppetmasterd explicitly looks for reports in puppet/reports and so far I haven’t found a clean workaround to tell it to look in /usr/share/puppet-dashboard for it. If anyone knows of a way, please email me.
punch# cp /usr/share/puppet-dashboard/ext/puppet/puppet_dashboard.rb /usr/lib/ruby/site_ruby/1.8/puppet/reports

Edit your puppet.conf

Include the following in the [master] section, changing punch.craigdunn.org to your Puppet server's FQDN.
[master]
reports = puppet_dashboard,store
reportdir = /var/lib/puppet/reports
reporturl = http://punch.craigdunn.org:3000/reports

Restart puppetmaster and start puppet-dashboard

punch# service puppetmaster restart
Stopping puppetmaster:                                     [  OK  ]
Starting puppetmaster:                                      [  OK  ]
punch# service puppet-dashboard start
Starting puppet-dashboard:                                 [  OK  ]

Test web GUI

Go to the following link in your browser (replacing the hostname with your fqdn)
http://punch.craigdunn.org:3000/

Configure the client

Edit puppet.conf

Make sure the following things are set in the [agent] section of puppet.conf on your client node.
judy# vi /etc/puppet/puppet.conf
[agent]
    report = true


Run puppet in noop mode on the client
judy# puppetd --noop --test

Refresh browser

If all has gone well, you should now see your reports in puppet dashboard for your client node.

Linux's directory structure - 1.2

As you may have noticed, Linux organizes its files differently from Windows. At first the directory structure may seem illogical and strange, and you have no idea where all the programs, icons, config files, and other items are. This tuXfile will take you on a guided tour through the Linux file system. This is by no means a complete list of all the directories on Linux, but it shows you the most interesting places in your file system.


< / >

The root directory. The starting point of your directory structure. This is where the Linux system begins. Every other file and directory on your system is under the root directory. Usually the root directory contains only subdirectories, so it's a bad idea to store single files directly under root.
Don't confuse the root directory with the root user account, root password (which obviously is the root user's password) or root user's home directory.

< /boot >

As the name suggests, this is the place where Linux keeps information that it needs when booting up. For example, this is where the Linux kernel is kept. If you list the contents of /boot, you'll see a file called vmlinuz - that's the kernel.

< /etc >

The configuration files for the Linux system. Most of these files are text files and can be edited by hand. Some interesting stuff in this directory:
/etc/inittab
A text file that describes what processes are started at system bootup and during normal operation. For example, here you can determine if you want the X Window System to start automatically at bootup, and configure what happens when a user presses Ctrl+Alt+Del.
/etc/fstab
This file contains descriptive information about the various file systems and their mount points, like floppies, cdroms, and so on.
/etc/passwd
A file that contains various pieces of information for each user account. This is where the users are defined.
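Each line in /etc/passwd is seven colon-separated fields: name, password placeholder, UID, GID, comment, home directory, and shell. A quick sketch pulling fields out of a sample entry with awk ("alice" is a made-up user, not one from your system):

```shell
# Fields in a passwd entry: name:password:UID:GID:comment:home:shell
# The sample line below is hypothetical.
line='alice:x:1000:1000:Alice Example:/home/alice:/bin/bash'
echo "$line" | awk -F: '{print "user=" $1, "uid=" $3, "shell=" $7}'
# -> user=alice uid=1000 shell=/bin/bash
```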

< /bin, /usr/bin >

These two directories contain a lot of programs (binaries, hence the directory's name) for the system. The /bin directory contains the most important programs that the system needs to operate, such as the shells, ls, grep, and other essential things. /usr/bin in turn contains applications for the system's users. However, in some cases it really doesn't make much difference if you put the program in /bin or /usr/bin.

< /sbin, /usr/sbin >

Most system administration programs are stored in these directories. In many cases you must run these programs as the root user.

< /usr >

This directory contains user applications and a variety of other things for them, such as their source code, pictures, docs, and config files. /usr is the largest directory on a Linux system, and some people like to have it on a separate partition. Some interesting stuff in /usr:
/usr/doc
Documentation for the user apps, in many file formats.
/usr/share
Config files and graphics for many user apps.
/usr/src
Source code files for the system's software, including the Linux kernel.
/usr/include
Header files for the C compiler. The header files define structures and constants that are needed for building most standard programs. A subdirectory under /usr/include contains headers for the C++ compiler.
/usr/X11R6
The X Window System and things for it. The subdirectories under /usr/X11R6 may contain some X binaries themselves, as well as documentation, header files, config files, icons, sounds, and other things related to the graphical programs.

< /usr/local >

This is where you install apps and other files for use on the local machine. If your machine is a part of a network, the /usr directory may physically be on another machine and can be shared by many networked Linux workstations. On this kind of a network, the /usr/local directory contains only stuff that is not supposed to be used on many machines and is intended for use at the local machine only.
Most likely your machine isn't a part of a network like this, but it doesn't mean that /usr/local is useless. If you find interesting apps that aren't officially a part of your distro, you should install them in /usr/local. For example, if the app would normally go to /usr/bin but it isn't a part of your distro, you should install it in /usr/local/bin instead. When you keep your own programs away from the programs that are included in your distro, you'll avoid confusion and keep things nice and clean.

< /lib >

The shared libraries for programs that are dynamically linked. The shared libraries are similar to DLLs on Windows.

< /home >

This is where users keep their personal files. Every user has their own directory under /home, and usually it's the only place where normal users are allowed to write files. You can configure a Linux system so that normal users can't even list the contents of other users' home directories. This means that if your family members have their own user accounts on your Linux system, they won't see all the w4r3z you keep in your home directory. ;-)

< /root >

The superuser's (root's) home directory. Don't confuse this with the root directory (/) of a Linux system.

< /var >

This directory contains variable data that changes constantly when the system is running. Some interesting subdirectories:
/var/log
A directory that contains system log files. They're updated when the system runs, and checking them out can give you valuable info about the health of your system. If something in your system suddenly goes wrong, the log files may contain some info about the situation.
/var/mail
Incoming and outgoing mail is stored in this directory.
/var/spool
This directory holds files that are queued for some process, like printing.

< /tmp >

Programs can write their temporary files here.

< /dev >

The devices that are available to a Linux system. Remember that in Linux, devices are treated like files and you can read and write devices like they were files. For example, /dev/fd0 is your first floppy drive, /dev/cdrom is your CD drive, /dev/hda is the first IDE hard drive, and so on. All the devices that a Linux kernel can understand are located under /dev, and that's why it contains hundreds of entries.

< /mnt >

This directory is used for mount points. The different physical storage devices (like the hard disk drives, floppies, CD-ROM's) must be attached to some directory in the file system tree before they can be accessed. This attaching is called mounting, and the directory where the device is attached is called the mount point.
The /mnt directory contains mount points for different devices, like /mnt/floppy for the floppy drive, /mnt/cdrom for the CD-ROM, and so on. However, you're not forced to use the /mnt directory for this purpose; you can use whatever directory you wish. Actually, in some distros, like Debian and SuSE, the default is to use /floppy and /cdrom as mount points instead of directories under /mnt.

< /proc >

This is a special directory. Well, actually /proc is just a virtual directory, because it doesn't exist at all! It contains some info about the kernel itself. There's a bunch of numbered entries that correspond to all processes running on the system, and there are also named entries that permit access to the current configuration of the system. Many of these entries can be viewed.
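On a running Linux system you can see those numbered entries directly; each numbered directory name is the PID of a running process, and your own shell's PID is always among them:

```shell
# Every numbered directory in /proc corresponds to a running process (PID).
# Assumes a Linux system with /proc mounted.
ls /proc | grep -E '^[0-9]+$' | sort -n | head -3
# $$ is the current shell's PID, so its /proc entry must exist:
[ -d "/proc/$$" ] && echo "found my own process entry"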

< /lost+found >

Here Linux keeps the files that it restores after a system crash or when a partition hasn't been unmounted before a system shutdown. This way you can recover files that would otherwise have been lost.

< What next? >

If you're completely new to Linux, you might want to learn some commands for moving around in the file system, viewing text files, or manipulating the files. In that case I suggest you take a look at the set of tuXfiles in the Introduction to the Linux command line section.

OSI Model: Layers

Network Layer

The network layer manages device addressing. It defines protocols for opening and maintaining a network path between systems, and it also manages data transmission and switching procedures. Routers operate at the network layer. The network layer looks at packet addresses to determine routing methods: if a packet is addressed to a system on the local network, it is sent directly there; if it is addressed to a system on another segment, the packet is sent to the router, which forwards it to the desired network.

Data-Link Layer

The data link layer provides the rules for sending and receiving information across the physical connection between two systems. This layer provides error detection and control. Because this layer provides error control, higher layers do not need to handle such services. Switches and Bridges operate at this layer.

Physical Layer

Physical layer sends and receives the bits. This layer defines the physical characteristics of the medium such as connectors, electrical characteristics such as voltage levels, and functional aspects such as setting up and maintaining the physical link. Well-known physical layer interfaces for local area networks (LANs) include Ethernet, Token-Ring, and Fiber Distributed Data Interface (FDDI). Hubs and Repeaters work at this layer.

Presentation Layer

The presentation layer protocols are part of the user’s operating system and applications. In this layer information is formatted for display or printing. Tasks like interpretation of codes within the data (such as tabs or special graphics sequences), data compression, decompression, encryption and the translation of other character sets are performed here.

Session Layer

The session layer sets up, manages, and then tears down sessions between presentation layer entities. This layer coordinates communication between nodes and offers three different modes of communication: simplex, half duplex, and full duplex.

Transport Layer

This layer breaks large blocks of data into segments and reassembles them into a data stream. It provides a high level of control for moving information between systems, including prioritization, more sophisticated error handling, and security features. It controls packet sequence, regulates traffic, and finds duplicate packets. If data is missing from a packet, the receiving end's transport layer protocol asks the sending end's transport layer protocol to retransmit. This layer ensures that all data is received completely and in the proper order.

The OSI model is divided into seven layers, arranged in two groups. The top three layers define how applications in the computers communicate with each other and with users. The bottom four layers define how data is transmitted from one end to the other.

Top 3 layers (application-oriented):
  Application
  Presentation
  Session

Bottom 4 layers (data transport):
  Transport
  Network
  Data Link
  Physical

The OSI reference model = Top 3 Layers + Bottom 4 Layers

Application Layer


It is the layer where users actually communicate with the computer system. Applications access network services using procedures defined in this layer. The application layer defines the applications that handle file transfers, network management, terminal sessions, message exchange, etc.

What is the difference between TCP and UDP ?

Overview

TCP (Transmission Control Protocol) is the most commonly used protocol on the Internet, largely because TCP offers error correction. When the TCP protocol is used there is "guaranteed delivery," due largely to a method called "flow control." Flow control determines when data needs to be re-sent and stops the flow of data until previous packets have been successfully transferred. If a packet is lost in transit (for example, because of a collision), the client re-requests it from the server until the whole packet is received and is identical to the original.

UDP (User Datagram Protocol) is another commonly used protocol on the Internet. However, UDP is never used to send important data such as web pages or database information; UDP is commonly used for streaming audio and video. Streaming media such as Windows Media audio files (.WMA), Real Player (.RM), and others use UDP because it offers speed! The reason UDP is faster than TCP is that there is no flow control or error correction. Data sent over the Internet is affected by collisions, and errors will be present. Remember that UDP is only concerned with speed. This is the main reason why streaming media is not high quality.
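The connection-oriented vs. connectionless difference is easy to observe with bash's /dev/tcp and /dev/udp pseudo-files: a TCP connect to a closed port fails immediately (the handshake is refused), while a UDP "send" to the same closed port succeeds, because nothing on the sending side checks delivery. A sketch (assumes bash is available and that local port 9 has nothing listening):

```shell
# TCP performs a real connection handshake, so a closed port is an error.
bash -c 'echo probe > /dev/tcp/127.0.0.1/9' 2>/dev/null \
    && echo "tcp: connected" || echo "tcp: connection refused"
# UDP just fires the datagram; the sender gets no delivery feedback.
bash -c 'echo probe > /dev/udp/127.0.0.1/9' 2>/dev/null \
    && echo "udp: sent, delivery unknown" || echo "udp: error"
```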





UDP has also been implemented in some trojan horses. Hackers develop scripts and trojans to run over UDP in order to mask their activities. UDP packets are also used in DoS (Denial of Service) attacks. It is important to know the difference between TCP port 80 and UDP port 80. If you don't know what ports are, go here.

Frame Structure

As data moves along a network, various attributes are added to it to create a frame. This process is called encapsulation. There are different methods of encapsulation depending on which protocol and topology are being used, and as a result the frame structures of these packets differ as well. The images below show both the TCP and UDP frame structures.

TCP FRAME STRUCTURE

UDP FRAME STRUCTURE



The payload field contains the actual data. Notice that TCP has a more complex frame structure; this is largely due to the fact that TCP is a connection-oriented protocol. The extra fields are needed to ensure the "guaranteed delivery" offered by TCP.

What are the differences between GRUB and LILO?


LILO (LInux LOader)
LILO stores information about the location of the kernel or other operating system on the Master Boot Record (MBR).


GNU GRUB (GRand Unified Boot loader)
GRUB has a more powerful, interactive command-line interface. If its configuration is broken, GRUB will default to that command-line interface, where the user can boot the system manually. GRUB may have difficulties booting certain hardware. LILO and GRUB do have a number of differences:

* LILO has no interactive command interface, whereas GRUB does.
* LILO does not support booting from a network, whereas GRUB does.
* LILO stores information regarding the location of the operating systems it can load physically on the MBR. If you change your LILO config file, you have to rewrite the LILO stage one boot loader to the MBR. Compared with GRUB, this is a much riskier option, since a misconfigured MBR could leave the system unbootable. With GRUB, if the configuration file is configured incorrectly, it will simply default to the GRUB command-line interface.

Differences Between Multicast and Unicast

Unicast

Unicast is a one-to-one connection between the client and the server. Unicast uses IP delivery methods such as Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), which are session-based protocols. When a Windows Media Player client connects using unicast to a Windows Media server, that client has a direct relationship to the server. Each unicast client that connects to the server takes up additional bandwidth. For example, if you have 10 clients all playing 100-kilobits per second (Kbps) streams, those clients as a group are taking up 1,000 Kbps. If you have only one client playing the 100 Kbps stream, only 100 Kbps is being used.
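The arithmetic generalizes: aggregate unicast bandwidth is simply the number of clients times the per-stream bitrate. A one-line sketch reproducing the example above:

```shell
# Aggregate unicast bandwidth = number of clients * per-stream bitrate.
clients=10
stream_kbps=100
echo "total: $((clients * stream_kbps)) Kbps"   # -> total: 1000 Kbps
```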

Multicast

Multicast is a true broadcast. The multicast source relies on multicast-enabled routers to forward the packets to all client subnets that have clients listening. There is no direct relationship between the clients and the Windows Media server. The Windows Media server generates an .nsc (NetShow channel) file when the multicast station is first created. Typically, the .nsc file is delivered to the client from a Web server. This file contains information that the Windows Media Player needs to listen for the multicast. This is similar to tuning into a station on a radio. Each client that listens to the multicast adds no additional overhead on the server. In fact, the server sends out only one stream per multicast station. The same load is experienced on the server whether one client or 1,000 clients are listening.

Important: Multicast on the Internet is generally not practical because only small sections of the Internet are multicast-enabled. Multicast in corporate environments where all routers are multicast-enabled can save quite a bit of bandwidth.

6 Stages of Linux Boot Process (Startup Sequence)

Press the power button on your system, and after a few moments you see the Linux login prompt.
Have you ever wondered what happens behind the scenes from the time you press the power button until the Linux login prompt appears?
The following are the 6 high level stages of a typical Linux boot process.


1. BIOS

  • BIOS stands for Basic Input/Output System
  • Performs some system integrity checks
  • Searches, loads, and executes the boot loader program.
  • It looks for the boot loader on floppy, CD-ROM, or hard drive. You can press a key (typically F12 or F2, but it depends on your system) during the BIOS startup to change the boot sequence.
  • Once the boot loader program is detected and loaded into memory, BIOS gives control to it.
  • So, in simple terms BIOS loads and executes the MBR boot loader.

2. MBR

  • MBR stands for Master Boot Record.
  • It is located in the first sector of the bootable disk, typically /dev/hda or /dev/sda.
  • The MBR is 512 bytes in size and has three components: 1) primary boot loader info in the first 446 bytes, 2) partition table info in the next 64 bytes, 3) MBR validation check (boot signature) in the last 2 bytes.
  • It contains information about GRUB (or LILO in old systems).
  • So, in simple terms MBR loads and executes the GRUB boot loader.
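The three regions described above can be carved out of any 512-byte MBR image with dd. A sketch against a zero-filled scratch file (on a live system you would read from a real device such as /dev/sda instead; the scratch file here is just a stand-in):

```shell
# Split a 512-byte MBR image into boot code (446 B), partition table (64 B),
# and signature (2 B). A zero-filled scratch file stands in for a real disk.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
dd if="$img" bs=1 count=446          of="$img.code" 2>/dev/null
dd if="$img" bs=1 skip=446 count=64  of="$img.ptab" 2>/dev/null
dd if="$img" bs=1 skip=510 count=2   of="$img.sig"  2>/dev/null
```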

3. GRUB

  • GRUB stands for Grand Unified Bootloader.
  • If you have multiple kernel images installed on your system, you can choose which one to be executed.
  • GRUB displays a splash screen, waits a few seconds, and if you don't enter anything, loads the default kernel image as specified in the GRUB configuration file.
  • GRUB has knowledge of the filesystem (the older Linux loader LILO didn't understand filesystems).
  • The GRUB configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to it). The following is a sample grub.conf from CentOS:

    #boot=/dev/sda
    default=0
    timeout=5
    splashimage=(hd0,0)/boot/grub/splash.xpm.gz
    hiddenmenu
    title CentOS (2.6.18-194.el5PAE)
              root (hd0,0)
              kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/
              initrd /boot/initrd-2.6.18-194.el5PAE.img
  • As you notice from the above info, it contains kernel and initrd image.
  • So, in simple terms GRUB just loads and executes Kernel and initrd images.

4. Kernel

  • Mounts the root file system as specified by “root=” in grub.conf
  • Kernel executes the /sbin/init program
  • Since init was the 1st program to be executed by Linux Kernel, it has the process id (PID) of 1. Do a ‘ps -ef | grep init’ and check the pid.
  • initrd stands for Initial RAM Disk.
  • initrd is used by kernel as temporary root file system until kernel is booted and the real root file system is mounted. It also contains necessary drivers compiled inside, which helps it to access the hard drive partitions, and other hardware.

5. Init

  • Looks at the /etc/inittab file to decide the Linux run level.
  • Following are the available run levels
    • 0 – halt
    • 1 – Single user mode
    • 2 – Multiuser, without NFS
    • 3 – Full multiuser mode
    • 4 – unused
    • 5 – X11
    • 6 – reboot
  • Init identifies the default run level from /etc/inittab and uses that to load all the appropriate programs.
  • Execute ‘grep initdefault /etc/inittab’ on your system to identify the default run level.
  • If you want to get into trouble, you can set the default run level to 0 or 6. Since you know what 0 and 6 means, probably you might not do that.
  • Typically you would set the default run level to either 3 or 5.

6. Runlevel programs

  • When the Linux system is booting up, you might see various services getting started. For example, it might say “starting sendmail …. OK”. Those are the runlevel programs, executed from the run level directory as defined by your run level.
  • Depending on your default init level setting, the system will execute the programs from one of the following directories.
    • Run level 0 – /etc/rc.d/rc0.d/
    • Run level 1 – /etc/rc.d/rc1.d/
    • Run level 2 – /etc/rc.d/rc2.d/
    • Run level 3 – /etc/rc.d/rc3.d/
    • Run level 4 – /etc/rc.d/rc4.d/
    • Run level 5 – /etc/rc.d/rc5.d/
    • Run level 6 – /etc/rc.d/rc6.d/
  • Please note that there are also symbolic links for these directories directly under /etc. So, /etc/rc0.d is linked to /etc/rc.d/rc0.d.
  • Under the /etc/rc.d/rc*.d/ directories, you would see programs that start with S and K.
  • Programs starting with S are run during startup. S for startup.
  • Programs starting with K are run during shutdown. K for kill.
  • The numbers right next to S and K in the program names are the sequence in which the programs should be started or killed.
  • For example, S12syslog starts the syslog daemon and has the sequence number 12; S80sendmail starts the sendmail daemon and has the sequence number 80. So, syslog will be started before sendmail.
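Since init simply runs these links in lexicographic order, the numbering behavior is easy to demonstrate with a plain sort (the service names below are just examples):

```shell
# S-scripts execute in lexicographic order, so the two-digit number
# after the S decides the startup sequence.
printf '%s\n' S80sendmail S12syslog S55sshd | sort
```

This prints S12syslog, S55sshd, S80sendmail: syslog first, sendmail last.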
There you have it. That is what happens during the Linux boot process.

AMANDA: The 15-Minute Backup Solution

Secure Network Backups in a Heterogeneous Environment in the Time it Takes to Have Pizza Delivered (All Using Open Source Software!)

This setup was performed using Amanda 2.5.1p2. To learn how to set up:
    - the latest version of Amanda 3.x with the new configuration tools
    - the new Volume Shadow Copy Service (VSS) based Zmanda Windows Client Community Edition
please register on the Zmanda Network and read the “Setting-Up an Open Source Backup Software Amanda Community in About 15 Minutes” white paper, available in the Resources section.
Today’s businesses rarely run on just one operating system. Linux users and administrators often have strong preferences for one distribution over another; web designers might lean towards the Mac; legacy software and hardware can include various UNIX operating systems. Despite the complexity of modern business computing environments, a system administrator is expected to find a reliable backup solution.
Even where users are expected to keep important files on networked resources, desktop machines and laptops must also be backed up for true data security. The price of hard disk storage is continuously falling, bringing terabytes of storage within reach and increasing the amount of data that can potentially be lost. (As the golden rule states: the amount of data you have will always expand to fit the storage available.) We live in a global, e-commerce economy, where businesses run around the clock and crucial business data changes commensurately.

The Challenge

For our 15-minute challenge, you will back up two Linux systems (each running a different Linux distribution) and one Windows system, using freely downloadable open source software.

Our scenario is as follows:
The user "pavel" works with sensitive information. We need to make an encrypted backup of his home directory, /home/pavel, which resides on a Fedora Core Linux system called Iron. Our webmaster needs the webserver's document home backed up, the /var/www/html directory on a SUSE Enterprise Linux system called Copper. Our manager works solely on a Windows XP system called Uranium, and keeps all of his work in the MyDocuments folder, so we will need to add //Uranium/MyDocuments to our backup configuration.

The Solution: Amanda

Amanda is open source backup software that is flexible, secure and scalable to dynamic computing environments. Amanda can save you from expensive proprietary backup software and those custom backup scripts that have a propensity to break at the worst times. Dating back to 1991, Amanda has been used successfully in environments from one standalone machine to hundreds of clients. Amanda is so thoroughly documented, from community wikis to published system administration texts, that it might be hard to discern just how easy an Amanda backup can be.
This article will show you how, in about 15 minutes, you can:
1. Install and configure the Amanda backup server.
2. Prepare three different clients for backup.
3. Set backup parameters.
4. Verify the configuration.
5. Verify the backup.

We will install and configure Amanda backup server software on Quartz, which is running Red Hat Enterprise Linux. We will install and configure Amanda backup client software on Copper and on Iron. The Windows XP client, Uranium, will be backed up with Amanda server software running in conjunction with Samba on the backup server, Quartz.

Client    Filesystem              OS      Compression  Encryption
Copper    /var/www/html           SLES9   Yes          No
Iron      /home/pavel             FC4     Yes          Yes
Uranium   //uranium/MyDocuments*  WINXP   Yes          No

* using Samba (i.e. without installing any software on the Windows system)
Amanda gives you the capability to use disk storage as backup media. Configuring, initiating and verifying a backup will complete the backup cycle, all in less than the time it takes for a pizza to be delivered!

Prerequisites
The basic Amanda setup consists of an Amanda server, the Amanda client or clients that are to be backed up, and the backup storage media such as a tape or hard disk device. An intermediate holding area for caching data is not absolutely necessary, but will improve performance significantly and is considered part of a basic setup.
Before we begin, please review the introduction to Amanda. Then, note the following prerequisites:
  • tar 1.15 or later and xinetd are installed on Quartz, Iron and Copper.
  • Quartz is able to send mail to the root user.
  • The systems are all on the same network and available.
  • You have root access, and root access through SSH is enabled and working.
  • The directories to be backed up exist.
  • The Amanda 2.5.1p2 backup_server RPM should be available on Quartz, and the backup_client RPM should be available on Iron and Copper. Amanda binary and source RPM packages and source tarballs are freely available from Zmanda.
  • Quartz, the backup server, is running Samba client software. Samba is also freely available open source software.
To support the encrypted backup of /home/pavel on Iron, GnuPG (the gpg command used by the encryption script in step 7 below) and Perl should be installed and available on Iron.
Also note that this article assumes a fresh install of Amanda. If you have an existing Amanda installation, additional steps are needed to ensure the proper upgrade to the latest Amanda release (2.5.1p2 and later).
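A quick way to check the tar and xinetd prerequisites on each host is sketched below; the "command -v" probe is only an approximation, since xinetd may be installed outside the shell's search path:

```shell
# Print the tar version (should report 1.15 or later).
tar --version | head -1
# Report whether an xinetd binary is visible on this host.
command -v xinetd >/dev/null 2>&1 && echo "xinetd found" || echo "xinetd missing"
```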
TIP: You can copy and paste all of the examples here, making appropriate modifications for your environment.
Order Pizza
Call your favorite pizza delivery place, set your stopwatch and...
Install and Configure the Amanda Backup Server
1.    Log in as root on Quartz, the Red Hat Enterprise Linux 4 server.
2.    Install the Amanda 2.5.1p2 amanda-backup_server RPM. Installing the package also creates a user named amandabackup who belongs to the group disk.
[root@quartz server]# rpm -ivh amanda-backup_server-2.5.1p2-1.rhel4.i386.rpm
warning: amanda-backup_server-2.5.1p2-1.rhel4.i386.rpm: V3 DSA signature: NOKEY, key ID 3c5d1c92
Preparing...                ########################################### [100%]
Jan  5 2007 12:12:55: Preparing to install: Amanda Community Edition - version 2.5.1p2
Jan  5 2007 12:12:55: Checking for 'amandabackup' user...
Jan  5 2007 12:12:55:
Jan  5 2007 12:12:55:  The Amanda backup software is configured to operate as the
Jan  5 2007 12:12:55:  user 'amandabackup'.  This user exists on your system and has not
Jan  5 2007 12:12:55:  been modified.  To ensure that Amanda functions properly,
Jan  5 2007 12:12:56:  please see that the following parameters are set for that
Jan  5 2007 12:12:56:  user.:
Jan  5 2007 12:12:56:
Jan  5 2007 12:12:56:  SHELL:          /bin/sh
Jan  5 2007 12:12:56:  HOME:           /var/lib/amanda
Jan  5 2007 12:12:56:  Default group:  disk
Jan  5 2007 12:12:56:
Jan  5 2007 12:12:56:  Checking ownership of '/var/lib/amanda'... correct.
Jan  5 2007 12:12:57:
Jan  5 2007 12:12:57: === Amanda backup server installation started. ===
   1:amanda-backup_server   ########################################### [100%]
Jan  5 2007 12:13:05: Updating system library cache...done.
Jan  5 2007 12:13:21: Installing '/etc/amandates'.
Jan  5 2007 12:13:21: The file '/etc/amandates' has been created.
Jan  5 2007 12:13:21: Ensuring correct permissions for '/etc/amandates'.
Jan  5 2007 12:13:21: '/etc/amandates' Installation successful.
Jan  5 2007 12:13:22: Checking '/var/lib/amanda/.amandahosts' file.
Jan  5 2007 12:13:22: Checking for '/var/lib/amanda/.profile' and ensuring correct environment.
Jan  5 2007 12:13:23: Setting ownership and permissions for '/var/lib/amanda/.profile'
Jan  5 2007 12:13:23: === Amanda backup server installation complete. ===
Amanda installation log can be found in '/var/log/amanda/install.log' and errors (if any) in '/var/log/amanda/install.err'.
3.    The Amanda services are started by the extended internet daemon, xinetd, which is why you must have xinetd installed on every Amanda server and client. In any text editor, create an xinetd startup file, /etc/xinetd.d/amandaserver, with content as follows.
For the /etc/xinetd.d/amandaserver file, on Quartz:
# default: on
#
# description: Amanda services for Amanda server and client.
#
service amanda
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = amandabackup
        group           = disk
        groups          = yes
        server          = /usr/lib/amanda/amandad
        server_args     = -auth=bsdtcp amdump amindexd amidxtaped
}
4.    Restart xinetd on Quartz.
[root@quartz xinetd.d]# service xinetd reload
Reloading configuration:                                   [  OK  ]
5.    Note the time. Only about five minutes should have passed!

Install and Configure Three Different Amanda Clients

Installation of Amanda Client RPM on Iron (FC4)
1.    Log in as root on Iron, your Fedora Core 4 client.
2.    Install the Amanda 2.5.1p2 backup_client RPM. Installing the package also creates a user named amandabackup who belongs to the group disk.
[root@iron client]# rpm -ivh amanda-backup_client-2.5.1p2-1.fc4.i386.rpm
warning: amanda-backup_client-2.5.1p2-1.fc4.i386.rpm: Header V3 DSA signature: NOKEY, key ID 3c5d1c92
Preparing...                ########################################### [100%]
Jan  5 2007 10:17:16: Preparing to install: Amanda Community Edition - version 2.5.1p2
Jan  5 2007 10:17:16: Checking for 'amandabackup' user...
Jan  5 2007 10:17:16:
Jan  5 2007 10:17:16:  The Amanda backup software is configured to operate as the
Jan  5 2007 10:17:17:  user 'amandabackup'.  This user exists on your system and has not
Jan  5 2007 10:17:17:  been modified.  To ensure that Amanda functions properly,
Jan  5 2007 10:17:17:  please see that the following parameters are set for that
Jan  5 2007 10:17:17:  user.:
Jan  5 2007 10:17:17:
Jan  5 2007 10:17:17:  SHELL:          /bin/sh
Jan  5 2007 10:17:17:  HOME:           /var/lib/amanda
Jan  5 2007 10:17:17:  Default group:  disk
Jan  5 2007 10:17:17:
Jan  5 2007 10:17:17:  Checking ownership of '/var/lib/amanda'... correct.
Jan  5 2007 10:17:17:
Jan  5 2007 10:17:17: === Amanda backup client installation started. ===

   1:amanda-backup_client   ########################################### [100%]
Jan  5 2007 10:17:21: Updating system library cache...done.
Jan  5 2007 10:17:30: Checking '/var/lib/amanda/.amandahosts' file.
Jan  5 2007 10:17:31: Checking for '/var/lib/amanda/.profile' and ensuring correct environment.
Jan  5 2007 10:17:31: Setting ownership and permissions for '/var/lib/amanda/.profile'
Jan  5 2007 10:17:31: Checking for '/var/lib/amanda/.profile' and ensuring correct environment.
Jan  5 2007 10:17:31: Setting ownership and permissions for '/var/lib/amanda/.profile'
Jan  5 2007 10:17:31: === Amanda backup client installation complete. ===
Amanda installation log can be found in '/var/log/amanda/install.log' and errors (if any) in '/var/log/amanda/install.err'.
3.    In any text editor, create an xinetd startup file, /etc/xinetd.d/amandaclient, with content as follows.
# default: on
#
# description: Amanda services for Amanda client.
#
service amanda
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = amandabackup
        group           = disk
        groups          = yes
        server          = /usr/lib/amanda/amandad
        server_args     = -auth=bsdtcp amdump
}
4.    Restart xinetd on Iron.
[root@iron xinetd.d]# service xinetd reload
Reloading configuration:                                   [  OK  ]
5.    Become the amandabackup user and append the line "quartz.zmanda.com amandabackup amdump" to the /var/lib/amanda/.amandahosts file on Iron. This allows Quartz, the Amanda backup server, to connect to Iron, the Amanda client.
Note that you should use fully qualified domain names when configuring Amanda.
-bash-3.00$ echo quartz.zmanda.com amandabackup amdump >> /var/lib/amanda/.amandahosts
-bash-3.00$ chmod 700 /var/lib/amanda/.amandahosts
6.    Save the encryption passphrase as the hidden file .am_passphrase in the home directory of the amandabackup user. Protect the file with the proper permissions.
As the user amandabackup: 
-sh-3.00$ chown amandabackup:disk ~amandabackup/.am_passphrase
-sh-3.00$ chmod 700 ~amandabackup/.am_passphrase
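If the passphrase file does not exist yet, it can be created as sketched below; the passphrase shown is only a placeholder, so substitute your own:

```shell
# As the amandabackup user: create the hidden passphrase file in $HOME.
# "CHANGE-THIS-PASSPHRASE" is a placeholder, not a real passphrase.
umask 077
echo "CHANGE-THIS-PASSPHRASE" > "$HOME/.am_passphrase"
```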
7.    Create a script that enables encryption on the client Iron.
As root create a file /usr/sbin/amcryptsimple:
 
#!/usr/bin/perl -w
use Time::Local;

my $AMANDA = 'amandabackup';
$AMANDA_HOME = (getpwnam($AMANDA))[7] || die "Cannot find $AMANDA home directory\n";
$AM_PASS = "$AMANDA_HOME/.am_passphrase";

$ENV{'PATH'} = '/usr/local/bin:/usr/bin:/usr/sbin:/bin:/sbin';
$ENV{'GNUPGHOME'} = "$AMANDA_HOME/.gnupg";

sub encrypt() {
    system "gpg --batch --disable-mdc --symmetric --cipher-algo AES256 --passphrase-fd 3 3<$AM_PASS";
}

sub decrypt() {
    system "gpg --batch --quiet --no-mdc-warning --decrypt --passphrase-fd 3 3<$AM_PASS";
}

if ( $#ARGV > 0 ) {
    die "Usage: $0 [-d]\n";
}
if ( $#ARGV == 0 && $ARGV[0] eq "-d" ) {
    decrypt();
} else {
    encrypt();
}
8.    Change the ownership and the permissions on the file /usr/sbin/amcryptsimple you just created:
[root@iron sbin]# chown amandabackup:disk /usr/sbin/amcryptsimple
[root@iron sbin]# chmod 750 /usr/sbin/amcryptsimple
9.    This completes configuration of the Amanda client on Iron.
Installation of Amanda Client RPM on Copper (SLES9)
1.    Log in as the root user on Copper, your SUSE Linux Enterprise Server 9 client.
2.    Install the Amanda 2.5.1p2 backup_client RPM. Installing the package also creates a user named amandabackup who belongs to the group disk.
copper:/ # rpm -ivh amanda-backup_client-2.5.1p2-1.sles9.i586.rpm
warning: amanda-backup_client-2.5.1p2-1.sles9.i586.rpm: V3 DSA signature: NOKEY, key ID 3c5d1c92
Preparing...                ########################################### [100%]
Jan  5 2007 07:20:21: Preparing to install: Amanda Community Edition - version 2.5.1p2
Jan  5 2007 07:20:21: Checking for 'amandabackup' user...
Jan  5 2007 07:20:21:
Jan  5 2007 07:20:21:  The Amanda backup software is configured to operate as the
Jan  5 2007 07:20:21:  user 'amandabackup'.  This user exists on your system and has not
Jan  5 2007 07:20:21:  been modified.  To ensure that Amanda functions properly,
Jan  5 2007 07:20:21:  please see that the following parameters are set for that
Jan  5 2007 07:20:22:  user.:
Jan  5 2007 07:20:22:
Jan  5 2007 07:20:22:  SHELL:          /bin/sh
Jan  5 2007 07:20:22:  HOME:           /var/lib/amanda
Jan  5 2007 07:20:22:  Default group:  disk
Jan  5 2007 07:20:22:
Jan  5 2007 07:20:22:  Checking ownership of '/var/lib/amanda'... correct.
Jan  5 2007 07:20:22:
Jan  5 2007 07:20:22: === Amanda backup client installation started. ===

   1:amanda-backup_client   ########################################### [100%]
Jan  5 2007 07:20:26: Updating system library cache...done.
Jan  5 2007 07:20:26: Checking '/var/lib/amanda/.amandahosts' file.
Jan  5 2007 07:20:27: Checking for '/var/lib/amanda/.profile' and ensuring correct environment.
Jan  5 2007 07:20:27: Setting ownership and permissions for '/var/lib/amanda/.profile'
Jan  5 2007 07:20:27: Checking for '/var/lib/amanda/.profile' and ensuring correct environment.
Jan  5 2007 07:20:27: Setting ownership and permissions for '/var/lib/amanda/.profile'
Jan  5 2007 07:20:27: === Amanda backup client installation complete. ===
Amanda installation log can be found in '/var/log/amanda/install.log' and errors (if any) in '/var/log/amanda/install.err'.
3.    In any text editor, create an xinetd startup file, /etc/xinetd.d/amandaclient, with content as follows.
# default: on
#
# description: Amanda services for Amanda client.
#
service amanda
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = amandabackup
        group           = disk
        groups          = yes
        server          = /usr/lib/amanda/amandad
        server_args     = -auth=bsdtcp amdump
}
4.    Restart xinetd on Copper.
copper:/ # /etc/rc.d/xinetd restart
Reload INET services (xinetd).                                       done
5.    Become the amandabackup user and append the line "quartz.zmanda.com amandabackup amdump" to the /var/lib/amanda/.amandahosts file on Copper. This allows Quartz, the Amanda backup server, to connect to Copper, the Amanda client.
Note that you should use fully qualified domain names when configuring Amanda.
-bash-3.00$ echo quartz.zmanda.com amandabackup amdump >> /var/lib/amanda/.amandahosts
-bash-3.00$ chmod 700 /var/lib/amanda/.amandahosts
6.    This completes configuration of the Amanda client on Copper. If you check your watch, you should find that only about ten minutes have passed!

Configurations Required to Backup Windows Client Uranium
·       Configuration done on backup server Quartz:
1.    The file /etc/amandapass must be created manually, owned by the amandabackup user and have permissions of 700. The amandapass file contains the mapping from share name to user name, password and workgroup.
As the root user:
[root@quartz /]# echo //uranium/MyDocuments zmanda%amanda Workgroup >> /etc/amandapass
2.    Change the ownership and permissions on this file:
[root@quartz etc]# chown amandabackup:disk /etc/amandapass
[root@quartz etc]# chmod 700 /etc/amandapass
·       Configuration done on Windows client Uranium:
The directory getting backed up must be shared from Windows and must be accessible by the Windows user zmanda with the password amanda.
Set Backup Parameters
1.    On Quartz, as the amandabackup user, create the Amanda configuration directory.
[root@quartz etc]# su - amandabackup
-bash-3.00$ mkdir /etc/amanda/DailySet1
2.    Copy the /var/lib/amanda/example/amanda.conf file to the /etc/amanda/DailySet1 directory. The amanda.conf file is the most important file for configuring your Amanda setup.
-bash-3.00$ cp /var/lib/amanda/example/amanda.conf /etc/amanda/DailySet1
3.    The sample amanda.conf distributed with Amanda is over 700 lines long and is extensively commented. For more information, search for amanda.conf on the Amanda wiki. We will focus on just a few lines and make minimal modifications.
Open /etc/amanda/DailySet1/amanda.conf with any text editor and edit it to suit your environment.
·       The following lines control some details specific to your organization and to your tape configuration.
org "YourCompanyName"                          # your organization name for reports
mailto "root@localhost"                        # space separated list of operators at your site
tpchanger "chg-disk"                           # the tape-changer glue script
tapedev "file://space/vtapes/DailySet1/slots"  # the no-rewind tape device to be used
tapetype HARDDISK                              # use hard disk instead of tapes (vtape config)
·       We add the following lines to specify the size of the virtual tapes:
define tapetype HARDDISK {
 length 100000 mbytes
}
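It is worth sanity-checking how much space these settings can promise in total. With the 25 vtape slots created later in this configuration, each allowed to grow to 100000 MB, the back-of-the-envelope total is:

```shell
# 25 virtual tapes x 100000 MB each, converted to GB. Amanda does not
# preallocate this space, but the vtapes can grow toward it over time.
echo "$(( 25 * 100000 / 1024 )) GB"
```

This prints "2441 GB", far more than most partitions hold, so size the length parameter to what /space can actually accommodate.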
·       We add the following lines to support the encrypted backup of /home/pavel on Iron:
define dumptype encrypt-simple {
        root-tar
        comment "client simple symmetric encryption, dumped with tar"
        encrypt client
        compress fast
        client_encrypt "/usr/sbin/amcryptsimple"
        client_decrypt_option "-d"
}
·       Go to the “define dumptype global” section in the amanda.conf file and add the line “auth "bsdtcp"” right before the closing “}” bracket. This enables bsdtcp authentication.
# index yes
# record no
# split_diskbuffer "/raid/amanda"
# fallback_splitsize 64m
auth "bsdtcp"
4.    As the root user, create a cache directory to use as a holding disk.
[root@quartz ~]# mkdir -p /dumps/amanda
[root@quartz ~]# chown amandabackup:disk /dumps/amanda
[root@quartz ~]# chmod 750 /dumps/amanda
5.    Create the virtual tapes. Dedicated directories are used as “virtual tapes” called vtapes. You work with vtapes in the same way that you work with physical tapes. Vtapes can even simulate tape changers, as you will see in our example.
For security reasons, limit access to the vtapes directory to the amandabackup user.
As the root user:
[root@quartz ~]# mkdir -p /space/vtapes
[root@quartz ~]# chown amandabackup:disk /space/vtapes
[root@quartz ~]# chmod 750 /space/vtapes
As the amandabackup user:
-bash-3.00$ touch /etc/amanda/DailySet1/tapelist
-bash-3.00$ mkdir -p /space/vtapes/DailySet1/slots
-bash-3.00$ cd /space/vtapes/DailySet1/slots
-bash-3.00$ for ((i=1; $i<=25; i++)); do mkdir  slot$i;done
-bash-3.00$ ln -s slot1 data
6.    Test the virtual tape setup.
-bash-3.00$ ammt -f file:/space/vtapes/DailySet1/slots status
file:/space/vtapes/DailySet1/slots
status: ONLINE
7.    Just as with physical tapes, the virtual tapes now need to be labeled. (Please note that the output below has been truncated.)
bash-3.00$ for ((i=1; $i<=9;i++)); do amlabel DailySet1 DailySet1-0$i slot $i; done
changer: got exit: 0 str: 1 file://space/vtapes/DailySet1/slots
labeling tape in slot 1 (file://space/vtapes/DailySet1/slots):
rewinding, reading label, not an amanda tape (Read 0 bytes)
rewinding, writing label DailySet1-01, checking label, done.
...
changer: got exit: 0 str: 9 file://space/vtapes/DailySet1/slots
labeling tape in slot 9 (file://space/vtapes/DailySet1/slots):
rewinding, reading label, not an amanda tape (Read 0 bytes)
rewinding, writing label DailySet1-09, checking label, done.
-bash-3.00$ for ((i=10; $i<=25;i++)); do amlabel DailySet1 DailySet1-$i slot $i; done
changer: got exit: 0 str: 10 file://space/vtapes/DailySet1/slots
labeling tape in slot 10 (file://space/vtapes/DailySet1/slots):
rewinding, reading label, not an amanda tape (Read 0 bytes)

 rewinding, writing label DailySet1-10, checking label, done.
...
changer: got exit: 0 str: 25 file://space/vtapes/DailySet1/slots
labeling tape in slot 25 (file://space/vtapes/DailySet1/slots):
rewinding, reading label, not an amanda tape (Read 0 bytes)
rewinding, writing label DailySet1-25, checking label, done.
8.    Now we need to reset the virtual tape changer back to the first slot.
-bash-3.00$ amtape DailySet1 reset
changer: got exit: 0 str: 1
amtape: changer is reset, slot 1 is loaded.
9.    Create an /etc/amanda/DailySet1/disklist file in the Amanda configuration directory. The disklist contains the fully qualified backup client names, the directory or directories to be backed up and the dumptype.
copper.zmanda.com /var/www/html comp-user-tar
iron.zmanda.com /home/pavel encrypt-simple
quartz.zmanda.com //uranium/MyDocuments comp-user-tar
10.   As the user amandabackup, append the following lines to the /var/lib/amanda/.amandahosts file to allow the backup clients to connect back to the server when doing restores. Specify fully qualified domain names.
iron.zmanda.com root amindexd amidxtaped
copper.zmanda.com root amindexd amidxtaped
quartz.zmanda.com root amindexd amidxtaped
quartz.zmanda.com amandabackup amdump
11.   Create a cron job that will execute amdump and initiate your backups automatically. As the amandabackup user, run crontab -e, and add the following line to run backups Monday through Friday at 1 a.m.
0 1 * * 1-5 /usr/sbin/amdump DailySet1
Verify Your Configuration
1.    On Quartz, as amandabackup, run the amcheck tool to verify that you can successfully perform a backup.
-bash-3.00$ amcheck DailySet1
Amanda Tape Server Host Check
-----------------------------
Holding disk /dumps/amanda: 16714488 KB disk space available, using 16612088 KB
slot 1: read label `DailySet1-01', date `X'
NOTE: skipping tape-writable test
Tape DailySet1-01 label ok
NOTE: conf info dir /etc/amanda/DailySet1/curinfo does not exist
NOTE: it will be created on the next run.
NOTE: index dir /etc/amanda/DailySet1/index does not exist
NOTE: it will be created on the next run.
Server check took 4.259 seconds
Amanda Backup Client Hosts Check
--------------------------------
Client check: 3 hosts checked in 27.097 seconds, 0 problems found
(brought to you by Amanda 2.5.1p2)

Run a Backup
1.    On Quartz, as amandabackup, run amdump to start the DailySet1 backup.
-bash-3.00$ amdump DailySet1
2.    Amanda will email a detailed status report from the amandabackup user to you, the root user on Quartz.
From amandabackup@quartz.zmanda.com  Fri Jan  5 13:04:20 2007
Date: Fri, 5 Jan 2007 13:04:19 -0800
From: Amanda user
To: root@quartz.zmanda.com
Subject: YourCompanyName AMANDA MAIL REPORT FOR January 5, 2007
These dumps were to tape DailySet1-02.
The next tape Amanda expects to use is: a new tape.
The next new tape already labelled is: DailySet1-02.
STATISTICS:
                          Total       Full      Incr.
                        --------   --------   --------
Estimate Time (hrs:min)    0:00
Run Time (hrs:min)         0:00
Dump Time (hrs:min)        0:00       0:00       0:00
Output Size (meg)           3.5        3.5        0.0
Original Size (meg)        11.8       11.8        0.0
Avg Compressed Size (%)    29.7       29.7        --
Filesystems Dumped            3          3          0
Avg Dump Rate (k/s)       292.8      292.8        --
Tape Time (hrs:min)        0:00       0:00       0:00
Tape Size (meg)             3.7        3.7        0.0
Tape Used (%)               0.0        0.0        0.0
Filesystems Taped             3          3          0
Chunks Taped                  0          0          0
Avg Tp Write Rate (k/s)  8509.1     8509.1        --
 
USAGE BY TAPE:
  Label              Time      Size      %    Nb    Nc
  DailySet1-02       0:00     3744K    0.0     3     0 
NOTES:
  planner: Forcing full dump of copper.zmanda.com:/var/www/html as directed.
  planner: Forcing full dump of iron.zmanda.com:/home/pavel as directed.
  planner: Forcing full dump of quartz.zmanda.com://uranium/MyDocuments as directed.
  taper: tape DailySet1-02 kb 3744 fm 3 [OK]
DUMP SUMMARY:
                                       DUMPER STATS               TAPER STATS
HOSTNAME     DISK        L ORIG-KB  OUT-KB  COMP%  MMM:SS   KB/s MMM:SS   KB/s
-------------------------- ------------------------------------- -------------
copper.zmand -r/www/html 0    7640    2336   30.6    0:03  910.6   0:00 8680.7
iron.zmanda. /home/pavel 0    3530    1024   29.0    0:07  149.1   0:00 12486.1
quartz.zmand -yDocuments 0     960     384   40.0    0:03  101.0   0:00 4295.3
(brought to you by Amanda version 2.5.1p2)
3.    You can also run the tool amadmin with a find argument for a quick summary of what has been backed up.
-bash-3.00$ amadmin DailySet1 find
Scanning /dumps/amanda...
date                host              disk                  lv tape or file file part status
2007-01-05 13:04:03 copper.zmanda.com /var/www/html          0 DailySet1-02    2   -- OK
2007-01-05 13:04:03 iron.zmanda.com   /home/pavel            0 DailySet1-02    3   -- OK
2007-01-05 13:04:03 quartz.zmanda.com //uranium/MyDocuments  0 DailySet1-02    1   -- OK

Success!
In just about 15 minutes, we installed and configured a secure, heterogeneous network backup, verified our configurations and ran a backup. We did it with freely downloadable open source software that you can install from binaries or compile for your unique needs. The pizza, which should be getting delivered right about now, will be that much more enjoyable with the clear conscience and peace of mind that comes with knowing that your data is secure.
Recovery

Based on feedback received on our forums, we are adding a section that demonstrates a restore.

1. On Copper, as root, create the "/etc/amanda" directory.
copper:~ # mkdir /etc/amanda

copper:~ # chown amandabackup:disk /etc/amanda

2. As amandabackup, create a file "/etc/amanda/amanda-client.conf" and insert the lines below into the file.

# amanda.conf - sample Amanda client configuration file.
#
# This file normally goes in /etc/amanda/amanda-client.conf.
#
conf "DailySet1" # your config name

index_server "quartz.zmanda.com" # your amindexd server

tape_server "quartz.zmanda.com" # your amidxtaped server

#tapedev "/dev/null" # your tape device
# auth - authentication scheme to use between server and client.
# Valid values are "bsd", "bsdudp", "bsdtcp" and "ssh".
# Default: [auth "bsdtcp"]

auth "bsdtcp"

# your ssh keys file if you use ssh auth

ssh_keys "/var/lib/amanda/.ssh/id_rsa_amrecover"

3. As root run "amrecover" to initiate the data recovery process.

copper:/etc/amanda # amrecover
AMRECOVER Version 2.5.1p2. Contacting server on quartz.zmanda.com ...
220 quartz AMANDA index server (2.5.1p2) ready.
Setting restore date to today (2007-01-08)
200 Working date set to 2007-01-08.
200 Config set to DailySet1.
501 Host copper is not in your disklist.
Trying host copper.zmanda.com ...
200 Dump host set to copper.zmanda.com.
Use the setdisk command to choose dump disk to recover
amrecover>

4. The list of commands below will demonstrate a recovery of a set of different files and directories to the "/tmp" directory.

amrecover> listdisk
200- List of disk for host copper.zmanda.com
201- /var/www/html
200 List of disk for host copper.zmanda.com
amrecover> setdisk /var/www/html
200 Disk set to /var/www/html.
amrecover> ls
2007-01-05-13-04-03 tar-1.15/
2007-01-05-13-04-03 .
amrecover> cd tar-1.15
/var/www/html/tar-1.15
amrecover> ls
2007-01-05-13-04-03 scripts/
2007-01-05-13-04-03 doc/
2007-01-05-13-04-03 configure
2007-01-05-13-04-03 config/
2007-01-05-13-04-03 COPYING
2007-01-05-13-04-03 AUTHORS
2007-01-05-13-04-03 ABOUT-NLS
amrecover> add scripts/
Added dir /tar-1.15/scripts/ at date 2007-01-05-13-04-03
amrecover> add configure
Added file /tar-1.15/configure
amrecover> add doc/
Added dir /tar-1.15/doc/ at date 2007-01-05-13-04-03
amrecover> lcd /tmp
amrecover> extract
Extracting files using tape drive chg-disk on host quartz.zmanda.com.
The following tapes are needed: DailySet1-02
Restoring files into directory /tmp
Continue [?/Y/n]? y
Extracting files using tape drive chg-disk on host quartz.zmanda.com.
Load tape DailySet1-02 now
Continue [?/Y/n/s/t]? y
./tar-1.15/doc/
./tar-1.15/scripts/
./tar-1.15/configure
./tar-1.15/doc/Makefile.am
./tar-1.15/doc/Makefile.in
./tar-1.15/doc/convtexi.pl
./tar-1.15/doc/fdl.texi
./tar-1.15/doc/freemanuals.texi
./tar-1.15/doc/getdate.texi
./tar-1.15/doc/header.texi
./tar-1.15/doc/stamp-vti
./tar-1.15/doc/tar.info
./tar-1.15/doc/tar.info-1
./tar-1.15/doc/tar.info-2
./tar-1.15/doc/tar.texi
./tar-1.15/doc/version.texi
./tar-1.15/scripts/Makefile.am
./tar-1.15/scripts/Makefile.in
./tar-1.15/scripts/backup-specs
./tar-1.15/scripts/backup.in
./tar-1.15/scripts/backup.sh.in
./tar-1.15/scripts/dump-remind.in
./tar-1.15/scripts/restore.in
amrecover> quit
200 Good bye.

5. We can now verify that the files have been recovered successfully by running the following command.

copper:/ # tree /tmp/tar-1.15
/tmp/tar-1.15
|-- configure
|-- doc
|   |-- Makefile.am
|   |-- Makefile.in
|   |-- convtexi.pl
|   |-- fdl.texi
|   |-- freemanuals.texi
|   |-- getdate.texi
|   |-- header.texi
|   |-- stamp-vti
|   |-- tar.info
|   |-- tar.info-1
|   |-- tar.info-2
|   |-- tar.texi
|   `-- version.texi
`-- scripts
    |-- Makefile.am
    |-- Makefile.in
    |-- backup-specs
    |-- backup.in
    |-- backup.sh.in
    |-- dump-remind.in
    `-- restore.in


2 directories, 21 files