
Managing a cluster can be a challenge. Customizing, deploying, and configuring numerous computers is a task that does not scale well past a few nodes without automated assistance. Cobbler is a toolset that allows quick, uniform node deployment, and Puppet is a configuration engine that gives administrators the ability to quickly and uniformly reconfigure machines. Both are highly scalable systems that help administrators provision and maintain cluster nodes.

Installation and Initial Configuration

Installation of Cobbler is straightforward starting from a RHEL/CentOS/SL Linux machine. Install a RHEL, CentOS, or SL version of Linux on what will be the management machine.

 Cobbler Manual

If you'd like extra information on configuring cobbler that goes beyond the scope of this document, the Cobbler Manual is available on the cobbler website.

You can check your version of cobbler with the simple command

cobbler version
  1. Enable EPEL: 

    rpm -i http://mirror.pnl.gov/epel/6/x86_64/epel-release-6-8.noarch.rpm
  2. Install cobbler and cobbler-web (and koan for good measure): 

    yum -y install cobbler cobbler-web pykickstart system-config-kickstart dhcp mod_python wget tftp bind xinetd
  3. SELinux will be a headache out of the box. There are proper fixes, but for now disable SELinux.

    To disable it in the current session, use:

    setenforce 0


    To set SELinux to permissive mode at boot, edit the file /etc/sysconfig/selinux and set

    SELINUX=permissive

    A status of 'permissive' will still show as 'enabled', since permissive mode still evaluates all filesystem operations against the SELinux rules. Permissive mode logs the operations that would have been denied to the audit log, allowing system administrators to correct the SELinux policies before re-enabling enforcement.

    You can check the current status of SELinux with the command 

    sestatus

From here you can continue to configure cobbler via its web interface or via the command line. The command line (CLI) is preferred by experienced admins, but the web UI may be easier to understand at first. CLI equivalents will be noted after web interface screenshots.

Choose a Hostname and domain

If your headnode is not assigned a hostname via DHCP, it will be necessary to set its hostname and domain. The domain should not be equivalent to a domain that is managed by another DNS service, but it can be an extended domain such as cluster.unl.edu or group12.unl.edu. (unl.edu is a real domain, but cluster.unl.edu is not.)

Set the hostname and domain (together, the fully qualified domain name or FQDN) by editing /etc/sysconfig/network to have a HOSTNAME line different from localhost.localdomain.

/etc/sysconfig/network
HOSTNAME=headnode.cluster.unl.edu
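The edit above takes effect at the next boot; a minimal sketch of applying and verifying it on the running system (the FQDN is the example from above, substitute your own):

```shell
# Apply the hostname to the running system immediately, no reboot needed
hostname headnode.cluster.unl.edu

# Verify the name the kernel now reports
hostname

# 'hostname -f' prints the FQDN, but only if the name resolves;
# adding a matching line to /etc/hosts is an easy way to ensure that
hostname -f
```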

 

IPTables Configuration

As with SELinux, iptables is a complication that has solutions, but not easy ones. You can disable iptables to continue forward quickly, or add some rules to allow Cobbler's traffic through.

For TFTP
iptables -I INPUT -p tcp --dport 69 -j ACCEPT
iptables -I INPUT -p udp --dport 69 -j ACCEPT
 
## for iptables to allow tftp, you must also add ip_conntrack_tftp to the IPTABLES_MODULES list in the file /etc/sysconfig/iptables-config (for RHEL/CentOS/SL) or /etc/modules (for Ubuntu) ##
For HTTPD
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 443 -j ACCEPT
For Cobbler
iptables -I INPUT -p tcp --dport 25150 -j ACCEPT
For Koan
iptables -I INPUT -p tcp --dport 25151 -j ACCEPT
Save iptables configuration
service iptables save
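Following the TFTP note above about ip_conntrack_tftp, the module line and a quick rule check might look like this (a sketch; append to any modules already listed in your iptables-config):

```shell
# In /etc/sysconfig/iptables-config, load the TFTP connection-tracking helper
IPTABLES_MODULES="ip_conntrack_tftp"

# After 'service iptables save', confirm the new ACCEPT rules sit at the
# top of the INPUT chain
iptables -L INPUT -n --line-numbers | head -n 12
```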

Configure the Secondary Network Interface

The Cobbler headnode will serve as a manager for the workers' network configuration. The secondary network interface on the Cobbler node needs to be configured prior to starting Cobbler. Failure to configure the interface will cause strange Cobbler errors that may not make sense. Since this interface will live on a private subnet, you can manually configure it in the networking scripts. On SL6 you can do this by editing /etc/sysconfig/network-scripts/ifcfg-eth1 (assuming you're using eth1 as your internal interface; you are free to use eth0 as the internal and eth1 as the external interface instead).

DEVICE=eth1
HWADDR=00:18:8B:7A:D2:7F  ## <--- Of course this MAC address is machine dependent.
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
NAME=eth1
IPADDR=10.0.0.1
NETMASK=255.255.255.0
NETWORK=10.0.0.0
BROADCAST=10.0.0.255
GATEWAY=192.168.1.5 ##<--- This address is the Cobbler's machine's upstream Gateway.  For Cluster Computing class only.

Restart the network (service network restart) and check ifconfig to see if the interface has come up.
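The restart-and-check step can be sketched as follows (ip addr is shown alongside ifconfig, since it reports the same information):

```shell
# Restart networking so the new ifcfg-eth1 file is read
service network restart

# Confirm eth1 is up with the static address from the file;
# expect an 'inet' line containing 10.0.0.1
ip addr show eth1 | grep 'inet '
```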


Start the web and cobbler daemons

service httpd start
service cobblerd start
service xinetd start
 
## set to start on boot ##
 
chkconfig httpd on
chkconfig xinetd on
chkconfig cobblerd on


Enable TFTPD and RSYNC

You must also make sure to enable TFTPD and RSYNC under xinetd. To do this, change the 'disable' setting in each service file to 'no'. After editing these files, restart xinetd.

/etc/xinetd.d/tftp
# default: off
# description: The tftp server serves files using the trivial file transfer \
#       protocol.  The tftp protocol is often used to boot diskless \
#       workstations, download configuration files to network-aware printers, \
#       and to start the installation process for some operating systems.
service tftp
{
        disable                 = no
        socket_type             = dgram
        protocol                = udp
        wait                    = yes
        user                    = nobody
        server                  = /usr/sbin/tftpd.py
        server_args             = -B 1380 -v -s /var/lib/tftpboot/
        per_source              = 11
        cps                     = 100 2
        flags                   = IPv4
}

/etc/xinetd.d/rsync
# default: off
# description: The rsync server is a good addition to an ftp server, as it \
#       allows crc checksumming etc.
service rsync
{
        disable = no
        flags           = IPv6
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
}
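After editing both files, a restart picks the changes up, and chkconfig can confirm the services are enabled (a quick check on RHEL/CentOS/SL 6):

```shell
# Reload xinetd so the edited tftp and rsync service files take effect
service xinetd restart

# Both should now report 'on' as xinetd-based services
chkconfig --list tftp
chkconfig --list rsync
```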

Web Configuration  *** OPTIONAL ***

You should now be able to see the Cobbler web UI pages by connecting to the managing server with a web browser at https://x.x.x.x/cobbler_web, where x.x.x.x is the IP address of the management node. If you are working on a test cluster with a NATted-off internet connection, you can connect to the web interface via the machine's internal address. (For the cluster computing class, configure a laptop to have a wired connection with address 10.0.0.2 and physically plug the laptop into the Netgear switch.)

 

The first thing to configure is the user accounts and authentication mechanism for the Cobbler Web UI. Without a comprehensive site infrastructure that includes Kerberos or LDAP services, you can use the “Digest” method included in SSL Apache. Enabling this is fairly simple. (Prior to enabling this, all authentication is denied by default.)

Edit the file /etc/cobbler/modules.conf and change the [authentication] block to have the lines

[authentication] 
module=authn_configfile


Modify the cobbler user's password (or remove the account altogether) using the htdigest command.

htdigest /etc/cobbler/users.digest "Cobbler" cobbler
 
## and/or add another user with ##
 
htdigest /etc/cobbler/users.digest "Cobbler" $username

Restart the cobbler service (to pick up the authentication change in the modules.conf file).

service cobblerd restart

Creating a Profile and Kickstart for the First Node

At this point we can import a Linux distro that we would like to install on our worker nodes. There are several different methods to import a distro.

In the following examples we will assume you are using a Scientific Linux (RHEL) distro, but any distro will work.
To mirror from a DVD:

By ISO
## Loop mount the ISO file such that it is accessible to the OS. For an example let us assume that the distro we would like to use is Scientific Linux, 64 bit, x86_64. We have downloaded the ISO image to root's home directory. Mount the image by doing the following: ##
 
mkdir /mnt/sl64
mount -o loop /root/SL-64-x86_64-2013-03-21-Everything-DVD1.iso /mnt/sl64
 
## Then use the cobbler import command. Note: you must specify an architecture with the --arch flag, or else the import will fail ##

cobbler import --path=/mnt/sl64 --name=sl_64 --arch=x86_64

To mirror from the web (cluster computing class Option 1):

cobbler import --path=http://192.168.1.5/cobbler/ks_mirror/sl_64 --name=sl_64  (for the Cluster Computing class this IP is correct; others can use their favorite mirror)

This should take a couple of minutes.

To mirror from the web (cluster computing class Option 2, if option 1 does not work):

cobbler import --path=rsync://hcc-mirror.unl.edu/scientific-linux/6.4/x86_64/os/ --name=sl_64

This should also just take a couple of minutes.
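Whichever import method you used, it is worth confirming that the distro landed before building a profile on it (names follow the --name and --arch used above; cobbler appends the arch to the name):

```shell
# List imported distros; expect to see sl_64-x86_64
cobbler distro list

# Full details, including the kernel and initrd paths cobbler discovered
cobbler distro report --name=sl_64-x86_64
```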


Now create a profile to utilize this distro. Multiple profiles can use the same distro in different ways; the difference comes in the kickstart file associated with the profile. To begin, create a profile with the following command.

cobbler profile add --name=workernodes --distro=sl_64-x86_64


Now a tricky part. You need to create a kickstart file for this profile to use as the instruction set for the worker to do its install.

Cobbler ships with a sample kickstart file. It may be found in

/var/lib/cobbler/kickstarts/sample.ks

What's special about this file is that it uses specialized snippets so that it may be 'controlled' through the cobbler settings. You may wish to edit a few things in the file, but overall everything should be ready to go out of the box.

Associate your constructed kickstart file with the worker's profile (in this example, we use the sample.ks file):

cobbler profile edit --name=workernodes --kickstart=/var/lib/cobbler/kickstarts/sample.ks
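A quick way to confirm the association took effect:

```shell
# Show the profile; the 'Kickstart' field should point at sample.ks
cobbler profile report --name=workernodes
```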

Nameservers and DHCP

The Cobbler server can handle the duties of DHCP and DNS but these things need to be configured within the scope of Cobbler. In a typical DHCP configuration on linux there is an /etc/dhcpd.conf or /etc/dhcp/dhcpd.conf file that configures the network topology. If the DHCP service is enabled in Cobbler, which is desirable in our case, Cobbler will overwrite the system dhcpd.conf files from its template. Node definition will be done automatically by Cobbler but you may wish to add nameservers to be sent to the node when it gets its IP address from DHCP.

In order to enable DHCP and DNS management in Cobbler, you must edit the /etc/cobbler/settings file and set manage_dhcp as well as manage_dns to 1

Example:

/etc/cobbler/settings
# set to 1 to enable Cobbler's DHCP management features.
# the choice of DHCP management engine is in /etc/cobbler/modules.conf
manage_dhcp: 1
 
# set to 1 to enable Cobbler's DNS management features.
# the choice of DNS management engine is in /etc/cobbler/modules.conf
manage_dns: 1

In order for the DHCP and DNS management to work correctly, there are a few other options in Cobbler's settings that we must change. In the /etc/cobbler/settings file, you must change the IP addresses from the default 127.0.0.1 value to the IP address of the interface exposed to the cluster. Find and change the following options in this file to that IP address:

/etc/cobbler/settings
bind_master: x.x.x.x
server: x.x.x.x
next_server: x.x.x.x

The bind_master value ensures that all PXE-booted and installed servers will communicate with our new BIND server correctly. The server and next_server values are substituted into dhcp.template so that PXE-booted machines will TFTP boot correctly.

There are two more options that need to be set before the BIND server will work correctly: manage_forward_zones and manage_reverse_zones in the settings file. In the manage_forward_zones option, put the domain name that you wish the cluster systems to boot into. You may have multiple zones set here, but that configuration goes beyond this document. The manage_reverse_zones setting holds the reverse-lookup network portion of the addresses in the setup. In normal BIND settings this would be in CIDR format, but not when managed through cobbler. Again, there may be multiple entries here, but that is not covered by this document.

 

For manage_forward_zones, it's fairly simple. You just put the domain you wish to use inside the cluster network in the box.

Ex.

/etc/cobbler/settings
manage_forward_zones: ['example.com']
 
## if you wanted multiple zones, it would look like this ##
 
manage_forward_zones: ['example.com', 'foo.example.com', 'bar.com']

For manage_reverse_zones, it's a little different. You only put in the network part of the address into the box.

Ex.

/etc/cobbler/settings
## if your network is a class C type network (192.168.1.0/24), then you enter it like so ##
 
manage_reverse_zones: ['192.168.1']
 
## if your network is a class B type network (172.16.0.0/16), then you enter it like so ##
 
manage_reverse_zones: ['172.16']

## if your network is a class A type network (10.0.0.0/8), then you enter it like so ##

manage_reverse_zones: ['10']

 

Now, instead of putting your configurations in /etc/dhcpd.conf you will put the configurations for DHCP and DNS inside of dhcp.template and named.template respectively. We won't cover this much here, but further information will be added to Getting DHCP/DNS to work on Cobbler/Puppet setup

 

Ensure DHCP and DNS will start on boot
chkconfig dhcpd on
chkconfig named on
 
cobbler sync
service cobblerd restart

## cobbler will restart these services as needed ## 

By default, BIND and DHCP management are set in modules.conf so you don't need to change anything there. You may change it so instead of BIND and DHCP cobbler will instead use DNSMASQ for both, but that is not covered by this document.

Adding the Cobbler Server

It may not make sense to register the Cobbler server to itself, as it can never kickstart from itself, but Cobbler is managing our DNS; for worker nodes to be able to look up the Cobbler server by name, it has to be registered in Cobbler.

cobbler system add --name=headnode --interface=eth0 --ip-address=y.y.y.y --mac=AA:AA:AA:AA:AA:AA --dns-name=headnode.domain.domain --profile=workernodes

The profile is needed to add the system but which profile you choose will not make a difference since you won't be kickstarting this server via the Cobbler infrastructure.

Adding the First Workernode

The next step can take place either in the web GUI or on the command line. On the cobbler server add a worker to the cobbler service. You will need to know the MAC address of the PXE bootable device on the worker.

The MAC address must be the address of the system you're installing on! It signals cobbler to serve a specific IP address when it sees a request from that MAC. The IP address may be any address within the range you provided in the DHCP server configuration. Once you give a machine an IP address, that IP will be reserved for it.

cobbler system add --name=worker1 --ip-address=x.x.x.x --mac=AA:AA:AA:AA:AA:AA --dns-name worker1.domain.domain --interface=eth0  --name-servers=192.168.1.1 --gateway=192.168.1.1 --profile=workernodes

Attempt to PXE and kickstart the worker node.
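Before powering the node on, it can help to check what cobbler will serve for it and to push the new record out to DHCP, DNS, and TFTP (names as used above):

```shell
# Confirm the system record, its interface details, and the netboot flag
cobbler system report --name=worker1

# Regenerate dhcpd.conf, the BIND zones, and the PXE configs so the
# new system record is actually served
cobbler sync
```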

Adding a YUM Repo

Mirroring a repo is sometimes necessary to manage the worker nodes when they don't have access out to the internet. This takes up a lot of space on the head node, but it is necessary in order to install all the required packages. If the worker nodes do have internet access, or the head node acts as a NAT, then you can instead add repos that act as a link to an external repo without mirroring locally.


You can add a repo from the command line on the cobbler server using the 'repo add' commands to cobbler:

cobbler repo add --arch=x86_64 --breed=yum --keep-updated=Y --mirror=http://mirrors.xmission.com/fedora/epel/6/x86_64/ --name=epel --mirror-locally=0

 


In order for a worker node to enable this repo you need to enable the repo either node by node or in the profile used by all the nodes. To enable this repo to the “workernodes” profile do the following:

cobbler profile edit --name=workernodes --repos="epel"

'cobbler reposync' will sync and mirror the repo to the cobbler server. This takes a considerable amount of space for most 'standard' repos, e.g. 6 GB for EPEL, if you allow it to mirror locally.
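If you do mirror locally, the mirror is populated and refreshed with cobbler reposync; a sketch of a one-off run and a cron entry to keep it current (the schedule is only an example):

```shell
# Pull down (or refresh) just the epel repo defined above
cobbler reposync --only=epel

# Example root crontab entry to refresh all mirrored repos nightly at 02:30,
# retrying failed repos up to 3 times
# 30 2 * * * /usr/bin/cobbler reposync --tries=3 >/dev/null 2>&1
```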

Default Cobbler Password on the Worker Node

The default password on any kickstarted workernode, at this point, is 'cobbler'.
To modify this password in cobbler so that the workers don't get the default, you'll have to add an encrypted password to the kickstart template file (sample.ks).
