Top 10 Open Source Tools for Linux Systems Administrators

Linux systems administrators need a range of tools to keep their systems running smoothly and at peak efficiency. Here is a set of ten tools that will help, whether you are a newbie or a veteran looking for a refresher.

By Kshitij Upadhyay. The author is RHCSA and RHCE certified, and loves to write about new technologies. He can be reached at upadhyayk04@gmail.com.

A systems administrator’s job is complex, covering responsibilities that range from managing systems to intrusion detection. Thankfully, the world of open source software provides a comprehensive set of tools to simplify admin tasks. The following list of ten key open source tools covers all the bases.

1. Cockpit

Cockpit is software developed by Red Hat that provides an interactive, browser based Linux administration interface. Its graphical interface allows beginner system administrators to perform common sysadmin tasks without needing command line skills. In addition to making systems easier to manage for novice administrators, Cockpit also makes system configuration and performance data accessible to them, even if they do not know the command line tools. It is available via the cockpit package in the Red Hat Enterprise Linux 7 Extras repository. You can install Cockpit using the following command:

# yum -y install cockpit

Once installed on a system, Cockpit must be started before it can be accessed across the network, as shown below. Cockpit can be accessed remotely over HTTPS with a web browser by connecting to TCP port 9090. This port must be opened in the system firewall for Cockpit to be reachable remotely; the corresponding firewalld service is named cockpit.

# systemctl start cockpit

# firewall-cmd --add-service=cockpit --permanent

# firewall-cmd --reload

Once a connection is established with the Cockpit web interface, a user must be authenticated in order to gain entry. Authentication is performed against the system’s local OS account database. The dashboard screen in the Cockpit interface provides an overview of the system’s core performance metrics. Metrics are reported on a per-second basis, and allow the administrator to monitor the use of subsystems such as the CPU, memory, network and disks.
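For example, if the server’s hostname were servera.example.com (a hypothetical name used here only for illustration), the interface would be reached by pointing a browser at:

https://servera.example.com:9090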

2. PCP

Red Hat Enterprise Linux 7 includes a program called Performance Co-Pilot, provided by the pcp RPM package. Performance Co-Pilot, or PCP, allows the administrator to collect and query data from various subsystems. It is installed with the pcp package. After installation, the machine will have the pmcd daemon, which is necessary for collecting the subsystem data, as well as various command line tools for querying system performance data.

There are several services that are part of PCP, but the one that collects system performance data locally is pmcd, the Performance Metrics Collector Daemon. This service must be running in order to query performance data with the PCP command line utilities. The pcp package provides a variety of command line utilities to gather and display data on the machine.
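A minimal sequence to get PCP installed and collecting data, using the package and service names mentioned above, might look like this:

# yum -y install pcp

# systemctl start pmcd

# systemctl enable pmcd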

The pmstat command provides information similar to vmstat. pmstat supports options to adjust the interval between collections (-t) and the number of samples (-s).
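For instance, the following command (the interval and sample count are illustrative) collects five samples, two seconds apart:

# pmstat -t 2 -s 5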

The pmatop command provides a top-like output of machine statistics and data. It includes disk I/O and network I/O statistics, as well as the CPU, memory and process information provided by other tools. By default, pmatop updates every 5 seconds.

The pmval command can be used to obtain historical statistics, such as per-CPU idle time at one-minute intervals, from the most recent archive log.
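As a sketch, such a query might look like the following; the archive path is hypothetical and depends on the host name and the date of the pmlogger archive on your system:

# pmval -a /var/log/pcp/pmlogger/serverX/20180101.00.10 -t 1minute kernel.percpu.cpu.idle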

3. Puppet

Puppet allows the systems administrator to write infrastructure as code, using a descriptive language to configure machines instead of individualised and customised scripts. Puppet’s domain-specific language is used to describe the state of a machine, and Puppet can enforce this state. This means that if the administrator mistakenly changes something on the machine, Puppet can return the machine to the desired state. Thus, the Puppet code can be used not only to configure a system initially, but also to keep the state of the system in line with the desired configuration.
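As a minimal, illustrative sketch of what this declarative code looks like (the resource names and values below are examples, not taken from the article), a manifest that enforces the state of a package and a file might read:

package { 'httpd':
  ensure => installed,
}

file { '/etc/motd':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => "Managed by Puppet\n",
}

If someone later removed the package or edited /etc/motd by hand, the next Puppet run would restore both to the state declared above.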

Puppet architecture: Puppet uses a server/client model. The server is called the Puppet master, and it stores the manifests and modules for the clients. The clients are called Puppet nodes and run the Puppet agent software. These nodes normally run a Puppet daemon that connects to the Puppet master. Each node downloads the configuration assigned to it from the Puppet master and applies it if needed.

Configuring a Puppet client: Although Puppet can run in standalone mode, where every Puppet client holds the Puppet modules locally and applies them to the system, most systems administrators find that this tool works best with a centralised Puppet master. The first step in deploying a Puppet client is to install the puppet package.

# yum -y install puppet

Once the puppet package is installed, the Puppet client must be configured with the hostname of the Puppet master. This hostname should be placed in the /etc/puppet/puppet.conf file under the [agent] section, as shown below: open the Puppet configuration file, make the entries shown, and save the file.

# vim /etc/puppet/puppet.conf

[agent]
    server = puppet.demo.example.com

The final step on the Puppet client is to start the Puppet agent service and configure it to run at boot time.

# systemctl start puppet.service

# systemctl enable puppet.service

4. AIDE

System stability is put at risk when configuration files are deleted or modified without authorisation or careful supervision. How can a change to an important file or directory be detected? This problem can be solved by using intrusion detection software to monitor files for changes. The Advanced Intrusion Detection Environment, or AIDE, can be configured to monitor files for a variety of changes, including permission or ownership changes, timestamp changes, and content changes.

To get started with AIDE, install the RPM package that provides the AIDE software. This package carries useful documentation on tuning the software to monitor the specific changes of interest.

# yum -y install aide

Once the software is installed, it needs to be configured. The /etc/aide.conf file is the primary configuration file for AIDE. It has three types of configuration directives: configuration lines, selection lines and macro lines.

Configuration lines take the form param=value. When param is not a built-in AIDE setting, it is a group definition that lists which changes to look for. For example, the following group definition can be found in /etc/aide.conf, which is installed by default:

PERMS = p+i+u+g+acl+selinux

This line defines a group called PERMS that looks for changes in file permissions (p), inode (i), user ownership (u), group ownership (g), ACLs (acl) or SELinux context (selinux). You can define your own groups in the file to monitor whatever properties matter on your systems.

Selection lines define which checks are performed on matching files and directories. The third type of directive is the macro line; macro lines define variables, and have the following syntax:

@@define VAR value
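For illustration, a hypothetical pair of directives that applies the PERMS group to /etc via a macro, and excludes /var/log from checking, might look like this (the directory choices are examples only):

@@define TESTDIR /etc
@@{TESTDIR} PERMS
!/var/log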

Execute the aide --init command to initialise the AIDE database. After the database is created, a file system check can be performed using the aide --check command. This command scans the file system and compares the current state of files with the information in the earlier AIDE database. Any differences that are found will be displayed, showing both the original state of the file and its new condition.
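With the default configuration on RHEL, the freshly built database has to be moved into place before a check can run against it; a typical sequence (the database filenames below are the defaults set in /etc/aide.conf) is:

# aide --init

# cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

# aide --check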

5. Mcelog

An important step in troubleshooting potential hardware issues is knowing exactly which hardware is present in the system. The CPU(s) in a running system can be identified with the lscpu command, as shown in Figure 4.

The dmidecode tool can be used to retrieve information about the physical memory banks, including the type, speed and location of each bank, as shown in Figure 5.

Modern systems can typically keep watch for various hardware failures, alerting an administrator when a hardware fault occurs. While some of these solutions are vendor-specific and require a remote management card, others can be read from the OS in a standard fashion. RHEL 7 provides mcelog for logging hardware faults; it provides a framework for catching and logging machine check exceptions on x86 systems. On supported systems, it can also automatically mark bad areas of RAM so that they will not be used.

Install and enable mcelog as shown below:

# yum -y install mcelog

# systemctl enable mcelog

# systemctl start mcelog

From now on, hardware errors caught by the mcelog daemon will show up in the system journal. Messages can be queried using journalctl -u mcelog.service. If the abrt daemon is installed and active, it will also trigger on various mcelog messages. Alternatively, for administrators who do not wish to run a separate service, a cron job is set up (but commented out) in /etc/cron.hourly/mcelog.cron that will dump events into /var/log/mcelog.
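For example, to review the errors recorded so far (and, if the cron job has been enabled, the events dumped to the log file), something along these lines can be used:

# journalctl -u mcelog.service

# less /var/log/mcelog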

6. Memtest86+

When a physical memory error is suspected, an administrator might want to run an exhaustive memory test. In such cases, the memtest86+ package should be installed. Since testing memory from a live, running system is less than ideal, the memtest86+ package installs a separate boot entry that runs Memtest86+ instead of the regular Linux kernel. The following steps outline how to enable this boot entry (the commands are consolidated after the list):

1. Install the memtest86+ package; this installs the Memtest86+ application into /boot.

2. Run the memtest-setup command. This adds a new template to /etc/grub.d to enable Memtest86+.

3. Update the GRUB 2 boot loader configuration as shown below:

# grub2-mkconfig -o /boot/grub2/grub.cfg
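Taken together, and assuming the package name noted above, the whole sequence looks like this:

# yum -y install memtest86+

# memtest-setup

# grub2-mkconfig -o /boot/grub2/grub.cfg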

7. Nmap

Nmap is an open source port scanner that is provided with the Red Hat Enterprise Linux 7 distribution. It is a tool that administrators use to rapidly scan large networks, but it can also run a more intensive port scan against individual hosts. Nmap uses raw IP packets in novel ways to determine which hosts are available on the network, what services those hosts are offering, what OS they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.

The nmap package provides the nmap executable.

# yum -y install nmap

The example given in Figure 6 shows Nmap scanning the network. The -n option instructs it to display host information numerically, without using DNS. As Nmap discovers each host, it scans the privileged TCP ports looking for services. It displays the MAC address of each host, along with the corresponding network adapter manufacturer.
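A scan of that kind might be launched as follows; the network address here is purely illustrative and should be replaced with your own subnet:

# nmap -n 192.168.122.0/24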

8. Wireshark

Wireshark is an open source, graphical application for capturing, filtering and inspecting network packets. It was formerly called Ethereal but, because of trademark issues, the project’s name was changed. Wireshark can perform promiscuous packet sniffing when the network interface controller supports it. RHEL 7 includes the wireshark-gnome package, which provides Wireshark functionality on a system with X installed.

# yum -y install wireshark-gnome

Once Wireshark is installed, it can be launched by selecting Applications > Internet > Wireshark Network Analyzer from the GNOME desktop. It can also be launched directly from the shell using the following command:

# wireshark

Wireshark can capture network packets. It must be executed by the root user to capture packets, because direct access to the network interface requires root privileges. The Capture option in the top-level menu, shown in Figure 7, permits the user to start and stop packet capture. It also allows the administrator to select the interface on which to capture packets. The ‘any’ option in the interface list matches all of the network interfaces.

Once packet capturing has been stopped, the captured network packets can be written to a file for sharing or later analysis. The File > Save and File > Save As… menu items allow the user to specify the file to save the packets into. Wireshark supports a variety of file formats.

9. Kdump

RHEL offers the Kdump software for capturing kernel crash dumps. It works by using the kexec utility on a running system to boot a secondary Linux kernel without going through a system reset. The Kdump software is installed by default in RHEL 7 through the installation of the kexec-tools package, which provides the files and command line utilities necessary for administering Kdump from the command line.

# yum -y install kexec-tools system-config-kdump

The Kdump crash dump mechanism is provided through the kdump service. Administrators interested in enabling the collection of kernel crash dumps on their systems must ensure that the kdump service is enabled and started on each system.

# systemctl enable kdump

# systemctl start kdump

With the kdump service enabled and started, kernel crash dumps will begin to be generated during system hangs and crashes. The behaviour of kernel crash dump generation and collection can be modified in various ways using the /etc/kdump.conf configuration file.

By default, Kdump captures crash dumps locally, to crash dump files located in subdirectories under the /var/crash path.
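For reference, the directives that control this typically look like the following; these values are the common RHEL 7 defaults and may differ on your system, so check your own /etc/kdump.conf:

path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31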

10. SystemTap

The SystemTap framework allows easy probing and instrumentation of almost any component within the kernel. It provides administrators with a flexible scripting language and library by leveraging the kprobes facility within the Linux kernel. Using kprobes, kernel programmers can attach instrumentation code to the beginning or end of any kernel function. SystemTap scripts specify where to attach probes and what data to collect when a probe fires.

SystemTap requires symbolic naming for instructions within the kernel, so it depends on the following packages, which are not usually found on production systems (installing them pulls in any required dependencies):

1. kernel-debuginfo

2. kernel-devel

3. systemtap

Using stap to run SystemTap scripts: The systemtap package provides a variety of sample scripts that administrators may find useful for gathering data on their systems. The scripts are stored in /usr/share/doc/systemtap-client-*/examples, and are divided into several subdirectories based on the type of information they collect. SystemTap scripts have a .stp extension.

To compile and run these example scripts, or any other SystemTap script for that matter, administrators use the stap command.
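As a sketch, running one of the bundled examples might look like the following; the exact script name and subdirectory are assumptions, so list the examples directory mentioned above to see what is shipped on your system:

# stap -v /usr/share/doc/systemtap-client-*/examples/process/syscalls_by_proc.stp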

Figure 1: Package installation and starting the service

Figure 2: Output of the pmstat and pmval commands

Figure 3: The AIDE configuration file, /etc/aide.conf

Figure 4: The CPUs in a machine can be identified with the lscpu command

Figure 5: The dmidecode command in use

Figure 6: Nmap scanning the ports of another machine

Figure 7: The graphical user interface of Wireshark
