Certified Series: Red Hat #5

RHCSA Exam Preparedness: File/Folder Permissions and Collaboration

This post will cover the following Red Hat Certified Systems Administrator (RHCSA) objectives:

  • Create and configure set-GID directories for collaboration
  • Create and manage Access Control Lists (ACLs)
  • Diagnose and correct file permission problems

Please check out Ralph Nyberg’s video series (1, 2, and 3) for a comprehensive lecture on these objectives.

The basic Linux file system security concepts are based on owner, group, and other. Permissions associated with these concepts include read, write, and execute. These permissions can also be represented numerically:

  • read (4): open file / list directory
  • write (2): change file / delete directory
  • execute (1): run file / enter directory
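As a quick sanity check on a throwaway file, the numeric mode is the per-class sum of these bits, so 664 grants rw- to owner and group and r-- to other:

```shell
# 6 = 4+2 (read+write); the final 4 is read-only; 664 → rw-rw-r--
touch demo.txt
chmod 664 demo.txt
stat -c '%a %A' demo.txt   # → 664 -rw-rw-r--
```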

Some useful tools and examples to change permissions and ownership of files:

chmod 664 <target>
chmod {ugo}{+-=}{rwx} <target>
chown user:group <target>
chgrp group <target>

Make sure the marketing group has access to the marketing directory:

mkdir /marketing
useradd Nate
useradd Eric
groupadd marketing
usermod -aG marketing Nate
usermod -aG marketing Eric
ls -ld /marketing
chown :marketing /marketing
chmod 770 /marketing

This still falls short of the collaboration objective. At this point, files created by either Nate or Eric cannot be edited by the other; they can only remove each other’s files.

Additional SUID, SGID, and sticky bit permissions govern whether files execute with the owner’s or group’s identity, and how files created under a directory inherit its group:

  • SUID 4 u+s: run as owner / -
  • SGID 2 g+s: run as group / new files inherit the parent directory’s group
  • sticky 1 o+t: - / only root and the file owner can delete files

Set the directory permissions for collaboration by adding the SGID and sticky bits:

ls -ld /marketing
chmod 3770 /marketing
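The leading 3 in 3770 is itself a sum of the special bits (2 for SGID plus 1 for sticky); a quick sketch on a scratch directory shows the resulting mode:

```shell
mkdir -p shared
chmod 3770 shared     # 3 = 2 (SGID) + 1 (sticky); 770 = rwxrwx---
stat -c %a shared     # → 3770
ls -ld shared         # group 'x' shows as 's'; the sticky bit on other shows as 'T'
```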

Additional security controls are available by using Access Control Lists or ACLs.

Wendy needs read access to the marketing directory. Michael needs read and write access to the marketing directory. And, the Directors group needs read access to the marketing directory:

useradd Wendy
useradd Michael
groupadd Directors
useradd Director
usermod -aG Directors Director

View the default ACL for the marketing directory:

getfacl /marketing

Make ACL changes with the setfacl command:

setfacl -m user:Wendy:rx,u:Michael:rwx,g:Directors:rx /marketing

ls can identify the use of an ACL, indicated by a “+” symbol at the end of the permission bits:

ls -ld /marketing
drwxrws--T+ root marketing

These permissions still need to be inherited by files created in the marketing directory, which is done by setting the default ACL:

setfacl -m d:u:Wendy:rx,d:u:Michael:rwx,d:g:Directors:rx /marketing

Subnet Scan

Classless solutions to sweep a network

I have come across a number of bash scripts and one-liners that scan for all of the hosts on a Class C subnet. What is rarer is a script that can scan for hosts on a classless subnet.

Nmap is the usual tool for such a task, and it can also gather additional information about the hosts on a subnet (OS, services, and open ports). The downside is the time some of these scan options can take, and aggressive scans may trip security alerts you would rather avoid.

Scan a specified subnet for active hosts (newer Nmap releases use -sn in place of the deprecated -sP):

nmap -sP 192.168.1.0/24

A ping sweep through a range of hosts can accomplish the same thing, either by pinging a broadcast address or by using a bash script to loop ping over a sequence of addresses:

#!/bin/bash
# example: ./pscan.sh 192.168.1
for ip in $(seq 254); do
  ping -c 1 $1.$ip | grep 'ttl'
done

The above is a quick way of pinging hosts on a Class C subnet, but the output will be out of order with unneeded characters. I wasn’t able to find a way to place the ping responses in order directly, so I created a temporary file to store the IP addresses. The sort function was then able to arrange the IP addresses by each octet and in numeric order:

#!/bin/bash
# example: ./pscan.sh 192.168.1
for ip in $(seq 254); do
  ping -c 1 $1.$ip | grep 'ttl' | cut -d " " -f 4 | cut -d ":" -f 1 >> .pscan.tmp &
done
sleep 2
sort -t . -k1,1n -k2,2n -k3,3n -k4,4n .pscan.tmp
rm .pscan.tmp
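The sort invocation can be sanity-checked on its own with a few sample addresses; the -t . separator and per-octet numeric keys are what keep 10.0.0.5 ahead of the 192.168.1.x hosts:

```shell
printf '192.168.1.10\n192.168.1.2\n10.0.0.5\n' \
  | sort -t . -k1,1n -k2,2n -k3,3n -k4,4n
# → 10.0.0.5
#   192.168.1.2
#   192.168.1.10
```

A plain lexical sort would wrongly place 192.168.1.10 before 192.168.1.2.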

I still needed a way to scan a /23 or /9 subnet with ping. Using a simple sequence function like above just wouldn’t suffice for multiple octets of variability. Enter dotted_quad_to_integer() and integer_to_dotted_quad() functions. These functions need to have their maths defined, and a deeper explanation can be found at https://www.shellscript.sh/tips/ping. In this case, I only needed to specify the beginning and ending addresses of the range I wished to have scanned. I also added the sort function from above to arrange the addresses for an organized data set:

#!/bin/bash
# example: ./icmp_scan.sh 192.168.0.1 192.168.1.254
dotted_quad_to_integer() {
  IFS="." read a b c d <<< "$1"
  echo $(( (a<<24) + (b<<16) + (c<<8) + d ))
}

integer_to_dotted_quad() {
  local ip=$1
  local a=$((ip>>24&255))
  local b=$((ip>>16&255))
  local c=$((ip>>8&255))
  local d=$((ip&255))
  echo "${a}.${b}.${c}.${d}"
}

start=$(dotted_quad_to_integer $1)
end=$(dotted_quad_to_integer $2)
for ip in $(seq $start $end); do
  ( ping -c1 -w1 $(integer_to_dotted_quad $ip) > /dev/null 2>&1 \
    && integer_to_dotted_quad $ip ) >> .pscan.tmp &
done
wait
sort -t . -k1,1n -k2,2n -k3,3n -k4,4n .pscan.tmp
rm .pscan.tmp
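As a quick check of the conversion math (function bodies as in the script above), 192.168.1.1 becomes (192<<24)+(168<<16)+(1<<8)+1 = 3232235777, and converting back recovers the dotted quad:

```shell
dotted_quad_to_integer() {
  IFS="." read a b c d <<< "$1"
  echo $(( (a<<24) + (b<<16) + (c<<8) + d ))
}
integer_to_dotted_quad() {
  local ip=$1
  echo "$((ip>>24&255)).$((ip>>16&255)).$((ip>>8&255)).$((ip&255))"
}
dotted_quad_to_integer 192.168.1.1    # → 3232235777
integer_to_dotted_quad 3232235777     # → 192.168.1.1
```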

Media Manipulation #3

Text Swiss Army Chainsaw: bash

I like to utilize bash to help automate trivial tasks that can add up to the size of mountains. Today, I wanted to replace a space with a comma on every line of a thousand-line .csv file. With my limited exposure to bash text-editing tools, I wasn’t able to accomplish this with a single magical sed or awk command, so I had to break it down into smaller tasks.

My objective was to extract specified columns of customer data, and I needed to split the address field so the house number and street landed in their own respective columns. I had to learn this so I could organize the spreadsheet by street name.

If you need to sort large .csv files or other delimited data files, this post may benefit you.

I started by removing the first line of the customer file with sed. This will prevent the header from interfering with field and column editing. Output was placed into a new file to preserve the original data:

sed '1 d' customer.csv > cus_data.csv

I was able to organize the data by sorting and removing all duplicate lines by piping sort into uniq:

sort cus_data.csv | uniq -u > edit1.csv
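One subtlety worth knowing: uniq -u drops every line that appears more than once, while sort -u instead keeps one copy of each. A tiny demo:

```shell
printf 'b\na\nb\nc\n' > t.txt
sort t.txt | uniq -u   # → a, c  ('b' is gone entirely)
sort -u t.txt          # → a, b, c
```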

I wanted to extract the necessary columns: customer name, phone number, latitude, and longitude. I did this using cut, utilizing the -d option to specify commas as the delimiter:

cut -d , -f 2-3,5-6 edit1.csv > edit2.csv

I also extracted the address column to its own file to allow me to focus edits on this field:

cut -d , -f 4 edit1.csv > edit3.csv

Then, the first space on each line was replaced with a comma using sed. This is the space after the house number and before the street name. You may want to practice this one without the -i option before committing to the change:

sed -i 's/ /,/' edit3.csv
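By default, sed’s s command replaces only the first match on each line, which is exactly what splitting off the house number needs. A demo on a hypothetical address:

```shell
printf '123 Main St\n' | sed 's/ /,/'
# → 123,Main St
```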

The new columns were ready for compilation into the final file. I did this with paste, again with the -d option to insert a comma delimiter between the pasted columns:

paste -d , edit2.csv edit3.csv > streets.csv
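The whole sequence can be rehearsed on a couple of hypothetical rows (made-up names and fields, not the real customer file) before touching real data:

```shell
printf 'id,name,phone,address,lat,lon\n' > customer.csv
printf '1,Ann,555-0100,12 Oak St,40.1,-75.2\n' >> customer.csv
printf '2,Bob,555-0101,7 Elm Ave,40.2,-75.3\n' >> customer.csv
sed '1 d' customer.csv > cus_data.csv          # drop the header row
cut -d , -f 2-3,5-6 cus_data.csv > edit2.csv   # name, phone, lat, lon
cut -d , -f 4 cus_data.csv > edit3.csv         # address only
sed -i 's/ /,/' edit3.csv                      # split house number from street
paste -d , edit2.csv edit3.csv
# → Ann,555-0100,40.1,-75.2,12,Oak St
#   Bob,555-0101,40.2,-75.3,7,Elm Ave
```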

Some other useful tools include awk and wc for gathering file metrics. For example, awk can count the columns of a .csv file, and wc can count the rows:

awk -F , '{print NF}' streets.csv
cat streets.csv | wc -l
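For instance, counting fields on a single sample row (NF is evaluated per line, so a consistent file prints the same number for every row):

```shell
printf 'a,b,c,d\n' | awk -F , '{print NF}'
# → 4
```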

Certified Series: Red Hat #4

RHCSA Exam Preparedness: IPv4 and IPv6 Network Configuration

The primary objectives involved with network configuration are:

  • Configure IPv4 and IPv6 addresses
  • Configure hostname resolution
  • Configure network services to start automatically at boot

We will set IPv4 and IPv6 addresses using nmcli. A nice overview is available via the Red Hat technology promotion training. Additional IPv6 information can be found here.

View current network interface configurations:

nmcli dev status
nmcli dev show
nmcli con show
nmcli con show --active

Set a static IPv4 address by creating a connection on network interface enp2s0 named rhcsa1 (the addresses below are examples; substitute your own):

nmcli con add con-name rhcsa1 type ethernet ifname enp2s0
nmcli con mod rhcsa1 ipv4.addresses 192.168.1.50/24
nmcli con mod rhcsa1 ipv4.method manual

Set the gateway and DNS server (again, example addresses):

nmcli con mod rhcsa1 ipv4.gateway 192.168.1.1
nmcli con mod rhcsa1 ipv4.dns 192.168.1.1

Add an additional DNS server:

nmcli con mod rhcsa1 +ipv4.dns 8.8.8.8

Enable the interface for the changes to be applied:

nmcli con up rhcsa1
nmcli con show
nmcli con show rhcsa1

Set a static IPv6 address on the connection rhcsa1:

nmcli con mod rhcsa1 ipv6.addresses "2b22:2d0:6333:5::22f3/64" ipv6.gateway "2b22:2d0:6333:5::22f0"
nmcli con mod rhcsa1 ipv6.method manual
nmcli con up rhcsa1
nmcli con show rhcsa1

On second thought, change the IPv6 method to auto so the address is obtained automatically (SLAAC, or DHCPv6 if the router advertises it):

nmcli con mod rhcsa1 ipv6.method auto
nmcli con up rhcsa1

Make sure the connection is set to autoconnect (a single connection-level property covers both IPv4 and IPv6):

nmcli con mod rhcsa1 connection.autoconnect yes

Verify connectivity and configuration after a reboot:

ping -c3 www.redhat.com
ping6 -c3 www.redhat.com
nmcli dev status
nmcli dev show
nmcli con show
nmcli con show rhcsa1
cat /etc/sysconfig/network-scripts/ifcfg-enp2s0

Set the hostname using hostnamectl:

hostnamectl set-hostname rhcsa.home.lab

Verify that the “Static hostname” property has changed as intended:

hostnamectl status

Configure network services to start automatically at boot. Use systemctl to configure these services. The network target groups all of the required network services; on RHEL 7, the service that actually manages connections is NetworkManager:

systemctl list-unit-files
systemctl enable NetworkManager
systemctl status network.target

After rebooting, verify all of the above settings to make sure the changes are persistent.

Happy Belated Backup

And it’s Friday!!

According to Wendell, it was World Backup Day a week or two ago. So, I recreated my file server. I also made sure this site was backed up. This involved the web site files, as well as the WordPress database. Let us pwn this task with these Linuxy skillz we have haquired.

Start by creating the backup directories. Make sure the administrator account that will create files in these directories can also read the web site files and dump the MySQL database:

mkdir -p /backup/wordpress
mkdir -p /backup/mysql
chown admin /backup/wordpress
chown admin /backup/mysql

Compress the WordPress web site files, and place them in the wordpress directory created above. Include the date to help identify when the backup was created:

tar -czf /backup/wordpress/wordpress-`date '+%m%d%y'`.tar.gz /var/www/wordpress
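The embedded date command expands to a MMDDYY stamp, so each archive name records its creation day; a quick sketch of the expansion (the name shown is an example):

```shell
name="wordpress-$(date '+%m%d%y').tar.gz"
echo "$name"    # e.g. wordpress-041020.tar.gz on April 10, 2020
```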

Do the thingy with the database too! Dump it using the database, user, password, and le’ pipe’ to the zippening with dates yo:

mysqldump wordpressdatabase -u wordpressuser -pmysqlpassword | gzip > /backup/mysql/my-`date '+%m%d%y'`.sql.gz

Now, we only need to pull the backups down to another computer for preservation:

scp masterblaster@abstractrepresentation.online:/backup/wordpress/wordpress-041020.tar.gz .
scp masterblaster@abstractrepresentation.online:/backup/mysql/my-041020.sql.gz .

Stay tuned gang. Next backup related subject will help us automate this whole process. Believe it!

In Isolation

Red Hat training resources during social distancing

The Ask Noah Show 175 shared a link to a series of training courses offered by Red Hat during the COVID-19 pandemic.

I am still tremendously busy with work and RHEL 7 training, but I may have to review a couple of these courses. I am particularly interested in the Ansible courses for review and comprehensive demos of Roles and Tower.

Please, stay safe and isolated. Nurture your skills, and make the best of these strange times.

Certified Series: Red Hat #3

RHCSA Exam Preparedness: Disk Management

This is a huge subject including partitions, physical and logical volumes, as well as file systems. Please follow the video series ( 1, 2, 3, 4) by Ralph Nyberg for a comprehensive “lecture” on these Red Hat Certified Systems Administrator certification exam objectives:

  • List, create, and delete partitions on MBR and GPT disks
  • Add new partitions and logical volumes and swap to a system non-destructively
  • Create, mount, unmount, and use vfat, ext4, and xfs file systems
  • Create and remove physical and logical volumes assigned to volume groups
  • Extend existing logical volumes

(Virtual Machine Manager and KVM were used for lab purposes. Additional drive space was added to the virtual machine by adding storage hardware within Virtual Machine Manager.)


Master Boot Record (MBR)

  • 2 TB maximum partition size
  • Allows up to 4 primary partitions, one of which may be an extended partition
  • Additional logical partitions/volumes can be created within an extended partition
  • fdisk can be used to edit MBR partitions

List MBR partitions:

fdisk -l

Enter a device such as sdb:

fdisk /dev/sdb

List options with help:

m

Add a new partition:

n

Select primary (p) or extended (e):

p

Enter the partition number:

1 (default)

Enter the first sector:

2048 (default)

Enter the last sector, +sectors or +size{K,M,G}:

+488M

View and verify the created partition with print:

p

Rinse and repeat the process to create two more primary partitions:

1001472 (default)
2000896 (default)

At this point, no more primary partitions are available. An extended partition must now be utilized to accommodate additional logical partitions/volumes:

3000320 (default)
3002368 (default)

The last partition needs to be converted to swap space by changing the fifth partition’s type:

t
5
82 (Linux swap)

Finally, write the partition table to disk, and exit the fdisk utility:

w

It is recommended to run partprobe after exiting fdisk or gdisk:

partprobe

View the partition blocks to verify the changes:

cat /proc/partitions


GUID Partition Table (GPT)

  • No 2 TB partition size limit, and no primary/extended partition distinction
  • gdisk can be used to edit GPT partitions

Enter the designated device sdc:

gdisk /dev/sdc

Enter ? for help:

?

Create a new partition:

n

Enter the partition number:

enter (default is 1)

Enter the first sector:

enter (default is 2048)

Enter the last sector, or enter the size in {KMGTP}:

+1G

Enter the hex code for the Linux Filesystem. Use L to show the code options:

enter (default is 8300 for the Linux filesystem)

Use p to verify the creation of the GPT partition:

p

Create two other Linux filesystem partitions:

enter (default is 2099200)
+1G
enter (default is 8300 for the Linux filesystem)
enter (default is 4196352)
+1G
enter (default is 8300 for the Linux filesystem)

Create a smaller Linux filesystem partition to be changed to a swap partition:

enter (default is 6293504)
+512M
enter (default is 8300 for the Linux filesystem)

Change the fourth partition to type Linux swap:

t
4
8200 (Linux swap)

After the changes have been verified, write the table to disk and exit:

w

Run partprobe and verify the partition table changes:

partprobe
cat /proc/partitions

Create filesystems on partitions

List the filesystem types that can be created using tab completion:

mkfs -t <Tab><Tab>

Create an xfs filesystem on sdb1:

mkfs.xfs /dev/sdb1

Mount the sdb1 filesystem by first creating a mount point:

mkdir /entfs1
mount /dev/sdb1 /entfs1/

Verify that the partition mounted successfully using df with the type and human-readable options:

df -Th

Dismount the filesystem prior to making persistent changes to the fstab file:

umount /dev/sdb1

These changes need to be made in /etc/fstab to persist across restarts. Add the following to the /etc/fstab file:

# device     mount point   filesystem   options   dump  fsck
/dev/sdb1    /entfs1       xfs          defaults  0     0

Vim users can import the UUID into the fstab file by running blkid from inside vim. The UUID can be copied and pasted, or the output of blkid can be read directly into the vim session with the :r command:

:r!blkid

Delete the unnecessary lines and information so that the fstab line looks as follows, utilizing the UUID instead of the device path:

UUID="62285499-234d-342-32223141" /entfs1 xfs defaults 0 0

Test the changes made to the fstab file using the -a and -v options with the mount command, and look for “successfully mounted”:

mount -av

Let’s make a few more directories to mount more partitions on:

mkdir /entfs{2..6}
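Brace expansion generates the whole sequence of names in one go; a sketch against a scratch directory (so as not to touch /):

```shell
mkdir -p /tmp/demo/entfs{2..6}
ls /tmp/demo   # lists entfs2 through entfs6
```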

Create an ext4 filesystem on sdb2, mount the partition under /entfs2, and verify the changes:

mkfs.ext4 /dev/sdb2
mount /dev/sdb2 /entfs2/
df -Th

Dismount the filesystem and partition prior to making the change persistent by editing the fstab file:

umount /dev/sdb2

Edit /etc/fstab by adding a line to mount sdb2 persistently by UUID:

vim /etc/fstab
:r!blkid (dd and delete until the UUID remains)

The line should appear as:

UUID=000ea07e-672d-44e7-948b-f15356d0cebe /entfs2 ext4 defaults 0 0

Verify the changes:

mount -av
df -Th
fdisk -l /dev/sdb

Create a swap partition with the space on sdb5 and verify its creation:

mkswap /dev/sdb5
swapon -s
swapon /dev/sdb5
swapon -s

Define the swap partition in fstab for persistence:

swapoff /dev/sdb5
vi /etc/fstab

Try using the label method to define the swap partition:

/dev/sdb5  swap  swap  defaults  0 0

After saving the file, test the swap entry in the fstab file:

swapon -a
swapon -s

Verify the changes were a success with the -a option added to mount:

mount -av

Run the verification again after a reboot for peace of mind.

Logical Volume Manager (LVM)

LVM is able to group physical partitions (physical volumes) into a volume group, from which logical volumes are allocated.

Dismount any mountpoints on sdb or sdc:

umount /entfs{1..2}

Create LVM partitions on sdb1 and sdb2 with fdisk, and create LVM partitions on sdc1 and sdc2 with gdisk:

fdisk /dev/sdb
gdisk /dev/sdc

Make these LVM partitions available to the Logical Volume Manager with the pv utility:

pvcreate /dev/sdb1
pvcreate /dev/sdb2
pvdisplay /dev/sdb1
pvdisplay /dev/sdb2
pvcreate /dev/sdc1
pvcreate /dev/sdc2
pvdisplay /dev/sdc1
pvdisplay /dev/sdc2

Create the volume group using the vg tool:

vgcreate vg00 /dev/sdb1

View the volume group with vgdisplay, and compare the two partitions with pvdisplay:

vgdisplay vg00
pvdisplay /dev/sdb1
pvdisplay /dev/sdb2

Extend the volume group to include sdb2, sdc1, and sdc2:

vgextend vg00 /dev/sdb2
vgextend vg00 /dev/sdc1
vgextend vg00 /dev/sdc2
vgdisplay vg00
pvdisplay /dev/sdb2
pvdisplay /dev/sdc1
pvdisplay /dev/sdc2

Create logical volumes with the volume group vg00 named lv1 and lv2:

lvcreate -L 256M -n lv1 vg00
lvcreate -L 256M -n lv2 vg00

Place filesystems on the logical volumes lv1 and lv2:

mkfs.ext4 /dev/mapper/vg00-lv1
mkfs.xfs /dev/mapper/vg00-lv2

Mount the logical volumes on a pair of directories created earlier:

mount /dev/mapper/vg00-lv1 /entfs3
mount /dev/mapper/vg00-lv2 /entfs4

Verify that the volumes mounted as anticipated:

df -Th

Dismount the volumes and make these changes persistent in the fstab file. Also update the fstab file for our most recent tasks:

umount /entfs{3..4}
vi /etc/fstab

The lines should look similar to the following:

/dev/mapper/vg00-lv1  /entfs3  ext4  defaults  0 0
/dev/mapper/vg00-lv2  /entfs4  xfs   defaults  0 0

Verify the entries mount successfully by restarting or by running the following:

mount -av
df -Th

We are almost done. Another task to complete is expanding the logical volumes. Let’s start by expanding lv1:

lvextend -L 1G /dev/vg00/lv1

The ext4 filesystem also needs to be expanded to reflect this change:

resize2fs /dev/vg00/lv1

Lastly, let us expand logical volume lv2, and grow the xfs filesystem (xfs_growfs operates on the mounted filesystem):

lvextend -L 1G /dev/vg00/lv2
xfs_growfs /entfs4

Certified Series: Red Hat #2

RHCSA Exam Preparedness: Recover Root

This is another lesson based on a video by Ralph Nyberg.

Root Recovery may be a necessary step to get into a Red Hat Enterprise Linux system in order to advance to the rest of the exam requirements. There are some caveats due to SELinux that add a bit of complexity to the process.

The objective this topic covers is:

  • Interrupt the boot process in order to gain access to a system

The basic concept is to interrupt the boot process and add an argument to the grub boot menu entry. Use the down arrow key to stop the grub timeout, and use ‘e’ to edit the selected entry. On x86_64 BIOS-based systems the line starts with linux16; on UEFI-based systems it starts with linuxefi. Ctrl+e or the End key jumps the cursor to the end of the line.

At the end of the line, the following must be added:

rd.break

Use Ctrl+x to start the system with the added argument. This will allow logging into the device as root with a read only file system. So, the file system will then need to be remounted with read and write capabilities:

mount -o remount,rw /sysroot

Change root into /sysroot, and reset the root password:

chroot /sysroot
passwd root

At this point, the issue is that the SELinux file context of /etc/shadow has changed, and if the system were rebooted now it would not complete the boot process. Because the rd.break method bypasses SELinux, an auto-relabel file must be created to relabel the SELinux file contexts of all files on the system the next time it starts:

touch /.autorelabel

The system can now be rebooted to verify that the root password has been updated and that the file context auto-relabel was a success. The auto-relabel process will take a few moments to complete, as it relabels every file context during the boot process. Also note that encrypted disks still need to be accessed during the process of regaining root access, and the encryption password is required for system access.

Let’s Verify our TLS

Let’s Encrypt Bug Secure Website verification

Let’s Encrypt may need to reissue your website’s TLS/SSL certificate. A statement by Let’s Encrypt can help explain the bug. So, how do we know if our sites are going to have their certificates revoked?

The easy way is to run your domain name through Let’s Encrypt’s check-host tool. After running the query, abstractrepresentation.online returned the following:

The certificate currently available on abstractrepresentation.online is OK. It is not one of the certificates affected by the Let's Encrypt CAA rechecking problem. Its serial number is 03a802bd280f4186861abb3c74f57dfe2405

Certified Series: Red Hat #1

RHCSA Exam Preparedness: SELinux

I want to start this series on Red Hat Certified Systems Administrator preparations with a sometimes neglected security system: SELinux. Regardless of the controversy of this subject, it is a required component of the RHCSA and RHCE examinations. Let us tear it up.

A large part of this blog effort is to empower readers and to share resources I have accumulated around the web. This post is based on a video published by Ralph Nyberg, so please follow the hypertext link to his video on SELinux. A couple of very important resources required for exam preparations are the Red Hat Certification Objectives as well as Red Hat’s documentation on the Red Hat Enterprise Linux operating system.

The objectives related to SELinux are:

  • Set enforcing and permissive modes for SELinux
  • List and identify SELinux file and process context
  • Restore default file contexts
  • Use boolean settings to modify system SELinux settings
  • Diagnose and address routine SELinux policy violations

The SELinux configuration file is located at /etc/selinux/config.

View the status of SELinux:

sestatus

Display the mode SELinux is running in:

getenforce

Set enforcing mode:

setenforce 1

Set permissive mode:

setenforce 0

Display a file’s SELinux context using ls with the -Z option:

ls -Z

Display a service’s SELinux context using ps with the -Z option:

ps -Z

Use semanage to edit file and service SELinux contexts persistently. Add a file context for everything under /web, and run restorecon on the directory recursively:

semanage fcontext -a -t httpd_sys_content_t "/web(/.*)?"
restorecon -R -v /web

Add file context for everything under a specified samba share:

semanage fcontext -a -t samba_share_t "/entlinux(/.*)?"
restorecon -R /entlinux
ls -Zd /entlinux

Show all SELinux booleans, filter the list, and show booleans with their descriptions:

getsebool -a
getsebool -a | grep ftp
semanage boolean -l
semanage boolean -l | grep ftp

Set persistent boolean changes with setsebool by adding the -P option. For example, allow ftp users to access their home directories, and verify the change afterward:

setsebool -P ftp_home_dir on
getsebool -a | grep ftp

Search the SELinux audit log for avc events:

ausearch -m avc

View the audit report of avc events:

aureport -a

View messages in the main system log: grep for sealert entries in /var/log/messages and take note of the sealert ID. Pass that ID to the sealert command to display the issue along with suggestions for fixing it:

grep sealert /var/log/messages
sealert -l c45688460-22234-2jk3-lkasjkdk | less