RHCSA Series (1): Obtaining and Maintaining the RHCSA Certification: Laying the Foundation
Alphabetical List of Abbreviations Used in This Article:
ACLs = Access Control Lists
APT = Advanced Package Tool
DNF = Dandified YUM
GNU = GNU's Not Unix
GUI = graphical user interface
KVM = Kernel-based Virtual Machine
LLM = large language model
LUKS = Linux Unified Key Setup
POSIX = Portable Operating System Interface
RHCE = Red Hat Certified Engineer
RHCSA = Red Hat Certified System Administrator
SELinux = Security-Enhanced Linux
SSH = Secure Shell
Introduction
One of my life goals is to obtain the Red Hat Certified System Administrator (RHCSA) certification, especially since I use GNU/Linux every day in my business. The RHCSA certification provides objective evidence that one is a skilled and competent GNU/Linux system administrator.
The RHCSA exam is very difficult, so I know I will need plenty of time to prepare for it. Part of my preparation will be to do research and write articles here to familiarize myself with every aspect of the exam.
I will also set up GNU/Linux virtual machines on my home computer that I will use to practice completing system administration tasks in a timely manner.
This article will be the first one in my RHCSA series. All abbreviations used in this article will be spelled out at the top of the article.
Credits
I used several powerful research tools to compile information for this article:
- I used the Mistral large language model (LLM) running on my local GNU/Linux machine.
- I used HuggingChat, a powerful online AI platform that provides access to many different LLMs. I specifically used the Qwen/Qwen3-235B-A22B model as a research tool for this article.
- I used Core.ac.uk, an online portal that claims to be the world's largest collection of open-access research papers, to find papers discussing GNU/Linux, the Linux kernel, GNU/Linux system administration, and related topics.
The RHCSA Certification: A High Level Overview
The Red Hat Certified System Administrator (RHCSA) certification is a professional qualification offered by Red Hat, Inc., a United States-based provider of open-source software solutions. The RHCSA certification validates an individual's skills in deploying, managing, and troubleshooting Red Hat Enterprise Linux (RHEL) systems in a variety of environments.
The RHCSA exam covers essential system administration tasks such as:
- Installing and configuring RHEL from installation media or system snapshots
- Partitioning storage and managing file systems
- Managing users, groups, and filesystem permissions
- Managing software packages using the Yum/DNF package managers
- Configuring network interfaces, DNS, firewall settings, SELinux, and system time synchronization
- Managing bootloaders, system services, and logging facilities
- Basic system troubleshooting techniques
Individuals who pass the RHCSA exam demonstrate the skills required to maintain a basic Red Hat Enterprise Linux infrastructure, making them valuable assets for organizations running or planning to adopt Red Hat's open-source solutions.
Upon successful completion of the RHCSA certification, individuals can choose to further their education by pursuing the Red Hat Certified Engineer (RHCE) certification, which builds upon the skills acquired in the RHCSA and covers more advanced system administration tasks.
Reviewing the Fundamentals of GNU/Linux Systems Administration
The first step in my journey towards becoming a successful RHCSA is to carefully review everything that I've learned about GNU/Linux over the last few decades, and to see where my strengths and weaknesses lie. Reference 1 proved to be a very good place to start in this review.
What Makes GNU/Linux So Powerful?
Reference 1 makes clear why GNU/Linux is so powerful: it is trusted by large multinational organizations. For example, the reference notes that both Google and Facebook use GNU/Linux internally. As a result, people with proven GNU/Linux skills, such as RHCSAs, are in high demand at organizations that run GNU/Linux.
The Components of the GNU/Linux Computer Operating System
GNU/Linux is a computer operating system that serves the same purpose as Microsoft's Windows and Apple's macOS. Like those operating systems, GNU/Linux comprises the following components: [1]
- the kernel
- user utilities
- server software
- shells
- file systems
- kernel modules
- GUI software
- libraries
- device files
The Linux Kernel
The Linux kernel serves as the central component of the Linux operating system, orchestrating a wide range of critical functions to ensure efficient, secure, and reliable operation.
At its core, it manages system memory, dynamically allocating resources to applications as needed and reclaiming them when no longer in use to optimize performance. It also handles process scheduling and synchronization, efficiently distributing CPU time across available cores to balance the demands of concurrent applications while prioritizing responsiveness for interactive tasks. Additionally, it facilitates inter-process communication and coordination to maintain system stability.
Device management is another key responsibility, with the kernel providing a hardware abstraction layer that enables consistent access to peripherals such as storage devices, network adapters, and input hardware, shielding user applications from low-level complexities.
The kernel supports a variety of file systems, including Ext4, XFS, and NTFS, offering a unified interface for managing data storage and retrieval across diverse storage media.
Networking capabilities are deeply integrated, with the kernel managing network interfaces, routing, firewall configurations, and cross-device communication protocols to enable seamless connectivity.
Security is addressed through mechanisms like access controls, permission frameworks, and user-space isolation techniques, such as namespaces, to safeguard system resources and prevent unauthorized access.
On portable devices, the kernel optimizes power consumption by dynamically adjusting hardware power states, implementing workload-specific optimizations, and coordinating low-power modes to extend battery life.
For specialized applications requiring deterministic timing, certain kernel configurations offer real-time scheduling features that prioritize critical tasks to meet strict time constraints, catering to domains like industrial control systems and multimedia processing.
The modular architecture of the kernel further enhances its flexibility, allowing components to be added or removed dynamically without recompiling the entire system, thus enabling customization for specific hardware or performance needs.
Finally, the kernel exposes system calls as the primary interface between user-space applications and core functionalities, facilitating essential operations such as process management, memory allocation, file handling, and network interactions.
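Even without writing C, this interface can be observed from the shell: every command below ultimately reaches the kernel through system calls, and procfs exposes kernel state as ordinary files. A minimal sketch, assuming a Linux system with `/proc` mounted:

```shell
#!/bin/sh
# Observing the running kernel from user space.
# Each command below talks to the kernel via system calls (open, read,
# write, ...); running a command under strace, if installed, shows them.
set -eu

uname -sr                     # kernel name and release, via the uname(2) syscall
cat /proc/version             # the same information, exposed through procfs
cat /proc/sys/kernel/ostype   # individual kernel parameters appear as virtual files
```

On any Linux machine the first two commands report the same kernel release string from two different angles: one through a dedicated system call, one through the procfs virtual file system.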
This comprehensive management of resources and services underscores the Linux kernel’s pivotal role in maintaining system efficiency, security, and adaptability across diverse computing environments.
User Utilities in GNU/Linux
User utilities in GNU/Linux are essential command-line tools that enable users and administrators to interact with and manage the system efficiently.
These utilities are primarily divided into two major collections: the GNU Core Utilities (coreutils) and util-linux, each providing a wide range of functions for system administration, file manipulation, process control, and more.
The GNU Core Utilities include 102 fundamental commands like ls (list files), cp (copy files), mv (move files), cat (concatenate files), and chmod (modify file permissions), which are critical for daily tasks.
These tools handle basic operations such as text processing (e.g., grep, sed), directory management (e.g., mkdir, rmdir), and data stream manipulation. For example, chmod manages file access controls, ensuring secure file ownership and permissions.
The util-linux package complements coreutils with 107 additional commands, including mount (attach filesystems), umount (detach filesystems), fdisk (partition management), and lsblk (block device listing), which are vital for system maintenance and hardware interaction such as disk management. (Process monitoring tools like ps come from the separate procps-ng package.)
For instance, the reboot and halt commands (provided by systemd on modern distributions) let administrators control system power states, while other systemd tools manage services and the boot process.
Together, these utilities form the backbone of Linux system interaction, enabling tasks ranging from simple file operations to advanced system configuration. They adhere to POSIX standards where possible, ensuring consistency across Unix-like environments.
Users often combine these tools with shell scripting or aliases (e.g., ll for ls -l) to automate workflows. By abstracting complex kernel operations into intuitive commands, user utilities bridge the gap between the Linux kernel and end-user functionality, facilitating efficient system management and customization.
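A few of these utilities can be combined into a quick, self-contained sketch. It runs entirely inside a throwaway temporary directory, so it is safe to try on any system:

```shell
#!/bin/sh
# A minimal sketch of everyday coreutils usage, in a temporary directory.
set -eu
dir=$(mktemp -d)                    # create a scratch directory

printf 'alpha\nbravo\ncharlie\n' > "$dir/words.txt"   # create a small file

grep 'a' "$dir/words.txt" | sed 's/a/A/g'   # filter lines, then transform them

chmod 600 "$dir/words.txt"          # restrict the file to owner read/write
ls -l "$dir/words.txt"              # permissions now show as -rw-------

rm -r "$dir"                        # clean up
```

The pipe in the middle is the classic coreutils pattern: grep selects matching lines and sed rewrites them, with no temporary files needed in between.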
Server Software in GNU/Linux
Server software in GNU/Linux is designed to provide networked services to clients, manage system resources, and facilitate communication across devices. Its primary functions include handling incoming requests (e.g., web pages, files, or database queries), managing user and application access, and ensuring secure, efficient operation of networked systems.
Common types of server software include web servers (e.g., Apache, Nginx), database servers (e.g., MySQL, PostgreSQL), email servers (e.g., Postfix, Sendmail), file servers (e.g., Samba, vsftpd), and authentication servers (e.g., OpenLDAP, Kerberos). These programs typically run in the background as daemons, managed by systemd or other init systems.
Server software in Linux often emphasizes flexibility, security, and scalability. It can enforce access controls, encrypt data in transit (e.g., via SSL/TLS), and integrate with firewall tools (e.g., iptables, firewalld) to protect against unauthorized access. Many server applications are modular, allowing administrators to customize features (e.g., enabling modules in Apache for specific web functionalities). They also support logging and monitoring for troubleshooting and performance optimization.
Additionally, Linux server software is central to hosting virtualization platforms (e.g., KVM, Docker) and cloud infrastructure (e.g., OpenStack, Kubernetes), enabling resource sharing and automation. Tools like SSH (Secure Shell) and remote desktop protocols allow administrators to manage servers remotely, while services like cron and systemd timers automate routine tasks.
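As a sketch of how such a daemon is wired into systemd, here is a minimal unit file. The unit name and binary path are hypothetical examples, not taken from any real package:

```
# /etc/systemd/system/example.service  (hypothetical unit name)

[Unit]
Description=Example background service
After=network.target

[Service]
# Hypothetical daemon binary; systemd starts it and restarts it on failure.
ExecStart=/usr/local/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing such a file, an administrator would typically run `systemctl daemon-reload` followed by `systemctl enable --now example.service` to register and start the daemon.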
Overall, server software in GNU/Linux transforms the operating system into a robust platform for hosting critical services in environments ranging from small networks to enterprise-scale data centers.
Shells in GNU/Linux
Shells in GNU/Linux serve as command-line interpreters that act as an interface between users and the operating system’s kernel.
Their primary function is to accept commands from users or scripts, translate them into system calls that the kernel can execute, and return the results.
Shells enable direct interaction with the system for tasks like file manipulation, process control, and system configuration. They support scripting capabilities, allowing users to automate repetitive tasks through shell scripts that combine commands, loops, conditionals, and variables.
Shells also handle input/output redirection, piping output from one command into another, and managing environment variables that define user settings or system behavior.
Additionally, they provide features like command history, tab completion, job control (e.g., background/foreground processes), and globbing (pattern matching for files).
Common shells include Bash, Zsh, and Dash, each offering unique features while adhering to POSIX standards for compatibility.
By bridging user intent with system functionality, shells streamline both interactive use and automation, making them indispensable for system administration and development workflows.
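A short POSIX sh sketch shows several of these features (variables, redirection, pipes, a loop, and command substitution) working together:

```shell
#!/bin/sh
# A minimal sketch of core shell features, POSIX sh compatible.
set -eu

greeting="hello"                    # variable assignment
f=$(mktemp)                         # command substitution captures output
echo "$greeting world" > "$f"       # output redirection into a file

count=$(wc -w < "$f")               # input redirection plus substitution
echo "word count: $count"

# A loop piped into another command:
for n in 1 2 3; do
    echo "line $n"
done | grep '2'                     # only "line 2" survives the filter

rm "$f"                             # clean up
```

Each construct here works identically in Bash, Zsh, and Dash, which is exactly the portability that POSIX compatibility buys.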
File Systems in GNU/Linux
File systems in GNU/Linux serve as the foundational structure for organizing, storing, and managing data on storage devices.
Their primary role is to provide a hierarchical directory structure starting from the root directory “/”, enabling logical grouping of files, directories, and devices so users and applications can efficiently locate and access data.
They manage how information is stored on physical or virtual media such as SSDs, HDDs, USB drives, or network storage, handling tasks like allocating space for files, tracking free space, and optimizing read/write operations to ensure efficient use of storage resources.
A key function of file systems is handling metadata, which includes details like file permissions (ownership, read/write/execute flags), timestamps, file size, and inode references. This metadata ensures data security, tracks relationships between files, and supports features like symbolic and hard links.
File systems also enforce access controls to restrict unauthorized access, using permission models and advanced security mechanisms such as encryption (e.g., eCryptfs, LUKS) and access control lists (ACLs) to provide granular data protection.
Reliability is another critical aspect, with many Linux file systems employing journaling techniques to log changes before committing them to disk. This prevents data corruption and enables faster recovery after system crashes or power failures.
Linux supports a wide variety of file systems to meet diverse needs, including local file systems like Ext4 (commonly used as the default), XFS (optimized for high performance), and Btrfs (offering snapshots and self-healing capabilities).
Network file systems such as NFS, CIFS/SMB for Windows integration, and SSHFS for secure remote access allow seamless sharing of files across networks. Special-purpose file systems like tmpfs (RAM-based temporary storage), procfs (providing kernel data), and sysfs (exposing hardware information) address specific operational requirements.
The Linux kernel’s Virtual File System (VFS) layer abstracts these diverse file systems, offering a uniform interface for applications to interact with different storage types, whether local disks, USB drives, or cloud storage, without requiring implementation-specific details. This abstraction allows storage devices to be mounted at specific directories (mount points), integrating them into the system’s directory tree for consistent access.
To manage file systems effectively, GNU/Linux provides tools like `mkfs` for creating file systems, `fsck` for checking and repairing issues, `df` for monitoring disk usage, and `du` for analyzing directory sizes.
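The inspection tools above are safe to try directly; a minimal sketch follows, with the destructive commands (`mkfs`, `fsck`) left commented out because they require root and would overwrite data:

```shell
#!/bin/sh
# A read-only sketch of everyday file system inspection tools.
set -eu

df -h /                            # usage of the file system holding /
du -sh /etc 2>/dev/null || true    # total size of a directory tree

f=$(mktemp)
stat "$f"                          # metadata: permissions, owner, timestamps, inode
rm "$f"

# mkfs.ext4 /dev/sdX1   # create an Ext4 file system (DESTRUCTIVE, root only)
# fsck /dev/sdX1        # check/repair an unmounted file system (root only)
```

The `stat` output makes the metadata discussion above concrete: the inode number, permission bits, and three timestamps are all visible for any file.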
Modern file systems also introduce advanced features such as snapshots (for backups and rollbacks), compression, deduplication, and subvolumes, enhancing flexibility and optimization for data management.
In summary, file systems in GNU/Linux ensure data integrity, security, and accessibility while accommodating diverse hardware configurations, performance demands, and user workflows. They act as the bridge between raw storage and the structured environment required by the operating system and applications, enabling efficient and reliable data handling across a wide range of computing scenarios.
Kernel Modules in GNU/Linux
Kernel modules in GNU/Linux are essential components that provide a way to extend the functionality of the operating system’s core kernel without requiring a full reboot.
Their primary purpose is to enable the kernel to dynamically load or unload specific features or drivers as needed, allowing the system to adapt to hardware, software, and configuration changes without rebuilding the entire kernel. This modularity ensures the kernel remains lightweight and efficient while supporting a vast array of devices and capabilities.
One key function of kernel modules is to handle hardware support by loading device drivers dynamically. For example, when a new USB device is connected, the system can load the appropriate module to enable communication with that hardware without requiring all possible drivers to be included in the base kernel. Similarly, modules allow the kernel to support new technologies as they emerge, such as updated network protocols or storage interfaces, by adding or updating modules without modifying the core kernel code.
Kernel modules also provide flexibility in managing file systems. The kernel can load modules to support different file systems like Ext4, Btrfs, NTFS, or FAT32, allowing the system to read and write to diverse storage formats without embedding all file system drivers permanently into the kernel. This modularity extends to networking and security features as well, enabling the kernel to implement protocols like IPv6 or security frameworks like SELinux and AppArmor only when required, reducing unnecessary resource consumption.
By loading modules only when necessary, the kernel conserves system resources. For instance, modules related to unused hardware or features can be unloaded to free up memory and processing power, ensuring the system remains efficient. This dynamic management is handled through tools like `modprobe`, `insmod`, and `rmmod`, which allow administrators to manually or automatically load, unload, and configure modules as needed. Configuration files in `/etc/modprobe.d/` further enable customization of module behavior.
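As a sketch of that customization, a hypothetical drop-in file under `/etc/modprobe.d/` might look like this (the file name is made up, and the module names are examples only):

```
# /etc/modprobe.d/example.conf  (hypothetical file name)

# Pass a parameter to a module whenever it is loaded:
options usbcore autosuspend=5

# Prevent a module from being loaded automatically at boot:
blacklist pcspkr
```

After editing such a file, `modprobe <module>` would load the module with the configured options, and `lsmod` would confirm it is resident in the kernel.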
During the system boot process, critical kernel modules are loaded via an initial RAM disk (initramfs) to ensure the kernel can access essential hardware and file systems required to start the operating system. This ensures that even if the root file system or storage controller requires a module, the system can load it before accessing the main disk.
Kernel modules also simplify troubleshooting and updates. Faulty or outdated modules can be replaced or updated independently of the entire kernel, reducing downtime and complexity during system maintenance. This modular approach allows administrators to tailor the kernel’s functionality to specific use cases, such as enabling virtualization support on a server or optimizing power management for a laptop.
In summary, kernel modules enhance the adaptability, efficiency, and maintainability of GNU/Linux systems. They allow the kernel to remain compact while supporting a wide range of hardware, file systems, and features, ensuring systems can evolve with new technologies and user requirements without requiring frequent kernel recompilation or reboots.
GUI Software in GNU/Linux
GUI software in GNU/Linux provides a visual interface that allows users to interact with the operating system through graphical elements such as windows, icons, menus, and buttons.
Its primary function is to abstract the complexity of the underlying system, making it more accessible and intuitive for users who may not be familiar with command-line tools.
By offering a graphical environment, it simplifies tasks like file management, application launching, system configuration, and resource monitoring, enabling both technical and non-technical users to navigate the system efficiently.
The core components of GUI software in Linux include display servers, window managers, and desktop environments. The display server (typically the X Window System or Wayland) handles low-level interactions with the hardware, such as rendering graphics and processing input from devices like keyboards and mice.
Window managers control the appearance and behavior of windows, including resizing, positioning, and decorations like borders and title bars. Desktop environments (e.g., GNOME, KDE, XFCE) integrate these components with additional tools, such as file managers, system settings panels, and application launchers, to create a cohesive user experience.
GUI software also bridges the gap between the kernel, user utilities, and applications by providing graphical frontends for system tasks. For example, tools like GNOME Disks or KDE Partition Manager offer visual interfaces for managing storage devices, while network managers like NetworkManager simplify wireless and wired connectivity settings. These graphical tools often streamline complex operations (e.g., configuring firewalls or software repositories) into user-friendly workflows, reducing the need for manual command-line input.
Customization is a key aspect of GUI software in GNU/Linux. Desktop environments allow users to tailor the interface to their preferences, including themes, icons, layouts, and extensions. This flexibility caters to diverse workflows, from lightweight environments optimized for older hardware (e.g., LXDE) to feature-rich experiences designed for modern systems (e.g., GNOME or KDE).
Additionally, GUI software often integrates with accessibility tools, ensuring usability for individuals with disabilities through features like screen readers, magnifiers, and high-contrast themes.
While GUI software is optional in Linux (many servers operate without a graphical interface), it plays a critical role in desktop environments by enhancing productivity and ease of use. By combining visual elements with underlying system functionality, it transforms technical operations into intuitive actions, making GNU/Linux a versatile platform for both casual users and advanced administrators.
Libraries in GNU/Linux
Libraries in GNU/Linux serve as repositories of pre-written code that applications can utilize to perform common tasks without requiring developers to write code from scratch.
Their primary function is to promote code reuse, modularity, and efficiency by providing standardized implementations of widely used operations. These include functions for mathematical computations, file handling, network communication, graphical rendering, and system interactions.
Libraries abstract the complexity of interacting with the kernel, hardware, or system resources, allowing applications to perform tasks like reading files, managing memory, or rendering graphics without directly handling low-level details. This abstraction simplifies development and ensures consistency across applications. They also enable dynamic linking, where shared libraries (e.g., .so files) are loaded at runtime, allowing multiple programs to use the same library simultaneously. This reduces memory usage and disk space compared to static linking, where code is embedded directly into executables.
Libraries are critical for dependency management. Applications often rely on specific versions of libraries to function correctly, and package managers (e.g., APT, DNF) ensure these dependencies are installed and maintained. This system prevents redundant code and streamlines software updates. Libraries also support backward compatibility through versioning, allowing newer libraries to maintain interfaces that older applications can still use.
Stored in directories like /lib, /usr/lib, and /usr/local/lib, libraries are managed by tools like `ldconfig`, which configures the system’s library paths and caches for efficient access. Developers link libraries to their programs using compilers like `gcc`, specifying dependencies with `-l` flags (e.g., `-lm` for the math library).
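A quick way to see dynamic linking in action is to ask which shared objects an executable depends on. A minimal sketch, assuming a typical glibc-based Linux system:

```shell
#!/bin/sh
# A minimal sketch of inspecting shared-library usage on a Linux system.

# List the shared objects (.so files) that /bin/sh is dynamically linked
# against; a statically linked shell prints "not a dynamic executable".
ldd /bin/sh || true

# Print the first few entries of the system's shared-library cache
# (may require root or the full path /sbin/ldconfig on some distributions):
ldconfig -p 2>/dev/null | head -n 5
```

Because many programs resolve to the same cached `.so` files, a single copy of libc in memory can serve every running process, which is the space saving dynamic linking provides over static linking.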
Security and stability are enhanced through libraries as well. Vulnerabilities in a library can be patched once, and all applications using it benefit from the fix without needing recompilation. Features like Address Space Layout Randomization (ASLR) further protect against exploits by randomizing memory addresses used by libraries.
In summary, libraries in GNU/Linux streamline software development, optimize resource usage, manage dependencies, and act as intermediaries between applications and the system’s core components, ensuring functionality, security, and maintainability across diverse environments.
Device Files in GNU/Linux
Device files in GNU/Linux serve as an interface between the kernel and user-space programs, enabling applications to interact with hardware devices through standard file operations.
Their primary function is to abstract the complexity of hardware communication, allowing users and programs to access devices (e.g., disks, keyboards, printers, network interfaces) as if they were regular files. This abstraction simplifies system design by leveraging the kernel’s file system framework to manage hardware interactions.
Device files are categorized into two main types: character devices and block devices. Character devices handle data as a continuous stream of bytes (e.g., serial ports, mice, or sound cards), while block devices process data in fixed-size chunks (e.g., hard drives, SSDs, or USB storage). For example, reading from `/dev/sda` (a block device) retrieves data from a physical hard drive, while writing to `/dev/ttyUSB0` (a character device) sends commands to a connected serial device.
These files are created dynamically in the `/dev` directory by the kernel's devtmpfs virtual file system, both at boot and whenever new hardware is detected (e.g., plugging in a USB drive), so the contents of `/dev` track the devices actually present. The udev device manager further enhances this system by generating persistent, descriptive names (e.g., `/dev/disk/by-uuid/...`) and managing permissions, ensuring devices are consistently accessible across reboots or hardware changes.
Each device file is identified by a major number (indicating the kernel driver responsible for the device) and a minor number (specifying a sub-component, such as a partition on a disk). For instance, the first SATA/SCSI disk (`/dev/sda`) typically has major number 8 and minor number 0, while its first partition (`/dev/sda1`) has minor number 1. This numbering system allows the kernel to route read/write operations to the correct hardware driver.
Device files also enforce security and access control through standard file permissions. For example, raw disk access might be restricted to the root user, while a USB printer device could be accessible to members of the `lp` group. This ensures that only authorized users or processes can interact with sensitive hardware.
Beyond traditional hardware, device files also represent virtual or pseudo-devices. For instance, `/dev/null` discards written data, `/dev/zero` provides a stream of zeros, and `/dev/random` generates random numbers. These virtual devices perform specialized functions without direct hardware ties.
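These virtual devices are easy to experiment with from the shell; a minimal sketch using only standard tools:

```shell
#!/bin/sh
# A minimal sketch of interacting with virtual device files.
set -eu

ls -l /dev/null /dev/zero      # the leading 'c' marks them as character devices

echo "discarded" > /dev/null   # writes to /dev/null simply vanish

# Read 16 bytes of zeros from /dev/zero into a temporary file:
f=$(mktemp)
dd if=/dev/zero of="$f" bs=16 count=1 status=none
wc -c < "$f"                   # the file is exactly 16 bytes long
rm "$f"
```

Note that both devices are driven through ordinary file operations (`ls`, `echo`, `dd`), which is precisely the abstraction this section describes.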
In summary, device files in GNU/Linux provide a unified, flexible, and secure way for applications and users to interact with hardware and virtual resources. By abstracting device management into the file system paradigm, they streamline system operations, enable dynamic device handling, and ensure consistent access to both physical and virtual components.
Conclusions
This was a fun and excellent review of the GNU/Linux fundamentals to prepare us for the long journey of becoming extremely capable RHCSAs. Each article in the series will cover topics that build confidence, and help us achieve the goal of passing the RHCSA exam, not by the skin of our teeth, but with a score that demonstrates we've mastered most aspects of GNU/Linux system administration. Thank you so much for reading this article!
References:
[1] 2020 - Lecture - CSCI 275: Linux Systems Administration and Security - Moe Hassan - CUNY John Jay College - NYC Tech-in-Residence Corps. Retrieved June 22, 2025 from https://academicworks.cuny.edu/cgi/viewcontent.cgi?article=1053&context=jj_oers