Linux Commands: The Complete Guide for 2026

Published February 11, 2026 · 45 min read

The Linux command line is the most powerful interface available to a software developer, system administrator, or DevOps engineer. While graphical interfaces limit you to what button designers anticipated, the terminal gives you direct, composable access to everything the operating system can do. In 2026, Linux runs the majority of the world's servers, cloud infrastructure, containers, and embedded devices — and the command line is how you manage all of it.

This guide covers Linux commands from the fundamentals through advanced techniques. Every command includes practical examples you can run immediately. Whether you are opening a terminal for the first time or writing automation scripts that manage hundreds of servers, this is the reference you will keep coming back to.

⚙ Quick tools: Use our Chmod Calculator to generate file permission commands and the Crontab Generator to build cron schedule expressions.

Table of Contents

  1. Getting Started with the Linux Terminal
  2. File and Directory Commands
  3. File Viewing and Inspection
  4. File Permissions and Ownership
  5. Text Processing
  6. System Information
  7. Process Management
  8. Networking Commands
  9. Package Management
  10. User Management
  11. Compression and Archives
  12. Disk Management
  13. System Services
  14. Cron Jobs and Scheduling
  15. SSH and Remote Access
  16. Shell Redirections and Pipes
  17. Environment Variables
  18. Useful Command Combinations and One-Liners
  19. Frequently Asked Questions

1. Getting Started with the Linux Terminal

The terminal (also called the shell, command line, or console) is a text-based interface to your operating system. You type commands, press Enter, and the shell executes them and displays the output. It sounds primitive compared to a graphical interface, but the terminal is actually more efficient for nearly every task once you learn the basics.

Why the Command Line Matters

It comes down to three things. The command line is universal: every Linux server, container, and CI runner has a shell, while most have no graphical interface at all. It is composable: small commands chain into pipelines that solve problems no single tool anticipated. And it is scriptable: anything you type once can become an automation script that runs across hundreds of machines.

Opening a Terminal

On Ubuntu/GNOME, press Ctrl+Alt+T or search for "Terminal" in the application menu. On macOS, open Terminal from Applications/Utilities or press Cmd+Space and type "Terminal". On Windows, install WSL (Windows Subsystem for Linux) to get a full Linux terminal: open PowerShell as administrator and run wsl --install.

The Shell Prompt

When you open a terminal, you see a prompt that typically looks like this:

# Typical bash prompt format:
username@hostname:~$

# What each part means:
# username   - your login name
# @          - separator
# hostname   - your machine's name
# :          - separator
# ~          - current directory (~ means home directory)
# $          - normal user (# means root/superuser)

# Examples:
dev@webserver:~$                  # user "dev" on "webserver", in home dir
root@production:/etc/nginx#       # root user on "production", in /etc/nginx
alice@laptop:/var/log$            # user "alice", in /var/log

Getting Help

# Read the manual page for any command
man ls                  # opens the manual for ls
man grep                # opens the manual for grep
man man                 # manual for the man command itself

# Quick help (shorter than man)
ls --help               # most commands support --help
grep --help

# Search man pages by keyword
man -k "copy files"     # find commands related to copying files
apropos "network"       # same as man -k

# One-line description of a command
whatis ls               # "ls (1) - list directory contents"
whatis grep             # "grep (1) - print lines that match patterns"

# Show which binary will be executed
which python3           # /usr/bin/python3
which nginx             # /usr/sbin/nginx
type ls                 # ls is aliased to 'ls --color=auto'

Keyboard Shortcuts

These shortcuts work in bash and zsh and will save you enormous amounts of time:

# Navigation
Ctrl+A          Move cursor to beginning of line
Ctrl+E          Move cursor to end of line
Ctrl+U          Delete from cursor to beginning of line
Ctrl+K          Delete from cursor to end of line
Ctrl+W          Delete the word before cursor
Alt+B           Move back one word
Alt+F           Move forward one word

# History
Ctrl+R          Search command history (type to filter, Enter to execute)
Ctrl+P / Up     Previous command in history
Ctrl+N / Down   Next command in history
!!              Repeat last command
!$              Last argument of previous command
!grep           Repeat last command starting with "grep"

# Control
Ctrl+C          Cancel current command / kill running process
Ctrl+D          Exit shell (or send EOF)
Ctrl+Z          Suspend current process (resume with fg or bg)
Ctrl+L          Clear the screen (same as clear command)
Tab             Auto-complete command, file, or directory name
Tab Tab         Show all possible completions

2. File and Directory Commands

Everything in Linux is a file. Regular files, directories, devices, sockets, and even processes (in /proc) are represented as files. Mastering file and directory commands is the foundation of everything else.
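You can see this principle directly: the same file-oriented tools work on devices and even on running processes. A quick illustration (Linux-specific, since it reads from /proc and /dev):

```shell
# "Everything is a file" in practice: inspect very different objects
# with the same file-oriented tools.
file /dev/null                # character special device
file /proc/self/status        # live process info, exposed as a text file
ls -l /dev/null               # crw-rw-rw- ... (leading "c" = character device)
head -3 /proc/self/status     # read process metadata like any other file
```

Because everything speaks the file interface, commands like cat, grep, and ls compose with all of it.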

ls — List Directory Contents

# Basic listing
ls                      # list files in current directory
ls /etc                 # list files in /etc
ls /var/log /tmp        # list files in multiple directories

# Common flags
ls -l                   # long format (permissions, owner, size, date)
ls -a                   # show hidden files (starting with .)
ls -la                  # long format + hidden files (most common combo)
ls -lh                  # human-readable file sizes (1K, 234M, 2G)
ls -lt                  # sort by modification time (newest first)
ls -ltr                 # sort by time, reversed (oldest first)
ls -lS                  # sort by file size (largest first)
ls -R                   # recursive (list subdirectories too)
ls -1                   # one file per line
ls -d */                # list only directories

# Reading ls -l output:
# drwxr-xr-x  5 alice staff  160 Feb 11 10:30 projects
# │└───┬───┘  │ │     │      │   │            │
# │    │      │ │     │      │   │            └─ name
# │    │      │ │     │      │   └─ modification date
# │    │      │ │     │      └─ size in bytes
# │    │      │ │     └─ group
# │    │      │ └─ owner
# │    │      └─ link count
# │    └─ permissions (rwx for owner, group, others)
# └─ type (d=directory, -=file, l=symlink)

cd — Change Directory

# Basic navigation
cd /var/log             # go to /var/log (absolute path)
cd projects             # go to projects/ inside current directory (relative)
cd ..                   # go up one directory
cd ../..                # go up two directories
cd ~                    # go to your home directory
cd                      # same as cd ~ (go home)
cd -                    # go to the previous directory (toggle)

# Examples of absolute vs relative paths:
# Absolute: starts from root /
cd /home/alice/projects/webapp

# Relative: starts from current directory
cd projects/webapp      # if you are already in /home/alice

pwd — Print Working Directory

pwd                     # /home/alice/projects
# Always shows the full absolute path of your current location

mkdir — Make Directory

# Create a single directory
mkdir projects

# Create nested directories (parent directories created automatically)
mkdir -p projects/webapp/src/components

# Create multiple directories at once
mkdir -p src tests docs config

# Create with specific permissions
mkdir -m 755 public
mkdir -m 700 private_data

rm — Remove Files and Directories

# Remove a file
rm file.txt

# Remove multiple files
rm file1.txt file2.txt file3.txt

# Remove with confirmation prompt
rm -i important-file.txt        # asks "remove important-file.txt?" y/n

# Remove a directory and all its contents (DANGEROUS)
rm -r old-project/              # recursive removal
rm -rf build/                   # recursive + force (no confirmation)

# SAFETY TIP: Never run rm -rf / or rm -rf * without double-checking
# your current directory with pwd first

# Remove empty directories only
rmdir empty-folder              # fails if directory is not empty

cp — Copy Files and Directories

# Copy a file
cp source.txt destination.txt
cp config.yml config.yml.backup

# Copy to a different directory
cp report.pdf /home/alice/documents/

# Copy a directory recursively
cp -r project/ project-backup/

# Copy preserving attributes (timestamps, permissions, ownership)
cp -a /var/www/html/ /backup/html/

# Copy with progress (for large files)
cp --verbose largefile.iso /mnt/usb/

# Copy only if source is newer than destination
cp -u *.html /var/www/html/

# Interactive mode (ask before overwriting)
cp -i source.txt destination.txt

mv — Move and Rename Files

# Rename a file
mv old-name.txt new-name.txt

# Move a file to a different directory
mv report.pdf /home/alice/documents/

# Move multiple files to a directory
mv *.jpg /home/alice/photos/

# Move a directory
mv old-project/ archive/old-project/

# Interactive mode (ask before overwriting)
mv -i source.txt /destination/

# Do not overwrite existing files
mv -n source.txt /destination/

touch — Create Files and Update Timestamps

# Create an empty file
touch newfile.txt

# Create multiple files at once
touch index.html style.css script.js

# Update the modification timestamp of an existing file
touch existing-file.txt         # sets mtime to now

# Set a specific timestamp
touch -t 202602110930.00 file.txt   # set to Feb 11, 2026 09:30

find — Search for Files

# Find files by name
find /home -name "*.txt"                # find all .txt files under /home
find . -name "config.yml"               # find config.yml in current tree
find . -iname "readme*"                 # case-insensitive search

# Find by type
find . -type f                          # files only
find . -type d                          # directories only
find . -type l                          # symbolic links only

# Find by size
find . -size +100M                      # files larger than 100MB
find . -size -1k                        # files smaller than 1KB
find /var/log -size +50M -type f        # large log files

# Find by time
find . -mtime -7                        # modified in the last 7 days
find . -mtime +30                       # modified more than 30 days ago
find . -mmin -60                        # modified in the last 60 minutes
find . -newer reference.txt             # newer than reference.txt

# Find by permissions
find . -perm 755                        # exactly 755
find . -perm -u+x                       # user has execute permission

# Find and execute commands on results
find . -name "*.log" -delete            # delete all .log files
find . -name "*.tmp" -exec rm {} \;     # same with exec
find . -type f -name "*.js" -exec grep -l "TODO" {} \;  # JS files with TODO
find . -empty -type f -delete           # delete empty files

# Find with multiple conditions
find . -name "*.py" -size +10k -mtime -30   # Python files > 10KB, recent

# Exclude directories
find . -path ./node_modules -prune -o -name "*.js" -print
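Because find's action flags like -delete are irreversible, a safe habit is to run the same expression with -print first and only swap in -delete once the list looks right. A self-contained sketch (the /tmp/find-demo path is just a throwaway directory for the example):

```shell
# Safe cleanup pattern: preview matches, then delete.
mkdir -p /tmp/find-demo
touch /tmp/find-demo/keep.txt /tmp/find-demo/a.tmp /tmp/find-demo/b.tmp

# Step 1: dry run - print what WOULD be deleted
find /tmp/find-demo -name "*.tmp" -type f -print

# Step 2: identical expression with -delete, once the list is verified
find /tmp/find-demo -name "*.tmp" -type f -delete

ls /tmp/find-demo               # only keep.txt remains
rm -rf /tmp/find-demo           # clean up the demo directory
```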

locate — Fast File Search

# Find files by name (uses a pre-built database, very fast)
locate nginx.conf
locate "*.service"

# Update the database (usually runs daily via cron)
sudo updatedb

# Case-insensitive search
locate -i readme

# Count matches instead of listing them
locate -c "*.py"

ln — Create Links

# Create a symbolic link (most common)
ln -s /var/www/html current-site       # current-site -> /var/www/html
sudo ln -s /usr/bin/python3 /usr/bin/python   # system location, needs root

# Create a hard link
ln original.txt hardlink.txt           # both point to same inode

# Key differences:
# Symbolic links: can cross filesystems, can link to directories, break if target deleted
# Hard links: same filesystem only, files only, both names equally valid
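You can verify these differences yourself: hard-linked names share an inode number, and a symlink breaks when its target disappears. A quick demo in a temporary directory:

```shell
# Demonstrate hard link vs symlink behavior in a scratch directory
cd "$(mktemp -d)"
echo "data" > original.txt
ln original.txt hard.txt        # hard link: second name for the same inode
ln -s original.txt soft.txt     # symlink: a pointer to the NAME

ls -i original.txt hard.txt     # same inode number for both names
readlink soft.txt               # original.txt

rm original.txt
cat hard.txt                    # still works: the data has another name
cat soft.txt                    # fails: the target name is gone (dangling link)
```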

3. File Viewing and Inspection

Linux provides specialized commands for viewing files in different ways. Use cat for short files, less for interactive browsing, and head/tail for viewing specific portions.

cat — Concatenate and Display Files

# Display a file
cat /etc/hostname

# Display with line numbers
cat -n script.sh

# Display multiple files concatenated
cat header.html body.html footer.html > page.html

# Show non-printing characters (tabs as ^I, line endings as $)
cat -A file.txt

# Create a file from terminal input (Ctrl+D to finish)
cat > notes.txt
This is line 1
This is line 2
^D

less — Interactive File Viewer

# Open a file for browsing
less /var/log/syslog

# Navigation inside less:
# Space / Page Down    scroll down one page
# b / Page Up          scroll up one page
# g                    go to beginning of file
# G                    go to end of file
# /pattern             search forward for "pattern"
# ?pattern             search backward for "pattern"
# n                    next search match
# N                    previous search match
# q                    quit

# Follow a growing file (like tail -f)
less +F /var/log/syslog

# Open with line numbers
less -N /etc/nginx/nginx.conf

# Open multiple files
less file1.txt file2.txt         # :n for next, :p for previous

head and tail — View File Portions

# First 10 lines (default)
head /var/log/syslog

# First N lines
head -n 20 /var/log/syslog
head -20 /var/log/syslog         # shorthand

# First N bytes
head -c 100 binary-file

# Last 10 lines (default)
tail /var/log/syslog

# Last N lines
tail -n 50 /var/log/syslog
tail -50 /var/log/syslog

# Follow a file in real time (essential for monitoring logs)
tail -f /var/log/nginx/access.log

# Follow and retry if file is recreated (log rotation)
tail -F /var/log/nginx/access.log

# Follow multiple files
tail -f /var/log/nginx/access.log /var/log/nginx/error.log

# Lines starting from line N
tail -n +100 largefile.txt       # everything from line 100 onward

wc — Word Count

# Count lines, words, and bytes
wc file.txt                     # 42  310  1847 file.txt

# Count only lines
wc -l file.txt                  # 42 file.txt
wc -l /var/log/syslog

# Count only words
wc -w essay.txt

# Count only characters
wc -c file.txt                  # byte count
wc -m file.txt                  # character count (differs for UTF-8)

# Count lines in multiple files
wc -l *.py                      # shows count per file + total

# Common pattern: count results from a command
ls | wc -l                      # number of files in current directory
grep -c "error" /var/log/syslog # count lines containing "error"

file — Determine File Type

file image.png                  # image.png: PNG image data, 800 x 600
file script.sh                  # script.sh: Bourne-Again shell script
file /bin/ls                    # /bin/ls: ELF 64-bit LSB executable
file document.pdf               # document.pdf: PDF document, version 1.4
file mystery-file               # useful when extension is missing or wrong

stat — Detailed File Information

stat index.html
#   File: index.html
#   Size: 4096       Blocks: 8          IO Block: 4096   regular file
# Device: 252,1      Inode: 131073      Links: 1
# Access: (0644/-rw-r--r--)  Uid: ( 1000/  alice)   Gid: ( 1000/  staff)
# Access: 2026-02-11 10:30:00.000000000 +0000
# Modify: 2026-02-10 14:22:30.000000000 +0000
# Change: 2026-02-10 14:22:30.000000000 +0000

4. File Permissions and Ownership

Linux file permissions determine who can read, write, and execute files. Every file has three permission sets: one for the owner, one for the group, and one for everyone else. Getting permissions right is critical for security, especially on web servers and shared systems.

Understanding Permission Notation

# Permission string from ls -l:
# -rwxr-xr-- 1 alice staff 4096 Feb 11 10:30 script.sh
# │└┬┘└┬┘└┬┘
# │ │  │  └─ others: r-- (read only)
# │ │  └──── group:  r-x (read + execute)
# │ └─────── owner:  rwx (read + write + execute)
# └───────── type:   - (regular file)

# Permission values:
# r (read)    = 4
# w (write)   = 2
# x (execute) = 1
# - (none)    = 0

# Numeric notation: add the values for each group
# rwx = 4+2+1 = 7
# r-x = 4+0+1 = 5
# r-- = 4+0+0 = 4
# So rwxr-xr-- = 754
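You can check this arithmetic against a real file: GNU stat (the version on Linux; macOS stat uses different flags) prints both the octal and symbolic forms side by side. The /tmp path here is just a scratch file for the demo:

```shell
# Verify the octal/symbolic correspondence with GNU stat
touch /tmp/perm-demo.sh
chmod 754 /tmp/perm-demo.sh
stat -c '%a %A' /tmp/perm-demo.sh    # 754 -rwxr-xr--
rm /tmp/perm-demo.sh
```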

chmod — Change File Permissions

# Numeric (octal) notation
chmod 755 script.sh             # rwxr-xr-x (standard for executables)
chmod 644 index.html            # rw-r--r-- (standard for files)
chmod 700 private.key           # rwx------ (owner only)
chmod 600 .ssh/id_rsa           # rw------- (SSH private key)
chmod 777 /tmp/shared           # rwxrwxrwx (everyone, AVOID in production)

# Symbolic notation
chmod u+x script.sh             # add execute for owner
chmod g+w shared.txt            # add write for group
chmod o-r secret.txt            # remove read for others
chmod a+r public.txt            # add read for all (a = all)
chmod u=rwx,g=rx,o=r file      # set exact permissions

# Recursive (apply to directory and all contents)
chmod -R 755 /var/www/html/     # common for web directories
chmod 644 /var/www/html/*.html  # no -R: the glob already selects only files

# Common permission patterns:
# 755 - directories, executable scripts
# 644 - regular files (HTML, CSS, config)
# 600 - private keys, sensitive configs
# 700 - private directories (.ssh)
# 775 - shared group directories

⚙ Try it: Not sure which permission numbers you need? Use our Chmod Calculator to visually select permissions and get the exact chmod command.

chown — Change File Ownership

# Change owner
sudo chown alice file.txt

# Change owner and group
sudo chown alice:staff file.txt

# Change only group
sudo chown :www-data file.txt
chgrp www-data file.txt          # alternative command

# Recursive (change directory and all contents)
sudo chown -R www-data:www-data /var/www/html/

# Common pattern for web servers:
sudo chown -R www-data:www-data /var/www/html
sudo chmod -R 755 /var/www/html
sudo chmod 644 /var/www/html/*.html   # no -R: 644 on a directory would make it unenterable

umask — Default Permissions

# View current umask
umask                           # 0022 (typical default)

# How umask works: the umask bits are removed from the defaults
# New files:       666 minus umask = default permissions
# New directories: 777 minus umask = default permissions
# With umask 022:  files get 644, directories get 755
# (Strictly it is a bitmask, 666 & ~umask, not subtraction - but
#  plain subtraction gives the same answer for common umask values)

# Set umask for the current session
umask 027                       # files: 640, directories: 750
umask 077                       # files: 600, directories: 700

# Make permanent by adding to ~/.bashrc or ~/.profile
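You can watch umask shape new-file permissions directly. Running the demo in a subshell (the parentheses) keeps your session's umask untouched; the /tmp filename is just a scratch file:

```shell
# umask 027 removes write for group and everything for others:
# 666 & ~027 = 640 for new files
(
  umask 027
  rm -f /tmp/umask-demo.txt        # start fresh so touch creates the file
  touch /tmp/umask-demo.txt
  stat -c '%a' /tmp/umask-demo.txt # 640
  rm -f /tmp/umask-demo.txt
)
umask                              # your own umask is unchanged
```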

5. Text Processing

Linux text processing tools are some of the most powerful utilities in the entire operating system. They form the backbone of log analysis, data transformation, and system automation. These commands are designed to be piped together to build complex processing pipelines.
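As a preview of that pipeline style, here is a tiny end-to-end example: it builds a miniature access log, then chains awk, sort, and uniq to rank status codes by frequency. (The log format and file path are invented for the demo; each command is covered in detail below.)

```shell
# Build a four-line sample log, then count status codes (third column)
printf '%s\n' \
  'GET /index.html 200' \
  'GET /missing 404' \
  'GET /index.html 200' \
  'POST /login 200' > /tmp/mini-access.log

awk '{print $3}' /tmp/mini-access.log | sort | uniq -c | sort -rn
#       3 200
#       1 404

rm /tmp/mini-access.log
```

Each stage does one small job: awk extracts a column, sort groups identical values, uniq -c counts them, and the final sort ranks the counts.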

grep — Search Text Patterns

# Basic search
grep "error" /var/log/syslog            # lines containing "error"
grep "404" /var/log/nginx/access.log    # lines with 404 status

# Common flags
grep -i "error" logfile                 # case-insensitive
grep -r "TODO" ./src/                   # recursive search in directory
grep -n "function" script.js            # show line numbers
grep -c "error" logfile                 # count matching lines
grep -l "import" *.py                   # list filenames only
grep -v "debug" logfile                 # invert (non-matching lines)
grep -w "error" logfile                 # whole word match only

# Context (show surrounding lines)
grep -A 3 "error" logfile              # 3 lines After match
grep -B 2 "error" logfile              # 2 lines Before match
grep -C 5 "error" logfile              # 5 lines of Context (before + after)

# Regular expressions
grep -E "error|warning|critical" logfile    # extended regex (OR)
grep -E "^[0-9]{4}-" logfile               # lines starting with 4 digits
grep -P "\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}" logfile  # IP addresses (Perl regex)

# Recursive search with exclusions
grep -r "password" --include="*.py" ./     # only .py files
grep -r "TODO" --exclude-dir=node_modules  # skip directories

# Count occurrences per file
grep -rc "error" /var/log/*.log

sed — Stream Editor

# Basic substitution (first occurrence per line)
sed 's/old/new/' file.txt

# Global substitution (all occurrences)
sed 's/old/new/g' file.txt

# Edit file in place
sed -i 's/http:/https:/g' config.yml

# Edit in place with backup
sed -i.bak 's/http:/https:/g' config.yml   # creates config.yml.bak

# Delete lines
sed '/^#/d' config.yml                     # delete comment lines
sed '/^$/d' file.txt                       # delete empty lines
sed '1,5d' file.txt                        # delete lines 1-5

# Insert and append
sed '3i\New line inserted before line 3' file.txt
sed '3a\New line appended after line 3' file.txt

# Print specific lines
sed -n '10,20p' file.txt                   # print lines 10-20
sed -n '5p' file.txt                       # print only line 5

# Multiple operations
sed -e 's/foo/bar/g' -e 's/baz/qux/g' file.txt

# Address ranges
sed '10,20s/old/new/g' file.txt            # replace only in lines 10-20
sed '/START/,/END/s/old/new/g' file.txt    # replace between patterns

# Real-world examples
sed -i 's/localhost:3000/api.example.com/g' .env
sed -i '/^DEBUG=/s/=.*/=false/' config
sed -n '/ERROR/p' /var/log/app.log
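The -i.bak form above doubles as a one-command undo mechanism, since the .bak file is the untouched original. A self-contained round trip (the file path and contents are invented for the demo):

```shell
# In-place edit with backup, then restore from the backup
printf 'url: http://example.com\n' > /tmp/sed-demo.yml
sed -i.bak 's/http:/https:/g' /tmp/sed-demo.yml

cat /tmp/sed-demo.yml         # url: https://example.com
cat /tmp/sed-demo.yml.bak     # url: http://example.com  (original preserved)

mv /tmp/sed-demo.yml.bak /tmp/sed-demo.yml   # undo the edit
rm /tmp/sed-demo.yml
```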

awk — Pattern Scanning and Processing

# Print specific columns (fields separated by whitespace)
awk '{print $1}' file.txt                  # first column
awk '{print $1, $3}' file.txt              # first and third columns
awk '{print $NF}' file.txt                 # last column

# Custom field separator
awk -F: '{print $1, $3}' /etc/passwd       # username and UID
awk -F, '{print $2}' data.csv              # second column of CSV

# Filter by condition
awk '$3 > 100 {print $1, $3}' data.txt    # where column 3 > 100
awk '/error/ {print $0}' logfile           # lines matching "error"
awk '$1 == "alice" {print}' users.txt      # where column 1 is "alice"

# Built-in variables
awk '{print NR, $0}' file.txt             # NR = line number
awk 'END {print NR}' file.txt             # total number of lines
awk '{print NF}' file.txt                 # NF = number of fields per line

# Calculations
awk '{sum += $3} END {print sum}' data.txt             # sum column 3
awk '{sum += $3; n++} END {print sum/n}' data.txt      # average column 3
awk '{if ($3 > max) max = $3} END {print max}' data.txt  # max of column 3

# Format output
awk '{printf "%-20s %10.2f\n", $1, $3}' data.txt

# Real-world examples
# Extract IP addresses from nginx access log:
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Sum request sizes from access log:
awk '{sum += $10} END {printf "%.2f GB\n", sum/1073741824}' access.log

# Find processes using most memory:
ps aux | awk '{print $4, $11}' | sort -rn | head -10

sort — Sort Lines

# Alphabetical sort
sort names.txt

# Numeric sort
sort -n numbers.txt

# Reverse sort
sort -r names.txt

# Sort by specific column
sort -k2 data.txt               # sort by second column
sort -k3 -n data.txt            # numeric sort by third column
sort -t: -k3 -n /etc/passwd     # sort passwd by UID (field separator :)

# Remove duplicates while sorting
sort -u names.txt

# Human-readable numeric sort (1K, 2M, 3G)
du -sh * | sort -h

uniq — Filter Duplicate Lines

# Remove consecutive duplicate lines (input must be sorted)
sort data.txt | uniq

# Count occurrences
sort data.txt | uniq -c

# Show only duplicates
sort data.txt | uniq -d

# Show only unique lines (no duplicates)
sort data.txt | uniq -u

# Common pattern: frequency analysis
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -20

cut — Extract Columns

# Cut by character position
cut -c1-10 file.txt             # first 10 characters of each line
cut -c5- file.txt               # from character 5 to end

# Cut by field (default delimiter: tab)
cut -f1 file.tsv                # first field
cut -f1,3 file.tsv              # first and third fields

# Cut by field with custom delimiter
cut -d: -f1 /etc/passwd         # username from /etc/passwd
cut -d, -f2-4 data.csv          # fields 2-4 from CSV

# Complement (everything except specified fields)
cut -d: --complement -f2 /etc/passwd

tr — Translate Characters

# Convert to uppercase
echo "hello" | tr 'a-z' 'A-Z'          # HELLO

# Convert to lowercase
echo "HELLO" | tr 'A-Z' 'a-z'          # hello

# Replace characters
echo "hello world" | tr ' ' '_'        # hello_world

# Delete characters
echo "hello 123 world" | tr -d '0-9'   # hello  world

# Squeeze repeated characters
echo "hello     world" | tr -s ' '     # hello world

# Replace newlines with spaces (join lines)
cat file.txt | tr '\n' ' '

# Delete non-printable characters
tr -cd '[:print:]\n' < dirty-file.txt > clean-file.txt

diff — Compare Files

# Compare two files
diff file1.txt file2.txt

# Unified diff format (most readable, used in patches)
diff -u file1.txt file2.txt

# Side-by-side comparison
diff -y file1.txt file2.txt
diff --side-by-side --width=80 file1.txt file2.txt

# Compare directories
diff -r dir1/ dir2/

# Brief output (only report if files differ)
diff -q file1.txt file2.txt

# Ignore whitespace differences
diff -w file1.txt file2.txt

# Create a patch file
diff -u original.py modified.py > changes.patch

# Apply a patch
patch original.py < changes.patch
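The create-and-apply steps above can be exercised end to end. One detail worth knowing for scripts: diff exits with status 1 when the files differ, so a `|| true` is needed under `set -e`. (File names and contents here are invented for the demo.)

```shell
# Round trip: diff two files into a patch, then apply it
printf 'line one\nline two\n' > /tmp/original.txt
printf 'line one\nline 2\n'   > /tmp/modified.txt

# diff exits 1 when files differ; || true keeps set -e scripts running
diff -u /tmp/original.txt /tmp/modified.txt > /tmp/changes.patch || true

patch /tmp/original.txt < /tmp/changes.patch
cat /tmp/original.txt           # now identical to modified.txt

rm -f /tmp/original.txt /tmp/modified.txt /tmp/changes.patch
```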

6. System Information

These commands help you understand the state of your system: hardware, operating system version, resource usage, and uptime. Essential for troubleshooting, capacity planning, and system administration.

System Identity

# Kernel and OS information
uname -a                        # all system info
uname -r                        # kernel version: 6.8.0-90-generic
uname -m                        # architecture: x86_64

# Distribution information
cat /etc/os-release             # detailed OS info
lsb_release -a                  # distribution details (if available)

# Hostname
hostname                        # current hostname
hostnamectl                     # hostname + OS + kernel + architecture

# System uptime
uptime                          # 10:30:00 up 45 days, 2:15, 3 users, load average: 0.50, 0.30, 0.20
uptime -p                       # up 45 days, 2 hours, 15 minutes

# Who is logged in
who                             # list logged-in users
w                               # who + what they are doing + load average
whoami                          # your username
id                              # your UID, GID, and groups

Memory and CPU

# Memory usage
free -h                         # human-readable memory stats
#               total   used    free    shared  buff/cache  available
# Mem:          16Gi    4.2Gi   8.1Gi   256Mi   3.7Gi       11Gi
# Swap:         4.0Gi   0B      4.0Gi

# CPU information
lscpu                           # detailed CPU info
nproc                           # number of processing units
cat /proc/cpuinfo               # raw CPU details

# Real-time process monitoring
top                             # classic process viewer
htop                            # modern interactive process viewer (if installed)

# top keyboard shortcuts:
# M     sort by memory usage
# P     sort by CPU usage
# k     kill a process
# q     quit
# 1     show individual CPUs

Disk Usage

# Disk space on all mounted filesystems
df -h                           # human-readable
df -h /                         # specific filesystem
df -h /home                     # specific mount point

# Directory size
du -sh /var/log                 # total size of /var/log
du -sh *                        # size of each item in current directory
du -sh * | sort -h              # sorted by size
du -h --max-depth=1 /var        # one level deep, with sizes

# Find largest directories
du -h /var | sort -rh | head -20

# Find largest files
find / -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -h

# Disk usage summary
df -h --total                   # includes total across all filesystems

Hardware Information

# PCI devices (network cards, GPUs, etc.)
lspci
lspci | grep -i network
lspci | grep -i vga

# USB devices
lsusb

# Block devices (disks, partitions)
lsblk

# Hardware summary (if installed)
lshw -short
sudo dmidecode --type memory    # detailed RAM info

# Network interfaces
ip link show
ifconfig                        # older but still common

7. Process Management

Every running program in Linux is a process. Understanding how to view, control, and terminate processes is essential for managing servers and debugging application issues.

ps — Process Status

# Show your own processes
ps

# Show all processes (BSD syntax, most common)
ps aux
# a = all users, u = user-oriented format, x = include background processes

# Show all processes (System V syntax)
ps -ef

# Filter by name
ps aux | grep nginx
ps aux | grep -v grep | grep nginx     # exclude the grep process itself

# Show process tree
ps auxf                         # forest (tree) view
pstree                          # graphical process tree
pstree -p                       # include PIDs

# Custom columns
ps -eo pid,ppid,user,%cpu,%mem,stat,cmd --sort=-%mem | head -20

# Find process by name
pgrep nginx                     # just PIDs
pgrep -a nginx                  # PIDs + full command line

kill — Terminate Processes

# Graceful termination (SIGTERM, signal 15)
kill 1234                       # send SIGTERM to PID 1234
kill -15 1234                   # explicit SIGTERM

# Force kill (SIGKILL, signal 9) - last resort
kill -9 1234                    # cannot be caught or ignored

# Kill by name
pkill nginx                     # kill all processes named "nginx"
killall nginx                   # same effect

# Kill by name with signal
pkill -9 runaway-script

# Common signals:
# SIGTERM (15) - graceful termination (default)
# SIGKILL (9)  - immediate forced termination
# SIGHUP  (1)  - hangup, often used to reload configuration
# SIGSTOP (19) - pause process
# SIGCONT (18) - resume paused process

# Reload configuration without restarting
kill -HUP $(pgrep -o nginx)    # signal the nginx master (oldest PID) to reload config

# List all available signals
kill -l
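The difference between SIGTERM and SIGKILL is that a process can install a handler for SIGTERM and clean up before exiting, while SIGKILL never reaches the process at all. A minimal sketch using bash's trap builtin (the script path is a throwaway for the demo):

```shell
# A script that catches SIGTERM and exits cleanly
cat > /tmp/trap-demo.sh <<'EOF'
#!/bin/bash
trap 'echo "got SIGTERM, cleaning up"; exit 0' TERM
echo "running (PID $$)"
while true; do sleep 1; done
EOF
chmod +x /tmp/trap-demo.sh

/tmp/trap-demo.sh &             # start in the background
PID=$!
sleep 1
kill "$PID"                     # SIGTERM: the handler runs, clean exit
wait "$PID"                     # exit status 0 (the handler's exit code)
rm /tmp/trap-demo.sh
```

With kill -9 instead, the handler would never run and wait would report a nonzero status.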

Job Control

# Run a command in the background
long-running-script.sh &

# Suspend the current foreground process
# Press Ctrl+Z

# View background jobs
jobs
# [1]-  Running    long-running-script.sh &
# [2]+  Stopped    vim config.yml

# Resume in background
bg %1                           # resume job 1 in background

# Bring to foreground
fg %1                           # bring job 1 to foreground
fg                              # bring most recent job to foreground

nohup — Run After Logout

# Run a command that survives logout
nohup long-script.sh &
# Output goes to nohup.out by default

# With custom output file
nohup long-script.sh > /var/log/script.log 2>&1 &

# Alternative: use disown
long-script.sh &
disown %1                       # detach from terminal

screen and tmux — Terminal Multiplexers

# screen: basic terminal multiplexer
screen                          # start new session
screen -S deploy                # start named session
screen -ls                      # list sessions
screen -r deploy                # reattach to session
# Inside screen:
# Ctrl+A d    detach from session (lowercase d; capital D also logs out)
# Ctrl+A c    create new window
# Ctrl+A n    next window

# tmux: modern terminal multiplexer (recommended)
tmux                            # start new session
tmux new -s deploy              # start named session
tmux ls                         # list sessions
tmux attach -t deploy           # reattach to session
tmux kill-session -t deploy     # kill a session

# Inside tmux:
# Ctrl+B d        detach from session (letters are lowercase)
# Ctrl+B c        create new window
# Ctrl+B n        next window
# Ctrl+B p        previous window
# Ctrl+B %        split pane vertically
# Ctrl+B "        split pane horizontally
# Ctrl+B arrow    switch between panes

8. Networking Commands

Networking commands are essential for troubleshooting connectivity, transferring files, monitoring traffic, and configuring network interfaces. These are the commands you reach for when something is not connecting, loading slowly, or behaving unexpectedly.

ping — Test Connectivity

# Basic ping
ping google.com                 # continuous ping (Ctrl+C to stop)
ping -c 5 google.com            # send exactly 5 packets
ping -c 5 -i 0.5 google.com    # 5 packets, 0.5s interval

# Ping with timeout
ping -W 2 10.0.0.1             # 2-second timeout per packet

# Quick connectivity check
ping -c 1 -W 2 8.8.8.8 && echo "Internet OK" || echo "No internet"

curl — Transfer Data from URLs

# Basic GET request
curl https://api.example.com/data

# Save output to a file
curl -o output.html https://example.com
curl -O https://example.com/file.zip     # keep original filename

# Show response headers
curl -I https://example.com              # headers only
curl -i https://example.com              # headers + body

# POST request with data
curl -X POST -d "name=alice&email=alice@example.com" https://api.example.com/users

# POST with JSON
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"name": "alice", "email": "alice@example.com"}' \
  https://api.example.com/users

# Authentication
curl -u username:password https://api.example.com/private
curl -H "Authorization: Bearer YOUR_TOKEN" https://api.example.com/data

# Follow redirects
curl -L https://example.com

# Verbose output (debugging)
curl -v https://example.com

# Silent mode (no progress bar)
curl -s https://api.example.com/data | jq .

# Download with progress bar
curl -# -O https://example.com/largefile.zip

# Limit download speed
curl --limit-rate 1M -O https://example.com/file.iso

# Resume interrupted download
curl -C - -O https://example.com/file.iso

wget — Download Files

# Download a file
wget https://example.com/file.zip

# Download to specific location
wget -O /tmp/data.json https://api.example.com/data

# Download in background
wget -b https://example.com/large-file.iso

# Resume interrupted download
wget -c https://example.com/large-file.iso

# Mirror a website
wget --mirror --convert-links --page-requisites https://example.com

# Download all files of a specific type
wget -r -A "*.pdf" https://example.com/documents/

# Quiet mode
wget -q https://example.com/file.zip

ssh — Secure Shell

# Connect to a remote server
ssh user@server.example.com
ssh -p 2222 user@server.com     # custom port

# Run a single command remotely
ssh user@server "uptime"
ssh user@server "df -h && free -h"

# SSH with key authentication
ssh -i ~/.ssh/my-key.pem user@server.com

# SSH tunneling (port forwarding)
ssh -L 8080:localhost:80 user@server     # local port forwarding
ssh -R 8080:localhost:3000 user@server   # remote port forwarding
ssh -D 1080 user@server                 # SOCKS proxy

# Copy SSH key to remote server (for passwordless login)
ssh-copy-id user@server.com

# Generate SSH key pair
ssh-keygen -t ed25519 -C "your-email@example.com"
ssh-keygen -t rsa -b 4096 -C "your-email@example.com"

scp — Secure Copy

# Copy file to remote server
scp file.txt user@server:/home/user/

# Copy file from remote server
scp user@server:/var/log/app.log ./

# Copy directory recursively
scp -r project/ user@server:/home/user/projects/

# Copy with custom SSH port
scp -P 2222 file.txt user@server:/home/user/

# Copy between two remote servers
scp user1@server1:/file.txt user2@server2:/destination/

Network Diagnostics

# Show network interfaces and IP addresses
ip addr show                    # modern (recommended)
ip a                            # shorthand
ifconfig                        # legacy but still common

# Show routing table
ip route show
ip r                            # shorthand
route -n                        # legacy

# Show listening ports and connections
ss -tlnp                        # TCP listening ports with process names
ss -ulnp                        # UDP listening ports
ss -tnp                         # established TCP connections
netstat -tlnp                   # legacy equivalent of ss

# DNS lookup
dig example.com                 # detailed DNS query
dig +short example.com          # just the IP address
dig example.com MX              # mail server records
dig example.com ANY             # all records (often refused by modern servers)
nslookup example.com            # simpler DNS lookup
host example.com                # simplest DNS lookup

# Trace route to a host
traceroute google.com
tracepath google.com            # does not require root
mtr google.com                  # combines ping + traceroute (if installed)

# Show active network connections
ss -s                           # connection statistics summary
ss -tnp state established      # established connections with process names

# Test if a specific port is open
nc -zv server.com 80            # TCP port check
nc -zv server.com 22            # SSH port check
timeout 3 bash -c "echo > /dev/tcp/server.com/80" && echo "open" || echo "closed"

9. Package Management

Every Linux distribution uses a package manager to install, update, and remove software. The commands differ by distribution, but the concepts are the same: repositories hold packages, and the package manager resolves dependencies and handles installation.

apt — Debian, Ubuntu, Mint

# Update package lists (always do this first)
sudo apt update

# Upgrade all installed packages
sudo apt upgrade
sudo apt full-upgrade            # also handles dependency changes

# Install a package
sudo apt install nginx
sudo apt install nginx php-fpm mariadb-server    # multiple packages

# Remove a package
sudo apt remove nginx            # keep config files
sudo apt purge nginx             # remove config files too
sudo apt autoremove              # remove unneeded dependencies

# Search for packages
apt search "web server"
apt list --installed             # list installed packages
apt show nginx                   # package details

# Install a .deb file
sudo dpkg -i package.deb
sudo apt install -f              # fix broken dependencies after dpkg

yum / dnf — RHEL, CentOS, Fedora, Rocky

# dnf is the modern replacement for yum (Fedora 22+, RHEL 8+)

# Update all packages
sudo dnf update
sudo dnf upgrade                 # same as update in dnf

# Install a package
sudo dnf install nginx
sudo dnf install nginx php-fpm mariadb-server

# Remove a package
sudo dnf remove nginx
sudo dnf autoremove              # remove unneeded dependencies

# Search for packages
dnf search "web server"
dnf list installed
dnf info nginx

# Install from a .rpm file
sudo dnf install ./package.rpm

# List available updates
dnf check-update

# Group install (install related packages together)
sudo dnf group install "Development Tools"

pacman — Arch Linux, Manjaro

# Sync package database and update
sudo pacman -Syu

# Install a package
sudo pacman -S nginx

# Remove a package and its dependencies
sudo pacman -Rns nginx

# Search for packages
pacman -Ss "web server"
pacman -Qs nginx                 # search installed packages

# Package information
pacman -Si nginx                 # remote package info
pacman -Qi nginx                 # installed package info

# List files owned by a package
pacman -Ql nginx

# Find which package owns a file
pacman -Qo /usr/bin/nginx

snap — Universal Package Manager

# Install a snap
sudo snap install code --classic         # VS Code
sudo snap install node --classic --channel=20

# List installed snaps
snap list

# Update snaps
sudo snap refresh

# Remove a snap
sudo snap remove code

# Find snaps
snap find "text editor"

10. User Management

Linux is a multi-user operating system. User management commands let you create accounts, manage groups, set passwords, and control access. On servers, getting user management right is critical for security.

Creating and Managing Users

# Create a new user
sudo useradd alice                       # basic (no home dir on some distros)
sudo useradd -m alice                    # create with home directory
sudo useradd -m -s /bin/bash alice       # with home dir and bash shell

# Create user with all options
sudo useradd -m -s /bin/bash -G sudo,docker -c "Alice Smith" alice

# Modify existing user
sudo usermod -aG docker alice            # add alice to docker group
sudo usermod -s /bin/zsh alice           # change shell
sudo usermod -l newname oldname          # rename user
sudo usermod -L alice                    # lock account
sudo usermod -U alice                    # unlock account

# Delete a user
sudo userdel alice                       # keep home directory
sudo userdel -r alice                    # remove home directory too

# Change password
passwd                                   # change your own password
sudo passwd alice                        # change another user's password

# Force password change on next login
sudo passwd -e alice

Groups

# Create a group
sudo groupadd developers

# Add user to a group (append, do not replace existing groups)
sudo usermod -aG developers alice

# Remove user from a group
sudo gpasswd -d alice developers

# View your groups
groups
groups alice                     # another user's groups
id alice                         # detailed UID/GID info

# List all groups
cat /etc/group

# Delete a group
sudo groupdel developers

sudo — Superuser Access

# Run a command as root
sudo apt update
sudo systemctl restart nginx

# Open a root shell
sudo -i                         # root login shell
sudo -s                         # root shell (keeps current env)
sudo su -                       # switch to root user

# Run as a different user
sudo -u postgres psql

# Edit the sudoers file safely
sudo visudo

# Common sudoers entries:
# alice ALL=(ALL:ALL) ALL              # alice can do anything with sudo
# %docker ALL=(ALL) NOPASSWD: ALL     # docker group, no password required
# deploy ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart nginx  # specific command only

# Check if you have sudo access
sudo -l                          # list your sudo privileges

# Run last command with sudo (when you forget)
sudo !!                          # repeats last command with sudo

11. Compression and Archives

Compression reduces file sizes for storage and transfer. Linux supports multiple compression formats, each with different trade-offs between speed and compression ratio.

tar — Tape Archive

# Create a tar archive (no compression)
tar cf archive.tar files/

# Create a compressed tar archive (gzip - most common)
tar czf archive.tar.gz files/
tar czf backup.tar.gz /var/www/html/

# Create with bzip2 compression (smaller but slower)
tar cjf archive.tar.bz2 files/

# Create with xz compression (smallest but slowest)
tar cJf archive.tar.xz files/

# Extract a tar archive
tar xf archive.tar
tar xzf archive.tar.gz
tar xjf archive.tar.bz2
tar xJf archive.tar.xz

# Extract to specific directory
tar xzf archive.tar.gz -C /destination/

# List contents without extracting
tar tzf archive.tar.gz

# Extract specific files
tar xzf archive.tar.gz path/to/specific-file.txt

# Create archive excluding certain files
tar czf backup.tar.gz --exclude='*.log' --exclude='node_modules' project/

# Create archive with verbose output
tar czvf archive.tar.gz files/

# Flags explained:
# c = create archive
# x = extract archive
# z = gzip compression
# j = bzip2 compression
# J = xz compression
# f = filename (must be followed by archive name)
# v = verbose (show files being processed)
# t = list contents
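
A quick round trip under /tmp ties the flags together (the /tmp/tar-demo paths are throwaway examples):

```shell
# Create a small tree, archive it, list it, and extract it elsewhere
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/out
echo "hello" > /tmp/tar-demo/src/a.txt

tar czf /tmp/tar-demo/archive.tar.gz -C /tmp/tar-demo src   # c=create, z=gzip, f=file
tar tzf /tmp/tar-demo/archive.tar.gz                        # t=list the contents
tar xzf /tmp/tar-demo/archive.tar.gz -C /tmp/tar-demo/out   # x=extract

cat /tmp/tar-demo/out/src/a.txt
# hello
```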

gzip, bzip2, xz — File Compression

# gzip (fast, good compression)
gzip file.txt                   # compresses to file.txt.gz (original deleted)
gzip -k file.txt                # keep original file
gzip -d file.txt.gz             # decompress
gunzip file.txt.gz              # same as gzip -d
gzip -9 file.txt                # maximum compression
zcat file.txt.gz                # view compressed file without extracting

# bzip2 (slower, better compression)
bzip2 file.txt                  # compresses to file.txt.bz2
bzip2 -d file.txt.bz2           # decompress
bunzip2 file.txt.bz2            # same as bzip2 -d
bzcat file.txt.bz2              # view without extracting

# xz (slowest, best compression)
xz file.txt                     # compresses to file.txt.xz
xz -d file.txt.xz               # decompress
unxz file.txt.xz                # same as xz -d
xzcat file.txt.xz               # view without extracting

# Compression comparison (typical ratios on text files):
# gzip:  60-70% compression, fast
# bzip2: 65-75% compression, moderate speed
# xz:    70-80% compression, slow
# zstd:  65-75% compression, very fast (modern alternative)
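
As a rough illustration (not a benchmark), this sketch compresses a deliberately repetitive text file with gzip and compares byte counts; highly repetitive input compresses far better than the typical figures above:

```shell
# Generate ~880 KB of repeated text, gzip it, compare sizes
yes "the quick brown fox jumps over the lazy dog" | head -n 20000 > /tmp/sample.txt
gzip -k -f -9 /tmp/sample.txt     # -k keeps the original alongside sample.txt.gz

orig=$(wc -c < /tmp/sample.txt)
comp=$(wc -c < /tmp/sample.txt.gz)
echo "original: $orig bytes, gzipped: $comp bytes"
```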

zip / unzip

# Create a zip archive
zip archive.zip file1.txt file2.txt
zip -r archive.zip directory/           # recursive (include subdirectories)
zip -r archive.zip project/ -x "*.git*" -x "node_modules/*"  # exclude patterns

# Extract a zip archive
unzip archive.zip
unzip archive.zip -d /destination/      # extract to specific directory

# List contents
unzip -l archive.zip

# Extract specific files
unzip archive.zip "*.html"

# Create password-protected zip
zip -e secure.zip sensitive-files/

12. Disk Management

Disk management commands help you partition drives, create filesystems, and manage mount points. These are most commonly needed when setting up servers, adding storage, or troubleshooting disk issues.

lsblk — List Block Devices

# List all block devices
lsblk
# NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
# sda      8:0    0   100G  0 disk
# ├─sda1   8:1    0     1G  0 part /boot
# ├─sda2   8:2    0    95G  0 part /
# └─sda3   8:3    0     4G  0 part [SWAP]
# sdb      8:16   0   500G  0 disk
# └─sdb1   8:17   0   500G  0 part /data

# Show filesystem info
lsblk -f
# Shows NAME, FSTYPE, LABEL, UUID, MOUNTPOINT

fdisk — Partition Management

# List partitions on all disks
sudo fdisk -l

# List partitions on a specific disk
sudo fdisk -l /dev/sdb

# Interactive partition management (CAREFUL - can destroy data)
sudo fdisk /dev/sdb
# m = help menu
# p = print partition table
# n = new partition
# d = delete partition
# w = write changes and exit
# q = quit without saving

# Modern alternative: parted
sudo parted /dev/sdb print

mkfs — Create Filesystem

# Create ext4 filesystem (most common for Linux)
sudo mkfs.ext4 /dev/sdb1

# Create XFS filesystem (common on RHEL/CentOS)
sudo mkfs.xfs /dev/sdb1

# Create FAT32 filesystem (for USB drives, cross-platform)
sudo mkfs.vfat /dev/sdb1

# Check filesystem for errors
sudo fsck /dev/sdb1              # must unmount first
sudo e2fsck -f /dev/sdb1        # force check on ext4

mount / umount — Mount Filesystems

# Mount a partition
sudo mount /dev/sdb1 /mnt/data

# Mount with specific options
sudo mount -o ro /dev/sdb1 /mnt/data              # read-only
sudo mount -t ext4 /dev/sdb1 /mnt/data            # specify filesystem type

# Mount a USB drive
sudo mount /dev/sdc1 /media/usb

# Unmount
sudo umount /mnt/data
sudo umount /media/usb

# Show all mounted filesystems
mount | column -t
findmnt                          # tree view of mounts

# Persistent mounts (survive reboot) - edit /etc/fstab
# /dev/sdb1  /mnt/data  ext4  defaults  0  2
# UUID=abc-123  /mnt/data  ext4  defaults  0  2

# Find UUID of a partition
sudo blkid /dev/sdb1

# Mount all entries in /etc/fstab
sudo mount -a

13. System Services

Modern Linux distributions use systemd to manage system services (daemons). The systemctl command controls services, and journalctl reads their logs. Understanding these two commands is essential for managing any Linux server.

systemctl — Service Management

# Start, stop, restart a service
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
sudo systemctl reload nginx      # reload config without downtime (if supported)

# Check service status
systemctl status nginx
# Shows: loaded, active/inactive, PID, memory, recent log lines

# Enable/disable auto-start at boot
sudo systemctl enable nginx      # start on boot
sudo systemctl disable nginx     # do not start on boot
sudo systemctl enable --now nginx   # enable AND start immediately

# Check if a service is running
systemctl is-active nginx        # "active" or "inactive"
systemctl is-enabled nginx       # "enabled" or "disabled"
systemctl is-failed nginx        # "failed" or not

# List all services
systemctl list-units --type=service
systemctl list-units --type=service --state=running    # only running
systemctl list-units --type=service --state=failed     # only failed

# View service configuration
systemctl cat nginx              # show the service unit file
systemctl show nginx             # show all properties

# Reload systemd after editing unit files
sudo systemctl daemon-reload

# System-wide commands
sudo systemctl reboot            # reboot the system
sudo systemctl poweroff          # shut down the system
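
For context, here is what a service unit file looks like. This is a minimal, hypothetical example (the myapp name, binary path, and user are placeholders): save it as /etc/systemd/system/myapp.service, run daemon-reload, then enable --now myapp.

```ini
[Unit]
Description=Example application service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure
User=appuser

[Install]
WantedBy=multi-user.target
```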

journalctl — System Logs

# View all system logs
journalctl

# View logs for a specific service
journalctl -u nginx
journalctl -u nginx --since "1 hour ago"
journalctl -u nginx --since "2026-02-10" --until "2026-02-11"

# Follow logs in real time
journalctl -u nginx -f

# Show only the last N lines
journalctl -u nginx -n 50

# Filter by priority
journalctl -p err                # errors and above
journalctl -p warning            # warnings and above
# Priorities: emerg, alert, crit, err, warning, notice, info, debug

# Show logs since last boot
journalctl -b
journalctl -b -1                 # previous boot

# Kernel messages
journalctl -k

# Output as JSON
journalctl -u nginx -o json-pretty

# Check disk usage of journal logs
journalctl --disk-usage

# Clean old logs
sudo journalctl --vacuum-time=7d    # keep only last 7 days
sudo journalctl --vacuum-size=500M  # keep only 500MB

14. Cron Jobs and Scheduling

Cron is the standard Linux job scheduler. It runs commands at specified times automatically. Cron is used for backups, log rotation, cleanup scripts, monitoring checks, and any task that needs to run on a schedule.

Crontab Syntax

# Crontab format:
# ┌───────────── minute (0-59)
# │ ┌───────────── hour (0-23)
# │ │ ┌───────────── day of month (1-31)
# │ │ │ ┌───────────── month (1-12)
# │ │ │ │ ┌───────────── day of week (0-7, 0 and 7 = Sunday)
# │ │ │ │ │
# * * * * * command to execute

# Special characters:
# *   any value
# ,   list separator (1,3,5)
# -   range (1-5)
# /   step (*/5 = every 5 units)
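
The step syntax can be sanity-checked outside cron: */5 in the minute field matches every minute divisible by 5, which this one-liner enumerates:

```shell
# Minutes matched by "*/5" in the minute field
seq 0 59 | awk '$1 % 5 == 0' | paste -sd' '
# 0 5 10 15 20 25 30 35 40 45 50 55
```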

Common Cron Schedules

# Every minute
* * * * * /path/to/script.sh

# Every 5 minutes
*/5 * * * * /path/to/script.sh

# Every hour at minute 0
0 * * * * /path/to/script.sh

# Every day at midnight
0 0 * * * /path/to/backup.sh

# Every day at 3:30 AM
30 3 * * * /path/to/maintenance.sh

# Every Monday at 9 AM
0 9 * * 1 /path/to/weekly-report.sh

# First day of every month at midnight
0 0 1 * * /path/to/monthly-cleanup.sh

# Every weekday (Mon-Fri) at 8 AM
0 8 * * 1-5 /path/to/workday-script.sh

# Every 15 minutes during business hours
*/15 8-17 * * 1-5 /path/to/check.sh

# Shorthand expressions:
# @reboot     run once at startup
# @yearly     0 0 1 1 *
# @monthly    0 0 1 * *
# @weekly     0 0 * * 0
# @daily      0 0 * * *
# @hourly     0 * * * *

⚙ Build cron expressions: Use our Crontab Generator to visually build and validate cron schedule expressions without memorizing the syntax.

Managing Crontab

# Edit your crontab
crontab -e

# List your cron jobs
crontab -l

# Remove all your cron jobs
crontab -r

# Edit another user's crontab (as root)
sudo crontab -u alice -e
sudo crontab -u alice -l

# Best practices for cron jobs:
# 1. Use full paths for commands: /usr/bin/python3 instead of python3
# 2. Redirect output to a log file:
0 3 * * * /home/user/backup.sh >> /var/log/backup.log 2>&1

# 3. Use a lock file to prevent overlapping runs:
*/5 * * * * flock -n /tmp/script.lock /path/to/script.sh

# 4. Set environment variables at the top of crontab:
# SHELL=/bin/bash
# PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# MAILTO=admin@example.com
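
The flock idiom from item 3 is easy to demonstrate locally: while one process holds the lock, a second non-blocking attempt fails immediately (assumes flock from util-linux, standard on most distros):

```shell
# Hold the lock for 2 seconds in a background process...
flock -n /tmp/flock-demo.lock sleep 2 &
sleep 0.5
# ...then a second non-blocking (-n) attempt fails while it is held
if flock -n /tmp/flock-demo.lock true; then
  echo "lock acquired"
else
  echo "lock held, skipping this run"
fi
wait
```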

System-wide Cron

# System cron directories (scripts here run automatically):
# /etc/cron.d/          custom cron files
# /etc/cron.daily/      scripts run once a day
# /etc/cron.hourly/     scripts run once an hour
# /etc/cron.weekly/     scripts run once a week
# /etc/cron.monthly/    scripts run once a month

# System crontab (includes user field)
sudo cat /etc/crontab
# * * * * * user  command
# 25 6 * * * root /usr/sbin/logrotate /etc/logrotate.conf

15. SSH and Remote Access

SSH (Secure Shell) is the primary method for remote access to Linux servers. It provides encrypted communication, key-based authentication, port forwarding, and file transfer capabilities. Understanding SSH thoroughly is essential for any server administrator or developer who works with remote machines.

SSH Configuration

# The SSH config file (~/.ssh/config) simplifies connections
# Instead of: ssh -i ~/.ssh/prod-key.pem -p 2222 deploy@production.example.com
# You type:   ssh production

# ~/.ssh/config example:
Host production
    HostName production.example.com
    User deploy
    Port 2222
    IdentityFile ~/.ssh/prod-key.pem

Host staging
    HostName staging.example.com
    User deploy
    IdentityFile ~/.ssh/staging-key.pem

Host dev-*
    User developer
    IdentityFile ~/.ssh/dev-key.pem

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
    AddKeysToAgent yes

SSH Key Management

# Generate a modern SSH key pair (Ed25519 recommended)
ssh-keygen -t ed25519 -C "alice@company.com"
# Keys saved to ~/.ssh/id_ed25519 (private) and ~/.ssh/id_ed25519.pub (public)

# Generate RSA key (for older systems)
ssh-keygen -t rsa -b 4096 -C "alice@company.com"

# Copy public key to a server
ssh-copy-id user@server.com
ssh-copy-id -i ~/.ssh/custom-key.pub user@server.com

# Manual key copy (if ssh-copy-id is not available)
cat ~/.ssh/id_ed25519.pub | ssh user@server "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"

# SSH agent (avoid typing passphrase repeatedly)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
ssh-add -l                       # list added keys

# Correct permissions for SSH files
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_ed25519      # private key
chmod 644 ~/.ssh/id_ed25519.pub  # public key
chmod 600 ~/.ssh/authorized_keys
chmod 644 ~/.ssh/config
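
Those chmod calls can be wrapped in a small helper. A sketch, assuming the conventional layout where only public keys end in .pub; everything else in the directory (private keys, authorized_keys, and here also config) is set to owner-only 600, which sshd accepts:

```shell
fix_ssh_perms() {
  dir="$1"
  chmod 700 "$dir"                       # directory: owner only
  find "$dir" -maxdepth 1 -type f -name '*.pub' -exec chmod 644 {} +    # public keys
  find "$dir" -maxdepth 1 -type f ! -name '*.pub' -exec chmod 600 {} +  # everything else
}

# Usage: fix_ssh_perms ~/.ssh
```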

SSH Tunneling

# Local port forwarding: access remote service through local port
# Access remote database (port 5432) through local port 5433
ssh -L 5433:localhost:5432 user@dbserver.com
# Now connect to localhost:5433 to reach the remote database

# Access a service on the remote network
ssh -L 8080:internal-app:80 user@jumphost.com
# Now visit http://localhost:8080 to reach internal-app:80

# Remote port forwarding: expose local service to remote network
# Make your local dev server (port 3000) accessible on remote port 8080
ssh -R 8080:localhost:3000 user@remote-server.com

# Dynamic port forwarding (SOCKS proxy)
ssh -D 1080 user@proxy-server.com
# Configure browser to use SOCKS5 proxy at localhost:1080

# Background tunnel (stays running, no shell)
ssh -fN -L 5433:localhost:5432 user@dbserver.com

SSH Security Hardening

# Key settings for /etc/ssh/sshd_config:

# Disable password authentication (use keys only)
PasswordAuthentication no

# Disable root login
PermitRootLogin no

# Use only SSH protocol 2 (the default; OpenSSH 7.4+ removed protocol 1
# entirely, so this directive is obsolete there)
Protocol 2

# Change default port (reduces automated attacks)
Port 2222

# Limit login attempts
MaxAuthTries 3

# Allow specific users only
AllowUsers deploy alice

# After editing, restart SSH:
sudo systemctl restart sshd      # service is named "ssh" on Debian/Ubuntu

# IMPORTANT: Always test your new SSH config in a separate terminal
# BEFORE closing your current session. If you lock yourself out,
# you will need console access to fix it.

16. Shell Redirections and Pipes

Redirections and pipes are what make the Linux command line truly powerful. They let you connect commands together, direct output to files, and build complex data processing workflows from simple building blocks. This is the Unix philosophy in action: small tools that do one thing well, combined through pipes.

Output Redirection

# Redirect stdout to a file (overwrite)
echo "Hello" > output.txt
ls -la > filelist.txt

# Redirect stdout to a file (append)
echo "Another line" >> output.txt
date >> logfile.txt

# Redirect stderr to a file
command 2> errors.txt
find / -name "*.conf" 2> /dev/null      # discard errors

# Redirect both stdout and stderr to a file
command > all-output.txt 2>&1
command &> all-output.txt               # bash shorthand

# Redirect stdout and stderr to different files
command > output.txt 2> errors.txt

# Discard all output
command > /dev/null 2>&1

Input Redirection

# Redirect stdin from a file
wc -l < file.txt
sort < unsorted.txt > sorted.txt

# Here document (multi-line input)
cat <<EOF > config.txt
server_name=production
port=8080
debug=false
EOF

# Here string
grep "error" <<< "this is an error message"

Pipes

# Pipe stdout of one command into stdin of the next
ls -la | less                           # browse long file listings
ps aux | grep nginx                     # find nginx processes
cat access.log | sort | uniq -c | sort -rn | head -20   # frequency analysis

# Chain multiple pipes
cat /var/log/syslog | grep "error" | awk '{print $5}' | sort | uniq -c | sort -rn
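
The sort | uniq -c | sort -rn idiom used above turns any stream of repeated lines into a frequency table, most common first; here it is on inline sample data (the awk stage just normalizes the spacing):

```shell
printf 'error\nok\nerror\nwarn\nerror\nok\n' |
  sort | uniq -c | sort -rn | awk '{print $1, $2}'
# 3 error
# 2 ok
# 1 warn
```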

# Pipe to xargs (convert stdin to command arguments)
find . -name "*.tmp" | xargs rm           # delete all .tmp files
find . -name "*.js" | xargs grep "TODO"   # search JS files for TODO
# Safer with spaces in filenames: NUL-delimited input
find . -name "*.tmp" -print0 | xargs -0 rm

# xargs with custom delimiter and parallel execution
cat urls.txt | xargs -I {} -P 4 curl -s -o /dev/null -w "%{http_code} {}\n" {}

# tee: write to file AND pass through to next command
ls -la | tee filelist.txt | wc -l      # save listing AND count lines
command | tee output.log                # see output AND save to file
command | tee -a output.log             # append instead of overwrite

# Process substitution (treat command output as a file)
diff <(ls dir1/) <(ls dir2/)           # compare two directories
comm <(sort file1.txt) <(sort file2.txt)  # compare sorted files

Command Chaining

# Sequential execution (always run next command)
command1 ; command2 ; command3

# AND: run next only if previous succeeded
command1 && command2 && command3
apt update && apt upgrade -y && apt autoremove

# OR: run next only if previous failed
command1 || command2
mkdir /tmp/mydir || echo "Directory already exists"

# Combined patterns
make build && make test && echo "All passed!" || echo "Something failed!"

# Subshell (run commands in a separate shell)
(cd /tmp && tar czf backup.tar.gz /important/data)
# Current directory is unchanged after subshell completes
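
A quick check confirms the subshell's cd does not leak into the parent shell:

```shell
before=$(pwd)
(cd /tmp && pwd > /dev/null)     # the cd happens only inside the parentheses
after=$(pwd)
[ "$before" = "$after" ] && echo "directory unchanged"
# directory unchanged
```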

# Command substitution (use output as a value)
echo "Today is $(date +%Y-%m-%d)"
echo "There are $(wc -l < file.txt) lines"
FILES=$(find . -name "*.py" | wc -l)

17. Environment Variables

Environment variables are key-value pairs available to all processes. They configure everything from your shell prompt to which editor opens when you type git commit. Understanding environment variables is essential for configuring applications, especially on servers and in Docker containers.

Viewing and Setting Variables

# View all environment variables
env
printenv
export                           # bash built-in, shows exported variables

# View a specific variable
echo $HOME                       # /home/alice
echo $PATH                       # /usr/local/bin:/usr/bin:/bin
echo $USER                       # alice
echo $SHELL                      # /bin/bash
printenv HOME                    # alternative syntax

# Set a variable for the current session
export MY_VAR="hello"
export DATABASE_URL="postgres://user:pass@localhost:5432/mydb"
export PATH="$HOME/.local/bin:$PATH"    # prepend to PATH

# Set a variable for a single command
DATABASE_URL="postgres://localhost/test" python app.py
ENV=production node server.js

# Unset a variable
unset MY_VAR
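
The VAR=value command form really does scope the variable to that single process, which is easy to verify:

```shell
MY_FLAG=on sh -c 'echo "child sees: $MY_FLAG"'
# child sees: on
echo "parent sees: ${MY_FLAG:-unset}"
# parent sees: unset
```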

Important Environment Variables

# HOME      - your home directory (/home/alice)
# USER      - your username (alice)
# SHELL     - your default shell (/bin/bash)
# PATH      - directories searched for commands
# PWD       - current working directory
# OLDPWD    - previous working directory (used by cd -)
# EDITOR    - default text editor (vim, nano, etc.)
# LANG      - system language/locale
# TERM      - terminal type
# PS1       - shell prompt format
# HOSTNAME  - machine name
# LD_LIBRARY_PATH - directories to search for shared libraries
# XDG_CONFIG_HOME - user config directory (~/.config)

# Application-specific:
# JAVA_HOME - Java installation directory
# GOPATH    - Go workspace directory
# NODE_ENV  - Node.js environment (development/production)
# PYTHONPATH - Python module search paths

Making Variables Permanent

# For your user only: add to ~/.bashrc (or ~/.zshrc for zsh)
echo 'export EDITOR=vim' >> ~/.bashrc
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc

# Reload without logging out
source ~/.bashrc
# or
. ~/.bashrc

# Configuration files loaded by bash:
# Login shell:     /etc/profile, then the first one found of
#                  ~/.bash_profile, ~/.bash_login, ~/.profile
# Non-login shell: ~/.bashrc

# For all users: add to /etc/environment or /etc/profile.d/
# /etc/environment (simple KEY=VALUE format, no export needed):
# PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# JAVA_HOME="/usr/lib/jvm/java-17-openjdk"

# /etc/profile.d/ (shell scripts, sourced at login):
# /etc/profile.d/custom.sh:
# export COMPANY_NAME="Acme Corp"

The PATH Variable

# PATH is the most important environment variable
# It is a colon-separated list of directories where the shell looks for commands

echo $PATH
# /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/alice/.local/bin

# When you type "python3", the shell searches each PATH directory in order:
# 1. /usr/local/sbin/python3  - not found
# 2. /usr/local/bin/python3   - not found
# 3. /usr/sbin/python3        - not found
# 4. /usr/bin/python3         - FOUND! Execute this one.

# Add a directory to PATH
export PATH="$HOME/scripts:$PATH"       # prepend (checked first)
export PATH="$PATH:/opt/custom/bin"     # append (checked last)

# Common issue: "command not found" usually means:
# 1. The program is not installed
# 2. The program is installed but its directory is not in PATH
# Fix: find the binary and add its directory to PATH
which node          # shows where node is installed
whereis node        # shows all locations
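
That search order can be imitated with a short function. This is a sketch of the mechanism, not what the shell literally runs (real shells also cache results in a hash table); the subshell keeps the IFS change from leaking:

```shell
lookup() {
  ( IFS=:                            # split PATH on colons
    for dir in $PATH; do
      if [ -x "$dir/$1" ]; then      # first executable match wins
        echo "$dir/$1"
        exit 0
      fi
    done
    exit 127 )                       # the shell's own "command not found" code
}

lookup ls        # prints the same path that "command -v ls" reports
```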

18. Useful Command Combinations and One-Liners

The real power of the Linux command line comes from combining simple commands into powerful one-liners. These are the commands you bookmark, memorize, and reach for when you need to solve real problems quickly.

File Operations

# Find and replace text across multiple files
find . -name "*.html" -exec sed -i 's/old-text/new-text/g' {} +

# Find large files (top 20 by size)
find / -type f -exec du -h {} + 2>/dev/null | sort -rh | head -20

# Count lines of code in a project
find . -name "*.py" -type f | xargs wc -l | tail -1   # total (assumes one xargs batch)
find . \( -name "*.js" -o -name "*.ts" \) -not -path "*/node_modules/*" | xargs wc -l

# Find duplicate files by content (md5 checksums; -w 32 compares only the
# hash, -D prints every member of each duplicate group)
find . -type f -exec md5sum {} + | sort | uniq -w 32 -D

# Find files modified in the last 24 hours
find . -type f -mtime -1 -ls

# Recursively change file permissions (files to 644, directories to 755)
find /var/www -type f -exec chmod 644 {} +
find /var/www -type d -exec chmod 755 {} +

# Delete files older than 30 days
find /tmp -type f -mtime +30 -delete

# Rename files with a pattern (requires rename command)
rename 's/\.jpeg$/\.jpg/' *.jpeg

# Batch rename using a loop
for f in *.txt; do mv "$f" "${f%.txt}.md"; done
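
The ${f%.txt} in that loop is POSIX parameter expansion, suffix stripping. The related forms are worth memorizing:

```shell
f=report.txt
echo "${f%.txt}"     # report   (remove shortest matching suffix)
echo "${f#report}"   # .txt     (remove shortest matching prefix)
echo "${f%%.*}"      # report   (remove longest matching suffix)
echo "${f##*.}"      # txt      (remove longest matching prefix)
```

bash additionally supports ${f/old/new} for substring replacement.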

Log Analysis

# Top 20 IP addresses in nginx access log
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Request count by HTTP status code
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

# Top 20 most requested URLs
awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Find 404 errors
awk '$9 == 404 {print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

# Requests per hour
awk '{print $4}' /var/log/nginx/access.log | cut -d: -f1-2 | sort | uniq -c

# Find slow requests (if response time is logged)
awk '$NF > 5.0 {print $7, $NF}' /var/log/nginx/access.log | sort -k2 -rn | head -20

# Real-time log monitoring with filtering
tail -f /var/log/nginx/access.log | grep --line-buffered "POST"
tail -f /var/log/syslog | grep --line-buffered -i "error"

# Count 5xx errors since the start of the previous hour
# (lexicographic timestamp comparison -- reliable only within a single month)
awk -v d1="$(date -d '1 hour ago' '+%d/%b/%Y:%H')" '$4 >= "["d1 && $9 >= 500' /var/log/nginx/access.log | wc -l

System Administration

# Find which process is using a specific port
ss -tlnp | grep :8080
lsof -i :8080

# Kill process on a specific port
kill $(lsof -t -i :8080)
fuser -k 8080/tcp

# Monitor disk space usage in real time
watch -n 5 'df -h'

# Watch a command output change over time
watch -n 2 'ss -s'              # connection statistics every 2 seconds

# Find which process is using the most memory
ps aux --sort=-%mem | head -10

# Find which process is using the most CPU
ps aux --sort=-%cpu | head -10

# System resource snapshot
echo "=== CPU ===" && top -bn1 | head -5 && echo "=== Memory ===" && free -h && echo "=== Disk ===" && df -h /

# Check if a website is up
curl -s -o /dev/null -w "%{http_code}" https://example.com

# Test website response time
curl -s -o /dev/null -w "DNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" https://example.com

# Generate a random password
openssl rand -base64 32
tr -dc 'A-Za-z0-9!@#$%' < /dev/urandom | head -c 20; echo

# Quick HTTP server (serve current directory)
python3 -m http.server 8000
# Now accessible at http://localhost:8000

# Monitor a directory for changes
inotifywait -m -r /var/www/html/

Text and Data Processing

# Extract unique email addresses from a file
grep -oE '[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}' file.txt | sort -u

# Extract IP addresses from a file
grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' file.txt | sort -u

# CSV: print specific columns
awk -F, '{print $1,$3}' data.csv

# CSV: sum a column
awk -F, '{sum += $4} END {print sum}' data.csv

# Convert JSON to formatted output (using jq)
curl -s https://api.example.com/data | jq '.'
curl -s https://api.example.com/users | jq '.[] | {name: .name, email: .email}'

# Compare two sorted files
comm -12 <(sort file1.txt) <(sort file2.txt)     # lines in both files
comm -23 <(sort file1.txt) <(sort file2.txt)     # lines only in file1
comm -13 <(sort file1.txt) <(sort file2.txt)     # lines only in file2

# Parallel execution with xargs
cat urls.txt | xargs -P 10 -I {} curl -s -o /dev/null -w "%{http_code} {}\n" {}

# Count occurrences of each word in a file
tr -s ' ' '\n' < file.txt | sort | uniq -c | sort -rn | head -20

# Remove trailing whitespace from all files
find . -name "*.py" -exec sed -i 's/[[:space:]]*$//' {} +

Networking One-Liners

# Check if multiple hosts are reachable
for host in server1 server2 server3; do ping -c 1 -W 2 $host > /dev/null 2>&1 && echo "$host: UP" || echo "$host: DOWN"; done

# Scan common ports on a host without nmap (uses bash's /dev/tcp feature -- bash only)
for port in 22 80 443 3306 5432 6379 8080; do (echo > /dev/tcp/server.com/$port) 2>/dev/null && echo "Port $port: OPEN" || echo "Port $port: CLOSED"; done

# Download multiple URLs in parallel
cat urls.txt | xargs -P 5 -I {} wget -q {}

# Monitor bandwidth usage on an interface (if iftop is installed)
sudo iftop -i eth0

# Watch active connections to a specific port
watch -n 1 'ss -tn | grep :443 | wc -l'

# Find your public IP address
curl -s ifconfig.me
curl -s ipinfo.io/ip
curl -s api.ipify.org

19. Frequently Asked Questions

What is the difference between Linux and Unix?

Unix is the original operating system developed at Bell Labs in the 1970s. Linux is a Unix-like operating system kernel created by Linus Torvalds in 1991. While Unix was proprietary and ran on specific hardware, Linux is open source and runs on virtually any hardware. Linux follows the POSIX standard that Unix established, so most commands and behaviors are identical. macOS is based on BSD Unix, which is why terminal commands on macOS are very similar to Linux commands. In practice, when people say "Linux" they usually mean a Linux distribution like Ubuntu, Fedora, or Debian, which bundles the Linux kernel with GNU utilities, a package manager, and other software.

How do I learn Linux commands effectively as a beginner?

Start with five essential commands: ls (list files), cd (change directory), pwd (print working directory), cat (view files), and mkdir (create directories). Use these daily for a week before adding more. Practice in a safe environment like a virtual machine or WSL on Windows so you cannot accidentally damage your system. Use the man command (for example, man ls) to read built-in documentation for any command. Focus on real tasks rather than memorization: navigate your file system, create project directories, edit configuration files, and manage processes. Keep a cheat sheet open while you work. Within two to three weeks of daily practice, the core 30 to 40 commands will become muscle memory.

What is the difference between chmod 755 and chmod 644?

The numbers represent read (4), write (2), and execute (1) permissions for three groups: owner, group, and others. chmod 755 means the owner can read, write, and execute (7 = 4+2+1), while group and others can read and execute (5 = 4+1). This is the standard permission for directories and executable scripts. chmod 644 means the owner can read and write (6 = 4+2), while group and others can only read (4). This is the standard permission for regular files like HTML pages, configuration files, and documents. Use 755 for anything that needs to be executed or traversed (directories, scripts, binaries) and 644 for data files that should be readable but not executable. Use our Chmod Calculator to visually build permission values.
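To see both modes in action, try them on a scratch file and directory (the names here are just examples):

```shell
# Apply the two standard modes and inspect the result.
touch report.html && mkdir -p public
chmod 644 report.html    # owner: rw-, group: r--, others: r--
chmod 755 public         # owner: rwx, group: r-x, others: r-x

stat -c '%a %A %n' report.html public
# 644 -rw-r--r-- report.html
# 755 drwxr-xr-x public
```

The `-rw-r--r--` string is exactly what `ls -l` prints for a 644 file: three permission triplets (owner, group, others), with the leading `d` marking a directory.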

How do I find and kill a process using a specific port in Linux?

Use the ss or lsof command to find which process is using a port, then kill it. To find the process on port 8080, run: ss -tlnp | grep 8080, which shows the process ID (PID). Alternatively, use: lsof -i :8080. Once you have the PID, terminate it gracefully with: kill PID. If the process does not stop, force kill it with: kill -9 PID. You can combine these into a one-liner: kill $(lsof -t -i :8080). The fuser command also works: fuser -k 8080/tcp will find and kill whatever process is listening on TCP port 8080. Always try a graceful kill (SIGTERM) before resorting to kill -9 (SIGKILL), as SIGKILL does not allow the process to clean up resources.
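The graceful-then-force escalation can be sketched in a few lines of shell. Here a background sleep stands in for the process on the port, so the snippet is safe to run anywhere; substitute PID=$(lsof -t -i :8080) for the real case. The two-second grace period is an arbitrary choice:

```shell
# Escalating kill: SIGTERM first, SIGKILL only if the process survives.
sleep 300 &                        # stand-in target; use lsof -t -i :8080 in practice
PID=$!

kill "$PID"                        # SIGTERM: ask for a clean shutdown
sleep 2                            # grace period -- tune to your service
if kill -0 "$PID" 2>/dev/null; then   # kill -0 sends no signal, only checks existence
  kill -9 "$PID"                   # SIGKILL: no cleanup possible, last resort
fi
```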

What is the difference between bash, sh, and zsh?

sh (Bourne Shell) is the original Unix shell and serves as the POSIX standard. Scripts written for sh are maximally portable across Unix-like systems. bash (Bourne Again Shell) is the most widely used shell on Linux. It extends sh with features like command history, tab completion, arrays, and improved scripting syntax. Most Linux distributions use bash as the default interactive shell. zsh (Z Shell) is the default shell on macOS since Catalina and is popular on Linux. It adds features like better tab completion, spell correction, shared command history, and extensive plugin support through frameworks like Oh My Zsh. For scripting, use bash or sh for maximum compatibility. For interactive use, zsh offers the best user experience. All three share the same core command set, so commands like ls, grep, and find work identically regardless of which shell you use.
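One concrete difference: arrays. The snippet below runs under bash (or zsh) but is a syntax error under a strict POSIX sh such as dash:

```shell
#!/usr/bin/env bash
# Array syntax is a bash/zsh extension -- not part of POSIX sh.
fruits=(apple banana cherry)
echo "count: ${#fruits[@]}"      # count: 3
echo "second: ${fruits[1]}"      # second: banana (bash arrays are zero-indexed)
```

This is why the shebang line matters: a script that starts with #!/bin/sh may be run by dash on Debian and Ubuntu, where bash-only features like arrays and [[ ]] will fail.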

Conclusion

The Linux command line is not just a tool — it is a language for communicating with computers. Every command in this guide is a building block. Individually, grep, awk, sort, and uniq are simple utilities. Piped together, they become a data processing pipeline that can analyze gigabytes of logs in seconds. Combined with cron, SSH, and shell scripting, they become an automation platform that can manage entire fleets of servers.

If you are just starting, focus on three things: learn to navigate the filesystem (ls, cd, pwd, find), learn to read and search files (cat, less, grep, head, tail), and learn to manage processes (ps, kill, top). Everything else builds on this foundation.

If you are already comfortable with the basics, push into text processing (sed, awk), shell scripting, and the one-liner combinations in the final section. The ability to quickly compose commands to solve problems on the fly is what separates proficient Linux users from experts.

The time you invest in learning the command line pays compound returns. Every server you manage, every Docker container you debug, every CI/CD pipeline you build, and every deployment you troubleshoot becomes faster and more reliable when you can fluently speak the language of the terminal.

⚙ Essential tools: Generate file permission commands with the Chmod Calculator, build cron schedule expressions with the Crontab Generator, and parse existing cron expressions with the Cron Expression Parser.

Learn More

Related Resources

Chmod Calculator
Visual file permission calculator for Linux chmod commands
Crontab Generator
Build and validate cron schedule expressions visually
Cron Expression Parser
Parse and explain existing cron expressions in plain English
Bash Scripting Guide
Complete guide to shell scripting with practical examples
Docker Complete Guide
Containerize applications using Linux command line skills
Git Complete Guide
Version control from the command line for developers