Bash Scripting: The Complete Guide for 2026

February 11, 2026

Bash is the default shell on virtually every Linux distribution, macOS (prior to Catalina), and the Windows Subsystem for Linux. It is the language that glues together the Unix ecosystem: orchestrating programs, automating system tasks, processing text, managing deployments, and handling everything from one-line aliases to thousand-line build systems. If you work with servers, containers, CI/CD pipelines, or any Unix-like environment, bash scripting is not optional — it is a core professional skill.

This guide covers everything from your first script to advanced techniques used in production. Every section includes practical code examples you can copy, modify, and use immediately. Whether you are automating backups, writing deployment scripts, parsing logs, or building developer tools, this is the reference you need.

⚙ Related tools: Schedule your scripts with the Crontab Generator, decode existing schedules with the Cron Expression Parser, and keep our Bash Cheat Sheet open as a quick reference.

Table of Contents

  1. What Is Bash and Why Learn It
  2. Getting Started: Your First Script
  3. Variables and Data Types
  4. String Manipulation
  5. Arrays: Indexed and Associative
  6. Conditionals: if/else and case
  7. Loops: for, while, and until
  8. Functions
  9. Command Substitution and Process Substitution
  10. Input/Output Redirection and Pipes
  11. File Operations and Tests
  12. Error Handling
  13. Regular Expressions: grep, sed, and awk
  14. Script Arguments and getopts
  15. Practical Examples
  16. Debugging Techniques
  17. Best Practices and Common Pitfalls
  18. Frequently Asked Questions

1. What Is Bash and Why Learn It

Bash (Bourne Again Shell) was written by Brian Fox in 1989 as a free replacement for the Bourne Shell (sh). It is part of the GNU Project and has been the default login shell on most Linux distributions for over three decades. Bash combines the capabilities of a command-line interpreter with a full programming language, making it uniquely positioned for system automation.

Here is why bash scripting matters in 2026:

  - Ubiquity: bash (or a compatible shell) is present on virtually every Linux server, container image, and CI runner.
  - Automation: cron jobs, deployment hooks, and build pipelines are overwhelmingly written as shell scripts.
  - Glue: bash composes existing tools (grep, sed, awk, curl, git) faster than writing a program from scratch.
  - Longevity: scripts written decades ago still run unchanged.

Bash vs Other Shells

Several shells exist in the Unix ecosystem. Here is how they compare:

  - sh/dash: minimal POSIX shells; fast and universal, but no arrays or advanced string operations.
  - bash: a POSIX-compatible superset with arrays, associative arrays, regex matching, and process substitution; preinstalled almost everywhere.
  - zsh: a superb interactive shell (the default on modern macOS) with bash-like scripting, but less commonly preinstalled on servers.
  - fish: friendly interactively, but deliberately not POSIX-compatible, so its scripts are not portable.

For scripting, bash offers the best balance of features and portability. This guide focuses exclusively on bash.

2. Getting Started: Your First Script

Checking Your Bash Version

Before writing scripts, verify that bash is available and check its version:

# Check bash version
bash --version

# Check which bash is being used
which bash

# Check the version from within a script
echo "Bash version: $BASH_VERSION"

Bash 4.0+ is required for associative arrays and several modern features. Bash 5.0+ (released 2019) is current on most Linux distributions. macOS ships with bash 3.2 due to licensing (GPLv3), so if you target macOS, either install a newer bash via Homebrew or avoid bash 4+ features.
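If a script depends on bash 4+ features, it can fail fast with a clear message instead of dying mid-run. Here is a minimal sketch of a version guard using the built-in BASH_VERSINFO array:

#!/usr/bin/env bash
# Require bash 4.0+ (needed for associative arrays, ${var^^}, etc.)
# BASH_VERSINFO[0] holds the major version number.
if (( BASH_VERSINFO[0] < 4 )); then
    echo "Error: bash 4.0+ required (found $BASH_VERSION)" >&2
    exit 1
fi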

The Shebang Line

Every bash script should start with a shebang line that tells the operating system which interpreter to use:

#!/usr/bin/env bash

Using #!/usr/bin/env bash is preferred over #!/bin/bash because it finds bash wherever it is installed (which varies across systems). The env command searches your PATH for the bash executable.

Creating and Running Your First Script

# Create the script file
cat > hello.sh <<'EOF'
#!/usr/bin/env bash
# My first bash script

echo "Hello, World!"
echo "Today is $(date +%Y-%m-%d)"
echo "Running as user: $(whoami)"
echo "Current directory: $(pwd)"
echo "Bash version: $BASH_VERSION"
EOF

# Make it executable
chmod +x hello.sh

# Run it
./hello.sh

There are three ways to run a bash script:

# Method 1: Make executable and run directly (requires shebang)
chmod +x script.sh
./script.sh

# Method 2: Pass to bash explicitly (shebang not required)
bash script.sh

# Method 3: Source the script (runs in current shell, not a subshell)
source script.sh
# or equivalently:
. script.sh

The difference between executing and sourcing is critical: when you execute a script (methods 1 and 2), it runs in a new subshell. Variables set in the script do not affect your current shell. When you source a script (method 3), it runs in your current shell, so any variables, functions, or directory changes persist after the script finishes. This is why .bashrc and .bash_profile are sourced, not executed.
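A quick experiment makes the difference concrete (set_var.sh is a throwaway file created just for this demo):

# set_var.sh does nothing but assign a variable
cat > set_var.sh <<'EOF'
myvar="hello from the script"
EOF

bash set_var.sh
echo "${myvar:-unset}"    # "unset": the subshell's variable is gone

source set_var.sh
echo "${myvar:-unset}"    # "hello from the script": it ran in this shell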

File Permissions and chmod

# Make a script executable for the owner
chmod u+x script.sh

# Make it executable for everyone
chmod +x script.sh

# Set exact permissions: owner read/write/execute, group/others read/execute
chmod 755 script.sh

# Check permissions
ls -la script.sh

3. Variables and Data Types

Bash variables are untyped by default — they are all stored as strings. However, bash can treat them as integers in arithmetic contexts. Understanding variable assignment, expansion, and scoping is fundamental to every script you write.
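For example, declare -i marks a variable as an integer, so every assignment to it is evaluated arithmetically:

declare -i num
num="2 + 3"
echo "$num"      # 5 (evaluated as arithmetic)

str="2 + 3"
echo "$str"      # 2 + 3 (plain string, stored as-is)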

Variable Assignment

#!/usr/bin/env bash

# Assignment: NO spaces around the equals sign
name="DevToolbox"
version=3
filepath="/var/log/syslog"

# WRONG - these all fail:
# name = "DevToolbox"    # bash tries to run 'name' as a command
# name ="DevToolbox"     # same problem
# name= "DevToolbox"    # sets name to empty and runs "DevToolbox" as command

Variable Expansion

# Basic expansion
echo "$name"
echo "${name}"    # Braces are optional but recommended

# Braces are REQUIRED when the variable is adjacent to other characters
prefix="dev"
echo "${prefix}_tools"    # dev_tools
echo "$prefix_tools"      # Empty! Bash looks for variable named 'prefix_tools'

# Command substitution
current_date=$(date +%Y-%m-%d)
file_count=$(ls -1 | wc -l)

# Arithmetic expansion
count=5
next=$((count + 1))
echo "Next: $next"    # Next: 6

Quoting Rules

Quoting is one of the most important (and most commonly misunderstood) aspects of bash. Getting it wrong causes bugs with filenames containing spaces, word splitting issues, and security vulnerabilities.

# Double quotes: expand variables but prevent word splitting and globbing
name="World"
echo "Hello, $name"          # Hello, World
echo "Files: $(ls)"          # Expands the command

# Single quotes: completely literal, no expansion at all
echo 'Hello, $name'          # Hello, $name
echo 'No $(expansion) here'  # No $(expansion) here

# Backslash: escape a single character
echo "The price is \$5.00"   # The price is $5.00
echo "She said \"hello\""    # She said "hello"

# $'...' syntax: interpret escape sequences
echo $'Line 1\nLine 2'       # Prints on two lines
echo $'Tab\there'             # Tab	here

# CRITICAL: Always quote variables to prevent word splitting
file="my document.txt"
cat "$file"      # Correct: passes one argument
cat $file        # WRONG: passes two arguments: 'my' and 'document.txt'

Special Variables

#!/usr/bin/env bash

echo "Script name: $0"
echo "First argument: $1"
echo "Second argument: $2"
echo "All arguments (separate words): $@"
echo "All arguments (single string): $*"
echo "Number of arguments: $#"
echo "Exit code of last command: $?"
echo "PID of current script: $$"
echo "PID of last background process: $!"
echo "Current shell options: $-"

Environment Variables vs Local Variables

# Local variable: only available in current shell/script
my_var="local value"

# Environment variable: available to child processes
export MY_ENV_VAR="exported value"

# Set and export in one line
export PATH="$HOME/bin:$PATH"

# Check if a variable is exported
declare -p my_var       # declare -- my_var="local value"
declare -p MY_ENV_VAR   # declare -x MY_ENV_VAR="exported value"

# Unset a variable
unset my_var

# Read-only variable (cannot be changed or unset)
declare -r CONSTANT="immutable"
readonly ANOTHER_CONSTANT="also immutable"

Default Values and Parameter Expansion

# Use default value if variable is unset or empty
echo "${name:-Anonymous}"      # Use "Anonymous" if $name is unset/empty

# Assign default value if variable is unset or empty
echo "${name:=Anonymous}"      # Set $name to "Anonymous" if unset/empty

# Display error if variable is unset or empty
echo "${name:?Error: name is required}"

# Use alternative value if variable IS set
echo "${name:+Hello $name}"    # Print "Hello ..." only if $name is set

# Length of variable
str="Hello World"
echo "${#str}"                 # 11

# Substring extraction
echo "${str:0:5}"              # Hello (start at 0, length 5)
echo "${str:6}"                # World (start at 6, to end)
echo "${str: -5}"              # World (last 5 chars, note the space before -)

4. String Manipulation

Bash has powerful built-in string manipulation that can often replace calls to external tools like sed and awk. These operations are faster because they run inside the shell process without forking.
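As a rough illustration (exact timings vary by machine), compare a built-in expansion against forking a pipeline a thousand times:

filename="archive.tar.gz"

# Built-in expansion: no subprocess per iteration
time for i in {1..1000}; do ext="${filename##*.}"; done

# External pipeline: forks sed on every iteration, often orders of magnitude slower
time for i in {1..1000}; do ext=$(echo "$filename" | sed 's/.*\.//'); done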

Pattern Matching and Substitution

filename="backup-2026-02-11.tar.gz"

# Remove shortest match from the beginning
echo "${filename#*.}"        # tar.gz (remove up to first .)

# Remove longest match from the beginning
echo "${filename##*.}"       # gz (remove up to last .)

# Remove shortest match from the end
echo "${filename%.*}"        # backup-2026-02-11.tar (remove from last .)

# Remove longest match from the end
echo "${filename%%.*}"       # backup-2026-02-11 (remove from first .)

# Practical: extract file extension
ext="${filename##*.}"        # gz

# Practical: extract filename without path
path="/home/user/documents/report.pdf"
basename="${path##*/}"       # report.pdf

# Practical: extract directory from path
dirname="${path%/*}"         # /home/user/documents

Search and Replace

text="Hello World, Hello Bash"

# Replace first occurrence
echo "${text/Hello/Hi}"       # Hi World, Hello Bash

# Replace ALL occurrences
echo "${text//Hello/Hi}"      # Hi World, Hi Bash

# Replace at beginning of string (prefix)
echo "${text/#Hello/Hi}"      # Hi World, Hello Bash

# Replace at end of string (suffix)
echo "${text/%Bash/Shell}"    # Hello World, Hello Shell

# Delete matches (replace with nothing)
echo "${text//Hello/}"        # World,  Bash

Case Conversion (Bash 4+)

str="Hello World"

# Convert to uppercase
echo "${str^^}"              # HELLO WORLD

# Convert to lowercase
echo "${str,,}"              # hello world

# Capitalize first character
echo "${str^}"               # Hello World (already capitalized)

lower="hello world"
echo "${lower^}"             # Hello world

# Convert specific characters
echo "${str^^[aeiou]}"       # HEllO WOrld (uppercase only vowels)

String Concatenation and Length

# Concatenation is just placing strings next to each other
first="Hello"
second="World"
combined="$first $second"
echo "$combined"             # Hello World

# Concatenate with +=
greeting="Hello"
greeting+=", World"
echo "$greeting"             # Hello, World

# String length
echo "${#greeting}"          # 12

# Check if string is empty
str=""
if [[ -z "$str" ]]; then
    echo "String is empty"
fi

# Check if string is NOT empty
str="hello"
if [[ -n "$str" ]]; then
    echo "String is not empty"
fi

5. Arrays: Indexed and Associative

Bash supports two types of arrays: indexed arrays (numbered starting from 0) and associative arrays (key-value pairs, bash 4+). Arrays are essential for handling lists of items, command output, and structured data.

Indexed Arrays

# Declare an indexed array
fruits=("apple" "banana" "cherry" "date")

# Access elements (0-indexed)
echo "${fruits[0]}"          # apple
echo "${fruits[2]}"          # cherry

# Access all elements
echo "${fruits[@]}"          # apple banana cherry date

# Number of elements
echo "${#fruits[@]}"         # 4

# Length of a specific element
echo "${#fruits[1]}"         # 6 (length of "banana")

# Add an element
fruits+=("elderberry")

# Modify an element
fruits[1]="blueberry"

# Delete an element (leaves a gap in indices)
unset 'fruits[2]'

# Slice an array
echo "${fruits[@]:1:2}"     # Elements from index 1, count 2

# Array indices
echo "${!fruits[@]}"        # Print all valid indices

Associative Arrays (Bash 4+)

# Must declare with -A
declare -A colors
colors[red]="#ff0000"
colors[green]="#00ff00"
colors[blue]="#0000ff"

# Or declare and assign at once
declare -A config=(
    [host]="localhost"
    [port]="8080"
    [debug]="true"
)

# Access values
echo "${config[host]}"       # localhost

# All values
echo "${config[@]}"          # localhost 8080 true (order not guaranteed)

# All keys
echo "${!config[@]}"         # host port debug (order not guaranteed)

# Check if key exists
if [[ -v config[host] ]]; then
    echo "host is set"
fi

# Iterate over an associative array
for key in "${!config[@]}"; do
    echo "$key = ${config[$key]}"
done

Practical Array Patterns

# Read lines of a file into an array
mapfile -t lines < /etc/hostname
# or equivalently:
readarray -t lines < /etc/hostname

# Split a string into an array
IFS=',' read -ra parts <<< "one,two,three"
echo "${parts[1]}"           # two

# Read command output into an array
mapfile -t files < <(find . -name "*.sh" -type f)

# Filter an array
numbers=(1 2 3 4 5 6 7 8 9 10)
evens=()
for n in "${numbers[@]}"; do
    if (( n % 2 == 0 )); then
        evens+=("$n")
    fi
done
echo "${evens[@]}"           # 2 4 6 8 10

# Join array elements with a delimiter
join_by() {
    local IFS="$1"
    shift
    echo "$*"
}
join_by "," "${fruits[@]}"   # apple,blueberry,cherry,date

6. Conditionals: if/else and case

if/else Statements

Bash has two test syntaxes: [ ] (POSIX compatible, actually the test command) and [[ ]] (bash extended test, more powerful). Always prefer [[ ]] in bash scripts.

#!/usr/bin/env bash

# Basic if/else
if [[ "$1" == "hello" ]]; then
    echo "You said hello"
elif [[ "$1" == "bye" ]]; then
    echo "Goodbye!"
else
    echo "Unknown command: $1"
fi

# Numeric comparisons
age=25
if [[ $age -ge 18 ]]; then
    echo "Adult"
else
    echo "Minor"
fi

# Numeric comparisons with (( )) - cleaner for math
if (( age >= 18 && age < 65 )); then
    echo "Working age"
fi

Test Operators

# String comparisons (use [[ ]] for safety)
[[ "$a" == "$b" ]]      # Equal
[[ "$a" != "$b" ]]      # Not equal
[[ "$a" < "$b" ]]       # Less than (lexicographic)
[[ "$a" > "$b" ]]       # Greater than (lexicographic)
[[ -z "$a" ]]            # String is empty
[[ -n "$a" ]]            # String is not empty

# Pattern matching (only in [[ ]])
[[ "$str" == *.txt ]]    # Glob pattern match
[[ "$str" =~ ^[0-9]+$ ]] # Regex match

# Integer comparisons
[[ $a -eq $b ]]          # Equal
[[ $a -ne $b ]]          # Not equal
[[ $a -lt $b ]]          # Less than
[[ $a -le $b ]]          # Less than or equal
[[ $a -gt $b ]]          # Greater than
[[ $a -ge $b ]]          # Greater than or equal

# File tests
[[ -f "$file" ]]         # Is a regular file
[[ -d "$dir" ]]          # Is a directory
[[ -e "$path" ]]         # Exists (file or directory)
[[ -r "$file" ]]         # Is readable
[[ -w "$file" ]]         # Is writable
[[ -x "$file" ]]         # Is executable
[[ -s "$file" ]]         # File exists and is not empty
[[ -L "$file" ]]         # Is a symbolic link
[[ "$f1" -nt "$f2" ]]   # f1 is newer than f2
[[ "$f1" -ot "$f2" ]]   # f1 is older than f2

# Logical operators
[[ $a -gt 0 && $a -lt 100 ]]    # AND
[[ $a -eq 0 || $a -eq 1 ]]      # OR
[[ ! -f "$file" ]]               # NOT

Case Statements

Case statements are ideal for matching a variable against multiple patterns. They are cleaner than long if/elif chains and support glob patterns.

#!/usr/bin/env bash

case "$1" in
    start)
        echo "Starting service..."
        ;;
    stop)
        echo "Stopping service..."
        ;;
    restart)
        echo "Restarting service..."
        ;;
    status)
        echo "Service status: running"
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac

# Case with patterns
case "$filename" in
    *.tar.gz|*.tgz)
        tar xzf "$filename"
        ;;
    *.tar.bz2)
        tar xjf "$filename"
        ;;
    *.zip)
        unzip "$filename"
        ;;
    *.tar.xz)
        tar xJf "$filename"
        ;;
    *)
        echo "Unknown archive format: $filename"
        exit 1
        ;;
esac

# Case with character classes
case "$char" in
    [a-z])   echo "Lowercase letter" ;;
    [A-Z])   echo "Uppercase letter" ;;
    [0-9])   echo "Digit" ;;
    *)       echo "Other character" ;;
esac

7. Loops: for, while, and until

For Loops

# Iterate over a list of values
for fruit in apple banana cherry; do
    echo "I like $fruit"
done

# Iterate over an array
files=("config.yml" "app.js" "index.html")
for file in "${files[@]}"; do
    echo "Processing: $file"
done

# C-style for loop
for (( i = 0; i < 10; i++ )); do
    echo "Iteration: $i"
done

# Iterate over a range
for i in {1..10}; do
    echo "Number: $i"
done

# Range with step
for i in {0..100..5}; do
    echo "Count: $i"
done

# Iterate over files (correct way - handles spaces in filenames)
for file in /var/log/*.log; do
    [[ -f "$file" ]] || continue    # Skip if glob didn't match
    echo "Log file: $file"
    wc -l < "$file"
done

# Iterate over command output (line by line)
while IFS= read -r line; do
    echo "Line: $line"
done < <(find . -name "*.sh" -type f)

# WRONG way to iterate over command output (breaks on spaces):
# for file in $(find . -name "*.sh"); do ... done

While Loops

# Basic while loop
count=0
while [[ $count -lt 5 ]]; do
    echo "Count: $count"
    (( count++ ))
done

# Read a file line by line
while IFS= read -r line; do
    echo ">> $line"
done < /etc/hostname

# Read with multiple fields
while IFS=: read -r user _ uid gid _ home shell; do
    if (( uid >= 1000 )) && (( uid < 65000 )); then
        echo "User: $user, UID: $uid, Home: $home, Shell: $shell"
    fi
done < /etc/passwd

# Infinite loop with break
while true; do
    read -rp "Enter command (quit to exit): " cmd
    case "$cmd" in
        quit) break ;;
        *)    echo "You entered: $cmd" ;;
    esac
done

# Read from a pipe (note: runs in a subshell)
echo -e "line1\nline2\nline3" | while IFS= read -r line; do
    echo "Piped: $line"
done

Until Loops

# Until loop: runs UNTIL the condition becomes true
count=0
until [[ $count -ge 5 ]]; do
    echo "Count: $count"
    (( count++ ))
done

# Practical: wait for a service to be ready
until curl -s http://localhost:8080/health > /dev/null 2>&1; do
    echo "Waiting for service to start..."
    sleep 2
done
echo "Service is ready!"

Loop Control

# break: exit the loop entirely
for i in {1..100}; do
    if (( i == 5 )); then
        break
    fi
    echo "$i"
done

# continue: skip to the next iteration
for i in {1..10}; do
    if (( i % 3 == 0 )); then
        continue    # Skip multiples of 3
    fi
    echo "$i"
done

# break/continue with nested loops (specify level)
for i in {1..3}; do
    for j in {1..3}; do
        if (( j == 2 )); then
            continue 2    # Continue the OUTER loop
        fi
        echo "$i-$j"
    done
done

8. Functions

Functions in bash let you group commands into reusable, named blocks. They are essential for organizing scripts, avoiding code duplication, and creating readable, maintainable code.

Defining and Calling Functions

#!/usr/bin/env bash

# Method 1: preferred syntax
greet() {
    echo "Hello, $1!"
}

# Method 2: with 'function' keyword
function farewell() {
    echo "Goodbye, $1!"
}

# Call the functions
greet "World"        # Hello, World!
farewell "Alice"     # Goodbye, Alice!

Arguments and Return Values

# Functions receive arguments the same way as scripts: $1, $2, $@, $#
log_message() {
    local level="$1"
    shift                    # Remove first argument; $@ is now the rest
    local message="$*"
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $message"
}

log_message "INFO" "Application started"
log_message "ERROR" "Connection failed to database"

# Return values: bash functions can only return integers 0-255 (exit codes)
is_even() {
    if (( $1 % 2 == 0 )); then
        return 0    # Success (true)
    else
        return 1    # Failure (false)
    fi
}

if is_even 42; then
    echo "42 is even"
fi

# To "return" strings, use command substitution
get_hostname() {
    hostname -f 2>/dev/null || hostname
}
my_host=$(get_hostname)
echo "Host: $my_host"

Local Variables

# Without 'local', variables are global
bad_function() {
    result="I leak into the global scope"
}

bad_function
echo "$result"    # "I leak into the global scope"

# With 'local', variables are scoped to the function
good_function() {
    local result="I stay inside the function"
    echo "$result"
}

good_function     # "I stay inside the function"
echo "$result"    # "I leak into the global scope" (from earlier)

# Always use 'local' for function variables
process_file() {
    local filename="$1"
    local line_count
    local word_count

    line_count=$(wc -l < "$filename")
    word_count=$(wc -w < "$filename")

    echo "$filename: $line_count lines, $word_count words"
}

Advanced Function Patterns

# Function with validation
create_directory() {
    local dir="$1"

    if [[ -z "$dir" ]]; then
        echo "Error: directory name is required" >&2
        return 1
    fi

    if [[ -d "$dir" ]]; then
        echo "Directory already exists: $dir" >&2
        return 0
    fi

    mkdir -p "$dir" && echo "Created: $dir"
}

# Function that returns an array (via nameref, bash 4.3+)
get_file_info() {
    local -n result_ref=$1    # nameref: result_ref is an alias for $1
    local file="$2"

    result_ref=(
        "$(stat -c %s "$file" 2>/dev/null || echo 0)"
        "$(stat -c %Y "$file" 2>/dev/null || echo 0)"
        "$(file -b "$file" 2>/dev/null || echo unknown)"
    )
}

declare -a info
get_file_info info "/etc/hostname"
echo "Size: ${info[0]}, Modified: ${info[1]}, Type: ${info[2]}"

# Recursive function
factorial() {
    local n=$1
    if (( n <= 1 )); then
        echo 1
    else
        local sub
        sub=$(factorial $((n - 1)))
        echo $((n * sub))
    fi
}
echo "5! = $(factorial 5)"    # 5! = 120

9. Command Substitution and Process Substitution

Command Substitution

Command substitution captures the output of a command and uses it as a value. It is one of bash's most powerful features.

# Modern syntax: $(command)
current_date=$(date +%Y-%m-%d)
file_count=$(find . -name "*.log" -type f | wc -l)
git_branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)

# Legacy syntax: `command` (avoid - harder to read and nest)
current_date=`date +%Y-%m-%d`

# Nesting (much cleaner with $() than backticks)
files_in_newest=$(ls -1 "$(find /tmp -maxdepth 1 -type d | sort | tail -1)")

# Capture exit code AND output
output=$(some_command 2>&1)
exit_code=$?
if (( exit_code != 0 )); then
    echo "Command failed with code $exit_code: $output"
fi

# Assign to variable with word-by-word processing
read -r mem_total mem_used mem_free <<< "$(free -m | awk 'NR==2{print $2, $3, $4}')"
echo "Memory: ${mem_total}MB total, ${mem_used}MB used, ${mem_free}MB free"

Process Substitution

Process substitution (<(command) and >(command)) substitutes a path to a file descriptor (such as /dev/fd/63) connected to the command's output or input. It lets you use command output anywhere a filename is expected.

# Compare output of two commands (like diffing two commands)
diff <(ls /dir1) <(ls /dir2)

# Compare sorted versions of two files
diff <(sort file1.txt) <(sort file2.txt)

# Feed command output to a command that expects a file
while IFS= read -r line; do
    echo "Processing: $line"
done < <(find . -name "*.conf" -type f)

# This is better than piping because the while loop runs in the current
# shell (not a subshell), so variables set inside the loop persist
count=0
while IFS= read -r line; do
    (( count++ ))
done < <(find . -name "*.sh" -type f)
echo "Found $count shell scripts"    # This works!

# With a pipe, $count would be 0 here because the loop runs in a subshell:
# find . -name "*.sh" | while IFS= read -r line; do (( count++ )); done
# echo "$count"    # Always 0!

# Write to process substitution
tee >(gzip > output.gz) >(wc -l > linecount.txt) < input.txt

10. Input/Output Redirection and Pipes

Redirection and pipes are the foundation of the Unix philosophy: small programs that do one thing well, connected together to solve complex problems.

Output Redirection

# Redirect stdout to a file (overwrite)
echo "Hello" > output.txt

# Redirect stdout to a file (append)
echo "World" >> output.txt

# Redirect stderr to a file
command_that_fails 2> errors.log

# Redirect both stdout and stderr to the same file
command > all_output.log 2>&1

# Bash-only shorthand for redirecting both (use > file 2>&1 for POSIX sh)
command &> all_output.log

# Redirect stdout and stderr to different files
command > stdout.log 2> stderr.log

# Discard output (send to /dev/null)
command > /dev/null 2>&1

# Discard only stderr
command 2> /dev/null

# Discard only stdout
command > /dev/null

Input Redirection

# Redirect stdin from a file
sort < unsorted.txt

# Here document (heredoc): multi-line input
cat <<EOF
This is a heredoc.
Variables are expanded: $HOME
Commands too: $(date)
EOF

# Heredoc without expansion (quote the delimiter)
cat <<'EOF'
This is literal text.
No expansion: $HOME $(date)
EOF

# Heredoc with indentation stripping (use <<-)
if true; then
    cat <<-EOF
	This line's leading tabs are stripped.
	Variables still work: $USER
	EOF
fi

# Here string: pass a string as stdin
grep "pattern" <<< "search in this string"

wc -w <<< "count the words in this sentence"

Pipes

# Basic pipe: stdout of one command becomes stdin of the next
cat /var/log/syslog | grep "error" | sort | uniq -c | sort -rn | head -20

# Pipe with multiple stages
find . -name "*.py" -type f |
    xargs grep -l "import requests" |
    sort |
    while IFS= read -r file; do
        echo "$(wc -l < "$file") $file"
    done |
    sort -rn

# Named pipe (FIFO) for inter-process communication
mkfifo /tmp/my_pipe
echo "Hello from producer" > /tmp/my_pipe &
cat /tmp/my_pipe      # Reads "Hello from producer"
rm /tmp/my_pipe

# tee: write to stdout AND a file simultaneously
command | tee output.log           # Display and save
command | tee -a output.log        # Display and append
command | tee file1.txt file2.txt  # Write to multiple files

# PIPESTATUS: get exit codes of all commands in a pipeline
false | true | false
echo "${PIPESTATUS[@]}"            # 1 0 1

File Descriptors

# Open a file for reading on fd 3
exec 3< input.txt
while IFS= read -r line <&3; do
    echo "$line"
done
exec 3<&-    # Close fd 3

# Open a file for writing on fd 4
exec 4> output.txt
echo "Line 1" >&4
echo "Line 2" >&4
exec 4>&-    # Close fd 4

# Redirect stderr to a function for logging
log_errors() {
    while IFS= read -r line; do
        echo "[ERROR] $(date '+%H:%M:%S') $line" >> /var/log/app_errors.log
    done
}
risky_command 2> >(log_errors)

# Swap stdout and stderr
command 3>&1 1>&2 2>&3 3>&-

11. File Operations and Tests

File Test Operators

#!/usr/bin/env bash

file="/etc/passwd"

# Existence tests
[[ -e "$file" ]] && echo "Exists"
[[ -f "$file" ]] && echo "Is a regular file"
[[ -d "$file" ]] && echo "Is a directory"
[[ -L "$file" ]] && echo "Is a symbolic link"
[[ -p "$file" ]] && echo "Is a named pipe"
[[ -S "$file" ]] && echo "Is a socket"
[[ -b "$file" ]] && echo "Is a block device"
[[ -c "$file" ]] && echo "Is a character device"

# Permission tests
[[ -r "$file" ]] && echo "Is readable"
[[ -w "$file" ]] && echo "Is writable"
[[ -x "$file" ]] && echo "Is executable"
[[ -u "$file" ]] && echo "Has SUID bit"
[[ -g "$file" ]] && echo "Has SGID bit"
[[ -k "$file" ]] && echo "Has sticky bit"

# Size tests
[[ -s "$file" ]] && echo "File is not empty"

# Comparison tests
[[ "$file1" -nt "$file2" ]] && echo "file1 is newer"
[[ "$file1" -ot "$file2" ]] && echo "file1 is older"
[[ "$file1" -ef "$file2" ]] && echo "Same inode (hard link)"

Common File Operations

#!/usr/bin/env bash

# Create a temporary file safely
tmpfile=$(mktemp)
echo "Temp file: $tmpfile"

# Create a temporary directory
tmpdir=$(mktemp -d)
echo "Temp dir: $tmpdir"

# Always clean up temp files (use a trap)
cleanup() {
    rm -f "$tmpfile"
    rm -rf "$tmpdir"
}
trap cleanup EXIT

# Read file contents into a variable
contents=$(< myfile.txt)

# Process files safely (handles spaces in names)
find /path -name "*.txt" -print0 | while IFS= read -r -d '' file; do
    echo "Processing: $file"
done

# Create a backup before modifying
backup_and_edit() {
    local file="$1"
    cp "$file" "${file}.bak"
    # ... modify $file ...
}

# Atomically write to a file (write to temp, then rename)
write_atomic() {
    local target="$1"
    local content="$2"
    local tmp
    tmp=$(mktemp "${target}.XXXXXX")
    echo "$content" > "$tmp"
    mv "$tmp" "$target"
}

# Lock file to prevent concurrent execution
lock_file="/tmp/myscript.lock"
if ! mkdir "$lock_file" 2>/dev/null; then
    echo "Another instance is running" >&2
    exit 1
fi
trap 'rmdir "$lock_file"' EXIT

12. Error Handling

Robust error handling separates amateur scripts from production-quality ones. Bash provides several mechanisms for catching and responding to errors.

Exit Codes

# Every command returns an exit code: 0 = success, non-zero = failure
ls /nonexistent 2>/dev/null
echo "Exit code: $?"           # Exit code: 2

# Set your script's exit code
exit 0    # Success
exit 1    # General error
exit 2    # Misuse of shell command
exit 126  # Command not executable
exit 127  # Command not found
exit 128  # Invalid exit argument

# Convention: signals add 128 (e.g., SIGKILL = 9, so exit 137)

Bash Strict Mode

#!/usr/bin/env bash
set -euo pipefail

# set -e: Exit immediately if any command returns non-zero
# set -u: Treat unset variables as errors
# set -o pipefail: Pipeline fails if ANY command in the pipe fails

# Without -e, errors are silently ignored:
#   false          # Returns 1, script continues
#   echo "Still running"

# With -e, the script exits at 'false'

# Common pattern: intentionally allow a command to fail
set +e            # Temporarily disable -e
risky_command
result=$?
set -e            # Re-enable -e

# Or use || true to suppress the error
might_fail || true

# Or handle the error explicitly
if ! might_fail; then
    echo "Command failed, but we handled it"
fi

Trap: Catching Signals and Errors

#!/usr/bin/env bash
set -euo pipefail

# trap runs a command when a signal is received
# Common signals: EXIT, ERR, INT (Ctrl+C), TERM, HUP

# Clean up on exit (always runs, even on error)
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

# Log errors with line numbers
trap 'echo "Error on line $LINENO, exit code $?" >&2' ERR

# Handle Ctrl+C gracefully
trap 'echo "Interrupted, cleaning up..."; exit 130' INT

# Complex cleanup
cleanup() {
    local exit_code=$?
    echo "Cleaning up..."
    rm -rf "$tmpdir"
    # Restore terminal settings, kill background processes, etc.
    kill $(jobs -p) 2>/dev/null || true
    exit $exit_code
}
trap cleanup EXIT

# Trap can be reset
trap - ERR    # Remove the ERR trap

# Practical error handler
die() {
    echo "FATAL: $*" >&2
    exit 1
}

[[ -f "$config_file" ]] || die "Config file not found: $config_file"

Error Handling Patterns

#!/usr/bin/env bash
set -euo pipefail

# Pattern 1: Guard clauses
check_requirements() {
    command -v docker >/dev/null 2>&1 || die "docker is not installed"
    command -v jq >/dev/null 2>&1 || die "jq is not installed"
    [[ -f ".env" ]] || die ".env file not found"
}

# Pattern 2: Retry logic
retry() {
    local max_attempts=$1
    local delay=$2
    shift 2
    local attempt=1

    while (( attempt <= max_attempts )); do
        if "$@"; then
            return 0
        fi
        echo "Attempt $attempt/$max_attempts failed. Retrying in ${delay}s..." >&2
        sleep "$delay"
        (( attempt++ ))
    done

    echo "All $max_attempts attempts failed" >&2
    return 1
}

# Usage: retry 3 5 curl -sf http://example.com/health

# Pattern 3: Validate inputs
validate_port() {
    local port="$1"
    if ! [[ "$port" =~ ^[0-9]+$ ]]; then
        die "Invalid port: $port (must be a number)"
    fi
    if (( port < 1 || port > 65535 )); then
        die "Port out of range: $port (must be 1-65535)"
    fi
}

# Pattern 4: Logging with levels
readonly LOG_LEVEL="${LOG_LEVEL:-INFO}"

log() {
    local level="$1"
    shift
    local timestamp
    timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "[$timestamp] [$level] $*" >&2
}

log_info()  { log "INFO" "$@"; }
log_warn()  { log "WARN" "$@"; }
log_error() { log "ERROR" "$@"; }

13. Regular Expressions: grep, sed, and awk

Bash scripts frequently use regular expressions for text processing. The three essential tools are grep (search), sed (search and replace), and awk (field processing).

Regex in Bash Tests

# The =~ operator in [[ ]] performs regex matching
email="user@example.com"
if [[ "$email" =~ ^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$ ]]; then
    echo "Valid email"
fi

# Capture groups are stored in BASH_REMATCH
version="v3.12.4"
if [[ "$version" =~ ^v([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
    echo "Major: ${BASH_REMATCH[1]}"    # 3
    echo "Minor: ${BASH_REMATCH[2]}"    # 12
    echo "Patch: ${BASH_REMATCH[3]}"    # 4
fi

# Validate an IP address
ip="192.168.1.100"
if [[ "$ip" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
    echo "Looks like an IP address"
fi

grep: Search for Patterns

# Basic search
grep "error" /var/log/syslog

# Case-insensitive search
grep -i "error" /var/log/syslog

# Extended regex (ERE)
grep -E "error|warning|critical" /var/log/syslog

# Show line numbers
grep -n "TODO" *.py

# Show only matching part
grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+' version.txt

# Count matches
grep -c "error" /var/log/syslog

# List files containing the pattern
grep -rl "import requests" --include="*.py" .

# Invert match (lines NOT matching)
grep -v "^#" config.txt    # Non-comment lines

# Context: show lines around matches
grep -A 3 -B 1 "Exception" app.log    # 1 before, 3 after

# Fixed string (not regex)
grep -F "price: $10.00" data.txt

# Recursive search with file type filter
grep -rn "TODO\|FIXME\|HACK" --include="*.{py,js,sh}" .

sed: Stream Editor

# Replace first occurrence on each line
sed 's/old/new/' file.txt

# Replace ALL occurrences on each line
sed 's/old/new/g' file.txt

# Replace in-place (modify the file)
sed -i 's/old/new/g' file.txt

# Replace in-place with backup
sed -i.bak 's/old/new/g' file.txt

# Delete lines matching a pattern
sed '/^#/d' config.txt          # Delete comment lines
sed '/^$/d' file.txt            # Delete empty lines

# Print only matching lines (like grep)
sed -n '/error/p' log.txt

# Replace on specific lines
sed '3s/old/new/' file.txt      # Only line 3
sed '1,5s/old/new/g' file.txt   # Lines 1-5

# Insert and append lines
sed '2i\New line before line 2' file.txt    # Insert before line 2
sed '2a\New line after line 2' file.txt     # Append after line 2

# Multiple operations
sed -e 's/foo/bar/g' -e 's/baz/qux/g' file.txt

# Use different delimiter (useful when pattern contains /)
sed 's|/usr/local/bin|/opt/bin|g' file.txt

# Extract text between patterns
sed -n '/START/,/END/p' file.txt

# Remove leading/trailing whitespace
sed 's/^[[:space:]]*//;s/[[:space:]]*$//' file.txt

awk: Field Processing

# Print specific fields (whitespace-separated by default)
awk '{print $1, $3}' file.txt

# Custom field separator
awk -F',' '{print $1, $2}' data.csv
awk -F':' '{print $1, $3}' /etc/passwd

# Filter rows
awk '$3 > 100 {print $1, $3}' data.txt

# Built-in variables
awk '{print NR, NF, $0}' file.txt
# NR = line number, NF = number of fields, $0 = entire line

# Sum a column
awk '{sum += $2} END {print "Total:", sum}' sales.txt

# Calculate average
awk '{sum += $1; count++} END {print "Average:", sum/count}' numbers.txt

# Pattern matching
awk '/error/ {print $0}' log.txt
awk '/^[0-9]/ {print $1}' data.txt

# BEGIN and END blocks
awk 'BEGIN {print "Name,Score"} {print $1 "," $2} END {print "---Done---"}' data.txt

# Conditional formatting
awk '{
    if ($3 > 90) grade = "A"
    else if ($3 > 80) grade = "B"
    else if ($3 > 70) grade = "C"
    else grade = "F"
    print $1, grade
}' scores.txt

# Unique values (like sort | uniq)
awk '!seen[$1]++' file.txt

# Transpose columns to rows
awk '{for (i=1; i<=NF; i++) a[i] = a[i] " " $i} END {for (i in a) print a[i]}' file.txt

14. Script Arguments and getopts

Positional Parameters

#!/usr/bin/env bash

# $0 = script name, $1-$9 = first 9 args, ${10}+ for more
echo "Script: $0"
echo "First arg: $1"
echo "All args: $@"
echo "Arg count: $#"

# shift moves all arguments left by N positions
echo "Before shift: $1 $2 $3"
shift        # $2 becomes $1, $3 becomes $2, etc.
echo "After shift: $1 $2"

# Check for required arguments
if [[ $# -lt 2 ]]; then
    echo "Usage: $0 <input_file> <output_file>" >&2
    exit 1
fi

input_file="$1"
output_file="$2"

getopts: Short Options

#!/usr/bin/env bash

# getopts handles short options (-v, -f filename, -h)
# The colon after a letter means it takes an argument

usage() {
    echo "Usage: $0 [-v] [-f file] [-n count] [-h]"
    echo "  -v          Verbose mode"
    echo "  -f file     Input file"
    echo "  -n count    Number of iterations"
    echo "  -h          Show this help"
    exit 1
}

verbose=false
input_file=""
count=1

while getopts "vf:n:h" opt; do
    case "$opt" in
        v) verbose=true ;;
        f) input_file="$OPTARG" ;;
        n) count="$OPTARG" ;;
        h) usage ;;
        ?) usage ;;    # Unknown option
    esac
done

# Remove processed options, leaving remaining arguments
shift $((OPTIND - 1))

echo "Verbose: $verbose"
echo "Input file: $input_file"
echo "Count: $count"
echo "Remaining args: $@"

Long Options with a while/case Loop

#!/usr/bin/env bash

# For long options (--verbose, --file=name), use a manual parser

usage() {
    cat <<EOF
Usage: $0 [OPTIONS] <target>

Options:
    -v, --verbose          Enable verbose output
    -f, --file FILE        Input file path
    -n, --count NUM        Number of iterations (default: 1)
    -o, --output DIR       Output directory
        --dry-run          Show what would be done without doing it
    -h, --help             Show this help message

Examples:
    $0 --verbose --file input.txt target_name
    $0 -f data.csv -n 5 --dry-run production
EOF
    exit 0
}

# Defaults
verbose=false
input_file=""
count=1
output_dir="."
dry_run=false

while [[ $# -gt 0 ]]; do
    case "$1" in
        -v|--verbose)
            verbose=true
            shift
            ;;
        -f|--file)
            input_file="$2"
            shift 2
            ;;
        --file=*)
            input_file="${1#*=}"
            shift
            ;;
        -n|--count)
            count="$2"
            shift 2
            ;;
        -o|--output)
            output_dir="$2"
            shift 2
            ;;
        --dry-run)
            dry_run=true
            shift
            ;;
        -h|--help)
            usage
            ;;
        --)
            shift
            break
            ;;
        -*)
            echo "Unknown option: $1" >&2
            usage
            ;;
        *)
            break
            ;;
    esac
done

# Validate required arguments
if [[ $# -lt 1 ]]; then
    echo "Error: target is required" >&2
    usage
fi

target="$1"
echo "Target: $target, Verbose: $verbose, File: $input_file"

15. Practical Examples

Here are complete, production-ready scripts that demonstrate bash scripting patterns in real-world scenarios.

Backup Script

#!/usr/bin/env bash
set -euo pipefail

# Configuration
readonly BACKUP_SRC="${BACKUP_SRC:-/var/www}"
readonly BACKUP_DEST="${BACKUP_DEST:-/backups}"
readonly RETENTION_DAYS="${RETENTION_DAYS:-30}"
readonly TIMESTAMP=$(date +%Y%m%d_%H%M%S)
readonly BACKUP_NAME="backup_${TIMESTAMP}.tar.gz"
readonly LOG_FILE="/var/log/backup.log"

log() {
    local msg="[$(date '+%Y-%m-%d %H:%M:%S')] $*"
    echo "$msg" | tee -a "$LOG_FILE"
}

die() {
    log "FATAL: $*"
    exit 1
}

# Ensure backup directory exists
mkdir -p "$BACKUP_DEST" || die "Cannot create backup directory"

# Check disk space (require at least 1GB free)
free_space=$(df -BG "$BACKUP_DEST" | awk 'NR==2 {gsub("G",""); print $4}')
if (( free_space < 1 )); then
    die "Insufficient disk space: ${free_space}GB free"
fi

log "Starting backup of $BACKUP_SRC"

# Create the backup
if tar czf "${BACKUP_DEST}/${BACKUP_NAME}" \
    --exclude='*.log' \
    --exclude='node_modules' \
    --exclude='.git' \
    "$BACKUP_SRC" 2>> "$LOG_FILE"; then

    backup_size=$(du -h "${BACKUP_DEST}/${BACKUP_NAME}" | cut -f1)
    log "Backup created: ${BACKUP_NAME} (${backup_size})"
else
    die "Backup failed"
fi

# Remove old backups
log "Removing backups older than ${RETENTION_DAYS} days"
find "$BACKUP_DEST" -name "backup_*.tar.gz" -mtime +"$RETENTION_DAYS" -delete

# Count remaining backups
remaining=$(find "$BACKUP_DEST" -name "backup_*.tar.gz" | wc -l)
log "Backup complete. ${remaining} backups stored."

Log Parser

#!/usr/bin/env bash
set -euo pipefail

# Parse nginx access logs and generate a summary report

readonly LOG_FILE="${1:-/var/log/nginx/access.log}"
readonly REPORT_FILE="${2:-/tmp/log_report_$(date +%Y%m%d).txt}"

[[ -f "$LOG_FILE" ]] || { echo "Log file not found: $LOG_FILE" >&2; exit 1; }

{
    echo "================================================="
    echo "  Nginx Access Log Report"
    echo "  Generated: $(date '+%Y-%m-%d %H:%M:%S')"
    echo "  Log file: $LOG_FILE"
    echo "================================================="
    echo ""

    total_requests=$(wc -l < "$LOG_FILE")
    echo "Total Requests: $total_requests"
    echo ""

    echo "--- Top 10 Requested URLs ---"
    awk '{print $7}' "$LOG_FILE" | sort | uniq -c | sort -rn | head -10
    echo ""

    echo "--- HTTP Status Code Distribution ---"
    awk '{print $9}' "$LOG_FILE" | sort | uniq -c | sort -rn
    echo ""

    echo "--- Top 10 IP Addresses ---"
    awk '{print $1}' "$LOG_FILE" | sort | uniq -c | sort -rn | head -10
    echo ""

    echo "--- Top 10 User Agents ---"
    awk -F'"' '{print $6}' "$LOG_FILE" | sort | uniq -c | sort -rn | head -10
    echo ""

    echo "--- Requests Per Hour ---"
    awk '{
        split($4, a, ":")
        hour = a[2]
        hours[hour]++
    }
    END {
        for (h in hours)
            printf "%s:00 - %d requests\n", h, hours[h]
    }' "$LOG_FILE" | sort
    echo ""

    # 4xx and 5xx errors
    errors=$(awk '$9 ~ /^[45]/' "$LOG_FILE" | wc -l)
    echo "Total 4xx/5xx Errors: $errors"

    if (( errors > 0 )); then
        echo ""
        echo "--- Top Error URLs ---"
        awk '$9 ~ /^[45]/ {print $9, $7}' "$LOG_FILE" |
            sort | uniq -c | sort -rn | head -10
    fi

} > "$REPORT_FILE"

echo "Report saved to: $REPORT_FILE"
cat "$REPORT_FILE"

Deployment Script

#!/usr/bin/env bash
set -euo pipefail

# Simple deployment script for a web application
# Usage: ./deploy.sh [--env staging|production] [--branch main] [--dry-run]

readonly SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
readonly APP_NAME="myapp"

# Defaults
environment="staging"
branch="main"
dry_run=false

# Parse arguments
while [[ $# -gt 0 ]]; do
    case "$1" in
        --env)       environment="$2"; shift 2 ;;
        --branch)    branch="$2"; shift 2 ;;
        --dry-run)   dry_run=true; shift ;;
        *)           echo "Unknown option: $1" >&2; exit 1 ;;
    esac
done

# Configuration per environment
case "$environment" in
    staging)
        deploy_dir="/var/www/staging/${APP_NAME}"
        service_name="${APP_NAME}-staging"
        ;;
    production)
        deploy_dir="/var/www/production/${APP_NAME}"
        service_name="${APP_NAME}-production"
        ;;
    *)
        echo "Unknown environment: $environment" >&2
        exit 1
        ;;
esac

log() { echo "[$(date '+%H:%M:%S')] $*"; }
run() {
    log "Running: $*"
    if [[ "$dry_run" == "false" ]]; then
        "$@"
    else
        log "(dry run - skipped)"
    fi
}

log "Deploying ${APP_NAME} to ${environment}"
log "Branch: ${branch}"
log "Deploy directory: ${deploy_dir}"

# Pre-flight checks
command -v git >/dev/null 2>&1 || { echo "git is required" >&2; exit 1; }
[[ -d "$deploy_dir" ]] || { echo "Deploy dir does not exist" >&2; exit 1; }

# Pull latest code
log "Pulling latest code..."
run git -C "$deploy_dir" fetch origin
run git -C "$deploy_dir" checkout "$branch"
run git -C "$deploy_dir" pull origin "$branch"

# Install dependencies
log "Installing dependencies..."
if [[ -f "${deploy_dir}/package.json" ]]; then
    run npm --prefix "$deploy_dir" ci --production
fi

# Run database migrations
if [[ -f "${deploy_dir}/migrate.sh" ]]; then
    log "Running database migrations..."
    run bash "${deploy_dir}/migrate.sh"
fi

# Restart the service
log "Restarting ${service_name}..."
run sudo systemctl restart "$service_name"

# Health check
log "Running health check..."
sleep 3
if curl -sf "http://localhost:8080/health" > /dev/null 2>&1; then
    log "Deployment successful! Service is healthy."
else
    log "WARNING: Health check failed!"
    exit 1
fi

⚙ Automate it: Schedule your backup and maintenance scripts with cron. Use our Crontab Generator to build the schedule expression, or paste an existing one into the Cron Expression Parser to verify when it runs.

16. Debugging Techniques

set -x: Trace Execution

#!/usr/bin/env bash

# Enable tracing for the entire script
set -x

echo "This command will be printed before executing"
name="World"
echo "Hello, $name"

# Or enable tracing for just a section
set +x    # Disable tracing

echo "This won't show the trace"

set -x    # Re-enable
echo "Tracing is back on"
set +x

# Run a script with tracing from the command line
# bash -x script.sh

# Custom trace prefix (shows file, line number, function)
export PS4='+${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]:+${FUNCNAME[0]}():} '
set -x

ShellCheck: Static Analysis

# Install shellcheck
# Ubuntu/Debian: sudo apt install shellcheck
# macOS: brew install shellcheck
# Or use the online version: https://www.shellcheck.net/

# Run shellcheck on your script
shellcheck script.sh

# ShellCheck catches common issues:
# SC2086: Double quote to prevent globbing and word splitting
# SC2046: Quote this to prevent word splitting
# SC2006: Use $(...) notation instead of legacy backticks
# SC2004: $/${} is unnecessary on arithmetic variables
# SC2034: Variable appears unused
# SC2162: read without -r will mangle backslashes
# SC2164: Use cd ... || exit in case cd fails

# Exclude specific checks
shellcheck -e SC2086 script.sh

# Check as a specific shell
shellcheck --shell=bash script.sh

# Integrate into CI (exits non-zero if issues found)
shellcheck -f gcc script.sh    # GCC-compatible output for editors

Debugging Strategies

#!/usr/bin/env bash

# Strategy 1: Print variable state at key points
debug() {
    if [[ "${DEBUG:-false}" == "true" ]]; then
        echo "[DEBUG] $*" >&2
    fi
}

debug "Processing file: $file"
debug "Array contents: ${arr[*]}"

# Run with: DEBUG=true ./script.sh

# Strategy 2: Trap ERR to show where failures happen
trap 'echo "ERROR at ${BASH_SOURCE}:${LINENO} (exit code: $?)" >&2' ERR

# Strategy 3: Print a stack trace on error
stacktrace() {
    local i=0
    echo "Stack trace:" >&2
    while caller $i >&2; do
        (( i++ ))
    done
}
trap stacktrace ERR

# Strategy 4: Use a debug log file
exec 5>> /tmp/debug.log    # Open fd 5 for debug logging
echo "Starting at $(date)" >&5
echo "PID: $$" >&5

# Strategy 5: Validate assumptions early
assert() {
    if ! "$@"; then
        echo "Assertion failed: $*" >&2
        exit 1
    fi
}

assert test -d /var/www    # use 'test', not '[[', so it runs as a command
assert command -v docker

# Strategy 6: Dry run mode
DRY_RUN="${DRY_RUN:-false}"
execute() {
    if [[ "$DRY_RUN" == "true" ]]; then
        echo "[DRY RUN] Would execute: $*" >&2
    else
        "$@"
    fi
}

17. Best Practices and Common Pitfalls

Script Header Template

#!/usr/bin/env bash
#
# script_name.sh - Brief description of what this script does
#
# Usage: script_name.sh [OPTIONS] <required_arg>
#
# Options:
#   -v, --verbose    Enable verbose output
#   -h, --help       Show this help
#
# Examples:
#   script_name.sh -v input.txt
#   script_name.sh --output /tmp input.txt
#

set -euo pipefail

readonly SCRIPT_NAME="$(basename "$0")"
readonly SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"

Essential Best Practices

  - Start every script with #!/usr/bin/env bash and set -euo pipefail.
  - Quote every expansion: "$var", "$@", "${arr[@]}".
  - Prefer [[ ]] over [ ], and $(...) over legacy backticks.
  - Declare function variables with local; mark true constants readonly.
  - Clean up with trap '...' EXIT so temp files never outlive the script.
  - Run shellcheck on every script before it ships.

Common Pitfalls

# PITFALL 1: Unquoted variables with spaces
file="my file.txt"
rm $file           # WRONG: deletes 'my' and 'file.txt'
rm "$file"         # CORRECT: deletes 'my file.txt'

# PITFALL 2: for loop over command output
# WRONG: splits on ALL whitespace, not just newlines
for f in $(find . -name "*.txt"); do
    echo "$f"
done

# CORRECT: use while + read
while IFS= read -r f; do
    echo "$f"
done < <(find . -name "*.txt")

# PITFALL 3: Testing empty variables
# WRONG: syntax error if $var is empty
if [ $var == "value" ]; then echo "yes"; fi

# CORRECT: use [[ ]] or quote the variable
if [[ "$var" == "value" ]]; then echo "yes"; fi

# PITFALL 4: cd without error handling
cd /some/directory    # If this fails, all subsequent commands
rm -rf *              # run in the WRONG directory!

# CORRECT:
cd /some/directory || exit 1

# PITFALL 5: Parsing ls output
# WRONG: breaks on filenames with spaces, newlines, etc.
for f in $(ls *.txt); do echo "$f"; done

# CORRECT: use a glob
for f in *.txt; do
    [[ -f "$f" ]] || continue
    echo "$f"
done

# PITFALL 6: Using echo with user input (use printf instead)
# WRONG: -n or -e in user input changes echo's behavior
echo "$user_input"

# CORRECT:
printf '%s\n' "$user_input"

# PITFALL 7: Variable assignment in pipelines
# WRONG: the while loop runs in a subshell
count=0
cat file.txt | while IFS= read -r line; do
    (( count++ ))
done
echo "$count"    # Always 0!

# CORRECT: use process substitution or redirection
count=0
while IFS= read -r line; do
    (( count++ ))
done < file.txt
echo "$count"    # Correct count

# PITFALL 8: Arithmetic with leading zeros
num="08"
echo $(( num + 1 ))    # ERROR: 08 is invalid octal

# CORRECT: force base-10 interpretation
echo $(( 10#$num + 1 ))    # 9 (force base-10 interpretation)

# PITFALL 9: Not handling the case where a glob matches nothing
# If no .txt files exist, this processes the literal string "*.txt"
for f in *.txt; do
    echo "$f"
done

# CORRECT: use nullglob or check if the file exists
shopt -s nullglob
for f in *.txt; do
    echo "$f"
done
shopt -u nullglob

# Or:
for f in *.txt; do
    [[ -f "$f" ]] || continue
    echo "$f"
done

Performance Tips

# Use built-in string operations instead of external commands
# SLOW: forks a subprocess
ext=$(echo "$filename" | sed 's/.*\.//')

# FAST: bash built-in
ext="${filename##*.}"

# SLOW: forks cat and wc
lines=$(cat file.txt | wc -l)

# FAST: redirect instead of piping cat
lines=$(wc -l < file.txt)

# SLOW: calling an external program in a loop
for i in {1..1000}; do
    result=$(echo "$i * 2" | bc)
done

# FAST: use bash arithmetic
for i in {1..1000}; do
    result=$(( i * 2 ))
done

# For heavy text processing, use awk instead of bash loops
# SLOW: bash loop reading line by line
total=0
while IFS= read -r line; do
    value=$(echo "$line" | cut -d',' -f3)
    total=$(( total + value ))
done < data.csv

# FAST: single awk invocation
total=$(awk -F',' '{sum += $3} END {print sum}' data.csv)

# Use mapfile/readarray for reading files into arrays
# SLOW:
while IFS= read -r line; do
    lines+=("$line")
done < file.txt

# FAST:
mapfile -t lines < file.txt

18. Frequently Asked Questions

What is the difference between bash and sh?

sh (Bourne Shell) is the original Unix shell from 1979 and adheres to the POSIX standard. Bash (Bourne Again Shell) is a superset of sh that adds features like arrays, associative arrays, string manipulation operators, extended globbing, process substitution, and many other conveniences. Scripts written for sh will run in bash, but bash scripts that use bash-specific features (like [[ ]], arrays, or ${var//pattern/replacement}) will not run in sh. For maximum portability across all Unix systems, use #!/bin/sh and stick to POSIX features. For practical scripting on systems where bash is available (virtually all Linux distributions), use #!/usr/bin/env bash and take advantage of the richer feature set.
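For example, arrays are a bash extension. The same one-liner succeeds under bash but fails under a strict POSIX shell such as dash (if installed):

bash -c 'arr=(one two three); echo "${arr[1]}"'   # two
dash -c 'arr=(one two three); echo "${arr[1]}"'   # Syntax error: "(" unexpected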

How do I debug a bash script?

Use set -x at the top of your script (or run it with bash -x script.sh) to print every command before it executes, showing variable expansions and the actual commands being run. Use set -e to exit immediately on any error rather than continuing in a broken state. Use set -u to treat references to unset variables as errors instead of silent empty strings. Combine them as set -euxo pipefail for the strictest debugging mode. For static analysis, run shellcheck on your script to catch common mistakes, quoting issues, and portability problems before you even run the code. You can also add trap 'echo "Error on line $LINENO" >&2' ERR to print the exact line where an error occurs.
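A compact header you can paste at the top of a script while investigating combines these techniques:

#!/usr/bin/env bash
set -euxo pipefail    # strict mode plus command tracing
trap 'echo "Error on line $LINENO (exit $?)" >&2' ERR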

How do I pass arguments to a bash script?

Arguments passed on the command line are accessed using positional parameters: $1 is the first argument, $2 the second, and so on up to $9. For arguments beyond 9, use ${10}, ${11}, etc. $0 is the script name itself, $# is the total argument count, $@ expands to all arguments as separate quoted words (almost always what you want), and $* expands to all arguments as a single string. For scripts that accept named options like -v or --file=name, use getopts for short options or write a while/case loop for long options. Always validate that required arguments are present and display a usage message when they are missing or incorrect.
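A minimal sketch of the usual pattern:

#!/usr/bin/env bash
if [[ $# -lt 1 ]]; then
    echo "Usage: $0 <name> [greeting]" >&2
    exit 1
fi
name="$1"
greeting="${2:-Hello}"    # optional second argument with a default
echo "$greeting, $name!"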

What does set -euo pipefail do in bash?

This combination is known as bash strict mode and enables three critical safety options. set -e (errexit) causes the script to exit immediately if any command returns a non-zero exit code, instead of blindly continuing. set -u (nounset) makes references to unset variables an error, catching typos like $naem instead of $name that would otherwise silently expand to an empty string. set -o pipefail causes a pipeline (like cmd1 | cmd2 | cmd3) to return the exit code of the last command that failed, rather than only the exit code of the final command. Together, these catch the three most common classes of scripting bugs and should be at the top of every production bash script.
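The pipefail part is easiest to see interactively:

# Without pipefail, the pipeline reports the LAST command's status:
false | true; echo $?      # 0

# With pipefail, the failure inside the pipe is surfaced:
set -o pipefail
false | true; echo $?      # 1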

How do I read a file line by line in bash?

The correct way is while IFS= read -r line; do echo "$line"; done < filename.txt. Setting IFS= (empty) prevents leading and trailing whitespace from being stripped from each line. The -r flag prevents backslash sequences from being interpreted (without it, \n in the text would be converted to a newline). Never use for line in $(cat file) because it splits on all whitespace characters (not just newlines), which breaks lines containing spaces and loads the entire file into memory at once. For processing structured data like CSV, you can use a custom IFS: while IFS=, read -r col1 col2 col3; do ... ; done < data.csv. For very large files where performance matters, consider using awk instead of a bash loop.

Conclusion

Bash scripting is one of the most practical and enduring skills in software development. Every server you SSH into, every CI/CD pipeline you configure, every Docker image you build, and every cron job you schedule runs through bash or a compatible shell. The concepts in this guide — variables, arrays, conditionals, loops, functions, I/O redirection, error handling, regex processing, and argument parsing — cover the full range of what you need to write scripts that are robust, maintainable, and production-ready.

If you are new to bash, start with the fundamentals: write scripts that automate your repetitive daily tasks. Create a backup script, a log cleaner, or a project setup tool. Use set -euo pipefail from day one so you build good habits. Run shellcheck on every script to learn the common pitfalls before they become bugs in production.

If you are already comfortable with the basics, push into the advanced territory: associative arrays for configuration, trap for bulletproof cleanup, getopts and while/case loops for professional argument parsing, and process substitution for elegant data pipelines. Learn to recognize when a task has outgrown bash (complex data structures, JSON parsing, heavy computation) and reach for Python, Go, or another language instead.

The best scripts are the ones that save time, prevent mistakes, and work reliably at 3 AM when you are not watching them. Write defensively, test thoroughly, and document clearly. Your future self and your teammates will thank you.

⚙ Essential tools: Schedule your scripts with the Crontab Generator, decode existing cron expressions with the Cron Expression Parser, test regex patterns with the Regex Tester, and keep our Bash Cheat Sheet bookmarked.

Related Resources

Crontab Generator
Build cron schedule expressions for automating scripts
Cron Expression Parser
Decode existing cron expressions into plain English
Git Complete Guide
Master version control from basics to advanced workflows
Docker Complete Guide
Containerize your applications and orchestrate deployments
Bash Cheat Sheet
Quick reference for bash commands, syntax, and patterns
Regex Tester
Test regex patterns used in grep, sed, and awk