100+ Bash One-Liners Every Developer Should Know

February 12, 2026

The terminal is where developer productivity lives or dies. A well-crafted one-liner can replace a 50-line script, save you from writing throwaway programs, and turn a 20-minute manual task into a two-second command. The problem is remembering them all. This guide is the reference you bookmark and come back to every week — over 100 battle-tested one-liners organized by category, each with a plain-English explanation of what it does and why it works.

Every command here has been tested on Linux (Ubuntu/Debian) and most work on macOS with minor adjustments. Where a command requires GNU-specific tools (common on Linux but not default on macOS), a note is included.

⚙ Related tools: Test regex patterns with the Regex Tester, keep our Bash Shortcuts Cheat Sheet handy, and schedule automated scripts with the Crontab Generator.

Table of Contents

  1. File Operations
  2. Text Processing (grep, sed, awk, sort, uniq)
  3. JSON Processing (jq)
  4. Networking (curl, wget, ss, dig)
  5. System Monitoring (ps, df, du, free, lsof)
  6. Git One-Liners
  7. Docker One-Liners
  8. String Manipulation
  9. Process Management (xargs, jobs, signals)
  10. Date and Time
  11. Frequently Asked Questions

1. File Operations

Find files by name recursively:

find . -name "*.log" -type f

Find files modified in the last 24 hours:

find . -type f -mtime -1

Find and delete all .tmp files (dry run first with -print, then replace with -delete):

find . -name "*.tmp" -type f -delete

Find empty directories and remove them:

find . -type d -empty -delete

Rename all .jpeg files to .jpg in the current directory:

for f in *.jpeg; do mv "$f" "${f%.jpeg}.jpg"; done
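
If no .jpeg files exist, the loop runs once with the literal string `*.jpeg` and mv fails. A guarded sketch using bash's nullglob option (the filenames involved are whatever happens to be in the current directory):

```shell
# With nullglob set, an unmatched glob expands to nothing, so the loop
# body simply never runs when there are no .jpeg files.
shopt -s nullglob
for f in *.jpeg; do
  mv -- "$f" "${f%.jpeg}.jpg"   # -- guards against names starting with -
done
shopt -u nullglob
```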

Copy directory structure without files (create skeleton):

find src/ -type d -exec mkdir -p dest/{} \;

Find the 10 largest files in a directory tree:

find . -type f -exec du -h {} + | sort -rh | head -10

Count files by extension recursively (the -name '*.*' filter skips files with no extension, which would otherwise pollute the counts):

find . -type f -name '*.*' | sed 's/.*\.//' | sort | uniq -c | sort -rn


Find duplicate files by checksum (same content, different names):

find . -type f -exec md5sum {} + | sort | uniq -w32 -dD

Sync two directories (fast incremental copy):

rsync -avz --progress source/ destination/

Find files larger than 100MB:

find / -type f -size +100M -exec ls -lh {} \; 2>/dev/null

2. Text Processing

Search for a pattern recursively in all files (with line numbers):

grep -rn "TODO" --include="*.py" .

Count occurrences of a word in a file (-c counts matching lines, not matches, so count the -o output instead):

grep -o "error" logfile.txt | wc -l

Replace a string in all files recursively (GNU sed):

find . -name "*.py" -exec sed -i 's/old_function/new_function/g' {} +

Extract unique IP addresses from a log file:

grep -oE '\b[0-9]{1,3}(\.[0-9]{1,3}){3}\b' access.log | sort -u

Print lines between two patterns (inclusive):

sed -n '/START_MARKER/,/END_MARKER/p' config.txt

Remove blank lines from a file:

sed '/^$/d' file.txt

Print the Nth line of a file:

sed -n '42p' file.txt

Sum a column of numbers:

awk '{sum += $1} END {print sum}' numbers.txt
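
A closely related variant computes the average; the guard avoids dividing by zero on empty input (the numbers piped in here are made up for the demo):

```shell
# Track both the running sum and the line count, then divide at the end.
printf '10\n20\n30\n' | awk '{sum += $1; n++} END {if (n) print sum / n}'
# prints: 20
```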

Print lines where column 3 is greater than 100:

awk '$3 > 100' data.txt

Find the top 10 most frequent lines in a file:

sort file.txt | uniq -c | sort -rn | head -10

Remove duplicate lines while preserving order:

awk '!seen[$0]++' file.txt
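
Why this works: seen[$0]++ evaluates to the count *before* incrementing, which is 0 (false) the first time a line appears, so !seen[$0]++ is true exactly once per distinct line. A self-contained demo with fabricated input:

```shell
# Each line prints only on its first appearance; original order is kept.
printf 'a\nb\na\nc\nb\n' | awk '!seen[$0]++'
# prints: a, b, c (one per line)
```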

Convert tabs to spaces (4 spaces):

expand -t 4 file.txt > file_spaces.txt

3. JSON Processing (jq)

Pretty-print JSON:

jq '.' data.json

Extract a specific field from JSON:

jq '.name' package.json

Extract nested field (raw string, no quotes):

jq -r '.dependencies | keys[]' package.json

Filter an array by a condition:

jq '[.[] | select(.status == "active")]' users.json

Get the length of a JSON array:

jq '.items | length' response.json

Transform JSON — extract specific fields into a new shape:

jq '.users[] | {name: .name, email: .email}' data.json

Merge two JSON files:

jq -s '.[0] * .[1]' base.json override.json

Convert JSON array to CSV:

jq -r '.[] | [.id, .name, .email] | @csv' users.json

Update a field value in JSON:

jq '.version = "2.0.0"' package.json > tmp.json && mv tmp.json package.json

Count items grouped by a field:

jq 'group_by(.status) | map({status: .[0].status, count: length})' items.json

4. Networking

Check if a website is reachable and get the HTTP status code:

curl -o /dev/null -s -w "%{http_code}\n" https://example.com

Download a file with a progress bar:

curl -L -O --progress-bar https://example.com/file.tar.gz

Make a POST request with JSON body:

curl -s -X POST -H "Content-Type: application/json" -d '{"key":"value"}' https://api.example.com/endpoint

Measure response time of an HTTP request:

curl -o /dev/null -s -w "DNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" https://example.com

Find which process is listening on a specific port:

ss -tlnp | grep :8080

List all open network connections:

ss -tunap

DNS lookup for all record types (many servers now refuse ANY queries per RFC 8482, so query specific types like A or MX if this returns little):

dig example.com ANY +short

Check your public IP address:

curl -s ifconfig.me

Test if a TCP port is open on a remote host (no netcat needed):

timeout 3 bash -c '</dev/tcp/example.com/443 && echo Open || echo Closed' 2>/dev/null || echo Closed

Download an entire website for offline viewing:

wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://example.com

5. System Monitoring

Show top 10 processes by memory usage:

ps aux --sort=-%mem | head -11

Show top 10 processes by CPU usage:

ps aux --sort=-%cpu | head -11

Disk usage summary of current directory, sorted by size:

du -sh */ 2>/dev/null | sort -rh

Show filesystem disk space with human-readable sizes:

df -h --type=ext4 --type=xfs --type=btrfs

Watch memory usage in real time (refreshes every 2 seconds):

watch -n 2 free -h

Find all files open by a specific process:

lsof -p $(pgrep -f "process_name")

Show which process is using a file:

lsof /path/to/file

Count open file descriptors per process (top 10):

for pid in /proc/[0-9]*; do echo "$(ls "$pid/fd" 2>/dev/null | wc -l) $pid"; done | sort -rn | head -10

Check system uptime and load average:

uptime

Monitor a log file in real time, highlighting errors (the empty $ alternative matches every line, so nothing is filtered out but ERROR and WARN are colored):

tail -f /var/log/syslog | grep --color=always -E "ERROR|WARN|$"

6. Git One-Liners

Show a compact, colored log with graph:

git log --oneline --graph --decorate --all

Find commits that changed a specific file:

git log --follow -p -- path/to/file

Show who last modified each line of a file:

git blame -L 10,20 path/to/file

List all files changed between two branches:

git diff --name-only main..feature-branch

Find a commit that introduced a bug (binary search):

git bisect start && git bisect bad HEAD && git bisect good v1.0

Undo the last commit but keep changes staged:

git reset --soft HEAD~1

Stash changes with a descriptive name:

git stash push -m "WIP: refactoring auth module"

Delete all merged local branches (except main/master):

git branch --merged main | grep -vE "main|master|\*" | xargs -r git branch -d

Show the total lines of code per author:

git log --format='%aN' | sort -u | while read name; do echo -en "$name\t"; git log --author="$name" --pretty=tformat: --numstat | awk '{add+=$1; del+=$2} END {print add+0, "++", del+0, "--"}'; done

Search commit messages for a keyword:

git log --all --grep="fix login"

7. Docker One-Liners

Remove all stopped containers:

docker container prune -f

Remove all unused images, networks, and build cache:

docker system prune -af

Show disk usage by Docker:

docker system df -v

View live logs from a container (with timestamps):

docker logs -f --timestamps container_name

Get the IP address of a running container:

docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name

List all containers with their sizes:

docker ps -as --format "table {{.Names}}\t{{.Image}}\t{{.Size}}\t{{.Status}}"

Copy a file from a running container to the host:

docker cp container_name:/path/in/container /path/on/host

Run a one-off command in a new container and remove it:

docker run --rm -it alpine sh -c "apk add curl && curl -s ifconfig.me"

Stop all running containers:

docker stop $(docker ps -q)

Show environment variables in a running container:

docker exec container_name env

8. String Manipulation

Extract filename from a path:

echo "${filepath##*/}"

Extract directory from a path:

echo "${filepath%/*}"

Remove file extension:

echo "${filename%.*}"

Convert string to lowercase (bash 4+):

echo "${string,,}"

Convert string to uppercase (bash 4+):

echo "${string^^}"

Replace first occurrence of a substring:

echo "${string/old/new}"

Replace all occurrences of a substring:

echo "${string//old/new}"

Extract a substring (offset 5, length 10):

echo "${string:5:10}"

Get string length:

echo "${#string}"

Trim leading and trailing whitespace:

echo "  hello world  " | xargs

Generate a random alphanumeric string (32 characters):

tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; echo
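
On macOS, tr can abort with "tr: Illegal byte sequence" when fed raw urandom bytes in a UTF-8 locale; forcing the C locale makes the same command work on both platforms:

```shell
# LC_ALL=C makes tr treat input as raw bytes rather than UTF-8 characters.
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; echo
```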

9. Process Management

Run a command on multiple items in parallel (4 at a time):

find . -name "*.png" -print0 | xargs -0 -P 4 -I {} convert {} -resize 50% {}

Kill a process by name:

pkill -f "process_name"

Kill all processes on a specific port:

kill $(lsof -t -i :3000)

Run a command and retry on failure (up to 5 times):

for i in {1..5}; do command_here && break || sleep $((i*2)); done
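
The same idea can be wrapped in a reusable function (the `retry` name and the RETRY_MAX/RETRY_DELAY variables are this guide's invention, not a standard tool), with a delay that grows on each attempt:

```shell
# retry CMD [ARGS...] -- run CMD up to RETRY_MAX times (default 5),
# sleeping RETRY_DELAY * attempt seconds between failures.
retry() {
  local -i n max="${RETRY_MAX:-5}" delay="${RETRY_DELAY:-2}"
  for ((n = 1; n <= max; n++)); do
    "$@" && return 0                         # success: stop retrying
    (( n < max )) && sleep $(( delay * n ))  # back off before next attempt
  done
  return 1                                   # every attempt failed
}
```

Usage: retry curl -fsS https://example.com/health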

Run a command with a timeout (exits after 30 seconds):

timeout 30 long_running_command

Run a command in the background and disown it (survives terminal close):

nohup long_running_command > /tmp/output.log 2>&1 & disown

Wait for a port to become available before proceeding:

while ! ss -tlnp | grep -q :8080; do sleep 1; done; echo "Port 8080 is ready"

Execute a command for each line of a file:

while IFS= read -r line; do echo "Processing: $line"; done < urls.txt

Limit memory usage of a command (1GB max, Linux cgroups v2):

systemd-run --scope -p MemoryMax=1G command_here

Show a process tree for a specific PID:

pstree -p -s 12345

10. Date and Time

Current date and time in ISO 8601 format:

date -Iseconds

Convert Unix epoch to human-readable date (GNU date; on macOS use date -r 1707696000):

date -d @1707696000

Get current Unix timestamp:

date +%s

Calculate date N days ago (GNU date; macOS equivalent: date -v-7d +%Y-%m-%d):

date -d "7 days ago" +%Y-%m-%d

Calculate date N days from now (macOS: date -v+30d +%Y-%m-%d):

date -d "+30 days" +%Y-%m-%d

Measure execution time of a command:

time command_here

More precise timing with milliseconds (GNU date; %N is not supported on macOS):

start=$(date +%s%N); command_here; echo "$(( ($(date +%s%N) - start) / 1000000 ))ms"

Generate a timestamp-based filename:

cp database.sql "backup_$(date +%Y%m%d_%H%M%S).sql"

Find files modified between two dates:

find . -type f -newermt "2026-01-01" ! -newermt "2026-02-01"

Show a calendar for the current month:

cal

Frequently Asked Questions

What is the difference between $() and backticks in bash?

Both $() and backticks (` `) perform command substitution, capturing the output of a command as a string. However, $() is strongly preferred: it nests cleanly, as in echo $(cat $(find . -name config)), reads more easily, and avoids confusing escaping rules. Backticks require backslash escaping inside nested substitutions and are visually easy to confuse with single quotes. The $() syntax is POSIX compliant and works in all modern shells.
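
A minimal side-by-side sketch; the backtick equivalent of the nested form would be `echo inner: \`echo nested\``, with escapes:

```shell
# $() nests without any escaping.
echo "inner: $(echo nested)"
# prints: inner: nested
```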

How do I run a bash one-liner on every file in a directory?

Use a for loop: for f in *.txt; do echo "$f"; done. For recursive operations, use find with -exec: find . -name '*.log' -exec gzip {} \;. The \; runs gzip once per file; replace it with + to batch files into a single command for better performance. You can also pipe find to xargs: find . -name '*.tmp' -print0 | xargs -0 rm. The -print0 and -0 flags handle filenames with spaces and special characters safely. For parallel execution, add -P 4 to xargs.
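
The batching difference between \; and + is easy to see with echo standing in for a real command (the temp directory and filenames below are made up for the demo):

```shell
# Create two sample files in a scratch directory.
d=$(mktemp -d) && touch "$d/a.log" "$d/b.log"
find "$d" -name '*.log' -exec echo {} \;   # runs echo twice: one line per file
find "$d" -name '*.log' -exec echo {} +    # runs echo once: both files on one line
```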

How do I use awk to extract a specific column from output?

Use awk '{print $N}' where N is the column number. For example, ps aux | awk '{print $2}' prints the PID column. By default awk splits on whitespace. To use a different delimiter, use -F: awk -F: '{print $1}' /etc/passwd prints usernames. You can print multiple columns: awk '{print $1, $3}' separates them with a space. Awk also supports conditions: awk '$3 > 100 {print $1}' prints column 1 only for rows where column 3 exceeds 100.
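
A self-contained demo combining a custom delimiter with a condition (the rows below are fabricated):

```shell
# Colon-delimited rows; print field 1 wherever field 3 exceeds 100.
printf 'alice:ops:150\nbob:dev:42\ncarol:ops:300\n' |
  awk -F: '$3 > 100 {print $1}'
# prints: alice, carol (one per line)
```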

What is the difference between > and >> for output redirection?

The > operator redirects output to a file, overwriting its contents entirely. The >> operator appends to the file, preserving existing content. For example, echo hello > file.txt creates or overwrites file.txt, while echo world >> file.txt adds world after the existing content. Use 2> to redirect stderr and &> to redirect both stdout and stderr. Be careful with > on files you are also reading from in the same pipeline, as the shell truncates the file before the command runs.
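
The truncation pitfall and its fix, sketched with a throwaway file (the filename and contents are made up; note that if grep matches nothing, the && skips the mv and the temp file is left behind):

```shell
cd "$(mktemp -d)"                              # work in a scratch directory
printf 'ERROR disk full\nok all good\n' > file.txt
# grep ERROR file.txt > file.txt would empty the file before grep reads it.
# Safe in-place filter: write to a temp file, then rename over the original.
grep ERROR file.txt > file.txt.tmp && mv file.txt.tmp file.txt
```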

How do I chain multiple commands in a single bash line?

Use && to run the next command only if the previous one succeeded: mkdir build && cd build && cmake ... Use || to run a fallback if the previous command failed: grep -q pattern file || echo 'Not found'. Use ; to run commands sequentially regardless of exit status. Use | to pipe stdout of one command into stdin of the next: cat log.txt | grep ERROR | wc -l. You can group commands with { } (same shell) or ( ) (subshell) to redirect their combined output.
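
A small sketch of the grouping difference: braces run in the current shell, parentheses in a subshell, so state changes like cd do not leak out of ( ):

```shell
report=$(mktemp)
{ echo "header"; echo "body"; } > "$report"   # one combined redirect for the group
( cd /tmp )                                   # this cd is confined to the subshell
echo "still in: $PWD"                         # working directory is unchanged
```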

How do I safely handle filenames with spaces in bash one-liners?

Always double-quote your variables: rm "$file" instead of rm $file. When using find with xargs, use null-delimited output: find . -name '*.bak' -print0 | xargs -0 rm. In while-read loops, use: find . -print0 | while IFS= read -r -d '' f; do echo "$f"; done. The -d '' tells read to use the null character as the delimiter, matching -print0. Never rely on word splitting for filenames — this is the single most common source of bugs in shell scripts.
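
The null-delimited loop in full, runnable form (the directory and .bak filename below are invented; printf stands in for the destructive command):

```shell
d=$(mktemp -d) && touch "$d/old report.bak"   # a filename with a space
find "$d" -type f -name '*.bak' -print0 |
  while IFS= read -r -d '' f; do
    printf 'would remove: %s\n' "$f"          # "$f" is always quoted
  done
```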

What are the most useful bash keyboard shortcuts for writing one-liners?

Ctrl+A moves to the start of the line, Ctrl+E to the end. Ctrl+W deletes the previous word, Ctrl+U deletes before the cursor, Ctrl+K deletes after it. Ctrl+R searches command history interactively. Alt+. inserts the last argument from the previous command. !! repeats the entire last command (sudo !! is a classic). !$ references the last argument of the previous command. Ctrl+X Ctrl+E opens the current line in your editor for multi-line editing, which is essential when a one-liner grows complex.

Conclusion

These one-liners are tools you will reach for every day once they are in your muscle memory. Start with the ones relevant to your work — if you use Git daily, the Git section will pay off immediately; if you manage servers, focus on system monitoring and networking first. Bookmark this page, and the next time you find yourself writing a throwaway Python script to rename files or parse a log, check here first. The command line almost always has a faster way.

The key to mastering one-liners is understanding the building blocks: find for locating files, grep for searching content, sed for transforming text, awk for columnar data, xargs for bridging commands, and jq for JSON. Once you internalize how pipes connect these tools, you can compose new one-liners on the fly for any situation.
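
For instance, those building blocks compose into a quick log analysis: extract a column with awk, then count and rank with sort, uniq, and head. The access.log contents below are fabricated for the demo (in common log format, the request path is field 7):

```shell
cd "$(mktemp -d)"                 # scratch directory for the sample log
printf '%s\n' \
  '1.2.3.4 - - [12/Feb/2026:10:00:00 +0000] "GET /api/users HTTP/1.1" 200 512' \
  '1.2.3.4 - - [12/Feb/2026:10:00:01 +0000] "GET /api/users HTTP/1.1" 200 512' \
  '5.6.7.8 - - [12/Feb/2026:10:00:02 +0000] "GET /health HTTP/1.1" 200 16' > access.log
# Top 10 most-requested paths, busiest first.
awk '{print $7}' access.log | sort | uniq -c | sort -rn | head -10
```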

⚙ Keep going: Dive deeper with our Bash Scripting Complete Guide, master the terminal with the Linux Commands Guide, and test regex patterns with the Regex Tester.

Related Resources

Bash Scripting Complete Guide
Master bash from variables and loops to production scripts
Linux Commands Guide
Comprehensive reference for essential Linux terminal commands
Bash Shortcuts Cheat Sheet
Keyboard shortcuts and readline bindings for the terminal
Regex Tester
Test regex patterns used in grep, sed, and awk