Cheatsheets
Quick reference guides for common commands and procedures.
Systemd vs OpenRC
[linux] Service management commands for both init systems
Service Control
# systemd                   | OpenRC
systemctl start nginx       | rc-service nginx start        # Start a service
systemctl stop nginx        | rc-service nginx stop         # Stop a service
systemctl restart nginx     | rc-service nginx restart      # Restart a service
systemctl status nginx      | rc-service nginx status       # Check status
systemctl enable nginx      | rc-update add nginx default   # Enable at boot
systemctl disable nginx     | rc-update del nginx default   # Disable at boot

System Information
systemctl list-units        | rc-status                     # List running services
systemctl list-unit-files   | rc-update show                # List all services
journalctl -u nginx         | cat /var/log/nginx/*.log      # View service logs
journalctl -f               | tail -f /var/log/messages     # Follow system logs
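Scripts that must run on both init systems can dispatch on the init type. A minimal sketch (the `svc_cmd` helper name is hypothetical; it echoes the command rather than running it, so you can inspect or `eval` it):

```shell
#!/bin/sh
# Print the service-control command for the given init system.
# Usage: svc_cmd <systemd|openrc> <action> <service>
svc_cmd() {
  init=$1; action=$2; service=$3
  case $init in
    systemd) echo "systemctl $action $service" ;;
    openrc)  echo "rc-service $service $action" ;;  # note: OpenRC puts the action last
    *)       echo "unknown init: $init" >&2; return 1 ;;
  esac
}

svc_cmd systemd restart nginx   # prints: systemctl restart nginx
```

In a real script you would detect the init system first, e.g. with `command -v systemctl >/dev/null`.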
Docker Essential Commands
[docker] Advanced Docker operations: buildx, multi-platform, compose watch, airgapped transfers, layer analysis, and BuildKit secrets
docker ps -a --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'  # List all containers with custom table format (names, status, ports)
docker run -d --name web -p 80:80 --restart unless-stopped nginx  # Run container with auto-restart policy
docker exec -it web /bin/sh  # Interactive shell in container (sh for alpine-based images)
docker logs --since 5m --tail 100 web  # Last 100 log lines from the past 5 minutes
docker compose up -d  # Start compose stack in background
docker compose watch  # Watch for file changes and auto-sync/rebuild (Compose Watch mode)
docker compose --profile monitoring up -d  # Start only services tagged with the 'monitoring' profile
docker buildx build --platform linux/amd64,linux/arm64 -t myimg:latest --push .  # Multi-platform build and push with buildx (requires builder instance)
docker buildx create --name multiarch --driver docker-container --use  # Create a buildx builder instance with the docker-container driver
docker build --secret id=mysecret,src=./secret.txt .  # BuildKit secret mount -- accessible in the Dockerfile via RUN --mount=type=secret,id=mysecret
docker run --mount type=tmpfs,destination=/tmp,tmpfs-size=256m myimg  # Mount tmpfs in container (RAM-backed, never hits disk -- good for sensitive scratch data)
docker diff web  # Show filesystem changes in container vs its image (A=added, C=changed, D=deleted)
docker commit web myimg:snapshot  # Create image from running container state (debugging, not production workflows)
docker save myimg:latest | gzip > myimg.tar.gz  # Export image to tarball for airgapped transfer
docker load < myimg.tar.gz  # Import image from tarball on airgapped host
docker stats --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}'  # Live stats with custom columns (CPU%, memory, network I/O)
docker inspect --format '{{json .State.Health}}' web | jq .  # Inspect container health check status and log
docker history --no-trunc myimg:latest  # Show full layer history with untruncated commands (find bloat sources)
docker system df -v  # Disk usage breakdown: images, containers, volumes, build cache with details
docker context create remote --docker 'host=ssh://[email protected]'  # Create Docker context for remote engine over SSH
docker context use remote  # Switch to remote context -- all subsequent commands run on that engine
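The BuildKit secret entry above needs a matching `RUN --mount` in the Dockerfile. A minimal sketch (image base, secret id, and the `wc -c` stand-in command are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.20
# The secret is mounted at /run/secrets/mysecret only for the duration of
# this RUN step; it never ends up in a layer or in `docker history`.
RUN --mount=type=secret,id=mysecret \
    wc -c /run/secrets/mysecret
```

Build it with `docker build --secret id=mysecret,src=./secret.txt .` as shown in the table.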
Kubernetes kubectl Commands
[kubernetes] Power-user kubectl: debug containers, RBAC testing, JSONPath output, strategic patches, CRD inspection, and drain operations
kubectl get pods -A -o wide  # List pods across all namespaces with node placement and IPs
kubectl get nodes -o wide  # List nodes with kernel version, container runtime, and IPs
kubectl apply -f manifest.yaml  # Apply configuration (create or update)
kubectl logs mypod -f --tail=200  # Follow pod logs starting from the last 200 lines
kubectl exec -it mypod -- /bin/sh  # Interactive shell in pod (/bin/sh works on minimal images that lack bash)
kubectl debug mypod -it --image=busybox --target=mycontainer  # Attach ephemeral debug container to running pod (no restart needed)
kubectl debug node/mynode -it --image=ubuntu  # Debug node by spawning privileged pod with host filesystem at /host
kubectl auth can-i create deployments --as=system:serviceaccount:default:mysa  # Test RBAC: check if a service account can create deployments
kubectl auth can-i '*' '*' --as=developer --namespace=staging  # Test if user 'developer' has wildcard access in the staging namespace
kubectl get events --sort-by=.lastTimestamp -A  # Cluster events sorted by time across all namespaces (first place to look for failures)
kubectl get pods -o jsonpath='{.items[*].metadata.name}'  # JSONPath: extract just pod names as a space-separated list
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'  # JSONPath: tab-separated name + phase for each pod
kubectl diff -f manifest.yaml  # Show diff between live state and manifest before applying
kubectl wait --for=condition=Ready pod/mypod --timeout=120s  # Block until pod is Ready or timeout (useful in CI pipelines)
KUBECONFIG=~/.kube/config-a:~/.kube/config-b kubectl config view --flatten > merged.yaml  # Merge multiple kubeconfig files into one
kubectl patch deploy myapp --type=json -p '[{"op":"replace","path":"/spec/replicas","value":5}]'  # JSON patch: precise field-level update without a full manifest
kubectl top nodes --sort-by=memory  # Node resource usage sorted by memory consumption
kubectl top pods -A --sort-by=cpu  # Pod resource usage across all namespaces sorted by CPU
kubectl rollout history deploy/myapp  # Show deployment revision history with change-cause annotations
kubectl rollout undo deploy/myapp --to-revision=3  # Roll back deployment to a specific revision
kubectl port-forward svc/myapp 8080:80  # Forward local 8080 to service port 80 (bypasses ingress for debugging)
kubectl cp mypod:/var/log/app.log ./app.log  # Copy file from pod to local filesystem
kubectl create token mysa --duration=1h  # Generate short-lived ServiceAccount token (K8s 1.24+ TokenRequest API)
kubectl get pods --field-selector=status.phase=Failed  # Field selector: list only Failed pods (server-side filtering)
kubectl get crd  # List all Custom Resource Definitions in the cluster
kubectl explain deploy.spec.strategy --recursive  # Show full schema for a resource field (recursive tree view)
kubectl drain mynode --grace-period=60 --ignore-daemonsets --delete-emptydir-data  # Drain node for maintenance: evict pods with 60s grace, skip DaemonSets
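Patches do not have to be inline JSON; `kubectl patch` also accepts `--patch-file`. A minimal sketch of a strategic merge patch (the deployment, container name, and image tag are hypothetical):

```yaml
# patch.yaml -- strategic merge patch; apply with:
#   kubectl patch deploy myapp --patch-file patch.yaml
spec:
  replicas: 5
  template:
    spec:
      containers:
      - name: myapp          # containers are matched by name in a strategic merge
        image: myapp:1.2.3
```

Keeping patches in files makes them reviewable and reusable in CI, unlike shell-quoted inline JSON.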
Git Workflow Commands
[git] Advanced Git: interactive rebase, bisect, reflog recovery, worktrees, pickaxe search, range-diff, and pre-commit hooks
git add -p  # Stage changes interactively hunk-by-hunk (review each diff before staging)
git commit -m 'message'  # Commit staged changes with message
git push origin main  # Push commits to remote
git pull --rebase  # Pull and replay local commits on top of upstream (cleaner history than merge)
git rebase -i HEAD~5  # Interactive rebase last 5 commits: reorder, squash, fixup, edit, or drop
git commit --fixup=abc123 && git rebase -i --autosquash HEAD~10  # Create fixup commit then auto-squash it into the target during rebase
git bisect start && git bisect bad && git bisect good v1.0  # Binary search for the commit that introduced a bug (then 'git bisect reset' when done)
git reflog  # Show local history of HEAD changes -- recover lost commits, undo bad rebases
git checkout HEAD@{3} -- path/to/file  # Restore file from reflog entry (3 HEAD movements ago)
git worktree add ../feature-branch feature/new  # Check out branch in a separate directory (work on two branches simultaneously)
git worktree list  # List all linked worktrees and their checked-out branches
git stash --include-untracked -m 'WIP: description'  # Stash tracked and untracked files with a label
git cherry-pick -x abc123  # Apply commit from another branch, appending '(cherry picked from ...)' to the message
git log --graph --oneline --all --decorate  # Full branch topology visualization across all refs
git diff --word-diff  # Inline word-level diff (highlights changed words, not whole lines)
git blame -L 10,20 path/to/file  # Show who last modified lines 10-20 with commit and date
git shortlog -sn --no-merges  # Leaderboard: commit count per author, sorted descending
git clean -fdx  # Remove all untracked files and directories including gitignored ones (nuclear option)
git remote prune origin  # Delete stale remote-tracking branches for deleted upstream branches
git show HEAD:path/to/file  # Print file contents at a specific commit without checking it out
git rev-parse --short HEAD  # Get short commit hash of HEAD (useful in scripts and CI)
git config core.hooksPath .githooks  # Set repo-local hooks directory (commit hooks versioned in repo)
git range-diff main~5..main feature~5..feature  # Compare two commit ranges side-by-side (review rebase changes)
git log -S'search_string' --oneline  # Pickaxe search: find commits that added or removed 'search_string'
git archive --format=tar.gz --prefix=myproject/ HEAD > release.tar.gz  # Create clean tarball from HEAD (no .git directory, ready for distribution)
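A versioned hooks directory (`core.hooksPath`) typically contains small gate scripts like this. A minimal sketch of a whitespace check for `.githooks/pre-commit` (the `staged_whitespace_ok` helper name is hypothetical):

```shell
#!/bin/sh
# .githooks/pre-commit -- enable once per clone with:
#   git config core.hooksPath .githooks
staged_whitespace_ok() {
  # exits non-zero when staged changes introduce whitespace errors
  git diff --cached --check >/dev/null 2>&1
}

# In the real hook, fail the commit when the check fails:
# staged_whitespace_ok || { echo "whitespace errors -- commit aborted" >&2; exit 1; }
```

Because the directory lives in the repo, every clone gets the same hooks after one `git config` command.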
SSH Quick Reference
[networking] SSH connections, tunnels, and key management
ssh user@host  # Basic SSH connection
ssh -p 2222 user@host  # Connect on non-standard port
ssh -i ~/.ssh/mykey user@host  # Connect with specific key
ssh -L 8080:localhost:80 user@host  # Local port forward
ssh -R 8080:localhost:80 user@host  # Remote port forward
ssh -D 1080 user@host  # SOCKS proxy
ssh -J jump@bastion user@target  # Jump host / ProxyJump
ssh-keygen -t ed25519 -C 'comment'  # Generate Ed25519 key
ssh-copy-id user@host  # Copy public key to host
ssh-add ~/.ssh/mykey  # Add key to agent
ssh-keyscan host >> ~/.ssh/known_hosts  # Add host key
scp file.txt user@host:/path/  # Copy file to remote
scp user@host:/path/file.txt .  # Copy file from remote
rsync -avz src/ user@host:/dest/  # Sync directory to remote
Network Debugging
[networking] Commands for troubleshooting network issues
ip addr  # Show IP addresses
ip route  # Show routing table
ip link  # Show network interfaces
ss -tulpn  # Show listening ports
ss -s  # Socket statistics summary
ping -c 4 host  # Ping host (4 packets)
traceroute host  # Trace route to host
mtr host  # Combined ping/traceroute
dig domain.com  # DNS lookup
dig @8.8.8.8 domain.com  # DNS lookup with specific server
nslookup domain.com  # Simple DNS lookup
curl -I https://example.com  # HTTP headers only
curl -v https://example.com  # Verbose HTTP request
wget -O- https://example.com  # Download to stdout
nc -zv host 80  # Test TCP port connectivity
tcpdump -i eth0 port 80  # Capture packets on port 80
iptables -L -n -v  # List firewall rules
nft list ruleset  # List nftables rules
ZFS Storage Commands
[storage] ZFS pool and dataset management
zpool status  # Show pool health status
zpool list  # List pools with capacity
zpool import  # List importable pools
zpool import -f tank  # Force import pool
zpool export tank  # Export pool
zpool scrub tank  # Start scrub
zpool history tank  # Show pool history
zfs list  # List datasets
zfs list -t snapshot  # List snapshots
zfs create tank/data  # Create dataset
zfs snapshot tank/data@backup  # Create snapshot
zfs rollback tank/data@backup  # Roll back to snapshot
zfs destroy tank/data@backup  # Delete snapshot
zfs send tank/data@snap | ssh host zfs recv tank/data  # Send snapshot to remote
zfs get all tank/data  # Show all properties
zfs set compression=lz4 tank/data  # Enable compression
zfs set quota=100G tank/data  # Set quota
Tailscale VPN Commands
[networking] Tailscale mesh VPN management
tailscale status  # Show connection status
tailscale ip  # Show Tailscale IP
tailscale ping hostname  # Ping via Tailscale
tailscale up  # Connect to network
tailscale down  # Disconnect from network
tailscale up --ssh  # Enable Tailscale SSH
tailscale up --advertise-routes=10.42.0.0/24  # Advertise subnet
tailscale up --accept-routes  # Accept advertised routes
tailscale up --exit-node=hostname  # Use exit node
tailscale netcheck  # Network connectivity check
tailscale debug derp-map  # Show DERP relay servers
tailscale file cp file.txt hostname:  # Send file via Taildrop
tailscale logout  # Log out and disconnect
Disk & Filesystem Commands
[storage] Disk management and filesystem operations
lsblk  # List block devices
lsblk -f  # List with filesystem info
fdisk -l  # List partitions
df -h  # Disk space usage
du -sh /path  # Directory size
du -sh * | sort -h  # Sorted directory sizes
ncdu /path  # Interactive disk usage
mount /dev/sdb1 /mnt  # Mount filesystem
umount /mnt  # Unmount filesystem
blkid  # Show UUIDs and labels
mkfs.ext4 /dev/sdb1  # Create ext4 filesystem
e2fsck -f /dev/sdb1  # Check ext4 filesystem
tune2fs -l /dev/sdb1  # Show ext4 superblock info
smartctl -a /dev/sda  # SMART health data
hdparm -tT /dev/sda  # Test disk speed
fstrim -v /  # TRIM unused blocks on an SSD filesystem
Linux Process Debugging
[linux] Real troubleshooting: strace, lsof, /proc inspection, perf, cgroups, journal/log analysis, and resource limiting
ps aux --sort=-%mem | head -20  # Top 20 processes by memory usage
pgrep -a nginx  # Find process by name with full command line
htop -p $(pgrep -d, nginx)  # Monitor only nginx processes in htop
strace -p PID -e trace=open,read,write  # Trace file I/O syscalls on running process (attach without restart)
strace -c -p PID  # Syscall summary: count, time, and errors per syscall (Ctrl+C to print report)
ltrace -c -p PID  # Library call profiling: which libc/shared-library functions are called most
lsof -i :8080  # Find which process is listening on port 8080
lsof +D /var/log  # All open files under /var/log (find what is writing to a directory)
ss -tlnp  # Listening TCP sockets with process names and PIDs
fuser -v /mnt/data  # Show all processes using a mount point (check before umount)
perf top -p PID  # Live CPU profiling for a specific process (find hot functions)
perf record -g -p PID -- sleep 30  # Record 30s of call-graph profiling data (view with perf report)
pmap -x PID  # Detailed memory map: shared, private, RSS per mapping
pidstat -d 1 5  # Per-process disk I/O stats every 1s for 5 samples
sar -r 1 5  # System memory utilization every 1s for 5 samples (needs sysstat)
cat /proc/PID/status  # Process status: threads, memory, capabilities, UID/GID, signal masks
ls -la /proc/PID/fd/  # List all open file descriptors (symlinks show actual files/sockets/pipes)
dmesg -T | tail -50  # Last 50 kernel messages with human-readable timestamps (OOM kills, hardware errors)
journalctl -u myservice --since '5 min ago'  # Recent service logs (systemd); OpenRC: tail -f /var/log/messages
ionice -c3 -p PID  # Set process to idle I/O class (only uses disk when nothing else needs it)
cpulimit -p PID -l 50  # Throttle process to 50% CPU (useful for runaway builds)
systemd-run --scope -p MemoryMax=512M -p CPUQuota=100% ./myapp  # Run command in a transient cgroup with memory/CPU limits (systemd systems)
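The `/proc/PID/fd` listing above is easy to wrap for quick leak-spotting. A minimal sketch (Linux only; the `fd_count` helper name is illustrative):

```shell
#!/bin/sh
# Count open file descriptors for a PID by listing /proc/PID/fd.
# A steadily growing count over time usually means an fd leak.
fd_count() {
  ls /proc/"$1"/fd 2>/dev/null | wc -l
}

fd_count $$   # fd count of the current shell
```

Run it in a loop (`watch -n1 'fd_count 1234'` style) to see whether the count climbs.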
tmux Session Management
[linux] Terminal multiplexer sessions, panes, windows, and scripting
tmux new -s work  # Create named session
tmux ls  # List active sessions
tmux attach -t work  # Attach to named session
tmux detach  # Detach from current session (or Ctrl-b d)
tmux kill-session -t work  # Destroy a session
tmux rename-session -t old new  # Rename a session
Ctrl-b %  # Split pane vertically
Ctrl-b "  # Split pane horizontally
Ctrl-b arrow  # Navigate between panes
Ctrl-b z  # Toggle pane zoom (fullscreen)
Ctrl-b c  # Create new window
Ctrl-b n / p  # Next / previous window
Ctrl-b ,  # Rename current window
Ctrl-b [  # Enter copy mode (scroll/select)
Ctrl-b x  # Kill current pane (with confirm)
tmux send-keys -t work 'cmd' Enter  # Send keystrokes to a session from a script
Vim/Neovim Quick Reference
[editors] Modes, motions, macros, search, buffers, and common operations
i / a / o  # Insert before cursor / after cursor / new line below
Esc  # Return to normal mode
dd / yy / p  # Delete line / yank line / paste
ciw / caw  # Change inner word / change a word (includes space)
w / b / e  # Jump forward word / back word / end of word
gg / G  # Go to first line / last line
0 / $ / ^  # Start of line / end of line / first non-blank
/pattern  # Search forward (n = next, N = prev)
:%s/old/new/g  # Replace all occurrences in file
:s/old/new/g  # Replace all on current line
qa ... q / @a  # Record macro into register a / replay it
:e filename  # Open file in new buffer
:bn / :bp  # Next buffer / previous buffer
:vs / :sp  # Vertical split / horizontal split
Ctrl-w h/j/k/l  # Navigate between splits
u / Ctrl-r  # Undo / redo
:wq / :q!  # Write and quit / force quit without saving
"ay / "ap  # Yank into register a / paste from register a
Ansible Ad-Hoc Commands
[devops] Common one-liners, inventory patterns, and module usage
ansible all -m ping  # Ping all hosts in inventory
ansible webservers -m ping  # Ping hosts in a group
ansible all -m shell -a 'uptime'  # Run shell command on all hosts
ansible all -m setup --tree /tmp/facts  # Gather facts and save to files
ansible all -m copy -a 'src=f.txt dest=/tmp/'  # Copy file to all hosts
ansible all -m apt -a 'name=nginx state=present' -b  # Install package via apt (become root)
ansible all -m service -a 'name=nginx state=started' -b  # Start a service
ansible all -m user -a 'name=deploy state=present' -b  # Create a user
ansible all -m file -a 'path=/opt/app state=directory' -b  # Create a directory
ansible all -i '10.42.0.10,' -m ping  # Ad-hoc against a single IP (note trailing comma)
ansible-playbook site.yml --check  # Dry-run a playbook
ansible-playbook site.yml --limit webservers  # Run playbook on a subset of hosts
ansible-playbook site.yml --tags deploy  # Run only tasks with a specific tag
ansible-inventory --graph  # Show inventory as a tree
ansible-vault encrypt secrets.yml  # Encrypt a file with Vault
ansible-vault view secrets.yml  # View encrypted file without decrypting to disk
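The group names used above (`all`, `webservers`) come from an inventory file. A minimal INI sketch (hosts, ports, and users are hypothetical):

```ini
; inventory.ini -- pass with: ansible -i inventory.ini webservers -m ping
[webservers]
web1.example.com
web2.example.com ansible_port=2222

[dbservers]
10.42.0.20 ansible_user=deploy

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```

Per-host variables sit after the hostname; group-wide defaults go in a `[group:vars]` section.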
Terraform CLI
[devops] Infrastructure-as-code workflow, state, and workspace management
terraform init  # Initialize working directory and download providers
terraform plan  # Preview changes without applying
terraform apply  # Apply changes (prompts for confirmation)
terraform apply -auto-approve  # Apply without confirmation prompt
terraform destroy  # Destroy all managed resources
terraform fmt -recursive  # Format all .tf files recursively
terraform validate  # Validate configuration syntax
terraform state list  # List all resources in state
terraform state show aws_instance.web  # Show details of a specific resource
terraform state mv old_name new_name  # Rename resource in state (no destroy/recreate)
terraform import aws_instance.web i-1234  # Import existing resource into state
terraform output  # Show all output values
terraform workspace list  # List workspaces
terraform workspace new staging  # Create and switch to new workspace
terraform workspace select production  # Switch to existing workspace
terraform taint aws_instance.web  # Mark resource for recreation on next apply (deprecated; prefer terraform apply -replace=aws_instance.web)
terraform untaint aws_instance.web  # Remove taint from resource
terraform plan -out=plan.tfplan  # Save plan to file for exact apply later
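State and workspaces only become shareable with a remote backend. A minimal HCL sketch (the bucket name and region are placeholders; the bucket must already exist):

```hcl
# backend.tf -- remote state in S3; run `terraform init` after adding this.
terraform {
  backend "s3" {
    bucket = "my-tf-state"            # assumption: pre-created bucket
    key    = "app/terraform.tfstate"  # object path within the bucket
    region = "us-east-1"
  }
}
```

With a remote backend, `terraform workspace new staging` stores each workspace's state under its own key prefix instead of on one developer's disk.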
Helm Package Manager
[devops] Kubernetes package management, releases, and chart operations
helm repo add bitnami https://charts.bitnami.com/bitnami  # Add a chart repository
helm repo update  # Refresh repo index cache
helm search repo nginx  # Search repos for a chart
helm install myrelease bitnami/nginx  # Install chart with release name
helm install myrelease bitnami/nginx -f values.yaml  # Install with custom values file
helm install myrelease bitnami/nginx --set service.type=NodePort  # Install with inline value override
helm upgrade myrelease bitnami/nginx  # Upgrade a release to the latest chart version
helm upgrade myrelease bitnami/nginx --reuse-values  # Upgrade keeping existing values
helm rollback myrelease 2  # Roll back to revision 2
helm uninstall myrelease  # Remove a release
helm list  # List deployed releases in current namespace
helm list -A  # List releases across all namespaces
helm history myrelease  # Show release revision history
helm template myrelease bitnami/nginx  # Render templates locally without installing
helm show values bitnami/nginx  # Show default values for a chart
helm get values myrelease  # Show values used by a deployed release
curl Power Usage
[networking] Advanced HTTP requests, auth, timing, certificates, and downloads
curl -s https://api.example.com/data | jq .  # Silent request, pipe to jq for formatting
curl -X POST -H 'Content-Type: application/json' -d '{"key":"val"}' URL  # POST JSON body with header
curl -X PUT -d @payload.json -H 'Content-Type: application/json' URL  # PUT with JSON from file
curl -u user:pass https://api.example.com/secure  # Basic auth request
curl -H 'Authorization: Bearer TOKEN' URL  # Bearer token auth
curl -o file.tar.gz https://example.com/file.tar.gz  # Download to file
curl -O -L https://example.com/file.tar.gz  # Download with remote filename, follow redirects
curl -w '%{http_code}' -o /dev/null -s URL  # Print only HTTP status code
curl -w 'dns: %{time_namelookup}s\ntcp: %{time_connect}s\ntotal: %{time_total}s\n' -o /dev/null -s URL  # Show timing breakdown
curl -k https://self-signed.example.com  # Skip TLS certificate verification
curl --cacert ca.pem --cert client.pem --key client-key.pem URL  # Mutual TLS with client cert
curl -x http://proxy:8080 URL  # Route through HTTP proxy
curl -L -C - -o largefile.iso URL  # Resume interrupted download
curl -F '[email protected]' https://example.com/upload  # Upload file via multipart form
jq JSON Processing
[devops] Filtering, mapping, type operations, and string interpolation
jq '.' file.json  # Pretty-print JSON
jq '.key' file.json  # Extract top-level key
jq '.nested.deep.value' file.json  # Extract nested value
jq '.items[]' file.json  # Iterate over array elements
jq '.items[] | .name' file.json  # Extract field from each array element
jq '.items[] | select(.status == "active")' file.json  # Filter array by condition
jq '.items | length' file.json  # Count array elements
jq '[.items[] | {name: .name, id: .id}]' file.json  # Reshape into new array of objects
jq '.items | map(.price) | add' file.json  # Sum values from array
jq '.items | sort_by(.name)' file.json  # Sort array by field
jq '.items | group_by(.category)' file.json  # Group array elements by field
jq -r '.name' file.json  # Raw output (no quotes around strings)
jq -r '.items[] | "\(.name): \(.value)"' file.json  # String interpolation
jq 'keys' file.json  # List all keys of an object
jq -r 'to_entries[] | "\(.key)=\(.value)"' file.json  # Convert object to key=value lines
jq -s '.[0] * .[1]' base.json override.json  # Deep merge two JSON files
OpenSSL Reference
[security] Certificate inspection, key generation, chain verification, and TLS testing
openssl x509 -in cert.pem -text -noout  # View certificate details
openssl x509 -in cert.pem -enddate -noout  # Check certificate expiry date
openssl x509 -in cert.pem -subject -issuer -noout  # Show subject and issuer
openssl genrsa -out key.pem 4096  # Generate 4096-bit RSA private key
openssl ecparam -genkey -name prime256v1 -out ec-key.pem  # Generate EC private key (P-256)
openssl req -new -key key.pem -out request.csr  # Create CSR from existing key
openssl req -new -newkey rsa:4096 -nodes -keyout key.pem -out request.csr  # Generate key and CSR in one step
openssl req -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem -days 365  # Generate self-signed cert
openssl verify -CAfile ca.pem cert.pem  # Verify certificate against CA
openssl verify -CAfile ca.pem -untrusted intermediate.pem cert.pem  # Verify full chain
openssl s_client -connect example.com:443 -servername example.com  # Test TLS connection with SNI
openssl s_client -connect example.com:443 -showcerts  # Show full certificate chain from server
openssl rsa -in key.pem -check  # Verify private key integrity
openssl pkcs12 -export -out bundle.p12 -inkey key.pem -in cert.pem  # Create PKCS#12 bundle
openssl dgst -sha256 file.bin  # SHA-256 hash of a file
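For monitoring, `openssl x509 -checkend` beats parsing `-enddate` output: it takes a window in seconds and answers with its exit code. A minimal sketch (the helper name and 14-day window are illustrative):

```shell
#!/bin/sh
# Succeeds when the certificate WILL expire within the given number of days.
# -checkend exits 0 if the cert is still valid past the window, 1 otherwise,
# so we invert it to get "expires soon" semantics.
cert_expires_within() {
  cert=$1; days=$2
  ! openssl x509 -checkend $((days * 86400)) -noout -in "$cert"
}

# if cert_expires_within /etc/ssl/site.pem 14; then echo "renew soon"; fi
```

Unlike date arithmetic on `-enddate`, this works identically on GNU and BSD systems.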
awk/sed One-Liners
[linux] Log parsing, field extraction, substitution, and multiline operations
awk '{print $1}' file.log  # Print first field (whitespace-delimited)
awk -F: '{print $1, $3}' /etc/passwd  # Print fields with custom delimiter
awk '$3 > 100 {print $0}' data.txt  # Print lines where field 3 exceeds 100
awk '{sum += $1} END {print sum}' numbers.txt  # Sum values in first column
awk '/ERROR/ {count++} END {print count}' app.log  # Count lines matching pattern
awk '!seen[$0]++' file.txt  # Remove duplicate lines (preserves order)
awk 'NR==10,NR==20' file.txt  # Print lines 10 through 20
sed 's/old/new/g' file.txt  # Replace all occurrences on each line
sed -i 's/old/new/g' file.txt  # In-place replacement
sed -n '10,20p' file.txt  # Print lines 10 through 20
sed '/^#/d' config.txt  # Delete comment lines
sed '/^$/d' file.txt  # Delete blank lines
sed -n '/START/,/END/p' file.txt  # Print between two patterns
sed 's/^[ \t]*//' file.txt  # Strip leading whitespace
awk '{print NR": "$0}' file.txt  # Add line numbers to output
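These one-liners compose into pipelines. A minimal sketch that counts log entries per level, assuming a hypothetical `TIMESTAMP LEVEL MESSAGE` line format:

```shell
#!/bin/sh
# Read a log on stdin, drop comments and blank lines, then tally field 2
# (the level) with an awk associative array.
error_summary() {
  sed -e '/^#/d' -e '/^$/d' \
    | awk '{count[$2]++} END {for (l in count) print l, count[l]}' \
    | sort
}
```

Example: `error_summary < app.log` might print `ERROR 12` and `INFO 340`, one level per line.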
iptables/nftables
[security] Firewall rules, chain management, NAT, port forwarding, and persistence
iptables -L -n -v  # List all rules with packet counts
iptables -A INPUT -p tcp --dport 22 -j ACCEPT  # Allow incoming SSH
iptables -A INPUT -p tcp --dport 80 -j ACCEPT  # Allow incoming HTTP
iptables -A INPUT -s 10.42.0.0/24 -j ACCEPT  # Allow traffic from local subnet
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # Allow established connections
iptables -P INPUT DROP  # Set default INPUT policy to drop
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  # NAT/masquerade outbound traffic
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j REDIRECT --to-port 80  # Redirect port 8080 to 80
iptables -D INPUT 3  # Delete rule number 3 from INPUT chain
iptables-save > /etc/iptables.rules  # Save current rules to file
iptables-restore < /etc/iptables.rules  # Restore rules from file
nft list ruleset  # Show all nftables rules
nft add table inet filter  # Create a new table
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'  # Create input chain with drop policy
nft add rule inet filter input tcp dport 22 accept  # Allow SSH in nftables
nft list ruleset > /etc/nftables.conf  # Save nftables rules
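The individual `nft add` commands above are usually collected into one declarative file. A minimal stateful-firewall sketch (the allowed subnet and ports are examples):

```
#!/usr/sbin/nft -f
# /etc/nftables.conf -- loaded at boot by nftables.service
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept   # stateful: allow return traffic
        iif lo accept                         # loopback is always fine
        tcp dport { 22, 80, 443 } accept      # SSH + web
        ip saddr 10.42.0.0/24 accept          # trusted local subnet
        icmp type echo-request accept         # allow ping
    }
}
```

Test it without persisting anything via `nft -c -f /etc/nftables.conf` (check mode).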
Podman vs Docker
[linux] Side-by-side comparison of container runtimes and where they differ
podman run -d --name web nginx  # Identical to docker run -- same CLI syntax for basic operations
podman build -t myimage .  # Build image -- same flags as docker build
podman run --userns=keep-id -v $HOME/data:/data:Z myimage  # Rootless with UID mapping -- Docker needs daemon config for rootless
podman generate systemd --new --name web  # Generate systemd unit from container -- no Docker equivalent
podman pod create --name mypod -p 8080:80  # Create a pod (shared network namespace) -- Docker uses docker-compose instead
podman pod ps  # List pods -- Docker alternative: docker compose ps
podman play kube pod.yaml  # Deploy from Kubernetes YAML -- Docker: kompose convert + compose up
podman generate kube mypod > pod.yaml  # Export running pod to K8s YAML -- no Docker equivalent
podman volume create data  # Create volume -- identical to docker volume create
podman unshare cat /proc/self/uid_map  # Inspect user namespace mapping -- specific to rootless Podman
podman system connection add remote ssh://[email protected]/run/podman/podman.sock  # Manage remote Podman -- Docker uses DOCKER_HOST or contexts
podman auto-update  # Pull and restart containers with the io.containers.autoupdate label -- Docker: Watchtower or manual
podman machine init && podman machine start  # Start Podman VM on macOS/Windows -- Docker Desktop equivalent
podman system prune -a  # Remove unused data -- identical to docker system prune -a
Bash Scripting Patterns
[linux] Arrays, string operations, conditionals, traps, heredocs, process substitution, parameter expansion, and strict mode
set -euo pipefail  # Strict mode: exit on error (-e), undefined vars (-u), pipe failures (-o pipefail)
arr=(alpha bravo charlie)  # Declare indexed array
echo "${arr[@]}" / echo "${#arr[@]}"  # Print all elements / print array length
declare -A map=([key1]=val1 [key2]=val2)  # Declare associative array (bash 4+)
echo "${map[key1]}" / echo "${!map[@]}"  # Access value by key / list all keys
${var%%pattern}  # Remove longest suffix match (e.g., ${file%%.*} strips all extensions)
${var#prefix}  # Remove shortest prefix match (e.g., ${path#*/} strips the first directory)
${var:offset:length}  # Substring extraction: ${str:2:5} gets 5 chars starting at index 2
${VAR:-default}  # Use 'default' if VAR is unset or empty (parameter default)
${VAR:?'error message'}  # Exit with error if VAR is unset or empty (parameter validation)
[[ -f /path/to/file ]]  # Test: true if regular file exists (-d for directory, -L for symlink)
[[ $str =~ ^[0-9]+$ ]]  # Regex match: true if string is all digits (capture groups in BASH_REMATCH)
while getopts 'vf:o:' opt; do case $opt in v) verbose=1;; f) file=$OPTARG;; esac; done  # Parse CLI flags: -v (boolean), -f FILE, -o OUTPUT (colon = requires argument)
trap cleanup EXIT  # Run cleanup function on script exit (any exit: normal, error, signal)
trap 'rm -f $tmpfile' EXIT INT TERM  # Clean up temp file on exit, Ctrl+C, or termination
cat <<'EOF' ... EOF  # Heredoc with single-quoted delimiter: no variable expansion ($VAR stays literal)
cat <<EOF ... EOF  # Heredoc with unquoted delimiter: variables and command substitutions are expanded
diff <(sort file1.txt) <(sort file2.txt)  # Process substitution: compare output of two commands as if they were files
echo {1..10} / echo {a..z} / mkdir dir_{01..12}  # Brace expansion: number ranges, letter ranges, bulk directory creation
tmpfile=$(mktemp /tmp/myapp.XXXXXX)  # Create secure temp file with random suffix (auto-unique, no race condition)
while IFS= read -r line; do echo "$line"; done < input.txt  # Read file line-by-line preserving whitespace and backslashes
printf '%-20s %10d\n' "$name" "$count"  # Formatted output: left-align string (20 chars), right-align integer (10 chars)
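Several of the patterns above fit naturally into one script skeleton. A minimal sketch (flag names and the sorting task are illustrative):

```bash
#!/usr/bin/env bash
# Strict mode + getopts + mktemp + EXIT trap in one small template.
set -euo pipefail

main() {
  local opt verbose=0 out=/dev/stdout OPTIND=1   # local OPTIND lets main be called repeatedly
  while getopts 'vo:' opt; do
    case $opt in
      v) verbose=1 ;;
      o) out=$OPTARG ;;
      *) return 2 ;;
    esac
  done
  shift $((OPTIND - 1))

  local tmpfile
  tmpfile=$(mktemp /tmp/demo.XXXXXX)
  trap "rm -f $tmpfile" EXIT INT TERM   # expanded now, so the path survives the function return

  printf '%s\n' "$@" | sort > "$tmpfile"
  if [[ $verbose -eq 1 ]]; then
    echo "sorted $# arguments" >&2
  fi
  cat "$tmpfile" > "$out"
}

main -v charlie alpha bravo   # prints alpha, bravo, charlie (one per line)
```

Note the trap uses double quotes deliberately: under `set -u`, expanding `$tmpfile` at trap time would fail once the local variable is gone.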
rsync Deep Reference
[linux] Checksum modes, bandwidth limits, incremental snapshots with --link-dest, partial transfers, daemon mode, and itemized output
rsync -avz src/ [email protected]:/dest/ Archive mode + compress over SSH (trailing / on src means contents, not directory itself) rsync -avz --dry-run src/ dest/ Preview what would transfer without changing anything rsync -avz --checksum src/ dest/ Compare files by checksum instead of mtime+size (slower but catches silent corruption) rsync -avz --size-only src/ dest/ Skip files that match in size regardless of mtime (fast sync for known-good sources) rsync -avz --bwlimit=5m src/ dest/ Throttle bandwidth to 5 MB/s (prevent saturating the link) rsync -avz -P src/ dest/ Show per-file progress + keep partial files on interrupt (-P = --partial --progress) rsync -avz --exclude-from=exclude.txt src/ dest/ Exclude patterns from file (one pattern per line, supports wildcards) rsync -avz --include='*.log' --exclude='*' src/ dest/ Include only .log files, exclude everything else (order matters) rsync -avz --delete-after src/ dest/ Delete extraneous files from dest AFTER transfer completes (safer than --delete-before) rsync -avz --delete-before src/ dest/ Delete extraneous files from dest BEFORE transfer (frees space first) rsync -avz --backup --backup-dir=/backups/$(date +%F) src/ dest/ Move replaced/deleted files to dated backup directory instead of destroying them rsync -avz --itemize-changes src/ dest/ Show per-file change codes (>f..T = file transferred, timestamp differs) rsync -avz -e 'ssh -p 2222' src/ [email protected]:/dest/ Use SSH on non-standard port rsync -avz --rsync-path='sudo rsync' src/ [email protected]:/dest/ Run rsync as root on remote side (preserve ownership of root-owned files) rsync -avz --compress-level=9 src/ dest/ Maximum compression (CPU-expensive, useful on slow links) rsync -avz --link-dest=../previous-backup/ src/ dest/current-backup/ Incremental snapshot: unchanged files are hardlinked to previous backup (space-efficient) rsync -avz --files-from=filelist.txt / dest/ Transfer only files listed in filelist.txt (one path per line, relative to 
source) rsync -avz rsync://mirror.example.com/repo/ /local/mirror/ Pull from rsync daemon (no SSH, uses rsync:// protocol on port 873) systemd Unit Files
systemd Unit Files
[linux] Writing services: dependencies, restart policies, security hardening, dynamic users, and capability restrictions. For OpenRC init scripts, see the Systemd vs OpenRC comparison.
[Unit] After=network-online.target Start after network is fully up (not just interfaces loaded) [Unit] Wants=network-online.target Weakly depend on network target (service starts even if target fails) [Unit] BindsTo=docker.service Strong dependency: stop this unit if docker.service stops [Service] Type=simple Default: systemd considers service started immediately after ExecStart [Service] Type=forking Service forks into background -- systemd tracks via PIDFile= [Service] Type=oneshot Service runs once and exits (use with RemainAfterExit=yes for 'active' state) [Service] Type=notify Service sends sd_notify() when ready (most robust readiness detection) [Service] Restart=on-failure Restart only on non-zero exit code, signal death, or watchdog timeout [Service] RestartSec=5 Wait 5 seconds between restart attempts (avoid crash loops hammering resources) [Service] ExecStartPre=/usr/bin/myapp --validate-config Run config validation before starting the main process [Service] Environment=NODE_ENV=production Set environment variable inline [Service] EnvironmentFile=/etc/myapp/env Load environment variables from file (KEY=VALUE format, one per line) [Service] DynamicUser=yes Auto-allocate ephemeral user/group for this service (no useradd needed) [Service] StateDirectory=myapp Create /var/lib/myapp owned by service user (pairs with DynamicUser) [Service] LogsDirectory=myapp Create /var/log/myapp owned by service user [Service] ProtectSystem=strict Mount entire filesystem read-only (whitelist writable paths with ReadWritePaths=) [Service] ProtectHome=yes Make /home, /root, /run/user inaccessible to service [Service] NoNewPrivileges=true Prevent privilege escalation via setuid/setgid binaries [Service] PrivateTmp=yes Give service its own /tmp (isolate from other services) [Service] ReadWritePaths=/var/lib/myapp Allow writes to specific paths when ProtectSystem=strict [Service] CapabilityBoundingSet=CAP_NET_BIND_SERVICE Drop all capabilities except binding to ports < 1024 
[Service] SystemCallFilter=@system-service  # Allow only the syscalls typical services need (blocks dangerous ones)
[Install] WantedBy=multi-user.target  # Enable the service in normal multi-user boot (equivalent to runlevel 3)
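Combined, the directives above form a complete hardened unit. A minimal sketch, assuming a hypothetical /usr/local/bin/myapp binary (swap in your own paths and service name):

```ini
# /etc/systemd/system/myapp.service -- illustrative; binary path and names are placeholders
[Unit]
Description=myapp hardened service
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp
Restart=on-failure
RestartSec=5
# Leading '-' tolerates a missing env file instead of failing the unit
EnvironmentFile=-/etc/myapp/env
DynamicUser=yes
StateDirectory=myapp
LogsDirectory=myapp
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=true
# Both lines are needed for a non-root (DynamicUser) process to actually bind ports < 1024
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
SystemCallFilter=@system-service

[Install]
WantedBy=multi-user.target
```

After installing, score the sandbox with systemd-analyze security myapp.service; use Type=notify instead of simple only if the binary supports sd_notify().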
SSH Client Config
[security] ~/.ssh/config patterns: ProxyJump, multiplexing, tunnels, SOCKS proxy, wildcard hosts, conditional blocks, and agent forwarding
Host myserver\n HostName 10.42.0.10\n User argo\n Port 22  # Basic host entry: 'ssh myserver' expands to the full connection details
IdentityFile ~/.ssh/id_ed25519_work  # Use a specific key for this host (overrides default key selection)
Host bastion\n HostName 10.42.0.1\nHost internal\n ProxyJump bastion  # Jump host: 'ssh internal' automatically tunnels through bastion
ProxyJump user@jump1,user@jump2  # Multi-hop jump: chain through two bastion hosts
ControlMaster auto  # Reuse an existing SSH connection for new sessions to the same host (no re-auth)
ControlPath ~/.ssh/sockets/%r@%h-%p  # Socket path for multiplexed connections (%r=user, %h=host, %p=port)
ControlPersist 600  # Keep the master connection alive for 600s after the last session closes
ServerAliveInterval 60  # Send a keepalive every 60s (prevents NAT/firewall timeout disconnects)
ServerAliveCountMax 3  # Disconnect after 3 missed keepalives (3 x 60s = 3 min dead-connection detection)
ForwardAgent yes  # Forward the local SSH agent to the remote host (WARNING: remote root can hijack your keys)
LocalForward 5432 db.internal:5432  # Forward local port 5432 to db.internal:5432 through the SSH tunnel
RemoteForward 9090 localhost:9090  # Expose local port 9090 on the remote host (reverse tunnel)
DynamicForward 1080  # SOCKS5 proxy on local :1080; route browser/app traffic through the remote host
Match exec "hostname | grep -q workstation"  # Conditional block: apply settings only when the local hostname matches
Include ~/.ssh/config.d/*  # Split the config into multiple files (organize by project/environment)
StrictHostKeyChecking accept-new  # Auto-accept new host keys, reject changed ones (safer than 'no', saner than 'ask')
Host *.internal\n User deploy\n IdentityFile ~/.ssh/internal_key  # Wildcard pattern: apply settings to all .internal hosts
LogLevel QUIET  # Suppress banners and warnings (use for scripted SSH commands in cron/CI)
Host *\n AddKeysToAgent yes\n IdentitiesOnly yes  # Global defaults: auto-add keys to the agent, only try explicitly configured keys
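The patterns above compose into one file. A sketch of a working ~/.ssh/config reusing the table's example hosts (IPs, user names, and key paths are the table's placeholders; specific Host blocks must come before the Host * defaults):

```
# ~/.ssh/config -- illustrative sketch
Host bastion
    HostName 10.42.0.1
    User argo

Host *.internal
    ProxyJump bastion
    User deploy
    IdentityFile ~/.ssh/internal_key

Host *
    AddKeysToAgent yes
    IdentitiesOnly yes
    ServerAliveInterval 60
    ServerAliveCountMax 3
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 600
```

Create the socket directory first (mkdir -p ~/.ssh/sockets), or ControlMaster connections will fail.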
nmap Network Scanning
[security] SYN scans, version detection, OS fingerprinting, NSE scripts, UDP, output formats, timing, and subnet sweeps
nmap -sn 10.42.0.0/24  # Ping sweep: discover live hosts without port scanning
nmap -sS 10.42.0.10  # SYN scan (half-open): fast default scan, requires root/raw sockets
nmap -sT 10.42.0.10  # TCP connect scan: full handshake, works without root but slower and logged
nmap -sV 10.42.0.10  # Version detection: probe open ports to identify service/version
nmap -O 10.42.0.10  # OS fingerprinting: guess the target OS from TCP/IP stack behavior
nmap -A 10.42.0.10  # Aggressive scan: OS detection + version + scripts + traceroute (noisy)
nmap -p- 10.42.0.10  # Scan all 65535 TCP ports (the default scans only the top 1000)
nmap --top-ports 100 10.42.0.10  # Scan the 100 most common ports (fast reconnaissance)
nmap -sU --top-ports 50 10.42.0.10  # UDP scan of the top 50 ports (slow; UDP requires timeout-based detection)
nmap --script vuln 10.42.0.10  # Run vulnerability detection scripts (CVE checks, default creds)
nmap --script ssl-enum-ciphers -p 443 10.42.0.10  # Enumerate TLS cipher suites and grade them (find weak crypto)
nmap --script http-title -p 80,443,8080 10.42.0.0/24  # Grab HTTP page titles across a subnet (quick web service inventory)
nmap -oX scan.xml 10.42.0.10  # XML output (import into Metasploit, OpenVAS, or parse with scripts)
nmap -oG - 10.42.0.10  # Grepable output to stdout (pipe to grep/awk for quick filtering)
nmap --open 10.42.0.10  # Show only open ports (filter out closed/filtered noise)
nmap -T4 10.42.0.10  # Timing template: T4=aggressive (faster), T1=sneaky (IDS evasion)
nmap --reason 10.42.0.10  # Show why each port state was determined (syn-ack, no-response, reset)
nmap -6 ::1  # IPv6 scan (full IPv6 address or hostname with an AAAA record)
nmap --traceroute 10.42.0.10  # Show network hops to the target alongside scan results
diff <(nmap -sS 10.42.0.10 -oG -) <(nmap -sS 10.42.0.11 -oG -)  # Compare open ports between two hosts (quick drift detection)
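Grepable output (-oG) is easy to post-process. A sketch of an awk filter that pulls open ports out of it, run here on a canned sample line so it works without a live scan (the open_ports function name is ours, not nmap's):

```shell
# Parse "port/service" pairs for open ports from nmap grepable output.
open_ports() {
    awk '/Ports:/ {
        split($0, parts, "Ports: ")              # everything after "Ports: "
        n = split(parts[2], entries, ", ")       # one entry per scanned port
        open = ""
        for (i = 1; i <= n; i++)
            if (entries[i] ~ /\/open\//) {
                split(entries[i], f, "/")        # f[1]=port, f[5]=service
                open = (open == "" ? "" : open " ") f[1] "/" f[5]
            }
        print $2, "open:", open                  # $2 is the host address
    }'
}

# Canned sample standing in for: nmap -sS --open -oG - 10.42.0.10
sample='Host: 10.42.0.10 () Ports: 22/open/tcp//ssh///, 80/open/tcp//http///, 443/closed/tcp//https///'
printf '%s\n' "$sample" | open_ports
```

The same function works on saved scans: nmap -oG scan.txt target, then open_ports < scan.txt.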
GPG/Age Encryption
[security] GPG key management, file encryption/signing, age as a modern alternative, SOPS integration, and git commit signing
gpg --full-gen-key  # Generate a GPG keypair interactively (choose algorithm, size, expiry)
gpg --list-keys --keyid-format short  # List public keys with short key IDs
gpg --list-secret-keys  # List private keys in your keyring
gpg --export -a 'My Name' > pubkey.asc  # Export a public key to an ASCII-armored file (share with others)
gpg --import pubkey.asc  # Import someone's public key into your keyring
gpg --encrypt -r [email protected] file.txt  # Encrypt a file for a specific recipient (produces file.txt.gpg)
gpg --decrypt file.txt.gpg > file.txt  # Decrypt a file with your private key
gpg -c file.txt  # Symmetric encryption with a passphrase (no public key needed)
gpg --detach-sign file.txt  # Create a detached signature, file.txt.sig (proves you authored/approved the file)
gpg --verify file.txt.sig file.txt  # Verify a detached signature against the file
gpg --edit-key [email protected] trust  # Set the trust level for an imported key (ultimate, full, marginal, untrusted)
gpg --keyserver keys.openpgp.org --send-keys KEYID  # Publish a public key to a keyserver
gpg --keyserver keys.openpgp.org --recv-keys KEYID  # Fetch a public key from a keyserver by ID
age-keygen -o key.txt  # Generate an age keypair (modern GPG alternative: simpler, no config)
age -r age1publickey... -o file.enc file.txt  # Encrypt a file with an age recipient public key
age -d -i key.txt -o file.txt file.enc  # Decrypt a file with an age private key
age -p -o file.enc file.txt  # Encrypt with a passphrase (no keys needed, interactive prompt)
sops --encrypt --age age1publickey... secrets.yaml > secrets.enc.yaml  # SOPS: encrypt YAML values (keys stay plaintext) with an age key
sops secrets.enc.yaml  # SOPS: decrypt into $EDITOR, re-encrypt on save
git config --global commit.gpgsign true  # Sign all git commits with GPG (use with git config user.signingkey KEYID)
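For the SOPS rows, a .sops.yaml creation-rules file in the repo root lets SOPS pick recipients by path instead of passing --age on every invocation. A sketch, reusing the age1publickey... placeholder from the table and assuming encrypted files live under secrets/:

```yaml
# .sops.yaml -- illustrative; the age recipient is the table's placeholder
creation_rules:
  - path_regex: secrets/.*\.yaml$
    age: age1publickey...
```

With this in place, sops --encrypt secrets/app.yaml resolves the key from the matching rule.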
strace/lsof/perf Debugging
[linux] Syscall tracing, file descriptor analysis, CPU profiling, memory maps, deleted-but-open files, and bpftrace one-liners
strace -p PID  # Attach to a running process and print every syscall in real time
strace -c -p PID  # Syscall summary table: count, time, errors per call (Ctrl+C to print)
strace -e trace=network -p PID  # Trace only network syscalls (connect, sendto, recvfrom, accept)
strace -e trace=file -p PID  # Trace only file-related syscalls (open, stat, unlink, rename)
strace -ff -o /tmp/trace -p PID  # Per-thread output files: /tmp/trace.TID for each thread (multi-threaded apps)
strace -T -p PID  # Show time spent in each syscall (find slow I/O, blocking reads)
strace -e trace=open,openat -p PID 2>&1 | grep ENOENT  # Find missing files: filter for failed open() calls
ltrace -c -p PID  # Library call profiling: count and time per libc/shared-library function
lsof -p PID  # All open files for a process: regular files, sockets, pipes, devices
lsof -i TCP:80-443  # Processes with open TCP connections on ports 80 through 443
lsof +L1  # Find deleted-but-still-open files (common disk space leak: rm'd logs still held open)
lsof -i @10.42.0.10  # All connections to/from a specific IP address
perf stat -p PID -- sleep 10  # Hardware counter stats (IPC, cache misses, branch mispredicts) for 10 seconds
perf record -g -p PID -- sleep 30  # Record 30s of call-graph CPU samples (view with perf report)
perf report --sort=dso,symbol  # Analyze the recorded profile grouped by shared library and function
perf top -p PID  # Live top-like view of the hottest functions in a process
cat /proc/PID/maps  # Memory mappings: addresses, permissions, backing files for each region
cat /proc/PID/smaps_rollup  # Summarized memory: RSS, PSS (proportional), shared/private, swap
valgrind --tool=callgrind ./myapp  # Instruction-level profiling (view with kcachegrind); slow but precise
bpftrace -e 'tracepoint:syscalls:sys_enter_* { @[probe] = count(); }'  # Count all syscalls system-wide by type (eBPF, requires root)
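The -c summary can be approximated by hand from -T output, which is useful when you only saved a raw trace. A sketch of the aggregation with awk, fed a canned trace here so it runs without attaching to a live process:

```shell
# Canned strace -T lines standing in for a saved trace file
trace='read(3, "x", 4096) = 4096 <0.000120>
read(3, "x", 4096) = 0 <0.000080>
openat(AT_FDCWD, "/etc/hosts", O_RDONLY) = 3 <0.000300>'

# Split on '(', '<', '>' so $1 is the syscall name and $(NF-1) the elapsed time,
# then total calls and seconds per syscall, like strace -c does
printf '%s\n' "$trace" |
awk -F'[(<>]' '/</ {
    t[$1] += $(NF-1); n[$1]++
}
END { for (s in t) printf "%s calls=%d total=%.6f\n", s, n[s], t[s] }'
```

On a real trace: strace -T -o app.trace -p PID, then pipe app.trace through the same awk program.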
Podman Quadlet & Systemd
[linux] Quadlet .container/.volume/.network units, auto-update, rootless lingering, and Kubernetes YAML with Podman
[Container]\nImage=docker.io/nginx:latest  # Quadlet .container file: Image= specifies the container image to run
[Container]\nPublishPort=8080:80  # Quadlet: map host port 8080 to container port 80
[Container]\nVolume=mydata.volume:/data:Z  # Quadlet: mount a Quadlet-managed volume with SELinux relabel (:Z)
[Container]\nEnvironment=NODE_ENV=production  # Quadlet: set an environment variable in the container
[Container]\nNetwork=mynet.network  # Quadlet: attach the container to a Quadlet-managed network
[Container]\nLabel=io.containers.autoupdate=registry  # Quadlet: enable auto-update from the registry for this container
[Volume]\nDriver=local  # Quadlet .volume file: define a named volume (placed in ~/.config/containers/systemd/)
[Network]\nSubnet=10.42.0.0/24\nGateway=10.42.0.1  # Quadlet .network file: define a custom network with subnet and gateway
[Kube]\nYaml=/path/to/pod.yaml  # Quadlet .kube file: run Kubernetes YAML directly with Podman (pods, deployments)
[Install]\nWantedBy=default.target  # Quadlet: start the container on user login (rootless) or system boot (root)
systemctl --user daemon-reload  # Reload after adding/changing Quadlet files in ~/.config/containers/systemd/
systemctl --user start myapp.service  # Start a Quadlet container (filename myapp.container becomes myapp.service)
podman generate systemd --new --name web > web.service  # Legacy: generate a systemd unit from a running container (pre-Quadlet method)
podman auto-update --dry-run  # Check which containers have newer images available, without pulling
podman auto-update  # Pull newer images and restart containers carrying the autoupdate label
loginctl enable-linger $USER  # Allow rootless containers to run when the user is not logged in (required for Quadlet)
echo $XDG_RUNTIME_DIR  # Show the runtime dir path (rootless Podman sockets and state live here)
podman play kube pod.yaml  # Deploy Kubernetes YAML (Pod, Deployment, Service) directly with Podman
podman kube generate mypod > pod.yaml  # Export a running pod/container to Kubernetes YAML
podman system connection add remote ssh://[email protected]/run/user/1000/podman/podman.sock  # Add a rootless remote Podman connection over SSH