Container Lab
Spin up real ephemeral containers in isolated environments
Your Active Labs
Choose a Lab Template
Each template provisions a real LXC container on isolated infrastructure
Linux Sandbox
Alpine Linux with bash, vim, curl
Container Workshop
Docker-in-container environment
Networking Lab
3 interconnected containers
IaC Playground
Ansible + target container
Active Lab
--- Challenges
Get familiar with the container environment
- Run cat /etc/os-release to see the distribution info
- Note the NAME and VERSION_ID fields – this is Alpine Linux
- Alpine is popular for containers because it is tiny (~5MB base image)
cat /etc/os-release
- Run df -h to display disk usage in human-readable format
- Look at the / (root) filesystem – that is your container's writable layer
- Notice how small the container filesystem is compared to a full VM
df -h
A table showing filesystems, their sizes, used/available space, and mount points
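The df step above is easy to script; a minimal sketch, assuming the standard df column layout (the -P flag requests POSIX format so each filesystem stays on one line and awk field positions are predictable):

```shell
#!/bin/sh
# Sketch: pull the root filesystem's total size and usage percentage
# out of df. NR == 2 skips the header line.
df -P / | awk 'NR == 2 { print "size_kb=" $2 " used=" $5 }'
```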
- Run free -h to see total, used, and available memory
- The "available" column is what your container can actually use
- Compare total memory to what a full Linux install typically needs (512MB+)
free -h
- Run apk list --installed to see every package in this container
- Count how few packages there are – Alpine keeps it minimal
- Look for busybox, musl, and alpine-baselayout – these are the core of Alpine
apk list --installed
- Run ps aux to list all running processes
- Notice how few processes are running – likely just your shell and ps itself
- In a container, PID 1 is not systemd or init – it is whatever the container entrypoint is
ps aux
A short list of processes, typically just the shell session and the ps command itself
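The process count itself can be captured in one pipeline; a sketch (note that busybox ps, used by Alpine, accepts but largely ignores the aux options):

```shell
#!/bin/sh
# Sketch: count the processes visible in this PID namespace.
# tail -n +2 drops the ps header line before counting.
count=$(ps aux | tail -n +2 | wc -l)
echo "visible processes: $count"
```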
Install and manage software in Alpine Linux
- Run apk update to download the latest package list from Alpine repositories
- Watch for the repository URLs – Alpine uses the main and community repos
- This is similar to apt update on Debian or dnf check-update on Fedora
apk update
- Run apk search nginx to find packages matching "nginx"
- Notice the results include the package name and version number
- You can also use apk search -d nginx to include descriptions in the search
apk search nginx
- Run apk add curl to install the curl HTTP client
- Watch the output – apk resolves dependencies and installs them automatically
- Note how fast installation is compared to apt or yum – Alpine packages are tiny
apk add curl
Download and install messages followed by OK
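In setup scripts the install is usually wrapped in a guard so re-running is harmless; a sketch assuming Alpine's apk (the command -v check itself is portable POSIX shell):

```shell
#!/bin/sh
# Sketch: install curl only when it is not already present.
# --no-cache avoids leaving the apk index behind in the filesystem.
if ! command -v curl >/dev/null 2>&1; then
    apk add --no-cache curl || echo "apk not available on this system"
fi
echo "curl present at: $(command -v curl || echo '(not installed)')"
```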
- Run curl --version to confirm curl was installed correctly
- Check the protocols listed β curl supports HTTP, HTTPS, FTP, and more
- Try which curl to see where the binary was installed
curl --version
- Run apk del curl to uninstall curl and any orphaned dependencies
- Verify removal with which curl – it should return nothing
- Keeping containers clean by removing unused packages reduces image size and attack surface
apk del curl
Run and manage services inside the container
- Run apk add nginx to install the nginx web server
- Alpine ships a lightweight nginx build linked against musl libc
- Check that /etc/nginx/nginx.conf was created β this is the main config file
apk add nginx - Run nginx to start the web server in the foreground daemon mode
- Alternatively, you can use rc-service nginx start if OpenRC is initialized
- Nginx forks a master process and worker processes to handle connections
nginx No output on success β nginx starts silently as a daemon
- Run curl localhost to make an HTTP request to the local nginx server
- You should see a default welcome page or a 404 – either means nginx is responding
- This confirms the web server is listening on port 80 inside the container
curl localhost
HTML content from the nginx default page or a 404 page
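Rather than curling once and hoping the server is already up, scripts usually poll; a sketch (the 5-attempt budget and 1-second sleep are arbitrary example values):

```shell
#!/bin/sh
# Sketch: poll localhost until the web server answers, up to 5 attempts.
# curl -f makes HTTP errors count as failures; -sS silences the progress
# bar but keeps error messages.
tries=0
up=no
while [ "$tries" -lt 5 ]; do
    if curl -fsS localhost >/dev/null 2>&1; then
        up=yes
        break
    fi
    tries=$((tries + 1))
    sleep 1
done
echo "server reachable: $up (after $tries failed attempts)"
```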
- Run ps aux | grep nginx to find the nginx processes
- You should see a master process running as root and worker processes
- The master process manages configuration reloads; workers handle actual requests
ps aux | grep nginx
Lines showing nginx master and worker processes with their PIDs
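Note that ps aux | grep nginx usually matches the grep process itself, since "nginx" appears on grep's own command line; two standard ways around that:

```shell
#!/bin/sh
# Sketch: the bracket trick. The pattern [n]ginx still matches "nginx",
# but the grep command line contains the literal string "[n]ginx", so
# grep no longer matches itself.
ps aux | grep '[n]ginx' || echo "no nginx processes found"

# pgrep avoids the self-match problem entirely (busybox includes it).
pgrep nginx || echo "no nginx processes found (pgrep)"
```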
- Run nginx -s stop to send a stop signal to the nginx master process
- Verify it stopped by running ps aux | grep nginx – there should be no results
- The -s flag sends a signal: stop, quit, reload, and reopen are all valid
nginx -s stop
Manage users and explore system logs
- Run adduser -D testuser to create a user without a password prompt
- The -D flag means "defaults" – no password, default shell, auto home directory
- Verify with cat /etc/passwd | grep testuser to see the new entry
adduser -D testuser
No output on success – the user is created silently
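Scripts that create accounts normally test for an existing entry first; a sketch grepping /etc/passwd directly ("testuser" matches the name in the step above; getent passwd is the more general lookup where it exists):

```shell
#!/bin/sh
# Sketch: check whether a user already exists before creating it.
# The anchored pattern ^name: avoids matching substrings of other names.
user=testuser
if grep -q "^${user}:" /etc/passwd; then
    echo "$user already exists"
else
    echo "$user missing; adduser -D $user would create it"
fi
```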
- Run su - testuser to switch to the new user account
- Run whoami to confirm you are now testuser
- Try touching a file in /root – it should fail with permission denied
su - testuser
Your shell prompt changes to reflect the testuser account
- Type exit to leave the testuser session and return to root
- Run whoami to confirm you are root again
- In containers, running as root is common but considered a security risk in production
exit
- Run dmesg | tail to see the last kernel ring buffer messages
- These messages show hardware detection, driver loading, and kernel events
- In a container, dmesg shows the host kernel's messages (if permitted)
dmesg | tail
Kernel log messages or a permission denied error depending on container privileges
- Run logger "Hello from container lab" to send a message to the system logger
- Check if syslog is running with ps aux | grep syslog
- If syslog is running, check /var/log/messages for your message
logger "Hello from container lab"
Understand the Linux filesystem hierarchy
- Run ls /proc to see the virtual filesystem that exposes kernel data structures
- Run cat /proc/1/status to see details about PID 1 – the container's init process
- Note the Name, Pid, PPid, and Threads fields – they describe the main process
cat /proc/1/status
Process status fields including Name, State, Pid, PPid, Uid, Gid, and memory details
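Because /proc/<pid>/status files are plain "Field: value" text, individual fields are easy to pick out; a sketch reading the current process's own entry (Linux-only, like /proc itself):

```shell
#!/bin/sh
# Sketch: extract just the Name, Pid, and PPid fields for this process.
# /proc/self always refers to the process doing the reading.
grep -E '^(Name|Pid|PPid):' /proc/self/status
```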
- Run mount | head -20 to see the first 20 mounted filesystems
- Look for overlay or aufs entries – these are the container's layered filesystem
- Identify /proc, /sys, and /dev mounts – these are bind-mounted from the host
mount | head -20
A list of mounted filesystems showing overlay, proc, sysfs, tmpfs, and other mounts
- Run stat -f / to display filesystem information for the root mount
- Look at the Type field to identify the filesystem (overlay, ext4, xfs, etc.)
- Note the block size and total/free blocks for the container's writable layer
stat -f /
Filesystem statistics showing type, block size, blocks total/free, and inodes
- Run mkdir -p /tmp/ramdisk first to ensure the mount point exists
- Run mount -t tmpfs tmpfs /tmp/ramdisk to create a RAM-backed filesystem
- Write a file to it: echo "stored in RAM" > /tmp/ramdisk/test.txt
mkdir -p /tmp/ramdisk && mount -t tmpfs tmpfs /tmp/ramdisk
No output on success – verify with df -h /tmp/ramdisk
- Run df -i to display inode usage for all mounted filesystems
- Inodes are data structures that store file metadata (permissions, owner, timestamps)
- Each file and directory consumes one inode – running out of inodes means no new files even if disk space is free
df -i
A table showing filesystem inode counts – total, used, free, and percentage
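An individual file's inode number can be read directly too; a sketch showing two ways (the stat -c format string is GNU/busybox syntax, hence the fallback):

```shell
#!/bin/sh
# Sketch: show the inode number of the root directory two ways.
# ls -di prints "inode path" for the directory itself rather than its contents.
ls -di /
stat -c 'inode=%i links=%h' / 2>/dev/null || echo "stat -c not supported here"
```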
Understand container resource boundaries
- Run cat /sys/fs/cgroup/cpu.max to see the CPU bandwidth limit
- The format is "quota period" – e.g., "100000 100000" means 100% of one CPU
- "max 100000" means no CPU limit is set (the container can use all available CPUs)
cat /sys/fs/cgroup/cpu.max
Two numbers (quota and period) or "max" followed by a period value
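The quota/period pair converts to a CPU percentage with integer arithmetic; a sketch that handles the "max" (unlimited) case and falls back if the cgroups v2 file is absent:

```shell
#!/bin/sh
# Sketch: turn a cgroups v2 cpu.max entry into a percentage of one CPU.
# e.g. quota=50000 period=100000 means 50% of one CPU.
f=/sys/fs/cgroup/cpu.max
if [ -r "$f" ]; then
    read -r quota period < "$f"
    if [ "$quota" = "max" ]; then
        echo "no CPU limit set"
    else
        echo "CPU limit: $((100 * quota / period))% of one CPU"
    fi
else
    echo "cpu.max not readable (cgroups v1 host or no access)"
fi
```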
- Run cat /sys/fs/cgroup/memory.max to see the memory ceiling
- "max" means no limit is set; a number shows the limit in bytes
- Also check cat /sys/fs/cgroup/memory.current to see how much memory is in use right now
cat /sys/fs/cgroup/memory.max
"max" if unlimited, or a number in bytes representing the memory cap
- Run ls /sys/fs/cgroup/ to see all available cgroup controllers
- Each file controls a different resource: cpu, memory, io, pids, etc.
- Try cat /sys/fs/cgroup/pids.max to see how many processes the container can create
ls /sys/fs/cgroup/
Files like cpu.max, memory.max, memory.current, pids.max, io.max, and controller files
- Run ulimit -a to see all shell-level resource limits
- Key limits include open files (-n), max processes (-u), and stack size (-s)
- These are inherited from the container runtime configuration and can differ from the host
ulimit -a
A list of resource limits including core file size, open files, pipe size, stack size, and more
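Single limits can be read programmatically as well; a sketch checking the open-files soft limit against a threshold (1024 is just an example value, not a requirement):

```shell
#!/bin/sh
# Sketch: read the soft limit on open file descriptors and compare it
# to an arbitrary example threshold.
nofile=$(ulimit -n)
echo "open files soft limit: $nofile"
if [ "$nofile" != "unlimited" ] && [ "$nofile" -lt 1024 ]; then
    echo "warning: fewer than 1024 file descriptors available"
fi
```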
- Run apk add stress-ng to install the stress testing tool
- Run stress-ng --cpu 1 --timeout 5 to stress one CPU core for 5 seconds
- Watch the output – it reports how many operations were completed under any CPU limits
apk add stress-ng && stress-ng --cpu 1 --timeout 5
stress-ng output showing the test ran for 5 seconds with a summary of operations completed
Understand Linux namespaces from inside the container
- Run ls -la /proc/1/ns/ to see all namespaces attached to PID 1
- Each entry is a symbolic link pointing to a namespace identifier like mnt:[4026532xxx]
- Common namespaces: mnt (filesystem), pid (process IDs), net (networking), uts (hostname), ipc (inter-process comms), user (UIDs)
ls -la /proc/1/ns/
Symbolic links for cgroup, ipc, mnt, net, pid, pid_for_children, user, and uts namespaces
- Run echo $$ to see your shell's PID inside the container
- Run cat /proc/self/status | grep NSpid to see the nested PID mapping
- The NSpid line shows your PID in each nested namespace level – the first number is the host PID, the last is your container PID
echo $$ && cat /proc/self/status | grep NSpid
Your shell PID (e.g., 1 or a small number) and NSpid line showing namespace PID mapping
- Run ip link show to see all network interfaces in your container's network namespace
- Look for eth0 or similar – this is your container's virtual network interface
- Run ip addr show to see the IP addresses assigned to each interface
ip link show
Network interfaces including lo (loopback) and eth0 or similar with their state and addresses
- Run hostname to see this container's hostname
- The hostname is isolated by the UTS namespace – changing it here does not affect the host
- Try hostname test-container to change it, then run hostname again to verify
hostname
The container's hostname, which is typically a random string or the container ID
- Run cat /proc/self/cgroup to see which cgroup paths this process belongs to
- In cgroups v2 (unified hierarchy), you should see a single entry with path /
- Run cat /proc/1/cgroup to compare – PID 1 and your shell should be in the same cgroup
cat /proc/self/cgroup
One or more lines showing the cgroup controller and path, typically "0::/" for cgroups v2
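Scripts often need to know which cgroup version they are running under; a common detection sketch checking for the unified hierarchy's control file:

```shell
#!/bin/sh
# Sketch: detect cgroups v2. The unified hierarchy exposes a single
# cgroup.controllers file at the mount root; v1 has per-controller
# subdirectories instead.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    echo "cgroups v2 (unified hierarchy)"
else
    echo "cgroups v1 (legacy hierarchy) or cgroup fs not mounted"
fi
```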
Compile and run software from source
- Run apk add build-base to install gcc, g++, make, and standard C headers
- This is Alpine's equivalent of Debian's build-essential package
- Run gcc --version to confirm the compiler is installed and check the version
apk add build-base
Package download and installation messages, ending with OK
- Create a simple C source file using the command below
- This writes a minimal "Hello from inside a container!" program to hello.c
- Run cat hello.c to verify the file was written correctly
echo '#include <stdio.h>
int main() {
printf("Hello from inside a container!\n");
return 0;
}' > hello.c
- Run gcc -o hello hello.c to compile the source code into a binary
- The -o flag specifies the output filename (hello)
- If there are no errors, the compiler produces the binary silently
gcc -o hello hello.c
No output on success – errors would appear if the code had syntax problems
- Run ./hello to execute the compiled binary
- You should see the greeting message printed to stdout
- This binary ran entirely inside the container's isolated environment
./hello
Hello from inside a container!
- Run file hello to identify the binary format – it should show ELF 64-bit, dynamically linked
- Run ldd hello to list the shared libraries it depends on
- Note that it links against musl libc (ld-musl-x86_64.so) instead of glibc (ld-linux-x86-64.so)
file hello && ldd hello
ELF binary details from file, followed by shared library paths from ldd showing musl libc
Running Containers
No containers running
Select a template to get started
Container Concepts
🐳 Docker Containers
Lightweight, portable execution environments that package applications with their dependencies.
- Image-based deployment
- Isolated networking
- Volume mounts for persistence
☸️ Kubernetes
Container orchestration platform for automating deployment, scaling, and management.
- Pod-based workloads
- Service discovery
- Auto-scaling & healing
📦 LXC Containers
System containers that provide full Linux environments with lower overhead than VMs. Powers these labs.
- Full init systems
- Better for persistent workloads
- Used by Proxmox