Networking Lab
Practice networking with 3 interconnected containers on an isolated bridge
Lab Topology
🖥️
Container A
10.99.0.x
vmbr99 (isolated)
🖥️
Container B
10.99.0.y
🖥️
Container C
10.99.0.z
All containers share an isolated bridge. No internet access, no production network access.
Real Container Lab
Launch an ephemeral LXC container for hands-on practice. Auto-destroys after 60 minutes.
Challenges
Verify and test network connections between containers
○ Ping Container B from Container A
- Switch to the Container A tab in the terminal
- Run ping -c 4 <Container_B_IP> (check the topology diagram for the IP)
- You should see 4 reply lines with round-trip times in milliseconds
- Confirm 0% packet loss in the summary line at the end
ping -c 4 10.99.0.y 💡 The -c 4 flag sends exactly 4 packets and stops. Without it, ping runs forever until you Ctrl+C.
Expected:
4 packets transmitted, 4 received, 0% packet loss
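If you want to script this check rather than eyeball it, the summary line can be parsed. A minimal Python sketch (the summary string is the example output above, not captured from a live ping):

```python
import re

# Parse the iputils ping summary line format shown above
summary = "4 packets transmitted, 4 received, 0% packet loss"

m = re.search(r"(\d+) packets transmitted, (\d+) received, (\d+)% packet loss", summary)
sent, recv, loss = (int(g) for g in m.groups())
print(sent, recv, loss)  # 4 4 0
```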
○ Ping Container C from Container A
- Still on Container A, ping Container C using its IP from the topology diagram
- Compare the round-trip times to the Container B ping
- On an isolated bridge, latency between all containers should be nearly identical
ping -c 4 10.99.0.z 💡 All three containers share the same vmbr99 bridge, so there are no routers between them. Latency should be sub-millisecond.
○ Inspect your network interfaces
- Run ip addr to see all network interfaces and their addresses
- Look for the eth0 interface — its inet line shows your IP address
- The /24 suffix means a 255.255.255.0 subnet mask (256 addresses, 254 usable)
- You will also see a lo (loopback) interface at 127.0.0.1 — every host has this
ip addr show 💡 ip addr replaces the older ifconfig command. The "state UP" label means the interface is active and ready to send traffic.
○ Identify your eth0 IP address
- Run ip -4 addr show eth0 to filter to just IPv4 on eth0
- The inet line shows your IP in CIDR notation (e.g. 10.99.0.2/24)
- The part before the slash is your IP; the /24 is the prefix length
- Note this IP — you will need it for tasks in other containers
ip -4 addr show eth0 💡 The -4 flag filters out IPv6 addresses, making the output much easier to read. The brd value is the broadcast address for your subnet.
○ Check the default gateway
- Run ip route to display the routing table
- Look for the line starting with "default via" — that is your gateway
- The gateway is the next-hop router for traffic outside your local subnet
- You should also see a line for 10.99.0.0/24 showing your directly-connected network
ip route 💡 On this isolated bridge there may not be a default gateway since the lab has no internet access. The direct subnet route is what allows containers to reach each other.
Understand interface configuration
○ View all network interfaces
- Run ip link show to list every interface on the system
- Each interface has an index number, name, and state (UP or DOWN)
- You should see at least lo (loopback) and eth0 (your network connection)
- The flags like BROADCAST, MULTICAST describe interface capabilities
ip link show 💡 ip link shows Layer 2 (data link) info — interface state and MAC addresses but no IP addresses. Use ip addr for IP info.
○ Check MAC addresses
- Look at the ip link show output for the link/ether lines
- Each interface has a unique MAC address (6 hex pairs like aa:bb:cc:dd:ee:ff)
- The MAC address is a hardware-level identifier used on the local network segment
- Compare MAC addresses across all three containers — each one will be different
ip link show eth0 💡 MAC addresses operate at Layer 2 (Ethernet). Switches use them to forward frames. The first 3 octets identify the manufacturer (OUI).
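The OUI split described in the hint is easy to see in code. A small Python sketch using a made-up MAC address:

```python
# Hypothetical MAC address for illustration
mac = "aa:bb:cc:dd:ee:ff"

octets = mac.split(":")
oui = ":".join(octets[:3])   # first 3 octets: manufacturer (OUI)
nic = ":".join(octets[3:])   # last 3 octets: device-specific part
print(oui, nic)  # aa:bb:cc dd:ee:ff
```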
○ View the ARP table
- First, ping the other two containers so your ARP cache is populated
- Run ip neigh to see the ARP (Address Resolution Protocol) table
- Each entry maps an IP address to a MAC address on your local network
- The state will show REACHABLE, STALE, or DELAY depending on freshness
ip neigh 💡 ARP is how your machine discovers which MAC address belongs to an IP. Without ARP, your container would not know where to send Ethernet frames. STALE means the entry has not been confirmed recently.
○ Check MTU settings
- Run ip link show eth0 and look for the mtu value
- The default MTU (Maximum Transmission Unit) is usually 1500 bytes
- This means each Ethernet frame can carry at most 1500 bytes of payload
- Packets larger than the MTU must be fragmented, which adds overhead
ip link show eth0 | grep mtu 💡 MTU mismatches between hosts can cause mysterious connectivity issues — packets may be silently dropped. Jumbo frames (MTU 9000) are common in storage networks.
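To see why oversized packets cost extra, here is a rough Python sketch of the IPv4 fragment count for a given payload (assuming a 20-byte IP header with no options):

```python
import math

def fragments_needed(payload_bytes, mtu=1500, ip_header=20):
    """Rough IPv4 fragment count: each fragment carries at most
    mtu - ip_header payload bytes, rounded down to a multiple of 8
    because fragment offsets are in 8-byte units (1500 - 20 = 1480,
    which is already a multiple of 8)."""
    per_fragment = (mtu - ip_header) // 8 * 8
    return math.ceil(payload_bytes / per_fragment)

print(fragments_needed(1480))  # 1 -- fits in a single frame
print(fragments_needed(4000))  # 3 -- each fragment repeats the IP header
```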
○ Verify subnet mask and broadcast address
- Run ip -4 addr show eth0 and examine the inet line carefully
- The /24 after the IP is CIDR notation for the subnet mask 255.255.255.0
- The brd value is the broadcast address (e.g. 10.99.0.255)
- Sending a packet to the broadcast address reaches all hosts on the subnet
ip -4 addr show eth0 💡 A /24 network has 256 addresses total: .0 is the network address, .255 is broadcast, and .1 through .254 are usable host addresses. That gives you 254 hosts per subnet.
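Python's standard ipaddress module can do this subnet arithmetic for you. A sketch using a plausible lab address (10.99.0.2 is an assumed example, not necessarily your container's IP):

```python
import ipaddress

# Assumed example address on the lab's /24 subnet
iface = ipaddress.ip_interface("10.99.0.2/24")
net = iface.network

print(net.netmask)            # 255.255.255.0
print(net.network_address)    # 10.99.0.0
print(net.broadcast_address)  # 10.99.0.255
print(net.num_addresses - 2)  # 254 usable host addresses
```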
Understand name resolution inside and out
○ Check configured nameservers
- Run cat /etc/resolv.conf to see DNS configuration
- Look for nameserver lines — these are the DNS servers your system queries
- There may also be a search directive that sets the default domain suffix
- Note the nameserver IPs — you will use them with dig in the next tasks
cat /etc/resolv.conf 💡 resolv.conf is the traditional Unix DNS config file. In containers, this is often auto-generated. The first nameserver listed gets tried first.
○ Query A records with dig
- Run dig google.com to perform a DNS lookup for A (address) records
- Look at the ANSWER SECTION — it shows the IP addresses for the domain
- The number before the IP is the TTL (time-to-live) in seconds
- The QUERY TIME at the bottom shows how long the lookup took
dig google.com 💡 A records map domain names to IPv4 addresses. This is the most fundamental DNS record type. The TTL tells caching resolvers how long to keep the result.
Expected:
ANSWER SECTION with one or more A records showing IP addresses
○ Query MX records
- Run dig google.com MX to query mail exchanger records
- MX records show which servers handle email for a domain
- Each MX record has a priority number — lower means higher priority
- Mail servers try the lowest-priority MX first, then fall back to higher ones
dig google.com MX 💡 MX records are critical for email delivery. Without them, other mail servers would not know where to deliver messages for your domain. The priority number enables failover.
Expected:
ANSWER SECTION with MX records showing priority and mail server hostnames
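The failover logic is just a sort by priority. A Python sketch with made-up MX records (the example.com hostnames are placeholders):

```python
# Hypothetical MX records as (priority, hostname) pairs
mx_records = [
    (30, "backup.example.com"),
    (10, "primary.example.com"),
    (20, "secondary.example.com"),
]

# Lower priority number wins, so sorting gives the delivery order
delivery_order = [host for _prio, host in sorted(mx_records)]
print(delivery_order)  # primary, then secondary, then backup
```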
○ Perform a reverse DNS lookup
- First, get an IP from a forward lookup: dig google.com +short
- Run dig -x <IP> to perform a reverse lookup (IP to hostname)
- Reverse DNS uses special PTR records under the in-addr.arpa domain
- Not all IPs have reverse DNS — it depends on the IP owner configuring it
dig -x 8.8.8.8 💡 Reverse DNS is used by mail servers to verify senders, by logging systems to add context, and by traceroute to show hostnames. The -x flag auto-constructs the in-addr.arpa query.
Expected:
PTR record showing a hostname like dns.google
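The in-addr.arpa name that dig -x builds simply reverses the IP's octets. Python's ipaddress module exposes the same construction:

```python
import ipaddress

# The octets are reversed, then ".in-addr.arpa" is appended
print(ipaddress.ip_address("10.99.0.2").reverse_pointer)  # 2.0.99.10.in-addr.arpa
print(ipaddress.ip_address("8.8.8.8").reverse_pointer)    # 8.8.8.8.in-addr.arpa
```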
○ Create a custom hostname mapping
- Run echo "10.99.0.y container-b.lab" >> /etc/hosts to add a mapping
- The /etc/hosts file is checked before DNS servers (by default)
- Test it by running ping -c 2 container-b.lab
- You should see ping resolving the name to the IP you set
echo "10.99.0.y container-b.lab" >> /etc/hosts 💡 /etc/hosts is the simplest name resolution — no server needed. On glibc systems it is checked before DNS because of the order in /etc/nsswitch.conf (files before dns); musl-based Alpine also consults /etc/hosts first. This is how localhost resolves to 127.0.0.1.
Test services across the network
○ Start an HTTP server
- Switch to Container A and create a test file: echo "Hello from A" > index.html
- Start a simple HTTP server: python3 -m http.server 8080
- The server will listen on all interfaces (0.0.0.0) on port 8080
- Leave this terminal running — you will connect to it from another container
python3 -m http.server 8080 💡 python3 -m http.server serves the current directory over HTTP. It is great for quick testing. The 8080 port is a common alternative to port 80 that does not require root.
○ Curl the HTTP server from another container
- Switch to the Container B tab
- Run curl http://<Container_A_IP>:8080/ to fetch the page
- You should see the directory listing or your index.html content
- Try curl -v to see the full HTTP request/response headers
curl http://10.99.0.x:8080/ 💡 curl is the Swiss Army knife of HTTP. The -v (verbose) flag shows the TCP connection, HTTP headers, and response — invaluable for debugging web services.
Expected:
HTML directory listing or the contents of index.html
○ Create a TCP listener with netcat
- On Container A, stop the HTTP server with Ctrl+C if it is running
- Run nc -l -p 9000 to start listening for TCP connections on port 9000
- Netcat will wait for an incoming connection and then display anything received
- Leave this running and switch to another container to connect
nc -l -p 9000 💡 Netcat (nc) is called the "TCP/IP Swiss Army knife." With -l it listens; without it, it connects. It works with raw TCP, so there is no HTTP protocol overhead — just pure bytes.
○ Connect to the netcat listener
- Switch to Container B
- Run nc <Container_A_IP> 9000 to connect to the listener
- Type a message and press Enter — it should appear on Container A
- Type on Container A too — netcat is bidirectional
- Press Ctrl+C on either side to close the connection
nc 10.99.0.x 9000 💡 This is a raw TCP connection. Everything you type is sent as-is, byte for byte. This is how all network protocols work underneath — HTTP, SSH, and SMTP all build on top of TCP connections like this.
○ Transfer a file between containers
- On Container A (receiver), run: nc -l -p 9000 > received.txt
- On Container B (sender), create a file: echo "Secret message from B" > secret.txt
- On Container B, send the file: nc <Container_A_IP> 9000 < secret.txt
- On Container A, Ctrl+C to close, then cat received.txt to verify
- Compare the contents — they should be identical
nc -l -p 9000 > received.txt 💡 This is a bare-bones file transfer over TCP — no encryption, no authentication, no error correction beyond what TCP provides. Tools like scp and rsync add those layers on top of the same basic concept.
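The same receive-into-a-file idea can be sketched with raw TCP sockets. This Python sketch runs both sides over loopback in one process (the message is made up, and the OS picks the port), mirroring the nc listener and sender above:

```python
import socket
import threading

received = bytearray()
ready = threading.Event()
port = []

def receiver():
    # Listener side, like `nc -l -p 9000 > received.txt`
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    srv.listen(1)
    port.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    while chunk := conn.recv(4096):
        received.extend(chunk)
    conn.close()
    srv.close()

t = threading.Thread(target=receiver)
t.start()
ready.wait()

# Sender side, like `nc <Container_A_IP> 9000 < secret.txt`
cli = socket.create_connection(("127.0.0.1", port[0]))
cli.sendall(b"Secret message from B\n")
cli.close()      # closing the socket signals end-of-stream to the receiver
t.join()
print(received.decode())  # Secret message from B
```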
Configure packet filtering with iptables
○ List current iptables rules
- Run iptables -L -n -v to list all firewall rules with packet counters
- There are three default chains: INPUT (incoming), OUTPUT (outgoing), FORWARD (routed)
- The policy (ACCEPT or DROP) at the top of each chain is the default action
- With no custom rules, the default policy is usually ACCEPT (allow everything)
iptables -L -n -v 💡 The -n flag shows numeric addresses instead of resolving hostnames (faster). The -v flag adds packet/byte counters so you can see if rules are matching traffic.
○ Block ICMP from Container B
- On Container A, add a rule to drop ICMP packets from Container B
- Run: iptables -A INPUT -s <Container_B_IP> -p icmp -j DROP
- -A INPUT appends to the INPUT chain, -s sets the source IP
- -p icmp matches the ICMP protocol (used by ping), -j DROP silently discards
iptables -A INPUT -s 10.99.0.y -p icmp -j DROP 💡 DROP silently discards packets — the sender gets no response and eventually times out. REJECT would send back an error immediately. DROP is stealthier; REJECT is more polite.
○ Verify the block is working
- Switch to Container B and try to ping Container A
- Run: ping -c 3 -W 2 <Container_A_IP>
- The -W 2 sets a 2-second timeout so you do not wait forever
- You should see 100% packet loss — the pings are being silently dropped
- Switch back to Container A and run iptables -L -n -v to see the counter increment
ping -c 3 -W 2 10.99.0.x 💡 Notice that ping from Container C to A still works — the rule only blocks Container B. Firewall rules are specific. Check the pkts counter in iptables -L -v to confirm packets are hitting the rule.
Expected:
3 packets transmitted, 0 received, 100% packet loss
○ Remove the ICMP block
- On Container A, delete the rule using -D (delete) instead of -A (append)
- Run: iptables -D INPUT -s <Container_B_IP> -p icmp -j DROP
- The syntax is identical to when you added it, just with -D instead of -A
- Verify removal with iptables -L -n — the rule should be gone
- Test from Container B again: ping -c 2 <Container_A_IP> should work now
iptables -D INPUT -s 10.99.0.y -p icmp -j DROP 💡 You can also delete by rule number: iptables -D INPUT 1 removes the first rule. Use iptables -L --line-numbers to see rule numbers. Be careful — numbering shifts after each deletion.
○ Allow only SSH and HTTP traffic
- First, set the default INPUT policy to DROP: iptables -P INPUT DROP
- Allow established/related connections: iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
- Allow SSH: iptables -A INPUT -p tcp --dport 22 -j ACCEPT
- Allow HTTP: iptables -A INPUT -p tcp --dport 80 -j ACCEPT
- Allow loopback: iptables -A INPUT -i lo -j ACCEPT
- Verify with iptables -L -n — only ports 22 and 80 should be reachable
iptables -P INPUT DROP 💡 The ESTABLISHED,RELATED rule is critical — without it, your outgoing connections would work but responses would be dropped. This is the foundation of a stateful firewall. Always add it before setting a DROP policy.
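Chain evaluation is first-match-wins, falling through to the policy when nothing matches. A toy Python model (not iptables itself) of the INPUT chain you just built:

```python
# Toy model of the INPUT chain above: rules are checked in order,
# the first match decides, and otherwise the chain policy applies.
rules = [
    {"state": "ESTABLISHED", "action": "ACCEPT"},
    {"proto": "tcp", "dport": 22, "action": "ACCEPT"},   # SSH
    {"proto": "tcp", "dport": 80, "action": "ACCEPT"},   # HTTP
]
POLICY = "DROP"

def verdict(packet):
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule.items() if k != "action"):
            return rule["action"]
    return POLICY  # nothing matched: fall through to the default policy

print(verdict({"proto": "tcp", "dport": 80, "state": "NEW"}))   # ACCEPT
print(verdict({"proto": "tcp", "dport": 443, "state": "NEW"}))  # DROP
```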
Capture and examine network traffic
○ Capture ICMP packets
- On Container A, start capturing: tcpdump -i eth0 icmp
- Switch to Container B and ping Container A: ping -c 3 <Container_A_IP>
- Switch back to Container A — you will see each ICMP echo request and reply
- Press Ctrl+C to stop the capture and see the summary line
tcpdump -i eth0 icmp 💡 tcpdump shows packets in real time. Each line shows timestamp, source, destination, and protocol details. ICMP echo request is the ping; echo reply is the response. The sequence numbers should match.
Expected:
Lines showing ICMP echo request and echo reply with IP addresses and sequence numbers
○ Capture traffic from a specific host
- Run: tcpdump -i eth0 host <Container_B_IP>
- This captures ALL traffic to/from Container B (not just ICMP)
- From Container B, try pinging and curling Container A
- Notice how tcpdump shows different protocol types (ICMP, TCP, etc.)
tcpdump -i eth0 host 10.99.0.y 💡 tcpdump filters use BPF (Berkeley Packet Filter) syntax. You can combine filters: "host X and port 80", "src host X", "dst host X". These filters run in the kernel for efficiency.
○ Save a capture to file
- Run: tcpdump -i eth0 -w capture.pcap -c 20
- The -w flag writes raw packets to a file instead of printing them
- The -c 20 flag stops after capturing 20 packets
- Generate some traffic from other containers while the capture runs
- The .pcap format is the standard — it works with Wireshark, tshark, and more
tcpdump -i eth0 -w capture.pcap -c 20 💡 The pcap file contains full packet data including headers and payloads. The -c flag is useful to avoid capturing forever and filling up disk space.
○ Read a pcap file
- Run: tcpdump -r capture.pcap to read and display the saved packets
- Add -n to skip DNS resolution for faster output
- Add -v or -vv for more detail on each packet
- You can also apply filters when reading: tcpdump -r capture.pcap icmp
tcpdump -r capture.pcap -n 💡 Reading from pcap files is great for post-mortem analysis. You can apply different filters each time you read the same capture. Professionals often capture broadly and filter later.
Expected:
Packet summaries matching what you captured earlier, with timestamps and protocol details
○ Capture DNS queries
- On Container A, start capturing DNS traffic: tcpdump -i eth0 port 53
- From Container B or another terminal, run: dig google.com
- Watch Container A — you should see the DNS query and response packets
- DNS uses port 53 for both UDP (normal queries) and TCP (large responses)
tcpdump -i eth0 port 53 -n 💡 DNS traffic is usually unencrypted UDP on port 53, so tcpdump can show exactly which domains are being queried. This is why DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT) were invented — to prevent this kind of eavesdropping.
Configure custom routing between containers
○ View the full routing table
- Run ip route show table all to see every routing table on the system
- The "main" table contains your normal routes
- The "local" table has routes for your own addresses (auto-managed by the kernel)
- Each route shows: destination network, gateway (if any), interface, and protocol
ip route show table all 💡 Linux supports multiple routing tables for policy routing. Most setups only use the "main" table. The "local" and "broadcast" tables are managed by the kernel and should not be edited manually.
○ Add a static route
- Add a route to a fictional network via Container B as gateway
- Run: ip route add 192.168.100.0/24 via <Container_B_IP>
- Verify with: ip route — you should see the new route in the table
- Packets destined for 192.168.100.x will now be sent to Container B
- Remove it when done: ip route del 192.168.100.0/24
ip route add 192.168.100.0/24 via 10.99.0.y 💡 Static routes tell the kernel "to reach network X, send packets to gateway Y." The gateway must be reachable on a directly-connected network. This is how routers build their forwarding tables.
○ Trace the path between containers
- Run traceroute to each of the other two containers
- Example: traceroute -n <Container_B_IP>
- On this isolated bridge, you should see a single hop (direct connection)
- The -n flag skips reverse DNS lookups for faster output
- Try traceroute to an external IP — it will fail since the lab is isolated
traceroute -n 10.99.0.y 💡 traceroute works by sending packets with increasing TTL (Time To Live) values. Each router decrements the TTL; when it hits 0, the router sends back a "time exceeded" message. This reveals each hop along the path.
Expected:
A single hop showing the target IP — direct link on the bridge, no intermediate routers
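The TTL trick can be modeled in a few lines. A toy Python sketch (the hop names are hypothetical; on this lab's bridge the path would be a single entry):

```python
# Hypothetical multi-hop path; probes with TTL 1, 2, 3, ... each expire
# one hop further along, and the expiring hop identifies itself.
path = ["router-1", "router-2", "destination"]

def probe(ttl):
    # Each hop decrements TTL; the hop where it reaches 0 sends back
    # "time exceeded" (or the destination itself replies)
    return path[ttl - 1] if ttl <= len(path) else None

hops = [probe(ttl) for ttl in range(1, len(path) + 1)]
print(hops)  # ['router-1', 'router-2', 'destination']
```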
○ Enable IP forwarding
- On Container B, check the current setting: cat /proc/sys/net/ipv4/ip_forward
- A value of 0 means forwarding is disabled — the container will not route packets
- Enable it: echo 1 > /proc/sys/net/ipv4/ip_forward
- Verify: cat /proc/sys/net/ipv4/ip_forward should now show 1
- Container B can now forward packets between its interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward 💡 By default, Linux does NOT forward packets between interfaces — it only processes traffic destined for itself. Enabling ip_forward turns the machine into a router. This is a prerequisite for the next task.
○ Set up Container B as a router
- Ensure IP forwarding is enabled on Container B (previous task)
- On Container A, add a route: ip route add 10.99.1.0/24 via <Container_B_IP>
- On Container C, add a route: ip route add 10.99.1.0/24 via <Container_B_IP>
- On Container B, optionally add NAT: iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
- Test by pinging from Container A to C through B — use tcpdump on B to watch packets being forwarded
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE 💡 MASQUERADE rewrites the source IP of forwarded packets to Container B's own IP. This is exactly how your home router works — all your devices share one public IP via NAT (Network Address Translation).
Control bandwidth and latency with tc
○ View current queueing disciplines
- Run tc qdisc show to see all queueing disciplines on each interface
- The default qdisc is usually pfifo_fast or fq_codel
- Each interface has a root qdisc that controls how packets are scheduled
- The "refcnt" value shows how many references point to this qdisc
tc qdisc show 💡 tc (traffic control) is the Linux kernel tool for packet scheduling, shaping, and policing. Queueing disciplines (qdiscs) determine the order packets are sent. fq_codel is the modern default — it fights bufferbloat.
○ Add simulated network latency
- Run: tc qdisc add dev eth0 root netem delay 100ms
- This adds 100ms of delay to every outgoing packet on eth0
- The netem (network emulator) module simulates real-world network conditions
- You can also add jitter: tc qdisc change dev eth0 root netem delay 100ms 20ms
- The 20ms is random variation — so delay will be 80-120ms
tc qdisc add dev eth0 root netem delay 100ms 💡 netem is incredibly useful for testing how applications behave on slow or unreliable networks. 100ms is roughly the latency between US coasts. Transatlantic latency is ~80ms. Satellite is ~600ms.
○ Verify the added latency
- From another container, ping the one where you added latency
- Run: ping -c 5 <Container_IP_with_netem>
- Compare the round-trip times to your earlier pings — they should be ~100ms higher
- The time values will fluctuate slightly due to system scheduling
ping -c 5 10.99.0.x 💡 Note that only outgoing packets are delayed. The ping from the other container will see ~100ms added to the round trip because the reply from this container is delayed. Incoming packets are unaffected.
Expected:
Round-trip times around 100ms instead of sub-millisecond
○ Limit bandwidth
- Change the existing netem rule to add a rate limit
- Run: tc qdisc change dev eth0 root netem delay 50ms rate 1mbit
- This caps outgoing bandwidth to 1 Mbit/s with 50ms latency
- Test with a large transfer: dd if=/dev/zero bs=1M count=5 | nc <other_container_IP> 9000
- On the receiver, run: nc -l -p 9000 > /dev/null and time how long it takes
tc qdisc change dev eth0 root netem delay 50ms rate 1mbit 💡 At 1 Mbit/s, a 5 MB file takes about 40 seconds. The rate limit happens at the kernel level before packets leave the interface. This is how ISPs can throttle your connection speed.
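You can sanity-check that estimate with simple arithmetic; a Python version of the calculation:

```python
size_bytes = 5 * 1024 * 1024     # the 5 MB test file from dd
rate_bps = 1_000_000             # netem rate limit: 1 Mbit/s
seconds = size_bytes * 8 / rate_bps
print(round(seconds))  # 42 -- roughly the "about 40 seconds" above
```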
○ Remove traffic shaping rules
- Run: tc qdisc del dev eth0 root to remove the netem qdisc
- Verify with: tc qdisc show dev eth0 — it should be back to the default
- Ping from another container to confirm latency is back to normal
- You can re-add rules at any time — they take effect immediately
tc qdisc del dev eth0 root 💡 Deleting the root qdisc restores the kernel default. All netem settings (delay, rate, loss, duplication) are removed at once. In production, be careful — removing traffic shaping during peak hours can cause congestion.
What You Can Do
- 3 interconnected containers on a private bridge (vmbr99)
- Each container has its own IP on 10.99.0.0/24 subnet
- Full networking tools: ping, traceroute, dig, tcpdump
- Firewall configuration with iptables
- Traffic control with tc (netem)
- HTTP servers, netcat, curl for service testing
Learning Objectives
Beginner
- Test connectivity between containers
- Understand IP addressing and interfaces
Intermediate
- Perform DNS lookups
- Test HTTP services across containers
- Use netcat for TCP connections
Advanced
- Write iptables firewall rules
- Capture and analyze packets with tcpdump
Expert
- Configure custom routing
- Shape traffic with tc netem
- Set up container as router
Useful Tools
- ping: Test connectivity
- ip addr: Show interfaces
- traceroute: Trace packet path
- dig: DNS queries
- ss -tlnp: Listening ports
- tcpdump: Packet capture