Subnetting Strategy

Cookbook

Recipes for specific real-world subnetting challenges. Each recipe gives you a concrete CIDR plan you can adapt to your environment.

Small office (50 users)

Start with a /22 parent block — 1,024 addresses, split into 6 VLANs. This leaves room for growth without wasting a huge swath of address space.

VLAN  Purpose            Subnet          Usable
10    Employee data      10.1.0.0/24     254
20    VoIP / voice       10.1.1.0/25     126
30    Printers / shared  10.1.1.128/27   30
40    Guest / BYOD       10.1.2.0/24     254
50    Management         10.1.1.160/28   14
99    Servers / infra    10.1.3.0/25     126

Guest gets a full /24 because visitors bring 2–4 devices each (phone, laptop, tablet). Fifty guests can easily consume 150+ IPs.
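The plan above can be sanity-checked with Python's standard ipaddress module. This sketch just restates the table and verifies the two properties that matter: every VLAN subnet fits inside the /22 parent, and no two subnets overlap.

```python
import ipaddress

parent = ipaddress.ip_network("10.1.0.0/22")
vlans = {
    10: "10.1.0.0/24",    # employee data
    20: "10.1.1.0/25",    # VoIP / voice
    30: "10.1.1.128/27",  # printers / shared
    40: "10.1.2.0/24",    # guest / BYOD
    50: "10.1.1.160/28",  # management
    99: "10.1.3.0/25",    # servers / infra
}
nets = {vid: ipaddress.ip_network(c) for vid, c in vlans.items()}

# Every VLAN subnet must sit inside the /22 parent...
assert all(n.subnet_of(parent) for n in nets.values())

# ...and no two subnets may overlap.
pairs = [(a, b) for a in nets.values() for b in nets.values() if a < b]
assert not any(a.overlaps(b) for a, b in pairs)

for vid, n in nets.items():
    print(f"VLAN {vid}: {n} -> {n.num_addresses - 2} usable")
```

Running the same check before deploying any CIDR plan catches fat-fingered masks early.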

Campus

Assign a /20 per building. Never span VLANs across buildings — this contains STP failure domains to a single building. Use Layer 3 at the distribution layer so each building's subnets summarize to one route announcement.

Summarization-friendly layout: Building A = 10.10.0.0/20, Building B = 10.10.16.0/20, Building C = 10.10.32.0/20. All three collapse to 10.10.0.0/18 at the core.
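A quick check with Python's ipaddress module confirms the layout. The free quarter of the /18 is the growth slot for a hypothetical building D (an assumption for illustration):

```python
import ipaddress

buildings = {
    "A": ipaddress.ip_network("10.10.0.0/20"),
    "B": ipaddress.ip_network("10.10.16.0/20"),
    "C": ipaddress.ip_network("10.10.32.0/20"),
}
summary = ipaddress.ip_network("10.10.0.0/18")

# The /18 covers all three buildings, so the core advertises
# one route instead of three.
assert all(b.subnet_of(summary) for b in buildings.values())

# The fourth /20 inside the /18 remains free for a future building.
quarters = list(summary.subnets(new_prefix=20))
free = [q for q in quarters if q not in buildings.values()]
print(free)  # [IPv4Network('10.10.48.0/20')]
```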

VLAN + subnet alignment

The golden rule: 1 VLAN = 1 subnet, always. If you share a subnet across VLANs (or vice versa), troubleshooting becomes a nightmare because ARP and broadcast domains don't align.

Numbering scheme that scales: 1xx for data, 2xx for voice, 3xx for guest, 4xx for IoT, 9xx for management. A branch with VLAN 110 is instantly recognizable as "data, branch 1."

Guest isolation

A separate VLAN is necessary but not sufficient. Also place guests in a dedicated firewall zone with client isolation enabled on the access points (so guests can't see each other). Add explicit deny rules to all RFC 1918 ranges — guests reach the internet and nothing else. Size the subnet at /23 or /22 to handle surges.

DMZ

A /27 or /28 between two firewall interfaces. Strict rules: only specific ports are allowed inbound, and all outbound traffic initiated from the DMZ is denied except to explicitly allowlisted destinations. Fewer addresses means a smaller attack surface.

Management network

A /27 or /28 restricted by ACLs to admin jump hosts only. This network carries switch management, IPMI/iDRAC, AP controllers, and UPS monitoring. Never route it to the general user population.

IoT segmentation

Group IoT devices by function, not by floor: cameras in one VLAN, HVAC in another, badge readers in a third. Firewall allowlists should permit only traffic to specific controller or cloud endpoints. IoT devices are notoriously under-patched — segmentation limits blast radius.

Flat-to-segmented migration

Migrating from a single flat network to VLANs is one of the most common and most stressful projects in enterprise networking. Do it in phases:

  • Inventory — document every device, its MAC, its current IP, and whether it uses DHCP or static
  • Pilot one floor — create the new VLANs on one floor, run them in parallel with the flat network
  • DHCP clients first — move DHCP devices by changing the relay/scope; they pick up the new subnet at next lease renewal
  • Static devices last — printers, servers, and controllers need manual readdressing or DHCP reservations
  • Rollback plan — keep the old flat VLAN active with a small DHCP scope so you can move devices back instantly

Strategic gap leaving (buddy allocation)

When you allocate 10.1.0.0/24, keep 10.1.1.0/24 free as its "buddy" so you can later merge them into 10.1.0.0/23. This is exactly how binary tree splitting works — and it's what slashwhat visualizes.

10.1.0.0/22
├── 10.1.0.0/23        (allocate left half)
│   ├── 10.1.0.0/24    (use this now)
│   └── 10.1.1.0/24    (growth space — the buddy)
└── 10.1.2.0/23        (keep right half entirely free)
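The buddy of any allocation can be computed mechanically: go up one level to the parent, then take the sibling half. A sketch with Python's ipaddress module:

```python
import ipaddress

allocated = ipaddress.ip_network("10.1.0.0/24")

# The buddy is the sibling subnet under the /23 parent.
parent = allocated.supernet(prefixlen_diff=1)   # 10.1.0.0/23
left, right = parent.subnets(prefixlen_diff=1)  # the two /24 halves
buddy = right if allocated == left else left

print(parent, buddy)   # 10.1.0.0/23 10.1.1.0/24
```

If the buddy is still free when you outgrow the allocation, merging is just "start using the parent".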

Gateway placement for future expansion

Place the default gateway at .1 (first usable address). When you later expand 10.1.0.0/24 to 10.1.0.0/23, the gateway at 10.1.0.1 stays valid — no client reconfiguration needed. The new space (10.1.1.0–10.1.1.255) is purely additive.

Anti-pattern: gateway at .254 works fine for expansion but ends up awkwardly mid-range after you widen the subnet.
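Both claims are easy to verify with Python's ipaddress module:

```python
import ipaddress

before = ipaddress.ip_network("10.1.0.0/24")
after = ipaddress.ip_network("10.1.0.0/23")
gateway = ipaddress.ip_address("10.1.0.1")

# The first usable (.1) survives the widening unchanged.
assert gateway in before and gateway in after

# A gateway at the old .254 also survives, but is no longer
# the last usable address after expansion.
old_last = before[-2]          # 10.1.0.254
assert old_last in after
print(after[-2])               # 10.1.1.254, the new last usable
```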

IPv6

IPv6 for LANs

Always use /64 for LANs — SLAAC requires it, and it's the universal standard. A /48 per site gives you 65,536 /64 subnets. Dual-stack (IPv4 + IPv6 running simultaneously) is the safest transition strategy. Use GUA (globally routable addresses) as primary; avoid ULA unless you have a specific reason.

Reference: https://www.ripe.net/publications/docs/ripe-690/
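The /48-to-/64 arithmetic, sketched with the IPv6 documentation prefix (2001:db8::/48, chosen here purely for illustration):

```python
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")  # documentation prefix
lan_count = 2 ** (64 - site.prefixlen)        # /64 LANs per /48
assert lan_count == 65536

# subnets() is a generator, so enumerating a few is cheap:
lans = site.subnets(new_prefix=64)
print(next(lans))   # 2001:db8::/64
```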

Differing Opinion to Consider

ULA vs GUA for IPv6 internal networks. The GUA-only camp says it's simpler and avoids address selection bugs (RFC 6724 historically preferred IPv4 over ULA). The ULA camp says it provides provider-independent stability. IETF draft-ietf-6man-rfc6724-update is fixing the preference issue.

See: https://blog.ipspace.net/2022/05/ipv6-ula-made-useless/

Cloud

AWS, Azure, and GCP subnetting — the constraints that matter, the patterns that work, and the mistakes that cause outages.

The #1 cloud subnetting lesson

IP space is the long pole in the tent. Compute scales in seconds (auto-scaling groups, spot fleets, EKS node pools). Storage scales instantly (EBS, PD). But you cannot resize a subnet in place on AWS or Azure. If your /24 runs out of IPs during a traffic spike, you face a painful migration — not a button click. IP planning is the one thing that must be right before you need it.

Case study: the Neon outage (May 2025). Pods grew from ~5,000 to ~8,100, exhausting IPs across three /20 subnets. 5.5 hours of downtime. The failure was subtle — AWS VPC CNI's warm ENI buffer stranded IPs on nodes that had no CPU capacity. Even though the subnet showed "available" IPs, prefix allocations within ENIs were exhausted. https://neon.com/blog/aws-cni-lessons-from-a-production-outage

GCP is the exception

GCP supports expanding subnet ranges in place (increase only, never shrink). AWS and Azure do not — you must create a new subnet and migrate. https://cloud.google.com/vpc/docs/using-vpc#expand-subnet

Reserved addresses by provider

Provider  Reserved/subnet  Min subnet  Usable in /28
AWS       5                /28         11
Azure     5                /29         11
GCP       4                /29         12

AWS reserves .0 (network), .1 (VPC router), .2 (DNS), .3 (future use), and the last address (broadcast). https://docs.aws.amazon.com/vpc/latest/userguide/subnet-sizing.html · https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-faq · https://cloud.google.com/vpc/docs/subnets
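The usable counts in the table come straight from subtracting each provider's reserved addresses from the subnet size:

```python
import ipaddress

reserved = {"AWS": 5, "Azure": 5, "GCP": 4}   # per-subnet reservations
subnet = ipaddress.ip_network("10.0.0.0/28")  # 16 addresses total

for provider, r in reserved.items():
    usable = subnet.num_addresses - r
    print(f"{provider}: {usable} usable in a /28")
```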

AWS 3-tier VPC pattern

/16 VPC with /24 subnets across 3 AZs. Always start with /16 — you can add secondary CIDRs later but cannot shrink. https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html

Tier            AZ-a          AZ-b          AZ-c
Public          10.0.1.0/24   10.0.2.0/24   10.0.3.0/24
Private (app)   10.0.11.0/24  10.0.12.0/24  10.0.13.0/24
Private (data)  10.0.21.0/24  10.0.22.0/24  10.0.23.0/24
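The table follows a simple convention (third octet = tier base + AZ index), which means the plan can be generated rather than hand-typed. The tier names and base offsets below restate the table; they are a sketch, not an AWS-mandated scheme:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["a", "b", "c"]
tier_base = {"public": 0, "private-app": 10, "private-data": 20}

plan = {}
for tier, base in tier_base.items():
    for i, az in enumerate(azs, start=1):
        subnet = ipaddress.ip_network(f"10.0.{base + i}.0/24")
        assert subnet.subnet_of(vpc)   # must fit in the VPC CIDR
        plan[(tier, az)] = subnet

print(plan[("private-data", "c")])   # 10.0.23.0/24
```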

AWS EKS pod networking (2026)

Prefix delegation is now the standard approach. VPC CNI assigns /28 blocks (16 IPs) to ENI slots instead of individual IPs, enabling 110 pods/node. Use /20 or larger subnets for pods. Karpenter sets --max-pods=110 by default, ignoring the legacy ENI calculator.

For pod CIDRs that won't conflict with corporate space, add a secondary CIDR from 100.64.0.0/10 (CGNAT / RFC 6598). https://docs.aws.amazon.com/eks/latest/best-practices/vpc-cni.html · https://www.eksworkshop.com/docs/networking/vpc-cni/prefix/

Differing Opinion to Consider

Using 100.64.0.0/10 (CGNAT) for pod CIDRs. AWS docs recommend it to avoid RFC 1918 collisions. But CGNAT space is used by ISPs between their equipment and customers — remote workers on cellular or home connections may have CGNAT addresses, causing VPN routing conflicts. The long-term answer is IPv6 dual-stack, not more creative IPv4 carve-outs.

Azure hub-and-spoke

Hub VNet /16 for shared services + firewall. Spoke VNets per environment. Azure CNI Overlay is the 2026 recommendation for AKS: pods get addresses from an overlay CIDR (default 10.244.0.0/16) that can be reused across independent clusters in the same VNet. https://learn.microsoft.com/en-us/azure/aks/azure-cni-overlay · https://learn.microsoft.com/en-us/azure/virtual-network/concepts-and-best-practices

GCP global VPC

Custom mode, regional subnets. GKE now supports Class E addresses (240.0.0.0/4) for pods — 268M addresses, dwarfing all of RFC 1918 combined. Production-ready but with caveats: Class E traffic gets blocked on Windows hosts and some on-prem hardware. https://cloud.google.com/blog/products/containers-kubernetes/how-class-e-addresses-solve-for-ip-address-exhaustion-in-gke

Multi-region IP planning

Assign each region a /12 from a 10.0.0.0/8 enterprise allocation:

10.0.0.0/8 (enterprise)
  10.0.0.0/12  = us-east-1
  10.16.0.0/12 = us-west-2
  10.32.0.0/12 = eu-west-1
  10.48.0.0/12 = ap-southeast-1
  10.64.0.0/12 = future growth

https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_planning_network_topology_non_overlap_ip.html
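The regional carve-out above falls directly out of splitting the /8 into /12 blocks in order:

```python
import ipaddress

enterprise = ipaddress.ip_network("10.0.0.0/8")
regions = ["us-east-1", "us-west-2", "eu-west-1", "ap-southeast-1"]

# A /8 splits into 16 /12 blocks; hand them out in order.
blocks = enterprise.subnets(new_prefix=12)
plan = dict(zip(regions, blocks))

for region, block in plan.items():
    print(f"{block}  = {region}")
```

The 12 unassigned /12 blocks remain as future growth, matching the layout above.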

Differing Opinion to Consider

Large subnets (/20) vs many small subnets (/24) in cloud. Large subnets are simpler and safer against IP exhaustion during autoscaling. Small subnets provide traditional L2 isolation. 2026 consensus leans toward large subnets + identity-based microsegmentation (Cilium, cloud security groups) instead of subnet-boundary security. https://www.tigera.io/learn/guides/microsegmentation/

Sizing

Right-sizing subnets — the math, the mistakes, and the tradeoffs. Getting this wrong costs either wasted address space or a painful renumbering project.

Kubernetes CIDR sizing

The formula: pod CIDR = (max nodes) × (IPs/node).

Platform           Default pods/node        CIDR/node  100-node cluster needs
EKS                110 (prefix delegation)  /24        /16 pod CIDR
AKS (CNI Overlay)  250                      /24        /16 pod CIDR
GKE                110                      /24        /16 pod CIDR

Rolling updates temporarily double IP usage (old + new pods coexist). Always size for 2x peak pod count. https://docs.aws.amazon.com/eks/latest/best-practices/subnets.html · https://learn.microsoft.com/en-us/azure/aks/concepts-network-ip-address-planning · https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr
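A sketch of the math behind the table's /16 column, assuming one /24 block handed to each node and the 2x rolling-update headroom described above:

```python
import math

def cluster_pod_prefix(max_nodes: int, per_node_prefix: int = 24,
                       headroom: int = 2) -> int:
    """Smallest cluster pod CIDR that holds one per-node block
    (a /24 here) for max_nodes nodes, times the headroom factor."""
    blocks = max_nodes * headroom
    return per_node_prefix - math.ceil(math.log2(blocks))

print(f"/{cluster_pod_prefix(100)}")   # /16, matching the table
```

Without the headroom factor a /17 would be the bare minimum for 100 nodes; the extra bit is what absorbs the rolling-update doubling.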

Cilium and eBPF

Cilium is now the dominant advanced CNI (used by Adobe, Capital One; Azure AKS moving toward it). It uses identity-based policies (Kubernetes labels) instead of IP-based rules — a paradigm shift. But pods still need IPs for the data plane, so CIDR planning remains necessary. https://github.com/cilium/cilium

Service mesh impact

Istio/Linkerd sidecars share the pod's network namespace, so they don't consume extra IPs. The main impact is on pod density: sidecar memory/CPU overhead reduces how many pods fit per node, which indirectly affects how many node IPs you need.

Multi-cluster

All clusters in a mesh must have non-overlapping pod CIDRs. Cilium ClusterMesh requires a native routing CIDR covering all clusters (typically 10.0.0.0/8). Plan pod CIDRs per cluster from day one. https://docs.cilium.io/en/stable/network/clustermesh/clustermesh/

The 2x rule

Size for 2x your current device count. This accounts for 3–5 years of organic growth without renumbering. It's the single most reliable heuristic in subnet planning.

Over-provisioning vs under-provisioning

         Over-provisioning         Under-provisioning
Cost     Wastes address space      Forces painful renumbering
Routing  Larger broadcast domains  Larger routing tables
Cloud    Fewer subnets fit in VPC  Services fail to launch
Verdict: slightly over-provision. Renumbering costs more than wasted IPs.

Sizing by workload type

Workload          Size           Why
User LAN          /24            254 IPs covers most floors
Voice             /24 or /25     Matches user count 1:1
Guest / BYOD      /23 or /22     Multiple devices per person
Server            /25 or /26     Countable and stable
Management        /27 or /28     Few devices, well-known
IoT cameras       /24            Grows quickly
WAN links         /31            Only 2 endpoints (RFC 3021)
Cloud app subnet  /24            Room for scaling
K8s pod subnet    /16 to /12     Depends on cluster size
K8s node subnet   /20 or larger  Prefix delegation consumes IPs in /28 blocks

The power-of-two trap

For newcomers: 100 devices need a /25 (126 usable) or a /24 (254 usable). You can't get exactly 100. A /25 is cutting it close — go /24 to satisfy the 2x growth rule.
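The trap can be expressed as a one-liner: find the smallest prefix whose usable count (total minus network and broadcast) covers the host count.

```python
import math

def smallest_prefix(hosts: int) -> int:
    """Smallest IPv4 prefix with at least `hosts` usable addresses."""
    # +2 accounts for the network and broadcast addresses.
    return 32 - math.ceil(math.log2(hosts + 2))

print(smallest_prefix(100))       # 25 -> 126 usable, cutting it close
print(smallest_prefix(100 * 2))   # 24 -> the 2x rule lands on a /24
```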

Efficiency vs sprawl

In a /8 you have 16M addresses. Wasting a few /24s is negligible. But in cloud where each VPC is a /16, every /24 you allocate is 1/256th of your VPC. Sprawl becomes real when you have 50 microservices each demanding their own subnet. The fix: use fewer, larger subnets and enforce isolation at the security group / network policy layer.

Differing Opinion to Consider

Flat vs micro-segmented. Traditional networking says more subnets = more security boundaries. Modern cloud-native says fewer subnets + identity-based microsegmentation (Cilium, Calico network policies) is both simpler and more secure because it captures east-west traffic that subnet boundaries miss. Both camps agree: never use a flat /16 with no segmentation at all.

Branch

Branch office IP planning — standardized templates, WAN links, and summarization-friendly allocation.

Standardized branch template

A /22 per branch gives 1,024 addresses. Use the same VLAN layout at every branch — consistency makes troubleshooting and automation possible.

VLAN  Purpose         Subnet          Usable
110   Employee data   10.20.0.0/24    254
210   Voice           10.20.1.0/25    126
310   Guest           10.20.1.128/25  126
410   IoT / cameras   10.20.2.0/26    62
910   Management      10.20.2.64/27   30
510   Servers         10.20.2.96/27   30
—     WAN link        10.20.3.252/31  2
—     Growth reserve  10.20.2.128/25  126

Branch 2 = 10.20.4.0/22, Branch 3 = 10.20.8.0/22. Each branch summarizes to one route. Note the intentional gap: increment by 4 in the third octet to stay on /22 boundaries. The growth reserve /25 within each branch provides room for expansion.

Hub-and-spoke WAN

Use /31 per point-to-point link (RFC 3021). Reserve 10.255.0.0/24 for all WAN links — that's 128 links. https://datatracker.ietf.org/doc/html/rfc3021
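Carving the /24 reservation into /31 links with Python's ipaddress module shows the 128-link capacity directly:

```python
import ipaddress

wan_pool = ipaddress.ip_network("10.255.0.0/24")
links = list(wan_pool.subnets(new_prefix=31))
assert len(links) == 128           # 256 addresses / 2 per link

hub_side, spoke_side = links[0]    # a /31 holds exactly two addresses
print(hub_side, spoke_side)        # 10.255.0.0 10.255.0.1
```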

Differing Opinion to Consider

/30 vs /31 for point-to-point links. /31 saves 2 IPs per link and is supported by all modern equipment. /30 is only needed for legacy gear that doesn't support RFC 3021 (published in 2000). If you have modern gear, use /31. If you have Check Point firewalls or very old IOS versions, test first.

SD-WAN

The overlay simplifies routing policy but underlay IPs still matter. Each hub needs at least a /24; spokes use the standard branch template above.

Summarization-friendly allocation

This is a critical concept. Bad: Branch 1 = 10.20.1.0/24, Branch 2 = 10.20.5.0/24 (can't summarize). Good: Branch 1 = 10.20.0.0/22, Branch 2 = 10.20.4.0/22 (all 8 branches summarize to 10.20.0.0/19). Always allocate on power-of-two boundaries.

DHCP scope planning

Reserve the first 10–20 IPs for static assignments (gateway, printers, servers). Recommended lease times: 8 hours for user devices, 4 hours for guests, infinite for VoIP phones. Use DHCP Option 150 for phone provisioning server addresses.
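One way to express the static-reservation split for a branch data subnet. The 20-address reserve follows the guidance above; the subnet itself is illustrative:

```python
import ipaddress

subnet = ipaddress.ip_network("10.20.0.0/24")
hosts = list(subnet.hosts())       # .1 through .254

static_reserve = hosts[:20]        # .1-.20: gateway, printers, servers
dhcp_pool = hosts[20:]             # .21-.254: dynamic leases

print(dhcp_pool[0], dhcp_pool[-1])   # 10.20.0.21 10.20.0.254
```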

RFC 1918 tradeoffs

Range           Size  Best for           Gotchas
10.0.0.0/8      16M   Large orgs, cloud  Overlaps with everyone
172.16.0.0/12   1M    Secondary, DMZ     Only 172.16–31 is private; 172.0–15 is public
192.168.0.0/16  65K   Labs, OOB mgmt     Too small; conflicts with every home router

Recommendation: use 10/8 as primary. Reserve 172.16/12 for DMZ or environments that must not overlap with 10.x. Use 192.168/16 only for OOB management. https://datatracker.ietf.org/doc/html/rfc1918

Data Center

Data center fabric subnetting — spine-leaf underlay, VXLAN overlay, out-of-band management, and multi-tenant design.

Spine-leaf underlay

/32 loopbacks for each device, /31 point-to-point links between spines and leaves. Convention: 3rd octet = tier (0 = spine, 1 = leaf), 4th octet = device number.

Loopbacks (10.0.0.0/24):
  Spines: 10.0.0.1/32, 10.0.0.2/32, 10.0.0.3/32, 10.0.0.4/32
  Leaves:  10.0.1.1/32, 10.0.1.2/32, ... 10.0.1.48/32

P2P links (10.0.2.0/24):
  Spine1-Leaf1: 10.0.2.0/31
  Spine1-Leaf2: 10.0.2.2/31
  Spine2-Leaf1: 10.0.2.4/31

eBGP underlay, one ASN per device or per tier. ECMP across all spines. https://www.juniper.net/documentation/us/en/software/nce/sg-005-data-center-fabric/
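The numbering convention above can be generated rather than tracked by hand. One wrinkle worth noting: at 4 spines x 48 leaves the fabric has 192 point-to-point links, which overflows a single /24 (128 /31s), so this sketch draws from a /23; the spine-major link ordering is also an assumption:

```python
import ipaddress
import itertools

SPINES, LEAVES = 4, 48

# Loopbacks: 3rd octet = tier (0 spine, 1 leaf), 4th = device number.
spine_lo = [f"10.0.0.{n}/32" for n in range(1, SPINES + 1)]
leaf_lo = [f"10.0.1.{n}/32" for n in range(1, LEAVES + 1)]

# One /31 per spine-leaf link; 192 links need 384 addresses,
# so a /23 pool is the minimum that fits.
pool = ipaddress.ip_network("10.0.2.0/23")
pairs = itertools.product(range(1, SPINES + 1), range(1, LEAVES + 1))
p2p = dict(zip(pairs, pool.subnets(new_prefix=31)))

print(p2p[(1, 1)], p2p[(1, 2)])   # 10.0.2.0/31 10.0.2.2/31
```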

OOB management

A physically separate network (dedicated switches, separate cabling). Use 192.168.x.x to clearly distinguish from production 10.x.x.x. Access via a dedicated jump host only. https://www.cisco.com/c/en/us/solutions/collateral/service-provider/out-of-band-best-practices-wp.html

East-west vs north-south

VXLAN overlay for east-west traffic (server-to-server within the DC). Border leaf for north-south traffic (DC to the outside world). Every east-west path in a spine-leaf fabric is exactly 2 hops (leaf→spine→leaf).

Multi-tenant (VXLAN-EVPN)

VRF per tenant, VNI per segment. Overlapping IPs are allowed across tenants because VRFs provide full routing isolation. https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-739942.html

Migration and renumbering

When renumbering is unavoidable, four strategies minimize disruption:

  • Parallel prefix — add the new range alongside the old on every interface; migrate traffic gradually
  • DHCP-first — handles 80–90% of devices automatically at next lease renewal
  • DNS-based — services accessed by hostname migrate transparently as DNS records are updated
  • Phased cutover — one rack at a time, maintaining rollback capability throughout

IPv6 in the data center

A /48 per site gives 65,536 /64 subnets. Use /127 for point-to-point links (RFC 6164) instead of /64 to prevent ping-pong attacks. Align to nibble boundaries (prefix lengths divisible by 4) for clean DNS reverse zones. https://www.ripe.net/publications/docs/ripe-690/
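Two of those rules are easy to check mechanically: nibble alignment is just a prefix length divisible by 4, and Python's ipaddress treats /127 point-to-points like IPv4 /31s (both addresses usable). The prefixes below use documentation space purely for illustration:

```python
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")   # documentation prefix
assert site.prefixlen % 4 == 0                 # nibble-aligned

p2p = ipaddress.ip_network("2001:db8:ffff::/127")  # RFC 6164 link
endpoints = list(p2p.hosts())                  # both ends are usable
assert len(endpoints) == 2

print(endpoints[0].reverse_pointer)   # the ip6.arpa name for one end
```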

IPAM tools

You need something better than a spreadsheet. Options:

  • NetBox — gold standard, open source, full IPAM + DCIM
  • phpIPAM — lightweight, open source, web-based
  • AWS VPC IPAM — native to AWS, tracks VPC allocations
  • Infoblox / BlueCat — enterprise-grade, DDI (DNS/DHCP/IPAM)

Documentation minimum per subnet: CIDR, VLAN, purpose, site, gateway, DHCP range, date allocated. Audit quarterly.