Fake IP till you make IP

October 14, 2025 · Florian Margaine · Reading time: 3 minutes

Ever peek at your database connection details on Upsun and wonder why the IP addresses look… weird?

Check your relationship IPs and you’ll spot something odd:

$ echo $PLATFORM_RELATIONSHIPS | base64 -d | jq '.db[0].ip'
"169.254.17.139"

But your container’s own IP tells a different story:

$ hostname -i
247.221.80.48

Two completely different subnets. What’s going on here?

The great IP switcheroo

Before we explain the 169.254.0.0/16 subnet mystery, let’s talk about how Upsun’s infrastructure works. We run a grid of VMs, each hosting hundreds of containers. Each VM gets its own chunk of the 240.0.0.0/4 subnet. (If you’re curious about that particular networking choice, we wrote about it.)

All containers on a VM share that VM’s subnet. This works great for routing, but creates an interesting problem: what happens when a container moves?

Containers move between VMs for all sorts of reasons - a VM might die, we might rebalance resources, or your database might need to relocate. When that happens, the container gets a new IP address in its new VM’s subnet.

Do we update all your application configs every time? Update your environment variables? Force you to redeploy? That would be terrible.

Enter the virtual IP

This is where the 169.254 subnet comes in. These are virtual IPs (VIPs), and they never change. The actual number is based on a hash of the relationship name, so it’s consistent and predictable. Your application connects to the VIP, and behind the scenes we make sure it always points to the real container IP, wherever that container happens to be living.

Think of it as a forwarding address that never expires.
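As an aside, the hashing idea is easy to sketch. Here’s a toy version in shell - the hash choice and byte layout are illustrative, not our actual scheme:

# Toy sketch: derive a stable 169.254.x.y VIP from a relationship name.
# The hash function and byte layout are simplified for illustration; a
# real scheme would also dodge the reserved 169.254.0.0/24 and
# 169.254.255.0/24 blocks.
relationship="db"

# Hash the name and keep the first two bytes of the digest...
hash=$(printf '%s' "$relationship" | md5sum | cut -c1-4)

# ...then map them onto the host part of 169.254.0.0/16.
echo "169.254.$((16#${hash:0:2})).$((16#${hash:2:2}))"

Same name in, same address out, every time - which is exactly the property the VIP needs.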

The NAT trick

So how do we make one IP address magically point to another? With iptables and Network Address Translation (NAT). The name gives it away - we’re translating one network address into another.

Here’s what the rules in the NAT table (as shown by iptables -t nat -L) look like for routing traffic from the virtual IP 169.254.17.139 to the real container IP 247.221.80.48:

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
DNAT       all  --  anywhere             169.254.17.139       to:247.221.80.48

The magic happens in that DNAT rule. It sits in the OUTPUT chain because your application’s packets are generated locally, and whenever they’re addressed to 169.254.17.139, the kernel rewrites the destination to 247.221.80.48 before sending them on their way. Your application doesn’t know, doesn’t care, and doesn’t need to change its configuration.
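For the curious, a rule with that shape takes a single command to create. Here’s a generic sketch (simplified from what our tooling actually runs):

# Sketch: rewrite locally generated traffic headed for the VIP so it
# reaches the container’s real address instead.
iptables -t nat -A OUTPUT -d 169.254.17.139 -j DNAT --to-destination 247.221.80.48

And because NAT decisions are tracked per connection, only the first packet of each connection consults this rule - conntrack rewrites the rest automatically.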

Bonus: load balancing for free

Here’s a neat trick - iptables can do more than just one-to-one translation. With the statistic match module (the modern successor to the old nth match), you can make a single VIP round-robin between multiple real IPs. This is perfect for load balancing across replicas.

Want to distribute traffic across three database replicas? Each statistic rule keeps its own packet counter, so you chain them: the first rule takes one in three connections, the second takes half of what slips past it, and a catch-all takes the rest - an even three-way split:

DNAT  all  --  anywhere  169.254.17.139  statistic mode nth every 3 packet 0 to:247.221.80.48
DNAT  all  --  anywhere  169.254.17.139  statistic mode nth every 2 packet 0 to:247.221.80.52
DNAT  all  --  anywhere  169.254.17.139  to:247.221.80.61

Your application still connects to one address, but the kernel spreads the load across multiple backends behind the scenes. No load balancer configuration, no service mesh, no extra moving parts - the kernel handles it all.
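In command form, building that cascade looks something like this (again a sketch, not our production rules verbatim):

# Sketch: three-way round robin on a VIP using the statistic match.
# Each rule’s nth counter is independent, so the fractions cascade:
# 1/3 of new connections match the first rule, 1/2 of the remainder
# match the second, and the final rule catches everything else.
iptables -t nat -A OUTPUT -d 169.254.17.139 -m statistic --mode nth --every 3 --packet 0 -j DNAT --to-destination 247.221.80.48
iptables -t nat -A OUTPUT -d 169.254.17.139 -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination 247.221.80.52
iptables -t nat -A OUTPUT -d 169.254.17.139 -j DNAT --to-destination 247.221.80.61

Rule order matters here: the catch-all has to come last, or it would swallow every connection before the statistic rules got a look.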

The takeaway

Virtual IPs give you stable relationship addresses even when the underlying infrastructure is shifting around. Your database connection string stays the same whether we’re moving containers, rebalancing load, or dealing with hardware failures.

It’s a neat trick that keeps your applications running smoothly while we handle the infrastructure complexity behind the scenes. You get to focus on writing code instead of managing network topology.

Sometimes faking it really is the best way to make it.
