Tutorial · Estimated reading time: 22 minutes

QEMU/KVM guests through Linux Mihomo:
NAT port maps vs user-mode networking

You already run Mihomo (or any Clash-compatible core) on a Linux workstation or headless server. Inside QEMU/KVM—whether you drive it with virt-manager, virsh, or plain qemu-system-x86_64—you want apt, Docker image pulls, and browsers to reuse that profile instead of duplicating subscriptions. This guide separates two families of setups that people often confuse: libvirt NAT on virbr0 (forwarding and SNAT performed by the host) and QEMU user networking (slirp-style 10.0.2.0/24 with a fixed host alias). For each topology we spell out which address is the proxy target, which address is only the default gateway, and when NAT port forwarding is about exposing a guest service rather than simply reaching the host listener.

QEMU · KVM · virt-manager · Mihomo · NAT · user networking

1 Why QEMU/KVM on Linux deserves its own Mihomo checklist

Our earlier lab guides already walked through VMware Workstation on Windows, Hyper-V, WSL2, and Parallels. Those stacks ship vendor-specific virtual switches and DHCP stories. On bare-metal Linux, KVM is the mainstream accelerator, QEMU supplies devices and firmware, and libvirt wraps XML plus permissions so virt-manager can stay approachable. The networking surface is more fragmented: many tutorials jump straight to bridged adapters on a physical NIC, yet the default path for quick VMs remains an isolated RFC1918 segment with NAT toward the host routing table. When readers paste Windows-centric instructions into that environment, they misidentify the “host IP” the guest should dial for HTTP_PROXY, or they chase iptables rules when the real issue is still allow-lan: false.

The user intent behind this article is practical: keep one audited Mihomo profile on the hypervisor, let disposable guests inherit it through explicit HTTP/SOCKS listeners or controlled forwarding, and understand how NAT port forwarding differs from “make the guest use the host as a proxy.” Port maps move packets toward a service port inside the guest; they do not magically attach Chrome to Clash unless you still configure the application layer. Conversely, pointing HTTPS_PROXY at the host’s address on the virtual segment is often enough for curl and apt without any inbound map at all.

Finally, Linux operators frequently mix systemd-managed Mihomo with ad-hoc qemu CLI experiments. If your unit file binds listeners to loopback only, LAN-scoped guests will never complete a TCP handshake. The companion Linux Mihomo systemd guide covers service hardening and bind addresses; here we focus on the guest-facing network graph so you can pick the right knob the first time.

2 Two QEMU worlds: libvirt-managed NAT and QEMU user networking

The libvirt default network creates a Linux bridge such as virbr0, attaches dnsmasq for guest DHCP/DNS, and uses iptables or nftables rules generated from the network XML to SNAT guest traffic out through whatever default route the host already uses—which may itself be policy-routed through Mihomo when TUN mode is enabled on the host. Guests see a gateway IP on virbr0 (commonly 192.168.122.1 unless you customized the XML), and the hypervisor holds that same IPv4 on the bridge interface. Therefore the address you export as http_proxy in the guest is typically that .1 host address, not the guest’s own lease, and not the public uplink address of your apartment router.

QEMU user networking (sometimes called slirp) avoids creating a host-visible bridge per VM. Instead QEMU embeds a userspace IPv4 router with a well-known topology: the guest usually receives 10.0.2.15/24, the emulated gateway is 10.0.2.2, and an internal DNS stub often appears as 10.0.2.3. From the guest’s perspective, 10.0.2.2 is how you reach the host’s TCP stack—so a Mihomo mixed port listening on 0.0.0.0:7890 is reachable as http://10.0.2.2:7890 without touching libvirt at all. That simplicity is why quickstart QEMU snippets use -netdev user; the trade-off is throughput, no direct L2 adjacency to your LAN, and limited multicast behavior compared with tap/bridge backends.

Mixing mental models causes the classic failure mode: you attach a VM to user networking but read a blog post written for virbr0, export HTTP_PROXY=http://192.168.122.1:7890, and nothing connects because that subnet literally does not exist inside the guest. The reverse mistake—staying on libvirt NAT while hard-coding 10.0.2.2—is equally common. Start every debugging session by printing routes: ip route on Linux guests, and on the host run ip -br addr to list bridges and their addresses before touching firewall counters.

Rule of thumb: If virsh net-dumpxml default shows a bridge your host participates on, proxy targets follow that bridge IPv4. If qemu-system-... only lists netdev user, proxy targets follow 10.0.2.2 unless you overrode the subnet.
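
A hedged reference point: on a stock install, virsh net-dumpxml default returns XML like the trimmed sketch below (uuid and MAC elided); the <ip address='…'> value is the canonical proxy target for guests on that network.

Example: default network definition on a stock libvirt install
$ virsh net-dumpxml default
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>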

3 libvirt NAT: discovering the real host address on virbr0

After virt-install or the virt-manager wizard attaches a NIC to the “default” network, boot the guest and inspect addressing. On Debian or Ubuntu guests, ip -4 route show default reveals the gateway IP assigned by libvirt—again, usually 192.168.122.1. That gateway answers ARP for the subnet and performs NAT; it is implemented by the host kernel plus libvirt’s forwarding rules. Your Mihomo mixed listener must bind to an address reachable from that subnet, which practically means enabling Allow LAN in the GUI or setting allow-lan: true in YAML so the listener attaches to 0.0.0.0 or at minimum to the virbr0 address.

Some administrators create additional isolated networks (virsh net-list --all) for multi-tier labs. Each network XML declares its own ip address='w.x.y.z' element; treat that value as the canonical proxy destination for guests on that network. If you run multiple bridges, never assume “the host is always .1” without reading the XML—internal ordering can surprise you after upgrades.

Connectivity testing should precede TLS debugging. From the guest, run nc -vz 192.168.122.1 7890 (substitute your bridge IP and mixed port). If TCP succeeds but HTTPS still fails, shift attention to Mihomo rules and DNS mode rather than KVM. If TCP fails immediately, revisit listener bind scope, host nft or iptables INPUT chains, and whether libvirt inserted its own ACCEPT rules ahead of a DROP policy you inherited from a CIS benchmark.
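
A minimal guest-side sequence ties those steps together—derive the target from the routing table rather than hard-coding it (7890 assumes the conventional mixed port):

Example: guest-side reachability check
# read the gateway from the default route, then test TCP to the mixed port
GW=$(ip -4 route show default | awk '{print $3}')
nc -vz "$GW" 7890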

Example: apt proxy configuration on a Debian guest
# /etc/apt/apt.conf.d/95proxy — replace IP with your virbr0 gateway
Acquire::http::Proxy "http://192.168.122.1:7890/";
Acquire::https::Proxy "http://192.168.122.1:7890/";

4 Allow LAN, bind addresses, and why loopback-only breaks guests

Clash-compatible cores historically bind the mixed HTTP/SOCKS port to 127.0.0.1 for safety. Virtual machines originate connections from their own private addresses, so packets arrive on the host with a source IP like 192.168.122.46. If Mihomo still listens only on loopback, the kernel never delivers those SYN segments to userspace. Toggle Allow LAN during lab work, or explicitly set bind-address: '*' paired with host firewall rules that permit only 192.168.122.0/24 (and similarly scoped CIDRs for other bridges). Document the chosen port—7890 is conventional but not magical—and keep parity with whatever virt-manager users type into Windows guests if you run mixed fleets.
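
A minimal host-side sketch of that pairing, assuming the conventional key names and an existing inet filter table—adjust both to your profile and ruleset:

Example: listener scope plus a scoped firewall allowance
# mihomo YAML fragment — listener reachable beyond loopback
mixed-port: 7890
allow-lan: true
bind-address: '*'

# nftables — admit only the libvirt NAT subnet to the mixed port
nft add rule inet filter input ip saddr 192.168.122.0/24 tcp dport 7890 accept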

Security posture mirrors physical LAN exposure: any neighbor who can route to that bridge could abuse an open listener. For single-host developer machines the risk is often acceptable; for multi-tenant lab servers, prefer an explicit forward or a dedicated proxy VM instead of widening the host profile to entire RFC1918 universes. When corporate policy forbids widening listeners, run Mihomo inside each guest with a local subscription mirror, or terminate TLS to an internal forwarder on the bridge—higher operational cost, narrower blast radius.

If you run Mihomo purely under systemd with IPAddressDeny=any style hardening, double-check that unit directives do not block the libvirt bridge subnet you actually intend to serve. Hardening templates copied from container hosts sometimes assume only loopback management planes, which silently breaks VM labs until you relax the cgroup or BPF filter set.
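
If that describes your unit, a drop-in along these lines re-admits the bridge subnet; the unit name mihomo.service and the file path are assumptions—match your own layout:

Example: relaxing systemd IP accounting for the bridge subnet
# /etc/systemd/system/mihomo.service.d/allow-virbr0.conf (hypothetical path)
[Service]
IPAddressAllow=localhost
IPAddressAllow=192.168.122.0/24
# apply with: systemctl daemon-reload && systemctl restart mihomo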

Listener collisions: Only one process may bind TCP port 7890 on a given address. If you experiment with a second Mihomo profile “just for the VM,” pick a different port and update every guest export consistently.

5 NAT port forwarding: when maps help—and when they do not replace a proxy

NAT port forwarding on the libvirt side maps a tuple (host_ip, host_port) to (guest_ip, guest_port) through DNAT rules. Note that the network XML cannot express such inbound maps itself—the <port> element under <nat> only constrains the source-port range used for outbound SNAT—so in practice you write manual iptables -t nat / nftables rules or a libvirt qemu hook script. Typical use cases include exposing an HTTP API running inside the guest to your LAN, or temporarily punching a VNC debugging port outward during a classroom demo. Those maps solve inbound reachability; they do not, by themselves, force outbound apt traffic through Mihomo unless you also configure proxy environment variables or a catch-all transparent redirect on the guest.
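
When the XML runs out, manual rules along these lines are the usual route for an inbound map; the guest address and both ports here are illustrative:

Example: manual inbound map toward a guest service
# host 8080/tcp -> guest 192.168.122.50:80
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.122.50:80
# let the forwarded flow pass the FORWARD chain ahead of libvirt's reject rules
iptables -I FORWARD -p tcp -d 192.168.122.50 --dport 80 -j ACCEPT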

Another pattern is forwarding a host port to a service that listens only inside the guest—less common for Clash users, but relevant when you chain tools: for example, map host 8443/tcp to guest 443/tcp where an internal registry listens. Keep directionality explicit in your notes; forums routinely confuse “forward host 7890 to guest 7890” with “guest should use host 7890 as HTTP proxy,” which are orthogonal designs. If your goal is exclusively “reuse host Mihomo for outbound browsing,” you rarely need an inbound map at all; you need a reachable listener on the bridge IP plus guest-side proxy settings.

When you do rely on maps—for example a CI runner on the host hitting a guest-hosted mock API—remember that return traffic must hairpin correctly through the same NAT table. If you also run Mihomo TUN on the host, policy routing can reorder paths so that forwarded packets bypass the intended chain. Capture with tcpdump -i virbr0 host x.x.x.x before editing dozens of forwarding rules based on intuition alone.

6 QEMU user-mode networking: 10.0.2.2, guestfwd, and redirect quirks

Launching QEMU with -netdev user,id=net0 -device virtio-net-pci,netdev=net0 keeps everything inside QEMU’s userspace router. Guests should configure HTTP_PROXY=http://10.0.2.2:7890 when Mihomo listens on all interfaces; the address is stable across many QEMU versions, which is why documentation loves it. DNS inside the guest often resolves through 10.0.2.3; if your Mihomo profile uses FakeIP or a custom DNS listener on the host, validate whether the guest should instead query the host resolver explicitly (10.0.2.2 with a DoH client) to stay consistent with split-domain logic.
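
The guest-side exports are correspondingly short; 7890 again assumes your mixed port, and the curl call is just a smoke test:

Example: proxy exports inside a user-networking guest
export HTTP_PROXY=http://10.0.2.2:7890
export HTTPS_PROXY=http://10.0.2.2:7890
export NO_PROXY=localhost,127.0.0.1
curl -I https://www.example.com   # headers back means the host listener answered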

QEMU user networking also supports guestfwd=tcp:... and hostfwd=tcp:... options on the -netdev user line. hostfwd exposes a host port that forwards into the guest—useful for SSH jump hosts or web demos without libvirt. guestfwd can steer specific guest-originated destinations toward a host socket, which advanced users occasionally leverage to chain transparently into a local SOCKS helper. These knobs are powerful but easy to mis-type; prefer libvirt XML for anything you expect teammates to reproduce.

Performance-sensitive workloads—large container layer downloads, heavy git clones—often outgrow user networking. Switching the same disk image to a virtio NIC on virbr0 typically improves throughput and reduces CPU spent in userspace forwarding. Treat user networking as the fast on-ramp, not the final architecture for persistent dev boxes.

Example: qemu user netdev with host forward
qemu-system-x86_64 \
  -netdev user,id=n0,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=n0 ...
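
With that forward active, ssh -p 2222 user@localhost on the host lands on the guest’s sshd: QEMU accepts the connection on the host side and relays it into the slirp segment toward guest port 22.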

7 virt-manager specifics: NIC models, XML inspection, and live edits

virt-manager hides XML until you need it, yet the XML is the source of truth for whether a NIC is an <interface type='network'> with <source network='default'/> or an <interface type='hostdev'> SR-IOV attachment. Use Edit ▸ Connection Details ▸ Virtual Networks to view active networks and DHCP ranges. When cloning a golden-image VM, remember duplicated MAC addresses on the same bridge confuse DHCP; regenerate MACs before parallel boots. If you hot-plug a second NIC—say user networking for an experiment plus libvirt NAT for stable management—document interface metrics inside the guest so default routes do not flap unpredictably.
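
For orientation, a NAT-attached NIC in virsh dumpxml <vm> output looks like this trimmed sketch (the MAC is illustrative):

Example: NAT NIC stanza in domain XML
<interface type='network'>
  <mac address='52:54:00:12:34:56'/>
  <source network='default'/>
  <model type='virtio'/>
</interface>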

SPICE or VNC consoles do not influence routing; they are orthogonal channels. Problems that “only happen in the graphical console” are usually DNS or certificate trust issues inside the guest browser, not KVM framebuffers. Still, clipboard and folder sharing features can tempt users to copy half-baked proxy exports from the host; prefer provisioning with cloud-init or Ansible templates so every VM receives the same /etc/environment stanza tied to the correct bridge IP.
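
A template-rendered fragment might look like the sketch below; the bridge IP and port are placeholders your provisioning layer should fill in per network:

Example: provisioned /etc/environment fragment
# rendered by cloud-init or Ansible — values are examples
http_proxy=http://192.168.122.1:7890
https_proxy=http://192.168.122.1:7890
no_proxy=localhost,127.0.0.1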

SELinux on Fedora/RHEL hosts can block QEMU from binding forwarded ports if you stray from default policies. If ausearch -m avc shows denials while libvirt tries to open a hostfwd socket, adjust booleans or local policy modules rather than globally setting permissive mode. The same discipline applies to AppArmor profiles on Ubuntu hosts running libvirt from packages.

8 apt, Docker, Podman, and browsers: where to inject proxy settings

Debian-family apt honors /etc/apt/apt.conf.d proxy stanzas; Red Hat-family tools respect /etc/dnf/dnf.conf or environment variables depending on version. Shell-level exports cover curl, many language package managers, and git when configured to use HTTPS through the proxy. For Docker, the daemon reads /etc/systemd/system/docker.service.d/http-proxy.conf with Environment=HTTP_PROXY=... overrides; build-time docker build --build-arg HTTPS_PROXY=... is separate from runtime container env. Podman rootless setups inherit the user session environment, which is convenient until a systemd user unit clears it—test with podman run --rm alpine wget -qO- https://example.com after each change.
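
For the in-guest Docker daemon specifically, the drop-in pattern looks like this; the gateway IP and port carry over from the NAT examples above and may differ on your bridge:

Example: Docker daemon proxy drop-in inside a guest
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.122.1:7890"
Environment="HTTPS_PROXY=http://192.168.122.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1"
# apply with: systemctl daemon-reload && systemctl restart docker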

Chromium and Firefox on Linux guests respect desktop-wide proxy dialogs when you use a full desktop session; headless servers skip that, so rely on env vars or service-specific config. Corporate images sometimes ship NO_PROXY lists that accidentally include registry domains you actually need to intercept; diff env | sort between working and broken snapshots when “only Docker fails.”

When guests pull large images, watch Mihomo logs for throttling or upstream errors that masquerade as KVM bugs. Throughput collapse is often exit-node congestion, not virtio performance. If you need evidence, compare download speeds with Mihomo temporarily set to a simple global outbound rule versus a complex split profile; if speeds diverge wildly, optimize rules before blaming KVM.

9 DNS, FakeIP, and host TUN interactions you should expect

Mihomo’s DNS modes interact badly with “each OS picks its own resolver” setups. If the host uses FakeIP while the guest still queries 1.1.1.1 directly, names and addresses disagree between systems, producing intermittent TLS failures that look like random packet loss. Align strategies: either point the guest resolver at the host bridge IP where Mihomo offers DNS, or keep the guest on public DNS but avoid FakeIP for domains the guest must resolve independently. The Meta core DNS leak prevention article remains the conceptual map—even though that page speaks in Meta-core vocabulary, the FakeIP versus redir-host trade-offs apply whenever multiple kernels participate.
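
If you choose “guests query the host,” a DNS fragment like this sketch is the usual shape (Clash-style keys; the port is an example). Note that resolv.conf cannot express a port, so either bind :53 on an address dnsmasq does not already hold or DNAT 53 to the listener:

Example: host-side DNS listener for guests
dns:
  enable: true
  listen: 0.0.0.0:1053
  enhanced-mode: fake-ip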

Host-level TUN mode captures traffic that traverses the host routing stack in specific ways. Guest packets leaving through libvirt NAT already undergo SNAT on the host; whether they then enter Mihomo’s TUN depends on mark bits, routing tables, and whether the libvirt chain runs before or after your policy routing hook. Many teams simplify life by relying on explicit guest proxies toward the bridge IP while reserving TUN for host-only processes. If you insist on transparent interception for guests, budget time for nftables tracing and document the final mark-based path so the next engineer does not reverse it as “accidental complexity.”

IPv6 adds another layer: some guests prefer AAAA answers while your Mihomo path is IPv4-only. Symptoms look like “half the internet works.” Test with curl -4 versus curl -6 from the guest before rewriting entire profiles. You can disable IPv6 on the virtual NIC temporarily to confirm the diagnosis, then choose either a consistent dual-stack design or a deliberate IPv4-only posture with aligned DNS filters.
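
A quick family-split probe from the guest makes the diagnosis concrete:

Example: IPv4 versus IPv6 probe
curl -4 -sS -o /dev/null -w 'v4: %{http_code}\n' https://www.example.com
curl -6 -sS -o /dev/null -w 'v6: %{http_code}\n' https://www.example.com
# one family succeeding while the other stalls points at split routing or DNS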

Do not conflate default gateway with proxy URL: On libvirt NAT, the guest default route points at the virtual router IP, while the Mihomo mixed port is a TCP service on the same host but not “the gateway port.” User networking is friendlier because 10.0.2.2 colocates both roles in documentation, yet it is still not equivalent to typing https_proxy=$GATEWAY:7890 blindly.

10 Verification checklist before you open a bug report

  • Topology: Confirm whether the NIC uses libvirt default, a custom bridge, macvtap, or QEMU user; screenshot the XML snippet for teammates.
  • TCP to mixed port: From the guest, run a port check to the correct host-side IP (virbr0 gateway or 10.0.2.2).
  • Allow LAN / bind: On the host, ss -lntp | grep 7890 should show 0.0.0.0 or the bridge IP—not only 127.0.0.1 if guests must connect (see the spot-check sketch after this list).
  • Mihomo logs: Tail logs while generating guest traffic; silence implies packets never reached the listener.
  • Rule isolation: Temporarily use a global outbound policy to distinguish YAML mistakes from KVM mistakes.
  • DNS: Compare dig answers inside the guest with Mihomo’s DNS handling on the host.
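
Host-side spot checks for the bind-scope and log items above—the port and the unit name are the conventional assumptions; substitute your own:

Example: host-side spot checks
ss -lntp | grep 7890        # want 0.0.0.0 or the bridge IP, not only 127.0.0.1
journalctl -u mihomo -f     # tail while the guest runs curl; silence means nothing arrived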

11 Wrap-up

Running QEMU/KVM on Linux next to a host-resident Mihomo stack is mostly a networking literacy exercise. On libvirt NAT, learn your virbr0 address, enable Allow LAN or an explicit bind, and point guest tools at that bridge IPv4. On QEMU user networking, memorize 10.0.2.2 as the stable door to the host, add hostfwd only when you truly need inbound maps, and graduate heavy workloads to bridged or NAT virtio networks for throughput. NAT port forwarding solves different problems than HTTP proxy exports; keep those threads separate in runbooks so apt, Docker, and browsers stay boringly reliable.

Compared with duplicating subscriptions into every ephemeral guest, a single audited host profile with structured logs wins on maintainability—especially when snapshots roll back VM disks but you still want consistent egress policy. For Mihomo service layout on systemd-based hosts, revisit the Linux Mihomo systemd guide after you finalize bridge addresses and firewall scopes.

When you want installers and platform notes centralized—without treating upstream release pages as the primary distribution path—use the site download page for curated builds, finish host-side setup, then re-run the verification checklist from inside a throwaway VM before promoting the pattern to teammates.

→ Download Clash for free and experience the difference

Tags: QEMU · KVM · virt-manager · Mihomo · NAT · user networking · Linux

