1 When “auto fastest node” is not what you want
A url-test policy group (covered in our dedicated URL-Test and Fallback tutorial) periodically probes each candidate and keeps the tunnel that currently wins the latency contest. That is excellent when you truly want one pipe and you are fine with every flow sharing that exit. The downside appears when you operate multiple providers, several PoPs of similar quality, or you simply wish to spread load instead of hammering a single server while the rest sit idle.
Another scenario is session-sensitive sites: some web applications behave poorly when consecutive HTTPS requests hop between different egress IPs, even if each hop is “fast.” A url-test group may switch winners after every probe interval; tolerance reduces flapping but does not turn the group into a load spreader. For destinations where you want a stable mapping from logical target to outbound, you need a different selection rule—exactly where load-balance enters the picture in Mihomo-compatible configs.
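For contrast, a minimal url-test group looks like the sketch below—the node names are placeholders, and tolerance (in milliseconds) is the knob that dampens winner switching:

proxy-groups:
  - name: "Auto HK"                 # hypothetical url-test group: crowns a single winner
    type: url-test
    url: https://www.gstatic.com/generate_204
    interval: 300                   # probe every 300 seconds
    tolerance: 50                   # ignore wins smaller than 50 ms to reduce flapping
    proxies:
      - hk-01                       # placeholder node names
      - hk-02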
This article assumes you already import subscriptions or maintain manual proxies: entries. If you are still converting non-Clash links into YAML, start with the subscription converter guide so your node names exist before you reference them from policy groups.
2 Load-balance is distribution, not “pick minimum RTT”
In Mihomo, a proxy-groups entry with type: load-balance tells the core to use multiple child outbounds concurrently according to a strategy. Unlike url-test, the goal is not to crown a single winner for all traffic; it is to partition flows across the list. Health checks still matter—unreachable nodes should drop out of rotation—but the primary optimization target is not “minimum milliseconds to gstatic.”
Operationally, think of load-balance as “I have a pool of interchangeable exits; please spread work across them in a defined way.” That mental model matches dual-airport setups where both vendors sell similar regions, or a single subscription with many Hong Kong relays that you want to use in parallel instead of serial failover. If your requirement is strictly “primary line, then spare,” a fallback chain remains clearer; if your requirement is “use the whole pool,” load-balance fits better.
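If your requirement really is ordered failover, a minimal fallback sketch (hypothetical names) keeps the first healthy member and only moves down the list when it fails:

proxy-groups:
  - name: "Primary then spare"      # hypothetical ordered-failover group
    type: fallback
    url: https://www.gstatic.com/generate_204
    interval: 300
    proxies:
      - primary-node                # used while its health check passes
      - spare-node                  # promoted only when the line above fails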
Recent Mihomo (Clash Meta) builds support load-balance groups. Always confirm behavior against your exact core version—field names are stable in recent Meta releases, but edge cases (UDP, QUIC) deserve log verification on your device.
3 Strategies: round-robin, consistent-hashing, sticky-sessions
Mihomo documents three strategies for load-balance. Understanding them prevents misconfiguration when someone says “session stickiness” without specifying which dimension matters.
round-robin distributes requests among proxies in rotation. It is simple and can smooth aggregate throughput when nodes are homogeneous. It does not promise that two consecutive requests to the same website land on the same exit; in fact, they often will not. Use round-robin when you care about spreading bytes, not preserving server-side session affinity.
consistent-hashing assigns traffic so that requests sharing the same target address map to the same proxy inside the group. When the target is a domain name, matching keys on the registrable domain (eTLD+1) as described in upstream docs—practical for “everything under example.com should look like one egress to the origin.” This is the usual choice when people say they want “same site, same node” without manually clicking in a select group.
sticky-sessions routes flows with the same source address and target address to the same proxy, with a cache expiration window (documented as ten minutes in Mihomo references). That variant matters when multiple clients behind your LAN hit the same remote service and you still want per-internal-host separation combined with destination stickiness—useful on routers or shared gateways. Compare that with consistent-hashing, which keys more heavily on the destination side for hashing decisions in typical explanations.
None of these strategies magically guarantees application-layer “logged-in session forever.” Web apps can still invalidate cookies, force re-auth after IP changes on other paths, or open parallel connections that hash differently under QUIC. Treat load-balance policies as best-effort transport routing, not a full application session manager.
4 Paste-ready YAML for Clash Meta / Mihomo
The following skeleton follows the public schema: a probe url, polling interval, optional lazy probing, and a strategy. Replace proxy names with strings that actually exist under your proxies: list or nested group names.
proxy-groups:
  - name: "LB dual-airport HK"
    type: load-balance
    strategy: consistent-hashing
    url: https://www.gstatic.com/generate_204
    interval: 300
    lazy: true
    proxies:
      - sub-a-hk-01
      - sub-a-hk-02
      - sub-b-hk-01
      - sub-b-hk-02
To experiment with rotation instead, swap strategy: round-robin. For sticky behavior keyed on both endpoints when supported by your build, set strategy: sticky-sessions and validate in logs that flows behave as you expect under your workload (browser tabs, background sync, game clients).
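For instance, a sticky sibling of the pool above—added under the same proxy-groups: list—differs only in the strategy line (a sketch, reusing the node names from the skeleton):

  - name: "LB sticky HK"            # hypothetical sibling group
    type: load-balance
    strategy: sticky-sessions       # keys on source + destination, with a cache expiry window
    url: https://www.gstatic.com/generate_204
    interval: 300
    proxies:
      - sub-a-hk-01
      - sub-a-hk-02
      - sub-b-hk-01
      - sub-b-hk-02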
Wire the group from rules exactly like any other policy group: reference LB dual-airport HK (or your chosen name) in MATCH or higher-priority lines. Typos break validation; emoji or invisible characters in imported subscription labels are a frequent culprit when a GUI shows a node but the YAML still references an outdated string.
rules:
  - GEOIP,CN,DIRECT
  - MATCH,LB dual-airport HK
On headless Linux routers or VPS setups, pair this with the Linux Mihomo systemd guide so reload cycles and working directories stay predictable while you iterate on strategies.
5 Relationship between stickiness, health checks, and “ping”
Load-balance groups still perform HTTP-style health checks when you provide url and interval. Those probes tell Mihomo whether a proxy is alive; they do not rank nodes by latency for every connection the way url-test does. A newcomer might enable consistent-hashing, observe decent probe numbers in the panel, and still feel “slow” on certain sites because the hashed exit is farther away for that destination family. That is not a bug—it is a different objective function.
If you need both multi-node usage and latency awareness, a practical pattern is to build per-region url-test pools first, then place those group names inside a load-balance group (nesting is allowed when names resolve). You compete for milliseconds inside a city, then distribute across cities. Conversely, wrapping load-balance inside fallback can provide ordered disaster recovery when an entire pool fails—mirroring patterns described in the url-test article but with load spreading at the leaf layer.
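A sketch of that pattern with hypothetical group and node names: two city pools compete internally on latency, and the load-balance layer spreads flows across the pool winners.

proxy-groups:
  - name: "HK auto"                 # hypothetical url-test pool: lowest latency inside Hong Kong
    type: url-test
    url: https://www.gstatic.com/generate_204
    interval: 300
    proxies: [hk-01, hk-02]         # placeholder node names
  - name: "SG auto"                 # hypothetical url-test pool for Singapore
    type: url-test
    url: https://www.gstatic.com/generate_204
    interval: 300
    proxies: [sg-01, sg-02]
  - name: "Asia spread"             # load-balance across the city winners
    type: load-balance
    strategy: consistent-hashing
    url: https://www.gstatic.com/generate_204
    interval: 300
    proxies: ["HK auto", "SG auto"] # group names resolve just like node names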
DNS interacts with perceived performance too. If domains resolve differently on the client versus inside FakeIP logic, you can see “wrong exit” symptoms that look like broken stickiness. When debugging weird cross-site behavior, revisit resolver settings in our Meta DNS leak prevention guide before you rip apart hashing strategies.
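As a reference point while auditing, a minimal fake-ip DNS block in Mihomo-style YAML looks roughly like this—the nameserver choice is an assumption, so substitute resolvers you trust:

dns:
  enable: true
  enhanced-mode: fake-ip            # rules match domains before real resolution happens
  fake-ip-range: 198.18.0.1/16      # pool that fake IPs are handed out from
  fake-ip-filter:
    - "*.lan"                       # keep local names on real DNS
  nameserver:
    - https://1.1.1.1/dns-query     # assumption: pick your own trusted resolver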
6 Troubleshooting and realistic limits
- Expecting HTTP cookie sessions to never reset: Transport stickiness cannot fix server-side logout policies or multi-tab races; combine sane browser hygiene with realistic expectations.
- UDP games vs TCP web: Different protocols may not follow identical hashing paths in every client stack; verify with connection logs when something only breaks in voice chat or game lobbies.
- QUIC / HTTP3: Multipath or rapid connection churn can interact oddly with any hashing scheme; temporarily testing with HTTP/1.1 or disabling QUIC in the browser isolates whether the proxy policy is at fault.
- Mixing DIRECT with remotes inside the same load-balance group: Like url-test, comparing incompatible paths produces confusing outcomes—keep members homogeneous in role.
- Compliance: Respect provider terms and local law; this article explains routing mechanics, not entitlement to third-party services.
7 Putting it together with url-test and fallback
Advanced profiles often stack policy types: url-test for intra-region latency selection, load-balance for inter-node spreading inside a region or across two subscriptions, and fallback for ordered break-glass when a whole tier dies. The YAML remains readable if you name groups after their role—for example, HK auto as a url-test pool and HK spread as a load-balance group over several such pools if you intentionally duplicate structure (only when it reflects real topology; avoid circular group references).
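One hedged sketch of that stacking, reusing the hypothetical names from the earlier nesting example: a fallback group tries the spread tier first and only drops to a single backup when the whole pool fails.

proxy-groups:
  - name: "Break glass"             # hypothetical top-level group to reference from rules
    type: fallback
    url: https://www.gstatic.com/generate_204
    interval: 300
    proxies:
      - "Asia spread"               # the load-balance tier sketched earlier
      - backup-node                 # placeholder last-resort exit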
GUIs such as Clash Verge Rev expose the same underlying Mihomo graph; editing raw YAML is still the fastest way to confirm nested references. When something looks ignored, first verify mode: rule, then confirm the winning rules line, then walk outward through nested group names. Most “hashing failed me” reports are actually rule-order issues or stale proxy labels after a subscription refresh.
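Two top-level switches worth confirming before blaming a strategy—a minimal sketch, with debug verbosity intended only as a temporary tracing aid:

mode: rule          # policy groups only take effect via rules in rule mode
log-level: debug    # raise temporarily to trace which group a flow actually hit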
For per-application splits on Windows instead of destination hashing, see process-name routing on Windows—orthogonal tooling that solves a different class of problems.
8 Wrap-up
load-balance groups exist to distribute traffic across multiple Mihomo outbounds using a strategy such as consistent-hashing for destination-stable mapping, round-robin for even rotation, or sticky-sessions when source-and-destination pairing matters on shared gateways. They complement—not replace—url-test latency selection and fallback ordering; mixing types deliberately mirrors how real networks combine performance tuning with redundancy.
Success still depends on accurate proxy names, sane DNS, and honest expectations about what transport-level stickiness can guarantee. When you are ready to ship the profile to multiple devices, prefer a clean YAML review over guessing from GUI thumbnails, and keep health-check URLs reachable from every member of the pool.
Installers belong on the official site download page—avoid random mirrors when fetching Clash-compatible clients. Pull the build for your platform, import the profile, and confirm in the log that flows hit the intended load-balance group before you tune strategies further.