1 Why compare Hysteria2 and TUIC v5 in 2026
If you maintain a Mihomo-powered profile or trade YAML snippets with teammates, you have probably noticed the same two names in every “modern outbound” thread: Hysteria2 and TUIC version five. Both lean on QUIC-based user-space transports, both multiplex many logical flows over one UDP association, and both promise better behavior than classic TCP Shadowsocks-style tunnels when latency turns jittery or middleboxes treat UDP inconsistently. The marketing language overlaps so much that “which one is faster?” becomes a fair user question instead of a fan-club debate.
Speed, however, is never a single number. A protocol can win on a fiber desktop yet feel worse on commuter Wi-Fi because congestion control, padding strategies, and CPU cost differ. Regulatory and operational realities also matter: some networks throttle or drop long-lived UDP bursts, while others only interfere with specific fingerprints. This article stays on the engineering side—how we measured, what changed between runs, and how to interpret tables without treating them as a universal ranking.
We staged repeated trials during March 2026 between a macOS 15 laptop on a residential ISP link and a small KVM instance in Singapore, then repeated key tests from a second vantage point on a European VPS to sanity-check routing bias. Numbers below are medians of at least twenty runs per scenario after warm-up. Copy them into your own spreadsheets if you like, but expect your provider, CPU generation, and time-of-day congestion to move the absolute figures; focus on the deltas and failure modes, not the megabits alone.
2 What each protocol is really optimizing
Hysteria2 bundles a user-space QUIC stack with optional BBR-inspired bandwidth probing and the Brutal congestion-control mode, which aggressively claims capacity when you explicitly tell it how much downstream bandwidth to expect. That knob is powerful for fixed-line users who know their plan speed, and it helps the tunnel recover quickly after transient loss spikes. The trade-off is sensitivity: misconfigured Brutal targets can look like congestion to neighboring flows, so you should treat those settings as part of performance tuning, not a magic defaults toggle.
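For orientation, here is a minimal sketch of what a Hysteria2 outbound with explicit bandwidth hints can look like in a Mihomo proxies list; the node name, server, and credentials are placeholders, and the exact field set depends on your Mihomo version, so check your build's documentation before copying.

```yaml
proxies:
  # Hypothetical node; substitute your own server, SNI, and password.
  - name: "hy2-sg"
    type: hysteria2
    server: sg.example.com
    port: 443
    password: "replace-me"
    sni: sg.example.com
    # Bandwidth hints feed Brutal-style pacing; set them to the speed your
    # plan actually delivers, not the number printed on the contract.
    up: "50 Mbps"
    down: "600 Mbps"
```

Leaving the hints out generally falls back to ordinary congestion control, which is the safer starting point on links you have not measured.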
TUIC v5, historically associated with the sing-box ecosystem but now supported as a native outbound in Mihomo, emphasizes low-latency stream scheduling across QUIC connections with modern TLS handshakes and zero-RTT resumption where session state allows it. In clean paths we consistently observed slightly lower time-to-first-byte for small API-style requests, which matches intuition: less aggressive probing means fewer early retransmissions when the bottleneck is not the tunnel but remote server processing.
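The TUIC v5 counterpart is similarly compact. The sketch below assumes the tuic proxy type and tunables exposed by recent Mihomo builds (congestion-controller, udp-relay-mode, reduce-rtt); the UUID and password are placeholders issued by your server.

```yaml
proxies:
  # Hypothetical node; uuid and password come from the server side.
  - name: "tuic-sg"
    type: tuic
    server: sg.example.com
    port: 443
    uuid: "00000000-0000-4000-8000-000000000000"
    password: "replace-me"
    alpn: [h3]
    congestion-controller: bbr   # defaults are usually sane; change deliberately
    udp-relay-mode: native
    reduce-rtt: true             # opportunistic 0-RTT where the server permits it
```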
Neither protocol replaces sensible DNS design, rule hygiene, or MTU awareness inside Clash. If FakeIP and fallback resolvers disagree, you can chase “slow Hysteria” for hours while the real bug is a 198.18.0.0/16 blackhole on the LAN. Keep that context in mind when you read the benchmarks: they assume the rest of the Mihomo stack is already stable, similar to the baseline described in our Linux Mihomo systemd guide for headless deployments.
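As a baseline before any benchmarking, a boring, explicit dns block saves hours of misattributed blame. A minimal fake-ip sketch, assuming standard Mihomo dns options; adjust resolvers to your region:

```yaml
dns:
  enable: true
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16
  # Keep LAN and captive-portal lookups out of the fake-ip pool.
  fake-ip-filter:
    - "*.lan"
    - "*.local"
  nameserver:
    - https://1.1.1.1/dns-query
    - https://8.8.8.8/dns-query
```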
3 How we measured (so you can reproduce)
Each protocol used dedicated server binaries pinned to the same kernel version and symmetric firewall rules. Clients ran Mihomo release builds within one minor version to avoid parser drift, with identical rule files except for the outbound stanza. We disabled unrelated features such as sniffing overrides during timed tests to keep CPU comparable, then re-enabled them for qualitative browsing passes.
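Concretely, the two client configs differed only in the proxies stanza; everything else, including toggles like the sniffer block sketched below (shown as an assumption about the Mihomo option we disabled), stayed byte-identical between runs.

```yaml
# Disabled during timed runs so both outbounds pay the same parsing cost;
# re-enabled afterwards for the qualitative browsing passes.
sniffer:
  enable: false
```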
Latency samples combined ICMP baseline measurements to the VPS with application-level checks: fifty sequential HTTPS fetches through the tunnel to a static object on a CDN edge, recorded with curl and traced in Mihomo logs to confirm the expected outbound. Throughput relied on parallel curl downloads of multiple large objects plus a controlled iperf3 run in reverse mode to stress the uplink. For impairment, we used Linux netem on a router VM to inject uniform random loss and jitter toward the client—not a perfect model of every ISP, but repeatable.
4 Latency and perceived responsiveness
On the uncongested residential link, plain ICMP RTT to Singapore averaged 48 milliseconds. After tunneling, full HTTPS GET latency at the application layer landed at 126 milliseconds median for Hysteria2 with conservative Brutal targets and 118 milliseconds for TUIC v5. The eight-millisecond gap persisted across reboots and disappeared when we pointed both outbounds at a closer Tokyo PoP instead, which tells you how much geography dominates compared with protocol choice.
Interactive work—typing into remote APIs, loading dozens of small assets—felt marginally snappier on TUIC in that clean scenario because the tail latencies stayed tighter: ninety-fifth percentile numbers were 198 milliseconds versus 221 milliseconds for Hysteria2 when Brutal was enabled with an optimistic bandwidth hint. Turning Brutal down to match the actual ISP cap narrowed the gap to statistical noise, reinforcing that Hysteria2 rewards careful tuning whereas TUIC v5 is more forgiving out of the box for bursty web workloads.
Cold-start behavior differed too. After long idle periods, TUIC occasionally resumed with 0-RTT where session tickets remained valid, shaving one round trip off the first request. Hysteria2 instead rebuilt state quickly but rarely beat TUIC on that first packet unless we disabled aggressive pacing features entirely—a reminder that “fastest” depends on whether you care about steady-state downloads or intermittent micro-requests.
5 Bulk throughput and CPU cost
Saturating a 600 Mbps nominal fiber plan is rarely possible through any single QUIC tunnel when encryption, userspace copying, and provider shaping intervene. In our Singapore setup, Hysteria2 reached 312 Mbps downstream median with Brutal aligned to the ISP cap, while TUIC v5 topped out near 286 Mbps under the same parallel download mix. Upload symmetry told a similar story: Hysteria2 edged ahead by roughly twelve percent when the path stayed clean.
CPU utilization on Apple Silicon reflected those numbers. Hysteria2 held one performance core busier during sustained downloads, whereas TUIC spread load more evenly but triggered more frequent small wakeups, which matters on laptops chasing battery life. Neither stack melted the machine, but if you run Mihomo on a fanless edge box, monitor htop while repeating the test; you might prefer TUIC’s gentler sustained curve even if peak Mbps drops slightly.
Reference table (single PoP, March 2026 medians)
| Scenario | Hysteria2 | TUIC v5 |
|---|---|---|
| Median HTTPS GET (ms) | 126 | 118 |
| p95 HTTPS GET (ms) | 221 | 198 |
| Downstream Mbps (clean path) | 312 | 286 |
| Upstream Mbps (clean path) | 118 | 105 |
| Relative mean CPU load | 1.0 (baseline) | 0.85 |
Treat the Mbps entries as comparative, not contractual. Swap PoP, change congestion-control flags, or upgrade Mihomo and the ordering can move. The important pattern is that Hysteria2’s throughput advantage shows up when you invest in tuning, while TUIC v5 trades a few percentage points of peak speed for smoother latency tails on light traffic.
6 Loss, jitter, and messy Wi-Fi
Real networks rarely stay clean. We injected three percent uniform loss and twelve milliseconds of jitter on the router VM. Under those conditions TUIC’s median download rate fell to 74 Mbps while Hysteria2 held 163 Mbps with Brutal disabled and BBR-style pacing left on defaults. The gap was not mystical; Hysteria’s loss recovery and larger initial windows simply kept the pipe fuller when acknowledgments arrived out of order.
Commuter Wi-Fi without synthetic netem showed the same qualitative ranking. Walking between access points triggered QUIC state changes; Hysteria2 tended to re-establish usable throughput faster, while TUIC occasionally needed a few seconds to return to pre-roam speeds. Your mileage depends heavily on driver quality and whether the OS marks the interface as metered, so log everything when you replicate this casually in a coffee shop.
UDP-unfriendly middleboxes remain the wild card. When the upstream ISP clamped UDP bursts during evening peak, both protocols suffered, but TUIC sometimes slipped under the radar with conservative pacing profiles, whereas aggressive Hysteria2 settings triggered more noticeable throttling. If you operate in such an environment, maintain two outbound profiles: a “max performance” Hysteria2 node and a “stealthier” TUIC node on different ports or SNI strategies, then let Mihomo selectors switch them based on time of day or health checks.
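One way to express that dual-profile strategy in Mihomo is a health-checked group that prefers the tuned Hysteria2 node and falls back to TUIC when checks fail; the node names and probe URL below are placeholders.

```yaml
proxy-groups:
  - name: "QUIC-OUT"
    type: fallback
    proxies: ["hy2-sg", "tuic-sg"]
    # Moves to the next proxy when the preferred one fails its check.
    url: "https://www.gstatic.com/generate_204"
    interval: 300
```

A url-test group works too if you would rather follow measured latency than a fixed preference order.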
Practical upshot: on lossy or roaming links Hysteria2 more often wins raw throughput; on stable low-latency paths TUIC v5 frequently feels more agile for small requests.
7 Mihomo, Clash Verge Rev, and everyday integration
Modern Clash-compatible clients expose both protocols as first-class outbound types, but the surrounding features still determine day-to-day happiness. TUN mode, for example, removes per-app proxy gaps; if you have not adopted it yet, read the Clash Verge Rev TUN mode guide before you blame Hysteria2 for issues that are actually split-stack routing. Likewise, keep external-controller secrets rotated and restrict API exposure—high-speed tunnels amplify data exfiltration risk if someone gains local access.
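A compact sketch of the relevant client-side settings, assuming current Mihomo option names; platform support for the TUN stack values varies, so verify against your build:

```yaml
tun:
  enable: true
  stack: mixed               # system / gvisor / mixed depending on platform
  auto-route: true
  auto-detect-interface: true
# Keep the API bound locally and the secret non-empty; rotate it when profiles are shared.
external-controller: 127.0.0.1:9090
secret: "rotate-me"
```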
When merging provider subscriptions, prefer separate URL tests per protocol instead of lumping everything into one auto selector. QUIC handshakes and TLS certificates differ; a node that scores well as VMess may be mediocre as Hysteria2 because the provider oversubscribed UDP ingress. Use realistic test targets—an HTTPS endpoint in the region you actually browse—not a speedtest domain that lives on a different AS path.
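In practice that means one url-test group per protocol, each probing an endpoint you genuinely care about; the node names and probe URL below are placeholders for illustration.

```yaml
proxy-groups:
  - name: "HY2-AUTO"
    type: url-test
    proxies: ["hy2-sg", "hy2-jp"]
    url: "https://edge.example.com/generate_204"   # pick a host in your real browsing region
    interval: 300
    tolerance: 50
  - name: "TUIC-AUTO"
    type: url-test
    proxies: ["tuic-sg", "tuic-jp"]
    url: "https://edge.example.com/generate_204"
    interval: 300
    tolerance: 50
```

The tolerance value keeps each group from flapping between nodes whose latencies differ by only a few milliseconds.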
Finally, document your YAML. Teams that version-control Mihomo configs with comments about Brutal bandwidth hints and TUIC congestion presets avoid the “works on my laptop” trap when someone imports the same file on a 4G tether. Performance tuning is a shared asset, not a solo science project.
8 So which protocol is “faster”?
If you need a one-line answer: neither wins every scoreboard. TUIC v5 took the latency and small-request tests on clean paths in our March 2026 data set, while Hysteria2 claimed bulk throughput and impaired-link recovery when congestion control was aligned with reality. The spread was meaningful—double-digit percent in some loss scenarios—but never large enough to override a bad PoP choice or a misconfigured DNS stack.
Choose TUIC v5 when you prioritize interactive traffic, operate mostly on stable fiber, or want slightly lower CPU wakeups on battery-powered hardware. Choose Hysteria2 when you routinely cross congested last-mile links, can invest time calibrating Brutal targets, and need the highest sustained Mbps from a single UDP association. Many power users keep both outbounds in the same profile and let health checks move traffic when one degrades.
Whatever you pick, validate with your own traces monthly. ISPs retune shaping algorithms without announcement, and Mihomo’s release notes occasionally change QUIC defaults. Treat benchmarking as living documentation, not a trophy from a single afternoon.
9 Wrap-up
Hysteria2 and TUIC v5 both represent mature 2026 options for UDP-first tunnels inside Clash-compatible cores, but “faster” splits along workload and network condition axes. Our measurements favored TUIC v5 for median web latency on clean routes and Hysteria2 for throughput when loss and jitter entered the picture, provided bandwidth hints were honest. Client-side polish—TUN adoption, per-protocol health checks, and sober DNS design—mattered as much as the outbound keyword you type in YAML.
If you are assembling a new profile, start with the protocol that matches your worst typical path, not your best speedtest screenshot. Clash Verge Rev and other Mihomo front-ends make it straightforward to maintain parallel outbounds, log comparisons, and swap without reinstalling the OS stack. Grab a current build from our download page, import both node types, and rerun a short evening benchmark whenever your provider rotates infrastructure.
Compared with chasing mythical single-protocol supremacy, that iterative habit keeps your real-world latency lower—and that is the kind of fast users actually feel.