WireGuard with obfuscation support #88
Thanks for starting this discussion and especially for providing an implementation. There have been a few threads in the past on circumvention/obfuscation for WireGuard (2016, 2018), but as far as I know they never went anywhere. A sample of running code is a great way to move the discussion forward. In my opinion, WireGuard with additional blocking resistance is a highly realizable goal with many paths to success—we don't need to wait for a perfect plan before starting to prototype implementations.

Some of the challenges of making WireGuard blocking-resistant are that it's based on UDP datagrams rather than TCP streams, and that it's usually implemented in the kernel. Existing circumvention systems tend to focus on TCP (though not exclusively), and are usually implemented in userspace.

Yegor Ievlev has posted a recipe showing how to interface kernel WireGuard with a userspace Shadowsocks (which does support UDP proxying). The client configures its WireGuard with an Endpoint that points at a local Shadowsocks tunnel, which relays the datagrams to the remote server.
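To make the shape of that recipe concrete, here is a rough client-side sketch. It is not Yegor's exact configuration: the server address, port, password, cipher, and interface name below are placeholders, and it assumes shadowsocks-libev's ss-tunnel for the UDP relay.

```
# Run a local Shadowsocks tunnel that relays UDP datagrams to the real
# WireGuard server (placeholder addresses and credentials).
ss-tunnel -s ss.example.com -p 8388 -k PASSWORD -m chacha20-ietf-poly1305 \
          -l 51821 -L 203.0.113.10:51820 -U &

# Point the kernel's WireGuard peer endpoint at the local tunnel port, so
# every WireGuard datagram is carried inside Shadowsocks.
wg set wg0 peer SERVER_PUBLIC_KEY endpoint 127.0.0.1:51821 allowed-ips 0.0.0.0/0
```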
Recent versions of the Pluggable Transports specification consider UDP proxying—see Section 1.5. I don't have enough personal experience with it to know how it works. In your thread on the WireGuard mailing list, Jason Donenfeld suggests an alternative to setting
Another consideration is what the protocol obfuscation should look like. I think your approach of making packets look random—analogous to obfs4 and Shadowsocks—is a great place to start. There's no strong theoretical basis for why such an approach should work, but in practice it has proven effective. You might consider expanding the random padding scheme to permit packets that are all padding. That will break the 1:1 correspondence between the unobfuscated and obfuscated packet streams.
Lately I have been working a lot with the Turbo Tunnel idea, the main claim of which is that circumvention tunnels should conceptually be a sequence of discrete packets, not a continuous data stream. (Even if those packets end up being encapsulated into a stream-like cover protocol.) The Turbo Tunnel transports I have implemented so far interact with user programs over a TCP interface (e.g. SOCKS). But I think it should be possible to adapt the idea slightly to proxy protocols, like WireGuard, that are natively packet-based.

One of the benefits of a Turbo Tunnel design is that it permits transmitting a userspace data stream over an obfuscated channel that is potentially unreliable or out of order. The inner session and reliability protocol (KCP or QUIC, for example) breaks the stream into packets and takes care of concerns like in-order delivery and retransmission (essentially implementing, in userspace, facilities that are normally provided by the kernel). But with WireGuard, there would be no need for a separate inner session and reliability layer. The packets come straight from the kernel, which will do its own session and reliability management.

Here's an example of the abstract procedure, with end-to-end stream delivery on the left and end-to-end packet delivery on the right. The packet procedure is actually simpler, because it can take advantage of kernel facilities rather than reimplementing them.
dnstt may not be the best target for such an adaptation, just because the capacity per DNS query is so small. (Though maybe it would be possible to reduce the MTU on the wireguard interface.) Though it's still in development, Champa would be a convincing demonstration of the idea, as it's a polling-based HTTP channel, optionally through an intermediary, quite unlike the native UDP of WireGuard. Probably it would be easiest to augment both client and server with a UDP listener and a UDP forwarding address: any UDP payload received by the listener is encapsulated into the tunnel, and any UDP payload received through the tunnel is forwarded to the configured address (with a source address of its own listening port). Remove the KCP and smux layer. The details of encapsulation and everything else may remain the same.
May I ask why this is needed? To convert TCP sessions into a series of UDP datagrams, so RST will no longer work?
The idea is not to wrap Shadowsocks in WireGuard, but to wrap WireGuard in Shadowsocks. Shadowsocks is not the important part per se—it's just an example of a successful blocking-resistant tunnel protocol. WireGuard provides nice features and security guarantees; Shadowsocks provides blocking resistance: put them together to get a blocking-resistant VPN protocol. Abstractly, the outer tunnel protocol could be TCP, UDP, or anything else. Yegor's post shows how to get the kernel's WireGuard packets into userspace so that an ordinary program can work on them, by setting the WireGuard Endpoint to a local port where the userspace proxy listens.

One advantage of doing obfuscation in a separate program is that you are not constrained to following the kernel's packet-sending schedule. You can delay a packet before sending it, or send extra "chaff" packets according to your own schedule, allowing you to shape the traffic profile however you need. I don't know if a Netfilter module or NFQUEUE permits that level of traffic modification. I don't mean to emphasize the traffic analysis point too much, though, because experience shows it's not yet necessary for effective circumvention (the per-packet content obfuscation you've implemented is certainly more important).
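For reference, here is a hedged sketch of the NFQUEUE route: diverting the kernel's outgoing WireGuard datagrams to a userspace program, which then decides when and in what form to reinject them (for example via libnetfilter_queue). The port number is a placeholder.

```
# Divert outgoing WireGuard datagrams (placeholder port 51820) to userspace
# queue 0; a program bound to that queue can delay, pad, or rewrite packets
# before reinjecting them.
iptables -A OUTPUT -p udp --dport 51820 -j NFQUEUE --queue-num 0
```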
I got this code working as follows. You need both the patched wireguard-linux-compat (I used commit 721242f0) and the patched wireguard-tools (I used bfc5f2d7), which knows about the altered device name. On both peers, I installed the kernel module and tools and generated keys.
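Key generation is presumably the standard wg workflow; the commands below assume the patched wireguard-tools keep the usual wg command name.

```
# On each peer: generate a keypair with the standard WireGuard tooling.
umask 077
wg genkey | tee private.key | wg pubkey > public.key
```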
Then set up each peer to refer to the other.
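A rough sketch of what that setup might look like, with placeholder interface names, addresses, and keys; the device type string may well differ, since the patched module registers an altered device name.

```
# Peer A (placeholder addresses and keys); mirror the configuration on peer B.
ip link add wg0 type wireguard          # the patched module may use another type name
ip addr add 10.9.0.1/24 dev wg0
wg set wg0 listen-port 51820 private-key ./private.key
wg set wg0 peer PEER_B_PUBLIC_KEY endpoint 192.0.2.2:51820 allowed-ips 10.9.0.2/32
ip link set wg0 up
```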
You can verify that the wireguard_obf module was loaded. Then try pinging one peer from the other:
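A sketch of those checks, assuming the patched tools keep the usual command names and using the placeholder tunnel addresses from above:

```
lsmod | grep wireguard      # the obfuscated module should show up as wireguard_obf
wg show                     # peer, endpoint, and latest-handshake state
ping -c 3 10.9.0.2          # the other peer's tunnel address
```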
In a packet capture, the obfuscated UDP payloads look uniformly random. Compare to non-obfuscated WireGuard, where the fixed header fields (a message type byte followed by three reserved zero bytes) are visible at the start of each packet.
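A capture of this kind can be taken on the underlying (non-tunnel) interface, for example:

```
# Hex-dump WireGuard's UDP traffic on the physical interface (placeholder
# interface name and port; adjust to the configured listen-port).
tcpdump -n -X -i eth0 udp port 51820
```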
I believe there are several additional topics to consider regarding the obfuscation of WireGuard connections.
Thanks for sharing this, Xiaokang Wang. Actually, I agree with all your points; here are my 2 cents:
eBPF seems useful for adding a small amount of varying custom obfuscation, and would also be a big improvement in usability. Though last I looked, it was not very easy to mutate packets in this way, if not entirely infeasible.
Yes, it is not very easy to implement a proxy this way in its current state; additional support and rework are required on the kernel side to make it work without significant effort and workarounds.
@el3xyz Thanks for your great work; this has helped me a lot. I just have a few questions:
I've built DKMS modules and renamed tools for Debian and Arch Linux. The tools are named nwg and nwg-quick. Configuration files can be placed in /etc/notwireguard. systemd units also work. https://github.com/dereference23/notwireguard-linux-compat/releases
Those interested in userspace WireGuard proxies can take a look at #117. |
In September 2022, an implementation of the Netfilter module idea was posted to the WireGuard mailing list: "Iptables WireGuard obfuscation extension".
Hi @el3xyz. I'm playing around with your patches and want to say thanks for the work done. In the first case you're passing the obfuscator (key) as part of the handshake initiation message. Both cases look pretty much the same to me, but could you share a bit of your motivation?
Hey all,
Thanks to David Fifield for the invitation to this forum.
WireGuard is known to be one of the most secure and fastest (due to its kernel-space implementation) VPN protocols. Unfortunately, it's quite easily tracked and blocked by DPI due to the following issues:
I've added some obfuscation support to make WG detection slightly more difficult:
Code can be found here: https://github.com/el3xyz/wireguard-linux-compat
However, this approach is sensitive to detection based on statistical modelling, and I'm looking for ways to improve it. One problem is that all traffic going to a single IP (or a few IPs) is easily detected, but this should be addressed by split tunneling. What other issues are there?
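As a rough illustration of the split-tunneling idea in WireGuard terms: it amounts to restricting AllowedIPs to the destinations that should traverse the tunnel, instead of 0.0.0.0/0 (the networks and key below are placeholders).

```
# Only these placeholder networks are routed through the tunnel; the rest of
# the host's traffic keeps its normal route, so the obfuscated endpoint no
# longer carries everything.
wg set wg0 peer PEER_PUBLIC_KEY allowed-ips 198.51.100.0/24,203.0.113.0/24
```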
Cheers