This should increase the maximum amount of in-flight data and, hopefully,
improve the performance of Snowflake, especially for clients
geographically distant from proxies and the server.
Makes BrokerChannel abstract over a rendezvousMethod. BrokerChannel
itself is responsible for keepLocalAddresses and the NAT type state, as
well as encoding and decoding client poll messages. rendezvousMethod is
only responsible for delivery of encoded messages.
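A minimal sketch of the shape of that split (the interface name and
method signature are assumptions for illustration, not the exact ones in
the tree):

    // rendezvousMethod only moves opaque bytes to the broker and back.
    type rendezvousMethod interface {
        Exchange(encodedPollRequest []byte) (encodedResponse []byte, err error)
    }

    // BrokerChannel owns the client-side state and the message encoding,
    // and delegates delivery to whichever rendezvousMethod it was built with.
    type BrokerChannel struct {
        rendezvous         rendezvousMethod
        keepLocalAddresses bool
        natType            string
    }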
Formerly, BrokerChannel represented the broker URL and possible domain
fronting as
    bc.url  *url.URL
    bc.Host string
That is, bc.url is the URL of the server which we contact directly, and
bc.Host is the Host header to use in the request. With no domain
fronting, bc.url points directly at the broker itself, and bc.Host is
blank. With domain fronting, we do the following reshuffling:
    if front != "" {
        bc.Host = bc.url.Host
        bc.url.Host = front
    }
That is, we alter bc.url to reflect that the server to which we send
requests directly is the CDN, not the broker, and store the broker's own
URL in the HTTP Host header.
The above representation was always confusing to me, because in my
mental model, we are always conceptually communicating with the broker;
but we may optionally be using a CDN proxy in the middle. The new
representation is
    bc.url   *url.URL
    bc.front string
bc.url is the URL of the broker itself, and never changes. bc.front is
the optional CDN front domain, and likewise never changes after
initialization. When domain fronting is in use, we do the swap in the
http.Request struct, not in BrokerChannel itself:
    if bc.front != "" {
        request.Host = request.URL.Host
        request.URL.Host = bc.front
    }
Compare to the representation in meek-client:
https://gitweb.torproject.org/pluggable-transports/meek.git/tree/meek-client/meek-client.go?h=v0.35.0#n94
    var options struct {
        URL   string
        Front string
    }
https://gitweb.torproject.org/pluggable-transports/meek.git/tree/meek-client/meek-client.go?h=v0.35.0#n308
    if ok { // if front is set
        info.Host = info.URL.Host
        info.URL.Host = front
    }
The tests were using a broker URL of "test.broker" (i.e., a scheme-less,
host-less, relative path) and running assertions on the value of
b.url.Path. This is strange, especially in tests regarding domain
fronting, where we care about b.url.Host, not b.url.Path. This commit
changes the broker URL to "http://test.broker" and changes the tests to
check b.url.Host. I also added an assertion for an empty b.Host in the
non-domain-fronted case.
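For illustration, the non-domain-fronted case now looks roughly like this
(the constructor signature is an assumption):

    func TestBrokerChannelNoFronting(t *testing.T) {
        // Hypothetical constructor arguments, for illustration only.
        b, err := NewBrokerChannel("http://test.broker", "", nil, false)
        if err != nil {
            t.Fatal(err)
        }
        if b.url.Host != "test.broker" {
            t.Errorf("unexpected broker host %q", b.url.Host)
        }
        // No domain fronting: the Host header override stays empty.
        if b.Host != "" {
            t.Errorf("expected empty Host, got %q", b.Host)
        }
    }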
We used a WaitGroup to prevent a call to Peers.End from melting
snowflakes while a new one is being collected. However, calls to
WaitGroup.Add are in a race with WaitGroup.Wait. To fix this, we use a
Mutex instead.
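A minimal sketch of the shape of the fix, with assumed field and method
names:

    type Peers struct {
        collectLock sync.Mutex // serializes Collect against End
        melted      bool
        // ...
    }

    func (p *Peers) Collect() error {
        p.collectLock.Lock()
        defer p.collectLock.Unlock()
        if p.melted {
            return errors.New("pool has already been melted")
        }
        // ... collect a new snowflake ...
        return nil
    }

    func (p *Peers) End() {
        p.collectLock.Lock()
        defer p.collectLock.Unlock()
        p.melted = true
        // ... close all current snowflakes ...
    }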
The race condition occurs because concurrent goroutines are intermixing
reads and writes of `WebRTCPeer.closed`.
Spotted when integrating Snowflake inside OONI in
https://github.com/ooni/probe-cli/pull/373.
The race condition occurs because concurrent goroutines are
intermixing reads and writes of `WebRTCPeer.lastReceive`.
Spotted when integrating Snowflake inside OONI in
https://github.com/ooni/probe-cli/pull/373.
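Both fixes follow the same pattern: guard the shared field with a mutex
and touch it only through small accessors. A sketch with assumed names
for the lock and accessors:

    type WebRTCPeer struct {
        mu          sync.Mutex
        closed      bool
        lastReceive time.Time
        // ...
    }

    func (c *WebRTCPeer) Closed() bool {
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.closed
    }

    func (c *WebRTCPeer) touch() {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.lastReceive = time.Now()
    }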
Send the client poll request and response in a JSON-encoded format in
the HTTP request body, rather than in HTTP headers. This will pave the
way for using domain-fronting alternatives for the Snowflake rendezvous.
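A sketch of what the JSON-encoded exchange might look like; the field
names and exact wire format here are illustrative assumptions, not the
actual messages package:

    // Hypothetical message shapes, carried in the POST body instead of headers.
    type ClientPollRequest struct {
        Offer string `json:"offer"` // SDP offer from the client
        NAT   string `json:"nat"`   // "unknown", "restricted", or "unrestricted"
    }

    type ClientPollResponse struct {
        Answer string `json:"answer"`          // SDP answer from the matched proxy
        Error  string `json:"error,omitempty"` // set when no proxy was available
    }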
Make a stack of cleanup functions to run (as with defer), but clear the
stack before returning if no error occurs.
Uselessly pushing the stream.Close() cleanup just before clearing the
stack is an intentional safeguard, in case additional operations are
added before the return in the future.
Fixes #40042.
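A minimal sketch of the pattern described above (the surrounding
function and its steps are invented for illustration):

    func handle(stream net.Conn) (err error) {
        var cleanup []func()
        defer func() {
            // Run any remaining cleanup in reverse order, like defer.
            for i := len(cleanup) - 1; i >= 0; i-- {
                cleanup[i]()
            }
        }()

        // ... setup steps, each pushing its own undo action ...
        cleanup = append(cleanup, func() { stream.Close() })

        // Success: clear the stack so nothing gets torn down.
        cleanup = nil
        return nil
    }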
This update required two main changes to how we use the library. First,
we had to make sure we created the DataChannel on the offering peer side
before creating the offer. Second, we had to make sure we waited for the
gathering of all candidates to complete, since trickle ICE is enabled by
default. See the release notes for more details:
https://github.com/pion/webrtc/wiki/Release-WebRTC@v3.0.0.
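A minimal sketch of the new order of operations against the pion/webrtc
v3 API (the function name and DataChannel label are placeholders):

    func newGatheredOffer() (*webrtc.SessionDescription, error) {
        pc, err := webrtc.NewPeerConnection(webrtc.Configuration{})
        if err != nil {
            return nil, err
        }
        // 1. Create the DataChannel before creating the offer, so the
        //    offer actually includes it.
        if _, err := pc.CreateDataChannel("snowflake", nil); err != nil {
            return nil, err
        }
        offer, err := pc.CreateOffer(nil)
        if err != nil {
            return nil, err
        }
        // 2. Trickle ICE is on by default in v3; wait for candidate
        //    gathering to finish so the local description is complete.
        done := webrtc.GatheringCompletePromise(pc)
        if err := pc.SetLocalDescription(offer); err != nil {
            return nil, err
        }
        <-done
        return pc.LocalDescription(), nil
    }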
Run the snowflake collection ReconnectTimeout timer in parallel to the
negotiation with the broker. This way, if the broker takes a long time
to respond, the client doesn't have to wait the full timeout on top of
the broker's response time.
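The idea in miniature, with placeholder names for the negotiation step:

    // Start the reconnect timer before talking to the broker, so a slow
    // broker response counts against the timeout instead of adding to it.
    timer := time.NewTimer(ReconnectTimeout)
    collectSnowflake() // placeholder for the broker negotiation
    // ... use the snowflake ...
    <-timer.C // wait out only whatever is left of ReconnectTimeout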
Each SOCKS connection has its own set of snowflakes and broker poll
loop. Since the session manager was tied to a single set of snowflakes,
this resulted in a bug where RedialPacketConn would sometimes try to
pull snowflakes from a previously melted pool. The fix is to maintain
separate smux sessions for each SOCKS connection, tied to its own
snowflake pool.
Instead of continuously polling the broker until the client receives a
snowflake, fall back to the Connect() loop and try again to collect more
peers after ReconnectTimeout.
This fixes a race condition in which snowflakes.End() is called while
snowflakes.Collect() is in progress resulting in a write to a closed
channel. We now wait for all in-progress collections to finish and add
an extra check before proceeding with a collection.
Bug #21314: maintains a separate snowflake connect loop per SOCKS
connection. This way, if Tor decides to stop using Snowflake, Snowflake
will stop using the client's network.
The client and proxy use the net/http default transport to make
round-trip connections to the broker. By default, these don't time out
and can wait indefinitely for the broker to respond if the broker hangs
and doesn't terminate the connection.
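One way to bound those round trips is to give the broker HTTP client
explicit timeouts instead of relying on http.DefaultTransport; the
values below are illustrative only:

    client := &http.Client{
        Timeout: 30 * time.Second, // cap the whole request/response
        Transport: &http.Transport{
            DialContext: (&net.Dialer{
                Timeout: 30 * time.Second,
            }).DialContext,
            TLSHandshakeTimeout:   10 * time.Second,
            ResponseHeaderTimeout: 30 * time.Second,
        },
    }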
Snowflake clients will now attempt NAT discovery using the provided STUN
servers and report their NAT type to the Snowflake broker for matching.
The three possibilities for NAT types are:
- unknown (the client was unable to determine their NAT type),
- restricted (the client has a restrictive NAT and can only be paired
with unrestricted NATs),
- unrestricted (the client can be paired with any other NAT).
The underlying smux layer sends a keep-alive ping every 10 seconds. This
modification allows for one dropped or delayed ping before discarding
the snowflake.
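Roughly, the staleness threshold only needs to exceed two keep-alive
intervals to tolerate a single missed ping; a sketch of the arithmetic
(the constant name and 30-second value are assumptions consistent with
the staleness threshold mentioned further down):

    // smux pings every 10 s. A 30 s threshold survives one dropped or
    // delayed ping:
    //
    //   t = 0 s   data/ping received
    //   t = 10 s  ping dropped or delayed -> still within threshold
    //   t = 20 s  next ping arrives       -> snowflake kept
    //   t = 30 s  still nothing           -> snowflake discarded
    const SnowflakeTimeout = 30 * time.Second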
It was sticking out in the context of other log messages:
    2020/04/30 22:39:10 WebRTC: DataChannel created.
    2020/04/30 22:39:20 establishDataChannel: timeout waiting for DataChannel.OnOpen
    2020/04/30 22:39:20 WebRTC: closing PeerConnection
    2020/04/30 22:39:20 WebRTC: Closing
    2020/04/30 22:39:20 WebRTC: WebRTC: Could not establish DataChannel Retrying in 10s...
Now callers cannot call Write without there being a DataChannel to write
to. This lets us remove the internal buffer and checks for transport ==
nil.
Don't set internal fields like writePipe, transport, and pc to nil when
closing; just close them and let them return errors if further calls are
made on them.
There's now a constant DataChannelTimeout that's separate from
SnowflakeTimeout (the latter is what checkForStaleness uses). Now we can
set the DataChannel timeout to a lower value, to quickly dispose of
unconnectable proxies, while still keeping the threshold for detecting
the failure of a once-working proxy at 30 seconds.
https://bugs.torproject.org/33897
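A sketch of how the shorter timeout might be applied while waiting for
the channel to open (the helper, its names, and the 10-second value are
assumptions; the log excerpt above shows the corresponding error
message):

    const DataChannelTimeout = 10 * time.Second // illustrative value

    func waitForOpen(openedC <-chan struct{}) error {
        select {
        case <-openedC:
            return nil
        case <-time.After(DataChannelTimeout):
            return errors.New("timeout waiting for DataChannel.OnOpen")
        }
    }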
Formerly, preparePeerConnection set up a callback that sent into a
channel, and exchangeSDP waited until it could receive from the channel.
We can move the channel entirely into preparePeerConnection (having it
not return until the callback has been called) and that way remove some
shared state.
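The structural pattern, in miniature, using ICE gathering completion as
a stand-in for whichever callback is actually involved:

    func preparePeerConnection() (*webrtc.PeerConnection, error) {
        pc, err := webrtc.NewPeerConnection(webrtc.Configuration{})
        if err != nil {
            return nil, err
        }
        // The channel now lives entirely inside this function; nothing
        // is shared with exchangeSDP.
        done := make(chan struct{})
        pc.OnICEGatheringStateChange(func(state webrtc.ICEGathererState) {
            if state == webrtc.ICEGathererStateComplete {
                close(done)
            }
        })
        // ... create the data channel and offer, SetLocalDescription ...
        <-done // don't return until the callback has fired
        return pc, nil
    }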
Do it as a side effect of NewWebRTCPeer.
Remove WebRTCPeer tests as they currently require invasively modifying
internal fields at different stages of construction.
The other interfaces in client/lib/interfaces.go exist only to support
tests, not to run Snowflake. The existing code would not have worked
with other types anyway, because it does unchecked .(*WebRTCPeer) type
assertions.
A short write will result in a non-nil error. It's an io.PipeWriter
anyway, which blocks until all the data has been read or until the read
end is closed, in which case it returns io.ErrClosedPipe (or some other
error).