This should increase the maximum amount of inflight data and hopefully
improve the performance of Snowflake, especially for clients
geographically distant from proxies and the server.
Make a stack of cleanup functions to run (as with defer), but clear the
stack before returning if no error occurs.
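The pattern looks roughly like this (a minimal sketch with a
hypothetical openStream resource, not the actual handler code):

    package main

    import "fmt"

    type stream struct{}

    func (s *stream) Close() { fmt.Println("stream closed") }

    // openStream stands in for a fallible resource acquisition.
    func openStream() (*stream, error) { return &stream{}, nil }

    // run pushes cleanup functions onto a stack as resources are acquired.
    // The deferred loop runs them in reverse order, as defer would, but
    // only if the stack is still populated, i.e. only on an error return.
    func run() (err error) {
        var cleanup []func()
        defer func() {
            for i := len(cleanup) - 1; i >= 0; i-- {
                cleanup[i]()
            }
        }()

        s, err := openStream()
        if err != nil {
            return err
        }
        cleanup = append(cleanup, func() { s.Close() })

        // ... further fallible setup would go here ...

        // Success: clear the stack so the deferred loop does nothing.
        cleanup = nil
        return nil
    }

    func main() { fmt.Println(run()) }
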
Pushing the stream.Close() cleanup just before clearing the stack is
currently useless, but it is an intentional safeguard in case additional
operations are added before the return in the future.
Fixes #40042.
Run the snowflake collection ReconnectTimeout timer in parallel to the
negotiation with the broker. This way, if the broker takes a long time
to respond, the client doesn't have to wait the full timeout on top of
the negotiation before trying again.
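Sketch of the idea, with a hypothetical negotiate standing in for the
broker exchange and a made-up timeout value; the timer is started
before the negotiation rather than after it:

    package main

    import (
        "fmt"
        "time"
    )

    // negotiate stands in for the broker poll and SDP exchange.
    func negotiate() error {
        time.Sleep(2 * time.Second)
        return nil
    }

    func main() {
        const reconnectTimeout = 5 * time.Second // made-up value

        // Start the timer before negotiating so both run concurrently.
        timer := time.After(reconnectTimeout)

        if err := negotiate(); err != nil {
            fmt.Println("negotiation failed:", err)
            return
        }

        // Only the remainder of the timeout is left to wait, not the
        // whole timeout on top of however long the broker took.
        <-timer
        fmt.Println("ready to collect the next snowflake")
    }
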
Each SOCKS connection has its own set of snowflakes and broker poll
loop. Since the session manager was tied to a single set of snowflakes,
this resulted in a bug where RedialPacketConn would sometimes try to
pull snowflakes from a previously melted pool. The fix is to maintain
separate smux sessions for each SOCKS connection, tied to its own
snowflake pool.
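Schematically, with placeholder types standing in for the real
snowflake pool and smux session (names are illustrative only):

    package main

    // Placeholder types; only the per-connection structure matters here.
    type peers struct{}
    type session struct{ pool *peers }

    func newPeers() *peers             { return &peers{} }
    func newSession(p *peers) *session { return &session{pool: p} }
    func (p *peers) Melt()             {}

    // handleSOCKS gives every SOCKS connection its own pool and its own
    // smux session on top of that pool, so redials can never pull
    // snowflakes from a pool another connection has already melted.
    func handleSOCKS() {
        pool := newPeers()
        defer pool.Melt()

        sess := newSession(pool)
        _ = sess // ... proxy the SOCKS traffic over sess ...
    }

    func main() { handleSOCKS() }
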
This fixes a race condition in which snowflakes.End() is called while
snowflakes.Collect() is in progress resulting in a write to a closed
channel. We now wait for all in-progress collections to finish and add
an extra check before proceeding with a collection.
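The synchronization looks roughly like this (simplified sketch; the
field and method names are placeholders for the real pool type):

    package main

    import (
        "errors"
        "sync"
    )

    // peers is a simplified stand-in for the snowflake pool. snowflakeChan
    // is the channel that End closes and Collect sends on.
    type peers struct {
        snowflakeChan chan int

        collection sync.WaitGroup // in-progress Collect calls
        mu         sync.Mutex
        closed     bool
    }

    func (p *peers) Collect() error {
        p.mu.Lock()
        if p.closed {
            // Extra check: never start a collection after End was called.
            p.mu.Unlock()
            return errors.New("pool is closed")
        }
        p.collection.Add(1)
        p.mu.Unlock()
        defer p.collection.Done()

        // ... dial a new snowflake ...
        p.snowflakeChan <- 1 // safe: End waits for us before closing
        return nil
    }

    func (p *peers) End() {
        p.mu.Lock()
        p.closed = true
        p.mu.Unlock()
        // Wait for in-progress collections, then close the channel.
        p.collection.Wait()
        close(p.snowflakeChan)
    }

    func main() {
        p := &peers{snowflakeChan: make(chan int, 8)}
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() { defer wg.Done(); p.Collect() }()
        }
        wg.Wait()
        p.End()
    }
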
Bug #21314: maintains a separate snowflake connect loop per SOCKS
connection. This way, if Tor decides to stop using Snowflake, Snowflake
will stop using the client's network.
The underlying smux layer sends a keep-alive ping every 10 seconds. This
modification allows for one dropped or delayed ping before discarding
the snowflake.
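For illustration (the ping interval is smux's; the staleness threshold
here is a hypothetical value chosen to span more than two intervals):

    package main

    import (
        "fmt"
        "time"

        "github.com/xtaci/smux"
    )

    func main() {
        conf := smux.DefaultConfig()
        // smux sends a keep-alive ping every KeepAliveInterval.
        fmt.Println("ping interval:", conf.KeepAliveInterval)

        // A staleness threshold longer than two ping intervals tolerates
        // a single dropped or delayed ping. (Illustrative value only.)
        snowflakeTimeout := 30 * time.Second
        fmt.Println("tolerates one missed ping:",
            snowflakeTimeout > 2*conf.KeepAliveInterval)
    }
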
Now callers cannot call Write without there being a DataChannel to write
to. This lets us remove the internal buffer and checks for transport ==
nil.
Don't set internal fields like writePipe, transport, and pc to nil when
closing; just close them and let them return errors if further calls are
made on them.
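A small sketch of the pattern, using an io.Pipe writer in place of the
real fields:

    package main

    import (
        "fmt"
        "io"
    )

    // conn stands in for the WebRTC connection wrapper. Close closes its
    // members but leaves the fields set, so later calls simply get errors
    // from the already-closed objects instead of needing nil checks.
    type conn struct {
        writePipe *io.PipeWriter
    }

    func (c *conn) Close() error {
        return c.writePipe.Close() // close, don't set to nil
    }

    func (c *conn) Write(p []byte) (int, error) {
        return c.writePipe.Write(p) // no "if c.writePipe == nil" check
    }

    func main() {
        _, pw := io.Pipe()
        c := &conn{writePipe: pw}
        c.Close()
        _, err := c.Write([]byte("hello"))
        fmt.Println(err) // io: read/write on closed pipe
    }
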
There's now a constant DataChannelTimeout that's separate from
SnowflakeTimeout (the latter is what checkForStaleness uses). Now we can
set DataChannel timeout to a lower value, to quickly dispose of
unconnectable proxies, while still keeping the threshold for detecting
the failure of a once-working proxy at 30 seconds.
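Schematically (the 30-second value is the one mentioned above; the
lower DataChannelTimeout value here is only a placeholder):

    package main

    import (
        "fmt"
        "time"
    )

    const (
        // SnowflakeTimeout is the staleness threshold checkForStaleness
        // uses for a proxy that has already carried data.
        SnowflakeTimeout = 30 * time.Second

        // DataChannelTimeout bounds how long to wait for a newly matched
        // proxy's DataChannel to open. (Placeholder value.)
        DataChannelTimeout = 10 * time.Second
    )

    func main() {
        fmt.Println(DataChannelTimeout, "<", SnowflakeTimeout)
    }
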
https://bugs.torproject.org/33897
This allows multiple SOCKS connections to share the available proxies,
and in particular prevents a SOCKS connection from being starved of a
proxy when the maximum proxy capacity is less than the number of SOCKS
connections.
This is option 4 from https://bugs.torproject.org/33519.
The client opts into turbotunnel mode by sending a magic token at the
beginning of each WebSocket connection (before sending even the
ClientID). The token is just a random byte string I generated. The
server peeks at the token and, if it matches, uses turbotunnel mode.
Otherwise, it unreads the token and continues in the old
one-session-per-WebSocket mode.
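A sketch of the peek-and-dispatch idea (the token value and handler
names here are placeholders, not the real ones):

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "net"
    )

    // magicToken is a placeholder; the real token is a fixed random byte
    // string defined elsewhere.
    var magicToken = []byte("placeholder-token")

    func handleTurbotunnel(r *bufio.Reader, conn net.Conn) error {
        fmt.Println("turbotunnel mode")
        return nil
    }

    func handleOneSession(r *bufio.Reader, conn net.Conn) error {
        fmt.Println("one-session-per-WebSocket mode")
        return nil
    }

    // dispatch peeks at the first bytes of the connection. If they match
    // the magic token it consumes them and switches to turbotunnel mode;
    // otherwise the bytes stay unread and the old mode sees them intact.
    func dispatch(conn net.Conn) error {
        r := bufio.NewReader(conn)
        head, err := r.Peek(len(magicToken))
        if err == nil && bytes.Equal(head, magicToken) {
            r.Discard(len(magicToken))
            return handleTurbotunnel(r, conn)
        }
        return handleOneSession(r, conn)
    }

    func main() {
        client, server := net.Pipe()
        go func() {
            client.Write(magicToken)
            client.Close()
        }()
        dispatch(server)
    }
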
Formerly we waited until *both* directions finished. What this meant in
practice is that when the remote connection ended, copyLoop would become
useless but would continue blocking its caller until something else
finally closed the SOCKS connection.
The call was

    copyLoop(socks, snowflake)

but the function signature was

    func copyLoop(WebRTC, SOCKS io.ReadWriter) {

The mistake was mostly harmless, because both arguments were treated the
same, except that error logs would have reported the wrong direction.
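Putting both points together, a simplified copyLoop along these lines
returns as soon as either direction finishes and takes its arguments in
the order they are actually passed (sketch only, not the real client
code):

    package main

    import (
        "fmt"
        "io"
        "log"
        "net"
    )

    // copyLoop proxies between the SOCKS connection and the snowflake,
    // and returns as soon as either direction finishes instead of waiting
    // for both, so the caller can tear down the other side promptly.
    func copyLoop(socks, snowflake io.ReadWriter) {
        done := make(chan struct{}, 2)
        go func() {
            if _, err := io.Copy(snowflake, socks); err != nil {
                log.Printf("copying SOCKS to snowflake: %v", err)
            }
            done <- struct{}{}
        }()
        go func() {
            if _, err := io.Copy(socks, snowflake); err != nil {
                log.Printf("copying snowflake to SOCKS: %v", err)
            }
            done <- struct{}{}
        }()
        <-done // first direction to finish unblocks the caller
    }

    func main() {
        // Toy demonstration over in-memory pipes.
        socksA, socksB := net.Pipe()
        snowA, snowB := net.Pipe()
        go func() {
            socksA.Write([]byte("hello"))
            socksA.Close()
        }()
        go io.Copy(io.Discard, snowA)
        copyLoop(socksB, snowB)
        fmt.Println("copyLoop returned after one direction finished")
    }
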