We currently don't sort the snowflake-ips metrics:
snowflake-ips CA=1,DE=1,AR=1,NL=1,FR=1,GB=2,US=4,CH=1
To facilitate eyeballing our metrics, this patch sorts snowflake-ips by
value. If two values are identical, we sort lexicographically by country
code, i.e.:
snowflake-ips US=4,GB=2,AR=1,CA=1,CH=1,DE=1,FR=1,NL=1
This patch fixes tpo/anti-censorship/pluggable-transports/snowflake#40011
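For illustration, here is a minimal Go sketch of that ordering; the map
and helper names are hypothetical, not the actual metrics code:

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // sortedCountryStats orders country counts by descending value,
    // breaking ties by ascending country code.
    func sortedCountryStats(counts map[string]int) string {
        countries := make([]string, 0, len(counts))
        for cc := range counts {
            countries = append(countries, cc)
        }
        sort.Slice(countries, func(i, j int) bool {
            if counts[countries[i]] != counts[countries[j]] {
                return counts[countries[i]] > counts[countries[j]]
            }
            return countries[i] < countries[j]
        })
        parts := make([]string, len(countries))
        for i, cc := range countries {
            parts[i] = fmt.Sprintf("%s=%d", cc, counts[cc])
        }
        return strings.Join(parts, ",")
    }

    func main() {
        fmt.Println("snowflake-ips " + sortedCountryStats(map[string]int{
            "CA": 1, "DE": 1, "AR": 1, "NL": 1, "FR": 1, "GB": 2, "US": 4, "CH": 1,
        }))
        // Output: snowflake-ips US=4,GB=2,AR=1,CA=1,CH=1,DE=1,FR=1,NL=1
    }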
Instead of continuously polling the broker until the client receives a
snowflake, fall back to the Connect() loop and try again to collect more
peers after ReconnectTimeout.
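A sketch of that fallback's shape, with ReconnectTimeout's value and the
other names standing in for the client's real fields:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ReconnectTimeout bounds how long we wait before giving up and
    // letting the Connect() loop try again; the value is illustrative.
    const ReconnectTimeout = 10 * time.Second

    func awaitSnowflake(snowflakes <-chan string) (string, error) {
        select {
        case s := <-snowflakes:
            return s, nil
        case <-time.After(ReconnectTimeout):
            // Fall back to the Connect() loop instead of polling the
            // broker forever.
            return "", errors.New("timed out waiting for a snowflake")
        }
    }

    func main() {
        ch := make(chan string, 1)
        ch <- "peer-1"
        fmt.Println(awaitSnowflake(ch))
    }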
Rather than having standalone proxies determine their NAT type by
conducting the NAT behaviour checks in RFC 5780, use the remote probe
service instead.
The easiest way to set up the probe server behind a symmetric NAT is to
deploy it as a Docker container and alter the iptables rules for the
Docker network subnet that the container runs in.
We expect one of these at the end of just about every proxy session, as
the Conns in both directions are closed as soon as the copy loop
finishes in one direction.
Closes #40016.
This fixes a race condition in which snowflakes.End() is called while
snowflakes.Collect() is in progress, resulting in a write to a closed
channel. We now wait for all in-progress collections to finish and add
an extra check before proceeding with a collection.
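A minimal sketch of the synchronization idea, with hypothetical names
(the real code differs in detail):

    package main

    import "sync"

    // WebRTCPeer stands in for the real peer type.
    type WebRTCPeer struct{}

    type Peers struct {
        mu         sync.Mutex
        wg         sync.WaitGroup
        closed     bool
        snowflakes chan *WebRTCPeer
    }

    // Collect registers itself before doing any work and refuses to
    // start once End has been called, so no send can race with close.
    func (p *Peers) Collect() {
        p.mu.Lock()
        if p.closed {
            p.mu.Unlock()
            return
        }
        p.wg.Add(1)
        p.mu.Unlock()
        defer p.wg.Done()
        p.snowflakes <- &WebRTCPeer{} // hand off the collected peer
    }

    // End waits for in-progress collections before closing the channel.
    func (p *Peers) End() {
        p.mu.Lock()
        p.closed = true
        p.mu.Unlock()
        p.wg.Wait()
        close(p.snowflakes)
    }

    func main() {
        p := &Peers{snowflakes: make(chan *WebRTCPeer, 1)}
        p.Collect()
        <-p.snowflakes
        p.End()
    }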
Bug #21314: maintain a separate snowflake connect loop per SOCKS
connection. This way, if Tor decides to stop using Snowflake, Snowflake
will stop using the client's network.
As we now partition proxies by NAT type, our stats are more useful if they
capture how many proxies of each type we have, and information on
whether we have enough proxies of the right NAT type for our clients.
This change adds proxy counts by NAT type and binned counts of denied clients by NAT type.
The client and proxy use the net/http default transport to make
round-trip connections to the broker. By default, these don't time out and can
wait indefinitely for the broker to respond if the broker hangs and
doesn't terminate the connection.
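A minimal sketch of the fix's idea, using a plain http.Client; the
broker URL and the 30-second deadline here are illustrative:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Unlike http.DefaultClient, this client abandons the round
        // trip if the broker hangs without terminating the connection.
        client := &http.Client{Timeout: 30 * time.Second}
        resp, err := client.Get("https://snowflake-broker.example/proxy")
        if err != nil {
            fmt.Println("broker request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("broker status:", resp.Status)
    }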
This will allow browser-based proxies that are unable to determine their
NAT type to conservatively label themselves as restricted NATs if they
fail to work with clients that have restricted NATs.
Now when proxies poll, they provide their NAT type to the broker. This
introduces a new snowflake heap of just restricted snowflakes that the
broker can pull from if the client has a known, unrestricted NAT. All
other clients will pull from a heap of snowflakes with unrestricted or
unknown NAT types.
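A sketch of that matching rule; all identifiers here are illustrative
rather than the broker's actual names:

    package main

    import "fmt"

    // SnowflakeHeap stands in for the broker's priority queue of
    // polling proxies; only its size matters for this sketch.
    type SnowflakeHeap struct{ proxies []string }

    func (h *SnowflakeHeap) Len() int { return len(h.proxies) }

    type Broker struct {
        restrictedHeap   *SnowflakeHeap // proxies with restricted NATs
        unrestrictedHeap *SnowflakeHeap // unrestricted or unknown NATs
    }

    // heapFor picks the heap to pull a proxy from: a client with a
    // known, unrestricted NAT preferentially drains the restricted
    // heap, saving widely compatible proxies for clients that need them.
    func (b *Broker) heapFor(clientNAT string) *SnowflakeHeap {
        if clientNAT == "unrestricted" && b.restrictedHeap.Len() > 0 {
            return b.restrictedHeap
        }
        return b.unrestrictedHeap
    }

    func main() {
        b := &Broker{
            restrictedHeap:   &SnowflakeHeap{proxies: []string{"proxyA"}},
            unrestrictedHeap: &SnowflakeHeap{proxies: []string{"proxyB"}},
        }
        fmt.Println(b.heapFor("unrestricted").proxies) // [proxyA]
        fmt.Println(b.heapFor("restricted").proxies)   // [proxyB]
    }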
Snowflake clients will now attempt NAT discovery using the provided STUN
servers and report their NAT type to the Snowflake broker for matching.
The three possibilities for NAT types are:
- unknown (the client was unable to determine their NAT type),
- restricted (the client has a restrictive NAT and can only be paired
with unrestricted NATs),
- unrestricted (the client can be paired with any other NAT).
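One plausible way to encode this taxonomy and the pairing rule as Go
constants; the names are assumed, not quoted from the tree:

    package main

    import "fmt"

    const (
        NATUnknown      = "unknown"      // could not determine NAT type
        NATRestricted   = "restricted"   // needs an unrestricted partner
        NATUnrestricted = "unrestricted" // pairs with any NAT type
    )

    // compatible reports whether a client/proxy pairing can work: an
    // unrestricted client takes anything, while restricted and unknown
    // clients need a proxy not known to be restricted.
    func compatible(client, proxy string) bool {
        if client == NATUnrestricted {
            return true
        }
        return proxy != NATRestricted
    }

    func main() {
        fmt.Println(compatible(NATRestricted, NATUnrestricted)) // true
        fmt.Println(compatible(NATRestricted, NATRestricted))   // false
    }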
The underlying smux layer sends a keep-alive ping every 10 seconds. This
modification will allow for one dropped/delayed ping before discarding
the snowflake.
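A sketch assuming smux's stock knobs: with pings every 10 seconds, a
timeout above two intervals tolerates one lost or delayed ping before
the session is torn down. The 30-second value is illustrative:

    package main

    import (
        "time"

        "github.com/xtaci/smux"
    )

    func main() {
        conf := smux.DefaultConfig()
        conf.KeepAliveInterval = 10 * time.Second
        // A ping lost at t=10s is forgiven if the t=20s ping arrives
        // before the 30s deadline; two misses in a row still kill the
        // session (and with it, the snowflake).
        conf.KeepAliveTimeout = 30 * time.Second
        // Pass conf to smux.Client(conn, conf) or smux.Server(conn, conf).
        _ = conf
    }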
It was sticking out in the context of other log messages.
2020/04/30 22:39:10 WebRTC: DataChannel created.
2020/04/30 22:39:20 establishDataChannel: timeout waiting for DataChannel.OnOpen
2020/04/30 22:39:20 WebRTC: closing PeerConnection
2020/04/30 22:39:20 WebRTC: Closing
2020/04/30 22:39:20 WebRTC: WebRTC: Could not establish DataChannel Retrying in 10s...
Now callers cannot call Write without there being a DataChannel to write
to. This lets us remove the internal buffer and checks for transport ==
nil.
Don't set internal fields like writePipe, transport, and pc to nil when
closing; just close them and let them return errors if further calls are
made on them.
There's now a constant DataChannelTimeout that's separate from
SnowflakeTimeout (the latter is what checkForStaleness uses). This lets
us set the DataChannel timeout to a lower value, to quickly dispose of
unconnectable proxies, while still keeping the threshold for detecting
the failure of a once-working proxy at 30 seconds.
https://bugs.torproject.org/33897
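Illustratively, the two thresholds might be declared side by side; the
10-second figure is an assumption, and only the 30-second staleness
threshold is stated above:

    package main

    import (
        "fmt"
        "time"
    )

    const (
        // Give up quickly on a DataChannel that never opens.
        DataChannelTimeout = 10 * time.Second // assumed value
        // Staleness threshold checkForStaleness uses for a
        // once-working proxy.
        SnowflakeTimeout = 30 * time.Second
    )

    func main() {
        fmt.Println(DataChannelTimeout, SnowflakeTimeout)
    }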
Formerly, preparePeerConnection set up a callback that sent into a
channel, and exchangeSDP waited until it could receive from the channel.
We can move the channel entirely into preparePeerConnection (having it
not return until the callback has been called) and that way remove some
shared state.
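A compact sketch of the refactored shape, with registerCallback standing
in for the WebRTC event hookup and all names hypothetical:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // registerCallback stands in for wiring up a WebRTC handler; here
    // it simply fires after a short delay.
    func registerCallback(f func()) {
        go func() {
            time.Sleep(100 * time.Millisecond)
            f()
        }()
    }

    // preparePeerConnection owns the signaling channel itself and does
    // not return until the callback has fired (or a timeout elapses),
    // so no channel has to be shared with a separate exchangeSDP step.
    func preparePeerConnection() error {
        done := make(chan struct{})
        registerCallback(func() { close(done) })
        select {
        case <-done:
            return nil
        case <-time.After(time.Second):
            return errors.New("timeout waiting for callback")
        }
    }

    func main() {
        fmt.Println(preparePeerConnection())
    }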
Do it as a side effect of NewWebRTCPeer.
Remove WebRTCPeer tests as they currently require invasively modifying
internal fields at different stages of construction.
The other interfaces in client/lib/interfaces.go exist for the purpose
of running tests, not for running Snowflake itself. Existing code would
not have worked
with other types anyway, because it does unchecked .(*WebRTCPeer)
conversions.
A short write will result in a non-nil error. It's an io.PipeWriter
anyway, which blocks until all the data has been read or the read end is
closed, in which case it returns io.ErrClosedPipe if not some other
error.
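A small demonstration of those io.Pipe semantics:

    package main

    import (
        "fmt"
        "io"
    )

    func main() {
        r, w := io.Pipe()

        go func() {
            buf := make([]byte, 5)
            io.ReadFull(r, buf) // consume the whole first write
            fmt.Printf("read %q\n", buf)
            r.Close() // then close the read end
        }()

        // Write blocks until the reader has consumed every byte, so a
        // nil error implies the full count was written; there is no
        // silent short write.
        n, err := w.Write([]byte("hello"))
        fmt.Println(n, err) // 5 <nil>

        // With the read end closed, Write fails with io.ErrClosedPipe.
        n, err = w.Write([]byte("world"))
        fmt.Println(n, err, err == io.ErrClosedPipe)
    }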