Do it as a side effect of NewWebRTCPeer.
Remove WebRTCPeer tests as they currently require invasively modifying
internal fields at different stages of construction.
The other interfaces in client/lib/interfaces.go exist only for the purpose
of running tests, not for running Snowflake itself. Existing code would not
have worked with other types anyway, because it does unchecked
.(*WebRTCPeer) type assertions.
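For illustration, this is the shape of the problem with an unchecked type
assertion (a self-contained toy; the interface and type names are stand-ins,
not the actual client code):

    package main

    // Toy illustration: code that accepts an interface value but then
    // asserts the concrete type, so other implementations were never
    // really usable.

    type Peer interface {
        Close() error
    }

    type WebRTCPeer struct{}

    func (p *WebRTCPeer) Close() error { return nil }

    type otherPeer struct{}

    func (p *otherPeer) Close() error { return nil }

    func use(p Peer) {
        // Unchecked assertion: panics at runtime if p is not a *WebRTCPeer.
        w := p.(*WebRTCPeer)
        _ = w.Close()
    }

    func main() {
        use(&WebRTCPeer{}) // fine
        use(&otherPeer{})  // panics: interface conversion
    }
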
A short write will result in a non-nil error. It's an io.PipeWriter
anyway, which blocks until all the data has been read or the read end is
closed, in which case it returns io.ErrClosedPipe if not some other
error.
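A minimal standalone demonstration of the io.Pipe behavior being relied on
(standard library only, nothing Snowflake-specific):

    package main

    import (
        "fmt"
        "io"
    )

    func main() {
        pr, pw := io.Pipe()
        done := make(chan struct{})

        // The write blocks until the read side has consumed all the data
        // (or the read end is closed).
        go func() {
            n, err := pw.Write([]byte("hello"))
            fmt.Println(n, err) // 5 <nil>, once the reader has read it all
            close(done)
        }()

        buf := make([]byte, 5)
        io.ReadFull(pr, buf)
        <-done

        // After the read end is closed, a write returns a non-nil error
        // (io.ErrClosedPipe) rather than silently doing a short write.
        pr.Close()
        n, err := pw.Write([]byte("more"))
        fmt.Println(n, err) // 0 io: read/write on closed pipe
    }
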
This allows multiple SOCKS connections to share the available proxies,
and in particular prevents a SOCKS connection from being starved of a
proxy when the maximum proxy capacity is less than the number of SOCKS
connections.
This is option 4 from https://bugs.torproject.org/33519.
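A toy sketch of the sharing idea (hypothetical, not the real client code):
every "SOCKS connection" takes the next available proxy from one shared
channel, instead of collecting proxies for itself.

    package main

    import (
        "fmt"
        "sync"
    )

    type proxy struct{ id int }

    func main() {
        available := make(chan proxy) // one shared source of proxies

        // Pretend proxies become available one at a time (capacity 1).
        go func() {
            for i := 0; ; i++ {
                available <- proxy{id: i}
            }
        }()

        var wg sync.WaitGroup
        for c := 0; c < 3; c++ { // more "SOCKS connections" than capacity
            wg.Add(1)
            go func(c int) {
                defer wg.Done()
                p := <-available // whoever is waiting gets the next proxy
                fmt.Printf("SOCKS connection %d got proxy %d\n", c, p.id)
            }(c)
        }
        wg.Wait()
    }
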
The difficulty here is that the whole point of turbotunnel sessions is
that they are not necessarily tied to a single WebSocket connection, nor
even a single client IP address. We use a heuristic: whenever a
WebSocket connection starts that has a new ClientID, we store a mapping
from that ClientID to the IP address attached to the WebSocket
connection in a lookup table. Later, when enough packets have arrived to
establish a turbotunnel session, we recover the ClientID associated with
the session (which kcp-go has stored in the RemoteAddr field), and look
it up in the table to get an IP address. We introduce a new data type,
clientIDMap, to store the clientID-to-IP mapping during the short time
between when a WebSocket connection starts and handleSession receives a
fully fledged KCP session.
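A minimal sketch of what such a clientIDMap might look like (the method
names, the stand-in ClientID type, and the lack of any eviction policy here
are assumptions, not the real server code):

    package example

    import (
        "net"
        "sync"
    )

    // ClientID is the persistent identifier a client sends at the start of
    // each WebSocket connection (stand-in definition for this sketch).
    type ClientID [8]byte

    // clientIDMap remembers the IP address of the WebSocket connection that
    // most recently carried each ClientID, covering the short window between
    // a WebSocket connection starting and handleSession receiving a fully
    // fledged KCP session.
    type clientIDMap struct {
        lock    sync.Mutex
        entries map[ClientID]net.IP
    }

    func newClientIDMap() *clientIDMap {
        return &clientIDMap{entries: make(map[ClientID]net.IP)}
    }

    // Set is called when a WebSocket connection with a new ClientID starts.
    func (m *clientIDMap) Set(id ClientID, ip net.IP) {
        m.lock.Lock()
        defer m.lock.Unlock()
        m.entries[id] = ip
    }

    // Get is called later, once the ClientID has been recovered from the
    // KCP session's RemoteAddr.
    func (m *clientIDMap) Get(id ClientID) (net.IP, bool) {
        m.lock.Lock()
        defer m.lock.Unlock()
        ip, ok := m.entries[id]
        return ip, ok
    }
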
The client opts into turbotunnel mode by sending a magic token at the
beginning of each WebSocket connection (before sending even the
ClientID). The token is just a random byte string I generated. The
server peeks at the token and, if it matches, uses turbotunnel mode.
Otherwise, it unreads the token and continues in the old
one-session-per-WebSocket mode.
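A rough sketch of the server-side peek-and-unread logic (the token value and
the handler functions here are placeholders, not the real ones):

    package example

    import (
        "bufio"
        "bytes"
        "io"
    )

    // Placeholder token for this sketch; the real one is a different
    // random byte string.
    var turbotunnelToken = []byte("placeholder-magic-token")

    // Stand-ins for the two server modes.
    func handleTurbotunnel(r io.Reader, w io.Writer) error { return nil }
    func handleOneSession(r io.Reader, w io.Writer) error  { return nil }

    func handleStream(conn io.ReadWriter) error {
        br := bufio.NewReader(conn)
        // Peek does not consume input, so if the token is absent the old
        // code path still sees every byte.
        head, err := br.Peek(len(turbotunnelToken))
        if err == nil && bytes.Equal(head, turbotunnelToken) {
            br.Discard(len(turbotunnelToken)) // consume the token
            return handleTurbotunnel(br, conn) // new turbotunnel mode
        }
        // No token (or a short read): old one-session-per-WebSocket mode,
        // reading through br so the peeked bytes are not lost.
        return handleOneSession(br, conn)
    }
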
Formerly we waited until *both* directions finished. What this meant in
practice is that when the remote connection ended, copyLoop would become
useless but would continue blocking its caller until something else
finally closed the SOCKS connection.
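The changed shape of copyLoop, roughly (a sketch with a hypothetical
signature): return as soon as either direction finishes, so the caller can
close both ends.

    package example

    import (
        "io"
        "net"
    )

    // copyLoop relays between the SOCKS connection and the remote, and
    // returns as soon as *either* direction finishes, instead of blocking
    // until both have.
    func copyLoop(socks, remote net.Conn) {
        done := make(chan struct{}, 2)
        go func() {
            io.Copy(remote, socks)
            done <- struct{}{}
        }()
        go func() {
            io.Copy(socks, remote)
            done <- struct{}{}
        }()
        <-done // the first direction to finish unblocks the caller
    }
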
Ever since we started scrubbing log messages for
https://bugs.torproject.org/21304, logging has become more CPU-intensive
due to our use of regular expressions.
Logging the byte count of every incoming and outgoing message at the
proxy-go instances was taking up a lot of CPU and contributing to the
high CPU usage seen in https://bugs.torproject.org/33211.
Some proxies currently send ?client_ip=0.0.0.0 because of an error in
how they attempt to grep the address from the client's SDP. That's
inflating our "%d/%d connections had client_ip" logs. Instead, treat
these cases as if the IP address were absent.
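The check could be as simple as treating an unspecified address the same as
a missing one, e.g. (a sketch, not the exact code):

    package example

    import (
        "net"
        "net/url"
    )

    // clientIP treats a missing, unparsable, or all-zero client_ip
    // parameter as if no client IP were provided at all, so it does not
    // count toward the "%d/%d connections had client_ip" statistic.
    func clientIP(query url.Values) net.IP {
        ip := net.ParseIP(query.Get("client_ip"))
        if ip == nil || ip.IsUnspecified() {
            return nil // absent, malformed, or 0.0.0.0 / ::
        }
        return ip
    }
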
https://bugs.torproject.org/33157
https://bugs.torproject.org/33385
Unless something externally called Write after Close, the
writeLoop(ws, pr2) goroutine would run forever, because nothing would
ever close pw2/pr2.
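The fix, roughly, is for Close to close the pipe itself so the writeLoop
goroutine exits; a sketch with a hypothetical struct and field layout:

    package example

    import (
        "io"
        "net"
    )

    // Hypothetical shape of the wrapper, not the real type: ws is the
    // underlying WebSocket-like connection, and pw2 is the write end of
    // the pipe that writeLoop(ws, pr2) drains.
    type conn struct {
        ws  net.Conn
        pw2 *io.PipeWriter
    }

    // Close closes the write end of the pipe itself, so writeLoop's read
    // from pr2 returns io.EOF and the goroutine exits, instead of relying
    // on some later external Write to fail and close it.
    func (c *conn) Close() error {
        c.pw2.Close() // unblocks writeLoop(ws, pr2)
        return c.ws.Close()
    }
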
https://bugs.torproject.org/33367#comment:4