Compare commits

...

1044 commits

Author SHA1 Message Date
meskio
70a6b134a7
Merge remote-tracking branch 'gitlab/mr/626' 2025-09-15 19:53:20 +02:00
meskio
13daaaee91
Merge remote-tracking branches 'gitlab/mr/615' and 'gitlab/mr/628' 2025-09-15 19:47:28 +02:00
Renovate Bot
e4715f8bff Update module github.com/aws/aws-sdk-go-v2/config to v1.31.8 2025-09-10 20:09:57 +00:00
Renovate Bot
c504ab262b Update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.42.5 2025-09-10 20:09:45 +00:00
Renovate Bot
f2a9bed6ea Update module github.com/pion/sdp/v3 to v3.0.16 2025-09-10 14:41:54 +00:00
Cecylia Bocovich
766988b77b
Add proxy type label to ProxyPollTotal metric
Annotate ProxyPollTotal prometheus metrics with the proxy type so that
we can track counts of proxies that are matched and that answer by
implementation. This will help us catch bugs by implementation or
deployment.
2025-09-09 18:18:42 -04:00
Cecylia Bocovich
d08efc34c3
Add prometheus metric for proxy answer counts
This adds a prometheus metric that tracks snowflake proxy answers. If
the client has not timed out before the proxy responds with an answer,
the proxy type is recorded along with a status of "success". If the
client has timed out, the type is left blank and the status is recorded
as "timeout".

The goal of these metrics is to help us determine how many proxies fail
to respond and to help narrow down which proxy implementations are
causing client timeouts.
2025-09-09 12:41:27 -04:00
Shelikhoo
c49a86e5a9
Remove dependency proxy to use automatic mirror 2025-09-09 14:56:27 +01:00
Cecylia Bocovich
452a6d22b1
Move and increase sleep time in queuepacketconn test
This should give written data enough time to make it to the post-processing
processing queue before the connection is closed.

See https://github.com/xtaci/kcp-go/issues/273
2025-09-09 09:54:33 -04:00
Cecylia Bocovich
b9e7865c50
Fix data race in queuepacketconn_test.go
Use mutex when checking the length of a TranscriptPacketConn.
2025-09-09 09:54:33 -04:00
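The mutex fix above follows a standard Go pattern: guard every access to shared state, including a plain length check, with the same lock. A minimal sketch (type and method names are illustrative stand-ins, not the actual TranscriptPacketConn API):

```go
package main

import (
	"fmt"
	"sync"
)

// transcript is a hypothetical stand-in for the test's packet transcript:
// a slice guarded by a mutex so concurrent appends and the length check
// in the test cannot race.
type transcript struct {
	mu      sync.Mutex
	packets [][]byte
}

func (t *transcript) Append(p []byte) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.packets = append(t.packets, p)
}

// Len takes the same mutex before reading the slice length, which is the
// pattern the fix describes; reading len() without the lock is a data race
// that `go test -race` reports.
func (t *transcript) Len() int {
	t.mu.Lock()
	defer t.mu.Unlock()
	return len(t.packets)
}

func main() {
	var tr transcript
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			tr.Append([]byte{0})
		}()
	}
	wg.Wait()
	fmt.Println(tr.Len())
}
```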
Renovate Bot
24c42dff13
Bump kcp-go to v5.6.24 2025-09-09 09:54:33 -04:00
Renovate Bot
dd259faae1
Update module golang.org/x/crypto to v0.41.0 2025-09-02 14:44:32 +01:00
Renovate Bot
e4261cd545
Update module github.com/pion/webrtc/v4 to v4.1.4 2025-09-02 12:31:05 +01:00
Renovate Bot
046946fbcb
Update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.42.3 2025-09-01 14:07:37 +01:00
Shelikhoo
2114f02ae4
Replace utls with patch version with backport for tls downgrade fix 2025-08-28 12:53:01 +01:00
Cecylia Bocovich
5b9f7fc7bd
Bump version of ptutil to fix safeprom data race 2025-08-20 16:42:16 -04:00
Cecylia Bocovich
f777a93b29
Add failing concurrency test
This test fails when run with
  go test -v -race
due to data races in calls to prometheus update functions.
2025-08-20 16:37:31 -04:00
Renovate Bot
21d0d913f4
Update module github.com/xtaci/smux to v1.5.35 2025-08-20 16:29:34 -04:00
Renovate Bot
a5ffa128cf
Update module github.com/aws/aws-sdk-go-v2/credentials to v1.18.5 2025-08-20 16:28:09 -04:00
Renovate Bot
3a38526aa1
Update module golang.org/x/sys to v0.35.0 2025-08-20 09:59:14 -04:00
Renovate Bot
06c6fd0683 Update module golang.org/x/net to v0.38.0 [SECURITY] 2025-08-20 13:41:27 +00:00
Cecylia Bocovich
2740e1bbf9
Bump minimum supported go version to 1.23.0 2025-08-20 09:35:32 -04:00
Cecylia Bocovich
c8b0b31601
Clear map of seen proxy IP addresses
We were not previously clearing the map we keep of seen IP addresses,
which resulted in our unique proxy IP counts representing churn rather
than unique IP counts per day, except during broker process restarts.
2025-08-20 09:35:32 -04:00
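The bug described above reduces to a map that is never reset at the end of a metrics period; a minimal sketch of the fix (variable names are ours, the broker's actual bookkeeping differs):

```go
package main

import "fmt"

// Sketch of the fix: when daily metrics are printed, the map of seen proxy
// IPs must be reset, otherwise later periods only count IPs not seen since
// the process started, i.e. churn rather than daily unique counts.
func main() {
	seen := map[string]bool{"1.2.3.4": true, "5.6.7.8": true}
	fmt.Println(len(seen)) // unique IPs this period

	// Reset at the end of the metrics period by allocating a fresh map,
	// so the next period counts from zero again.
	seen = make(map[string]bool)
	fmt.Println(len(seen))
}
```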
Cecylia Bocovich
43a35655ad
Add failing test for cleared IP map
We are not clearing the map of seen IP addresses when metrics are
printed, resulting in lower than expected unique IP address counts for
daily metrics.

See https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40472
2025-08-20 09:35:32 -04:00
Shelikhoo
14b8cde3da
Update probetest container images to build binary and reduce final image size with multiple stages 2025-08-20 11:44:35 +01:00
Shelikhoo
ca07f57448
Remove s390x from container building targets 2025-08-19 21:10:14 +01:00
Cecylia Bocovich
f4027c1128
Use Go 1.24 for android CI job
gomobile now requires go >= 1.24.0
2025-08-19 13:08:48 -04:00
David Fifield
fd42bcea8a Comment typo. 2025-08-19 14:50:23 +00:00
David Fifield
74c39cc8e9
Bin country stats counts before sorting, not after.
This avoids an information leak where, if two countries have the same
count but are not in alphabetical order, you know the first one had a
larger count than the second one before binning.

Example: AA=25,BB=27

Without binning, these should sort descending by count:
BB=27,AA=25

But with binning, the counts are the same, so it should sort ascending
by country code:
AA=32,BB=32

Before this change, BB would sort before AA even after binning, which
lets you infer that the count of BB was greater than the count of AA
*before* binning:
BB=32,AA=32
2025-08-19 09:58:51 -04:00
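The bin-before-sort fix and the AA/BB example above can be sketched in a few lines. This assumes a bin size of 8 rounded upward (consistent with 25 and 27 both binning to 32 in the example); the real bin size and record types live in the broker source:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// binCount rounds a count up to the next multiple of 8 using only integer
// arithmetic (no float64 needed): 25 -> 32, 27 -> 32, 8 -> 8.
func binCount(count uint) uint {
	return (count + 7) / 8 * 8
}

type record struct {
	cc    string
	count uint
}

// formatBinned bins the counts *before* sorting, so two countries whose
// counts bin to the same value sort ascending by country code and leak
// nothing about their pre-binning order.
func formatBinned(counts map[string]uint) string {
	records := make([]record, 0, len(counts))
	for cc, count := range counts {
		records = append(records, record{cc, binCount(count)})
	}
	// Descending by (binned) count, then ascending by country code.
	sort.Slice(records, func(i, j int) bool {
		if records[i].count != records[j].count {
			return records[i].count > records[j].count
		}
		return records[i].cc < records[j].cc
	})
	parts := make([]string, 0, len(records))
	for _, r := range records {
		parts = append(parts, fmt.Sprintf("%s=%d", r.cc, r.count))
	}
	return strings.Join(parts, ",")
}

func main() {
	// AA=25 and BB=27 both bin to 32, so AA sorts first.
	fmt.Println(formatBinned(map[string]uint{"AA": 25, "BB": 27}))
}
```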
David Fifield
7a003c9bb1
Add a failing test for ordering of country stats that bin to the same value. 2025-08-19 09:58:50 -04:00
David Fifield
9946c0f1d8
Add a test for formatAndClearCountryStats with binned=true. 2025-08-19 09:58:50 -04:00
David Fifield
bd04cd7752
Refactor TestFormatAndClearCountryStats. 2025-08-19 09:58:50 -04:00
David Fifield
cc0a33faea
Move formatAndClearCountryStats test into a new metrics_test.go.
This is more unit test–y. We don't need a full broker instantiation for
testing this function, unlike other tests in snowflake-broker_test.go.
2025-08-19 09:58:50 -04:00
David Fifield
ec39237e69 Add a test that formatAndClearCountryStats clears the map. 2025-08-15 19:24:58 +00:00
David Fifield
ed3bd99df6 Rename displayCountryStats to formatAndClearCountryStats.
The old name did not make it clear that the function has the side effect
of clearing the map.
2025-08-15 19:24:58 +00:00
David Fifield
75daf2210f Refactor displayCountryStats.
Move the record types closer to where they are used.

Use a strings.Builder rather than repeatedly concatenating strings
(which creates garbage).

Use the value that m.Range already provides us, don't look it up again
with LoadAndDelete.

Add documentation comments.
2025-08-15 19:24:58 +00:00
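The strings.Builder point in the refactor above is a common Go idiom; a minimal illustration (the function is ours, not the broker's):

```go
package main

import (
	"fmt"
	"strings"
)

// joinLines builds output with a strings.Builder instead of repeated
// string concatenation with +=, which would allocate a new string
// (garbage) on every iteration.
func joinLines(lines []string) string {
	var b strings.Builder
	for _, line := range lines {
		b.WriteString(line)
		b.WriteString("\n")
	}
	return b.String()
}

func main() {
	fmt.Print(joinLines([]string{"a", "b"}))
}
```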
David Fifield
6e0e5f9137 Express records.Less more clearly. 2025-08-15 19:24:58 +00:00
David Fifield
fed11184c7 Have records.Less express the order we want directly.
The ordering is descending by count, then ascending by cc. Express that
directly, rather than specifying the opposite ordering and using
sort.Reverse.
2025-08-15 19:24:58 +00:00
David Fifield
b058b10a94 Express binCount using integer operations.
No need to bring a float64 into this.
2025-08-15 19:24:58 +00:00
Cecylia Bocovich
70974640ab
Defer SQS client IP extraction to ClientOffers
Now that both SQS and AMP cache are pulling remote addresses from the
SDP, avoid duplicate decodings of the ClientPollRequest by extracting
the remote addr in ClientOffers.
2025-08-14 14:13:47 -04:00
Cecylia Bocovich
0bbcb1eca4
Add test for AMP cache geolocation 2025-08-14 14:13:47 -04:00
Cecylia Bocovich
31f879aad5
Pull client IP from SDP for AMP cache rendezvous
The remote address for AMP cache rendezvous is always geolocated to the
AMP cache server address. For more accurate metrics on where this
rendezvous method is used and working, we can pull the remote address
directly from the client SDP sent in the poll request.
2025-08-14 14:13:47 -04:00
Shelikhoo
8ae1994e4b
Update snowflake proxy image to use most recent golang and geodb 2025-07-31 15:01:07 +01:00
meskio
a9fe899198
Merge remote-tracking branch 'gitlab/mr/593' 2025-07-31 11:19:18 +02:00
Renovate Bot
437cc37443 chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.30.2 2025-07-30 19:29:12 +00:00
Renovate Bot
a176e7567d
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.29.18 2025-07-22 10:00:58 -04:00
Renovate Bot
79c4dfbdc8
chore(deps): update module github.com/pion/sdp/v3 to v3.0.15 2025-07-22 09:58:16 -04:00
Cecylia Bocovich
58b1d48e54
Increment prometheus proxy_total count once per IP
This fixes a regression from !574 that did not check whether the IP was
unique before incrementing the counter.

Closes https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40470
2025-07-10 10:41:26 -04:00
Cecylia Bocovich
d7ebb2f99c
Add clarification to broker-spec on client-*-ips 2025-07-10 10:34:26 -04:00
David Auer
1dc9947d2a Fix missing labels in Docker image
In a multi-stage Docker build, the LABEL commands need to be applied in the final stage.
2025-07-08 20:58:09 +00:00
Cecylia Bocovich
1d73e14f34
Rename metrics update functions
This changes the metrics update functions to UpdateProxyStats and
UpdateClientStats, which is more accurate and clear than the previous
CountryStats and RendezvousStats names.
2025-06-24 13:12:10 -04:00
Cecylia Bocovich
78cf8e68b2
Simplify broker metrics and remove mutexes
This is a large change to how the snowflake broker metrics are
implemented. This change removes all uses of mutexes from the metrics
implementation in favor of atomic operations on counters stored in
sync.Map.

There is a small change to the actual metrics output. We used to count
the same proxy ip multiple times in our snowflake-ips-total and
snowflake-ips country stats if the same proxy ip address polled more
than once with different proxy types. This was an overcounting of the
number of unique proxy IP addresses that is now fixed.

If a unique proxy ip polls with more than one proxy type or nat type,
these polls will still be counted once for each proxy type or nat type
in our proxy type and nat type specific stats (e.g.,
snowflake-ips-nat-restricted and snowflake-ips-nat-unrestricted).
2025-06-24 13:12:10 -04:00
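The mutex-free design described above, atomic counters stored in a sync.Map, is a well-known Go pattern; a minimal sketch under assumed names (not the broker's actual types):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// counters keeps one *atomic.Int64 per key in a sync.Map. LoadOrStore
// makes counter creation race-free, and Add makes increments race-free,
// so no mutex is needed anywhere.
type counters struct {
	m sync.Map // key -> *atomic.Int64
}

func (c *counters) Inc(key string) {
	v, _ := c.m.LoadOrStore(key, new(atomic.Int64))
	v.(*atomic.Int64).Add(1)
}

func (c *counters) Get(key string) int64 {
	v, ok := c.m.Load(key)
	if !ok {
		return 0
	}
	return v.(*atomic.Int64).Load()
}

func main() {
	var c counters
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc("standalone")
		}()
	}
	wg.Wait()
	fmt.Println(c.Get("standalone"))
}
```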
David Fifield
64c7a26475 Comment typo. 2025-06-19 15:39:24 +00:00
David Fifield
55a06f216c Delete stray space. 2025-06-19 15:26:39 +00:00
Renovate Bot
2650ef7468
chore(deps): update module github.com/pion/webrtc/v4 to v4.1.2 2025-06-18 15:56:40 +01:00
Renovate Bot
647d5d37c7
chore(deps): update module github.com/pion/sdp/v3 to v3.0.12 2025-05-21 10:55:22 -04:00
Cecylia Bocovich
a377a4e0da
Add client-snowflake-timeout-count to broker spec
We added a new snowflake metric on the number of timeouts. This brings
doc/broker-spec.txt up to date on our current exported metrics.
2025-05-20 12:29:41 -04:00
meskio
1c53a63744
Merge remote-tracking branch 'gitlab/mr/569' 2025-05-13 13:43:15 +02:00
Gus
506c33a2fd Update Snowflake bridge lines - CDN77, ampcache, and SQS 2025-05-12 20:03:54 +01:00
Renovate Bot
5d956456a5
chore(deps): update module github.com/prometheus/client_golang to v1.22.0 2025-05-01 14:24:46 +01:00
Renovate Bot
e5a8a16efc
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.29.14 2025-05-01 13:42:30 +01:00
meskio
726d66c75c
Merge remote-tracking branches 'gitlab/mr/551', 'gitlab/mr/552' and 'gitlab/mr/555' 2025-04-29 10:26:40 +02:00
Renovate Bot
8fa0717552 chore(deps): update module github.com/pion/webrtc/v4 to v4.1.0 2025-04-28 12:40:55 +00:00
Renovate Bot
28fd1ecc2b chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.38.5 2025-04-28 12:40:51 +00:00
Renovate Bot
236f15f81c chore(deps): update module github.com/aws/aws-sdk-go-v2/credentials to v1.17.67 2025-04-28 12:40:45 +00:00
Shelikhoo
a4a55e4398
CI: fix invalid group name by removing trailing slash 2025-04-28 13:25:53 +01:00

Renovate Bot
ef276c8161
chore(deps): update module github.com/pion/ice/v4 to v4.0.10 2025-04-24 14:59:22 +01:00
Shelikhoo
3d7dcfc55d
Add updated docker compose file 2025-04-17 16:41:22 +01:00
meskio
2a5a09e451
CI: use the parent group as namespace for the dependency proxy
This should solve our problem of failing to get images on CI runs.
2025-04-16 15:41:36 +02:00
Renovate Bot
d264cf2cdb
chore(deps): update module github.com/miekg/dns to v1.1.65 2025-04-14 15:12:45 +01:00
Renovate Bot
a5ee60e3b5
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.29.13 2025-04-09 15:45:54 +01:00
Renovate Bot
396f7b9941
chore(deps): update module github.com/pion/ice/v4 to v4.0.9 2025-04-03 14:10:40 +01:00
Cecylia Bocovich
9378c53d8e
Bump version of shadow for CI tests 2025-04-02 11:11:01 -04:00
Renovate Bot
61c797406b
chore(deps): update module github.com/prometheus/client_golang to v1.21.1 2025-04-01 20:50:59 +01:00
Cecylia Bocovich
f712dfdd72
Fix shadow and tgen cache in .gitlab-ci.yml
Make sure shadow and tgen runtime dependencies are installed and the
paths are correct
2025-03-27 22:12:35 -04:00
Cecylia Bocovich
08239cca2a
Remove broker log messages for invalid SDP and SQS cleanup 2025-03-27 15:34:09 -04:00
Renovate Bot
5ec92a5dd4
chore(deps): update module github.com/aws/aws-sdk-go-v2/credentials to v1.17.64 2025-03-27 14:36:38 +00:00
Cecylia Bocovich
dd5fb03c49
Remove default relay pattern option from broker
This was only useful to us when we first implemented the feature, to be
able to support proxies that hadn't yet updated, when we had a single
Snowflake bridge. Now that we have multiple bridges, it is unnecessary as
proxies that don't send their accepted relay pattern are rejected
anyway.
2025-03-26 13:32:30 -04:00
Cecylia Bocovich
c0ac0186f1
Remove bad relay pattern log message
We already count proxies rejected for their supported relay URL in
snowflake metrics and these messages are filling up our broker logs.
2025-03-26 13:32:30 -04:00
Cecylia Bocovich
8343bbc336
Add context with timeout for client requests
Client timeouts are currently counted from when the client is matched
with a proxy. Instead, count client timeouts from the moment when the
request is received.

Closes #40449
2025-03-26 13:30:59 -04:00
Cecylia Bocovich
db0364ef87
Update DEBIAN_STABLE to bookworm in CI tests 2025-03-20 12:32:40 -04:00
Cecylia Bocovich
116fe9f578
Bump minimum version of go to 1.22
This fixes a pointer bug in our broker sqs code by enabling the loopvar
feature https://go.dev/wiki/LoopvarExperiment

See tpo/anti-censorship/pluggable-transports/snowflake#40363
2025-03-20 12:31:26 -04:00
meskio
fdac01ca90
CI: use Dependency Proxy when available
This sets up CI to allow the use of the GitLab Dependency Proxy which
caches images pulled from DockerHub, in order to bypass rate-limiting.

The DOCKER_REGISTRY_URL variable is set dynamically by the
check_dependency_proxy_access job defined in dependency_proxy.yml such
that only pipelines triggered by users with the requisite access will be
configured to use the proxy, while all others will continue to pull from
DockerHub as before.

When DOCKER_REGISTRY_URL is pre-set in a project's CI/CD variable
settings, the extra job is skipped and the dependency proxy is used
always, unconditionally.

To avoid breaking CI pipelines on 3rd-party GitLab instances, we only
include the dependency proxy template on gitlab.tpo

See: https://gitlab.torproject.org/tpo/tpa/team/-/issues/40335
2025-03-20 17:28:05 +01:00
Cecylia Bocovich
6472bd86cd
Bump version of Snowflake to 2.11.0 2025-03-18 14:37:02 -04:00
WofWca
f3e040bbd8
improvement: less scary failed conn logs & metrics
...and adjust the `totalFailedConnections` metric name
and description.

This commit should make the periodic stats log messages
and the relevant metric look less scary to users:
P2P connection failures are relatively frequent and are usually
not indicative of the proxy operator having done something wrong.
So let's tone the wording down.

See the discussion: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/516#note_3173677.
2025-03-15 11:15:22 -04:00
Shelikhoo
f715c397c2
Update README to reflect project usecase 2025-03-12 13:58:30 +00:00
WofWca
46fdcce5c6
fix: data race warnings of tokens_t
This migrates from using `atomic.LoadInt64` on `int64`
to making the `clients` field itself `atomic.Int64`.
Also `count` now takes `*tokens_t` by reference,
which fixes a linter warning.

It's not clear to me why it warned about this,
but I simplified it anyway.
2025-03-12 09:53:40 -04:00
WofWca
730e400123 fix: periodicProxyStats.connectionCount race
And `failedConnectionCount`.
Convert the `int` / `uint` to `atomic.Int32` / `atomic.Uint32`.
The race was discovered by running a proxy with the `-race` flag.
2025-03-12 00:47:22 +04:00
WofWca
4205121689 fix: make NATPolicy thread-safe
Although it does not look like there are situations
where it is critical to use a mutex, because it's only used
when performing rendezvous with a proxy, which doesn't happen
too frequently,
let's still do it just to be sure.
2025-03-12 00:47:22 +04:00
WofWca
1923803124 fix: potential race conditions with non-local err
Some of the changes do not appear to have a potential race condition,
so for those the change is purely a refactor,
while in others (e.g. in broker.go and in proxy/lib/snowflake.go)
we do use the same variable from multiple threads / functions.
2025-03-12 00:47:07 +04:00
WofWca
01819eee32
fix(proxy): race condition warning for isClosing
It appears that there is no need for the `isClosing` var at all:
we can just `close(c.sendMoreCh)` to ensure that it doesn't block
any more `Write()`s.

This is a follow-up to
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/144.
Related:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/524.
2025-03-11 15:50:53 -04:00
Renovate Bot
1aa5a61fe8
chore(deps): update module github.com/pion/sdp/v3 to v3.0.11 2025-03-11 13:39:48 -04:00
Cecylia Bocovich
b8410bd748
Merge branch 'renovate/github.com-pion-ice-v4-4.x' 2025-03-11 12:46:12 -04:00
Renovate Bot
3fb30dbb86
chore(deps): update module github.com/pion/webrtc/v4 to v4.0.13 2025-03-11 12:45:32 -04:00
Renovate Bot
da4164473c
chore(deps): update module github.com/pion/webrtc/v4 to v4.0.13 2025-03-11 12:42:28 -04:00
Cecylia Bocovich
57dc276e48
Update broker metrics to count matches, denials, and timeouts
Our metrics were undercounting client polls by missing the case where
clients are matched with a snowflake but receive a timeout before the
snowflake responds with its answer. This change adds a new metric,
called client-snowflake-timeout-count, to the 24 hour broker stats and a
new "timeout" status label for prometheus metrics.
2025-03-11 12:36:27 -04:00
WofWca
583178f4f2
feat(proxy): add failed connection count stats
For the summary log and for Prometheus metrics.

Log output example:

> In the last 1h0m0s, there were 7 completed successful connections. 2 connections failed. Traffic Relayed ↓ 321 KB (0.10 KB/s), ↑ 123 KB (0.05 KB/s).
2025-03-11 13:12:44 +00:00
Renovate Bot
5ef4761968
chore(deps): update module github.com/xtaci/smux to v1.5.34 2025-03-10 15:13:46 +00:00
Cecylia Bocovich
cfde2b79fc
Create CI artifact regardless of when shadow fails
Closes https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40377
2025-03-05 16:14:30 -05:00
Cecylia Bocovich
9e619a3654
Remove metrics race condition in sqs test
To test that the broker responds with a proxy answer if available, have
only one valid client offer to ensure metrics will always be in the
first multiple of 8.
2025-03-04 10:37:37 -05:00
Cecylia Bocovich
80374c6d93
Move nonblocking AddSnowflake out of goroutine in sqs test
This fixes a race condition in tests where sometimes snowflake matching
happens before enough snowflakes get added to the heap.
2025-03-04 10:37:37 -05:00
WofWca
50bed1e67a
refactor: docstring for checkIsRelayURLAcceptable
Related: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40378.
2025-03-03 12:14:15 +00:00
Cecylia Bocovich
eb13b2ff4b
Copy base client config for each SOCKS connection
Fixes a bug where socksAcceptLoop was reusing the same client config
when processing arguments from multiple SOCKS connections, causing
different bridge lines to clobber each other.
2025-02-25 10:40:51 -05:00
meskio
5f7e23813d
Merge remote-tracking branch 'gitlab/mr/512' 2025-02-24 12:30:30 +01:00
Renovate Bot
0a436a2bc2 chore(deps): update module github.com/prometheus/client_golang to v1.21.0 2025-02-20 14:48:59 +00:00
Cecylia Bocovich
63613cc50a
Fix minor data race in Snowflake broker metrics 2025-02-20 09:39:11 -05:00
Cecylia Bocovich
1180d11a66
Remove data races from sqs tests
Our SQS tests were not concurrency safe and we hadn't noticed until now
because we were processing incoming SQS queue messages sequentially
rather than in parallel.

This fix removes the log output checks, which were prone to error
anyway, and relies instead on gomock's expected function calls and
strategic use of the context cancel function for each test.
2025-02-20 09:39:11 -05:00
Cecylia Bocovich
2250bc86f6
Process and read broker SQS messages more quickly
We're losing a lot of messages from the broker SQS queue because they
are exceeding their maximum lifetime before being read and processed by
the broker. This change speeds up that process by increasing the size of
messagesChn and processing the messages within a go routine.
2025-02-20 09:37:18 -05:00
WofWca
6384643109
fix(proxy): improve NAT test reliability
This is a hack, and I'm not entirely sure how it works,
but it appears to work, at least somewhat.
See https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40419#note_3141855.
2025-02-17 11:47:11 +00:00
meskio
e345c3bac9
proxy: add country to prometheus metrics 2025-02-13 12:44:23 +01:00
meskio
b3c734ed63
proxy: webRTCconn gives the remote IP instead of the Address
We only use the IP part of the address.
2025-02-13 12:44:17 +01:00
WofWca
57eefd4b37
Remove outdated comment
As per https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/502#note_3159902.

The comment was added in c28c8ca489,
and apparently became outdated after
83c01565ef.

Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2025-02-12 11:50:29 -05:00
WofWca
cb0fb02cd5
fix(proxy): not answering before client timeout
This is related to
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40230.

The initial MR that closed that issue,
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/391,
was not semantically correct, because `DataChannelTimeout`
starts after the client has already received the answer.

After
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/498#note_3156256
the code became not only semantically incorrect,
but also functionally incorrect because now if this timeout is hit
by the proxy, the client is guaranteed to be gone already.
This commit fixes it, by lowering the timeout.

This addresses a suggestion in
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40447.

This also closes
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40381
and supersedes
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/415.
2025-02-12 10:17:08 -05:00
Renovate Bot
cb30331aa2
chore(deps): update gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil digest to efaf4e0
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2025-02-12 10:07:00 -05:00
Renovate Bot
5d97990096
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.29.6
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2025-02-12 10:06:02 -05:00
Renovate Bot
d8838d1727
chore(deps): update module github.com/pion/ice/v4 to v4.0.6
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2025-02-12 10:03:22 -05:00
Renovate Bot
971d88ca9d
chore(deps): update module golang.org/x/net to v0.35.0 2025-02-11 11:22:39 +00:00
Shelikhoo
33d00aea24
update golang testing setting in CI 2025-02-10 12:54:43 +00:00
Renovate Bot
2c2839fc7a
chore(deps): update module github.com/aws/aws-sdk-go-v2/credentials to v1.17.59 2025-02-06 13:51:27 +00:00
Renovate Bot
905002d146
chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.37.14 2025-02-06 12:42:01 +00:00
Cecylia Bocovich
4a1e075ee0
Lower broker ClientTimeout to 5 seconds
Matches the observed timeout for CDN77, based on user reports.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40446
2025-02-04 15:41:35 -05:00
meskio
35bc8ec7c3
Merge remote-tracking branches 'gitlab/mr/486' and 'gitlab/mr/487' 2025-02-04 18:56:11 +01:00
Renovate Bot
a390085d2a chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.29.4 2025-01-31 20:12:47 +00:00
Renovate Bot
276bce42b5 chore(deps): update module github.com/miekg/dns to v1.1.63 2025-01-30 15:46:29 +00:00
onyinyang
26f7ee4b06
Remove utls library from snowflake and use ptutil/utls 2025-01-29 13:01:33 -05:00
Renovate Bot
0dee9d68bd
chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.37.9
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2025-01-22 14:37:19 -05:00
Renovate Bot
d710216fb7
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.29.1
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2025-01-22 14:33:45 -05:00
meskio
313e54befe
CI: use /etc/localtime instead of /etc/timezone
/etc/timezone is a legacy Debian-specific file. Let's use localtime.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1038849

* Related: #40414
2025-01-22 17:38:49 +01:00
Renovate Bot
fa122efb61
chore(deps): update module github.com/xtaci/smux to v1.5.33 2025-01-21 15:41:23 +00:00
Renovate Bot
883e8238d1
chore(deps): update module github.com/pion/webrtc/v4 to v4.0.8 2025-01-21 14:08:17 +00:00
meskio
7938509b6f
Merge remote-tracking branches 'gitlab/mr/480' and 'gitlab/mr/485' 2025-01-20 17:42:38 +01:00
Renovate Bot
590735c838 chore(deps): update module github.com/aws/aws-sdk-go-v2 to v1.33.0 2025-01-16 21:16:35 +00:00
Renovate Bot
9ede2ca3da chore(deps): update module github.com/pion/sdp/v3 to v3.0.10 2025-01-16 21:16:21 +00:00
Cecylia Bocovich
eedac71a3a
Add self-signed ISRG Root X1 to cert pool
Replace the expired DST Root CA X3 signed ISRG Root X1 cert with the
self-signed cert.

Closes https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40440
2025-01-15 10:56:17 -05:00
meskio
fad8ddb840
Merge remote-tracking branches 'gitlab/mr/473' and 'gitlab/mr/474' 2025-01-14 10:29:10 +01:00
Renovate Bot
3ac3c177c2 chore(deps): update module golang.org/x/net to v0.34.0 2025-01-13 09:10:43 +00:00
Renovate Bot
2556b3cc7b chore(deps): update module github.com/aws/aws-sdk-go-v2 to v1.32.8 2025-01-13 09:10:15 +00:00
David Fifield
1895bb9d2c Comment typo. 2025-01-13 08:49:15 +00:00
Renovate Bot
e4c95fc242
chore(deps): update module golang.org/x/net to v0.33.0 [security]
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2025-01-07 15:54:08 -05:00
meskio
cdbfc9612f
Merge remote-tracking branches 'gitlab/mr/464', 'gitlab/mr/467' and 'gitlab/mr/471' 2025-01-07 13:08:21 +01:00
WofWca
e038b68d79 refactor(proxy): simplify tokens.ret() on error 2025-01-04 19:31:44 +04:00
Renovate Bot
847c7c45a8 chore(deps): update module golang.org/x/crypto to v0.31.0 [security] 2024-12-23 16:38:11 +00:00
Renovate Bot
1d3772bb80 chore(deps): update module github.com/aws/aws-sdk-go-v2 to v1.32.7 2024-12-19 20:14:48 +00:00
Shelikhoo
e7a7f41c5b
separate docker hub mirroring to a separate stage 2024-12-16 13:28:30 +00:00
meskio
63549af07e
Merge remote-tracking branches 'gitlab/mr/459' and 'gitlab/mr/461' 2024-12-16 10:49:17 +01:00
Renovate Bot
0e793d6cb9 chore(deps): update module github.com/pion/webrtc/v4 to v4.0.6 2024-12-16 06:50:07 +00:00
WofWca
85a93c5303 docs: clarify -ports-range is for port forwarding 2024-12-13 17:06:13 +04:00
WofWca
92521b6679 improvement: warn if ports-range is too narrow
...and improve the docstring for the parameter.
2024-12-13 17:06:11 +04:00
WofWca
cb32d008ca docs: improve ephemeral-ports-range description
Clarify that the default range is wide.
2024-12-13 16:09:22 +04:00
Shelikhoo
6e7c177157
copy container tag to generate stable with crane to avoid flattening image 2024-12-12 13:33:52 +00:00
David Fifield
dbad475254 Finish incomplete comment for newEncapsulationPacketConn. 2024-12-12 06:40:57 +00:00
Renovate Bot
a0731443ff
chore(deps): update module golang.org/x/net to v0.32.0 2024-12-10 15:33:15 +00:00
Renovate Bot
ef0d391243
chore(deps): update module gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/goptlib to v1.6.0 2024-12-10 14:11:58 +00:00
WofWca
94b6647d33
feat(client): try restricted proxy if NAT unknown
Just once, to offload unrestricted proxies.
This is useful when our STUN servers are blocked or don't support
the NAT discovery feature, or if they're just slow.

Closes https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40178.
Partially addresses https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40376

Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-12-05 10:34:08 -05:00
WofWca
f6767061e4
refactor: separate some Negotiate logic
As per https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/392#note_3096760
in preparation for further changes to `Negotiate`.
2024-12-05 10:27:36 -05:00
Cecylia Bocovich
75e73ce397
Fixup new STUN servers to include protocol 2024-12-04 12:02:18 -05:00
Cecylia Bocovich
cc644134ad
Added new RFC 5780 compatible STUN servers 2024-12-03 15:36:12 -05:00
Cecylia Bocovich
1607f9ce85
Remove nonfunctional STUN servers
Remove STUN servers that are offline, appear to be misconfigured, or do
not support NAT discovery
2024-12-03 15:21:07 -05:00
Cecylia Bocovich
6ecd5bf6d7
Remove log when offer is nil
After !414, we started returning a nil offer from pollOffer if the proxy
was not matched with a client. It's no longer an indication of failure,
so we should remove the "bad offer from broker" log message.
2024-12-03 15:05:44 -05:00
Cecylia Bocovich
5b479fdb13
Log EventOnCurrentNATTypeDetermined for proxy 2024-12-03 15:05:44 -05:00
Renovate Bot
dfbeee00de
chore(deps): update module github.com/aws/aws-sdk-go-v2 to v1.32.6 2024-12-03 13:25:04 +00:00
Renovate Bot
64995f391b
chore(deps): update golang docker tag to v1.23 2024-12-03 13:02:35 +00:00
WofWca
5e7b35bf12
refactor: use named returns for some funcs
This should make the functions easier to use,
and harder to confuse return values of the same type.
2024-12-03 12:51:42 +00:00
meskio
e6555e4a1e
Merge remote-tracking branch 'gitlab/mr/444' 2024-12-02 15:14:01 +01:00
Renovate Bot
295748f3ff chore(deps): update module github.com/pion/webrtc/v4 to v4.0.5 2024-11-29 14:24:51 +00:00
WofWca
ae5bd52821
improvement: use SetIPFilter for local addrs
Closes https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40271.
Supersedes https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/417.

This simplifies the code and (probably) removes the need for
`StripLocalAddresses`, although it makes us more dependent on Pion.

Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-11-28 10:56:40 -05:00
Cecylia Bocovich
43799819a1
Suppress logs of proxy events by default 2024-11-28 10:42:54 -05:00
Shelikhoo
d069a0a1b9
Add Container Image Mirroring from Tor Gitlab to Docker Hub 2024-11-27 14:43:48 +00:00
Renovate Bot
f940d7d6ef
chore(deps): update module github.com/pion/ice/v4 to v4.0.3
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-11-26 14:01:25 -05:00
meskio
ccb351e817
Merge remote-tracking branches 'gitlab/mr/435' and 'gitlab/mr/439' 2024-11-25 15:51:47 +01:00
Renovate Bot
6e1eb39e79 chore(deps): update module github.com/pion/webrtc/v4 to v4.0.2 2024-11-21 15:15:07 +00:00
WofWca
c5d680342b
refactor: separate function for connectToRelay
This should make the code easier to glance over,
to understand that relay connection is performed from inside
the datachannel handler.
2024-11-21 14:55:28 +00:00
WofWca
f65f1d850f improvement: use IsLinkLocalUnicast in IsLocal
Looking at the code, this commit appears to change behavior,
because `IsLocal` will now return `true` for IPv6 link-local unicast
addresses.
2024-11-21 17:31:56 +04:00
WofWca
387096b2a1 refactor: rewrite IsLocal with ip.IsPrivate()
The referenced MR has been implemented.
The extra checks have been added in 8467c01e9e.

With this rewrite the checks are exactly the same as of Go 1.23.3.
2024-11-18 20:49:16 +04:00
Shelikhoo
239357509f
update snowflake to use pion webrtc v4 2024-11-13 14:58:53 +00:00
Renovate Bot
290be512e3 chore(deps): update module github.com/pion/webrtc/v3 to v4 2024-11-11 18:45:36 +00:00
Cecylia Bocovich
8b2e12c96d
Bump version of Snowflake to 2.10.1 2024-11-11 13:15:48 -05:00
Cecylia Bocovich
b06004a365
Bump version of snowflake to 2.10.0 2024-11-07 16:56:55 -05:00
Cecylia Bocovich
aaf8826560
Add proxy event for when client has connected
This enables registering callbacks that are invoked when a client
has opened a data channel connection to the proxy.
2024-11-06 10:31:33 -05:00
Cecylia Bocovich
0d8bd159ec
Have SnowflakeConn.Close() return errors
Return an error if the connection was already closed. On the first
close, return an error if any of the calls inside Close() returned an
error in this order:
- smux.Stream.Close()
- pconn.Close()
- smux.Session.Close()
2024-10-29 14:58:01 -04:00
Cecylia Bocovich
a019fdaec9
Perform SnowflakeConn.Close() logic only once
Use synchronization to avoid a panic if SnowflakeConn.Close is called
more than once.
2024-10-29 14:58:01 -04:00
Waldemar Zimpel
028ff82683 Optionally enable local time for logging
Introduces the option `-log-local-time` which switches to local time
for logging instead of using UTC. When this option is applied, a message
is also written to the log on startup noting that local time is in use,
so the user/operator can take care of anonymity in case
the logs are going to be shared.
2024-10-28 16:23:44 +01:00
meskio
0e0ca8721e
Merge remote-tracking branch 'gitlab/mr/423' 2024-10-23 09:11:41 +02:00
Waldemar Zimpel
93f5d1ef7f Log average transfer rate
Adds the average transfer rate for the summary interval to the summary log lines
2024-10-23 03:25:26 +02:00
Neel Chauhan
f4305180b9
Remove the pollInterval loop from SignalingServer.pollOffer in the standalone proxy
Closes #40210.
2024-10-22 14:50:43 -04:00
meskio
a7855d506c
Merge remote-tracking branches 'gitlab/mr/420' and 'gitlab/mr/422' 2024-10-21 12:50:40 +02:00
Renovate Bot
f22f1ceb9f chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.28.0 2024-10-17 19:53:19 +00:00
Renovate Bot
ce2fc00fb3 chore(deps): update module github.com/prometheus/client_golang to v1.20.5 2024-10-17 19:53:08 +00:00
Neel Chauhan
8792771cdc
broker and proxy must not reject client offers with no ICE candidates
Fixes #40371. Partially reverts !141.
2024-10-17 15:46:02 -04:00
Neel Chauhan
9ff205dd7f
Probetest/proxy: Set multiple comma-separated default STUN URLs
This adds the BlackBerry STUN server alongside Google's. Closes #40392.
2024-10-17 15:15:02 -04:00
Renovate Bot
1085d048b9
chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.36.2 2024-10-17 14:54:35 -04:00
Renovate Bot
fc79084455
chore(deps): update module golang.org/x/net to v0.30.0 2024-10-17 14:53:30 -04:00
Renovate Bot
33318ea598
chore(deps): update module github.com/pion/webrtc/v3 to v3.3.4 2024-10-17 14:51:40 -04:00
meskio
846ef79c35
Merge remote-tracking branch 'gitlab/mr/412' 2024-10-16 12:13:19 +02:00
Renovate Bot
214ee6b15f chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.43 2024-10-08 20:37:17 +00:00
meskio
177a6bdf68
Merge remote-tracking branches 'gitlab/mr/405' and 'gitlab/mr/410' 2024-10-08 12:19:03 +02:00
Renovate Bot
1b44ee7626 chore(deps): update module golang.org/x/crypto to v0.28.0 2024-10-07 16:34:40 +00:00
Renovate Bot
4e45515cd3 chore(deps): update module github.com/xtaci/smux to v1.5.31 2024-10-07 16:32:43 +00:00
Renovate Bot
17be3430d9
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.41 2024-10-07 16:26:23 +01:00
WofWca
5c7bdcea77
fix(probetest): wrong "restricted" sometimes
Closes https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40387
2024-09-26 18:15:05 +01:00
WofWca
d346639eda
improvement(proxy): improve NAT check logging 2024-09-26 18:15:04 +01:00
WofWca
9b04728809
docs: improve proxy CLI param descriptions
Since the proxy component is the one most intended for public use,
more comprehensive docs are good.
2024-09-25 16:50:18 +01:00
Cecylia Bocovich
15b3f64a3a
Update go.sum file with go mod tidy 2024-09-24 14:14:03 -04:00
Cecylia Bocovich
177ab12bd9
Revert "chore(deps): update module github.com/xtaci/kcp-go/v5 to v5.6.17"
This reverts commit 99521fb134.
2024-09-24 13:13:15 -04:00
Cecylia Bocovich
443c633aab
Revert "Move time.Sleep call in turbotunnel test"
This reverts commit 4497d68d6f.
2024-09-24 13:12:23 -04:00
Renovate Bot
f353be8388
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.37 2024-09-24 14:11:01 +01:00
meskio
7a8f484e7d
Merge remote-tracking branches 'gitlab/mr/399' and 'gitlab/mr/402' 2024-09-24 11:36:22 +02:00
meskio
d4d517f37b
Merge remote-tracking branch 'gitlab/mr/401' 2024-09-24 11:35:27 +02:00
Renovate Bot
00cf7bdfc6 chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.35.1 2024-09-23 19:21:31 +00:00
anarcat
e8736ecdba use proper image name for debian image
We're deprecating the old image name format, see https://gitlab.torproject.org/tpo/tpa/base-images/-/issues/14
2024-09-23 18:10:39 +00:00
Renovate Bot
61771d80c2 chore(deps): update module github.com/xtaci/smux to v1.5.30 2024-09-23 16:57:56 +00:00
Renovate Bot
d0c52757aa
chore(deps): update module golang.org/x/crypto to v0.27.0 2024-09-23 12:32:33 -04:00
Renovate Bot
60c89648aa
chore(deps): update module github.com/aws/aws-sdk-go-v2/credentials to v1.17.34 2024-09-23 12:20:36 -04:00
Renovate Bot
43b91c79c6
chore(deps): update module github.com/prometheus/client_golang to v1.20.4 2024-09-23 12:19:07 -04:00
Cecylia Bocovich
4497d68d6f
Move time.Sleep call in turbotunnel test
An update to the kcp-go library removes the guarantee that all data
written to a KCP connection will be flushed before the connection is
closed. Moving the sleep call has no impact on the integrity of the
tests, and gives the connection time to flush data before the connection
is closed.

See https://github.com/xtaci/kcp-go/issues/273
2024-09-23 10:08:18 -04:00
Renovate Bot
99521fb134 chore(deps): update module github.com/xtaci/kcp-go/v5 to v5.6.17 2024-09-23 12:49:18 +00:00
Renovate Bot
721c028d73
chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.35.0 2024-09-23 13:21:05 +01:00
David Fifield
de61d7bb8d Document relayURL return in SignalingServer.pollOffer.
The second return value was added in
863a8296e8.
2024-09-21 18:28:17 +00:00
WofWca
0f0f118827 improvement(proxy): don't panic on invalid relayURL
Though prior to this change the panic could only happen
if the default relayURL set by the proxy was invalid,
since `datachannelHandler` is only called after a successful
`checkIsRelayURLAcceptable()`, which ensures that the URL _is_ valid.
And in the case of an invalid default relay URL, a warning is
already printed.
2024-09-21 18:20:31 +00:00
WofWca
71828580bb fix(broker): empty pattern if bridge-list is empty
i.e. if no bridge list file is provided, the relay pattern
would not get set.

AFAIK this is not a breaking change because the broker
can't be used as a library, unlike client and server.
2024-09-21 15:11:37 +00:00
David Fifield
f752d2ab0c Spell out EphemeralMinPort and EphemeralMaxPort in comment.
For searching purposes.
2024-09-21 14:30:59 +00:00
WofWca
daff4d8913 refactor(proxy): add comment about packet size 2024-09-19 19:14:04 +00:00
Shelikhoo
bcac2250ec
update mobile CI test's golang version to 1.23 2024-09-12 11:10:13 +01:00
meskio
9d2c513e6b
Merge remote-tracking branch 'gitlab/mr/394' 2024-09-09 18:22:27 +02:00
meskio
f046361e4a
Merge remote-tracking branch 'gitlab/mr/393' 2024-09-09 18:22:21 +02:00
Renovate Bot
1d951e3708 chore(deps): update module github.com/prometheus/client_golang to v1.20.3 2024-09-09 15:28:00 +00:00
Renovate Bot
0323ccba49
chore(deps): update module github.com/xtaci/smux to v1.5.29 2024-09-09 15:53:08 +01:00
WofWca
55c4c90a3a
fix(probetest): NAT check timing out sometimes
if ICE gathering on the probetest server is taking long
to complete.

Related: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40230
2024-09-09 15:26:59 +01:00
WofWca
2d13e2a5d1
fix(probetest): maybe resource leak
...on failed requests: WebRTC connection wouldn't get
closed in such cases.
2024-09-09 15:26:58 +01:00
WofWca
51edbbfd26
fix(proxy): maybe memory leak on failed NAT check
Maybe related: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40243
2024-09-09 15:26:58 +01:00
WofWca
f44aa279fe
refactor(proxy): improve NAT check logging
4ed5da7f2f introduced `OnError` but it did not print
failed periodic NAT type check errors - the error was simply
ignored.
2024-09-09 15:26:55 +01:00
WofWca
7f9fea5797
fix(proxy): send answer even if ICE gathering is not complete
Otherwise the connection is guaranteed to fail, even though
we actually might have gathered enough to make a successful
connection.

Closes https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40230

This is the standalone proxy counterpart of https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/merge_requests/55
2024-09-09 11:58:28 +01:00
WofWca
78f4b9dbc5 test(client): add test for BrokerChannel 2024-09-08 14:50:08 +04:00
WofWca
2bbd4d0643
refactor(proxy): better RelayURL description
The relay URL has simply been the "default" since
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/87
Now the broker specifies the relay URL (see `ProxyPollResponse`).
2024-09-05 13:04:42 +01:00
WofWca
ec9476e5ab
Better error msg on bad fingerprint 2024-09-04 10:47:08 -04:00
meskio
f701641382
Merge remote-tracking branch 'gitlab/mr/383' 2024-09-04 13:16:17 +02:00
Renovate Bot
f058a3daf5 chore(deps): update module github.com/aws/aws-sdk-go-v2 to v1.30.5 2024-09-03 18:54:49 +00:00
WofWca
94c6089cdd
hardening(proxy): don't proxy private IP addresses
...by default.

This is useful when `RelayDomainNamePattern` is lax (e.g. just "$")
(which is not the case by default, so this is simply
a hardening measure).
2024-09-02 14:59:26 +01:00
WofWca
399bda5257
refactor(proxy): tidy up isRelayURLAcceptable
Add clearer error messages
2024-09-02 14:59:26 +01:00
WofWca
0f2bdffba0
hardening(proxy): only accept ws & wss relays 2024-09-02 14:59:26 +01:00
WofWca
14f4c82ff7
test(proxy): add tests for relayURL check 2024-09-02 14:59:23 +01:00
meskio
978a55b7c4
Merge remote-tracking branch 'gitlab/mr/374' 2024-09-02 13:03:08 +02:00
Renovate Bot
9f832f8bb2 chore(deps): update module github.com/prometheus/client_golang to v1.20.2 2024-08-27 14:00:16 +00:00
Renovate Bot
97e21e3a29
chore(deps): update module github.com/pion/stun to v3 2024-08-27 09:43:08 -04:00
Renovate Bot
0a942f8ecb
chore(deps): update module github.com/aws/aws-sdk-go-v2/credentials to v1.17.30 2024-08-27 09:20:43 -04:00
Renovate Bot
37c1e2c244
chore(deps): update module golang.org/x/sys to v0.24.0 2024-08-27 09:19:09 -04:00
WofWca
f4db64612c
feat: expose pollInterval in CLI
Closes https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40373
2024-08-22 09:31:37 -04:00
Renovate Bot
8f429666a8
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.28 2024-08-22 12:56:14 +01:00
meskio
2ebfcf6e20
Merge remote-tracking branch 'gitlab/mr/367' 2024-08-22 13:36:46 +02:00
meskio
0804d8651f
Merge remote-tracking branch 'gitlab/mr/362' 2024-08-22 13:35:53 +02:00
Renovate Bot
5fb1290fd0 chore(deps): update module github.com/prometheus/client_golang to v1.20.1 2024-08-22 11:10:40 +00:00
Renovate Bot
44a962316c
chore(deps): update module github.com/miekg/dns to v1.1.62 2024-08-22 11:18:02 +01:00
Renovate Bot
450c309653
chore(deps): update module golang.org/x/net to v0.28.0 2024-08-22 11:00:07 +01:00
meskio
240dd3af3c
Merge remote-tracking branch 'gitlab/mr/365' 2024-08-22 11:46:35 +02:00
Renovate Bot
f6320e42f0 chore(deps): update docker.io/library/golang docker tag to v1.23 2024-08-22 05:12:30 +00:00
Renovate Bot
937860b1bb
chore(deps): update module golang.org/x/crypto to v0.26.0 2024-08-22 05:59:24 +01:00
David Fifield
bb2126b7c6
Use %w, not %v, in fmt.Errorf, so errors can be unwrapped.
https://go.dev/blog/go1.13-errors#wrapping-errors-with-w
2024-08-21 17:00:18 -04:00
WofWca
062411143c
docs: fix example server library usage
`Listen` now accepts `numKCPInstances`
2024-08-21 16:23:12 -04:00
WofWca
677146c9d5 add test_bridgeList.txt file
As an example for the `bridge-list-path` parameter
2024-08-21 20:50:59 +04:00
obble
a6d4570c23
Fix log message in CopyLoop 2024-08-21 16:06:41 +01:00
obble
1d6a2580c6 Improving Snowflake Proxy Performance by Adjusting Copy Buffer Size
TL;DR: The current implementation uses a 32K buffer size for a total of 64K of
buffers/connection, but each read/write is less than 2K according to my measurements.

# Background

The Snowflake proxy uses a particularly hot function, `copyLoop`
(proxy/lib/snowflake.go), to proxy data from a Tor relay to a connected client.
This is currently done using the `io.Copy` function to write all incoming data
both ways.

Looking at the `io.Copy` implementation, it internally uses `io.CopyBuffer`,
which in turn defaults to a buffer of size 32K for copying data (I checked and
the current implementation uses 32K every time).

Since `snowflake-proxy` is intended to be run in a very distributed manner, on
as many machines as possible, minimizing the CPU and memory footprint of each
proxied connection would be ideal, as well as maximising throughput for
clients.

# Hypothesis

There might exist a buffer size `X` that is more suitable for usage in `copyLoop` than 32K.

# Testing

## Using tcpdump

Assuming you use `-ephemeral-ports-range 50000:51000` for `snowflake-proxy`,
you can capture the UDP packets being proxied using

```sh
sudo tcpdump  -i <interface> udp portrange 50000-51000
```

which will provide a `length` value for each packet captured. One good starting
value for `X` could then be slightly larger than the largest captured packet,
assuming one packet is copied at a time.

Experimentally I found this value to be 1265 bytes, which would make `X = 2K` a
possible starting point.

## Printing actual read

The following snippet was added in `proxy/lib/snowflake.go`:

```go
// Taken straight from the standard library's io.copyBuffer
func copyBuffer(dst io.Writer, src io.Reader, buf []byte) (written int64, err error) {
	// If the reader has a WriteTo method, use it to do the copy.
	// Avoids an allocation and a copy.
	if wt, ok := src.(io.WriterTo); ok {
		return wt.WriteTo(dst)
	}
	// Similarly, if the writer has a ReadFrom method, use it to do the copy.
	if rt, ok := dst.(io.ReaderFrom); ok {
		return rt.ReadFrom(src)
	}
	if buf == nil {
		size := 32 * 1024
		if l, ok := src.(*io.LimitedReader); ok && int64(size) > l.N {
			if l.N < 1 {
				size = 1
			} else {
				size = int(l.N)
			}
		}
		buf = make([]byte, size)
	}
	for {
		nr, er := src.Read(buf)
		if nr > 0 {
			log.Printf("Read %d", nr) // THIS IS THE ONLY DIFFERENCE FROM io.CopyBuffer
			nw, ew := dst.Write(buf[0:nr])
			if nw < 0 || nr < nw {
				nw = 0
				if ew == nil {
					ew = errors.New("invalid write result")
				}
			}
			written += int64(nw)
			if ew != nil {
				err = ew
				break
			}
			if nr != nw {
				err = io.ErrShortWrite
				break
			}
		}
		if er != nil {
			if er != io.EOF {
				err = er
			}
			break
		}
	}
	return written, err
}
```

and `copyLoop` was amended to use this instead of `io.Copy`.

The `Read <bytes>` log lines were saved to a file using this command

```sh
./proxy -verbose -ephemeral-ports-range 50000:50010 2>&1 >/dev/null  | awk '/Read / { print $4 }' | tee read_sizes.txt
```

I got the result:

min: 8
max: 1402
median: 1402
average: 910.305

Suggested buffer size: 2K
Current buffer size: 32768 (32K, experimentally verified)

## Using a Snowflake proxy in Tor Browser with Wireshark

I also used Wireshark, and concluded that all packets sent were < 2K.

# Conclusion

As per the commit I suggest changing the buffer size to 2K. Some things I have not been able to answer:

1. Does this make a big impact on performance?
1. Are there any unforeseen consequences? What happens if a packet is > 2K? (I
	 think the Go standard library just splits the packet, but someone please confirm.)
2024-08-21 15:02:15 +00:00
meskio
d25b8306ea
Merge remote-tracking branch 'gitlab/mr/364' 2024-08-21 13:16:02 +02:00
Renovate Bot
5b4caa23e1 chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.34.4 2024-08-21 10:30:24 +00:00
Renovate Bot
b70c060080
chore(deps): update module github.com/aws/aws-sdk-go-v2/credentials to v1.17.28 2024-08-21 11:06:51 +01:00
WofWca
103278d6fa
docs(broker): clarify allowed-relay-pattern
Specify that the broker will reject proxies
whose AcceptedRelayPattern is more restrictive than this,
and not less restrictive.

The parameter was introduced here
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/87
> The proxy sends its allowed URL pattern to the broker.
> The broker rejects proxies that are too restrictive.
2024-08-20 12:43:31 +01:00
meskio
6d2011ded7
Report a different implementation for client and server 2024-08-07 12:33:37 +02:00
Renovate Bot
92f21539f2 chore(deps): update module github.com/pion/webrtc/v3 to v3.2.50 2024-08-02 03:44:34 +00:00
David Fifield
f25b293fb5 Comment typo. 2024-08-02 03:36:37 +00:00
David Fifield
ee5f815f60 Cosmetic changes from dev-snowflake-udp-rebase-extradata.
https://gitlab.torproject.org/shelikhoo/snowflake/-/tree/dev-snowflake-udp-rebase-extradata
commit 59b76dc68d2ee0383c2acd91cb0f44edc46af939
2024-08-01 22:12:56 +00:00
meskio
a93b4859c7
Merge remote-tracking branch 'gitlab/mr/354' 2024-08-01 17:47:19 +02:00
Renovate Bot
21fef74c52 chore(deps): update module github.com/xtaci/smux to v1.5.27 2024-08-01 14:42:28 +00:00
Renovate Bot
8f93d08d71
chore(deps): update module github.com/refraction-networking/utls to v1.6.7 2024-08-01 15:08:32 +01:00
Renovate Bot
308e1816f2
chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.34.3 2024-08-01 12:29:42 +01:00
meskio
f64f234eeb
New ptutil/safeprom doesn't have Rounded in the type names
This version fixes the test issue of double registering metrics.

* Closes: #40367
2024-07-11 17:45:57 +02:00
meskio
9e977fe6ca
Report the version of snowflake to the Tor process 2024-07-11 13:39:56 +02:00
Arlo Breault
ffdda1358a
Indicate modified in version string
issue 40365
2024-07-11 11:46:57 +01:00
meskio
e2ba4d3539
Merge remote-tracking branches 'gitlab/mr/342', 'gitlab/mr/344' and 'gitlab/mr/345' 2024-07-08 08:37:04 +02:00
Renovate Bot
c21ed7d90f chore(deps): update module github.com/pion/webrtc/v3 to v3.2.44 2024-07-02 15:11:12 +00:00
Renovate Bot
cf1023303a chore(deps): update module github.com/aws/aws-sdk-go-v2 to v1.30.1 2024-06-29 22:09:27 +00:00
Renovate Bot
4b37dd3a19 chore(deps): update gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil digest to e8254c0 2024-06-29 22:09:23 +00:00
Renovate Bot
d94783223d
chore(deps): update module github.com/pion/webrtc/v3 to v3.2.43
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-06-29 17:35:19 -04:00
Cecylia Bocovich
3c0a006369
Revert "chore(deps): update gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil digest to e8254c0"
This reverts commit bd04c0f307.
2024-06-29 17:34:28 -04:00
Renovate Bot
bd04c0f307
chore(deps): update gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil digest to e8254c0
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-06-28 13:47:21 -04:00
meskio
5f0c0c965b
Merge remote-tracking branch 'gitlab/mr/341' 2024-06-27 10:17:44 +02:00
Renovate Bot
c221f70b7a chore(deps): update module github.com/aws/aws-sdk-go-v2/credentials to v1.17.22 2024-06-26 19:10:59 +00:00
Renovate Bot
843d9a9c36
chore(deps): update module github.com/pion/transport/v2 to v2.2.5 2024-06-24 12:25:04 +01:00
meskio
455f9d6eda
Merge remote-tracking branch 'gitlab/mr/335' 2024-06-20 09:31:39 +02:00
Renovate Bot
e821930c43 chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.21 2024-06-19 19:17:18 +00:00
meskio
b8f130e210
Merge remote-tracking branch 'gitlab/mr/332' 2024-06-19 09:47:30 +02:00
Renovate Bot
618b19a0ab chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.20 2024-06-18 19:16:39 +00:00
Renovate Bot
e73c6f3d71
chore(deps): update module github.com/gorilla/websocket to v1.5.3
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-06-17 20:35:22 -04:00
Renovate Bot
b40137f1fe
chore(deps): update gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil digest to 6a4a471
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-06-17 19:18:11 -04:00
Renovate Bot
e5f4e9d455
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.19
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-06-17 19:15:49 -04:00
meskio
b83ef3f385
Merge remote-tracking branch 'gitlab/mr/327' 2024-06-12 10:47:04 +02:00
Renovate Bot
f5d4aabd7b chore(deps): update module github.com/pion/webrtc/v3 to v3.2.42 2024-06-11 18:16:22 +00:00
meskio
985bf9ee1c
Merge remote-tracking branches 'gitlab/mr/318' and 'gitlab/mr/326' 2024-06-11 08:58:50 +02:00
Renovate Bot
e84bddb296 chore(deps): update module golang.org/x/sys to v0.21.0 2024-06-10 16:10:34 +00:00
Renovate Bot
7306b3a29d chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.32.6 2024-06-10 12:54:10 +00:00
itchyonion
4ed5da7f2f
Simplify proxy NAT checking logic 2024-05-28 12:30:44 -07:00
Renovate Bot
54495ceb4e chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.13 2024-05-13 11:09:20 +00:00
Renovate Bot
150b2fe3ed
chore(deps): update module github.com/prometheus/client_golang to v1.19.1 2024-05-13 11:12:45 +01:00
meskio
a9df5dd71a
Use ptutil for safelog and prometheus rounded metrics
* Related: #40354
2024-05-09 16:24:33 +02:00
Renovate Bot
7bd3e31d7e
chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.32.0 2024-05-09 13:17:07 +01:00
Renovate Bot
96a02f80a6
chore(deps): update module github.com/refraction-networking/utls to v1.6.6
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-05-07 18:11:14 -04:00
Renovate Bot
1a8c31994c
chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.31.4
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-05-07 18:07:09 -04:00
Renovate Bot
2eb4686cc7
chore(deps): update module github.com/pion/webrtc/v3 to v3.2.40
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-05-07 18:05:10 -04:00
Renovate Bot
5ffe9fbe83 chore(deps): update module golang.org/x/net to v0.25.0 2024-05-06 17:17:51 +00:00
Renovate Bot
22a945971d chore(deps): update module google.golang.org/protobuf to v1.34.0 2024-04-30 08:11:33 +00:00
Shelikhoo
18f3ac734c
rename stable container tags to latest 2024-04-25 10:02:37 +01:00
Shelikhoo
d40995035f
remove apt install lbzip2 to avoid broken dependencies 2024-04-24 11:33:41 +01:00
Renovate Bot
7e94ef53e9 chore(deps): update module golang.org/x/net to v0.24.0 2024-04-19 15:42:41 +00:00
Renovate Bot
6c38c605bd chore(deps): update module github.com/miekg/dns to v1.1.59 2024-04-17 20:11:37 +00:00
Renovate Bot
47bf72ca86
chore(deps): update module github.com/refraction-networking/utls to v1.6.4
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-04-15 16:52:21 -04:00
Renovate Bot
abf45d3fd5
chore(deps): update module golang.org/x/net to v0.23.0 [security]
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-04-15 16:51:21 -04:00
Renovate Bot
adffd43ceb
chore(deps): update module github.com/pion/sdp/v3 to v3.0.9
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-04-15 16:49:35 -04:00
Renovate Bot
2b5fa62588
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.11
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-04-15 16:46:50 -04:00
Renovate Bot
228e757a37 chore(deps): update module golang.org/x/crypto to v0.22.0 2024-04-04 17:39:39 +00:00
meskio
01588d99db
Merge remote-tracking branches 'gitlab/mr/289' and 'gitlab/mr/293' 2024-04-04 12:27:14 +02:00
Sky
cec3c2df21 Update README.md to include all available CLI options 2024-04-04 08:21:56 +00:00
Sky
d439f89536 Allow to set listen address for metrics service via cl flags 2024-04-04 06:28:33 +00:00
Renovate Bot
debd473977 chore(deps): update module github.com/prometheus/client_model to v0.6.1 2024-04-03 14:14:06 +00:00
Renovate Bot
9997b4ac24 chore(deps): update module github.com/aws/aws-sdk-go-v2 to v1.26.1 2024-03-29 19:17:08 +00:00
Micah Anderson
095e9727ed CI: Remove echo in container stage.
This was here for debugging and is no longer necessary.

It also resulted in the following command being run:

$ echo "Building Docker image with tag: $TAG" /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --destination "${CI_REGISTRY_IMAGE}:${TAG}_${ARCH}"

which does not produce the image properly.
2024-03-25 19:23:05 +00:00
Micah Anderson
1a620dd21b CI: make tag-container-release job depend on previous stages 2024-03-25 19:23:05 +00:00
Cecylia Bocovich
96422e0db3
Update torrc file to match Tor Browser builtins
We switched to a CDN77, a cloud provider that supports domain fronting.
2024-03-24 12:41:23 -04:00
David Fifield
1bde730b39 Comment typo. 2024-03-22 00:43:58 +00:00
Renovate Bot
ec36fd4287
chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.31.3 2024-03-20 17:28:00 +00:00
Renovate Bot
27e76279d4
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.8 2024-03-20 17:05:21 +00:00
Renovate Bot
a1d3d28ff3
chore(deps): update module github.com/pion/ice/v2 to v2.3.14 2024-03-19 15:40:51 +00:00
Renovate Bot
f681b1c556 chore(deps): update module github.com/aws/aws-sdk-go-v2/credentials to v1.17.8 2024-03-18 20:47:07 +00:00
Cecylia Bocovich
05a95802c1
Bump version to v2.9.2 2024-03-18 14:47:44 -04:00
Micah Anderson
eef46b9512 CI: tag containers in a meaningful way (Fixes #40345).
If there was a push to `main`, build a container with the tag `latest`. If there
was a tag pushed, then build a container with the container tag set to the git
tag, additionally setting a `stable` tag that matches.

Because the process creates a number of temporary intermediary containers before
they are merged into one with the `merge-manifests` job (`$tag_amd64`,
`$tag_arm64`, `$tag_s390x`, `latest_amd64`, `latest_arm64`, `latest_s390x`)
which are only useful for the `merge-manifests` job, we clean these up in the
`clean_image_tags` job using the gitlab API
2024-03-18 18:39:58 +00:00
Renovate Bot
7b74b9e01a
chore(deps): update module golang.org/x/net to v0.22.0
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-03-18 14:36:47 -04:00
Renovate Bot
712f2667eb
chore(deps): update module github.com/xtaci/kcp-go/v5 to v5.6.8
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-03-18 14:34:14 -04:00
Renovate Bot
b05f059ce4
chore(deps): update module github.com/prometheus/client_golang to v1.19.0
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-03-18 14:31:53 -04:00
Renovate Bot
e7dfbebf56
chore(deps): update module github.com/prometheus/client_model to v0.6.0 2024-03-12 11:58:27 +00:00
meskio
f502eca67d
Merge remote-tracking branch 'origin/mr/258' 2024-03-12 08:28:53 -03:00
meskio
d657098340
Merge remote-tracking branch 'origin/mr/264' 2024-03-12 08:26:04 -03:00
Renovate Bot
52fcd3d58a chore(deps): update docker.io/library/golang docker tag to v1.22 2024-03-12 09:29:03 +00:00
Renovate Bot
d1175dac82
chore(deps): update golang docker tag to v1.22 2024-03-12 09:00:37 +00:00
Renovate Bot
2b11f56950
chore(deps): update module github.com/pion/webrtc/v3 to v3.2.29 2024-03-11 20:03:53 +00:00
Michael Pu
8968535c56 Update doc with new lines in metrics output 2024-03-09 13:36:26 -05:00
Michael Pu
b512e242e8 Implement better client IP per rendezvous method tracking for clients

Add tests for added code, fix existing tests

chore(deps): update module github.com/miekg/dns to v1.1.58

Implement better client IP tracking for http and ampcache

Add tests for added code, fix existing tests

Implement GetCandidateAddrs from SDP

Add getting client IP for SQS

Bug fixes

Bug fix for tests
2024-03-09 13:36:25 -05:00
Michael Pu
91b8da423b update docs 2024-03-09 13:35:16 -05:00
Michael Pu
9fe2ca58a0 Switch to sqscreds param for passing in SQS credentials 2024-03-09 13:35:16 -05:00
Cecylia Bocovich
fe56eaddf4
Fix grep command to check output of shadow tests 2024-03-08 13:24:20 -05:00
Renovate Bot
b42966a652
chore(deps): update module github.com/aws/aws-sdk-go-v2/service/sqs to v1.31.2
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-03-08 13:11:49 -05:00
Renovate Bot
1c51e432ae
chore(deps): update module golang.org/x/crypto to v0.21.0
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-03-08 11:45:59 -05:00
Renovate Bot
c4beb91a6c
chore(deps): update module github.com/refraction-networking/utls to v1.6.3
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-03-08 11:43:02 -05:00
Renovate Bot
22bca0fb6b
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.7
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-03-08 11:40:27 -05:00
Cecylia Bocovich
0c8efb4e2b
Only run shadow tests on compatible runners 2024-03-07 17:51:16 -05:00
Renovate Bot
5093c8886b chore(deps): update module google.golang.org/protobuf to v1.33.0 [security] 2024-03-06 05:11:40 +00:00
Michael Pu
0777f0191e
update docs
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-03-05 12:38:50 -05:00
Michael Pu
1e1f827248
Update tests 2024-03-05 12:38:33 -05:00
Michael Pu
9cd362f42d
Move SQS client ID generation to Exchange 2024-03-05 12:38:33 -05:00
Micah Anderson
c4c22fa2a0 Build multi-arch image.
This will build only those architectures that we have runners to build on
2024-03-03 14:07:33 +00:00
Micah Anderson
9b689a105e Build multi-arch image.
This will build only those architectures that we have runners to build on
2024-03-03 14:07:33 +00:00
Micah Anderson
913732356a Build multi-arch image.
This will build only those architectures that we have runners to build on
2024-03-03 14:07:33 +00:00
Micah Anderson
0e593edc9a Build multi-arch image.
This will build only those architectures that we have runners to build on
2024-03-03 14:07:33 +00:00
Micah Anderson
5ee90a78b4 Build multi-arch image.
This will build only those architectures that we have runners to build on
2024-03-03 14:07:33 +00:00
Micah Anderson
9175e86321 Automatically build container on release and push to our registry.
Now that Tor's gitlab has the container registry enabled, we can build a
snowflake container on release, and push the built container to the snowflake
registry.

This is accomplished without using privileged gitlab runners, via kaniko.

This would speed up snowflake updates for people running the docker
container. It would also mean that the 'docker-snowflake-proxy' project would no
longer need to exist.

Fixes docker-snowflake-proxy#10
Fixes docker-snowflake-proxy#13
2024-03-03 14:07:33 +00:00
Cecylia Bocovich
7b47a7d94b
Use known working version of shadow 2024-02-27 13:41:43 -05:00
Cecylia Bocovich
810f1fcc00
Use golang:1.21 container for shadow experiments 2024-02-27 13:41:43 -05:00
Cecylia Bocovich
2c16ef83cb
Patch snowflake server in shadow experiment
Prevent an unsupported syscall in shadow from causing the snowflake
server to fail.
2024-02-27 13:41:43 -05:00
Cecylia Bocovich
f95babc1e1
Export shadow logs as an artifact for debugging 2024-02-27 13:41:43 -05:00
Cecylia Bocovich
b3b03d1a56
Add integration testing with shadow
This change uses the Shadow network simulator[0] to run a minimal snowflake
network and pass data between a client and a server.

[0] https://shadow.github.io/
2024-02-27 13:41:43 -05:00
Cecylia Bocovich
b130151b24
Bump version to v2.9.1 2024-02-27 11:32:09 -05:00
Renovate Bot
0c3d92c646
chore(deps): update module github.com/miekg/dns to v1.1.58 2024-02-21 14:58:56 +00:00
Renovate Bot
533caaf47a
chore(deps): update module golang.org/x/net to v0.21.0 2024-02-20 14:59:50 +00:00
Renovate Bot
95e677c911
chore(deps): update module golang.org/x/crypto to v0.19.0 2024-02-20 14:19:29 +00:00
Renovate Bot
f52785e807
chore(deps): update module github.com/refraction-networking/utls to v1.6.2 2024-02-19 14:08:47 +00:00
meskio
bbd8b3af75
Merge remote-tracking branch 'gitlab/mr/253' 2024-02-19 09:55:49 +01:00
am3o
acce1f1fd9
refactor: change deprecated "io/ioutil" package to recommended "io" package 2024-02-17 12:47:22 +01:00
Renovate Bot
35984c0876 chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.27.0 2024-02-13 20:16:38 +00:00
Renovate Bot
4c67e5103d
chore(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.26.6
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-02-12 16:59:39 -05:00
Renovate Bot
49c4f7dc19
chore(deps): update module github.com/pion/ice/v2 to v2.3.13
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-02-12 16:55:34 -05:00
Anna “CyberTailor”
d411842a9d
chore(ci): use golang:1.21 in generate_tarball job
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-02-12 16:48:12 -05:00
Cecylia Bocovich
38352b22ad
Bump version to v2.9.0 2024-02-05 12:00:05 -05:00
Michael Pu
5f5cbe6431
Prune metrics that are reported for rendezvous
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-01-31 14:34:32 -05:00
Anthony Chang
dbecefa7d2
Move RendezvousMethod field to messages.Arg 2024-01-31 14:34:29 -05:00
Michael Pu
26ceb6e20d
Add metrics for tracking rendezvous method
Update tests for metrics

Add rendezvous_method to Prometheus metrics

Update broker spec docs with rendezvous method metrics

Bug fix
2024-01-31 14:34:29 -05:00
Michael Pu
b8df42a377
Fix nil ptr deference when listing client queues
Signed-off-by: Cecylia Bocovich <cohosh@torproject.org>
2024-01-31 12:50:50 -05:00
Andrew Wang
9b90b77d69
Add unit tests for SQS rendezvous in broker
Co-authored-by: Michael Pu <michael.pu@uwaterloo.ca>
2024-01-22 13:11:03 -05:00
Anthony Chang
32e864b71d
Add unit tests for SQS rendezvous in client
Co-authored-by: Michael Pu <michael.pu@uwaterloo.ca>
2024-01-22 13:11:03 -05:00
Anthony Chang
f3b062ddb2
Add mocks and interfaces for testing SQS rendezvous
Co-authored-by: Michael Pu <michael.pu@uwaterloo.ca>
2024-01-22 13:10:56 -05:00
Michael Pu
8fb17de152
Implement SQS rendezvous in client and broker
This feature adds an additional rendezvous method to send client offers
and receive proxy answers through the use of Amazon SQS queues.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/26151
2024-01-22 13:06:42 -05:00
David Fifield
d0529141ac Cosmetic fixes taken from !219.
shelikhoo/dev-udp-performance-rebased branch
https://gitlab.torproject.org/shelikhoo/snowflake/-/commits/9dce28cfc2093490473432ffecd9abaab7ebdbdb
2024-01-16 18:43:58 +00:00
Cecylia Bocovich
f7a468e31b
Add probetest commandline option for STUN URL 2024-01-10 11:37:24 -05:00
Cecylia Bocovich
fe2f7de9a8
Use SetNet setting in probetest to ignore net.Interfaces error
Needed to get probetest running in shadow. Applies the fix from
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40278
to the probetest server.
2024-01-10 11:06:39 -05:00
Cecylia Bocovich
3939554058
Add proxy commandline option for probe server URL 2024-01-10 11:05:56 -05:00
Renovate Bot
54a47287ee
chore(deps): update module github.com/xtaci/kcp-go/v5 to v5.6.7 2024-01-08 10:13:07 -05:00
Renovate Bot
591be5205a
chore(deps): update module google.golang.org/protobuf to v1.32.0 2024-01-08 10:12:26 -05:00
Renovate Bot
48af2b2138
chore(deps): update module github.com/prometheus/client_golang to v1.18.0 2024-01-08 10:11:14 -05:00
Renovate Bot
c98f50f5a7
chore(deps): update module golang.org/x/sys to v0.16.0 2024-01-08 10:09:53 -05:00
Arlo Breault
e4c818be76
Scrub space separated ip addresses
The issue with ReplaceAllFunc is that it's capturing the leading and
trailing spaces in the regexp, so successive ips don't match.  From the
docstring,

> If 'All' is present, the routine matches successive non-overlapping
> matches of the entire expression.

For #40306
2024-01-08 10:03:35 -05:00
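The non-overlapping-match pitfall described in the commit above can be reproduced with a small sketch. The patterns here are simplified stand-ins for the actual safelog regexes: when the pattern consumes the surrounding spaces, the space between two adjacent addresses is eaten by the first match, so the second address never matches.

```go
package main

import (
	"fmt"
	"regexp"
)

// withSpaces consumes the surrounding whitespace. ReplaceAllString finds
// successive non-overlapping matches, so the space between two addresses
// is consumed by the first match and the second address is skipped.
var withSpaces = regexp.MustCompile(`\s\d{1,3}(\.\d{1,3}){3}\s`)

// bare matches only the address itself, so every occurrence is scrubbed.
var bare = regexp.MustCompile(`\d{1,3}(\.\d{1,3}){3}`)

func main() {
	line := "conns: 1.2.3.4 5.6.7.8 end"
	fmt.Println(withSpaces.ReplaceAllString(line, " [scrubbed] ")) // second IP survives
	fmt.Println(bare.ReplaceAllString(line, "[scrubbed]"))        // both scrubbed
}
```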
Arlo Breault
98db63ad01 Update recommended torrc options in the client readme
For #40294
2024-01-04 17:36:22 +00:00
Arlo Breault
0d8261c46e Add vcs revision to version string
For #40285
2024-01-04 00:31:08 -05:00
Cecylia Bocovich
a0e3e871c4
Bump version to v2.8.1 2023-12-21 15:54:54 -05:00
Cecylia Bocovich
9f330caa08
Suppress logs of EventOnProxyConnectionOver 2023-12-21 10:39:48 -05:00
meskio
91ffc333d6
Merge remote-tracking branch 'gitlab/mr/224' 2023-12-20 12:57:41 +01:00
Renovate Bot
0995b1dd7a chore(deps): update module golang.org/x/crypto to v0.17.0 [security] 2023-12-19 05:13:10 +00:00
Renovate Bot
04266abb33 chore(deps): update module github.com/refraction-networking/utls to v1.6.0 2023-12-18 13:11:22 +00:00
n8fr8
36a8eb487f
Add Ignore Android Restriction Workaround for Proxy 2023-12-18 12:58:48 +00:00
Renovate Bot
cd0167fe65
chore(deps): update module github.com/pion/webrtc/v3 to v3.2.24 2023-12-14 17:08:54 -05:00
Renovate Bot
f9c333995d chore(deps): update module golang.org/x/net to v0.19.0 2023-11-30 15:41:21 +00:00
Renovate Bot
6796319341
chore(deps): update module golang.org/x/crypto to v0.16.0 2023-11-30 15:02:49 +00:00
Renovate Bot
4fe86a0ec4
chore(deps): update module golang.org/x/sys to v0.15.0 2023-11-30 14:20:56 +00:00
David Fifield
aa06e7bef3 Merge branch 'encapsulation-readdata-buffer' 2023-11-21 03:46:46 +00:00
David Fifield
234d9cb11c Link a section in the pion/webrtc@3.0.0 release notes. 2023-11-21 01:27:09 +00:00
Cecylia Bocovich
a88f73b0ff
Bump version to 2.8.0 2023-11-20 11:43:07 -05:00
Renovate Bot
aca932c5f3 chore(deps): update module github.com/pion/webrtc/v3 to v3.2.23 2023-11-20 16:11:44 +00:00
Cecylia Bocovich
b3b0d3b5dd
Document that prometheus transfer metrics are in KB 2023-11-20 10:40:34 -05:00
Renovate Bot
c5da3c42e9
chore(deps): update module github.com/miekg/dns to v1.1.57 2023-11-20 12:35:01 +00:00
meskio
440f7b791e
Merge remote-tracking branch 'gitlab/mr/207' 2023-11-13 10:27:51 +01:00
Renovate Bot
8b1a48af8b chore(deps): update module golang.org/x/net to v0.18.0 2023-11-08 20:43:13 +00:00
David Fifield
d99f31d881 Have encapsulation.ReadData return an error when the buffer is short.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/154#note_2919109

Still ignoring the io.ErrShortBuffer at the callers, which retains
current behavior.
2023-11-07 05:51:35 +00:00
David Fifield
001f691b47 Have encapsulation.ReadData read into a provided buffer.
Instead of unconditionally allocating its own.
2023-11-07 05:51:35 +00:00
Renovate Bot
c1715e0928 chore(deps): update module github.com/gorilla/websocket to v1.5.1 2023-11-05 03:39:42 +00:00
Cecylia Bocovich
648609dbea
Refactor disabling the stats logger
Have Snowflake proxy periodically collect throughput stats even if the
stats logger is disabled so that it can be handled by the prometheus
metrics.
2023-10-31 13:15:52 -04:00
Cecylia Bocovich
22d9381d9d
Update prometheus metrics to use new EventOnProxyStats 2023-10-31 13:11:38 -04:00
Cecylia Bocovich
caa2b36463
Process and properly log connection closure stats 2023-10-31 10:02:31 -04:00
Cecylia Bocovich
5c5eb2c339
Modify EventOnProxyStats to include summary data 2023-10-30 12:42:45 -04:00
Cecylia Bocovich
018bbd6d65
Proxy stats log only what occurred that time interval
Modify the periodic stats output by standalone snowflake proxies to only
include the data transferred during the time interval being logged. This
is an improvement of previous behaviour that logged the total data
transferred by all proxy connections that were closed within the time
interval being logged..

Closes #40302:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40302
2023-10-30 12:42:45 -04:00
Cecylia Bocovich
354cb65432
Move creation of periodic stats task inside proxy library
This adds a new type of SnowflakeEvent. EventOnProxyStats is triggered
by the periodic task run at SummaryInterval and produces an event with a
proxy stats output string.
2023-10-30 12:42:45 -04:00
Cecylia Bocovich
83a7422fe6
Zero bytesSyncLogger stats after reading them
This also makes the call to GetStat() more thread safe.
2023-10-30 12:42:45 -04:00
Cecylia Bocovich
939062c7dd
Remove ThroughputSummary from bytesLogger
This was leftover from when we used to log the total throughput of
connections when they close. It should be removed for privacy reasons as
mentioned in
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40079
2023-10-30 12:42:45 -04:00
Cecylia Bocovich
10fb9afaa7
Check if multiple front domains argument is empty
This fixes a regression introduced in 9fdfb3d1, where the list of front
domains always contained an empty string if none were supplied via the
commandline options, causing rendezvous failures for both amp cache and
domain fronting. This fix checks to see whether the commandline option
was supplied.
2023-10-26 17:04:56 -04:00
meskio
778e3af09a
Merge remote-tracking branch 'gitlab/mr/187' 2023-10-26 18:47:01 +02:00
Renovate Bot
4fa43a8892
chore(deps): update module github.com/prometheus/client_golang to v1.17.0 2023-10-25 16:49:19 +01:00
Renovate Bot
2617d2341a
chore(deps): update module github.com/refraction-networking/utls to v1.5.4 2023-10-25 15:53:48 +01:00
Shelikhoo
5df7a06eee
Add outbound proxy configuration propagation 2023-10-24 17:47:25 +01:00
Shelikhoo
f43da1d2d2
Add transport wrapper 2023-10-24 17:43:32 +01:00
Shelikhoo
8b46e60553
Add common proxy utilities 2023-10-24 17:42:46 +01:00
meskio
6b0421db0d
Merge remote-tracking branch 'gitlab/mr/195' 2023-10-24 12:50:27 +02:00
Renovate Bot
fc7053acd5 chore(deps): update module github.com/prometheus/client_model to v0.5.0 2023-10-23 13:10:46 +00:00
Renovate Bot
ef6f8dd500
chore(deps): update module golang.org/x/net to v0.17.0 [security] 2023-10-23 14:00:09 +01:00
Renovate Bot
251a151bf5 chore(deps): update module github.com/xtaci/kcp-go/v5 to v5.6.5 2023-10-20 15:40:01 +00:00
meskio
b11a41482c
Use go 1.21 in renovate 2023-10-16 20:48:47 +02:00
Shelikhoo
bd7391d678
update version to 2.7.0 2023-10-16 15:14:51 +01:00
KokaKiwi
7142fa3ddb
fix(proxy): Correctly close connection pipe when dealing with error 2023-10-12 15:52:43 +01:00
David Fifield
6393af6bab
Remove proxy churn measurements from broker.
We've done the analysis we planned to do on these measurements.

A program to analyze the proxy churn and extract hour-by-hour
intersections is available at:
https://github.com/turfed/snowflake-paper/tree/main/figures/proxy-churn

Closes #40280.
2023-10-09 16:16:05 +01:00
WofWca
a615e8b1ab
fix(proxy): remove _potential_ deadlock
The `dc.Send()` should increase the `bufferedAmount` value,
so there is no need to add the message length a second time.

Also replace GT with GE, for the case where
`BufferedAmountLowThreshold === maxBufferedAmount`

Currently the deadlock cannot happen because `maxBufferedAmount`
and `BufferedAmountLowThreshold` are too far apart, in fact
the former is 2x the latter.

See
- https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/144#note_2902956
- https://github.com/pion/webrtc/pull/2473
- https://github.com/pion/webrtc/pull/2474
2023-10-09 15:15:45 +01:00
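The accounting fix above can be modeled in a few lines. This is a simplified model of the idea, not the pion/webrtc API: `send` itself grows the buffered amount (so the caller must not add the message length again), and the wait check uses `>=` so it still triggers when the low threshold equals the maximum.

```go
package main

import "fmt"

const (
	maxBufferedAmount          = 1024
	bufferedAmountLowThreshold = 1024 // equal on purpose: this is where >= matters
)

// channel is a toy stand-in for a WebRTC data channel.
type channel struct{ buffered int }

// send models dc.Send: it already increases the buffered amount.
func (c *channel) send(n int) { c.buffered += n }

// shouldWait uses GE, not GT: with GT, buffered == max would never
// block, and the low-threshold callback path could be skipped.
func (c *channel) shouldWait() bool {
	return c.buffered >= maxBufferedAmount
}

func main() {
	c := &channel{}
	c.send(maxBufferedAmount)
	fmt.Println(c.shouldWait())
}
```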
Cecylia Bocovich
d434549df8
Maintain backward compatibility with old clients
Introduce a new commandline and SOCKS argument for comma-separated
domain fronts rather than repurposing the old one so that we can
maintain backward compatibility with users running old versions of the
client. A new bridge line shared on circumvention settings could have
both the front= and fronts= options set.
2023-10-05 17:51:56 -04:00
Cecylia Bocovich
9fdfb3d1b5
Randomly select front domain from comma-separated list
This commit changes the command-line and Bridge line arguments to take
a comma-separated list of front domains. The change is backwards
compatible with old Bridge and ClientTransportPlugin lines. At
rendezvous time, a front domain will be randomly chosen from the list.
2023-10-05 17:51:56 -04:00
WofWca
4ff36e3f07 improvement(broker): don't reject unrestricted client if there are no restricted proxies
I.e. match it with an unrestricted proxy (if there is one).

The old behavior exists since the inception of the restricted vs
unrestricted feature, i.e. 0052c0e10c
2023-10-02 21:39:56 +04:00
Shelikhoo
5cdf52c813
Update dependencies 2023-09-27 13:15:50 +01:00
Renovate Bot
1559963f75
chore(deps): update module github.com/xtaci/kcp-go/v5 to v5.6.3 2023-09-25 15:21:28 +01:00
Shelikhoo
60e66beadc
Remove Golang 1.20 from CI Testing 2023-09-25 14:27:23 +01:00
Shelikhoo
1d069ca71d
Update CI targets to test android from golang 1.21 2023-09-20 20:05:28 +01:00
Cecylia Bocovich
3a050c6bb3
Use ShouldBeNil to check for nil values 2023-09-20 12:34:51 -04:00
Renovate Bot
e45e8e555b
chore(deps): update module github.com/smartystreets/goconvey to v1.8.1 2023-09-20 12:34:49 -04:00
Renovate Bot
f47ca18e64 chore(deps): update module gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/goptlib to v1.5.0 2023-09-19 16:06:59 +00:00
Renovate Bot
106da49c27 chore(deps): update module github.com/pion/webrtc/v3 to v3.2.20 2023-09-19 15:49:22 +00:00
Cecylia Bocovich
2844ac6a13
Update CI targets to include only Go 1.20 and 1.21
To keep up with our dependencies, we no longer support versions of Go
older than v1.20.
2023-09-19 11:42:31 -04:00
Renovate Bot
f4e1ab90c8 chore(deps): update module golang.org/x/net to v0.15.0 2023-09-19 14:09:33 +00:00
Renovate Bot
caaff7004e Update module golang.org/x/sys to v0.12.0 2023-09-12 15:44:11 +00:00
Shelikhoo
b5d702f483
update version to v2.6.1 2023-09-11 14:30:00 +01:00
Renovate Bot
a3bfc2802a
Update module golang.org/x/crypto to v0.12.0 2023-08-28 16:37:52 +01:00
Renovate Bot
e37e15ab7c
Update golang Docker tag to v1.21 2023-08-25 17:21:48 +01:00
Cecylia Bocovich
b632c7d49c
Workaround for shadow in lieu of AF_NETLINK support
For details, see https://github.com/shadow/shadow/issues/2980
2023-08-24 16:33:22 +01:00
Renovate Bot
0cb2975fd8
Update module golang.org/x/net to v0.13.0 [SECURITY] 2023-08-24 13:56:29 +01:00
meskio
f73fe6ec00
Keep the 'v' from the tag on the released .tar.gz
Gitlab doesn't support '#v' expansion for the links name and url:
https://docs.gitlab.com/ee/ci/variables/where_variables_can_be_used.html
https://docs.gitlab.com/ee/ci/variables/where_variables_can_be_used.html#gitlab-internal-variable-expansion-mechanism

The current releases include a 'snowflake-.tar.gz' that gives a 404,
because the link provided is missing the tag part. Let's keep it
simple and produce a tar.gz with the v in the name like
snowflake-v2.6.0.tar.gz

Closes: #40282
2023-08-14 08:56:56 +02:00
David Fifield
8104732114 Change DefaultRelayURL back to wss://snowflake.torproject.net/.
Fixes #40283. Compare to #31522.
2023-07-29 22:33:26 +00:00
am3o
d932cb2744
feat: add option to expose the stats by using metrics 2023-07-28 14:23:22 +01:00
meskio
af73ab7d1f
Add renovate config
Closes: #40194
2023-07-03 20:01:18 +02:00
meskio
aaeab3f415
Update dependencies
So renovate doesn't create tons of merge requests.
2023-07-03 19:52:57 +02:00
David Fifield
58c3121c6b Close temporary UDPSession in TestQueuePacketConnWriteToKCP.
With these not being closed, they were continuing to consume resources
after the return of the test function, which was affecting the later
BenchmarkSendQueue.

Before:
```
snowflake/common/turbotunnel$ go test -bench BenchmarkSendQueue -v
=== RUN   TestQueueIncomingOversize
--- PASS: TestQueueIncomingOversize (0.00s)
=== RUN   TestWriteToOversize
--- PASS: TestWriteToOversize (0.00s)
=== RUN   TestRestoreMTU
--- PASS: TestRestoreMTU (0.00s)
=== RUN   TestRestoreCap
--- PASS: TestRestoreCap (0.00s)
=== RUN   TestQueuePacketConnWriteToKCP
--- PASS: TestQueuePacketConnWriteToKCP (1.01s)
goos: linux
goarch: amd64
pkg: gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/turbotunnel
cpu: Intel(R) Core(TM) i5 CPU         680  @ 3.60GHz
BenchmarkSendQueue
BenchmarkSendQueue-4     8519708               136.0 ns/op
PASS
ok      gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/turbotunnel  3.481s
```

After:
```
snowflake/common/turbotunnel$ go test -bench BenchmarkSendQueue -v
=== RUN   TestQueueIncomingOversize
--- PASS: TestQueueIncomingOversize (0.00s)
=== RUN   TestWriteToOversize
--- PASS: TestWriteToOversize (0.00s)
=== RUN   TestRestoreMTU
--- PASS: TestRestoreMTU (0.00s)
=== RUN   TestRestoreCap
--- PASS: TestRestoreCap (0.00s)
=== RUN   TestQueuePacketConnWriteToKCP
--- PASS: TestQueuePacketConnWriteToKCP (1.02s)
goos: linux
goarch: amd64
pkg: gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/turbotunnel
cpu: Intel(R) Core(TM) i5 CPU         680  @ 3.60GHz
BenchmarkSendQueue
BenchmarkSendQueue-4    11620237               105.7 ns/op
PASS
ok      gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/turbotunnel  3.244s
```
2023-06-29 21:12:29 +00:00
David Fifield
80980a3afb Fix a comment left over from turbotunnel-quic. 2023-06-29 19:59:50 +00:00
Cecylia Bocovich
08d1c6d655
Bump minimum required version of go
The version of x/sys we're using requires go1.17 or later
2023-06-20 14:52:09 -04:00
Cecylia Bocovich
2fa8fd9188
Update version to v2.6.0 2023-06-19 12:52:25 -04:00
Vort
ea01c92cf1
Implement DataChannel flow control 2023-06-19 17:44:45 +01:00
Cecylia Bocovich
f8eb86f24d
Append Let's Encrypt ISRG Root X1 to cert pool
This is a workaround for older versions of android that do not trust
the Let's Encrypt root certificate.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40087
2023-06-14 18:12:29 -04:00
David Fifield
9edaee6547 Use IP_BIND_ADDRESS_NO_PORT when dialing the ORPort on Linux.
When the orport-srcaddr option is set, we bind to a source IP address
before dialing the ORPort/ExtORPort. tor similarly binds to a source IP
address when OutboundBindAddress is set in torrc. Since tor 0.4.7.13,
tor sets IP_BIND_ADDRESS_NO_PORT, and because problems arise when some
programs use IP_BIND_ADDRESS_NO_PORT and some do not, we also have to
start using IP_BIND_ADDRESS_NO_PORT when we upgrade tor
(tpo/anti-censorship/pluggable-transports/snowflake#40270).

Related: tpo/anti-censorship/pluggable-transports/snowflake#40198
2023-06-08 13:24:22 -06:00
itchyonion
130b63ccdd
use debian buster and bullseye as base images 2023-06-08 00:51:42 -07:00
meskio
82cc0f38f7
Move the development to gitlab
Related: tpo/anti-censorship/team#86
2023-05-31 10:01:47 +02:00
itchyonion
88608ad44a
Broker: add warning log when proxy couldn't match with client 2023-05-29 10:12:48 -07:00
itchyonion
6c431800b0
Broker: update unit tests after adding SDP validation 2023-05-29 10:12:48 -07:00
itchyonion
255cee69ed
Broker: soften non-critical log from error to warning 2023-05-29 10:12:48 -07:00
itchyonion
07b5f07452
Validate SDP offers and answers 2023-05-29 10:12:48 -07:00
David Fifield
8e5ea82611 Add a scanner error check to ClusterCounter.Count.
It was silently exiting at the "recordingStart":"2022-09-23T17:06:59.680537075Z"
line, the first line whose length (66873) exceeds
bufio.MaxScanTokenSize. Now distinctcounter exits with an error status
instead of reporting partial results.

$ ./distinctcounter -from 2023-01-01T00:00:00Z -to 2023-01-10T00:00:00Z -in metrics-ip-salted.jsonl
2023/04/20 13:54:11 unable to count:bufio.Scanner: token too long
2023-04-20 11:28:58 -04:00
meskio
f723cf52e8
Merge remote-tracking branch 'gitlab/main' 2023-04-20 16:37:52 +02:00
meskio
297ca91b1d
Use goptlib from gitlab.torproject.org 2023-04-19 17:15:35 +02:00
David Fifield
c097d5f3bc Use a sync.Pool to reuse packet buffers in QueuePacketConn.
This is meant to reduce overall allocations. See past discussion at
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40260#note_2885524 ff.
2023-04-04 20:22:32 -06:00
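The buffer-reuse pattern above can be sketched with `sync.Pool`; the names here are illustrative, not the actual QueuePacketConn API.

```go
package main

import (
	"fmt"
	"sync"
)

const mtu = 1500

// bufPool hands out fixed-size packet buffers, so steady-state packet
// handling reuses memory instead of allocating per packet.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, mtu) },
}

// handlePacket borrows a buffer, works on a copy of the packet, and
// returns the buffer to the pool when done.
func handlePacket(data []byte) int {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf)
	return copy(buf, data)
}

func main() {
	fmt.Println(handlePacket([]byte("hello")))
}
```

Note the buffer must not escape past `Put`; keeping a reference after returning it to the pool reintroduces exactly the unsynchronized-reuse bug reverted in d2858aeb7e below.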
David Fifield
97c930013b Fix loop termination in TestQueuePacketConnWriteToKCP.
The noise-generating goroutine was meant to stop when the parent
function returned and closed the `done` channel. The `break` in the loop
was wrongly exiting only from the `select`, not from the `for`.

This was the cause of benchmark anomalies in
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40260#note_2885832.
The noise-generating loop from the test was continuing to run while the
benchmarks were running.
2023-04-04 19:12:22 -06:00
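The bug above is a classic Go pitfall: inside a `select`, a bare `break` exits only the `select`, not the enclosing `for`. A minimal sketch of both forms:

```go
package main

import "fmt"

// countWithBareBreak mimics the bug: break exits only the select, so
// the loop keeps running even after done is closed.
func countWithBareBreak(done chan struct{}) int {
	n := 0
	for i := 0; i < 5; i++ {
		select {
		case <-done:
			break // exits only the select
		default:
		}
		n++
	}
	return n
}

// countWithLabeledBreak applies the fix: a labeled break leaves the loop.
func countWithLabeledBreak(done chan struct{}) int {
	n := 0
loop:
	for i := 0; i < 5; i++ {
		select {
		case <-done:
			break loop // exits the for loop
		default:
		}
		n++
	}
	return n
}

func main() {
	done := make(chan struct{})
	close(done)
	fmt.Println(countWithBareBreak(done), countWithLabeledBreak(done))
}
```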
David Fifield
6bae31f077 Use a static array in benchmarks.
Since d2858aeb7e the caller is permitted
to reuse its slice again.
2023-04-04 18:56:55 -06:00
David Fifield
590d158df8 Comment typo. 2023-04-04 18:46:35 -06:00
David Fifield
6bdd48c006 Restore ListenAndServe error return in Transport.Listen.
This error return was lost in 11f0846264;
i.e. !31.

Fixes #40043.
2023-04-03 00:18:26 -06:00
David Fifield
17829d80d5 Comment typo. 2023-03-29 09:49:24 -06:00
Shelikhoo
47dd253a37
Update CI test targets 2023-03-22 12:19:06 +00:00
KokaKiwi
1ef43a0dde
Use latest Pion WebRTC libs version
- webrtc and dtls libs got the "Skip Hello Verify" patches applied

Link: https://github.com/pion/dtls/pull/513
Link: https://github.com/pion/webrtc/pull/2433
2023-03-22 12:19:03 +00:00
itchyonion
5dd0a31d95
Add comments and improve logging 2023-03-14 12:43:00 -07:00
itchyonion
fb35e80b0a
Proxy: add outbound-address config 2023-03-14 12:42:59 -07:00
David Fifield
36d5d2dd83 Fix comment typo on NewRedialPacketConn. 2023-03-13 15:10:35 -06:00
David Fifield
ef51f2063e Merge branch '40260-revert-queuepacketconn-ownership' into 'main'
Revert "Take ownership of buffer in QueuePacketConn QueueIncoming/WriteTo"

See merge request tpo/anti-censorship/pluggable-transports/snowflake!140
2023-03-13 19:36:09 +00:00
David Fifield
d2858aeb7e Revert "Take ownership of buffer in QueuePacketConn QueueIncoming/WriteTo."
This reverts commit 839d221883. (Except for
the added benchmarks in queuepacketconn_test.go.) This change
corresponds to the issues #40187 and #40199.

The analysis in https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40199
was wrong; kcp-go does reuse the buffers it passes to
QueuePacketConn.WriteTo. This led to unsynchronized reuse of packet
buffers and mangled packets observable at the client:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40260.

Undoing the change in QueuePacketConn.QueueIncoming as well, for
symmetry, even though it is not implicated in any correctness problems.
2023-03-13 12:57:35 -06:00
David Fifield
b63d2272bf Test for data race with QueuePacketConn.WriteTo and kcp-go.
For #40260.
2023-03-13 11:42:44 -06:00
Shelikhoo
473cc45987
Add utls-imitate, utls-nosni doc to README: fix style 2023-03-13 14:13:50 +00:00
Shelikhoo
39d906b380
Add utls-imitate, utls-nosni doc to README 2023-03-10 15:25:15 +00:00
WofWca
5cc849e186
fix: up/down traffic stats being mixed up 2023-02-09 11:45:09 -08:00
itchyonion
990fcb4127
Filter out non stun: server addresses in ParseIceServers 2023-01-30 09:10:15 -08:00
itchyonion
66269c07d8
Update README to correctly reflect the type of ICE servers we currently support 2023-01-30 09:10:15 -08:00
itchyonion
a6a18c1a9b
Parse ICE servers with pion/ice library function 2023-01-30 09:10:15 -08:00
David Fifield
b443e99417 Bring client torrc up to date with Tor Browser fc89e8b1.
https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/commits/fc89e8b10c3ff30db2079b2fb327d05b2b5f3c80/projects/common/bridges_list.snowflake.txt

* Use port 80 in placeholder IP addresses
  tpo/applications/tor-browser-build!516
* Enable uTLS
  tpo/applications/tor-browser-build!540
* Shorten bridge line (remove stun.voip.blackberry.com)
  tpo/applications/tor-browser-build!558
* Add snowflake-02 bridge
  tpo/applications/tor-browser-build!571
2023-01-19 11:37:23 -07:00
Shelikhoo
7b77001eaa
Update version to v2.5.1 2023-01-18 14:37:05 +00:00
Shelikhoo
44c76ce3ad
Fix helloverify remove patch not applied 2023-01-18 14:36:18 +00:00
Shelikhoo
daa9b535c8
Update Version to v2.5.0 2023-01-18 11:27:31 +00:00
Shelikhoo
10fd000685
Apply Skip Hello Verify Migration
Backported from https://gitlab.torproject.org/shelikhoo/snowflake/-/tree/dev-skiphelloverify-backup
2023-01-17 12:47:32 +00:00
Cecylia Bocovich
4895a32fd3
Bump version to v2.4.3 2023-01-16 11:55:31 -05:00
Cecylia Bocovich
086bbb4a63
Bump version to v2.4.2 2023-01-13 13:45:17 -05:00
Cecylia Bocovich
7db2568448
Remove duplicate stun.sonetel.net entry 2023-01-03 10:32:03 -05:00
Cecylia Bocovich
8c775562c1
Remove two suggested STUN servers from client docs
Removed stun.stunprotocol.org after a discussion with the operator, and
stun.altar.com.pl after noticing it has gone offline.

https://lists.torproject.org/pipermail/anti-censorship-team/2022-December/000272.html
https://lists.torproject.org/pipermail/anti-censorship-team/2022-December/000276.html
2022-12-31 12:23:29 -05:00
Cecylia Bocovich
f6fa51d749
Switch default proxy STUN server to stun.l.google.com
This is the same default that the web-based proxies use. Proxies do not
need RFC 5780 compatible STUN servers.
2022-12-31 12:23:27 -05:00
David Fifield
936a1f8138 Add a num-turbotunnel server transport option.
Replaces the hardcoded numKCPInstances.
2022-12-14 23:02:26 -07:00
David Fifield
c6fabb212d Use multiple parallel KCP state machines in the server.
To distribute CPU load.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40200
2022-12-14 23:02:26 -07:00
itchyonion
53e381e45d
Fix server flag name 2022-12-13 09:23:34 -08:00
Flo418
11c3333856 add some more test for URL encoded IPs (safelog) 2022-12-12 19:56:59 +01:00
David Fifield
839d221883 Take ownership of buffer in QueuePacketConn QueueIncoming/WriteTo.
This design is easier to misuse, because it allows the caller to modify
the contents of the slice after queueing it, but it avoids an extra
allocation + memmove per incoming packet.

Before:
	$ go test -bench='Benchmark(QueueIncoming|WriteTo)' -benchtime=2s -benchmem
	BenchmarkQueueIncoming-4         7001494               342.4 ns/op          1024 B/op          2 allocs/op
	BenchmarkWriteTo-4               3777459               627 ns/op            1024 B/op          2 allocs/op
After:
	$ go test -bench=BenchmarkWriteTo -benchtime 2s -benchmem
	BenchmarkQueueIncoming-4        13361600               170.1 ns/op           512 B/op          1 allocs/op
	BenchmarkWriteTo-4               6702324               373 ns/op             512 B/op          1 allocs/op

Despite the benchmark results, the change in QueueIncoming turns out not
to have an effect in practice. It appears that the compiler had already
been optimizing out the allocation and copy in QueueIncoming.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40187

The WriteTo change, on the other hand, in practice reduces the frequency
of garbage collection.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40199
2022-12-08 08:03:54 -07:00
David Fifield
d4749d2c1d Reduce turbotunnel queueSize from 2048 to 512.
This is to reduce heap usage.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40179

Past discussion of queueSize:
https://lists.torproject.org/pipermail/anti-censorship-team/2021-July/000188.html
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/48#note_2744619
2022-12-08 08:03:54 -07:00
David Fifield
77b186ae6e Have SnowflakeClientConn implement io.WriterTo.
By forwarding the method to the inner smux.Stream. This is to prevent
io.Copy in the top-level proxy function from allocating a buffer per
client.

The smux.Stream WriteTo method returns io.EOF on success, contrary to
the contract of io.Copy that says it should return nil. Ignore io.EOF in
the proxy loop to avoid a log message.

/anti-censorship/pluggable-transports/snowflake/-/issues/40177
2022-12-08 08:03:54 -07:00
David Fifield
64491466ce Manually unlock the mutex in ClientMap.SendQueue.
Rather than use defer. It is only a tiny amount faster, but this
function is frequently called.

Before:
	$ go test -bench=BenchmarkSendQueue -benchtime=2s
	BenchmarkSendQueue-4    15901834               151 ns/op
After:
	$ go test -bench=BenchmarkSendQueue -benchtime=2s
	BenchmarkSendQueue-4    15859948               147 ns/op

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40177
2022-12-08 08:03:54 -07:00
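The defer-vs-manual-unlock trade-off above can be sketched with a toy map guarded by a mutex; the type and method names are illustrative, not the actual ClientMap API.

```go
package main

import (
	"fmt"
	"sync"
)

// clientMap is a toy stand-in: a mutex-guarded map of per-client queues.
type clientMap struct {
	mu     sync.Mutex
	queues map[string]chan []byte
}

// sendQueue returns (creating if needed) the queue for addr. On a hot
// path, unlocking manually rather than via defer shaves a few ns per
// call, as the benchmark in the commit shows.
func (m *clientMap) sendQueue(addr string) chan []byte {
	m.mu.Lock()
	q, ok := m.queues[addr]
	if !ok {
		q = make(chan []byte, 32)
		m.queues[addr] = q
	}
	m.mu.Unlock() // manual unlock: every return path must pass through here
	return q
}

func main() {
	m := &clientMap{queues: make(map[string]chan []byte)}
	fmt.Println(m.sendQueue("a") == m.sendQueue("a"))
}
```

The cost of dropping `defer` is fragility: any early return added later must remember to unlock, which is why this is only worth doing on frequently called functions.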
David Fifield
8e5af50bdb Increase clientIDAddrMapCapacity to 98304.
Recent increases in usage have exhausted the capacity of the map.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40173
2022-12-03 13:39:56 -07:00
Flo418
cbc50592d8
update README.md help output, fix #40232 2022-12-02 13:37:17 -08:00
Flo418
cebe4a0af6
enhance help for capacity flag, fix #40208 2022-12-02 13:37:17 -08:00
Cecylia Bocovich
7c154e5fd0
Bump version to v2.4.1 2022-12-01 11:38:22 -05:00
Shelikhoo
788e3ae956
Refactor utls roundtripper_test to deduplicate 2022-11-29 15:41:49 +00:00
Shelikhoo
d8d3e538f1
Fix uTLS RoundTripper Inconsistent Key for host:port
This commit fixes an issue described at:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40224

The bug has been fixed, and a test case describing it has been added.
2022-11-29 15:41:49 +00:00
Cecylia Bocovich
56f15a5db7
Update ChangeLog for v2.4.0 2022-11-29 09:52:09 -05:00
Cecylia Bocovich
b547d449cb
Refactor timeout loop to use a context and reuse timers 2022-11-28 17:30:05 -05:00
Cecylia Bocovich
b010de5abb
Terminate timeoutLoop when conn is closed 2022-11-28 17:11:20 -05:00
Cecylia Bocovich
5c23fcf14a
Add timeout for webRTCConn 2022-11-28 17:11:18 -05:00
Cecylia Bocovich
6007d5e08e
Refactor creation of webRTCConn in proxy 2022-11-28 17:10:49 -05:00
luciole
90d1a56719
change regexes for ipv6 addresses to catch url-encoded addresses 2022-11-28 15:56:35 -05:00
Shelikhoo
4ebd85e5d1
add version output to log 2022-11-23 12:29:55 +00:00
Shelikhoo
33248f3dec
Add Version Output Support to Snowflake
From now on, there will be a file at common/version/version.go that includes the current version number.
2022-11-23 12:29:51 +00:00
luciole
2c599f8827
change bandwidth type from int to int64 to prevent overflow 2022-11-21 10:33:21 -05:00
Cecylia Bocovich
115ba6a745
Add gofmt output to CI test before calling test -z
We use a call to test -z together with go fmt because it doesn't output
a non-zero exit status (triggering CI test failure). However, we lose
useful debugging output from the go fmt call because test -z swallows
it. This adds very verbose formatting output to the CI test.
2022-11-17 11:07:48 -05:00
David Fifield
e851861e68 Benchmark for encapsulation.ReadData. 2022-11-16 13:48:34 -07:00
David Fifield
a579c969e6 encapsulation.paddingBuffer can be statically allocated. 2022-11-16 13:48:34 -07:00
David Fifield
4ae63eccab Benchmark websocket.Conn Upgrade creation.
I had thought to set a buffer size of 2048, half the websocket package
default of 4096. But it turns out when you don't set a buffer size, the
websocket package reuses the HTTP server's read/write buffers, which
empirically already have a size of 2048.

	$ go test -bench=BenchmarkUpgradeBufferSize -benchmem -benchtime=5s
	BenchmarkUpgradeBufferSize/0-4                     25669            234566 ns/op           32604 B/op        113 allocs/op
	BenchmarkUpgradeBufferSize/128-4                   24739            238283 ns/op           24325 B/op        117 allocs/op
	BenchmarkUpgradeBufferSize/1024-4                  25352            238885 ns/op           28087 B/op        116 allocs/op
	BenchmarkUpgradeBufferSize/2048-4                  22660            234890 ns/op           32444 B/op        116 allocs/op
	BenchmarkUpgradeBufferSize/4096-4                  25668            232591 ns/op           41672 B/op        116 allocs/op
	BenchmarkUpgradeBufferSize/8192-4                  24908            240755 ns/op           59103 B/op        116 allocs/op
2022-11-16 13:48:34 -07:00
David Fifield
2321642f3c Hoist temporary buffers outside the loop.
Otherwise the buffers are re-allocated on every iteration, which is a
surprise to me. I thought the compiler would do this transformation
itself.

Now there is just one allocation per client←server read (one
messageReader) and two allocations per server←client read (one
messageReader and one messageWriter).

	$ go test -bench=BenchmarkReadWrite -benchmem -benchtime=5s
	BenchmarkReadWrite/c←s_150-4              481054             12849 ns/op          11.67 MB/s           8 B/op          1 allocs/op
	BenchmarkReadWrite/s←c_150-4              421809             14095 ns/op          10.64 MB/s          56 B/op          2 allocs/op
	BenchmarkReadWrite/c←s_3000-4             208564             28003 ns/op         107.13 MB/s          16 B/op          2 allocs/op
	BenchmarkReadWrite/s←c_3000-4             186320             30576 ns/op          98.12 MB/s         112 B/op          4 allocs/op
2022-11-16 13:48:34 -07:00
David Fifield
264425a488 Use io.CopyBuffer in websocketconn.readLoop.
This avoids io.Copy allocating a 32 KB buffer on every call.
https://cs.opensource.google/go/go/+/refs/tags/go1.19.1:src/io/io.go;l=416

	$ go test -bench=BenchmarkReadWrite -benchmem -benchtime=5s
	BenchmarkReadWrite/c←s_150-4              385740             15114 ns/op           9.92 MB/s        4104 B/op          3 allocs/op
	BenchmarkReadWrite/s←c_150-4              347070             16824 ns/op           8.92 MB/s        4152 B/op          4 allocs/op
	BenchmarkReadWrite/c←s_3000-4             190257             31581 ns/op          94.99 MB/s        8208 B/op          6 allocs/op
	BenchmarkReadWrite/s←c_3000-4             163233             34821 ns/op          86.16 MB/s        8304 B/op          8 allocs/op
2022-11-16 13:48:34 -07:00
David Fifield
3df514ae29 Call WriteMessage directly in websocketconn.Conn.Write.
In the client←server direction, this hits a fast path that avoids
allocating a messageWriter.
https://github.com/gorilla/websocket/blob/v1.5.0/conn.go#L760

Cuts the number of allocations in half in the client←server direction:

	$ go test -bench=BenchmarkReadWrite -benchmem -benchtime=5s
	BenchmarkReadWrite/c←s_150-4              597511             13358 ns/op          11.23 MB/s       33709 B/op          2 allocs/op
	BenchmarkReadWrite/s←c_150-4              474176             13756 ns/op          10.90 MB/s       34968 B/op          4 allocs/op
	BenchmarkReadWrite/c←s_3000-4             156488             36290 ns/op          82.67 MB/s       68673 B/op          5 allocs/op
	BenchmarkReadWrite/s←c_3000-4             190897             34719 ns/op          86.41 MB/s       69730 B/op          8 allocs/op
2022-11-16 13:48:34 -07:00
David Fifield
8cadcaee70 Benchmark for websocketconn.Conn read/write.
Current output:
	$ go test -bench=BenchmarkReadWrite -benchmem -benchtime=5s
	BenchmarkReadWrite/c←s_150-4              451840             13904 ns/op          10.79 MB/s       34954 B/op          4 allocs/op
	BenchmarkReadWrite/s←c_150-4              452560             16134 ns/op           9.30 MB/s       36378 B/op          4 allocs/op
	BenchmarkReadWrite/c←s_3000-4             202950             40846 ns/op          73.45 MB/s       69833 B/op          8 allocs/op
	BenchmarkReadWrite/s←c_3000-4             189262             37930 ns/op          79.09 MB/s       69768 B/op          8 allocs/op
2022-11-16 13:48:34 -07:00
David Fifield
0780f2e809
Add an orport-srcaddr server transport option.
The option controls what source address to use when dialing the
(Ext)ORPort. Using a source address other than 127.0.0.1, or a range of
addresses, can help with localhost ephemeral port exhaustion.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40198
2022-11-16 19:41:42 +01:00
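Binding dials to a specific source address with net.Dialer looks roughly like this. A minimal sketch under stated assumptions: the function name and address handling are illustrative, and the actual server option also supports address ranges.

```go
package main

import (
	"fmt"
	"net"
)

// dialerWithSourceAddr returns a net.Dialer whose outgoing connections are
// bound to the given local source IP. Spreading (Ext)ORPort dials across
// several loopback addresses (127.0.0.2, 127.0.0.3, ...) gives each source
// address its own ephemeral port space, easing port exhaustion.
func dialerWithSourceAddr(srcIP string) (*net.Dialer, error) {
	ip := net.ParseIP(srcIP)
	if ip == nil {
		return nil, fmt.Errorf("invalid source IP %q", srcIP)
	}
	// Port 0 lets the kernel pick the ephemeral port at dial time.
	return &net.Dialer{LocalAddr: &net.TCPAddr{IP: ip}}, nil
}

func main() {
	d, err := dialerWithSourceAddr("127.0.0.2")
	if err != nil {
		panic(err)
	}
	fmt.Println(d.LocalAddr)
}
```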
itchyonion
9d72b30603
proxy: Let verbose level act on file logging 2022-11-16 10:08:11 -08:00
itchyonion
768b80dbdf
Use event logger for proxy starting message and NAT info 2022-11-16 10:08:10 -08:00
David Fifield
2f55581098
Reduce the smux KeepAliveTimeout on the server from 10 to 4 minutes.
To save memory, we want to more aggressively close stale connections.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40175
2022-11-16 18:48:14 +01:00
David Fifield
12e8de8b24 Update github.com/gorilla/websocket to v1.5.0. 2022-11-15 17:21:33 -07:00
luciole
3185487aea update formatTraffic so that bandwidth unit is always KB 2022-11-10 15:12:46 +01:00
meskio
ac8562803a
Merge remote-tracking branch 'gitlab/mr/107' 2022-10-17 12:36:19 +02:00
David Fifield
39df9b36b5 Fix uTLS issue number in ChangeLog.
The right issue number is #40054.
The #40095 it referred to was for load balancing on the broker.
2022-10-16 23:14:38 -06:00
KokaKiwi
21d7449851
proxy: Check ephemeral port range ordering at flag parsing 2022-10-14 21:40:07 +02:00
KokaKiwi
10c8173120
proxy: Fix ephemeral ports range CLI flag (again) 2022-10-12 19:48:24 +02:00
Cecylia Bocovich
8b1970a3ce Update CI tests to include latest and min go versions 2022-10-12 11:30:47 -04:00
Cecylia Bocovich
31b958302e Bump minimum go version to 1.15 2022-10-12 11:03:06 -04:00
KokaKiwi
986fc8269a
proxy: Correctly handle argument parsing error 2022-10-12 16:51:39 +02:00
KokaKiwi
c5b291b114
proxy: Fix build with golang 1.13 2022-10-12 16:33:09 +02:00
meskio
56063efbba
Merge remote-tracking branch 'gitlab/mr/102' 2022-10-11 18:47:47 +02:00
trinity-1686a
5ef5142bb0 format using go-1.19 2022-10-09 21:15:50 +02:00
KokaKiwi
068af08703
Change how ephemeral-ports-range CLI flag is handled 2022-09-30 17:55:10 +02:00
KokaKiwi
47f9392645
proxy: Add ICE ephemeral ports range setting CLI flag 2022-09-30 17:55:08 +02:00
KokaKiwi
5e564f36ff
proxy: Add a SnowflakeProxy.makeWebRTCAPI() method 2022-09-30 17:55:06 +02:00
Tommaso Gragnato
9ce1de4eee Use Pion's Setting Engine to reduce Multicast DNS noise
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40123

The purpose of the patch is to prevent Pion from opening the mDNS port,
thus preventing snowflake from directly leaking .local candidates.

What this doesn't prevent is the resolution of .local candidates
once they are passed on to the system DNS.
2022-09-26 08:52:23 -07:00
Daniel Golle
a8829d49b7
Fix proxy command line help output 2022-09-26 10:37:29 -04:00
Shelikhoo
36f03dfd44
Record proxy type for proxy relay stats 2022-09-23 13:08:13 +01:00
itchyonion
03b2b56f87 Fix broker race condition 2022-07-19 18:25:27 -07:00
Shelikhoo
c983c13a84
Updated ChangeLog for v2.3.0 release 2022-06-23 11:40:29 +01:00
Shelikhoo
35e9ab8c0b
Use truncated hash instead of crc64 for counted hash 2022-06-16 15:00:12 +01:00
Shelikhoo
b18e6fcfe4
Add document for Distinct IP file 2022-06-16 15:00:12 +01:00
Shelikhoo
af1134362a
Update distinct counter interface 2022-06-16 15:00:12 +01:00
Shelikhoo
be40b623a4
Add go sum for hyperloglog 2022-06-16 15:00:12 +01:00
Shelikhoo
2541b13166
Add distinct IP counter to broker 2022-06-16 15:00:10 +01:00
Shelikhoo
fa7d1e2bb7
Add distinct IP counter to metrics 2022-06-16 14:58:12 +01:00
Shelikhoo
211254fa98
Add distinct IP counter 2022-06-16 14:58:12 +01:00
Shelikhoo
97dea533da
Update Relay Pattern format to include dollar sign 2022-06-16 14:06:58 +01:00
Shelikhoo
ddf72025d1
Restrict Allowed Relay to Tor Pool by default 2022-06-16 14:06:58 +01:00
Shelikhoo
e5b799d618
Update documents for broker messages 2022-06-16 14:06:58 +01:00
Shelikhoo
0ae4d821f0
Move ErrExtraInfo to ipc.go 2022-06-16 14:06:58 +01:00
Shelikhoo
a4bbb728e6
Fix not zero metrics for 1.3 values 2022-06-16 14:06:58 +01:00
Shelikhoo
8ba89179f1
Add document for LoadBridgeInfo input 2022-06-16 14:06:58 +01:00
Shelikhoo
8ab45651d0
Disallow unknown bridge list file field 2022-06-16 14:06:58 +01:00
Shelikhoo
c5e5b45b06
Update message protocol version to 1.3 for RelayURL 2022-06-16 14:06:58 +01:00
Shelikhoo
f789dce6d2
Represent Bridge Fingerprint As String 2022-06-16 14:06:58 +01:00
Shelikhoo
dd61e2be0f
Add Proxy Relay URL Metrics Collection 2022-06-16 14:06:57 +01:00
Shelikhoo
b78eb74e42
Add Proxy Relay URL Rejection Metrics 2022-06-16 14:06:57 +01:00
Shelikhoo
7caab01785
Fixed desynchronized comment and behavior for log interval
In 64ce7dff1b, the log interval was modified while the comment was left unchanged.
2022-06-16 14:06:57 +01:00
Shelikhoo
b391d98679
Add Proxy Relay URL Support Counting Metrics Output 2022-06-16 14:06:57 +01:00
Shelikhoo
1b48ee14f4
Add test for proxy poll with Relay URL 2022-06-16 14:06:57 +01:00
Shelikhoo
6e8fbe54ee
Rejection reason feedback 2022-06-16 14:06:57 +01:00
Shelikhoo
3ebb5a4186
Show relay URL when connecting to relay 2022-06-16 14:06:57 +01:00
Shelikhoo
b18a9431b2
Add Broker Allowed Relay Pattern Indication Rejection for Proxy 2022-06-16 14:06:57 +01:00
Shelikhoo
2ebdc89c42
Add Allowed Relay Hostname Pattern Indication 2022-06-16 14:06:57 +01:00
Shelikhoo
b09a2e09b3
Add Relay URL Check in Snowflake Proxy 2022-06-16 14:06:56 +01:00
Shelikhoo
02c6f764c9
Add support for specifying bridge list file 2022-06-16 14:06:56 +01:00
Shelikhoo
c961b07459
Add Detailed Error Output for datachannelHandler 2022-06-16 14:06:56 +01:00
Shelikhoo
50c0d64e10
Add Detailed Error Output for proxyPolls, proxyAnswers 2022-06-16 14:06:56 +01:00
Shelikhoo
c7549d886e
Update default snowflake server address
Change snowflake broker test for updated address

Amend DefaultBridges Value

Add Default Fingerprint Info for Snowflake
2022-06-16 14:06:56 +01:00
Shelikhoo
5d7a3766d6
Add Relay Info Forwarding for Snowflake 2022-06-16 13:57:34 +01:00
Shelikhoo
d5a87c3c02
Guard Proxy Relay URL Acceptance with Pattern Check 2022-06-16 13:57:33 +01:00
Shelikhoo
863a8296e8
Add RelayURL support in proxy 2022-06-16 13:57:33 +01:00
Shelikhoo
613ceaf970
Add RelayURL and AllowedRelayPattern to snowflake signaling 2022-06-16 13:57:33 +01:00
Shelikhoo
38f0e00e5d
Add Domain Name Matcher
Design difference from original vision: Skipped FQDN step to make it more generalized
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/28651#note_2787394
2022-06-16 13:57:33 +01:00
Shelikhoo
5578b4dd76
Add Bridge List Holder Test 2022-06-16 13:57:00 +01:00
Shelikhoo
0822c5f87b
Add Bridge List Holder 2022-06-16 13:56:59 +01:00
Shelikhoo
3d4f294241
Add Bridge List Definition 2022-06-16 13:56:56 +01:00
meskio
f38c91f906
Don't use entropy for test
Use math/rand instead of crypto/rand, so entropy is not a blocker when
running the tests.
2022-06-02 17:24:54 +02:00
itchyonion
e4c01f0595 Wrap client NAT log 2022-05-31 08:52:23 -07:00
Cecylia Bocovich
6310ca4381
Avoid performing two NAT probe tests at startup
After the initial NAT probe test, wait a full interval before starting the
recurring NAT retests.
2022-05-27 10:01:19 -04:00
Cecylia Bocovich
4e7f897527
Update snowflake CI to test with go 1.18 2022-05-26 12:51:54 -04:00
Cecylia Bocovich
ae5a71e6e5
Updated ChangeLog for v2.2.0 release 2022-05-25 12:17:54 -04:00
meskio
3473b438e5
Move ptEventLogger into the client/snowflake.go
Remove the client/pt_event_logger.go file, as it is very minimal.
2022-05-25 18:09:09 +02:00
meskio
1d592b06e5
Implement String() method on events
To make it safe for logging, the safelog.Scrub function is now public.

Closes: #40141
2022-05-25 18:09:06 +02:00
itchyonion
9757784c5a
Wait some time before displaying the proxy usage log 2022-05-25 11:01:01 -04:00
Cecylia Bocovich
dd83b68efa
Bump version of pion/webrtc to v3.1.41
This bumps the version of pion/dtls to v2.1.5 to fix three CVEs:
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-29189
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-29190
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-29222
2022-05-24 11:45:47 -04:00
Cecylia Bocovich
b6875c6ae9
Bump webrtc library version
go get github.com/pion/webrtc/v3@latest
go mod tidy
2022-04-12 12:10:01 -04:00
itchyonion
e2838201ad
Scrub ptEvent logs 2022-04-12 11:52:21 -04:00
Cecylia Bocovich
aab806429f
Fix gitlab CI to work with multiple client .go files 2022-04-11 11:50:36 -04:00
Cecylia Bocovich
d807e9d370
Move tor-specific code outside of client library 2022-04-11 11:38:52 -04:00
Arlo Breault
2f89fbc2ed Represent fingerprint internally as byte array 2022-03-31 11:28:00 -04:00
Arlo Breault
fa2f6824d9 Add some test cases for client poll requests 2022-03-21 15:31:02 -04:00
Arlo Breault
b563141c6a Forward bridge fingerprint
gitlab 28651
2022-03-21 15:06:05 -04:00
Arlo Breault
281d917beb Stop storing version in ClientPollRequest
This continues to assert the known version while decoding.  The client
will only ever generate the latest version while encoding, and if the
response needs to change, the impetus will be a new feature, set in the
deserialized request, which can be used as a distinguisher.
2022-03-21 15:06:05 -04:00
meskio
b73add1550
Make the proxy type configurable for users of the library
Closes: #40104
2022-03-21 19:24:51 +01:00
meskio
b265bd3092
Make it easier to extend the list of known proxy types
And include iptproxy as a valid proxy type.
2022-03-21 19:23:49 +01:00
Arlo Breault
bd636a1374 Introduce an unexported newBrokerChannelFromConfig
A follow-up wants to pass in a new property from the ClientConfig but it
would be an API breaking change to NewBrokerChannel.

However, it's unclear why NewBrokerChannel is exported at all.  No other
package in the repo depends on it, and the known users of the library
probably wouldn't be constructing them.

While this patch was being reviewed, a new constructor was added,
NewBrokerChannelWithUTLSSettings, with effectively the same issue.
Both of those exported ones are deleted here.
2022-03-16 16:33:24 -04:00
Arlo Breault
829cacac5f Parse ClientPollRequest version in DecodeClientPollRequest
Instead of IPC.ClientOffers.  This makes things consistent with
EncodeClientPollRequest which adds the version while serializing.
2022-03-16 15:43:10 -04:00
Arlo Breault
6fd0f1ae5d Rename *PollRequest methods to distinguish client/proxy 2022-03-16 15:43:10 -04:00
Shelikhoo
6e29dc676c
Add document for NewUTLSHTTPRoundTripper 2022-03-16 09:13:30 +00:00
Shelikhoo
ab9604476e
Move uTLS configuration to socks5 arg 2022-03-16 09:13:30 +00:00
Shelikhoo
3132f68012
Add connection expire time for uTLS pendingConn 2022-03-16 09:13:29 +00:00
Shelikhoo
8d5998b744
Harmonize identifiers to uTLS 2022-03-16 09:13:29 +00:00
Shelikhoo
e3aeb5fe5b
Add line wrap to NewBrokerChannelWithUTlsSettings 2022-03-16 09:13:29 +00:00
Shelikhoo
f525490032
Update utls test to match uTLS Round Tripper constructor 2022-03-16 09:13:29 +00:00
Shelikhoo
1573502e93
Use uTLS aware broker channel constructor 2022-03-16 09:13:29 +00:00
Shelikhoo
ccfdcab8fe
Add uTLS remove SNI to snowflake client 2022-03-16 09:13:29 +00:00
Shelikhoo
9af0ad119b
Add utls imitate setting to snowflake client 2022-03-16 09:13:29 +00:00
Max Bittman
c1c3596cf8
Add name to utls client hello id 2022-03-16 09:13:28 +00:00
Shelikhoo
c1b0f763ef
Add reformat for utls roundtripper 2022-03-16 09:13:28 +00:00
Shelikhoo
4447860661
Add repeated test for utls roundtripper 2022-03-16 09:13:28 +00:00
Shelikhoo
006abdead4
Add utls roundtripper 2022-03-16 09:13:25 +00:00
meskio
19e9e38415
Merge remote-tracking branch 'gitlab/mr/78' 2022-03-11 19:58:17 +01:00
Jake Vossen
99eb794a20
Fixed up/downstream metrics 2022-03-02 11:27:33 -05:00
pjsier
df22114fce Fix proxy logging verb tense 2022-02-28 18:38:17 -06:00
Anna “CyberTailor”
e18a4ac147
Generate tarballs in release CI
The `generate_tarball` job vendors all Go modules to make packaging for
distributions easier.
2022-02-27 10:01:50 +05:00
Cecylia Bocovich
01ae5b56e8
Fix client library test
Initialize eventsLogger for WebRTCPeer in client library test.
2022-02-14 15:11:41 -05:00
Cecylia Bocovich
3547b284a9
Make all snowflake events LogSeverityNotice
Let's reserve Tor error logs for more severe events that indicate
a client-side bug or absolute failure. By default, tor logs at severity
level notice (and above).
2022-02-14 14:09:16 -05:00
Cecylia Bocovich
2c008d6589
Add connection failure events for proxy timeouts
This change adds two new connection failure events for snowflake
proxies. One fires when the datachannel times out and another fires when
the connection to the proxy goes stale.
2022-02-14 14:00:01 -05:00
Cecylia Bocovich
bcc162898a
Initialize SnowflakeListener.closed
Fixes a bug where an uninitialized channel causes a panic when closed
(#40099).
2022-02-08 13:00:43 -05:00
Cecylia Bocovich
e6e5e20ae8
Update ChangeLog for v2.1.0 release 2022-02-08 10:56:19 -05:00
Cecylia Bocovich
c0b35076c9
Remove support for oneshot mode
Due to a bug (#40098), legacy oneshot connections have not worked for
a while. Connections without the turbotunnel token would cause the server
to crash. This fixes that bug by removing oneshot support altogether and
simply closing the connection.
2022-02-07 11:39:23 -05:00
Shelikhoo
00e8415d8e
Add verbosity switch to suppress diagnostic output 2022-02-03 13:38:48 +00:00
Shelikhoo
e828b06076
Use log instead of fmt in proxy event logger
See also:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/72#note_2772839
2022-01-28 14:46:45 +00:00
Shelikhoo
bf3bd635f7
Fix build break in Go 1.16 for missing import
See also:
https://gitlab.torproject.org/shelikhoo/snowflake/-/jobs/86751
2022-01-26 13:39:12 +00:00
Shelikhoo
eb229d512b
Fix ProxyEventLogger output 2022-01-25 13:03:19 +00:00
Shelikhoo
88af9da4a2
Fix ProxyEventLogger output 2022-01-25 13:03:19 +00:00
Shelikhoo
1116bc81c8
Add Proxy Event Logger 2022-01-25 13:03:19 +00:00
Shelikhoo
9208364475
Extract traffic formatter 2022-01-25 13:03:19 +00:00
Shelikhoo
f12cfe6a9f
Add proxy event logger state propagate 2022-01-25 13:03:18 +00:00
Shelikhoo
e4305a4d2b
Add EventOnProxyConnectionOver Reporting 2022-01-25 13:03:18 +00:00
Shelikhoo
d64af31394
Add EventOnProxyConnectionOver Event 2022-01-25 13:03:18 +00:00
Shelikhoo
91379a42f3
Add Raw Data Output for bytesLogger 2022-01-25 13:03:14 +00:00
Shelikhoo
6cb82618a0
Refactor WebRTC Peer,Dialer's name to be readable
See also:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/67#note_2771666
2022-01-25 12:49:59 +00:00
Shelikhoo
657aaa6ba8
Refactor event logger setting into function call
See also:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/67#note_2770482
2022-01-25 12:49:59 +00:00
Shelikhoo
55bf117d1a
Reduce PT Event Logger Verbosity 2022-01-25 12:49:59 +00:00
Shelikhoo
7536dd6fb7
Add Propagate EventLogger Setting 2022-01-25 12:49:59 +00:00
Shelikhoo
8d2f662c8c
Emit non-pointer type event 2022-01-25 12:49:58 +00:00
Shelikhoo
128936c825
Enable PT Event Logger 2022-01-25 12:49:58 +00:00
Shelikhoo
ac64d17705
Add PT Event Logger 2022-01-25 12:49:58 +00:00
Shelikhoo
36ca610d6b
Add NewWebRTCPeer3E Initializer
This name includes [E]vent to reduce merge conflict with forward proxy change set.
2022-01-25 12:49:58 +00:00
Shelikhoo
9a7fcdec03
Add Snowflake Event Reporter for Peer Communication 2022-01-25 12:49:57 +00:00
Shelikhoo
c3f09994da
Add Snowflake Event Reporter for Broker Communication 2022-01-25 12:49:57 +00:00
Shelikhoo
cd6d837d85
Add snowflake event handler to client config 2022-01-25 12:49:57 +00:00
Shelikhoo
b5ef18803f
Add Event Bus Test 2022-01-25 12:49:57 +00:00
Shelikhoo
5f03f88d73
Add Event Bus Implementation
This event bus implementation favours simplicity over efficiency and is not suitable for frequent addition and removal of listeners.
2022-01-25 12:49:56 +00:00
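A simplicity-over-efficiency event bus of the kind this commit describes can be sketched as below. The type and method names are hypothetical, not the snowflake API: just a mutex-guarded listener slice, copied under the lock and invoked outside it.

```go
package main

import (
	"fmt"
	"sync"
)

// EventListener receives every event emitted on the bus.
type EventListener func(event string)

// EventBus is a minimal bus: a mutex-guarded slice of listeners. Adding a
// listener appends to the slice, so it suits a mostly-static listener set.
type EventBus struct {
	lock      sync.Mutex
	listeners []EventListener
}

func (b *EventBus) AddEventListener(l EventListener) {
	b.lock.Lock()
	defer b.lock.Unlock()
	b.listeners = append(b.listeners, l)
}

func (b *EventBus) OnNewEvent(event string) {
	b.lock.Lock()
	snapshot := append([]EventListener(nil), b.listeners...) // copy under lock
	b.lock.Unlock()
	for _, l := range snapshot { // deliver outside the lock
		l(event)
	}
}

func main() {
	bus := &EventBus{}
	bus.AddEventListener(func(e string) { fmt.Println("event:", e) })
	bus.OnNewEvent("connected")
}
```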
Shelikhoo
75f770150d
Add Snowflake Event API interface 2022-01-25 12:49:51 +00:00
Shelikhoo
d2f6ea5417
increase clientIDAddrMapCapacity
See also:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40084
2022-01-18 14:33:34 -05:00
Shelikhoo
50646698e3
Suppress connection end log output
This is an amendment of https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/30
2022-01-18 14:33:27 -05:00
Cecylia Bocovich
b35a79ac24
Validate client and proxy supplied strings
Malicious clients and proxies can provide potentially malicious strings
in the polls. This validates the NAT type and proxy type strings to
ensure that malformed strings are not displayed on a web page
or passed to any of our monitoring infrastructure.

If a client or proxy supplies an invalid NAT type, we return an error
message. If a proxy supplies an unknown proxy type, we set the proxy
type to unknown.
2022-01-12 11:30:41 -05:00
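The validation policy this commit describes (reject invalid NAT types, coerce unknown proxy types to "unknown") can be sketched as follows. The function names are hypothetical and the value sets illustrative, though they mirror types the snowflake codebase uses.

```go
package main

import "fmt"

// Illustrative allow-lists; the broker's real lists may differ.
var knownNATTypes = map[string]bool{
	"unknown": true, "restricted": true, "unrestricted": true,
}

var knownProxyTypes = map[string]bool{
	"badge": true, "webext": true, "standalone": true, "iptproxy": true,
}

// checkNATType rejects unrecognized NAT type strings outright, so malformed
// input never reaches a web page or monitoring infrastructure.
func checkNATType(nat string) error {
	if !knownNATTypes[nat] {
		return fmt.Errorf("invalid NAT type: %q", nat)
	}
	return nil
}

// normalizeProxyType maps unrecognized proxy types to "unknown" instead of
// passing arbitrary strings through to metrics.
func normalizeProxyType(pt string) string {
	if !knownProxyTypes[pt] {
		return "unknown"
	}
	return pt
}

func main() {
	fmt.Println(normalizeProxyType("<script>alert(1)</script>"))
}
```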
David Fifield
aeb0794d28 Use require rather than replace for dtls version.
go mod edit -dropreplace=github.com/pion/dtls/v2
go get github.com/pion/dtls/v2@v2.0.12

This is an update to
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/66.
2021-12-16 09:46:55 -07:00
Cecylia Bocovich
9c11e479d0
Update go versions in CI tests
Debian packages Go 1.15 and 1.17, and we use 1.16 in Tor Browser.
2021-12-10 10:43:47 -05:00
Cecylia Bocovich
738bd464ea
Update version of DTLS library
Make sure we use a version of the DTLS library that contains the
following fingerprinting fixes:

Only send supported_groups extension in ClientHello
Do not include IP addresses as SNI values

These changes have been merged upstream into pion/dtls.
2021-12-10 10:39:44 -05:00
Hans-Christoph Steiner
221f1c41c9
gitlab-ci: include job number in the artifacts zipball filename 2021-12-01 11:48:08 +01:00
Hans-Christoph Steiner
51f2c026fd
gitlab-ci: include flags to make reproducible builds
* https://github.com/golang/go/issues/33772
2021-12-01 11:48:06 +01:00
Hans-Christoph Steiner
1318b6a9ec
stripped down Android build process for gitlab-ci and Vagrant 2021-12-01 11:48:03 +01:00
Hans-Christoph Steiner
c9399da566
gitlab-ci: expire artifacts in 1 week, improve gradle caching, etc. 2021-12-01 11:09:57 +01:00
Shelikhoo
40f44d6272
Add V2Ray/V2Fly License for task 2021-11-19 15:55:30 +00:00
Shelikhoo
0c62d806a4
Represent NATTypeMeasurementInterval in time.Duration
Adopted the change according to the recommendation from

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/62#note_2761566
2021-11-16 19:25:27 +00:00
Shelikhoo
c49f72eb0c
Update nat-retest-interval type to duration
Adopted the change according to the recommendation from

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/62#note_2761438
2021-11-16 15:58:57 +00:00
Shelikhoo
efdb850d2e
Update nat-retest-interval flag name to reflect the change
Adopted the change according to the recommendation from

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/62#note_2761382
2021-11-16 11:22:44 +00:00
Shelikhoo
9bdb87eaf3
Update nat-retest-seconds format to time.ParseDuration form
Adopted the change according to the recommendation from

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/62#note_2761382
2021-11-16 11:20:27 +00:00
Shelikhoo
d4fdb35ee8
Add in source indicator of file origin
Adopted the change according to the recommendation from

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/62#note_2759815
2021-11-12 10:56:57 +00:00
Shelikhoo
1b79962ca8
Rename flag to nat-retest-seconds and retest daily by default
Adopted the change according to the recommendation from

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/62#note_2759816

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/62#note_2760512
2021-11-12 10:49:32 +00:00
Shelikhoo
59af9927a5
Refactor state transfer logic to simplify it
Adopted the change according to the recommendation from

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/62#note_2760514
2021-11-12 10:49:32 +00:00
Shelikhoo
2547883cf9
Extract function getCurrentNATType()
Adopted the change according to the recommendation from

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/62#note_2759900
2021-11-12 10:49:32 +00:00
Shelikhoo
a6a53ff8ce
Add NAT Type test periodic task 2021-11-12 10:49:30 +00:00
Shelikhoo
ac97ce7136
Add NAT Type measurement command line flag
It is important to include the unit in the flag name to prevent users from making mistakes.
2021-11-12 10:48:15 +00:00
Shelikhoo
4c8a166178
Port V2Ray periodic task standard library to snowflake
This is a mature implementation of a periodic task that runs a function at a given interval. It allows the task to be stopped, and gracefully handles edge cases such as an overly short interval.

V2Ray/V2Fly is MIT licensed.
2021-11-12 10:48:14 +00:00
Shelikhoo
04bc471a63
Support recurring NAT Type measurement
currentNATType will from now on be guarded by currentNATTypeAccess for any access.

The NAT type update rule is flattened into a state-transfer lookup table for readability.
2021-11-12 10:48:14 +00:00
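The guarded-state design this commit describes can be sketched like so. The variable names come from the message; the transition table's contents are hypothetical (here it simply keeps a known NAT type when a retest comes back "unknown"), illustrating the flat lookup-table shape rather than the real update rule.

```go
package main

import (
	"fmt"
	"sync"
)

var (
	currentNATTypeAccess sync.RWMutex // guards every access to currentNATType
	currentNATType       = "unknown"
)

// natTransition maps {current, newly measured} -> value to store. A flat
// table replaces nested if/else chains.
var natTransition = map[[2]string]string{
	{"unknown", "restricted"}:      "restricted",
	{"unknown", "unrestricted"}:    "unrestricted",
	{"restricted", "unknown"}:      "restricted", // failed retest: keep known type
	{"restricted", "unrestricted"}: "unrestricted",
	{"unrestricted", "unknown"}:    "unrestricted",
	{"unrestricted", "restricted"}: "restricted",
}

func updateNATType(measured string) {
	currentNATTypeAccess.Lock()
	defer currentNATTypeAccess.Unlock()
	if next, ok := natTransition[[2]string{currentNATType, measured}]; ok {
		currentNATType = next
	}
}

func getCurrentNATType() string {
	currentNATTypeAccess.RLock()
	defer currentNATTypeAccess.RUnlock()
	return currentNATType
}

func main() {
	updateNATType("restricted")
	updateNATType("unknown") // a failed retest does not discard a known type
	fmt.Println(getCurrentNATType())
}
```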
Cecylia Bocovich
ead5a960d7
Bump snowflake library imports and go.mod to v2 2021-11-11 10:14:49 -05:00
Cecylia Bocovich
f6b6342a3a
Update ChangeLog for v2 release 2021-11-04 10:34:34 -04:00
Cecylia Bocovich
0a2598a1e8 Export ability to change the URL of NAT probe 2021-10-28 10:05:01 -04:00
Cecylia Bocovich
3caa83d84d Modify handling of misconfigurations and defaults 2021-10-28 10:05:01 -04:00
Cecylia Bocovich
0e8d41ba4b Update comments for exported items 2021-10-28 10:05:01 -04:00
Cecylia Bocovich
84e8a183e5 Comment package and minor changes exports 2021-10-26 15:49:46 -04:00
Cecylia Bocovich
b2edf948e2 Remove BytesLoggers from exported functions 2021-10-26 14:52:17 -04:00
idk
50e4f4fd61 Turn the proxy code into a library
Allow other go programs to easily import the snowflake proxy library and
start/stop a snowflake proxy.
2021-10-26 14:15:44 -04:00
Cecylia Bocovich
54ab79384f Unify broker/bridge domains to torproject.net 2021-10-14 11:14:22 -04:00
Cecylia Bocovich
04ba50a531 Change package name and add a package comment 2021-10-07 11:01:33 -04:00
Cecylia Bocovich
4623c7d3e1 Add documentation where necessary for exported items 2021-10-07 11:01:33 -04:00
Cecylia Bocovich
5339ed2dd7 Stop exporting internal code 2021-10-07 11:01:33 -04:00
Cecylia Bocovich
5927c2bdf9 Default to a maximum value of 1 Snowflake peer 2021-10-04 10:17:37 -04:00
Cecylia Bocovich
6c6a2e44ab Change package name and add a package comment 2021-10-04 10:17:37 -04:00
Cecylia Bocovich
767c07dc58 Update client library usage documentation 2021-10-04 10:17:37 -04:00
Cecylia Bocovich
638ec6c222 Update Snowflake client library documentation
Follow best practices for documenting the exported pieces of the
Snowflake client library.
2021-10-04 10:17:37 -04:00
Cecylia Bocovich
99887cd05d Add package functions to define and set the rendezvous method
Add exported functions to the snowflake client library to allow calling
programs to define and set their own custom broker rendezvous methods.
2021-10-04 10:17:37 -04:00
Cecylia Bocovich
624750d5a8 Stop exporting code that should be internal 2021-10-04 10:17:37 -04:00
meskio
4396d505a3
Use tpo geoip library
The geoip implementation has been moved to its own library so it can be
shared between projects.
2021-10-04 12:24:55 +02:00
Cecylia Bocovich
8c6f0dbae7 Check error for calls to preparePeerConnection 2021-09-30 11:46:39 -04:00
Cecylia Bocovich
c8136f4534 Update version of go used in .gitlab-ci.yml 2021-09-10 16:57:53 -04:00
meskio
cbd863d6b1
Fix proxy test
The broker is a global object.
2021-09-02 12:49:00 +02:00
Cecylia Bocovich
ace8df37ed Fix compile bug in client, caught by CI 2021-08-24 10:27:24 -04:00
Cecylia Bocovich
a39d6693e1 Call conn.Reject() if SOCKS arguments are invalid 2021-08-19 21:31:51 -04:00
Cecylia Bocovich
97175a91a5 Modify torrc example to pass client args in bridge line 2021-08-19 21:20:34 -04:00
Cecylia Bocovich
e762f58a31 Parse SOCKS arguments and prefer over command line options
Parsing the Snowflake client options from SOCKS allows us to specify
snowflake client settings in the bridge lines.
2021-08-19 21:20:34 -04:00
Cecylia Bocovich
4acc08cc60 Use a config struct for snowflake client options 2021-08-19 21:20:34 -04:00
Cecylia Bocovich
e6715cb4ee Increase smux and QueuePacketConn buffer sizes
This should increase the maximum amount of inflight data and hopefully
the performance of Snowflake, especially for clients geographically
distant from proxies and the server.
2021-08-10 15:38:11 -04:00
David Fifield
b203a75c41 Document -ampcache in snowflake-client man page. 2021-08-05 16:13:24 -06:00
David Fifield
f2dc41d778 Document /amp/client in broker-spec.txt. 2021-08-05 16:13:24 -06:00
David Fifield
521eb4d4d6 Add info about rendezvous methods to client README. 2021-08-05 16:13:24 -06:00
David Fifield
e833119bef Broker /amp/client route (AMP cache client registration). 2021-08-05 16:13:24 -06:00
David Fifield
5adb994028 Implement ampCacheRendezvous. 2021-08-05 16:13:24 -06:00
David Fifield
c13810192d Skeleton of ampCacheRendezvous.
Currently the same as httpRendezvous, but activated using the -ampcache
command-line option.
2021-08-05 16:13:24 -06:00
David Fifield
c9e0dd287f amp package.
This package contains a CacheURL function that modifies a URL to be
accessed through an AMP cache, and the "AMP armor" data encoding scheme
for encoding data into the AMP subset of HTML.
2021-08-05 16:13:24 -06:00
David Fifield
0f34a7778f Factor out httpRendezvous separate from BrokerChannel.
Makes BrokerChannel abstract over a rendezvousMethod. BrokerChannel
itself is responsible for keepLocalAddresses and the NAT type state, as
well as encoding and decoding client poll messages. rendezvousMethod is
only responsible for delivery of encoded messages.
2021-08-05 16:13:24 -06:00
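The separation of concerns described here can be sketched as below. The snowflake client uses an Exchange-style interface for its rendezvous methods, but the field names, the `Negotiate` method, and the toy encoding in this sketch are hypothetical.

```go
package main

import (
	"bytes"
	"fmt"
)

// rendezvousMethod is only responsible for delivering an encoded poll
// message and returning the encoded response.
type rendezvousMethod interface {
	Exchange(encPollReq []byte) (encPollResp []byte, err error)
}

// BrokerChannel owns client state and message encoding, and is abstract over
// how the encoded message is delivered.
type BrokerChannel struct {
	rendezvous rendezvousMethod
	natType    string
}

func (bc *BrokerChannel) Negotiate(offer []byte) ([]byte, error) {
	// Encoding is the channel's job; delivery is the method's job.
	req := append([]byte("poll:"+bc.natType+":"), offer...)
	return bc.rendezvous.Exchange(req)
}

// echoRendezvous is a test stand-in for a real delivery mechanism such as
// httpRendezvous or ampCacheRendezvous.
type echoRendezvous struct{}

func (echoRendezvous) Exchange(req []byte) ([]byte, error) {
	return bytes.ToUpper(req), nil
}

func main() {
	bc := &BrokerChannel{rendezvous: echoRendezvous{}, natType: "restricted"}
	resp, err := bc.Negotiate([]byte("offer"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(resp))
}
```

Swapping delivery mechanisms then only requires constructing the channel with a different rendezvousMethod, which is what the later AMP cache commits build on.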
David Fifield
55f4814dfb Change the representation of domain fronting in HTTP rendezvous.
Formerly, BrokerChannel represented the broker URL and possible domain
fronting as
	bc.url  *url.URL
        bc.Host string
That is, bc.url is the URL of the server which we contact directly, and
bc.Host is the Host header to use in the request. With no domain
fronting, bc.url points directly at the broker itself, and bc.Host is
blank. With domain fronting, we do the following reshuffling:
	if front != "" {
		bc.Host = bc.url.Host
		bc.url.Host = front
	}
That is, we alter bc.url to reflect that the server to which we send
requests directly is the CDN, not the broker, and store the broker's own
URL in the HTTP Host header.

The above representation was always confusing to me, because in my
mental model, we are always conceptually communicating with the broker;
but we may optionally be using a CDN proxy in the middle. The new
representation is
	bc.url   *url.URL
        bc.front string
bc.url is the URL of the broker itself, and never changes. bc.front is
the optional CDN front domain, and likewise never changes after
initialization. When domain fronting is in use, we do the swap in the
http.Request struct, not in BrokerChannel itself:
	if bc.front != "" {
		request.Host = request.URL.Host
		request.URL.Host = bc.front
	}

Compare to the representation in meek-client:

https://gitweb.torproject.org/pluggable-transports/meek.git/tree/meek-client/meek-client.go?h=v0.35.0#n94
	var options struct {
		URL       string
		Front     string
	}
https://gitweb.torproject.org/pluggable-transports/meek.git/tree/meek-client/meek-client.go?h=v0.35.0#n308
	if ok { // if front is set
		info.Host = info.URL.Host
		info.URL.Host = front
	}
2021-08-05 16:13:24 -06:00
David Fifield
191510c416 Use a URL with a Host component in BrokerChannel tests.
The tests were using a broker URL of "test.broker" (i.e., a schema-less,
host-less, relative path), and running assertions on the value of
b.url.Path. This is strange, especially in tests regarding domain
fronting, where we care about b.url.Host, not b.url.Path. This commit
changes the broker URL to "http://test.broker" and changes tests to
check b.url.Host. I also added an additional assertion for an empty
b.Host in the non-domain-fronted case.
2021-08-05 16:13:24 -06:00
meskio
e3d376ca43
Wait pollInterval between proxy offers
Closes: #40055
2021-07-21 16:38:29 +02:00
meskio
099f4127ea
Refactor the poll offer to use a ticker
Simplify the code to use a ticker. Using a pattern to allow a first run
of the loop before hitting the ticker:
https://github.com/golang/go/issues/17601#issuecomment-311955879
2021-07-21 16:38:27 +02:00
Cecylia Bocovich
b4e964c682 Added some Snowflake library documentation 2021-07-19 10:16:26 -04:00
Cecylia Bocovich
c1b0fdd8cf Cleaned up and reorganized READMEs 2021-07-19 10:16:26 -04:00
David Fifield
2d7cd3f2b7 Use the readLimit constant in a test.
Instead of copying the value.
2021-07-18 16:25:09 -06:00
David Fifield
d9a83e26b5 Remove unused FakePeers.
Unused since 1364d7d45b.
2021-07-18 13:11:29 -06:00
Cecylia Bocovich
4f7833b384 Version bump to v1.1.0 2021-07-13 17:50:44 -04:00
Arlo Breault
2c2f93c022 Remove and restore some comments, after review 2021-07-08 15:35:04 -04:00
Arlo Breault
dfb68d7cfc Fix race in broker test reported by go test -race 2021-07-08 15:32:25 -04:00
Arlo Breault
c3c84fdb48 Use variables for string matching
The legacy code does case matching on these exact strings so it's better
to ensure they're constant.
2021-07-08 12:47:23 -04:00
Arlo Breault
87ad06a5e2 Get rid of legacy version
Move the logic for the legacy version into the http handlers and use a
shim when doing ipc.
2021-07-08 12:32:37 -04:00
Arlo Breault
0ced1cc324 Move http handlers to a separate file 2021-07-08 12:32:37 -04:00
Arlo Breault
015958fbe6 Intermediary refactor teasing apart http / ipc
Introduces an IPC struct and moves the logic out of the http handlers
and into methods on that.
2021-07-08 12:32:35 -04:00
meskio
ced539f234
Refactor webRTCConn to its own file 2021-07-07 19:36:24 +02:00
meskio
7a1857c42f
Make the proxy report the number of clients to the broker
So the assignment of proxies is based on load. The number of clients
is rounded down to a multiple of 8. Existing proxies that don't report
the number of clients will be assigned clients equally with new proxies
until they reach 8 clients; that is acceptable as the existing proxies
have a maximum capacity of 10.

Fixes #40048
2021-07-07 19:36:20 +02:00
Cecylia Bocovich
74bdb85b30 Update example torrc file for client
Remove the -max 3 option because we only use one snowflake. Add
SocksPort auto because many testers have a tor process already bound to
port 9050.
2021-06-24 13:46:11 -04:00
Cecylia Bocovich
53a2365696 Fix leak in server acceptLoop
Refactor out a separate handleStream function and ensure that all
connections are closed and the references are out of scope.
2021-06-24 13:32:55 -04:00
Cecylia Bocovich
10b6075eaa Refactor checkForStaleness to take time.Duration 2021-06-24 11:20:44 -04:00
Cecylia Bocovich
e3351cb08a Fix data race for Peers.collection
We used a WaitGroup to prevent a call to Peers.End from melting
snowflakes while a new one is being collected. However, calls to
WaitGroup.Add are in a race with WaitGroup.Wait. To fix this, we use a
Mutex instead.
2021-06-24 11:16:24 -04:00
Cecylia Bocovich
95cbe36565 Add unit tests to check for webrtc peer data races 2021-06-24 11:16:24 -04:00
Cecylia Bocovich
bb7ff6180b Fix datarace for Peers.melted
Using the boolean value was unnecessary since we already have a channel
we can check for closure.
2021-06-24 11:16:24 -04:00
Cecylia Bocovich
ddcdfc4f09 Fix datarace for WebRTCPeer.closed
The race condition occurs because concurrent goroutines are intermixing
reads and writes of `WebRTCPeer.closed`.

Spotted when integrating Snowflake inside OONI in
https://github.com/ooni/probe-cli/pull/373.
2021-06-24 11:16:24 -04:00
Simone Basso
ed2d5df87d Fix datarace for WebRTCPeer.lastReceive
The race condition occurs because concurrent goroutines are
intermixing reads and writes of `WebRTCPeer.lastReceive`.

Spotted when integrating Snowflake inside OONI in
https://github.com/ooni/probe-cli/pull/373.
2021-06-24 11:16:24 -04:00
Cecylia Bocovich
e84bc81e31 Bump version of kcp and smux libraries 2021-06-23 19:41:03 -04:00
Cecylia Bocovich
6634f2bec9 Store net.Addr in clientIDAddrMap
This fixes a stats collection bug where we were converting client
addresses between a string and net.Addr using the clientAddr function
multiple times, resulting in an empty string for all addresses.
2021-06-19 11:16:38 -04:00
Simone Basso
aefabe683f fix(client/snowflake.go): prevent wg.Add race condition
In VSCode, the staticcheck tool emits this warning:

> should call wg.Add(1) before starting the goroutine to
> avoid a race (SA2000)go-staticcheck

To avoid this warning, just move wg.Add outside.
2021-06-14 10:10:02 +02:00
Cecylia Bocovich
8e0b5bd20a Add changelog and release v1.0.0 2021-06-07 10:24:19 -04:00
meskio
c5ca41f138
Add man pages for proxy and client commands
To be used by the debian package (#19409)
2021-06-02 16:47:50 +02:00
Cecylia Bocovich
270eb21803 Encode client-broker messages as json in HTTP body
Send the client poll request and response in a json-encoded format in
the HTTP request body rather than sending the data in HTTP headers. This
will pave the way for using domain-fronting alternatives for the
Snowflake rendezvous.
2021-06-02 09:52:42 -04:00
David Fifield
ae7cc478fd Release resources in client Transport.Dial on error.
Make a stack of cleanup functions to run (as with defer), but clear the
stack before returning if no error occurs.

Uselessly pushing the stream.Close() cleanup just before clearing the
stack is an intentional safeguard, in case additional operations are
added before the return in the future.

Fixes #40042.
2021-05-24 15:28:13 -06:00
David Fifield
01a96c7d95 Fix error handling around transport.Dial.
The code checked for and displayed an error, but would then go on to
call copyLoop on the nil Conn returned from transport.Dial. Add a return
in that case, and put the cleanup operations in defer. Also remove an
obsolete comment about an empty address. Obsolete because:
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/31#note_2733279
2021-05-24 14:40:50 -06:00
David Fifield
ef4d0a1da5
Stop timers before expiration
If we don't stop them explicitly, the timers will not get garbage collected
until they timeout:
https://medium.com/@oboturov/golang-time-after-is-not-garbage-collected-4cbc94740082

Related to #40039
2021-05-21 10:09:00 +02:00
Arlo Breault
7ef49272fa Remove sync.Once from around logMetrics
Follow up to 160ae2d

Analysis by @dcf,

> I don't think the sync.Once around logMetrics is necessary anymore.
Its original purpose was to inhibit logging on later file handles of
metrics.log, if there were more than one opened. See 171c55a9 and #29734
(comment 2593039) "Making a singleton *Metrics variable causes problems
with how Convey does tests. It shouldn't be called more than once, but
for now I'm using sync.Once on the logging at least so it's explicit."
Commit ba4fe1a7 changed it so that metrics.log is opened in main, used
to create a *log.Logger, and that same instance of *log.Logger is passed
to both NewMetrics and NewBrokerContext. It's safe to share the same
*log.Logger across multiple BrokerContext.
2021-05-20 15:39:30 -04:00
Arlo Breault
160ae2dd71 Make promMetrics not a global
Doesn't seem like it needs to exist outside of the metrics struct.

Also, the call to logMetrics is moved to the constructor.  A metrics
instance is only created when a BrokerContext is created, which only
happens at startup.  The sync of only doing that once is left for
documentation purposes, since it doesn't hurt, but also seems redundant.
2021-05-18 20:07:43 -04:00
Cecylia Bocovich
0054cb2dec Update .gitlab-ci.yml after refactor of client 2021-05-12 10:50:06 -04:00
Cecylia Bocovich
7c9005bed3 Ensure turbotunnel read and write loop terminate
Introduce a waitgroup and done channel to ensure that both the read and
write goroutines for turbotunnel connections terminate when the
connection is closed.
2021-05-12 09:32:07 -04:00
Cecylia Bocovich
11f0846264 Implement server as a v2.1 PT Go API 2021-05-12 09:08:41 -04:00
Cecylia Bocovich
e87b9175dd Implement snowflake client lib as PTv2.1 Go API
This implements a pluggable transports v2.1 compatible Go API in the
Snowflake client library, and refactors how the main Snowflake program
calls it. The Go API implements the two required client side functions:
a constructor that returns a Transport, and a Dial function for the
Transport that returns a net.Conn. See the PT specification for more
information:
https://github.com/Pluggable-Transports/Pluggable-Transports-spec/blob/master/releases/PTSpecV2.1/Pluggable%20Transport%20Specification%20v2.1%20-%20Go%20Transport%20API.pdf
2021-05-12 09:08:41 -04:00
Cecylia Bocovich
af6e2c30e1 Replace default with custom prometheus registry
The default prometheus registry exports data that may be useful for
side-channel attacks. This removes all of the default metrics and makes
sure we are only reporting snowflake metrics from the broker.
2021-04-26 14:18:50 -04:00
Cecylia Bocovich
2a310682b5 Add new gauge to show currently available proxies 2021-04-26 14:18:50 -04:00
Cecylia Bocovich
92bd900bc5 Implement binned counts for polling metrics 2021-04-26 14:07:55 -04:00
Cecylia Bocovich
83ef0b6f6d Export snowflake broker metrics for prometheus
This change adds a prometheus exporter for our existing snowflake broker
metrics. Current values for the metrics can be fetched by sending a GET
request to /prometheus.
2021-04-22 10:39:35 -04:00
Cecylia Bocovich
eff73c3016 Switch front domain and host to fastly 2021-04-01 11:56:52 -04:00
Cecylia Bocovich
196c230ac7 Update Go version for .gitlab-ci.yml 2021-03-25 16:07:48 -04:00
Cecylia Bocovich
087a037f82 Update webrtc library to v3.0.15
This fixes a vulnerability in the library: CVE-2021-28681
2021-03-18 23:08:05 -04:00
Cecylia Bocovich
c0b6e082f2 Don't log errors from calling close on OR conns
Snowflake copies data between the OR connection and the KCP stream,
meaning that in most cases the copy loops will only terminate once the
OR connection times out. In this case the OR connection is already
closed and so calls to CloseRead and CloseWrite will generate errors.
2021-03-18 22:05:40 -04:00
Cecylia Bocovich
720d2b8eb7 Don't log io.ErrClosedPipe in server
These errors are triggered in three places when the OR connection times
out. They don't tell us anything useful and are filling up our logs.
2021-03-18 22:05:40 -04:00
David Fifield
850d2f0683 Update required Go version to 1.13 in README. 2021-03-05 23:26:35 -07:00
Cecylia Bocovich
7187f1009e Log a throughput summary for each connection
This will increase transparency for people running standalone proxies
and help us debug any potential issues with proxies behaving unreliably.
2021-02-02 11:21:16 -05:00
Cecylia Bocovich
bae0bacbfd Classify proxies with unknown NATs as restricted 2021-01-25 14:05:24 -05:00
Cecylia Bocovich
1b29ad7de1 Bump version of pion/sdp
Update our dependency on pion/sdp from v2 to v3, to match pion/webrtc
v3. This requires some changes in how we parse out addresses from ice
candidates. This will ease tor browser builds of snowflake since we are
now only relying on one version of pion/sdp instead of two different
ones.
2021-01-25 10:28:17 -05:00
Cecylia Bocovich
83c01565ef Update webrtc library to v3.0.0
This update required two main changes to how we use the library. First,
we had to make sure we created the datachannel on the offering peer side
before creating the offer. Second, we had to make sure we wait for the
gathering of all candidates to complete since trickle-ice is enabled by
default. See the release notes for more details:
https://github.com/pion/webrtc/wiki/Release-WebRTC@v3.0.0.
2021-01-12 10:37:26 -05:00
Cecylia Bocovich
f908576c60 Increase the KCP maximum window size 2020-12-17 09:54:18 -05:00
Cecylia Bocovich
8ec8a7cb63 Pass lock to socksAcceptLoop by reference
This fixes a bug where we were passing the lock by value to
socksAcceptLoop.
2020-12-16 10:52:19 -05:00
Cecylia Bocovich
3e8947bfc9 Avoid double delay in client from ReconnectTimeout
Run the snowflake collection ReconnectTimeout timer in parallel to the
negotiation with the broker. This way, if the broker takes a long time
to respond the client doesn't have to wait the full timeout to respond.
2020-12-05 15:51:42 -05:00
Cecylia Bocovich
effc667544 Wait until all goroutines finish before shutdown 2020-12-05 15:50:16 -05:00
Cecylia Bocovich
b9cc54b3b7 Send shutdown signal to shutdown open connections
Normally all dangling goroutines are terminated when the main function
exits. However, for projects that use a patched version of snowflake as
a library, these goroutines continued running as long as the main function
had not yet terminated. This commit has all open SOCKS connections close
after receiving a shutdown signal.
2020-12-05 15:50:16 -05:00
Cecylia Bocovich
114df695ce Create new smux session for each SOCKS connection
Each SOCKS connection has its own set of snowflakes and broker poll
loop. Since the session manager was tied to a single set of snowflakes,
this resulted in a bug where RedialPacketConn would sometimes try to
pull snowflakes from a previously melted pool. The fix is to maintain
separate smux sessions for each SOCKS connection, tied to its own
snowflake pool.
2020-12-04 11:17:13 -05:00
Philipp Winter
5efcde5187
Sort snowflake-ips stats by country count.
We currently don't sort the snowflake-ips metrics:

    snowflake-ips CA=1,DE=1,AR=1,NL=1,FR=1,GB=2,US=4,CH=1

To facilitate eyeballing our metrics, this patch sorts snowflake-ips by
value.  If the value is identical, we sort by string, i.e.:

    snowflake-ips US=4,GB=2,AR=1,CA=1,CH=1,DE=1,FR=1,NL=1

This patch fixes tpo/anti-censorship/pluggable-transports/snowflake#40011
2020-11-27 11:20:40 -08:00
Cecylia Bocovich
665d76c5b0 Remove for loop around broker.Negotiate
Instead of continuously polling the broker until the client receives a
snowflake, fail back to the Connect() loop and try again to collect more
peers after ReconnectTimeout.
2020-11-23 12:10:59 -05:00
Cecylia Bocovich
ece43cbfcf Note that isRestrictedFiltering is no longer used 2020-11-20 01:15:16 -05:00
Cecylia Bocovich
00f8f85f41 Use remote probe to determine proxy NAT type
Rather than having standalone proxies determine their NAT type by
conducting the NAT behaviour checks in RFC 5780, use the remote probe
service instead.
2020-11-20 01:13:18 -05:00
Cecylia Bocovich
cf2eb5e6c0 Add a stub sid to probetest answer
This will prevent calls to DecodeAnswerRequest from returning an error
even though the sid is not needed for the probetest.
2020-11-18 15:57:51 -05:00
Cecylia Bocovich
0bed9c48b7 Redefine only symmetric NATs as restricted 2020-11-18 15:40:32 -05:00
Cecylia Bocovich
61beb9d996 Revert accidentally merged code
Some temporary testing code for the proxy got accidentally merged into
the latest changes. This commit undoes that mistake.
2020-11-05 19:28:20 -05:00
Cecylia Bocovich
4663599382 Make probetest wait for a datachannel to open 2020-11-05 16:48:00 -05:00
Cecylia Bocovich
b5ce259858 Fixed a bug that forced datachannel timeout
The probetest answer response was not being sent until the select call
received a datachannel timeout causing all attempted connections to
fail.
2020-11-05 16:46:48 -05:00
Cecylia Bocovich
a4f10d9d6e Add Dockerfile and README for deploying probetest
The easiest way to set up the probe server behind a symmetric NAT is to
deploy it as a Docker container and alter the iptables rules for the
Docker network subnet that the container runs in.
2020-10-29 11:03:51 -04:00
Cecylia Bocovich
f368c87109 Add a remote service to test NAT compatibility
Add a remote probetest service that will allow proxies to test their
compatibility with symmetric NATs.
2020-10-29 11:03:51 -04:00
Cecylia Bocovich
7a0428e3b1 Refactor proxy to reuse signaling code
Simplify proxy interactions with the broker signaling server and prepare
for the introduction of an additional signaling server.
2020-10-29 11:03:51 -04:00
David Fifield
912bcae24e Don't log io.ErrClosedPipe in proxy.
We expect one of these at the end of just about every proxy session, as
the Conns in both directions are closed as soon as the copy loop
finishes in one direction.

Closes #40016.
2020-10-22 23:01:45 -06:00
Cecylia Bocovich
6baa3c4d5f Add synchronization to prevent post-melt collects
This fixes a race condition in which snowflakes.End() is called while
snowflakes.Collect() is in progress resulting in a write to a closed
channel. We now wait for all in-progress collections to finish and add
an extra check before proceeding with a collection.
2020-10-15 14:47:51 -04:00
Cecylia Bocovich
d7aa9b8356 Extract remote address from ICE candidates
Parse the received ICE candidates as well as the Connection Data
field for a non-local IP address to pass to the bridge. This fixes
bug #33157.
2020-10-05 17:02:57 -04:00
Peter Gerber
8467c01e9e Consider more IPs to be local 2020-09-21 15:55:14 +00:00
Cecylia Bocovich
2d43dd26b1 Merge branch 'issue/21314' 2020-08-27 16:45:05 -04:00
Cecylia Bocovich
cc55481faf Set max number of snowflakes in the Tongue 2020-08-27 16:44:07 -04:00
Cecylia Bocovich
1364d7d45b Move snowflake ConnectLoop inside SOCKS Handler
Bug #21314: maintains a separate snowflake connect loop per SOCKS
connection. This way, if Tor decides to stop using Snowflake, Snowflake
will stop using the client's network.
2020-08-27 16:43:55 -04:00
Cecylia Bocovich
3c3317503e Update broker stats to include info on NAT types
As we now partition proxies by NAT type, our stats are more useful if they
capture how many proxies of each type we have, and information on
whether we have enough proxies of the right NAT type for our clients.
This change adds proxy counts by NAT type and binned counts of denied clients by NAT type.
2020-08-24 09:39:17 -04:00
Cecylia Bocovich
d5ae7562ac Add response header timeouts to broker transports
The client and proxy use the net/http default transport to make round
trip connections to the broker. These by default don't time out and can
wait indefinitely for the broker to respond if the broker hangs and
doesn't terminate the connection.
2020-07-30 17:54:28 -04:00
Cecylia Bocovich
82031289a3 Refactor subsetting of ice servers into main
This moves the subsetting of ice servers out of the parseIceServers
function and into main.
2020-07-24 14:08:09 -04:00
Cecylia Bocovich
92520f681d Choose a random subset from given STUN servers
Only chooses a subset as long as we have over 2 STUN servers to choose
from.
2020-07-23 11:30:36 -04:00
Cecylia Bocovich
eaac9f5b6b Use go modules to build android library
This commit removes the symlinks and turns go modules back on to run
gomobile bind locally on the project.
2020-07-14 09:16:23 -04:00
Cecylia Bocovich
c1fa4efe4b Refactor android script to be in android job 2020-07-14 09:16:23 -04:00
Hans-Christoph Steiner
d44fc23815 update .gitlab-ci.yml 2020-07-14 09:16:23 -04:00
Cecylia Bocovich
8c875f0ba7 Use STUN server compatible with RFC 5780 in proxy 2020-07-09 09:55:41 -04:00
Cecylia Bocovich
818226acf2 Testing Gitlab sync. 2020-07-06 15:42:41 -04:00
Cecylia Bocovich
046dab865f Have broker pass client NAT type to proxy
This will allow browser-based proxies that are unable to determine their
NAT type to conservatively label themselves as restricted NATs if they
fail to work with clients that have restricted NATs.
2020-07-06 13:16:03 -04:00
Cecylia Bocovich
0052c0e10c Add a new heap at the broker for restricted flakes
Now when proxies poll, they provide their NAT type to the broker. This
introduces a new snowflake heap of just restricted snowflakes that the
broker can pull from if the client has a known, unrestricted NAT. All
other clients will pull from a heap of snowflakes with unrestricted or
unknown NAT topologies.
2020-07-06 13:16:03 -04:00
Cecylia Bocovich
f6cf9a453b Implement NAT discover for go standalone proxies 2020-07-06 13:16:03 -04:00
Cecylia Bocovich
bf924445e3 Implement NAT discovery (RFC 5780) at the client
Snowflake clients will now attempt NAT discovery using the provided STUN
servers and report their NAT type to the Snowflake broker for matching.
The three possibilities for NAT types are:
- unknown (the client was unable to determine their NAT type),
- restricted (the client has a restrictive NAT and can only be paired
with unrestricted NATs)
- unrestricted (the client can be paired with any other NAT).
2020-07-06 13:16:03 -04:00
Cecylia Bocovich
1448c3885f Update documentation to include broker spec
Add broker messaging specification with endpoints for clients and
proxies.
2020-06-19 10:05:35 -04:00
Cecylia Bocovich
bbf11a97e4 Reduce SnowflakeTimeout to 20 seconds
The underlying smux layer sends a keep-alive ping every 10 seconds. This
modification will allow for one dropped/delayed ping before discarding
the snowflake
2020-05-07 09:42:09 -04:00
David Fifield
7043a055f9 Reduce DataChannelTimeout from 30s to 10s.
https://bugs.torproject.org/34042
2020-05-04 19:43:48 -06:00
David Fifield
c8293a5de3 Format the establishDataChannel error log message like other log messages.
It was sticking out in the context of other log messages.

2020/04/30 22:39:10 WebRTC: DataChannel created.
2020/04/30 22:39:20 establishDataChannel: timeout waiting for DataChannel.OnOpen
2020/04/30 22:39:20 WebRTC: closing PeerConnection
2020/04/30 22:39:20 WebRTC: Closing
2020/04/30 22:39:20 WebRTC: WebRTC: Could not establish DataChannel  Retrying in 10s...
2020-05-01 10:30:04 -06:00
David Fifield
72cfb96ede Restore check for nil writePipe in WebRTCPeer.Close.
I removed this check in 047d3214bf because
NewWebRTCPeer always initializes writePipe, and it is never reset to
nil. However tests used &WebRTCPeer{} which bypasses NewWebRTCPeer and
leaves writePipe set to nil.

https://bugs.torproject.org/34049#comment:3
https://bugs.torproject.org/34050
2020-04-28 11:47:34 -06:00
Cecylia Bocovich
5e8f9ac538 Update proxy tests to check serialization errors 2020-04-28 13:01:32 -04:00
Cecylia Bocovich
1d2df3cd71 Update calls to session description utils in proxy 2020-04-28 12:55:58 -04:00
David Fifield
047d3214bf Wait for data channel OnOpen before returning from NewWebRTCPeer.
Now callers cannot call Write without there being a DataChannel to write
to. This lets us remove the internal buffer and checks for transport ==
nil.

Don't set internal fields like writePipe, transport, and pc to nil when
closing; just close them and let them return errors if further calls are
made on them.

There's now a constant DataChannelTimeout that's separate from
SnowflakeTimeout (the latter is what checkForStaleness uses). Now we can
set DataChannel timeout to a lower value, to quickly dispose of
unconnectable proxies, while still keeping the threshold for detecting
the failure of a once-working proxy at 30 seconds.

https://bugs.torproject.org/33897
2020-04-27 18:48:00 -06:00
David Fifield
e8c41650ae Move establishDataChannel to after exchangeSDP. 2020-04-27 18:48:00 -06:00
David Fifield
85277274fd Make exchangeSDP into a standalone function. 2020-04-27 18:48:00 -06:00
David Fifield
8295c87fbe Make preparePeerConnection a standalone function. 2020-04-27 18:48:00 -06:00
David Fifield
81d14ad33a Make WebRTCPeer.preparePeerConnection block.
Formerly, preparePeerConnection set up a callback that sent into a
channel, and exchangeSDP waited until it could receive from the channel.
We can move the channel entirely into preparePeerConnection (having it
not return until the callback has been called) and that way remove some
shared state.
2020-04-27 18:48:00 -06:00
David Fifield
5787d5b8b0 Simplify WebRTCPeer.exchangeSDP.
No need to run sendOfferToBroker in a goroutine.
2020-04-27 18:48:00 -06:00
David Fifield
8caa737700 Remove SnowflakeDataChannel interface.
Use *webrtc.DataChannel directly.
2020-04-27 18:48:00 -06:00
David Fifield
32207d6f06 Eliminate separate WebRTCPeer.Connect method.
Do it as a side effect of NewWebRTCPeer.

Remove WebRTCPeer tests as they currently require invasively modifying
internal fields at different stages of construction.
2020-04-27 18:47:59 -06:00
David Fifield
b48fb781ee Have util.{Serialize,Deserialize}SessionDescription return an error
https://bugs.torproject.org/33897#comment:4
2020-04-27 18:46:56 -06:00
David Fifield
76732155e7 Remove Snowflake interface, use *WebRTCPeer directly.
The other interfaces in client/lib/interfaces.go exist for the purpose
of running tests, but not Snowflake. Existing code would not have worked
with other types anyway, because it does unchecked .(*WebRTCPeer)
conversions.
2020-04-27 17:51:21 -06:00
David Fifield
d9b076c32e Don't do a separate check for a short write.
A short write will result in a non-nil error. It's an io.PipeWriter
anyway, which blocks until all the data has been read or the read end is
closed, in which case it returns io.ErrClosedPipe if not some other
error.
2020-04-27 17:49:38 -06:00
David Fifield
51bb49fa6f Move pc.CreateOffer/pc.SetLocalDescription out of a goroutine.
This allows us to remove the internal errorChannel.
2020-04-27 17:47:14 -06:00
David Fifield
3520f4e8b9 Simplify Peers.Pop. 2020-04-24 15:45:15 -06:00
David Fifield
17c0d0ff82 Remove unused Resetter interface.
WaitForReset is not used since 70126177fb.
2020-04-24 13:31:04 -06:00
David Fifield
6c2e3adc41 Disable trickle ICE.
https://bugs.torproject.org/33984

OnICEGatheringStateChange is no longer called when candidate gathering
is complete. SetLocalDescription kicks off the gathering process.

https://bugs.torproject.org/28942#comment:28
https://bugs.torproject.org/33157#comment:2
2020-04-24 10:38:27 -06:00
David Fifield
73173cb698 Simplify BytesSyncLogger. 2020-04-23 21:38:44 -06:00
David Fifield
2853fc9362 Make BytesSyncLogger's implementation details internal.
Provide NewBytesSyncLogger that returns an opaque data structure.
Automatically start up the logging loop goroutine in NewBytesSyncLogger.
2020-04-23 21:38:44 -06:00
David Fifield
9a4e3e7bd9 Remove unused BytesSyncLogger.IsLogging. 2020-04-23 21:38:44 -06:00
David Fifield
d376d7036b Make WebRTCPeer and Peers not inherit the methods of BytesLogger.
You would have been able to do, for example,
snowflake.(*WebRTCPeer).AddInbound(...).
2020-04-23 21:38:44 -06:00
David Fifield
65ecb798ca Update a comment (no signal pipe anymore). 2020-04-23 20:36:55 -06:00
David Fifield
2f52217d2f Restore go 1.13 to go.mod, lost in the turbotunnel merge. 2020-04-23 17:08:49 -06:00
David Fifield
2022496d3b Use a global RedialPacketConn and smux.Session.
This allows multiple SOCKS connections to share the available proxies,
and in particular prevents a SOCKS connection from being starved of a
proxy when the maximum proxy capacity is less than the number of SOCKS
connections.

This is option 4 from https://bugs.torproject.org/33519.
2020-04-23 16:03:03 -06:00
David Fifield
0790954020 USERADDR support for turbotunnel sessions.
The difficulty here is that the whole point of turbotunnel sessions is
that they are not necessarily tied to a single WebSocket connection, nor
even a single client IP address. We use a heuristic: whenever a
WebSocket connection starts that has a new ClientID, we store a mapping
from that ClientID to the IP address attached to the WebSocket
connection in a lookup table. Later, when enough packets have arrived to
establish a turbotunnel session, we recover the ClientID associated with
the session (which kcp-go has stored in the RemoteAddr field), and look
it up in the table to get an IP address. We introduce a new data type,
clientIDMap, to store the clientID-to-IP mapping during the short time
between when a WebSocket connection starts and handleSession receives a
fully fledged KCP session.
2020-04-23 16:03:02 -06:00
David Fifield
70126177fb Turbo Tunnel client and server.
The client opts into turbotunnel mode by sending a magic token at the
beginning of each WebSocket connection (before sending even the
ClientID). The token is just a random byte string I generated. The
server peeks at the token and, if it matches, uses turbotunnel mode.
Otherwise, it unreads the token and continues in the old
one-session-per-WebSocket mode.
2020-04-23 16:02:56 -06:00
David Fifield
222ab3d85a Import Turbo Tunnel support code.
Copied and slightly modified from
https://gitweb.torproject.org/pluggable-transports/meek.git/log/?h=turbotunnel&id=7eb94209f857fc71c2155907b0462cc587fc76cc
https://github.com/net4people/bbs/issues/21

RedialPacketConn is adapted from clientPacketConn in
c64a61c6da/obfs4proxy/turbotunnel_client.go
https://github.com/net4people/bbs/issues/14#issuecomment-544747519
2020-04-23 14:00:03 -06:00
David Fifield
904af9cb8a Let copyLoop exit when either direction finishes.
Formerly we waited until *both* directions finished. What this meant in
practice is that when the remote connection ended, copyLoop would become
useless but would continue blocking its caller until something else
finally closed the socks connection.
2020-04-23 14:00:03 -06:00
David Fifield
ee2fb42d33 Immediately and unconditionally grant new SOCKS connections. 2020-04-23 14:00:03 -06:00
Cecylia Bocovich
e9b218a65c Clean up .gitignore 2020-04-22 11:11:23 -04:00
Cecylia Bocovich
20180dcb04 Rename proxy-go/ directory to proxy/
Now that the web proxies are in a different repository, no need to
distinguish the two.
2020-04-22 11:11:16 -04:00
Cecylia Bocovich
3ff04c3c65 Update .travis.yml for proxy/ code removal 2020-04-22 11:07:57 -04:00
Cecylia Bocovich
da01bf2323 Remove web proxy instructions from README.md 2020-04-22 11:07:53 -04:00
Cecylia Bocovich
51b0b7ed2e Remove proxy/ subdirectory
We're moving all web proxy code to a different repository.
2020-04-16 10:01:18 -04:00
Cecylia Bocovich
6f89fc14f6 Remove proxy/translation submodule
We're moving all web proxy code to another repository.
2020-04-16 10:01:18 -04:00
David Fifield
8eef3b6348 Remove uniuri dependency.
https://bugs.torproject.org/33800
2020-04-03 17:52:44 -06:00
David Fifield
237fed1151 Update GitHub issue numbers to Trac ticket numbers. 2020-04-02 12:36:09 -06:00
Cecylia Bocovich
ea01bf41c3 Change dummy address for snowflake
This will prevent a bug where tor skips bandwidth events for local
addresses (see https://bugs.torproject.org/33693)
2020-04-01 12:55:37 -04:00
Arlo Breault
1867f89562 Remove local LAN address ICE candidates in proxy-go answer
Trac: 19026
2020-03-26 14:04:29 -04:00
Arlo Breault
670e4ba438 Move StripLocalAddresses to a common util
Trac: 19026
2020-03-26 13:13:15 -04:00
Arlo Breault
5fa7578655 Rename logToStateDir/keepLocalAddresses to kebab case
https://en.wikipedia.org/wiki/Letter_case#Special_case_styles
2020-03-25 11:53:24 -04:00
Arlo Breault
f58c865d82 Add unsafe logging 2020-03-25 11:53:24 -04:00
Cecylia Bocovich
e521a7217a Update license 2020-03-19 15:40:11 -04:00
Arlo Breault
d10af300c1 Refactor (De)SerializeSessionDescription as common utils 2020-03-17 20:16:58 -04:00
Cecylia Bocovich
c11461d339 Update go.mod and go.sum 2020-03-17 14:22:20 -04:00
Cecylia Bocovich
6054c09949 Remove the abandoned server-webrtc test code
This existed solely for testing purposes and is no longer being
maintained.
2020-03-17 14:16:57 -04:00
Cecylia Bocovich
58b52eb9f7 Remove go get commands from travis.yml
We no longer need standalone get commands now that we are using go
modules.
2020-03-05 09:21:17 -05:00
Cecylia Bocovich
920f6791f3 Add a go.mod and go.sum for snowflake 2020-03-05 09:21:17 -05:00
Cecylia Bocovich
03315dde02 bump version to 0.2.2 2020-03-04 16:20:34 -05:00
David Fifield
125e71fa6e Remove the now-unused appengine directory.
https://bugs.torproject.org/33429
2020-02-29 17:29:28 -07:00
Cecylia Bocovich
2e9e807178 Remove unnecessary log messages
Ever since we started scrubbing log messages with the help of regexes
for https://bugs.torproject.org/21304, logging has become more CPU
intensive due to our use of regular expressions.

Logging the byte count of every incoming and outgoing message at the
proxy-go instances was taking up a lot of CPU and contributing to the
high CPU usage seen in https://bugs.torproject.org/33211.
2020-02-25 18:08:34 -05:00
David Fifield
c2a12c25d1 Update appengine for the Go 1.11 runtime.
https://cloud.google.com/appengine/docs/standard/go111/go-differences
This is untested, because I wasn't actually able to deploy without
enabling Cloud Build and setting up a billing account.
2020-02-24 00:15:54 -07:00
David Fifield
c124e8c643 In server, treat a client IP address of 0.0.0.0 as missing.
Some proxies currently send ?client_ip=0.0.0.0 because of an error in
how they attempt to grep the address from the client's SDP. That's
inflating our "%d/%d connections had client_ip" logs. Instead, treat
these cases as if the IP address were absent.
https://bugs.torproject.org/33157
https://bugs.torproject.org/33385
2020-02-22 16:13:17 -07:00
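The check this commit describes can be sketched as follows; `normalizeClientIP` is a hypothetical helper for illustration, not the actual server code:

```go
package main

import (
	"fmt"
	"net"
)

// normalizeClientIP treats an all-zero client_ip (the unspecified
// address) as if the proxy had sent no address at all, so it no longer
// inflates the "%d/%d connections had client_ip" counts.
// Hypothetical helper for illustration only.
func normalizeClientIP(s string) string {
	ip := net.ParseIP(s)
	if ip == nil || ip.IsUnspecified() {
		return ""
	}
	return s
}

func main() {
	fmt.Printf("%q\n", normalizeClientIP("0.0.0.0"))
	fmt.Printf("%q\n", normalizeClientIP("192.0.2.1"))
}
```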
David Fifield
380b133155 Close internal Pipes in websocketconn.Conn Close.
Unless something externally called Write after Close, the
writeLoop(ws, pr2) goroutine would run forever, because nothing would
ever close pw2/pr2.
https://bugs.torproject.org/33367#comment:4
2020-02-18 14:10:47 -07:00
Arlo Breault
1220853a67 Restructure a bit based on review 2020-02-08 10:13:40 -05:00
Arlo Breault
846473b354 Unmarshal the SDP to filter attributes
Instead of string manipulation.
2020-02-08 10:13:40 -05:00
Arlo Breault
0fae4ee8ea Remove local LAN address ICE candidates
Unfortunately, the "public" RTCIceTransportPolicy was removed.

https://developer.mozilla.org/en-US/docs/Web/API/RTCConfiguration#RTCIceTransportPolicy_enum

Trac: 19026
2020-02-08 10:13:40 -05:00
Arlo Breault
28cf70bb44 Remove unreachable code
go vet was complaining,

common/websocketconn/websocketconn.go:56:2: unreachable code
2020-02-08 10:12:43 -05:00
David Fifield
ca9ae12c38 Simplify a conditional. 2020-02-04 22:35:12 -07:00
David Fifield
256959ca65 Implement net.Conn for websocketconn.Conn.
We had already implemented Read, Write, and Close. Pass RemoteAddr,
LocalAddr, SetReadDeadline, and SetWriteDeadline through to the
underlying *websocket.Conn. Implement SetDeadline by calling both
SetReadDeadline and SetWriteDeadline.

https://bugs.torproject.org/33144
2020-02-04 15:53:15 -07:00
David Fifield
01e28aa460 Rewrite websocketconn with synchronous pipes.
Makes the following changes:
 * permits concurrent Read/Write/Close
 * converts certain CloseErrors into io.EOF

https://bugs.torproject.org/33144
2020-02-04 15:53:15 -07:00
David Fifield
5708a1d57b websocketconn tests.
https://bugs.torproject.org/33144
2020-02-04 15:53:15 -07:00
Cecylia Bocovich
310890aa14 bump version to 0.2.1 2020-02-03 09:49:34 -05:00
David Fifield
564d1c8363 Remove unused maxMessageSize constant. 2020-01-31 00:15:11 -07:00
David Fifield
a2292ce35b Make timeout constants into time.Duration values.
This slightly changes some log messages.
2020-01-31 00:08:50 -07:00
David Fifield
dfb83c6606 Allow handling multiple SOCKS connections simultaneously.
Close the SOCKS connection in the same function that opens it.
2020-01-30 10:18:23 -07:00
David Fifield
20ac2029fd Have websocketconn.New return a pointer.
This makes the return type satisfy the io.ReadWriteCloser interface
directly.
2020-01-30 10:18:23 -07:00
David Fifield
e47dd5e2b4 Remove some redundancy in websocketconn naming.
Rename websocketconn.WebSocketConn to websocketconn.Conn, and
       websocketconn.NewWebSocketConn to websocketconn.New

Following the guidelines at
https://blog.golang.org/package-names#TOC_3%2e
2020-01-30 10:18:23 -07:00
David Fifield
5b01df9030 Initialize the global upgrader.CheckOrigin statically.
Only once, not again on every call to initServer.
2020-01-30 10:18:23 -07:00
David Fifield
a4287095c0 Also show message in the "error copying WebSocket to ORPort" case.
This was the only case out of the three not to show it.
2020-01-30 10:17:15 -07:00
Cecylia Bocovich
50673d4943 Remove client test with nil broker
We are no longer checking for nil BrokerChannels in Catch because this
case is caught from the return values of NewBrokerChannel. This change
caused a now-unnecessary unit test to hang.
2020-01-29 11:40:29 -05:00
Cecylia Bocovich
7682986a45 Update client tests for NewBrokerChannel errors
We changed NewBrokerChannel to return an error value on failure. This
updates the tests to check that value.
2020-01-29 11:27:44 -05:00
David Fifield
57d4b0b5bd Use lowercase variable names in copyLoop. 2020-01-28 03:04:33 -07:00
David Fifield
bc5498cb4b Fix the order of arguments of client copyLoop to match the call.
The call was
	copyLoop(socks, snowflake)
but the function signature was
	func copyLoop(WebRTC, SOCKS io.ReadWriter) {

The mistake was mostly harmless, because both arguments were treated the
same, except that error logs would have reported the wrong direction.
2020-01-28 03:04:14 -07:00
David Fifield
db1ba4791b Simplify NewWebRTCDialer. 2020-01-27 20:53:27 -07:00
David Fifield
2fb52c8639 Check for an invalid broker URL at a higher level.
Instead of returning nil from NewBrokerChannel and having
WebRTCDialer.Catch check for nil, let NewBrokerChannel return an error
and bail out before calling WebRTCDialer.Catch.

Suggested by cohosh.
https://bugs.torproject.org/33040#comment:3
2020-01-27 20:50:26 -07:00
David Fifield
f1ab65b1c0 Close the melt channel, don't just send once on it.
Closing the channel makes it always immediately selectable.
2020-01-23 11:24:00 -07:00
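The difference is easy to demonstrate; `receiveCount` is an illustrative helper, not code from this repository:

```go
package main

import "fmt"

// receiveCount attempts n non-blocking receives on ch and reports how
// many succeeded. On a closed channel, every receive succeeds
// immediately; after a single send, only one does.
func receiveCount(ch chan struct{}, n int) int {
	count := 0
	for i := 0; i < n; i++ {
		select {
		case <-ch:
			count++
		default:
		}
	}
	return count
}

func main() {
	melt := make(chan struct{})
	close(melt) // closed: always immediately selectable
	fmt.Println(receiveCount(melt, 3))

	sentOnce := make(chan struct{}, 1)
	sentOnce <- struct{}{} // a single send satisfies only one receive
	fmt.Println(receiveCount(sentOnce, 3))
}
```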
David Fifield
febb4936f6 Refactor SOCKS-related logging. 2020-01-23 11:24:00 -07:00
David Fifield
aa3999857f Move ICE server logging out of parseIceServers. 2020-01-23 11:24:00 -07:00
David Fifield
509f634506 NewWebRTCDialer cannot return an error. 2020-01-23 11:24:00 -07:00
David Fifield
d6467ff585 Formatting improvements. 2020-01-23 10:43:31 -07:00
David Fifield
e27709080a Update a comment: we no longer keep track of handlers. 2020-01-23 10:42:35 -07:00
David Fifield
5ff75e1034 Remove erroneous logging around pt.*Error calls.
These functions are called for their side effect of sending a PT error
message on stdout; they also return a representation of the error
message as an error object for the caller to use if it wishes. These
functions *always* return a non-nil error object; it is not something to
be logged, any more than the return value of errors.New is.

The mistaken logging was added in
https://bugs.torproject.org/31794
b26c7a7a73
3ec9dd19fa
ed3d42e1ec
2020-01-20 23:57:31 -07:00
Jascha
37aaaffa15 proxy/make.js: add help output 2019-12-13 16:17:43 -07:00
Arlo Breault
1e45d48a3c Document setting the proxyType for metrics
Trac: 32499
2019-12-06 17:54:54 -05:00
Arlo Breault
af4cc52dc2 Add a build step / documentation for code reuse
Trac: 32499
2019-12-06 17:19:46 -05:00
Cecylia Bocovich
3bdcc3408e Increased test coverage for messages library 2019-12-06 11:30:34 -05:00
Cecylia Bocovich
0f99c5ab12 Touched up snowflake client tests
There were a few tests that needed refreshing since the introduction of
the pion library. Also added a few tests for the ICE server parsing
function in the client.
2019-12-06 11:30:34 -05:00
Cecylia Bocovich
dabdd847ce Expanded snowflake server tests
Now tests the proxy and initServer functionalities. The tests use the
same websocket library as the server and proxy-go implementations.
2019-12-06 11:28:41 -05:00
Cecylia Bocovich
06298eec73 Added another lock to protect broker stats
Added another lock to the metrics struct to synchronize accesses to the
broker stats. There's a possible race condition if stats are updated at
the same time they are being logged.
2019-12-05 10:17:20 -05:00
Cecylia Bocovich
42e16021c4 Add tests to check for data race in broker
We had some data races in the broker that occur when proxies and clients
modify the heap/snowflake map at the same time. This test has a client
and proxy access the broker simultaneously to check for data races.
2019-12-05 10:16:34 -05:00
Cecylia Bocovich
dccc15a6e9 Add synchronization to prevent race in broker
There's a race condition in the broker where both the proxy and the
client processes try to pop/remove the same snowflake from the heap.
This patch adds synchronization to prevent simultaneous accesses to
snowflakes.
2019-12-05 09:47:26 -05:00
Cecylia Bocovich
07f2cd8073 bump version to 0.2.0 2019-12-03 14:09:05 -05:00
Cecylia Bocovich
94de69aa36 Updated broker specification and comments 2019-11-28 13:52:58 -05:00
Cecylia Bocovich
97554e03e4 Updated proxyType variable name for readability 2019-11-28 13:52:58 -05:00
Cecylia Bocovich
981abffbd9 Add proxy type to stats exported by broker 2019-11-28 13:52:58 -05:00
Cecylia Bocovich
8ab81fc6cd Update proxy config to take proxy type
This allows badge and standalone proxies to tell the broker what proxy
type they are.
2019-11-28 13:52:58 -05:00
Cecylia Bocovich
7277bb37cd Update broker--proxy protocol with proxy type
Proxies now include information about what type they are when they poll
for client offers. The broker saves this information along with
snowflake ids and outputs it on the /debug page.
2019-11-28 13:52:58 -05:00
Arlo Breault
7092b2cb2c Revert abstracting copyloop 2019-11-21 19:33:39 -05:00
Arlo Breault
30b5ef8a9e Use gorilla websocket in proxy-go too
Trac: 32465
2019-11-20 19:33:28 -05:00
Cecylia Bocovich
7557e96a8d Remove unnecessary logging at broker 2019-11-13 15:01:03 -05:00
Cecylia Bocovich
742070a7fb Clean up proxy-go tests 2019-11-13 14:31:55 -05:00
Cecylia Bocovich
459286c143 Test proxy-go interactions with broker 2019-11-13 13:57:17 -05:00
Cecylia Bocovich
446f39a9e5 Use http.RoundTripper for connections to broker
This change makes it easier for us to write tests with mock transports
2019-11-13 13:57:14 -05:00
Cecylia Bocovich
574c57cc98 Created tests for proxy-go utility functions 2019-11-13 13:57:11 -05:00
Cecylia Bocovich
32bec89a84 Add tests for session description functions
Also removed some unnecessary code
2019-11-13 13:57:06 -05:00
Cecylia Bocovich
3ec2e8b89e Renamed existing test file 2019-11-13 13:57:02 -05:00
Cecylia Bocovich
2f37a73e71 bump version to 0.1.0 2019-11-13 13:36:30 -05:00
Cecylia Bocovich
a7040e2eee Update travis to use go v1.13.x 2019-11-13 11:39:33 -05:00
Cecylia Bocovich
b4b538a17f Implemented new broker messages for browser proxy 2019-11-13 10:54:48 -05:00
Cecylia Bocovich
c4ae64905b Redo protocol for proxy--broker messages
Switch to containing all communication between the proxy and the broker
in the HTTP response body. This will make things easier if we ever use
something other than HTTP to communicate between different actors in the
snowflake system.

Other changes to the protocol are as follows:
- requests are accompanied by a version number so the broker can be
backwards compatible if desired in the future
- all responses are 200 OK unless the request was badly formatted
2019-11-13 10:54:48 -05:00
Arlo Breault
abefae1587 Restore sending close message before closing
And simplify EOF check.
2019-11-11 17:20:00 -05:00
Arlo Breault
c417fd5599 Stop using custom websocket library in server
Trac: 31028
2019-11-11 17:20:00 -05:00
Cecylia Bocovich
300a23c6a0 Changed variable name for multiplexed clients
The variable maxNumClients was unused, while connectionsPerClient was
used for spawning multiple proxyPairs. The former is a more appropriate
name for the multiplexing behaviour we use it for.

Multiplexing now just works thanks to implementing ticket #31310.
2019-10-31 12:08:43 -04:00
Cecylia Bocovich
64b66c855f Moved function comments to their definitions
Increases readability of the code a bit; the function descriptions were
automatically placed in the constructor when we moved from CoffeeScript.
2019-10-31 11:59:13 -04:00
Cecylia Bocovich
789285e0df Remove "active" property of proxyPairs
Use their existence in the proxy pair list to indicate they are active.
2019-10-31 11:59:13 -04:00
Cecylia Bocovich
d186fcd401 Remove property "running" from proxy-pair
We don't need it, and already have a function webrtcIsReady that tells
us what we need to know (whether a datachannel was opened before the
timeout period).
2019-10-31 11:59:13 -04:00
Cecylia Bocovich
9b470fbe4b Removed "janky" snowflake state machine
The only place it was used was in window.onpageunload, and we have a
better way of determining if the proxy is active there (through the ui).

I also removed that code from the webextension since the proxy won't
stop running unless you close the browser and after testing it looks
like that code doesn't notify the user anyway.
2019-10-31 11:59:13 -04:00
Cecylia Bocovich
338f1792b8 bump version to 0.0.13 2019-10-28 10:55:51 -04:00
David Fifield
e408988387 Increase proxy poll interval to 300 s.
https://bugs.torproject.org/32129
2019-10-28 10:51:49 -04:00
Cecylia Bocovich
11bd32f62e Remove now unnecessary timeoutConn 2019-10-25 17:12:45 -04:00
Cecylia Bocovich
76087a6a77 Don't log error messages from SetDeadline
We haven't implemented SetDeadline for webRTCConn and the error messages
are misleading to proxy-go operators.
2019-10-25 15:34:41 -04:00
Cecylia Bocovich
da8b98d090 Include language name along with code
Use npm cldr package to get the language name that corresponds to the
country code for the language switcher
2019-10-16 12:32:45 -04:00
Cecylia Bocovich
93d3564109 A few minor fixes to website
- cut down on size of bootstrap.css file
- remove unnecessary styles
- fixed typo in javascript comment
2019-10-16 12:32:45 -04:00
Cecylia Bocovich
ab96817381 Added a language switcher for snowflake.tp.o
Also modified the styling of the page to match the main tp.o page a bit
more
2019-10-16 12:32:45 -04:00
Cecylia Bocovich
f6517f60ce Hook up localized messages.json to website
Right now we use the navigator language to determine localization and
replace the website contents with translated strings.
2019-10-16 12:32:45 -04:00
Cecylia Bocovich
9140c7648c Switched to absolute paths for resources
This will make it easier to have translated copies of the site in
subdirectories
2019-10-16 12:32:45 -04:00
Cecylia Bocovich
7fe4e2910c Translate snowflake@tp.o website
Switched to using messages.json for translation strings for
snowflake@tp.o
2019-10-16 12:32:45 -04:00
Cecylia Bocovich
d064e54db9 bump version to 0.0.12 2019-10-16 10:30:20 -04:00
Cecylia Bocovich
b9138d0c7e Make sure we close peer connections in proxy
Not closing peer connections was causing UDP sockets to remain open
indefinitely (as reported in ticket #31285).
2019-10-16 10:26:51 -04:00
Cecylia Bocovich
f74da6e0fc Update try catch blocks to revert changes on error
A failure to set the git tag returns early and undoes the changes made
previously
2019-10-16 10:23:54 -04:00
Cecylia Bocovich
6e6e52fd8c Added packaging script for webextension
Added a new script to package the webextension. This will automatically
build and zip the source code and the webextension for upload. It takes
a version as an argument, checks the version in the manifest, and
locally commits a version bump.
2019-10-16 10:23:54 -04:00
David Fifield
b4f4b29a03 Stop counting handlers before terminating.
The requirement to do so is obsolete and has already been removed from
other pluggable transports.

https://bugs.torproject.org/32046
2019-10-11 16:50:25 -06:00
Arlo Breault
d8d3170af8 Regenerate the ico files to reduce size
With,
convert -background transparent toolbar-off.svg -define icon:auto-resize=32 toolbar-off.ico
2019-10-11 13:18:51 -04:00
Arlo Breault
faf02d86a1 Update favicon with badge state on embed.html
.ico files were created with,
convert -density 256x256 -background transparent toolbar-on.svg -define icon:auto-resize -colors 256 toolbar-on.ico

Trac: 31537
2019-10-11 13:18:51 -04:00
David Fifield
5732f1a630 Add --chown=:snowflake to rsync commands.
Thanks cohosh for helping debug this. Uploaded files need correct group
ownership.
2019-10-11 10:37:06 -06:00
Cecylia Bocovich
61d8eb5ef0 bump version to 0.0.11 2019-10-11 10:40:56 -04:00
Shane Howearth
01156e58eb Remove unnecessary initialisation of last
last was initialised twice (creating a shadow), the second time inside
a case statement. The second initialisation is removed, keeping the use
of last aligned to the same style as its use in other parts of the case
statement.
2019-10-08 10:25:44 -04:00
Shane Howearth
8bbdb3b51a Bring code into line with Golangci-lint linters
- Error strings are no longer capitalized nor end with punctuation
- Alias import
- Remove extraneous initialisation code (no need to provide zero value
	for variables, because the compiler does that anyway)
2019-10-08 10:25:44 -04:00
Shane Howearth
b26c7a7a73 Handle generated errors in client 2019-10-08 10:25:44 -04:00
Shane Howearth
78a37844b2 Handle generated errors in proxy-go 2019-10-08 10:25:36 -04:00
Shane Howearth
3cfceb3755 Handle generated errors in broker 2019-10-08 10:13:29 -04:00
Shane Howearth
ed3d42e1ec Handle generated errors in server 2019-10-08 10:12:36 -04:00
Shane Howearth
3ec9dd19fa Handle generated errors in server-webrtc 2019-10-08 10:12:36 -04:00
Cecylia Bocovich
82e5753bcc Reverted logging changes that require Go 1.13 2019-10-08 09:58:12 -04:00
Cecylia Bocovich
18d793798c Updated snowflake client dependencies in README 2019-10-08 09:52:45 -04:00
Cecylia Bocovich
2bf4be71b6 Bumped Go version to access log.Writer 2019-10-08 09:27:52 -04:00
Cecylia Bocovich
2b04357550 Connect pion library logger with snowflake log
We need to set up the pion/webrtc logger to write output to the
snowflake log, otherwise the warnings we are getting from the pion
library are being lost.

Note: this requires go version 1.13 and later in order to use the
`log.Writer()` function.
2019-10-08 09:27:52 -04:00
Cecylia Bocovich
97bab94e67 Make sure command line ice servers are used
This commit fixes a small error introduced in a previous commit. Servers
given by command line options weren't being added to the configuration
because we were checking for `iceServers` to be nil instead of not nil.
2019-10-08 09:27:52 -04:00
Cecylia Bocovich
6cf53c4ef0 Update .travis.yml for new webrtc library 2019-10-08 09:27:52 -04:00
Cecylia Bocovich
b5c50b69d0 Ported snowflake client to work with pion/webrtc
Modified the snowflake client to use pion/webrtc as the webrtc library.
This involved a few small changes to match function signatures as well
as several larger ones:
- OnNegotiationNeeded is no longer supported, so CreateOffer and
SetLocalDescription have been moved to a go routine called after the
other peer connection callbacks are set
- We need our own deserialize/serialize functions
- We need to use a SettingEngine in order to access the
OnICEGatheringStateChange callback
2019-10-08 09:27:52 -04:00
Cecylia Bocovich
0428797ea0 Modified proxy-go to use pion/webrtc
The API is very similar, differences were mostly due to:
- closing peer connections and datachannels (no destroy/delete methods)
- different way to set datachannel/peer connection callbacks
- differences in whether functions take pointers or values
- no serialize/deserialize functions in the API
2019-10-08 09:27:52 -04:00
Cecylia Bocovich
9e22af90c1 Updated webextension translations 2019-10-04 13:52:07 -04:00
David Fifield
be7b531586 Remove obsolete status tracking section from README.
Noted by a blog commenter at
https://blog.torproject.org/comment/284258#comment-284258

In case the above link breaks, it's a comment attached to this post:
https://blog.torproject.org/new-release-tor-browser-90a7
2019-10-02 17:48:34 -06:00
Arlo Breault
36eb07a6fc Use a static label for the button
Trac: 31685
2019-10-01 14:27:19 -04:00
Arlo Breault
a5071ec1d6 Add a favicon
Trac: 31537
2019-09-30 19:18:52 -04:00
Arlo Breault
8d81270a9f Add bridge probe to badge 2019-09-30 18:42:57 -04:00
Arlo Breault
d4aa9ad2b3 Reorder enable checks
First check that it is enabled before doing feature testing.

This will be useful in the badge so that probing only happens if it is
enabled.
2019-09-30 18:42:57 -04:00
Arlo Breault
aa107862c5 Move probe to WS class for reuse in the badge 2019-09-30 18:42:57 -04:00
Arlo Breault
685c3bd262 Disable the webext if the bridge is unreachable 2019-09-30 18:42:57 -04:00
Arlo Breault
19bc6d8858 Move missingFeature to initToggle in webext 2019-09-30 18:42:57 -04:00
Cecylia Bocovich
3c28380bc6 Add locks to safelog
The safelog Write function can be called from multiple go routines, and
it was not thread safe. These locks in particular allow us to pass the
logscrubber's output io.Writer to other libraries, such as pion.
2019-09-30 16:43:51 -04:00
Cecylia Bocovich
f3be34a459 Removed extraneous log messages
Many of our log messages were being used to generate metrics, but are
now being aggregated and logged to a separate metrics log file and so we
don't need them in the regular logs anymore.

This addresses the goal of ticket #30830, to remove unnecessary messages
and keep broker logs for debugging purposes.
2019-09-19 16:48:14 -04:00
Cecylia Bocovich
b29b49fc1c Added a folder for documentation
Added a folder to hold snowflake specifications. This folder starts with
a file containing a partial broker spec that focuses on the metrics
reporting spec for CollecTor at the moment.
2019-09-16 14:29:16 -04:00
Arlo Breault
1b14810d34 Enforce consistent indentation in js 2019-08-27 18:19:51 -04:00
Cecylia Bocovich
00eb4aadf5 Modified broker /debug page to display counts only
The broker /debug page was displaying proxy IDs and roundtrip times. As
serna pointed out in bug #31460, the proxy IDs can be used to launch a
denial of service attack. As the metrics team pointed out on #21315, the
round trip time average is potentially sensitive.

This change displays only proxy counts and uses ID lengths to
distinguish between standalone proxy-go instances and browser-based
snowflake proxies.
2019-08-27 10:01:00 -04:00
emma peel
ea442141db remove exclamation mark. ref https://grammar.yourdictionary.com/punctuation/when/when-to-use-exclamation-marks.html 2019-08-26 15:19:20 -04:00
Arlo Breault
131cf4f8ea Add branch to .gitmodule + bump to bbf11bb
This allows you to run `git submodule update --remote` to bump to the
latest commit on that branch.
2019-08-26 15:14:17 -04:00
Arlo Breault
9faf8293e6 Bump proxy/translation to HEAD of snowflakeaddon-messages.json_completed 2019-08-26 15:14:17 -04:00
Arlo Breault
1c550599b8 Automate generating the list of available languages for the badge
Note that getMessage in the badge depends on having a complete set of
translations, unlike the webextension, which will fall back to the
default for a string.
2019-08-26 15:14:17 -04:00
Arlo Breault
1e33ae830f Get badge locale from navigator.language 2019-08-26 15:14:17 -04:00
Arlo Breault
9c20ab3984 Copy completed translations over when building 2019-08-26 15:14:17 -04:00
Arlo Breault
a0dd3d9edc Add translation submodule
At the head of the snowflakeaddon-messages.json_completed branch
2019-08-26 15:14:17 -04:00
Cecylia Bocovich
4b6871a24e Version bump for bug #31385 2019-08-26 09:16:47 -04:00
Cecylia Bocovich
16a1b69823 Added check for active pair in onopen
Because the timeout makes the pair inactive, we should check for this
state in onopen before connecting to the client. Updated tests to set
the proxy pair to active before testing onopen. Also removed a
redundant statement.
2019-08-26 09:15:38 -04:00
Cecylia Bocovich
8a5941daab Fix to check running status before closing proxy
This fixes a bug reported in #31385. There was an error with the proxy
deadlock fix in #31100 where we close proxies regardless of connection
status.
2019-08-26 09:15:38 -04:00
David Fifield
6be7bedd06 Add --chmod ug=rw,D+x --perms to rsync commands.
This is an attempt to solve mixed-ownership permission issues.
https://bugs.torproject.org/31496
2019-08-23 22:51:27 -06:00
David Fifield
1d6a98a40e Limit the maximum horizontal content width to 55rem. 2019-08-23 22:44:36 -06:00
David Fifield
dff07d6672 Use less side padding on small screens. 2019-08-23 22:43:10 -06:00
David Fifield
49f4a710f8 Use more semantic HTML. 2019-08-23 22:43:10 -06:00
David Fifield
1063ef7b1d Fix certain attributes to be pixel counts, not CSS dimensions.
Found these using https://validator.w3.org/.
2019-08-23 21:56:39 -06:00
David Fifield
3bcd60ad10 Update the iframe embed height to match the live example.
The live example changed from "200px" to "240px" in
4e5a50f2b5.
2019-08-23 21:56:23 -06:00
David Fifield
73174b4039 Add ids to more elements in static/index.html. 2019-08-23 18:31:10 -06:00
David Fifield
0ef7c6f1fa Bug 31453: use only SVG for the status images. 2019-08-19 12:44:30 -06:00
David Fifield
f9173f61a2 Make a dark-mode version of the arrowhead icon.
The former icon used fill="context-fill", which I believe doesn't work
except in Mozilla's own extensions. So I changed that one to
fill="black" and made a new one with fill="white".
2019-08-19 12:24:10 -06:00
David Fifield
251b6a26fa Change the "running" color to #68B030.
Not so light against a white background.
https://bugs.torproject.org/31170#comment:13
2019-08-19 12:24:10 -06:00
David Fifield
6ab50e32b9 Toolbar icons that work in both light and dark modes.
https://bugs.torproject.org/31170#comment:8

I chose these icons for the "on" and "off" icons:
toolbar_icon_purple.svg → toolbar-on.svg
toolbar_icon_grey.svg → toolbar-off.svg

I then made toolbar-running.svg by copying toolbar-off.svg and changing
the stroke and fill from #4A4A4F to #40E0D0.
2019-08-19 12:24:09 -06:00
David Fifield
36815bd57b Popup CSS for dark mode.
In Firefox, this requires version 67 for support for
prefers-color-scheme media queries.
https://hacks.mozilla.org/2019/05/firefox-67-dark-mode-css-webrender/
To force Firefox into dark mode, set ui.systemUsesDarkTheme=1 (and
optionally browser.in-content.dark-mode=true, to put pages such as
about:addons into dark mode as well) in about:config. You can check if
it's working at https://bugzilla.mozilla.org/, which has its own
dark-mode styling. Note that this kind of dark mode is *independent* of
the "Dark" theme that can be selected in about:addons.

Chrome requires version 76 for prefers-color-scheme. You can force it by
running with the --force-dark-mode command-line option.
2019-08-19 12:24:09 -06:00
David Fifield
1e6dd4d86f Redo the status-running icon to match the others.
This one was missing from the redesigned icons. I made it by making a
copy of status-on.svg and changing the fill from #8000D7 to #40E0D0.

I didn't make a separate dark-mode version of the icon.
2019-08-19 12:24:09 -06:00
David Fifield
7e2936dcec Dark-mode images from Antonela.
https://bugs.torproject.org/31170#comment:3

Also revises the light-mode images.
2019-08-19 12:24:09 -06:00
Cecylia Bocovich
0aef40100a Implemented handler to fetch broker stats
This implements a handler at https://[snowflake-broker]/metrics for the
snowflake collecTor module to fetch stats from the broker. Logged
metrics are copied out to the response with a text/plain; charset=utf-8
content type. This implements bug #31376.
2019-08-16 09:12:49 -04:00
Arlo Breault
4e5a50f2b5 Start localization
Trac 30310
2019-08-15 17:15:37 -04:00
Cecylia Bocovich
f94ef87c46 Increase webextension poll period
Raise the webextension poll period from 5 to 20 seconds (bug 31200).
2019-08-12 13:14:25 -04:00
Cecylia Bocovich
0b55fd307a Version bump for webextension 2019-08-08 11:11:56 -04:00
Cecylia Bocovich
e77baabdcf Add a timeout to check if datachannel opened
This is similar to the deadlock bug in the proxy-go instances. If the
proxy-pair sends an answer to the broker, it previously assumed that the
datachannel would be opened and the pair reused only once the
datachannel closed. However, sometimes the datachannel never opens due
to ICE errors or a misbehaving/buggy client, causing the proxy to loop
infinitely and the proxy-pair to remain active.

This commit reuses the pair.running attribute to indicate whether or not
the datachannel has been opened and sets a timeout to close the
proxy-pair if it has not been opened by that time.
2019-08-08 10:36:28 -04:00
Cecylia Bocovich
6cc944f2b4 Reuse proxypair if sendAnswer fails
Make sure to set proxypair.active to false if createAnswer or
setLocalDescription fails. This should prevent one edge case that results
in an infinite loop described in ticket #31100.
2019-08-08 10:36:28 -04:00
David Fifield
990047b2f5 Control statusimg using CSS, rather than setting an img src. 2019-07-31 19:09:46 -06:00
David Fifield
8f885c7557 Set an "error" class instead of hardcoding a text color. 2019-07-31 19:09:46 -06:00
David Fifield
8a56baa8e1 Identify popup elements by id. 2019-07-31 19:09:44 -06:00
Arlo Breault
e6f7633961 Remove mentions of snowflake.html
It was removed in e60f228 and aa27c05
2019-07-31 18:14:00 -04:00
Arlo Breault
b324d9d42f Move icons/ to assets/
There's a default alias for icons/ in apache,
https://www.electrictoolbox.com/apache-icons-directory/
2019-07-31 17:59:48 -04:00
Arlo Breault
5321223240 Use execSync in make.js
695554c highlighted the race here.
2019-07-31 16:43:56 -04:00
Arlo Breault
aa27c0556c Redirect removed snowflake.html 2019-07-31 15:49:21 -04:00
Arlo Breault
8de6e26c59 Remove Util.mightBeTBB
Trac 31222
2019-07-27 12:01:03 -04:00
Arlo Breault
5d26f76ba1 Brace expansion is a bashism 2019-07-27 12:01:03 -04:00
Arlo Breault
03512bfa29 Move more UI code to use specific sites 2019-07-27 12:01:03 -04:00
Arlo Breault
a164d61f16 Remove tests referring to BadgeUI
Since that's been overhauled.  The whole ui.spec.js file probably needs
redoing.
2019-07-27 12:01:03 -04:00
Arlo Breault
0f33546fec Clean up some linting errors 2019-07-27 12:01:03 -04:00
Arlo Breault
e60f22833a Reimagine the badge
Trac 27385
2019-07-27 12:01:03 -04:00
David Fifield
0bded511b9 Add a "Deploying" section to proxy/README.md. 2019-07-27 09:53:09 -06:00
David Fifield
695554cbc5 Make "npm run build" include .htaccess.
Formerly it was copying static/*, and the wildcard skipped the dotfile.
2019-07-27 09:42:12 -06:00
David Fifield
905f8b78c1 bamsoftware.com -> freehaven.net in proxy/README.md.
https://bugs.torproject.org/31250
2019-07-27 09:31:16 -06:00
Cecylia Bocovich
299c12b2e9 Version bump to fix issue with addon update 2019-07-26 10:45:50 -04:00
183 changed files with 16733 additions and 6228 deletions

16
.gitignore vendored
View file

@@ -6,14 +6,14 @@
datadir/
broker/broker
client/client
server-webrtc/server-webrtc
server/server
proxy-go/proxy-go
proxy/proxy
probetest/probetest
snowflake.log
proxy/test
proxy/build
proxy/node_modules
proxy/spec/support
proxy/webext/snowflake.js
ignore/
npm-debug.log
# from running the vagrant setup
/.vagrant/
/sdk-tools-linux-*.zip*
/android-ndk-*
/tools/

View file

@@ -1,29 +1,408 @@
image: golang:1.10-stretch
variables:
DOCKER_REGISTRY_URL: docker.io
cache:
paths:
- .gradle/wrapper
- .gradle/caches
stages:
- test
- deploy
- container-build
- container-mirror
before_script:
# Create symbolic links under $GOPATH, this is needed for local build
- export src=$GOPATH/src
- mkdir -p $src/git.torproject.org/pluggable-transports
- mkdir -p $src/gitlab.com/$CI_PROJECT_NAMESPACE
- ln -s $CI_PROJECT_DIR $src/git.torproject.org/pluggable-transports/snowflake.git
- ln -s $CI_PROJECT_DIR $src/gitlab.com/$CI_PROJECT_PATH
variables:
DEBIAN_FRONTEND: noninteractive
DEBIAN_OLD_STABLE: bullseye
DEBIAN_STABLE: bookworm
REPRODUCIBLE_FLAGS: -trimpath -ldflags=-buildid=
# Don't fail pulling images if dependency_proxy.yml is not included
DOCKER_REGISTRY_URL: "docker.io"
build:
script:
- apt-get -qy update
- apt-get -qy install libx11-dev
- cd $src/gitlab.com/$CI_PROJECT_PATH/client
- go get ./...
- go build ./...
- go vet ./...
- go test -v -race ./...
after_script:
# set up apt for automated use
.apt-template: &apt-template
- export LC_ALL=C.UTF-8
- export DEBIAN_FRONTEND=noninteractive
- ln -fs /usr/share/zoneinfo/Etc/UTC /etc/localtime
- echo 'quiet "1";'
'APT::Install-Recommends "0";'
'APT::Install-Suggests "0";'
'APT::Acquire::Retries "20";'
'APT::Get::Assume-Yes "true";'
'Dpkg::Use-Pty "0";'
> /etc/apt/apt.conf.d/99gitlab
- apt-get update
- apt-get dist-upgrade
# Set things up to use the OS-native packages for Go. Anything that
# is downloaded by go during the `go fmt` stage is not coming from the
# Debian/Ubuntu repo. So those would need to be packaged for this to
# make it into Debian and/or Ubuntu.
.debian-native-template: &debian-native-template
variables:
GOPATH: /usr/share/gocode
before_script:
- apt-get update
- apt-get -qy install --no-install-recommends
build-essential
ca-certificates
git
golang
golang-github-cheekybits-genny-dev
golang-github-jtolds-gls-dev
golang-github-klauspost-reedsolomon-dev
golang-github-lucas-clemente-quic-go-dev
golang-github-smartystreets-assertions-dev
golang-github-smartystreets-goconvey-dev
golang-github-tjfoc-gmsm-dev
golang-github-xtaci-kcp-dev
golang-github-xtaci-smux-dev
golang-golang-x-crypto-dev
golang-golang-x-net-dev
golang-goptlib-dev
golang-golang-x-sys-dev
golang-golang-x-text-dev
golang-golang-x-xerrors-dev
# use Go installed as part of the official, Debian-based Docker images
.golang-docker-debian-template: &golang-docker-debian-template
before_script:
- apt-get update
- apt-get -qy install --no-install-recommends
ca-certificates
git
.go-test: &go-test
- gofmt -d .
- test -z "$(go fmt ./...)"
- go vet ./...
- go test -v -race ./...
- cd $CI_PROJECT_DIR/client/
- go get
- go build $REPRODUCIBLE_FLAGS
.test-template: &test-template
artifacts:
name: "${CI_PROJECT_PATH}_${CI_JOB_STAGE}_${CI_JOB_ID}_${CI_COMMIT_REF_NAME}_${CI_COMMIT_SHA}"
paths:
- client/*.aar
- client/*.jar
- client/client
expire_in: 1 week
when: on_success
after_script:
- echo "Download debug artifacts from https://gitlab.com/${CI_PROJECT_PATH}/-/jobs"
# this file changes every time but should not be cached
- rm -f $GRADLE_USER_HOME/caches/modules-2/modules-2.lock
- rm -fr $GRADLE_USER_HOME/caches/*/plugin-resolution/
- rm -rf $GRADLE_USER_HOME/caches/*/plugin-resolution/
# -- jobs ------------------------------------------------------------
android:
image: ${DOCKER_REGISTRY_URL}/golang:1.24-$DEBIAN_OLD_STABLE
variables:
ANDROID_HOME: /usr/lib/android-sdk
LANG: C.UTF-8
cache:
paths:
- .gradle/wrapper
- .gradle/caches
<<: *test-template
before_script:
- *apt-template
- apt-get install
android-sdk-platform-23
android-sdk-platform-tools
build-essential
curl
default-jdk-headless
git
gnupg
unzip
wget
ca-certificates
- ndk=android-ndk-r21e-linux-x86_64.zip
- wget --continue --no-verbose https://dl.google.com/android/repository/$ndk
- echo "ad7ce5467e18d40050dc51b8e7affc3e635c85bd8c59be62de32352328ed467e $ndk" > $ndk.sha256
- sha256sum -c $ndk.sha256
- unzip -q $ndk
- rm ${ndk}*
- mv android-ndk-* $ANDROID_HOME/ndk-bundle/
- chmod -R a+rX $ANDROID_HOME
script:
- *go-test
- export GRADLE_USER_HOME=$CI_PROJECT_DIR/.gradle
- go version
- go env
- go get golang.org/x/mobile/cmd/gomobile
- go get golang.org/x/mobile/cmd/gobind
- go install golang.org/x/mobile/cmd/gobind
- go install golang.org/x/mobile/cmd/gomobile
- gomobile init
- cd $CI_PROJECT_DIR/client
# gomobile builds a shared library not a CLI executable
- sed -i 's,^package main$,package snowflakeclient,' *.go
- go get golang.org/x/mobile/bind
- gomobile bind -v -target=android $REPRODUCIBLE_FLAGS .
go-1.23:
image: ${DOCKER_REGISTRY_URL}/golang:1.23-$DEBIAN_STABLE
<<: *golang-docker-debian-template
<<: *test-template
script:
- *go-test
go-1.24:
image: ${DOCKER_REGISTRY_URL}/golang:1.24-$DEBIAN_STABLE
<<: *golang-docker-debian-template
<<: *test-template
script:
- *go-test
debian-testing:
image: containers.torproject.org/tpo/tpa/base-images/debian:testing
<<: *debian-native-template
<<: *test-template
script:
- *go-test
shadow-integration:
image: ${DOCKER_REGISTRY_URL}/golang:1.23-$DEBIAN_STABLE
variables:
SHADOW_VERSION: "27d0bcf2cf1c7f0d403b6ad3efd575e45ae93126"
TGEN_VERSION: "v1.1.2"
cache:
- key: sf-integration-shadow-$SHADOW_VERSION
paths:
- opt/shadow
- key: sf-integration-tgen-$TGEN_VERSION
paths:
- opt/tgen
artifacts:
paths:
- shadow.data.tar.gz
when: on_failure
tags:
- amd64
- tpa
script:
- apt-get update
- apt-get install -y git tor libglib2.0-0 libigraph3
- mkdir -p ~/.local/bin
- mkdir -p ~/.local/src
- export PATH=$PATH:$CI_PROJECT_DIR/opt/shadow/bin/:$CI_PROJECT_DIR/opt/tgen/bin/
# Install shadow and tgen
- pushd ~/.local/src
- |
if [ ! -f $CI_PROJECT_DIR/opt/shadow/bin/shadow ]
then
echo "The required version of shadow was not cached, building from source"
git clone --shallow-since=2021-08-01 https://github.com/shadow/shadow.git
pushd shadow/
git checkout $SHADOW_VERSION
CONTAINER=debian:stable-slim ci/container_scripts/install_deps.sh
CC=gcc CONTAINER=debian:stable-slim ci/container_scripts/install_extra_deps.sh
export PATH="$HOME/.cargo/bin:${PATH}"
./setup build --jobs $(nproc) --prefix $CI_PROJECT_DIR/opt/shadow
./setup install
popd
fi
- |
if [ ! -f $CI_PROJECT_DIR/opt/tgen/bin/tgen ]
then
echo "The required version of tgen was not cached, building from source"
git clone --branch $TGEN_VERSION --depth 1 https://github.com/shadow/tgen.git
pushd tgen/
apt-get install -y cmake libglib2.0-dev libigraph-dev
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$CI_PROJECT_DIR/opt/tgen
make
make install
popd
fi
install $CI_PROJECT_DIR/opt/tgen/bin/tgen ~/.local/bin/tgen
- popd
# Apply snowflake patch(es)
- |
git clone --depth 1 https://github.com/cohosh/shadow-snowflake-minimal
git am -3 shadow-snowflake-minimal/*.patch
# Install snowflake binaries to .local folder
- |
for app in "proxy" "client" "server" "broker" "probetest"; do
pushd $app
go build
install $app ~/.local/bin/snowflake-$app
popd
done
# Install stun server
- GOBIN=~/.local/bin go install github.com/gortc/stund@latest
# Run a minimal snowflake shadow experiment
- pushd shadow-snowflake-minimal/
- shadow --log-level=debug --model-unblocked-syscall-latency=true snowflake-minimal.yaml > shadow.log
# Check to make sure streams succeeded
- |
if [ $(grep -c "stream-success" shadow.data/hosts/snowflakeclient/tgen.*.stdout) = 10 ]
then
echo "All streams in shadow completed successfully"
else
echo "Shadow simulation failed"
exit 1
fi
after_script:
- tar -czvf $CI_PROJECT_DIR/shadow.data.tar.gz shadow-snowflake-minimal/shadow.data/ shadow-snowflake-minimal/shadow.log
generate_tarball:
stage: deploy
image: ${DOCKER_REGISTRY_URL}/golang:1.22-$DEBIAN_STABLE
rules:
- if: $CI_COMMIT_TAG
script:
- go mod vendor
- tar czf ${CI_PROJECT_NAME}-${CI_COMMIT_TAG}.tar.gz --transform "s,^,${CI_PROJECT_NAME}-${CI_COMMIT_TAG}/," *
after_script:
- echo TAR_JOB_ID=$CI_JOB_ID >> generate_tarball.env
artifacts:
paths:
- ${CI_PROJECT_NAME}-${CI_COMMIT_TAG}.tar.gz
reports:
dotenv: generate_tarball.env
release-job:
stage: deploy
image: registry.gitlab.com/gitlab-org/release-cli:latest
rules:
- if: $CI_COMMIT_TAG
needs:
- job: generate_tarball
artifacts: true
script:
- echo "running release_job"
release:
name: 'Release $CI_COMMIT_TAG'
description: 'Created using the release-cli'
tag_name: '$CI_COMMIT_TAG'
ref: '$CI_COMMIT_TAG'
assets:
links:
- name: '${CI_PROJECT_NAME}-${CI_COMMIT_TAG}.tar.gz'
url: '${CI_PROJECT_URL}/-/jobs/${TAR_JOB_ID}/artifacts/file/${CI_PROJECT_NAME}-${CI_COMMIT_TAG}.tar.gz'
# Build the container only if the commit is to main, or it is a tag.
# If the commit is to main, then the docker image tag should be set to `nightly`.
# If it is a tag, then the docker image tag should be set to the tag name.
build-container:
variables:
TAG: $CI_COMMIT_TAG # Will not be set on a non-tag build, will be set later
stage: container-build
parallel:
matrix:
- ARCH: amd64
- ARCH: arm64
tags:
- $ARCH
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: [""]
script:
- if [ $CI_COMMIT_REF_NAME == "main" ]; then export TAG='nightly'; fi
- >-
/kaniko/executor
--context "${CI_PROJECT_DIR}"
--dockerfile "${CI_PROJECT_DIR}/Dockerfile"
--destination "${CI_REGISTRY_IMAGE}:${TAG}_${ARCH}"
rules:
- if: $CI_COMMIT_REF_NAME == "main"
- if: $CI_COMMIT_TAG
merge-manifests:
variables:
TAG: $CI_COMMIT_TAG
stage: container-build
needs:
- job: build-container
artifacts: false
image:
name: ${DOCKER_REGISTRY_URL}/mplatform/manifest-tool:alpine
entrypoint: [""]
script:
- if [ $CI_COMMIT_REF_NAME == "main" ]; then export TAG='nightly'; fi
- >-
manifest-tool
--username="${CI_REGISTRY_USER}"
--password="${CI_REGISTRY_PASSWORD}"
push from-args
--platforms linux/amd64,linux/arm64
--template "${CI_REGISTRY_IMAGE}:${TAG}_ARCH"
--target "${CI_REGISTRY_IMAGE}:${TAG}"
rules:
- if: $CI_COMMIT_REF_NAME == "main"
when: always
- if: $CI_COMMIT_TAG
when: always
# If this is a tag, then we want to additionally tag the image as `latest`
tag-container-release:
stage: container-build
needs:
- job: merge-manifests
artifacts: false
image:
name: gcr.io/go-containerregistry/crane:debug
entrypoint: [""]
allow_failure: false
variables:
CI_REGISTRY: $CI_REGISTRY
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
RELEASE_TAG: $CI_REGISTRY_IMAGE:latest
script:
- echo "Tagging docker image with stable tag with crane"
- echo -n "$CI_JOB_TOKEN" | crane auth login $CI_REGISTRY -u gitlab-ci-token --password-stdin
- crane cp $IMAGE_TAG $RELEASE_TAG
rules:
- if: $CI_COMMIT_TAG
when: always
clean-image-tags:
stage: container-build
needs:
- job: merge-manifests
artifacts: false
image: containers.torproject.org/tpo/tpa/base-images/debian:bookworm
before_script:
- *apt-template
- apt-get install -y jq curl
script:
- "REGISTRY_ID=$(curl --silent --request GET --header \"JOB-TOKEN: ${CI_JOB_TOKEN}\" \"https://gitlab.torproject.org/api/v4/projects/${CI_PROJECT_ID}/registry/repositories\" | jq '.[].id')"
- "curl --request DELETE --data \"name_regex_delete=(latest|${CI_COMMIT_TAG})_.*\" --header \"JOB-TOKEN: ${CI_JOB_TOKEN}\" \"https://gitlab.torproject.org/api/v4/projects/${CI_PROJECT_ID}/registry/repositories/${REGISTRY_ID}/tags\""
rules:
- if: $CI_COMMIT_REF_NAME == "main"
when: always
- if: $CI_COMMIT_TAG
when: always
mirror-image-to-dockerhub:
stage: container-mirror
variables:
DOCKERHUB_MIRROR_REPOURL: $DOCKERHUB_MIRROR_REPOURL
DOCKERHUB_USERNAME: $DOCKERHUB_MIRROR_USERNAME
DOCKERHUB_PASSWORD: $DOCKERHUB_MIRROR_PASSWORD
image:
name: gcr.io/go-containerregistry/crane:debug
entrypoint: [""]
rules:
- if: $CI_COMMIT_REF_NAME == "main"
when: always
- if: $CI_COMMIT_TAG
when: always
script:
- echo "$DOCKERHUB_PASSWORD" | crane auth login docker.io -u $DOCKERHUB_MIRROR_USERNAME --password-stdin
- crane cp -a containers.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake $DOCKERHUB_MIRROR_REPOURL

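The stream-counting check in the `shadow-integration` job above can be exercised in isolation. A minimal sketch with a synthetic tgen stdout file (the log contents below are sample data, not real tgen output; the filename follows the CI job's glob):

```shell
# Synthesize a tgen stdout file with 10 completed streams (sample data).
mkdir -p shadow.data/hosts/snowflakeclient
for i in $(seq 10); do
  echo "[stream-success] stream $i complete" >> shadow.data/hosts/snowflakeclient/tgen.1000.stdout
done

# Same check as the CI job: count "stream-success" lines in the tgen stdout files.
count=$(grep -c "stream-success" shadow.data/hosts/snowflakeclient/tgen.*.stdout)
if [ "$count" = 10 ]; then
  echo "All streams in shadow completed successfully"
else
  echo "Shadow simulation failed"
  exit 1
fi
```

Note that `grep -c` prints `file:count` pairs when the glob matches more than one file, so the comparison only works as written while a single tgen stdout file exists.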
0
.gitmodules vendored Normal file
View file

View file

@ -2,42 +2,12 @@ language: go
dist: xenial
go_import_path: git.torproject.org/pluggable-transports/snowflake.git
addons:
apt:
sources:
- ubuntu-toolchain-r-test
packages:
- g++-5
- gcc-5
go_import_path: git.torproject.org/pluggable-transports/snowflake.git/v2
go:
- 1.10.x
env:
- TRAVIS_NODE_VERSION="8" CC="gcc-5" CXX="g++-5"
before_install:
- nvm install $TRAVIS_NODE_VERSION
install:
- go get -u github.com/smartystreets/goconvey
- go get -u github.com/keroserene/go-webrtc
- go get -u github.com/dchest/uniuri
- go get -u git.torproject.org/pluggable-transports/goptlib.git
- go get -u git.torproject.org/pluggable-transports/websocket.git/websocket
- go get -u google.golang.org/appengine
- go get -u golang.org/x/crypto/acme/autocert
- go get -u golang.org/x/net/http2
- pushd proxy
- npm install
- popd
- 1.13.x
script:
- test -z "$(go fmt ./...)"
- go vet ./...
- go test -v -race ./...
- cd proxy
- npm run lint
- npm test

274
ChangeLog Normal file
View file

@ -0,0 +1,274 @@
Changes in version v2.11.0 - 2025-03-18
- Fix data race warnings for tokens_t
- Fix race condition in proxy connection count stats
- Make NATPolicy thread-safe
- Fix race conditions with error scope
- Fix race condition with proxy isClosing variable
- Issue 40454: Update broker metrics to count matches, denials, and timeouts
- Add proxy event and metrics for failed connections
- Issue 40377: Create CI artifact if shadow fails
- Issue 40438: Copy base client config for each SOCKS connection
- Fix minor data race in Snowflake broker metrics
- Issue 40363: Process and read broker SQS messages more quickly
- Issue 40419: delay before calling dc.Close() to improve NAT test on proxy
- Add country stats to proxy prometheus metrics
- Issue 40381: Avoid snowflake client dependency in proxy
- Issue 40446: Lower broker ClientTimeout to 5 seconds in line with CDN77 defaults
- Refactor out utls library into ptutil/utls
- Issue 40414: Use /etc/localtime for CI
- Issue 40440: Add LE self-signed ISRG Root X1 to cert pool
- Proxy refactor to simplify tokens.ret() on error
- Clarify ephemeral-ports-range proxy option
- Issue 40417: Fixes and updates to CI containers
- Issue 40178: Handle unknown client type better
- Issue 40304: Update STUN server list
- Issue 40210: Remove proxy log when offer is nil
- Issue 40413: Log EventOnCurrentNATTypeDetermined for proxy
- Use named return for some functions to improve readability
- Issue 40271: Use pion SetIPFilter rather than our own StripLocalAddress
- Issue 40413: Suppress logs of proxy events by default
- Add IsLinkLocalUnicast in IsLocal
- Fix comments
- Bump versions of dependencies
Changes in version v2.10.1 - 2024-11-11
- Issue 40406: Update version string
Changes in version v2.10.0 - 2024-11-07
- Issue 40402: Add proxy event for when client has connected
- Issue 40405: Prevent panic for duplicate SnowflakeConn.Close() calls
- Enable local time for proxy logging
- Have proxy summary statistics log average transfer rate
- Issue 40210: Remove duplicate poll interval loop in proxy
- Issue 40371: Prevent broker and proxy from rejecting clients without ICE candidates
- Issue 40392: Allow the proxy and probetest to set multiple STUN URLs
- Issue 40387: Fix error in probetest NAT check
- Fix proxy panic on invalid relayURL
- Set empty pattern if broker bridge-list is empty
- Improve documentation of Ephemeral[Min,Max]Port
- Fix resource leak and NAT check in probetest
- Fix memory leak from failed NAT check
- Improve NAT check logging
- Issue 40230: Send answer even if ICE gathering is not complete
- Improve broker error message on unknown bridge fingerprint
- Don't proxy private IP addresses
- Only accept ws:// and wss:// relay addresses
- Issue 40373: Add cli flag and SnowflakeProxy field to modify proxy poll interval
- Use %w not %v in fmt.Errorf
- Updates to documentation
- Adjust copy buffer size to improve proxy performance
- Improve descriptions of cli flags
- Cosmetic changes for code readability
- Issue 40367: Deduplicate prometheus metrics names
- Report the version of snowflake to the tor process
- Issue 40365: Indicate whether the repo was modified in the version string
- Simplify NAT checking logic
- Issue 40354: Use ptutil library for safelog and prometheus metrics
- Add cli flag to set a listen address for proxy prometheus metrics
- Issue 40345: Integrate docker image with release process
- Bump versions of dependencies
Changes in version v2.9.2 - 2024-03-18
- Issue 40288: Add integration testing with Shadow
- Issue 40345: Automatically build and push containers to our registry
- Issue 40339: Fix client ID reuse bug in SQS rendezvous
- Issue 40341: Modify SQS rendezvous arguments to use b64 encoded parameters
- Issue 40330: Add new metrics at the broker for per-country rendezvous stats
- Issue 40345: Update docker container tags
- Bump versions of dependencies
Changes in version v2.9.1 - 2024-02-27
- Issue 40335: Fix release job
- Change deprecated io/ioutil package to io package
- Bump versions of dependencies
Changes in version v2.9.0 - 2024-02-05
- Issue 40285: Add vcs revision to version string
- Issue 40294: Update recommended torrc options in client README
- Issue 40306: Scrub space-separated IP addresses
- Add proxy commandline option for probe server URL
- Use SetNet setting in probetest to ignore net.Interfaces error
- Add probetest commandline option for STUN URL
- Issue 26151: Implement SQS rendezvous in client and broker
- Add broker metrics to track rendezvous method
- Cosmetic code quality fixes
- Bump versions of dependencies
Changes in version v2.8.1 - 2023-12-21
- Issue 40276: Reduce allocations in encapsulation.ReadData
- Issue 40310: Remove excessive logging for closed proxy connections
- Issue 40278: Add network fix for old version of android to proxy
- Bump versions of dependencies
Changes in version v2.8.0 - 2023-11-20
- Issue 40069: Add outbound proxy support
- Issue 40301: Fix for a bug in domain fronting configurations
- Issue 40302: Remove throughput summary from proxy logger
- Issue 40302: Change proxy stats logging to only log stats for traffic that occurred in the summary interval
- Update renovate bot configuration to use Go 1.21
- Bump versions of dependencies
Changes in version v2.7.0 - 2023-10-16
7142fa3 fix(proxy): Correctly close connection pipe when dealing with error
6393af6 Remove proxy churn measurements from broker.
a615e8b fix(proxy): remove _potential_ deadlock
d434549 Maintain backward compatibility with old clients
9fdfb3d Randomly select front domain from comma-separated list
5cdf52c Update dependencies
1559963 chore(deps): update module github.com/xtaci/kcp-go/v5 to v5.6.3
60e66be Remove Golang 1.20 from CI Testing
1d069ca Update CI targets to test android from golang 1.21
3a050c6 Use ShouldBeNil to check for nil values
e45e8e5 chore(deps): update module github.com/smartystreets/goconvey to v1.8.1
f47ca18 chore(deps): update module gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/goptlib to v1.5.0
106da49 chore(deps): update module github.com/pion/webrtc/v3 to v3.2.20
2844ac6 Update CI targets to include only Go 1.20 and 1.21
f4e1ab9 chore(deps): update module golang.org/x/net to v0.15.0
caaff70 Update module golang.org/x/sys to v0.12.0
Changes in version v2.6.1 - 2023-09-11
- a3bfc28 Update module golang.org/x/crypto to v0.12.0
- e37e15a Update golang Docker tag to v1.21
- b632c7d Workaround for shadow in lieu of AF_NETLINK support
- 0cb2975 Update module golang.org/x/net to v0.13.0 [SECURITY]
- f73fe6e Keep the 'v' from the tag on the released .tar.gz
- 8104732 Change DefaultRelayURL back to wss://snowflake.torproject.net/.
- d932cb2 feat: add option to expose the stats by using metrics
- af73ab7 Add renovate config
- aaeab3f Update dependencies
- 58c3121 Close temporary UDPSession in TestQueuePacketConnWriteToKCP.
- 80980a3 Fix a comment left over from turbotunnel-quic.
- 08d1c6d Bump minimum required version of go
Changes in version v2.6.0 - 2023-06-19
- Issue 40243: Implement datachannel flow control at proxy
- Issue 40087: Append Let's Encrypt ISRG Root X1 to cert pool
- Issue 40198: Use IP_BIND_ADDRESS_NO_PORT when dialing the ORPort on linux
- Move from gitweb to gitlab
- Add warning log at broker when proxy does not connect with client
- Fix unit tests after SDP validation
- Soften non-critical log from error to warning
- Issue 40231: Validate SDP offers and answers
- Add scanner error check to ClusterCounter.Count
- Fix server benchmark tests
- Issue 40260: Use a sync.Pool to reuse QueuePacketConn buffers
- Issue 40043: Restore ListenAndServe error in server
- Update pion webrtc library versions
- Issue 40108: Add outbound address config option to proxy
- Issue 40260: Fix a data race in the Snowflake server
- Issue 40216: Add utls-imitate, utls-nosni documentation to the README
- Fix up/down traffic stats in standalone proxy
- Issue 40226: Filter out ICE servers that are not STUN
- Issue 40226: Update README to reflect the type of ICE servers we support
- Issue 40226: Parse ICE servers using the pion/ice library function
- Bring client torrc up to date with Tor Browser
Changes in version v2.5.1 - 2023-01-18
- Issue 40249: Fix issue with Skip Hello Verify patch
Changes in version v2.5.0 - 2023-01-18
- Issue 40249: Apply Skip Hello Verify Migration
Changes in version v2.4.3 - 2023-01-16
- Fix version number in version.go
Changes in version v2.4.2 - 2023-01-13
- Issue 40208: Enhance help info for capacity flag
- Issue 40232: Update README and fix help output
- Issue 40173: Increase clientIDAddrMapCapacity
- Issue 40177: Manually unlock mutex in ClientMap.SendQueue
- Issue 40177: Have SnowflakeClientConn implement io.WriterTo
- Issue 40179: Reduce turbotunnel queueSize from 2048 to 512
- Issue 40187/40199: Take ownership of buffer in QueuePacketConn QueueIncoming/WriteTo
- Add more tests for URL encoded IPs (safelog)
- Fix server flag name
- Issue 40200: Use multiple parallel KCP state machines in the server
- Add a num-turbotunnel server transport option
- Issue 40241: Switch default proxy STUN server to stun.l.google.com
Changes in version v2.4.1 - 2022-12-01
- Issue 40224: Bug fix in utls roundtripper
Changes in version v2.4.0 - 2022-11-29
- Fix proxy command line help output
- Issue 40123: Reduce multicast DNS candidates
- Add ICE ephemeral ports range setting
- Reformat using Go 1.19
- Update CI tests to include latest and minimum Go versions
- Issue 40184: Use fixed unit for bandwidth logging
- Update gorilla/websocket to v1.5.0
- Issue 40175: Server performance improvements
- Issue 40183: Change snowflake proxy log verbosity
- Issue 40117: Display proxy NAT type in logs
- Issue 40198: Add a `orport-srcaddr` server transport option
- Add gofmt output to CI test
- Issue 40185: Change bandwidth type from int to int64 to prevent overflow
- Add version output support to snowflake
- Issue 40229: Change regexes for ipv6 addresses to catch url-encoded addresses
- Issue 40220: Close stale connections in standalone proxy
Changes in version v2.3.0 - 2022-06-23
- Issue 40146: Avoid performing two NAT probe tests at startup
- Issue 40134: Log messages from client NAT check failures are confusing
- Issue 34075: Implement metrics to measure snowflake churn
- Issue 28651: Prepare all pieces of the snowflake pipeline for a second snowflake bridge
- Issue 40129: Distributed Snowflake Server Support
Changes in version v2.2.0 - 2022-05-25
- Issue 40099: Initialize SnowflakeListener.closed
- Add connection failure events for proxy timeouts
- Issue 40103: Fix proxy logging verb tense
- Fix up and downstream metrics output for proxy
- Issue 40054: uTLS for broker negotiation
- Forward bridge fingerprint from client to broker (WIP, Issue 28651)
- Issue 40104: Make it easier to configure proxy type
- Remove version from ClientPollRequest
- Issue 40124: Move tor-specific code out of library
- Issue 40115: Scrub pt event logs
- Issue 40127: Bump webrtc and dtls library versions
- Bump version of webrtc and dtls to fix dtls CVEs
- Issue 40141: Ensure library calls of events can be scrubbed
Changes in version v2.1.0 - 2022-02-08
- Issue 40098: Remove support for legacy one shot mode
- Issue 40079: Make connection summary at proxy privacy preserving
- Issue 40076: Add snowflake event API for notifications of connection events
- Issue 40084: Increase capacity of client address map at the server
- Issue 40060: Further clean up snowflake server logs
- Issue 40089: Validate proxy and client supplied strings at broker
- Issue 40014: Update version of DTLS library to include fingerprinting fixes
- Issue 40075: Support recurring NAT type check in standalone proxy
Changes in version v2.0.0 - 2021-11-04
- Turn the standalone snowflake proxy code into a library
- Clean up and reworked the snowflake client and server library code
- Unify broker/bridge domains to *.torproject.net
- Updates to the snowflake library documentation
- New package functions to define and set a rendezvous method with the
broker
- Factor out the broker geoip code into its own external library
- Bug fix to check error calls in preparePeerConnection
- Bug fixes in snowflake tests
- Issue 40059: add the ability to pass in snowflake arguments through SOCKS
- Increase buffer sizes for sending and receiving snowflake data
- Issue 25985: rendezvous with the broker using AMP cache
- Issue 40055: wait for the full poll interval between proxy polls
Changes in version v1.1.0 - 2021-07-13
- Refactors of the Snowflake broker code
- Refactors of the Snowflake proxy code
- Issue 40048: assign proxies based on self-reported client load
- Issue 40052: fixed a memory leak in the server accept loop
- Version bump of kcp and smux libraries
- Bug fix to pass the correct client address to the Snowflake bridge metrics
counter
- Bug fixes to prevent race conditions in the Snowflake client
Changes in version v1.0.0 - 2021-06-07
- Initial release.

50
Dockerfile Normal file
View file

@ -0,0 +1,50 @@
FROM docker.io/library/golang:latest AS build
ADD . /app
WORKDIR /app/proxy
RUN go get
RUN CGO_ENABLED=0 go build -o proxy -ldflags '-extldflags "-static" -w -s' .
FROM containers.torproject.org/tpo/tpa/base-images/debian:bookworm as debian-base
# Install dependencies to add Tor's repository.
RUN apt-get update && apt-get install -y \
curl \
gpg \
gpg-agent \
ca-certificates \
libcap2-bin \
--no-install-recommends
# See: <https://2019.www.torproject.org/docs/debian.html.en>
RUN curl https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --import
RUN gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
RUN printf "deb https://deb.torproject.org/torproject.org bookworm main\n" >> /etc/apt/sources.list.d/tor.list
# Install remaining dependencies.
RUN apt-get update && apt-get install -y \
tor \
tor-geoipdb \
--no-install-recommends
FROM scratch
COPY --from=debian-base /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=debian-base /usr/share/zoneinfo /usr/share/zoneinfo
COPY --from=debian-base /usr/share/tor/geoip* /usr/share/tor/
COPY --from=build /app/proxy/proxy /bin/proxy
ENTRYPOINT [ "/bin/proxy" ]
# Set some labels
# io.containers.autoupdate label will instruct podman to reach out to the
# corresponding registry to check if the image has been updated. If an image
# must be updated, Podman pulls it down and restarts the systemd unit executing
# the container. See podman-auto-update(1) for more details, or
# https://docs.podman.io/en/latest/markdown/podman-auto-update.1.html
LABEL io.containers.autoupdate=registry
LABEL org.opencontainers.image.authors="anti-censorship-team@lists.torproject.org"

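The `io.containers.autoupdate=registry` label baked into the image above is consumed by `podman-auto-update`. A deployment sketch under the assumption that the proxy runs as a systemd-managed container (container name and unit path are illustrative):

```shell
# Run the proxy from the registry the CI mirrors to; the image's autoupdate
# label tells podman-auto-update to poll this registry for newer digests.
podman run -d --name snowflake-proxy \
  containers.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake:latest

# Generate a systemd unit so podman-auto-update can restart the container
# when it pulls a newer image.
podman generate systemd --new --name snowflake-proxy \
  > /etc/systemd/system/container-snowflake-proxy.service
systemctl daemon-reload && systemctl enable --now container-snowflake-proxy

# Preview what would be updated without pulling anything.
podman auto-update --dry-run
```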
View file

@ -3,7 +3,7 @@
================================================================================
Copyright (c) 2016, Serene Han, Arlo Breault
All rights reserved.
Copyright (c) 2019-2020, The Tor Project, Inc
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

142
README.md
View file

@ -1,119 +1,59 @@
# Snowflake
[![Build Status](https://travis-ci.org/keroserene/snowflake.svg?branch=master)](https://travis-ci.org/keroserene/snowflake)
Pluggable Transport using WebRTC, inspired by Flashproxy.
### Status
- [x] Transport: Successfully connects using WebRTC.
- [x] Rendezvous: HTTP signaling (with optional domain fronting) to the Broker
arranges peer-to-peer connections with a multitude of volunteer "snowflakes".
- [x] Client multiplexes remote snowflakes.
- [x] Can browse using Tor over Snowflake.
- [ ] Reproducible build with TBB.
Snowflake is a censorship-evasion pluggable transport using WebRTC, inspired by Flashproxy.
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Structure of this Repository](#structure-of-this-repository)
- [Usage](#usage)
- [Dependencies](#dependencies)
- [More Info](#more-info)
- [Building](#building)
- [Test Environment](#test-environment)
- [Using Snowflake with Tor](#using-snowflake-with-tor)
- [Running a Snowflake Proxy](#running-a-snowflake-proxy)
- [Using the Snowflake Library with Other Applications](#using-the-snowflake-library-with-other-applications)
- [Test Environment](#test-environment)
- [FAQ](#faq)
- [Appendix](#appendix)
- [-- Testing directly via WebRTC Server --](#---testing-directly-via-webrtc-server---)
- [More info and links](#more-info-and-links)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
### Structure of this Repository
- `broker/` contains code for the Snowflake broker
- `doc/` contains Snowflake documentation and manpages
- `client/` contains the Tor pluggable transport client and client library code
- `common/` contains generic libraries used by multiple pieces of Snowflake
- `proxy/` contains code for the Go standalone Snowflake proxy
- `probetest/` contains code for a NAT probetesting service
- `server/` contains the Tor pluggable transport server and server library code
### Usage
```
cd client/
go get
go build
tor -f torrc
```
This should start the client plugin, bootstrapping to 100% using WebRTC.
Snowflake is currently deployed as a pluggable transport for Tor.
#### Dependencies
#### Using Snowflake with Tor
Client:
- [go-webrtc](https://github.com/keroserene/go-webrtc)
- Go 1.5+
To use the Snowflake client with Tor, you will need to add the appropriate `Bridge` and `ClientTransportPlugin` lines to your [torrc](https://2019.www.torproject.org/docs/tor-manual.html.en) file. See the [client README](client) for more information on building and running the Snowflake client.
Proxy:
- JavaScript
#### Running a Snowflake Proxy
---
You can contribute to Snowflake by running a Snowflake proxy. We have the option to run a proxy in your browser or as a standalone Go program. See our [community documentation](https://community.torproject.org/relay/setup/snowflake/) for more details.
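For the standalone Go proxy mentioned above, a minimal build-and-run sketch (assuming a Go toolchain is installed; the repository path matches the `proxy/` directory described in this README):

```shell
# Build the standalone Snowflake proxy from the repository's proxy/ directory.
git clone https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake.git
cd snowflake/proxy
go build

# Run it; see ./proxy -h for the available flags.
./proxy
```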
#### More Info
#### Using the Snowflake Library with Other Applications
Tor can plug in the Snowflake client via a correctly configured `torrc`.
For example:
Snowflake can be used as a Go API, and adheres to the [v2.1 pluggable transports specification](). For more information on using the Snowflake Go library, see the [Snowflake library documentation](doc/using-the-snowflake-library.md).
```
ClientTransportPlugin snowflake exec ./client \
-url https://snowflake-broker.azureedge.net/ \
-front ajax.aspnetcdn.com \
-ice stun:stun.l.google.com:19302
-max 3
```
The flags `-url` and `-front` allow the Snowflake client to speak to the Broker,
in order to get connected with some volunteer's browser proxy. `-ice` is a
comma-separated list of ICE servers, which are required for NAT traversal.
For logging, run `tail -F snowflake.log` in a second terminal.
You can modify the `torrc` to use your own broker:
```
ClientTransportPlugin snowflake exec ./client --meek
```
#### Building
This describes how to build the in-browser snowflake. For the client, see Usage,
above.
The client will only work if there are browser snowflakes available.
To run your own:
```
cd proxy/
npm run build
```
Then, start a local http server in the `proxy/build/` in any way you like.
For instance:
```
cd build/
python -m http.server
```
Then, open a browser tab to `http://127.0.0.1:8000/snowflake.html` to view
the debug console of the snowflake.
So long as that tab is open, you are an ephemeral Tor bridge.
#### Test Environment
### Test Environment
There is a Docker-based test environment at https://github.com/cohosh/snowbox.
### FAQ
**Q: How does it work?**
In the Tor use-case:
1. Volunteers visit websites that host the 'snowflake' proxy, run a snowflake [web extension](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext), or use a standalone proxy.
2. Tor clients automatically find available browser proxies via the Broker
(the domain fronted signaling channel).
3. Tor client and browser proxy establish a WebRTC peer connection.
@ -141,22 +81,26 @@ manual port forwarding!
It utilizes the "ICE" negotiation via WebRTC, and also involves a great
abundance of ephemeral and short-lived (and special!) volunteer proxies...
### More info and links
We have more documentation in the [Snowflake wiki](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/wikis/home) and at https://snowflake.torproject.org/.
### Appendix
##### -- Testing with Standalone Proxy --
```
cd proxy-go
go build
./proxy-go
```
##### -- Testing directly via WebRTC Server --
See server-webrtc/README.md for information on connecting directly to a
WebRTC server transport plugin, bypassing the Broker and browser proxy.
##### -- Android AAR Reproducible Build Setup --
Using `gomobile` it is possible to build snowflake as shared libraries for all
the architectures supported by Android. This is done in _.gitlab-ci.yml_, which
runs in GitLab CI. It is also possible to run this setup in a virtual machine
using [vagrant](https://www.vagrantup.com/). Just run `vagrant up` and it will
create and provision the VM. Run `vagrant ssh` to get into the VM and use it as
a development environment.
More documentation on the way.
Also available at:
[torproject.org/pluggable-transports/snowflake](https://gitweb.torproject.org/pluggable-transports/snowflake.git/)
##### uTLS Settings
Snowflake communicates with the broker, which serves as its signaling server, over a TLS-based domain-fronted connection. This connection may be identified by its use of the Go TLS stack.
uTLS is a software library designed to imitate the TLS Client Hello fingerprints of browsers and other popular software, in order to evade censorship based on TLS Client Hello fingerprinting. Enable it with `-utls-imitate`; you can use `-version` to see a list of supported values.
Depending on client and server configuration, it may not always work as expected, as not all extensions are correctly implemented.
You can also remove the SNI (Server Name Indication) from the Client Hello with `-utls-nosni` to evade censorship, though not all servers support this.
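A `torrc` line combining these options might look like the following (the `-utls-imitate` value shown here is only an example; check your build's `-version` output for the values it actually supports):

```
ClientTransportPlugin snowflake exec ./client \
-url https://snowflake-broker.azureedge.net/ \
-front ajax.aspnetcdn.com \
-ice stun:stun.l.google.com:19302 \
-utls-imitate hellorandomizedalpn \
-utls-nosni
```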

Vagrantfile (new file, vendored, 67 lines)

@ -0,0 +1,67 @@
require 'pathname'
require 'tempfile'
require 'yaml'
srvpath = Pathname.new(File.dirname(__FILE__)).realpath
configfile = YAML.load_file(File.join(srvpath, "/.gitlab-ci.yml"))
remote_url = 'https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake'
# set up essential environment variables
env = configfile['variables']
env = env.merge(configfile['android']['variables'])
env['CI_PROJECT_DIR'] = '/builds/tpo/anti-censorship/pluggable-transports/snowflake'
env_file = Tempfile.new('env')
File.chmod(0644, env_file.path)
env.each do |k,v|
env_file.write("export #{k}='#{v}'\n")
end
env_file.rewind
sourcepath = '/etc/profile.d/env.sh'
header = "#!/bin/bash -ex\nsource #{sourcepath}\ncd $CI_PROJECT_DIR\n"
before_script_file = Tempfile.new('before_script')
File.chmod(0755, before_script_file.path)
before_script_file.write(header)
configfile['android']['before_script'].flatten.each do |line|
before_script_file.write(line)
before_script_file.write("\n")
end
before_script_file.rewind
script_file = Tempfile.new('script')
File.chmod(0755, script_file.path)
script_file.write(header)
configfile['android']['script'].flatten.each do |line|
script_file.write(line)
script_file.write("\n")
end
script_file.rewind
Vagrant.configure("2") do |config|
config.vm.box = "debian/bullseye64"
config.vm.synced_folder '.', '/vagrant', disabled: true
config.vm.provision "file", source: env_file.path, destination: 'env.sh'
config.vm.provision :shell, inline: <<-SHELL
set -ex
mv ~vagrant/env.sh #{sourcepath}
source #{sourcepath}
test -d /go || mkdir /go
mkdir -p $(dirname $CI_PROJECT_DIR)
chown -R vagrant.vagrant $(dirname $CI_PROJECT_DIR)
apt-get update
apt-get -qy install --no-install-recommends git
git clone #{remote_url} $CI_PROJECT_DIR
chmod -R a+rX,u+w /go $CI_PROJECT_DIR
chown -R vagrant.vagrant /go $CI_PROJECT_DIR
SHELL
config.vm.provision "file", source: before_script_file.path, destination: 'before_script.sh'
config.vm.provision "file", source: script_file.path, destination: 'script.sh'
config.vm.provision :shell, inline: '/home/vagrant/before_script.sh'
config.vm.provision :shell, privileged: false, inline: '/home/vagrant/script.sh'
# remove this or comment it out to use VirtualBox instead of libvirt
config.vm.provider :libvirt do |libvirt|
libvirt.memory = 1536
end
end


@ -1,28 +0,0 @@
This component runs on Google App Engine. It reflects domain-fronted
requests from a client to the Snowflake broker.
You need the Go App Engine SDK in order to deploy the app.
https://cloud.google.com/sdk/docs/#linux
After unpacking, install the app-engine-go component:
google-cloud-sdk/bin/gcloud components install app-engine-go
To test locally, run
google-cloud-sdk/bin/dev_appserver.py app.yaml
The app will be running at http://127.0.0.1:8080/.
To deploy to App Engine, first create a new project and app. You have to
think of a unique name (marked as "<appname>" in the commands). You only
have to do the "create" step once; subsequent times you can go straight
to the "deploy" step. The "gcloud auth login" command will open a
browser window so you can log in to a Google account.
google-cloud-sdk/bin/gcloud auth login
google-cloud-sdk/bin/gcloud projects create <appname>
google-cloud-sdk/bin/gcloud app create --project=<appname>
Then to deploy the project, run:
google-cloud-sdk/bin/gcloud app deploy --project=<appname>
To configure the Snowflake client to talk to the App Engine app, provide
"https://<appname>.appspot.com/" as the --url option.
UseBridges 1
Bridge snowflake 0.0.2.0:1
ClientTransportPlugin snowflake exec ./client -url https://<appname>.appspot.com/ -front www.google.com


@ -1,7 +0,0 @@
runtime: go
api_version: go1
handlers:
- url: /.*
script: _go_app
secure: always


@ -1,111 +0,0 @@
// A web app for Google App Engine that proxies HTTP requests and responses to
// the Snowflake broker.
package reflect
import (
"context"
"io"
"net/http"
"net/url"
"time"
"google.golang.org/appengine"
"google.golang.org/appengine/log"
"google.golang.org/appengine/urlfetch"
)
const (
forwardURL = "https://snowflake-broker.bamsoftware.com/"
// A timeout of 0 means to use the App Engine default (5 seconds).
urlFetchTimeout = 20 * time.Second
)
var ctx context.Context
// Join two URL paths.
func pathJoin(a, b string) string {
if len(a) > 0 && a[len(a)-1] == '/' {
a = a[:len(a)-1]
}
if len(b) == 0 || b[0] != '/' {
b = "/" + b
}
return a + b
}
// We reflect only a whitelisted set of header fields. Otherwise, we may copy
// headers like Transfer-Encoding that interfere with App Engine's own
// hop-by-hop headers.
var reflectedHeaderFields = []string{
"Content-Type",
"X-Session-Id",
}
// Make a copy of r, with the URL being changed to be relative to forwardURL,
// and including only the headers in reflectedHeaderFields.
func copyRequest(r *http.Request) (*http.Request, error) {
u, err := url.Parse(forwardURL)
if err != nil {
return nil, err
}
// Append the requested path to the path in forwardURL, so that
// forwardURL can be something like "https://example.com/reflect".
u.Path = pathJoin(u.Path, r.URL.Path)
c, err := http.NewRequest(r.Method, u.String(), r.Body)
if err != nil {
return nil, err
}
for _, key := range reflectedHeaderFields {
values, ok := r.Header[key]
if ok {
for _, value := range values {
c.Header.Add(key, value)
}
}
}
return c, nil
}
func handler(w http.ResponseWriter, r *http.Request) {
ctx = appengine.NewContext(r)
fr, err := copyRequest(r)
if err != nil {
log.Errorf(ctx, "copyRequest: %s", err)
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
if urlFetchTimeout != 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, urlFetchTimeout)
defer cancel()
}
// Use urlfetch.Transport directly instead of urlfetch.Client because we
// want only a single HTTP transaction, not following redirects.
transport := urlfetch.Transport{
Context: ctx,
}
resp, err := transport.RoundTrip(fr)
if err != nil {
log.Errorf(ctx, "RoundTrip: %s", err)
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer resp.Body.Close()
for _, key := range reflectedHeaderFields {
values, ok := resp.Header[key]
if ok {
for _, value := range values {
w.Header().Add(key, value)
}
}
}
w.WriteHeader(resp.StatusCode)
n, err := io.Copy(w, resp.Body)
if err != nil {
log.Errorf(ctx, "io.Copy after %d bytes: %s", n, err)
}
}
func init() {
http.HandleFunc("/", handler)
}


@ -1,3 +1,12 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Overview](#overview)
- [Running your own](#running-your-own)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
This is the Broker component of Snowflake.
### Overview

broker/amp.go (new file, 83 lines)

@ -0,0 +1,83 @@
package main
import (
"context"
"log"
"net/http"
"strings"
"time"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/amp"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/messages"
)
// ampClientOffers is the AMP-speaking endpoint for client poll messages,
// intended for access via an AMP cache. In contrast to the other clientOffers,
// the client's encoded poll message is stored in the URL path rather than the
// HTTP request body (because an AMP cache does not support POST), and the
// encoded client poll response is sent back as AMP-armored HTML.
func ampClientOffers(i *IPC, w http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), ClientTimeout*time.Second)
defer cancel()
// The encoded client poll message immediately follows the /amp/client/
// path prefix, so this function unfortunately needs to be aware of and
// remove its own routing prefix.
path := strings.TrimPrefix(r.URL.Path, "/amp/client/")
if path == r.URL.Path {
// The path didn't start with the expected prefix. This probably
// indicates an internal bug.
log.Println("ampClientOffers: unexpected prefix in path")
w.WriteHeader(http.StatusInternalServerError)
return
}
var encPollReq []byte
var response []byte
var err error
encPollReq, err = amp.DecodePath(path)
if err == nil {
arg := messages.Arg{
Body: encPollReq,
RemoteAddr: "",
RendezvousMethod: messages.RendezvousAmpCache,
Context: ctx,
}
err = i.ClientOffers(arg, &response)
} else {
response, err = (&messages.ClientPollResponse{
Error: "cannot decode URL path",
}).EncodePollResponse()
}
if err != nil {
// We couldn't even construct a JSON object containing an error
// message :( Nothing to do but signal an error at the HTTP
// layer. The AMP cache will translate this 500 status into a
// 404 status.
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/#redirect-%26-error-handling
log.Printf("ampClientOffers: %v", err)
w.WriteHeader(http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "text/html")
// Attempt to hint to an AMP cache not to waste resources caching this
// document. "The Google AMP Cache considers any document fresh for at
// least 15 seconds."
// https://developers.google.com/amp/cache/overview#google-amp-cache-updates
w.Header().Set("Cache-Control", "max-age=15")
w.WriteHeader(http.StatusOK)
enc, err := amp.NewArmorEncoder(w)
if err != nil {
log.Printf("amp.NewArmorEncoder: %v", err)
return
}
defer enc.Close()
if _, err := enc.Write(response); err != nil {
log.Printf("ampClientOffers: unable to write answer: %v", err)
}
}

broker/bridge-list.go (new file, 94 lines)

@ -0,0 +1,94 @@
/* (*BridgeListHolderFileBased).LoadBridgeInfo loads a Snowflake Server bridge info description file.
The file must be in newline-delimited JSON format (https://jsonlines.org/).
Each line is a JSON record of the form:
{"displayName":"default", "webSocketAddress":"wss://snowflake.torproject.net/", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80A72"}
displayName:string is the name of this bridge. This value is not currently used programmatically.
webSocketAddress:string is the WebSocket URL of this bridge.
This is the address a proxy uses to connect to this snowflake server.
fingerprint:string is the identifier of the bridge.
This is used by a client to identify the bridge it wishes to connect to.
The existence of ANY other field is NOT permitted.
The file is considered invalid if it contains at least one invalid JSON record.
In this case, an error is returned and none of the records are loaded.
*/
package main
import (
"bufio"
"bytes"
"encoding/json"
"errors"
"io"
"sync"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/bridgefingerprint"
)
var ErrBridgeNotFound = errors.New("bridge with requested fingerprint is unknown to the broker")
func NewBridgeListHolder() BridgeListHolderFileBased {
return &bridgeListHolder{}
}
type bridgeListHolder struct {
bridgeInfo map[bridgefingerprint.Fingerprint]BridgeInfo
accessBridgeInfo sync.RWMutex
}
type BridgeListHolder interface {
GetBridgeInfo(bridgefingerprint.Fingerprint) (BridgeInfo, error)
}
type BridgeListHolderFileBased interface {
BridgeListHolder
LoadBridgeInfo(reader io.Reader) error
}
type BridgeInfo struct {
DisplayName string `json:"displayName"`
WebSocketAddress string `json:"webSocketAddress"`
Fingerprint string `json:"fingerprint"`
}
func (h *bridgeListHolder) GetBridgeInfo(fingerprint bridgefingerprint.Fingerprint) (BridgeInfo, error) {
h.accessBridgeInfo.RLock()
defer h.accessBridgeInfo.RUnlock()
if bridgeInfo, ok := h.bridgeInfo[fingerprint]; ok {
return bridgeInfo, nil
}
return BridgeInfo{}, ErrBridgeNotFound
}
func (h *bridgeListHolder) LoadBridgeInfo(reader io.Reader) error {
bridgeInfoMap := map[bridgefingerprint.Fingerprint]BridgeInfo{}
inputScanner := bufio.NewScanner(reader)
for inputScanner.Scan() {
inputLine := inputScanner.Bytes()
bridgeInfo := BridgeInfo{}
decoder := json.NewDecoder(bytes.NewReader(inputLine))
decoder.DisallowUnknownFields()
if err := decoder.Decode(&bridgeInfo); err != nil {
return err
}
var bridgeFingerprint bridgefingerprint.Fingerprint
var err error
if bridgeFingerprint, err = bridgefingerprint.FingerprintFromHexString(bridgeInfo.Fingerprint); err != nil {
return err
}
bridgeInfoMap[bridgeFingerprint] = bridgeInfo
}
h.accessBridgeInfo.Lock()
defer h.accessBridgeInfo.Unlock()
h.bridgeInfo = bridgeInfoMap
return nil
}


@ -0,0 +1,64 @@
package main
import (
"bytes"
"encoding/hex"
. "github.com/smartystreets/goconvey/convey"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/bridgefingerprint"
"testing"
)
const DefaultBridges = `{"displayName":"default", "webSocketAddress":"wss://snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80A72"}
`
const ImaginaryBridges = `{"displayName":"default", "webSocketAddress":"wss://snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80A72"}
{"displayName":"imaginary-1", "webSocketAddress":"wss://imaginary-1-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B00"}
{"displayName":"imaginary-2", "webSocketAddress":"wss://imaginary-2-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B01"}
{"displayName":"imaginary-3", "webSocketAddress":"wss://imaginary-3-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B02"}
{"displayName":"imaginary-4", "webSocketAddress":"wss://imaginary-4-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B03"}
{"displayName":"imaginary-5", "webSocketAddress":"wss://imaginary-5-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B04"}
{"displayName":"imaginary-6", "webSocketAddress":"wss://imaginary-6-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B05"}
{"displayName":"imaginary-7", "webSocketAddress":"wss://imaginary-7-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B06"}
{"displayName":"imaginary-8", "webSocketAddress":"wss://imaginary-8-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B07"}
{"displayName":"imaginary-9", "webSocketAddress":"wss://imaginary-9-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B08"}
{"displayName":"imaginary-10", "webSocketAddress":"wss://imaginary-10-snowflake.torproject.org", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80B09"}
`
func TestBridgeLoad(t *testing.T) {
Convey("load default list", t, func() {
bridgeList := NewBridgeListHolder()
So(bridgeList.LoadBridgeInfo(bytes.NewReader([]byte(DefaultBridges))), ShouldBeNil)
{
bridgeFingerprint := [20]byte{}
{
n, err := hex.Decode(bridgeFingerprint[:], []byte("2B280B23E1107BB62ABFC40DDCC8824814F80A72"))
So(n, ShouldEqual, 20)
So(err, ShouldBeNil)
}
Fingerprint, err := bridgefingerprint.FingerprintFromBytes(bridgeFingerprint[:])
So(err, ShouldBeNil)
bridgeInfo, err := bridgeList.GetBridgeInfo(Fingerprint)
So(err, ShouldBeNil)
So(bridgeInfo.DisplayName, ShouldEqual, "default")
So(bridgeInfo.WebSocketAddress, ShouldEqual, "wss://snowflake.torproject.org")
}
})
Convey("load imaginary list", t, func() {
bridgeList := NewBridgeListHolder()
So(bridgeList.LoadBridgeInfo(bytes.NewReader([]byte(ImaginaryBridges))), ShouldBeNil)
{
bridgeFingerprint := [20]byte{}
{
n, err := hex.Decode(bridgeFingerprint[:], []byte("2B280B23E1107BB62ABFC40DDCC8824814F80B07"))
So(n, ShouldEqual, 20)
So(err, ShouldBeNil)
}
Fingerprint, err := bridgefingerprint.FingerprintFromBytes(bridgeFingerprint[:])
So(err, ShouldBeNil)
bridgeInfo, err := bridgeList.GetBridgeInfo(Fingerprint)
So(err, ShouldBeNil)
So(bridgeInfo.DisplayName, ShouldEqual, "imaginary-8")
So(bridgeInfo.WebSocketAddress, ShouldEqual, "wss://imaginary-8-snowflake.torproject.org")
}
})
}


@ -6,43 +6,60 @@ SessionDescriptions in order to negotiate a WebRTC connection.
package main
import (
"bytes"
"container/heap"
"context"
"crypto/tls"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"os"
"os/signal"
"strings"
"sync"
"syscall"
"time"
"git.torproject.org/pluggable-transports/snowflake.git/common/safelog"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/bridgefingerprint"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/sqs"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil/safelog"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/namematcher"
"golang.org/x/crypto/acme/autocert"
)
const (
ClientTimeout = 10
ProxyTimeout = 10
readLimit = 100000 //Maximum number of bytes to be read from an HTTP request
)
type BrokerContext struct {
snowflakes *SnowflakeHeap
// Map keeping track of snowflakeIDs required to match SDP answers from
// the second http POST.
snowflakes *SnowflakeHeap
restrictedSnowflakes *SnowflakeHeap
// Maps keeping track of snowflakeIDs required to match SDP answers from
// the second http POST. Restricted snowflakes can only be matched up with
// clients behind an unrestricted NAT.
idToSnowflake map[string]*Snowflake
// Synchronization for the snowflake map and heap
snowflakeLock sync.Mutex
proxyPolls chan *ProxyPoll
metrics *Metrics
bridgeList BridgeListHolderFileBased
allowedRelayPattern string
}
func NewBrokerContext(metricsLogger *log.Logger) *BrokerContext {
func (ctx *BrokerContext) GetBridgeInfo(fingerprint bridgefingerprint.Fingerprint) (BridgeInfo, error) {
return ctx.bridgeList.GetBridgeInfo(fingerprint)
}
func NewBrokerContext(
metricsLogger *log.Logger,
allowedRelayPattern string,
) *BrokerContext {
snowflakes := new(SnowflakeHeap)
heap.Init(snowflakes)
rSnowflakes := new(SnowflakeHeap)
heap.Init(rSnowflakes)
metrics, err := NewMetrics(metricsLogger)
if err != nil {
@ -53,42 +70,41 @@ func NewBrokerContext(metricsLogger *log.Logger) *BrokerContext {
panic("Failed to create metrics")
}
bridgeListHolder := NewBridgeListHolder()
const DefaultBridges = `{"displayName":"default", "webSocketAddress":"wss://snowflake.torproject.net/", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80A72"}
`
bridgeListHolder.LoadBridgeInfo(bytes.NewReader([]byte(DefaultBridges)))
return &BrokerContext{
snowflakes: snowflakes,
idToSnowflake: make(map[string]*Snowflake),
proxyPolls: make(chan *ProxyPoll),
metrics: metrics,
snowflakes: snowflakes,
restrictedSnowflakes: rSnowflakes,
idToSnowflake: make(map[string]*Snowflake),
proxyPolls: make(chan *ProxyPoll),
metrics: metrics,
bridgeList: bridgeListHolder,
allowedRelayPattern: allowedRelayPattern,
}
}
// Implements the http.Handler interface
type SnowflakeHandler struct {
*BrokerContext
handle func(*BrokerContext, http.ResponseWriter, *http.Request)
}
func (sh SnowflakeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Headers", "Origin, X-Session-ID")
// Return early if it's CORS preflight.
if "OPTIONS" == r.Method {
return
}
sh.handle(sh.BrokerContext, w, r)
}
// Proxies may poll for client offers concurrently.
type ProxyPoll struct {
id string
offerChannel chan []byte
proxyType string
natType string
clients int
offerChannel chan *ClientOffer
}
// Registers a Snowflake and waits for some Client to send an offer,
// as part of the polling logic of the proxy handler.
func (ctx *BrokerContext) RequestOffer(id string) []byte {
func (ctx *BrokerContext) RequestOffer(id string, proxyType string, natType string, clients int) *ClientOffer {
request := new(ProxyPoll)
request.id = id
request.offerChannel = make(chan []byte)
request.proxyType = proxyType
request.natType = natType
request.clients = clients
request.offerChannel = make(chan *ClientOffer)
ctx.proxyPolls <- request
// Block until an offer is available, or timeout which sends a nil offer.
offer := <-request.offerChannel
@ -100,19 +116,26 @@ func (ctx *BrokerContext) RequestOffer(id string) []byte {
// client offer or nil on timeout / none are available.
func (ctx *BrokerContext) Broker() {
for request := range ctx.proxyPolls {
snowflake := ctx.AddSnowflake(request.id)
snowflake := ctx.AddSnowflake(request.id, request.proxyType, request.natType, request.clients)
// Wait for a client to avail an offer to the snowflake.
go func(request *ProxyPoll) {
select {
case offer := <-snowflake.offerChannel:
log.Println("Passing client offer to snowflake proxy.")
request.offerChannel <- offer
case <-time.After(time.Second * ProxyTimeout):
// This snowflake is no longer available to serve clients.
// TODO: Fix race using a delete channel
heap.Remove(ctx.snowflakes, snowflake.index)
delete(ctx.idToSnowflake, snowflake.id)
request.offerChannel <- nil
ctx.snowflakeLock.Lock()
defer ctx.snowflakeLock.Unlock()
if snowflake.index != -1 {
if request.natType == NATUnrestricted {
heap.Remove(ctx.snowflakes, snowflake.index)
} else {
heap.Remove(ctx.restrictedSnowflakes, snowflake.index)
}
ctx.metrics.promMetrics.AvailableProxies.With(prometheus.Labels{"nat": request.natType, "type": request.proxyType}).Dec()
delete(ctx.idToSnowflake, snowflake.id)
close(request.offerChannel)
}
}
}(request)
}
@ -121,134 +144,47 @@ func (ctx *BrokerContext) Broker() {
// Create and add a Snowflake to the heap.
// Required to keep track of proxies between providing them
// with an offer and awaiting their second POST with an answer.
func (ctx *BrokerContext) AddSnowflake(id string) *Snowflake {
func (ctx *BrokerContext) AddSnowflake(id string, proxyType string, natType string, clients int) *Snowflake {
snowflake := new(Snowflake)
snowflake.id = id
snowflake.clients = 0
snowflake.offerChannel = make(chan []byte)
snowflake.answerChannel = make(chan []byte)
heap.Push(ctx.snowflakes, snowflake)
snowflake.clients = clients
snowflake.proxyType = proxyType
snowflake.natType = natType
snowflake.offerChannel = make(chan *ClientOffer)
snowflake.answerChannel = make(chan string)
ctx.snowflakeLock.Lock()
if natType == NATUnrestricted {
heap.Push(ctx.snowflakes, snowflake)
} else {
heap.Push(ctx.restrictedSnowflakes, snowflake)
}
ctx.metrics.promMetrics.AvailableProxies.With(prometheus.Labels{"nat": natType, "type": proxyType}).Inc()
ctx.idToSnowflake[id] = snowflake
ctx.snowflakeLock.Unlock()
return snowflake
}
/*
For snowflake proxies to request a client from the Broker.
*/
func proxyPolls(ctx *BrokerContext, w http.ResponseWriter, r *http.Request) {
id := r.Header.Get("X-Session-ID")
body, err := ioutil.ReadAll(http.MaxBytesReader(w, r.Body, readLimit))
if nil != err {
log.Println("Invalid data.")
w.WriteHeader(http.StatusBadRequest)
return
func (ctx *BrokerContext) InstallBridgeListProfile(reader io.Reader) error {
if err := ctx.bridgeList.LoadBridgeInfo(reader); err != nil {
return err
}
if string(body) != id {
log.Println("Mismatched IDs!")
w.WriteHeader(http.StatusBadRequest)
return
}
log.Println("Received snowflake: ", id)
// Log geoip stats
remoteIP, _, err := net.SplitHostPort(r.RemoteAddr)
if err != nil {
log.Println("Error processing proxy IP: ", err.Error())
} else {
ctx.metrics.UpdateCountryStats(remoteIP)
}
// Wait for a client to avail an offer to the snowflake, or timeout if nil.
offer := ctx.RequestOffer(id)
if nil == offer {
log.Println("Proxy " + id + " did not receive a Client offer.")
ctx.metrics.proxyIdleCount++
w.WriteHeader(http.StatusGatewayTimeout)
return
}
log.Println("Passing client offer to snowflake.")
w.Write(offer)
return nil
}
/*
Expects a WebRTC SDP offer in the Request to give to an assigned
snowflake proxy, which responds with the SDP answer to be sent in
the HTTP response back to the client.
*/
func clientOffers(ctx *BrokerContext, w http.ResponseWriter, r *http.Request) {
startTime := time.Now()
offer, err := ioutil.ReadAll(http.MaxBytesReader(w, r.Body, readLimit))
if nil != err {
log.Println("Invalid data.")
w.WriteHeader(http.StatusBadRequest)
return
}
// Immediately fail if there are no snowflakes available.
if ctx.snowflakes.Len() <= 0 {
log.Println("Client: No snowflake proxies available.")
ctx.metrics.clientDeniedCount++
w.WriteHeader(http.StatusServiceUnavailable)
return
}
// Otherwise, find the most available snowflake proxy, and pass the offer to it.
// Delete must be deferred in order to correctly process answer request later.
snowflake := heap.Pop(ctx.snowflakes).(*Snowflake)
defer delete(ctx.idToSnowflake, snowflake.id)
snowflake.offerChannel <- offer
// Wait for the answer to be returned on the channel or timeout.
select {
case answer := <-snowflake.answerChannel:
log.Println("Client: Retrieving answer")
ctx.metrics.clientProxyMatchCount++
w.Write(answer)
// Initial tracking of elapsed time.
ctx.metrics.clientRoundtripEstimate = time.Since(startTime) /
time.Millisecond
case <-time.After(time.Second * ClientTimeout):
log.Println("Client: Timed out.")
w.WriteHeader(http.StatusGatewayTimeout)
w.Write([]byte("timed out waiting for answer!"))
func (ctx *BrokerContext) CheckProxyRelayPattern(pattern string, nonSupported bool) bool {
if nonSupported {
return false
}
proxyPattern := namematcher.NewNameMatcher(pattern)
brokerPattern := namematcher.NewNameMatcher(ctx.allowedRelayPattern)
return proxyPattern.IsSupersetOf(brokerPattern)
}
/*
Expects snowflake proxies which have previously successfully received
an offer from proxyHandler to respond with an answer in an HTTP POST,
which the broker will pass back to the original client.
*/
func proxyAnswers(ctx *BrokerContext, w http.ResponseWriter, r *http.Request) {
id := r.Header.Get("X-Session-ID")
snowflake, ok := ctx.idToSnowflake[id]
if !ok || nil == snowflake {
// The snowflake took too long to respond with an answer, so its client
// disappeared / the snowflake is no longer recognized by the Broker.
w.WriteHeader(http.StatusGone)
return
}
body, err := ioutil.ReadAll(http.MaxBytesReader(w, r.Body, readLimit))
if nil != err || nil == body || len(body) <= 0 {
log.Println("Invalid data.")
w.WriteHeader(http.StatusBadRequest)
return
}
log.Println("Received answer.")
snowflake.answerChannel <- body
}
func debugHandler(ctx *BrokerContext, w http.ResponseWriter, r *http.Request) {
s := fmt.Sprintf("current snowflakes available: %d\n", ctx.snowflakes.Len())
for _, snowflake := range ctx.idToSnowflake {
s += fmt.Sprintf("\nsnowflake %d: %s", snowflake.index, snowflake.id)
}
s += fmt.Sprintf("\n\nroundtrip avg: %d", ctx.metrics.clientRoundtripEstimate)
w.Write([]byte(s))
}
func robotsTxtHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Write([]byte("User-agent: *\nDisallow: /\n"))
// Client offer contains an SDP, bridge fingerprint and the NAT type of the client
type ClientOffer struct {
natType string
sdp []byte
fingerprint []byte
}
func main() {
@ -258,10 +194,13 @@ func main() {
var addr string
var geoipDatabase string
var geoip6Database string
var bridgeListFilePath, allowedRelayPattern string
var brokerSQSQueueName, brokerSQSQueueRegion string
var disableTLS bool
var certFilename, keyFilename string
var disableGeoip bool
var metricsFilename string
var unsafeLogging bool
flag.StringVar(&acmeEmail, "acme-email", "", "optional contact email for Let's Encrypt notifications")
flag.StringVar(&acmeHostnamesCommas, "acme-hostnames", "", "comma-separated hostnames for TLS certificate")
@ -271,20 +210,29 @@ func main() {
flag.StringVar(&addr, "addr", ":443", "address to listen on")
flag.StringVar(&geoipDatabase, "geoipdb", "/usr/share/tor/geoip", "path to correctly formatted geoip database mapping IPv4 address ranges to country codes")
flag.StringVar(&geoip6Database, "geoip6db", "/usr/share/tor/geoip6", "path to correctly formatted geoip database mapping IPv6 address ranges to country codes")
flag.StringVar(&bridgeListFilePath, "bridge-list-path", "", "file path for bridgeListFile")
flag.StringVar(&allowedRelayPattern, "allowed-relay-pattern", "", "allowed pattern for relay host name. The broker will reject proxies whose AcceptedRelayPattern is more restrictive than this")
flag.StringVar(&brokerSQSQueueName, "broker-sqs-name", "", "name of broker SQS queue to listen for incoming messages on")
flag.StringVar(&brokerSQSQueueRegion, "broker-sqs-region", "", "name of AWS region of broker SQS queue")
flag.BoolVar(&disableTLS, "disable-tls", false, "don't use HTTPS")
flag.BoolVar(&disableGeoip, "disable-geoip", false, "don't use geoip for stats collection")
flag.StringVar(&metricsFilename, "metrics-log", "", "path to metrics logging output")
flag.BoolVar(&unsafeLogging, "unsafe-logging", false, "prevent logs from being scrubbed")
flag.Parse()
var err error
var metricsFile io.Writer = os.Stdout
var metricsFile io.Writer
var logOutput io.Writer = os.Stderr
//We want to send the log output through our scrubber first
log.SetOutput(&safelog.LogScrubber{Output: logOutput})
if unsafeLogging {
log.SetOutput(logOutput)
} else {
// We want to send the log output through our scrubber first
log.SetOutput(&safelog.LogScrubber{Output: logOutput})
}
log.SetFlags(log.LstdFlags | log.LUTC)
if metricsFilename != "" {
var err error
metricsFile, err = os.OpenFile(metricsFilename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
@ -296,7 +244,18 @@ func main() {
metricsLogger := log.New(metricsFile, "", 0)
ctx := NewBrokerContext(metricsLogger, allowedRelayPattern)
if bridgeListFilePath != "" {
bridgeListFile, err := os.Open(bridgeListFilePath)
if err != nil {
log.Fatal(err.Error())
}
err = ctx.InstallBridgeListProfile(bridgeListFile)
if err != nil {
log.Fatal(err.Error())
}
}
if !disableGeoip {
err := ctx.metrics.LoadGeoipDatabases(geoipDatabase, geoip6Database)
@ -307,17 +266,39 @@ func main() {
go ctx.Broker()
i := &IPC{ctx}
http.HandleFunc("/robots.txt", robotsTxtHandler)
http.Handle("/proxy", SnowflakeHandler{i, proxyPolls})
http.Handle("/client", SnowflakeHandler{i, clientOffers})
http.Handle("/answer", SnowflakeHandler{i, proxyAnswers})
http.Handle("/debug", SnowflakeHandler{i, debugHandler})
http.Handle("/metrics", MetricsHandler{metricsFilename, metricsHandler})
http.Handle("/prometheus", promhttp.HandlerFor(ctx.metrics.promMetrics.registry, promhttp.HandlerOpts{}))
http.Handle("/amp/client/", SnowflakeHandler{i, ampClientOffers})
server := http.Server{
Addr: addr,
}
// Run SQS Handler to continuously poll and process messages from SQS
if brokerSQSQueueName != "" && brokerSQSQueueRegion != "" {
log.Printf("Loading SQSHandler using SQS Queue %s in region %s\n", brokerSQSQueueName, brokerSQSQueueRegion)
sqsHandlerContext := context.Background()
cfg, err := config.LoadDefaultConfig(sqsHandlerContext, config.WithRegion(brokerSQSQueueRegion))
if err != nil {
log.Fatal(err)
}
client := sqs.NewFromConfig(cfg)
sqsHandler, err := newSQSHandler(sqsHandlerContext, client, brokerSQSQueueName, brokerSQSQueueRegion, i)
if err != nil {
log.Fatal(err)
}
go sqsHandler.PollAndHandleMessages(sqsHandlerContext)
}
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGHUP)
@ -327,8 +308,10 @@ func main() {
go func() {
for {
signal := <-sigChan
log.Printf("Received signal: %s. Reloading geoip databases.", signal)
if err := ctx.metrics.LoadGeoipDatabases(geoipDatabase, geoip6Database); err != nil {
log.Fatalf("reload of Geo IP databases on signal %s returned error: %v", signal, err)
}
}
}()
@ -339,12 +322,13 @@ func main() {
// --disable-tls
// The outputs of this block of code are the disableTLS,
// needHTTP01Listener, certManager, and getCertificate variables.
if acmeHostnamesCommas != "" {
acmeHostnames := strings.Split(acmeHostnamesCommas, ",")
log.Printf("ACME hostnames: %q", acmeHostnames)
var cache autocert.Cache
if err := os.MkdirAll(acmeCertCacheDir, 0700); err != nil {
log.Printf("Warning: Couldn't create cache directory %q (reason: %s) so we're *not* using our certificate cache.", acmeCertCacheDir, err)
} else {
cache = autocert.DirCache(acmeCertCacheDir)


@ -1,240 +0,0 @@
/*
This code is for loading database data that maps ip addresses to countries
for collecting and presenting statistics on snowflake use that might alert us
to censorship events.
The functions here are heavily based off of how tor maintains and searches their
geoip database
The tables used for geoip data must be structured as follows:
Recognized line format for IPv4 is:
INTIPLOW,INTIPHIGH,CC
where INTIPLOW and INTIPHIGH are IPv4 addresses encoded as big-endian 4-byte unsigned
integers, and CC is a country code.
Note that the IPv4 line format
"INTIPLOW","INTIPHIGH","CC","CC3","COUNTRY NAME"
is not currently supported.
Recognized line format for IPv6 is:
IPV6LOW,IPV6HIGH,CC
where IPV6LOW and IPV6HIGH are IPv6 addresses and CC is a country code.
It also recognizes, and skips over, blank lines and lines that start
with '#' (comments).
*/
package main
import (
"bufio"
"bytes"
"crypto/sha1"
"encoding/hex"
"fmt"
"io"
"log"
"net"
"os"
"sort"
"strconv"
"strings"
"sync"
)
type GeoIPTable interface {
parseEntry(string) (*GeoIPEntry, error)
Len() int
Append(GeoIPEntry)
ElementAt(int) GeoIPEntry
Lock()
Unlock()
}
type GeoIPEntry struct {
ipLow net.IP
ipHigh net.IP
country string
}
type GeoIPv4Table struct {
table []GeoIPEntry
lock sync.Mutex // synchronization for geoip table accesses and reloads
}
type GeoIPv6Table struct {
table []GeoIPEntry
lock sync.Mutex // synchronization for geoip table accesses and reloads
}
func (table *GeoIPv4Table) Len() int { return len(table.table) }
func (table *GeoIPv6Table) Len() int { return len(table.table) }
func (table *GeoIPv4Table) Append(entry GeoIPEntry) {
(*table).table = append(table.table, entry)
}
func (table *GeoIPv6Table) Append(entry GeoIPEntry) {
(*table).table = append(table.table, entry)
}
func (table *GeoIPv4Table) ElementAt(i int) GeoIPEntry { return table.table[i] }
func (table *GeoIPv6Table) ElementAt(i int) GeoIPEntry { return table.table[i] }
func (table *GeoIPv4Table) Lock() { (*table).lock.Lock() }
func (table *GeoIPv6Table) Lock() { (*table).lock.Lock() }
func (table *GeoIPv4Table) Unlock() { (*table).lock.Unlock() }
func (table *GeoIPv6Table) Unlock() { (*table).lock.Unlock() }
// Convert a geoip IP address represented as a big-endian unsigned integer to net.IP
func geoipStringToIP(ipStr string) (net.IP, error) {
ip, err := strconv.ParseUint(ipStr, 10, 32)
if err != nil {
return net.IPv4(0, 0, 0, 0), fmt.Errorf("Error parsing IP %s", ipStr)
}
var bytes [4]byte
bytes[0] = byte(ip & 0xFF)
bytes[1] = byte((ip >> 8) & 0xFF)
bytes[2] = byte((ip >> 16) & 0xFF)
bytes[3] = byte((ip >> 24) & 0xFF)
return net.IPv4(bytes[3], bytes[2], bytes[1], bytes[0]), nil
}
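The conversion in geoipStringToIP can be exercised against the documented INTIPLOW,INTIPHIGH,CC line format. A standalone sketch (the sample values are illustrative) parses one IPv4 entry:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// decode mirrors geoipStringToIP: a decimal big-endian uint32 becomes a net.IP.
func decode(s string) net.IP {
	n, _ := strconv.ParseUint(s, 10, 32)
	return net.IPv4(byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
}

func main() {
	// 16777216 is 1.0.0.0 and 16777471 is 1.0.0.255 in the
	// INTIPLOW,INTIPHIGH,CC format described at the top of this file.
	parts := strings.Split("16777216,16777471,AU", ",")
	fmt.Println(decode(parts[0]), decode(parts[1]), parts[2]) // 1.0.0.0 1.0.0.255 AU
}
```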
//Parses a line in the provided geoip file that corresponds
//to an address range and a two character country code
func (table *GeoIPv4Table) parseEntry(candidate string) (*GeoIPEntry, error) {
if candidate[0] == '#' {
return nil, nil
}
parsedCandidate := strings.Split(candidate, ",")
if len(parsedCandidate) != 3 {
return nil, fmt.Errorf("Provided geoip file is incorrectly formatted. Could not parse line:\n%s", parsedCandidate)
}
low, err := geoipStringToIP(parsedCandidate[0])
if err != nil {
return nil, err
}
high, err := geoipStringToIP(parsedCandidate[1])
if err != nil {
return nil, err
}
geoipEntry := &GeoIPEntry{
ipLow: low,
ipHigh: high,
country: parsedCandidate[2],
}
return geoipEntry, nil
}
//Parses a line in the provided geoip file that corresponds
//to an address range and a two character country code
func (table *GeoIPv6Table) parseEntry(candidate string) (*GeoIPEntry, error) {
if candidate[0] == '#' {
return nil, nil
}
parsedCandidate := strings.Split(candidate, ",")
if len(parsedCandidate) != 3 {
return nil, fmt.Errorf("")
}
low := net.ParseIP(parsedCandidate[0])
if low == nil {
return nil, fmt.Errorf("")
}
high := net.ParseIP(parsedCandidate[1])
if high == nil {
return nil, fmt.Errorf("")
}
geoipEntry := &GeoIPEntry{
ipLow: low,
ipHigh: high,
country: parsedCandidate[2],
}
return geoipEntry, nil
}
//Loads provided geoip file into our tables
//Entries are stored in a table
func GeoIPLoadFile(table GeoIPTable, pathname string) error {
//open file
geoipFile, err := os.Open(pathname)
if err != nil {
return err
}
defer geoipFile.Close()
hash := sha1.New()
table.Lock()
defer table.Unlock()
hashedFile := io.TeeReader(geoipFile, hash)
//read in strings and call parse function
scanner := bufio.NewScanner(hashedFile)
for scanner.Scan() {
entry, err := table.parseEntry(scanner.Text())
if err != nil {
return fmt.Errorf("Provided geoip file is incorrectly formatted. Line is: %+q", scanner.Text())
}
if entry != nil {
table.Append(*entry)
}
}
if err := scanner.Err(); err != nil {
return err
}
sha1Hash := hex.EncodeToString(hash.Sum(nil))
log.Println("Using geoip file ", pathname, " with checksum", sha1Hash)
log.Println("Loaded ", table.Len(), " entries into table")
return nil
}
//Returns the country location of an IPv4 or IPv6 address, and a boolean value
//that indicates whether the IP address was present in the geoip database
func GetCountryByAddr(table GeoIPTable, ip net.IP) (string, bool) {
table.Lock()
defer table.Unlock()
//look IP up in database
index := sort.Search(table.Len(), func(i int) bool {
entry := table.ElementAt(i)
return (bytes.Compare(ip.To16(), entry.ipHigh.To16()) <= 0)
})
if index == table.Len() {
return "", false
}
// check to see if addr is in the range specified by the returned index
// search on IPs in invalid ranges (e.g., 127.0.0.0/8) will return the
//country code of the next highest range
entry := table.ElementAt(index)
if !(bytes.Compare(ip.To16(), entry.ipLow.To16()) >= 0 &&
bytes.Compare(ip.To16(), entry.ipHigh.To16()) <= 0) {
return "", false
}
return table.ElementAt(index).country, true
}

broker/http.go Normal file

@ -0,0 +1,259 @@
package main
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"log"
"net/http"
"os"
"time"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/messages"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/util"
)
const (
readLimit = 100000 // Maximum number of bytes to be read from an HTTP request
)
// Implements the http.Handler interface
type SnowflakeHandler struct {
*IPC
handle func(*IPC, http.ResponseWriter, *http.Request)
}
func (sh SnowflakeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Headers", "Origin, X-Session-ID")
// Return early if it's CORS preflight.
if "OPTIONS" == r.Method {
return
}
sh.handle(sh.IPC, w, r)
}
// Implements the http.Handler interface
type MetricsHandler struct {
logFilename string
handle func(string, http.ResponseWriter, *http.Request)
}
func (mh MetricsHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Headers", "Origin, X-Session-ID")
// Return early if it's CORS preflight.
if "OPTIONS" == r.Method {
return
}
mh.handle(mh.logFilename, w, r)
}
func robotsTxtHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
if _, err := w.Write([]byte("User-agent: *\nDisallow: /\n")); err != nil {
log.Printf("robotsTxtHandler unable to write, with this error: %v", err)
}
}
func metricsHandler(metricsFilename string, w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
if metricsFilename == "" {
http.NotFound(w, r)
return
}
metricsFile, err := os.OpenFile(metricsFilename, os.O_RDONLY, 0644)
if err != nil {
log.Println("Error opening metrics file for reading")
http.NotFound(w, r)
return
}
if _, err := io.Copy(w, metricsFile); err != nil {
log.Printf("copying metricsFile returned error: %v", err)
}
}
func debugHandler(i *IPC, w http.ResponseWriter, r *http.Request) {
var response string
err := i.Debug(new(interface{}), &response)
if err != nil {
log.Println(err)
w.WriteHeader(http.StatusInternalServerError)
return
}
if _, err := w.Write([]byte(response)); err != nil {
log.Printf("writing proxy information returned error: %v ", err)
}
}
/*
For snowflake proxies to request a client from the Broker.
*/
func proxyPolls(i *IPC, w http.ResponseWriter, r *http.Request) {
body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, readLimit))
if err != nil {
log.Println("Invalid data.", err.Error())
w.WriteHeader(http.StatusBadRequest)
return
}
arg := messages.Arg{
Body: body,
RemoteAddr: util.GetClientIp(r),
}
var response []byte
err = i.ProxyPolls(arg, &response)
switch {
case err == nil:
case errors.Is(err, messages.ErrBadRequest):
w.WriteHeader(http.StatusBadRequest)
return
case errors.Is(err, messages.ErrInternal):
fallthrough
default:
log.Println(err)
w.WriteHeader(http.StatusInternalServerError)
return
}
if _, err := w.Write(response); err != nil {
log.Printf("proxyPolls unable to write offer with error: %v", err)
}
}
/*
Expects a WebRTC SDP offer in the Request to give to an assigned
snowflake proxy, which responds with the SDP answer to be sent in
the HTTP response back to the client.
*/
func clientOffers(i *IPC, w http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), ClientTimeout*time.Second)
defer cancel()
body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, readLimit))
if err != nil {
log.Printf("Error reading client request: %s", err.Error())
w.WriteHeader(http.StatusBadRequest)
return
}
// Handle the legacy version
//
// We support two client message formats. The legacy format is for backwards
compatibility and relies heavily on HTTP headers and status codes to convey
// information.
isLegacy := false
if len(body) > 0 && body[0] == '{' {
isLegacy = true
req := messages.ClientPollRequest{
Offer: string(body),
NAT: r.Header.Get("Snowflake-NAT-Type"),
}
body, err = req.EncodeClientPollRequest()
if err != nil {
log.Printf("Error shimming the legacy request: %s", err.Error())
w.WriteHeader(http.StatusInternalServerError)
return
}
}
arg := messages.Arg{
Body: body,
RemoteAddr: util.GetClientIp(r),
RendezvousMethod: messages.RendezvousHttp,
Context: ctx,
}
var response []byte
err = i.ClientOffers(arg, &response)
if err != nil {
log.Println(err)
w.WriteHeader(http.StatusInternalServerError)
return
}
if isLegacy {
resp, err := messages.DecodeClientPollResponse(response)
if err != nil {
log.Println(err)
w.WriteHeader(http.StatusInternalServerError)
return
}
switch resp.Error {
case "":
response = []byte(resp.Answer)
case messages.StrNoProxies:
w.WriteHeader(http.StatusServiceUnavailable)
return
case messages.StrTimedOut:
w.WriteHeader(http.StatusGatewayTimeout)
return
default:
panic("unknown error")
}
}
if _, err := w.Write(response); err != nil {
log.Printf("clientOffers unable to write answer with error: %v", err)
}
}
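The legacy shim in clientOffers hinges on one heuristic: legacy clients POST a bare JSON session description, so the body's first byte is '{', while current clients send the versioned poll-request encoding. A minimal sketch of that check:

```go
package main

import "fmt"

// isLegacy mirrors the check in clientOffers: a body that starts with
// '{' is treated as a bare legacy JSON offer.
func isLegacy(body []byte) bool {
	return len(body) > 0 && body[0] == '{'
}

func main() {
	fmt.Println(isLegacy([]byte(`{"type":"offer","sdp":"..."}`))) // true
	fmt.Println(isLegacy([]byte("1.0\n{...}")))                   // false: versioned format
}
```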
/*
Expects snowflake proxies which have previously successfully received
an offer from proxyHandler to respond with an answer in an HTTP POST,
which the broker will pass back to the original client.
*/
func proxyAnswers(i *IPC, w http.ResponseWriter, r *http.Request) {
body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, readLimit))
if err != nil {
log.Println("Invalid data.", err.Error())
w.WriteHeader(http.StatusBadRequest)
return
}
err = validateSDP(body)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
arg := messages.Arg{
Body: body,
RemoteAddr: util.GetClientIp(r),
}
var response []byte
err = i.ProxyAnswers(arg, &response)
switch {
case err == nil:
case errors.Is(err, messages.ErrBadRequest):
w.WriteHeader(http.StatusBadRequest)
return
case errors.Is(err, messages.ErrInternal):
fallthrough
default:
log.Println(err)
w.WriteHeader(http.StatusInternalServerError)
return
}
if _, err := w.Write(response); err != nil {
log.Printf("proxyAnswers unable to write answer response with error: %v", err)
}
}
func validateSDP(SDP []byte) error {
// TODO: more validation likely needed
if !bytes.Contains(SDP, []byte("a=candidate")) {
return fmt.Errorf("SDP contains no candidate")
}
return nil
}

broker/ipc.go Normal file

@ -0,0 +1,272 @@
package main
import (
"container/heap"
"encoding/hex"
"fmt"
"log"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/bridgefingerprint"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/constants"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/util"
"github.com/prometheus/client_golang/prometheus"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/messages"
)
const (
ClientTimeout = constants.BrokerClientTimeout
ProxyTimeout = 10
NATUnknown = "unknown"
NATRestricted = "restricted"
NATUnrestricted = "unrestricted"
)
type IPC struct {
ctx *BrokerContext
}
func (i *IPC) Debug(_ interface{}, response *string) error {
var unknowns int
var natRestricted, natUnrestricted, natUnknown int
proxyTypes := make(map[string]int)
i.ctx.snowflakeLock.Lock()
s := fmt.Sprintf("current snowflakes available: %d\n", len(i.ctx.idToSnowflake))
for _, snowflake := range i.ctx.idToSnowflake {
if messages.KnownProxyTypes[snowflake.proxyType] {
proxyTypes[snowflake.proxyType]++
} else {
unknowns++
}
switch snowflake.natType {
case NATRestricted:
natRestricted++
case NATUnrestricted:
natUnrestricted++
default:
natUnknown++
}
}
i.ctx.snowflakeLock.Unlock()
for pType, num := range proxyTypes {
s += fmt.Sprintf("\t%s proxies: %d\n", pType, num)
}
s += fmt.Sprintf("\tunknown proxies: %d", unknowns)
s += fmt.Sprintf("\nNAT Types available:")
s += fmt.Sprintf("\n\trestricted: %d", natRestricted)
s += fmt.Sprintf("\n\tunrestricted: %d", natUnrestricted)
s += fmt.Sprintf("\n\tunknown: %d", natUnknown)
*response = s
return nil
}
func (i *IPC) ProxyPolls(arg messages.Arg, response *[]byte) error {
sid, proxyType, natType, clients, relayPattern, relayPatternSupported, err := messages.DecodeProxyPollRequestWithRelayPrefix(arg.Body)
if err != nil {
return messages.ErrBadRequest
}
if !relayPatternSupported {
i.ctx.metrics.IncrementCounter("proxy-poll-without-relay-url")
i.ctx.metrics.promMetrics.ProxyPollWithoutRelayURLExtensionTotal.With(prometheus.Labels{"nat": natType, "type": proxyType}).Inc()
} else {
i.ctx.metrics.IncrementCounter("proxy-poll-with-relay-url")
i.ctx.metrics.promMetrics.ProxyPollWithRelayURLExtensionTotal.With(prometheus.Labels{"nat": natType, "type": proxyType}).Inc()
}
if !i.ctx.CheckProxyRelayPattern(relayPattern, !relayPatternSupported) {
i.ctx.metrics.IncrementCounter("proxy-poll-rejected-relay-url")
i.ctx.metrics.promMetrics.ProxyPollRejectedForRelayURLExtensionTotal.With(prometheus.Labels{"nat": natType, "type": proxyType}).Inc()
b, err := messages.EncodePollResponseWithRelayURL("", false, "", "", "incorrect relay pattern")
*response = b
if err != nil {
return messages.ErrInternal
}
return nil
}
// Log geoip stats
remoteIP := arg.RemoteAddr
if err != nil {
log.Println("Warning: cannot process proxy IP: ", err.Error())
} else {
i.ctx.metrics.UpdateProxyStats(remoteIP, proxyType, natType)
}
var b []byte
// Wait for a client to avail an offer to the snowflake, or timeout if nil.
offer := i.ctx.RequestOffer(sid, proxyType, natType, clients)
if offer == nil {
i.ctx.metrics.IncrementCounter("proxy-idle")
i.ctx.metrics.promMetrics.ProxyPollTotal.With(prometheus.Labels{"nat": natType, "type": proxyType, "status": "idle"}).Inc()
b, err = messages.EncodePollResponse("", false, "")
if err != nil {
return messages.ErrInternal
}
*response = b
return nil
}
i.ctx.metrics.promMetrics.ProxyPollTotal.With(prometheus.Labels{"nat": natType, "type": proxyType, "status": "matched"}).Inc()
var relayURL string
bridgeFingerprint, err := bridgefingerprint.FingerprintFromBytes(offer.fingerprint)
if err != nil {
return messages.ErrBadRequest
}
if info, err := i.ctx.bridgeList.GetBridgeInfo(bridgeFingerprint); err != nil {
return err
} else {
relayURL = info.WebSocketAddress
}
b, err = messages.EncodePollResponseWithRelayURL(string(offer.sdp), true, offer.natType, relayURL, "")
if err != nil {
return messages.ErrInternal
}
*response = b
return nil
}
func sendClientResponse(resp *messages.ClientPollResponse, response *[]byte) error {
data, err := resp.EncodePollResponse()
if err != nil {
log.Printf("error encoding answer")
return messages.ErrInternal
} else {
*response = []byte(data)
return nil
}
}
func (i *IPC) ClientOffers(arg messages.Arg, response *[]byte) error {
req, err := messages.DecodeClientPollRequest(arg.Body)
if err != nil {
return sendClientResponse(&messages.ClientPollResponse{Error: err.Error()}, response)
}
// If we couldn't extract the remote IP from the rendezvous method
// pull it from the offer SDP
remoteAddr := arg.RemoteAddr
if remoteAddr == "" {
sdp, err := util.DeserializeSessionDescription(req.Offer)
if err == nil {
candidateAddrs := util.GetCandidateAddrs(sdp.SDP)
if len(candidateAddrs) > 0 {
remoteAddr = candidateAddrs[0].String()
}
}
}
offer := &ClientOffer{
natType: req.NAT,
sdp: []byte(req.Offer),
}
fingerprint, err := hex.DecodeString(req.Fingerprint)
if err != nil {
return sendClientResponse(&messages.ClientPollResponse{Error: err.Error()}, response)
}
BridgeFingerprint, err := bridgefingerprint.FingerprintFromBytes(fingerprint)
if err != nil {
return sendClientResponse(&messages.ClientPollResponse{Error: err.Error()}, response)
}
if _, err := i.ctx.GetBridgeInfo(BridgeFingerprint); err != nil {
return sendClientResponse(
&messages.ClientPollResponse{Error: err.Error()},
response,
)
}
offer.fingerprint = BridgeFingerprint.ToBytes()
snowflake := i.matchSnowflake(offer.natType)
if snowflake != nil {
snowflake.offerChannel <- offer
} else {
i.ctx.metrics.UpdateClientStats(remoteAddr, arg.RendezvousMethod, offer.natType, "denied")
resp := &messages.ClientPollResponse{Error: messages.StrNoProxies}
return sendClientResponse(resp, response)
}
// Wait for the answer to be returned on the channel or timeout.
select {
case answer := <-snowflake.answerChannel:
i.ctx.metrics.UpdateClientStats(remoteAddr, arg.RendezvousMethod, offer.natType, "matched")
resp := &messages.ClientPollResponse{Answer: answer}
err = sendClientResponse(resp, response)
case <-arg.Context.Done():
i.ctx.metrics.UpdateClientStats(remoteAddr, arg.RendezvousMethod, offer.natType, "timeout")
resp := &messages.ClientPollResponse{Error: messages.StrTimedOut}
err = sendClientResponse(resp, response)
}
i.ctx.snowflakeLock.Lock()
i.ctx.metrics.promMetrics.AvailableProxies.With(prometheus.Labels{"nat": snowflake.natType, "type": snowflake.proxyType}).Dec()
delete(i.ctx.idToSnowflake, snowflake.id)
i.ctx.snowflakeLock.Unlock()
return err
}
func (i *IPC) matchSnowflake(natType string) *Snowflake {
i.ctx.snowflakeLock.Lock()
defer i.ctx.snowflakeLock.Unlock()
// Prioritize known restricted snowflakes for unrestricted clients
if natType == NATUnrestricted && i.ctx.restrictedSnowflakes.Len() > 0 {
return heap.Pop(i.ctx.restrictedSnowflakes).(*Snowflake)
}
if i.ctx.snowflakes.Len() > 0 {
return heap.Pop(i.ctx.snowflakes).(*Snowflake)
}
return nil
}
func (i *IPC) ProxyAnswers(arg messages.Arg, response *[]byte) error {
answer, id, err := messages.DecodeAnswerRequest(arg.Body)
if err != nil || answer == "" {
return messages.ErrBadRequest
}
var success = true
i.ctx.snowflakeLock.Lock()
snowflake, ok := i.ctx.idToSnowflake[id]
i.ctx.snowflakeLock.Unlock()
if !ok || snowflake == nil {
// The snowflake took too long to respond with an answer, so its client
// disappeared / the snowflake is no longer recognized by the Broker.
success = false
i.ctx.metrics.promMetrics.ProxyAnswerTotal.With(prometheus.Labels{"type": "", "status": "timeout"}).Inc()
}
b, err := messages.EncodeAnswerResponse(success)
if err != nil {
log.Printf("Error encoding answer: %s", err.Error())
return messages.ErrInternal
}
*response = b
if success {
i.ctx.metrics.promMetrics.ProxyAnswerTotal.With(prometheus.Labels{"type": snowflake.proxyType, "status": "success"}).Inc()
snowflake.answerChannel <- answer
}
return nil
}


@ -1,198 +1,390 @@
/*
We export metrics in the format specified in our broker spec:
https://gitweb.torproject.org/pluggable-transports/snowflake.git/tree/doc/broker-spec.txt
*/
package main
import (
// "golang.org/x/net/internal/timeseries"
"fmt"
"log"
"math"
"net"
"sort"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/prometheus/client_golang/prometheus"
"gitlab.torproject.org/tpo/anti-censorship/geoip"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil/safeprom"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/messages"
)
const (
prometheusNamespace = "snowflake"
metricsResolution = 60 * 60 * 24 * time.Second // 86400 seconds
)
// Implements Observable
type Metrics struct {
logger *log.Logger
geoipdb *geoip.Geoip
ips *sync.Map // proxy IP addresses we've seen before
counters *sync.Map // counters for ip-based metrics
// counters for country-based metrics
proxies *sync.Map // ip-based counts of proxy country codes
clientHTTPPolls *sync.Map // poll-based counts of client HTTP rendezvous
clientAMPPolls *sync.Map // poll-based counts of client AMP cache rendezvous
clientSQSPolls *sync.Map // poll-based counts of client SQS rendezvous
promMetrics *PromMetrics
}
func NewMetrics(metricsLogger *log.Logger) (*Metrics, error) {
m := new(Metrics)
m.logger = metricsLogger
m.promMetrics = initPrometheus()
m.ips = new(sync.Map)
m.counters = new(sync.Map)
m.proxies = new(sync.Map)
m.clientHTTPPolls = new(sync.Map)
m.clientAMPPolls = new(sync.Map)
m.clientSQSPolls = new(sync.Map)
// Write to log file every day with updated metrics
go m.logMetrics()
return m, nil
}
func incrementMapCounter(counters *sync.Map, key string) {
start := uint64(1)
val, loaded := counters.LoadOrStore(key, &start)
if loaded {
ptr := val.(*uint64)
atomic.AddUint64(ptr, 1)
}
}
func (m *Metrics) IncrementCounter(key string) {
incrementMapCounter(m.counters, key)
}
func (m *Metrics) UpdateProxyStats(addr string, proxyType string, natType string) {
// perform geolocation of IP address
ip := net.ParseIP(addr)
if m.geoipdb == nil {
return
}
country, ok := m.geoipdb.GetCountryByAddr(ip)
if !ok {
country = "??"
}
// check whether we've seen this proxy ip before
if _, loaded := m.ips.LoadOrStore(addr, true); !loaded {
m.IncrementCounter("proxy-total")
incrementMapCounter(m.proxies, country)
m.promMetrics.ProxyTotal.With(prometheus.Labels{
"nat": natType,
"type": proxyType,
"cc": country,
}).Inc()
}
// update unique IP proxy NAT metrics
key := fmt.Sprintf("%s-%s", addr, natType)
if _, loaded := m.ips.LoadOrStore(key, true); !loaded {
switch natType {
case NATRestricted:
m.IncrementCounter("proxy-nat-restricted")
case NATUnrestricted:
m.IncrementCounter("proxy-nat-unrestricted")
default:
m.IncrementCounter("proxy-nat-unknown")
}
}
// update unique IP proxy type metrics
key = fmt.Sprintf("%s-%s", addr, proxyType)
if _, loaded := m.ips.LoadOrStore(key, true); !loaded {
switch proxyType {
case "standalone":
m.IncrementCounter("proxy-standalone")
case "badge":
m.IncrementCounter("proxy-badge")
case "iptproxy":
m.IncrementCounter("proxy-iptproxy")
case "webext":
m.IncrementCounter("proxy-webext")
}
}
}
func (m *Metrics) UpdateClientStats(addr string, rendezvousMethod messages.RendezvousMethod, natType, status string) {
ip := net.ParseIP(addr)
country := "??"
if m.geoipdb != nil {
country_by_addr, ok := m.geoipdb.GetCountryByAddr(ip)
if ok {
country = country_by_addr
}
}
switch status {
case "denied":
m.IncrementCounter("client-denied")
if natType == NATUnrestricted {
m.IncrementCounter("client-unrestricted-denied")
} else {
m.IncrementCounter("client-restricted-denied")
}
case "matched":
m.IncrementCounter("client-match")
case "timeout":
m.IncrementCounter("client-timeout")
default:
log.Printf("Unknown rendezvous status: %s", status)
}
switch rendezvousMethod {
case messages.RendezvousHttp:
m.IncrementCounter("client-http")
incrementMapCounter(m.clientHTTPPolls, country)
case messages.RendezvousAmpCache:
m.IncrementCounter("client-amp")
incrementMapCounter(m.clientAMPPolls, country)
case messages.RendezvousSqs:
m.IncrementCounter("client-sqs")
incrementMapCounter(m.clientSQSPolls, country)
}
m.promMetrics.ClientPollTotal.With(prometheus.Labels{
"nat": natType,
"status": status,
"rendezvous_method": string(rendezvousMethod),
"cc": country,
}).Inc()
}
// Types to facilitate sorting in formatAndClearCountryStats.
type record struct {
cc string
count uint64
}
type records []record
// Implementation of sort.Interface for records. The ordering is lexicographic:
// first by count (descending), then by cc (ascending).
func (r records) Len() int { return len(r) }
func (r records) Swap(i, j int) { r[i], r[j] = r[j], r[i] }
func (r records) Less(i, j int) bool {
return r[i].count > r[j].count || (r[i].count == r[j].count && r[i].cc < r[j].cc)
}
// formatAndClearCountryStats takes a map from country codes to counts, and
// returns a formatted string of comma-separated CC=COUNT. Entries are sorted by
// count from largest to smallest. When counts are equal, entries are sorted by
// country code in ascending order.
//
// formatAndClearCountryStats has the side effect of deleting all entries in m.
func formatAndClearCountryStats(m *sync.Map, binned bool) string {
// Extract entries from the map into a slice of records, binning counts
// if asked to.
rs := records{}
m.Range(func(cc, countPtr any) bool {
count := *countPtr.(*uint64)
if binned {
count = binCount(count)
}
rs = append(rs, record{cc: cc.(string), count: count})
m.Delete(cc)
return true
})
// Sort the records.
sort.Sort(rs)
// Format and concatenate.
var output strings.Builder
for i, r := range rs {
if i != 0 {
output.WriteString(",")
}
fmt.Fprintf(&output, "%s=%d", r.cc, r.count)
}
return output.String()
}
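The ordering formatAndClearCountryStats produces can be checked in isolation: counts sort descending, with ties broken by country code ascending. A standalone sketch (formatCountryStats is an illustrative name, not the broker's API):

```go
package main

import (
	"fmt"
	"sort"
)

type record struct {
	cc    string
	count uint64
}

// formatCountryStats reproduces the CC=COUNT ordering used above:
// count descending, then country code ascending on ties.
func formatCountryStats(rs []record) string {
	sort.Slice(rs, func(i, j int) bool {
		return rs[i].count > rs[j].count || (rs[i].count == rs[j].count && rs[i].cc < rs[j].cc)
	})
	out := ""
	for i, r := range rs {
		if i > 0 {
			out += ","
		}
		out += fmt.Sprintf("%s=%d", r.cc, r.count)
	}
	return out
}

func main() {
	fmt.Println(formatCountryStats([]record{{"RU", 2}, {"US", 5}, {"DE", 2}})) // US=5,DE=2,RU=2
}
```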
func (m *Metrics) LoadGeoipDatabases(geoipDB string, geoip6DB string) error {
// Load geoip databases
var err error
log.Println("Loading geoip databases")
m.geoipdb, err = geoip.New(geoipDB, geoip6DB)
return err
}
// Logs metrics in intervals specified by metricsResolution
func (m *Metrics) logMetrics() {
heartbeat := time.Tick(metricsResolution)
for range heartbeat {
m.printMetrics()
m.zeroMetrics()
}
}
func (m *Metrics) loadAndZero(key string) uint64 {
count, loaded := m.counters.LoadAndDelete(key)
if !loaded {
count = new(uint64)
}
ptr := count.(*uint64)
return *ptr
}
func (m *Metrics) printMetrics() {
m.logger.Println(
"snowflake-stats-end",
time.Now().UTC().Format("2006-01-02 15:04:05"),
fmt.Sprintf("(%d s)", int(metricsResolution.Seconds())),
)
m.logger.Println("snowflake-ips", formatAndClearCountryStats(m.proxies, false))
m.logger.Printf("snowflake-ips-iptproxy %d\n", m.loadAndZero("proxy-iptproxy"))
m.logger.Printf("snowflake-ips-standalone %d\n", m.loadAndZero("proxy-standalone"))
m.logger.Printf("snowflake-ips-webext %d\n", m.loadAndZero("proxy-webext"))
m.logger.Printf("snowflake-ips-badge %d\n", m.loadAndZero("proxy-badge"))
m.logger.Println("snowflake-ips-total", m.loadAndZero("proxy-total"))
m.logger.Println("snowflake-idle-count", binCount(m.loadAndZero("proxy-idle")))
m.logger.Println("snowflake-proxy-poll-with-relay-url-count", binCount(m.loadAndZero("proxy-poll-with-relay-url")))
m.logger.Println("snowflake-proxy-poll-without-relay-url-count", binCount(m.loadAndZero("proxy-poll-without-relay-url")))
m.logger.Println("snowflake-proxy-rejected-for-relay-url-count", binCount(m.loadAndZero("proxy-poll-rejected-relay-url")))
m.logger.Println("client-denied-count", binCount(m.loadAndZero("client-denied")))
m.logger.Println("client-restricted-denied-count", binCount(m.loadAndZero("client-restricted-denied")))
m.logger.Println("client-unrestricted-denied-count", binCount(m.loadAndZero("client-unrestricted-denied")))
m.logger.Println("client-snowflake-match-count", binCount(m.loadAndZero("client-match")))
m.logger.Println("client-snowflake-timeout-count", binCount(m.loadAndZero("client-timeout")))
m.logger.Printf("client-http-count %d\n", binCount(m.loadAndZero("client-http")))
m.logger.Printf("client-http-ips %s\n", formatAndClearCountryStats(m.clientHTTPPolls, true))
m.logger.Printf("client-ampcache-count %d\n", binCount(m.loadAndZero("client-amp")))
m.logger.Printf("client-ampcache-ips %s\n", formatAndClearCountryStats(m.clientAMPPolls, true))
m.logger.Printf("client-sqs-count %d\n", binCount(m.loadAndZero("client-sqs")))
m.logger.Printf("client-sqs-ips %s\n", formatAndClearCountryStats(m.clientSQSPolls, true))
m.logger.Println("snowflake-ips-nat-restricted", m.loadAndZero("proxy-nat-restricted"))
m.logger.Println("snowflake-ips-nat-unrestricted", m.loadAndZero("proxy-nat-unrestricted"))
m.logger.Println("snowflake-ips-nat-unknown", m.loadAndZero("proxy-nat-unknown"))
m.ips.Clear()
}
// Restores all metrics to original values
func (m *Metrics) zeroMetrics() {
m.proxyIdleCount = 0
m.clientDeniedCount = 0
m.clientProxyMatchCount = 0
m.countryStats.counts = make(map[string]int)
m.countryStats.addrs = make(map[string]bool)
}
// binCount rounds count up to the next multiple of 8. Returns 0 on integer
// overflow.
func binCount(count uint64) uint64 {
return (count + 7) / 8 * 8
}
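As a standalone illustration of the binning behavior (a copy of the helper in a `main` package, not the broker's file): counts are rounded up to the next multiple of 8 so that exact per-interval values are not exposed in the logs.

```go
package main

import "fmt"

// binCount rounds count up to the next multiple of 8, mirroring the broker's
// privacy-preserving binning. (count + 7) wraps around on overflow, so values
// within 7 of the uint64 maximum bin to 0.
func binCount(count uint64) uint64 {
	return (count + 7) / 8 * 8
}

func main() {
	for _, n := range []uint64{0, 1, 7, 8, 9, 112} {
		fmt.Printf("binCount(%d) = %d\n", n, binCount(n))
	}
	// binCount(0) = 0, binCount(1) = 8, binCount(7) = 8,
	// binCount(8) = 8, binCount(9) = 16, binCount(112) = 112
}
```

Exact multiples of 8 are left unchanged, which is why the test below expects `MY=112` to survive binning while `AT=105` becomes 112.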
type PromMetrics struct {
registry *prometheus.Registry
ProxyTotal *prometheus.CounterVec
ProxyPollTotal *safeprom.CounterVec
ClientPollTotal *safeprom.CounterVec
ProxyAnswerTotal *safeprom.CounterVec
AvailableProxies *prometheus.GaugeVec
ProxyPollWithRelayURLExtensionTotal *safeprom.CounterVec
ProxyPollWithoutRelayURLExtensionTotal *safeprom.CounterVec
ProxyPollRejectedForRelayURLExtensionTotal *safeprom.CounterVec
}
// Initialize metrics for prometheus exporter
func initPrometheus() *PromMetrics {
promMetrics := &PromMetrics{}
promMetrics.registry = prometheus.NewRegistry()
promMetrics.ProxyTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "proxy_total",
Help: "The number of unique snowflake IPs",
},
[]string{"type", "nat", "cc"},
)
promMetrics.AvailableProxies = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: prometheusNamespace,
Name: "available_proxies",
Help: "The number of currently available snowflake proxies",
},
[]string{"type", "nat"},
)
promMetrics.ProxyPollTotal = safeprom.NewCounterVec(
prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "rounded_proxy_poll_total",
Help: "The number of snowflake proxy polls, rounded up to a multiple of 8",
},
[]string{"nat", "type", "status"},
)
promMetrics.ProxyAnswerTotal = safeprom.NewCounterVec(
prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "rounded_proxy_answer_total",
Help: "The number of snowflake proxy answers, rounded up to a multiple of 8",
},
[]string{"type", "status"},
)
promMetrics.ProxyPollWithRelayURLExtensionTotal = safeprom.NewCounterVec(
prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "rounded_proxy_poll_with_relay_url_extension_total",
Help: "The number of snowflake proxy polls with Relay URL Extension, rounded up to a multiple of 8",
},
[]string{"nat", "type"},
)
promMetrics.ProxyPollWithoutRelayURLExtensionTotal = safeprom.NewCounterVec(
prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "rounded_proxy_poll_without_relay_url_extension_total",
Help: "The number of snowflake proxy polls without Relay URL Extension, rounded up to a multiple of 8",
},
[]string{"nat", "type"},
)
promMetrics.ProxyPollRejectedForRelayURLExtensionTotal = safeprom.NewCounterVec(
prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "rounded_proxy_poll_rejected_relay_url_extension_total",
Help: "The number of snowflake proxy polls rejected by Relay URL Extension, rounded up to a multiple of 8",
},
[]string{"nat", "type"},
)
promMetrics.ClientPollTotal = safeprom.NewCounterVec(
prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "rounded_client_poll_total",
Help: "The number of snowflake client polls, rounded up to a multiple of 8",
},
[]string{"nat", "status", "cc", "rendezvous_method"},
)
// We need to register our metrics so they can be exported.
promMetrics.registry.MustRegister(
promMetrics.ClientPollTotal, promMetrics.ProxyPollTotal,
promMetrics.ProxyTotal, promMetrics.ProxyAnswerTotal, promMetrics.AvailableProxies,
promMetrics.ProxyPollWithRelayURLExtensionTotal,
promMetrics.ProxyPollWithoutRelayURLExtensionTotal,
promMetrics.ProxyPollRejectedForRelayURLExtensionTotal,
)
return promMetrics
}

47
broker/metrics_test.go Normal file

@ -0,0 +1,47 @@
package main
import (
"sync"
"testing"
. "github.com/smartystreets/goconvey/convey"
)
func TestFormatAndClearCountryStats(t *testing.T) {
Convey("given a mapping of country stats", t, func() {
stats := new(sync.Map)
for _, record := range []struct {
cc string
count uint64
}{
{"IT", 50},
{"FR", 200},
{"TZ", 100},
{"CN", 250},
{"RU", 150},
{"CA", 1},
{"BE", 1},
{"PH", 1},
// The next 3 bin to the same value, 112. When not
// binned, they should go in the order MY,ZA,AT (ordered
// by count). When binned, they should go in the order
// AT,MY,ZA (ordered by country code).
{"AT", 105},
{"MY", 112},
{"ZA", 108},
} {
stats.Store(record.cc, &record.count)
}
Convey("the order should be correct with binned=false", func() {
So(formatAndClearCountryStats(stats, false), ShouldEqual, "CN=250,FR=200,RU=150,MY=112,ZA=108,AT=105,TZ=100,IT=50,BE=1,CA=1,PH=1")
})
Convey("the order should be correct with binned=true", func() {
So(formatAndClearCountryStats(stats, true), ShouldEqual, "CN=256,FR=200,RU=152,AT=112,MY=112,ZA=112,TZ=104,IT=56,BE=8,CA=8,PH=8")
})
// The map should be cleared on return.
stats.Range(func(_, _ any) bool { panic("map was not cleared") })
})
}

File diff suppressed because it is too large


@ -10,8 +10,10 @@ over the offer and answer channels.
*/
type Snowflake struct {
id string
proxyType string
natType string
offerChannel chan *ClientOffer
answerChannel chan string
clients int
index int
}

217
broker/sqs.go Normal file

@ -0,0 +1,217 @@
package main
import (
"context"
"log"
"strconv"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/sqs"
"github.com/aws/aws-sdk-go-v2/service/sqs/types"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/messages"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/sqsclient"
)
const (
cleanupThreshold = -2 * time.Minute
)
type sqsHandler struct {
SQSClient sqsclient.SQSClient
SQSQueueURL *string
IPC *IPC
cleanupInterval time.Duration
}
func (r *sqsHandler) pollMessages(ctx context.Context, chn chan<- *types.Message) {
for {
select {
case <-ctx.Done():
// if context is cancelled
return
default:
res, err := r.SQSClient.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
QueueUrl: r.SQSQueueURL,
MaxNumberOfMessages: 10,
WaitTimeSeconds: 15,
MessageAttributeNames: []string{
string(types.QueueAttributeNameAll),
},
})
if err != nil {
log.Printf("SQSHandler: encountered error while polling for messages: %v\n", err)
continue
}
for _, message := range res.Messages {
message := message // copy: sending &message directly would alias the loop variable across iterations (pre-Go 1.22 semantics)
chn <- &message
}
}
}
}
func (r *sqsHandler) cleanupClientQueues(ctx context.Context) {
for range time.NewTicker(r.cleanupInterval).C {
// Runs at fixed intervals to clean up any client queues that were last changed more than 2 minutes ago
select {
case <-ctx.Done():
// if context is cancelled
return
default:
queueURLsList := []string{}
var nextToken *string
for {
res, err := r.SQSClient.ListQueues(ctx, &sqs.ListQueuesInput{
QueueNamePrefix: aws.String("snowflake-client-"),
MaxResults: aws.Int32(1000),
NextToken: nextToken,
})
if err != nil {
log.Printf("SQSHandler: encountered error while retrieving client queues to clean up: %v\n", err)
// client queues will be cleaned up the next time the cleanup operation is triggered automatically
break
}
queueURLsList = append(queueURLsList, res.QueueUrls...)
if res.NextToken == nil {
break
} else {
nextToken = res.NextToken
}
}
numDeleted := 0
cleanupCutoff := time.Now().Add(cleanupThreshold)
for _, queueURL := range queueURLsList {
if !strings.Contains(queueURL, "snowflake-client-") {
continue
}
res, err := r.SQSClient.GetQueueAttributes(ctx, &sqs.GetQueueAttributesInput{
QueueUrl: aws.String(queueURL),
AttributeNames: []types.QueueAttributeName{types.QueueAttributeNameLastModifiedTimestamp},
})
if err != nil {
// According to the AWS SQS docs, the deletion process for a queue can take up to 60 seconds. So the queue
// can be in the process of being deleted, but will still be returned by the ListQueues operation, but
// fail when we try to GetQueueAttributes for the queue
log.Printf("SQSHandler: encountered error while getting attribute of client queue %s. queue may already be deleted.\n", queueURL)
continue
}
lastModifiedInt64, err := strconv.ParseInt(res.Attributes[string(types.QueueAttributeNameLastModifiedTimestamp)], 10, 64)
if err != nil {
log.Printf("SQSHandler: encountered invalid lastModifiedTimestamp value from client queue %s: %v\n", queueURL, err)
continue
}
lastModified := time.Unix(lastModifiedInt64, 0)
if lastModified.Before(cleanupCutoff) {
_, err := r.SQSClient.DeleteQueue(ctx, &sqs.DeleteQueueInput{
QueueUrl: aws.String(queueURL),
})
if err != nil {
log.Printf("SQSHandler: encountered error when deleting client queue %s: %v\n", queueURL, err)
continue
} else {
numDeleted += 1
}
}
}
}
}
}
func (r *sqsHandler) handleMessage(mainCtx context.Context, message *types.Message) {
var encPollReq []byte
var response []byte
var err error
ctx, cancel := context.WithTimeout(mainCtx, ClientTimeout*time.Second)
defer cancel()
clientID := message.MessageAttributes["ClientID"].StringValue
if clientID == nil {
log.Println("SQSHandler: got SDP offer in SQS message with no client ID. ignoring this message.")
return
}
res, err := r.SQSClient.CreateQueue(ctx, &sqs.CreateQueueInput{
QueueName: aws.String("snowflake-client-" + *clientID),
})
if err != nil {
log.Printf("SQSHandler: error encountered when creating answer queue for client %s: %v\n", *clientID, err)
return
}
answerSQSURL := res.QueueUrl
encPollReq = []byte(*message.Body)
arg := messages.Arg{
Body: encPollReq,
RemoteAddr: "",
RendezvousMethod: messages.RendezvousSqs,
Context: ctx,
}
err = r.IPC.ClientOffers(arg, &response)
if err != nil {
log.Printf("SQSHandler: error encountered when handling message: %v\n", err)
return
}
r.SQSClient.SendMessage(ctx, &sqs.SendMessageInput{
QueueUrl: answerSQSURL,
MessageBody: aws.String(string(response)),
})
}
func (r *sqsHandler) deleteMessage(context context.Context, message *types.Message) {
r.SQSClient.DeleteMessage(context, &sqs.DeleteMessageInput{
QueueUrl: r.SQSQueueURL,
ReceiptHandle: message.ReceiptHandle,
})
}
func newSQSHandler(context context.Context, client sqsclient.SQSClient, sqsQueueName string, region string, i *IPC) (*sqsHandler, error) {
// Creates the queue if a queue with the same name doesn't exist. If a queue with the same name and attributes
// already exists, then nothing will happen. If a queue with the same name, but different attributes exists, then
// an error will be returned
res, err := client.CreateQueue(context, &sqs.CreateQueueInput{
QueueName: aws.String(sqsQueueName),
Attributes: map[string]string{
"MessageRetentionPeriod": strconv.FormatInt(int64((5 * time.Minute).Seconds()), 10),
},
})
if err != nil {
return nil, err
}
return &sqsHandler{
SQSClient: client,
SQSQueueURL: res.QueueUrl,
IPC: i,
cleanupInterval: time.Second * 30,
}, nil
}
func (r *sqsHandler) PollAndHandleMessages(ctx context.Context) {
log.Println("SQSHandler: Starting to poll for messages at: " + *r.SQSQueueURL)
messagesChn := make(chan *types.Message, 20)
go r.pollMessages(ctx, messagesChn)
go r.cleanupClientQueues(ctx)
for message := range messagesChn {
select {
case <-ctx.Done():
// if context is cancelled
return
default:
go func(msg *types.Message) {
r.handleMessage(ctx, msg)
r.deleteMessage(ctx, msg)
}(message)
}
}
}

307
broker/sqs_test.go Normal file

@ -0,0 +1,307 @@
package main
import (
"bytes"
"context"
"errors"
"log"
"strconv"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/sqs"
"github.com/aws/aws-sdk-go-v2/service/sqs/types"
"github.com/golang/mock/gomock"
. "github.com/smartystreets/goconvey/convey"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/sqsclient"
)
func TestSQS(t *testing.T) {
Convey("Context", t, func() {
buf := new(bytes.Buffer)
ipcCtx := NewBrokerContext(log.New(buf, "", 0), "")
i := &IPC{ipcCtx}
Convey("Responds to SQS client offers...", func() {
ctrl := gomock.NewController(t)
mockSQSClient := sqsclient.NewMockSQSClient(ctrl)
brokerSQSQueueName := "example-name"
responseQueueURL := aws.String("https://sqs.us-east-1.amazonaws.com/testing")
runSQSHandler := func(sqsHandlerContext context.Context) {
mockSQSClient.EXPECT().CreateQueue(sqsHandlerContext, &sqs.CreateQueueInput{
QueueName: aws.String(brokerSQSQueueName),
Attributes: map[string]string{
"MessageRetentionPeriod": strconv.FormatInt(int64((5 * time.Minute).Seconds()), 10),
},
}).Return(&sqs.CreateQueueOutput{
QueueUrl: responseQueueURL,
}, nil).Times(1)
sqsHandler, err := newSQSHandler(sqsHandlerContext, mockSQSClient, brokerSQSQueueName, "example-region", i)
So(err, ShouldBeNil)
go sqsHandler.PollAndHandleMessages(sqsHandlerContext)
}
messageBody := aws.String("1.0\n{\"offer\": \"fake\", \"nat\": \"unknown\"}")
receiptHandle := "fake-receipt-handle"
sqsReceiveMessageInput := sqs.ReceiveMessageInput{
QueueUrl: responseQueueURL,
MaxNumberOfMessages: 10,
WaitTimeSeconds: 15,
MessageAttributeNames: []string{
string(types.QueueAttributeNameAll),
},
}
sqsDeleteMessageInput := sqs.DeleteMessageInput{
QueueUrl: responseQueueURL,
ReceiptHandle: &receiptHandle,
}
Convey("by ignoring it if no client id specified", func(c C) {
sqsHandlerContext, sqsCancelFunc := context.WithCancel(context.Background())
mockSQSClient.EXPECT().ReceiveMessage(sqsHandlerContext, &sqsReceiveMessageInput).MinTimes(1).DoAndReturn(
func(ctx context.Context, input *sqs.ReceiveMessageInput, optFns ...func(*sqs.Options)) (*sqs.ReceiveMessageOutput, error) {
return &sqs.ReceiveMessageOutput{
Messages: []types.Message{
{
Body: messageBody,
ReceiptHandle: &receiptHandle,
},
},
}, nil
},
)
mockSQSClient.EXPECT().DeleteMessage(sqsHandlerContext, &sqsDeleteMessageInput).MinTimes(1).Do(
func(ctx context.Context, input *sqs.DeleteMessageInput, optFns ...func(*sqs.Options)) {
sqsCancelFunc()
},
)
// We expect no queues to be created
mockSQSClient.EXPECT().CreateQueue(gomock.Any(), gomock.Any()).Times(0)
runSQSHandler(sqsHandlerContext)
<-sqsHandlerContext.Done()
})
Convey("by doing nothing if an error occurs upon receipt of the message", func(c C) {
sqsHandlerContext, sqsCancelFunc := context.WithCancel(context.Background())
mockSQSClient.EXPECT().ReceiveMessage(sqsHandlerContext, &sqsReceiveMessageInput).MinTimes(1).DoAndReturn(
func(ctx context.Context, input *sqs.ReceiveMessageInput, optFns ...func(*sqs.Options)) (*sqs.ReceiveMessageOutput, error) {
sqsCancelFunc()
return nil, errors.New("error")
},
)
// We expect no queues to be created or deleted
mockSQSClient.EXPECT().CreateQueue(gomock.Any(), gomock.Any()).Times(0)
mockSQSClient.EXPECT().DeleteMessage(gomock.Any(), gomock.Any()).Times(0)
runSQSHandler(sqsHandlerContext)
<-sqsHandlerContext.Done()
})
Convey("by attempting to create a new sqs queue...", func() {
clientId := "fake-id"
sqsCreateQueueInput := sqs.CreateQueueInput{
QueueName: aws.String("snowflake-client-fake-id"),
}
validMessage := &sqs.ReceiveMessageOutput{
Messages: []types.Message{
{
Body: messageBody,
MessageAttributes: map[string]types.MessageAttributeValue{
"ClientID": {StringValue: &clientId},
},
ReceiptHandle: &receiptHandle,
},
},
}
Convey("and does not attempt to send a message via SQS if queue creation fails.", func(c C) {
sqsHandlerContext, sqsCancelFunc := context.WithCancel(context.Background())
mockSQSClient.EXPECT().ReceiveMessage(sqsHandlerContext, &sqsReceiveMessageInput).AnyTimes().DoAndReturn(
func(ctx context.Context, input *sqs.ReceiveMessageInput, optFns ...func(*sqs.Options)) (*sqs.ReceiveMessageOutput, error) {
sqsCancelFunc()
return validMessage, nil
})
mockSQSClient.EXPECT().CreateQueue(sqsHandlerContext, &sqsCreateQueueInput).Return(nil, errors.New("error")).AnyTimes()
mockSQSClient.EXPECT().DeleteMessage(sqsHandlerContext, &sqsDeleteMessageInput).AnyTimes()
runSQSHandler(sqsHandlerContext)
<-sqsHandlerContext.Done()
})
Convey("and responds with a proxy answer if available.", func(c C) {
sqsHandlerContext, sqsCancelFunc := context.WithCancel(context.Background())
var numTimes atomic.Uint32
mockSQSClient.EXPECT().ReceiveMessage(gomock.Any(), &sqsReceiveMessageInput).AnyTimes().DoAndReturn(
func(ctx context.Context, input *sqs.ReceiveMessageInput, optFns ...func(*sqs.Options)) (*sqs.ReceiveMessageOutput, error) {
n := numTimes.Add(1)
if n == 1 {
snowflake := ipcCtx.AddSnowflake("fake", "", NATUnrestricted, 0)
go func(c C) {
<-snowflake.offerChannel
snowflake.answerChannel <- "fake answer"
}(c)
return validMessage, nil
}
return nil, errors.New("error")
})
mockSQSClient.EXPECT().CreateQueue(gomock.Any(), &sqsCreateQueueInput).Return(&sqs.CreateQueueOutput{
QueueUrl: responseQueueURL,
}, nil).AnyTimes()
mockSQSClient.EXPECT().DeleteMessage(gomock.Any(), gomock.Any()).AnyTimes()
mockSQSClient.EXPECT().SendMessage(gomock.Any(), gomock.Any()).Times(1).DoAndReturn(
func(ctx context.Context, input *sqs.SendMessageInput, optFns ...func(*sqs.Options)) (*sqs.SendMessageOutput, error) {
c.So(input.MessageBody, ShouldEqual, aws.String("{\"answer\":\"fake answer\"}"))
// Ensure that match is correctly recorded in metrics
ipcCtx.metrics.printMetrics()
c.So(buf.String(), ShouldContainSubstring, `client-denied-count 0
client-restricted-denied-count 0
client-unrestricted-denied-count 0
client-snowflake-match-count 8
client-snowflake-timeout-count 0
client-http-count 0
client-http-ips
client-ampcache-count 0
client-ampcache-ips
client-sqs-count 8
client-sqs-ips ??=8
`)
sqsCancelFunc()
return &sqs.SendMessageOutput{}, nil
},
)
runSQSHandler(sqsHandlerContext)
<-sqsHandlerContext.Done()
})
})
})
Convey("Cleans up SQS client queues...", func() {
brokerSQSQueueName := "example-name"
responseQueueURL := aws.String("https://sqs.us-east-1.amazonaws.com/testing")
ctrl := gomock.NewController(t)
mockSQSClient := sqsclient.NewMockSQSClient(ctrl)
runSQSHandler := func(sqsHandlerContext context.Context) {
mockSQSClient.EXPECT().CreateQueue(sqsHandlerContext, &sqs.CreateQueueInput{
QueueName: aws.String(brokerSQSQueueName),
Attributes: map[string]string{
"MessageRetentionPeriod": strconv.FormatInt(int64((5 * time.Minute).Seconds()), 10),
},
}).Return(&sqs.CreateQueueOutput{
QueueUrl: responseQueueURL,
}, nil).Times(1)
mockSQSClient.EXPECT().ReceiveMessage(sqsHandlerContext, gomock.Any()).AnyTimes().Return(
&sqs.ReceiveMessageOutput{
Messages: []types.Message{},
}, nil,
)
sqsHandler, err := newSQSHandler(sqsHandlerContext, mockSQSClient, brokerSQSQueueName, "example-region", i)
So(err, ShouldBeNil)
// Set the cleanup interval to 1 ns so we can immediately test the cleanup logic
sqsHandler.cleanupInterval = time.Nanosecond
go sqsHandler.PollAndHandleMessages(sqsHandlerContext)
}
Convey("does nothing if there are no open queues.", func() {
var wg sync.WaitGroup
wg.Add(1)
sqsHandlerContext, sqsCancelFunc := context.WithCancel(context.Background())
defer wg.Wait()
mockSQSClient.EXPECT().ListQueues(sqsHandlerContext, &sqs.ListQueuesInput{
QueueNamePrefix: aws.String("snowflake-client-"),
MaxResults: aws.Int32(1000),
NextToken: nil,
}).DoAndReturn(func(ctx context.Context, input *sqs.ListQueuesInput, optFns ...func(*sqs.Options)) (*sqs.ListQueuesOutput, error) {
wg.Done()
// Cancel the handler context since we are only interested in testing one iteration of the cleanup
sqsCancelFunc()
return &sqs.ListQueuesOutput{
QueueUrls: []string{},
}, nil
})
runSQSHandler(sqsHandlerContext)
})
Convey("deletes open queue when there is one open queue.", func(c C) {
var wg sync.WaitGroup
wg.Add(1)
sqsHandlerContext, sqsCancelFunc := context.WithCancel(context.Background())
clientQueueUrl1 := "https://sqs.us-east-1.amazonaws.com/snowflake-client-1"
clientQueueUrl2 := "https://sqs.us-east-1.amazonaws.com/snowflake-client-2"
gomock.InOrder(
mockSQSClient.EXPECT().ListQueues(sqsHandlerContext, &sqs.ListQueuesInput{
QueueNamePrefix: aws.String("snowflake-client-"),
MaxResults: aws.Int32(1000),
NextToken: nil,
}).Times(1).Return(&sqs.ListQueuesOutput{
QueueUrls: []string{
clientQueueUrl1,
clientQueueUrl2,
},
}, nil),
mockSQSClient.EXPECT().ListQueues(sqsHandlerContext, &sqs.ListQueuesInput{
QueueNamePrefix: aws.String("snowflake-client-"),
MaxResults: aws.Int32(1000),
NextToken: nil,
}).Times(1).DoAndReturn(func(ctx context.Context, input *sqs.ListQueuesInput, optFns ...func(*sqs.Options)) (*sqs.ListQueuesOutput, error) {
// Executed on second iteration of cleanupClientQueues loop. This means that one full iteration has completed and we can verify the results of that iteration
wg.Done()
sqsCancelFunc()
return &sqs.ListQueuesOutput{
QueueUrls: []string{},
}, nil
}),
)
gomock.InOrder(
mockSQSClient.EXPECT().GetQueueAttributes(sqsHandlerContext, &sqs.GetQueueAttributesInput{
QueueUrl: aws.String(clientQueueUrl1),
AttributeNames: []types.QueueAttributeName{types.QueueAttributeNameLastModifiedTimestamp},
}).Times(1).Return(&sqs.GetQueueAttributesOutput{
Attributes: map[string]string{
string(types.QueueAttributeNameLastModifiedTimestamp): "0",
}}, nil),
mockSQSClient.EXPECT().GetQueueAttributes(sqsHandlerContext, &sqs.GetQueueAttributesInput{
QueueUrl: aws.String(clientQueueUrl2),
AttributeNames: []types.QueueAttributeName{types.QueueAttributeNameLastModifiedTimestamp},
}).Times(1).Return(&sqs.GetQueueAttributesOutput{
Attributes: map[string]string{
string(types.QueueAttributeNameLastModifiedTimestamp): "0",
}}, nil),
)
gomock.InOrder(
mockSQSClient.EXPECT().DeleteQueue(sqsHandlerContext, &sqs.DeleteQueueInput{
QueueUrl: aws.String(clientQueueUrl1),
}).Return(&sqs.DeleteQueueOutput{}, nil),
mockSQSClient.EXPECT().DeleteQueue(sqsHandlerContext, &sqs.DeleteQueueInput{
QueueUrl: aws.String(clientQueueUrl2),
}).Return(&sqs.DeleteQueueOutput{}, nil),
)
runSQSHandler(sqsHandlerContext)
wg.Wait()
})
})
})
}


@ -0,0 +1,2 @@
{"displayName":"flakey", "webSocketAddress":"wss://snowflake.torproject.net", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80A72"}
{"displayName":"second", "webSocketAddress":"wss://02.snowflake.torproject.net", "fingerprint":"8838024498816A039FCBBAB14E6F40A0843051FA"}


@ -1,20 +1,122 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Dependencies](#dependencies)
- [Building the Snowflake client](#building-the-snowflake-client)
- [Running the Snowflake client with Tor](#running-the-snowflake-client-with-tor)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
This is the Tor client component of Snowflake.
It is based on the [goptlib](https://gitweb.torproject.org/pluggable-transports/goptlib.git/) pluggable transports library for Tor.
### Dependencies
- Go 1.15+
- We use the [pion/webrtc](https://github.com/pion/webrtc) library for WebRTC communication with Snowflake proxies. Note: running `go get` will fetch this dependency automatically during the build process.
### Building the Snowflake client
To build the Snowflake client, make sure you are in the `client/` directory, and then run:
```
go get
go build
```
### Running the Snowflake client with Tor
The Snowflake client can be configured with SOCKS options. We have a few example `torrc` files in this directory. We recommend the following `torrc` options by default:
```
UseBridges 1
ClientTransportPlugin snowflake exec ./client -log snowflake.log
# CDN77
Bridge snowflake 192.0.2.4:80 8838024498816A039FCBBAB14E6F40A0843051FA fingerprint=8838024498816A039FCBBAB14E6F40A0843051FA url=https://1098762253.rsc.cdn77.org/ fronts=www.cdn77.com,www.phpmyadmin.net ice=stun:stun.antisip.com:3478,stun:stun.epygi.com:3478,stun:stun.uls.co.za:3478,stun:stun.voipgate.com:3478,stun:stun.mixvoip.com:3478,stun:stun.nextcloud.com:3478,stun:stun.bethesda.net:3478,stun:stun.nextcloud.com:443 utls-imitate=hellorandomizedalpn
Bridge snowflake 192.0.2.3:80 2B280B23E1107BB62ABFC40DDCC8824814F80A72 fingerprint=2B280B23E1107BB62ABFC40DDCC8824814F80A72 url=https://1098762253.rsc.cdn77.org/ fronts=www.cdn77.com,www.phpmyadmin.net ice=stun:stun.antisip.com:3478,stun:stun.epygi.com:3478,stun:stun.uls.co.za:3478,stun:stun.voipgate.com:3478,stun:stun.mixvoip.com:3478,stun:stun.nextcloud.com:3478,stun:stun.bethesda.net:3478,stun:stun.nextcloud.com:443 utls-imitate=hellorandomizedalpn
# ampcache
#Bridge snowflake 192.0.2.5:80 2B280B23E1107BB62ABFC40DDCC8824814F80A72 fingerprint=2B280B23E1107BB62ABFC40DDCC8824814F80A72 url=https://snowflake-broker.torproject.net/ ampcache=https://cdn.ampproject.org/ front=www.google.com ice=stun:stun.antisip.com:3478,stun:stun.epygi.com:3478,stun:stun.uls.co.za:3478,stun:stun.voipgate.com:3478,stun:stun.mixvoip.com:3478,stun:stun.nextcloud.com:3478,stun:stun.bethesda.net:3478,stun:stun.nextcloud.com:443 utls-imitate=hellorandomizedalpn
#Bridge snowflake 192.0.2.6:80 8838024498816A039FCBBAB14E6F40A0843051FA fingerprint=8838024498816A039FCBBAB14E6F40A0843051FA url=https://snowflake-broker.torproject.net/ ampcache=https://cdn.ampproject.org/ front=www.google.com ice=stun:stun.antisip.com:3478,stun:stun.epygi.com:3478,stun:stun.uls.co.za:3478,stun:stun.voipgate.com:3478,stun:stun.mixvoip.com:3478,stun:stun.nextcloud.com:3478,stun:stun.bethesda.net:3478,stun:stun.nextcloud.com:443 utls-imitate=hellorandomizedalpn
# sqs
#Bridge snowflake 192.0.2.5:80 2B280B23E1107BB62ABFC40DDCC8824814F80A72 fingerprint=2B280B23E1107BB62ABFC40DDCC8824814F80A72 sqsqueue=https://sqs.us-east-1.amazonaws.com/893902434899/snowflake-broker sqscreds=eyJhd3MtYWNjZXNzLWtleS1pZCI6IkFLSUE1QUlGNFdKSlhTN1lIRUczIiwiYXdzLXNlY3JldC1rZXkiOiI3U0RNc0pBNHM1RitXZWJ1L3pMOHZrMFFXV0lsa1c2Y1dOZlVsQ0tRIn0= ice=stun:stun.antisip.com:3478,stun:stun.epygi.com:3478,stun:stun.uls.co.za:3478,stun:stun.voipgate.com:3478,stun:stun.nextcloud.com:3478,stun:stun.bethesda.net:3478,stun:stun.nextcloud.com:443 utls-imitate=hellorandomizedalpn
#Bridge snowflake 192.0.2.6:80 8838024498816A039FCBBAB14E6F40A0843051FA fingerprint=8838024498816A039FCBBAB14E6F40A0843051FA sqsqueue=https://sqs.us-east-1.amazonaws.com/893902434899/snowflake-broker sqscreds=eyJhd3MtYWNjZXNzLWtleS1pZCI6IkFLSUE1QUlGNFdKSlhTN1lIRUczIiwiYXdzLXNlY3JldC1rZXkiOiI3U0RNc0pBNHM1RitXZWJ1L3pMOHZrMFFXV0lsa1c2Y1dOZlVsQ0tRIn0= ice=stun:stun.antisip.com:3478,stun:stun.epygi.com:3478,stun:stun.uls.co.za:3478,stun:stun.voipgate.com:3478,stun:stun.nextcloud.com:3478,stun:stun.bethesda.net:3478,stun:stun.nextcloud.com:443 utls-imitate=hellorandomizedalpn
```
`fingerprint=` is the fingerprint of the bridge that the client will ultimately be connecting to.
`url=` is the URL of a broker instance. If you would like to try out Snowflake with your own broker, simply provide the URL of your broker instance with this option.
`fronts=` is an optional, comma-separated list of front domains for the broker request.
`ice=` is a comma-separated list of ICE servers. These must be STUN (over UDP) servers with the form stun:<var>host</var>[:<var>port</var>]. We recommend using servers that have implemented NAT discovery. See our wiki page on [NAT traversal](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/wikis/NAT-matching) for more information.
`utls-imitate=` instructs the client to use TLS fingerprinting resistance when rendezvousing with the broker.
To bootstrap Tor, run:
```
tor -f torrc
```
This should start the client plugin, bootstrapping to 100% using WebRTC.
### Registration methods
The Snowflake client supports a few different ways of communicating with the broker.
This initial step is sometimes called rendezvous.
#### Domain fronting HTTPS
For domain fronting rendezvous, use the `-url` and `-front` command-line options together.
[Domain fronting](https://www.bamsoftware.com/papers/fronting/)
hides the externally visible domain name from an external observer,
making it appear that the Snowflake client is communicating with some server
other than the Snowflake broker.
* `-url` is the HTTPS URL of a forwarder to the broker, on some service that supports domain fronting, such as a CDN.
* `-front` is the domain name to show externally. It must be another domain on the same service.
Example:
```
-url https://snowflake-broker.torproject.net.global.prod.fastly.net/ \
-front cdn.sstatic.net \
```
#### AMP cache
For AMP cache rendezvous, use the `-url`, `-ampcache`, and `-front` command-line options together.
[AMP](https://amp.dev/documentation/) is a standard for web pages for mobile computers.
An [AMP cache](https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/how_amp_pages_are_cached/)
is a cache and proxy specialized for AMP pages.
The Snowflake broker has the ability to make its client registration responses look like AMP pages,
so it can be accessed through an AMP cache.
When you use AMP cache rendezvous, it appears to an observer that the Snowflake client
is accessing an AMP cache, or some other domain operated by the same organization.
You still need to use the `-front` command-line option, because the
[format of AMP cache URLs](https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/)
would otherwise reveal the domain name of the broker.
There is only one AMP cache that works with this option,
the Google AMP cache at https://cdn.ampproject.org/.
* `-url` is the HTTPS URL of the broker.
* `-ampcache` is `https://cdn.ampproject.org/`.
* `-front` is any Google domain, such as `www.google.com`.
Example:
```
-url https://snowflake-broker.torproject.net/ \
-ampcache https://cdn.ampproject.org/ \
-front www.google.com \
```
#### Direct access
It is also possible to access the broker directly using HTTPS, without domain fronting,
for testing purposes. This mode is not suitable for circumvention, because the
broker is easily blocked by its address.
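For direct access, pass only the broker's own HTTPS URL with no `-front` or `-ampcache` option (URL illustrative):

```
-url https://snowflake-broker.torproject.net/ \
```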

@ -1,56 +1,24 @@
package lib
package snowflake_client
import (
"io"
"net"
)
type Connector interface {
Connect() error
}
type Resetter interface {
Reset()
WaitForReset()
}
// Interface for a single remote WebRTC peer.
// In the Client context, "Snowflake" refers to the remote browser proxy.
type Snowflake interface {
io.ReadWriteCloser
Resetter
Connector
}
// Interface for catching Snowflakes. (aka the remote dialer)
// Tongue is an interface for catching Snowflakes. (aka the remote dialer)
type Tongue interface {
Catch() (Snowflake, error)
// Catch makes a connection to a new snowflake.
Catch() (*WebRTCPeer, error)
// GetMax returns the maximum number of snowflakes a client can have.
GetMax() int
}
// Interface for collecting some number of Snowflakes, for passing along
// ultimately to the SOCKS handler.
// SnowflakeCollector is an interface for managing a client's collection of snowflakes.
type SnowflakeCollector interface {
// Collect adds a snowflake to the collection.
// The implementation of Collect should decide how to connect to and maintain
// the connection to the WebRTCPeer.
Collect() (*WebRTCPeer, error)
// Add a Snowflake to the collection.
// Implementation should decide how to connect and maintain the webRTCConn.
Collect() (Snowflake, error)
// Pop removes and returns the most available snowflake from the collection.
Pop() *WebRTCPeer
// Remove and return the most available Snowflake from the collection.
Pop() Snowflake
// Signal when the collector has stopped collecting.
// Melted returns a channel that will signal when the collector has stopped.
Melted() <-chan struct{}
}
// Interface to adapt to goptlib's SocksConn struct.
type SocksConnector interface {
Grant(*net.TCPAddr) error
Reject() error
net.Conn
}
// Interface for the Snowflake's transport. (Typically just webrtc.DataChannel)
type SnowflakeDataChannel interface {
io.Closer
Send([]byte)
}
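The Tongue contract in the diff above (Catch dials a new snowflake, GetMax caps how many a client may hold) can be illustrated with a toy, self-contained sketch. The types here are simplified stand-ins invented for illustration, not the real `WebRTCPeer`:

```go
package main

import (
	"errors"
	"fmt"
)

// toyPeer is a hypothetical stand-in for WebRTCPeer.
type toyPeer struct{ id int }

// toyTongue "catches" peers up to a fixed maximum, mirroring the
// Catch/GetMax contract of the Tongue interface.
type toyTongue struct{ max, dialed int }

// Catch makes a connection to a new (toy) snowflake, failing once
// the maximum has been reached.
func (t *toyTongue) Catch() (*toyPeer, error) {
	if t.dialed >= t.max {
		return nil, errors.New("at capacity")
	}
	t.dialed++
	return &toyPeer{id: t.dialed}, nil
}

// GetMax returns the maximum number of snowflakes a client can have.
func (t *toyTongue) GetMax() int { return t.max }

func main() {
	t := &toyTongue{max: 2}
	for {
		p, err := t.Catch()
		if err != nil {
			fmt.Println("stopped:", err)
			break
		}
		fmt.Println("caught peer", p.id)
	}
}
```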

@ -1,57 +1,26 @@
package lib
package snowflake_client
import (
"bytes"
"fmt"
"io/ioutil"
"net"
"net/http"
"testing"
"time"
"github.com/keroserene/go-webrtc"
. "github.com/smartystreets/goconvey/convey"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/event"
)
type MockDataChannel struct {
destination bytes.Buffer
done chan bool
type FakeDialer struct {
max int
}
func (m *MockDataChannel) Send(data []byte) {
m.destination.Write(data)
m.done <- true
}
func (*MockDataChannel) Close() error { return nil }
type MockResponse struct{}
func (m *MockResponse) Read(p []byte) (int, error) {
p = []byte(`{"type":"answer","sdp":"fake"}`)
return 0, nil
}
func (m *MockResponse) Close() error { return nil }
type MockTransport struct {
statusOverride int
body []byte
}
// Just returns a response with fake SDP answer.
func (m *MockTransport) RoundTrip(req *http.Request) (*http.Response, error) {
s := ioutil.NopCloser(bytes.NewReader(m.body))
r := &http.Response{
StatusCode: m.statusOverride,
Body: s,
}
return r, nil
}
type FakeDialer struct{}
func (w FakeDialer) Catch() (Snowflake, error) {
func (w FakeDialer) Catch() (*WebRTCPeer, error) {
fmt.Println("Caught a dummy snowflake.")
return &WebRTCPeer{}, nil
return &WebRTCPeer{closed: make(chan struct{})}, nil
}
func (w FakeDialer) GetMax() int {
return w.max
}
type FakeSocksConn struct {
@ -65,29 +34,23 @@ func (f FakeSocksConn) Reject() error {
}
func (f FakeSocksConn) Grant(addr *net.TCPAddr) error { return nil }
type FakePeers struct{ toRelease *WebRTCPeer }
func (f FakePeers) Collect() (Snowflake, error) { return &WebRTCPeer{}, nil }
func (f FakePeers) Pop() Snowflake { return nil }
func (f FakePeers) Melted() <-chan struct{} { return nil }
func TestSnowflakeClient(t *testing.T) {
Convey("Peers", t, func() {
Convey("Can construct", func() {
p := NewPeers(1)
So(p.capacity, ShouldEqual, 1)
d := &FakeDialer{max: 1}
p, _ := NewPeers(d)
So(p.Tongue.GetMax(), ShouldEqual, 1)
So(p.snowflakeChan, ShouldNotBeNil)
So(cap(p.snowflakeChan), ShouldEqual, 1)
})
Convey("Collecting a Snowflake requires a Tongue.", func() {
p := NewPeers(1)
_, err := p.Collect()
p, err := NewPeers(nil)
So(err, ShouldNotBeNil)
So(p.Count(), ShouldEqual, 0)
// Set the dialer so that collection is possible.
p.Tongue = FakeDialer{}
d := &FakeDialer{max: 1}
p, err = NewPeers(d)
_, err = p.Collect()
So(err, ShouldBeNil)
So(p.Count(), ShouldEqual, 1)
@ -97,8 +60,7 @@ func TestSnowflakeClient(t *testing.T) {
Convey("Collection continues until capacity.", func() {
c := 5
p := NewPeers(c)
p.Tongue = FakeDialer{}
p, _ := NewPeers(FakeDialer{max: c})
// Fill up to capacity.
for i := 0; i < c; i++ {
fmt.Println("Adding snowflake ", i)
@ -112,7 +74,7 @@ func TestSnowflakeClient(t *testing.T) {
So(err, ShouldNotBeNil)
So(p.Count(), ShouldEqual, c)
// But popping and closing allows it to continue.
// But popping allows it to continue.
s := p.Pop()
s.Close()
So(s, ShouldNotBeNil)
@ -124,8 +86,7 @@ func TestSnowflakeClient(t *testing.T) {
})
Convey("Count correctly purges peers marked for deletion.", func() {
p := NewPeers(4)
p.Tongue = FakeDialer{}
p, _ := NewPeers(FakeDialer{max: 5})
p.Collect()
p.Collect()
p.Collect()
@ -141,9 +102,9 @@ func TestSnowflakeClient(t *testing.T) {
Convey("End Closes all peers.", func() {
cnt := 5
p := NewPeers(cnt)
p, _ := NewPeers(FakeDialer{max: cnt})
for i := 0; i < cnt; i++ {
p.activePeers.PushBack(&WebRTCPeer{})
p.activePeers.PushBack(&WebRTCPeer{closed: make(chan struct{})})
}
So(p.Count(), ShouldEqual, cnt)
p.End()
@ -152,8 +113,7 @@ func TestSnowflakeClient(t *testing.T) {
})
Convey("Pop skips over closed peers.", func() {
p := NewPeers(4)
p.Tongue = FakeDialer{}
p, _ := NewPeers(FakeDialer{max: 4})
wc1, _ := p.Collect()
wc2, _ := p.Collect()
wc3, _ := p.Collect()
@ -171,163 +131,91 @@ func TestSnowflakeClient(t *testing.T) {
So(r, ShouldEqual, wc4)
})
})
Convey("Terminate Connect() loop", func() {
p, _ := NewPeers(FakeDialer{max: 4})
go func() {
for {
p.Collect()
select {
case <-p.Melted():
return
default:
}
}
}()
<-time.After(10 * time.Second)
Convey("Snowflake", t, func() {
SkipConvey("Handler Grants correctly", func() {
socks := &FakeSocksConn{}
snowflakes := &FakePeers{}
So(socks.rejected, ShouldEqual, false)
snowflakes.toRelease = nil
Handler(socks, snowflakes)
So(socks.rejected, ShouldEqual, true)
p.End()
<-p.Melted()
So(p.Count(), ShouldEqual, 0)
})
Convey("WebRTC Connection", func() {
c := NewWebRTCPeer(nil, nil)
So(c.buffer.Bytes(), ShouldEqual, nil)
Convey("Can construct a WebRTCConn", func() {
s := NewWebRTCPeer(nil, nil)
So(s, ShouldNotBeNil)
So(s.offerChannel, ShouldNotBeNil)
So(s.answerChannel, ShouldNotBeNil)
s.Close()
})
Convey("Write buffers when datachannel is nil", func() {
c.Write([]byte("test"))
c.transport = nil
So(c.buffer.Bytes(), ShouldResemble, []byte("test"))
})
Convey("Write sends to datachannel when not nil", func() {
mock := new(MockDataChannel)
c.transport = mock
mock.done = make(chan bool, 1)
c.Write([]byte("test"))
<-mock.done
So(c.buffer.Bytes(), ShouldEqual, nil)
So(mock.destination.Bytes(), ShouldResemble, []byte("test"))
})
Convey("Exchange SDP sets remote description", func() {
c.offerChannel = make(chan *webrtc.SessionDescription, 1)
c.answerChannel = make(chan *webrtc.SessionDescription, 1)
c.config = webrtc.NewConfiguration()
c.preparePeerConnection()
c.offerChannel <- nil
answer := webrtc.DeserializeSessionDescription(
`{"type":"answer","sdp":""}`)
c.answerChannel <- answer
c.exchangeSDP()
})
SkipConvey("Exchange SDP fails on nil answer", func() {
c.reset = make(chan struct{})
c.offerChannel = make(chan *webrtc.SessionDescription, 1)
c.answerChannel = make(chan *webrtc.SessionDescription, 1)
c.offerChannel <- nil
c.answerChannel <- nil
c.exchangeSDP()
<-c.reset
})
})
})
Convey("Dialers", t, func() {
Convey("Can construct WebRTCDialer.", func() {
broker := &BrokerChannel{Host: "test"}
d := NewWebRTCDialer(broker, nil)
broker := &BrokerChannel{}
d := NewWebRTCDialer(broker, nil, 1)
So(d, ShouldNotBeNil)
So(d.BrokerChannel, ShouldNotBeNil)
So(d.BrokerChannel.Host, ShouldEqual, "test")
})
Convey("WebRTCDialer cannot Catch a snowflake with nil broker.", func() {
d := NewWebRTCDialer(nil, nil)
conn, err := d.Catch()
So(conn, ShouldBeNil)
So(err, ShouldNotBeNil)
})
SkipConvey("WebRTCDialer can Catch a snowflake.", func() {
broker := &BrokerChannel{Host: "test"}
d := NewWebRTCDialer(broker, nil)
broker := &BrokerChannel{}
d := NewWebRTCDialer(broker, nil, 1)
conn, err := d.Catch()
So(conn, ShouldBeNil)
So(err, ShouldNotBeNil)
})
})
Convey("Rendezvous", t, func() {
webrtc.SetLoggingVerbosity(0)
transport := &MockTransport{
http.StatusOK,
[]byte(`{"type":"answer","sdp":"fake"}`),
}
fakeOffer := webrtc.DeserializeSessionDescription("test")
}
Convey("Construct BrokerChannel with no front domain", func() {
b := NewBrokerChannel("test.broker", "", transport)
So(b.url, ShouldNotBeNil)
So(b.url.Path, ShouldResemble, "test.broker")
So(b.transport, ShouldNotBeNil)
})
Convey("Construct BrokerChannel *with* front domain", func() {
b := NewBrokerChannel("test.broker", "front", transport)
So(b.url, ShouldNotBeNil)
So(b.url.Path, ShouldResemble, "test.broker")
So(b.url.Host, ShouldResemble, "front")
So(b.transport, ShouldNotBeNil)
})
Convey("BrokerChannel.Negotiate responds with answer", func() {
b := NewBrokerChannel("test.broker", "", transport)
answer, err := b.Negotiate(fakeOffer)
So(err, ShouldBeNil)
So(answer, ShouldNotBeNil)
So(answer.Sdp, ShouldResemble, "fake")
})
Convey("BrokerChannel.Negotiate fails with 503", func() {
b := NewBrokerChannel("test.broker", "",
&MockTransport{http.StatusServiceUnavailable, []byte("\n")})
answer, err := b.Negotiate(fakeOffer)
So(err, ShouldNotBeNil)
So(answer, ShouldBeNil)
So(err.Error(), ShouldResemble, BrokerError503)
})
Convey("BrokerChannel.Negotiate fails with 400", func() {
b := NewBrokerChannel("test.broker", "",
&MockTransport{http.StatusBadRequest, []byte("\n")})
answer, err := b.Negotiate(fakeOffer)
So(err, ShouldNotBeNil)
So(answer, ShouldBeNil)
So(err.Error(), ShouldResemble, BrokerError400)
})
Convey("BrokerChannel.Negotiate fails with large read", func() {
b := NewBrokerChannel("test.broker", "",
&MockTransport{http.StatusOK, make([]byte, 100001, 100001)})
answer, err := b.Negotiate(fakeOffer)
So(err, ShouldNotBeNil)
So(answer, ShouldBeNil)
So(err.Error(), ShouldResemble, "unexpected EOF")
})
Convey("BrokerChannel.Negotiate fails with unexpected error", func() {
b := NewBrokerChannel("test.broker", "",
&MockTransport{123, []byte("")})
answer, err := b.Negotiate(fakeOffer)
So(err, ShouldNotBeNil)
So(answer, ShouldBeNil)
So(err.Error(), ShouldResemble, BrokerErrorUnexpected)
func TestWebRTCPeer(t *testing.T) {
Convey("WebRTCPeer", t, func(c C) {
p := &WebRTCPeer{closed: make(chan struct{}),
eventsLogger: event.NewSnowflakeEventDispatcher()}
Convey("checks for staleness", func() {
go p.checkForStaleness(time.Second)
<-time.After(2 * time.Second)
So(p.Closed(), ShouldEqual, true)
})
})
}
func TestICEServerParser(t *testing.T) {
Convey("Test parsing of ICE servers", t, func() {
for _, test := range []struct {
input []string
urls [][]string
length int
}{
{
[]string{"stun:stun.l.google.com:19302", "stun:stun.ekiga.net"},
[][]string{[]string{"stun:stun.l.google.com:19302"}, []string{"stun:stun.ekiga.net:3478"}},
2,
},
{
[]string{"stun:stun1.l.google.com:19302", "stun.ekiga.net", "stun:stun.example.com:1234/path?query",
"https://example.com", "turn:relay.metered.ca:80?transport=udp"},
[][]string{[]string{"stun:stun1.l.google.com:19302"}},
1,
},
} {
servers := parseIceServers(test.input)
if test.urls == nil {
So(servers, ShouldBeNil)
} else {
So(servers, ShouldNotBeNil)
}
So(len(servers), ShouldEqual, test.length)
for _, server := range servers {
So(test.urls, ShouldContain, server.URLs)
}
}
})
}

@ -1,13 +1,14 @@
package lib
package snowflake_client
import (
"container/list"
"errors"
"fmt"
"log"
"sync"
)
// Container which keeps track of multiple WebRTC remote peers.
// Peers is a container that keeps track of multiple WebRTC remote peers.
// Implements |SnowflakeCollector|.
//
// Maintaining a set of pre-connected Peers with fresh but inactive datachannels
@ -20,38 +21,51 @@ import (
// version of Snowflake)
type Peers struct {
Tongue
BytesLogger
bytesLogger bytesLogger
snowflakeChan chan Snowflake
snowflakeChan chan *WebRTCPeer
activePeers *list.List
capacity int
melt chan struct{}
collectLock sync.Mutex
closeOnce sync.Once
}
// Construct a fresh container of remote peers.
func NewPeers(max int) *Peers {
p := &Peers{capacity: max}
// NewPeers constructs a fresh container of remote peers.
func NewPeers(tongue Tongue) (*Peers, error) {
p := &Peers{}
// Use buffered go channel to pass snowflakes onwards to the SOCKS handler.
p.snowflakeChan = make(chan Snowflake, max)
if tongue == nil {
return nil, errors.New("missing Tongue to catch Snowflakes with")
}
p.snowflakeChan = make(chan *WebRTCPeer, tongue.GetMax())
p.activePeers = list.New()
p.melt = make(chan struct{}, 1)
return p
p.melt = make(chan struct{})
p.Tongue = tongue
return p, nil
}
// As part of |SnowflakeCollector| interface.
func (p *Peers) Collect() (Snowflake, error) {
// Collect connects to and adds a new remote peer as part of |SnowflakeCollector| interface.
func (p *Peers) Collect() (*WebRTCPeer, error) {
// Engage the Snowflake Catching interface, which must be available.
p.collectLock.Lock()
defer p.collectLock.Unlock()
select {
case <-p.melt:
return nil, fmt.Errorf("Snowflakes have melted")
default:
}
if nil == p.Tongue {
return nil, errors.New("missing Tongue to catch Snowflakes with")
}
cnt := p.Count()
s := fmt.Sprintf("Currently at [%d/%d]", cnt, p.capacity)
if cnt >= p.capacity {
s := fmt.Sprintf("At capacity [%d/%d]", cnt, p.capacity)
return nil, errors.New(s)
capacity := p.Tongue.GetMax()
s := fmt.Sprintf("Currently at [%d/%d]", cnt, capacity)
if cnt >= capacity {
return nil, fmt.Errorf("At capacity [%d/%d]", cnt, capacity)
}
log.Println("WebRTC: Collecting a new Snowflake.", s)
// Engage the Snowflake Catching interface, which must be available.
if nil == p.Tongue {
return nil, errors.New("Missing Tongue to catch Snowflakes with.")
}
// BUG: some broker conflict here.
connection, err := p.Tongue.Catch()
if nil != err {
@ -63,32 +77,30 @@ func (p *Peers) Collect() (Snowflake, error) {
return connection, nil
}
// As part of |SnowflakeCollector| interface.
func (p *Peers) Pop() Snowflake {
// Blocks until an available, valid snowflake appears.
var snowflake Snowflake
var ok bool
for nil == snowflake {
snowflake, ok = <-p.snowflakeChan
conn := snowflake.(*WebRTCPeer)
// Pop blocks until an available, valid snowflake appears.
// Pop will return nil after End has been called.
func (p *Peers) Pop() *WebRTCPeer {
for {
snowflake, ok := <-p.snowflakeChan
if !ok {
return nil
}
if conn.closed {
snowflake = nil
if snowflake.Closed() {
continue
}
// Set to use the same rate-limited traffic logger to keep consistency.
snowflake.bytesLogger = p.bytesLogger
return snowflake
}
// Set to use the same rate-limited traffic logger to keep consistency.
snowflake.(*WebRTCPeer).BytesLogger = p.BytesLogger
return snowflake
}
// As part of |SnowflakeCollector| interface.
// Melted returns a channel that will close when peers stop being collected.
// Melted is a necessary part of |SnowflakeCollector| interface.
func (p *Peers) Melted() <-chan struct{} {
return p.melt
}
// Returns total available Snowflakes (including the active one)
// Count returns the total available Snowflakes (including the active ones)
// The count only reduces when connections themselves close, rather than when
// they are popped.
func (p *Peers) Count() int {
@ -101,24 +113,29 @@ func (p *Peers) purgeClosedPeers() {
next := e.Next()
conn := e.Value.(*WebRTCPeer)
// Purge those marked for deletion.
if conn.closed {
if conn.Closed() {
p.activePeers.Remove(e)
}
e = next
}
}
// Close all Peers contained here.
// End closes all active connections to Peers contained here, and stops the
// collection of future Peers.
func (p *Peers) End() {
close(p.snowflakeChan)
p.melt <- struct{}{}
cnt := p.Count()
for e := p.activePeers.Front(); e != nil; {
next := e.Next()
conn := e.Value.(*WebRTCPeer)
conn.Close()
p.activePeers.Remove(e)
e = next
}
log.Println("WebRTC: melted all", cnt, "snowflakes.")
p.closeOnce.Do(func() {
close(p.melt)
p.collectLock.Lock()
defer p.collectLock.Unlock()
close(p.snowflakeChan)
cnt := p.Count()
for e := p.activePeers.Front(); e != nil; {
next := e.Next()
conn := e.Value.(*WebRTCPeer)
conn.Close()
p.activePeers.Remove(e)
e = next
}
log.Printf("WebRTC: melted all %d snowflakes.", cnt)
})
}
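The new `Peers.Pop` loop above drains a buffered channel, skips peers that closed while queued, and returns nil once `End` has closed the channel. A minimal sketch of that pattern, using a hypothetical simplified peer type rather than the real `WebRTCPeer`:

```go
package main

import "fmt"

// toyPeer is a hypothetical stand-in for WebRTCPeer; Closed reports
// whether the peer's connection has already gone away.
type toyPeer struct {
	id     int
	closed bool
}

func (p *toyPeer) Closed() bool { return p.closed }

// pop mirrors Peers.Pop: drain the buffered channel, skip peers that
// closed while queued, and return nil once the channel is closed.
func pop(ch chan *toyPeer) *toyPeer {
	for {
		p, ok := <-ch
		if !ok {
			return nil
		}
		if p.Closed() {
			continue
		}
		return p
	}
}

func main() {
	ch := make(chan *toyPeer, 3)
	ch <- &toyPeer{id: 1, closed: true} // stale: skipped by pop
	ch <- &toyPeer{id: 2}
	close(ch) // End: no more peers will be collected
	if p := pop(ch); p != nil {
		fmt.Println("popped peer", p.id)
	}
	fmt.Println("after close:", pop(ch))
}
```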

@ -1,151 +1,323 @@
// WebRTC rendezvous requires the exchange of SessionDescriptions between
// peers in order to establish a PeerConnection.
//
// This file contains the one method currently available to Snowflake:
//
// - Domain-fronted HTTP signaling. The Broker automatically exchange offers
// and answers between this client and some remote WebRTC proxy.
package lib
package snowflake_client
import (
"bytes"
"crypto/tls"
"errors"
"io"
"io/ioutil"
"fmt"
"log"
"net/http"
"net/url"
"sync"
"sync/atomic"
"time"
"github.com/keroserene/go-webrtc"
"github.com/pion/webrtc/v4"
utls "github.com/refraction-networking/utls"
utlsutil "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil/utls"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/certs"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/event"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/messages"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/nat"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/util"
)
const (
BrokerError503 string = "No snowflake proxies currently available."
BrokerError400 string = "You sent an invalid offer in the request."
BrokerErrorUnexpected string = "Unexpected error, no answer."
readLimit = 100000 //Maximum number of bytes to be read from an HTTP response
brokerErrorUnexpected string = "Unexpected error, no answer."
rendezvousErrorMsg string = "One of SQS, AmpCache, or Domain Fronting rendezvous methods must be used."
readLimit = 100000 //Maximum number of bytes to be read from an HTTP response
)
// Signalling Channel to the Broker.
// RendezvousMethod represents a way of communicating with the broker: sending
// an encoded client poll request (SDP offer) and receiving an encoded client
// poll response (SDP answer) in return. RendezvousMethod is used by
// BrokerChannel, which is in charge of encoding and decoding, and all other
// tasks that are independent of the rendezvous method.
type RendezvousMethod interface {
Exchange([]byte) ([]byte, error)
}
// BrokerChannel uses a RendezvousMethod to communicate with the Snowflake broker.
// The BrokerChannel is responsible for encoding and decoding SDP offers and answers;
// RendezvousMethod is responsible for the exchange of encoded information.
type BrokerChannel struct {
// The Host header to put in the HTTP request (optional and may be
// different from the host name in URL).
Host string
url *url.URL
transport http.RoundTripper // Used to make all requests.
Rendezvous RendezvousMethod
keepLocalAddresses bool
natType string
lock sync.Mutex
BridgeFingerprint string
}
// We make a copy of DefaultTransport because we want the default Dial
// and TLSHandshakeTimeout settings. But we want to disable the default
// ProxyFromEnvironment setting.
func CreateBrokerTransport() http.RoundTripper {
transport := http.DefaultTransport.(*http.Transport)
func createBrokerTransport(proxy *url.URL) http.RoundTripper {
tlsConfig := &tls.Config{
RootCAs: certs.GetRootCAs(),
}
transport := &http.Transport{TLSClientConfig: tlsConfig}
transport.Proxy = nil
if proxy != nil {
transport.Proxy = http.ProxyURL(proxy)
}
transport.ResponseHeaderTimeout = 15 * time.Second
return transport
}
// Construct a new BrokerChannel, where:
// |broker| is the full URL of the facilitating program which assigns proxies
// to clients, and |front| is the option fronting domain.
func NewBrokerChannel(broker string, front string, transport http.RoundTripper) *BrokerChannel {
targetURL, err := url.Parse(broker)
if nil != err {
return nil
}
log.Println("Rendezvous using Broker at:", broker)
bc := new(BrokerChannel)
bc.url = targetURL
if "" != front { // Optional front domain.
log.Println("Domain fronting using:", front)
bc.Host = bc.url.Host
bc.url.Host = front
func newBrokerChannelFromConfig(config ClientConfig) (*BrokerChannel, error) {
log.Println("Rendezvous using Broker at:", config.BrokerURL)
if len(config.FrontDomains) != 0 {
log.Printf("Domain fronting using a randomly selected domain from: %v", config.FrontDomains)
}
bc.transport = transport
return bc
}
brokerTransport := createBrokerTransport(config.CommunicationProxy)
func limitedRead(r io.Reader, limit int64) ([]byte, error) {
p, err := ioutil.ReadAll(&io.LimitedReader{R: r, N: limit + 1})
if err != nil {
return p, err
} else if int64(len(p)) == limit+1 {
return p[0:limit], io.ErrUnexpectedEOF
}
return p, err
}
// Roundtrip HTTP POST using WebRTC SessionDescriptions.
//
// Send an SDP offer to the broker, which assigns a proxy and responds
// with an SDP answer from a designated remote WebRTC peer.
func (bc *BrokerChannel) Negotiate(offer *webrtc.SessionDescription) (
*webrtc.SessionDescription, error) {
log.Println("Negotiating via BrokerChannel...\nTarget URL: ",
bc.Host, "\nFront URL: ", bc.url.Host)
data := bytes.NewReader([]byte(offer.Serialize()))
// Suffix with broker's client registration handler.
clientURL := bc.url.ResolveReference(&url.URL{Path: "client"})
request, err := http.NewRequest("POST", clientURL.String(), data)
if nil != err {
return nil, err
}
if "" != bc.Host { // Set true host if necessary.
request.Host = bc.Host
}
resp, err := bc.transport.RoundTrip(request)
if nil != err {
return nil, err
}
defer resp.Body.Close()
log.Printf("BrokerChannel Response:\n%s\n\n", resp.Status)
switch resp.StatusCode {
case http.StatusOK:
body, err := limitedRead(resp.Body, readLimit)
if nil != err {
return nil, err
if config.UTLSClientID != "" {
utlsClientHelloID, err := utlsutil.NameToUTLSID(config.UTLSClientID)
if err != nil {
return nil, fmt.Errorf("unable to create broker channel: %w", err)
}
answer := webrtc.DeserializeSessionDescription(string(body))
return answer, nil
utlsConfig := &utls.Config{
RootCAs: certs.GetRootCAs(),
}
brokerTransport = utlsutil.NewUTLSHTTPRoundTripperWithProxy(utlsClientHelloID, utlsConfig, brokerTransport,
config.UTLSRemoveSNI, config.CommunicationProxy)
}
case http.StatusServiceUnavailable:
return nil, errors.New(BrokerError503)
case http.StatusBadRequest:
return nil, errors.New(BrokerError400)
default:
return nil, errors.New(BrokerErrorUnexpected)
var rendezvous RendezvousMethod
var err error
if config.SQSQueueURL != "" {
if config.AmpCacheURL != "" || config.BrokerURL != "" {
log.Fatalln("Multiple rendezvous methods specified. " + rendezvousErrorMsg)
}
if config.SQSCredsStr == "" {
log.Fatalln("sqscreds must be specified to use SQS rendezvous method.")
}
log.Println("Through SQS queue at:", config.SQSQueueURL)
rendezvous, err = newSQSRendezvous(config.SQSQueueURL, config.SQSCredsStr, brokerTransport)
} else if config.AmpCacheURL != "" && config.BrokerURL != "" {
log.Println("Through AMP cache at:", config.AmpCacheURL)
rendezvous, err = newAMPCacheRendezvous(
config.BrokerURL, config.AmpCacheURL, config.FrontDomains,
brokerTransport)
} else if config.BrokerURL != "" {
rendezvous, err = newHTTPRendezvous(
config.BrokerURL, config.FrontDomains, brokerTransport)
} else {
log.Fatalln("No rendezvous method was specified. " + rendezvousErrorMsg)
}
if err != nil {
return nil, err
}
return &BrokerChannel{
Rendezvous: rendezvous,
keepLocalAddresses: config.KeepLocalAddresses,
natType: nat.NATUnknown,
BridgeFingerprint: config.BridgeFingerprint,
}, nil
}
// Negotiate uses a RendezvousMethod to send the client's WebRTC SDP offer
// and receive a snowflake proxy WebRTC SDP answer in return.
func (bc *BrokerChannel) Negotiate(
offer *webrtc.SessionDescription,
natTypeToSend string,
) (
*webrtc.SessionDescription, error,
) {
encReq, err := preparePollRequest(offer, natTypeToSend, bc.BridgeFingerprint)
if err != nil {
return nil, err
}
// Do the exchange using our RendezvousMethod.
encResp, err := bc.Rendezvous.Exchange(encReq)
if err != nil {
return nil, err
}
log.Printf("Received answer: %s", string(encResp))
// Decode the client poll response.
resp, err := messages.DecodeClientPollResponse(encResp)
if err != nil {
return nil, err
}
if resp.Error != "" {
return nil, errors.New(resp.Error)
}
return util.DeserializeSessionDescription(resp.Answer)
}
// Pure function
func preparePollRequest(
offer *webrtc.SessionDescription,
natType string,
bridgeFingerprint string,
) (encReq []byte, err error) {
offerSDP, err := util.SerializeSessionDescription(offer)
if err != nil {
return nil, err
}
req := &messages.ClientPollRequest{
Offer: offerSDP,
NAT: natType,
Fingerprint: bridgeFingerprint,
}
encReq, err = req.EncodeClientPollRequest()
return
}
// SetNATType sets the NAT type of the client so we can send it to the WebRTC broker.
func (bc *BrokerChannel) SetNATType(NATType string) {
bc.lock.Lock()
bc.natType = NATType
bc.lock.Unlock()
log.Printf("NAT Type: %s", NATType)
}
func (bc *BrokerChannel) GetNATType() string {
bc.lock.Lock()
defer bc.lock.Unlock()
return bc.natType
}
// All of the methods of the struct are thread-safe.
type NATPolicy struct {
assumedUnrestrictedNATAndFailedToConnect atomic.Bool
}
// When our NAT type is unknown, we want to try to connect to a
// restricted / unknown proxy initially
// to offload the unrestricted ones.
// So, instead of always sending the actual NAT type,
// we should use this function to determine the NAT type to send.
//
// This is useful when our STUN servers are blocked or don't support
// the NAT discovery feature, or if they're just slow.
func (p *NATPolicy) NATTypeToSend(actualNatType string) string {
if !p.assumedUnrestrictedNATAndFailedToConnect.Load() &&
actualNatType == nat.NATUnknown {
// If our NAT type is unknown, and we haven't failed to connect
// with a spoofed NAT type yet, then spoof a NATUnrestricted
// type.
return nat.NATUnrestricted
} else {
// In all other cases, do not spoof, and just return our actual
// NAT type (even if it is NATUnknown).
return actualNatType
}
}
// Implements the |Tongue| interface to catch snowflakes, using BrokerChannel.
// This function must be called whenever a connection with a proxy succeeds,
// because the connection outcome tells us about NAT compatibility
// between the proxy and us.
func (p *NATPolicy) Success(actualNATType, sentNATType string) {
// Yes, right now this does nothing but log.
if actualNATType != sentNATType {
log.Printf(
"Connected to a proxy by using a spoofed NAT type \"%v\"! "+
"Our actual NAT type was \"%v\"",
sentNATType,
actualNATType,
)
}
}
// This function must be called whenever a connection with a proxy fails,
// because the connection outcome tells us about NAT compatibility
// between the proxy and us.
func (p *NATPolicy) Failure(actualNATType, sentNATType string) {
if actualNATType == nat.NATUnknown && sentNATType == nat.NATUnrestricted {
log.Printf(
"Tried to connect to a restricted proxy while our NAT type "+
"is \"%v\", and failed. Let's not do that again.",
actualNATType,
)
p.assumedUnrestrictedNATAndFailedToConnect.Store(true)
}
}
// WebRTCDialer implements the |Tongue| interface to catch snowflakes, using BrokerChannel.
type WebRTCDialer struct {
*BrokerChannel
// Can be `nil`, in which case we won't apply special logic,
// and simply always send the current NAT type instead.
natPolicy *NATPolicy
webrtcConfig *webrtc.Configuration
max int
eventLogger event.SnowflakeEventReceiver
proxy *url.URL
}
func NewWebRTCDialer(
broker *BrokerChannel, iceServers IceServerList) *WebRTCDialer {
config := webrtc.NewConfiguration(iceServers...)
if nil == config {
log.Println("Unable to prepare WebRTC configuration.")
return nil
// Deprecated: Use NewWebRTCDialerWithNatPolicyAndEventsAndProxy instead
func NewWebRTCDialer(broker *BrokerChannel, iceServers []webrtc.ICEServer, max int) *WebRTCDialer {
return NewWebRTCDialerWithNatPolicyAndEventsAndProxy(
broker, nil, iceServers, max, nil, nil,
)
}
// Deprecated: Use NewWebRTCDialerWithNatPolicyAndEventsAndProxy instead
func NewWebRTCDialerWithEvents(broker *BrokerChannel, iceServers []webrtc.ICEServer, max int, eventLogger event.SnowflakeEventReceiver) *WebRTCDialer {
return NewWebRTCDialerWithNatPolicyAndEventsAndProxy(
broker, nil, iceServers, max, eventLogger, nil,
)
}
// Deprecated: Use NewWebRTCDialerWithNatPolicyAndEventsAndProxy instead
func NewWebRTCDialerWithEventsAndProxy(broker *BrokerChannel, iceServers []webrtc.ICEServer, max int,
eventLogger event.SnowflakeEventReceiver, proxy *url.URL,
) *WebRTCDialer {
return NewWebRTCDialerWithNatPolicyAndEventsAndProxy(
broker,
nil,
iceServers,
max,
eventLogger,
proxy,
)
}
// NewWebRTCDialerWithNatPolicyAndEventsAndProxy constructs a new WebRTCDialer.
func NewWebRTCDialerWithNatPolicyAndEventsAndProxy(
broker *BrokerChannel,
natPolicy *NATPolicy,
iceServers []webrtc.ICEServer,
max int,
eventLogger event.SnowflakeEventReceiver,
proxy *url.URL,
) *WebRTCDialer {
config := webrtc.Configuration{
ICEServers: iceServers,
}
return &WebRTCDialer{
BrokerChannel: broker,
webrtcConfig: config,
natPolicy: natPolicy,
webrtcConfig: &config,
max: max,
eventLogger: eventLogger,
proxy: proxy,
}
}
// Initialize a WebRTC Connection by signaling through the broker.
func (w WebRTCDialer) Catch() (Snowflake, error) {
if nil == w.BrokerChannel {
return nil, errors.New("Cannot Dial WebRTC without a BrokerChannel.")
}
// TODO: [#3] Fetch ICE server information from Broker.
// TODO: [#18] Consider TURN servers here too.
connection := NewWebRTCPeer(w.webrtcConfig, w.BrokerChannel)
err := connection.Connect()
return connection, err
// Catch initializes a WebRTC Connection by signaling through the BrokerChannel.
func (w WebRTCDialer) Catch() (*WebRTCPeer, error) {
// TODO: [#25591] Fetch ICE server information from Broker.
// TODO: [#25596] Consider TURN servers here too.
return NewWebRTCPeerWithNatPolicyAndEventsAndProxy(
w.webrtcConfig, w.BrokerChannel, w.natPolicy, w.eventLogger, w.proxy,
)
}
// GetMax returns the maximum number of snowflakes to collect.
func (w WebRTCDialer) GetMax() int {
return w.max
}
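The NAT-spoofing policy in the file above (report an unknown NAT as unrestricted until such an attempt fails, then fall back to the actual type) can be restated as a small self-contained sketch. The names and string constants here are simplified stand-ins for the real `NATPolicy` and `nat` package:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Simplified stand-ins for the nat package's type constants.
const (
	natUnknown      = "unknown"
	natUnrestricted = "unrestricted"
)

// natPolicy mirrors the NATPolicy logic: all methods are safe for
// concurrent use via the atomic flag.
type natPolicy struct {
	assumedUnrestrictedAndFailed atomic.Bool
}

// natTypeToSend spoofs an unknown NAT as unrestricted, to offload
// unrestricted proxies, until one such attempt has failed.
func (p *natPolicy) natTypeToSend(actual string) string {
	if !p.assumedUnrestrictedAndFailed.Load() && actual == natUnknown {
		return natUnrestricted
	}
	return actual
}

// failure records that a connection made with a spoofed NAT type
// failed, so future attempts send the actual type instead.
func (p *natPolicy) failure(actual, sent string) {
	if actual == natUnknown && sent == natUnrestricted {
		p.assumedUnrestrictedAndFailed.Store(true)
	}
}

func main() {
	p := &natPolicy{}
	sent := p.natTypeToSend(natUnknown)
	fmt.Println("first attempt sends:", sent)
	p.failure(natUnknown, sent) // that attempt failed
	fmt.Println("next attempt sends:", p.natTypeToSend(natUnknown))
}
```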

@ -0,0 +1,127 @@
package snowflake_client
import (
"errors"
"io"
"log"
"math/rand"
"net/http"
"net/url"
"time"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/amp"
)
// ampCacheRendezvous is a RendezvousMethod that communicates with the
// .../amp/client route of the broker, optionally over an AMP cache proxy, and
// with optional domain fronting.
type ampCacheRendezvous struct {
brokerURL *url.URL
cacheURL *url.URL // Optional AMP cache URL.
fronts []string // Optional front domains to replace url.Host in requests.
transport http.RoundTripper // Used to make all requests.
}
// newAMPCacheRendezvous creates a new ampCacheRendezvous that contacts the
// broker at the given URL, optionally proxying through an AMP cache, and with
// an optional front domain. transport is the http.RoundTripper used to make all
// requests.
func newAMPCacheRendezvous(broker, cache string, fronts []string, transport http.RoundTripper) (*ampCacheRendezvous, error) {
brokerURL, err := url.Parse(broker)
if err != nil {
return nil, err
}
var cacheURL *url.URL
if cache != "" {
var err error
cacheURL, err = url.Parse(cache)
if err != nil {
return nil, err
}
}
return &ampCacheRendezvous{
brokerURL: brokerURL,
cacheURL: cacheURL,
fronts: fronts,
transport: transport,
}, nil
}
func (r *ampCacheRendezvous) Exchange(encPollReq []byte) ([]byte, error) {
log.Println("Negotiating via AMP cache rendezvous...")
log.Println("Broker URL:", r.brokerURL)
log.Println("AMP cache URL:", r.cacheURL)
// We cannot POST a body through an AMP cache, so instead we GET and
// encode the client poll request message into the URL.
reqURL := r.brokerURL.ResolveReference(&url.URL{
Path: "amp/client/" + amp.EncodePath(encPollReq),
})
if r.cacheURL != nil {
// Rewrite reqURL to its AMP cache version.
var err error
reqURL, err = amp.CacheURL(reqURL, r.cacheURL, "c")
if err != nil {
return nil, err
}
}
req, err := http.NewRequest("GET", reqURL.String(), nil)
if err != nil {
return nil, err
}
if len(r.fronts) != 0 {
// Do domain fronting. Replace the domain in the URL with a randomly
// selected front, and store the original domain in the HTTP Host header.
rand.Seed(time.Now().UnixNano())
front := r.fronts[rand.Intn(len(r.fronts))]
log.Println("Front domain:", front)
req.Host = req.URL.Host
req.URL.Host = front
}
resp, err := r.transport.RoundTrip(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
log.Printf("AMP cache rendezvous response: %s", resp.Status)
if resp.StatusCode != http.StatusOK {
// A non-200 status indicates an error:
// * If the broker returns a page with invalid AMP, then the AMP
// cache returns a redirect that would bypass the cache.
// * If the broker returns a 5xx status, the AMP cache
// translates it to a 404.
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/#redirect-%26-error-handling
return nil, errors.New(brokerErrorUnexpected)
}
if _, err := resp.Location(); err == nil {
// The Google AMP Cache may return a "silent redirect" with
// status 200, a Location header set, and a JavaScript redirect
// in the body. The redirect points directly at the origin
// server for the request (bypassing the AMP cache). We do not
// follow redirects nor execute JavaScript, but in any case we
// cannot extract information from this response and can only
// treat it as an error.
return nil, errors.New(brokerErrorUnexpected)
}
lr := io.LimitReader(resp.Body, readLimit+1)
dec, err := amp.NewArmorDecoder(lr)
if err != nil {
return nil, err
}
encPollResp, err := io.ReadAll(dec)
if err != nil {
return nil, err
}
if lr.(*io.LimitedReader).N == 0 {
// We hit readLimit while decoding AMP armor, that's an error.
return nil, io.ErrUnexpectedEOF
}
return encPollResp, err
}

View file

@ -0,0 +1,80 @@
package snowflake_client
import (
"bytes"
"errors"
"io"
"log"
"math/rand"
"net/http"
"net/url"
"time"
)
// httpRendezvous is a RendezvousMethod that communicates with the .../client
// route of the broker over HTTP or HTTPS, with optional domain fronting.
type httpRendezvous struct {
brokerURL *url.URL
fronts []string // Optional front domain to replace url.Host in requests.
transport http.RoundTripper // Used to make all requests.
}
// newHTTPRendezvous creates a new httpRendezvous that contacts the broker at
// the given URL, with an optional front domain. transport is the
// http.RoundTripper used to make all requests.
func newHTTPRendezvous(broker string, fronts []string, transport http.RoundTripper) (*httpRendezvous, error) {
brokerURL, err := url.Parse(broker)
if err != nil {
return nil, err
}
return &httpRendezvous{
brokerURL: brokerURL,
fronts: fronts,
transport: transport,
}, nil
}
func (r *httpRendezvous) Exchange(encPollReq []byte) ([]byte, error) {
log.Println("Negotiating via HTTP rendezvous...")
log.Println("Target URL: ", r.brokerURL.Host)
// Suffix the path with the broker's client registration handler.
reqURL := r.brokerURL.ResolveReference(&url.URL{Path: "client"})
req, err := http.NewRequest("POST", reqURL.String(), bytes.NewReader(encPollReq))
if err != nil {
return nil, err
}
if len(r.fronts) != 0 {
// Do domain fronting. Replace the domain in the URL with a randomly
// selected front, and store the original domain in the HTTP Host header.
rand.Seed(time.Now().UnixNano())
front := r.fronts[rand.Intn(len(r.fronts))]
log.Println("Front URL: ", front)
req.Host = req.URL.Host
req.URL.Host = front
}
resp, err := r.transport.RoundTrip(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
log.Printf("HTTP rendezvous response: %s", resp.Status)
if resp.StatusCode != http.StatusOK {
return nil, errors.New(brokerErrorUnexpected)
}
return limitedRead(resp.Body, readLimit)
}
func limitedRead(r io.Reader, limit int64) ([]byte, error) {
p, err := io.ReadAll(&io.LimitedReader{R: r, N: limit + 1})
if err != nil {
return p, err
} else if int64(len(p)) == limit+1 {
return p[0:limit], io.ErrUnexpectedEOF
}
return p, err
}

View file

@ -0,0 +1,143 @@
package snowflake_client
import (
"context"
"crypto/rand"
"encoding/hex"
"log"
"net/http"
"net/url"
"regexp"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/service/sqs"
"github.com/aws/aws-sdk-go-v2/service/sqs/types"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/sqsclient"
sqscreds "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/sqscreds/lib"
)
type sqsRendezvous struct {
transport http.RoundTripper
sqsClient sqsclient.SQSClient
sqsURL *url.URL
timeout time.Duration
numRetries int
}
func newSQSRendezvous(sqsQueue string, sqsCredsStr string, transport http.RoundTripper) (*sqsRendezvous, error) {
sqsURL, err := url.Parse(sqsQueue)
if err != nil {
return nil, err
}
sqsCreds, err := sqscreds.AwsCredsFromBase64(sqsCredsStr)
if err != nil {
return nil, err
}
queueURL := sqsURL.String()
hostName := sqsURL.Hostname()
regionRegex, _ := regexp.Compile(`^sqs\.([\w-]+)\.amazonaws\.com$`)
res := regionRegex.FindStringSubmatch(hostName)
if len(res) < 2 {
log.Fatal("Could not extract AWS region from SQS URL. Ensure that the SQS Queue URL provided is valid.")
}
region := res[1]
cfg, err := config.LoadDefaultConfig(context.TODO(),
config.WithCredentialsProvider(
credentials.NewStaticCredentialsProvider(sqsCreds.AwsAccessKeyId, sqsCreds.AwsSecretKey, ""),
),
config.WithRegion(region),
)
if err != nil {
log.Fatal(err)
}
client := sqs.NewFromConfig(cfg)
log.Println("Queue URL: ", queueURL)
return &sqsRendezvous{
transport: transport,
sqsClient: client,
sqsURL: sqsURL,
timeout: time.Second,
numRetries: 5,
}, nil
}
func (r *sqsRendezvous) Exchange(encPollReq []byte) ([]byte, error) {
log.Println("Negotiating via SQS Queue rendezvous...")
var id [8]byte
_, err := rand.Read(id[:])
if err != nil {
return nil, err
}
sqsClientID := hex.EncodeToString(id[:])
log.Println("SQS Client ID for rendezvous: " + sqsClientID)
_, err = r.sqsClient.SendMessage(context.TODO(), &sqs.SendMessageInput{
MessageAttributes: map[string]types.MessageAttributeValue{
"ClientID": {
DataType: aws.String("String"),
StringValue: aws.String(sqsClientID),
},
},
MessageBody: aws.String(string(encPollReq)),
QueueUrl: aws.String(r.sqsURL.String()),
})
if err != nil {
return nil, err
}
time.Sleep(r.timeout) // wait for client queue to be created by the broker
var responseQueueURL *string
for i := 0; i < r.numRetries; i++ {
// The SQS queue corresponding to the client where the SDP Answer will be placed
// may not be created yet. We will retry up to 5 times before we error out.
var res *sqs.GetQueueUrlOutput
res, err = r.sqsClient.GetQueueUrl(context.TODO(), &sqs.GetQueueUrlInput{
QueueName: aws.String("snowflake-client-" + sqsClientID),
})
if err != nil {
log.Println(err)
log.Printf("Attempt %d of %d to retrieve URL of response SQS queue failed.\n", i+1, r.numRetries)
time.Sleep(r.timeout)
} else {
responseQueueURL = res.QueueUrl
break
}
}
if err != nil {
return nil, err
}
var answer string
for i := 0; i < r.numRetries; i++ {
// Waiting for SDP Answer from proxy to be placed in SQS queue.
// We will retry up to 5 times before we error out.
res, err := r.sqsClient.ReceiveMessage(context.TODO(), &sqs.ReceiveMessageInput{
QueueUrl: responseQueueURL,
MaxNumberOfMessages: 1,
WaitTimeSeconds: 20,
})
if err != nil {
return nil, err
}
if len(res.Messages) == 0 {
log.Printf("Attempt %d of %d to receive message from response SQS queue failed. No message found in queue.\n", i+1, r.numRetries)
delay := float64(i)/2.0 + 1
time.Sleep(time.Duration(delay*1000) * (r.timeout / 1000))
} else {
answer = *res.Messages[0].Body
break
}
}
return []byte(answer), nil
}

View file

@ -0,0 +1,442 @@
package snowflake_client
import (
"bytes"
"errors"
"fmt"
"io"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/sqs"
"github.com/aws/aws-sdk-go-v2/service/sqs/types"
"github.com/golang/mock/gomock"
"github.com/pion/webrtc/v4"
. "github.com/smartystreets/goconvey/convey"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/amp"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/messages"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/nat"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/sqsclient"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/util"
)
// mockTransport's RoundTrip method returns a response with a fake status and
// body.
type mockTransport struct {
statusCode int
body []byte
}
func (t *mockTransport) RoundTrip(req *http.Request) (*http.Response, error) {
return &http.Response{
Status: fmt.Sprintf("%d %s", t.statusCode, http.StatusText(t.statusCode)),
StatusCode: t.statusCode,
Body: io.NopCloser(bytes.NewReader(t.body)),
}, nil
}
// errorTransport's RoundTrip method returns an error.
type errorTransport struct {
err error
}
func (t errorTransport) RoundTrip(req *http.Request) (*http.Response, error) {
return nil, t.err
}
// makeEncPollReq returns an encoded client poll request containing a given
// offer.
func makeEncPollReq(offer string) []byte {
encPollReq, err := (&messages.ClientPollRequest{
Offer: offer,
NAT: nat.NATUnknown,
}).EncodeClientPollRequest()
if err != nil {
panic(err)
}
return encPollReq
}
// makeEncPollResp returns an encoded client poll response with given answer and
// error strings.
func makeEncPollResp(answer, errorStr string) []byte {
encPollResp, err := (&messages.ClientPollResponse{
Answer: answer,
Error: errorStr,
}).EncodePollResponse()
if err != nil {
panic(err)
}
return encPollResp
}
var fakeEncPollReq = makeEncPollReq(`{"type":"offer","sdp":"test"}`)
func TestHTTPRendezvous(t *testing.T) {
Convey("HTTP rendezvous", t, func() {
Convey("Construct httpRendezvous with no front domain", func() {
transport := &mockTransport{http.StatusOK, []byte{}}
rend, err := newHTTPRendezvous("http://test.broker", []string{}, transport)
So(err, ShouldBeNil)
So(rend.brokerURL, ShouldNotBeNil)
So(rend.brokerURL.Host, ShouldResemble, "test.broker")
So(rend.fronts, ShouldEqual, []string{})
So(rend.transport, ShouldEqual, transport)
})
Convey("Construct httpRendezvous *with* front domain", func() {
transport := &mockTransport{http.StatusOK, []byte{}}
rend, err := newHTTPRendezvous("http://test.broker", []string{"front"}, transport)
So(err, ShouldBeNil)
So(rend.brokerURL, ShouldNotBeNil)
So(rend.brokerURL.Host, ShouldResemble, "test.broker")
So(rend.fronts, ShouldContain, "front")
So(rend.transport, ShouldEqual, transport)
})
Convey("httpRendezvous.Exchange responds with answer", func() {
fakeEncPollResp := makeEncPollResp(
`{"answer": "{\"type\":\"answer\",\"sdp\":\"fake\"}" }`,
"",
)
rend, err := newHTTPRendezvous("http://test.broker", []string{},
&mockTransport{http.StatusOK, fakeEncPollResp})
So(err, ShouldBeNil)
answer, err := rend.Exchange(fakeEncPollReq)
So(err, ShouldBeNil)
So(answer, ShouldResemble, fakeEncPollResp)
})
Convey("httpRendezvous.Exchange responds with no answer", func() {
fakeEncPollResp := makeEncPollResp(
"",
`{"error": "no snowflake proxies currently available"}`,
)
rend, err := newHTTPRendezvous("http://test.broker", []string{},
&mockTransport{http.StatusOK, fakeEncPollResp})
So(err, ShouldBeNil)
answer, err := rend.Exchange(fakeEncPollReq)
So(err, ShouldBeNil)
So(answer, ShouldResemble, fakeEncPollResp)
})
Convey("httpRendezvous.Exchange fails with unexpected HTTP status code", func() {
rend, err := newHTTPRendezvous("http://test.broker", []string{},
&mockTransport{http.StatusInternalServerError, []byte{}})
So(err, ShouldBeNil)
answer, err := rend.Exchange(fakeEncPollReq)
So(err, ShouldNotBeNil)
So(answer, ShouldBeNil)
So(err.Error(), ShouldResemble, brokerErrorUnexpected)
})
Convey("httpRendezvous.Exchange fails with error", func() {
transportErr := errors.New("error")
rend, err := newHTTPRendezvous("http://test.broker", []string{},
&errorTransport{err: transportErr})
So(err, ShouldBeNil)
answer, err := rend.Exchange(fakeEncPollReq)
So(err, ShouldEqual, transportErr)
So(answer, ShouldBeNil)
})
Convey("httpRendezvous.Exchange fails with large read", func() {
rend, err := newHTTPRendezvous("http://test.broker", []string{},
&mockTransport{http.StatusOK, make([]byte, readLimit+1)})
So(err, ShouldBeNil)
_, err = rend.Exchange(fakeEncPollReq)
So(err, ShouldEqual, io.ErrUnexpectedEOF)
})
})
}
func ampArmorEncode(p []byte) []byte {
var buf bytes.Buffer
enc, err := amp.NewArmorEncoder(&buf)
if err != nil {
panic(err)
}
_, err = enc.Write(p)
if err != nil {
panic(err)
}
err = enc.Close()
if err != nil {
panic(err)
}
return buf.Bytes()
}
func TestAMPCacheRendezvous(t *testing.T) {
Convey("AMP cache rendezvous", t, func() {
Convey("Construct ampCacheRendezvous with no cache and no front domain", func() {
transport := &mockTransport{http.StatusOK, []byte{}}
rend, err := newAMPCacheRendezvous("http://test.broker", "", []string{}, transport)
So(err, ShouldBeNil)
So(rend.brokerURL, ShouldNotBeNil)
So(rend.brokerURL.String(), ShouldResemble, "http://test.broker")
So(rend.cacheURL, ShouldBeNil)
So(rend.fronts, ShouldResemble, []string{})
So(rend.transport, ShouldEqual, transport)
})
Convey("Construct ampCacheRendezvous with cache and no front domain", func() {
transport := &mockTransport{http.StatusOK, []byte{}}
rend, err := newAMPCacheRendezvous("http://test.broker", "https://amp.cache/", []string{}, transport)
So(err, ShouldBeNil)
So(rend.brokerURL, ShouldNotBeNil)
So(rend.brokerURL.String(), ShouldResemble, "http://test.broker")
So(rend.cacheURL, ShouldNotBeNil)
So(rend.cacheURL.String(), ShouldResemble, "https://amp.cache/")
So(rend.fronts, ShouldResemble, []string{})
So(rend.transport, ShouldEqual, transport)
})
Convey("Construct ampCacheRendezvous with no cache and front domain", func() {
transport := &mockTransport{http.StatusOK, []byte{}}
rend, err := newAMPCacheRendezvous("http://test.broker", "", []string{"front"}, transport)
So(err, ShouldBeNil)
So(rend.brokerURL, ShouldNotBeNil)
So(rend.brokerURL.String(), ShouldResemble, "http://test.broker")
So(rend.cacheURL, ShouldBeNil)
So(rend.fronts, ShouldContain, "front")
So(rend.transport, ShouldEqual, transport)
})
Convey("Construct ampCacheRendezvous with cache and front domain", func() {
transport := &mockTransport{http.StatusOK, []byte{}}
rend, err := newAMPCacheRendezvous("http://test.broker", "https://amp.cache/", []string{"front"}, transport)
So(err, ShouldBeNil)
So(rend.brokerURL, ShouldNotBeNil)
So(rend.brokerURL.String(), ShouldResemble, "http://test.broker")
So(rend.cacheURL, ShouldNotBeNil)
So(rend.cacheURL.String(), ShouldResemble, "https://amp.cache/")
So(rend.fronts, ShouldContain, "front")
So(rend.transport, ShouldEqual, transport)
})
Convey("ampCacheRendezvous.Exchange responds with answer", func() {
fakeEncPollResp := makeEncPollResp(
`{"answer": "{\"type\":\"answer\",\"sdp\":\"fake\"}" }`,
"",
)
rend, err := newAMPCacheRendezvous("http://test.broker", "", []string{},
&mockTransport{http.StatusOK, ampArmorEncode(fakeEncPollResp)})
So(err, ShouldBeNil)
answer, err := rend.Exchange(fakeEncPollReq)
So(err, ShouldBeNil)
So(answer, ShouldResemble, fakeEncPollResp)
})
Convey("ampCacheRendezvous.Exchange responds with no answer", func() {
fakeEncPollResp := makeEncPollResp(
"",
`{"error": "no snowflake proxies currently available"}`,
)
rend, err := newAMPCacheRendezvous("http://test.broker", "", []string{},
&mockTransport{http.StatusOK, ampArmorEncode(fakeEncPollResp)})
So(err, ShouldBeNil)
answer, err := rend.Exchange(fakeEncPollReq)
So(err, ShouldBeNil)
So(answer, ShouldResemble, fakeEncPollResp)
})
Convey("ampCacheRendezvous.Exchange fails with unexpected HTTP status code", func() {
rend, err := newAMPCacheRendezvous("http://test.broker", "", []string{},
&mockTransport{http.StatusInternalServerError, []byte{}})
So(err, ShouldBeNil)
answer, err := rend.Exchange(fakeEncPollReq)
So(err, ShouldNotBeNil)
So(answer, ShouldBeNil)
So(err.Error(), ShouldResemble, brokerErrorUnexpected)
})
Convey("ampCacheRendezvous.Exchange fails with error", func() {
transportErr := errors.New("error")
rend, err := newAMPCacheRendezvous("http://test.broker", "", []string{},
&errorTransport{err: transportErr})
So(err, ShouldBeNil)
answer, err := rend.Exchange(fakeEncPollReq)
So(err, ShouldEqual, transportErr)
So(answer, ShouldBeNil)
})
Convey("ampCacheRendezvous.Exchange fails with large read", func() {
// readLimit should apply to the raw HTTP body, not the
// encoded bytes. Encode readLimit bytes—the encoded
// size will be larger—and try to read the body. It
// should fail.
rend, err := newAMPCacheRendezvous("http://test.broker", "", []string{},
&mockTransport{http.StatusOK, ampArmorEncode(make([]byte, readLimit))})
So(err, ShouldBeNil)
_, err = rend.Exchange(fakeEncPollReq)
// We may get io.ErrUnexpectedEOF here, or something
// like "missing </pre> tag".
So(err, ShouldNotBeNil)
})
})
}
func TestSQSRendezvous(t *testing.T) {
Convey("SQS Rendezvous", t, func() {
var sendMessageInput *sqs.SendMessageInput
var getQueueUrlInput *sqs.GetQueueUrlInput
Convey("Construct SQS queue rendezvous", func() {
transport := &mockTransport{http.StatusOK, []byte{}}
rend, err := newSQSRendezvous("https://sqs.us-east-1.amazonaws.com", "eyJhd3MtYWNjZXNzLWtleS1pZCI6InRlc3QtYWNjZXNzLWtleSIsImF3cy1zZWNyZXQta2V5IjoidGVzdC1zZWNyZXQta2V5In0=", transport)
So(err, ShouldBeNil)
So(rend.sqsClient, ShouldNotBeNil)
So(rend.sqsURL, ShouldNotBeNil)
So(rend.sqsURL.String(), ShouldResemble, "https://sqs.us-east-1.amazonaws.com")
})
ctrl := gomock.NewController(t)
mockSqsClient := sqsclient.NewMockSQSClient(ctrl)
responseQueueURL := "https://sqs.us-east-1.amazonaws.com/testing"
sqsUrl, _ := url.Parse("https://sqs.us-east-1.amazonaws.com/broker")
fakeEncPollResp := makeEncPollResp(
`{"answer": "{\"type\":\"answer\",\"sdp\":\"fake\"}" }`,
"",
)
sqsRendezvous := sqsRendezvous{
transport: &mockTransport{http.StatusOK, []byte{}},
sqsClient: mockSqsClient,
sqsURL: sqsUrl,
timeout: 0,
numRetries: 5,
}
Convey("sqsRendezvous.Exchange responds with answer", func() {
sqsClientId := ""
mockSqsClient.EXPECT().SendMessage(gomock.Any(), gomock.AssignableToTypeOf(sendMessageInput)).Do(func(ctx interface{}, input *sqs.SendMessageInput, optFns ...interface{}) {
So(*input.MessageBody, ShouldEqual, string(fakeEncPollResp))
So(*input.QueueUrl, ShouldEqual, sqsUrl.String())
sqsClientId = *input.MessageAttributes["ClientID"].StringValue
})
mockSqsClient.EXPECT().GetQueueUrl(gomock.Any(), gomock.AssignableToTypeOf(getQueueUrlInput)).DoAndReturn(func(ctx interface{}, input *sqs.GetQueueUrlInput, optFns ...interface{}) (*sqs.GetQueueUrlOutput, error) {
So(*input.QueueName, ShouldEqual, "snowflake-client-"+sqsClientId)
return &sqs.GetQueueUrlOutput{
QueueUrl: aws.String(responseQueueURL),
}, nil
})
mockSqsClient.EXPECT().ReceiveMessage(gomock.Any(), gomock.Eq(&sqs.ReceiveMessageInput{
QueueUrl: &responseQueueURL,
MaxNumberOfMessages: 1,
WaitTimeSeconds: 20,
})).Return(&sqs.ReceiveMessageOutput{
Messages: []types.Message{{Body: aws.String("answer")}},
}, nil)
answer, err := sqsRendezvous.Exchange(fakeEncPollResp)
So(answer, ShouldEqual, []byte("answer"))
So(err, ShouldBeNil)
})
Convey("sqsRendezvous.Exchange cannot get queue url", func() {
sqsClientId := ""
mockSqsClient.EXPECT().SendMessage(gomock.Any(), gomock.AssignableToTypeOf(sendMessageInput)).Do(func(ctx interface{}, input *sqs.SendMessageInput, optFns ...interface{}) {
So(*input.MessageBody, ShouldEqual, string(fakeEncPollResp))
So(*input.QueueUrl, ShouldEqual, sqsUrl.String())
sqsClientId = *input.MessageAttributes["ClientID"].StringValue
})
for i := 0; i < sqsRendezvous.numRetries; i++ {
mockSqsClient.EXPECT().GetQueueUrl(gomock.Any(), gomock.AssignableToTypeOf(getQueueUrlInput)).DoAndReturn(func(ctx interface{}, input *sqs.GetQueueUrlInput, optFns ...interface{}) (*sqs.GetQueueUrlOutput, error) {
So(*input.QueueName, ShouldEqual, "snowflake-client-"+sqsClientId)
return nil, errors.New("test error")
})
}
answer, err := sqsRendezvous.Exchange(fakeEncPollResp)
So(answer, ShouldBeNil)
So(err, ShouldNotBeNil)
So(err, ShouldEqual, errors.New("test error"))
})
Convey("sqsRendezvous.Exchange does not receive answer", func() {
sqsClientId := ""
mockSqsClient.EXPECT().SendMessage(gomock.Any(), gomock.AssignableToTypeOf(sendMessageInput)).Do(func(ctx interface{}, input *sqs.SendMessageInput, optFns ...interface{}) {
So(*input.MessageBody, ShouldEqual, string(fakeEncPollResp))
So(*input.QueueUrl, ShouldEqual, sqsUrl.String())
sqsClientId = *input.MessageAttributes["ClientID"].StringValue
})
mockSqsClient.EXPECT().GetQueueUrl(gomock.Any(), gomock.AssignableToTypeOf(getQueueUrlInput)).DoAndReturn(func(ctx interface{}, input *sqs.GetQueueUrlInput, optFns ...interface{}) (*sqs.GetQueueUrlOutput, error) {
So(*input.QueueName, ShouldEqual, "snowflake-client-"+sqsClientId)
return &sqs.GetQueueUrlOutput{
QueueUrl: aws.String(responseQueueURL),
}, nil
})
for i := 0; i < sqsRendezvous.numRetries; i++ {
mockSqsClient.EXPECT().ReceiveMessage(gomock.Any(), gomock.Eq(&sqs.ReceiveMessageInput{
QueueUrl: &responseQueueURL,
MaxNumberOfMessages: 1,
WaitTimeSeconds: 20,
})).Return(&sqs.ReceiveMessageOutput{
Messages: []types.Message{},
}, nil)
}
answer, err := sqsRendezvous.Exchange(fakeEncPollResp)
So(answer, ShouldEqual, []byte{})
So(err, ShouldBeNil)
})
})
}
func TestBrokerChannel(t *testing.T) {
Convey("Requests a proxy and handles response", t, func() {
answerSdp := &webrtc.SessionDescription{
Type: webrtc.SDPTypeAnswer,
SDP: "test",
}
answerSdpStr, _ := util.SerializeSessionDescription(answerSdp)
serverResponse, _ := (&messages.ClientPollResponse{
Answer: answerSdpStr,
}).EncodePollResponse()
offerSdp := &webrtc.SessionDescription{
Type: webrtc.SDPTypeOffer,
SDP: "test",
}
requestBodyChan := make(chan []byte)
mockServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
body, _ := io.ReadAll(r.Body)
go func() {
requestBodyChan <- body
}()
w.Write(serverResponse)
}))
defer mockServer.Close()
brokerChannel, err := newBrokerChannelFromConfig(ClientConfig{
BrokerURL: mockServer.URL,
BridgeFingerprint: "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
})
So(err, ShouldBeNil)
brokerChannel.SetNATType(nat.NATRestricted)
answerSdpReturned, err := brokerChannel.Negotiate(
offerSdp,
brokerChannel.GetNATType(),
)
So(err, ShouldBeNil)
So(answerSdpReturned, ShouldEqual, answerSdp)
body := <-requestBodyChan
pollReq, err := messages.DecodeClientPollRequest(body)
So(err, ShouldBeNil)
So(pollReq.Fingerprint, ShouldEqual, "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
So(pollReq.NAT, ShouldEqual, nat.NATRestricted)
requestSdp, err := util.DeserializeSessionDescription(pollReq.Offer)
So(err, ShouldBeNil)
So(requestSdp, ShouldEqual, offerSdp)
})
}

View file

@ -1,69 +1,411 @@
/*
Package snowflake_client implements functionality necessary for a client to establish a connection
to a server using Snowflake.
Included in the package is a Transport type that implements the Pluggable Transports v2.1 Go API
specification. To use Snowflake, you must first create a client from a configuration:
config := snowflake_client.ClientConfig{
BrokerURL: "https://snowflake-broker.example.com",
FrontDomain: "https://friendlyfrontdomain.net",
// ...
}
transport, err := snowflake_client.NewSnowflakeClient(config)
if err != nil {
// handle error
}
The Dial function connects to a Snowflake server:
conn, err := transport.Dial()
if err != nil {
// handle error
}
defer conn.Close()
*/
package snowflake_client
import (
"context"
"errors"
"io"
"log"
"math/rand"
"net"
"sync"
"net/url"
"strings"
"time"
"github.com/pion/ice/v4"
"github.com/pion/webrtc/v4"
"github.com/xtaci/kcp-go/v5"
"github.com/xtaci/smux"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/event"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/nat"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/turbotunnel"
)
const (
// ReconnectTimeout is the time a Snowflake client will wait before collecting
// more snowflakes.
ReconnectTimeout = 10 * time.Second
// SnowflakeTimeout is the time a Snowflake client will wait before determining that
// a remote snowflake has been disconnected. If no new messages are sent or received
// in this time period, the client will terminate the connection with the remote
// peer and collect a new snowflake.
SnowflakeTimeout = 20 * time.Second
// DataChannelTimeout is how long the client will wait for the OnOpen callback
// on a newly created DataChannel.
DataChannelTimeout = 10 * time.Second
// WindowSize is the number of packets in the send and receive window of a KCP connection.
WindowSize = 65535
// StreamSize controls the maximum amount of in flight data between a client and server.
StreamSize = 1048576 // 1MB
)
type dummyAddr struct{}
func (addr dummyAddr) Network() string { return "dummy" }
func (addr dummyAddr) String() string { return "dummy" }
// Transport is a structure with methods that conform to the Go PT v2.1 API
// https://github.com/Pluggable-Transports/Pluggable-Transports-spec/blob/master/releases/PTSpecV2.1/Pluggable%20Transport%20Specification%20v2.1%20-%20Go%20Transport%20API.pdf
type Transport struct {
dialer *WebRTCDialer
// EventDispatcher is the event bus for snowflake events.
// When an important event happens, it will be distributed here.
eventDispatcher event.SnowflakeEventDispatcher
}
// ClientConfig defines how the SnowflakeClient will connect to the broker and Snowflake proxies.
type ClientConfig struct {
// BrokerURL is the full URL of the Snowflake broker that the client will connect to.
BrokerURL string
// AmpCacheURL is the full URL of a valid AMP cache. A nonzero value indicates
// that AMP cache will be used as the rendezvous method with the broker.
AmpCacheURL string
// SQSQueueURL is the full URL of an AWS SQS Queue. A nonzero value indicates
// that SQS queue will be used as the rendezvous method with the broker.
SQSQueueURL string
// SQSCredsStr is a base64-encoded string of the credentials containing the access key ID and secret key used to access the AWS SQS queue.
SQSCredsStr string
// FrontDomain is the full URL of an optional front domain that can be used with either
// the AMP cache or HTTP domain fronting rendezvous method.
FrontDomain string
// FrontDomains is a slice of full URLs of optional front domains that can be used
// with either the AMP cache or HTTP domain fronting rendezvous method.
FrontDomains []string
// ICEAddresses are a slice of ICE server URLs that will be used for NAT traversal and
// the creation of the client's WebRTC SDP offer.
ICEAddresses []string
// KeepLocalAddresses is an optional setting that will prevent the removal of local or
// invalid addresses from the client's SDP offer. This is useful for local deployments
// and testing.
KeepLocalAddresses bool
// Max is the maximum number of snowflake proxy peers that the client should attempt to
// connect to. Defaults to 1.
Max int
// UTLSClientID is the type of user application that snowflake should imitate.
// If an empty value is provided, it will use Go's default TLS implementation
UTLSClientID string
// UTLSRemoveSNI is the flag to control whether SNI should be removed from Client Hello
// when uTLS is used.
UTLSRemoveSNI bool
// BridgeFingerprint is the fingerprint of the bridge that the client will eventually
// connect to, as specified in the Bridge line of the torrc.
BridgeFingerprint string
// CommunicationProxy is the proxy address for network communication
CommunicationProxy *url.URL
}
// NewSnowflakeClient creates a new Snowflake transport client that can spawn multiple
// Snowflake connections.
//
// brokerURL and frontDomain are the urls for the broker host and domain fronting host
// iceAddresses are the STUN/TURN urls needed for WebRTC negotiation
// keepLocalAddresses is a flag to enable sending local network addresses (for testing purposes)
// max is the maximum number of snowflakes the client should gather for each SOCKS connection
func NewSnowflakeClient(config ClientConfig) (*Transport, error) {
log.Println("\n\n\n --- Starting Snowflake Client ---")
iceServers := parseIceServers(config.ICEAddresses)
// chooses a random subset of servers from inputs
rand.Seed(time.Now().UnixNano())
rand.Shuffle(len(iceServers), func(i, j int) {
iceServers[i], iceServers[j] = iceServers[j], iceServers[i]
})
if len(iceServers) > 2 {
iceServers = iceServers[:(len(iceServers)+1)/2]
}
log.Printf("Using ICE servers:")
for _, server := range iceServers {
log.Printf("url: %v", strings.Join(server.URLs, " "))
}
// Maintain backwards compatibility with old FrontDomain field of ClientConfig
if (len(config.FrontDomains) == 0) && (config.FrontDomain != "") {
config.FrontDomains = []string{config.FrontDomain}
}
// Rendezvous with broker using the given parameters.
broker, err := newBrokerChannelFromConfig(config)
if err != nil {
return nil, err
}
go updateNATType(iceServers, broker, config.CommunicationProxy)
natPolicy := &NATPolicy{}
max := 1
if config.Max > max {
max = config.Max
}
eventsLogger := event.NewSnowflakeEventDispatcher()
transport := &Transport{dialer: NewWebRTCDialerWithNatPolicyAndEventsAndProxy(broker, natPolicy, iceServers, max, eventsLogger, config.CommunicationProxy), eventDispatcher: eventsLogger}
return transport, nil
}
// Dial creates a new Snowflake connection.
// Dial starts the collection of snowflakes and returns a SnowflakeConn that is a
// wrapper around a smux.Stream that will reliably deliver data to a Snowflake
// server through one or more snowflake proxies.
func (t *Transport) Dial() (net.Conn, error) {
// Cleanup functions to run before returning, in case of an error.
var cleanup []func()
defer func() {
// Run cleanup in reverse order, as defer does.
for i := len(cleanup) - 1; i >= 0; i-- {
cleanup[i]()
}
}()
// Prepare to collect remote WebRTC peers.
snowflakes, err := NewPeers(t.dialer)
if err != nil {
return nil, err
}
cleanup = append(cleanup, func() { snowflakes.End() })
// Use a real logger to periodically output how much traffic is happening.
snowflakes.bytesLogger = newBytesSyncLogger()
log.Printf("---- SnowflakeConn: begin collecting snowflakes ---")
go connectLoop(snowflakes)
// Create a new smux session
log.Printf("---- SnowflakeConn: starting a new session ---")
pconn, sess, err := newSession(snowflakes)
if err != nil {
return nil, err
}
cleanup = append(cleanup, func() {
pconn.Close()
sess.Close()
})
// On the smux session we overlay a stream.
stream, err := sess.OpenStream()
if err != nil {
return nil, err
}
// Begin exchanging data.
log.Printf("---- SnowflakeConn: begin stream %v ---", stream.ID())
cleanup = append(cleanup, func() { stream.Close() })
// All good, clear the cleanup list.
cleanup = nil
return &SnowflakeConn{Stream: stream, sess: sess, pconn: pconn, snowflakes: snowflakes}, nil
}
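Dial's error handling relies on a cleanup stack that a deferred loop runs in reverse order, and that is cleared (`cleanup = nil`) on success so the live resources survive. A self-contained sketch of the same pattern, with invented resource names:

```go
package main

import "fmt"

// setup acquires two resources. On failure, the deferred loop releases
// whatever was already acquired, in reverse order; on success the
// cleanup list is cleared so nothing is torn down.
func setup(failSecond bool) (trace []string, err error) {
	var cleanup []func()
	defer func() {
		for i := len(cleanup) - 1; i >= 0; i-- {
			cleanup[i]()
		}
	}()

	trace = append(trace, "open A")
	cleanup = append(cleanup, func() { trace = append(trace, "close A") })

	if failSecond {
		return trace, fmt.Errorf("resource B failed")
	}
	trace = append(trace, "open B")
	cleanup = append(cleanup, func() { trace = append(trace, "close B") })

	cleanup = nil // success: keep both resources open
	return trace, nil
}

func main() {
	got, err := setup(true)
	fmt.Println(got, err) // [open A close A] resource B failed
}
```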
func (t *Transport) AddSnowflakeEventListener(receiver event.SnowflakeEventReceiver) {
t.eventDispatcher.AddSnowflakeEventListener(receiver)
}
func (t *Transport) RemoveSnowflakeEventListener(receiver event.SnowflakeEventReceiver) {
t.eventDispatcher.RemoveSnowflakeEventListener(receiver)
}
// SetRendezvousMethod sets the rendezvous method to the Snowflake broker.
func (t *Transport) SetRendezvousMethod(r RendezvousMethod) {
t.dialer.Rendezvous = r
}
// SnowflakeConn is a reliable connection to a snowflake server that implements net.Conn.
type SnowflakeConn struct {
*smux.Stream
sess *smux.Session
pconn net.PacketConn
snowflakes *Peers
}
// Close closes the connection.
//
// The collection of snowflake proxies for this connection is stopped.
func (conn *SnowflakeConn) Close() error {
var err error
log.Printf("---- SnowflakeConn: closed stream %v ---", conn.ID())
err = conn.Stream.Close()
log.Printf("---- SnowflakeConn: end collecting snowflakes ---")
conn.snowflakes.End()
if inerr := conn.pconn.Close(); err == nil {
err = inerr
}
log.Printf("---- SnowflakeConn: discarding finished session ---")
if inerr := conn.sess.Close(); err == nil {
err = inerr
}
return err
}
// loop through all provided STUN servers until we exhaust the list or find
// one that is compatible with RFC 5780
func updateNATType(servers []webrtc.ICEServer, broker *BrokerChannel, proxy *url.URL) {
var restrictedNAT bool
var err error
for _, server := range servers {
addr := strings.TrimPrefix(server.URLs[0], "stun:")
restrictedNAT, err = nat.CheckIfRestrictedNATWithProxy(addr, proxy)
if err != nil {
log.Printf("Warning: NAT checking failed for server at %s: %s", addr, err)
} else {
if restrictedNAT {
broker.SetNATType(nat.NATRestricted)
} else {
broker.SetNATType(nat.NATUnrestricted)
}
break
}
}
if err != nil {
broker.SetNATType(nat.NATUnknown)
}
}
// Returns a slice of webrtc.ICEServer given a slice of addresses
func parseIceServers(addresses []string) []webrtc.ICEServer {
var servers []webrtc.ICEServer
if len(addresses) == 0 {
return nil
}
for _, address := range addresses {
address = strings.TrimSpace(address)
// ice.ParseURL recognizes many types of ICE servers,
// but we only support stun over UDP currently
u, err := url.Parse(address)
if err != nil {
log.Printf("Warning: Parsing ICE server %v resulted in error: %v, skipping", address, err)
continue
}
if u.Scheme != "stun" {
log.Printf("Warning: Only stun: (STUN over UDP) servers are supported currently, skipping %v", address)
continue
}
// add default port, other sanity checks
parsedURL, err := ice.ParseURL(address)
if err != nil {
log.Printf("Warning: Parsing ICE server %v resulted in error: %v, skipping", address, err)
continue
}
servers = append(servers, webrtc.ICEServer{
URLs: []string{parsedURL.String()},
})
}
return servers
}
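parseIceServers accepts only `stun:` URLs and skips everything else with a warning. The scheme check can be sketched on its own with the standard library; `filterStunURLs` is an illustrative helper, not the project's API:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// filterStunURLs keeps only well-formed stun: addresses, mirroring the
// scheme check in parseIceServers (turn: and stuns: are rejected).
func filterStunURLs(addresses []string) []string {
	var kept []string
	for _, address := range addresses {
		address = strings.TrimSpace(address)
		u, err := url.Parse(address)
		if err != nil || u.Scheme != "stun" {
			continue
		}
		kept = append(kept, address)
	}
	return kept
}

func main() {
	in := []string{"stun:stun.example.com:3478", "turn:turn.example.com", " stun:other.example.net "}
	fmt.Println(filterStunURLs(in))
}
```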
// newSession returns a new smux.Session and the net.PacketConn it is running
// over. The net.PacketConn successively connects through Snowflake proxies
// pulled from snowflakes.
func newSession(snowflakes SnowflakeCollector) (net.PacketConn, *smux.Session, error) {
clientID := turbotunnel.NewClientID()
// We build a persistent KCP session on a sequence of ephemeral WebRTC
// connections. This dialContext tells RedialPacketConn how to get a new
// WebRTC connection when the previous one dies. Inside each WebRTC
// connection, we use encapsulationPacketConn to encode packets into a
// stream.
dialContext := func(ctx context.Context) (net.PacketConn, error) {
log.Printf("redialing on same connection")
// Obtain an available WebRTC remote. May block.
conn := snowflakes.Pop()
if conn == nil {
return nil, errors.New("handler: Received invalid Snowflake")
}
log.Println("---- Handler: snowflake assigned ----")
// Send the magic Turbo Tunnel token.
_, err := conn.Write(turbotunnel.Token[:])
if err != nil {
return nil, err
}
// Send ClientID prefix.
_, err = conn.Write(clientID[:])
if err != nil {
return nil, err
}
return newEncapsulationPacketConn(dummyAddr{}, dummyAddr{}, conn), nil
}
pconn := turbotunnel.NewRedialPacketConn(dummyAddr{}, dummyAddr{}, dialContext)
// conn is built on the underlying RedialPacketConn—when one WebRTC
// connection dies, another one will be found to take its place. The
// sequence of packets across multiple WebRTC connections drives the KCP
// engine.
conn, err := kcp.NewConn2(dummyAddr{}, nil, 0, 0, pconn)
if err != nil {
pconn.Close()
return nil, nil, err
}
// Permit coalescing the payloads of consecutive sends.
conn.SetStreamMode(true)
// Set the maximum send and receive window sizes to a high number
// Removes KCP bottlenecks: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40026
conn.SetWindowSize(WindowSize, WindowSize)
// Disable the dynamic congestion window (limit only by the
// maximum of local and remote static windows).
conn.SetNoDelay(
0, // default nodelay
0, // default interval
0, // default resend
1, // nc=1 => congestion window off
)
// On the KCP connection we overlay an smux session and stream.
smuxConfig := smux.DefaultConfig()
smuxConfig.Version = 2
smuxConfig.KeepAliveTimeout = 10 * time.Minute
smuxConfig.MaxStreamBuffer = StreamSize
sess, err := smux.Client(conn, smuxConfig)
if err != nil {
conn.Close()
pconn.Close()
return nil, nil, err
}
return pconn, sess, err
}
// Exchanges bytes between two ReadWriters.
// (In this case, between a SOCKS and WebRTC connection.)
func copyLoop(a, b io.ReadWriter) {
var wg sync.WaitGroup
wg.Add(2)
go func() {
io.Copy(b, a)
wg.Done()
}()
go func() {
io.Copy(a, b)
wg.Done()
}()
wg.Wait()
log.Println("copy loop ended")
}
// Maintain |SnowflakeCapacity| number of available WebRTC connections, to
// transfer to the Tor SOCKS handler when needed.
func connectLoop(snowflakes SnowflakeCollector) {
for {
timer := time.After(ReconnectTimeout)
_, err := snowflakes.Collect()
if err != nil {
log.Printf("WebRTC: %v Retrying...", err)
}
select {
case <-timer:
continue
case <-snowflakes.Melted():
log.Println("ConnectLoop: stopped.")
return
}
}
}

client/lib/turbotunnel.go Normal file

@ -0,0 +1,69 @@
package snowflake_client
import (
"bufio"
"errors"
"io"
"net"
"time"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/encapsulation"
)
var errNotImplemented = errors.New("not implemented")
// encapsulationPacketConn implements the net.PacketConn interface over an
// io.ReadWriteCloser stream, using the encapsulation package to represent
// packets in a stream.
type encapsulationPacketConn struct {
io.ReadWriteCloser
localAddr net.Addr
remoteAddr net.Addr
bw *bufio.Writer
}
// newEncapsulationPacketConn makes an encapsulationPacketConn out of a given
// io.ReadWriteCloser and provided local and remote addresses.
func newEncapsulationPacketConn(
localAddr, remoteAddr net.Addr,
conn io.ReadWriteCloser,
) *encapsulationPacketConn {
return &encapsulationPacketConn{
ReadWriteCloser: conn,
localAddr: localAddr,
remoteAddr: remoteAddr,
bw: bufio.NewWriter(conn),
}
}
// ReadFrom reads an encapsulated packet from the stream.
func (c *encapsulationPacketConn) ReadFrom(p []byte) (int, net.Addr, error) {
n, err := encapsulation.ReadData(c.ReadWriteCloser, p)
if err == io.ErrShortBuffer {
err = nil
}
return n, c.remoteAddr, err
}
// WriteTo writes an encapsulated packet to the stream.
func (c *encapsulationPacketConn) WriteTo(p []byte, addr net.Addr) (int, error) {
// addr is ignored.
_, err := encapsulation.WriteData(c.bw, p)
if err == nil {
err = c.bw.Flush()
}
if err != nil {
return 0, err
}
return len(p), nil
}
// LocalAddr returns the localAddr value that was passed to
// NewEncapsulationPacketConn.
func (c *encapsulationPacketConn) LocalAddr() net.Addr {
return c.localAddr
}
func (c *encapsulationPacketConn) SetDeadline(t time.Time) error { return errNotImplemented }
func (c *encapsulationPacketConn) SetReadDeadline(t time.Time) error { return errNotImplemented }
func (c *encapsulationPacketConn) SetWriteDeadline(t time.Time) error { return errNotImplemented }


@ -1,95 +1,71 @@
package lib
package snowflake_client
import (
"fmt"
"log"
"time"
"github.com/keroserene/go-webrtc"
)
const (
LogTimeInterval = 5
LogTimeInterval = 5 * time.Second
)
type IceServerList []webrtc.ConfigurationOption
func (i *IceServerList) String() string {
return fmt.Sprint(*i)
type bytesLogger interface {
addOutbound(int64)
addInbound(int64)
}
type BytesLogger interface {
Log()
AddOutbound(int)
AddInbound(int)
}
// Default bytesLogger does nothing.
type bytesNullLogger struct{}
// Default BytesLogger does nothing.
type BytesNullLogger struct{}
func (b bytesNullLogger) addOutbound(amount int64) {}
func (b bytesNullLogger) addInbound(amount int64) {}
func (b BytesNullLogger) Log() {}
func (b BytesNullLogger) AddOutbound(amount int) {}
func (b BytesNullLogger) AddInbound(amount int) {}
// BytesSyncLogger uses channels to safely log from multiple sources with output
// bytesSyncLogger uses channels to safely log from multiple sources with output
// occurring at reasonable intervals.
type BytesSyncLogger struct {
OutboundChan chan int
InboundChan chan int
Outbound int
Inbound int
OutEvents int
InEvents int
IsLogging bool
type bytesSyncLogger struct {
outboundChan chan int64
inboundChan chan int64
}
func (b *BytesSyncLogger) Log() {
b.IsLogging = true
var amount int
output := func() {
log.Printf("Traffic Bytes (in|out): %d | %d -- (%d OnMessages, %d Sends)",
b.Inbound, b.Outbound, b.InEvents, b.OutEvents)
b.Outbound = 0
b.OutEvents = 0
b.Inbound = 0
b.InEvents = 0
// newBytesSyncLogger returns a new bytesSyncLogger and starts its logging loop.
func newBytesSyncLogger() *bytesSyncLogger {
b := &bytesSyncLogger{
outboundChan: make(chan int64, 5),
inboundChan: make(chan int64, 5),
}
last := time.Now()
go b.log()
return b
}
func (b *bytesSyncLogger) log() {
var outbound, inbound int64
var outEvents, inEvents int
ticker := time.NewTicker(LogTimeInterval)
for {
select {
case amount = <-b.OutboundChan:
b.Outbound += amount
b.OutEvents++
last := time.Now()
if time.Since(last) > time.Second*LogTimeInterval {
last = time.Now()
output()
}
case amount = <-b.InboundChan:
b.Inbound += amount
b.InEvents++
if time.Since(last) > time.Second*LogTimeInterval {
last = time.Now()
output()
}
case <-time.After(time.Second * LogTimeInterval):
if b.InEvents > 0 || b.OutEvents > 0 {
output()
case <-ticker.C:
if outEvents > 0 || inEvents > 0 {
log.Printf("Traffic Bytes (in|out): %d | %d -- (%d OnMessages, %d Sends)",
inbound, outbound, inEvents, outEvents)
}
outbound = 0
outEvents = 0
inbound = 0
inEvents = 0
case amount := <-b.outboundChan:
outbound += amount
outEvents++
case amount := <-b.inboundChan:
inbound += amount
inEvents++
}
}
}
func (b *BytesSyncLogger) AddOutbound(amount int) {
if !b.IsLogging {
return
}
b.OutboundChan <- amount
func (b *bytesSyncLogger) addOutbound(amount int64) {
b.outboundChan <- amount
}
func (b *BytesSyncLogger) AddInbound(amount int) {
if !b.IsLogging {
return
}
b.InboundChan <- amount
func (b *bytesSyncLogger) addInbound(amount int64) {
b.inboundChan <- amount
}


@ -1,67 +1,118 @@
package lib
package snowflake_client
import (
"bytes"
"crypto/rand"
"encoding/hex"
"errors"
"io"
"log"
"net"
"net/url"
"sync"
"time"
"github.com/dchest/uniuri"
"github.com/keroserene/go-webrtc"
"github.com/pion/ice/v4"
"github.com/pion/transport/v3"
"github.com/pion/transport/v3/stdnet"
"github.com/pion/webrtc/v4"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/event"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/proxy"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/util"
)
// Remote WebRTC peer.
// Implements the |Snowflake| interface, which includes
// |io.ReadWriter|, |Resetter|, and |Connector|.
// WebRTCPeer represents a WebRTC connection to a remote snowflake proxy.
//
// Handles preparation of go-webrtc PeerConnection. Only ever has
// one DataChannel.
// Each WebRTCPeer only ever has one DataChannel that is used as the peer's transport.
type WebRTCPeer struct {
id string
config *webrtc.Configuration
pc *webrtc.PeerConnection
transport SnowflakeDataChannel // Holds the WebRTC DataChannel.
broker *BrokerChannel
transport *webrtc.DataChannel
offerChannel chan *webrtc.SessionDescription
answerChannel chan *webrtc.SessionDescription
errorChannel chan error
recvPipe *io.PipeReader
writePipe *io.PipeWriter
lastReceive time.Time
buffer bytes.Buffer
reset chan struct{}
recvPipe *io.PipeReader
writePipe *io.PipeWriter
closed bool
mu sync.Mutex // protects the following:
lastReceive time.Time
lock sync.Mutex // Synchronization for DataChannel destruction
once sync.Once // Synchronization for PeerConnection destruction
open chan struct{} // Channel to notify when datachannel opens
closed chan struct{}
BytesLogger
once sync.Once // Synchronization for PeerConnection destruction
bytesLogger bytesLogger
eventsLogger event.SnowflakeEventReceiver
proxy *url.URL
}
// Construct a WebRTC PeerConnection.
func NewWebRTCPeer(config *webrtc.Configuration,
broker *BrokerChannel) *WebRTCPeer {
// Deprecated: Use NewWebRTCPeerWithNatPolicyAndEventsAndProxy Instead.
func NewWebRTCPeer(
config *webrtc.Configuration, broker *BrokerChannel,
) (*WebRTCPeer, error) {
return NewWebRTCPeerWithNatPolicyAndEventsAndProxy(
config, broker, nil, nil, nil,
)
}
// Deprecated: Use NewWebRTCPeerWithNatPolicyAndEventsAndProxy Instead.
func NewWebRTCPeerWithEvents(
config *webrtc.Configuration, broker *BrokerChannel,
eventsLogger event.SnowflakeEventReceiver,
) (*WebRTCPeer, error) {
return NewWebRTCPeerWithNatPolicyAndEventsAndProxy(
config, broker, nil, eventsLogger, nil,
)
}
// Deprecated: Use NewWebRTCPeerWithNatPolicyAndEventsAndProxy Instead.
func NewWebRTCPeerWithEventsAndProxy(
config *webrtc.Configuration, broker *BrokerChannel,
eventsLogger event.SnowflakeEventReceiver, proxy *url.URL,
) (*WebRTCPeer, error) {
return NewWebRTCPeerWithNatPolicyAndEventsAndProxy(
config, broker, nil, eventsLogger, proxy,
)
}
// NewWebRTCPeerWithNatPolicyAndEventsAndProxy constructs
// a WebRTC PeerConnection to a snowflake proxy.
//
// The creation of the peer handles the signaling to the Snowflake broker, including
// the exchange of SDP information, the creation of a PeerConnection, and the establishment
// of a DataChannel to the Snowflake proxy.
func NewWebRTCPeerWithNatPolicyAndEventsAndProxy(
config *webrtc.Configuration, broker *BrokerChannel, natPolicy *NATPolicy,
eventsLogger event.SnowflakeEventReceiver, proxy *url.URL,
) (*WebRTCPeer, error) {
if eventsLogger == nil {
eventsLogger = event.NewSnowflakeEventDispatcher()
}
connection := new(WebRTCPeer)
connection.id = "snowflake-" + uniuri.New()
connection.config = config
connection.broker = broker
connection.offerChannel = make(chan *webrtc.SessionDescription, 1)
connection.answerChannel = make(chan *webrtc.SessionDescription, 1)
// Error channel is mostly for reporting during the initial SDP offer
// creation & local description setting, which happens asynchronously.
connection.errorChannel = make(chan error, 1)
connection.reset = make(chan struct{}, 1)
{
var buf [8]byte
if _, err := rand.Read(buf[:]); err != nil {
panic(err)
}
connection.id = "snowflake-" + hex.EncodeToString(buf[:])
}
connection.closed = make(chan struct{})
// Override with something that's not NullLogger to have real logging.
connection.BytesLogger = &BytesNullLogger{}
connection.bytesLogger = &bytesNullLogger{}
// Pipes remain the same even when DataChannel gets switched.
connection.recvPipe, connection.writePipe = io.Pipe()
return connection
connection.eventsLogger = eventsLogger
connection.proxy = proxy
err := connection.connect(config, broker, natPolicy)
if err != nil {
connection.Close()
return nil, err
}
return connection, nil
}
// Read bytes from local SOCKS.
@ -73,285 +124,262 @@ func (c *WebRTCPeer) Read(b []byte) (int, error) {
// Writes bytes out to remote WebRTC.
// As part of |io.ReadWriter|
func (c *WebRTCPeer) Write(b []byte) (int, error) {
c.lock.Lock()
defer c.lock.Unlock()
c.BytesLogger.AddOutbound(len(b))
// TODO: Buffering could be improved / separated out of WebRTCPeer.
if nil == c.transport {
log.Printf("Buffered %d bytes --> WebRTC", len(b))
c.buffer.Write(b)
} else {
c.transport.Send(b)
err := c.transport.Send(b)
if err != nil {
return 0, err
}
c.bytesLogger.addOutbound(int64(len(b)))
return len(b), nil
}
// As part of |Snowflake|
// Closed returns a boolean indicating whether the peer is closed.
func (c *WebRTCPeer) Closed() bool {
select {
case <-c.closed:
return true
default:
}
return false
}
// Close closes the connection to the snowflake proxy.
func (c *WebRTCPeer) Close() error {
c.once.Do(func() {
c.closed = true
close(c.closed)
c.cleanup()
c.Reset()
log.Printf("WebRTC: Closing")
})
return nil
}
// As part of |Resetter|
func (c *WebRTCPeer) Reset() {
if nil == c.reset {
return
}
c.reset <- struct{}{}
}
// As part of |Resetter|
func (c *WebRTCPeer) WaitForReset() { <-c.reset }
// Prevent long-lived broken remotes.
// Should also update the DataChannel in underlying go-webrtc's to make Closes
// more immediate / responsive.
func (c *WebRTCPeer) checkForStaleness() {
func (c *WebRTCPeer) checkForStaleness(timeout time.Duration) {
c.mu.Lock()
c.lastReceive = time.Now()
c.mu.Unlock()
for {
if c.closed {
return
}
if time.Since(c.lastReceive).Seconds() > SnowflakeTimeout {
log.Println("WebRTC: No messages received for", SnowflakeTimeout,
"seconds -- closing stale connection.")
c.mu.Lock()
lastReceive := c.lastReceive
c.mu.Unlock()
if time.Since(lastReceive) > timeout {
log.Printf("WebRTC: No messages received for %v -- closing stale connection.",
timeout)
err := errors.New("no messages received, closing stale connection")
c.eventsLogger.OnNewSnowflakeEvent(event.EventOnSnowflakeConnectionFailed{Error: err})
c.Close()
return
}
<-time.After(time.Second)
}
}
// As part of |Connector| interface.
func (c *WebRTCPeer) Connect() error {
log.Println(c.id, " connecting...")
// TODO: When go-webrtc is more stable, it's possible that a new
// PeerConnection won't need to be re-prepared each time.
err := c.preparePeerConnection()
if err != nil {
return err
}
err = c.establishDataChannel()
if err != nil {
return errors.New("WebRTC: Could not establish DataChannel.")
}
err = c.exchangeSDP()
if err != nil {
return err
}
go c.checkForStaleness()
return nil
}
// Create and prepare callbacks on a new WebRTC PeerConnection.
func (c *WebRTCPeer) preparePeerConnection() error {
if nil != c.pc {
c.pc.Destroy()
c.pc = nil
}
pc, err := webrtc.NewPeerConnection(c.config)
if err != nil {
log.Printf("NewPeerConnection ERROR: %s", err)
return err
}
// Prepare PeerConnection callbacks.
pc.OnNegotiationNeeded = func() {
log.Println("WebRTC: OnNegotiationNeeded")
go func() {
offer, err := pc.CreateOffer()
// TODO: Potentially timeout and retry if ICE isn't working.
if err != nil {
c.errorChannel <- err
return
}
err = pc.SetLocalDescription(offer)
if err != nil {
c.errorChannel <- err
return
}
}()
}
// Allow candidates to accumulate until IceGatheringStateComplete.
pc.OnIceCandidate = func(candidate webrtc.IceCandidate) {
log.Printf(candidate.Candidate)
}
pc.OnIceGatheringStateChange = func(state webrtc.IceGatheringState) {
if state == webrtc.IceGatheringStateComplete {
log.Printf("WebRTC: IceGatheringStateComplete")
c.offerChannel <- pc.LocalDescription()
}
}
// This callback is not expected, as the Client initiates the creation
// of the data channel, not the remote peer.
pc.OnDataChannel = func(channel *webrtc.DataChannel) {
log.Println("OnDataChannel")
panic("Unexpected OnDataChannel!")
}
c.pc = pc
log.Println("WebRTC: PeerConnection created.")
return nil
}
// Create a WebRTC DataChannel locally.
func (c *WebRTCPeer) establishDataChannel() error {
c.lock.Lock()
defer c.lock.Unlock()
if c.transport != nil {
panic("Unexpected datachannel already exists!")
}
dc, err := c.pc.CreateDataChannel(c.id)
// Triggers "OnNegotiationNeeded" on the PeerConnection, which will prepare
// an SDP offer while other goroutines operating on this struct handle the
// signaling. Eventually fires "OnOpen".
if err != nil {
log.Printf("CreateDataChannel ERROR: %s", err)
return err
}
dc.OnOpen = func() {
c.lock.Lock()
defer c.lock.Unlock()
log.Println("WebRTC: DataChannel.OnOpen")
if nil != c.transport {
panic("WebRTC: transport already exists.")
}
// Flush buffered outgoing SOCKS data if necessary.
if c.buffer.Len() > 0 {
dc.Send(c.buffer.Bytes())
log.Println("Flushed", c.buffer.Len(), "bytes.")
c.buffer.Reset()
}
// Then enable the datachannel.
c.transport = dc
}
dc.OnClose = func() {
c.lock.Lock()
// Future writes will go to the buffer until a new DataChannel is available.
if nil == c.transport {
// Closed locally, as part of a reset.
log.Println("WebRTC: DataChannel.OnClose [locally]")
c.lock.Unlock()
select {
case <-c.closed:
return
case <-time.After(time.Second):
}
// Closed remotely, need to reset everything.
// Disable the DataChannel as a write destination.
log.Println("WebRTC: DataChannel.OnClose [remotely]")
c.transport = nil
c.pc.DeleteDataChannel(dc)
// Unlock before Close'ing, since it calls cleanup and asks for the
// lock to check if the transport needs to be be deleted.
c.lock.Unlock()
c.Close()
}
dc.OnMessage = func(msg []byte) {
if len(msg) <= 0 {
log.Println("0 length message---")
}
c.BytesLogger.AddInbound(len(msg))
n, err := c.writePipe.Write(msg)
if err != nil {
// TODO: Maybe shouldn't actually close.
log.Println("Error writing to SOCKS pipe")
c.writePipe.CloseWithError(err)
}
if n != len(msg) {
log.Println("Error: short write")
panic("short write")
}
c.lastReceive = time.Now()
}
log.Println("WebRTC: DataChannel created.")
return nil
}
func (c *WebRTCPeer) sendOfferToBroker() {
if nil == c.broker {
return
}
offer := c.pc.LocalDescription()
answer, err := c.broker.Negotiate(offer)
if nil != err || nil == answer {
log.Printf("BrokerChannel Error: %s", err)
answer = nil
}
c.answerChannel <- answer
}
// connect does the bulk of the work: gather ICE candidates, send the SDP offer to broker,
// receive an answer from broker, and wait for data channel to open.
//
// `natPolicy` can be nil, in which case we'll always send our actual
// NAT type to the broker.
func (c *WebRTCPeer) connect(
config *webrtc.Configuration,
broker *BrokerChannel,
natPolicy *NATPolicy,
) error {
log.Println(c.id, " connecting...")
// Block until an SDP offer is available, send it to either
// the Broker or signal pipe, then await for the SDP answer.
func (c *WebRTCPeer) exchangeSDP() error {
select {
case <-c.offerChannel:
case err := <-c.errorChannel:
log.Println("Failed to prepare offer", err)
c.Close()
err := c.preparePeerConnection(config, broker.keepLocalAddresses)
localDescription := c.pc.LocalDescription()
c.eventsLogger.OnNewSnowflakeEvent(event.EventOnOfferCreated{
WebRTCLocalDescription: localDescription,
Error: err,
})
if err != nil {
return err
}
// Keep trying the same offer until a valid answer arrives.
var ok bool
var answer *webrtc.SessionDescription = nil
for nil == answer {
go c.sendOfferToBroker()
answer, ok = <-c.answerChannel // Blocks...
if !ok || nil == answer {
log.Printf("Failed to retrieve answer. Retrying in %d seconds", ReconnectTimeout)
<-time.After(time.Second * ReconnectTimeout)
answer = nil
}
actualNatType := broker.GetNATType()
var natTypeToSend string
if natPolicy != nil {
natTypeToSend = natPolicy.NATTypeToSend(actualNatType)
} else {
natTypeToSend = actualNatType
}
if natTypeToSend != actualNatType {
log.Printf(
"Our NAT type is \"%v\", but let's tell the broker it's \"%v\".",
actualNatType,
natTypeToSend,
)
} else {
log.Printf("natTypeToSend: \"%v\" (same as actualNatType)", natTypeToSend)
}
answer, err := broker.Negotiate(localDescription, natTypeToSend)
c.eventsLogger.OnNewSnowflakeEvent(event.EventOnBrokerRendezvous{
WebRTCRemoteDescription: answer,
Error: err,
})
if err != nil {
return err
}
log.Printf("Received Answer.\n")
err := c.pc.SetRemoteDescription(answer)
err = c.pc.SetRemoteDescription(*answer)
if nil != err {
log.Println("WebRTC: Unable to SetRemoteDescription:", err)
return err
}
// Wait for the datachannel to open or time out
select {
case <-c.open:
if natPolicy != nil {
natPolicy.Success(actualNatType, natTypeToSend)
}
case <-time.After(DataChannelTimeout):
c.transport.Close()
err := errors.New("timeout waiting for DataChannel.OnOpen")
if natPolicy != nil {
natPolicy.Failure(actualNatType, natTypeToSend)
}
c.eventsLogger.OnNewSnowflakeEvent(event.EventOnSnowflakeConnectionFailed{Error: err})
return err
}
go c.checkForStaleness(SnowflakeTimeout)
return nil
}
// Close all channels and transports
// preparePeerConnection creates a new WebRTC PeerConnection and returns it
// after non-trickle ICE candidate gathering is complete.
func (c *WebRTCPeer) preparePeerConnection(
config *webrtc.Configuration,
keepLocalAddresses bool,
) error {
s := webrtc.SettingEngine{}
if !keepLocalAddresses {
s.SetIPFilter(func(ip net.IP) (keep bool) {
// `IsLoopback()` and `IsUnspecified` are likely not needed here,
// but let's keep them just in case.
// FYI there is similar code in other files in this project.
keep = !util.IsLocal(ip) && !ip.IsLoopback() && !ip.IsUnspecified()
return
})
s.SetICEMulticastDNSMode(ice.MulticastDNSModeDisabled)
}
s.SetIncludeLoopbackCandidate(keepLocalAddresses)
// Use the SetNet setting https://pkg.go.dev/github.com/pion/webrtc/v3#SettingEngine.SetNet
// to get snowflake working in shadow (where the AF_NETLINK family is not implemented).
// These two lines of code functionally revert a new change in pion by silently ignoring
// when net.Interfaces() fails, rather than throwing an error
var vnet transport.Net
vnet, _ = stdnet.NewNet()
if c.proxy != nil {
if err := proxy.CheckProxyProtocolSupport(c.proxy); err != nil {
return err
}
socksClient := proxy.NewSocks5UDPClient(c.proxy)
vnet = proxy.NewTransportWrapper(&socksClient, vnet)
}
s.SetNet(vnet)
api := webrtc.NewAPI(webrtc.WithSettingEngine(s))
var err error
c.pc, err = api.NewPeerConnection(*config)
if err != nil {
log.Printf("NewPeerConnection ERROR: %s", err)
return err
}
ordered := true
dataChannelOptions := &webrtc.DataChannelInit{
Ordered: &ordered,
}
// We must create the data channel before creating an offer
// https://github.com/pion/webrtc/wiki/Release-WebRTC@v3.0.0#a-data-channel-is-no-longer-implicitly-created-with-a-peerconnection
dc, err := c.pc.CreateDataChannel(c.id, dataChannelOptions)
if err != nil {
log.Printf("CreateDataChannel ERROR: %s", err)
return err
}
dc.OnOpen(func() {
c.eventsLogger.OnNewSnowflakeEvent(event.EventOnSnowflakeConnected{})
log.Println("WebRTC: DataChannel.OnOpen")
close(c.open)
})
dc.OnClose(func() {
log.Println("WebRTC: DataChannel.OnClose")
c.Close()
})
dc.OnError(func(err error) {
c.eventsLogger.OnNewSnowflakeEvent(event.EventOnSnowflakeConnectionFailed{Error: err})
})
dc.OnMessage(func(msg webrtc.DataChannelMessage) {
if len(msg.Data) <= 0 {
log.Println("0 length message---")
}
n, err := c.writePipe.Write(msg.Data)
c.bytesLogger.addInbound(int64(n))
if err != nil {
// TODO: Maybe shouldn't actually close.
log.Println("Error writing to SOCKS pipe")
if inerr := c.writePipe.CloseWithError(err); inerr != nil {
log.Printf("c.writePipe.CloseWithError returned error: %v", inerr)
}
}
c.mu.Lock()
c.lastReceive = time.Now()
c.mu.Unlock()
})
c.transport = dc
c.open = make(chan struct{})
log.Println("WebRTC: DataChannel created")
offer, err := c.pc.CreateOffer(nil)
// TODO: Potentially timeout and retry if ICE isn't working.
if err != nil {
log.Println("Failed to prepare offer", err)
c.pc.Close()
return err
}
log.Println("WebRTC: Created offer")
// Allow candidates to accumulate until ICEGatheringStateComplete.
done := webrtc.GatheringCompletePromise(c.pc)
// Start gathering candidates
err = c.pc.SetLocalDescription(offer)
if err != nil {
log.Println("Failed to apply offer", err)
c.pc.Close()
return err
}
log.Println("WebRTC: Set local description")
<-done // Wait for ICE candidate gathering to complete.
return nil
}
// cleanup closes all channels and transports
func (c *WebRTCPeer) cleanup() {
if nil != c.offerChannel {
close(c.offerChannel)
}
if nil != c.answerChannel {
close(c.answerChannel)
}
if nil != c.errorChannel {
close(c.errorChannel)
}
// Close this side of the SOCKS pipe.
if nil != c.writePipe {
if c.writePipe != nil { // c.writePipe can be nil in tests.
c.writePipe.Close()
c.writePipe = nil
}
c.lock.Lock()
if nil != c.transport {
log.Printf("WebRTC: closing DataChannel")
dataChannel := c.transport
// Setting transport to nil *before* dc Close indicates to OnClose that
// this was locally triggered.
c.transport = nil
// Release the lock before calling DeleteDataChannel (which in turn
// calls Close on the dataChannel), but after nil'ing out the transport,
// since otherwise we'll end up in the onClose handler in a deadlock.
c.lock.Unlock()
if c.pc == nil {
panic("DataChannel w/o PeerConnection, not good.")
}
c.pc.DeleteDataChannel(dataChannel.(*webrtc.DataChannel))
} else {
c.lock.Unlock()
c.transport.Close()
}
if nil != c.pc {
log.Printf("WebRTC: closing PeerConnection")
err := c.pc.Destroy()
err := c.pc.Close()
if nil != err {
log.Printf("Error closing peerconnection...")
}
c.pc = nil
}
}


@ -3,65 +3,161 @@ package main
import (
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net"
"os"
"os/signal"
"path/filepath"
"strconv"
"strings"
"sync"
"syscall"
"time"
"git.torproject.org/pluggable-transports/goptlib.git"
sf "git.torproject.org/pluggable-transports/snowflake.git/client/lib"
"git.torproject.org/pluggable-transports/snowflake.git/common/safelog"
"github.com/keroserene/go-webrtc"
pt "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/goptlib"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil/safelog"
sf "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/client/lib"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/event"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/proxy"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/version"
)
const (
DefaultSnowflakeCapacity = 1
)
type ptEventLogger struct {
}
func NewPTEventLogger() event.SnowflakeEventReceiver {
return &ptEventLogger{}
}
func (p ptEventLogger) OnNewSnowflakeEvent(e event.SnowflakeEvent) {
pt.Log(pt.LogSeverityNotice, e.String())
}
// Exchanges bytes between two ReadWriters.
// (In this case, between a SOCKS connection and a snowflake transport conn)
func copyLoop(socks, sfconn io.ReadWriter) {
done := make(chan struct{}, 2)
go func() {
if _, err := io.Copy(socks, sfconn); err != nil {
log.Printf("copying Snowflake to SOCKS resulted in error: %v", err)
}
done <- struct{}{}
}()
go func() {
if _, err := io.Copy(sfconn, socks); err != nil {
log.Printf("copying SOCKS to Snowflake resulted in error: %v", err)
}
done <- struct{}{}
}()
<-done
log.Println("copy loop ended")
}
// Accept local SOCKS connections and connect to a Snowflake connection
func socksAcceptLoop(ln *pt.SocksListener, baseConfig sf.ClientConfig,
shutdown chan struct{}, wg *sync.WaitGroup) {
defer ln.Close()
log.Println("Started SOCKS listener.")
for {
log.Println("SOCKS listening...")
conn, err := ln.AcceptSocks()
if err != nil {
if err, ok := err.(net.Error); ok && err.Temporary() {
continue
}
log.Printf("SOCKS accept error: %s", err)
break
}
log.Printf("SOCKS accepted: %v", conn.Req)
wg.Add(1)
go func() {
defer wg.Done()
defer conn.Close()
config := baseConfig
// Check to see if our command line options are overridden by SOCKS options
if arg, ok := conn.Req.Args.Get("ampcache"); ok {
config.AmpCacheURL = arg
}
if arg, ok := conn.Req.Args.Get("sqsqueue"); ok {
config.SQSQueueURL = arg
}
if arg, ok := conn.Req.Args.Get("sqscreds"); ok {
config.SQSCredsStr = arg
}
if arg, ok := conn.Req.Args.Get("fronts"); ok {
if arg != "" {
config.FrontDomains = strings.Split(strings.TrimSpace(arg), ",")
}
} else if arg, ok := conn.Req.Args.Get("front"); ok {
config.FrontDomains = strings.Split(strings.TrimSpace(arg), ",")
}
if arg, ok := conn.Req.Args.Get("ice"); ok {
config.ICEAddresses = strings.Split(strings.TrimSpace(arg), ",")
}
if arg, ok := conn.Req.Args.Get("max"); ok {
max, err := strconv.Atoi(arg)
if err != nil {
conn.Reject()
log.Println("Invalid SOCKS arg: max=", arg)
return
}
config.Max = max
}
if arg, ok := conn.Req.Args.Get("url"); ok {
config.BrokerURL = arg
}
if arg, ok := conn.Req.Args.Get("utls-nosni"); ok {
switch strings.ToLower(arg) {
case "true":
fallthrough
case "yes":
config.UTLSRemoveSNI = true
}
}
if arg, ok := conn.Req.Args.Get("utls-imitate"); ok {
config.UTLSClientID = arg
}
if arg, ok := conn.Req.Args.Get("fingerprint"); ok {
config.BridgeFingerprint = arg
}
transport, err := sf.NewSnowflakeClient(config)
if err != nil {
conn.Reject()
log.Println("Failed to start snowflake transport: ", err)
return
}
transport.AddSnowflakeEventListener(NewPTEventLogger())
err = conn.Grant(&net.TCPAddr{IP: net.IPv4zero, Port: 0})
if err != nil {
log.Printf("conn.Grant error: %s", err)
return
}
handler := make(chan struct{})
go func() {
defer close(handler)
sconn, err := transport.Dial()
if err != nil {
log.Printf("dial error: %s", err)
return
}
defer sconn.Close()
// copy between the created Snowflake conn and the SOCKS conn
copyLoop(conn, sconn)
}()
select {
case <-shutdown:
log.Println("Received shutdown signal")
case <-handler:
log.Println("Handler ended")
}
return
}()
}
}
@ -69,23 +165,39 @@ func main() {
iceServersCommas := flag.String("ice", "", "comma-separated list of ICE servers")
brokerURL := flag.String("url", "", "URL of signaling broker")
frontDomain := flag.String("front", "", "front domain")
frontDomainsCommas := flag.String("fronts", "", "comma-separated list of front domains")
ampCacheURL := flag.String("ampcache", "", "URL of AMP cache to use as a proxy for signaling")
sqsQueueURL := flag.String("sqsqueue", "", "URL of SQS Queue to use as a proxy for signaling")
sqsCredsStr := flag.String("sqscreds", "", "credentials to access SQS Queue")
logFilename := flag.String("log", "", "name of log file")
logToStateDir := flag.Bool("log-to-state-dir", false, "resolve the log file relative to tor's pt state dir")
keepLocalAddresses := flag.Bool("keep-local-addresses", false, "keep local LAN address ICE candidates.\nThis is usually pointless because Snowflake proxies don't usually reside on the same local network as the client.")
unsafeLogging := flag.Bool("unsafe-logging", false, "keep IP addresses and other sensitive info in the logs")
max := flag.Int("max", DefaultSnowflakeCapacity,
"capacity for number of multiplexed WebRTC peers")
versionFlag := flag.Bool("version", false, "display version info to stderr and quit")
// Deprecated
oldLogToStateDir := flag.Bool("logToStateDir", false, "use -log-to-state-dir instead")
oldKeepLocalAddresses := flag.Bool("keepLocalAddresses", false, "use -keep-local-addresses instead")
flag.Parse()
if *versionFlag {
fmt.Fprintf(os.Stderr, "snowflake-client %s", version.ConstructResult())
os.Exit(0)
}
log.SetFlags(log.LstdFlags | log.LUTC)
// Don't write to stderr; versions of tor earlier than about 0.3.5.6 do
// not read from the pipe, and eventually we will deadlock because the
// buffer is full.
// https://bugs.torproject.org/26360
// https://bugs.torproject.org/25600#comment:14
var logOutput = io.Discard
if *logFilename != "" {
if *logToStateDir || *oldLogToStateDir {
stateDir, err := pt.MakeStateDir()
if err != nil {
log.Fatal(err)
@ -100,40 +212,37 @@ func main() {
defer logFile.Close()
logOutput = logFile
}
if *unsafeLogging {
log.SetOutput(logOutput)
} else {
// We want to send the log output through our scrubber first
log.SetOutput(&safelog.LogScrubber{Output: logOutput})
}
log.Printf("snowflake-client %s\n", version.GetVersion())
iceAddresses := strings.Split(strings.TrimSpace(*iceServersCommas), ",")
var frontDomains []string
if *frontDomainsCommas != "" {
frontDomains = strings.Split(strings.TrimSpace(*frontDomainsCommas), ",")
}
// Maintain backwards compatibility with legacy command line option
if (len(frontDomains) == 0) && (*frontDomain != "") {
frontDomains = []string{*frontDomain}
}
config := sf.ClientConfig{
BrokerURL: *brokerURL,
AmpCacheURL: *ampCacheURL,
SQSQueueURL: *sqsQueueURL,
SQSCredsStr: *sqsCredsStr,
FrontDomains: frontDomains,
ICEAddresses: iceAddresses,
KeepLocalAddresses: *keepLocalAddresses || *oldKeepLocalAddresses,
Max: *max,
}
// Begin goptlib client process.
ptInfo, err := pt.ClientSetup(nil)
@ -141,10 +250,25 @@ func main() {
log.Fatal(err)
}
if ptInfo.ProxyURL != nil {
if err := proxy.CheckProxyProtocolSupport(ptInfo.ProxyURL); err != nil {
pt.ProxyError("proxy is not supported:" + err.Error())
os.Exit(1)
} else {
config.CommunicationProxy = ptInfo.ProxyURL
client := proxy.NewSocks5UDPClient(config.CommunicationProxy)
conn, err := client.ListenPacket("udp", nil)
if err != nil {
pt.ProxyError("proxy test failure:" + err.Error())
os.Exit(1)
}
conn.Close()
pt.ProxyDone()
}
}
pt.ReportVersion("snowflake-client", version.GetVersion())
listeners := make([]net.Listener, 0)
shutdown := make(chan struct{})
var wg sync.WaitGroup
for _, methodName := range ptInfo.MethodNames {
switch methodName {
case "snowflake":
@ -154,7 +278,8 @@ func main() {
pt.CmethodError(methodName, err.Error())
break
}
log.Printf("Started SOCKS listener at %v.", ln.Addr())
go socksAcceptLoop(ln, config, shutdown, &wg)
pt.Cmethod(methodName, ln.Version(), ln.Addr())
listeners = append(listeners, ln)
default:
@ -163,8 +288,6 @@ func main() {
}
pt.CmethodsDone()
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM)
@ -172,29 +295,23 @@ func main() {
// This environment variable means we should treat EOF on stdin
// just like SIGTERM: https://bugs.torproject.org/15435.
go func() {
if _, err := io.Copy(io.Discard, os.Stdin); err != nil {
log.Printf("calling io.Copy(io.Discard, os.Stdin) returned error: %v", err)
}
log.Printf("synthesizing SIGTERM because of stdin close")
sigChan <- syscall.SIGTERM
}()
}
// Wait for a signal.
<-sigChan
log.Println("stopping snowflake")
// Signal received, shut down.
for _, ln := range listeners {
ln.Close()
}
close(shutdown)
wg.Wait()
log.Println("snowflake is done.")
}


@ -1,10 +1,9 @@
UseBridges 1
DataDirectory datadir
ClientTransportPlugin snowflake exec ./client -log snowflake.log
Bridge snowflake 192.0.2.3:80 2B280B23E1107BB62ABFC40DDCC8824814F80A72 fingerprint=2B280B23E1107BB62ABFC40DDCC8824814F80A72 url=https://1098762253.rsc.cdn77.org/ fronts=www.cdn77.com,www.phpmyadmin.net ice=stun:stun.antisip.com:3478,stun:stun.epygi.com:3478,stun:stun.uls.co.za:3478,stun:stun.voipgate.com:3478,stun:stun.mixvoip.com:3478,stun:stun.nextcloud.com:3478,stun:stun.bethesda.net:3478,stun:stun.nextcloud.com:443 utls-imitate=hellorandomizedalpn
Bridge snowflake 192.0.2.4:80 8838024498816A039FCBBAB14E6F40A0843051FA fingerprint=8838024498816A039FCBBAB14E6F40A0843051FA url=https://1098762253.rsc.cdn77.org/ fronts=www.cdn77.com,www.phpmyadmin.net ice=stun:stun.antisip.com:3478,stun:stun.epygi.com:3478,stun:stun.uls.co.za:3478,stun:stun.voipgate.com:3478,stun:stun.mixvoip.com:3478,stun:stun.nextcloud.com:3478,stun:stun.bethesda.net:3478,stun:stun.nextcloud.com:443 utls-imitate=hellorandomizedalpn
SocksPort auto


@ -1,7 +0,0 @@
UseBridges 1
DataDirectory datadir
ClientTransportPlugin snowflake exec ./client \
-url http://localhost:8080/ \
Bridge snowflake 0.0.3.0:1

client/torrc.localhost Normal file

@ -0,0 +1,6 @@
UseBridges 1
DataDirectory datadir
ClientTransportPlugin snowflake exec ./client -keep-local-addresses
Bridge snowflake 192.0.2.3:1 url=http://localhost:8080/

common/amp/armor_decoder.go Normal file

@ -0,0 +1,136 @@
package amp
import (
"bufio"
"bytes"
"encoding/base64"
"fmt"
"io"
"golang.org/x/net/html"
)
// ErrUnknownVersion is the error returned when the first character inside the
// element encoding (but outside the base64 encoding) is not '0'.
type ErrUnknownVersion byte
func (err ErrUnknownVersion) Error() string {
return fmt.Sprintf("unknown armor version indicator %+q", byte(err))
}
func isASCIIWhitespace(b byte) bool {
switch b {
// https://infra.spec.whatwg.org/#ascii-whitespace
case '\x09', '\x0a', '\x0c', '\x0d', '\x20':
return true
default:
return false
}
}
func splitASCIIWhitespace(data []byte, atEOF bool) (advance int, token []byte, err error) {
var i, j int
// Skip initial whitespace.
for i = 0; i < len(data); i++ {
if !isASCIIWhitespace(data[i]) {
break
}
}
// Look for next whitespace.
for j = i; j < len(data); j++ {
if isASCIIWhitespace(data[j]) {
return j + 1, data[i:j], nil
}
}
// We reached the end of data without finding more whitespace. Only
// consider it a token if we are at EOF.
if atEOF && i < j {
return j, data[i:j], nil
}
// Otherwise, request more data.
return i, nil, nil
}
func decodeToWriter(w io.Writer, r io.Reader) (int64, error) {
tokenizer := html.NewTokenizer(r)
// Set a memory limit on token sizes, otherwise the tokenizer will
// buffer text indefinitely if it is not broken up by other token types.
tokenizer.SetMaxBuf(elementSizeLimit)
active := false
total := int64(0)
for {
tt := tokenizer.Next()
switch tt {
case html.ErrorToken:
err := tokenizer.Err()
if err == io.EOF {
err = nil
}
if err == nil && active {
return total, fmt.Errorf("missing </pre> tag")
}
return total, err
case html.TextToken:
if active {
// Re-join the separate chunks of text and
// feed them to the decoder.
scanner := bufio.NewScanner(bytes.NewReader(tokenizer.Text()))
scanner.Split(splitASCIIWhitespace)
for scanner.Scan() {
n, err := w.Write(scanner.Bytes())
total += int64(n)
if err != nil {
return total, err
}
}
if err := scanner.Err(); err != nil {
return total, err
}
}
case html.StartTagToken:
tn, _ := tokenizer.TagName()
if string(tn) == "pre" {
if active {
// nesting not allowed
return total, fmt.Errorf("unexpected %s", tokenizer.Token())
}
active = true
}
case html.EndTagToken:
tn, _ := tokenizer.TagName()
if string(tn) == "pre" {
if !active {
// stray end tag
return total, fmt.Errorf("unexpected %s", tokenizer.Token())
}
active = false
}
}
}
}
// NewArmorDecoder returns a new AMP armor decoder.
func NewArmorDecoder(r io.Reader) (io.Reader, error) {
pr, pw := io.Pipe()
go func() {
_, err := decodeToWriter(pw, r)
pw.CloseWithError(err)
}()
// The first byte inside the element encoding is a server–client
// protocol version indicator.
var version [1]byte
_, err := pr.Read(version[:])
if err != nil {
pr.CloseWithError(err)
return nil, err
}
switch version[0] {
case '0':
return base64.NewDecoder(base64.StdEncoding, pr), nil
default:
err := ErrUnknownVersion(version[0])
pr.CloseWithError(err)
return nil, err
}
}
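Stripped of the HTML tokenizing, what the decoder recovers from a <pre> element is a '0' version byte followed by standard base64. A minimal standalone sketch, using a payload that also appears in the armor tests later in this diff:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeChunk decodes the text content of a single <pre> element
// (whitespace already stripped): a version byte '0', then base64.
func decodeChunk(payload string) (string, error) {
	if len(payload) == 0 || payload[0] != '0' {
		return "", fmt.Errorf("unknown armor version indicator")
	}
	data, err := base64.StdEncoding.DecodeString(payload[1:])
	return string(data), err
}

func main() {
	// Payload taken from the decoder tests later in this diff.
	s, err := decodeChunk("0aGVsbG8gd29ybGQK")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", s) // "hello world\n"
}
```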

common/amp/armor_encoder.go Normal file

@ -0,0 +1,176 @@
package amp
import (
"encoding/base64"
"io"
)
// https://amp.dev/boilerplate/
// https://amp.dev/documentation/guides-and-tutorials/learn/spec/amp-boilerplate/?format=websites
// https://amp.dev/documentation/guides-and-tutorials/learn/spec/amphtml/?format=websites#the-amp-html-format
const (
boilerplateStart = `<!doctype html>
<html amp>
<head>
<meta charset="utf-8">
<script async src="https://cdn.ampproject.org/v0.js"></script>
<link rel="canonical" href="#">
<meta name="viewport" content="width=device-width">
<style amp-boilerplate>body{-webkit-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-moz-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-ms-animation:-amp-start 8s steps(1,end) 0s 1 normal both;animation:-amp-start 8s steps(1,end) 0s 1 normal both}@-webkit-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-moz-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-ms-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-o-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}</style><noscript><style amp-boilerplate>body{-webkit-animation:none;-moz-animation:none;-ms-animation:none;animation:none}</style></noscript>
</head>
<body>
`
boilerplateEnd = `</body>
</html>`
)
const (
// We restrict the amount of text that may go inside an HTML element, in
// order to limit the amount a decoder may have to buffer.
elementSizeLimit = 32 * 1024
// The payload is conceptually a long base64-encoded string, but we
// break the string into short chunks separated by whitespace. This is
// to protect against modification by AMP caches, which reportedly may
// truncate long words in text:
// https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/25985#note_2592348
bytesPerChunk = 32
// We set the number of chunks per element so as to stay under
// elementSizeLimit. Here, we assume that there is 1 byte of whitespace
// after each chunk (with an additional whitespace byte at the beginning
// of the element).
chunksPerElement = (elementSizeLimit - 1) / (bytesPerChunk + 1)
)
// The AMP armor encoder is a chain of a base64 encoder (base64.NewEncoder) and
// an HTML element encoder (elementEncoder). A top-level encoder (armorEncoder)
// coordinates these two, and handles prepending and appending the AMP
// boilerplate. armorEncoder's Write method writes data into the base64 encoder,
// where it makes its way through the chain.
// NewArmorEncoder returns a new AMP armor encoder. Anything written to the
// returned io.WriteCloser will be encoded and written to w. The caller must
// call Close to flush any partially written data and output the AMP boilerplate
// trailer.
func NewArmorEncoder(w io.Writer) (io.WriteCloser, error) {
// Immediately write the AMP boilerplate header.
_, err := w.Write([]byte(boilerplateStart))
if err != nil {
return nil, err
}
element := &elementEncoder{w: w}
// Write a server–client protocol version indicator, outside the base64
// layer.
_, err = element.Write([]byte{'0'})
if err != nil {
return nil, err
}
base64 := base64.NewEncoder(base64.StdEncoding, element)
return &armorEncoder{
w: w,
element: element,
base64: base64,
}, nil
}
type armorEncoder struct {
base64 io.WriteCloser
element *elementEncoder
w io.Writer
}
func (enc *armorEncoder) Write(p []byte) (int, error) {
// Write into the chain base64 | element | w.
return enc.base64.Write(p)
}
func (enc *armorEncoder) Close() error {
// Close the base64 encoder first, to flush out any buffered data and
// the final padding.
err := enc.base64.Close()
if err != nil {
return err
}
// Next, close the element encoder, to close any open elements.
err = enc.element.Close()
if err != nil {
return err
}
// Finally, output the AMP boilerplate trailer.
_, err = enc.w.Write([]byte(boilerplateEnd))
if err != nil {
return err
}
return nil
}
// elementEncoder arranges written data into pre elements, with the text within
// separated into chunks. It does no HTML encoding, so data written must not
// contain any bytes that are meaningful in HTML.
type elementEncoder struct {
w io.Writer
chunkCounter int
elementCounter int
}
func (enc *elementEncoder) Write(p []byte) (n int, err error) {
total := 0
for len(p) > 0 {
if enc.elementCounter == 0 && enc.chunkCounter == 0 {
_, err := enc.w.Write([]byte("<pre>\n"))
if err != nil {
return total, err
}
}
n := bytesPerChunk - enc.chunkCounter
if n > len(p) {
n = len(p)
}
nn, err := enc.w.Write(p[:n])
if err != nil {
return total, err
}
total += nn
p = p[n:]
enc.chunkCounter += n
if enc.chunkCounter >= bytesPerChunk {
enc.chunkCounter = 0
enc.elementCounter += 1
nn, err := enc.w.Write([]byte("\n"))
if err != nil {
return total, err
}
total += nn
}
if enc.elementCounter >= chunksPerElement {
enc.elementCounter = 0
nn, err := enc.w.Write([]byte("</pre>\n"))
if err != nil {
return total, err
}
total += nn
}
}
return total, nil
}
func (enc *elementEncoder) Close() error {
var err error
if !(enc.elementCounter == 0 && enc.chunkCounter == 0) {
if enc.chunkCounter == 0 {
_, err = enc.w.Write([]byte("</pre>\n"))
} else {
_, err = enc.w.Write([]byte("\n</pre>\n"))
}
}
return err
}
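With these constants, each full <pre> element holds (32*1024 - 1) / (32 + 1) = 992 chunks, i.e. 992 * 32 = 31744 payload bytes, and the chunks plus their separating whitespace stay under elementSizeLimit. A quick standalone check of that arithmetic:

```go
package main

import "fmt"

const (
	elementSizeLimit = 32 * 1024
	bytesPerChunk    = 32
	// One whitespace byte follows each chunk, plus one leading the element.
	chunksPerElement = (elementSizeLimit - 1) / (bytesPerChunk + 1)
)

func main() {
	fmt.Println(chunksPerElement, chunksPerElement*bytesPerChunk) // 992 31744
}
```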

common/amp/armor_test.go Normal file

@ -0,0 +1,226 @@
package amp
import (
"io"
"math/rand"
"strings"
"testing"
)
func armorDecodeToString(src string) (string, error) {
dec, err := NewArmorDecoder(strings.NewReader(src))
if err != nil {
return "", err
}
p, err := io.ReadAll(dec)
return string(p), err
}
func TestArmorDecoder(t *testing.T) {
for _, test := range []struct {
input string
expectedOutput string
expectedErr bool
}{
{`
<pre>
0
</pre>
`,
"",
false,
},
{`
<pre>
0aGVsbG8gd29ybGQK
</pre>
`,
"hello world\n",
false,
},
// bad version indicator
{`
<pre>
1aGVsbG8gd29ybGQK
</pre>
`,
"",
true,
},
// text outside <pre> elements
{`
0aGVsbG8gd29ybGQK
blah blah blah
<pre>
0aGVsbG8gd29ybGQK
</pre>
0aGVsbG8gd29ybGQK
blah blah blah
`,
"hello world\n",
false,
},
{`
<pre>
0QUJDREV
GR0hJSkt
MTU5PUFF
SU1RVVld
</pre>
junk
<pre>
YWVowMTI
zNDU2Nzg
5Cg
=
</pre>
<pre>
=
</pre>
`,
"ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n",
false,
},
// no <pre> elements, hence no version indicator
{`
aGVsbG8gd29ybGQK
blah blah blah
aGVsbG8gd29ybGQK
aGVsbG8gd29ybGQK
blah blah blah
`,
"",
true,
},
// empty <pre> elements, hence no version indicator
{`
aGVsbG8gd29ybGQK
blah blah blah
<pre> </pre>
aGVsbG8gd29ybGQK
aGVsbG8gd29ybGQK<pre></pre>
blah blah blah
`,
"",
true,
},
// other elements inside <pre>
{
"blah <pre>0aGVsb<p>G8gd29</p>ybGQK</pre>",
"hello world\n",
false,
},
// HTML comment
{
"blah <!-- <pre>aGVsbG8gd29ybGQK</pre> -->",
"",
true,
},
// all kinds of ASCII whitespace
{
"blah <pre>\x200\x09aG\x0aV\x0csb\x0dG8\x20gd29ybGQK</pre>",
"hello world\n",
false,
},
// bad padding
{`
<pre>
0QUJDREV
GR0hJSkt
MTU5PUFF
SU1RVVld
</pre>
junk
<pre>
YWVowMTI
zNDU2Nzg
5Cg
=
</pre>
`,
"",
true,
},
/*
// per-chunk base64
// test disabled because Go stdlib handles this incorrectly:
// https://github.com/golang/go/issues/31626
{
"<pre>QQ==</pre><pre>Qg==</pre>",
"",
true,
},
*/
// missing </pre>
{
"blah <pre></pre><pre>0aGVsbG8gd29ybGQK",
"",
true,
},
// nested <pre>
{
"blah <pre>0aGVsb<pre>G8gd29</pre>ybGQK</pre>",
"",
true,
},
} {
output, err := armorDecodeToString(test.input)
if test.expectedErr && err == nil {
t.Errorf("%+q → (%+q, %v), expected error", test.input, output, err)
continue
}
if !test.expectedErr && err != nil {
t.Errorf("%+q → (%+q, %v), expected no error", test.input, output, err)
continue
}
if !test.expectedErr && output != test.expectedOutput {
t.Errorf("%+q → (%+q, %v), expected (%+q, %v)",
test.input, output, err, test.expectedOutput, nil)
continue
}
}
}
func armorRoundTrip(s string) (string, error) {
var encoded strings.Builder
enc, err := NewArmorEncoder(&encoded)
if err != nil {
return "", err
}
_, err = io.Copy(enc, strings.NewReader(s))
if err != nil {
return "", err
}
err = enc.Close()
if err != nil {
return "", err
}
return armorDecodeToString(encoded.String())
}
func TestArmorRoundTrip(t *testing.T) {
lengths := make([]int, 0)
// Test short strings and lengths around elementSizeLimit thresholds.
for i := 0; i < bytesPerChunk*2; i++ {
lengths = append(lengths, i)
}
for i := -10; i < +10; i++ {
lengths = append(lengths, elementSizeLimit+i)
lengths = append(lengths, 2*elementSizeLimit+i)
}
for _, n := range lengths {
buf := make([]byte, n)
rand.Read(buf)
input := string(buf)
output, err := armorRoundTrip(input)
if err != nil {
t.Errorf("length %d → error %v", n, err)
continue
}
if output != input {
t.Errorf("length %d → %+q", n, output)
continue
}
}
}

common/amp/cache.go Normal file

@ -0,0 +1,178 @@
package amp
import (
"crypto/sha256"
"encoding/base32"
"fmt"
"net"
"net/url"
"path"
"strings"
"golang.org/x/net/idna"
)
// domainPrefixBasic does the basic domain prefix conversion. Does not do any
// IDNA mapping, such as https://www.unicode.org/reports/tr46/.
//
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/#basic-algorithm
func domainPrefixBasic(domain string) (string, error) {
// 1. Punycode Decode the publisher domain.
prefix, err := idna.ToUnicode(domain)
if err != nil {
return "", err
}
// 2. Replace any "-" (hyphen) character in the output of step 1 with
// "--" (two hyphens).
prefix = strings.Replace(prefix, "-", "--", -1)
// 3. Replace any "." (dot) character in the output of step 2 with "-"
// (hyphen).
prefix = strings.Replace(prefix, ".", "-", -1)
// 4. If the output of step 3 has a "-" (hyphen) at both positions 3 and
// 4, then to the output of step 3, add a prefix of "0-" and add a
// suffix of "-0".
if len(prefix) >= 4 && prefix[2] == '-' && prefix[3] == '-' {
prefix = "0-" + prefix + "-0"
}
// 5. Punycode Encode the output of step 3.
return idna.ToASCII(prefix)
}
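For all-ASCII domains the two Punycode steps are identity mappings, so steps 2-4 reduce to plain string operations. A simplified standalone sketch (an illustration only, not a substitute for the IDNA-aware function above); the expected outputs match vectors in cache_test.go later in this diff:

```go
package main

import (
	"fmt"
	"strings"
)

// asciiDomainPrefix applies steps 2-4 of the basic algorithm to a
// domain that needs no Punycode mapping.
func asciiDomainPrefix(domain string) string {
	p := strings.ReplaceAll(domain, "-", "--") // step 2: "-" -> "--"
	p = strings.ReplaceAll(p, ".", "-")        // step 3: "." -> "-"
	if len(p) >= 4 && p[2] == '-' && p[3] == '-' { // step 4
		p = "0-" + p + "-0"
	}
	return p
}

func main() {
	fmt.Println(asciiDomainPrefix("example.com"))       // example-com
	fmt.Println(asciiDomainPrefix("foo-example.com"))   // foo--example-com
	fmt.Println(asciiDomainPrefix("en-us.example.com")) // 0-en--us-example-com-0
}
```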
// Lower-case base32 without padding.
var fallbackBase32Encoding = base32.NewEncoding("abcdefghijklmnopqrstuvwxyz234567").WithPadding(base32.NoPadding)
// domainPrefixFallback does the fallback domain prefix conversion. The returned
// base32 domain uses lower-case letters.
//
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/#fallback-algorithm
func domainPrefixFallback(domain string) string {
// The algorithm specification does not say what, exactly, we are to
// take the SHA-256 of. domain is notionally an abstract Unicode
// string, not a byte sequence. While
// https://github.com/ampproject/amp-toolbox/blob/84cb3057e5f6c54d64369ddd285db1cb36237ee8/packages/cache-url/lib/AmpCurlUrlGenerator.js#L62
// says "Take the SHA256 of the punycode view of the domain," in reality
// it hashes the UTF-8 encoding of the domain, without Punycode:
// https://github.com/ampproject/amp-toolbox/blob/84cb3057e5f6c54d64369ddd285db1cb36237ee8/packages/cache-url/lib/AmpCurlUrlGenerator.js#L141
// https://github.com/ampproject/amp-toolbox/blob/84cb3057e5f6c54d64369ddd285db1cb36237ee8/packages/cache-url/lib/browser/Sha256.js#L24
// We do the same here, hashing the raw bytes of domain, presumed to be
// UTF-8.
// 1. Hash the publisher's domain using SHA256.
h := sha256.Sum256([]byte(domain))
// 2. Base32 Escape the output of step 1.
// 3. Remove the last 4 characters from the output of step 2, which are
// always "=" (equals) characters.
return fallbackBase32Encoding.EncodeToString(h[:])
}
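The fallback is reproducible with nothing but the standard library: SHA-256, then unpadded lower-case base32. This standalone sketch yields the "example.com" vector from cache_test.go later in this diff:

```go
package main

import (
	"crypto/sha256"
	"encoding/base32"
	"fmt"
)

// Lower-case base32 without padding, as in the cache package above.
var lowerBase32 = base32.NewEncoding("abcdefghijklmnopqrstuvwxyz234567").WithPadding(base32.NoPadding)

func fallbackPrefix(domain string) string {
	h := sha256.Sum256([]byte(domain))
	return lowerBase32.EncodeToString(h[:])
}

func main() {
	fmt.Println(fallbackPrefix("example.com"))
	// un42n5xov642kxrxrqiyanhcoupgql5lt4wtbkyt2ijflbwodfdq
}
```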
// domainPrefix computes the domain prefix of an AMP cache URL.
//
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/#domain-name-prefix
func domainPrefix(domain string) string {
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/#combined-algorithm
// 1. Run the Basic Algorithm. If the output is a valid DNS label,
// [append the Cache domain suffix and] return. Otherwise continue to
// step 2.
prefix, err := domainPrefixBasic(domain)
// "A domain prefix is not a valid DNS label if it is longer than 63
// characters"
if err == nil && len(prefix) <= 63 {
return prefix
}
// 2. Run the Fallback Algorithm. [Append the Cache domain suffix and]
// return.
return domainPrefixFallback(domain)
}
// CacheURL computes the AMP cache URL for the publisher URL pubURL, using the
// AMP cache at cacheURL. contentType is a string such as "c" or "i" that
// indicates what type of serving the AMP cache is to perform. The Scheme of
// pubURL must be "http" or "https". The Port of pubURL, if any, must match the
// default for the scheme. cacheURL may not have RawQuery, Fragment, or
// RawFragment set, because the resulting URL's query and fragment are taken
// from the publisher URL.
//
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/
func CacheURL(pubURL, cacheURL *url.URL, contentType string) (*url.URL, error) {
// The cache URL subdomain, including the domain prefix corresponding to
// the publisher URL's domain.
resultHost := domainPrefix(pubURL.Hostname()) + "." + cacheURL.Hostname()
if cacheURL.Port() != "" {
resultHost = net.JoinHostPort(resultHost, cacheURL.Port())
}
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/#url-path
// The first part of the path is the cache URL's own path, if any.
pathComponents := []string{cacheURL.EscapedPath()}
// The next path component is the content type. We cannot encode an
// empty content type, because it would result in consecutive path
// separators, which would semantically combine into a single separator.
if contentType == "" {
return nil, fmt.Errorf("invalid content type %+q", contentType)
}
pathComponents = append(pathComponents, url.PathEscape(contentType))
// Then, we add an "s" path component, if the publisher URL scheme is
// "https".
switch pubURL.Scheme {
case "http":
// Do nothing.
case "https":
pathComponents = append(pathComponents, "s")
default:
return nil, fmt.Errorf("invalid scheme %+q in publisher URL", pubURL.Scheme)
}
// The next path component is the publisher URL's host. The AMP cache
// URL format specification is not clear about whether other
// subcomponents of the authority (namely userinfo and port) may appear
// here. We adopt a policy of forbidding userinfo, and requiring that
// the port be the default for the scheme (and then we omit the port
// entirely from the returned URL).
if pubURL.User != nil {
return nil, fmt.Errorf("publisher URL may not contain userinfo")
}
if port := pubURL.Port(); port != "" {
if !((pubURL.Scheme == "http" && port == "80") || (pubURL.Scheme == "https" && port == "443")) {
return nil, fmt.Errorf("publisher URL port %+q is not the default for scheme %+q", port, pubURL.Scheme)
}
}
// As with the content type, we cannot encode an empty host, because
// that would result in an empty path component.
if pubURL.Hostname() == "" {
return nil, fmt.Errorf("invalid host %+q in publisher URL", pubURL.Hostname())
}
pathComponents = append(pathComponents, url.PathEscape(pubURL.Hostname()))
// Finally, we append the remainder of the original escaped path from
// the publisher URL.
pathComponents = append(pathComponents, pubURL.EscapedPath())
resultRawPath := path.Join(pathComponents...)
resultPath, err := url.PathUnescape(resultRawPath)
if err != nil {
return nil, err
}
// The query and fragment of the returned URL always come from pubURL.
// Any query or fragment of cacheURL would be ignored. Return an error
// if either is set.
if cacheURL.RawQuery != "" {
return nil, fmt.Errorf("cache URL may not contain a query")
}
if cacheURL.Fragment != "" {
return nil, fmt.Errorf("cache URL may not contain a fragment")
}
return &url.URL{
Scheme: cacheURL.Scheme,
User: cacheURL.User,
Host: resultHost,
Path: resultPath,
RawPath: resultRawPath,
RawQuery: pubURL.RawQuery,
Fragment: pubURL.Fragment,
}, nil
}
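Assembling one concrete result by hand shows the shape CacheURL produces. In this sketch the publisher URL https://example.com/amp/page?x=1 and the cdn.ampproject.org cache are illustrative assumptions, and "example-com" is the domain prefix for example.com:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// cacheURLFor joins the pieces in the order described above: cache
// path (empty here), content type "c", "s" because the publisher
// scheme is https, the publisher host, then the publisher path; the
// query is carried over from the publisher URL.
func cacheURLFor(prefix, pubHost, pubPath, query string) string {
	u := &url.URL{
		Scheme:   "https",
		Host:     prefix + ".cdn.ampproject.org",
		Path:     path.Join("/", "c", "s", pubHost, pubPath),
		RawQuery: query,
	}
	return u.String()
}

func main() {
	fmt.Println(cacheURLFor("example-com", "example.com", "/amp/page", "x=1"))
	// https://example-com.cdn.ampproject.org/c/s/example.com/amp/page?x=1
}
```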

common/amp/cache_test.go Normal file

@ -0,0 +1,320 @@
package amp
import (
"bytes"
"net/url"
"testing"
"golang.org/x/net/idna"
)
func TestDomainPrefixBasic(t *testing.T) {
// Tests expecting no error.
for _, test := range []struct {
domain, expected string
}{
{"", ""},
{"xn--", ""},
{"...", "---"},
// Should not apply mappings such as case folding and
// normalization.
{"b\u00fccher.de", "xn--bcher-de-65a"},
{"B\u00fccher.de", "xn--Bcher-de-65a"},
{"bu\u0308cher.de", "xn--bucher-de-hkf"},
// Check some that differ between IDNA 2003 and IDNA 2008.
// https://unicode.org/reports/tr46/#Deviations
// https://util.unicode.org/UnicodeJsps/idna.jsp
{"faß.de", "xn--fa-de-mqa"},
{"βόλοσ.com", "xn---com-4ld8c2a6a8e"},
// Lengths of 63 and 64. 64 is too long for a DNS label, but
// domainPrefixBasic is not expected to check for that.
{"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"},
{"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"},
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/#basic-algorithm
{"example.com", "example-com"},
{"foo.example.com", "foo-example-com"},
{"foo-example.com", "foo--example-com"},
{"xn--57hw060o.com", "xn---com-p33b41770a"},
{"\u26a1\U0001f60a.com", "xn---com-p33b41770a"},
{"en-us.example.com", "0-en--us-example-com-0"},
} {
output, err := domainPrefixBasic(test.domain)
if err != nil || output != test.expected {
t.Errorf("%+q → (%+q, %v), expected (%+q, %v)",
test.domain, output, err, test.expected, nil)
}
}
// Tests expecting an error.
for _, domain := range []string{
"xn---",
} {
output, err := domainPrefixBasic(domain)
if err == nil || output != "" {
t.Errorf("%+q → (%+q, %v), expected (%+q, non-nil)",
domain, output, err, "")
}
}
}
func TestDomainPrefixFallback(t *testing.T) {
for _, test := range []struct {
domain, expected string
}{
{
"",
"4oymiquy7qobjgx36tejs35zeqt24qpemsnzgtfeswmrw6csxbkq",
},
{
"example.com",
"un42n5xov642kxrxrqiyanhcoupgql5lt4wtbkyt2ijflbwodfdq",
},
// These checked against the output of
// https://github.com/ampproject/amp-toolbox/tree/84cb3057e5f6c54d64369ddd285db1cb36237ee8/packages/cache-url,
// using the widget at
// https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/amp-cache-urls/#url-format.
{
"000000000000000000000000000000000000000000000000000000000000.com",
"stejanx4hsijaoj4secyecy4nvqodk56kw72whwcmvdbtucibf5a",
},
{
"00000000000000000000000000000000000000000000000000000000000a.com",
"jdcvbsorpnc3hcjrhst56nfm6ymdpovlawdbm2efyxpvlt4cpbya",
},
{
"00000000000000000000000000000000000000000000000000000000000\u03bb.com",
"qhzqeumjkfpcpuic3vqruyjswcr7y7gcm3crqyhhywvn3xrhchfa",
},
} {
output := domainPrefixFallback(test.domain)
if output != test.expected {
t.Errorf("%+q → %+q, expected %+q",
test.domain, output, test.expected)
}
}
}
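The fallback vectors above are consistent with hashing the domain with SHA-256 and rendering the digest as unpadded lowercase base32. This standalone sketch is an assumption about domainPrefixFallback's construction (not taken from the package itself), checked against the test vectors:

```go
package main

import (
	"crypto/sha256"
	"encoding/base32"
	"fmt"
	"strings"
)

// fallbackPrefix is a guess at the fallback construction: SHA-256 of the
// domain, base32-encoded without padding, lowercased.
func fallbackPrefix(domain string) string {
	sum := sha256.Sum256([]byte(domain))
	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
	return strings.ToLower(enc.EncodeToString(sum[:]))
}

func main() {
	// Matches the empty-domain vector in TestDomainPrefixFallback.
	fmt.Println(fallbackPrefix(""))
	// 4oymiquy7qobjgx36tejs35zeqt24qpemsnzgtfeswmrw6csxbkq
}
```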
// Checks that domainPrefix chooses domainPrefixBasic or domainPrefixFallback as
// appropriate; i.e., always returns a string that is a valid DNS label and is
// IDNA-decodable.
func TestDomainPrefix(t *testing.T) {
// A validating IDNA profile, which checks label length and that the
// label contains only certain ASCII characters. It does not do the
// ValidateLabels check, because that depends on the input having
// certain properties.
profile := idna.New(
idna.VerifyDNSLength(true),
idna.StrictDomainName(true),
)
for _, domain := range []string{
"example.com",
"\u0314example.com",
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", // 63 bytes
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", // 64 bytes
"xn--57hw060o.com",
"a b c",
} {
output := domainPrefix(domain)
if bytes.IndexByte([]byte(output), '.') != -1 {
t.Errorf("%+q → %+q contains a dot", domain, output)
}
_, err := profile.ToUnicode(output)
if err != nil {
t.Errorf("%+q → error %v", domain, err)
}
}
}
func mustParseURL(rawurl string) *url.URL {
u, err := url.Parse(rawurl)
if err != nil {
panic(err)
}
return u
}
func TestCacheURL(t *testing.T) {
// Tests expecting no error.
for _, test := range []struct {
pub string
cache string
contentType string
expected string
}{
// With or without trailing slash on pubURL.
{
"http://example.com/",
"https://amp.cache/",
"c",
"https://example-com.amp.cache/c/example.com",
},
{
"http://example.com",
"https://amp.cache/",
"c",
"https://example-com.amp.cache/c/example.com",
},
// https pubURL.
{
"https://example.com/",
"https://amp.cache/",
"c",
"https://example-com.amp.cache/c/s/example.com",
},
// The content type should be escaped if necessary.
{
"http://example.com/",
"https://amp.cache/",
"/",
"https://example-com.amp.cache/%2F/example.com",
},
// Retain pubURL path, query, and fragment, including escaping.
{
"http://example.com/my%2Fpath/index.html?a=1#fragment",
"https://amp.cache/",
"c",
"https://example-com.amp.cache/c/example.com/my%2Fpath/index.html?a=1#fragment",
},
// Retain scheme, userinfo, port, and path of cacheURL, escaping
// whatever is necessary.
{
"http://example.com",
"http://cache%2Fuser:cache%40pass@amp.cache:123/with/../../path/..%2f../",
"c",
"http://cache%2Fuser:cache%40pass@example-com.amp.cache:123/path/..%2f../c/example.com",
},
// Port numbers in pubURL are allowed, if they're the default
// for scheme.
{
"http://example.com:80/",
"https://amp.cache/",
"c",
"https://example-com.amp.cache/c/example.com",
},
{
"https://example.com:443/",
"https://amp.cache/",
"c",
"https://example-com.amp.cache/c/s/example.com",
},
// "?" at the end of cacheURL is okay, as long as the query is
// empty.
{
"http://example.com/",
"https://amp.cache/?",
"c",
"https://example-com.amp.cache/c/example.com",
},
// https://developers.google.com/amp/cache/overview#example-requesting-document-using-tls
{
"https://example.com/amp_document.html",
"https://cdn.ampproject.org/",
"c",
"https://example-com.cdn.ampproject.org/c/s/example.com/amp_document.html",
},
// https://developers.google.com/amp/cache/overview#example-requesting-image-using-plain-http
{
"http://example.com/logo.png",
"https://cdn.ampproject.org/",
"i",
"https://example-com.cdn.ampproject.org/i/example.com/logo.png",
},
// https://developers.google.com/amp/cache/overview#query-parameter-example
{
"https://example.com/g?value=Hello%20World",
"https://cdn.ampproject.org/",
"c",
"https://example-com.cdn.ampproject.org/c/s/example.com/g?value=Hello%20World",
},
} {
pubURL := mustParseURL(test.pub)
cacheURL := mustParseURL(test.cache)
outputURL, err := CacheURL(pubURL, cacheURL, test.contentType)
if err != nil {
t.Errorf("%+q %+q %+q → error %v",
test.pub, test.cache, test.contentType, err)
continue
}
if outputURL.String() != test.expected {
t.Errorf("%+q %+q %+q → %+q, expected %+q",
test.pub, test.cache, test.contentType, outputURL, test.expected)
continue
}
}
// Tests expecting an error.
for _, test := range []struct {
pub string
cache string
contentType string
}{
// Empty content type.
{
"http://example.com/",
"https://amp.cache/",
"",
},
// Empty host.
{
"http:///index.html",
"https://amp.cache/",
"c",
},
// Empty scheme.
{
"//example.com/",
"https://amp.cache/",
"c",
},
// Unrecognized scheme.
{
"ftp://example.com/",
"https://amp.cache/",
"c",
},
// Wrong port number for scheme.
{
"http://example.com:443/",
"https://amp.cache/",
"c",
},
// userinfo in pubURL.
{
"http://user@example.com/",
"https://amp.cache/",
"c",
},
{
"http://user:pass@example.com/",
"https://amp.cache/",
"c",
},
// cacheURL may not contain a query.
{
"http://example.com/",
"https://amp.cache/?a=1",
"c",
},
// cacheURL may not contain a fragment.
{
"http://example.com/",
"https://amp.cache/#fragment",
"c",
},
} {
pubURL := mustParseURL(test.pub)
cacheURL := mustParseURL(test.cache)
outputURL, err := CacheURL(pubURL, cacheURL, test.contentType)
if err == nil {
t.Errorf("%+q %+q %+q → %+q, expected error",
test.pub, test.cache, test.contentType, outputURL)
continue
}
}
}

common/amp/doc.go Normal file
@@ -0,0 +1,91 @@
/*
Package amp provides functions for working with the AMP (Accelerated Mobile
Pages) subset of HTML, and conveying binary data through an AMP cache.
# AMP cache
The CacheURL function takes a plain URL and converts it to be accessed through a
given AMP cache.
The EncodePath and DecodePath functions provide a way to encode data into the
suffix of a URL path. AMP caches do not support HTTP POST, but encoding data
into a URL path with GET is an alternative means of sending data to the server.
The format of an encoded path is:
0<0 or more bytes, including slash>/<base64 of data>
That is:
* "0", a format version number, which controls the interpretation of the rest of
the path. Only the first byte matters as a version indicator (not the whole
first path component).
* Any number of slash or non-slash bytes. These may be used as padding or to
prevent cache collisions in the AMP cache.
* A final slash.
* base64 encoding of the data, using the URL-safe alphabet (which does not
include slash).
For example, an encoding of the string "This is path-encoded data." is the
following. The "lgWHcwhXFjUm" following the format version number is random
padding that will be ignored on decoding.
0lgWHcwhXFjUm/VGhpcyBpcyBwYXRoLWVuY29kZWQgZGF0YS4
It is the caller's responsibility to add or remove any directory path prefix
before calling EncodePath or DecodePath.
# AMP armor
AMP armor is a data encoding scheme that satisfies the requirements of the
AMP (Accelerated Mobile Pages) subset of HTML, and survives modification by an
AMP cache. For the requirements of AMP HTML, see
https://amp.dev/documentation/guides-and-tutorials/learn/spec/amphtml/.
For modifications that may be made by an AMP cache, see
https://github.com/ampproject/amphtml/blob/main/docs/spec/amp-cache-modifications.md.
The encoding is based on ones created by Ivan Markin. See codec/amp/ in
https://github.com/nogoegst/amper and discussion at
https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/25985.
The encoding algorithm works as follows. Base64-encode the input. Prepend the
base64 with the byte '0'; this is a protocol version indicator that the decoder
can use to determine how to interpret the bytes that follow. Split the base64
into fixed-size chunks separated by whitespace. Take up to 1024 chunks at a
time, and wrap them in a pre element. Then, situate the markup so far within the
body of the AMP HTML boilerplate. The decoding algorithm is to scan the HTML for
pre elements, split their text contents on whitespace and concatenate, then
base64 decode. The base64 encoding uses the standard alphabet, with normal "="
padding (https://tools.ietf.org/html/rfc4648#section-4).
The reason for splitting the base64 into chunks is that AMP caches reportedly
truncate long strings that are not broken by whitespace:
https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/25985#note_2592348.
The characters that may separate the chunks are the ASCII whitespace characters
(https://infra.spec.whatwg.org/#ascii-whitespace) "\x09", "\x0a", "\x0c",
"\x0d", and "\x20". The reason for separating the chunks into pre elements is to
limit the amount of text a decoder may have to buffer while parsing the HTML.
Each pre element may contain at most 64 KB of text. pre elements may not be
nested.
# Example
The following is the result of encoding the string
"This was encoded with AMP armor.":
<!doctype html>
<html amp>
<head>
<meta charset="utf-8">
<script async src="https://cdn.ampproject.org/v0.js"></script>
<link rel="canonical" href="#">
<meta name="viewport" content="width=device-width">
<style amp-boilerplate>body{-webkit-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-moz-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-ms-animation:-amp-start 8s steps(1,end) 0s 1 normal both;animation:-amp-start 8s steps(1,end) 0s 1 normal both}@-webkit-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-moz-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-ms-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-o-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}</style><noscript><style amp-boilerplate>body{-webkit-animation:none;-moz-animation:none;-ms-animation:none;animation:none}</style></noscript>
</head>
<body>
<pre>
0VGhpcyB3YXMgZW5jb2RlZCB3aXRoIEF
NUCBhcm1vci4=
</pre>
</body>
</html>
*/
package amp
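The example above can be checked mechanically: joining the whitespace-separated chunks inside the pre element, dropping the leading '0' version indicator, and base64-decoding recovers the original string. A standalone sketch (the helper name is hypothetical; it is not the package's decoder):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// decodePre joins the whitespace-separated chunks of a pre element's text,
// strips the leading '0' version indicator, and base64-decodes the rest
// using the standard alphabet with "=" padding.
func decodePre(pre string) (string, error) {
	b64 := strings.Join(strings.Fields(pre), "")
	data, err := base64.StdEncoding.DecodeString(b64[1:])
	return string(data), err
}

func main() {
	// The pre element contents from the example above.
	s, err := decodePre("0VGhpcyB3YXMgZW5jb2RlZCB3aXRoIEF\nNUCBhcm1vci4=")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // This was encoded with AMP armor.
}
```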

common/amp/path.go Normal file
@@ -0,0 +1,44 @@
package amp
import (
"crypto/rand"
"encoding/base64"
"fmt"
"strings"
)
// EncodePath encodes data in a way that is suitable for the suffix of an AMP
// cache URL.
func EncodePath(data []byte) string {
var cacheBreaker [9]byte
_, err := rand.Read(cacheBreaker[:])
if err != nil {
panic(err)
}
b64 := base64.RawURLEncoding.EncodeToString
return "0" + b64(cacheBreaker[:]) + "/" + b64(data)
}
// DecodePath decodes data from a path suffix as encoded by EncodePath. The path
// must have already been trimmed of any directory prefix (as might be present
// in, e.g., an HTTP request). That is, the first character of path should be
// the "0" message format indicator.
func DecodePath(path string) ([]byte, error) {
if len(path) < 1 {
return nil, fmt.Errorf("missing format indicator")
}
version := path[0]
rest := path[1:]
switch version {
case '0':
// Ignore everything else up to and including the final slash
// (there must be at least one slash).
i := strings.LastIndexByte(rest, '/')
if i == -1 {
return nil, fmt.Errorf("missing data")
}
return base64.RawURLEncoding.DecodeString(rest[i+1:])
default:
return nil, fmt.Errorf("unknown format indicator %q", version)
}
}
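To make the path format concrete, this standalone snippet mirrors DecodePath's version-0 branch on the encoded example from the package documentation (it is a sketch, not the package function):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// decodeVersion0 mirrors DecodePath's version-0 branch: everything up to
// and including the last slash is ignored padding; the remainder is
// URL-safe, unpadded base64 of the data.
func decodeVersion0(path string) ([]byte, error) {
	rest := path[1:] // strip the "0" format indicator
	i := strings.LastIndexByte(rest, '/')
	return base64.RawURLEncoding.DecodeString(rest[i+1:])
}

func main() {
	// The encoded example from the package documentation; "lgWHcwhXFjUm"
	// is random cache-breaking padding that is discarded.
	data, err := decodeVersion0("0lgWHcwhXFjUm/VGhpcyBpcyBwYXRoLWVuY29kZWQgZGF0YS4")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", data) // This is path-encoded data.
}
```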

common/amp/path_test.go Normal file
@@ -0,0 +1,54 @@
package amp
import (
"testing"
)
func TestDecodePath(t *testing.T) {
for _, test := range []struct {
path string
expectedData string
expectedErrStr string
}{
{"", "", "missing format indicator"},
{"0", "", "missing data"},
{"0foobar", "", "missing data"},
{"/0/YWJj", "", "unknown format indicator '/'"},
{"0/", "", ""},
{"0foobar/", "", ""},
{"0/YWJj", "abc", ""},
{"0///YWJj", "abc", ""},
{"0foobar/YWJj", "abc", ""},
{"0/foobar/YWJj", "abc", ""},
} {
data, err := DecodePath(test.path)
if test.expectedErrStr != "" {
if err == nil || err.Error() != test.expectedErrStr {
t.Errorf("%+q expected error %+q, got %+q",
test.path, test.expectedErrStr, err)
}
} else if err != nil {
t.Errorf("%+q expected no error, got %+q", test.path, err)
} else if string(data) != test.expectedData {
t.Errorf("%+q expected data %+q, got %+q",
test.path, test.expectedData, data)
}
}
}
func TestPathRoundTrip(t *testing.T) {
for _, data := range []string{
"",
"\x00",
"/",
"hello world",
} {
decoded, err := DecodePath(EncodePath([]byte(data)))
if err != nil {
t.Errorf("%+q roundtripped with error %v", data, err)
} else if string(decoded) != data {
t.Errorf("%+q roundtripped to %+q", data, decoded)
}
}
}

@@ -0,0 +1,30 @@
package bridgefingerprint
import (
"encoding/hex"
"errors"
)
type Fingerprint string
var ErrBridgeFingerprintInvalid = errors.New("bridge fingerprint invalid")
func FingerprintFromBytes(bytes []byte) (Fingerprint, error) {
n := len(bytes)
if n != 20 && n != 32 {
return Fingerprint(""), ErrBridgeFingerprintInvalid
}
return Fingerprint(bytes), nil
}
func FingerprintFromHexString(hexString string) (Fingerprint, error) {
decoded, err := hex.DecodeString(hexString)
if err != nil {
return "", err
}
return FingerprintFromBytes(decoded)
}
func (f Fingerprint) ToBytes() []byte {
return []byte(f)
}

common/certs/certs.go Normal file
@@ -0,0 +1,54 @@
package certs
import (
"crypto/x509"
"log"
)
// https://crt.sh/?id=9314791
const LetsEncryptRootCert = `-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
-----END CERTIFICATE-----`
// GetRootCAs is a workaround for older versions of Android that do not trust
// Let's Encrypt's ISRG Root X1. This manually adds the ISRG root to the device's
// existing cert pool.
func GetRootCAs() *x509.CertPool {
rootCerts, err := x509.SystemCertPool()
if err != nil {
rootCerts = x509.NewCertPool()
}
if ok := rootCerts.AppendCertsFromPEM([]byte(LetsEncryptRootCert)); !ok {
log.Println("Error appending Let's Encrypt root certificate to cert pool")
return nil
}
return rootCerts
}

@@ -0,0 +1,10 @@
package constants
const (
// If the broker does not receive the proxy answer within this many
// seconds of receiving the client offer, it responds to the client with
// an error.
//
// This is calibrated to match the timeout of the CDNs we use for
// rendezvous.
BrokerClientTimeout = 5
)

@@ -0,0 +1,209 @@
// Package encapsulation implements a way of encoding variable-size chunks of
// data and padding into a byte stream.
//
// Each chunk of data or padding starts with a variable-size length prefix. One
// bit ("d") in the first byte of the prefix indicates whether the chunk
// represents data or padding (1=data, 0=padding). Another bit ("c" for
// "continuation") indicates whether there are more bytes in the length
// prefix. The remaining 6 bits ("x") encode part of the length value.
//
// dcxxxxxx
//
// If the continuation bit is set, then the next byte is also part of the length
// prefix. It lacks the "d" bit, has its own "c" bit, and 7 value-carrying bits
// ("y").
//
// cyyyyyyy
//
// The length is decoded by concatenating the value-carrying bits, from left to
// right, of all bytes, up to and including the first byte whose
// "c" bit is 0. Although in principle this encoding would allow for length
// prefixes of any size, length prefixes are arbitrarily limited to 3 bytes and
// any attempt to read or write a longer one is an error. These are therefore
// the only valid formats:
//
// 00xxxxxx xxxxxx₂ bytes of padding
// 10xxxxxx xxxxxx₂ bytes of data
// 01xxxxxx 0yyyyyyy xxxxxxyyyyyyy₂ bytes of padding
// 11xxxxxx 0yyyyyyy xxxxxxyyyyyyy₂ bytes of data
// 01xxxxxx 1yyyyyyy 0zzzzzzz xxxxxxyyyyyyyzzzzzzz₂ bytes of padding
// 11xxxxxx 1yyyyyyy 0zzzzzzz xxxxxxyyyyyyyzzzzzzz₂ bytes of data
//
// The maximum encodable length is 11111111111111111111₂ = 0xfffff = 1048575.
// There is no requirement to use a length prefix of minimum size; i.e. 00000100
// and 01000000 00000100 are both valid encodings of the value 4.
//
// After the length prefix follow that many bytes of padding or data. There are
// no restrictions on the value of bytes comprising padding.
//
// The idea for this encapsulation is sketched here:
// https://github.com/net4people/bbs/issues/9#issuecomment-524095186
package encapsulation
import (
"errors"
"io"
)
// ErrTooLong is the error returned when an encoded length prefix is longer than
// 3 bytes, or when WriteData receives an input whose length is too large to
// encode in a 3-byte length prefix.
var ErrTooLong = errors.New("length prefix is too long")
// ReadData reads the next available data chunk, skipping over any padding chunks that
// may come first, and copies the data into p. If p is shorter than the length
// of the data chunk, only the first len(p) bytes are copied into p, and the
// error return is io.ErrShortBuffer. The returned error value is nil if and
// only if a data chunk was present and was read in its entirety. The returned
// error is io.EOF only if r ended before the first byte of a length prefix. If
// r ended in the middle of a length prefix or data/padding, the returned error
// is io.ErrUnexpectedEOF.
func ReadData(r io.Reader, p []byte) (int, error) {
for {
var b [1]byte
_, err := r.Read(b[:])
if err != nil {
// This is the only place we may return a real io.EOF.
return 0, err
}
isData := (b[0] & 0x80) != 0
moreLength := (b[0] & 0x40) != 0
n := int(b[0] & 0x3f)
for i := 0; moreLength; i++ {
if i >= 2 {
return 0, ErrTooLong
}
_, err := r.Read(b[:])
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
if err != nil {
return 0, err
}
moreLength = (b[0] & 0x80) != 0
n = (n << 7) | int(b[0]&0x7f)
}
if isData {
if len(p) > n {
p = p[:n]
}
numData, err := io.ReadFull(r, p)
if err == nil && numData < n {
// If the caller's buffer was too short, discard
// the rest of the data and return
// io.ErrShortBuffer.
_, err = io.CopyN(io.Discard, r, int64(n-numData))
if err == nil {
err = io.ErrShortBuffer
}
}
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
return numData, err
} else if n > 0 {
_, err := io.CopyN(io.Discard, r, int64(n))
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
if err != nil {
return 0, err
}
}
}
}
// dataPrefixForLength returns a length prefix for the given length, with the
// "d" bit set to 1.
func dataPrefixForLength(n int) ([]byte, error) {
switch {
case (n>>0)&0x3f == (n >> 0):
return []byte{0x80 | byte((n>>0)&0x3f)}, nil
case (n>>7)&0x3f == (n >> 7):
return []byte{0xc0 | byte((n>>7)&0x3f), byte((n >> 0) & 0x7f)}, nil
case (n>>14)&0x3f == (n >> 14):
return []byte{0xc0 | byte((n>>14)&0x3f), 0x80 | byte((n>>7)&0x7f), byte((n >> 0) & 0x7f)}, nil
default:
return nil, ErrTooLong
}
}
// WriteData encodes a data chunk into w. It returns the total number of bytes
// written; i.e., including the length prefix. The error is ErrTooLong if the
// length of data cannot fit into a length prefix.
func WriteData(w io.Writer, data []byte) (int, error) {
prefix, err := dataPrefixForLength(len(data))
if err != nil {
return 0, err
}
total := 0
n, err := w.Write(prefix)
total += n
if err != nil {
return total, err
}
n, err = w.Write(data)
total += n
return total, err
}
var paddingBuffer [1024]byte
// WritePadding encodes padding chunks, whose total size (including their own
// length prefixes) is n. Returns the total number of bytes written to w, which
// will be exactly n unless there was an error. The error cannot be ErrTooLong
// because this function will write multiple padding chunks if necessary to
// reach the requested size. Panics if n is negative.
func WritePadding(w io.Writer, n int) (int, error) {
if n < 0 {
panic("negative length")
}
total := 0
for n > 0 {
p := len(paddingBuffer)
if p > n {
p = n
}
n -= p
var prefix []byte
switch {
case ((p-1)>>0)&0x3f == ((p - 1) >> 0):
p = p - 1
prefix = []byte{byte((p >> 0) & 0x3f)}
case ((p-2)>>7)&0x3f == ((p - 2) >> 7):
p = p - 2
prefix = []byte{0x40 | byte((p>>7)&0x3f), byte((p >> 0) & 0x7f)}
case ((p-3)>>14)&0x3f == ((p - 3) >> 14):
p = p - 3
prefix = []byte{0x40 | byte((p>>14)&0x3f), 0x80 | byte((p>>7)&0x3f), byte((p >> 0) & 0x7f)}
}
nn, err := w.Write(prefix)
total += nn
if err != nil {
return total, err
}
nn, err = w.Write(paddingBuffer[:p])
total += nn
if err != nil {
return total, err
}
}
return total, nil
}
// MaxDataForSize returns the length of the longest slice that can be passed to
// WriteData, whose total encoded size (including length prefix) is no larger
// than n. Call this to find out if a chunk of data will fit into a length
// budget. Panics if n == 0.
func MaxDataForSize(n int) int {
if n == 0 {
panic("zero length")
}
prefix, err := dataPrefixForLength(n)
if err == ErrTooLong {
return (1 << (6 + 7 + 7)) - 1 - 3
} else if err != nil {
panic(err)
}
return n - len(prefix)
}
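The length-prefix bit layout documented at the top of this file can be demonstrated with a standalone decoder sketch. Unlike ReadData, this sketch does not enforce the 3-byte limit or read any payload; it only parses the prefix:

```go
package main

import "fmt"

// decodePrefix parses a length prefix: the first byte carries the
// data/padding bit ("d"), a continuation bit ("c"), and 6 value bits;
// each continuation byte carries its own continuation bit and 7 value
// bits, concatenated left to right.
func decodePrefix(b []byte) (isData bool, n int) {
	isData = b[0]&0x80 != 0
	more := b[0]&0x40 != 0
	n = int(b[0] & 0x3f)
	for i := 1; more; i++ {
		more = b[i]&0x80 != 0
		n = n<<7 | int(b[i]&0x7f)
	}
	return isData, n
}

func main() {
	// Both of these encode a 4-byte padding chunk, matching the
	// "00000100" and "01000000 00000100" example in the package comment.
	_, n1 := decodePrefix([]byte{0x04})
	_, n2 := decodePrefix([]byte{0x40, 0x04})
	fmt.Println(n1, n2) // 4 4
	// 0x82 is a 2-byte data chunk: d=1, c=0, value 2.
	isData, n3 := decodePrefix([]byte{0x82})
	fmt.Println(isData, n3) // true 2
}
```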

@@ -0,0 +1,408 @@
package encapsulation
import (
"bytes"
"io"
"math/rand"
"testing"
)
// Return a byte slice with non-trivial contents.
func pseudorandomBuffer(n int) []byte {
source := rand.NewSource(0)
p := make([]byte, n)
for i := 0; i < len(p); i++ {
p[i] = byte(source.Int63() & 0xff)
}
return p
}
func mustWriteData(w io.Writer, p []byte) int {
n, err := WriteData(w, p)
if err != nil {
panic(err)
}
return n
}
func mustWritePadding(w io.Writer, n int) int {
n, err := WritePadding(w, n)
if err != nil {
panic(err)
}
return n
}
// Test that ReadData(WriteData()) recovers the original data.
func TestRoundtrip(t *testing.T) {
// Test above and below interesting thresholds.
for _, i := range []int{
0x00, 0x01,
0x3e, 0x3f, 0x40, 0x41,
0xfe, 0xff, 0x100, 0x101,
0x1ffe, 0x1fff, 0x2000, 0x2001,
0xfffe, 0xffff, 0x10000, 0x10001,
0xffffe, 0xfffff,
} {
original := pseudorandomBuffer(i)
var enc bytes.Buffer
n, err := WriteData(&enc, original)
if err != nil {
t.Fatalf("size %d, WriteData returned error %v", i, err)
}
if enc.Len() != n {
t.Fatalf("size %d, returned length was %d, written length was %d",
i, n, enc.Len())
}
inverse := make([]byte, i)
n, err = ReadData(&enc, inverse)
if err != nil {
t.Fatalf("size %d, ReadData returned error %v", i, err)
}
if !bytes.Equal(inverse[:n], original) {
t.Fatalf("size %d, got <%x>, expected <%x>", i, inverse[:n], original)
}
}
}
// Test that WritePadding writes exactly as much as requested.
func TestPaddingLength(t *testing.T) {
// Test above and below interesting thresholds. WritePadding also gets
// values above 0xfffff, the maximum value of a single length prefix.
for _, i := range []int{
0x00, 0x01,
0x3f, 0x40, 0x41, 0x42,
0xff, 0x100, 0x101, 0x102,
0x2000, 0x2001, 0x2002, 0x2003,
0x10000, 0x10001, 0x10002, 0x10003,
0x100001, 0x100002, 0x100003, 0x100004,
} {
var enc bytes.Buffer
n, err := WritePadding(&enc, i)
if err != nil {
t.Fatalf("size %d, WritePadding returned error %v", i, err)
}
if n != i {
t.Fatalf("requested %d bytes, returned %d", i, n)
}
if enc.Len() != n {
t.Fatalf("requested %d bytes, wrote %d bytes", i, enc.Len())
}
}
}
// Test that ReadData skips over padding.
func TestSkipPadding(t *testing.T) {
var data = [][]byte{{}, {}, []byte("hello"), {}, []byte("world")}
var enc bytes.Buffer
mustWritePadding(&enc, 10)
mustWritePadding(&enc, 100)
mustWriteData(&enc, data[0])
mustWriteData(&enc, data[1])
mustWritePadding(&enc, 10)
mustWriteData(&enc, data[2])
mustWriteData(&enc, data[3])
mustWritePadding(&enc, 10)
mustWriteData(&enc, data[4])
mustWritePadding(&enc, 10)
mustWritePadding(&enc, 10)
for i, expected := range data {
var actual [10]byte
n, err := ReadData(&enc, actual[:])
if err != nil {
t.Fatalf("slice %d, got error %v, expected %v", i, err, nil)
}
if !bytes.Equal(actual[:n], expected) {
t.Fatalf("slice %d, got <%x>, expected <%x>", i, actual[:n], expected)
}
}
n, err := ReadData(&enc, nil)
if n != 0 || err != io.EOF {
t.Fatalf("got (%v, %v), expected (%v, %v)", n, err, 0, io.EOF)
}
}
// Test that EOF before a length prefix returns io.EOF.
func TestEOF(t *testing.T) {
n, err := ReadData(bytes.NewReader(nil), nil)
if n != 0 || err != io.EOF {
t.Fatalf("got (%v, %v), expected (%v, %v)", n, err, 0, io.EOF)
}
}
// Test that an EOF while reading a length prefix, or while reading the
// subsequent data/padding, returns io.ErrUnexpectedEOF.
func TestUnexpectedEOF(t *testing.T) {
for _, test := range [][]byte{
{0x40}, // expecting a second length byte
{0xc0}, // expecting a second length byte
{0x41, 0x80}, // expecting a third length byte
{0xc1, 0x80}, // expecting a third length byte
{0x02}, // expecting 2 bytes of padding
{0x82}, // expecting 2 bytes of data
{0x02, 'X'}, // expecting 1 byte of padding
{0x82, 'X'}, // expecting 1 byte of data
{0x41, 0x00}, // expecting 128 bytes of padding
{0xc1, 0x00}, // expecting 128 bytes of data
{0x41, 0x00, 'X'}, // expecting 127 bytes of padding
{0xc1, 0x00, 'X'}, // expecting 127 bytes of data
{0x41, 0x80, 0x00}, // expecting 32768 bytes of padding
{0xc1, 0x80, 0x00}, // expecting 32768 bytes of data
{0x41, 0x80, 0x00, 'X'}, // expecting 32767 bytes of padding
{0xc1, 0x80, 0x00, 'X'}, // expecting 32767 bytes of data
} {
n, err := ReadData(bytes.NewReader(test), nil)
if n != 0 || err != io.ErrUnexpectedEOF {
t.Fatalf("<%x> got (%v, %v), expected (%v, %v)", test, n, err, 0, io.ErrUnexpectedEOF)
}
}
}
// Test that length encodings that are longer than they could be are still
// interpreted.
func TestNonMinimalLengthEncoding(t *testing.T) {
for _, test := range []struct {
enc []byte
expected []byte
}{
{[]byte{0x81, 'X'}, []byte("X")},
{[]byte{0xc0, 0x01, 'X'}, []byte("X")},
{[]byte{0xc0, 0x80, 0x01, 'X'}, []byte("X")},
} {
var p [10]byte
n, err := ReadData(bytes.NewReader(test.enc), p[:])
if err != nil {
t.Fatalf("<%x> got error %v, expected %v", test.enc, err, nil)
}
if !bytes.Equal(p[:n], test.expected) {
t.Fatalf("<%x> got <%x>, expected <%x>", test.enc, p[:n], test.expected)
}
}
}
// Test that ReadData only reads up to 3 bytes of length prefix.
func TestReadLimits(t *testing.T) {
// Test the maximum length that's possible with 3 bytes of length
// prefix.
maxLength := (0x3f << 14) | (0x7f << 7) | 0x7f
data := bytes.Repeat([]byte{'X'}, maxLength)
prefix := []byte{0xff, 0xff, 0x7f} // encodes 0xfffff
var p [0xfffff]byte
n, err := ReadData(bytes.NewReader(append(prefix, data...)), p[:])
if err != nil {
t.Fatalf("got error %v, expected %v", err, nil)
}
if !bytes.Equal(p[:n], data) {
t.Fatalf("got %d bytes unequal to %d bytes", len(p), len(data))
}
// Test a 4-byte prefix.
prefix = []byte{0xc0, 0xc0, 0x80, 0x80} // encodes 0x100000
data = bytes.Repeat([]byte{'X'}, maxLength+1)
n, err = ReadData(bytes.NewReader(append(prefix, data...)), nil)
if n != 0 || err != ErrTooLong {
t.Fatalf("got (%v, %v), expected (%v, %v)", n, err, 0, ErrTooLong)
}
// Test that 4 bytes don't work, even when they encode an integer that
// would fit in 3 bytes.
prefix = []byte{0xc0, 0x80, 0x80, 0x80} // encodes 0x0
data = []byte{}
n, err = ReadData(bytes.NewReader(append(prefix, data...)), nil)
if n != 0 || err != ErrTooLong {
t.Fatalf("got (%v, %v), expected (%v, %v)", n, err, 0, ErrTooLong)
}
// Do the same tests with padding lengths.
data = []byte("hello")
prefix = []byte{0x7f, 0xff, 0x7f} // encodes 0xfffff
padding := bytes.Repeat([]byte{'X'}, maxLength)
enc := bytes.NewBuffer(append(prefix, padding...))
mustWriteData(enc, data)
n, err = ReadData(enc, p[:])
if err != nil {
t.Fatalf("got error %v, expected %v", err, nil)
}
if !bytes.Equal(p[:n], data) {
t.Fatalf("got <%x>, expected <%x>", p[:n], data)
}
prefix = []byte{0x40, 0xc0, 0x80, 0x80} // encodes 0x100000
padding = bytes.Repeat([]byte{'X'}, maxLength+1)
enc = bytes.NewBuffer(append(prefix, padding...))
mustWriteData(enc, data)
n, err = ReadData(enc, nil)
if n != 0 || err != ErrTooLong {
t.Fatalf("got (%v, %v), expected (%v, %v)", n, err, 0, ErrTooLong)
}
prefix = []byte{0x40, 0x80, 0x80, 0x80} // encodes 0x0
padding = []byte{}
enc = bytes.NewBuffer(append(prefix, padding...))
mustWriteData(enc, data)
n, err = ReadData(enc, nil)
if n != 0 || err != ErrTooLong {
t.Fatalf("got (%v, %v), expected (%v, %v)", n, err, 0, ErrTooLong)
}
}
// Test that WriteData and WritePadding only accept lengths that can be encoded
// in up to 3 bytes of length prefix.
func TestWriteLimits(t *testing.T) {
maxLength := (0x3f << 14) | (0x7f << 7) | 0x7f
var enc bytes.Buffer
n, err := WriteData(&enc, bytes.Repeat([]byte{'X'}, maxLength))
if n != maxLength+3 || err != nil {
t.Fatalf("got (%d, %v), expected (%d, %v)", n, err, maxLength, nil)
}
enc.Reset()
n, err = WriteData(&enc, bytes.Repeat([]byte{'X'}, maxLength+1))
if n != 0 || err != ErrTooLong {
t.Fatalf("got (%d, %v), expected (%d, %v)", n, err, 0, ErrTooLong)
}
// Padding gets an extra 3 bytes because the prefix is counted as part
// of the length.
enc.Reset()
n, err = WritePadding(&enc, maxLength+3)
if n != maxLength+3 || err != nil {
t.Fatalf("got (%d, %v), expected (%d, %v)", n, err, maxLength+3, nil)
}
// Writing a too-long padding is okay because WritePadding will break it
// into smaller chunks.
enc.Reset()
n, err = WritePadding(&enc, maxLength+4)
if n != maxLength+4 || err != nil {
t.Fatalf("got (%d, %v), expected (%d, %v)", n, err, maxLength+4, nil)
}
}
// Test that WritePadding panics when given a negative length.
func TestNegativeLength(t *testing.T) {
for _, n := range []int{-1, ^0} {
var enc bytes.Buffer
panicked, nn, err := testNegativeLengthSub(t, &enc, n)
if !panicked {
t.Fatalf("WritePadding(%d) returned (%d, %v) instead of panicking", n, nn, err)
}
}
}
// Calls WritePadding(w, n) and augments the return value with a flag indicating
// whether the call panicked.
func testNegativeLengthSub(t *testing.T, w io.Writer, n int) (panicked bool, nn int, err error) {
defer func() {
if r := recover(); r != nil {
panicked = true
}
}()
t.Helper()
nn, err = WritePadding(w, n)
return false, nn, err
}
// Test that MaxDataForSize panics when given a 0 length.
func TestMaxDataForSizeZero(t *testing.T) {
defer func() {
if r := recover(); r == nil {
t.Fatal("didn't panic")
}
}()
MaxDataForSize(0)
}
// Test thresholds of available sizes for MaxDataForSize.
func TestMaxDataForSize(t *testing.T) {
for _, test := range []struct {
size int
expected int
}{
{0x01, 0x00},
{0x02, 0x01},
{0x3f, 0x3e},
{0x40, 0x3e},
{0x41, 0x3f},
{0x1fff, 0x1ffd},
{0x2000, 0x1ffd},
{0x2001, 0x1ffe},
{0xfffff, 0xffffc},
{0x100000, 0xffffc},
{0x100001, 0xffffc},
{0x7fffffff, 0xffffc},
} {
max := MaxDataForSize(test.size)
if max != test.expected {
t.Fatalf("size %d, got %d, expected %d", test.size, max, test.expected)
}
}
}
// Test that ReadData truncates the data when the destination slice is too
// short.
func TestReadDataTruncate(t *testing.T) {
var enc bytes.Buffer
mustWriteData(&enc, []byte("12345678"))
mustWriteData(&enc, []byte("abcdefgh"))
var p [4]byte
// First ReadData should return truncated "1234".
n, err := ReadData(&enc, p[:])
if err != io.ErrShortBuffer {
t.Fatalf("got error %v, expected %v", err, io.ErrShortBuffer)
}
if !bytes.Equal(p[:n], []byte("1234")) {
t.Fatalf("got <%x>, expected <%x>", p[:n], []byte("1234"))
}
// Second ReadData should return truncated "abcd", not the rest of
// "12345678".
n, err = ReadData(&enc, p[:])
if err != io.ErrShortBuffer {
t.Fatalf("got error %v, expected %v", err, io.ErrShortBuffer)
}
if !bytes.Equal(p[:n], []byte("abcd")) {
t.Fatalf("got <%x>, expected <%x>", p[:n], []byte("abcd"))
}
// Last ReadData should give io.EOF.
n, err = ReadData(&enc, p[:])
if err != io.EOF {
t.Fatalf("got error %v, expected %v", err, io.EOF)
}
}
// Test that even when the result is truncated, ReadData fills the provided
// buffer as much as possible (and does not stop at the boundary of an internal
// Read, say).
func TestReadDataTruncateFull(t *testing.T) {
pr, pw := io.Pipe()
go func() {
// Send one data chunk that will be delivered across two Read
// calls.
pw.Write([]byte{0x8a, 'h', 'e', 'l', 'l', 'o'})
pw.Write([]byte{'w', 'o', 'r', 'l', 'd'})
}()
var p [8]byte
n, err := ReadData(pr, p[:])
if err != io.ErrShortBuffer {
t.Fatalf("got error %v, expected %v", err, io.ErrShortBuffer)
}
// Should not stop after "hello".
if !bytes.Equal(p[:n], []byte("hellowor")) {
t.Fatalf("got <%x>, expected <%x>", p[:n], []byte("hellowor"))
}
}
// Benchmark the ReadData function when reading from a stream of data packets of
// different sizes.
func BenchmarkReadData(b *testing.B) {
pr, pw := io.Pipe()
go func() {
for {
for length := 0; length < 128; length++ {
WriteData(pw, paddingBuffer[:length])
}
}
}()
var p [128]byte
for i := 0; i < b.N; i++ {
_, err := ReadData(pr, p[:])
if err != nil {
b.Fatal(err)
}
}
}

39
common/event/bus.go Normal file

@ -0,0 +1,39 @@
package event
import "sync"
func NewSnowflakeEventDispatcher() SnowflakeEventDispatcher {
return &eventBus{lock: &sync.Mutex{}}
}
type eventBus struct {
lock *sync.Mutex
listeners []SnowflakeEventReceiver
}
func (e *eventBus) OnNewSnowflakeEvent(event SnowflakeEvent) {
e.lock.Lock()
defer e.lock.Unlock()
for _, v := range e.listeners {
v.OnNewSnowflakeEvent(event)
}
}
func (e *eventBus) AddSnowflakeEventListener(receiver SnowflakeEventReceiver) {
e.lock.Lock()
defer e.lock.Unlock()
e.listeners = append(e.listeners, receiver)
}
func (e *eventBus) RemoveSnowflakeEventListener(receiver SnowflakeEventReceiver) {
e.lock.Lock()
defer e.lock.Unlock()
var newListeners []SnowflakeEventReceiver
for _, v := range e.listeners {
if v != receiver {
newListeners = append(newListeners, v)
}
}
e.listeners = newListeners
}

32
common/event/bus_test.go Normal file

@ -0,0 +1,32 @@
package event
import (
"github.com/stretchr/testify/assert"
"testing"
)
type stubReceiver struct {
counter int
}
func (s *stubReceiver) OnNewSnowflakeEvent(event SnowflakeEvent) {
s.counter++
}
func TestBusDispatch(t *testing.T) {
EventBus := NewSnowflakeEventDispatcher()
StubReceiverA := &stubReceiver{}
StubReceiverB := &stubReceiver{}
EventBus.AddSnowflakeEventListener(StubReceiverA)
EventBus.AddSnowflakeEventListener(StubReceiverB)
assert.Equal(t, 0, StubReceiverA.counter)
assert.Equal(t, 0, StubReceiverB.counter)
EventBus.OnNewSnowflakeEvent(EventOnSnowflakeConnected{})
assert.Equal(t, 1, StubReceiverA.counter)
assert.Equal(t, 1, StubReceiverB.counter)
EventBus.RemoveSnowflakeEventListener(StubReceiverB)
EventBus.OnNewSnowflakeEvent(EventOnSnowflakeConnected{})
assert.Equal(t, 2, StubReceiverA.counter)
assert.Equal(t, 1, StubReceiverB.counter)
}

141
common/event/interface.go Normal file

@ -0,0 +1,141 @@
package event
import (
"fmt"
"time"
"github.com/pion/webrtc/v4"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil/safelog"
)
type SnowflakeEvent interface {
IsSnowflakeEvent()
String() string
}
type EventOnOfferCreated struct {
SnowflakeEvent
WebRTCLocalDescription *webrtc.SessionDescription
Error error
}
func (e EventOnOfferCreated) String() string {
if e.Error != nil {
scrubbed := safelog.Scrub([]byte(e.Error.Error()))
return fmt.Sprintf("offer creation failure %s", scrubbed)
}
return "offer created"
}
type EventOnBrokerRendezvous struct {
SnowflakeEvent
WebRTCRemoteDescription *webrtc.SessionDescription
Error error
}
func (e EventOnBrokerRendezvous) String() string {
if e.Error != nil {
scrubbed := safelog.Scrub([]byte(e.Error.Error()))
return fmt.Sprintf("broker failure %s", scrubbed)
}
return "broker rendezvous peer received"
}
type EventOnSnowflakeConnected struct {
SnowflakeEvent
}
func (e EventOnSnowflakeConnected) String() string {
return "connected"
}
type EventOnSnowflakeConnectionFailed struct {
SnowflakeEvent
Error error
}
func (e EventOnSnowflakeConnectionFailed) String() string {
scrubbed := safelog.Scrub([]byte(e.Error.Error()))
return fmt.Sprintf("trying a new proxy: %s", scrubbed)
}
type EventOnProxyStarting struct {
SnowflakeEvent
}
func (e EventOnProxyStarting) String() string {
return "Proxy starting"
}
type EventOnProxyClientConnected struct {
SnowflakeEvent
}
func (e EventOnProxyClientConnected) String() string {
return "client connected"
}
// The connection with the client has now been closed,
// after getting successfully established.
type EventOnProxyConnectionOver struct {
SnowflakeEvent
Country string
}
func (e EventOnProxyConnectionOver) String() string {
return "Proxy connection closed"
}
// Rendezvous with a client succeeded,
// but a data channel has not been created.
type EventOnProxyConnectionFailed struct {
SnowflakeEvent
}
func (e EventOnProxyConnectionFailed) String() string {
return "Failed to connect to the client"
}
type EventOnProxyStats struct {
SnowflakeEvent
// Completed successful connections.
ConnectionCount int
// Connections that failed to establish.
FailedConnectionCount uint
InboundBytes, OutboundBytes int64
InboundUnit, OutboundUnit string
SummaryInterval time.Duration
}
func (e EventOnProxyStats) String() string {
statString := fmt.Sprintf("In the last %v, there were %v completed successful connections. Traffic Relayed ↓ %v %v (%.2f %v%s), ↑ %v %v (%.2f %v%s).",
e.SummaryInterval.String(), e.ConnectionCount,
e.InboundBytes, e.InboundUnit, float64(e.InboundBytes)/e.SummaryInterval.Seconds(), e.InboundUnit, "/s",
e.OutboundBytes, e.OutboundUnit, float64(e.OutboundBytes)/e.SummaryInterval.Seconds(), e.OutboundUnit, "/s")
return statString
}
type EventOnCurrentNATTypeDetermined struct {
SnowflakeEvent
CurNATType string
}
func (e EventOnCurrentNATTypeDetermined) String() string {
return fmt.Sprintf("NAT type: %v", e.CurNATType)
}
type SnowflakeEventReceiver interface {
// OnNewSnowflakeEvent notifies the receiver of a new event.
// This method MUST NOT block.
OnNewSnowflakeEvent(event SnowflakeEvent)
}
type SnowflakeEventDispatcher interface {
SnowflakeEventReceiver
// AddSnowflakeEventListener allows a receiver to receive event notifications
// when OnNewSnowflakeEvent is called on the dispatcher.
// Every event listener added will be called when an event is received by the dispatcher.
// The order in which listeners are called is undefined.
AddSnowflakeEventListener(receiver SnowflakeEventReceiver)
RemoveSnowflakeEventListener(receiver SnowflakeEventReceiver)
}

151
common/messages/client.go Normal file

@ -0,0 +1,151 @@
// Package messages is used for communication with the snowflake broker.
// import "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/messages"
package messages
import (
"bytes"
"encoding/json"
"fmt"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/bridgefingerprint"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/nat"
)
const ClientVersion = "1.0"
/* Client--Broker protocol v1.x specification:
All messages contain the version number
followed by a new line and then the message body
<message> := <version>\n<body>
<version> := <digit>.<digit>
<body> := <poll request>|<poll response>
There are two different types of body messages,
each encoded in JSON format
== ClientPollRequest ==
<poll request> :=
{
offer: <sdp offer>
[nat: (unknown|restricted|unrestricted)]
[fingerprint: <fingerprint string>]
}
The NAT field is optional, and if it is missing a
value of "unknown" will be assumed. The fingerprint
is also optional and, if absent, will be assigned the
fingerprint of the default bridge.
== ClientPollResponse ==
<poll response> :=
{
[answer: <sdp answer>]
[error: <error string>]
}
If the broker succeeded in matching the client with a proxy,
the answer field MUST contain a valid SDP answer, and the
error field MUST be empty. If the answer field is empty, the
error field MUST contain a string explaining the reason
for the error.
*/
// The bridge fingerprint to assume, for client poll requests that do not
// specify a fingerprint. Before #28651, there was only one bridge with one
// fingerprint, which all clients expected to be connected to implicitly.
// If a client is old enough that it does not specify a fingerprint, this is
// the fingerprint it expects. Clients that do set a fingerprint in the
// SOCKS params will also be assumed to want to connect to the default bridge.
const defaultBridgeFingerprint = "2B280B23E1107BB62ABFC40DDCC8824814F80A72"
type ClientPollRequest struct {
Offer string `json:"offer"`
NAT string `json:"nat"`
Fingerprint string `json:"fingerprint"`
}
// EncodeClientPollRequest encodes a poll message from a snowflake client.
func (req *ClientPollRequest) EncodeClientPollRequest() ([]byte, error) {
if req.Fingerprint == "" {
req.Fingerprint = defaultBridgeFingerprint
}
body, err := json.Marshal(req)
if err != nil {
return nil, err
}
return append([]byte(ClientVersion+"\n"), body...), nil
}
// DecodeClientPollRequest decodes a poll message from a snowflake client.
func DecodeClientPollRequest(data []byte) (*ClientPollRequest, error) {
parts := bytes.SplitN(data, []byte("\n"), 2)
if len(parts) < 2 {
// no version number found
return nil, fmt.Errorf("unsupported message version")
}
var message ClientPollRequest
if string(parts[0]) != ClientVersion {
return nil, fmt.Errorf("unsupported message version")
}
err := json.Unmarshal(parts[1], &message)
if err != nil {
return nil, err
}
if message.Offer == "" {
return nil, fmt.Errorf("no supplied offer")
}
if message.Fingerprint == "" {
message.Fingerprint = defaultBridgeFingerprint
}
if _, err := bridgefingerprint.FingerprintFromHexString(message.Fingerprint); err != nil {
return nil, fmt.Errorf("cannot decode fingerprint")
}
switch message.NAT {
case "":
message.NAT = nat.NATUnknown
case nat.NATUnknown:
case nat.NATRestricted:
case nat.NATUnrestricted:
default:
return nil, fmt.Errorf("invalid NAT type")
}
return &message, nil
}
type ClientPollResponse struct {
Answer string `json:"answer,omitempty"`
Error string `json:"error,omitempty"`
}
// EncodePollResponse encodes a poll response for a snowflake client.
func (resp *ClientPollResponse) EncodePollResponse() ([]byte, error) {
return json.Marshal(resp)
}
// DecodeClientPollResponse decodes a poll response for a snowflake client.
// If the Error field is empty, the Answer should be non-empty.
func DecodeClientPollResponse(data []byte) (*ClientPollResponse, error) {
var message ClientPollResponse
err := json.Unmarshal(data, &message)
if err != nil {
return nil, err
}
if message.Error == "" && message.Answer == "" {
return nil, fmt.Errorf("received empty broker response")
}
return &message, nil
}

30
common/messages/ipc.go Normal file

@ -0,0 +1,30 @@
package messages
import (
"context"
"errors"
)
type RendezvousMethod string
const (
RendezvousHttp RendezvousMethod = "http"
RendezvousAmpCache RendezvousMethod = "ampcache"
RendezvousSqs RendezvousMethod = "sqs"
)
type Arg struct {
Body []byte
RemoteAddr string
RendezvousMethod RendezvousMethod
Context context.Context
}
var (
ErrBadRequest = errors.New("bad request")
ErrInternal = errors.New("internal error")
ErrExtraInfo = errors.New("client sent extra info")
StrTimedOut = "timed out waiting for answer!"
StrNoProxies = "no snowflake proxies currently available"
)


@ -0,0 +1,472 @@
package messages
import (
"encoding/json"
"fmt"
"testing"
. "github.com/smartystreets/goconvey/convey"
)
func TestDecodeProxyPollRequest(t *testing.T) {
Convey("Context", t, func() {
for _, test := range []struct {
sid string
proxyType string
natType string
clients int
data string
err error
acceptedRelayPattern string
}{
{
//Version 1.0 proxy message
sid: "ymbcCMto7KHNGYlp",
proxyType: "unknown",
natType: "unknown",
clients: 0,
data: `{"Sid":"ymbcCMto7KHNGYlp","Version":"1.0"}`,
err: nil,
},
{
//Version 1.1 proxy message
sid: "ymbcCMto7KHNGYlp",
proxyType: "standalone",
natType: "unknown",
clients: 0,
data: `{"Sid":"ymbcCMto7KHNGYlp","Version":"1.1","Type":"standalone"}`,
err: nil,
},
{
//Version 1.2 proxy message
sid: "ymbcCMto7KHNGYlp",
proxyType: "standalone",
natType: "restricted",
clients: 0,
data: `{"Sid":"ymbcCMto7KHNGYlp","Version":"1.2","Type":"standalone", "NAT":"restricted"}`,
err: nil,
},
{
//Version 1.2 proxy message with clients
sid: "ymbcCMto7KHNGYlp",
proxyType: "standalone",
natType: "restricted",
clients: 24,
data: `{"Sid":"ymbcCMto7KHNGYlp","Version":"1.2","Type":"standalone", "NAT":"restricted","Clients":24}`,
err: nil,
},
{
//Version 1.3 proxy message with clients and accepted relay pattern
sid: "ymbcCMto7KHNGYlp",
proxyType: "standalone",
natType: "restricted",
clients: 24,
acceptedRelayPattern: "snowflake.torproject.org",
data: `{"Sid":"ymbcCMto7KHNGYlp","Version":"1.2","Type":"standalone", "NAT":"restricted","Clients":24, "AcceptedRelayPattern":"snowflake.torproject.org"}`,
err: nil,
},
{
//Version 0.X proxy message:
sid: "",
proxyType: "",
natType: "",
clients: 0,
data: "",
err: &json.SyntaxError{},
},
{
sid: "",
proxyType: "",
natType: "",
clients: 0,
data: `{"Sid":"ymbcCMto7KHNGYlp"}`,
err: fmt.Errorf(""),
},
{
sid: "",
proxyType: "",
natType: "",
clients: 0,
data: "{}",
err: fmt.Errorf(""),
},
{
sid: "",
proxyType: "",
natType: "",
clients: 0,
data: `{"Version":"1.0"}`,
err: fmt.Errorf(""),
},
{
sid: "",
proxyType: "",
natType: "",
clients: 0,
data: `{"Version":"2.0"}`,
err: fmt.Errorf(""),
},
} {
sid, proxyType, natType, clients, relayPattern, _, err := DecodeProxyPollRequestWithRelayPrefix([]byte(test.data))
So(sid, ShouldResemble, test.sid)
So(proxyType, ShouldResemble, test.proxyType)
So(natType, ShouldResemble, test.natType)
So(clients, ShouldEqual, test.clients)
So(relayPattern, ShouldResemble, test.acceptedRelayPattern)
So(err, ShouldHaveSameTypeAs, test.err)
}
})
}
func TestEncodeProxyPollRequests(t *testing.T) {
Convey("Context", t, func() {
b, err := EncodeProxyPollRequest("ymbcCMto7KHNGYlp", "standalone", "unknown", 16)
So(err, ShouldBeNil)
sid, proxyType, natType, clients, err := DecodeProxyPollRequest(b)
So(sid, ShouldEqual, "ymbcCMto7KHNGYlp")
So(proxyType, ShouldEqual, "standalone")
So(natType, ShouldEqual, "unknown")
So(clients, ShouldEqual, 16)
So(err, ShouldBeNil)
})
}
func TestDecodeProxyPollResponse(t *testing.T) {
Convey("Context", t, func() {
for _, test := range []struct {
offer string
data string
relayURL string
err error
}{
{
offer: "fake offer",
data: `{"Status":"client match","Offer":"fake offer","NAT":"unknown"}`,
err: nil,
},
{
offer: "fake offer",
data: `{"Status":"client match","Offer":"fake offer","NAT":"unknown", "RelayURL":"wss://snowflake.torproject.org/proxy"}`,
relayURL: "wss://snowflake.torproject.org/proxy",
err: nil,
},
{
offer: "",
data: `{"Status":"no match"}`,
err: nil,
},
{
offer: "",
data: `{"Status":"client match"}`,
err: fmt.Errorf("no supplied offer"),
},
{
offer: "",
data: `{"Test":"test"}`,
err: fmt.Errorf(""),
},
} {
offer, _, relayURL, err := DecodePollResponseWithRelayURL([]byte(test.data))
So(err, ShouldHaveSameTypeAs, test.err)
So(offer, ShouldResemble, test.offer)
So(relayURL, ShouldResemble, test.relayURL)
}
})
}
func TestEncodeProxyPollResponse(t *testing.T) {
Convey("Context", t, func() {
b, err := EncodePollResponse("fake offer", true, "restricted")
So(err, ShouldBeNil)
offer, natType, err := DecodePollResponse(b)
So(offer, ShouldEqual, "fake offer")
So(natType, ShouldEqual, "restricted")
So(err, ShouldBeNil)
b, err = EncodePollResponse("", false, "unknown")
So(err, ShouldBeNil)
offer, natType, err = DecodePollResponse(b)
So(offer, ShouldEqual, "")
So(natType, ShouldEqual, "unknown")
So(err, ShouldBeNil)
})
}
func TestEncodeProxyPollResponseWithProxyURL(t *testing.T) {
Convey("Context", t, func() {
b, err := EncodePollResponseWithRelayURL("fake offer", true, "restricted", "wss://test/", "")
So(err, ShouldBeNil)
offer, natType, err := DecodePollResponse(b)
So(err, ShouldNotBeNil)
offer, natType, relay, err := DecodePollResponseWithRelayURL(b)
So(offer, ShouldEqual, "fake offer")
So(natType, ShouldEqual, "restricted")
So(relay, ShouldEqual, "wss://test/")
So(err, ShouldBeNil)
b, err = EncodePollResponse("", false, "unknown")
So(err, ShouldBeNil)
offer, natType, relay, err = DecodePollResponseWithRelayURL(b)
So(offer, ShouldEqual, "")
So(natType, ShouldEqual, "unknown")
So(err, ShouldBeNil)
b, err = EncodePollResponseWithRelayURL("fake offer", false, "restricted", "wss://test/", "test error reason")
So(err, ShouldBeNil)
offer, natType, relay, err = DecodePollResponseWithRelayURL(b)
So(err, ShouldNotBeNil)
So(err.Error(), ShouldContainSubstring, "test error reason")
})
}
func TestDecodeProxyAnswerRequest(t *testing.T) {
Convey("Context", t, func() {
for _, test := range []struct {
answer string
sid string
data string
err error
}{
{
"test",
"test",
`{"Version":"1.0","Sid":"test","Answer":"test"}`,
nil,
},
{
"",
"",
`{"type":"offer","sdp":"v=0\r\no=- 4358805017720277108 2 IN IP4 [scrubbed]\r\ns=-\r\nt=0 0\r\na=group:BUNDLE data\r\na=msid-semantic: WMS\r\nm=application 56688 DTLS/SCTP 5000\r\nc=IN IP4 [scrubbed]\r\na=candidate:3769337065 1 udp 2122260223 [scrubbed] 56688 typ host generation 0 network-id 1 network-cost 50\r\na=candidate:2921887769 1 tcp 1518280447 [scrubbed] 35441 typ host tcptype passive generation 0 network-id 1 network-cost 50\r\na=ice-ufrag:aMAZ\r\na=ice-pwd:jcHb08Jjgrazp2dzjdrvPPvV\r\na=ice-options:trickle\r\na=fingerprint:sha-256 C8:88:EE:B9:E7:02:2E:21:37:ED:7A:D1:EB:2B:A3:15:A2:3B:5B:1C:3D:D4:D5:1F:06:CF:52:40:03:F8:DD:66\r\na=setup:actpass\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"}`,
fmt.Errorf(""),
},
{
"",
"",
`{"Version":"1.0","Answer":"test"}`,
fmt.Errorf(""),
},
{
"",
"",
`{"Version":"1.0","Sid":"test"}`,
fmt.Errorf(""),
},
} {
answer, sid, err := DecodeAnswerRequest([]byte(test.data))
So(answer, ShouldResemble, test.answer)
So(sid, ShouldResemble, test.sid)
So(err, ShouldHaveSameTypeAs, test.err)
}
})
}
func TestEncodeProxyAnswerRequest(t *testing.T) {
Convey("Context", t, func() {
b, err := EncodeAnswerRequest("test answer", "test sid")
So(err, ShouldBeNil)
answer, sid, err := DecodeAnswerRequest(b)
So(answer, ShouldEqual, "test answer")
So(sid, ShouldEqual, "test sid")
So(err, ShouldBeNil)
})
}
func TestDecodeProxyAnswerResponse(t *testing.T) {
Convey("Context", t, func() {
for _, test := range []struct {
success bool
data string
err error
}{
{
true,
`{"Status":"success"}`,
nil,
},
{
false,
`{"Status":"client gone"}`,
nil,
},
{
false,
`{"Test":"test"}`,
fmt.Errorf(""),
},
} {
success, err := DecodeAnswerResponse([]byte(test.data))
So(success, ShouldResemble, test.success)
So(err, ShouldHaveSameTypeAs, test.err)
}
})
}
func TestEncodeProxyAnswerResponse(t *testing.T) {
Convey("Context", t, func() {
b, err := EncodeAnswerResponse(true)
So(err, ShouldBeNil)
success, err := DecodeAnswerResponse(b)
So(success, ShouldEqual, true)
So(err, ShouldBeNil)
b, err = EncodeAnswerResponse(false)
So(err, ShouldBeNil)
success, err = DecodeAnswerResponse(b)
So(success, ShouldEqual, false)
So(err, ShouldBeNil)
})
}
func TestDecodeClientPollRequest(t *testing.T) {
Convey("Context", t, func() {
for _, test := range []struct {
natType string
offer string
data string
err error
}{
{
//version 1.0 client message
"unknown",
"fake",
`1.0
{"nat":"unknown","offer":"fake"}`,
nil,
},
{
//version 1.0 client message
"unknown",
"fake",
`1.0
{"offer":"fake"}`,
nil,
},
{
//unknown version
"",
"",
`{"version":"2.0"}`,
fmt.Errorf(""),
},
{
//no offer
"",
"",
`1.0
{"nat":"unknown"}`,
fmt.Errorf(""),
},
} {
req, err := DecodeClientPollRequest([]byte(test.data))
So(err, ShouldHaveSameTypeAs, test.err)
if test.err == nil {
So(req.NAT, ShouldResemble, test.natType)
So(req.Offer, ShouldResemble, test.offer)
}
}
})
}
func TestEncodeClientPollRequests(t *testing.T) {
Convey("Context", t, func() {
for i, test := range []struct {
natType string
offer string
fingerprint string
err error
}{
{
"unknown",
"fake",
"",
nil,
},
{
"unknown",
"fake",
defaultBridgeFingerprint,
nil,
},
{
"unknown",
"fake",
"123123",
fmt.Errorf(""),
},
} {
req1 := &ClientPollRequest{
NAT: test.natType,
Offer: test.offer,
Fingerprint: test.fingerprint,
}
b, err := req1.EncodeClientPollRequest()
So(err, ShouldBeNil)
req2, err := DecodeClientPollRequest(b)
So(err, ShouldHaveSameTypeAs, test.err)
if test.err == nil {
So(req2.Offer, ShouldEqual, req1.Offer)
So(req2.NAT, ShouldEqual, req1.NAT)
fingerprint := test.fingerprint
if i == 0 {
fingerprint = defaultBridgeFingerprint
}
So(req2.Fingerprint, ShouldEqual, fingerprint)
}
}
})
}
func TestDecodeClientPollResponse(t *testing.T) {
Convey("Context", t, func() {
for _, test := range []struct {
answer string
msg string
data string
}{
{
"fake answer",
"",
`{"answer":"fake answer"}`,
},
{
"",
"no snowflakes",
`{"error":"no snowflakes"}`,
},
} {
resp, err := DecodeClientPollResponse([]byte(test.data))
So(err, ShouldBeNil)
So(resp.Answer, ShouldResemble, test.answer)
So(resp.Error, ShouldResemble, test.msg)
}
})
}
func TestEncodeClientPollResponse(t *testing.T) {
Convey("Context", t, func() {
resp1 := &ClientPollResponse{
Answer: "fake answer",
}
b, err := resp1.EncodePollResponse()
So(err, ShouldBeNil)
resp2, err := DecodeClientPollResponse(b)
So(err, ShouldBeNil)
So(resp1, ShouldResemble, resp2)
resp1 = &ClientPollResponse{
Error: "failed",
}
b, err = resp1.EncodePollResponse()
So(err, ShouldBeNil)
resp2, err = DecodeClientPollResponse(b)
So(err, ShouldBeNil)
So(resp1, ShouldResemble, resp2)
})
}

315
common/messages/proxy.go Normal file

@ -0,0 +1,315 @@
// Package messages is used for communication with the snowflake broker.
// import "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/messages"
package messages
import (
"encoding/json"
"errors"
"fmt"
"strings"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/nat"
)
const (
version = "1.3"
ProxyUnknown = "unknown"
)
var KnownProxyTypes = map[string]bool{
"standalone": true,
"webext": true,
"badge": true,
"iptproxy": true,
}
/* Version 1.3 specification:
== ProxyPollRequest ==
{
Sid: [generated session id of proxy],
Version: 1.3,
Type: ["badge"|"webext"|"standalone"],
NAT: ["unknown"|"restricted"|"unrestricted"],
Clients: [number of current clients, rounded down to multiples of 8],
AcceptedRelayPattern: [a pattern representing accepted set of relay domains]
}
== ProxyPollResponse ==
1) If a client is matched:
HTTP 200 OK
{
Status: "client match",
{
type: offer,
sdp: [WebRTC SDP]
},
NAT: ["unknown"|"restricted"|"unrestricted"],
RelayURL: [the WebSocket URL proxy should connect to relay Snowflake traffic]
}
2) If a client is not matched:
HTTP 200 OK
{
Status: "no match"
}
3) If the request is malformed:
HTTP 400 BadRequest
== ProxyAnswerRequest ==
{
Sid: [generated session id of proxy],
Version: 1.3,
Answer:
{
type: answer,
sdp: [WebRTC SDP]
}
}
== ProxyAnswerResponse ==
1) If the client retrieved the answer:
HTTP 200 OK
{
Status: "success"
}
2) If the client left:
HTTP 200 OK
{
Status: "client gone"
}
3) If the request is malformed:
HTTP 400 BadRequest
*/
type ProxyPollRequest struct {
Sid string
Version string
Type string
NAT string
Clients int
AcceptedRelayPattern *string
}
func EncodeProxyPollRequest(sid string, proxyType string, natType string, clients int) ([]byte, error) {
return EncodeProxyPollRequestWithRelayPrefix(sid, proxyType, natType, clients, "")
}
func EncodeProxyPollRequestWithRelayPrefix(sid string, proxyType string, natType string, clients int, relayPattern string) ([]byte, error) {
return json.Marshal(ProxyPollRequest{
Sid: sid,
Version: version,
Type: proxyType,
NAT: natType,
Clients: clients,
AcceptedRelayPattern: &relayPattern,
})
}
func DecodeProxyPollRequest(data []byte) (sid string, proxyType string, natType string, clients int, err error) {
var relayPrefix string
sid, proxyType, natType, clients, relayPrefix, _, err = DecodeProxyPollRequestWithRelayPrefix(data)
if relayPrefix != "" {
return "", "", "", 0, ErrExtraInfo
}
return
}
// DecodeProxyPollRequestWithRelayPrefix decodes a poll message from a
// snowflake proxy, returning the sid, proxy type, NAT type, and client count
// of the proxy on success, or an error if decoding failed.
func DecodeProxyPollRequestWithRelayPrefix(data []byte) (
sid string, proxyType string, natType string, clients int, relayPrefix string, relayPrefixAware bool, err error) {
var message ProxyPollRequest
err = json.Unmarshal(data, &message)
if err != nil {
return
}
majorVersion := strings.Split(message.Version, ".")[0]
if majorVersion != "1" {
err = fmt.Errorf("using unknown version")
return
}
// Version 1.x requires an Sid
if message.Sid == "" {
err = fmt.Errorf("no supplied session id")
return
}
switch message.NAT {
case "":
message.NAT = nat.NATUnknown
case nat.NATUnknown:
case nat.NATRestricted:
case nat.NATUnrestricted:
default:
err = fmt.Errorf("invalid NAT type")
return
}
// we don't reject polls with an unknown proxy type because we encourage
// projects that embed proxy code to include their own type
if !KnownProxyTypes[message.Type] {
message.Type = ProxyUnknown
}
var acceptedRelayPattern = ""
if message.AcceptedRelayPattern != nil {
acceptedRelayPattern = *message.AcceptedRelayPattern
}
return message.Sid, message.Type, message.NAT, message.Clients,
acceptedRelayPattern, message.AcceptedRelayPattern != nil, nil
}
type ProxyPollResponse struct {
Status string
Offer string
NAT string
RelayURL string
}
func EncodePollResponse(offer string, success bool, natType string) ([]byte, error) {
return EncodePollResponseWithRelayURL(offer, success, natType, "", "no match")
}
func EncodePollResponseWithRelayURL(offer string, success bool, natType, relayURL, failReason string) ([]byte, error) {
if success {
return json.Marshal(ProxyPollResponse{
Status: "client match",
Offer: offer,
NAT: natType,
RelayURL: relayURL,
})
}
return json.Marshal(ProxyPollResponse{
Status: failReason,
})
}
func DecodePollResponse(data []byte) (offer string, natType string, err error) {
offer, natType, relayURL, err := DecodePollResponseWithRelayURL(data)
if relayURL != "" {
return "", "", ErrExtraInfo
}
return offer, natType, err
}
// DecodePollResponseWithRelayURL decodes a poll response from the broker and
// returns an offer and the client's NAT type.
// If there is a client match, the returned offer string will be non-empty.
func DecodePollResponseWithRelayURL(data []byte) (
offer string,
natType string,
relayURL string,
err error,
) {
var message ProxyPollResponse
err = json.Unmarshal(data, &message)
if err != nil {
return "", "", "", err
}
if message.Status == "" {
return "", "", "", fmt.Errorf("received invalid data")
}
err = nil
if message.Status == "client match" {
if message.Offer == "" {
return "", "", "", fmt.Errorf("no supplied offer")
}
} else {
message.Offer = ""
if message.Status != "no match" {
err = errors.New(message.Status)
}
}
natType = message.NAT
if natType == "" {
natType = "unknown"
}
return message.Offer, natType, message.RelayURL, err
}
type ProxyAnswerRequest struct {
Version string
Sid string
Answer string
}
func EncodeAnswerRequest(answer string, sid string) ([]byte, error) {
return json.Marshal(ProxyAnswerRequest{
Version: version,
Sid: sid,
Answer: answer,
})
}
// DecodeAnswerRequest returns the sdp answer and proxy sid.
func DecodeAnswerRequest(data []byte) (answer string, sid string, err error) {
var message ProxyAnswerRequest
err = json.Unmarshal(data, &message)
if err != nil {
return "", "", err
}
majorVersion := strings.Split(message.Version, ".")[0]
if majorVersion != "1" {
return "", "", fmt.Errorf("using unknown version")
}
if message.Sid == "" || message.Answer == "" {
return "", "", fmt.Errorf("no supplied sid or answer")
}
return message.Answer, message.Sid, nil
}
type ProxyAnswerResponse struct {
Status string
}
func EncodeAnswerResponse(success bool) ([]byte, error) {
if success {
return json.Marshal(ProxyAnswerResponse{
Status: "success",
})
}
return json.Marshal(ProxyAnswerResponse{
Status: "client gone",
})
}
func DecodeAnswerResponse(data []byte) (bool, error) {
var message ProxyAnswerResponse
var success bool
err := json.Unmarshal(data, &message)
if err != nil {
return success, err
}
if message.Status == "" {
return success, fmt.Errorf("received invalid data")
}
if message.Status == "success" {
success = true
}
return success, nil
}


@ -0,0 +1,31 @@
package namematcher
import "strings"
func NewNameMatcher(rule string) NameMatcher {
rule = strings.TrimSuffix(rule, "$")
return NameMatcher{suffix: strings.TrimPrefix(rule, "^"), exact: strings.HasPrefix(rule, "^")}
}
func IsValidRule(rule string) bool {
return strings.HasSuffix(rule, "$")
}
type NameMatcher struct {
exact bool
suffix string
}
func (m *NameMatcher) IsSupersetOf(matcher NameMatcher) bool {
if m.exact {
return matcher.exact && m.suffix == matcher.suffix
}
return strings.HasSuffix(matcher.suffix, m.suffix)
}
func (m *NameMatcher) IsMember(s string) bool {
if m.exact {
return s == m.suffix
}
return strings.HasSuffix(s, m.suffix)
}


@ -0,0 +1,55 @@
package namematcher
import "testing"
import . "github.com/smartystreets/goconvey/convey"
func TestMatchMember(t *testing.T) {
testingVector := []struct {
matcher string
target string
expects bool
}{
{matcher: "", target: "", expects: true},
{matcher: "^snowflake.torproject.net$", target: "snowflake.torproject.net", expects: true},
{matcher: "^snowflake.torproject.net$", target: "faketorproject.net", expects: false},
{matcher: "snowflake.torproject.net$", target: "faketorproject.net", expects: false},
{matcher: "snowflake.torproject.net$", target: "snowflake.torproject.net", expects: true},
{matcher: "snowflake.torproject.net$", target: "imaginary-01-snowflake.torproject.net", expects: true},
{matcher: "snowflake.torproject.net$", target: "imaginary-aaa-snowflake.torproject.net", expects: true},
{matcher: "snowflake.torproject.net$", target: "imaginary-aaa-snowflake.faketorproject.net", expects: false},
}
for _, v := range testingVector {
t.Run(v.matcher+"<>"+v.target, func(t *testing.T) {
Convey("test", t, func() {
matcher := NewNameMatcher(v.matcher)
So(matcher.IsMember(v.target), ShouldEqual, v.expects)
})
})
}
}
func TestMatchSubset(t *testing.T) {
testingVector := []struct {
matcher string
target string
expects bool
}{
{matcher: "", target: "", expects: true},
{matcher: "^snowflake.torproject.net$", target: "^snowflake.torproject.net$", expects: true},
{matcher: "snowflake.torproject.net$", target: "^snowflake.torproject.net$", expects: true},
{matcher: "snowflake.torproject.net$", target: "snowflake.torproject.net$", expects: true},
{matcher: "snowflake.torproject.net$", target: "testing-snowflake.torproject.net$", expects: true},
{matcher: "snowflake.torproject.net$", target: "^testing-snowflake.torproject.net$", expects: true},
{matcher: "snowflake.torproject.net$", target: "", expects: false},
}
for _, v := range testingVector {
t.Run(v.matcher+"<>"+v.target, func(t *testing.T) {
Convey("test", t, func() {
matcher := NewNameMatcher(v.matcher)
target := NewNameMatcher(v.target)
So(matcher.IsSupersetOf(target), ShouldEqual, v.expects)
})
})
}
}

common/nat/nat.go Normal file

@@ -0,0 +1,256 @@
/*
The majority of this code is taken from a utility I wrote for pion/stun
https://github.com/pion/stun/blob/master/cmd/stun-nat-behaviour/main.go
Copyright 2018 Pion LLC
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package nat
import (
"errors"
"fmt"
"gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/proxy"
"log"
"net"
"net/url"
"time"
"github.com/pion/stun/v3"
)
var ErrTimedOut = errors.New("timed out waiting for response")
const (
NATUnknown = "unknown"
NATRestricted = "restricted"
NATUnrestricted = "unrestricted"
)
// Deprecated: Use CheckIfRestrictedNATWithProxy instead.
func CheckIfRestrictedNAT(server string) (bool, error) {
return CheckIfRestrictedNATWithProxy(server, nil)
}
// CheckIfRestrictedNATWithProxy checks the NAT mapping and filtering
// behaviour and returns true if the NAT is restrictive
// (address-dependent mapping and/or port-dependent filtering)
// and false if the NAT is unrestrictive (meaning it
// will work with most other NATs).
func CheckIfRestrictedNATWithProxy(server string, proxy *url.URL) (bool, error) {
return isRestrictedMapping(server, proxy)
}
// Performs two tests from RFC 5780 to determine whether the mapping type
// of the client's NAT is address-independent or address-dependent
// Returns true if the mapping is address-dependent and false otherwise
func isRestrictedMapping(addrStr string, proxy *url.URL) (bool, error) {
var xorAddr1 stun.XORMappedAddress
var xorAddr2 stun.XORMappedAddress
mapTestConn, err := connect(addrStr, proxy)
if err != nil {
return false, fmt.Errorf("Error creating STUN connection: %w", err)
}
defer mapTestConn.Close()
// Test I: Regular binding request
message := stun.MustBuild(stun.TransactionID, stun.BindingRequest)
resp, err := mapTestConn.RoundTrip(message, mapTestConn.PrimaryAddr)
if err != nil {
return false, fmt.Errorf("Error completing roundtrip map test: %w", err)
}
// Decoding XOR-MAPPED-ADDRESS attribute from message.
if err = xorAddr1.GetFrom(resp); err != nil {
return false, fmt.Errorf("Error retrieving XOR-MAPPED-ADDRESS response: %w", err)
}
// Decoding OTHER-ADDRESS attribute from message.
var otherAddr stun.OtherAddress
if err = otherAddr.GetFrom(resp); err != nil {
return false, fmt.Errorf("NAT discovery feature not supported: %w", err)
}
if err = mapTestConn.AddOtherAddr(otherAddr.String()); err != nil {
return false, fmt.Errorf("Error resolving address %s: %w", otherAddr.String(), err)
}
// Test II: Send binding request to other address
resp, err = mapTestConn.RoundTrip(message, mapTestConn.OtherAddr)
if err != nil {
return false, fmt.Errorf("Error retrieving server response: %w", err)
}
// Decoding XOR-MAPPED-ADDRESS attribute from message.
if err = xorAddr2.GetFrom(resp); err != nil {
return false, fmt.Errorf("Error retrieving XOR-MAPPED-ADDRESS response: %w", err)
}
return xorAddr1.String() != xorAddr2.String(), nil
}
// Performs two tests from RFC 5780 to determine whether the filtering type
// of the client's NAT is port-dependent.
// Returns true if the filtering is port-dependent and false otherwise
// Note: This function is no longer used because a client's NAT type is
// determined only by their mapping type, but the functionality might
// be useful in the future and remains here.
func isRestrictedFiltering(addrStr string, proxy *url.URL) (bool, error) {
var xorAddr stun.XORMappedAddress
mapTestConn, err := connect(addrStr, proxy)
if err != nil {
log.Printf("Error creating STUN connection: %s", err.Error())
return false, err
}
defer mapTestConn.Close()
// Test I: Regular binding request
message := stun.MustBuild(stun.TransactionID, stun.BindingRequest)
resp, err := mapTestConn.RoundTrip(message, mapTestConn.PrimaryAddr)
if err == ErrTimedOut {
log.Printf("Error: no response from server")
return false, err
}
if err != nil {
log.Printf("Error: %s", err.Error())
return false, err
}
// Decoding XOR-MAPPED-ADDRESS attribute from message.
if err = xorAddr.GetFrom(resp); err != nil {
log.Printf("Error retrieving XOR-MAPPED-ADDRESS from response: %s", err.Error())
return false, err
}
// Test III: Request port change
message.Add(stun.AttrChangeRequest, []byte{0x00, 0x00, 0x00, 0x02})
_, err = mapTestConn.RoundTrip(message, mapTestConn.PrimaryAddr)
if err != ErrTimedOut && err != nil {
// something else went wrong
log.Printf("Error reading response from server: %s", err.Error())
return false, err
}
return err == ErrTimedOut, nil
}
// Given an address string, returns a StunServerConn
func connect(addrStr string, proxyAddr *url.URL) (*StunServerConn, error) {
// Creating a "connection" to STUN server.
var conn net.PacketConn
ResolveUDPAddr := net.ResolveUDPAddr
if proxyAddr != nil {
socksClient := proxy.NewSocks5UDPClient(proxyAddr)
ResolveUDPAddr = socksClient.ResolveUDPAddr
}
addr, err := ResolveUDPAddr("udp4", addrStr)
if err != nil {
log.Printf("Error resolving address: %s\n", err.Error())
return nil, err
}
if proxyAddr == nil {
c, err := net.ListenUDP("udp4", nil)
if err != nil {
return nil, err
}
conn = c
} else {
socksClient := proxy.NewSocks5UDPClient(proxyAddr)
c, err := socksClient.ListenPacket("udp", nil)
if err != nil {
return nil, err
}
conn = c
}
mChan := listen(conn)
return &StunServerConn{
conn: conn,
PrimaryAddr: addr,
messageChan: mChan,
}, nil
}
type StunServerConn struct {
conn net.PacketConn
PrimaryAddr *net.UDPAddr
OtherAddr *net.UDPAddr
messageChan chan *stun.Message
}
func (c *StunServerConn) Close() {
c.conn.Close()
}
func (c *StunServerConn) RoundTrip(msg *stun.Message, addr net.Addr) (*stun.Message, error) {
_, err := c.conn.WriteTo(msg.Raw, addr)
if err != nil {
return nil, err
}
// Wait for response or timeout
select {
case m, ok := <-c.messageChan:
if !ok {
return nil, fmt.Errorf("error reading from messageChan")
}
return m, nil
case <-time.After(10 * time.Second):
return nil, ErrTimedOut
}
}
func (c *StunServerConn) AddOtherAddr(addrStr string) error {
addr2, err := net.ResolveUDPAddr("udp4", addrStr)
if err != nil {
return err
}
c.OtherAddr = addr2
return nil
}
// taken from https://github.com/pion/stun/blob/master/cmd/stun-traversal/main.go
func listen(conn net.PacketConn) chan *stun.Message {
messages := make(chan *stun.Message)
go func() {
for {
buf := make([]byte, 1024)
n, _, err := conn.ReadFrom(buf)
if err != nil {
close(messages)
return
}
buf = buf[:n]
m := new(stun.Message)
m.Raw = buf
err = m.Decode()
if err != nil {
close(messages)
return
}
messages <- m
}
}()
return messages
}

common/proxy/check.go Normal file

@@ -0,0 +1,18 @@
package proxy
import (
"errors"
"net/url"
"strings"
)
var errUnsupportedProxyType = errors.New("unsupported proxy type")
func CheckProxyProtocolSupport(proxy *url.URL) error {
switch strings.ToLower(proxy.Scheme) {
case "socks5":
return nil
default:
return errUnsupportedProxyType
}
}

common/proxy/client.go Normal file

@@ -0,0 +1,274 @@
package proxy
import (
"context"
"errors"
"log"
"net"
"net/url"
"strconv"
"time"
"github.com/miekg/dns"
"github.com/pion/transport/v3"
"github.com/txthinking/socks5"
)
func NewSocks5UDPClient(addr *url.URL) SocksClient {
return SocksClient{addr: addr}
}
type SocksClient struct {
addr *url.URL
}
type SocksConn struct {
net.Conn
socks5Client *socks5.Client
}
func (s SocksConn) SetReadBuffer(bytes int) error {
return nil
}
func (s SocksConn) SetWriteBuffer(bytes int) error {
return nil
}
func (s SocksConn) ReadFromUDP(b []byte) (n int, addr *net.UDPAddr, err error) {
var buf [2000]byte
n, err = s.Conn.Read(buf[:])
if err != nil {
return 0, nil, err
}
datagram, err := socks5.NewDatagramFromBytes(buf[:n])
if err != nil {
return 0, nil, err
}
addr, err = net.ResolveUDPAddr("udp", datagram.Address())
if err != nil {
return 0, nil, err
}
n = copy(b, datagram.Data)
if n < len(datagram.Data) {
return 0, nil, errors.New("short buffer")
}
return len(datagram.Data), addr, nil
}
func (s SocksConn) ReadMsgUDP(b, oob []byte) (n, oobn, flags int, addr *net.UDPAddr, err error) {
panic("unimplemented")
}
func (s SocksConn) WriteToUDP(b []byte, addr *net.UDPAddr) (int, error) {
a, addrb, portb, err := socks5.ParseAddress(addr.String())
if err != nil {
return 0, err
}
packet := socks5.NewDatagram(a, addrb, portb, b)
_, err = s.Conn.Write(packet.Bytes())
if err != nil {
return 0, err
}
return len(b), nil
}
func (s SocksConn) WriteMsgUDP(b, oob []byte, addr *net.UDPAddr) (n, oobn int, err error) {
panic("unimplemented")
}
func (sc *SocksClient) ListenPacket(network string, locAddr *net.UDPAddr) (transport.UDPConn, error) {
conn, err := sc.listenPacket()
if err != nil {
log.Println("[SOCKS5 Client Error] cannot listen packet", err)
}
return conn, err
}
func (sc *SocksClient) listenPacket() (transport.UDPConn, error) {
var username, password string
if sc.addr.User != nil {
username = sc.addr.User.Username()
password, _ = sc.addr.User.Password()
}
client, err := socks5.NewClient(
sc.addr.Host,
username, password, 300, 300)
if err != nil {
return nil, err
}
err = client.Negotiate(nil)
if err != nil {
return nil, err
}
udpRequest := socks5.NewRequest(socks5.CmdUDP, socks5.ATYPIPv4, []byte{0x00, 0x00, 0x00, 0x00}, []byte{0x00, 0x00})
reply, err := client.Request(udpRequest)
if err != nil {
return nil, err
}
udpServerAddr := socks5.ToAddress(reply.Atyp, reply.BndAddr, reply.BndPort)
conn, err := net.Dial("udp", udpServerAddr)
if err != nil {
return nil, err
}
return &SocksConn{conn, client}, nil
}
func (s SocksConn) WriteTo(p []byte, addr net.Addr) (n int, err error) {
return s.WriteToUDP(p, addr.(*net.UDPAddr))
}
func (s SocksConn) ReadFrom(p []byte) (n int, addr net.Addr, err error) {
return s.ReadFromUDP(p)
}
func (s SocksConn) Read(b []byte) (int, error) {
panic("implement me")
}
func (s SocksConn) RemoteAddr() net.Addr {
panic("implement me")
}
func (s SocksConn) Write(b []byte) (int, error) {
panic("implement me")
}
func (sc *SocksClient) ResolveUDPAddr(network string, address string) (*net.UDPAddr, error) {
dnsServer, err := net.ResolveUDPAddr("udp", "1.1.1.1:53")
if err != nil {
return nil, err
}
proxiedResolver := newDnsResolver(sc, dnsServer)
ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
defer cancel()
host, port, err := net.SplitHostPort(address)
if err != nil {
return nil, err
}
ip, err := proxiedResolver.lookupIPAddr(ctx, host, network == "udp6")
if err != nil {
return nil, err
}
if len(ip) <= 0 {
return nil, errors.New("cannot resolve hostname: NXDOMAIN")
}
switch network {
case "udp4":
var v4IPAddr []net.IPAddr
for _, v := range ip {
if v.IP.To4() != nil {
v4IPAddr = append(v4IPAddr, v)
}
}
ip = v4IPAddr
case "udp6":
var v6IPAddr []net.IPAddr
for _, v := range ip {
if v.IP.To4() == nil {
v6IPAddr = append(v6IPAddr, v)
}
}
ip = v6IPAddr
case "udp":
default:
return nil, errors.New("unknown network")
}
if len(ip) <= 0 {
return nil, errors.New("cannot resolve hostname: no suitable address")
}
portInInt, err := strconv.ParseInt(port, 10, 32)
if err != nil {
return nil, err
}
return &net.UDPAddr{
IP: ip[0].IP,
Port: int(portInInt),
Zone: "",
}, nil
}
func newDnsResolver(sc *SocksClient,
serverAddress net.Addr) *dnsResolver {
return &dnsResolver{sc: sc, serverAddress: serverAddress}
}
type dnsResolver struct {
sc *SocksClient
serverAddress net.Addr
}
func (r *dnsResolver) lookupIPAddr(ctx context.Context, host string, ipv6 bool) ([]net.IPAddr, error) {
packetConn, err := r.sc.listenPacket()
if err != nil {
return nil, err
}
msg := new(dns.Msg)
if !ipv6 {
msg.SetQuestion(dns.Fqdn(host), dns.TypeA)
} else {
msg.SetQuestion(dns.Fqdn(host), dns.TypeAAAA)
}
encodedMsg, err := msg.Pack()
if err != nil {
return nil, err
}
for i := 2; i >= 0; i-- {
_, err := packetConn.WriteTo(encodedMsg, r.serverAddress)
if err != nil {
log.Println(err.Error())
}
}
ctx, cancel := context.WithTimeout(ctx, time.Second)
defer cancel()
go func() {
<-ctx.Done()
packetConn.Close()
}()
var dataBuf [1600]byte
n, _, err := packetConn.ReadFrom(dataBuf[:])
if err != nil {
return nil, err
}
err = msg.Unpack(dataBuf[:n])
if err != nil {
return nil, err
}
var returnedIPs []net.IPAddr
for _, resp := range msg.Answer {
switch respTyped := resp.(type) {
case *dns.A:
returnedIPs = append(returnedIPs, net.IPAddr{IP: respTyped.A})
case *dns.AAAA:
returnedIPs = append(returnedIPs, net.IPAddr{IP: respTyped.AAAA})
}
}
return returnedIPs, nil
}
func NewTransportWrapper(sc *SocksClient, innerNet transport.Net) transport.Net {
return &transportWrapper{sc: sc, Net: innerNet}
}
type transportWrapper struct {
transport.Net
sc *SocksClient
}
func (t *transportWrapper) ListenUDP(network string, locAddr *net.UDPAddr) (transport.UDPConn, error) {
return t.sc.ListenPacket(network, nil)
}
func (t *transportWrapper) ListenPacket(network string, address string) (net.PacketConn, error) {
return t.sc.ListenPacket(network, nil)
}
func (t *transportWrapper) ResolveUDPAddr(network string, address string) (*net.UDPAddr, error) {
return t.sc.ResolveUDPAddr(network, address)
}


@@ -1,62 +0,0 @@
// Package safelog provides a safer logging wrapper around the standard logging package
// import "git.torproject.org/pluggable-transports/snowflake.git/common/safelog"
package safelog
import (
"bytes"
"io"
"regexp"
)
const ipv4Address = `\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}`
const ipv6Address = `([0-9a-fA-F]{0,4}:){5,7}([0-9a-fA-F]{0,4})?`
const ipv6Compressed = `([0-9a-fA-F]{0,4}:){0,5}([0-9a-fA-F]{0,4})?(::)([0-9a-fA-F]{0,4}:){0,5}([0-9a-fA-F]{0,4})?`
const ipv6Full = `(` + ipv6Address + `(` + ipv4Address + `))` +
`|(` + ipv6Compressed + `(` + ipv4Address + `))` +
`|(` + ipv6Address + `)` + `|(` + ipv6Compressed + `)`
const optionalPort = `(:\d{1,5})?`
const addressPattern = `((` + ipv4Address + `)|(\[(` + ipv6Full + `)\])|(` + ipv6Full + `))` + optionalPort
const fullAddrPattern = `(^|\s|[^\w:])` + addressPattern + `(\s|(:\s)|[^\w:]|$)`
var scrubberPatterns = []*regexp.Regexp{
regexp.MustCompile(fullAddrPattern),
}
var addressRegexp = regexp.MustCompile(addressPattern)
// An io.Writer that can be used as the output for a logger that first
// sanitizes logs and then writes to the provided io.Writer
type LogScrubber struct {
Output io.Writer
buffer []byte
}
func scrub(b []byte) []byte {
scrubbedBytes := b
for _, pattern := range scrubberPatterns {
// this is a workaround since go does not yet support look ahead or look
// behind for regular expressions.
scrubbedBytes = pattern.ReplaceAllFunc(scrubbedBytes, func(b []byte) []byte {
return addressRegexp.ReplaceAll(b, []byte("[scrubbed]"))
})
}
return scrubbedBytes
}
func (ls *LogScrubber) Write(b []byte) (n int, err error) {
n = len(b)
ls.buffer = append(ls.buffer, b...)
for {
i := bytes.LastIndexByte(ls.buffer, '\n')
if i == -1 {
return
}
fullLines := ls.buffer[:i+1]
_, err = ls.Output.Write(scrub(fullLines))
if err != nil {
return
}
ls.buffer = ls.buffer[i+1:]
}
}


@@ -1,148 +0,0 @@
package safelog
import (
"bytes"
"log"
"testing"
)
//Check to make sure that addresses split across calls to write are still scrubbed
func TestLogScrubberSplit(t *testing.T) {
input := []byte("test\nhttp2: panic serving [2620:101:f000:780:9097:75b1:519f:dbb8]:58344: interface conversion: *http2.responseWriter is not http.Hijacker: missing method Hijack\n")
expected := "test\nhttp2: panic serving [scrubbed]: interface conversion: *http2.responseWriter is not http.Hijacker: missing method Hijack\n"
var buff bytes.Buffer
scrubber := &LogScrubber{Output: &buff}
n, err := scrubber.Write(input[:12]) //test\nhttp2:
if n != 12 {
t.Errorf("wrong number of bytes %d", n)
}
if err != nil {
t.Errorf("%q", err)
}
if buff.String() != "test\n" {
t.Errorf("Got %q, expected %q", buff.String(), "test\n")
}
n, err = scrubber.Write(input[12:30]) //panic serving [2620:101:f
if n != 18 {
t.Errorf("wrong number of bytes %d", n)
}
if err != nil {
t.Errorf("%q", err)
}
if buff.String() != "test\n" {
t.Errorf("Got %q, expected %q", buff.String(), "test\n")
}
n, err = scrubber.Write(input[30:]) //000:780:9097:75b1:519f:dbb8]:58344: interface conversion: *http2.responseWriter is not http.Hijacker: missing method Hijack\n
if n != (len(input) - 30) {
t.Errorf("wrong number of bytes %d", n)
}
if err != nil {
t.Errorf("%q", err)
}
if buff.String() != expected {
t.Errorf("Got %q, expected %q", buff.String(), expected)
}
}
//Test the log scrubber on known problematic log messages
func TestLogScrubberMessages(t *testing.T) {
for _, test := range []struct {
input, expected string
}{
{
"http: TLS handshake error from 129.97.208.23:38310: ",
"http: TLS handshake error from [scrubbed]: \n",
},
{
"http2: panic serving [2620:101:f000:780:9097:75b1:519f:dbb8]:58344: interface conversion: *http2.responseWriter is not http.Hijacker: missing method Hijack",
"http2: panic serving [scrubbed]: interface conversion: *http2.responseWriter is not http.Hijacker: missing method Hijack\n",
},
{
//Make sure it doesn't scrub fingerprint
"a=fingerprint:sha-256 33:B6:FA:F6:94:CA:74:61:45:4A:D2:1F:2C:2F:75:8A:D9:EB:23:34:B2:30:E9:1B:2A:A6:A9:E0:44:72:CC:74",
"a=fingerprint:sha-256 33:B6:FA:F6:94:CA:74:61:45:4A:D2:1F:2C:2F:75:8A:D9:EB:23:34:B2:30:E9:1B:2A:A6:A9:E0:44:72:CC:74\n",
},
{
//try with enclosing parens
"(1:2:3:4:c:d:e:f) {1:2:3:4:c:d:e:f}",
"([scrubbed]) {[scrubbed]}\n",
},
{
//Make sure it doesn't scrub timestamps
"2019/05/08 15:37:31 starting",
"2019/05/08 15:37:31 starting\n",
},
} {
var buff bytes.Buffer
log.SetFlags(0) //remove all extra log output for test comparisons
log.SetOutput(&LogScrubber{Output: &buff})
log.Print(test.input)
if buff.String() != test.expected {
t.Errorf("%q: got %q, expected %q", test.input, buff.String(), test.expected)
}
}
}
func TestLogScrubberGoodFormats(t *testing.T) {
for _, addr := range []string{
// IPv4
"1.2.3.4",
"255.255.255.255",
// IPv4 with port
"1.2.3.4:55",
"255.255.255.255:65535",
// IPv6
"1:2:3:4:c:d:e:f",
"1111:2222:3333:4444:CCCC:DDDD:EEEE:FFFF",
// IPv6 with brackets
"[1:2:3:4:c:d:e:f]",
"[1111:2222:3333:4444:CCCC:DDDD:EEEE:FFFF]",
// IPv6 with brackets and port
"[1:2:3:4:c:d:e:f]:55",
"[1111:2222:3333:4444:CCCC:DDDD:EEEE:FFFF]:65535",
// compressed IPv6
"::f",
"::d:e:f",
"1:2:3::",
"1:2:3::d:e:f",
"1:2:3:d:e:f::",
"::1:2:3:d:e:f",
"1111:2222:3333::DDDD:EEEE:FFFF",
// compressed IPv6 with brackets
"[::d:e:f]",
"[1:2:3::]",
"[1:2:3::d:e:f]",
"[1111:2222:3333::DDDD:EEEE:FFFF]",
"[1:2:3:4:5:6::8]",
"[1::7:8]",
// compressed IPv6 with brackets and port
"[1::]:58344",
"[::d:e:f]:55",
"[1:2:3::]:55",
"[1:2:3::d:e:f]:55",
"[1111:2222:3333::DDDD:EEEE:FFFF]:65535",
// IPv4-compatible and IPv4-mapped
"::255.255.255.255",
"::ffff:255.255.255.255",
"[::255.255.255.255]",
"[::ffff:255.255.255.255]",
"[::255.255.255.255]:65535",
"[::ffff:255.255.255.255]:65535",
"[::ffff:0:255.255.255.255]",
"[2001:db8:3:4::192.0.2.33]",
} {
var buff bytes.Buffer
log.SetFlags(0) //remove all extra log output for test comparisons
log.SetOutput(&LogScrubber{Output: &buff})
log.Print(addr)
if buff.String() != "[scrubbed]\n" {
t.Errorf("%q: Got %q, expected %q", addr, buff.String(), "[scrubbed]\n")
}
}
}


@@ -0,0 +1,18 @@
package sqsclient
import (
"context"
"github.com/aws/aws-sdk-go-v2/service/sqs"
)
type SQSClient interface {
ReceiveMessage(ctx context.Context, input *sqs.ReceiveMessageInput, optFns ...func(*sqs.Options)) (*sqs.ReceiveMessageOutput, error)
ListQueues(ctx context.Context, input *sqs.ListQueuesInput, optFns ...func(*sqs.Options)) (*sqs.ListQueuesOutput, error)
GetQueueAttributes(ctx context.Context, input *sqs.GetQueueAttributesInput, optFns ...func(*sqs.Options)) (*sqs.GetQueueAttributesOutput, error)
DeleteQueue(ctx context.Context, input *sqs.DeleteQueueInput, optFns ...func(*sqs.Options)) (*sqs.DeleteQueueOutput, error)
CreateQueue(ctx context.Context, input *sqs.CreateQueueInput, optFns ...func(*sqs.Options)) (*sqs.CreateQueueOutput, error)
SendMessage(ctx context.Context, input *sqs.SendMessageInput, optFns ...func(*sqs.Options)) (*sqs.SendMessageOutput, error)
DeleteMessage(ctx context.Context, input *sqs.DeleteMessageInput, optFns ...func(*sqs.Options)) (*sqs.DeleteMessageOutput, error)
GetQueueUrl(ctx context.Context, input *sqs.GetQueueUrlInput, optFns ...func(*sqs.Options)) (*sqs.GetQueueUrlOutput, error)
}


@@ -0,0 +1,196 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: common/sqsclient/sqsclient.go
// Package mock_sqsclient is a generated GoMock package.
package sqsclient
import (
context "context"
reflect "reflect"
sqs "github.com/aws/aws-sdk-go-v2/service/sqs"
gomock "github.com/golang/mock/gomock"
)
// MockSQSClient is a mock of SQSClient interface.
type MockSQSClient struct {
ctrl *gomock.Controller
recorder *MockSQSClientMockRecorder
}
// MockSQSClientMockRecorder is the mock recorder for MockSQSClient.
type MockSQSClientMockRecorder struct {
mock *MockSQSClient
}
// NewMockSQSClient creates a new mock instance.
func NewMockSQSClient(ctrl *gomock.Controller) *MockSQSClient {
mock := &MockSQSClient{ctrl: ctrl}
mock.recorder = &MockSQSClientMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockSQSClient) EXPECT() *MockSQSClientMockRecorder {
return m.recorder
}
// CreateQueue mocks base method.
func (m *MockSQSClient) CreateQueue(ctx context.Context, input *sqs.CreateQueueInput, optFns ...func(*sqs.Options)) (*sqs.CreateQueueOutput, error) {
m.ctrl.T.Helper()
varargs := []interface{}{ctx, input}
for _, a := range optFns {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "CreateQueue", varargs...)
ret0, _ := ret[0].(*sqs.CreateQueueOutput)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CreateQueue indicates an expected call of CreateQueue.
func (mr *MockSQSClientMockRecorder) CreateQueue(ctx, input interface{}, optFns ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{ctx, input}, optFns...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateQueue", reflect.TypeOf((*MockSQSClient)(nil).CreateQueue), varargs...)
}
// DeleteMessage mocks base method.
func (m *MockSQSClient) DeleteMessage(ctx context.Context, input *sqs.DeleteMessageInput, optFns ...func(*sqs.Options)) (*sqs.DeleteMessageOutput, error) {
m.ctrl.T.Helper()
varargs := []interface{}{ctx, input}
for _, a := range optFns {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "DeleteMessage", varargs...)
ret0, _ := ret[0].(*sqs.DeleteMessageOutput)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// DeleteMessage indicates an expected call of DeleteMessage.
func (mr *MockSQSClientMockRecorder) DeleteMessage(ctx, input interface{}, optFns ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{ctx, input}, optFns...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteMessage", reflect.TypeOf((*MockSQSClient)(nil).DeleteMessage), varargs...)
}
// DeleteQueue mocks base method.
func (m *MockSQSClient) DeleteQueue(ctx context.Context, input *sqs.DeleteQueueInput, optFns ...func(*sqs.Options)) (*sqs.DeleteQueueOutput, error) {
m.ctrl.T.Helper()
varargs := []interface{}{ctx, input}
for _, a := range optFns {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "DeleteQueue", varargs...)
ret0, _ := ret[0].(*sqs.DeleteQueueOutput)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// DeleteQueue indicates an expected call of DeleteQueue.
func (mr *MockSQSClientMockRecorder) DeleteQueue(ctx, input interface{}, optFns ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{ctx, input}, optFns...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteQueue", reflect.TypeOf((*MockSQSClient)(nil).DeleteQueue), varargs...)
}
// GetQueueAttributes mocks base method.
func (m *MockSQSClient) GetQueueAttributes(ctx context.Context, input *sqs.GetQueueAttributesInput, optFns ...func(*sqs.Options)) (*sqs.GetQueueAttributesOutput, error) {
m.ctrl.T.Helper()
varargs := []interface{}{ctx, input}
for _, a := range optFns {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "GetQueueAttributes", varargs...)
ret0, _ := ret[0].(*sqs.GetQueueAttributesOutput)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetQueueAttributes indicates an expected call of GetQueueAttributes.
func (mr *MockSQSClientMockRecorder) GetQueueAttributes(ctx, input interface{}, optFns ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{ctx, input}, optFns...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetQueueAttributes", reflect.TypeOf((*MockSQSClient)(nil).GetQueueAttributes), varargs...)
}
// GetQueueUrl mocks base method.
func (m *MockSQSClient) GetQueueUrl(ctx context.Context, input *sqs.GetQueueUrlInput, optFns ...func(*sqs.Options)) (*sqs.GetQueueUrlOutput, error) {
m.ctrl.T.Helper()
varargs := []interface{}{ctx, input}
for _, a := range optFns {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "GetQueueUrl", varargs...)
ret0, _ := ret[0].(*sqs.GetQueueUrlOutput)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetQueueUrl indicates an expected call of GetQueueUrl.
func (mr *MockSQSClientMockRecorder) GetQueueUrl(ctx, input interface{}, optFns ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{ctx, input}, optFns...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetQueueUrl", reflect.TypeOf((*MockSQSClient)(nil).GetQueueUrl), varargs...)
}
// ListQueues mocks base method.
func (m *MockSQSClient) ListQueues(ctx context.Context, input *sqs.ListQueuesInput, optFns ...func(*sqs.Options)) (*sqs.ListQueuesOutput, error) {
m.ctrl.T.Helper()
varargs := []interface{}{ctx, input}
for _, a := range optFns {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "ListQueues", varargs...)
ret0, _ := ret[0].(*sqs.ListQueuesOutput)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListQueues indicates an expected call of ListQueues.
func (mr *MockSQSClientMockRecorder) ListQueues(ctx, input interface{}, optFns ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{ctx, input}, optFns...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListQueues", reflect.TypeOf((*MockSQSClient)(nil).ListQueues), varargs...)
}
// ReceiveMessage mocks base method.
func (m *MockSQSClient) ReceiveMessage(ctx context.Context, input *sqs.ReceiveMessageInput, optFns ...func(*sqs.Options)) (*sqs.ReceiveMessageOutput, error) {
m.ctrl.T.Helper()
varargs := []interface{}{ctx, input}
for _, a := range optFns {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "ReceiveMessage", varargs...)
ret0, _ := ret[0].(*sqs.ReceiveMessageOutput)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ReceiveMessage indicates an expected call of ReceiveMessage.
func (mr *MockSQSClientMockRecorder) ReceiveMessage(ctx, input interface{}, optFns ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{ctx, input}, optFns...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ReceiveMessage", reflect.TypeOf((*MockSQSClient)(nil).ReceiveMessage), varargs...)
}
// SendMessage mocks base method.
func (m *MockSQSClient) SendMessage(ctx context.Context, input *sqs.SendMessageInput, optFns ...func(*sqs.Options)) (*sqs.SendMessageOutput, error) {
m.ctrl.T.Helper()
varargs := []interface{}{ctx, input}
for _, a := range optFns {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "SendMessage", varargs...)
ret0, _ := ret[0].(*sqs.SendMessageOutput)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// SendMessage indicates an expected call of SendMessage.
func (mr *MockSQSClientMockRecorder) SendMessage(ctx, input interface{}, optFns ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{ctx, input}, optFns...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SendMessage", reflect.TypeOf((*MockSQSClient)(nil).SendMessage), varargs...)
}


@@ -0,0 +1,36 @@
package main
import (
"fmt"
sqscreds "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/common/sqscreds/lib"
)
// This script can be run to generate the encoded SQS credentials to pass as a CLI param or SOCKS option to the client
func main() {
var accessKey, secretKey string
fmt.Print("Enter Access Key: ")
_, err := fmt.Scanln(&accessKey)
if err != nil {
fmt.Println("Error reading access key:", err)
return
}
fmt.Print("Enter Secret Key: ")
_, err = fmt.Scanln(&secretKey)
if err != nil {
fmt.Println("Error reading secret key:", err)
return
}
awsCreds := sqscreds.AwsCreds{AwsAccessKeyId: accessKey, AwsSecretKey: secretKey}
fmt.Println()
fmt.Println("Encoded Credentials:")
res, err := awsCreds.Base64()
if err != nil {
fmt.Println("Error encoding credentials:", err)
return
}
fmt.Println(res)
}


@@ -0,0 +1,35 @@
package sqscreds
import (
"encoding/base64"
"encoding/json"
)
type AwsCreds struct {
AwsAccessKeyId string `json:"aws-access-key-id"`
AwsSecretKey string `json:"aws-secret-key"`
}
func (awsCreds AwsCreds) Base64() (string, error) {
jsonData, err := json.Marshal(awsCreds)
if err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(jsonData), nil
}
func AwsCredsFromBase64(base64Str string) (AwsCreds, error) {
var awsCreds AwsCreds
jsonData, err := base64.StdEncoding.DecodeString(base64Str)
if err != nil {
return awsCreds, err
}
err = json.Unmarshal(jsonData, &awsCreds)
if err != nil {
return awsCreds, err
}
return awsCreds, nil
}

common/task/periodic.go Normal file

@@ -0,0 +1,123 @@
// Package task provides a periodic task runner.
// Reused from https://github.com/v2fly/v2ray-core/blob/784775f68922f07d40c9eead63015b2026af2ade/common/task/periodic.go
/*
The MIT License (MIT)
Copyright (c) 2015-2021 V2Ray & V2Fly Community
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
package task
import (
"sync"
"time"
)
// Periodic is a task that runs periodically.
type Periodic struct {
// Interval of the task being run
Interval time.Duration
// Execute is the task function
Execute func() error
// OnError handles the error of the task
OnError func(error)
access sync.Mutex
timer *time.Timer
running bool
}
func (t *Periodic) hasClosed() bool {
t.access.Lock()
defer t.access.Unlock()
return !t.running
}
func (t *Periodic) checkedExecute() error {
if t.hasClosed() {
return nil
}
if err := t.Execute(); err != nil {
if t.OnError != nil {
t.OnError(err)
} else {
// default error handling is to shut down the task
t.access.Lock()
t.running = false
t.access.Unlock()
return err
}
}
t.access.Lock()
defer t.access.Unlock()
if !t.running {
return nil
}
t.timer = time.AfterFunc(t.Interval, func() {
t.checkedExecute()
})
return nil
}
// Start implements common.Runnable.
func (t *Periodic) Start() error {
t.access.Lock()
if t.running {
t.access.Unlock()
return nil
}
t.running = true
t.access.Unlock()
if err := t.checkedExecute(); err != nil {
t.access.Lock()
t.running = false
t.access.Unlock()
return err
}
return nil
}
func (t *Periodic) WaitThenStart() {
time.AfterFunc(t.Interval, func() {
t.Start()
})
}
// Close implements common.Closable.
func (t *Periodic) Close() error {
t.access.Lock()
defer t.access.Unlock()
t.running = false
if t.timer != nil {
t.timer.Stop()
t.timer = nil
}
return nil
}

View file

@ -0,0 +1,28 @@
package turbotunnel
import (
"crypto/rand"
"encoding/hex"
)
// ClientID is an abstract identifier that binds together all the communications
// belonging to a single client session, even though those communications may
// arrive from multiple IP addresses or over multiple lower-level connections.
// It plays the same role that an (IP address, port number) tuple plays in a
// net.UDPConn: it's the return address pertaining to a long-lived abstract
// client session. The client attaches its ClientID to each of its
// communications, enabling the server to disambiguate requests among its many
// clients. ClientID implements the net.Addr interface.
type ClientID [8]byte
func NewClientID() ClientID {
var id ClientID
_, err := rand.Read(id[:])
if err != nil {
panic(err)
}
return id
}
func (id ClientID) Network() string { return "clientid" }
func (id ClientID) String() string { return hex.EncodeToString(id[:]) }

View file

@ -0,0 +1,146 @@
package turbotunnel
import (
"container/heap"
"net"
"sync"
"time"
)
// clientRecord is a record of a recently seen client, with the time it was last
// seen and a send queue.
type clientRecord struct {
Addr net.Addr
LastSeen time.Time
SendQueue chan []byte
}
// ClientMap manages a mapping of live clients (keyed by address, which will be
// a ClientID) to their respective send queues. ClientMap's functions are safe
// to call from multiple goroutines.
type ClientMap struct {
// We use an inner structure to avoid exposing public heap.Interface
// functions to users of clientMap.
inner clientMapInner
// Synchronizes access to inner.
lock sync.Mutex
}
// NewClientMap creates a ClientMap that expires clients after a timeout.
//
// The timeout does not have to be kept in sync with smux's internal idle
// timeout. If a client is removed from the client map while the smux session is
// still live, the worst that can happen is a loss of whatever packets were in
// the send queue at the time. If smux later decides to send more packets to the
// same client, we'll instantiate a new send queue, and if the client ever
// connects again with the proper client ID, we'll deliver them.
func NewClientMap(timeout time.Duration) *ClientMap {
m := &ClientMap{
inner: clientMapInner{
byAge: make([]*clientRecord, 0),
byAddr: make(map[net.Addr]int),
},
}
go func() {
for {
time.Sleep(timeout / 2)
now := time.Now()
m.lock.Lock()
m.inner.removeExpired(now, timeout)
m.lock.Unlock()
}
}()
return m
}
// SendQueue returns the send queue corresponding to addr, creating it if
// necessary.
func (m *ClientMap) SendQueue(addr net.Addr) chan []byte {
m.lock.Lock()
queue := m.inner.SendQueue(addr, time.Now())
m.lock.Unlock()
return queue
}
// clientMapInner is the inner type of ClientMap, implementing heap.Interface.
// byAge is the backing store, a heap ordered by LastSeen time, to facilitate
// expiring old client records. byAddr is a map from addresses (i.e., ClientIDs)
// to heap indices, to allow looking up by address. Unlike ClientMap,
// clientMapInner requires external synchronization.
type clientMapInner struct {
byAge []*clientRecord
byAddr map[net.Addr]int
}
// removeExpired removes all client records whose LastSeen timestamp is more
// than timeout in the past.
func (inner *clientMapInner) removeExpired(now time.Time, timeout time.Duration) {
for len(inner.byAge) > 0 && now.Sub(inner.byAge[0].LastSeen) >= timeout {
heap.Pop(inner)
}
}
// SendQueue finds the existing client record corresponding to addr, or creates
// a new one if none exists yet. It updates the client record's LastSeen time
// and returns its SendQueue.
func (inner *clientMapInner) SendQueue(addr net.Addr, now time.Time) chan []byte {
var record *clientRecord
i, ok := inner.byAddr[addr]
if ok {
// Found one, update its LastSeen.
record = inner.byAge[i]
record.LastSeen = now
heap.Fix(inner, i)
} else {
// Not found, create a new one.
record = &clientRecord{
Addr: addr,
LastSeen: now,
SendQueue: make(chan []byte, queueSize),
}
heap.Push(inner, record)
}
return record.SendQueue
}
// heap.Interface for clientMapInner.
func (inner *clientMapInner) Len() int {
if len(inner.byAge) != len(inner.byAddr) {
panic("inconsistent clientMap")
}
return len(inner.byAge)
}
func (inner *clientMapInner) Less(i, j int) bool {
return inner.byAge[i].LastSeen.Before(inner.byAge[j].LastSeen)
}
func (inner *clientMapInner) Swap(i, j int) {
inner.byAge[i], inner.byAge[j] = inner.byAge[j], inner.byAge[i]
inner.byAddr[inner.byAge[i].Addr] = i
inner.byAddr[inner.byAge[j].Addr] = j
}
func (inner *clientMapInner) Push(x interface{}) {
record := x.(*clientRecord)
if _, ok := inner.byAddr[record.Addr]; ok {
panic("duplicate address in clientMap")
}
// Insert into byAddr map.
inner.byAddr[record.Addr] = len(inner.byAge)
// Insert into byAge slice.
inner.byAge = append(inner.byAge, record)
}
func (inner *clientMapInner) Pop() interface{} {
n := len(inner.byAddr)
// Remove from byAge slice.
record := inner.byAge[n-1]
inner.byAge[n-1] = nil
inner.byAge = inner.byAge[:n-1]
// Remove from byAddr map.
delete(inner.byAddr, record.Addr)
close(record.SendQueue)
return record
}

View file

@ -0,0 +1,18 @@
package turbotunnel
import (
"testing"
"time"
)
// Benchmark the ClientMap.SendQueue function. This is mainly measuring the cost
// of the mutex operations around the call to clientMapInner.SendQueue.
func BenchmarkSendQueue(b *testing.B) {
m := NewClientMap(1 * time.Hour)
id := NewClientID()
m.SendQueue(id) // populate the entry for id
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.SendQueue(id)
}
}

View file

@ -0,0 +1,17 @@
// Package turbotunnel provides support for overlaying a virtual net.PacketConn
// on some other network carrier.
//
// https://github.com/net4people/bbs/issues/9
package turbotunnel
import "errors"
// This magic prefix is how a client opts into turbo tunnel mode. It is just a
// randomly generated byte string.
var Token = [8]byte{0x12, 0x93, 0x60, 0x5d, 0x27, 0x81, 0x75, 0xf5}
// The size of receive and send queues.
const queueSize = 512
var errClosedPacketConn = errors.New("operation on closed connection")
var errNotImplemented = errors.New("not implemented")

View file

@ -0,0 +1,168 @@
package turbotunnel
import (
"net"
"sync"
"sync/atomic"
"time"
)
// taggedPacket is a combination of a []byte and a net.Addr, encapsulating the
// return type of PacketConn.ReadFrom.
type taggedPacket struct {
P []byte
Addr net.Addr
}
// QueuePacketConn implements net.PacketConn by storing queues of packets. There
// is one incoming queue (where packets are additionally tagged by the source
// address of the client that sent them). There are many outgoing queues, one
// for each client address that has been recently seen. The QueueIncoming method
// inserts a packet into the incoming queue, to eventually be returned by
// ReadFrom. WriteTo inserts a packet into an address-specific outgoing queue,
// which can later be accessed through the OutgoingQueue method.
type QueuePacketConn struct {
clients *ClientMap
localAddr net.Addr
recvQueue chan taggedPacket
closeOnce sync.Once
closed chan struct{}
mtu int
// Pool of reusable mtu-sized buffers.
bufPool sync.Pool
// What error to return when the QueuePacketConn is closed.
err atomic.Value
}
// NewQueuePacketConn makes a new QueuePacketConn, set to track recent clients
// for at least a duration of timeout. The maximum packet size is mtu.
func NewQueuePacketConn(localAddr net.Addr, timeout time.Duration, mtu int) *QueuePacketConn {
return &QueuePacketConn{
clients: NewClientMap(timeout),
localAddr: localAddr,
recvQueue: make(chan taggedPacket, queueSize),
closed: make(chan struct{}),
mtu: mtu,
bufPool: sync.Pool{New: func() interface{} { return make([]byte, mtu) }},
}
}
// QueueIncoming queues an incoming packet and its source address, to be
// returned in a future call to ReadFrom. If p is longer than the MTU, only its
// first MTU bytes will be used.
func (c *QueuePacketConn) QueueIncoming(p []byte, addr net.Addr) {
select {
case <-c.closed:
// If we're closed, silently drop it.
return
default:
}
// Copy the slice so that the caller may reuse it.
buf := c.bufPool.Get().([]byte)
if len(p) < cap(buf) {
buf = buf[:len(p)]
} else {
buf = buf[:cap(buf)]
}
copy(buf, p)
select {
case c.recvQueue <- taggedPacket{buf, addr}:
default:
// Drop the incoming packet if the receive queue is full.
c.Restore(buf)
}
}
// OutgoingQueue returns the queue of outgoing packets corresponding to addr,
// creating it if necessary. The contents of the queue will be packets that are
// written to the address in question using WriteTo.
func (c *QueuePacketConn) OutgoingQueue(addr net.Addr) <-chan []byte {
return c.clients.SendQueue(addr)
}
// Restore adds a slice to the internal pool of packet buffers. Typically you
// will call this with a slice from the OutgoingQueue channel once you are done
// using it. (It is not an error to fail to do so, it will just result in more
// allocations.)
func (c *QueuePacketConn) Restore(p []byte) {
if cap(p) >= c.mtu {
c.bufPool.Put(p)
}
}
// ReadFrom returns a packet and address previously stored by QueueIncoming.
func (c *QueuePacketConn) ReadFrom(p []byte) (int, net.Addr, error) {
select {
case <-c.closed:
return 0, nil, &net.OpError{Op: "read", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
default:
}
select {
case <-c.closed:
return 0, nil, &net.OpError{Op: "read", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
case packet := <-c.recvQueue:
n := copy(p, packet.P)
c.Restore(packet.P)
return n, packet.Addr, nil
}
}
// WriteTo queues an outgoing packet for the given address. The queue can later
// be retrieved using the OutgoingQueue method. If p is longer than the MTU,
// only its first MTU bytes will be used.
func (c *QueuePacketConn) WriteTo(p []byte, addr net.Addr) (int, error) {
select {
case <-c.closed:
return 0, &net.OpError{Op: "write", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
default:
}
// Copy the slice so that the caller may reuse it.
buf := c.bufPool.Get().([]byte)
if len(p) < cap(buf) {
buf = buf[:len(p)]
} else {
buf = buf[:cap(buf)]
}
copy(buf, p)
select {
case c.clients.SendQueue(addr) <- buf:
return len(buf), nil
default:
// Drop the outgoing packet if the send queue is full.
c.Restore(buf)
return len(p), nil
}
}
// closeWithError unblocks pending operations and makes future operations fail
// with the given error. If err is nil, it becomes errClosedPacketConn.
func (c *QueuePacketConn) closeWithError(err error) error {
var newlyClosed bool
c.closeOnce.Do(func() {
newlyClosed = true
// Store the error to be returned by future PacketConn
// operations.
if err == nil {
err = errClosedPacketConn
}
c.err.Store(err)
close(c.closed)
})
if !newlyClosed {
return &net.OpError{Op: "close", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
}
return nil
}
// Close unblocks pending operations and makes future operations fail with a
// "closed connection" error.
func (c *QueuePacketConn) Close() error {
return c.closeWithError(nil)
}
// LocalAddr returns the localAddr value that was passed to NewQueuePacketConn.
func (c *QueuePacketConn) LocalAddr() net.Addr { return c.localAddr }
func (c *QueuePacketConn) SetDeadline(t time.Time) error { return errNotImplemented }
func (c *QueuePacketConn) SetReadDeadline(t time.Time) error { return errNotImplemented }
func (c *QueuePacketConn) SetWriteDeadline(t time.Time) error { return errNotImplemented }

View file

@ -0,0 +1,242 @@
package turbotunnel
import (
"bytes"
"fmt"
"net"
"sync"
"testing"
"time"
"github.com/xtaci/kcp-go/v5"
)
type emptyAddr struct{}
func (_ emptyAddr) Network() string { return "empty" }
func (_ emptyAddr) String() string { return "empty" }
type intAddr int
func (i intAddr) Network() string { return "int" }
func (i intAddr) String() string { return fmt.Sprintf("%d", i) }
// Run with -benchmem to see memory allocations.
func BenchmarkQueueIncoming(b *testing.B) {
conn := NewQueuePacketConn(emptyAddr{}, 1*time.Hour, 500)
defer conn.Close()
b.ResetTimer()
var p [500]byte
for i := 0; i < b.N; i++ {
conn.QueueIncoming(p[:], emptyAddr{})
}
b.StopTimer()
}
// BenchmarkWriteTo benchmarks the QueuePacketConn.WriteTo function.
func BenchmarkWriteTo(b *testing.B) {
conn := NewQueuePacketConn(emptyAddr{}, 1*time.Hour, 500)
defer conn.Close()
b.ResetTimer()
var p [500]byte
for i := 0; i < b.N; i++ {
conn.WriteTo(p[:], emptyAddr{})
}
b.StopTimer()
}
// TestQueueIncomingOversize tests that QueueIncoming truncates packets that are
// larger than the MTU.
func TestQueueIncomingOversize(t *testing.T) {
const payload = "abcdefghijklmnopqrstuvwxyz"
conn := NewQueuePacketConn(emptyAddr{}, 1*time.Hour, len(payload)-1)
defer conn.Close()
conn.QueueIncoming([]byte(payload), emptyAddr{})
var p [500]byte
n, _, err := conn.ReadFrom(p[:])
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(p[:n], []byte(payload[:len(payload)-1])) {
t.Fatalf("payload was %+q, expected %+q", p[:n], payload[:len(payload)-1])
}
}
// TestWriteToOversize tests that WriteTo truncates packets that are larger than
// the MTU.
func TestWriteToOversize(t *testing.T) {
const payload = "abcdefghijklmnopqrstuvwxyz"
conn := NewQueuePacketConn(emptyAddr{}, 1*time.Hour, len(payload)-1)
defer conn.Close()
conn.WriteTo([]byte(payload), emptyAddr{})
p := <-conn.OutgoingQueue(emptyAddr{})
if !bytes.Equal(p, []byte(payload[:len(payload)-1])) {
t.Fatalf("payload was %+q, expected %+q", p, payload[:len(payload)-1])
}
}
// TestRestoreMTU tests that Restore ignores any inputs that are not at least
// MTU-sized.
func TestRestoreMTU(t *testing.T) {
const mtu = 500
const payload = "hello"
conn := NewQueuePacketConn(emptyAddr{}, 1*time.Hour, mtu)
defer conn.Close()
conn.Restore(make([]byte, mtu-1))
// This WriteTo may use the short slice we just gave to Restore.
conn.WriteTo([]byte(payload), emptyAddr{})
// Read the queued slice and ensure its capacity is at least the MTU.
p := <-conn.OutgoingQueue(emptyAddr{})
if cap(p) != mtu {
t.Fatalf("cap was %v, expected %v", cap(p), mtu)
}
// Check the payload while we're at it.
if !bytes.Equal(p, []byte(payload)) {
t.Fatalf("payload was %+q, expected %+q", p, payload)
}
}
// TestRestoreCap tests that Restore can use slices whose cap is at least the
// MTU, even if the len is shorter.
func TestRestoreCap(t *testing.T) {
const mtu = 500
const payload = "hello"
conn := NewQueuePacketConn(emptyAddr{}, 1*time.Hour, mtu)
defer conn.Close()
conn.Restore(make([]byte, 0, mtu))
conn.WriteTo([]byte(payload), emptyAddr{})
p := <-conn.OutgoingQueue(emptyAddr{})
if !bytes.Equal(p, []byte(payload)) {
t.Fatalf("payload was %+q, expected %+q", p, payload)
}
}
// DiscardPacketConn is a net.PacketConn whose ReadFrom method blocks forever and
// whose WriteTo method discards whatever it is called with.
type DiscardPacketConn struct{}
func (_ DiscardPacketConn) ReadFrom(_ []byte) (int, net.Addr, error) { select {} } // block forever
func (_ DiscardPacketConn) WriteTo(p []byte, _ net.Addr) (int, error) { return len(p), nil }
func (_ DiscardPacketConn) Close() error { return nil }
func (_ DiscardPacketConn) LocalAddr() net.Addr { return emptyAddr{} }
func (_ DiscardPacketConn) SetDeadline(t time.Time) error { return nil }
func (_ DiscardPacketConn) SetReadDeadline(t time.Time) error { return nil }
func (_ DiscardPacketConn) SetWriteDeadline(t time.Time) error { return nil }
// TranscriptPacketConn keeps a log of the []byte argument to every call to
// WriteTo.
type TranscriptPacketConn struct {
Transcript [][]byte
lock sync.Mutex
net.PacketConn
}
func NewTranscriptPacketConn(inner net.PacketConn) *TranscriptPacketConn {
return &TranscriptPacketConn{
PacketConn: inner,
}
}
func (c *TranscriptPacketConn) WriteTo(p []byte, addr net.Addr) (int, error) {
c.lock.Lock()
defer c.lock.Unlock()
p2 := make([]byte, len(p))
copy(p2, p)
c.Transcript = append(c.Transcript, p2)
return c.PacketConn.WriteTo(p, addr)
}
func (c *TranscriptPacketConn) Length() int {
c.lock.Lock()
defer c.lock.Unlock()
return len(c.Transcript)
}
// Tests that QueuePacketConn.WriteTo is compatible with the way kcp-go uses
// PacketConn, allocating source buffers in a sync.Pool.
//
// https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/40260
func TestQueuePacketConnWriteToKCP(t *testing.T) {
// Start a goroutine to constantly exercise kcp UDPSession.tx, writing
// packets with payload "XXXX".
done := make(chan struct{})
defer close(done)
ready := make(chan struct{})
go func() {
var readyClose sync.Once
defer readyClose.Do(func() { close(ready) })
pconn := DiscardPacketConn{}
defer pconn.Close()
loop:
for {
select {
case <-done:
break loop
default:
}
// Create a new UDPSession, send once, then discard the
// UDPSession.
conn, err := kcp.NewConn2(intAddr(2), nil, 0, 0, pconn)
if err != nil {
panic(err)
}
_, err = conn.Write([]byte("XXXX"))
if err != nil {
panic(err)
}
conn.Close()
// Signal the main test to start once we have done one
// iteration of this noisy loop.
readyClose.Do(func() { close(ready) })
}
}()
pconn := NewQueuePacketConn(emptyAddr{}, 1*time.Hour, 500)
defer pconn.Close()
addr1 := intAddr(1)
outgoing := pconn.OutgoingQueue(addr1)
// Once the "XXXX" goroutine is started, repeatedly send a packet, wait,
// then retrieve it and check whether it has changed since being sent.
<-ready
for i := 0; i < 10; i++ {
transcript := NewTranscriptPacketConn(pconn)
conn, err := kcp.NewConn2(addr1, nil, 0, 0, transcript)
if err != nil {
panic(err)
}
_, err = conn.Write([]byte("hello world"))
if err != nil {
panic(err)
}
// A sleep after the Write makes buffer reuse more likely
// and allows the connection to flush before close.
time.Sleep(500 * time.Millisecond)
err = conn.Close()
if err != nil {
panic(err)
}
if transcript.Length() == 0 {
panic("empty transcript")
}
for j, tr := range transcript.Transcript {
p := <-outgoing
// This test is meant to detect unsynchronized memory
// changes, so freeze the slice we just read.
p2 := make([]byte, len(p))
copy(p2, p)
if !bytes.Equal(p2, tr) {
t.Fatalf("%d %d packet changed between send and recv\nsend: %+q\nrecv: %+q", i, j, tr, p2)
}
}
}
}

View file

@ -0,0 +1,204 @@
package turbotunnel
import (
"context"
"errors"
"net"
"sync"
"sync/atomic"
"time"
)
// RedialPacketConn implements a long-lived net.PacketConn atop a sequence of
// other, transient net.PacketConns. RedialPacketConn creates a new
// net.PacketConn by calling a provided dialContext function. Whenever the
// net.PacketConn experiences a ReadFrom or WriteTo error, RedialPacketConn
// calls the dialContext function again and starts sending and receiving packets
// on the new net.PacketConn. RedialPacketConn's own ReadFrom and WriteTo
// methods return an error only when the dialContext function returns an error.
//
// RedialPacketConn uses static local and remote addresses that are independent
// of those of any dialed net.PacketConn.
type RedialPacketConn struct {
localAddr net.Addr
remoteAddr net.Addr
dialContext func(context.Context) (net.PacketConn, error)
recvQueue chan []byte
sendQueue chan []byte
closed chan struct{}
closeOnce sync.Once
// The first dial error, which causes the RedialPacketConn to be
// closed and is returned from future read/write operations. Compare to
// the rerr and werr in io.Pipe.
err atomic.Value
}
// NewRedialPacketConn makes a new RedialPacketConn, with the given static local
// and remote addresses, and dialContext function.
func NewRedialPacketConn(
localAddr, remoteAddr net.Addr,
dialContext func(context.Context) (net.PacketConn, error),
) *RedialPacketConn {
c := &RedialPacketConn{
localAddr: localAddr,
remoteAddr: remoteAddr,
dialContext: dialContext,
recvQueue: make(chan []byte, queueSize),
sendQueue: make(chan []byte, queueSize),
closed: make(chan struct{}),
err: atomic.Value{},
}
go c.dialLoop()
return c
}
// dialLoop repeatedly calls c.dialContext and passes the resulting
// net.PacketConn to c.exchange. It returns only when c is closed or dialContext
// returns an error.
func (c *RedialPacketConn) dialLoop() {
ctx, cancel := context.WithCancel(context.Background())
for {
select {
case <-c.closed:
cancel()
return
default:
}
conn, err := c.dialContext(ctx)
if err != nil {
c.closeWithError(err)
cancel()
return
}
c.exchange(conn)
conn.Close()
}
}
// exchange calls ReadFrom on the given net.PacketConn and places the resulting
// packets in the receive queue, and takes packets from the send queue and calls
// WriteTo on them, making the current net.PacketConn active.
func (c *RedialPacketConn) exchange(conn net.PacketConn) {
readErrCh := make(chan error)
writeErrCh := make(chan error)
go func() {
defer close(readErrCh)
for {
select {
case <-c.closed:
return
case <-writeErrCh:
return
default:
}
var buf [1500]byte
n, _, err := conn.ReadFrom(buf[:])
if err != nil {
readErrCh <- err
return
}
p := make([]byte, n)
copy(p, buf[:])
select {
case c.recvQueue <- p:
default: // OK to drop packets.
}
}
}()
go func() {
defer close(writeErrCh)
for {
select {
case <-c.closed:
return
case <-readErrCh:
return
case p := <-c.sendQueue:
_, err := conn.WriteTo(p, c.remoteAddr)
if err != nil {
writeErrCh <- err
return
}
}
}
}()
select {
case <-readErrCh:
case <-writeErrCh:
}
}
// ReadFrom reads a packet from the currently active net.PacketConn. The
// packet's original remote address is replaced with the RedialPacketConn's own
// remote address.
func (c *RedialPacketConn) ReadFrom(p []byte) (int, net.Addr, error) {
select {
case <-c.closed:
return 0, nil, &net.OpError{Op: "read", Net: c.LocalAddr().Network(), Source: c.LocalAddr(), Addr: c.remoteAddr, Err: c.err.Load().(error)}
default:
}
select {
case <-c.closed:
return 0, nil, &net.OpError{Op: "read", Net: c.LocalAddr().Network(), Source: c.LocalAddr(), Addr: c.remoteAddr, Err: c.err.Load().(error)}
case buf := <-c.recvQueue:
return copy(p, buf), c.remoteAddr, nil
}
}
// WriteTo writes a packet to the currently active net.PacketConn. The addr
// argument is ignored and instead replaced with the RedialPacketConn's own
// remote address.
func (c *RedialPacketConn) WriteTo(p []byte, addr net.Addr) (int, error) {
// addr is ignored.
select {
case <-c.closed:
return 0, &net.OpError{Op: "write", Net: c.LocalAddr().Network(), Source: c.LocalAddr(), Addr: c.remoteAddr, Err: c.err.Load().(error)}
default:
}
buf := make([]byte, len(p))
copy(buf, p)
select {
case c.sendQueue <- buf:
return len(buf), nil
default:
// Drop the outgoing packet if the send queue is full.
return len(buf), nil
}
}
// closeWithError unblocks pending operations and makes future operations fail
// with the given error. If err is nil, it becomes errClosedPacketConn.
func (c *RedialPacketConn) closeWithError(err error) error {
var once bool
c.closeOnce.Do(func() {
// Store the error to be returned by future read/write
// operations.
if err == nil {
err = errors.New("operation on closed connection")
}
c.err.Store(err)
close(c.closed)
once = true
})
if !once {
return &net.OpError{Op: "close", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
}
return nil
}
// Close unblocks pending operations and makes future operations fail with a
// "closed connection" error.
func (c *RedialPacketConn) Close() error {
return c.closeWithError(nil)
}
// LocalAddr returns the localAddr value that was passed to NewRedialPacketConn.
func (c *RedialPacketConn) LocalAddr() net.Addr { return c.localAddr }
func (c *RedialPacketConn) SetDeadline(t time.Time) error { return errNotImplemented }
func (c *RedialPacketConn) SetReadDeadline(t time.Time) error { return errNotImplemented }
func (c *RedialPacketConn) SetWriteDeadline(t time.Time) error { return errNotImplemented }

common/util/util.go Normal file
View file

@ -0,0 +1,173 @@
package util
import (
"encoding/json"
"errors"
"log"
"net"
"net/http"
"slices"
"sort"
"github.com/pion/ice/v4"
"github.com/pion/sdp/v3"
"github.com/pion/webrtc/v4"
"github.com/realclientip/realclientip-go"
)
func SerializeSessionDescription(desc *webrtc.SessionDescription) (string, error) {
bytes, err := json.Marshal(*desc)
if err != nil {
return "", err
}
return string(bytes), nil
}
func DeserializeSessionDescription(msg string) (*webrtc.SessionDescription, error) {
var parsed map[string]interface{}
err := json.Unmarshal([]byte(msg), &parsed)
if err != nil {
return nil, err
}
if _, ok := parsed["type"]; !ok {
return nil, errors.New("cannot deserialize SessionDescription without type field")
}
if _, ok := parsed["sdp"]; !ok {
return nil, errors.New("cannot deserialize SessionDescription without sdp field")
}
var stype webrtc.SDPType
switch parsed["type"].(string) {
default:
return nil, errors.New("Unknown SDP type")
case "offer":
stype = webrtc.SDPTypeOffer
case "pranswer":
stype = webrtc.SDPTypePranswer
case "answer":
stype = webrtc.SDPTypeAnswer
case "rollback":
stype = webrtc.SDPTypeRollback
}
return &webrtc.SessionDescription{
Type: stype,
SDP: parsed["sdp"].(string),
}, nil
}
func IsLocal(ip net.IP) bool {
if ip.IsPrivate() {
return true
}
// Dynamic Configuration as per https://tools.ietf.org/html/rfc3927
if ip.IsLinkLocalUnicast() {
return true
}
if ip4 := ip.To4(); ip4 != nil {
// Carrier-Grade NAT as per https://tools.ietf.org/html/rfc6598
if ip4[0] == 100 && ip4[1]&0xc0 == 64 {
return true
}
}
return false
}
// Removes local LAN address ICE candidates
//
// This is unused after https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/442,
// but may come in handy later for https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40322
// Also this is exported, so let's not remove it at least until
// the next major release.
func StripLocalAddresses(str string) string {
var desc sdp.SessionDescription
err := desc.Unmarshal([]byte(str))
if err != nil {
return str
}
for _, m := range desc.MediaDescriptions {
attrs := make([]sdp.Attribute, 0)
for _, a := range m.Attributes {
if a.IsICECandidate() {
c, err := ice.UnmarshalCandidate(a.Value)
if err == nil && c.Type() == ice.CandidateTypeHost {
ip := net.ParseIP(c.Address())
if ip != nil && (IsLocal(ip) || ip.IsUnspecified() || ip.IsLoopback()) {
/* no append in this case */
continue
}
}
}
attrs = append(attrs, a)
}
m.Attributes = attrs
}
bts, err := desc.Marshal()
if err != nil {
return str
}
return string(bts)
}
// Attempts to retrieve the client IP address from which the HTTP request originated.
// There is no standard way to do this since the original client IP can be included in a number of different headers,
// depending on the proxies and load balancers between the client and the server. We attempt to check as many of these
// headers as possible to determine a "best guess" of the client IP.
// Using this as a reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Forwarded
func GetClientIp(req *http.Request) string {
// We check the "Forwarded" header first, followed by the "X-Forwarded-For" header, and then use the "RemoteAddr" as
// a last resort. We use the leftmost address since it is the closest one to the client.
strat := realclientip.NewChainStrategy(
realclientip.Must(realclientip.NewLeftmostNonPrivateStrategy("Forwarded")),
realclientip.Must(realclientip.NewLeftmostNonPrivateStrategy("X-Forwarded-For")),
realclientip.RemoteAddrStrategy{},
)
clientIp := strat.ClientIP(req.Header, req.RemoteAddr)
return clientIp
}
// Returns a list of IP addresses of ICE candidates, roughly in descending order of accuracy for geolocation.
func GetCandidateAddrs(sdpStr string) []net.IP {
var desc sdp.SessionDescription
err := desc.Unmarshal([]byte(sdpStr))
if err != nil {
log.Printf("GetCandidateAddrs: failed to unmarshal SDP: %v\n", err)
return []net.IP{}
}
iceCandidates := make([]ice.Candidate, 0)
for _, m := range desc.MediaDescriptions {
for _, a := range m.Attributes {
if a.IsICECandidate() {
c, err := ice.UnmarshalCandidate(a.Value)
if err == nil {
iceCandidates = append(iceCandidates, c)
}
}
}
}
// ICE candidates are first sorted in ascending order of priority, to match the convention of providing a
// custom Less function to sort.Slice; the slice is reversed afterward.
sort.Slice(iceCandidates, func(i, j int) bool {
if iceCandidates[i].Type() != iceCandidates[j].Type() {
// Sort by candidate type first, in the order specified in https://datatracker.ietf.org/doc/html/rfc8445#section-5.1.2.2
// Higher priority candidate types are more efficient, which likely means they are closer to the client
// itself, providing a more accurate result for geolocation
return iceCandidates[i].Type().Preference() < iceCandidates[j].Type().Preference()
}
// Break ties with the ICE candidate's priority property
return iceCandidates[i].Priority() < iceCandidates[j].Priority()
})
slices.Reverse(iceCandidates)
sortedIpAddr := make([]net.IP, 0)
for _, c := range iceCandidates {
ip := net.ParseIP(c.Address())
if ip != nil {
sortedIpAddr = append(sortedIpAddr, ip)
}
}
return sortedIpAddr
}

common/util/util_test.go Normal file
View file

@ -0,0 +1,75 @@
package util
import (
"net"
"net/http"
"testing"
. "github.com/smartystreets/goconvey/convey"
)
func TestUtil(t *testing.T) {
Convey("Strip", t, func() {
const offerStart = "v=0\r\no=- 4358805017720277108 2 IN IP4 8.8.8.8\r\ns=-\r\nt=0 0\r\na=group:BUNDLE data\r\na=msid-semantic: WMS\r\nm=application 56688 DTLS/SCTP 5000\r\nc=IN IP4 8.8.8.8\r\n"
const goodCandidate = "a=candidate:3769337065 1 udp 2122260223 8.8.8.8 56688 typ host generation 0 network-id 1 network-cost 50\r\n"
const offerEnd = "a=ice-ufrag:aMAZ\r\na=ice-pwd:jcHb08Jjgrazp2dzjdrvPPvV\r\na=ice-options:trickle\r\na=fingerprint:sha-256 C8:88:EE:B9:E7:02:2E:21:37:ED:7A:D1:EB:2B:A3:15:A2:3B:5B:1C:3D:D4:D5:1F:06:CF:52:40:03:F8:DD:66\r\na=setup:actpass\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"
offer := offerStart + goodCandidate +
"a=candidate:3769337065 1 udp 2122260223 192.168.0.100 56688 typ host generation 0 network-id 1 network-cost 50\r\n" + // IsLocal IPv4
"a=candidate:3769337065 1 udp 2122260223 100.127.50.5 56688 typ host generation 0 network-id 1 network-cost 50\r\n" + // IsLocal IPv4
"a=candidate:3769337065 1 udp 2122260223 169.254.250.88 56688 typ host generation 0 network-id 1 network-cost 50\r\n" + // IsLocal IPv4
"a=candidate:3769337065 1 udp 2122260223 fdf8:f53b:82e4::53 56688 typ host generation 0 network-id 1 network-cost 50\r\n" + // IsLocal IPv6
"a=candidate:3769337065 1 udp 2122260223 0.0.0.0 56688 typ host generation 0 network-id 1 network-cost 50\r\n" + // IsUnspecified IPv4
"a=candidate:3769337065 1 udp 2122260223 :: 56688 typ host generation 0 network-id 1 network-cost 50\r\n" + // IsUnspecified IPv6
"a=candidate:3769337065 1 udp 2122260223 127.0.0.1 56688 typ host generation 0 network-id 1 network-cost 50\r\n" + // IsLoopback IPv4
"a=candidate:3769337065 1 udp 2122260223 ::1 56688 typ host generation 0 network-id 1 network-cost 50\r\n" + // IsLoopback IPv6
offerEnd
So(StripLocalAddresses(offer), ShouldEqual, offerStart+goodCandidate+offerEnd)
})
Convey("GetClientIp", t, func() {
// Should use Forwarded header
req1, _ := http.NewRequest("GET", "https://example.com", nil)
req1.Header.Add("X-Forwarded-For", "1.1.1.1, 2001:db8:cafe::99%eth0, 3.3.3.3, 192.168.1.1")
req1.Header.Add("Forwarded", `For=fe80::abcd;By=fe80::1234, Proto=https;For=::ffff:188.0.2.128, For="[2001:db8:cafe::17]:4848", For=fc00::1`)
req1.RemoteAddr = "192.168.1.2:8888"
So(GetClientIp(req1), ShouldEqual, "188.0.2.128")
// Should use X-Forwarded-For header
req2, _ := http.NewRequest("GET", "https://example.com", nil)
req2.Header.Add("X-Forwarded-For", "1.1.1.1, 2001:db8:cafe::99%eth0, 3.3.3.3, 192.168.1.1")
req2.RemoteAddr = "192.168.1.2:8888"
So(GetClientIp(req2), ShouldEqual, "1.1.1.1")
// Should use RemoteAddr
req3, _ := http.NewRequest("GET", "https://example.com", nil)
req3.RemoteAddr = "192.168.1.2:8888"
So(GetClientIp(req3), ShouldEqual, "192.168.1.2")
// Should return empty client IP
req4, _ := http.NewRequest("GET", "https://example.com", nil)
So(GetClientIp(req4), ShouldEqual, "")
})
Convey("GetCandidateAddrs", t, func() {
// Should prioritize type in the following order: https://datatracker.ietf.org/doc/html/rfc8445#section-5.1.2.2
// Break ties using priority value
const offerStart = "v=0\r\no=- 4358805017720277108 2 IN IP4 8.8.8.8\r\ns=-\r\nt=0 0\r\na=group:BUNDLE data\r\na=msid-semantic: WMS\r\nm=application 56688 DTLS/SCTP 5000\r\nc=IN IP4 8.8.8.8\r\n"
const offerEnd = "a=ice-ufrag:aMAZ\r\na=ice-pwd:jcHb08Jjgrazp2dzjdrvPPvV\r\na=ice-options:trickle\r\na=fingerprint:sha-256 C8:88:EE:B9:E7:02:2E:21:37:ED:7A:D1:EB:2B:A3:15:A2:3B:5B:1C:3D:D4:D5:1F:06:CF:52:40:03:F8:DD:66\r\na=setup:actpass\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"
const sdp = offerStart + "a=candidate:3769337065 1 udp 2122260223 8.8.8.8 56688 typ prflx\r\n" +
"a=candidate:3769337065 1 udp 2122260223 129.97.124.13 56688 typ relay\r\n" +
"a=candidate:3769337065 1 udp 2122260223 129.97.124.14 56688 typ srflx\r\n" +
"a=candidate:3769337065 1 udp 2122260223 129.97.124.15 56688 typ host\r\n" +
"a=candidate:3769337065 1 udp 2122260224 129.97.124.16 56688 typ host\r\n" + offerEnd
So(GetCandidateAddrs(sdp), ShouldResemble, []net.IP{
net.ParseIP("129.97.124.16"),
net.ParseIP("129.97.124.15"),
net.ParseIP("8.8.8.8"),
net.ParseIP("129.97.124.14"),
net.ParseIP("129.97.124.13"),
})
})
}

@ -0,0 +1,5 @@
package version
func ConstructResult() string {
return GetVersion() + "\n" + GetVersionDetail()
}

common/version/detail.go Normal file
@ -0,0 +1,13 @@
package version
import "strings"
var detailBuilder strings.Builder
func AddVersionDetail(detail string) {
detailBuilder.WriteString(detail)
}
func GetVersionDetail() string {
return detailBuilder.String()
}

common/version/version.go Normal file
@ -0,0 +1,32 @@
package version
import (
"fmt"
"runtime/debug"
)
var version = func() string {
ver := "2.11.0"
if info, ok := debug.ReadBuildInfo(); ok {
var revision string
var modified string
for _, setting := range info.Settings {
switch setting.Key {
case "vcs.revision":
// Guard against revisions shorter than 8 characters.
revision = setting.Value
if len(revision) > 8 {
revision = revision[:8]
}
case "vcs.modified":
if setting.Value == "true" {
modified = "*"
}
}
}
if revision != "" {
return fmt.Sprintf("%v (%v%v)", ver, revision, modified)
}
}
return ver
}()
func GetVersion() string {
return version
}

@ -0,0 +1,112 @@
package websocketconn
import (
"io"
"time"
"github.com/gorilla/websocket"
)
// An abstraction that makes an underlying WebSocket connection look like a
// net.Conn.
type Conn struct {
*websocket.Conn
Reader io.Reader
Writer io.Writer
}
func (conn *Conn) Read(b []byte) (n int, err error) {
return conn.Reader.Read(b)
}
func (conn *Conn) Write(b []byte) (n int, err error) {
return conn.Writer.Write(b)
}
func (conn *Conn) Close() error {
conn.Reader.(*io.PipeReader).Close()
conn.Writer.(*io.PipeWriter).Close()
// Ignore any error in trying to write a Close frame.
_ = conn.Conn.WriteControl(websocket.CloseMessage, []byte{}, time.Now().Add(time.Second))
return conn.Conn.Close()
}
func (conn *Conn) SetDeadline(t time.Time) error {
errRead := conn.Conn.SetReadDeadline(t)
errWrite := conn.Conn.SetWriteDeadline(t)
err := errRead
if err == nil {
err = errWrite
}
return err
}
func readLoop(w io.Writer, ws *websocket.Conn) error {
var buf [2048]byte
for {
messageType, r, err := ws.NextReader()
if err != nil {
return err
}
if messageType != websocket.BinaryMessage && messageType != websocket.TextMessage {
continue
}
_, err = io.CopyBuffer(w, r, buf[:])
if err != nil {
return err
}
}
}
func writeLoop(ws *websocket.Conn, r io.Reader) error {
var buf [2048]byte
for {
n, err := r.Read(buf[:])
if err != nil {
return err
}
err = ws.WriteMessage(websocket.BinaryMessage, buf[:n])
if err != nil {
return err
}
}
}
// websocket.Conn methods start returning websocket.CloseError after the
// connection has been closed. We want to instead interpret that as io.EOF, just
// as you would find with a normal net.Conn. This only converts
// websocket.CloseErrors with known codes; other codes like CloseProtocolError
// and CloseAbnormalClosure will still be reported as anomalous.
func closeErrorToEOF(err error) error {
if websocket.IsCloseError(err, websocket.CloseNormalClosure, websocket.CloseNoStatusReceived) {
err = io.EOF
}
return err
}
// Create a new Conn.
func New(ws *websocket.Conn) *Conn {
// Set up synchronous pipes to serialize reads and writes to the
// underlying websocket.Conn.
//
// https://godoc.org/github.com/gorilla/websocket#hdr-Concurrency
// "Connections support one concurrent reader and one concurrent writer.
// Applications are responsible for ensuring that no more than one
// goroutine calls the write methods (WriteMessage, etc.) concurrently
// and that no more than one goroutine calls the read methods
// (NextReader, etc.) concurrently. The Close and WriteControl methods
// can be called concurrently with all other methods."
pr1, pw1 := io.Pipe()
go func() {
pw1.CloseWithError(closeErrorToEOF(readLoop(pw1, ws)))
}()
pr2, pw2 := io.Pipe()
go func() {
pr2.CloseWithError(closeErrorToEOF(writeLoop(ws, pr2)))
}()
return &Conn{
Conn: ws,
Reader: pr1,
Writer: pw2,
}
}

@ -0,0 +1,372 @@
package websocketconn
import (
"bytes"
"fmt"
"io"
"net"
"net/http"
"net/url"
"sync"
"testing"
"time"
"github.com/gorilla/websocket"
)
// Returns a (server, client) pair of websocketconn.Conns.
func connPair() (*Conn, *Conn, error) {
// Will be assigned inside server.Handler.
var serverConn *Conn
// Start up a web server to receive the request.
ln, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
return nil, nil, err
}
defer ln.Close()
errCh := make(chan error)
server := http.Server{
Handler: http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
upgrader := websocket.Upgrader{
CheckOrigin: func(*http.Request) bool { return true },
}
ws, err := upgrader.Upgrade(rw, req, nil)
if err != nil {
errCh <- err
return
}
serverConn = New(ws)
close(errCh)
}),
}
defer server.Close()
go func() {
err := server.Serve(ln)
if err != nil && err != http.ErrServerClosed {
errCh <- err
}
}()
// Make a request to the web server.
urlStr := (&url.URL{Scheme: "ws", Host: ln.Addr().String()}).String()
ws, _, err := (&websocket.Dialer{}).Dial(urlStr, nil)
if err != nil {
return nil, nil, err
}
clientConn := New(ws)
// The server is finished when errCh is written to or closed.
err = <-errCh
if err != nil {
return nil, nil, err
}
return serverConn, clientConn, nil
}
// Test that you can write in chunks and read the result concatenated.
func TestWrite(t *testing.T) {
tests := [][][]byte{
{},
{[]byte("foo")},
{[]byte("foo"), []byte("bar")},
{{}, []byte("foo"), {}, {}, []byte("bar")},
}
for _, test := range tests {
s, c, err := connPair()
if err != nil {
t.Fatal(err)
}
// This is a little awkward because we need to read to and write
// from both ends of the Conn, and we need to do it in separate
// goroutines because otherwise a Write may block waiting for
// someone to Read it. Here we set up a loop in a separate
// goroutine, reading from the Conn s and writing to the dataCh
// and errCh channels, whose ultimate effect in the select loop
// below is like
// data, err := io.ReadAll(s)
dataCh := make(chan []byte)
errCh := make(chan error)
go func() {
for {
var buf [1024]byte
n, err := s.Read(buf[:])
if err != nil {
errCh <- err
return
}
p := make([]byte, n)
copy(p, buf[:])
dataCh <- p
}
}()
// Write the data to the client side of the Conn, one chunk at a
// time.
for i, chunk := range test {
n, err := c.Write(chunk)
if err != nil || n != len(chunk) {
t.Fatalf("%+q Write chunk %d: got (%d, %v), expected (%d, %v)",
test, i, n, err, len(chunk), nil)
}
}
// We cannot immediately c.Close here, because that closes the
// connection right away, without waiting for buffered data to
// be sent.
// Pull data and err from the server goroutine above.
var data []byte
err = nil
loop:
for {
select {
case p := <-dataCh:
data = append(data, p...)
case err = <-errCh:
break loop
case <-time.After(100 * time.Millisecond):
break loop
}
}
s.Close()
c.Close()
// Now data and err contain the result of reading everything
// from s.
expected := bytes.Join(test, []byte{})
if err != nil || !bytes.Equal(data, expected) {
t.Fatalf("%+q ReadAll: got (%+q, %v), expected (%+q, %v)",
test, data, err, expected, nil)
}
}
}
// Test that multiple goroutines may call Read on a Conn simultaneously. Run
// this with
//
// go test -race
func TestConcurrentRead(t *testing.T) {
s, c, err := connPair()
if err != nil {
t.Fatal(err)
}
defer s.Close()
// Set up multiple threads reading from the same conn.
errCh := make(chan error, 2)
var wg sync.WaitGroup
wg.Add(2)
for i := 0; i < 2; i++ {
go func() {
defer wg.Done()
_, err := io.Copy(io.Discard, s)
if err != nil {
errCh <- err
}
}()
}
// Write a bunch of data to the other end.
for i := 0; i < 2000; i++ {
_, err := fmt.Fprintf(c, "%d", i)
if err != nil {
c.Close()
t.Fatalf("Write: %v", err)
}
}
c.Close()
wg.Wait()
close(errCh)
err = <-errCh
if err != nil {
t.Fatalf("Read: %v", err)
}
}
// Test that multiple goroutines may call Write on a Conn simultaneously. Run
// this with
//
// go test -race
func TestConcurrentWrite(t *testing.T) {
s, c, err := connPair()
if err != nil {
t.Fatal(err)
}
// Set up multiple threads writing to the same conn.
errCh := make(chan error, 3)
var wg sync.WaitGroup
wg.Add(2)
for i := 0; i < 2; i++ {
go func() {
defer wg.Done()
for j := 0; j < 1000; j++ {
_, err := fmt.Fprintf(s, "%d", j)
if err != nil {
errCh <- err
break
}
}
}()
}
go func() {
wg.Wait()
err := s.Close()
if err != nil {
errCh <- err
}
close(errCh)
}()
// Read from the other end.
_, err = io.Copy(io.Discard, c)
c.Close()
if err != nil {
t.Fatalf("Read: %v", err)
}
err = <-errCh
if err != nil {
t.Fatalf("Write: %v", err)
}
}
// Test that Read and Write methods return errors after Close.
func TestClose(t *testing.T) {
s, c, err := connPair()
if err != nil {
t.Fatal(err)
}
defer c.Close()
err = s.Close()
if err != nil {
t.Fatal(err)
}
var buf [10]byte
n, err := s.Read(buf[:])
if n != 0 || err == nil {
t.Fatalf("Read after Close returned (%v, %v), expected (%v, non-nil)", n, err, 0)
}
_, err = s.Write([]byte{1, 2, 3})
// Here we break the abstraction a little and look for a specific error,
// io.ErrClosedPipe. This is because we know the Conn uses an io.Pipe
// internally.
if err != io.ErrClosedPipe {
t.Fatalf("Write after Close returned %v, expected %v", err, io.ErrClosedPipe)
}
}
// Benchmark creating a server websocket.Conn (without the websocketconn.Conn
// wrapper) for different read/write buffer sizes.
func BenchmarkUpgradeBufferSize(b *testing.B) {
// Buffer size of 0 would mean the default of 4096:
// https://github.com/gorilla/websocket/blob/v1.5.0/conn.go#L37
// But a size of zero also has the effect of causing reuse of the HTTP
// server's buffers. So we test 4096 separately from 0.
// https://github.com/gorilla/websocket/blob/v1.5.0/server.go#L32
for _, bufSize := range []int{0, 128, 1024, 2048, 4096, 8192} {
upgrader := websocket.Upgrader{
CheckOrigin: func(*http.Request) bool { return true },
ReadBufferSize: bufSize,
WriteBufferSize: bufSize,
}
b.Run(fmt.Sprintf("%d", bufSize), func(b *testing.B) {
// Start up a web server to receive the request.
ln, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
b.Fatal(err)
}
defer ln.Close()
wsCh := make(chan *websocket.Conn)
errCh := make(chan error)
server := http.Server{
Handler: http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
ws, err := upgrader.Upgrade(rw, req, nil)
if err != nil {
errCh <- err
return
}
wsCh <- ws
}),
}
defer server.Close()
go func() {
err := server.Serve(ln)
if err != nil && err != http.ErrServerClosed {
errCh <- err
}
}()
// Make a request to the web server.
dialer := &websocket.Dialer{
ReadBufferSize: bufSize,
WriteBufferSize: bufSize,
}
urlStr := (&url.URL{Scheme: "ws", Host: ln.Addr().String()}).String()
b.ResetTimer()
for i := 0; i < b.N; i++ {
ws, _, err := dialer.Dial(urlStr, nil)
if err != nil {
b.Fatal(err)
}
ws.Close()
select {
case <-wsCh:
case err := <-errCh:
b.Fatal(err)
}
}
b.StopTimer()
})
}
}
// Benchmark read/write in the client←server and server←client directions, with
// messages of different sizes. Run with -benchmem to see memory allocations.
func BenchmarkReadWrite(b *testing.B) {
trial := func(b *testing.B, readConn, writeConn *Conn, msgSize int) {
go func() {
io.Copy(io.Discard, readConn)
}()
data := make([]byte, msgSize)
b.ResetTimer()
for i := 0; i < b.N; i++ {
n, err := writeConn.Write(data[:])
b.SetBytes(int64(n))
if err != nil {
b.Fatal(err)
}
}
}
for _, msgSize := range []int{150, 3000} {
s, c, err := connPair()
if err != nil {
b.Fatal(err)
}
b.Run(fmt.Sprintf("c←s %d", msgSize), func(b *testing.B) {
trial(b, c, s, msgSize)
})
b.Run(fmt.Sprintf("s←c %d", msgSize), func(b *testing.B) {
trial(b, s, c, msgSize)
})
err = s.Close()
if err != nil {
b.Fatal(err)
}
err = c.Close()
if err != nil {
b.Fatal(err)
}
}
}

doc/broker-spec.txt Normal file
@ -0,0 +1,334 @@
Snowflake broker protocol
0. Scope and Preliminaries
The Snowflake broker is used to hand out Snowflake proxies to clients using the Snowflake pluggable transport. The broker's function bears some similarity to how BridgeDB hands out Tor bridges.
This document specifies how the Snowflake broker interacts with other parts of the Tor ecosystem, starting with the metrics CollecTor module and to be expanded upon later.
1. Metrics Reporting (version 1.1)
Metrics data from the Snowflake broker can be retrieved by sending an HTTP GET request to https://[Snowflake broker URL]/metrics and consists of the following items:
"snowflake-stats-end" YYYY-MM-DD HH:MM:SS (NSEC s) NL
[At start, exactly once.]
YYYY-MM-DD HH:MM:SS defines the end of the included measurement
interval of length NSEC seconds (86400 seconds by default).
"snowflake-ips" [CC=NUM,CC=NUM,...,CC=NUM] NL
[At most once.]
List of mappings from two-letter country codes to the number of
unique IP addresses of Snowflake proxies that have polled. Each
country code only appears once.
"snowflake-ips-total" NUM NL
[At most once.]
A count of the total number of unique IP addresses of Snowflake
proxies that have polled.
"snowflake-ips-standalone" NUM NL
[At most once.]
A count of the total number of unique IP addresses of snowflake
proxies of type "standalone" that have polled.
"snowflake-ips-badge" NUM NL
[At most once.]
A count of the total number of unique IP addresses of snowflake
proxies of type "badge" that have polled.
"snowflake-ips-webext" NUM NL
[At most once.]
A count of the total number of unique IP addresses of snowflake
proxies of type "webext" that have polled.
"snowflake-idle-count" NUM NL
[At most once.]
A count of the number of times a proxy has polled but received
no client offer, rounded up to the nearest multiple of 8.
"client-denied-count" NUM NL
[At most once.]
A count of the number of times a client has requested a proxy
from the broker but no proxies were available, rounded up to
the nearest multiple of 8.
"client-restricted-denied-count" NUM NL
[At most once.]
A count of the number of times a client with a restricted or
unknown NAT type has requested a proxy from the broker but no
proxies were available, rounded up to the nearest multiple of 8.
"client-unrestricted-denied-count" NUM NL
[At most once.]
A count of the number of times a client with an unrestricted NAT
type has requested a proxy from the broker but no proxies were
available, rounded up to the nearest multiple of 8.
"client-snowflake-match-count" NUM NL
[At most once.]
A count of the number of times a client successfully received a
proxy from the broker, rounded up to the nearest multiple of 8.
"client-snowflake-timeout-count" NUM NL
[At most once.]
A count of the number of times a client was matched with a proxy
but timed out before receiving the proxy's WebRTC answer,
rounded up to the nearest multiple of 8.
"client-http-count" NUM NL
[At most once.]
A count of the number of times a client has requested a proxy using
the HTTP rendezvous method from the broker, rounded up to the nearest
multiple of 8.
"client-http-ips" [CC=NUM,CC=NUM,...,CC=NUM] NL
[At most once.]
List of mappings from two-letter country codes to the number of
times a client has requested a proxy using the HTTP rendezvous method,
rounded up to the nearest multiple of 8. Each country code only appears
once.
Note that this descriptor field name is misleading. We use IP addresses
to partition by country, but this metric counts polls, not unique IPs.
"client-ampcache-count" NUM NL
[At most once.]
A count of the number of times a client has requested a proxy using
the ampcache rendezvous method from the broker, rounded up to the
nearest multiple of 8.
"client-ampcache-ips" [CC=NUM,CC=NUM,...,CC=NUM] NL
[At most once.]
List of mappings from two-letter country codes to the number of
times a client has requested a proxy using the ampcache rendezvous
method, rounded up to the nearest multiple of 8. Each country code only
appears once.
Note that this descriptor field name is misleading. We use IP addresses
to partition by country, but this metric counts polls, not unique IPs.
"client-sqs-count" NUM NL
[At most once.]
A count of the number of times a client has requested a proxy using
the sqs rendezvous method from the broker, rounded up to the nearest
multiple of 8.
"client-sqs-ips" [CC=NUM,CC=NUM,...,CC=NUM] NL
[At most once.]
List of mappings from two-letter country codes to the number of
times a client has requested a proxy using the sqs rendezvous method,
rounded up to the nearest multiple of 8. Each country code only appears
once.
Note that this descriptor field name is misleading. We use IP addresses
to partition by country, but this metric counts polls, not unique IPs.
"snowflake-ips-nat-restricted" NUM NL
[At most once.]
A count of the total number of unique IP addresses of snowflake
proxies that have a restricted NAT type.
"snowflake-ips-nat-unrestricted" NUM NL
[At most once.]
A count of the total number of unique IP addresses of snowflake
proxies that have an unrestricted NAT type.
"snowflake-ips-nat-unknown" NUM NL
[At most once.]
A count of the total number of unique IP addresses of snowflake
proxies that have an unknown NAT type.
"snowflake-proxy-poll-with-relay-url-count" NUM NL
[At most once.]
A count of snowflake proxy polls with relay url extension present.
This means this proxy understands relay url, and is sending its
allowed prefix.
"snowflake-proxy-poll-without-relay-url-count" NUM NL
[At most once.]
A count of snowflake proxy polls with relay url extension absent.
This means this proxy is not yet updated.
"snowflake-proxy-rejected-for-relay-url-count" NUM NL
[At most once.]
A count of snowflake proxy polls with relay url extension rejected
based on broker's relay url extension policy.
This means an incompatible allowed relay pattern is included in the
proxy poll message.
2. Broker messaging specification and endpoints
The broker facilitates the connection of snowflake clients and snowflake proxies
through the exchange of WebRTC SDP information with its endpoints.
2.1. Client interactions with the broker
The broker offers multiple ways for clients to exchange registration
messages.
2.1.1. HTTPS POST
Clients interact with the broker by making a POST request to `/client` with the
offer SDP in the request body:
```
POST /client HTTP
[offer SDP]
```
If the broker is behind a domain-fronted connection, this request is accompanied
by the necessary Host header information.
If the client is matched up with a proxy, they receive a 200 OK response with
the proxy's answer SDP in the request body:
```
HTTP 200 OK
[answer SDP]
```
If no proxies were available, they receive a 503 status code:
```
HTTP 503 Service Unavailable
```
2.1.2. AMP
The broker's /amp/client endpoint receives client poll messages encoded
into the URL path, and sends client poll responses encoded as HTML that
conforms to the requirements of AMP (Accelerated Mobile Pages). This
endpoint is intended to be accessed through an AMP cache, using the
-ampcache option of snowflake-client.
The client encodes its poll message into a GET request as follows:
```
GET /amp/client/0[0 or more bytes]/[base64 of client poll message]
```
The components of the path are as follows:
* "/amp/client/", the root of the endpoint.
* "0", a format version number, which controls the interpretation of the
rest of the path. Only the first byte matters as a version indicator
(not the whole first path component).
* Any number of slash or non-slash bytes. These may be used as padding
or to prevent cache collisions in the AMP cache.
* A final slash.
* base64 encoding of the client poll message, using the URL-safe
alphabet (which does not include slash).
The broker returns a client poll response message in the HTTP response.
The message is encoded using AMP armor, an AMP-compatible HTML encoding.
The data stream is notionally a "0" byte (a format version indicator)
followed by the base64 encoding of the message (using the standard
alphabet, with "=" padding). This stream is broken into
whitespace-separated chunks, which are then bundled into HTML <pre>
elements. The <pre> elements are then surrounded by AMP boilerplate. To
decode, search the HTML for <pre> elements, concatenate their contents
and join on whitespace, discard the "0" prefix, and base64 decode.
2.2 Proxy interactions with the broker
Proxies poll the broker with a proxy poll request to `/proxy`:
```
POST /proxy HTTP
{
Sid: [generated session id of proxy],
Version: 1.3,
Type: ["badge"|"webext"|"standalone"|"mobile"],
NAT: ["unknown"|"restricted"|"unrestricted"],
Clients: [number of current clients, rounded down to multiples of 8],
AcceptedRelayPattern: [a pattern representing accepted set of relay domains]
}
```
If the request is well-formed, they receive a 200 OK response.
If a client is matched:
```
HTTP 200 OK
{
Status: "client match",
{
type: offer,
sdp: [WebRTC SDP]
},
RelayURL: [the WebSocket URL proxy should connect to relay Snowflake traffic]
}
```
If a client is not matched:
```
HTTP 200 OK
{
Status: "no match"
}
```
If the request is malformed:
```
HTTP 400 BadRequest
```
If they are matched with a client, they provide their SDP answer with a POST
request to `/answer`:
```
POST /answer HTTP
{
Sid: [generated session id of proxy],
Version: 1.3,
Answer:
{
type: answer,
sdp: [WebRTC SDP]
}
}
```
If the request is well-formed, they receive a 200 OK response.
If the client retrieved the answer:
```
HTTP 200 OK
{
Status: "success"
}
```
If the client left:
```
HTTP 200 OK
{
Status: "client gone"
}
```
If the request is malformed:
```
HTTP 400 BadRequest
```

@ -0,0 +1,44 @@
# Rendezvous with Amazon SQS
This is a new experimental rendezvous method (in addition to the existing HTTPS and AMP cache methods).
It leverages the Amazon SQS queue service for a client to communicate with the broker server.
## Broker
To run the broker with this rendezvous method, use the following CLI flags (they are both required):
- `broker-sqs-name` - name of the broker SQS queue to listen for incoming messages
- `broker-sqs-region` - name of AWS region of the SQS queue
These two parameters determine the SQS queue URL that the client needs to be run with as a CLI flag in order to communicate with the broker. For example, the following values can be used:
`-broker-sqs-name snowflake-broker -broker-sqs-region us-east-1`
The machine on which the broker is being run must be equipped with the correct AWS configs and credentials that would allow the broker program to create, read from, and write to the SQS queue. These are typically stored at `~/.aws/config` and `~/.aws/credentials`. However, environment variables may also be used as described in the [AWS Docs](https://docs.aws.amazon.com/sdkref/latest/guide/creds-config-files.html).
## Client
To run the client with this rendezvous method, use the following CLI flags (they are all required):
- `sqsqueue` - URL of the SQS queue to use as a proxy for signalling
- `sqscreds` - Encoded credentials for accessing the SQS queue
`sqsqueue` should correspond to the URL of the SQS queue that the broker is listening on.
For the example above, the following value can be used:
`-sqsqueue https://sqs.us-east-1.amazonaws.com/893902434899/snowflake-broker -sqscreds some-encoded-sqs-creds`
*Public access to SQS queues is not allowed, so there needs to be some form of authentication to be able to access the queue. Limited permission credentials will be provided by the Snowflake team to access the corresponding SQS queue.*
## Implementation Details
```
╭――――――――――――――――――╮     ╭――――――――――――――――――╮     ╭――――――――――――――――――╮     ╭――――――――――――――――――╮
│      Client      │ <=> │    Amazon SQS    │ <=> │      Broker      │ <=> │ Snowflake Proxy  │
╰――――――――――――――――――╯     ╰――――――――――――――――――╯     ╰――――――――――――――――――╯     ╰――――――――――――――――――╯
```
1. On startup, the **broker** ensures that an SQS queue with the name of the `broker-sqs-name` parameter exists. It will create such a queue if it doesn't exist. Afterwards, it will enter a loop of continuously:
- polling for new messages
- cleaning up client queues
2. The **client** sends its SDP offer to the SQS queue at the URL provided by the `sqsqueue` parameter, in a message containing a unique client ID (clientID) along with the contents of the SDP offer. The client randomly generates a new clientID for each rendezvous attempt.
3. The **broker** will receive this message during its polling and process it.
- A client SQS queue with the name `"snowflake-client" + clientID` will be created for the broker to send messages to the client. This is needed because, if a single queue shared by all clients were used for messages from the broker, each client would have to pop the top message, check whether it is addressed to them, and process it only if so. Clients might have to check many messages before finding the one addressed to them.
- When the broker has a response for the client, it will send a message to the client queue with the details of the SDP answer.
- The SDP offer message from the client is then deleted from the broker queue.
4. The **client** will continuously poll its client queue and eventually receive the message with the SDP answer from the broker.
5. The broker server will periodically clean up the unique SQS queues it has created for each client once the queues are no longer needed (it deletes queues whose last-modified time is older than a configured threshold).

doc/snowflake-client.1 Normal file
@ -0,0 +1,50 @@
.TH SNOWFLAKE-CLIENT "1" "July 2021" "snowflake-client" "User Commands"
.SH NAME
snowflake-client \- WebRTC pluggable transport client for Tor
.SH DESCRIPTION
Snowflake helps users circumvent censorship by making a WebRTC
connection to volunteer proxies. These proxies relay Tor traffic to a
Snowflake bridge and then through the Tor network.
.SS "Usage of snowflake-client:"
.HP
\fB\-ampcache\fR string
.IP
URL of AMP cache to use as a proxy for signaling
.HP
\fB\-front\fR string
.IP
front domain
.HP
\fB\-ice\fR string
.IP
comma\-separated list of ICE servers
.HP
\fB\-keep\-local\-addresses\fR
.IP
keep local LAN address ICE candidates
.HP
\fB\-log\fR string
.IP
name of log file
.HP
\fB\-log\-to\-state\-dir\fR
.IP
resolve the log file relative to tor's pt state dir
.HP
\fB\-logToStateDir\fR
.IP
use \fB\-log\-to\-state\-dir\fR instead
.HP
\fB\-max\fR int
.IP
capacity for number of multiplexed WebRTC peers (default 1)
.HP
\fB\-unsafe\-logging\fR
.IP
prevent logs from being scrubbed
.HP
\fB\-url\fR string
.IP
URL of signaling broker
.SH "SEE ALSO"
https://snowflake.torproject.org

doc/snowflake-proxy.1 Normal file
@ -0,0 +1,38 @@
.TH SNOWFLAKE-PROXY "1" "June 2021" "snowflake-proxy" "User Commands"
.SH NAME
snowflake-proxy \- WebRTC pluggable transport proxy for Tor
.SH DESCRIPTION
Snowflake helps users circumvent censorship by making a WebRTC
connection to volunteer proxies. These proxies relay Tor traffic to a
Snowflake bridge and then through the Tor network.
.SS "Usage of snowflake-proxy:"
.HP
\fB\-broker\fR string
.IP
broker URL (default "https://snowflake\-broker.torproject.net/")
.HP
\fB\-capacity\fR uint
.IP
maximum concurrent clients (default 10)
.HP
\fB\-keep\-local\-addresses\fR
.IP
keep local LAN address ICE candidates
.HP
\fB\-log\fR string
.IP
log filename
.HP
\fB\-relay\fR string
.IP
websocket relay URL (default "wss://snowflake.torproject.net/")
.HP
\fB\-stun\fR string
.IP
stun URL (default "stun:stun.l.google.com:19302")
.HP
\fB\-unsafe\-logging\fR
.IP
prevent logs from being scrubbed
.SH "SEE ALSO"
https://snowflake.torproject.org

@ -0,0 +1,165 @@
Snowflake is available as a general-purpose pluggable transports library and adheres to the [pluggable transports v2.1 Go API](https://github.com/Pluggable-Transports/Pluggable-Transports-spec/blob/master/releases/PTSpecV2.1/Pluggable%20Transport%20Specification%20v2.1%20-%20Go%20Transport%20API.pdf).
### Client library
The Snowflake client library contains functions for running a Snowflake client.
Example usage:
```Golang
package main

import (
	"log"

	sf "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/client/lib"
)

func main() {
	config := sf.ClientConfig{
		BrokerURL:   "https://snowflake-broker.example.com",
		FrontDomain: "https://friendlyfrontdomain.net",
		ICEAddresses: []string{
			"stun:stun.voip.blackberry.com:3478",
		},
		Max: 1,
	}
	transport, err := sf.NewSnowflakeClient(config)
	if err != nil {
		log.Fatal("Failed to start snowflake transport: ", err)
	}

	// transport implements the ClientFactory interface and returns a net.Conn
	conn, err := transport.Dial()
	if err != nil {
		log.Printf("dial error: %s", err)
		return
	}
	defer conn.Close()

	// ...
}
```
#### Using your own rendezvous method
You can define and use your own rendezvous method to communicate with a Snowflake broker by implementing the `RendezvousMethod` interface.
```Golang
package main

import (
	"log"

	sf "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/client/lib"
)

type StubMethod struct {
}

func (m *StubMethod) Exchange(pollReq []byte) ([]byte, error) {
	var brokerResponse []byte
	var err error

	// Implement the logic you need to communicate with the Snowflake broker here

	return brokerResponse, err
}

func main() {
	config := sf.ClientConfig{
		ICEAddresses: []string{
			"stun:stun.voip.blackberry.com:3478",
		},
	}
	transport, err := sf.NewSnowflakeClient(config)
	if err != nil {
		log.Fatal("Failed to start snowflake transport: ", err)
	}

	// custom rendezvous methods can be set with `SetRendezvousMethod`
	rendezvous := &StubMethod{}
	transport.SetRendezvousMethod(rendezvous)

	// transport implements the ClientFactory interface and returns a net.Conn
	conn, err := transport.Dial()
	if err != nil {
		log.Printf("dial error: %s", err)
		return
	}
	defer conn.Close()

	// ...
}
```
### Server library
The Snowflake server library contains functions for running a Snowflake server.
Example usage:
```Golang
package main

import (
	"log"
	"net"

	sf "gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2/server/lib"
	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// The snowflake server runs a websocket server. To run this securely, you will
	// need a valid certificate.
	certManager := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("snowflake.yourdomain.com"),
		Email:      "you@yourdomain.com",
	}

	transport := sf.NewSnowflakeServer(certManager.GetCertificate)

	addr, err := net.ResolveTCPAddr("tcp", "127.0.0.1:443")
	if err != nil {
		log.Printf("error resolving bind address: %s", err.Error())
	}
	numKCPInstances := 1
	ln, err := transport.Listen(addr, numKCPInstances)
	if err != nil {
		log.Printf("error opening listener: %s", err.Error())
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			if err, ok := err.(net.Error); ok && err.Temporary() {
				continue
			}
			log.Printf("Snowflake accept error: %s", err)
			break
		}
		go func() {
			// ...
			defer conn.Close()
		}()
	}

	// ...
}
```
### Running your own Snowflake infrastructure
At the moment we do not have the ability to share Snowflake infrastructure between different types of applications. If you are planning on using Snowflake as a transport for your application, you will need to:
- Run a Snowflake broker. See our [broker documentation](../broker/) and [installation guide](https://gitlab.torproject.org/tpo/anti-censorship/team/-/wikis/Survival-Guides/Snowflake-Broker-Installation-Guide) for more information
- Run Snowflake proxies. These can be run as [standalone Go proxies](../proxy/) or [browser-based proxies](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext).

15
docker-compose.yml Normal file

@ -0,0 +1,15 @@
services:
  snowflake-proxy:
    network_mode: host
    image: containers.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake:latest
    container_name: snowflake-proxy
    restart: unless-stopped
    # For a full list of Snowflake Proxy CLI parameters see
    # https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/tree/main/proxy?ref_type=heads#running-a-standalone-snowflake-proxy
    #command: [ "-ephemeral-ports-range", "30000:60000" ]

  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: snowflake-proxy

86
go.mod Normal file

@ -0,0 +1,86 @@
module gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/v2

go 1.23.0

require (
	github.com/aws/aws-sdk-go-v2 v1.39.0
	github.com/aws/aws-sdk-go-v2/config v1.31.8
	github.com/aws/aws-sdk-go-v2/credentials v1.18.12
	github.com/aws/aws-sdk-go-v2/service/sqs v1.42.5
	github.com/golang/mock v1.6.0
	github.com/gorilla/websocket v1.5.3
	github.com/miekg/dns v1.1.65
	github.com/pion/ice/v4 v4.0.10
	github.com/pion/sdp/v3 v3.0.16
	github.com/pion/stun/v3 v3.0.0
	github.com/pion/transport/v3 v3.0.7
	github.com/pion/webrtc/v4 v4.1.4
	github.com/prometheus/client_golang v1.22.0
	github.com/realclientip/realclientip-go v1.0.0
	github.com/refraction-networking/utls v1.6.7
	github.com/smartystreets/goconvey v1.8.1
	github.com/stretchr/testify v1.11.1
	github.com/txthinking/socks5 v0.0.0-20230325130024-4230056ae301
	github.com/xtaci/kcp-go/v5 v5.6.24
	github.com/xtaci/smux v1.5.35
	gitlab.torproject.org/tpo/anti-censorship/geoip v0.0.0-20210928150955-7ce4b3d98d01
	gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/goptlib v1.6.0
	gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil v0.0.0-20250815012447-418f76dcf315
	golang.org/x/crypto v0.41.0
	golang.org/x/net v0.42.0
	golang.org/x/sys v0.35.0
)

require (
	github.com/andybalholm/brotli v1.0.6 // indirect
	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.7 // indirect
	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.7 // indirect
	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.7 // indirect
	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.7 // indirect
	github.com/aws/aws-sdk-go-v2/service/sso v1.29.3 // indirect
	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.34.4 // indirect
	github.com/aws/aws-sdk-go-v2/service/sts v1.38.4 // indirect
	github.com/aws/smithy-go v1.23.0 // indirect
	github.com/beorn7/perks v1.0.1 // indirect
	github.com/cespare/xxhash/v2 v2.3.0 // indirect
	github.com/cloudflare/circl v1.3.7 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/gopherjs/gopherjs v1.17.2 // indirect
	github.com/jtolds/gls v4.20.0+incompatible // indirect
	github.com/klauspost/compress v1.18.0 // indirect
	github.com/klauspost/cpuid/v2 v2.2.6 // indirect
	github.com/klauspost/reedsolomon v1.12.0 // indirect
	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
	github.com/patrickmn/go-cache v2.1.0+incompatible // indirect
	github.com/pion/datachannel v1.5.10 // indirect
	github.com/pion/dtls/v3 v3.0.7 // indirect
	github.com/pion/interceptor v0.1.40 // indirect
	github.com/pion/logging v0.2.4 // indirect
	github.com/pion/mdns/v2 v2.0.7 // indirect
	github.com/pion/randutil v0.1.0 // indirect
	github.com/pion/rtcp v1.2.15 // indirect
	github.com/pion/rtp v1.8.21 // indirect
	github.com/pion/sctp v1.8.39 // indirect
	github.com/pion/srtp/v3 v3.0.7 // indirect
	github.com/pion/turn/v4 v4.1.1 // indirect
	github.com/pkg/errors v0.9.1 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/prometheus/client_model v0.6.1 // indirect
	github.com/prometheus/common v0.62.0 // indirect
	github.com/prometheus/procfs v0.15.1 // indirect
	github.com/smarty/assertions v1.15.0 // indirect
	github.com/tjfoc/gmsm v1.4.1 // indirect
	github.com/txthinking/runnergroup v0.0.0-20210608031112-152c7c4432bf // indirect
	github.com/wlynxg/anet v0.0.5 // indirect
	golang.org/x/mod v0.26.0 // indirect
	golang.org/x/sync v0.16.0 // indirect
	golang.org/x/text v0.28.0 // indirect
	golang.org/x/tools v0.35.0 // indirect
	google.golang.org/protobuf v1.36.5 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
)

replace github.com/refraction-networking/utls v1.6.7 => gitlab.torproject.org/shelikhoo/utls-temporary v0.0.0-20250428152032-7f32539913c8

274
go.sum Normal file

@ -0,0 +1,274 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/andybalholm/brotli v1.0.6 h1:Yf9fFpf49Zrxb9NlQaluyE92/+X7UVHlhMNJN2sxfOI=
github.com/andybalholm/brotli v1.0.6/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
github.com/aws/aws-sdk-go-v2 v1.39.0 h1:xm5WV/2L4emMRmMjHFykqiA4M/ra0DJVSWUkDyBjbg4=
github.com/aws/aws-sdk-go-v2 v1.39.0/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
github.com/aws/aws-sdk-go-v2/config v1.31.8 h1:kQjtOLlTU4m4A64TsRcqwNChhGCwaPBt+zCQt/oWsHU=
github.com/aws/aws-sdk-go-v2/config v1.31.8/go.mod h1:QPpc7IgljrKwH0+E6/KolCgr4WPLerURiU592AYzfSY=
github.com/aws/aws-sdk-go-v2/credentials v1.18.12 h1:zmc9e1q90wMn8wQbjryy8IwA6Q4XlaL9Bx2zIqdNNbk=
github.com/aws/aws-sdk-go-v2/credentials v1.18.12/go.mod h1:3VzdRDR5u3sSJRI4kYcOSIBbeYsgtVk7dG5R/U6qLWY=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.7 h1:Is2tPmieqGS2edBnmOJIbdvOA6Op+rRpaYR60iBAwXM=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.7/go.mod h1:F1i5V5421EGci570yABvpIXgRIBPb5JM+lSkHF6Dq5w=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.7 h1:UCxq0X9O3xrlENdKf1r9eRJoKz/b0AfGkpp3a7FPlhg=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.7/go.mod h1:rHRoJUNUASj5Z/0eqI4w32vKvC7atoWR0jC+IkmVH8k=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.7 h1:Y6DTZUn7ZUC4th9FMBbo8LVE+1fyq3ofw+tRwkUd3PY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.7/go.mod h1:x3XE6vMnU9QvHN/Wrx2s44kwzV2o2g5x/siw4ZUJ9g8=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 h1:oegbebPEMA/1Jny7kvwejowCaHz1FWZAQ94WXFNCyTM=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1/go.mod h1:kemo5Myr9ac0U9JfSjMo9yHLtw+pECEHsFtJ9tqCEI8=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.7 h1:mLgc5QIgOy26qyh5bvW+nDoAppxgn3J2WV3m9ewq7+8=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.7/go.mod h1:wXb/eQnqt8mDQIQTTmcw58B5mYGxzLGZGK8PWNFZ0BA=
github.com/aws/aws-sdk-go-v2/service/sqs v1.42.5 h1:HbaHWaTkGec2pMa/UQa3+WNWtUaFFF1ZLfwCeVFtBns=
github.com/aws/aws-sdk-go-v2/service/sqs v1.42.5/go.mod h1:wCAPjT7bNg5+4HSNefwNEC2hM3d+NSD5w5DU/8jrPrI=
github.com/aws/aws-sdk-go-v2/service/sso v1.29.3 h1:7PKX3VYsZ8LUWceVRuv0+PU+E7OtQb1lgmi5vmUE9CM=
github.com/aws/aws-sdk-go-v2/service/sso v1.29.3/go.mod h1:Ql6jE9kyyWI5JHn+61UT/Y5Z0oyVJGmgmJbZD5g4unY=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.34.4 h1:e0XBRn3AptQotkyBFrHAxFB8mDhAIOfsG+7KyJ0dg98=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.34.4/go.mod h1:XclEty74bsGBCr1s0VSaA11hQ4ZidK4viWK7rRfO88I=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.4 h1:PR00NXRYgY4FWHqOGx3fC3lhVKjsp1GdloDv2ynMSd8=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.4/go.mod h1:Z+Gd23v97pX9zK97+tX4ppAgqCt3Z2dIXB02CtBncK8=
github.com/aws/smithy-go v1.23.0 h1:8n6I3gXzWJB2DxBDnfxgBaSX6oe0d/t10qGz7OKqMCE=
github.com/aws/smithy-go v1.23.0/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudflare/circl v1.3.7 h1:qlCDlTPz2n9fu58M0Nh1J/JzcFpfgkFHHX3O35r5vcU=
github.com/cloudflare/circl v1.3.7/go.mod h1:sRTcRWXGLrKw6yIGJ+l7amYJFfAXbZG0kBSc8r4zxgA=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v1.17.2 h1:fQnZVsXk8uxXIStYb0N4bGk7jeyTalG/wsZjQ25dO0g=
github.com/gopherjs/gopherjs v1.17.2/go.mod h1:pRRIvn/QzFLrKfvEz3qUuEhtE/zLCWfreZ6J5gM2i+k=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.2.6 h1:ndNyv040zDGIDh8thGkXYjnFtiN02M1PVVF+JE/48xc=
github.com/klauspost/cpuid/v2 v2.2.6/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/klauspost/reedsolomon v1.12.0 h1:I5FEp3xSwVCcEh3F5A7dofEfhXdF/bWhQWPH+XwBFno=
github.com/klauspost/reedsolomon v1.12.0/go.mod h1:EPLZJeh4l27pUGC3aXOjheaoh1I9yut7xTURiW3LQ9Y=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/miekg/dns v1.1.51/go.mod h1:2Z9d3CP1LQWihRZUf29mQ19yDThaI4DAYzte2CaQW5c=
github.com/miekg/dns v1.1.65 h1:0+tIPHzUW0GCge7IiK3guGP57VAw7hoPDfApjkMD1Fc=
github.com/miekg/dns v1.1.65/go.mod h1:Dzw9769uoKVaLuODMDZz9M6ynFU6Em65csPuoi8G0ck=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/pion/datachannel v1.5.10 h1:ly0Q26K1i6ZkGf42W7D4hQYR90pZwzFOjTq5AuCKk4o=
github.com/pion/datachannel v1.5.10/go.mod h1:p/jJfC9arb29W7WrxyKbepTU20CFgyx5oLo8Rs4Py/M=
github.com/pion/dtls/v3 v3.0.7 h1:bItXtTYYhZwkPFk4t1n3Kkf5TDrfj6+4wG+CZR8uI9Q=
github.com/pion/dtls/v3 v3.0.7/go.mod h1:uDlH5VPrgOQIw59irKYkMudSFprY9IEFCqz/eTz16f8=
github.com/pion/ice/v4 v4.0.10 h1:P59w1iauC/wPk9PdY8Vjl4fOFL5B+USq1+xbDcN6gT4=
github.com/pion/ice/v4 v4.0.10/go.mod h1:y3M18aPhIxLlcO/4dn9X8LzLLSma84cx6emMSu14FGw=
github.com/pion/interceptor v0.1.40 h1:e0BjnPcGpr2CFQgKhrQisBU7V3GXK6wrfYrGYaU6Jq4=
github.com/pion/interceptor v0.1.40/go.mod h1:Z6kqH7M/FYirg3frjGJ21VLSRJGBXB/KqaTIrdqnOic=
github.com/pion/logging v0.2.4 h1:tTew+7cmQ+Mc1pTBLKH2puKsOvhm32dROumOZ655zB8=
github.com/pion/logging v0.2.4/go.mod h1:DffhXTKYdNZU+KtJ5pyQDjvOAh/GsNSyv1lbkFbe3so=
github.com/pion/mdns/v2 v2.0.7 h1:c9kM8ewCgjslaAmicYMFQIde2H9/lrZpjBkN8VwoVtM=
github.com/pion/mdns/v2 v2.0.7/go.mod h1:vAdSYNAT0Jy3Ru0zl2YiW3Rm/fJCwIeM0nToenfOJKA=
github.com/pion/randutil v0.1.0 h1:CFG1UdESneORglEsnimhUjf33Rwjubwj6xfiOXBa3mA=
github.com/pion/randutil v0.1.0/go.mod h1:XcJrSMMbbMRhASFVOlj/5hQial/Y8oH/HVo7TBZq+j8=
github.com/pion/rtcp v1.2.15 h1:LZQi2JbdipLOj4eBjK4wlVoQWfrZbh3Q6eHtWtJBZBo=
github.com/pion/rtcp v1.2.15/go.mod h1:jlGuAjHMEXwMUHK78RgX0UmEJFV4zUKOFHR7OP+D3D0=
github.com/pion/rtp v1.8.21 h1:3yrOwmZFyUpcIosNcWRpQaU+UXIJ6yxLuJ8Bx0mw37Y=
github.com/pion/rtp v1.8.21/go.mod h1:bAu2UFKScgzyFqvUKmbvzSdPr+NGbZtv6UB2hesqXBk=
github.com/pion/sctp v1.8.39 h1:PJma40vRHa3UTO3C4MyeJDQ+KIobVYRZQZ0Nt7SjQnE=
github.com/pion/sctp v1.8.39/go.mod h1:cNiLdchXra8fHQwmIoqw0MbLLMs+f7uQ+dGMG2gWebE=
github.com/pion/sdp/v3 v3.0.16 h1:0dKzYO6gTAvuLaAKQkC02eCPjMIi4NuAr/ibAwrGDCo=
github.com/pion/sdp/v3 v3.0.16/go.mod h1:9tyKzznud3qiweZcD86kS0ff1pGYB3VX+Bcsmkx6IXo=
github.com/pion/srtp/v3 v3.0.7 h1:QUElw0A/FUg3MP8/KNMZB3i0m8F9XeMnTum86F7S4bs=
github.com/pion/srtp/v3 v3.0.7/go.mod h1:qvnHeqbhT7kDdB+OGB05KA/P067G3mm7XBfLaLiaNF0=
github.com/pion/stun/v3 v3.0.0 h1:4h1gwhWLWuZWOJIJR9s2ferRO+W3zA/b6ijOI6mKzUw=
github.com/pion/stun/v3 v3.0.0/go.mod h1:HvCN8txt8mwi4FBvS3EmDghW6aQJ24T+y+1TKjB5jyU=
github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
github.com/pion/turn/v4 v4.1.1 h1:9UnY2HB99tpDyz3cVVZguSxcqkJ1DsTSZ+8TGruh4fc=
github.com/pion/turn/v4 v4.1.1/go.mod h1:2123tHk1O++vmjI5VSD0awT50NywDAq5A2NNNU4Jjs8=
github.com/pion/webrtc/v4 v4.1.4 h1:/gK1ACGHXQmtyVVbJFQDxNoODg4eSRiFLB7t9r9pg8M=
github.com/pion/webrtc/v4 v4.1.4/go.mod h1:Oab9npu1iZtQRMic3K3toYq5zFPvToe/QBw7dMI2ok4=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/realclientip/realclientip-go v1.0.0 h1:+yPxeC0mEaJzq1BfCt2h4BxlyrvIIBzR6suDc3BEF1U=
github.com/realclientip/realclientip-go v1.0.0/go.mod h1:CXnUdVwFRcXFJIRb/dTYqbT7ud48+Pi2pFm80bxDmcI=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/smarty/assertions v1.15.0 h1:cR//PqUBUiQRakZWqBiFFQ9wb8emQGDb0HeGdqGByCY=
github.com/smarty/assertions v1.15.0/go.mod h1:yABtdzeQs6l1brC900WlRNwj6ZR55d7B+E8C6HtKdec=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/smartystreets/goconvey v1.8.1 h1:qGjIddxOk4grTu9JPOU31tVfq3cNdBlNa5sSznIX1xY=
github.com/smartystreets/goconvey v1.8.1/go.mod h1:+/u4qLyY6x1jReYOp7GOM2FSt8aP9CzCZL03bI28W60=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tjfoc/gmsm v1.4.1 h1:aMe1GlZb+0bLjn+cKTPEvvn9oUEBlJitaZiiBwsbgho=
github.com/tjfoc/gmsm v1.4.1/go.mod h1:j4INPkHWMrhJb38G+J6W4Tw0AbuN8Thu3PbdVYhVcTE=
github.com/txthinking/runnergroup v0.0.0-20210608031112-152c7c4432bf h1:7PflaKRtU4np/epFxRXlFhlzLXZzKFrH5/I4so5Ove0=
github.com/txthinking/runnergroup v0.0.0-20210608031112-152c7c4432bf/go.mod h1:CLUSJbazqETbaR+i0YAhXBICV9TrKH93pziccMhmhpM=
github.com/txthinking/socks5 v0.0.0-20230325130024-4230056ae301 h1:d/Wr/Vl/wiJHc3AHYbYs5I3PucJvRuw3SvbmlIRf+oM=
github.com/txthinking/socks5 v0.0.0-20230325130024-4230056ae301/go.mod h1:ntmMHL/xPq1WLeKiw8p/eRATaae6PiVRNipHFJxI8PM=
github.com/wlynxg/anet v0.0.5 h1:J3VJGi1gvo0JwZ/P1/Yc/8p63SoW98B5dHkYDmpgvvU=
github.com/wlynxg/anet v0.0.5/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/xtaci/kcp-go/v5 v5.6.24 h1:0tZL4NfpoESDrhaScrZfVDnYZ/3LhyVAbN/dQ2b4hbI=
github.com/xtaci/kcp-go/v5 v5.6.24/go.mod h1:7cAxNX/qFGeRUmUSnnDMoOg53FbXDK9IWBXAUfh+aBA=
github.com/xtaci/lossyconn v0.0.0-20190602105132-8df528c0c9ae h1:J0GxkO96kL4WF+AIT3M4mfUVinOCPgf2uUWYFUzN0sM=
github.com/xtaci/lossyconn v0.0.0-20190602105132-8df528c0c9ae/go.mod h1:gXtu8J62kEgmN++bm9BVICuT/e8yiLI2KFobd/TRFsE=
github.com/xtaci/smux v1.5.35 h1:RosihGJBeaS8gxOZ17HNxbhONwnqQwNwusHx4+SEGhk=
github.com/xtaci/smux v1.5.35/go.mod h1:OMlQbT5vcgl2gb49mFkYo6SMf+zP3rcjcwQz7ZU7IGY=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
gitlab.torproject.org/shelikhoo/utls-temporary v0.0.0-20250428152032-7f32539913c8 h1:zZ1r9UjJ4qSPoLZG/vzITRsO0Qacpm20HlRAg7JVJ8Y=
gitlab.torproject.org/shelikhoo/utls-temporary v0.0.0-20250428152032-7f32539913c8/go.mod h1:BC3O4vQzye5hqpmDTWUqi4P5DDhzJfkV1tdqtawQIH0=
gitlab.torproject.org/tpo/anti-censorship/geoip v0.0.0-20210928150955-7ce4b3d98d01 h1:4949mHh9Vj2/okk48yG8nhP6TosFWOUfSfSr502sKGE=
gitlab.torproject.org/tpo/anti-censorship/geoip v0.0.0-20210928150955-7ce4b3d98d01/go.mod h1:K3LOI4H8fa6j+7E10ViHeGEQV10304FG4j94ypmKLjY=
gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/goptlib v1.6.0 h1:KD9m+mRBwtEdqe94Sv72uiedMWeRdIr4sXbrRyzRiIo=
gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/goptlib v1.6.0/go.mod h1:70bhd4JKW/+1HLfm+TMrgHJsUHG4coelMWwiVEJ2gAg=
gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil v0.0.0-20250815012447-418f76dcf315 h1:9lmXguW9aH5sdZR5h5jOrdInCt0tQ9NRa7+wFD4MQBk=
gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/ptutil v0.0.0-20250815012447-418f76dcf315/go.mod h1:PK7EvweKeypdelDyh1m7N922aldSeCAG8n0lJ7RAXWQ=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201012173705-84dcc777aaee/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.26.0 h1:EGMPT//Ezu+ylkCijjPc+f4Aih7sZvaAr+O3EHBxvZg=
golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201010224723-4f7140c49acb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.42.0 h1:jzkYrhi3YQWD6MLBJcsklgQsoAcw89EcZbJw8Z614hs=
golang.org/x/net v0.42.0/go.mod h1:FF1RA5d3u7nAYA4z2TkclSCKh68eSXtiFwcWQpPXdt8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k=
golang.org/x/tools v0.35.0 h1:mBffYraMEf7aa0sB+NuKnuCy8qI/9Bughn8dC2Gu5r0=
golang.org/x/tools v0.35.0/go.mod h1:NKdj5HkL/73byiZSJjqJgKn3ep7KjFkBOkR/Hps3VPw=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

probetest/Dockerfile Normal file

@@ -0,0 +1,28 @@
FROM docker.io/library/golang:latest AS build
ADD . /app
WORKDIR /app/probetest
RUN go get
RUN CGO_ENABLED=0 go build -o probetest -ldflags '-extldflags "-static" -w -s' .
FROM containers.torproject.org/tpo/tpa/base-images/debian:bookworm AS debian-base
RUN apt-get update && apt-get install -y \
curl \
gpg \
gpg-agent \
ca-certificates \
libcap2-bin \
--no-install-recommends
FROM scratch
COPY --from=debian-base /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=debian-base /usr/share/zoneinfo /usr/share/zoneinfo
COPY --from=build /app/probetest/probetest /bin/probetest
ENTRYPOINT [ "/bin/probetest" ]
LABEL org.opencontainers.image.authors="anti-censorship-team@lists.torproject.org"

probetest/README.md Normal file

@@ -0,0 +1,60 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Overview](#overview)
- [Running your own](#running-your-own)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
This is the code for the remote probe test component of Snowflake.
### Overview
This is a probe test server that allows proxies to test their
compatibility with Snowflake. Right now the only type of test
implemented is a compatibility check for clients behind symmetric NATs.
### Running your own
The server uses TLS by default.
There is a `--disable-tls` option for testing purposes,
but you should use TLS in production.
To build the probe server, run
```go build```
Or alternatively:
```
cd "$(git rev-parse --show-toplevel)" # switch to the repo root directory
docker build -t snowflake-probetest -f probetest/Dockerfile .
```
To deploy the probe server, first set the necessary env variables with
```
export HOSTNAMES=${YOUR HOSTNAMES}
export EMAIL=${YOUR EMAIL}
```
then run ```docker-compose up```
Setting up a symmetric NAT configuration requires a few extra steps. After
bringing up the docker container, run
```docker inspect snowflake-probetest```
to find the subnet used by the probetest container. Then run
```sudo iptables -L -t nat``` to find the POSTROUTING rules for that subnet.
It should look something like this:
```
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.19.0.0/16 anywhere
```
To modify this rule, execute the command
```sudo iptables -t nat -R POSTROUTING $RULE_NUM -s 172.19.0.0/16 -j MASQUERADE --random```
where `$RULE_NUM` is the number of the rule corresponding to your docker container's subnet masquerade rule. The `--random` flag randomizes the source port mapping, which is what makes the NAT behave like a symmetric NAT.
Afterwards, you should see the rule changed to be:
```
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.19.0.0/16 anywhere random
```
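The rule-number lookup above can be scripted instead of read off by eye. A minimal sketch, with the subnet hardcoded as an assumption and the `iptables` output inlined as sample data so the snippet runs without root (in practice, replace `list_rules` with `sudo iptables -t nat -L POSTROUTING --line-numbers`):

```shell
# Sketch: derive RULE_NUM from numbered POSTROUTING rules.
# list_rules stands in for: sudo iptables -t nat -L POSTROUTING --line-numbers
list_rules() {
  printf '%s\n' \
    'Chain POSTROUTING (policy ACCEPT)' \
    'num  target      prot opt source          destination' \
    '1    MASQUERADE  all  --  172.19.0.0/16   anywhere'
}

# Pick the rule whose target is MASQUERADE and whose source matches the subnet.
RULE_NUM=$(list_rules | awk '$2 == "MASQUERADE" && $5 == "172.19.0.0/16" {print $1}')
echo "$RULE_NUM"
```

With the real `iptables` output substituted in, `$RULE_NUM` can be passed directly to the `iptables -R` command shown above.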

Some files were not shown because too many files have changed in this diff.