This design is easier to misuse, because it allows the caller to modify
the contents of the slice after queueing it, but it avoids an extra
allocation + memmove per incoming packet.
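A minimal sketch of the difference (the type and field names here are
illustrative, not the actual snowflake API):

    // Sketch only: the type and field names are illustrative, not the
    // real snowflake API.
    package sketch

    type queuePacketConn struct {
    	recvQueue chan []byte
    }

    // Old design: copy the packet into a freshly allocated buffer, so
    // the caller is free to reuse p after the call returns.
    func (c *queuePacketConn) queueIncomingCopy(p []byte) {
    	buf := make([]byte, len(p)) // extra allocation per packet
    	copy(buf, p)                // extra memmove per packet
    	c.recvQueue <- buf
    }

    // New design: queue the caller's slice directly, avoiding the
    // allocation and copy. The caller must not modify p after queueing.
    func (c *queuePacketConn) queueIncomingNoCopy(p []byte) {
    	c.recvQueue <- p
    }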
Before:
$ go test -bench='Benchmark(QueueIncoming|WriteTo)' -benchtime=2s -benchmem
BenchmarkQueueIncoming-4 7001494 342.4 ns/op 1024 B/op 2 allocs/op
BenchmarkWriteTo-4 3777459 627 ns/op 1024 B/op 2 allocs/op
After:
$ go test -bench='Benchmark(QueueIncoming|WriteTo)' -benchtime=2s -benchmem
BenchmarkQueueIncoming-4 13361600 170.1 ns/op 512 B/op 1 allocs/op
BenchmarkWriteTo-4 6702324 373 ns/op 512 B/op 1 allocs/op
Despite the benchmark results, the QueueIncoming change turns out to have
no effect in practice: it appears that the compiler had already been
optimizing out the allocation and copy in QueueIncoming.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40187
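One general way to check for that kind of optimization (a suggestion, not
part of the original investigation) is to ask the compiler for its
inlining and escape-analysis decisions:
$ go build -gcflags='-m' ./... 2>&1 | grep escape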
The WriteTo change, on the other hand, does reduce the frequency of
garbage collection in practice.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40199
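A general way to observe GC frequency at runtime (the binary name below is
only a placeholder) is the runtime's gctrace output, which prints one line
per collection:
$ GODEBUG=gctrace=1 ./server 2>&1 | grep '^gc '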