qbittorrent-killer
A BitTorrent client written from scratch in Go. No library handles the protocol — every
layer is implemented by hand: .torrent file parsing, tracker communication over both
HTTP and UDP, the BitTorrent peer handshake, request pipelining, SHA-1 integrity
verification, and concurrent worker orchestration. The project was built to understand
exactly what happens between clicking "download" and a file appearing on disk.
Project Structure
The codebase is organized into focused packages with a one-directional dependency graph. Lower-level packages know nothing about higher-level ones:
torrent-at-home/
├── cmd/app/ # CLI entry point
├── engine/ # Download orchestration + goroutine worker pool
├── protocol/
│ ├── frames/ # BitTorrent message encoding / decoding
│ └── greeting/ # Peer handshake (protocol negotiation)
├── network/
│ ├── connector/ # TCP peer connections + message I/O
│ └── endpoints/ # Compact peer address parsing
└── data/
├── descriptor/ # .torrent parsing + tracker communication
└── mask/ # Bitfield operations for piece tracking
Data flows in one direction: cmd → descriptor → engine → connector / protocol / mask.
No package imports its parent.
Step 1 — Parsing the .torrent File
A .torrent file is bencoded — a compact serialization format specific to BitTorrent.
The descriptor package decodes it into a strongly typed TorrentFile struct using
struct tags, then computes two critical values that every subsequent step depends on:
// data/descriptor/descriptor.go
type TorrentFile struct {
Announce string // primary tracker URL
AnnounceList [][]string // tiered fallback trackers
InfoHash [20]byte // SHA-1 of the info dict — global torrent identity
PieceHashes [][20]byte // SHA-1 per piece — used for integrity checks later
PieceLength int // bytes per piece (last piece may be smaller)
Length int // total file size in bytes
Name string
}
// Raw bencode structs — field names match the .torrent spec exactly
type bencodeInfo struct {
Pieces string `bencode:"pieces"`
PieceLength int `bencode:"piece length"`
Length int `bencode:"length"`
Name string `bencode:"name"`
}
type bencodeTorrent struct {
Announce string `bencode:"announce"`
AnnounceList [][]string `bencode:"announce-list"`
Info bencodeInfo `bencode:"info"`
}
The InfoHash is computed by re-encoding the info dict back to bencode bytes and
hashing with SHA-1. This exact 20-byte value is sent in every tracker request and
peer handshake — it is how both sides confirm they are talking about the same torrent:
// data/descriptor/descriptor.go
func (i *bencodeInfo) hash() ([20]byte, error) {
var buf bytes.Buffer
err := bencode.Marshal(&buf, *i) // canonical bencode re-encoding
if err != nil {
return [20]byte{}, err
}
return sha1.Sum(buf.Bytes()), nil
}
Piece hashes are packed as a single concatenated binary string in the file.
splitPieceHashes slices it into an array of 20-byte hashes — one per piece:
// data/descriptor/descriptor.go
func (i *bencodeInfo) splitPieceHashes() ([][20]byte, error) {
const hashLen = 20
raw := []byte(i.Pieces)
if len(raw)%hashLen != 0 {
return nil, fmt.Errorf("invalid piece hash data")
}
count := len(raw) / hashLen
result := make([][20]byte, count)
for j := 0; j < count; j++ {
copy(result[j][:], raw[j*hashLen:(j+1)*hashLen])
}
return result, nil
}
Step 2 — Tracker Communication
The client supports both HTTP and UDP trackers. For tracker URLs with the udp:// scheme,
the UDP protocol is tried first — it is faster and carries less overhead. HTTP is the fallback.
All tracker URLs from announce and announce-list are collected, deduplicated, and
tried in sequence. The final result is a deduplicated pool of peer addresses:
// data/descriptor/announce.go
func (t *TorrentFile) announce(peerID [20]byte, port uint16) ([]endpoints.Endpoint, error) {
var allPeers []endpoints.Endpoint
trackers := t.getTrackerList() // merges announce + announce-list without duplicates
for i, trackerURL := range trackers {
t.Announce = trackerURL
peers, err := t.announceSingleTracker(peerID, port)
if err != nil {
log.Printf("[tracker] [%d/%d] %s → failed: %v\n", i+1, len(trackers), truncateURL(trackerURL), err)
continue // try the next tracker
}
allPeers = append(allPeers, peers...)
}
if len(allPeers) == 0 {
return nil, errors.New("no peers received from any tracker")
}
// Deduplicate by string address — multiple trackers may return the same peers
seen := make(map[string]bool)
unique := make([]endpoints.Endpoint, 0, len(allPeers))
for _, p := range allPeers {
if key := p.String(); !seen[key] {
seen[key] = true
unique = append(unique, p)
}
}
log.Printf("[tracker] total: %d unique peer(s)\n", len(unique))
return unique, nil
}
HTTP Tracker
The HTTP announce builds a query string with the info hash, peer ID, port, and download stats, then parses the bencoded response to extract the compact peer list:
// data/descriptor/announce.go
func (t *TorrentFile) assembleURL(id [20]byte, p uint16) (string, error) {
base, err := url.Parse(t.Announce)
if err != nil {
return "", err
}
query := url.Values{}
query.Set("info_hash", string(t.InfoHash[:]))
query.Set("peer_id", string(id[:]))
query.Set("port", strconv.Itoa(int(p)))
query.Set("uploaded", "0")
query.Set("downloaded", "0")
query.Set("compact", "1") // compact format: 6 bytes per peer
query.Set("left", strconv.Itoa(t.Length))
base.RawQuery = query.Encode()
return base.String(), nil
}
UDP Tracker
The UDP tracker protocol is a two-step exchange. Step one obtains a connection ID
from the tracker by sending the magic protocol constant 0x41727101980 with a random
transaction ID. The tracker's reply carries a connection ID, which BEP 15 allows
a client to reuse for up to one minute (trackers accept it for up to two):
// data/descriptor/announce_udp.go
func (t *TorrentFile) getUDPConnectionID(conn *net.UDPConn) (int64, error) {
txID := randomTransactionID() // crypto/rand — unpredictable, prevents spoofing
req := make([]byte, 16)
binary.BigEndian.PutUint64(req[0:8], udpProtocolID) // 0x41727101980
binary.BigEndian.PutUint32(req[8:12], udpActionConnect) // action = 0
binary.BigEndian.PutUint32(req[12:16], txID)
resp, err := sendUDPRequest(conn, req, udpConnectTimeout)
if err != nil {
return 0, err
}
action := binary.BigEndian.Uint32(resp[0:4])
respTxID := binary.BigEndian.Uint32(resp[4:8])
if action != udpActionConnect { return 0, ErrUDPUnexpectedAction }
if respTxID != txID { return 0, ErrUDPTransactionMismatch }
return int64(binary.BigEndian.Uint64(resp[8:16])), nil
}
Step two sends the 98-byte announce packet. The response is a compact peer list — 6 bytes per peer (4-byte IPv4 address + 2-byte port), parsed manually for precise bounds control:
// data/descriptor/announce_udp.go
func (t *TorrentFile) sendUDPAnnounce(conn *net.UDPConn, connID int64, peerID [20]byte, port uint16) ([]endpoints.Endpoint, error) {
txID := randomTransactionID()
req := make([]byte, 98)
binary.BigEndian.PutUint64(req[0:8], uint64(connID))
binary.BigEndian.PutUint32(req[8:12], udpActionAnnounce)
binary.BigEndian.PutUint32(req[12:16], txID)
copy(req[16:36], t.InfoHash[:])
copy(req[36:56], peerID[:])
binary.BigEndian.PutUint64(req[56:64], 0) // downloaded
binary.BigEndian.PutUint64(req[64:72], uint64(t.Length)) // left (bytes remaining)
binary.BigEndian.PutUint64(req[72:80], 0) // uploaded
binary.BigEndian.PutUint32(req[80:84], 0) // event = none
binary.BigEndian.PutUint32(req[84:88], 0) // IP = use sender's
binary.BigEndian.PutUint32(req[88:92], txID) // key
binary.BigEndian.PutUint32(req[92:96], math.MaxUint32) // num_want = -1 (let the tracker decide)
binary.BigEndian.PutUint16(req[96:98], port)
resp, err := sendUDPRequest(conn, req, udpConnectTimeout)
if err != nil {
return nil, err
}
// Compact peer list starts at byte 20 — 6 bytes per peer
if len(resp) < 20 {
return nil, errors.New("announce response too short")
}
peerData := resp[20:]
peers := make([]endpoints.Endpoint, 0, len(peerData)/6)
for i := 0; i+6 <= len(peerData); i += 6 {
ip := net.IP(peerData[i : i+4])
port := binary.BigEndian.Uint16(peerData[i+4 : i+6])
peers = append(peers, endpoints.Endpoint{Addr: ip, Port: port})
}
return peers, nil
}
UDP retries follow exponential backoff per BEP 15 — 15 × 2^n seconds — retransmitting
up to eight times (nine attempts in total) before giving up:
// data/descriptor/announce_udp.go
func sendUDPRequest(conn *net.UDPConn, data []byte, timeout time.Duration) ([]byte, error) {
for attempt := 0; attempt <= 8; attempt++ {
conn.SetDeadline(time.Now().Add(timeout))
if _, err := conn.Write(data); err != nil {
return nil, err
}
buf := make([]byte, 2048)
n, err := conn.Read(buf)
if err == nil {
return buf[:n], nil
}
// 15s → 30s → 60s → 120s → 240s...
timeout = 15 * time.Second * time.Duration(1<<attempt)
}
return nil, errors.New("UDP tracker timeout after all retries")
}
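Worked out numerically, the BEP 15 schedule gives the following delays — a quick standalone sketch:

```go
package main

import "fmt"

func main() {
	// BEP 15 retransmit schedule: wait 15 * 2^n seconds before retry n+1
	total := 0
	for n := 0; n <= 8; n++ {
		delay := 15 * (1 << n) // 15s, 30s, 60s, ... 3840s
		total += delay
		fmt.Printf("attempt %d: wait %ds\n", n, delay)
	}
	fmt.Printf("worst case before giving up: %ds\n", total)
}
```

The worst case sums to 7665 seconds — over two hours — which is why a dead UDP tracker should be abandoned early in practice rather than retried to exhaustion.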
Step 3 — Peer Handshake
Before exchanging any data, both sides perform the BitTorrent handshake — a fixed-format
message that verifies they are talking about the same torrent. The wire format is:
1 byte (protocol name length) + 19 bytes ("BitTorrent protocol") + 8 reserved bytes +
20 bytes (info hash) + 20 bytes (peer ID).
// protocol/greeting/intro.go
type Greeting struct {
Protocol string
Hash [20]byte // must equal our InfoHash — reject if it doesn't
ID [20]byte // remote peer's randomly generated ID
}
func (g *Greeting) Pack() []byte {
out := make([]byte, 1+len(g.Protocol)+reservedBytes+hashSize+hashSize)
out[0] = byte(len(g.Protocol))
pos := 1
pos += copy(out[pos:], g.Protocol)
pos += reservedBytes // 8 reserved bytes (extension flags) — already zero from make
pos += copy(out[pos:], g.Hash[:])
copy(out[pos:], g.ID[:])
return out
}
func Unpack(stream io.Reader) (*Greeting, error) {
var lenBuf [1]byte
if _, err := io.ReadFull(stream, lenBuf[:]); err != nil {
return nil, err
}
pstrLen := int(lenBuf[0])
if pstrLen == 0 {
return nil, ErrInvalidProtocolLen
}
raw := make([]byte, pstrLen+reservedBytes+hashSize+hashSize)
if _, err := io.ReadFull(stream, raw); err != nil {
return nil, err
}
var g Greeting
g.Protocol = string(raw[:pstrLen])
hashStart := pstrLen + reservedBytes
copy(g.Hash[:], raw[hashStart:hashStart+hashSize])
copy(g.ID[:], raw[hashStart+hashSize:])
return &g, nil
}
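Those fixed field sizes mean a standard handshake is always exactly 68 bytes on the wire — a quick check:

```go
package main

import "fmt"

func main() {
	const pstr = "BitTorrent protocol" // 19 bytes
	// 1 length byte + protocol name + 8 reserved + 20 info hash + 20 peer ID
	total := 1 + len(pstr) + 8 + 20 + 20
	fmt.Println(total) // 68
}
```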
The connector performs the full connection sequence in one call: TCP dial → handshake → read initial bitfield. Any failure closes the connection immediately and returns an error — no partial state leaks out:
// network/connector/link.go
func Connect(p endpoints.Endpoint, peerID, infoHash [20]byte) (*PeerConn, error) {
conn, err := net.DialTimeout("tcp", p.String(), 3*time.Second)
if err != nil {
return nil, err
}
if _, err = doHandshake(conn, infoHash, peerID); err != nil {
conn.Close()
return nil, err
}
bf, err := readBitfield(conn)
if err != nil {
conn.Close()
return nil, err
}
return &PeerConn{
Conn: conn,
Choked: true, // all peers start choked — must send Interested and wait for Unchoke
Bitfield: bf,
peer: p,
infoHash: infoHash,
peerID: peerID,
}, nil
}
func doHandshake(conn net.Conn, infohash, peerID [20]byte) (*greeting.Greeting, error) {
conn.SetDeadline(time.Now().Add(3 * time.Second))
defer conn.SetDeadline(time.Time{})
if _, err := conn.Write(greeting.Build(infohash, peerID).Pack()); err != nil {
return nil, err
}
incoming, err := greeting.Unpack(conn)
if err != nil {
return nil, err
}
// Hard reject — wrong torrent
if !bytes.Equal(incoming.Hash[:], infohash[:]) {
return nil, fmt.Errorf("infohash does not match")
}
return incoming, nil
}
Step 4 — The Message Protocol
All BitTorrent messages share one wire format: a 4-byte big-endian length prefix, a 1-byte type ID, and a payload. A zero-length message is a keep-alive.
The frames package handles serialization for all 9 message types:
// protocol/frames/codec.go
const (
TypeChoke = iota // 0 — stop sending requests
TypeUnchoke // 1 — you may send requests
TypeInterested // 2 — I want data from you
TypeNotInterested // 3
TypeHave // 4 — I completed a piece
TypeBitfield // 5 — here are all my pieces
TypeRequest // 6 — send me this block
TypePiece // 7 — here is the block data
TypeCancel // 8 — cancel a pending request
)
func (f *Frame) Pack() []byte {
if f == nil {
return []byte{0, 0, 0, 0} // keep-alive: zero length, no type byte
}
size := uint32(len(f.Data) + 1) // +1 for the type byte
buf := make([]byte, 4+size)
binary.BigEndian.PutUint32(buf, size)
buf[4] = f.Type
copy(buf[5:], f.Data)
return buf
}
func Unpack(r io.Reader) (*Frame, error) {
var lenBuf [4]byte
if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
return nil, err
}
msgLen := binary.BigEndian.Uint32(lenBuf[:])
if msgLen == 0 {
return nil, nil // keep-alive
}
data := make([]byte, msgLen)
if _, err := io.ReadFull(r, data); err != nil {
return nil, err
}
return &Frame{Type: data[0], Data: data[1:]}, nil
}
ReadPieceData writes the block payload into the correct byte offset of the piece
buffer and validates every field before touching memory. Each error case has its own
sentinel error so callers can distinguish them:
// protocol/frames/codec.go
var (
ErrInvalidType = errors.New("invalid message type")
ErrPayloadTooShort = errors.New("payload too short")
ErrIndexMismatch = errors.New("index mismatch")
ErrOffsetTooHigh = errors.New("offset too high")
ErrDataTooLong = errors.New("data too long")
)
func ReadPieceData(target []byte, pieceIdx int, frm *Frame) (int, error) {
if frm.Type != TypePiece { return 0, ErrInvalidType }
if len(frm.Data) < 8 { return 0, ErrPayloadTooShort }
idx := int(binary.BigEndian.Uint32(frm.Data[0:4]))
offset := int(binary.BigEndian.Uint32(frm.Data[4:8]))
if idx != pieceIdx { return 0, ErrIndexMismatch }
if offset >= len(target) { return 0, ErrOffsetTooHigh }
payload := frm.Data[8:]
if offset+len(payload) > len(target) { return 0, ErrDataTooLong }
copy(target[offset:], payload)
return len(payload), nil
}
Step 5 — Piece Downloading with Request Pipelining
A naive implementation would request one 16 KB chunk, wait for the response, then request the next. That leaves the connection idle for a full round-trip between chunks — which cripples throughput over high-latency connections.
The engine uses request pipelining: up to MaxPending = 5 requests are kept
in flight simultaneously. While waiting for block N, requests for N+1 through N+4
are already on the wire:
// engine/transfer.go
const (
DefaultChunkSize = 16384 // 16 KB — standard BitTorrent block size
MaxPending = 5 // max in-flight requests per peer connection
)
type transferState struct {
index int
conn *connector.PeerConn
buf []byte
received int // bytes written into buf
requested int // bytes for which requests have been sent
pending int // requests sent but not yet answered
}
func fetchPiece(conn *connector.PeerConn, j *job) ([]byte, error) {
state := transferState{
index: j.index,
conn: conn,
buf: make([]byte, j.length),
}
conn.Conn.SetDeadline(time.Now().Add(30 * time.Second))
defer conn.Conn.SetDeadline(time.Time{})
for state.received < j.length {
// Keep the pipeline full: send requests until MaxPending in flight
if !state.conn.Choked {
for state.pending < MaxPending && state.requested < j.length {
chunkSize := DefaultChunkSize
if j.length-state.requested < chunkSize {
chunkSize = j.length - state.requested // smaller last chunk
}
if err := conn.SendRequest(j.index, state.requested, chunkSize); err != nil {
return nil, err
}
state.pending++
state.requested += chunkSize
}
}
if err := state.processMsg(); err != nil { // read one incoming message
return nil, err
}
}
return state.buf, nil
}
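A back-of-envelope calculation shows why the pipeline matters. Assuming a hypothetical 125 ms round trip on an RTT-bound connection:

```go
package main

import "fmt"

func main() {
	const block = 16384.0 // DefaultChunkSize in bytes
	const rtt = 0.125     // assumed round-trip time in seconds
	serial := block / rtt // one request in flight: one block per round trip
	pipelined := 5 * serial // MaxPending = 5 requests in flight
	fmt.Printf("serial: %.0f KB/s, pipelined: %.0f KB/s\n", serial/1024, pipelined/1024)
}
```

On real links, bandwidth eventually caps the pipelined figure, but the 5× gap in the RTT-bound regime is why every serious BitTorrent client pipelines requests.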
processMsg handles every incoming message type in one switch. A Piece message
decrements the pending counter — freeing one pipeline slot so fetchPiece can
immediately send another request on the next loop iteration:
// engine/transfer.go
func (s *transferState) processMsg() error {
msg, err := s.conn.Read()
if err != nil {
return err
}
if msg == nil {
return nil // keep-alive, nothing to do
}
switch msg.Type {
case frames.TypeUnchoke:
s.conn.Choked = false // pipeline opened — start sending requests
case frames.TypeChoke:
s.conn.Choked = true // pipeline paused — peer is overwhelmed
case frames.TypeHave:
idx, _ := frames.ReadHave(msg)
s.conn.Bitfield.Mark(idx) // peer completed a new piece — update our view
case frames.TypePiece:
n, err := frames.ReadPieceData(s.buf, s.index, msg)
if err != nil {
return err
}
s.received += n
s.pending-- // one slot freed — fetchPiece will refill the pipeline
}
return nil
}
Step 6 — SHA-1 Integrity Verification
After every piece is fully received, it is hashed with SHA-1 and compared
byte-for-byte against the expected hash from the .torrent file. A mismatch means
the data was corrupted in transit — the piece is discarded and re-queued:
// engine/transfer.go
func verifyPiece(j *job, buf []byte) error {
hash := sha1.Sum(buf)
if !bytes.Equal(hash[:], j.hash[:]) {
return fmt.Errorf("piece %d failed validation", j.index)
}
return nil
}
Step 7 — Concurrent Download Orchestration
Session.Download is the top-level coordinator. It pushes every piece into a buffered
job channel, spawns one goroutine per peer, and collects results into a pre-allocated
file buffer. The channel acts as both a work queue and a load balancer — fast peers
naturally drain more jobs than slow ones:
// engine/transfer.go
func (s *Session) Download() ([]byte, error) {
jobs := make(chan *job, len(s.PieceHashes)) // all jobs queued upfront
results := make(chan *result) // unbuffered — backpressure on workers
for index, hash := range s.PieceHashes {
jobs <- &job{index, hash, s.pieceSize(index)}
}
for _, peer := range s.Peers {
go s.spawnWorker(peer, jobs, results)
}
buf := make([]byte, s.Length) // pre-allocated full file buffer
completed := 0
for completed < len(s.PieceHashes) {
res := <-results
begin, end := s.pieceRange(res.index)
copy(buf[begin:end], res.buf) // write piece at correct offset in the file
completed++
pct := float64(completed) / float64(len(s.PieceHashes)) * 100
workers := runtime.NumGoroutine() - 1
log.Printf("[progress] [%5.1f%%] ✓ piece %d (%d worker(s))\n", pct, res.index, workers)
}
close(jobs) // closing the channel ends each worker's range loop
return buf, nil
}
Each worker connects to its peer and enters a job loop. If the peer doesn't have a piece, the job goes back on the channel. If a fetch fails, the job is re-queued and the worker exits — other workers continue uninterrupted:
// engine/transfer.go
func (s *Session) spawnWorker(peer endpoints.Endpoint, jobs chan *job, results chan *result) {
conn, err := connector.Connect(peer, s.PeerID, s.InfoHash)
if err != nil {
log.Printf("[peer] ✗ %s unreachable\n", peer.Addr)
return // goroutine exits; others keep running
}
defer conn.Conn.Close()
conn.SendUnchoke()
conn.SendInterested()
for j := range jobs {
if !conn.Bitfield.Check(j.index) {
jobs <- j // peer doesn't have this piece — put it back for another worker
continue
}
buf, err := fetchPiece(conn, j)
if err != nil {
jobs <- j // fetch failed — re-queue, this connection is likely broken
return
}
if err := verifyPiece(j, buf); err != nil {
log.Printf("piece %d corrupted, re-queuing\n", j.index)
jobs <- j // SHA-1 mismatch — discard and retry on any available peer
continue
}
conn.SendHave(j.index) // advertise to the peer that we now have this piece
results <- &result{j.index, buf}
}
}
Bitfield — Compact Piece Tracking
mask.Mask is a compact bitfield — one bit per piece — that tracks which pieces each
peer holds. Bit manipulation is done with shifts and OR masks rather than []bool,
which would use 8× more memory:
// data/mask/bitmap.go
type Mask []byte
// Check returns true if the bit at position idx is set (peer has that piece)
func (m Mask) Check(idx int) bool {
if idx < 0 {
return false
}
byteIdx := idx >> 3 // integer divide by 8 → which byte
if byteIdx >= len(m) {
return false
}
bitPos := 7 - (idx & 7) // MSB-first within each byte (BitTorrent wire order)
return (m[byteIdx]>>bitPos)&1 == 1
}
// Mark sets the bit at position idx (we completed that piece)
func (m Mask) Mark(idx int) {
if idx < 0 {
return
}
byteIdx := idx >> 3
if byteIdx >= len(m) {
return
}
bitPos := 7 - (idx & 7)
m[byteIdx] |= 1 << bitPos
}
BitTorrent sends bitfields MSB-first: the most significant bit of the first byte
represents piece 0. The 7 - (idx & 7) expression converts from a zero-indexed
piece number into that MSB-first bit position within each byte.
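A standalone sketch of that mapping — for the example byte 0b10100000, pieces 0 and 2 are present:

```go
package main

import "fmt"

func main() {
	var b byte = 0b10100000 // first byte of a bitfield: pieces 0..7, MSB-first
	for idx := 0; idx < 8; idx++ {
		bitPos := 7 - (idx & 7) // same expression as mask.Check
		fmt.Printf("piece %d set: %v\n", idx, (b>>bitPos)&1 == 1)
	}
}
```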
Peer Address Parsing — Compact Format
Trackers return peer addresses in compact binary format: 4 bytes for the IPv4 address
followed by 2 bytes for the port, no separators, no framing. The endpoints package
parses this into typed Endpoint structs with an explicit length check:
// network/endpoints/addr.go
const peerEntrySize = 6
type Endpoint struct {
Addr net.IP
Port uint16
}
func Parse(raw []byte) ([]Endpoint, error) {
if len(raw)%peerEntrySize != 0 {
return nil, ErrMalformedPeerData // trailing bytes = malformed — reject entirely
}
count := len(raw) / peerEntrySize
result := make([]Endpoint, count)
for i := 0; i < count; i++ {
pos := i * peerEntrySize
result[i].Addr = net.IP(raw[pos : pos+4])
result[i].Port = binary.BigEndian.Uint16(raw[pos+4 : pos+6])
}
return result, nil
}
func (e Endpoint) String() string {
return net.JoinHostPort(e.Addr.String(), strconv.Itoa(int(e.Port)))
}
Testing
Tests use the standard testing package and testify/assert. Every protocol-level
package has its own test file covering happy-path, edge cases, and error conditions.
Bitfield Tests
The bitfield tests verify Check and Mark against hand-computed patterns. Reading
0b01010100 MSB-first, bits 1, 3, and 5 are set. The test asserts this bit-by-bit
across two bytes, including out-of-range indices:
// data/mask/bitmap_test.go
func TestCheck(t *testing.T) {
m := Mask{0b01010100, 0b01010100}
expected := []bool{
false, true, false, true, false, true, false, false, // byte 0: 01010100
false, true, false, true, false, true, false, false, // byte 1: 01010100
false, false, false, false, // beyond mask length
}
for i, want := range expected {
assert.Equal(t, want, m.Check(i))
}
}
func TestMark(t *testing.T) {
cases := []struct {
data Mask
idx int
want Mask
}{
// Set bit 4 in 0b01010100 → 0b01011100
{Mask{0b01010100, 0b01010100}, 4, Mask{0b01011100, 0b01010100}},
// Bit 9 is already set — no change expected
{Mask{0b01010100, 0b01010100}, 9, Mask{0b01010100, 0b01010100}},
// Set bit 15 (LSB of byte 1) → 0b01010101
{Mask{0b01010100, 0b01010100}, 15, Mask{0b01010100, 0b01010101}},
// Bit 19 — out of bounds, mask unchanged
{Mask{0b01010100, 0b01010100}, 19, Mask{0b01010100, 0b01010100}},
}
for _, c := range cases {
m := c.data
m.Mark(c.idx)
assert.Equal(t, c.want, m)
}
}
Frame Codec Tests
The codec tests verify the full serialization round-trip. The TestUnpack table
covers the normal case, keep-alive, a truncated length prefix, and a frame that
declares a larger payload than the bytes present:
// protocol/frames/codec_test.go
func TestPack(t *testing.T) {
tests := map[string]struct {
input *Frame
output []byte
}{
// TypeHave (4) with 4-byte payload → length prefix = 5, type = 4, data = [1,2,3,4]
"normal": {&Frame{Type: TypeHave, Data: []byte{1, 2, 3, 4}}, []byte{0, 0, 0, 5, 4, 1, 2, 3, 4}},
"keepalive": {nil, []byte{0, 0, 0, 0}},
}
for _, test := range tests {
assert.Equal(t, test.output, test.input.Pack())
}
}
func TestUnpack(t *testing.T) {
tests := map[string]struct {
input []byte
output *Frame
fails bool
}{
"normal": {[]byte{0, 0, 0, 5, 4, 1, 2, 3, 4}, &Frame{TypeHave, []byte{1, 2, 3, 4}}, false},
"keepalive": {[]byte{0, 0, 0, 0}, nil, false},
"too short": {[]byte{1, 2, 3}, nil, true}, // can't read 4-byte length prefix
"incomplete": {[]byte{0, 0, 0, 5, 4, 1, 2}, nil, true}, // length=5 but only 3 bytes follow
}
for _, test := range tests {
r := bytes.NewReader(test.input)
m, err := Unpack(r)
if test.fails {
assert.Error(t, err)
} else {
assert.NoError(t, err)
}
assert.Equal(t, test.output, m)
}
}
TestReadPieceData verifies that the payload lands at the correct offset inside the
target buffer — not at position 0, because blocks within a piece can arrive in any order:
// protocol/frames/codec_test.go
func TestReadPieceData(t *testing.T) {
buf := make([]byte, 10)
frm := &Frame{
Type: TypePiece,
Data: []byte{
0x00, 0x00, 0x00, 0x04, // piece index = 4
0x00, 0x00, 0x00, 0x02, // begin offset = 2
0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff, // 6 bytes of payload
},
}
n, err := ReadPieceData(buf, 4, frm)
assert.NoError(t, err)
assert.Equal(t, 6, n)
// bytes written at offset 2, not 0 — positions 0 and 1 stay zero
assert.Equal(t, []byte{0x00, 0x00, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff, 0x00, 0x00}, buf)
}
Endpoint Parser Tests
The endpoint tests cover a valid compact peer list and a malformed one with a trailing partial entry — 5 bytes instead of a multiple of 6:
// network/endpoints/addr_test.go
func TestParse(t *testing.T) {
cases := map[string]struct {
raw string
want []Endpoint
fail bool
}{
"valid peer list": {
// 127.0.0.1:80 and 1.1.1.1:443 packed as 12 bytes
raw: string([]byte{127, 0, 0, 1, 0x00, 0x50, 1, 1, 1, 1, 0x01, 0xbb}),
want: []Endpoint{
{Addr: net.IP{127, 0, 0, 1}, Port: 80},
{Addr: net.IP{1, 1, 1, 1}, Port: 443},
},
},
"incomplete peer entry": {
raw: string([]byte{127, 0, 0, 1, 0x00}), // 5 bytes — not a multiple of 6
fail: true,
},
}
for _, c := range cases {
got, err := Parse([]byte(c.raw))
if c.fail {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, c.want, got)
}
}
}
Run the full test suite:
go test ./...
Or the integration script that builds first then runs tests:
bash tools/test.sh
CLI Entry Point
The CLI is intentionally thin — validate arguments, print a summary, then delegate everything to the library. No protocol logic lives here:
// cmd/app/main.go
func entry() {
if len(os.Args) < 3 {
fmt.Fprintf(os.Stderr, "usage: %s <torrent-file> <output-path>\n", os.Args[0])
os.Exit(1)
}
src, dst := os.Args[1], os.Args[2]
meta, err := descriptor.Open(src)
if err != nil {
log.Fatalf("failed to load torrent: %v", err)
}
fmt.Printf("[info] torrent: %s\n", meta.Name)
fmt.Printf("[info] size: %.2f MB\n", float64(meta.Length)/1024/1024)
fmt.Printf("[info] pieces: %d\n\n", len(meta.PieceHashes))
if err := meta.DownloadToFile(dst); err != nil {
log.Fatalf("transfer failed: %v", err)
}
}
Build and run:
go build -o torrent-at-home ./cmd/app
./torrent-at-home kali-linux-2025.4-installer-amd64.iso.torrent ./kali.iso
What's Next
- DHT (Distributed Hash Table) — peer discovery without a tracker
- PEX (Peer Exchange) — learn about new peers from connected peers
- Magnet link support — derive torrent metadata from the info hash alone
- Multi-file torrent support
- Protocol encryption (PE/MSE) — avoid ISP throttling