
smux's Introduction

Hi there 👋

Projects I'm working on:

  • ⚙️ gaio - High performance async-io(proactor) networking for Golang.
  • 🛡️ safebox - A retro key management TUI tool for blockchain and others, one key to derive all.
  • 💬 smux - A Stream Multiplexing Library for golang with least memory usage.
  • 👯 kcp-go - A Crypto-Secure, Production-Grade Reliable-UDP Library for golang with FEC.
  • kcptun - A Stable & Secure Tunnel based on KCP with N:M multiplexing and FEC. Available for ARM, MIPS, 386 and AMD64.
  • ⚛️ algorithms - Algorithms & Data structures in C++.
  • 🤝 bdls - Initial implementation of the BDLS BFT consensus algorithm, now integrated in https://labs.hyperledger.org/labs/bdls.html .
  • 🐕 lossyconn - Lossy connection simulator.
  • 😄 tcpraw - Sending packets through TCP.
  • 🤖 navmesh - Navigation mesh in golang.
  • 📫 gonet - A fancy game server skeleton in golang.
  • 🈳️ budda - My personal collection of Buddhist materials.
  • 📐 algebra - My learning notes on algebra.

📧 Contact me: imap at live dot com

🏫 UESTC(BA), HKPolyU(MSc)

smux's People

Contributors

audriusbutkevicius, fulirockx, genisysram, haraldnordgren, jannson, joesis, oxtoacart, persistentsnail, rogers0, sintanial, testwill, xtaci, zachzhaozy, zhuyie


smux's Issues

NAT traversal?

We've been encountering an issue with certain customers whose ISPs are NAT'ing their UDP connections. This is causing pipes to disconnect due to NOPs no longer being sent back to the originating client.

Example:

  1. Client A establishes connection using port 1234
  2. ISP translates port 1234 to 23456
  3. Client B handles connection and returns frames on A:23456. This works fine.
  4. After 2-3 minutes, ISP translates port 1234 to 32767
  5. Client B keeps pinging port A:23456.
  6. Client A closes the connection due to keepalive no longer working

This seems incredibly common... is there a solution in place already?

Flaky tests (panic: Fail in goroutine) in v1.3.4 discovered by Ubuntu autopkgtest

The Ubuntu autopkgtest of the latest github.com/xtaci/smux v1.3.4 (Debian package golang-github-xtaci-smux/1.3.4+ds-1) fails on ppc64el:

(Special thanks to @LocutusOfBorg for telling me about this issue over IRC on #ubuntu-devel as this apparent test regression is preventing golang-github-xtaci-smux's migration to Eoan, i.e. Ubuntu 19.10.)

Further tests reveal that:

  • It is not limited to ppc64le (ppc64el); I was able to reproduce the same errors on my amd64 computer.
  • It is found in both v1.1.0 and v1.3.4 (master as of this writing)
  • It happens randomly, and I need to run loops like below to trigger the errors:
$ while go test -vet=off -p 1 github.com/xtaci/smux; do go clean -testcache; done
ok  	github.com/xtaci/smux	8.712s
ok  	github.com/xtaci/smux	8.677s
ok  	github.com/xtaci/smux	9.041s
ok  	github.com/xtaci/smux	8.803s
ok  	github.com/xtaci/smux	8.756s
ok  	github.com/xtaci/smux	8.800s
ok  	github.com/xtaci/smux	8.742s
ok  	github.com/xtaci/smux	8.719s
--- FAIL: TestIsClose (0.00s)
    session_test.go:36: accept tcp 127.0.0.1:44465: use of closed network connection
FAIL
FAIL	github.com/xtaci/smux	8.796s

Another run:

$ while go test -vet=off -p 4 github.com/xtaci/smux; do go clean -testcache; done
ok  	github.com/xtaci/smux	8.734s
ok  	github.com/xtaci/smux	8.719s
ok  	github.com/xtaci/smux	8.691s
ok  	github.com/xtaci/smux	8.705s
ok  	github.com/xtaci/smux	9.074s
ok  	github.com/xtaci/smux	9.369s
panic: Fail in goroutine after TestSessionDoubleClose has completed

goroutine 2106 [running]:
testing.(*common).Fail(0xc000d66100)
	/usr/lib/go-1.12/src/testing/testing.go:565 +0x11e
testing.(*common).Error(0xc000d66100, 0xc000090660, 0x1, 0x1)
	/usr/lib/go-1.12/src/testing/testing.go:654 +0x79
github.com/xtaci/smux.setupServer.func1(0x8e3a20, 0xc000010058, 0x8e99e0, 0xc000d66100)
	/home/foka/go/src/github.com/xtaci/smux/session_test.go:36 +0xbc
created by github.com/xtaci/smux.setupServer
	/home/foka/go/src/github.com/xtaci/smux/session_test.go:33 +0xf9
FAIL	github.com/xtaci/smux	1.814s

and also:

panic: Fail in goroutine after TestCloseThenOpen has completed

goroutine 1430 [running]:
testing.(*common).Fail(0xc000184100)
	/usr/lib/go-1.12/src/testing/testing.go:565 +0x12c
testing.(*common).Error(0xc000184100, 0xc0001100d0, 0x1, 0x1)
	/usr/lib/go-1.12/src/testing/testing.go:654 +0x70
github.com/xtaci/smux.setupServer.func1(0x104af340, 0xc0000a8068, 0x104b52c0, 0xc000184100)
	/home/foka/go/src/github.com/xtaci/smux/session_test.go:36 +0xbc
created by github.com/xtaci/smux.setupServer
	/home/foka/go/src/github.com/xtaci/smux/session_test.go:33 +0xd0
FAIL	github.com/xtaci/smux	2.360s

Thanks for looking into the issue.

/cc @rogers0

Proposal: extra data when opening a stream

For my use case, this might be useful:

diff --git a/session.go b/session.go
index 12fc4cb..b9d2aed 100644
--- a/session.go
+++ b/session.go
@@ -80,7 +80,7 @@ func newSession(config *Config, conn io.ReadWriteCloser, client bool) *Session {
 }
 
 // OpenStream is used to create a new stream
-func (s *Session) OpenStream() (*Stream, error) {
+func (s *Session) OpenStream(syndat string) (*Stream, error) {
 	if s.IsClosed() {
 		return nil, errors.New(errBrokenPipe)
 	}
@@ -103,7 +103,9 @@ func (s *Session) OpenStream() (*Stream, error) {
 
 	stream := newStream(sid, s.config.MaxFrameSize, s)
 
-	if _, err := s.writeFrame(newFrame(cmdSYN, sid)); err != nil {
+	f := newFrame(cmdSYN, sid)
+	f.data = []byte(syndat)
+	if _, err := s.writeFrame(f); err != nil {
 		return nil, errors.Wrap(err, "writeFrame")
 	}
 
@@ -248,6 +250,7 @@ func (s *Session) recvLoop() {
 				s.streamLock.Lock()
 				if _, ok := s.streams[f.sid]; !ok {
 					stream := newStream(f.sid, s.config.MaxFrameSize, s)
+					stream.syndat = string(f.data)
 					s.streams[f.sid] = stream
 					select {
 					case s.chAccepts <- stream:
diff --git a/stream.go b/stream.go
index 613bd63..f82ca94 100644
--- a/stream.go
+++ b/stream.go
@@ -24,6 +24,8 @@ type Stream struct {
 	dieLock       sync.Mutex
 	readDeadline  atomic.Value
 	writeDeadline atomic.Value
+
+	syndat string
 }
 
 // newStream initiates a Stream struct
@@ -41,6 +43,7 @@ func newStream(id uint32, frameSize int, sess *Session) *Stream {
 func (s *Stream) ID() uint32 {
 	return s.id
 }
+func (s *Stream) Syndat() string { return s.syndat }
 
 // Read implements net.Conn
 func (s *Stream) Read(b []byte) (n int, err error) {
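
A usage sketch of the proposed API (note that OpenStream(syndat) and Syndat() are the proposal above, not the current smux API):

// Sketch only; assumes the proposed OpenStream(syndat)/Syndat() API above.
func clientSide(sess *smux.Session) error {
	// Attach metadata to the SYN frame when opening the stream (proposed).
	stream, err := sess.OpenStream("user=alice;proto=echo")
	if err != nil {
		return err
	}
	defer stream.Close()
	_, err = stream.Write([]byte("hello"))
	return err
}

func serverSide(sess *smux.Session) error {
	stream, err := sess.AcceptStream()
	if err != nil {
		return err
	}
	defer stream.Close()
	// Read the metadata without consuming any stream payload (proposed).
	log.Println("SYN data:", stream.Syndat())
	return nil
}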

Satisfy net.Conn

Given this is network oriented, it would make sense for it to satisfy net.Conn, including, for example, deadlines.
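
For reference, a minimal sketch of what this looks like from the caller's side (whether *smux.Stream passes the compile-time check depends on the smux version in use):

// Fails to compile if *smux.Stream does not implement net.Conn.
var _ net.Conn = (*smux.Stream)(nil)

// With net.Conn satisfied, generic helpers and deadlines just work.
func useAsConn(c net.Conn) error {
	if err := c.SetDeadline(time.Now().Add(5 * time.Second)); err != nil {
		return err
	}
	_, err := c.Write([]byte("ping"))
	return err
}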

Packet loss question

Since multiplexing runs over a single underlying connection, does that mean HTTP/2-style head-of-line blocking also applies? If a packet is lost on the link and the sliding window shrinks, does that mean every connection multiplexed over that link gets slowed down?

Question

Can you explain how this is better than yamux?
Also, has the per-stream limit in yamux been addressed in smux?

Stream ID reuse

I'm not sure whether this is an actual issue, so treat it as a question.

  1. Imagine B connects to A and establishes a single stream.
  2. Imagine B crashes and restarts within a short period of time.
  3. B reattempts to connect to A.
  4. B thinks it's establishing a new stream, whereas the stream is mid-way on A (given there is no RST).

stream.Close() stuck if buffer is full

If the underlying buffer of the net.Conn is full (and may stay full forever), writes to the smux stream block. If Close() is then called on the stream, it attempts to write cmdFIN, but that write blocks as well.
The keepalive() goroutine should come and save the day, but it is also stuck sending cmdNOP. This leaves no one watching tickerTimeout, so Close() on the underlying net.Conn is never called to resolve the deadlock.
My program sometimes gets stuck and I believe this is the reason. Could someone take a look? Or have I misunderstood the code?
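
As a caller-side mitigation (only a sketch, assuming the deadlock described above), Close() can be bounded by a timer, falling back to closing the whole session, which closes the underlying net.Conn and unblocks the stuck writers:

// Sketch: if stream.Close() cannot make progress because the underlying
// connection is wedged, tear down the session after a timeout so the
// blocked cmdFIN/cmdNOP writes are released.
func closeWithTimeout(sess *smux.Session, stream *smux.Stream, d time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- stream.Close() }()
	select {
	case err := <-done:
		return err
	case <-time.After(d):
		// Give up on a graceful FIN; closing the session closes the
		// underlying conn and unblocks sendLoop and keepalive.
		return sess.Close()
	}
}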

A small correction

In session.go, search for s.notifyWriteError(err)

then add the following line right below it:
s.Close()

The complete file:

package smux

import (
	"container/heap"
	"encoding/binary"
	"errors"
	"io"
	"net"
	"sync"
	"sync/atomic"
	"time"
)

const (
	defaultAcceptBacklog = 1024
)

var (
	ErrInvalidProtocol = errors.New("invalid protocol")
	ErrConsumed        = errors.New("peer consumed more than sent")
	ErrGoAway          = errors.New("stream id overflows, should start a new connection")
	ErrTimeout         = errors.New("timeout")
	ErrWouldBlock      = errors.New("operation would block on IO")
)

type writeRequest struct {
	prio   uint64
	frame  Frame
	result chan writeResult
}

type writeResult struct {
	n   int
	err error
}

type buffersWriter interface {
	WriteBuffers(v [][]byte) (n int, err error)
}

// Session defines a multiplexed connection for streams
type Session struct {
	conn io.ReadWriteCloser

	config           *Config
	nextStreamID     uint32 // next stream identifier
	nextStreamIDLock sync.Mutex

	bucket       int32         // token bucket
	bucketNotify chan struct{} // used for waiting for tokens

	streams    map[uint32]*Stream // all streams in this session
	streamLock sync.Mutex         // locks streams

	die     chan struct{} // flag session has died
	dieOnce sync.Once

	// socket error handling
	socketReadError      atomic.Value
	socketWriteError     atomic.Value
	chSocketReadError    chan struct{}
	chSocketWriteError   chan struct{}
	socketReadErrorOnce  sync.Once
	socketWriteErrorOnce sync.Once

	// smux protocol errors
	protoError     atomic.Value
	chProtoError   chan struct{}
	protoErrorOnce sync.Once

	chAccepts chan *Stream

	dataReady int32 // flag data has arrived

	goAway int32 // flag id exhausted

	deadline atomic.Value

	shaper chan writeRequest // a shaper for writing
	writes chan writeRequest
}

func newSession(config *Config, conn io.ReadWriteCloser, client bool) *Session {
	s := new(Session)
	s.die = make(chan struct{})
	s.conn = conn
	s.config = config
	s.streams = make(map[uint32]*Stream)
	s.chAccepts = make(chan *Stream, defaultAcceptBacklog)
	s.bucket = int32(config.MaxReceiveBuffer)
	s.bucketNotify = make(chan struct{}, 1)
	s.shaper = make(chan writeRequest)
	s.writes = make(chan writeRequest)
	s.chSocketReadError = make(chan struct{})
	s.chSocketWriteError = make(chan struct{})
	s.chProtoError = make(chan struct{})

	if client {
		s.nextStreamID = 1
	} else {
		s.nextStreamID = 0
	}

	go s.shaperLoop()
	go s.recvLoop()
	go s.sendLoop()
	go s.keepalive()
	return s
}

// OpenStream is used to create a new stream
func (s *Session) OpenStream() (*Stream, error) {
	if s.IsClosed() {
		return nil, io.ErrClosedPipe
	}

	// generate stream id
	s.nextStreamIDLock.Lock()
	if s.goAway > 0 {
		s.nextStreamIDLock.Unlock()
		return nil, ErrGoAway
	}

	s.nextStreamID += 2
	sid := s.nextStreamID
	if sid == sid%2 { // stream-id overflows
		s.goAway = 1
		s.nextStreamIDLock.Unlock()
		return nil, ErrGoAway
	}
	s.nextStreamIDLock.Unlock()

	stream := newStream(sid, s.config.MaxFrameSize, s)

	if _, err := s.writeFrame(newFrame(byte(s.config.Version), cmdSYN, sid)); err != nil {
		return nil, err
	}

	s.streamLock.Lock()
	defer s.streamLock.Unlock()
	select {
	case <-s.chSocketReadError:
		return nil, s.socketReadError.Load().(error)
	case <-s.chSocketWriteError:
		return nil, s.socketWriteError.Load().(error)
	case <-s.die:
		return nil, io.ErrClosedPipe
	default:
		s.streams[sid] = stream
		return stream, nil
	}
}

// Open returns a generic ReadWriteCloser
func (s *Session) Open() (io.ReadWriteCloser, error) {
	return s.OpenStream()
}

// AcceptStream is used to block until the next available stream
// is ready to be accepted.
func (s *Session) AcceptStream() (*Stream, error) {
	var deadline <-chan time.Time
	if d, ok := s.deadline.Load().(time.Time); ok && !d.IsZero() {
		timer := time.NewTimer(time.Until(d))
		defer timer.Stop()
		deadline = timer.C
	}

	select {
	case stream := <-s.chAccepts:
		return stream, nil
	case <-deadline:
		return nil, ErrTimeout
	case <-s.chSocketReadError:
		return nil, s.socketReadError.Load().(error)
	case <-s.chProtoError:
		return nil, s.protoError.Load().(error)
	case <-s.die:
		return nil, io.ErrClosedPipe
	}
}

// Accept Returns a generic ReadWriteCloser instead of smux.Stream
func (s *Session) Accept() (io.ReadWriteCloser, error) {
	return s.AcceptStream()
}

// Close is used to close the session and all streams.
func (s *Session) Close() error {
	var once bool
	s.dieOnce.Do(func() {
		close(s.die)
		once = true
	})

	if once {
		s.streamLock.Lock()
		for k := range s.streams {
			s.streams[k].sessionClose()
		}
		s.streamLock.Unlock()
		return s.conn.Close()
	} else {
		return io.ErrClosedPipe
	}
}

// notifyBucket notifies recvLoop that bucket is available
func (s *Session) notifyBucket() {
	select {
	case s.bucketNotify <- struct{}{}:
	default:
	}
}

func (s *Session) notifyReadError(err error) {
	s.socketReadErrorOnce.Do(func() {
		s.socketReadError.Store(err)
		close(s.chSocketReadError)
	})
}

func (s *Session) notifyWriteError(err error) {
	s.socketWriteErrorOnce.Do(func() {
		s.socketWriteError.Store(err)
		close(s.chSocketWriteError)
	})
}

func (s *Session) notifyProtoError(err error) {
	s.protoErrorOnce.Do(func() {
		s.protoError.Store(err)
		close(s.chProtoError)
	})
}

// IsClosed does a safe check to see if we have shutdown
func (s *Session) IsClosed() bool {
	select {
	case <-s.die:
		return true
	default:
		return false
	}
}

// NumStreams returns the number of currently open streams
func (s *Session) NumStreams() int {
	if s.IsClosed() {
		return 0
	}
	s.streamLock.Lock()
	defer s.streamLock.Unlock()
	return len(s.streams)
}

// SetDeadline sets a deadline used by Accept* calls.
// A zero time value disables the deadline.
func (s *Session) SetDeadline(t time.Time) error {
	s.deadline.Store(t)
	return nil
}

// LocalAddr satisfies net.Conn interface
func (s *Session) LocalAddr() net.Addr {
	if ts, ok := s.conn.(interface {
		LocalAddr() net.Addr
	}); ok {
		return ts.LocalAddr()
	}
	return nil
}

// RemoteAddr satisfies net.Conn interface
func (s *Session) RemoteAddr() net.Addr {
	if ts, ok := s.conn.(interface {
		RemoteAddr() net.Addr
	}); ok {
		return ts.RemoteAddr()
	}
	return nil
}

// notify the session that a stream has closed
func (s *Session) streamClosed(sid uint32) {
	s.streamLock.Lock()
	if n := s.streams[sid].recycleTokens(); n > 0 { // return remaining tokens to the bucket
		if atomic.AddInt32(&s.bucket, int32(n)) > 0 {
			s.notifyBucket()
		}
	}
	delete(s.streams, sid)
	s.streamLock.Unlock()
}

// returnTokens is called by stream to return token after read
func (s *Session) returnTokens(n int) {
	if atomic.AddInt32(&s.bucket, int32(n)) > 0 {
		s.notifyBucket()
	}
}

// recvLoop keeps on reading from underlying connection if tokens are available
func (s *Session) recvLoop() {
	var hdr rawHeader
	var updHdr updHeader

	for {
		for atomic.LoadInt32(&s.bucket) <= 0 && !s.IsClosed() {
			select {
			case <-s.bucketNotify:
			case <-s.die:
				return
			}
		}

		// read header first
		if _, err := io.ReadFull(s.conn, hdr[:]); err == nil {
			atomic.StoreInt32(&s.dataReady, 1)
			if hdr.Version() != byte(s.config.Version) {
				s.notifyProtoError(ErrInvalidProtocol)
				return
			}
			sid := hdr.StreamID()
			switch hdr.Cmd() {
			case cmdNOP:
			case cmdSYN:
				s.streamLock.Lock()
				if _, ok := s.streams[sid]; !ok {
					stream := newStream(sid, s.config.MaxFrameSize, s)
					s.streams[sid] = stream
					select {
					case s.chAccepts <- stream:
					case <-s.die:
					}
				}
				s.streamLock.Unlock()
			case cmdFIN:
				s.streamLock.Lock()
				if stream, ok := s.streams[sid]; ok {
					stream.fin()
					stream.notifyReadEvent()
				}
				s.streamLock.Unlock()
			case cmdPSH:
				if hdr.Length() > 0 {
					newbuf := defaultAllocator.Get(int(hdr.Length()))
					if written, err := io.ReadFull(s.conn, newbuf); err == nil {
						s.streamLock.Lock()
						if stream, ok := s.streams[sid]; ok {
							stream.pushBytes(newbuf)
							atomic.AddInt32(&s.bucket, -int32(written))
							stream.notifyReadEvent()
						}
						s.streamLock.Unlock()
					} else {
						s.notifyReadError(err)
						return
					}
				}
			case cmdUPD:
				if _, err := io.ReadFull(s.conn, updHdr[:]); err == nil {
					s.streamLock.Lock()
					if stream, ok := s.streams[sid]; ok {
						stream.update(updHdr.Consumed(), updHdr.Window())
					}
					s.streamLock.Unlock()
				} else {
					s.notifyReadError(err)
					return
				}
			default:
				s.notifyProtoError(ErrInvalidProtocol)
				return
			}
		} else {
			s.notifyReadError(err)
			return
		}
	}
}

func (s *Session) keepalive() {
	tickerPing := time.NewTicker(s.config.KeepAliveInterval)
	tickerTimeout := time.NewTicker(s.config.KeepAliveTimeout)
	defer tickerPing.Stop()
	defer tickerTimeout.Stop()
	for {
		select {
		case <-tickerPing.C:
			s.writeFrameInternal(newFrame(byte(s.config.Version), cmdNOP, 0), tickerPing.C, 0)
			s.notifyBucket() // force a signal to the recvLoop
		case <-tickerTimeout.C:
			if !atomic.CompareAndSwapInt32(&s.dataReady, 1, 0) {
				// recvLoop may block while bucket is 0, in this case,
				// session should not be closed.
				if atomic.LoadInt32(&s.bucket) > 0 {
					s.Close()
					return
				}
			}
		case <-s.die:
			return
		}
	}
}

// shaper shapes the sending sequence among streams
func (s *Session) shaperLoop() {
	var reqs shaperHeap
	var next writeRequest
	var chWrite chan writeRequest

	for {
		if len(reqs) > 0 {
			chWrite = s.writes
			next = heap.Pop(&reqs).(writeRequest)
		} else {
			chWrite = nil
		}

		select {
		case <-s.die:
			return
		case r := <-s.shaper:
			if chWrite != nil { // next is valid, reshape
				heap.Push(&reqs, next)
			}
			heap.Push(&reqs, r)
		case chWrite <- next:
		}
	}
}

func (s *Session) sendLoop() {
	var buf []byte
	var n int
	var err error
	var vec [][]byte // vector for writeBuffers

	bw, ok := s.conn.(buffersWriter)
	if ok {
		buf = make([]byte, headerSize)
		vec = make([][]byte, 2)
	} else {
		buf = make([]byte, (1<<16)+headerSize)
	}

	for {
		select {
		case <-s.die:
			return
		case request := <-s.writes:
			buf[0] = request.frame.ver
			buf[1] = request.frame.cmd
			binary.LittleEndian.PutUint16(buf[2:], uint16(len(request.frame.data)))
			binary.LittleEndian.PutUint32(buf[4:], request.frame.sid)

			if len(vec) > 0 {
				vec[0] = buf[:headerSize]
				vec[1] = request.frame.data
				n, err = bw.WriteBuffers(vec)
			} else {
				copy(buf[headerSize:], request.frame.data)
				n, err = s.conn.Write(buf[:headerSize+len(request.frame.data)])
			}

			n -= headerSize
			if n < 0 {
				n = 0
			}

			result := writeResult{
				n:   n,
				err: err,
			}

			request.result <- result
			close(request.result)

			// store conn error
			if err != nil {
				s.notifyWriteError(err)
				s.Close()
				return
			}
		}
	}
}

// writeFrame writes the frame to the underlying connection
// and returns the number of bytes written if successful
func (s *Session) writeFrame(f Frame) (n int, err error) {
	return s.writeFrameInternal(f, nil, 0)
}

// internal writeFrame version to support deadline used in keepalive
func (s *Session) writeFrameInternal(f Frame, deadline <-chan time.Time, prio uint64) (int, error) {
	req := writeRequest{
		prio:   prio,
		frame:  f,
		result: make(chan writeResult, 1),
	}
	select {
	case s.shaper <- req:
	case <-s.die:
		return 0, io.ErrClosedPipe
	case <-s.chSocketWriteError:
		return 0, s.socketWriteError.Load().(error)
	case <-deadline:
		return 0, ErrTimeout
	}

	select {
	case result := <-req.result:
		return result.n, result.err
	case <-s.die:
		return 0, io.ErrClosedPipe
	case <-s.chSocketWriteError:
		return 0, s.socketWriteError.Load().(error)
	case <-deadline:
		return 0, ErrTimeout
	}
}

keepalive bug

I think using a single goroutine in the session implementation to both send and receive heartbeats is problematic: once the connection stalls, writeFrame blocks the keepalive goroutine. I suggest using separate goroutines for sending and receiving heartbeats, or adding a timeout mechanism to writeFrame.

Unnecessary keepalive frames

I think that if data is being read from a connection, there is no point in sending a keepalive frame. A keepalive frame only needs to be sent after some timeout since the last read operation, because a successful read already proves the connection is alive, making the keepalive frame unnecessary in that case.
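
A sketch of the idea (lastRead is a hypothetical field, not part of the current smux implementation): record the time of the last successful read and only send cmdNOP once the link has been idle longer than the keepalive interval.

// Sketch only: lastRead is a hypothetical int64 Unix timestamp updated
// atomically by recvLoop after every successful read.
func (s *Session) keepaliveWhenIdle() {
	ticker := time.NewTicker(s.config.KeepAliveInterval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			idle := time.Since(time.Unix(atomic.LoadInt64(&s.lastRead), 0))
			if idle < s.config.KeepAliveInterval {
				continue // data arrived recently, the link is already proven alive
			}
			s.writeFrameInternal(newFrame(byte(s.config.Version), cmdNOP, 0), ticker.C, 0)
		case <-s.die:
			return
		}
	}
}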

Which version is the latest?

I ask because:

  • if I do go get github.com/xtaci/smux (without modules initialized), Go downloads version 1.5.10
  • if I do go get github.com/xtaci/smux with modules initialized, Go downloads version 2.0.1+incompatible

Since I'm using modules, I realized that I am still on version 2.0.1. So I want to update; which one should I use (1.5.10 or 2.1.0)?

Race condition in TestAllocPutThenGet

While investigating #54, I decided to see if adding the -race flag would help, and came across a different intermittent test failure.

--- FAIL: TestAllocPutThenGet (0.00s)
    alloc_test.go:72: cannot fetch written []bytes from pool
FAIL

Test command:

go clean -testcache
while go test -p=1 -race -vet=off github.com/xtaci/smux; do go clean -testcache; done

Thanks!

/cc @rogers0, @LocutusOfBorg

What exactly does smux do?

I've read your documentation and would like to confirm a few things.
1. Does smux do the following:
a) Multiplexing: at the API level, multiple connections are established between a and b, while in reality there is only one link between a and b.
b) Reliability: kcp-go does not emulate TCP's three-way handshake, connection teardown, or connection timeouts; smux implements these, making it behave more like TCP.
c) Ordering: data received by kcp-go may arrive out of order, and smux guarantees ordering. I'm puzzled by this point; perhaps I've misunderstood it.

2. With smux, does each UDP packet cost an extra 8 bytes?

Thanks!

Serious keepalive issue

I migrated my project from yamux to smux a couple of weeks ago, mainly because of yamux's memory allocation behavior.
After resolving some initial issues, I thought everything was smooth and stable.
But the included keepalive function seems to be far from that.

Debugging revealed that the notifyBucket / dataReady-check approach is not reliable under heavy load and with higher RTT.
The async notification works, but dataReady is sometimes not set correctly for the timeout checks.
It is also important to mention that this only happens when testing under real-life conditions:
my local test cases are all fine, but testing between remote devices reveals the issue.

One mitigation would be removing the s.Close() right after the dataReady check.
Another would be replacing the check with a more reliable approach.
There is also the option of disabling keepalives and implementing a real Ping() function.

I've chosen the last option.

Could you please check whether integrating the patch makes sense, or alternatively provide a fix for the keepalive.

Thanks!

Here is my ping patch:
ping_patch.txt
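
The attached patch is not reproduced here. Purely for illustration, an application-level ping can also be built on top of the existing API, e.g. a dedicated short-lived stream that the peer is assumed to echo one byte on (a sketch, not part of smux and not the attached ping_patch.txt):

// Sketch: application-level round-trip measurement over a dedicated stream.
// Assumes the peer echoes back the single byte it receives.
func ping(sess *smux.Session, timeout time.Duration) (time.Duration, error) {
	stream, err := sess.OpenStream()
	if err != nil {
		return 0, err
	}
	defer stream.Close()

	stream.SetDeadline(time.Now().Add(timeout))
	start := time.Now()
	if _, err := stream.Write([]byte{0}); err != nil {
		return 0, err
	}
	var buf [1]byte
	if _, err := io.ReadFull(stream, buf[:]); err != nil {
		return 0, err
	}
	return time.Since(start), nil
}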

v1.1.0 unit test TestRandomFrame failed on debian buildd

=== RUN   TestRandomFrame
panic: Fail in goroutine after TestReadStreamAfterSessionClose has completed

goroutine 2341 [running]:
testing.(*common).Fail(0xc00469c200)
	/usr/lib/go-1.11/src/testing/testing.go:546 +0x135
testing.(*common).Error(0xc00469c200, 0xc002d80280, 0x1, 0x1)
	/usr/lib/go-1.11/src/testing/testing.go:616 +0x6f
github.com/xtaci/smux.setupServer.func1(0x5d59c0, 0xc0041be080, 0x5d6b80, 0xc00469c200)
	/<<BUILDDIR>>/golang-github-xtaci-smux-1.1.0+ds/obj-x86_64-linux-gnu/src/github.com/xtaci/smux/session_test.go:27 +0xc0
created by github.com/xtaci/smux.setupServer
	/<<BUILDDIR>>/golang-github-xtaci-smux-1.1.0+ds/obj-x86_64-linux-gnu/src/github.com/xtaci/smux/session_test.go:24 +0xb5
FAIL	github.com/xtaci/smux	5.431s
dh_auto_test: cd obj-x86_64-linux-gnu && go test -vet=off -v -p 4 github.com/xtaci/smux returned exit code 1
make: *** [debian/rules:4: build-indep] Error 1
dpkg-buildpackage: error: debian/rules build-indep subprocess returned exit status 2

Any thoughts?

EOF issue

Hi, I've run into a problem: the EOF returned by Stream.Read is wrapped by errors.WithStack, which leads to incorrect error checks when used with some other libraries,
e.g. reverseproxy.go.
Is this a design mistake, or am I using it the wrong way?
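
As a caller-side workaround (a sketch; pkgerrors refers to github.com/pkg/errors), the wrapped error can be unwrapped before comparing against io.EOF:

// Sketch: treat a stack-wrapped io.EOF the same as a bare io.EOF.
// errors.Is works on Go 1.13+ when the wrapper supports Unwrap;
// pkgerrors.Cause strips github.com/pkg/errors wrappers explicitly.
func isEOF(err error) bool {
	return errors.Is(err, io.EOF) || pkgerrors.Cause(err) == io.EOF
}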

Return result code from remote server?

Is it possible to return a result code on a stream from a remote server? This would allow the local client to more quickly determine if a remote error occurred and provide a better user experience.

Examples:

  1. Client opens a mux to the remote server but uses the wrong cipher. The server responds with a "Reauthorize" code so the client doesn't keep making connections.

  2. Client opens a stream on an existing mux but the server is too busy. The server responds with a "Back off" code so the client can drop the mux and open one elsewhere.

  3. Client opens a stream with an http/s request to a remote domain, but the remote is not responding. The server responds with "Can't connect" so the client can error out quickly without waiting for a timeout.

Thanks!
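
One way to get this behaviour today, without any protocol change, is purely application-level (a sketch): have the server write a one-byte status code as the first byte of every accepted stream, and have the client read that byte before using the stream.

// Sketch: application-level result codes carried in-band on each stream.
const (
	statusOK          byte = 0
	statusReauthorize byte = 1
	statusBackOff     byte = 2
	statusCantConnect byte = 3
)

// Server side, after deciding the outcome of the request.
func reply(stream *smux.Stream, code byte) error {
	_, err := stream.Write([]byte{code})
	return err
}

// Client side, before treating the stream as usable.
func checkStatus(stream *smux.Stream) (byte, error) {
	var b [1]byte
	if _, err := io.ReadFull(stream, b[:]); err != nil {
		return 0, err
	}
	return b[0], nil
}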

broken pipe if session is closed "early"

Hello!

EDIT: See the test case in the next comment. I also tested with yamux and it behaves the same, which makes me less sure whether this is a bug. If it isn't, please point me in the right direction 👍

I noticed that once my server sent its data and closed the session, I would sometimes get a broken pipe on the client, which would cause data loss.

What happens is that the server sends a file of ~300 KB, immediately closes the stream, returns, and closes the session. The client receives the file, but the s.die case in stream.go's Read method fires before the internal stream buffer has been emptied.

I added some prints to the <-s.die case and there was still data in the buffer (see the code block at the bottom).

Is this by design? If so, how should I work around it? I would expect the internal buffer of a stream to be drained first rather than getting a "broken pipe" error, but there seems to be a race?

I even verified with Wireshark that all bytes are sent over the network and received by the client, but my data is still corrupted sometimes because of this.

READ:
	select {
	case <-s.die:
		fmt.Println("smux: read broken pipe")
		s.bufferLock.Lock()
		n, err = s.buffer.Read(b)
		fmt.Printf("smux: buffer still had %v bytes to be read\n", n)
		s.bufferLock.Unlock()
		return 0, errors.New(errBrokenPipe)
	case <-deadline:
		return n, errTimeout
	default:
	}
  ...

C implementation

As kcp-go is compatible with the original C implementation of KCP, it makes sense to create a C version of smux as well.

Do you have any plans on this?

It would be especially useful for C/C++ multiplayer games with Go server.

Blocked streams under heavy load

smux starts blocking/dropping segments when under heavy load.
When forwarding rate is below my link rate (>100 Mbps) everything is fine.
Once the rate goes above 100 Mbps, smux starts to block exisitng and new streams.
This applies to client and to server role.

I've spent too much time with tracing and debugging my code and finally decided
to switch back to yamux. Problem disappeared immediatelly.

Please support.

Uncleaned timers cause excessive memory usage

ROUTINE ======================== github.com/xtaci/smux.(*Stream).Read in /Users/joesis/go/pkg/mod/github.com/xtaci/[email protected]/stream.go
         0   181.34MB (flat, cum) 72.23% of Total
         .          .     81:		}
         .          .     82:
         .          .     83:		var timer *time.Timer
         .          .     84:		var deadline <-chan time.Time
         .          .     85:		if d, ok := s.readDeadline.Load().(time.Time); ok && !d.IsZero() {
         .   181.34MB     86:			timer = time.NewTimer(time.Until(d))
         .          .     87:			deadline = timer.C
         .          .     88:		}
         .          .     89:
         .          .     90:		select {
         .          .     91:		case <-s.chReadEvent:
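
A sketch of the usual fix (not necessarily the patch that was actually applied): stop the timer once the wait is over so it can be collected right away instead of lingering in the runtime timer heap until the deadline elapses.

// Sketch: deadline-bounded wait that does not leak timers.
var deadline <-chan time.Time
if d, ok := s.readDeadline.Load().(time.Time); ok && !d.IsZero() {
	timer := time.NewTimer(time.Until(d))
	defer timer.Stop() // if Read loops instead of returning, call Stop() right after the select
	deadline = timer.C
}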

MaxFrameSize 65535 ?

// DefaultConfig is used to return a default configuration
func DefaultConfig() *Config {
	return &Config{
		KeepAliveInterval: 10 * time.Second,
		KeepAliveTimeout:  30 * time.Second,
		MaxFrameSize:      32768,
		MaxReceiveBuffer:  4194304,
	}
}

// VerifyConfig is used to verify the sanity of configuration
func VerifyConfig(config *Config) error {
	if config.KeepAliveInterval == 0 {
		return errors.New("keep-alive interval must be positive")
	}
	if config.KeepAliveTimeout < config.KeepAliveInterval {
		return fmt.Errorf("keep-alive timeout must be larger than keep-alive interval")
	}
	if config.MaxFrameSize <= 0 {
		return errors.New("max frame size must be positive")
	}
	if config.MaxFrameSize > 65535 {
		return errors.New("max frame size must not be larger than 65535")
	}
	if config.MaxReceiveBuffer <= 0 {
		return errors.New("max receive buffer must be positive")
	}
	return nil
}
if config.MaxFrameSize > 65535 {

Shouldn't this be changed to 32768?

broken pipe returned when reading half closed stream

I am using smux.Stream for reading and writing in two concurrent goroutines. After all data is written, stream.Close is called to end writing.

The stream.Read running at the local endpoint then immediately gets a broken pipe error. Reading at the remote endpoint gets io.EOF as expected. Even if I call stream.Close when EOF is reached, the local endpoint still gets the broken pipe error.

I read the source code of smux. It seems that smux does not support half-closing a stream. Any idea how to avoid the broken pipe issue?

Server should accept a context

If I want to be notified via a channel that the server has shut down, I can't currently do this. Accepting a context and exposing the listening context would make it possible. This way I could write the following code:

client code using smux

// Session will be closed by server after 10 seconds
ctx, cancel := context.WithTimeout(context.TODO(), 10*time.Second)
defer cancel()
session, err := smux.Server(conn, &smux.Config{Ctx: ctx})

go func() {
  // Get a reference to the server's context, to be notified that the server is complete
  <-session.Context().Done()
  // server has shutdown

  // The convention for returning errors is to expose .Err() method
  err := session.Err()
  if err != nil {
    log.Error(session.Err())
  }
}()

Inside server code, you would create a WithCancel context. Call this cancel when the server shuts down after updating Err()'s return.
smux/server

ctx, cancel := context.WithCancel(config.Ctx)
// cancel() when the server shuts down, notifying those listening to server context
session.ctx = ctx
session.cancel = cancel

func (s *session) Close() {
  s.cancel()
}

Let Stream.Read() return io.EOF when connection is closed

It's the standard behavior for connection handling, so the caller can handle it gracefully.

Currently, errConnReset is returned when cmdRST is seen. The cmdRST flag is sent by the remote end when it Close()s the stream, which is not an abnormal condition.

Some code doesn't need to use a lock

like this code:

s.dieLock.Lock()
select {
	case <-s.die:
		s.dieLock.Unlock()
		return errors.New(errBrokenPipe)
	default:
		close(s.die)
                s.dieLock.Unlock()
}

Don't use the lock, because die is a chan and it's really thread safe.

test code:

package main

import "time"

func main() {
	die := make(chan struct{})
	for i := 0; i < 100; i++ {
		go func() {
			select {
			case <-die:
			default:
				close(die)
			}
		}()
	}
	time.Sleep(10 * time.Second)
}

and run it with go run -race main.go

Feature request: close the session along with the upstream connection, or provide a way to query this

sess, err := smux.Client(conn, defaultSmuxConfig)

If the upstream conn is closed, sess.IsClosed() only checks s.die, so it still returns false, even though the session can no longer read or write.

Could you expose a way to query the state of

smux/session.go, lines 62 to 63 in 211bc4a:

chSocketReadError chan struct{}
chSocketWriteError chan struct{}

That way the upstream conn could be checked before the next I/O, avoiding errors during the I/O itself. For example:


	for {
		if sess.IsClosed() {
			openAnotherSess()
		}
		if sess.HasReadErr() || sess.HasWriteErr(){
			openAnotherSess()
		}
		stream, err := sess.AcceptStream()
		if err != nil {
			log(something)
		}

	}

Thanks, cheers

License for image files

Hi there,

Thanks for working on this project.
I'm going to package this for Debian. However, according to Debian's rules, the license of every file must be clear.
So I'm writing to double-check the following files:
curve.jpg mux.jpg smux.png

Did you create the above 3 files yourself? If so, we can safely treat them the same as the code you wrote, i.e. under the MIT/Expat license.
But if you borrowed any of them from another project, please let me know, so I can trace the license of each image file.
Thank you!

Cheers,
Roger

session.IsClosed() has different behavior with yamux

We were using session.IsClosed() to know when to quit listening on a session with yamux.

After switching to smux, session.IsClosed() no longer becomes true when the client side actively closes the session, as it did with yamux.

for {
	stream, err := session.AcceptStream()

	if err == nil {
		// do something
	} else {
		if session.IsClosed() { // this will be true when session.close() is called on client side in yamux, but not in smux.
			return
		} else {
			fmt.Printf("accept stream error %s", err)
			time.Sleep(time.Second)
		}
	}
}
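
A workaround sketch that does not depend on IsClosed(): treat any non-timeout error from AcceptStream as the end of the session, since a peer-initiated close surfaces there as a socket read error even though IsClosed() only reflects a local Close(). ErrTimeout is exported in recent smux versions (see the session.go quoted earlier); handle is a hypothetical per-stream handler.

// Sketch: rely on AcceptStream's error instead of polling IsClosed().
for {
	stream, err := session.AcceptStream()
	if err != nil {
		if err == smux.ErrTimeout {
			continue // only the Accept deadline expired
		}
		return // peer closed, protocol error, or local Close()
	}
	go handle(stream) // hypothetical per-stream handler
}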

Data race in Stream.numWritten when two goroutines are writing

I am using smux v2.0.11. This program shows a data race when two goroutines are writing to the same stream. I think it is in numWritten.

go.mod
module example.com/smux-race

require (
	github.com/klauspost/cpuid v1.2.1 // indirect
	github.com/klauspost/reedsolomon v1.9.3 // indirect
	github.com/templexxx/cpufeat v0.0.0-20180724012125-cef66df7f161 // indirect
	github.com/templexxx/xor v0.0.0-20181023030647-4e92f724b73b // indirect
	github.com/tjfoc/gmsm v1.0.1 // indirect
	github.com/xtaci/kcp-go v5.4.11+incompatible
	github.com/xtaci/lossyconn v0.0.0-20190602105132-8df528c0c9ae // indirect
	github.com/xtaci/smux/v2 v2.0.11
	golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc // indirect
	golang.org/x/net v0.0.0-20191007182048-72f939374954 // indirect
)
demo.go
// Demonstrates a data race when two goroutines are writing to the same stream.
// go run -race demo.go

package main

import (
	"io"
	"io/ioutil"
	"net"
	"time"

	"github.com/xtaci/kcp-go"
	"github.com/xtaci/smux/v2"
)

type DummyAddr struct{}

func (addr DummyAddr) Network() string { return "dummy" }
func (addr DummyAddr) String() string  { return "dummy" }

// ChanPacketConn simulates a net.PacketConn using two channels.
type ChanPacketConn struct {
	recv, send chan []byte
}

func (c *ChanPacketConn) ReadFrom(p []byte) (int, net.Addr, error) {
	buf := <-c.recv
	return copy(p, buf), DummyAddr{}, nil
}

func (c *ChanPacketConn) WriteTo(p []byte, addr net.Addr) (int, error) {
	buf := make([]byte, len(p))
	copy(buf, p)
	c.send <- buf
	return len(buf), nil
}

func (c *ChanPacketConn) Close() error                       { return nil }
func (c *ChanPacketConn) LocalAddr() net.Addr                { return DummyAddr{} }
func (c *ChanPacketConn) SetDeadline(t time.Time) error      { return nil }
func (c *ChanPacketConn) SetReadDeadline(t time.Time) error  { return nil }
func (c *ChanPacketConn) SetWriteDeadline(t time.Time) error { return nil }

func server(pconn net.PacketConn) error {
	ln, err := kcp.ServeConn(nil, 0, 0, pconn)
	if err != nil {
		return err
	}
	defer ln.Close()

	conn, err := ln.AcceptKCP()
	if err != nil {
		return err
	}
	defer conn.Close()

	sess, err := smux.Server(conn, smux.DefaultConfig())
	if err != nil {
		return err
	}
	defer sess.Close()

	stream, err := sess.AcceptStream()
	if err != nil {
		return err
	}
	defer stream.Close()

	_, err = io.Copy(ioutil.Discard, stream)

	return err
}

func client(pconn net.PacketConn) error {
	conn, err := kcp.NewConn2(DummyAddr{}, nil, 0, 0, pconn)
	if err != nil {
		return err
	}
	defer conn.Close()

	sess, err := smux.Client(conn, smux.DefaultConfig())
	if err != nil {
		return err
	}
	defer sess.Close()

	stream, err := sess.OpenStream()
	if err != nil {
		return err
	}
	defer stream.Close()

	go writeByte(stream, 'A')
	go writeByte(stream, 'B')

	var c chan struct{}
	<-c

	return nil
}

func writeByte(w io.Writer, b byte) error {
	for {
		_, err := w.Write([]byte{b})
		if err != nil {
			return err
		}
	}
}

func main() {
	c2s := make(chan []byte, 1024)
	s2c := make(chan []byte, 1024)
	go func() {
		err := server(&ChanPacketConn{c2s, s2c})
		if err != nil {
			panic(err)
		}
	}()
	err := client(&ChanPacketConn{s2c, c2s})
	if err != nil {
		panic(err)
	}
}
go run -race demo.go
$ go run -race demo.go 
==================
WARNING: DATA RACE
Read at 0x00c000088a6c by goroutine 19:
  github.com/xtaci/smux/v2.(*Stream).Write()
      $GOPATH/pkg/mod/github.com/xtaci/smux/v2@v2.0.11/stream.go:183 +0x6bd
  main.writeByte()
      demo.go:104 +0x88

Previous write at 0x00c000088a6c by goroutine 17:
  github.com/xtaci/smux/v2.(*Stream).Write()
      $GOPATH/pkg/mod/github.com/xtaci/smux/v2@v2.0.11/stream.go:201 +0x3b2
  main.writeByte()
      demo.go:104 +0x88

Goroutine 19 (running) created at:
  main.client()
      demo.go:94 +0x27b
  main.main()
      demo.go:120 +0x116

Goroutine 17 (running) created at:
  main.client()
      demo.go:93 +0x248
  main.main()
      demo.go:120 +0x116
==================

Reads on all streams fail after a single read error

See https://gist.github.com/joesis/340e21f44cc65e2119668d116f397e49 for a demo. Once the client TCP connection hits its read deadline (used only to demonstrate; any temporary error could trigger the problem), the session signals a socket read error, which is persisted, so later Reads on all streams fail. OpenStream and Write on the streams still work, however, until the session is closed by the keepalive timer. From the server side, everything works as if there were no problem at all.

I don't think the caller is expected to close the session when seeing a single read error on one stream, which could be specific to that stream and not affect the others, so such errors should at least be visible at the session level. I would even prefer closing the session, since it has entered an irrecoverable state.

Thoughts?

v1.0.7 calls time.Until() which requires golang 1.8 or later

v1.0.7 calls time.Until() which requires golang 1.8 or later. I hope kcptun can still run on Debian Stretch (stable). So could you make a patch to avoid using time.Until()?
Thank you!

Enclosed is the log from my build under stretch.

src/github.com/xtaci/smux/session.go:121: undefined: time.Until
src/github.com/xtaci/smux/stream.go:58: undefined: time.Until
src/github.com/xtaci/smux/stream.go:90: undefined: time.Until
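
For reference, time.Until(d) is equivalent to d.Sub(time.Now()), so a build that must stay compatible with Go 1.7 (Debian Stretch) only needs that substitution; a sketch:

// Go 1.8+: timer := time.NewTimer(time.Until(d))
// Go 1.7 and earlier, the equivalent without time.Until:
timer := time.NewTimer(d.Sub(time.Now()))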

How to tell if an smux is closed?

Question: We have an smux session open between two peers across the internet. If the remote peer goes down, how can we detect this?

We looked at session.IsClosed(), but it doesn't seem to trigger when the remote peer shuts down, so session.OpenStream() keeps returning a stream. When we attempt to use it, it predictably fails.
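
One way to detect a dead peer (a sketch, relying on the built-in keepalive rather than a new API): tighten the keepalive settings so the session closes itself when NOPs stop coming back, after which IsClosed() reports true and OpenStream() returns an error instead of a usable stream.

// Sketch: tune keepalive so a vanished peer is detected within roughly 15 s.
func dialDetectingDeadPeer(conn net.Conn) (*smux.Session, error) {
	cfg := smux.DefaultConfig()
	cfg.KeepAliveInterval = 5 * time.Second  // send cmdNOP every 5 s
	cfg.KeepAliveTimeout = 15 * time.Second  // close the session if nothing is read for 15 s
	return smux.Client(conn, cfg)
}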

New release please

Hey, I'm the developer of CovenantSQL, which imports "xtaci/smux". Users of CovenantSQL need the latest features from your master branch, but dep keeps pulling the latest tag.

test fails at high possibility

While building the Debian package, I found that go test fails with high probability (more than 50% in my tests).
Enclosed is the log for both a FAIL and an OK run.
Hope you have a clue about this. Thank you!

=== RUN   TestReadStreamAfterSessionClose
--- PASS: TestReadStreamAfterSessionClose (0.00s)
        session_test.go:374: broken pipe
=== RUN   TestWriteStreamAfterConnectionClose
--- PASS: TestWriteStreamAfterConnectionClose (0.00s)
=== RUN   TestNumStreamAfterClose
--- PASS: TestNumStreamAfterClose (0.00s)
=== RUN   TestRandomFrame
--- PASS: TestRandomFrame (0.01s)
        session_test.go:497: Version:1 Cmd:188 StreamID:3548764726 Length:284
=== RUN   TestReadDeadline
--- PASS: TestReadDeadline (0.00s)
=== RUN   TestWriteDeadline
--- FAIL: TestWriteDeadline (0.00s)
        session_test.go:550: No error when writing with past deadline
FAIL
exit status 1
FAIL    github.com/xtaci/smux   4.248s
dh_auto_test: go test -v github.com/xtaci/smux returned exit code 1
=== RUN   TestReadStreamAfterSessionClose
--- PASS: TestReadStreamAfterSessionClose (0.00s)
        session_test.go:374: broken pipe
=== RUN   TestWriteStreamAfterConnectionClose
--- PASS: TestWriteStreamAfterConnectionClose (0.00s)
=== RUN   TestNumStreamAfterClose
--- PASS: TestNumStreamAfterClose (0.00s)
=== RUN   TestRandomFrame
--- PASS: TestRandomFrame (0.01s)
        session_test.go:497: Version:1 Cmd:188 StreamID:3548764726 Length:284
=== RUN   TestReadDeadline
--- PASS: TestReadDeadline (0.00s)
=== RUN   TestWriteDeadline
--- PASS: TestWriteDeadline (0.00s)
PASS
ok      github.com/xtaci/smux   4.305s
 fakeroot debian/rules binary
dh binary --buildsystem=golang --with=golang

smux multiplexes multiple TCP connections?

As I understand it, smux uses a single TCP connection to simulate the effect of multiple virtual TCP connections. But kcptun says smux is designed to host multiple TCP links on the same connection.
If you had multiple TCP connections, you would have the same source and destination addresses but different ports, so the user would need to keep track of which TCP connection a stream represents. Do I understand that correctly?
Can you recommend git projects that use smux?
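
To make the model concrete, here is a minimal sketch (example.com:9000 is a placeholder address): one real TCP connection per side, arbitrarily many smux streams on top, and smux, not the user, tracks which bytes belong to which stream via stream IDs.

// Sketch: many logical streams over exactly one TCP connection.
package main

import (
	"fmt"
	"net"

	"github.com/xtaci/smux"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:9000") // the single real TCP connection
	if err != nil {
		panic(err)
	}
	sess, err := smux.Client(conn, nil) // nil config means DefaultConfig
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Two independent virtual connections; both ride on the same TCP socket
	// and are demultiplexed by stream ID inside smux.
	s1, _ := sess.OpenStream()
	s2, _ := sess.OpenStream()
	fmt.Fprintln(s1, "hello from stream", s1.ID())
	fmt.Fprintln(s2, "hello from stream", s2.ID())
}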

After switching gost to smux v2, speed drops to only 2 Mbps

https://github.com/ginuerzh/gost

Its mws mode uses the smux library.

After updating the smux library inside it to v2, throughput is only 2 Mbps,
while before the change it easily reached 40-50 Mbps.

I got scared and switched back immediately.

gost -L=mws://:80

gost -L=:1080 -F=mws://www.baidu.com:80?ip=23.123.21.33

Corrupt stream after expired WriteDeadline

If you set a write deadline and it expires, Write may return an incorrect number of bytes written (it always returns 0, although some bytes may actually have been sent anyway, since the write goes onto a request queue for the sendLoop). This effectively corrupts the stream from the client's perspective.

Here is an echo test that demonstrates: https://gist.github.com/ltucker/e63cdd323ed8480d606d809467f45f93
And a branch with the test here:
https://github.com/ltucker/smux/tree/bigwritedeadline

This may be expected behavior (e.g. tls.Conn is considered corrupt after a write deadline expires), but it's not documented as such, so I expected this to work.

Versions after v1.2.1 have a bug

Versions after v1.2.1 cause the whole tunnel to shut down once traffic gets heavy.

After updating this project to a version newer than v1.2.1 and using KCP mode, the entire tunnel disconnects as soon as traffic increases.
v1.2.1 is fine, but the two most recently released versions have the bug.
