enriquebris / goconcurrentqueue

Go concurrent-safe, goroutine-safe, thread-safe queue

License: MIT License

Go 100.00%
golang go concurrency concurrent slice goroutine-safe queue fifo fifo-queue concurrent-queue

goconcurrentqueue's People

Contributors

enriquebris, sijmenhuizenga

goconcurrentqueue's Issues

TestDequeueOrWaitForNextElementMultiGR stuck when executed on a single process

While testing #19 I noticed the Travis build is timing out. This problem also occurs on master, hence this issue. I was able to replicate the problem on my machine by limiting the maximum number of processors: GOMAXPROCS=1 go test -test.v

The test that gets stuck is
TestFixedFIFOTestSuite/TestDequeueOrWaitForNextElementMultiGR
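
For reference, the same single-processor limit can also be pinned from inside the test binary instead of through the environment; this is only a minimal sketch, assuming a TestMain hook that the suite may not currently have:

package goconcurrentqueue

import (
	"os"
	"runtime"
	"testing"
)

// TestMain pins the scheduler to a single P before any test runs,
// which mirrors running the suite with GOMAXPROCS=1 on the command line.
func TestMain(m *testing.M) {
	runtime.GOMAXPROCS(1)
	os.Exit(m.Run())
}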

I have tried debugging by adding print statements throughout the test, but I can't figure out what's wrong. I did find that lowering WaitForNextElementChanCapacity from 1000 to 500 makes the test succeed, but I don't think that solves the root problem.

Does anyone have an idea what could cause this?

Thanks for this amazing package, I am having a lot of fun working in this nice codebase.

Resource consumption seems very high despite dequeuing

Hi, I enqueue jobs in the main goroutine and dequeue them in another goroutine; however, resource usage does not go down afterwards. I don't know what I did wrong. Here is my implementation:

Main code:

func main() {
	initConfigs()
	route := gin.Default()
	route.Use(gzip.Gzip(gzip.DefaultCompression))

	proxyAPI := proxy_reserver.NewProxyAPI()
	proxyAPI.InitRoute(route, "/")

	go proxyAPI.DequeueUploadJobs() // I started the dequeue process here

	route.Run("0.0.0.0:" + viper.GetString("service.port"))
}

In the API handler I did:

func (app *ProxyAPI) UploadInstance(c *gin.Context) {
	cCp := c.Copy()
	bytes, err := ioutil.ReadAll(cCp.Request.Body)
	if err == nil {
		if queueInstance == nil {
			queueInstance = goconcurrentqueue.NewFIFO()
		}
		queueInstance.Enqueue(UploadJob{
			Data:   bytes,
			DatLen: c.Request.ContentLength,
		})
	}
}

And the DequeueUploadJobs():

func (app *ProxyAPI) DequeueUploadJobs() {
	fmt.Println("Start dequeuing instance jobs")
	for {
		if app.queueInstance != nil && app.queueInstance.GetLen() > 0 {
			app.Logger.Info(fmt.Sprintf("Upload queue size is %d", app.queueInstance.GetLen()))
			job, err := app.queueInstance.Dequeue()
			if err != nil {
				app.Logger.Info(fmt.Sprint(err))
				continue
			}

			c := job.(UploadJob)
			// My job
			// utils.LogInfo("Done execute meta " + key)
		} else {
			time.Sleep(1 * time.Second)
		}
	}
}

Can you tell me what's wrong with my code? Thank you so much!
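
For comparison, here is a minimal consumer sketch that blocks on the queue instead of polling GetLen() once per second. It assumes a single shared *goconcurrentqueue.FIFO and reuses the UploadJob shape from the handler above; it illustrates the blocking-dequeue pattern only, it is not a diagnosis of the memory behaviour:

package main

import (
	"fmt"
	"log"

	"github.com/enriquebris/goconcurrentqueue"
)

// UploadJob mirrors the struct used in the handler above.
type UploadJob struct {
	Data   []byte
	DatLen int64
}

// consumeUploads blocks until an element is available instead of
// polling GetLen() in a loop with a sleep.
func consumeUploads(queue *goconcurrentqueue.FIFO) {
	for {
		item, err := queue.DequeueOrWaitForNextElement()
		if err != nil {
			log.Println(err)
			continue
		}
		job := item.(UploadJob)
		fmt.Printf("processing %d bytes\n", len(job.Data))
		// job is not referenced after this point, so its Data slice
		// becomes eligible for garbage collection on the next cycle.
	}
}

func main() {
	queue := goconcurrentqueue.NewFIFO()
	queue.Enqueue(UploadJob{Data: []byte("example"), DatLen: 7})
	consumeUploads(queue)
}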

FixedFIFO with DequeueOrWaitForNextElement can get stuck forever

If I use 5 producers and one consumer, and the consumer uses DequeueOrWaitForNextElement to dequeue, the consumer gets stuck in ctx.Done() and cannot consume data. The producers then fill up the queue and keep reporting "FixedFIFO queue is at full capacity". At that point waitForNextElementChan is empty.
code:
package main

import (
	"fmt"
	"math/rand"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/enriquebris/goconcurrentqueue"
)

type AnyStruct struct {
	Field1 string
	Field2 int
}

var (
	queue *goconcurrentqueue.FixedFIFO
)

func main() {
	queue = goconcurrentqueue.NewFixedFIFO(4096)
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
	done := make(chan bool, 1)
	// will output: 4096
	fmt.Printf("queue's capacity: %v\n", queue.GetCap())

	for i := 0; i < 5; i++ {
		go product()
	}

	go consume()

	go func() {
		sig := <-sigs
		fmt.Println()
		fmt.Println(sig)
		done <- true
	}()

	fmt.Println("awaiting signal")
	<-done
	fmt.Println("exiting")
}

func product() {
	rand.Seed(time.Now().UnixNano())

	for {
		i := rand.Intn(1000000000)
		val := &AnyStruct{Field1: fmt.Sprintf("hello world %d", i), Field2: i}
		err := queue.Enqueue(val)
		if err != nil {
			// fmt.Println(err)
		}
		time.Sleep(time.Microsecond * 100)
	}
}

func consume() {
	for {
		//ctx, cancel := context.WithTimeout(context.Background(), time.Duration(time.Millisecond*80))
		//defer cancel()
		//val, err := queue.DequeueOrWaitForNextElementContext(ctx)
		val, err := queue.DequeueOrWaitForNextElement()
		if err == nil {
			ele := val.(*AnyStruct)
			fmt.Println(ele.Field1, ", ", ele.Field2)
		} else {
			fmt.Println(err)
			time.Sleep(time.Second)
		}
	}
}
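
Not an answer to the hang itself, but one remark on the commented-out context variant above: defer cancel() inside a for loop only runs when the function returns, so each iteration would leak a timer. A sketch of that variant with the cancel moved out of the defer; it reuses the package-level queue and the AnyStruct type from the snippet above and additionally needs the context import:

func consume() {
	for {
		// bound each wait instead of blocking indefinitely
		ctx, cancel := context.WithTimeout(context.Background(), 80*time.Millisecond)
		val, err := queue.DequeueOrWaitForNextElementContext(ctx)
		cancel() // release the timer right away; a defer here would pile up until consume() returns
		if err != nil {
			time.Sleep(time.Second)
			continue
		}
		ele := val.(*AnyStruct)
		fmt.Println(ele.Field1, ", ", ele.Field2)
	}
}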

Thread-safe channels? Aren't they already thread-safe?

Hi,
It's great that you've implemented a concurrent queue. It's very good as an academic exercise, but it's left me slightly confused.

Please can you help me understand your intent in sharing this work? Go has channels, which are effectively concurrent queues that can be safely shared across goroutines. I'm struggling to understand why you need to put locks around channels. Do you have a test which proves that channels aren't thread-safe? If so, that sounds like either a bug in Go, or more likely a bug in either your or my understanding. I've been in the game for a long time, and it worries me that this library is an actual thing. It feels to me like it's putting thread safety around thread safety.

I'd love to hear your thoughts. I'm always happy to be corrected in my thinking.

Sometimes blocked at Enqueue?

FixedFIFO sometimes blocks in the Enqueue function when the queue is empty, while I use context.WithTimeout and DequeueOrWaitForNextElementContext to dequeue items in another goroutine:

// Enqueue enqueues an element. Returns error if queue is locked or it is at full capacity.
func (st *FixedFIFO) Enqueue(value interface{}) error {
	if st.IsLocked() {
		return NewQueueError(QueueErrorCodeLockedQueue, "The queue is locked")
	}

	// check if there is a listener waiting for the next element (this element)
	select {
	case listener := <-st.waitForNextElementChan:
		// ----> *** blocked here!!!
		// send the element through the listener's channel instead of enqueue it
		listener <- value

	default:
		// enqueue the element following the "normal way"
		select {
		case st.queue <- value:
		default:
			return NewQueueError(QueueErrorCodeFullCapacity, "FixedFIFO queue is at full capacity")
		}
	}

	return nil
}

Dequeue (not blocking):

ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
qItem, err := tp.ReqCacheQueue.DequeueOrWaitForNextElementContext(ctx)
if err != nil {
	if err == context.DeadlineExceeded {
		return
	}
	qErr, ok := err.(*goconcurrentqueue.QueueError)
	if ok && qErr != nil {
		if qErr.Code() == goconcurrentqueue.QueueErrorCodeEmptyQueue {
			time.Sleep(100 * time.Millisecond)
			return
		}
	}
	log.Error("process task err: ", err)
	return
}
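
For what it's worth, one way the annotated send could be made non-blocking is to guard it with a default branch and fall back to the normal enqueue path when the listener has already gone away (for example because its DequeueOrWaitForNextElementContext timed out). This is only a sketch of the idea against the excerpt above, not the library's actual code or fix:

// inside FixedFIFO.Enqueue, replacing the blocking send shown above (sketch only)
select {
case listener := <-st.waitForNextElementChan:
	select {
	case listener <- value:
		// a dequeuer is still waiting: hand the element over directly
	default:
		// the listener was abandoned (e.g. its context expired); enqueue normally.
		// This may cost a missed wakeup, but it never blocks the producer.
		select {
		case st.queue <- value:
		default:
			return NewQueueError(QueueErrorCodeFullCapacity, "FixedFIFO queue is at full capacity")
		}
	}
default:
	// no listener at all: enqueue normally
	select {
	case st.queue <- value:
	default:
		return NewQueueError(QueueErrorCodeFullCapacity, "FixedFIFO queue is at full capacity")
	}
}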

"empty queue and can't wait for next element because there are too many DequeueOrWaitForNextElement() waiting"

I'm getting this error thrown... even though I only have a maximum of 3 waiters.

This is how I dequeue - is there a problem with timing out and rechecking? Does that leak something?

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	queuedElement, err := scoutQueue.DequeueOrWaitForNextElementContext(ctx)
	if err != nil {
		if err != context.DeadlineExceeded {
			log.Infof("SCOUT: Unexpected queue error: %s", err)
		}
		return nil
	}

Feature request: Enqueue and wait for lock

It would be very useful to have a method that allows one to enqueue without getting an error when the queue is locked; instead, it should wait for the queue to unlock.

The same holds true for Dequeue.
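
Until something like that lands in the library, here is a caller-side sketch of the requested behaviour, assuming the package's Queue interface, QueueError type, and QueueErrorCodeLockedQueue code shown elsewhere on this page. The polling interval is arbitrary; a real in-library implementation could block on the unlock event instead of polling:

package main

import (
	"fmt"
	"time"

	"github.com/enriquebris/goconcurrentqueue"
)

// enqueueWhenUnlocked keeps retrying Enqueue while the queue reports it is
// locked, instead of returning the "locked queue" error to the caller.
func enqueueWhenUnlocked(q goconcurrentqueue.Queue, value interface{}, retryEvery time.Duration) error {
	for {
		err := q.Enqueue(value)
		if err == nil {
			return nil
		}
		if qErr, ok := err.(*goconcurrentqueue.QueueError); ok && qErr.Code() == goconcurrentqueue.QueueErrorCodeLockedQueue {
			// queue is locked: wait and try again
			time.Sleep(retryEvery)
			continue
		}
		// any other error (e.g. full capacity on a FixedFIFO) is still returned
		return err
	}
}

func main() {
	q := goconcurrentqueue.NewFIFO()
	q.Lock()

	go func() {
		time.Sleep(200 * time.Millisecond)
		q.Unlock()
	}()

	if err := enqueueWhenUnlocked(q, "hello", 50*time.Millisecond); err != nil {
		fmt.Println("enqueue failed:", err)
		return
	}
	fmt.Println("enqueued once the queue was unlocked")
}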

fifo_queue consumers can get stuck forever.

Hi!
It looks like fifo_queue is not really safe to use and can lock the app forever.

Let's say we have length=0, and a consumer calls DequeueOrWaitForNextElementContext and reaches line 117.

Meanwhile a producer tries to Enqueue a new object: it takes the listener at line 46, but can't send the object yet at line 49 because no one is listening yet, so it adds the object to the slice and forgets about the listener.

The consumer then starts to consume waitChan at line 126, but that receive will never happen. I guess the whole idea of the for loop is to resolve that, but there is no number of iterations and no timeout value that can guarantee it will be able to Dequeue anything at line 129; other consumers can always get there faster. So the loop ends with no result and we end up trying to consume waitChan forever at line 137.

It is hard to catch but very painful. More than one consumer is required to hit it.
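
For anyone trying to chase this down, here is a minimal multi-consumer harness of the shape described above: several consumers racing on DequeueOrWaitForNextElementContext with short timeouts while a single producer keeps enqueueing. Whether it actually reproduces the hang depends on scheduling, so treat it as a sketch rather than a confirmed reproducer:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/enriquebris/goconcurrentqueue"
)

func main() {
	queue := goconcurrentqueue.NewFIFO()

	// several consumers, as in the scenario above (more than one is required)
	for c := 0; c < 4; c++ {
		go func(id int) {
			for {
				ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
				item, err := queue.DequeueOrWaitForNextElementContext(ctx)
				cancel()
				if err != nil {
					continue // timed out or empty: race for the next element again
				}
				fmt.Printf("consumer %d got %v\n", id, item)
			}
		}(c)
	}

	// single producer
	for i := 0; ; i++ {
		queue.Enqueue(i)
		time.Sleep(10 * time.Millisecond)
	}
}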
