
mediocregopher / radix.v2


This project forked from bpowers/radix


Redis client for Go

Home Page: http://godoc.org/github.com/mediocregopher/radix.v2

License: MIT License

Go 98.20% Makefile 1.80%

radix.v2's Introduction

Radix.v2 (deprecated)

This package is officially deprecated in favor of the new version, which does everything v2 does but is faster, cleaner, and more flexible.


Old README


Radix is a minimalistic Redis client for Go. It is broken up into small, single-purpose packages for ease of use.

  • redis - A wrapper around a single, non-thread-safe redis connection. Supports normal commands/response as well as pipelining.

  • pool - a simple, automatically expanding/cleaning connection pool. If you have multiple go-routines using the same redis instance you'll need this.

  • pubsub - a simple wrapper providing convenient access to Redis Pub/Sub functionality.

  • sentinel - a client for redis sentinel which acts as a connection pool for a cluster of redis nodes. A sentinel client connects to a sentinel instance and any master redis instances that instance is monitoring. If a master becomes unavailable, the sentinel client will automatically start distributing connections from the slave chosen by the sentinel instance.

  • cluster - a client for a redis cluster which automatically handles interacting with a redis cluster, transparently handling redirects and pooling. This client keeps a mapping of slots to nodes internally, and automatically keeps it up-to-date.

  • util - a package containing a number of helper methods for common tasks with the radix packages, such as SCANing either a single redis instance or every node in a cluster, or executing server-side Lua scripts.

Installation

go get github.com/mediocregopher/radix.v2/...
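
A minimal usage sketch (not part of the original README), assuming a redis instance on localhost:6379; it only uses the pool.New and Pool.Cmd calls that appear elsewhere on this page:

    package main

    import (
        "log"

        "github.com/mediocregopher/radix.v2/pool"
    )

    func main() {
        // a pool of 10 connections to a local redis instance
        p, err := pool.New("tcp", "localhost:6379", 10)
        if err != nil {
            log.Fatal(err)
        }
        if err := p.Cmd("SET", "greeting", "hello").Err; err != nil {
            log.Fatal(err)
        }
        greeting, err := p.Cmd("GET", "greeting").Str()
        if err != nil {
            log.Fatal(err)
        }
        log.Println(greeting)
    }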

Testing

go test github.com/mediocregopher/radix.v2/...

The test action assumes you have the following running:

  • A redis server listening on port 6379

  • A redis cluster node listening on port 7000, handling slots 0 through 8191

  • A redis cluster node listening on port 7001, handling slots 8192 through 16383

  • A redis server listening on port 8000

  • A redis server listening on port 8001, slaved to the one on 8000

  • A redis sentinel listening on port 28000, watching the one on port 8000 as a master named test.

The slot numbers are particularly important, as the tests for the cluster package do some trickery which depends on certain keys being assigned to certain nodes.

You can do make start and make stop to automatically start and stop a test environment matching these requirements.

Why is this V2?

V1 of radix was started by fzzy and can be found here. Some time in 2014 I took over the project and reached a point where I couldn't make improvements that I wanted to make due to past design decisions (mostly my own). So I've started V2, which redesigns some core aspects of the API and hopefully makes things easier to use and faster.

Here are some of the major changes since V1:

  • Combining resp and redis packages

  • Reply is now Resp

  • Hash is now Map

  • Append is now PipeAppend, GetReply is now PipeResp

  • PipelineQueueEmptyError is now ErrPipelineEmpty

  • Significant changes to pool, making it easier to use

  • More functionality in cluster

Copyright and licensing

Unless otherwise noted, the source files are distributed under the MIT License found in the LICENSE.txt file.

radix.v2's People

Contributors

beiping96, bpowers, chzyer, eloycoto, fabiokung, gerardrodes, hayesgm, imkira, jameshartig, kixelated, lithograph, mlaoji, mstoykov, nickynick, nilslice, tkrajina, ulfurinn, vharitonsky


radix.v2's Issues

IOErr occurs when a client is idle for more than 300 seconds

I use pool. I found that if a client is idle for more than 300 seconds (the redis server's default timeout is 300s), Cmd() will fail. Someone suggested using the following code as a workaround:

go func() {
    for {
        p.Cmd("PING")
        time.Sleep(1 * time.Second)
    }
}()

But could you move this into the library? Not everyone notices this problem.

how to handle restarts of redis server?

I'm using the pool package, and when the redis server is restarted the pool clients stop working.
Is there a good way to handle disconnections?

If the pool doesn't handle this case, is there a way to remove the client and add a new one to the pool?

thanks
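
One hedged workaround sketch (not an official radix.v2 feature): when a command comes back with an error after a restart, close the connection instead of returning it to the pool, so a later Get() dials a fresh one. The helper name is illustrative; Get, Put, Cmd and Close are the calls already used in the issues on this page.

    // doAndDiscard is an illustrative helper, not part of radix.v2: it runs one
    // command and, on error, closes the connection instead of re-pooling it so
    // that a later Get() dials a fresh connection.
    func doAndDiscard(p *pool.Pool, cmd string, args ...interface{}) (*redis.Resp, error) {
        conn, err := p.Get()
        if err != nil {
            return nil, err
        }
        resp := conn.Cmd(cmd, args...)
        if resp.Err != nil {
            // after a server restart this is most likely a dead connection;
            // close it rather than putting it back into the pool
            conn.Close()
            return resp, resp.Err
        }
        p.Put(conn)
        return resp, nil
    }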

It's not thread-safe, like at all.

Madness! No goroutines allowed? This code panics:

package main

import (
    "github.com/mediocregopher/radix.v2/redis"
    "time"
)

const NTHREADS = 10

func main() {
    done := false
    client, err := redis.Dial("tcp", "localhost:6379")
    if err != nil {
        panic(err.Error())
    }
    for i := 0; i < NTHREADS; i++ {
        go func() {
            for !done {
                err := client.Cmd("INCR", "doom").Err
                if err != nil {
                    panic(err.Error())
                }
            }
        }()
    }
    time.Sleep(1 * time.Second)
    done = true
}

Does it make sense why that might fail? Multiple threads are calling writeTo(c.writeBuf) at the same time, weaving their input together.

To fix this, you gotta grab a mutex in Client.Cmd().

Also, the whole concept of the stateful PipeAppend/PipeResp doesn't make sense if two goroutines are trying to construct a pipe at the same time. Their commands might interleave as they're appending. Instead, you should be able to create a Pipe object that you can Pipe.Append() to, then send the requests and get all the responses with Pipe.Resps().
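
For contrast, the README above points to the pool package when multiple goroutines share one redis instance. A minimal sketch of that approach (assuming the sync and pool packages are imported), using pool.New and Pool.Cmd as they appear elsewhere on this page:

    // share a *pool.Pool between goroutines instead of a single *redis.Client
    p, err := pool.New("tcp", "localhost:6379", 10)
    if err != nil {
        panic(err)
    }
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Pool.Cmd gets a connection, runs the command, and puts it back
            if err := p.Cmd("INCR", "doom").Err; err != nil {
                panic(err)
            }
        }()
    }
    wg.Wait()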

Check for nil?

Correct me if I'm missing something, but I don't see a way to check if, for example, a GET returns nil?

r := client.Cmd("GET", "key")
if r.IsType(redis.Nil) // This won't work since Nil = 7 and IsType is bitwise &

Thanks!

Pool vs Mutex

Are there any advantages to using a pool in a single-redis, multiple-goroutines context vs using a Mutex to protect access to a single connection? (I am asking this considering that redis uses a single thread to respond.)

Concurrency safe?

Hi @mediocregopher, thanks for a nice redis package for go.

A quick question I have is about how well radix.v2 plays with goroutines. Should I do extensive tests and/or add locking around Cmd() calls?

Too hard to tune connection pool size

We've got a somewhat latency-sensitive app and use e.g. sentinel.NewClientCustom to create a pool before any requests start being served, to avoid connecting on the fly. But how do we know how many connections are needed? In theory we can 1) set an approximately huge number, 2) check how many connections are used in reality and adjust that number (having too many connections wastes some resources, and since they are created sequentially it also affects start-up time). But one can't even just read the pool struct fields, thanks to Go's design ("cannot refer to unexported field").

Example solutions:

  • From what I've understood looking through pool.go, the real pool size is not stored. It could be stored and sentinel.Client.masterPools could be exported, so one can read the total size in a goroutine and alert when it changes
  • pool could log when it creates an extra connection (above the size), ideally including the new total size (so one could be able to e.g. grep it from the logs)

Also I've failed to grasp the logic of returning extra connections to the pool. Pool.pool is a chan with a fixed size. So when an extra-created connection is being returned... does it block, or close the connection, or what?

Pubsub is broken due to Timeout => LastCritical change

The change that was made doesn't just set LastCritical as the pull request states - it also closes the connection:

if r.IsType(IOErr) {
    c.LastCritical = r.Err
    c.Close()
}

This doesn't just break pubsub for implementations using LastCritical as the comment states, it breaks any pubsub implementation that uses a timeout, doesn't it?

The following sample code from the pubsub doc should no longer work:

for {
    r := subc.Receive()
    if r.Timeout() {
        continue 
    } else if r.Err != nil {
         kv["err"] = r.Err
    }
}

On a timeout, LastCritical is set and the connection gets closed, and we continue; at the start of the next loop iteration we call subc.Receive(), which errors out because we attempt to read from a closed connection.

I think this change should be reversed; the problem it was trying to solve could be handled by the programmer, who can evaluate how critical a timeout is to them. Meanwhile pubsub is severely handicapped because it cannot be used in conjunction with timeouts. However, if the functionality is critical, perhaps there could be a configuration option to make timeouts non-critical (timeouts = critical by default, but the pubsub wrapper sets them to not-critical).

Thank you.

Hidden stability trouble in the pool when requests have frequent peaks and troughs

The Pool is too simple. If requests come in peaks and troughs, some connections risk being created and then closed again soon afterwards, which affects the stability of the pool. Creating a connection can take several seconds when special authentication is involved, which makes the create/close churn worse and heavily affects request latency.

// Put returns a client back to the pool. If the pool is full the client is
// closed instead. If the client is already closed (due to connection failure or
// what-have-you) it will not be put back in the pool
func (p *Pool) Put(conn *redis.Client) {

My suggested solution:
type Client struct {
    // The most recent activity time
    LastAvailableTime time.Time
}

type Pool struct {
    //pool chan *redis.Client
    pool  list.List
    mutex sync.Mutex
}

type Cluster struct {
    // all pools list
    poolsSet      map[*pool.Pool]int
    poolsSetMutex sync.Mutex
    pingTicker    *time.Ticker
}

type Opts struct {
    // The maximum size of the connection pool to use for each host. Default is 10.
    PoolSize int
    // The initial size of the connection pool to use for each host.
    // Suggest setting it to less than 10 to avoid Cmd calls hanging for a long time.
    InitialSize int
    // The idle size of the connection pool to use for each host.
    IdleSize int
    // Max idle time: at least one PING must be sent within this period,
    // otherwise the connection is probably unavailable.
    MaxIdleTime time.Duration
}

// heartBeat keeps the connections active. Builds on the suggestion in #51.
func (c *Cluster) heartBeat() {
    for {
        select {
        case <-c.pingTicker.C:
            allConnNum := c.getConnNum()
            pools := c.getAllPools()
            for _, pool := range pools {
                heartbeatNum := c.getHeartBeatConnNum(time.Second, allConnNum, pool)
                for i := 0; i < heartbeatNum; i++ {
                    idleCon, err := pool.GetIdleConn(10 * time.Second) // idle for ten seconds
                    if err != nil {
                    } else if idleCon == nil {
                    } else {
                        // It's safe to close the connection now if we exceed the limit.
                        // This avoids closing connections when requests have frequent peaks and troughs.
                        if pool.Avail() >= c.o.IdleSize {
                            idleCon.Close()
                        } else {
                            idleCon.Cmd("PING")
                            pool.Put(idleCon)
                        }
                    }
                }

                if pool.Avail() < c.o.IdleSize {
                    con, err := pool.CreateNewClient() // create connections lazily to reduce pool-creation time
                    if err != nil {
                    }
                    if con != nil {
                        pool.Put(con)
                    }
                }
            }
        case <-c.stopCh:
            c.pingTicker.Stop()
            return
        }
    }
}

Why are closed connections put into the pool in NewCustom?

When an error occurs the redis client is closed, so why are those clients still put into the pool?

if err != nil {
    for _, client = range pool {
        client.Close()
    }
    pool = pool[0:]
    break
}

If the intent is to return an empty pool, should the code instead be as below?
pool = pool[:0]

Count number of open connections in Pool

Due to a bug in our code, I accidentally opened 1000s of connections to redis. In order to avoid this in the future (and to be able to track the number of connections), I want to be able to know how many connections are in the pool right now.

As I see it, this is just a matter of exposing len(pool), right? If you'd be willing to accept an MR (and if my approach is correct; I'm kind of new to Go), I'd gladly do it.

Just need a "go" from you ;)
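
For what it's worth, here is a hedged sketch using Pool.Avail(), which other issues in this tracker call (assuming log and time are imported; p and the interval are placeholders). It reports connections currently sitting idle in the pool rather than every connection ever dialled, so it only partially answers the request here:

    // periodically log how many idle connections the pool is holding
    go func() {
        for range time.Tick(10 * time.Second) {
            log.Printf("idle connections in pool: %d", p.Avail())
        }
    }()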

How can I use this for the transaction commands (WATCH, MULTI, ..., EXEC, UNWATCH)?

I am building a web store and use locks and transactions to keep the inventory correct. But how do I use a transaction? I wrote the following code, but I'm not sure it is right:

    _ = cache.Conn.Cmd("WATCH", "5")
    d, err := cache.Conn.Cmd("GET", "5").Str()
    if err == nil && d == "5" {
        cache.Conn.PipeAppend("MULTI")
        cache.Conn.PipeAppend("SET", "key1", "1")
        cache.Conn.PipeAppend("SET", "key2", "2")
        cache.Conn.PipeAppend("SET", "key3", "3")
        cache.Conn.PipeAppend("SET", "key4", "4")
        cache.Conn.PipeAppend("SET", "key5", "5")
        cache.Conn.PipeAppend("EXEC")
        _ = cache.Conn.PipeResp()
    }
    cache.Conn.Cmd("UNWATCH")

Can you give me some suggestions?
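
One possible shape, sketched rather than endorsed: drain one response per appended command and treat the reply to EXEC as the array of results, mirroring the pipelined MULTI/EXEC example in a later issue on this page. WATCH/UNWATCH are kept from the code above; error handling is elided, and conn stands in for cache.Conn.

    conn.Cmd("WATCH", "5")
    conn.PipeAppend("MULTI")
    conn.PipeAppend("SET", "key1", "1")
    conn.PipeAppend("SET", "key2", "2")
    conn.PipeAppend("EXEC")
    // one response per command before EXEC (MULTI's OK plus one QUEUED per SET)
    for i := 0; i < 3; i++ {
        if err := conn.PipeResp().Err; err != nil {
            // a failure here usually means the transaction should be abandoned
        }
    }
    // the reply to EXEC carries the queued commands' results,
    // or is a nil reply if a WATCHed key changed
    results, err := conn.PipeResp().Array()
    conn.Cmd("UNWATCH")
    // use results and err here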

Returning "wrong type" while key does not exist

Having the following case -

	redisClient, err := redisPool.Get()
	....
	defer redisPool.Put(redisClient)
	redisResp := redisClient.Cmd("GET", "someKey")
	respErr := redisResp.Err 
	strValue, strErr := redisResp.Str()

If someKey is non-existent, then the value of respErr will be "wrong type" (probably due to receiving nil).

I think that we should either get an empty string as strValue, a respErr saying the key does not exist, or a strErr saying an empty value was received. Another option is to have a default value.

Many times a non-existing key isn't necessarily an error.

DEL always returns "wrong type"

When I run a DEL command, it will always return "wrong type", even though the key existed beforehand and the deletion was executed correctly. (I'm using redis-cli to check it)
Unfortunately that renders me unable to use it in a MULTI-enabled pipeline because it makes the final EXEC fail with the same error.

Redis cluster and EVAL/EVALSHA

Hey, we're using radix at Twitch and recently switched to redis cluster. It seemed like things were working normally but we noticed one of the redis hosts was using more network traffic but performing the same number of operations. I monitored the host and it was receiving more EVALSHA commands than the others.

The issue is that radix always selects the first argument as the key. In the case of EVALSHA, the key is actually the 3rd* argument. Radix was using the SHA of the script instead, which caused all commands to be sent to the same host. Redis cluster handled the situation by returning a redirect to the correct node so everything appeared to work anyway.

I wasn't using util.LuaEval and thought it should fix the issue. However, I took a look at the code and it looks like a typo causes the wrong behavior anyway. The code is using c instead of cc which causes the same broken hashing logic.

I've made my own version of LuaEval to work around the issue and we're seeing perfectly balanced traffic now. It's worth fixing the util and maybe adding some documentation to Cmd. It's also something to keep in mind while you're working on the API for v3.
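
For reference, the Redis calling convention here is EVALSHA <sha> <numkeys> <key> [key ...] [arg ...], so the slot-determining key is the third argument after the command name. A hedged sketch of issuing it explicitly through a cluster client (Cmd as used elsewhere on this page; c, sha and the key are placeholders):

    // "1" says one key follows; "mykey" is what cluster slot routing should hash,
    // not the script's SHA
    resp := c.Cmd("EVALSHA", sha, 1, "mykey", "someArg")
    if resp.Err != nil {
        // handle error
    }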

Nil reply but "wrong type" is returned

Calling Bytes() on the result of "HGET": Nil means no data, but Bytes() returns an error. I fixed it like this:
func (r *Resp) Bytes() ([]byte, error) {
    if r.Err != nil {
        return nil, r.Err
    } else if r.typ == Nil { // return nil rather than an error
        return nil, nil
    } else if !r.IsType(Str) {
        return nil, errBadType
    }
    // ...
}

func (r *Resp) Str() (string, error) {
    b, err := r.Bytes()
    if err != nil {
        return "", err
    }
    if b == nil { // nil is returned now
        return "", errBadType // probably `return "", nil` is better here
    }
    return string(b), nil
}

HDEL or HEXISTS on non-existing keys

Hello,

I am using this redis go client for my application,

While doing an HGET on a non-existing field I get the error "wrong type", which means the field doesn't exist.
value, err := client.Cmd("HGET",key,field).Str()
However, an HDEL or HEXISTS returns no error.
err := client.Cmd("HDEL", key, field).Err

In this case err is nil.

Am I understanding anything wrong?
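
A hedged sketch of why the behaviour differs: HGET on a missing field returns a nil bulk reply (so Str() has nothing to convert), whereas HDEL and HEXISTS return integers (0 when the field is absent), so Err stays nil. Resp.Int() is assumed here alongside the Str() and Err already shown above:

    n, err := client.Cmd("HDEL", key, field).Int()
    if err == nil && n == 0 {
        // nothing was deleted: the field did not exist
    }
    exists, err := client.Cmd("HEXISTS", key, field).Int()
    if err == nil && exists == 0 {
        // the field does not exist
    }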

Hidden trouble in newPool and NewCustom: tolerate occasional but normal failures when creating a connection pool

I create more than 100 connections, and an error may happen during the dial process. The error is often a normal, transient case, but it causes creation of the whole pool to fail and discards all of the connections created so far.

My suggestions:
1. A limited number of retries should be allowed. For example:
func NewCustom(network, addr string, size int, df DialFunc) (*Pool, error) {
    var err error
    var continuousErrTimes int
    pool := make([]*redis.Client, 0, size)
    for i := 0; i < size; i++ {
        client, err = df(network, addr)
        if err != nil {
            continuousErrTimes++
            for _, client = range pool {
                client.Close()
            }
            if continuousErrTimes > cstTryTimes { // it should retry several times
                pool = pool[0:]
                break
            }
        } else {
            continuousErrTimes = 0
        }
        pool = append(pool, client)
    }
2. Tolerate partial failure. For example:
func (c *Cluster) newPool(addr string, clearThrottle bool) (*pool.Pool, error) {
    p, err := pool.NewCustom("tcp", addr, c.o.PoolSize, df)
    if err != nil {
        if p.Avail() == 0 { // every dial failed, so return the error
            c.poolThrottles[addr] = time.After(c.o.PoolThrottle)
            return nil, err
        }
        return p, nil // some connections failed but some succeeded; the pool is usable, so returning it is probably more reasonable
    }
    return p, err
}

Initial sentinel dial has no timeout

sentinel.NewClientCustom uses a hardcoded redis.Dial at the beginning. Shouldn't it use the passed df DialFunc instead, so one could pass a DialTimeout wrapper?
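
A hedged sketch of the kind of wrapper the issue has in mind; it assumes redis.DialTimeout takes (network, addr, timeout), which should be double-checked against the godoc:

    // a DialFunc that applies a 5-second dial timeout
    df := func(network, addr string) (*redis.Client, error) {
        return redis.DialTimeout(network, addr, 5*time.Second)
    }
    // df can then be handed to sentinel.NewClientCustom; the point of the issue
    // is that the initial dial should go through it as well.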

sentinel does not sense switch-master

Hi, I'm using radix v1, but I think this problem is not specific to the version.

The problem occurred when redis switched master automatically; the sentinel logs show that all three of my sentinels received the switch-master signal.

Sep  7 22:10:51 redis-sentinel03-30015 redis-sentinel[1085]: +switch-master  
Sep  7 22:10:51 redis-sentinel02-30015 redis-sentinel[1382]: +switch-master  
Sep  7 22:10:53 redis-sentinel01-30015 redis-sentinel[1124]: +switch-master

However, my program does not notice this; it still uses the old master connections. Unfortunately a "READONLY" error is returned for write commands.

I tried switching the master manually via the sentinel failover command, and also forced a switch by killing the master, but the problem above didn't show up again. So it may only occur in extreme conditions.

What could the reason be?
And what can I do to make my program more robust?

Implement more convenient support for transactions?

Hello,

The documentation for this library does not seem to offer any guidance on doing transactions as far as I can see. Pipelining is mentioned, but pipelining does not equate to a transaction, and pipelining by itself does not guarantee that the pipelined commands are executed atomically. You need transactions for that.

The Python Redis client library automatically/transparently uses transactions when you use pipelines, but that doesn't seem to be the case with this library?

The upstream documentation linked above explains it all, but to summarize:

Transactions are initiated by issuing the "MULTI" command, after which any other issued command will only return a "QUEUED" response (or error) until you issue the "EXEC" command, after which the server will return the response data to every queued command at once, or fail. So, transactions also imply special semantics for error handling.

With current functionality, transactions seem to be very awkward to use:

conn.PipeAppend("multi")
conn.PipeAppend("lrange", "list1", 0, -1)
conn.PipeAppend("lrange", "list2", 0, -1)
conn.PipeAppend("del", "list1")
conn.PipeAppend("del", "list2")
conn.PipeAppend("exec")

// this is completely redundant
for i := 0; i < 5; i++ {
    if err := conn.PipeResp().Err; err != nil {
        // do whatever
    }
}

resp, err := conn.PipeResp().Array()

if err != nil {
    //  something
}

// resp will now contain the result to every command

Surely there's a better way to handle this?

Maybe special transaction functionality should be implemented where you only have to fetch the response (and error) once, since the entire transaction will succeed or fail and checking every error is redundant.

Scan issues

Hello,

I am using radix to do some analytics pipelining from a few redis instances. To accomplish this I'm using util.Scan to build up a queue of keys to export.

Unfortunately, I found out the hard way that util.Scan does not provide scan count hints during its requests. This had the extremely time-consuming side effect of being blazing fast in local development and extremely slow in production.

To work around this issue I've implemented util.Scan with slight modifications:

func Scan(r redis.Cmder, ch chan string, cmd, key, pattern string, serverHint int) error {
  // ...
  args = append(args, cursor, "MATCH", pattern, "COUNT", serverHint)
}

This yields a huge performance increase because in large scale redis servers there is a high volume of empty cursors returned. I'm not quite sure yet how memory/key fragmentation applies to the cursors returned, but this issue is extremely difficult to detect on smaller instances.

For my case the 'non-optional' nature of serverHint is acceptable. If you choose to implement something similar, it may be better to set a sensible default?

Hangs for a long time because everything must go through cluster.spin

We have more than 100 connections to each redis cluster. Creating one pool with authentication takes more than a minute, and it occupies cluster.spin for that whole time. My service talks to more than 10 redis servers, so Cmd calls can be blocked for a very long time, which is unacceptable for my service. If the network or the redis servers fail, it becomes much worse.

A single channel-based 'spin' is simple but low-performance; everything funnelled through it becomes the bottleneck, and the 'Pool' implementation is also based on it.

My current solution:

  1. Modify the 'Pool' implementation to be based on a list protected by a sync.Mutex, so connections can be taken in either first-in-first-out or last-in-first-out order. 'Ping' is done in FIFO order to check each connection, but 'Cmd' is done in LIFO order, so 'Cmd' always uses the most recently active connections. With the current FIFO order, a connection may have been idle for a very long time and is much more likely to be unavailable.
  2. Give 'Pool' an 'initial size', 'idle size' and 'limit size', so that the 'initial size' can be very small to reduce pool-creation time.
  3. My implementation looks like this:

// heartBeat routine to keep the connections active. All pools share the same routine.
func (c Cluster) heartBeat() {
    for {
        select {
        case <-c.pingTicker.C: // one-second timer
            allConnNum := c.getConnNum()
            pools := c.getAllPools()
            for _, pool := range pools {
                if pool.Avail() < c.o.IdleSize { // create connections lazily
                    con, err := pool.CreateNewClient()
                    if err != nil {
                    }
                    if con != nil {
                        pool.Put(con)
                    }
                }
                // calculate how many connections will be checked in each interval
                heartbeatNum := c.getHeartBeatConnNum(time.Second, allConnNum, pool)
                for i := 0; i < heartbeatNum; i++ {
                    // in FIFO order, check whether the connection has been idle for 10 seconds, then ping it
                    idleCon, err := pool.GetIdleConn(time.Second * 10)
                    if err != nil {
                    } else if idleCon == nil {
                    } else {
                        idleCon.Cmd("PING")
                        pool.Put(idleCon)
                    }
                }
            }
        case <-c.stopCh:
            c.pingTicker.Stop()
            return
        }
    }
}

sentinel conn alive problem

When I use GetMaster to get a connection, I worry about whether the connection is still alive, because my redis server also has an idle timeout. The same problem exists with the pool client, and there you suggest using PING to solve it. Is it the same idea to PING the connection returned by GetMaster?

How can I ensure the redis.Client in the pool.Pool is connected?

I wrote a redis HTTP interface using the redis and pool sub-packages.

In the main function

p, err := pool.New("tcp", u.Host, concurrency) // concurrency = 3

In the function called in the main function

func redisDo(p *pool.Pool, cmd string, args ...interface{}) (reply *redis.Resp, err error) {
    conn, err := p.Get()
    errHndlr(err)
    defer p.Put(conn)

    // do the request.
    reply = conn.Cmd(cmd, args...)
    if err = reply.Err; err != nil {
        if err != io.EOF {
            Fatal.Println("redis", cmd, args, "err is", err)
        }
        return
    }

    return
}

My redis-server has a timeout config timeout 300.

Here is my problem: when the program is idle for more than 300 seconds, I get an EOF error back from redis. That is because the redis-server closes the connection, but I don't get notified of this when I use conn, err := p.Get().

So, what is the right way to use radix's pool package?

For now, I just retry 3 times to re-establish the connection when I get an EOF error. I'd prefer the pool to deal with the redis-server timeout silently, without my having to worry about it.

Could you give me some suggestions?
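
A hedged sketch of the retry-on-EOF workaround described above, wrapping the redisDo helper from this issue; the retry count and helper name are illustrative, not radix.v2 API:

    // redisDoRetry retries redisDo a few times when the connection was closed
    // by the server's idle timeout (surfacing as io.EOF)
    func redisDoRetry(p *pool.Pool, cmd string, args ...interface{}) (*redis.Resp, error) {
        var reply *redis.Resp
        var err error
        for attempt := 0; attempt < 3; attempt++ {
            reply, err = redisDo(p, cmd, args...)
            if err != io.EOF {
                break // success, or an error that a retry won't fix
            }
        }
        return reply, err
    }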

Auth request in the redis cluster conn

I want to connect to a redis cluster that has a password set, but I can't initialize the cluster client; I get the failure "NOAUTH Authentication required."
I read some code and found that commands are already sent to redis inside cluster.New() --> c.Reset().

I think it would be good if the New() func included an auth step.

Using the cluster package: "Could not get cluster info: EOF"

Using the radix.v2 cluster module, I found some error logs:
"Could not get cluster info: EOF" and "Cluster doesn't make sense, host:port might be gone"

So redis commands could not be run. I checked the redis cluster and everything looked ok, and after running a few commands everything worked again.

Maybe the connection timed out. How about adding check code when the pool is initialized, like this:

initialPool, err := c.newPool(o.Addr, true)
if err != nil {
    return nil, err
}
c.pools[o.Addr] = initialPool

go func() {
    for {
        p.Cmd("PING")
        time.Sleep(1 * time.Second)
    }
}()

go c.spin()
if err := c.Reset(); err != nil {
    return nil, err
}
return &c, nil

Would that solve the problem?

thanks.

Driver crashes when receiving map[string]interface{} as attributes

Hi,

I am a new user of the driver, many thanks for your work on it!

When converting from the former driver to radix, I inherited some methods that use map[string]interface{} for setting redis hashes.

Unfortunately this panics. You can find the trace below, and I can always reproduce it if you need me to test fixes.

Ta,
DanB

panic: runtime error: index out of range

goroutine 22 [running]:
github.com/mediocregopher/radix.v2/redis.writeTo(0x7fb87475a108, 0xc8201e8310, 0xc8201ee000, 0x0, 0x80, 0xaed380, 0xc820205d90, 0x101, 0x17, 0x0, ...)
    /usr/local/goapps/src/github.com/mediocregopher/radix.v2/redis/resp.go:644 +0x324
github.com/mediocregopher/radix.v2/redis.writeTo(0x7fb87475a108, 0xc8201e8310, 0xc8201ee000, 0x0, 0x80, 0xad03a0, 0xc8201dff60, 0x101, 0x25, 0x0, ...)
    /usr/local/goapps/src/github.com/mediocregopher/radix.v2/redis/resp.go:676 +0x89f
github.com/mediocregopher/radix.v2/redis.(*Client).writeRequest(0xc82008c480, 0xc820049268, 0x1, 0x1, 0x0, 0x0)
    /usr/local/goapps/src/github.com/mediocregopher/radix.v2/redis/client.go:166 +0x542
github.com/mediocregopher/radix.v2/redis.(*Client).Cmd(0xc82008c480, 0xd2f2c0, 0x5, 0xc8201dff40, 0x2, 0x2, 0xad03a0)
    /usr/local/goapps/src/github.com/mediocregopher/radix.v2/redis/client.go:85 +0x12b
github.com/mediocregopher/radix.v2/pool.(*Pool).Cmd(0xc820175830, 0xd2f2c0, 0x5, 0xc8201dff40, 0x2, 0x2, 0x0)
    /usr/local/goapps/src/github.com/mediocregopher/radix.v2/pool/pool.go:95 +0x104
ipbxsuite/storage.(*RedisStorage).SetHEntity(0xc8200270e8, 0xd52520, 0xb, 0xc820205730, 0xc, 0xc820205740, 0x5, 0xc8201fd410, 0x0, 0x0)
    /usr/local/goapps/src/ipbxsuite/storage/hentity.go:39 +0x5c0
ipbxsuite/apier.(*ApierV1).NewSIPEndpoint(0xc8201ec830, 0xc820205730, 0xc, 0xc820205740, 0x5, 0xc820205750, 0x4, 0x0, 0x0, 0xc820205760, ...)
    /usr/local/goapps/src/ipbxsuite/apier/endpoint.go:74 +0x632
reflect.Value.call(0xbb7c20, 0xd1ef80, 0x13, 0xd2e718, 0x4, 0xc820049ee8, 0x3, 0x3, 0x0, 0x0, ...)
    /usr/local/go/src/reflect/value.go:432 +0x120a
reflect.Value.Call(0xbb7c20, 0xd1ef80, 0x13, 0xc820049ee8, 0x3, 0x3, 0x0, 0x0, 0x0)
    /usr/local/go/src/reflect/value.go:300 +0xb1
net/rpc.(*service).call(0xc8201f81c0, 0xc8200136c0, 0xc8201ecaf0, 0xc8201fed80, 0xc8201df000, 0xd0c5e0, 0xc8201e0f00, 0xd9, 0xac5840, 0xc820205830, ...)
    /usr/local/go/src/net/rpc/server.go:383 +0x1c1
created by net/rpc.(*Server).ServeCodec
    /usr/local/go/src/net/rpc/server.go:477 +0x4ac

String concatenation in key

I have a small function in my Go program that prepares the key:

func redisKey(_id int) string { return "user:" + string(_id) }

I use the same function for both SETting and GETting a value:
err = storage.client.Cmd("SET", redisKey(_id), user).Err
r := storage.client.Cmd("GET", redisKey(_id))

Strangely, the GET command does not retrieve the value. Upon inspection with SCAN via the redis CLI, the key is stored as something along the lines of "user:\x06". When I hardcode the key in SET as "user:12", I can successfully retrieve the value and it is stored as "user:12", verified via the redis CLI.

This may be something to do with Go string concatenation but, being relatively new to Go, I'd appreciate any thoughts.
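
Most likely this is not the client at all but Go's string(int) conversion, which yields the character with that code point (here 0x06) rather than the decimal digits. A sketch of the usual fix, using the standard strconv package:

    import "strconv"

    // "user:" + strconv.Itoa(12) == "user:12", whereas "user:" + string(12)
    // is "user:" followed by the single byte 0x0c
    func redisKey(_id int) string { return "user:" + strconv.Itoa(_id) }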

Using a pipe from the pool causes "bytes.Buffer: truncation out of range"

I have a pool connected to a redis instance.
I create this pool as follows:
_pool, err := pool.New("tcp", address, 100)

There are 300 goroutines using this pool to send commands.
Most of them use the pool as follows (call it eg1):
_pool.Cmd(cmd, args)

And 50 goroutines use a pipe obtained from this pool, as follows (call it eg2):
c, err := _pool.Get()
c.PipeAppend(cmd, args)
c.PipeAppend(cmd, args)
c.PipeAppend(cmd, args)
c.PipeResp()

The problem is that eg1 throws a panic: bytes.Buffer: truncation out of range

The call stack is as follows:

goroutine 83 [running]:
bytes.(*Buffer).Truncate(0xc8205e3500, 0x0)
/usr/local/go/src/bytes/buffer.go:69 +0xbf
bytes.(*Buffer).WriteTo(0xc8205e3500, 0x7fe8513a6868, 0xc82029c050, 0x33, 0x0, 0x0)
/usr/local/go/src/bytes/buffer.go:222 +0x1af
github.com/mediocregopher/radix.v2/redis.(*Client).writeRequest(0xc8212a6180, 0xc8500a2ea8, 0x1, 0x1, 0x0, 0x0)
/home/stanford/golang/src/github.com/mediocregopher/radix.v2/redis/client.go:172 +0x5e3
github.com/mediocregopher/radix.v2/redis.(*Client).Cmd(0xc8212a6180, 0x9116b8, 0x4, 0xc853140320, 0x1, 0x1, 0x7c6780)
/home/stanford/golang/src/github.com/mediocregopher/radix.v2/redis/client.go:85 +0x12b
github.com/mediocregopher/radix.v2/pool.(*Pool).Cmd(0xc8212c24e0, 0x9116b8, 0x4, 0xc853140320, 0x1, 0x1, 0x0)
/home/stanford/golang/src/github.com/mediocregopher/radix.v2/pool/pool.go:95 +0x104

BTW, I'm not using the latest version of the radix pool.

I'll try the latest version later.

MaxConns for Pool

Pool currently provides a size parameter that sets an upper limit on the number of idle connections, but doesn't appear to set upper bounds on the number of connections that might be dialled under pool.Get.

Redis will bounce connections above its connection limit, although from what I can discern p.df(p.Network, p.Addr) won't timeout (by default) and therefore will block until it connects.

It might be worthwhile instead blocking locally until the available connections in the pool are < max rather than attempting to open new ones over the limit.

An error like "use of closed network connection" occurred when using the sentinel package of radix.v2

I have a program that runs ten goroutines to pop messages from redis.

The program runs well at the beginning, but after running for some time it gets an EOF error,

and then all goroutines get the error "write tcp xx.xx.xx.xx:25252->xx.xx.xx.xx:7239: use of closed network connection" when reading a message from redis in this code: redisClient.Cmd("RPOP", p.queue).Bytes()

How can I fix this issue?
