More effective network communication, two-way calling, notify and broadcast supported.
License: MIT License
When a subscribing client disconnects from the server without unsubscribing first, pubsub.Server.Publish() gives an error:
2021/09/22 19:04:49.612 [ERR] [Publish] [topic: 'Broadcast'] failed client stopped, from Server to 127.0.0.1:35306
From reading the code around pubsub.NewServer():
svr.Handler.HandleDisconnected(svr.deleteClient)
Publish() should no longer be trying to send to a disconnected client, but this does not seem to be working.
I do see the client disconnecting in the log:
2021/09/22 19:04:30.206 [ERR] [APS SVR] 127.0.0.1:35306 Disconnected: EOF
I also tried to do this manually, but when I add the following to the code:
srv.Handler.HandleDisconnected(func(c *arpc.Client) {
log.Error().Msg("HANDLE_DISCONNECTED_TEST")
})
nothing happens either.
Because I can see that client tracking is already implemented internally, we shouldn't have to re-implement it ourselves. For Broadcast, for example, we currently have to keep our own var clientMap = make(map[*arpc.Client]struct{}).
It may also need to be switched to a read-write lock.
If possible, please leave a message with information about the projects in which you are using arpc.
Could the coder implementer provide bodyLength? From reading the code, the only way at the moment is to prepend the specified size to the front of the buffer. Or could the coder implementer provide a function to determine whether the request has been fully read?
I'd like to discuss something.
I would like to work with arpc
because the benchmark numbers from the tests I made are very good. It has great performance. Fantastic work @lesismal!
In my test environment I have a replication of containers with different IP addresses within one network. One container should connect to multiple clients. And if a connection is lost, it should start retrying to connect. A simplified version of the code looks like:
...
var clients = make(map[string]*arpc.Client)
...
func connectClients() {
// find all IP addresses of the host (e.g. in docker, tasks.foo)
// Let's say we'll find 10.0.1.3 and 10.0.1.4
ips, err := net.LookupIP(addrClientLookup)
...
for _, ip := range ips {
host := ip.String() // capture a per-iteration copy for the closures below
if _, inList := clients[host]; !inList {
client, err := arpc.NewClient(func() (net.Conn, error) {
return net.DialTimeout("tcp", host+":8888", time.Second*1)
})
if err != nil {
...
return
}
...
client.Handler.HandleDisconnected(func(c *arpc.Client) {
...
delete(clients, host)
})
...
clients[host] = client
}
}
}
connectClients is called every second because a new container may have been added. My issue is when a connection gets lost. In some cases it starts a "Reconnect Trying x" loop, but in other cases it replaces the target IP address of c.Conn with one from another client. Like
Map[10.0.1.3][ 10.0.1.3:44108 -> 10.0.1.3:8888 ]
Map[10.0.1.4][ 10.0.1.3:45780 -> 10.0.1.4:8888 ] // correct
to something like
Map[10.0.1.3][ 10.0.1.3:44108 -> 10.0.1.3:8888 ]
Map[10.0.1.4][ 10.0.1.3:44120 -> 10.0.1.3:8888 ] // wrong
without even touching the HandleDisconnected handler. Map[10.0.1.4] should be deleted because the connection is lost. Anyway, is there a better way to handle multiple clients so that they don't interfere with each other in a failure situation?
err = client.Call("/echo/sync", &req, &rsp, time.Second*5, map[interface{}]interface{}{
"x-arpc-context": "xxxxx",
})
fmt.Printf("message values: %+v", ctx.Message.Values())
Called this way, message.Values() cannot be retrieved.
Line 23 in f60ba57
Why was this maxload designed in? Is it to limit the maximum number of packets? I don't see how this value can be modified or initialized.
Hello, I have a question about websocket message truncation: in the onMessage function, a message is processed in a loop via an offset. What is the reason for using the offset here?
arpc/extension/jsclient/arpc.js
Lines 179 to 188 in 9ff9485
While using it, I found that the computed bodyLen is incorrect. For example, when I push a Notify message from the server and break here, event.data.byteLength is 645, yet the bodyLen computed here is 119. Even though this is inside the while loop and offset advances before the next iteration, the message has already been passed to the handler in the first iteration.
arpc/extension/jsclient/arpc.js
Lines 211 to 214 in 9ff9485
Another problem: in the second iteration the header content no longer exists; the computed method and other fields are just parts of the original message body, so even if I wanted to reassemble the data, it doesn't seem possible.
arpc/extension/jsclient/arpc.js
Lines 190 to 192 in 9ff9485
Here is part of the code I'm using:
//JS:
client = new ArpcClient("ws://localhost:8888/ws", null)
// handler for the subscription
client.handle("/broadcast/info", function(ctx) {
console.log("[Info] ", ctx)
});
// request the subscription; on success, the handler above receives messages pushed by the server
client.call("/registerBroadcast", "xxxx", 5000, function(resp) {
console.log("response ", resp)
})
//Server:
func InitServer(port string) {
ln, _ := websocket.Listen("localhost:"+port, nil)
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Printf("url: %v\n", r.URL.String())
if r.URL.Path == "/" {
http.ServeFile(w, r, "chat.html")
} else if r.URL.Path == "/arpc.js" {
http.ServeFile(w, r, "arpc.js")
} else {
http.NotFound(w, r)
}
})
http.HandleFunc("/ws", ln.(*websocket.Listener).Handler)
go func() {
err := http.ListenAndServe("localhost:"+port, nil)
if err != nil {
fmt.Println("ListenAndServe: ", err)
panic(err)
}
}()
server := arpc.NewServer()
/*
Broadcast auth
request :
AccessKey string
response :
true or false
*/
server.Handler.Handle(pinApi.AddressForApi(pinApi.BROADCAST_AUTH), func(context *arpc.Context) {
var accessKey string
if err := context.Bind(&accessKey); err != nil {
context.Write(code.Failed)
return
}
if len(accessKey) > 0 && accessKey == defaultAccessKey { // "default" is a Go keyword; renamed here
BroadcastClients = append(BroadcastClients, context.Client)
context.Write(define.RPC_SUCCESS)
} else {
context.Write(code.Failed)
}
})
//broadcast info
go broadcastServerInfo()
server.Serve(ln)
}
func broadcastServerInfo() {
for {
if len(BroadcastClients) > 0 {
log.Println(len(BroadcastClients), " clients listening, broadcasting...")
// jsonData is a fairly long JSON payload
msg := RPCServer.NewMessage(arpc.CmdNotify, "/broadcast/info", jsonData)
for _, client := range BroadcastClients {
client.PushMsg(msg, arpc.TimeZero)
}
}
time.Sleep(time.Second * 1)
}
}
Expected:
> [Tunnel SVR] Running On: "[::]:1234"
> [Tunnel SVR] 192.168.1.100:62201 Connected
> [Tunnel SVR] 192.168.1.100:62201 Disconnected: EOF
> [Tunnel CLI] 192.168.1.8:29300 Connected
> [Tunnel CLI] 192.168.1.13:29300 Disconnected: EOF
> [Tunnel CLI] 192.168.1.13:29300 Reconnect Trying 1
> [Tunnel CLI] 192.168.1.13:29300 Reconnected
My attempt:
svrHandler := arpc.DefaultHandler.Clone()
svrHandler.SetLogTag("[Tunnel SVR]")
svr := &arpc.Server{
Codec: codec.DefaultCodec,
Handler: svrHandler,
}
arpc.DefaultHandler.SetLogTag("[Tunnel CLI]")
client, err := arpc.NewClient(func() (net.Conn, error) {
return net.DialTimeout("tcp", conf.ForwardTunnel, 3 * time.Second)
})
Question:
When constructing the Server struct literal myself, I am unable to initialize the unexported clients field, like the native arpc.NewServer() does below. Is there a problem with my approach? What should I do?
// NewServer creates an arpc Server.
func NewServer() *Server {
h := DefaultHandler.Clone()
h.SetLogTag("[ARPC SVR]")
return &Server{
Codec: codec.DefaultCodec,
Handler: h,
clients: map[*Client]util.Empty{},
}
}
After a server restart, the client's reconnection triggers neither HandleConnected nor HandleDisconnected.
Thanks for your work, arpc is awesome.
1. In a game scenario, sometimes a request from the frontend server must be forwarded to a specific server, not a random one.
ref: https://github.com/lesismal/arpc/blob/master/extension/micro/service.go#L210
2. RPCX has metadata called state, which is very useful in production.
ref: https://github.com/rpcxio/rpcx-examples/blob/master/state/client/client.go#L21
Are you considering adding these above?
Thanks!
I noticed that the JavaScript arpc.js files are a bit out of sync, with the extension folder's copy missing keepalive/ping/pong. Is that intentional?
> find . -name 'arpc.js' | xargs shasum
fb68d9ec55c510ddf9774d582db7be84f7282868 ./extension/jsclient/arpc.js
bf8402a8d060a8df4cf62090f93e03159831dc2e ./examples/httprpc/arpc.js
bf8402a8d060a8df4cf62090f93e03159831dc2e ./examples/webchat/arpc.js
bf8402a8d060a8df4cf62090f93e03159831dc2e ./examples/protocols/websocket/jsclient/arpc.js
> diff ./extension/jsclient/arpc.js ./examples/httprpc/arpc.js
8a9,10
> var _CmdPing = 4;
> var _CmdPong = 5;
81a84,86
> if (typeof (cb) == 'function') {
> cb({ data: null, err: _ErrClosed });
> }
86a92,94
> if (typeof (cb) == 'function') {
> cb({ data: null, err: _ErrReconnecting });
> }
91,92d98
< this.seqNum++;
< var seq = this.seqNum;
108,142c114
<
< var buffer;
< if (request) {
< var data = this.codec.Marshal(request);
< if (data) {
< buffer = new Uint8Array(16 + method.length + data.length);
< for (var i = 0; i < data.length; i++) {
< buffer[16 + method.length + i] = data[i];
< }
< }
< } else {
< buffer = new Uint8Array(16 + method.length);
< }
< var bodyLen = buffer.length - 16;
< for (var i = _HeaderIndexBodyLenBegin; i < _HeaderIndexBodyLenEnd; i++) {
< buffer[i] = (bodyLen >> ((i - _HeaderIndexBodyLenBegin) * 8)) & 0xFF;
< }
<
< buffer[_HeaderIndexCmd] = _CmdRequest & 0xFF;
< buffer[_HeaderIndexMethodLen] = method.length & 0xFF;
< for (var i = _HeaderIndexSeqBegin; i < _HeaderIndexSeqBegin + 4; i++) {
< buffer[i] = (seq >> ((i - _HeaderIndexSeqBegin) * 8)) & 0xFF;
< }
<
< var methodBuffer = new TextEncoder("utf-8").encode(method);
< for (var i = 0; i < methodBuffer.length; i++) {
< buffer[16 + i] = methodBuffer[i];
< }
<
< if (!isHttp) {
< this.ws.send(buffer);
< } else {
< this.request(buffer, this._onMessage);
< }
<
---
> this.write(_CmdRequest, method, request, this.seqNum, this._onMessage, isHttp);
156c128,155
< this.seqNum++;
---
> this.write(_CmdNotify, method, notify, function () { }, isHttp);
> }
> this.ping = function () {
> if (client.state == _SOCK_STATE_CLOSED) {
> return _ErrClosed;
> }
> if (client.state == _SOCK_STATE_CONNECTING) {
> return _ErrReconnecting;
> }
> client.write(_CmdPing, "", null, function () { });
> }
> this.pong = function () {
> if (client.state == _SOCK_STATE_CLOSED) {
> return _ErrClosed;
> }
> if (client.state == _SOCK_STATE_CONNECTING) {
> return _ErrReconnecting;
> }
> client.write(_CmdPong, "", null, function () { });
> }
> this.keepalive = function (timeout) {
> if (this._keepaliveInited) return;
> this._keepaliveInited = true;
> if (!timeout) timeout = 1000 * 30;
> setInterval(this.ping, timeout);
> }
>
> this.write = function (cmd, method, arg, cb, isHttp) {
158,159c157,158
< if (notify) {
< var data = this.codec.Marshal(notify);
---
> if (arg) {
> var data = this.codec.Marshal(arg);
168a168
>
173c173
< buffer[_HeaderIndexCmd] = _CmdNotify & 0xFF;
---
> buffer[_HeaderIndexCmd] = cmd & 0xFF;
174a175
> this.seqNum++;
178d178
<
183d182
<
187c186
< this.request(buffer, function () { });
---
> this.request(buffer, cb);
246a246,253
> switch (cmd) {
> case _CmdPing:
> client.pong();
> return;
> case _CmdPong:
> return;
> }
>
module arpc ?
Would you mind changing the default JSON codec to json-iterator and checking the performance once again?
Sorry for my bad English.
I have a scenario now: when doing distributed tracing, for example, the current traceID and spanID need to be passed downstream.
This kind of data is better handled inside middleware, without touching the business code.
I'd like to ask: once the client and server are connected, at what point, or through what mechanism, can I use ctx.Client.Call(....)?
My actual scenario is: after the server receives the client's data, it performs a time-consuming computation and then needs to send the server's data back over the same TCP connection. It is not convenient to call ctx.Write() directly in the server's handler, which is why I thought of having the server "call the client".
client, err := arpc.NewClient(func() (net.Conn, error) {
return net.DialTimeout("tcp", "localhost:10001", time.Second*3)
})
payload := Payload{}
client.Call("/send_payload", &payload, &rsp, time.Second*2)
client.Handler.Handle("/receive_payload", func(ctx *arpc.Context) {
log.Print("server call received")
var payload protocol.Payload
if err := ctx.Bind(&payload); err == nil {
log.Print("req payload: ", payload.ToString())
ctx.Write([]byte("ok"))
}
})
server := arpc.NewServer()
ready_chan := make(chan bool)
server.Handler.Handle("/send_payload", func(ctx *arpc.Context) {
var payload protocol.Payload
if err := ctx.Bind(&payload); err == nil {
log.Print("req payload: ", payload.ToString())
client = ctx.Client
ctx.Write([]byte("ok"))
ready_chan <- true
}
})
go func() {
<-ready_chan
log.Print("ctx.Client", client) // commenting out this line makes it crash; see the traceback in the next section
res := ""
payload := protocol.NewPayload(
1,
1,
1,
[]byte("server call client"),
)
client.Call("/receive_payload", &payload, &res, time.Second*20)
log.Print(fmt.Sprintf("server call client, req:%s, res:%s", payload.ToString(), res))
}()
server.Run("localhost:10001")
2021/09/09 15:16:13.832 [ERR] runtime error: runtime error: invalid memory address or nil pointer dereference
traceback:
goroutine 25 [running]:
runtime/debug.Stack(0xc000107c20, 0x1b6980, 0x2bf960)
c:/go/src/runtime/debug/stack.go:24 +0xa5
github.com/lesismal/arpc/util.Recover()
C:/Users/go/pkg/mod/github.com/lesismal/[email protected]/util/util.go:21 +0x5e
panic(0x1b6980, 0x2bf960)
c:/go/src/runtime/panic.go:969 +0x1c7
github.com/lesismal/arpc.(*handler).OnMessage(0xc0000bc000, 0xc0000ba000, 0xc00001c200)
C:/Users/go/pkg/mod/github.com/lesismal/[email protected]/handler.go:596 +0x2f4
github.com/lesismal/arpc.(*Client).recvLoop(0xc0000ba000)
C:/Users/go/pkg/mod/github.com/lesismal/[email protected]/client.go:735 +0x2bf
github.com/lesismal/arpc/util.Safe(0xc00003e520)
C:/Users/go/pkg/mod/github.com/lesismal/[email protected]/util/util.go:28 +0x50
created by github.com/lesismal/arpc.(*Client).run
C:/Users/go/pkg/mod/github.com/lesismal/[email protected]/client.go:680 +0x108
While using it, I found that when the server goes away unexpectedly, the client keeps trying to reconnect at a 1s interval with no way to interrupt it. Looking at the client.go code, I found the following logic:
Lines 752 to 771 in b7b6625
Expectation:
Could a callback on the reconnect count be added here? That would let the caller decide after how many attempts to give up re-establishing the connection and handle other business instead.
If a Handler called that frequently is a concern, could a maximum retry count be added to the Client configuration instead, after which the Disconnected handler runs directly? At the moment, the Disconnected handler only seems to run when the client actively calls Stop.
I forked and added some related code; it covers my use case, but it is untested and I don't dare open a pull request, so I'm raising this improvement suggestion here. Thanks for reading.
root@Dev101:~# go get -u github.com/lesismal/arpc
go: downloading github.com/lesismal/arpc v1.1.12
go: github.com/lesismal/arpc upgrade => v1.1.12
root@Dev101:~# go get github.com/lesismal/[email protected]
go get github.com/lesismal/[email protected]: github.com/lesismal/[email protected]: invalid version: unknown revision v1.2.0
root@Dev101:~# go get github.com/lesismal/[email protected]
go: downloading github.com/lesismal/arpc v1.1.13-0.20210906161738-72fae2cffa16
go: github.com/lesismal/arpc 1.2.0 => v1.1.13-0.20210906161738-72fae2cffa16
If the server calls one of the client's handlers and that handler times out,
then every subsequent call the server makes to that client times out immediately,
and all of the client's own calls to the server also time out immediately.
What should be done in this case? There is no way to guarantee that a client or server handler finishes within the timeout set for the call.
It shouldn't affect other requests, right? What about cancelling just that one request?
Hi lesismal. The Slack link in the project README has expired; could you update it?
Per the guidance from 300+ days ago, is opening an issue still the preferred way to ask questions about arpc?
https://www.v2ex.com/t/794435
I have a question about the BeforeRecv hook: it is called twice within a single recv cycle, roughly like:
for {
BeforeRecv()
conn.Read(buf)
}
The loop runs twice; on the second pass BeforeRecv is called first, and only then does the socket read block. This feels like a bug to me?
Hello, could you provide a simple PHP client example? I used the server and client from your demo; going through Go works, but calling the Go server from PHP fails. Could you provide a PHP demo? Below is my PHP code, implemented with Hprose:
try {
$client = \Hprose\Client::create('tcp://localhost:8888', false);
$res = $client->echo('test');
var_dump($res);
} catch (\Exception $e) {
var_dump($e->getMessage());
}
The error is: "response read error"
The server's Go code is:
package main
import (
"github.com/lesismal/arpc"
"github.com/json-iterator/go"
)
func main() {
server := arpc.NewServer()
server.Codec = jsoniter.ConfigCompatibleWithStandardLibrary
// register router
server.Handler.Handle("echo", func(ctx *arpc.Context) {
str := ""
if err := ctx.Bind(&str); err == nil {
ctx.Write("123")
}
})
server.Run("localhost:8888")
}
The server keeps logging: Disconnected: invalid body length: 369098752
When ctx.Write is called, the received values are echoed back verbatim, yet client.Call cannot get the latest values because it has no access to the complete message; they are available in the On handlers and in middleware, but those are global.
In one-way c/s calls this can be considered waste; the bigger the values, the more is wasted, and if the server has additionally written information of its own via middleware, even more data is echoed back.
So if the client could conveniently access the values, some scenarios could make use of it. It would also be good to make this optional; right now I clear the values manually in middleware, but that feels inelegant.
Hello,
I just updated to the latest version and noticed the SetLogLevel function has disappeared. This code no longer compiles: arpc.SetLogLevel(arpc.LogLevelError).
Please leave a message with information about your projects that are using arpc if possible.
**Test environment**
On the same machine, the Windows and Linux test results are fairly consistent.
Using the official examples:
arpc v1.1.9 client.Call
2021/08/17 09:47:32 [qps: 25756], [avg: 25796 / s], [total: 644904, 25 s]
2021/08/17 09:47:33 [qps: 25208], [avg: 25773 / s], [total: 670112, 26 s]
2021/08/17 09:47:34 [qps: 24706], [avg: 25734 / s], [total: 694818, 27 s]
2021/08/17 09:47:35 [qps: 26389], [avg: 25757 / s], [total: 721207, 28 s]
2021/08/17 09:47:36 [qps: 24529], [avg: 25715 / s], [total: 745736, 29 s]
2021/08/17 09:47:37 [qps: 21087], [avg: 25560 / s], [total: 766823, 30 s]
arpc v1.1.8 client.Call
2021/08/17 09:48:59 [qps: 35669], [avg: 35430 / s], [total: 885762, 25 s]
2021/08/17 09:49:00 [qps: 33920], [avg: 35372 / s], [total: 919682, 26 s]
2021/08/17 09:49:01 [qps: 34806], [avg: 35351 / s], [total: 954488, 27 s]
2021/08/17 09:49:02 [qps: 34576], [avg: 35323 / s], [total: 989064, 28 s]
2021/08/17 09:49:03 [qps: 33994], [avg: 35277 / s], [total: 1023058, 29 s]
2021/08/17 09:49:04 [qps: 34774], [avg: 35261 / s], [total: 1057832, 30 s]
v1.1.9 client.Notify
for k := 0; true; k++ {
rand.Read(data)
req := &HelloReq{Msg: base64.RawStdEncoding.EncodeToString(data)}
// rsp := &HelloRsp{}
err = client.Notify(method, req, time.Second*5)
if err != nil {
log.Printf("Call failed: %v", err)
// } else if rsp.Msg != req.Msg {
// log.Fatal("Call failed: not equal")
} else {
atomic.AddUint64(&qpsSec, 1)
}
}
2021/08/17 09:54:04 [qps: 6568], [avg: 63304 / s], [total: 1582624, 25 s]
2021/08/17 09:54:05 [qps: 52315], [avg: 62882 / s], [total: 1634939, 26 s]
2021/08/17 09:54:08 [qps: 51121], [avg: 62446 / s], [total: 1686060, 27 s]
2021/08/17 09:54:08 [qps: 96364], [avg: 63658 / s], [total: 1782424, 28 s]
2021/08/17 09:54:08 [qps: 15375], [avg: 61993 / s], [total: 1797799, 29 s]
2021/08/17 09:54:09 [qps: 43706], [avg: 61383 / s], [total: 1841505, 30 s]
v1.1.8 client.Notify
for k := 0; true; k++ {
req := &HelloReq{Msg: "hello from client.Call"}
// rsp := &HelloRsp{}
err = client.Notify(method, req, time.Second*5)
if err != nil {
log.Printf("Call failed: %v", err)
} else {
//log.Printf("Call Response: \"%v\"", rsp.Msg)
atomic.AddUint64(&qpsSec, 1)
}
}
2021/08/17 09:51:12 [qps: 464933], [avg: 515004 / s], [total: 12875112, 25 s]
2021/08/17 09:51:13 [qps: 340793], [avg: 508304 / s], [total: 13215905, 26 s]
2021/08/17 09:51:14 [qps: 445335], [avg: 505971 / s], [total: 13661240, 27 s]
2021/08/17 09:51:15 [qps: 498246], [avg: 505695 / s], [total: 14159486, 28 s]
2021/08/17 09:51:16 [qps: 385566], [avg: 501553 / s], [total: 14545052, 29 s]
2021/08/17 09:51:17 [qps: 504710], [avg: 501658 / s], [total: 15049762, 30 s]
v1.1.9-server, v1.1.8-client client.Notify
2021/08/17 09:55:59 [qps: 349016], [avg: 327355 / s], [total: 8183891, 25 s]
2021/08/17 09:56:00 [qps: 335811], [avg: 327680 / s], [total: 8519702, 26 s]
2021/08/17 09:56:01 [qps: 354877], [avg: 328688 / s], [total: 8874579, 27 s]
2021/08/17 09:56:02 [qps: 302469], [avg: 327751 / s], [total: 9177048, 28 s]
2021/08/17 09:56:03 [qps: 364375], [avg: 329014 / s], [total: 9541423, 29 s]
2021/08/17 09:56:04 [qps: 341278], [avg: 329423 / s], [total: 9882701, 30 s]
Running go mod tidy in the root directory fails. After commenting out the extension/protocol/quic code and running go mod tidy again, the root go.mod gains many extra dependencies.
I suggest improving the module management.
func (mp *MemPool) Malloc(size int) []byte {
pbuf := mp.pool.Get().(*[]byte)
if cap(*pbuf) < size {
if cap(*pbuf)+holderSize >= size {
*pbuf = (*pbuf)[:cap(*pbuf)]
*pbuf = append(*pbuf, holderBuffer[:size-len(*pbuf)]...) // <--- allocation occur when exceed cap, why not just create a new buffer ?
} else {
mp.pool.Put(pbuf)
newBuf := make([]byte, size)
pbuf = &newBuf
}
}
if mp.Debug {
mp.saveAllocStack(*pbuf)
}
return (*pbuf)[:size]
}
I put together a gist of the bones of my integration of ARPC with Gin. Not sure if something like this is already out there; I wanted to share it in case it wasn't. It took a bit of figuring out how to properly register the websocket listener with Gin and an http handler, and how to get that listener to the arpc Serve invocation.
https://gist.github.com/KevM/4aafd4873df884cc95fc8e606ab0f4e4
Line 630 in f60ba57
When cmd is notify, this line reports an error: there is no rh variable.
Currently, in req/res mode the client is blocked until the response comes back; hence, the whole connection is blocked.
Does it support a feature like fasthttp.PipelineClient, which can have multiple requests in flight at once? That way, multiple goroutines could share one client connection.
A scan detected that lesismal/arpc pulls in 204 open-source components in total, with 3 vulnerabilities.
Vulnerability title: Buger Jsonparser security vulnerability
Affected component: github.com/buger/[email protected]
Vulnerability ID: CVE-2020-35381
Description: Buger Jsonparser is a Go library by the developer Buger for working with JSON-formatted data.
jsonparser 1.0.0 has a vulnerability that allows attackers to cause a denial of service via a GET call.
Affected range: (∞, 1.1.1)
Minimum fixed version: 1.1.1
Introduction path of the affected component: github.com/lesismal/arpc@->github.com/lucas-clemente/[email protected]>github.com/francoispqt/[email protected]>github.com/buger/[email protected]
There are 3 more vulnerabilities; full report: https://mofeisec.com/jr?p=a0ce69
goroutine 38 [running]:
runtime/internal/atomic.panicUnaligned()
C:/Program Files/Go/src/runtime/internal/atomic/unaligned.go:8 +0x2d
runtime/internal/atomic.Xadd64(0x1212a09c, 0x1)
C:/Program Files/Go/src/runtime/internal/atomic/atomic_386.s:125 +0x11
github.com/lesismal/arpc.(*Client).newRequestMessage(0x1212a060, 0x1, {0x7946f350, 0x10}, {0x79457880, 0x11c68000}, 0x0, 0x1, {0x0, 0x0, ...})
F:/Go/pkg/mod/github.com/lesismal/[email protected]/client.go:564 +0x10d
github.com/lesismal/arpc.(*Client).CallAsync(0x1212a060, {0x7946f350, 0x10}, {0x79457880, 0x11c68000}, 0x7947da14, 0x12a05f200, {0x0, 0x0, 0x0})
It seems 64-bit atomic operations must be 8-byte aligned on 32-bit platforms?
Hello, if the client needs to send a message to the server as soon as the connection succeeds, can it send the message like the following code?
client.Handler.HandleConnected(func(connectedClient *arpc.Client) {
req := ""
err := connectedClient.Call("/callAfterConnected", "", &req, time.Second*5)
if err == nil {
fmt.Println("Call After Connected Success.")
} else {
fmt.Println("Call After Connected Failed. ", err, "\n", req)
}
})
I tried it and noticed something strange: sometimes this Handler does not fire, and when it does fire, it always fails with a timeout, at which point ARPC logs [WRN] [ARPC CLI] OnMessage: session not exist or expired.
Note: the client never needs to call Stop here, so I'm sure it's not triggered by Stop.
package main
import (
"net"
"time"
"github.com/lesismal/arpc/extension/micro"
"github.com/lesismal/arpc/extension/micro/etcd"
"github.com/lesismal/arpc/log"
)
func dialer(addr string) (net.Conn, error) {
return net.DialTimeout("tcp", addr, time.Second*3)
}
func main() {
var (
appPrefix = "app"
service = "echo"
endpoints = []string{"localhost:2379", "localhost:22379", "localhost:32379"}
serviceManager = micro.NewServiceManager(dialer)
)
discovery, err := etcd.NewDiscovery(endpoints, appPrefix, serviceManager)
if err != nil {
log.Error("NewDiscovery failed: %v", err)
panic(err)
}
defer discovery.Stop()
for {
time.Sleep(time.Second * 1)
client, err := serviceManager.ClientBy(service)
// this reports an error that the service cannot be found; it only succeeds after waiting a while first
if err != nil {
log.Error("get Client failed: %v", err)
} else {
req := "arden"
rsp := ""
err = client.Call("/echo", &req, &rsp, time.Second*5)
if err != nil {
log.Info("Call /echo failed: %v", err)
} else {
log.Info("Call /echo Response: \"%v\"", rsp)
}
}
time.Sleep(time.Second)
}
}
time.Sleep(time.Second * 1)
client, err := serviceManager.ClientBy(service)
// this reports an error that the service cannot be found; it only succeeds after waiting a while first
Could the following be added: after a client connects to the server, it registers a method via client.Handle, and the next time that client pushes a message, it is delivered directly to the corresponding client handler without being relayed through the server handler?
After updating to v1.2.15 we started getting the panic below. I was erroneously registering multiple handlers for the same route. The panic seems to occur when the arpc.Server is stopping.
My architecture was calling srv.Handler.Handle("duplicated/route", func...) with the same route string a few times because of a copy-paste error on my part.
The improper registration was my fault, but the panic didn't seem the best way to surface it. I would prefer the panic to happen immediately in the Handle call with a duplicate route. If you support multiple handlers on the same route, then there may be a different bug.
panic: handler exist for method /order/subscribe
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x20 pc=0x105147154]
goroutine 1 [running]:
github.com/lesismal/arpc.(*Server).Stop(0x14000147650)
/Users/kevm/go/pkg/mod/github.com/lesismal/[email protected]/server.go:113 +0x54
gitlab.com/neomantra/nimble/internal/stream.(*WebSocketServer).Stop(0x1400047db00)
/Users/kevm/neomantra/nimble/internal/stream/websocket_server.go:25 +0x2c
panic({0x10557aa00?, 0x1400003d430?})
/opt/homebrew/Cellar/go/1.22.1/libexec/src/runtime/panic.go:770 +0x124
github.com/lesismal/arpc.(*handler).handle(0x14000000b40, {0x105337336, 0x10}, 0x14000199530, {0x0, 0x0, 0x1?})
/Users/kevm/go/pkg/mod/github.com/lesismal/[email protected]/handler.go:594 +0x3c4
github.com/lesismal/arpc.(*handler).Handle(0x1400045a280?, {0x105337336?, 0x2?}, 0x0?, {0x0?, 0x0?, 0x140000180d0?})
/Users/kevm/go/pkg/mod/github.com/lesismal/[email protected]/handler.go:565 +0x28
gitlab.com/neomantra/nimble/internal/stream.RegisterStreamWithWebSocket[...](0x14000147650, {0x105703d00, 0x14000016168}, 0x1400045a280)
/Users/kevm/neomantra/nimble/internal/stream/stream.go:45 +0x268
A quick run shows quite a lot of data-race problems. I don't know whether they just haven't been addressed yet, or whether the races are deliberate for performance?
On the server side, for example:
github.com/lesismal/arpc/server.go:127
github.com/lesismal/arpc/server.go:128
github.com/lesismal/arpc/server.go:42
github.com/lesismal/arpc/server.go:182
On the client side, for example:
github.com/lesismal/arpc/client.go:726
github.com/lesismal/arpc/client.go:694
github.com/lesismal/arpc/handler.go:595
github.com/lesismal/arpc/handler.go:758
github.com/lesismal/arpc/client.go:694
It doesn't look production-ready yet?
arpc version 1.2.11
The scenario:
1. A single server + thousands to tens of thousands of clients; each client runs client.Call("/ping", &req, &rsp, 5*time.Second) on a 30s timer.
2. The ping response looks roughly like {"did":"123", "msg_type":1,"msg":{"content":"ping"}}
With no extra settings at all,
almost every client logs "client reconnecting" frequently and at irregular intervals.
Is there some extra configuration that needs to be set, or something that needs to be implemented?
It feels as if some temporarily idle connections are being closed because of the async connections, causing them to reconnect constantly?