
grpc-ecosystem / go-grpc-middleware


Golang gRPC Middlewares: interceptor chaining, auth, logging, retries and more.

License: Apache License 2.0

Go 96.98% Makefile 2.94% Shell 0.09%
grpc golang middleware generic-functions library logging authentication retries testing interceptor

go-grpc-middleware's People

Contributors

adam-26, aimuz, amenzhinsky, ash2k, bufdev, bwplotka, dependabot[bot], devnev, dmitris, domgreen, drewwells, iamrajiv, jkawamoto, johanbrandhorst, kartlee, khasanovbi, marcwilson-g, metalmatze, mkorolyov, nvx, ogimenezb, olivierlemasle, peczenyj, rahulkhairwar, stanhu, surik, takp, tegk, xsam, yashrsharma44


go-grpc-middleware's Issues

grpc_auth: AuthFuncOverride without dummy interceptor

Currently, to use ServiceAuthFuncOverride one needs to introduce a dummy interceptor:

func dummyInterceptor(ctx context.Context) (context.Context, error) {
	return ctx, nil
}

...

s := grpc.NewServer(
		grpc.StreamInterceptor(grpc_auth.StreamServerInterceptor(dummyInterceptor)),
		grpc.UnaryInterceptor(grpc_auth.UnaryServerInterceptor(dummyInterceptor)),
	)

Is there a better way to do this?
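For reference, the override hook can be implemented directly on the service; a minimal sketch (the myService type and its always-allow logic are made up for illustration), which still requires the dummy interceptor above to be installed:

package main

import (
	"context"
)

// myService is a hypothetical gRPC service implementation. By implementing
// grpc_auth.ServiceAuthFuncOverride (i.e. the AuthFuncOverride method below),
// the grpc_auth interceptor calls this method instead of the AuthFunc it was
// constructed with -- but the interceptor itself still has to be installed.
type myService struct{}

func (s *myService) AuthFuncOverride(ctx context.Context, fullMethodName string) (context.Context, error) {
	// Per-service authentication/authorization decision goes here.
	return ctx, nil
}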

build a proxy to NATS with this

I was checking out https://github.com/mwitkow/grpc-proxy, because I want to pass incoming gRPC messages onto NATS - it makes microservices so much easier, I find. However, in that project's issues, the problem was that the gRPC team would not accept the customisation you made to get access to the raw bytes of the data in the stream.

So, would the new interceptors allow this proxying to be achieved?

Support grpc's WithDetails and errdetails for validator middleware

Google has a bunch of error detail protobuf messages here: https://godoc.org/google.golang.org/genproto/googleapis/rpc/errdetails

I am currently performing validations by hand like so:

s, _ := status.Newf(codes.InvalidArgument, "invalid input").WithDetails(&errdetails.BadRequest{
	FieldViolations: []*errdetails.BadRequest_FieldViolation{
		{
			Field:       "SomeRequest.email_address",
			Description: "INVALID_EMAIL_ADDRESS",
		},
		{
			Field:       "SomeRequest.username",
			Description: "INVALID_USER_NAME",
		},
	},
})

return s.Err()

I am wondering if you could consider the following (a rough sketch follows the list):

  • Support validation of the whole request and return all bad fields in the error message.
  • Support using errdetails.BadRequest to report which field is invalid.
  • Allow customization of the description for each field violation.
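A rough sketch of the first two bullets (the badRequestError helper and the package name are made up): collect every violation found while validating a request and return them in one InvalidArgument status carrying an errdetails.BadRequest.

package validate

import (
	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// badRequestError wraps all collected field violations in a single
// InvalidArgument status so the caller sees every bad field at once.
func badRequestError(violations []*errdetails.BadRequest_FieldViolation) error {
	if len(violations) == 0 {
		return nil
	}
	st := status.New(codes.InvalidArgument, "invalid input")
	detailed, err := st.WithDetails(&errdetails.BadRequest{FieldViolations: violations})
	if err != nil {
		// WithDetails only fails if the details cannot be marshalled;
		// fall back to the bare status in that case.
		return st.Err()
	}
	return detailed.Err()
}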

Improve docs around grpc_zap.WithDecider

When creating a client interceptor, despite selecting an option with WithDecider set to a function that always returns false, I have noticed that I am still getting DEBUG entries in zap; for example, unary client calls get a "finished client unary call" entry for every call.

Should there be something like

		err := invoker(ctx, method, req, reply, cc, opts...)
+		if !o.shouldLog(method, err) {
+			return err
+		}
		logFinalClientLine(o, logger.With(fields...), startTime, err, "finished client unary call")

in client_interceptors.go's Unary/StreamClientInterceptor functions to prevent the call to logFinalClientLine, similar to how this is handled on the server side?
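For reference, this is roughly the client-side wiring being described (a sketch; imports for grpc, zap and grpc_zap are omitted, the target address is arbitrary, and per this report the always-false decider is currently not honoured when the final client line is emitted):

logger, err := zap.NewProduction()
if err != nil {
	log.Fatal(err)
}

conn, err := grpc.Dial(
	"localhost:50051",
	grpc.WithInsecure(),
	grpc.WithUnaryInterceptor(grpc_zap.UnaryClientInterceptor(
		logger,
		// Decider that always says "don't log" -- yet "finished client unary call"
		// entries still show up, which is what this issue is about.
		grpc_zap.WithDecider(func(fullMethodName string, err error) bool {
			return false
		}),
	)),
)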

zap example links fail in readme

Examples

Package (HandlerUsageUnaryPing)
Package (Initialization)
Package (InitializationWithDurationFieldOverride)

These links are not working.

grpc_logrus: Use FieldLogger

Haven't checked the code yet, but I saw in the documentation that the logrus middleware expects a logrus.Entry. Wouldn't it be better to rely on the logrus.FieldLogger interface?

grpc_logrus SystemField

I'm confused about what SystemField is intended to represent, as it seems to be used as both a key (in one place in the code) and a value (grpclog.SetLogger(logger.WithField("system", SystemField))).

My assumption would be that it would represent the value of the "system" field in the log message. Could I get some clarification here? I'll happily submit a pull request to correct it, if this is indeed an oversight. Thanks!

Duplicate peer.address in zap logger output

Hi,

I'm using the zap logging interceptor with the ctxtags interceptor. Running into an issue where the peer.address key appears twice.

Wondering if anyone else has run into this issue?

2017-12-24T22:46:05.822-0800	INFO	zap/server_interceptors.go:40	finished unary call	{"peer.address": "[::1]:63312", "grpc.start_time": "2017-12-24T22:46:05-08:00", "system": "grpc", "span.kind": "server", "grpc.service": "...", "grpc.method": "...", "peer.address": "[::1]:63312", "grpc.code": "OK", "grpc.time_ms": 398.6860046386719}
        ...
	logger, err := zap.NewDevelopment()
	if err != nil {
		log.Fatalf("failed to initialize zap logger: %v", err)
	}

	grpc_zap.ReplaceGrpcLogger(logger)

	kaParams := keepalive.ServerParameters{
		MaxConnectionIdle: 60 * time.Minute,
		Time:              60 * time.Minute,
	}

	s := grpc.NewServer(
		grpc.KeepaliveParams(kaParams),
		grpc_middleware.WithUnaryServerChain(
			grpc_ctxtags.UnaryServerInterceptor(),
			grpc_zap.UnaryServerInterceptor(logger),
			grpc_recovery.UnaryServerInterceptor(),
		),
	)
        ...

[grpc_logrus] how to add error stack trace to log output

I'm trying to figure out how to insert the stack trace of an error into my logrus generated messages. The errors are constructed using the github.com/pkg/errors package, for example: errors.Wrapf(err, "Failed to execute query"). Ideally, I'd like this to show up in my JSON log as follows:

{
  "timestamp": "2017-12-29T03:29:26Z",
  "system": "grpc",
  "message": "finished unary call",
  "level": "info",
  "error": "rpc error: code = Internal desc = invalid UpdatePartyName request. Expected a Person or Organization",
  "action": "CreateParty",
  "stacktrace": "github.com/myrepo/myproj/manager.init\ngithub.com/myrepo/myproj/manager/manager.go:84\ngithub.com/myrepo/myproj/server.init\n\u003cautogenerated\u003e:1\ngithub.com/myrepo/myproj/cmd.init\n\u003cautogenerated\u003e:1\nmain.init\n\u003cautogenerated\u003e:1\nruntime.main\n/usr/local/Cellar/go/1.9.2/libexec/src/runtime/proc.go:183\nruntime.goexit\n/usr/local/Cellar/go/1.9.2/libexec/src/runtime/asm_amd64.s:2337"
}

I've been able to use the WithCodes function to change the error codes, but this doesn't allow me to hook into the log message to insert any new details. Can anyone point me in the right direction?
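One possible direction, not an existing option in this package (an untested sketch; the interceptor name is made up): an inner unary interceptor that stores the %+v rendering of the error, which includes the github.com/pkg/errors stack trace, as a request tag, so that the grpc_logrus interceptor further out in the chain emits it as a field.

package middleware

import (
	"context"
	"fmt"

	grpc_ctxtags "github.com/grpc-ecosystem/go-grpc-middleware/tags"
	"google.golang.org/grpc"
)

// StackTraceUnaryServerInterceptor is a hypothetical helper. Chain it after
// grpc_ctxtags and grpc_logrus; when the handler returns an error, it records
// the error's "%+v" rendering (stack trace included for pkg/errors errors) as
// a tag that the outer logging interceptor will include in the final log line.
func StackTraceUnaryServerInterceptor() grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		resp, err := handler(ctx, req)
		if err != nil {
			grpc_ctxtags.Extract(ctx).Set("stacktrace", fmt.Sprintf("%+v", err))
		}
		return resp, err
	}
}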

grpc_ctxtags from metadata

I'm trying to log some per-request metadata. The code below gets it done, producing:

grpc.code=OK grpc.method=Ping user=98765 peer.address=127.0.0.1:53878 span.kind=server system=grpc

I was wondering if there's a better way?

On the client side doing:


import "google.golang.org/grpc/metadata"

md := metadata.Pairs("user", "98765")
ctx := metadata.NewOutgoingContext(context.Background(), md)
client.Ping(ctx, &pb_testproto.PingRequest{})

And on the server, something like:

func UnaryServerMetadataTagInterceptor(fields ...string) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		if ctxMd, ok := metadata.FromIncomingContext(ctx); ok {
			for _, field := range fields {
				if values, present := ctxMd[field]; present {
					grpc_ctxtags.Extract(ctx).Set(field, strings.Join(values, ","))
				}
			}
		}
		return handler(ctx, req)
	}
}

myServer := grpc.NewServer(
	grpc_middleware.WithUnaryServerChain(
		grpc_ctxtags.UnaryServerInterceptor(),
		UnaryServerMetadataTagInterceptor("user"),
		grpc_logrus.UnaryServerInterceptor(logrusEntry, logrusOpts...),
	),
	...
)

Related, is there a reason grpc_ctxtags.RequestFieldExtractorFunc doesn't get access to the per-request context? Would the addition of a MetadataExtractorFunc in grpc_ctxtags.options be welcomed?

Question, binary file streaming

Hi, I have a question: do you know of any existing Go library providing support for file and blob streaming over gRPC?

Example use cases include microservices providing an API for converting media formats (image, audio, video) or microservices providing file compression, text-to-speech, etc.

I found this thread grpc/grpc-go#414; this seems to be a pretty common need, so I figure others must have dealt with it.

Client Logging Interceptor defaults behave incorrectly

The client variants of the logging interceptors were added in #33. Following this change, logging interceptors do not behave correctly in their default configurations, i.e. without specifying Options as overrides.

Logrus (server & client interceptors):

Unless WithLevels(...) is specified as an option, calls to the logging interceptor panic, as o.levelFunc is nil

Zap (client interceptor only):

Unless WithLevels(...) is specified as an option, calls to the logging interceptor are logged at incorrect levels, as the default is DefaultCodeToLevel for both server and client interceptors.
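Until the defaults are fixed, one workaround (a sketch; this assumes the package exports a client-side DefaultClientCodeToLevel mapping, and that it is the mapping you want) is to always pass WithLevels explicitly when building the client interceptor:

entry := logrus.NewEntry(logrus.New())

unaryClient := grpc_logrus.UnaryClientInterceptor(
	entry,
	// Passing the level mapping explicitly avoids relying on the broken defaults.
	grpc_logrus.WithLevels(grpc_logrus.DefaultClientCodeToLevel),
)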

Downstream interceptors are not re-executed on retry

Any interceptors chained after the retry interceptor are not re-executed in subsequent retry attempts.
For example if we have:

grpc_middleware.ChainUnaryClient(
  grpc_retry.UnaryClientInterceptor(),
  grpc_prometheus.UnaryClientInterceptor)

and want the grpc_prometheus interceptor to see and time each retry independently, then currently it will intercept only the first attempt.

(This was possibly a regression in 5d4723c)

I have a pull request, with tests, in:
#100

Configure logrus level for proto messages

We would like to be able to use the logrus logging interceptor to:

  • Log all messages at info level
  • Log the full json dump of the payload at debug level

I was able to configure the log level for the server messages (e.g. grpc_logrus.UnaryServerInterceptor) with grpc_logrus.WithLevels. However, for payload messages the level looks to be hardcoded.

Is there interest in adding something similar for the payload levels? I'm open to submitting a pull request, but am not sure of the best approach for where to set the payload level. Some ideas: add an option such as grpc_logrus.WithPayloadLevels? Pass it directly into the Payload*Interceptor function? Have it returned from the decider function? Other?

zap log needs to log a struct

Currently the value can only be a string.

Is it possible to store a struct (object) as the value?

{"level":"info","ts":1498443458.2312229,"caller":"servant.git/main.go:95","msg":"{\"Item1\":\"aaa\",\"Item2\":222}","system":"grpc","span.kind":"server","grpc.service":"pb.Greeter","grpc.method":"SayHello"}

I'd like the msg to be an object rather than an escaped string.
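In plain zap terms (outside this middleware), nested objects can be emitted with zap.Any or zap.Reflect; a minimal sketch:

logger, _ := zap.NewProduction()

item := struct {
	Item1 string
	Item2 int
}{Item1: "aaa", Item2: 222}

// zap.Any serializes the struct as a nested JSON object
// instead of a pre-rendered (and escaped) string.
logger.Info("payload", zap.Any("msg", item))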

Custom errors

Hi,

When using the logging middleware, it's difficult to use custom errors in handlers/other interceptors because the logging middleware expects an rpcError type to determine the gRPC error code.

By adding another func (that extracts a grpc error code from an error) to the logging options, it would be easy to enable the use of custom errors types. This can be done without changing the current default behavior.

I'll submit a PR.

grpc_logrus multithreading concerns

In the sample code it is encouraged to create a log entry, store it in the context, and use it for logging throughout the rest of the request. As far as I can tell this is not thread safe. What is the thought process behind this example? I would like to use this issue as a place to have a discussion on logging in this manner.

could you clean merged branch?

Hello there,

Recently, I have been working on migrating our project from glide to dep. Our project depends on this one, but when dep tries to solve dependencies it iterates over the many branches this project has. After checking, a lot of them are already merged into master, which means they are no longer needed and can be deleted. I am writing this to notify you and ask that you remove the unused branches.
Thanks in advance.

grpc_logging payloads embedded in the classic log output

Hi,

I am using most of go-grpc-middleware and I have a question regarding logs. I wish to include the payload in the standard output or have a way to correlate both of them.

In order to get, ideally, this kind of output:

{
    "level": "info",
    "msg": "finished unary call",
    "grpc.code": "OK",
    "grpc.method": "Ping",
    "grpc.service": "mwitkow.testproto.TestService",
    "grpc.start_time": "2006-01-02T15:04:05Z07:00",
    "grpc.request.deadline": "2006-01-02T15:04:05Z07:00",
    "grpc.request.value": "something",
    "grpc.time_ms": 1.345,
    "peer.address": {
        "IP": "127.0.0.1",
        "Port": 60216,
        "Zone": ""
    },
    "span.kind": "server",
    "system": "grpc",
    "grpc.request.content": {
        "msg": {
            "value": "something",
            "sleepTimeMs": 9999
        }
    },
    "custom_field": "custom_value",
    "custom_tags.int": 1337,
    "custom_tags.string": "something"
}

Below is an extract of my interceptor configuration.

	grpc_zap.UnaryServerInterceptor(logger.Zap, opts...),
	grpc_zap.PayloadUnaryServerInterceptor(logger.Zap, alwaysLoggingDeciderServer),	

One simple way could be to pass a GUID in the context and flag both logs with it. Then we could aggregate both results in Grafana. It's not optimal, but it would work.

By the way, thanks for the amazing work!

Adding JSON to jsonPayload with grpc_ctxtags

I am trying to add a JSON object as part of my grpc_ctxtags (which show up as jsonPayload in Stackdriver). I am putting the proto object in as is, and it is being converted to a "string" instead of a JSON object. For more visuals, see this:

jsonPayload: {
caller: "zap/server_interceptors.go:66"
conf: "encoding:LINEAR16 sample_rate_hertz:44100 header:"RIFF\277\377\377\377WAVEfmt \020\000\000\000\001\000\001\000D\254\000\000\210X\001\000\002\000\020\000data\233\377\377\377" language_code:"en-US" session_id:"102k0KxT9EISz-G1IVynLmIUg" session_owner_id:"24f80fa2-2a82-4e7b-a1e1-ab3c2b2dcfd9" stream_start_time:<seconds:1513280382 nanos:649000000 > context:<view:SCHEDULE > "
....

I want the conf object to be JSON instead of a raw string. What is the advice on this?
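One possible workaround (an untested sketch; the setProtoTag helper is made up): render the proto to JSON with jsonpb, decode it into a generic map, and store the map as the tag value, so structured encoders emit it as a nested object rather than a quoted string.

package tagsutil

import (
	"context"
	"encoding/json"

	"github.com/golang/protobuf/jsonpb"
	"github.com/golang/protobuf/proto"
	grpc_ctxtags "github.com/grpc-ecosystem/go-grpc-middleware/tags"
)

// setProtoTag stores msg under key as a map rather than a string so that
// JSON-based log encoders keep it structured. Errors are ignored here for brevity.
func setProtoTag(ctx context.Context, key string, msg proto.Message) {
	var m jsonpb.Marshaler
	s, err := m.MarshalToString(msg)
	if err != nil {
		return
	}
	var obj map[string]interface{}
	if err := json.Unmarshal([]byte(s), &obj); err != nil {
		return
	}
	grpc_ctxtags.Extract(ctx).Set(key, obj)
}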

grpc.request.deadline and grpc.start_time precision is in seconds

grpc.request.deadline and grpc.start_time use d.Format(time.RFC3339), which means the maximum precision is seconds. I believe it would be useful to use d.Format(time.RFC3339Nano) at least.

Best would be a configuration option letting me format all logged timestamps as desired.

I can work on a PR for this change if it makes sense.
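For comparison, the difference between the two layouts:

t := time.Now()
fmt.Println(t.Format(time.RFC3339))     // second precision, e.g. 2017-12-24T22:46:05-08:00
fmt.Println(t.Format(time.RFC3339Nano)) // sub-second precision, e.g. 2017-12-24T22:46:05.822684-08:00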

[zap] Let me choose the field name.

From the code, we can see:

ServerField = zap.String("span.kind", "server")

zap.String("grpc.code", code.String()),

I use ELK; when the parsed log is pushed to Elasticsearch, it fails.

Let me customize the grpc.code field name,

for example: use grpc_code instead.

vendored package breaks public API

It looks like google.golang.org/grpc/metadata is now being vendored by this project. This breaks the public API of this package, which might not be your intention?

I don't believe google.golang.org/grpc/metadata needs to be vendored by this package. It kinda breaks type compatibility between this package and other packages using metadata.

somefile.go:74:46: cannot use stream (type "google.golang.org/grpc".ServerStream) as type "github.com/grpc-ecosystem/go-grpc-middleware/vendor/google.golang.org/grpc".ServerStream in argument to grpc_middleware.WrapServerStream:
        "google.golang.org/grpc".ServerStream does not implement "github.com/grpc-ecosystem/go-grpc-middleware/vendor/google.golang.org/grpc".ServerStream (wrong type for SendHeader method)
                have SendHeader("google.golang.org/grpc/metadata".MD) error
                want SendHeader("github.com/grpc-ecosystem/go-grpc-middleware/vendor/google.golang.org/grpc/metadata".MD) error

Race in metautils

I'm not sure if I use your library correctly, but I'm getting race with this code:

package main

import (
	"context"

	"github.com/mwitkow/go-grpc-middleware/util/metautils"
	"google.golang.org/grpc/metadata"
)

func main() {
	md := metadata.Pairs("key", "value")
	parent := metadata.NewContext(context.Background(), md)
	for i := 0; i < 1000; i++ {
		go func(parent context.Context) {
			ctx, cancel := context.WithCancel(parent)
			defer cancel()
			metautils.SetSingle(ctx, "key", "val")
		}(parent)
	}
}

The idea is that I receive a gRPC request to service A, which then concurrently calls multiple services (let's say B, C and D). I re-use the parent context but set a timeout for those requests. Connections between A and B-D use the retry logic from this repository (5 retries, 1 second timeout). So the race is in metautils.SetSingle(), where multiple writes are performed on the metadata map (storing the x-retry-attempty header). Is it intended not to work concurrently, or am I doing something wrong? The above example is narrowed down to calling metautils.SetSingle(), as the real case is not easy to reproduce, but I can prepare a more adequate example if needed.

Question about cancelling stream methods

I have custom code in a stream interceptor right after the call to handler(srv, wrapped). When the client cancels the context, the code after the handler is not executed.
Is this a gRPC issue, or a middleware issue?

grpc_retry.WithMax(), how to use it properly?

in this example, we see the following code:

func Example_deadlinecall() error {
	client := pb_testproto.NewTestServiceClient(cc)
	pong, err := client.Ping(
		newCtx(5*time.Second),
		&pb_testproto.PingRequest{},
		grpc_retry.WithMax(3),
		grpc_retry.WithPerRetryTimeout(1*time.Second))
	if err != nil {
		return err
	}
	fmt.Printf("got pong: %v", pong)
	return nil
}

But when I use one of those "option modifiers", such as grpc_retry.WithMax(), in a client call, it fails with a nil pointer panic. This is because the callOption wasn't set.

It seems to work well when I pass it as a modifier to the constructor of grpc_retry.UnaryClientInterceptor, such as in the test.

I do not yet fully understand the code architecture, and I don't know how a grpc.CallOption is meant to be used. My question is: am I overlooking something? Is the example just wrong? Or is the implementation wrong?
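For what it's worth, the retry call options are interpreted by the retry interceptor, so it has to be installed on the connection for them to take effect; a sketch of that wiring, reusing the names from the example above (cc, newCtx, pb_testproto), with an illustrative target address:

cc, err := grpc.Dial(
	"localhost:50051",
	grpc.WithInsecure(),
	grpc.WithUnaryInterceptor(grpc_retry.UnaryClientInterceptor()),
)
if err != nil {
	return err
}

client := pb_testproto.NewTestServiceClient(cc)
pong, err := client.Ping(
	newCtx(5*time.Second),
	&pb_testproto.PingRequest{},
	// These call options are read by the retry interceptor installed above.
	grpc_retry.WithMax(3),
	grpc_retry.WithPerRetryTimeout(1*time.Second),
)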

Support grpc_glog ?

Does it sound reasonable to support a logging implementation using glog? We currently use glog and would like to use it for the gRPC logger.

Rate Limit Interceptor

Hi,

What do you guys think of a new server interceptor that would stall API calls for a given duration in order to prevent DoS / brute force?

Use case: delay every API call by 200 ms to prevent API DoS.

I know this can be done easily using an auth interceptor with a custom sleep-based function, but it doesn't sound right because it has nothing to do with auth.
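A minimal sketch of the idea (not an existing interceptor in this repo; the name and package are made up): a unary server interceptor that delays every call by a fixed duration before handing off to the handler.

package ratelimit

import (
	"context"
	"time"

	"google.golang.org/grpc"
)

// DelayUnaryServerInterceptor stalls every unary call by d before invoking
// the handler, while respecting the caller's context cancellation/deadline.
func DelayUnaryServerInterceptor(d time.Duration) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		select {
		case <-time.After(d):
		case <-ctx.Done():
			return nil, ctx.Err()
		}
		return handler(ctx, req)
	}
}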

Ctx_tags to populate both Request & Response Logging

I like being able to define tags in proto, and have the ctx_tags middleware extract them. This then can be used from request logging interceptors later down the middleware chain.

Can I do this for response logging too? It seems that by default the logrus middleware, when used with ctx_tags, just logs the request. If I add a payload interceptor that always returns true, then it logs the response in grpc.response.content as a full JSON struct.

I was hoping it would log like the request: use the opentracing-style format and populate only the tagged fields.

Any ideas?

This is the sort of logs I'm getting

{"app":"ticket_svc","grpc.code":"OK","grpc.method":"GetTicket","grpc.request.id":"e5179cd4-4c03-41f8-bc07-52d9fcf7bc85","grpc.service":"actourex.core.service.ticket.Command","grpc.time_ms":42,"level":"info","msg":"finished unary call","peer.address":{"IP":"::1","Port":52958,"Zone":""},"severity":"INFO","span.kind":"server","system":"grpc","time":"2017-07-05T18:14:10Z"}

# it would be nice if I could figure out how to have this print with keys like: grpc.response.id = blah, grpc.response.some_other_tagged_field = blah2
{"app":"ticket_svc","grpc.response.content":{ ... the full json payload... },"level":"info","msg":"server response payload logged as grpc.request.content field","severity":"INFO","time":"2017-07-05T18:14:10Z"}

Performance suggestion for chain builders

Pardon the lackluster issue instead of a proper pull request, but I'm under time pressure and would otherwise just forget this.

While I was rolling my own middleware for gRPC, I noticed https://github.com/grpc-ecosystem/go-grpc-middleware/blob/master/chain.go#L18

My suggestion is somewhat self-explanatory and looks as follows:

func ChainUnaryServer(interceptors ...grpc.UnaryServerInterceptor) grpc.UnaryServerInterceptor {
	n := len(interceptors)

	if n > 1 {
		lastI := n - 1
		return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
			// curI lives inside the returned interceptor so that each invocation
			// (and each concurrent request) gets its own counter.
			curI := 0
			var chainHandler grpc.UnaryHandler
			chainHandler = func(currentCtx context.Context, currentReq interface{}) (interface{}, error) {
				if curI == lastI {
					return handler(currentCtx, currentReq)
				}
				curI++
				return interceptors[curI](currentCtx, currentReq, info, chainHandler)
			}

			return interceptors[0](ctx, req, info, chainHandler)
		}
	}

	if n == 1 {
		return interceptors[0]
	}

	// n == 0
	return func(ctx context.Context, req interface{}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		return handler(ctx, req)
	}
}

This avoids a loop, n lambda constructions and n additional function calls. It adds n if conditions and n increments, but that should still be considerably cheaper. Branches are ordered by most likely occurrence - it's a chain after all, so I assume n is > 1. The built lambdas end up in the hot path, so I think a little bit of micro-optimization won't hurt. Coincidentally, I think it's easier to reason about.

Admittedly, I have not actually benchmarked it against the code in grpc_middleware (apologies - really low on time), but it should (cough, cough) be quite a bit faster going by common sense. I have, however, been using this approach in a deployment with no issues.

If someone wants to pick it up and work it into the interceptor factories, please go ahead. Otherwise I'll work on a proper PR, but that won't be sooner than in 2-3 weeks.

go get gives compilation errors

go get is giving errors: does anybody else get the same error?

% go get github.com/grpc-ecosystem/go-grpc-middleware/retry
# github.com/grpc-ecosystem/go-grpc-middleware/util/metautils
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:21: undefined: metadata.FromIncomingContext
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:33: undefined: metadata.FromOutgoingContext
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:69: undefined: metadata.NewOutgoingContext
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:76: undefined: metadata.NewIncomingContext
% go get github.com/grpc-ecosystem/go-grpc-middleware
# github.com/grpc-ecosystem/go-grpc-middleware
../../grpc-ecosystem/go-grpc-middleware/chain.go:77: undefined: grpc.UnaryClientInterceptor
../../grpc-ecosystem/go-grpc-middleware/chain.go:81: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:87: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:88: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:88: undefined: grpc.UnaryClientInterceptor
../../grpc-ecosystem/go-grpc-middleware/chain.go:88: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:106: undefined: grpc.StreamClientInterceptor
../../grpc-ecosystem/go-grpc-middleware/chain.go:110: undefined: grpc.Streamer
../../grpc-ecosystem/go-grpc-middleware/chain.go:116: undefined: grpc.Streamer
../../grpc-ecosystem/go-grpc-middleware/chain.go:117: undefined: grpc.Streamer
../../grpc-ecosystem/go-grpc-middleware/chain.go:117: too many errors

Need a full example

Could you give a full example of logging with zap?

The readme has confused me for several days...
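Pending a proper example in the readme, here is a minimal end-to-end sketch (service registration omitted; the listen address is arbitrary) wiring grpc_ctxtags and grpc_zap on a server:

package main

import (
	"net"

	grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
	grpc_zap "github.com/grpc-ecosystem/go-grpc-middleware/logging/zap"
	grpc_ctxtags "github.com/grpc-ecosystem/go-grpc-middleware/tags"
	"go.uber.org/zap"
	"google.golang.org/grpc"
)

func main() {
	logger, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	// Optional: route grpc-go's own internal logging through zap as well.
	grpc_zap.ReplaceGrpcLogger(logger)

	s := grpc.NewServer(
		grpc_middleware.WithUnaryServerChain(
			grpc_ctxtags.UnaryServerInterceptor(),
			grpc_zap.UnaryServerInterceptor(logger),
		),
		grpc_middleware.WithStreamServerChain(
			grpc_ctxtags.StreamServerInterceptor(),
			grpc_zap.StreamServerInterceptor(logger),
		),
	)

	// Register your gRPC services on s here.

	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		panic(err)
	}
	if err := s.Serve(lis); err != nil {
		panic(err)
	}
}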

Prometheus Metrics not provisionned with grpc_auth

Hi,
I use grpc_auth and grpc_prometheus in a gRPC server like this:

server := grpc.NewServer(
		grpc.StreamInterceptor(grpc_prometheus.StreamServerInterceptor),
		grpc.UnaryInterceptor(
			grpc_middleware.ChainUnaryServer(
				middleware.ServerLoggingInterceptor(true),
				grpc_auth.UnaryServerInterceptor(authenticate),
				grpc_prometheus.UnaryServerInterceptor,
				otgrpc.OpenTracingServerInterceptor(tracer, otgrpc.LogPayloads()))),
	)

And the auth function:

func authenticate(ctx context.Context) (context.Context, error) {
	glog.V(2).Info("Check authentication")
	token, err := grpc_auth.AuthFromMD(ctx, "basic")
	if err != nil {
		return nil, err
	}
	userID, err := auth.CheckBasicAuth(token)
	if err != nil {
		return nil, grpc.Errorf(codes.Unauthenticated, err.Error())
	}
	newCtx := context.WithValue(ctx, transport.UserID, userID)
	return newCtx, nil
}

If I try to access services without credentials, I get this response:

rpc error: code = 16 desc = Unauthorized

So it works fine.
But in the /metrics exported for Prometheus, I don't see any metrics with code = Unauthenticated:

grpc_server_handled_total{grpc_code="Unauthenticated", grpc_method="xxxxx",grpc_service="xxxxxxx",grpc_type="unary"} 0

Any idea?

support other language client?

As the title asks: I have a server written in Go, but the client may be C#, so can these middlewares support a C# gRPC client?

[Question] grpc_ctxtags without interceptor

Prior to looking into the grpc_ctxtags middleware, I was using the context myself to propagate tags. I have grpc_ctxtags wired up for my gRPC server interceptors, but I also have a separate worker that is not a gRPC server, and I'm not seeing an easy way to use a common chunk of code for the tags and logging: grpc_ctxtags returns a no-op tag when it was not initialized by the interceptor, and I'm not seeing any other way to initialize it.
