grpc-ecosystem / grpc-httpjson-transcoding
Transcoding to provide HTTP/JSON interface for gRPC Service
License: Apache License 2.0
Greetings grpc-httpjson-transcoding developers and contributors,
We’re reaching out because your project is an important part of the open source ecosystem, and we’d like to invite you to integrate with our fuzzing service, OSS-Fuzz. OSS-Fuzz is a free fuzzing infrastructure you can use to identify security vulnerabilities and stability bugs in your project. OSS-Fuzz will:
Many widely used open source projects like OpenSSL, FFmpeg, LibreOffice, and ImageMagick are fuzzing via OSS-Fuzz, which helps them find and remediate critical issues.
Even though typical integrations can be done in < 100 LoC, we have a reward program in place which aims to recognize folks who are not just contributing to open source, but are also working hard to make it more secure.
We want to stress that anyone who meets the eligibility criteria and integrates a project with OSS-Fuzz is eligible for a reward.
If you're not interested in integrating with OSS-Fuzz, it would be helpful for us to understand why—lack of interest, lack of time, or something else—so we can better support projects like yours in the future.
If we’ve missed your question in our FAQ, feel free to reply or reach out to us at [email protected].
Thanks!
Tommy
OSS-Fuzz Team
This issue is a copy of cloudendpoints/esp#728.
Hi!
Please consider adding support for newline-delimited JSON streams, as in https://github.com/grpc-ecosystem/grpc-gateway
An example exists in grpc-ecosystem/grpc-gateway#581
This would be useful when an API returns 100k+ lines of statistics on advertising objects, without loading everything into memory. =) I really want to use the protocol for exchanging such data, but right now, unfortunately, I can't. Of course, there may be performance problems with serialization and deserialization; I am ready to discuss this, and maybe the problem can be solved differently.
In general, I would like to have an API based on protocol buffers and use authorization checks at the ESP level. Some APIs also need to return a large stream of data (usually statistics in our case): for a single request I already receive a stream of 100k+ CSV rows from an external system (for example, Google Ads), and I want to redirect them straight to the output stream, possibly with minimal processing and JSON mapping. I don't want to save the data to cloud storage or similar and hand back a link to it; I want to serialize the response immediately. Surely this is already solved somehow inside Google; can you tell me how to do it correctly from Google's point of view?
Thank you in advance!
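For reference, the newline-delimited framing being requested is simple to sketch. This Python snippet is not tied to this project's code; it just shows the producer and consumer sides of the format:

```python
import io
import json

def write_ndjson(records, stream):
    # One JSON object per line; each record can be flushed as soon as it
    # is produced, so the server never buffers the whole result set.
    for rec in records:
        stream.write(json.dumps(rec) + "\n")

def read_ndjson(stream):
    # The consumer can likewise process one record at a time.
    for line in stream:
        if line.strip():
            yield json.loads(line)

buf = io.StringIO()
write_ndjson(({"row": i} for i in range(3)), buf)
buf.seek(0)
rows = list(read_ndjson(buf))
```

Because each line is an independent JSON document, a transcoder could emit one line per gRPC stream message without holding the full response in memory.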
Trying to build on Windows but having issues:
```
PS E:\src\grpc-httpjson-transcoding> bazel build //...
Starting local Bazel server and connecting to it...
WARNING: --enable_bzlmod is set, but no MODULE.bazel file was found at the workspace root. Bazel will create an empty MODULE.bazel file. Please consider migrating your external dependencies from WORKSPACE to MODULE.bazel. For more details, please refer to https://github.com/bazelbuild/bazel/issues/18958.
ERROR: C:/users/joss/_bazel_joss/qwq6hdnq/external/io_bazel_rules_docker/platforms/BUILD:78:9: in constraint_values attribute of platform rule @@io_bazel_rules_docker//platforms:image_transition: '@@io_bazel_rules_docker//platforms:image_transition_cpu' does not have mandatory providers: 'ConstraintValueInfo'
ERROR: C:/users/joss/_bazel_joss/qwq6hdnq/external/io_bazel_rules_docker/platforms/BUILD:78:9: in constraint_values attribute of platform rule @@io_bazel_rules_docker//platforms:image_transition: '@@io_bazel_rules_docker//platforms:image_transition_os' does not have mandatory providers: 'ConstraintValueInfo'
ERROR: C:/users/joss/_bazel_joss/qwq6hdnq/external/io_bazel_rules_docker/platforms/BUILD:78:9: Analysis of target '@@io_bazel_rules_docker//platforms:image_transition' failed
ERROR: E:/src/grpc-httpjson-transcoding/perf_benchmark/BUILD:73:9: Target @@io_bazel_rules_docker//platforms:image_transition was referenced as a platform, but does not provide PlatformInfo
ERROR: Analysis of target '//perf_benchmark:benchmark_main_image' failed; build aborted
INFO: Elapsed time: 26.266s, Critical Path: 0.09s
INFO: 1 process: 1 internal.
ERROR: Build did NOT complete successfully
FAILED:
Fetching repository @@go_sdk; starting
Fetching repository @@fuzzing_py_deps; Extracting wheels
Fetching repository @@local_jdk; starting
Fetching repository @@bazel_skylib~; starting
Fetching repository @@apple_support~; starting
Fetching repository @@protobuf~; starting
Fetching repository @@rules_cc~; starting
```
Is the project open to supporting CMake in addition to Bazel?
Hi there, I work on protobufs. I noticed your project is using APIs from google/protobuf/util/internal:
```
#include "google/protobuf/util/internal/json_stream_parser.h"
#include "google/protobuf/util/internal/object_writer.h"
```
These APIs have "internal" in the include path, and are not for end-user consumption. We will likely be removing these soon, or at least changing their namespace to make it even clearer that these APIs are not for users.
This came to my attention because of this PR in our repo:
After cloning, I ran the bazel command to build the project and ran into the following errors:
```
Starting local Bazel server and connecting to it...
ERROR: /home/basil/.cache/bazel/_bazel_basil/c7fbbb4ed84ea4e033d6d8801ef24b13/external/protobuf_git/BUILD:597:1: Traceback (most recent call last):
File "/home/basil/.cache/bazel/_bazel_basil/c7fbbb4ed84ea4e033d6d8801ef24b13/external/protobuf_git/BUILD", line 597
internal_gen_well_known_protos_java(srcs = WELL_KNOWN_PROTOS)
File "/home/basil/.cache/bazel/_bazel_basil/c7fbbb4ed84ea4e033d6d8801ef24b13/external/protobuf_git/protobuf.bzl", line 266, in internal_gen_well_known_protos_java
Label(("%s//protobuf_java" % REPOSITOR...))
File "/home/basil/.cache/bazel/_bazel_basil/c7fbbb4ed84ea4e033d6d8801ef24b13/external/protobuf_git/protobuf.bzl", line 266, in Label
REPOSITORY_NAME
builtin variable 'REPOSITORY_NAME' is referenced before assignment.
ERROR: /home/basil/.cache/bazel/_bazel_basil/c7fbbb4ed84ea4e033d6d8801ef24b13/external/protobuf_git/BUILD:101:1: Target '@protobuf_git//:windows' contains an error and its package is in error and referenced by '@protobuf_git//:protobuf'
ERROR: /home/basil/.cache/bazel/_bazel_basil/c7fbbb4ed84ea4e033d6d8801ef24b13/external/protobuf_git/BUILD:101:1: Target '@protobuf_git//:windows_msvc' contains an error and its package is in error and referenced by '@protobuf_git//:protobuf'
ERROR: /home/basil/.cache/bazel/_bazel_basil/c7fbbb4ed84ea4e033d6d8801ef24b13/external/protobuf_git/BUILD:101:1: Target '@protobuf_git//:android' contains an error and its package is in error and referenced by '@protobuf_git//:protobuf'
ERROR: /home/basil/.cache/bazel/_bazel_basil/c7fbbb4ed84ea4e033d6d8801ef24b13/external/protobuf_git/BUILD:101:1: Target '@protobuf_git//:windows' contains an error and its package is in error and referenced by '@protobuf_git//:protobuf'
ERROR: /home/basil/.cache/bazel/_bazel_basil/c7fbbb4ed84ea4e033d6d8801ef24b13/external/protobuf_git/BUILD:101:1: Target '@protobuf_git//:windows_msvc' contains an error and its package is in error and referenced by '@protobuf_git//:protobuf'
ERROR: /home/basil/Desktop/GSOC2019/grpc-httpjson-transcoding/repositories.bzl:47:9: Target '@protobuf_git//:protobuf' contains an error and its package is in error and referenced by '//external:protobuf'
ERROR: Analysis of target '//test:request_message_translator_test' failed; build aborted: Analysis failed
INFO: Elapsed time: 4.880s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (15 packages loaded, 87 targets configured)
Fetching @com_google_absl; fetching
Fetching @googleapis_git; fetching
Fetching @googletest_git; fetching
```
Hi All,
As the title suggests, I am wondering if this package can do the following:
Currently, PathMatchBuilder silently ignores duplicates at build time, which then makes lookups fail at request time.
Would it be acceptable to add an option (or change the existing behavior) to fail method registration instead?
In my use case, my GetSomethingRequest includes a map<string, string>, and I didn't manage to pass the map as a query param. So I have to attach a body to the GET request, which is bad. This is how I define the messages and service:
```
message Key {
  map<string, string> data = 1;
  string field = 2;
}

message GetSomethingRequest {
  string parent = 1;
  Key key = 2;
}

service SomeService {
  rpc GetSomething(GetSomethingRequest) returns (Something) {
    option (google.api.http) = {
      get: "/v1/parent/{parent}/somethings"
    };
  }
}
```
I tried the following, and none of them works.
```
// also tried to replace [, ], {, }, :, = with percent encoding.
// the first two follow https://github.com/grpc-ecosystem/grpc-gateway/pull/535
GET /v1/parent/parentname/somethings?field=aa&data[abc]=cba
GET /v1/parent/parentname/somethings?key.field=aa&key.data[abc]=cba
GET /v1/parent/parentname/somethings?key.field=aa&key.data={abc:cba}
```
So I wonder: is a map of primitives as a query param supported? If not, are there any plans to add support for this? Thanks!
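To make the attempted encoding concrete: a stock query-string parser treats the bracketed form from grpc-gateway#535 as an opaque key, so a transcoder would have to split it itself. This is a hypothetical parsing sketch (the helper name and splitting convention are assumptions, not this project's API):

```python
from urllib.parse import parse_qsl

def parse_map_params(query):
    # Hypothetical sketch: map entries encoded as key.data[abc]=cba are
    # split into a nested dict; everything else stays a flat key.
    flat = {}
    for k, v in parse_qsl(query):
        if "[" in k and k.endswith("]"):
            base, _, entry = k.partition("[")
            flat.setdefault(base, {})[entry[:-1]] = v
        else:
            flat[k] = v
    return flat

parsed = parse_map_params("key.field=aa&key.data[abc]=cba")
```

Under this convention, `parsed` holds `key.field` as a scalar and `key.data` as a one-entry map.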
Part of GoogleCloudPlatform/oss-test-infra#1171
OSS prow currently uses a bot personal access token (PAT) for authentication with GitHub APIs, which has a global rate limit of 5000 requests per hour. This does not scale well across many tenants. Switching over to a GitHub App gets a rate limit per installation, which would greatly alleviate the rate-limit exhaustion problem we have seen lately.
Install Google OSS Prow app on this repo by visiting https://github.com/apps/google-oss-prow
Say I want to have resources such as:
/people (a collection of people)
/people/{personId} (an individual person)
And I want both of these resource types to accept a custom verb called 'getResourceType'.
If someone makes a request to /people:getResourceType, I want it to say 'person collection'.
If someone makes a request to /people/xyz:getResourceType, I want it to say 'person'.
So my .proto might look like this:
```
service API {
  rpc ListPeople(Empty) returns (ListPeopleResponse) {
    option (google.api.http) = {
      get: "/people"
    };
  }

  rpc GetPerson(GetPersonRequest) returns (GetPersonResponse) {
    option (google.api.http) = {
      get: "/{person_id=people/*}"
    };
  }

  rpc GetResourceType(GetResourceTypeRequest) returns (GetResourceTypeResponse) {
    option (google.api.http) = {
      get: "/{resource_name=**}:getResourceType"
    };
  }
}

message GetPersonRequest {
  string person_id = 1;
}

message GetResourceTypeRequest {
  string resource_name = 1;
}
```
...
Then I might set up grpc-httpjson-transcoding using, say, Envoy.
When: I make a call to this service at /people:getResourceType
Expected: Service receives a GetResourceType call with resource_name = "people" (since that is the only method that accepts the getResourceType verb)
Actual: Service receives a GetPerson call with person_id = "getResourceType"
In my understanding, this is due to the ':' character being replaced with a '/' without remembering that the last segment is a verb.
https://github.com/grpc-ecosystem/grpc-httpjson-transcoding/blob/master/src/include/grpc_transcoding/path_matcher.h#L376
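One way to avoid the confusion is to peel the verb off the last segment before any unescaping happens. This is a minimal sketch of that ordering (not the library's actual code):

```python
def split_verb(path):
    # Split a trailing :verb off the *last* path segment before any
    # unescaping, so a ':' produced later by unescaping cannot be
    # mistaken for a verb separator, and vice versa.
    last_slash = path.rfind("/")
    colon = path.rfind(":")
    if colon > last_slash:
        return path[:colon], path[colon + 1:]
    return path, None

a = split_verb("/people:getResourceType")      # ('/people', 'getResourceType')
b = split_verb("/people/xyz:getResourceType")  # ('/people/xyz', 'getResourceType')
c = split_verb("/people/xyz")                  # ('/people/xyz', None)
```

With the verb split out first, `/people:getResourceType` matches the `**:getResourceType` template rather than being rewritten into `/people/getResourceType`.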
Protobuf won't serialize fields that have default values.
This is no problem when both sides use the native protobuf codec, but for JSON it would be nice to have all fields present, whether they hold the default value or not.
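To illustrate the requested behavior, here is a toy stdlib sketch (not the protobuf API) contrasting omitting versus emitting default-valued fields:

```python
import json

def to_json(fields, defaults, emit_defaults=False):
    # Mimics proto3 JSON behavior: by default, fields that hold their
    # default value are omitted; emit_defaults=True keeps them all.
    if emit_defaults:
        return json.dumps(fields, sort_keys=True)
    kept = {k: v for k, v in fields.items() if v != defaults.get(k)}
    return json.dumps(kept, sort_keys=True)

msg = {"count": 0, "name": "", "tag": "x"}
defaults = {"count": 0, "name": "", "tag": ""}
compact = to_json(msg, defaults)                      # only "tag" survives
full = to_json(msg, defaults, emit_defaults=True)     # all fields present
```

The feature request amounts to exposing the `emit_defaults`-style switch on the transcoder's JSON output.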
I have a proto definition from the greeter sample, modified as shown below.
Note that the URI path has a segment for "more.inner_name" and the body also maps to the field "more".
```
// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {
    option (google.api.http) = {
      get: "/api1/{name}/{more.inner_name}"
      body: "more"
    };
  }
}

message HelloInner {
  string inner_name = 1;
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
  HelloInner more = 2;
}
```
I am issuing a request:
```
$ curl http://0.0.0.0:5000/api1/hh/printme -i -H "hello: brother" --raw --http1.1 --data '{"inner_name":"frombody"}' --request GET
```
On the grpc server, what is the expected body content? I see that the inner_name is set to "frombody". Is this behavior guaranteed to remain this way or could the inner_name be "printme" in the future?
UrlUnescapeString produces the same output for semantically different strings, for example:
```
UrlUnescapeString("%2523", false) == "%23"
UrlUnescapeString("%23", false) == "%23"
```
This makes it impossible to understand the original intent of the user.
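For comparison, a strict single-pass unescape keeps the two inputs distinguishable; Python's `urllib.parse.unquote` shows the expected semantics:

```python
from urllib.parse import unquote

# Single-pass decoding: "%2523" becomes "%23" (the %25 decodes to a
# literal '%'), while "%23" decodes to "#". The two inputs stay distinct.
once = unquote("%2523")    # "%23"
direct = unquote("%23")    # "#"
```

If the two inputs collapse to the same output, the decoder has effectively applied unescaping twice (or skipped it inconsistently), which loses the caller's intent.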