pace / bricks
A standard library for microservices.
License: MIT License
Protocol buffer support should cover:
- x- attributes from openapiv3 to generate the methods correctly
- a github.com/pace/bricks/http/jsonapi/runtime package to handle protocol buffers
- the pace command for service new and the generate rest command to respect the protocol buffers correctly
Should be implemented using:
The go-microservice project develops itself into a small opinionated microservice generation kit and utility. Currently, the util is still called pace and is a bit separated from the rest of the code base. The main reason for this is that it was developed in a separate repository. So currently we have go-microservice with pace service new pay to create a new microservice. Quickly we realized that the two things belong together, and merging the pace tool into go-microservice made changes to both at the same time much easier. Now the tool name is a bit off and the project name is a bit generic. Proposed changes:
- rename go-microservice to pace/bricks
- drop the service argument in the command so that we can create new microservices with pace new pay
- rename pace to pb
Tests currently named TestIntergration... should be renamed to TestIntegration so that integration tests can be skipped with -short and selected consistently.
Currently, one can't really filter in/out messages from certain subsystems.
E.g. one wants to disable Postgres and Redis logs by simply doing ./some-command | egrep -v 'backend="(pg|redis)"'
Such functionality will also be helpful in the context of ELK / Graylog.
If a resource object defines metadata, generate the corresponding code for it.
This CI file will be inspired by the CI file that is generated for go-microservice.
pace service new ...
pace service generate gitlab-ci ...
Possibly using https://github.com/pkg/errors and https://github.com/getsentry/raven-go
Implement a default HTTP server that respects the PORT environment variable and handles the Timeout interface correctly, e.g. with respective return codes (https://blog.golang.org/error-handling-and-go).
Possibly using https://github.com/prometheus/client_golang
Implement technical concept https://github.com/pace/bricks/tree/master/maintenance/metric
First-class circuit breaker support in go-microservice.
Currently:
prometheus.HistogramOpts{
    Buckets: []float64{.1, .25, .5, 1, 2.5, 5, 10},
}
should be better:
prometheus.HistogramOpts{
    Buckets: []float64{10, 50, 100, 300, 600, 1000, 2500, 5000, 10000, 60000},
}
The same is true for responseSize; currently:
Buckets: []float64{100, 200, 500, 900, 1500},
better: 100 Bytes, 1 KB, 10 KB, 100 KB, 1 MB, 5 MB, 10 MB, 100 MB
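The proposed duration buckets, sketched with the prometheus client library (a sketch only; the metric name and labels are illustrative, and it assumes observations are recorded in milliseconds):

```go
import "github.com/prometheus/client_golang/prometheus"

// paceHTTPRequestDurationMS is an illustrative histogram using the
// proposed millisecond buckets instead of the current second-based ones.
var paceHTTPRequestDurationMS = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "pace_http_request_duration_ms",
		Help:    "Request duration in milliseconds",
		Buckets: []float64{10, 50, 100, 300, 600, 1000, 2500, 5000, 10000, 60000},
	},
	[]string{"method", "path"},
)
```

Whichever unit is chosen, the buckets and the recorded observations must agree, otherwise every request lands in the first or last bucket.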
Configure a pace/bricks microservice using a helm chart
Currently, the project has the MIT license set. I would argue that we should use Apache 2.0, which is very similar but has the extra "we don't sue you" (explicit patent grant) part. It is a good choice for businesses.
Generate protobuf struct tags for all generated types. Parse protobuf if the Content-Type header is application/protobuf, and prefer to generate protobuf if the Accept header is application/protobuf.
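The content negotiation described above can be sketched with the standard library (a minimal sketch; the helper and handler names are illustrative, not the pace/bricks API):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// isProtobuf reports whether a Content-Type or Accept header value
// names the protobuf media type (parameters after ";" are ignored).
func isProtobuf(header string) bool {
	return strings.HasPrefix(strings.TrimSpace(header), "application/protobuf")
}

func handler(w http.ResponseWriter, r *http.Request) {
	if isProtobuf(r.Header.Get("Content-Type")) {
		// decode the request body with proto.Unmarshal(...)
	}
	if isProtobuf(r.Header.Get("Accept")) {
		w.Header().Set("Content-Type", "application/protobuf")
		// encode the response with proto.Marshal(...)
		return
	}
	// default: keep generating json:api responses
	w.Header().Set("Content-Type", "application/vnd.api+json")
	fmt.Fprint(w, `{"data": null}`)
}

func main() {
	http.HandleFunc("/", handler)
	// http.ListenAndServe(":3000", nil)
}
```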
https://lab.jamit.de/pace/web/service/poi/merge_requests/140/diffs adds a new option to the Postgres connections: pool size.
That option should be part of go-microservice.
Add package for error wrapping and sentry message handling
Steps to reproduce in terminal:
pace service new dtc
cd `pace service path dtc`
make
See also
joefitzgerald/gometalinter-linter#24
Note
Even with gometalinter commented out in the Dockerfile, it still does not work. See
https://sentry.jamit.de/pace-telematics/poi-dev/issues/11229/?environment=edge
Code at https://github.com/pace/go-microservice/blob/master/backend/postgres/postgres.go#L193 does an .Add() with an r.RowsAffected() value, a method which can also return -1, as documented in:
// A Result summarizes an executed SQL command.
type Result interface {
Model() Model
// RowsAffected returns the number of rows affected by SELECT, INSERT, UPDATE,
// or DELETE queries. It returns -1 if query can't possibly affect any rows,
// e.g. in case of CREATE or SHOW queries.
RowsAffected() int
// RowsReturned returns the number of rows returned by the query.
RowsReturned() int
}
An .Add call should be done with values >= 0, otherwise it panics.
Potential fix:
Replace code at https://github.com/pace/go-microservice/blob/master/backend/postgres/postgres.go#L193:
pacePostgresQueryAffectedTotal.With(labels).Add(float64(r.RowsAffected()))
with
pacePostgresQueryAffectedTotal.With(labels).Add(math.Max(0, float64(r.RowsAffected())))
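The clamping behaviour of the proposed fix can be checked in isolation; a prometheus counter panics on negative increments, so go-pg's -1 sentinel must be mapped to 0 first (a minimal sketch, not the pace/bricks code; the helper name is illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// clampRows maps go-pg's -1 "query can't affect rows" sentinel to 0,
// since prometheus Counter.Add panics on negative values.
func clampRows(rowsAffected int) float64 {
	return math.Max(0, float64(rowsAffected))
}

func main() {
	fmt.Println(clampRows(-1)) // 0
	fmt.Println(clampRows(42)) // 42
}
```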
Generate the pagination links self, first, last, prev and next.
I think this is wrong here? It should be a state? I discovered it when I reran some Cloud specs using this logic here.
By default Jaeger has a probabilistic sampling strategy for all traces, i.e., referring to https://www.jaegertracing.io/docs/1.6/architecture/ it holds that
By default, Jaeger client samples 0.1% of traces (1 in 1000), and has the ability to retrieve sampling strategies from the agent.
See also https://www.jaegertracing.io/docs/1.6/sampling/ for more information.
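If the default 0.1% probabilistic sampling is too coarse for a service, the rate can be set explicitly in the client configuration (a sketch using github.com/uber/jaeger-client-go/config; the service name and sampling parameter are illustrative):

```go
package main

import (
	"io"
	"log"

	"github.com/uber/jaeger-client-go/config"
)

// initTracer installs a global tracer with an explicit probabilistic
// sampler instead of relying on the agent-provided default strategy.
func initTracer() io.Closer {
	cfg := config.Configuration{
		Sampler: &config.SamplerConfig{
			Type:  "probabilistic",
			Param: 0.001, // sample 0.1% of traces (the Jaeger default)
		},
	}
	closer, err := cfg.InitGlobalTracer("poi") // illustrative service name
	if err != nil {
		log.Fatalf("could not init tracer: %v", err)
	}
	return closer
}
```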
Add the -short flag to the current test stage and add a stage that runs the integration tests with -run TestIntegration.
import "github.com/caarlos0/env"

type config struct {
	URL    string `env:"OAUTH2_URL" envDefault:"https://oauth.example.com"`
	Client string `env:"OAUTH2_CLIENT"`
	Secret string `env:"OAUTH2_SECRET"`
}

func NewMiddleware() *Middleware {
	var cfg config
	err := env.Parse(&cfg)
	if err != nil {
		log.Fatalf("Failed to parse oauth2 environment: %v", err)
	}
	return ....
}
"lab.jamit.de/pace/go-microservice/maintenance/log" package for logging
make testserver
To assure the function of the service in a certain environment, a package with supportive functions should be created.
Possibly using https://github.com/jaegertracing/jaeger-client-go and https://github.com/opentracing/opentracing-go
req_id to correlate logs and jaeger traces; add tracing id to log?
Access to Cockpit (mobile phone, first/last name, gender, cars)
We want to use Golang to build (most of) our microservices. Since some of these services need to access other services, we should encapsulate that functionality. This project should integrate all the communication stuff as soon as we have either a grant, access token or session token.
We should build an SDK that handles Cloud requests (through the PACE API Gateway). This SDK should then be used for:
Currently, we don't implement http://jsonapi.org/format/#errors-processing correctly.
The details we provide should help the caller though.
Problem
The govalidator (github.com/asaskevich/govalidator) error has no reference to the original StructField. That makes it impossible to generate correct pointers. Since the actual data structure and the incoming JSON are very different, fork the library and add struct field tags: introduce a custom tag and use it to produce the correct source pointer/parameter.
Using https://github.com/go-redis/redis
Validation
An x-validator attribute in the spec to select more validations and directly apply them (e.g. ISO codes).
Scopes
An x-must-scopes attribute defines the set of scopes that need to be checked in order to make a request. Special values:
* - all of the already provided scopes need to be given
1 - one of the already provided scopes needs to be given
The following line https://github.com/pace/bricks/blob/master/maintenance/log/log.go#L127 leads to the following result after creating a new service and executing the test suite:
https://github.com/pace/web/service/poi/blob/621ddb17fd0038ade2816ab79c28ef00729c9ec2/Gopkg.toml#L30
See also rs/zerolog#116
Used Go version: 1.11
Can be fixed by adding the following https://github.com/pace/web/service/poi/blob/master/Gopkg.toml#L30 to Gopkg.toml and running dep ensure afterwards.
At the moment only tags are in the search index. Since these two are very likely to be queried, we need to add them as tags to the report.
Background: Currently, I have a case, where the customer has an issue in Cloud/Cockpit that I can't search for since they are only additional data but not in tags...
For reference: https://forum.sentry.io/t/is-it-possible-to-search-by-extra/249/3
Using https://github.com/go-pg/pg
We should run our microservices with pprof enabled to allow profiling and debugging of staging and production systems more easily.
Probing only happens if the endpoint is hit (with a small performance overhead during that time) https://golang.org/src/net/http/pprof/pprof.go
"/health" API endpoint that should return 200 -> important for the API gateway.
Next to the health check that only returns OK, we need to add a health check that checks its dependencies:
/health/service
Implement middleware that handles OAuth2.
oauth2.Middleware
oauth2.HasScope(ctx context.Context, scope string) bool
oauth2.Get(ctx context.Context, key string) string
oauth2.BearerToken(ctx context.Context) string
Example program:
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"

	"lab.jamit.de/pace/web/libs/go-microservice/http/oauth2"
)

func main() {
	r := mux.NewRouter()
	r.Use(oauth2.Middleware{
		// ... not exactly sure what we need to authenticate against
		// cockpit ...
		Host:         "id.pace.cloud",
		ClientID:     "dtc",
		ClientSecret: "some secret",
	})
	r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		log.Printf("AUDIT: User %s does something", oauth2.Get(r.Context(), "X-UID"))
		if oauth2.HasScope(r.Context(), "dtc:codes:read") {
			fmt.Fprintf(w, "The secret code is 42")
			return
		}
		fmt.Fprintf(w, "Your client may not have the scopes to see the secret code")
	})
	srv := &http.Server{
		Handler: r,
		Addr:    "127.0.0.1:8000",
	}
	log.Fatal(srv.ListenAndServe())
}
as defined in the README https://github.com/pace/bricks/tree/master/maintenance/metric
This should probably be changed to
q, qe := event.UnformattedQuery()
The way it's set up now, the wrapper will generate a new prometheus label for each individual query (new params generate new labels). UnformattedQuery will return the query string with placeholders (?), which is most likely what we want.
json:api descriptions can contain relationships; the corresponding attributes need to be generated.
page[size]
Possibly using https://github.com/rs/zerolog or https://github.com/Sirupsen/logrus
If we execute pace service new dtc --source https://lab.jamit.de/pace/web/api-definitions/raw/master/dtc.yaml
, we get a runtime error like this
Nil pointer in generate_tapes:18
If we manually download the JSON definition of an API from our developer hub and use that file, everything works nicely. So this error probably occurs due to unresolved schemas. We should catch such errors and use log.Fatalf(...)
instead to show a proper error message.
Update 07.09.18
Having resolved YAML files deployed, the above error is still thrown. So maybe this is more of a YAML vs JSON issue.
More updates
This actually is an issue with files from remotes. Adding a loadSwaggerFromURI
will fix this issue.
In order to filter technical stats from the Grafana dashboards, add a label to all request related stats.
Request-Source: (uptime|kubernetes|nginx|livetest)
Name | Description
---|---
none | Regular requests
uptime | Uptime Robot and similar sources
kubernetes | Kubernetes health/alive checks
nginx | Backend availability checks
livetest | Backend functional checks
The set needs to be checked; otherwise, one could DoS our Prometheus by using random labels (explosion of time series).
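A whitelist check over the label set could look like this (a sketch; the value set is taken from the table above, while the function name and fallback behaviour are illustrative assumptions):

```go
package main

import "fmt"

// allowedRequestSources is the closed set of label values. Anything
// outside it falls back to "none" so arbitrary header values cannot
// create unbounded prometheus time series.
var allowedRequestSources = map[string]bool{
	"uptime":     true,
	"kubernetes": true,
	"nginx":      true,
	"livetest":   true,
}

// requestSourceLabel maps the Request-Source header to a safe label value.
func requestSourceLabel(header string) string {
	if allowedRequestSources[header] {
		return header
	}
	return "none" // regular requests and unknown values
}

func main() {
	fmt.Println(requestSourceLabel("kubernetes"))    // kubernetes
	fmt.Println(requestSourceLabel("totally-random")) // none
}
```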
The tests for the go-microservice kit should be evaluated on each check-in.