opensergo / opensergo-specification
Universal cloud-native microservice governance specification
Home Page: https://opensergo.io
License: Apache License 2.0
I understand there is already a distributed transaction proposal: https://github.com/opensergo/opensergo-specification/blob/main/specification/zh-Hans/database.md
Will the spec consider integrating other components, such as Seata?
Will a pluggable protocol be considered in a later version?
I'm not an official member of the community; here is my opinion.
In PR #29, the traffic routing spec defines routing rules for multiple protocols via RouterRule. Each protocol's routing rule defines a RequestMatch (e.g. HttpRequestMatch and RpcRequestMatch) that lets users declare traffic-matching rules. However, the built-in parameters defined in HttpRequestMatch and RpcRequestMatch only cover simple rules that can be written by hand. In complex scenarios where the parameters cannot be known in advance, or the rules are too complex to define manually, the current design falls short.

I therefore propose the following idea: add a custom match-behavior parameter (tentatively named action) to RouteRuleContext. Like match, its result is true or false to indicate match success or failure; unlike match, action is configured with the workload of a traffic-match executor.

This way, even when the built-in matching rules of opensergo cannot satisfy a user's needs, the user can define a custom action and implement the corresponding WorkLoad to express arbitrarily complex routing rules.

If only match is configured, the rule matches once match is satisfied; if action is also configured, the rule matches only when both match and action are satisfied.

Taking the http protocol as an example:

Field | Type | Description | Required |
---|---|---|---|
name | string | Rule name | No |
match | HttpRequestMatch | Match rules for HTTP traffic | No |
action | HttpRequestAction | Match executor for HTTP traffic | No |
targets | VirtualWorkload[] | Target virtual workloads for traffic that matches the rule | Yes |
modify | HttpModifyRule | Rules for modifying traffic that matches the rule | No |
Field | Type | Description | Required |
---|---|---|---|
script | string (oneof) | Script (similar to vars of StringMatch in PR #29) | No |
workload | VirtualWorkload (oneof) | Workload | No |
```yaml
apiVersion: traffic.opensergo.io/v1alpha1
kind: RouterRule
metadata:
  name: my-traffic-router-rule
spec:
  selector:
    app: provider-app
  http: # http/rpc/db/redis
    - name: my-traffic-router-http-rule
      rule:
        match:
          headers:
            - X-User-Id:       # parameter name
                exact: 12345   # parameter value
          uri:
            exact: "/index"
        action:
          workload:
            workloads: my-request-rule-action
            name: http-action
        target:
          - workloads: tag-rule
            name: my-app-gray
      target:
        - workloads: tag-rule
          name: my-app-base
```
```yaml
apiVersion: traffic.opensergo.io/v1alpha1
kind: VirtualWorkloads
metadata:
  name: my-request-rule-action
spec:
  selector:
    app: rule-action
  virtualWorkload:
    - name: http-action
      target: http-action-container
      type: deployment
      selector:
        tag: http-action
      loadbalance: random
    - name: rpc-action
      target: rpc-action-container
      type: deployment
      selector:
        tag: rpc-action
      loadbalance: random
```
According to the official docs, configuration is currently delivered through control-plane CRDs.
Questions:
1. Can an application cluster deployed across both Kubernetes and ECS be governed?
2. Is a pure ECS deployment architecture supported?
The latest stable release package of opensergo-dashboard is not friendly to deploy:
1. The configuration file application.yaml and the database script schema.sql are both packaged inside opensergo-dashboard.jar, so you have to open the jar, edit the database connection, and extract schema.sql locally.
Suggested improvements:
1. Create a conf directory (alongside bin and lib) and place application.yaml and schema.sql in it.
2. In bin/startup.sh, where
JAVA_OPT="${JAVA_OPT} -Dloader.path=${BASE_DIR}/opt-libs -jar ${BASE_DIR}/lib/${SERVER}.jar"
is set, append --spring.config.location=${BASE_DIR}/conf/application.yaml.
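As a rough sketch of what the externalized file could look like once moved out of the jar (the datasource keys below follow a typical Spring Boot layout and are assumptions, not taken from the actual dashboard code):

```yaml
# conf/application.yaml (hypothetical layout)
spring:
  datasource:
    url: jdbc:mysql://127.0.0.1:3306/opensergo?useSSL=false # edit without unpacking the jar
    username: root
    password: root
  sql:
    init:
      # point schema initialization at the extracted script
      schema-locations: file:./conf/schema.sql
```

With this layout, operators edit conf/application.yaml directly and never need to open the jar.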
In ToB scenarios, the QPS of an API is usually stable, but a small number of callers occasionally query large batches of data within a short period: the QPS stays roughly constant while the data volume per query varies greatly. It would be useful to support targeted rate limiting keyed on the request IP or the caller's service name.
Scenario:
qps: 100 -> 120
avgRt: 50 -> 500
The extra 20 QPS causes a sharp rise in overall RT, because the data volume per query jumps from 10 rows to 10,000. We want to protect the stability of normal requests by rate-limiting this kind of abnormal throughput.
This is the umbrella issue of the OpenSergo database governance spec (client-side).
domain: database
Umbrella issue: #15
DatabaseEndpoint defines a physical database instance that can be connected to and consumed by an application via VirtualDatabase.
In database governance, a VirtualDatabase declares the logical database an application can use, while the data is actually stored in a physical database, called a database access endpoint, i.e. DatabaseEndpoint. A DatabaseEndpoint is invisible to the application; it can only be bound to a VirtualDatabase through a specific governance strategy and then connected and used. Taking read-write splitting as an example, a basic YAML example:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: DatabaseEndpoint
metadata:
  name: write_ds
spec:
  database:
    MySQL: # declares the type of the backend data source and related info
      url: jdbc:mysql://192.168.1.110:3306/demo_write_ds?serverTimezone=UTC&useSSL=false
      username: root
      password: root
      connectionTimeout: 30000
      idleTimeoutMilliseconds: 60000
      maxLifetimeMilliseconds: 1800000
      maxPoolSize: 50
      minPoolSize: 1
---
apiVersion: database.opensergo.io/v1alpha1
kind: DatabaseEndpoint
metadata:
  name: ds_0
spec:
  database:
    MySQL: # declares the type of the backend data source and related info
      url: jdbc:mysql://192.168.1.110:3306/demo_read_ds_0?serverTimezone=UTC&useSSL=false
      username: root
      password: root
      connectionTimeout: 30000
      idleTimeoutMilliseconds: 60000
      maxLifetimeMilliseconds: 1800000
      maxPoolSize: 50
      minPoolSize: 1
---
apiVersion: database.opensergo.io/v1alpha1
kind: DatabaseEndpoint
metadata:
  name: ds_1
spec:
  database:
    MySQL: # declares the type of the backend data source and related info
      url: jdbc:mysql://192.168.1.110:3306/demo_read_ds_1?serverTimezone=UTC&useSSL=false
      username: root
      password: root
      connectionTimeout: 30000
      idleTimeoutMilliseconds: 60000
      maxLifetimeMilliseconds: 1800000
      maxPoolSize: 50
      minPoolSize: 1
```
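To show how such endpoints are consumed, here is a hedged sketch of a ReadWriteSplitting strategy binding the endpoints defined above (the field names mirror the ReadWriteSplitting spec elsewhere in this issue list; the exact name-based binding semantics are an assumption):

```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: ReadWriteSplitting
metadata:
  name: readwrite
spec:
  rules:
    staticStrategy:
      writeDataSourceName: "write_ds" # the write endpoint defined above
      readDataSourceNames:            # the read endpoints defined above
        - "ds_0"
        - "ds_1"
      loadBalancerName: "random"
  loadBalancers:
    - loadBalancerName: "random"
      type: "RANDOM"
```

A VirtualDatabase would then reference this strategy by name, so the application only sees the logical database while reads and writes are routed across the physical endpoints.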
domain: database
Umbrella issue: #15
ReadWriteSplitting defines a set of rules that help declare how to implement read-write splitting.
Read-write splitting is one of the most common database scaling approaches: the primary database serves transactional read-write operations, while replicas mainly serve queries. It usually comes in static and dynamic configuration modes; static read-write splitting requires configuring:
A basic YAML example:
```yaml
# Static read-write splitting configuration
apiVersion: database.opensergo.io/v1alpha1
kind: ReadWriteSplitting
metadata:
  name: readwrite
spec:
  rules:
    staticStrategy:
      writeDataSourceName: "write_ds"
      readDataSourceNames:
        - "read_ds_0"
        - "read_ds_1"
      loadBalancerName: "random"
  loadBalancers:
    - loadBalancerName: "random"
      type: "RANDOM"
```
In the dynamic read-write splitting configuration, the rules are mostly the same as in the static case, but the determination of replica data sources is updated dynamically.
NOTE: this needs to be used together with DatabaseDiscovery.
```yaml
# Dynamic read-write splitting configuration
apiVersion: database.opensergo.io/v1alpha1
kind: ReadWriteSplitting
metadata:
  name: readwrite
spec:
  rules:
    dynamicStrategy:
      # autoAwareDataSourceName: "#"
      writeDataSourceQueryEnabled: true
      loadBalancerName: "random"
  loadBalancers:
    - loadBalancerName: "random"
      type: "RANDOM"
```
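Since the dynamic strategy depends on DatabaseDiscovery, a sketch of a companion DatabaseDiscovery resource that could drive it (fields mirror the DatabaseDiscovery spec elsewhere in this issue list; the pairing mechanism between the two resources is an assumption):

```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: DatabaseDiscovery
metadata:
  name: readwrite-discovery # hypothetical name
spec:
  dataSources:
    readwrite_ds:
      dataSourceNames:
        - ds_0
        - ds_1
      discoveryHeartbeatName: mgr-heartbeat
      discoveryTypeName: mgr
  discoveryHeartbeats:
    mgr-heartbeat:
      props:
        "keep-alive-cron": '0/5 * * * * ?' # probe every 5 seconds
  discoveryTypes:
    mgr:
      type: MySQL.MGR
```

As the discovery component observes primary/replica role changes, the dynamic strategy can adjust which data sources serve reads without editing the ReadWriteSplitting rule.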
Write a document about how to integrate OpenSergo event governance in each framework.
About two modules
EventMesh (no k8s environment)
Knative Dapr OpenFunction (k8s environment)
The OpenSergo community meeting is held every two weeks for about one hour, with the following agenda:
Topic: OpenSergo biweekly meeting
Date: Wednesday, Mar 8th, 2023
Time: 19:30 - 20:30 (GMT+8)
Meeting ID (Dingtalk): 244 801 51010
Telephone: +862281944261, +867388953916 (Mainland China)
Link to join: https://meeting.dingtalk.com/j/ARZpAba7pws
Thoughts on community development or the biweekly meeting format are also welcome in the comments below. Everyone in the community is welcome to join the discussion and share.
domain: database
Umbrella issue: #15
Shadow defines a set of rules that help declare a shadow database.
A shadow database helps a gray-release or testing environment receive gray traffic or test-data requests; combined with shadow algorithms, it supports flexible configuration of multiple routing modes.
Shadow algorithms include:
YAML example:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: Shadow
metadata:
  name: shadow-db
spec:
  dataSources:
    shadowDataSource:
      sourceDataSourceName: "ds"        # the source data source
      shadowDataSourceName: "shadow_ds" # the shadow data source
  tables: # of type map[string]object
    t_order: # table name
      dataSourceNames: # data source names
        - "shadowDataSource"
      shadowAlgorithmNames: # shadow algorithm names
        - "user-id-insert-match-algorithm"
        - "user-id-select-match-algorithm"
    t_order_item:
      dataSourceNames:
        - "shadowDataSource"
      shadowAlgorithmNames:
        - "user-id-insert-match-algorithm"
        - "user-id-update-match-algorithm"
        - "user-id-select-match-algorithm"
    t_address:
      dataSourceNames:
        - "shadowDataSource"
      shadowAlgorithmNames:
        - "user-id-insert-match-algorithm"
        - "user-id-select-match-algorithm"
        - "simple-hint-algorithm"
  shadowAlgorithms: # of type map[string]object
    user-id-insert-match-algorithm:
      type: REGEX_MATCH
      props:
        operation: "insert"
        column: "user_id"
        regex: "[1]"
    user-id-update-match-algorithm:
      type: REGEX_MATCH
      props:
        operation: "update"
        column: "user_id"
        regex: "[1]"
    user-id-select-match-algorithm:
      type: REGEX_MATCH
      props:
        operation: "select"
        column: "user_id"
        regex: "[1]"
    simple-hint-algorithm:
      type: "SIMPLE_HINT"
      props:
        foo: "bar"
```
Two kinds of design:
In issue #17, the community made an initial design of the structure and communication links between the OpenSergo operator and the SDKs, where the link between operator and SDK is carried by gRPC. A key design point is how OpenSergo low-level config is transported over that gRPC link, i.e. the wire format and the communication mechanism.
The community proposed a straightforward design (the OpenSergo universal transport service) that splits the process into two flows, subscription (SubscribeConfig) and config push (PushConfig); see the flow chart below:
This protocol transports every type of OpenSergo config over a single service. The design has a simple request/response ACK mechanism, but there are quite a few more considerations: ACK/NACK guarantees, resource versioning, request ordering and consistency, link complexity, and incremental vs. full-state models. The community has done an initial POC of this scheme and is validating it in more scenarios.
Another idea is to reuse the ECDS service link of the xDS protocol. This link can carry data of arbitrary types, so it could also carry OpenSergo low-level config subscription and transport, and the xDS protocol itself has matured designs around fault tolerance, stability, and performance. The advantage is that the transport part of the control plane (the ECDS server) can be implemented with the go-control-plane project, but the SDK side may require substantial adaptation work by the community (most ecosystems lack complete xDS client implementations).
Discussion of either the OpenSergo universal transport service or OpenSergo on xDS (ECDS) is welcome, as are better designs.
For companies using the Sentinel + Nacos stack, the tech ecosystem is mostly Java-centric. A Golang control plane raises the bar for extension or secondary development, making it hard to grow with the community, which strongly affects technology selection 😂. I'm currently looking for a control plane in the Java ecosystem 😂.
Currently we have cloudwego-kitex / kratos / Spring Cloud Alibaba, and more frameworks are on the way, so it would be better for the OpenSergo dashboard to be able to distinguish the different microservice frameworks.
I suggest adding one more column to the metadata to identify the name of the framework.
domain: database
Considering changing the domain to traffic
Umbrella issue: #15
ConcurrencyControl defines a set of rules that help declare what to do when spike traffic arrives.
This addresses controlling the frequency (concurrency) of SQL statements with certain attribute signatures. Such statements are often slow SQL or very resource-intensive SQL; letting them run unchecked may prevent business-critical SQL from executing normally.
Concurrency control supports multiple configuration rules, for example SQL attribute signatures preset by engineers based on experience, in two forms: regular expressions and conditional expressions.
In some scenarios, SQL concurrency control behaves as an on/off switch; when on, the concurrency of the matched SQL is limited. The on state can be determined in several ways, for example via a Cron expression deciding the effective time window.
YAML example:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: ConcurrencyControl
metadata:
  name: order
spec:
  rules:
    - cron: "* * * * *"
      maxConcurrency: 1
      regex: # regular-expression based
        - "^SELECT * FROM t_orders"
      conditions: # conditional-expression based
        - subject: column # column or table
          op: in # operator: in, eq, ne, notin, etc.
          values:
            - "items"
```
Hi community, I'd like to initiate the discussion regarding the OpenSergo fault-tolerance spec v1alpha1.
domain: fault-tolerance
Flow control, degradation, and fault tolerance are a key part of service traffic governance. Taking traffic as the entry point, they safeguard service stability through rate limiting, circuit breaking and degradation, traffic smoothing, adaptive overload protection, and so on. In OpenSergo, we want to distill standard CRDs for flow control, degradation, and fault tolerance from the practical experience of frameworks such as Sentinel. A fault-tolerance rule (FaultToleranceRule) consists of the following three parts:
Note: see here for the latest design.
Target defines which requests the rule applies to, e.g. a key (analogous to the resource-name concept in Sentinel), or HTTP requests carrying certain parameters. In v1alpha1, Target is configured directly with a resource key via targetResourceName; see the example at the end for usage.
In later versions of the spec, Target will support refinement by traffic type such as HTTP and gRPC. An HTTP target is envisioned as follows:
```yaml
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: HttpRateLimitTarget
metadata:
  name: target-http-foo-service
spec:
  raw:
    targetService: foo-service
    headerPredicates:
      - key: X-Test-Key
```
Strategy defines the fault-tolerance or control strategy of the rule. In v1alpha1, Strategy supports rate limiting, uniform queuing, concurrency limiting, circuit breaking, and system overload protection. Later versions will add capabilities such as overloaded-instance eviction/scheduling and parameter-based flow control.
The rate-limiting strategy (RateLimitStrategy) controls the number of requests within a unit of time to a given range. It is mostly used to keep spike traffic within the service's capacity and avoid the service being overwhelmed by excessive traffic. RateLimitStrategy contains the following elements:
Field | Required | Type | Description |
---|---|---|---|
metricType | required | string (enum) | Metric type; allowed value: RequestAmount |
limitMode | required | string (enum) | Limit mode: per-instance Local, cluster-wide Global, cluster threshold divided by instance count GlobalToLocal |
threshold | required | double | Threshold: the maximum amount allowed within the statistic duration |
statDuration | required | string (int+timeUnit) | Statistic duration, e.g. 1s, 5min; a separate timeUnit field is another option |
The following example defines a cluster flow-control strategy allowing at most 10 requests per second cluster-wide. Example CR YAML:
```yaml
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: RateLimitStrategy
metadata:
  name: rate-limit-foo
spec:
  metricType: RequestAmount
  limitMode: Global
  threshold: 10
  statDuration: "1s"
```
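Since the table above also defines the GlobalToLocal mode, a variant sketch of the same strategy (same fields; only limitMode changes, distributing the cluster-wide threshold across instances):

```yaml
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: RateLimitStrategy
metadata:
  name: rate-limit-foo-per-instance # hypothetical name
spec:
  metricType: RequestAmount
  limitMode: GlobalToLocal # cluster threshold divided by instance count, enforced locally
  threshold: 10
  statDuration: "1s"
```

With 5 instances, each instance would enforce roughly 2 requests per second locally, avoiding the coordination cost of a fully global counter.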
The traffic-smoothing strategy (ThrottlingStrategy) smooths concurrent requests with a uniform-rate, queue-and-wait effect. It is mostly suited to asynchronous background tasks (such as batch processing in message consumers) or latency-insensitive request scenarios. ThrottlingStrategy contains the following elements:
Field | Required | Type | Description |
---|---|---|---|
minIntervalOfRequests | required | string (int+timeUnit) | Minimum time interval between two adjacent concurrent requests |
queueTimeout | required | string (int+timeUnit) | Maximum queueing wait time |
The following example defines a uniform-queuing strategy: the interval between two adjacent concurrent requests is at least 20ms, and the smoothing queue wait is at most 500ms. Example CR YAML:
```yaml
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: ThrottlingStrategy
metadata:
  name: throttling-foo
spec:
  minIntervalOfRequests: '20ms'
  queueTimeout: '500ms'
```
Concurrency limiting (ConcurrencyLimitStrategy) controls the number of concurrent in-flight requests. It is mostly used as soft-isolation protection in slow-call scenarios, preventing the caller's thread pool from being saturated by slow calls, which could make the service or even the whole call chain unavailable. ConcurrencyLimitStrategy contains the following elements:
Field | Required | Type | Description |
---|---|---|---|
maxConcurrency | required | int | Maximum concurrency |
limitMode | required | string (enum) | Limit mode: per-instance Local, cluster-wide Global |
Example CR YAML:
```yaml
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: ConcurrencyLimitStrategy
metadata:
  name: concurrency-limit-foo
spec:
  maxConcurrency: 8
  limitMode: 'Local'
```
CircuitBreakerStrategy corresponds to the standard circuit-breaker pattern in microservice design and takes effect per instance. CircuitBreakerStrategy contains the following elements:
- strategy: circuit-breaking strategy, by slow-call ratio SlowRequestRatio or by error ratio ErrorRequestRatio
- statDuration: statistic duration, e.g. 1s, 5min; a separate timeUnit field is another option
The following example defines a slow-call-ratio circuit-breaking strategy (if within 30s at least 5 requests arrive and the proportion of requests exceeding 500ms reaches 60%, the circuit breaker opens automatically, with a recovery timeout of 5s). Example CR YAML:
```yaml
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: CircuitBreakerStrategy
metadata:
  name: circuit-breaker-slow-foo
spec:
  strategy: SlowRequestRatio
  triggerRatio: '60%'
  statDuration: '30s'
  recoveryTimeout: '5s'
  minRequestAmount: 5
  slowConditions:
    maxAllowedRt: '500ms'
```
The instance-level adaptive overload protection strategy (AdaptiveOverloadProtectionStrategy) provides overall last-line protection for instance-level stability by combining certain system metrics with adaptive strategies. Note that this strategy applies per pod of a service, takes effect on each pod independently, and does not distinguish specific conditions.
AdaptiveOverloadProtectionStrategy contains the following elements:
- adaptiveStrategy: adaptive strategy, default NONE; currently the CPU usage metric supports the BBR strategy
Example CR YAML:
```yaml
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: AdaptiveOverloadProtectionStrategy
metadata:
  name: system-overload-foo
spec:
  metricType: 'CpuPercentage'
  triggerThreshold: '70%'
  adaptiveStrategy: 'BBR'
```
In v1alpha1, since targets are keyed by a generic resourceName, FallbackAction is omitted for now.
Later versions will refine targets by traffic type such as HTTP and gRPC; a fallbackAction for HTTP requests could look like the example below.
A YAML example:
```yaml
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: RateLimitStrategy
metadata:
  name: rate-limit-foo
spec:
  metricType: RequestAmount
  limitMode: Global
  threshold: 10
  statDuration: "1s"
---
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: HttpRequestFallbackAction
metadata:
  name: fallback-foo
spec:
  behavior: ReturnProvidedResponse
  behaviorDesc:
    # When the strategy is triggered, the HTTP request returns status 429 with the given body and headers.
    responseStatusCode: 429
    responseContentBody: "Blocked by Sentinel"
    responseAdditionalHeaders:
      - key: X-Sentinel-Limit
        value: "foo"
---
apiVersion: fault-tolerance.opensergo.io/v1alpha1
kind: FaultToleranceRule
metadata:
  name: my-rule
  namespace: prod
  labels:
    app: my-app # the app the rule applies to
spec:
  targets:
    - targetResourceName: '/foo'
  strategies:
    - name: rate-limit-foo
      fallbackAction: fallback-foo
```
This rule configures a strategy for requests with key /foo (assuming the resource corresponds to HTTP requests): a rate-limit strategy of at most 10 QPS globally. When the strategy triggers, rejected requests return a 429 status code according to the configured fallback, with body Blocked by Sentinel and an additional response header with key X-Sentinel-Limit and value foo.
The community is welcome to join the discussion and co-build.
domain: database
Umbrella issue: #15
Sharding defines a set of rules that help declare how to implement sharding.
Data sharding is a scaling strategy based on data attributes: after computing over the data attributes, requests are sent to specific backend databases. It currently comes in two forms: sharding-key sharding and automatic sharding. Sharding-key sharding requires specifying the tables and columns to shard, and the sharding algorithm.
Configuring data sharding first requires choosing between sharding-key sharding and automatic sharding; the rule configuration is as follows:
The rule configuration for sharding-key sharding includes:
Both sharding forms also need binding-table and broadcast-table strategies configured per scenario; the rule configuration is as follows:
The binding-table strategy configuration includes:
The broadcast-table strategy configuration includes:
In addition, a default strategy is applied when the user does not specify one; its configuration includes:
The strategies needed include the split strategy and the distributed-sequence strategy, configured as follows:
Both databaseStrategy and tableStrategy need a split strategy:
Distributed-sequence strategy:
Sharding algorithms and distributed-sequence algorithms are configured as follows:
Sharding algorithm configuration:
Distributed-sequence algorithm configuration:
Sharding-key sharding YAML example:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: Sharding
metadata:
  name: sharding_db
spec:
  tables: # of type map[string]object
    t_order:
      actualDataNodes: "ds_${0..1}.t_order_${0..1}"
      tableStrategy:
        standard:
          shardingColumn: "order_id"
          shardingAlgorithmName: "t_order_inline"
      keyGenerateStrategy:
        column: "order_id"
        keyGeneratorName: "snowflake"
    t_order_item:
      actualDataNodes: "ds_${0..1}.t_order_item_${0..1}"
      tableStrategy:
        standard:
          shardingColumn: "order_id"
          shardingAlgorithmName: "t_order_item_inline"
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
    - "t_order,t_order_item"
  defaultDatabaseStrategy:
    standard:
      shardingColumn: "user_id"
      shardingAlgorithmName: "database_inline"
  # defaultTableStrategy: # empty means none
  shardingAlgorithms: # of type map[string]object
    database_inline:
      type: INLINE
      props: # of type map[string]string
        algorithm-expression: "ds_${user_id % 2}"
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: "d_order_${order_id % 2}"
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: "d_order_item_${order_id % 2}"
  keyGenerators: # of type map[string]object
    snowflake:
      type: SNOWFLAKE
```
Automatic sharding YAML example:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: Sharding
metadata:
  name: sharding_db
spec:
  autoTables:
    t_order_auto:
      actualDataNodes: "ds_${0..1}.t_order_${0..1}"
      shardingStrategy:
        standard:
          shardingColumn: ""
          shardingAlgorithmName: ""
```
domain: database
Umbrella issue: #15
DatabaseDiscovery defines a set of rules that help discover database topology and status changes.
Database auto-discovery means sensing data-source status changes by probing, based on the database's high-availability configuration, and adjusting traffic strategies accordingly. For example, if the backend data source is MySQL MGR, you can set the discovery type to MySQL.MGR, specify the group-name, and configure the corresponding probing heartbeat cadence.
YAML example:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: DatabaseDiscovery
metadata:
  name:
spec:
  dataSources:
    readwrite_ds:
      dataSourceNames:
        - ds_0
        - ds_1
        - ds_2
      discoveryHeartbeatName: mgr-heartbeat
      discoveryTypeName: mgr
  discoveryHeartbeats: # database discovery probing heartbeats
    mgr-heartbeat:
      props:
        "keep-alive-cron": '0/5 * * * * ?'
  discoveryTypes: # database discovery types
    mgr:
      type: MySQL.MGR
      props:
        "group-name": 92504d5b-6dec-11e8-91ea-246e9612aaf1
```
domain: database
Umbrella issue: #15
DistributedTransaction defines a set of options that declare the distributed-transaction algorithm.
This declares distributed-transaction configuration; only the transaction type is declared here, with no extra options.
YAML example:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: DistributedTransaction
metadata:
  name: mysql-production
spec:
  transaction:
    defaultType: "xa"        # one of "xa" / "base" / "local"
    providerType: "Narayana" # one of "Narayana" / "Atomikos" / "Seata"
```
Hi, because the work is mainly related to service governance, involving three aspects (SDK, agent, and Istio), a set of standard governance specifications is especially needed as a guide for development and design.
The contents of the project are still at a very early stage, and no further information can be obtained.
I'd like to ask how to better raise questions and participate in the work of the community.
Based on the routing spec mentioned in issue #63, add an event spec on top of it.
Design Document:
https://alidocs.dingtalk.com/i/nodes/NkPaj7GAXdpWO62Z4qxm8qwgylnomz9B
Standards:
1.Eclipse JakartaEE:
Core Profile
Web Profile
2.Eclipse MicroProfile
Protocols
1.RSocket
2.QUIC/HTTP3
Libraries/Frameworks
MicroService Framework
1.JVM Languages(Java/Scala/Kotlin/Groovy)
SOFAStack
Dubbo Java
Spring Boot/Spring Cloud/Spring Cloud Alibaba
Quarkus
Micronaut
Lagom/Akka
Helidon
Open Liberty
Thorntail
Payara Micro
Wildfly
Kumuluzee
Vert.x
Apache ServiceComb
Dropwizard
2.GoLang
CloudWeGo
Kratos
GoFrame
Dubbo-go
Jupiter
Go-zero
GoMicro
Rpcx
Gokit
Gizmo
3.C#/.Net
.Net Framework/Core
MASA Stack
4.C/C++
5.Rust
6.PHP
7.Nodejs/JavaScript
8.Multiple Languages
Tars
Dubbo
RPC Framework
1.Java
Motan
SOFARPC
Armeria
2.Go
KiteX
3.C#/.Net
4.C/C++
Apache BRPC
SRPC
5.Rust
Volo Of CloudWego
6.PHP
7.Nodejs/JavaScript
8.Multiple Languages
grpc
Apache Thrift
hprose
Serverless
1.Knative
2.OpenFunction
3.OpenWhisk
4.Fission
5.Kubeless
6.OpenFaas
7.IronFunctions
8.Fn
Service Mesh
1.Istio
2.Envoy
3.Linkerd2.0
4.Traefik Mesh
5.Kuma
6.Consul Connect
7.Gloo Mesh
8.Smile
9.Aeraki Mesh
Mecha/Multi Runtime
1.Dapr
2.Layotto
3.Apache EventMesh
Runtimes
1.WASM
2.GraalVM
First of all, thanks sincerely for constantly using and supporting OpenSergo. We will try our best to keep OpenSergo better, and keep the community and eco-system growing.
Please submit a comment in this issue to include the following information:
You can refer to the following sample answer for the format:
* Organization: Alibaba
* Location: Hangzhou, China
* Contact: [email protected]
* Purpose: Microservice governance for reliability and resiliency.
Thanks again for your participation!
OpenSergo Community
Kubernetes Gateway API, the new canonical Kubernetes traffic management API, has been promoted to Beta. The Gateway API provides a mechanism called policy attachment, which could associate policies (e.g. timeout, retry, rate limiting) with the Gateway route definition. We may discuss how to integrate OpenSergo spec with Kubernetes Gateway API.
The canonical way to integrate with OpenSergo SDK:
For traffic governance, we could just integrate with Sentinel (Sentinel has been working on out-of-box support for OpenSergo spec):
In particular, we could leverage the underlying mechanism to integrate with OpenSergo on Service Mesh:
Discussions are welcomed!
How can Spring Cloud microservices integrate with OpenSergo?
This issue keeps track of miscellaneous data-plane integrations with OpenSergo.
We want to add a standard CRD for the zero-trust direction to OpenSergo.
The CRD will be extended appropriately on the basis of compatibility with Istio.
The CRD involves three aspects in total, namely:
An example CRD for the authentication policy is as follows:
```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: finance
  namespace: foo
spec:
  selector:
    matchLabels:
      app: finance
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: DISABLE
```
The selectable attributes are described below:
Field | Type | Description |
---|---|---|
mtls | MutualTLS: DISABLE, PERMISSIVE, STRICT | TLS mode: plaintext, permissive, or strict |
portLevelMtls | map<uint32, MutualTLS> | Sets the TLS mode for specific ports |
An example JWT CRD is as follows:
```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt
spec:
  jwtRules:
    - issuer: "issuer-foo"
      jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.18/security/tools/jwt/samples/jwks.json
      fromHeaders:
        - name: header1
          prefix: "pre1"
        - name: header2
          prefix: "pre2"
      audiences:
        - bookstore_android.apps.example.com
        - bookstore_web.apps.example.com
      fromParams:
        - "parmas1"
        - "parmas2"
    - issuer: "issuer-foo1"
      jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.18/security/tools/jwt/samples/jwks.json
```
The selectable jwtRules attributes are described below:
Field | Type | Description |
---|---|---|
issuer | string | The iss that the JWT must match |
jwksUri | string | URL for fetching the public key used to verify the JWT |
fromHeaders | map<string,string> | Which header field, and with what prefix, to read the token from |
audiences | string[] | The aud that the JWT must match |
fromParams | string[] | Which query parameter to read the token from |
An example CRD for the authorization policy is as follows:
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin1
  namespace: default
spec:
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["principal1","principal2"]
            notPrincipals: ["notPrincipal1","notPrincipal2"]
            requestPrincipals: ["jwtp1","jwtp2"]
            notRequestPrincipals: ["notjwtp1","notjwtp2"]
            namespaces: ["namespace1","namespace2"]
            notNamespaces: ["notNamespace1","notNamespace2"]
            ipBlocks: ["10.1.1.1","10.1.1.0/24"]
            notIpBlocks: ["11.1.1.1","11.1.1.0/24"]
            remoteIpBlocks: ["12.1.1.1","12.1.1.0/24"]
            notRemoteIpBlocks: ["13.1.1.1","13.1.1.0/24"]
        - source:
            principals: ["principal3"]
      to:
        - operation:
            hosts: ["www.host1.com","www.host2.com"]
            notHosts: ["www.nothost1.com","www.nothost2.com"]
            ports: ["8080","443"]
            notPorts: ["18080","1443"]
            methods: ["GET","POST"]
            notMethods: ["PUT","DELETE"]
            paths: ["/info1*","/info2"]
            notPaths: ["/notinfo1*","/notinfo2"]
        - operation:
            hosts: ["www.host3.com"]
```
The selectable action attribute is described below:
Field | Type | Description |
---|---|---|
action | ALLOW, DENY | Allow rule or deny rule |
The selectable rules attributes are described below:
Field | Type | Description |
---|---|---|
from | source[] | Source rules |
to | operation[] | Destination rules |
The selectable from attributes are described below:
Field | Type | Description |
---|---|---|
principals | string[] | Identities to match |
notPrincipals | string[] | Identities that must not match |
requestPrincipals | string[] | JWT iss+"/"+sub values to match |
notRequestPrincipals | string[] | JWT iss+"/"+sub values that must not match |
namespaces | string[] | Namespaces to match |
notNamespaces | string[] | Namespaces that must not match |
ipBlocks | string[] | Direct source IPs to match |
notIpBlocks | string[] | Direct source IPs that must not match |
remoteIpBlocks | string[] | Original client IPs to match, taken from the X-Forwarded-For header |
notRemoteIpBlocks | string[] | Original client IPs that must not match, taken from the X-Forwarded-For header |
The selectable to attributes are described below:
Field | Type | Description |
---|---|---|
hosts | string[] | Destination hosts to match |
notHosts | string[] | Destination hosts that must not match |
ports | string[] | Destination ports to match |
notPorts | string[] | Destination ports that must not match |
methods | string[] | Request methods to match |
notMethods | string[] | Request methods that must not match |
paths | string[] | Request paths to match |
notPaths | string[] | Request paths that must not match |
Istio authentication policy:
https://istio.io/latest/docs/reference/config/security/peer_authentication/
Istio authorization policy:
https://istio.io/latest/docs/reference/config/security/authorization-policy/
Istio JWT rules:
https://istio.io/latest/docs/reference/config/security/jwt/
Referring to the EDA (event-driven architecture) model, we want to govern the flow of event messages. At present there are the following governance aspects:
Event production governance, including synchronous and asynchronous traffic control and exception handling
Event consumption governance, including traffic control and exception handling
Event pipeline governance: the pipeline refers to various kinds of MQ or in-memory queues; this covers standardized control of pipelines, such as topic management
Event flow audit: auditing the traffic at the data entrance and exit of each stage
EventMesh is the world's first EDA+Serverless project, filling the gap around "Eventing as an Infrastructure" in the open-source field. It is the first project from the financial industry to enter incubation at the Apache Foundation, is included in the Linux Foundation CNCF Landscape, and is a trusted open-source community. Project PPMC members come mainly from domestic first-tier companies such as Tencent, Huawei, Alibaba, and Didi, as well as several members from abroad. The community has more than 230 active contributors from more than 10 regions and countries around the world, and cumulative external code contributions exceed 400,000 lines, 8 times the internal contribution. It has the world's first go-engine implementation complying with the CNCF Serverless Workflow standard, which has attracted wide attention in the serverless field. Its original message-based request-reply synchronous communication mode has been integrated by the Apache star projects RocketMQ and Dubbo and widely serves a large number of enterprises and business scenarios. It has been adopted by large enterprises such as Huawei Cloud, Tencent, Zhengcai Cloud, Yonghui Superstores, and Linghang Power; in particular, the Huawei Cloud EventGrid product is built entirely on EventMesh and serves a large number of customers on Huawei Cloud.
domain: database
Umbrella issue: #15
BadConnectionEvictionRule defines how to handle the "bad" connections in the database connection pool.
BadConnectionEvictionRule defines the rules for handling bad connections in a database connection pool; the definition of a "bad" connection, as well as the handling conditions and behavior, are all defined in the rule:
The following YAML describes a bad-connection eviction rule. It defines SQL accesses with error codes 1040, 1042, 1043, or 1047, as well as the exception type com.mysql.jdbc.CommunicationsException, as "bad"; the eviction condition is any occurrence of the above error codes or exceptions on a connection; the eviction action is to remove the connection and replace it with a new one. YAML example:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: BadConnectionEvictionRule
metadata:
  name: my-bad-connection-goaway-rule
  labels:
    app: foo-app
spec:
  selector:
    app: foo-app
  targetErrors:
    - errorCode: [1040, 1042, 1043, 1047]
    - errorType: ['com.mysql.jdbc.CommunicationsException']
  condition:
    type: AnyOccurrence
  evictionAction: ReplaceWithNew
```
domain: database
Umbrella issue: #15
VirtualDatabase defines what a database looks like from the application's point of view.
In database governance, everything — read-write splitting, sharding, shadow databases, as well as encryption, audit, and access control — operates on a concrete database. Here we call such a logical database a virtual database, i.e. VirtualDatabase. From the application's point of view, a VirtualDatabase is a set of specific database access information, and it gains governance capabilities by binding specific governance strategies. Taking read-write splitting as an example, a basic YAML example:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: VirtualDatabase
metadata:
  name: readwrite_splitting_db
spec:
  services:
    - name: readwrite_splitting_db
      databaseMySQL:
        db: readwrite_splitting_db
        host: localhost
        port: 3306
        user: root
        password: root
      readWriteSplitting: "readwrite" # declares the required read-write splitting strategy
```
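The readWriteSplitting field above references a governance strategy by name. A minimal sketch of the referenced resource, assuming name-based binding and mirroring the ReadWriteSplitting spec elsewhere in this issue list:

```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: ReadWriteSplitting
metadata:
  name: readwrite # the name referenced by readWriteSplitting above
spec:
  rules:
    staticStrategy:
      writeDataSourceName: "write_ds"
      readDataSourceNames:
        - "read_ds_0"
        - "read_ds_1"
      loadBalancerName: "random"
  loadBalancers:
    - loadBalancerName: "random"
      type: "RANDOM"
```

The application only ever connects to readwrite_splitting_db; the bound strategy decides which physical endpoint actually serves each statement.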
Refactored traffic routing spec, based on Istio VS/DR CRD.
(TBD...)
domain: database
Umbrella issue: #15
Encryption defines a set of rules that help declare data encryption configurations.
Enterprises often need multiple security-hardening measures for data storage, such as data encryption, due to security audit and compliance requirements.
Data encryption parses the SQL entered by the user and rewrites it according to user-provided encryption rules, encrypting the plaintext and storing the plaintext (optionally) together with the ciphertext in the underlying database. When the user queries data, only the ciphertext is fetched from the database and decrypted, and the decrypted original data is returned to the user.
The configuration includes:
```yaml
apiVersion: database.opensergo.io/v1alpha1
kind: Encryption
metadata:
  name: encrypt-db
spec:
  encryptors: # of type map[string]object
    aes_encryptor: # encryptor name
      type: AES
      props:
        "aes-key-value": "123456abc"
    md5_encryptor: # encryptor name
      type: "MD5"
  tables: # of type map[string]object
    t_encrypt: # encrypted table name
      columns: # of type map[string]object
        user_id: # encrypted column name
          plainColumn: "user_plain"      # plaintext column name
          cipherColumn: "user_cipher"    # ciphertext column name
          encryptorName: "aes_encryptor" # encryptor name
          assistedQueryColumn: ""        # assisted-query column name
        order_id: # encrypted column name
          cipherColumn: "order_cipher"
          encryptorName: "md5_encryptor"
  queryWithCipherColumn: true # whether to query via the cipher column; when a plaintext column exists, it can be used for queries instead
```
Outlier eviction is the process of dynamically isolating or evicting an "instance" (e.g. a service instance, a thread of the thread pool, a connection of the connection pool) that performs unexpected behaviors (e.g. error or slow).
Discussions are welcomed.
Traffic routing, as the name suggests, routes traffic with certain attributes to a specified destination. Traffic routing is a key part of traffic governance; based on a traffic routing standard, we can implement scenarios such as full-link gray release, canary release, and disaster-recovery routing.
The traffic routing rules (v1alpha1) consist of three main parts:
- Workload label rule (WorkloadLabelRule): attaches labels to a group of workloads (e.g. a Kubernetes Deployment or StatefulSet, a group of pods, a JVM process, or even a group of DB instances)
- Traffic label rule (TrafficLabelRule): attaches labels to traffic with certain attributes
A workload label rule (WorkloadLabelRule) attaches labels to a group of workloads (e.g. a Kubernetes Deployment or StatefulSet, a group of pods, a JVM process, or even a group of DB or cache instances).
For general workload labeling scenarios, we can use the WorkloadLabelRule CRD. In particular, for Kubernetes workloads, labels can be bound by putting them directly on the workload, e.g. labeling a Deployment with traffic.opensergo.io/label: gray to mark it as gray.
A standard workload partition should look like:
```yaml
apiVersion: traffic.opensergo.io/v1alpha1
kind: WorkloadLabelRule
metadata:
  name: gray-sts-label-rule
spec:
  workloadLabels: ['gray']
  selector:
    app: my-app-gray
    database: 'foo_db'
```
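For the Kubernetes direct-labeling path mentioned above, a sketch of a Deployment carrying the traffic.opensergo.io/label label (the label key comes from the text; all other names and fields are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-gray # illustrative name
  labels:
    traffic.opensergo.io/label: gray # marks this workload as the gray group
spec:
  selector:
    matchLabels:
      app: my-app-gray
  template:
    metadata:
      labels:
        app: my-app-gray
    spec:
      containers:
        - name: app
          image: my-app:gray # illustrative image
```

With the label on the Deployment itself, no separate WorkloadLabelRule is needed for this workload.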
A traffic label rule (TrafficLabelRule) attaches labels to traffic with certain attributes. Example YAML:
```yaml
apiVersion: traffic.opensergo.io/v1alpha1
kind: TrafficLabelRule
metadata:
  name: my-traffic-label-rule
  labels:
    app: my-app
spec:
  selector:
    app: my-app
  trafficLabel: gray
  match:
    - condition: "=="    # match expression
      type: header       # type of the matched attribute
      key: 'X-User-Id'   # parameter name
      value: 12345       # parameter value
    - condition: "=="
      value: "/index"
      type: path
```
In the actual routing process, a microservice framework or Service Mesh proxy that integrates OpenSergo can recognize both traffic labels and workload labels, as long as it implements the OpenSergo standard and the rules above are configured. Labeled traffic then flows to the instance group with the matching label; if the cluster has no instance group with that label (i.e. no workload carries the label), the traffic falls back to unlabeled instances by default. A later version of the standard will provide a way to configure fallback behavior for unmatched traffic.
The community is welcome to join the discussion.