An open source object storage server with Amazon S3 compatible API

License: Apache License 2.0


yig's Introduction

YIG


Yet another Index Gateway

YIG is a SODA Foundation project

Introduction

A newly designed object storage gateway framework that is fully compatible with Amazon S3

At its core, Yig extends minio's backend storage to let more than one Ceph cluster work together and form a very large storage resource pool; users can easily enlarge the pool's capacity to the EB level by adding a new Ceph cluster to the pool. The benefit is avoiding the data movement and I/O degradation that adding new hosts or disks to an existing Ceph cluster usually causes. To accomplish this goal, Yig needs a distributed database to store meta information; TiDB and MySQL are currently supported.

[architecture diagram]
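To make the pooling idea concrete, here is a deliberately tiny Go sketch of picking among several Ceph clusters; Cluster, Pool and PickCluster are hypothetical names for illustration, not yig's actual types:

package main

import "fmt"

type Cluster struct {
	ID   string
	Free uint64 // remaining capacity, in bytes
}

type Pool struct {
	Clusters []Cluster
}

// PickCluster chooses the cluster with the most free space, so adding a
// new cluster to the slice immediately enlarges the usable pool.
func (p *Pool) PickCluster() *Cluster {
	var best *Cluster
	for i := range p.Clusters {
		if best == nil || p.Clusters[i].Free > best.Free {
			best = &p.Clusters[i]
		}
	}
	return best
}

func main() {
	pool := Pool{Clusters: []Cluster{{"ceph-1", 10 << 40}, {"ceph-2", 500 << 40}}}
	fmt.Println(pool.PickCluster().ID) // ceph-2
}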

Getting Started

Build

How to build?

Requirements:

  • ceph-devel
  • go(>=1.7)

Steps:

mkdir -p $GOPATH/src/github.com/journeymidnight
cd $GOPATH/src/github.com/journeymidnight
git clone git@github.com:yig/yig.git
cd $YIG_DIR
go get ./...
go build

Build rpm package:

yum install ceph-devel
sh package/rpmbuild.sh

Dependency

Before running Yig, the requirements below must be met:

  • Deploy at least one Ceph cluster, with the two specific pools named 'tiger' and 'rabbit' created. For how to deploy Ceph, please refer to https://ceph.com or our [Sample]

  • Deploy TiDB/MySQL, then create the tables. [Sample]

    • TiDB/MySQL:
     MariaDB [(none)]> create database yig;
     MariaDB [(none)]> source ../yig/integrate/yig.sql
    
  • Deploy yig-iam, which is used for user management and request authorization. If Yig runs in debug mode, requests are not sent to yig-iam, so this deployment is optional; in a real production environment, however, you still need it.

  • Deploy a standalone Redis instance used as a cache for better performance. This deployment is optional but strongly recommended:

yum install redis
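As a side note, the tidb_info value that appears in yig.toml below is a standard go-sql-driver/mysql DSN, so the metadata store can be reached from Go with database/sql; a minimal sketch, using the sample DSN from this README:

package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

func main() {
	// Same DSN format as tidb_info in yig.toml (sample value, adjust to your deployment).
	db, err := sql.Open("mysql", "root:@tcp(10.5.0.17:4000)/yig")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil { // verify the connection actually works
		log.Fatal(err)
	}
	log.Println("connected to the yig metadata database")
}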

Config files

The main config file of Yig is located at /etc/yig/yig.toml by default:

s3domain = ["s3.test.com","s3-internal.test.com"]
region = "cn-bj-1"
log_path = "/var/log/yig/yig.log"
access_log_path = "/var/log/yig/access.log"
access_log_format = "{combined}"
panic_log_path = "/var/log/yig/panic.log"
log_level = 20
pid_file = "/var/run/yig/yig.pid"
api_listener = "0.0.0.0:8080"
admin_listener = "0.0.0.0:9000"
admin_key = "secret"
ssl_key_path = ""
ssl_cert_path = ""

# DebugMode
lcdebug = true
debug_mode = true
reserved_origins = "s3.test.com,s3-internal.test.com"

# Meta Config
meta_cache_type = 2
meta_store = "tidb"
tidb_info = "root:@tcp(10.5.0.17:4000)/yig"
keepalive = true
zk_address = "hbase:2181"
redis_address = "redis:6379"
redis_password = "hehehehe"
redis_connection_number = 10
memory_cache_max_entry_count = 100000
enable_data_cache = true
redis_connect_timeout = 1
redis_read_timeout = 1
redis_write_timeout = 1
redis_keepalive = 60
redis_pool_max_idle = 3
redis_pool_idle_timeout = 30
cache_circuit_check_interval = 3
cache_circuit_close_sleep_window = 1
cache_circuit_close_required_count = 3
cache_circuit_open_threshold = 1


# Ceph Config
ceph_config_pattern = "/etc/ceph/*.conf"

Meanings of the options above:

S3Domain: your S3 service domain
Region: region name; the value doesn't matter much
IamEndpoint: address of the iam service
IamKey: choose as you wish, but it must match the iam config files
IamSecret: choose as you wish, but it must match the iam config files
LogPath: location of the yig access log file
PanicLogPath: location of the yig panic log file
BindApiAddress: your S3 service endpoint
BindAdminAddress: endpoint for tools/admin
SSLKeyPath: SSL key location
SSLCertPath: SSL cert location
ZookeeperAddress: zookeeper address, if you choose hbase
RedisAddress: Redis access address
DebugMode: if set to true, only requests signed with [AK/SK:hehehehe/hehehehe] are valid
AdminKey: used by tools/admin
MetaCacheType:
EnableDataCache:
CephConfigPattern: ceph config files for yig
GcThread: controls gc speed when tools/lc is running
LogLevel: [1-20] the bigger the number, the more log output goes to the log file
ReservedOrigins: CORS origins allowed when S3 requests come from a web browser
TidbInfo:
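For illustration, a file in this format can be decoded with a TOML library such as BurntSushi/toml; the struct below mirrors only a handful of the keys above and is an assumption, not yig's real config type:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// Config mirrors a small subset of /etc/yig/yig.toml (illustrative only).
type Config struct {
	S3Domain    []string `toml:"s3domain"`
	Region      string   `toml:"region"`
	APIListener string   `toml:"api_listener"`
	MetaStore   string   `toml:"meta_store"`
	TidbInfo    string   `toml:"tidb_info"`
}

func main() {
	var cfg Config
	if _, err := toml.DecodeFile("/etc/yig/yig.toml", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("serving %v on %s, meta store %s\n",
		cfg.S3Domain, cfg.APIListener, cfg.MetaStore)
}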

Ceph config files

Combine your Ceph cluster config file [/etc/ceph/ceph.conf] with [/etc/ceph/ceph.client.admin.keyring], then put the combined file at the location specified by 'CephConfigPattern'. A sample is below:

[global]
fsid = 7b3c9d3a-65f3-4024-aaf1-a29b9422665c
mon_initial_members = ceph57
mon_host = 10.180.92.57
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128

[client.admin]
        key = AQCulvxWKAl/MRAA0weYOmmkArUm/CGBHX0eSA==
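Since 'CephConfigPattern' is a glob, yig can pick up one such combined file per cluster; a sketch of how a pattern like this is typically expanded in Go (illustrative, not yig's actual loading code):

package main

import (
	"fmt"
	"log"
	"path/filepath"
)

func main() {
	// Same pattern as ceph_config_pattern in yig.toml.
	confs, err := filepath.Glob("/etc/ceph/*.conf")
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range confs {
		fmt.Println("would load Ceph cluster from", c) // one combined conf+keyring per cluster
	}
}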

Run

Start server:

cd $YIG_DIR
sudo ./yig

OR

systemctl start yig

Documentation

Please refer to our wiki for more information

License

Apache License 2.0

Feedback & Contact

  • Github Issues: You are sincerely welcome to report any bugs you come across, or any suggestions, through GitHub issues. If you have questions, you can create an issue or scan the QR code below with WeChat for more details.
  • User group: We currently use a WeChat group.


yig's Issues

yig_delete_daemon is dead

[yig]2017/12/15 17:25:26 CheckAndPut error: false HBase Java exception org.apache.hadoop.hbase.regionserver.RegionServerAbortedException:
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: Server hbase1,16020,1512447637244 aborting
at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1106)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2228)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33646)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:748)

(the same RegionServerAbortedException and stack trace repeat twice more in the log, truncated here)

When bucket versioning status is "Suspended", should nullVerNum be passed to MetaStorage to record the current null version id?

func (yig *YigStorage) PutObject(bucketName string, objectName string, credential common.Credential,
...
var nullVerNum uint64
	nullVerNum, err = yig.checkOldObject(bucketName, objectName, bucket.Versioning)
	if err != nil {
		RecycleQueue <- maybeObjectToRecycle
		return
	}
	if bucket.Versioning == "Enabled" {
		result.VersionId = object.GetVersionId()
	}
	// update null version number
	if bucket.Versioning == "Suspended" {
		nullVerNum = uint64(object.LastModifiedTime.UnixNano())
	}

	if nullVerNum != 0 {
		objMap := &meta.ObjMap{
			Name:       objectName,
			BucketName: bucketName,
		// =====> should nullVerNum be passed here, or is nullVerNum meaningless?
		}
		err = yig.MetaStorage.PutObject(object, nil, objMap, true)
	} else {
		err = yig.MetaStorage.PutObject(object, nil, nil, true)
	}

By the way, for the null version I think we could just set the version to one fixed value like 0 or nil, and substitute it when building the SQL?
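A minimal sketch of what the report suggests, assuming ObjMap gains a NullVerNum field (hypothetical; the field is not shown in the snippet above):

	if nullVerNum != 0 {
		objMap := &meta.ObjMap{
			Name:       objectName,
			BucketName: bucketName,
			NullVerNum: nullVerNum, // hypothetical field carrying the null version id
		}
		err = yig.MetaStorage.PutObject(object, nil, objMap, true)
	} else {
		err = yig.MetaStorage.PutObject(object, nil, nil, true)
	}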

function GetUniqUploadName may not be reliable for making the rados objectID unique

func (cluster *CephStorage) GetUniqUploadName() string {
	cluster.CountMutex.Lock()
	defer cluster.CountMutex.Unlock()
	cluster.Counter += 1
	oid := fmt.Sprintf("%d:%d", cluster.InstanceId, cluster.Counter)
	return oid
}

A single gateway is fine, but if many gateways put objects concurrently the oid may be the same; also, after a gateway restarts, cluster.Counter resets to zero, which makes it quite likely an oid is rewritten. How can this be resolved? I'm a little confused by this part of the code.
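One common way to avoid both problems described above (cross-gateway collisions, and the counter resetting on restart) is to mix in entropy that survives restarts; a hedged sketch, not yig's actual fix (needs imports "crypto/rand", "fmt", "time"):

func (cluster *CephStorage) GetUniqUploadName() string {
	buf := make([]byte, 8)
	if _, err := rand.Read(buf); err != nil { // crypto/rand
		panic(err) // effectively cannot happen
	}
	// instance id + wall clock + randomness: survives restarts and
	// concurrent gateways, unlike the in-memory counter above.
	return fmt.Sprintf("%d:%d:%x", cluster.InstanceId, time.Now().UnixNano(), buf)
}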

meta.GetObjectMap code bug

Describe the bug
file meta/object.go:49

func (m *Meta) GetObjectMap(bucketName, objectName string) (objMap *ObjMap, err error) {
	m.Client.GetObjectMap(bucketName, objectName)
	return
}

should be

func (m *Meta) GetObjectMap(bucketName, objectName string) (objMap *ObjMap, err error) {
	return m.Client.GetObjectMap(bucketName, objectName)
}


s3test brings out a bad result

I ran s3test against yig and found that yig's compatibility with the S3 API is poor: 555 tests were run, with SKIP=55, errors=286, failures=69.

Here are my configurations:

yig.toml:
s3domain = ["s3.test.com","s3-internal.test.com"]
region = ""
log_path = "/var/log/yig/yig.log"
access_log_path = "/var/log/yig/access.log"
access_log_format = "{combined}"
panic_log_path = "/var/log/yig/panic.log"
log_level = "info"
pid_file = "/var/run/yig/yig.pid"
api_listener = "0.0.0.0:8080"
admin_listener = "0.0.0.0:9000"
admin_key = "secret"
ssl_key_path = ""
ssl_cert_path = ""

debug_mode = false
reserved_origins = "s3.test.com,s3-internal.test.com"

meta_cache_type = 2
meta_store = "tidb"
tidb_info = "root:@tcp(192.168.5.106:4000)/yig"
keepalive = true
redis_address = "192.168.5.105:6379"
redis_password = ""
redis_connection_number = 10
memory_cache_max_entry_count = 100000
enable_data_cache = true
redis_connect_timeout = 1
redis_read_timeout = 1
redis_write_timeout = 1
redis_keepalive = 60
redis_pool_max_idle = 3
redis_pool_idle_timeout = 30

cache_circuit_check_interval = 3
cache_circuit_close_sleep_window = 1
cache_circuit_close_required_count = 3
cache_circuit_open_threshold = 1
cache_circuit_exec_timeout = 5
cache_circuit_exec_max_concurrent = -1

download_buf_pool_size = 8388608 #8MB
upload_min_chunk_size = 524288 #512KB
upload_max_chunk_size = 8388608 #8MB

ceph_config_pattern = "/etc/ceph/*.conf"

[plugins.dummy_compression]
path = "/etc/yig/plugins/dummy_compression_plugin.so"
enable = true

[plugins.encryption_vault]
path = "/etc/yig/plugins/vault_plugin.so"
enable = false
[plugins.encryption_vault.args]
endpoint = "http://10.5.0.19:8200"
kms_id = "your_id"
kms_secret = "your_secret"
version = 0
keyName = "yig"

[plugins.encryption_kms]
path = "/etc/yig/plugins/kms_plugin.so"
enable = false
[plugins.encryption_kms.args]
url = "https://10.0.47.24:8011"
keyName = "alias/yig"
accessKey = "UYqO9JUqFFejWmO2"
secretAccessKey = "0TsGg4snfyWMcn7KrfyeYKPZnLEm63"
debug_mode = true
token = "20cd173e-9d5a-431f-a26d-cdb1ad168c84"

[plugins.dummy_encryption_kms]
path = "/etc/yig/plugins/dummy_kms_plugin.so"
enable = true
[plugins.dummy_encryption_kms.args]
url = "KMS"

[plugins.dummy_mq]
path = "/etc/yig/plugins/dummy_mq_plugin.so"
enable = true
[plugins.dummy_mq.args]
topic = "testTopic2"
url = "kafka:29092"

[plugins.dummy_iam]
path = "/etc/yig/plugins/dummy_iam_plugin.so"
enable = true
[plugins.dummy_iam.args]
url="s3.test.com"

[plugins.yig_iam]
path = "/etc/yig/plugins/yig_iam_plugin.so"
enable = true
[plugins.yig_iam.args]
EndPoint="http://127.0.0.1:8888/api/v1/yig/fetchsecretkey"
ManageKey="key"
ManageSecret="secret"

[plugins.not_exist]
path = "not_exist_so"
enable = false

yig-iam conf.toml:
ManageKey = "key"
ManageSecret = "secret"
Logpath = "/var/log/yig-iam/iam.log"
Loglevel = "info"
Accesslog = "/var/log/yig-iam/access.log"
PidFile = "/tmp/iam.pid"
BindPort = 8888
RbacDataSource = "root:@tcp(192.168.5.106:4000)/"
UserDataSource = "root:@tcp(192.168.5.106:4000)/"
TokenExpire = 36000


I am happy that yig works with s3cmd and the aws sdk. However, it bothers me that s3test produces such a bad result. I would appreciate it if somebody could show me the proper s3test configuration and expected result.

ERROR: S3 error: 405 (Method Not Allowed)

Describe the bug
I installed yig for the first time and tested it with s3cmd: s3cmd ls works, but creating a bucket returns a 405 (Method Not Allowed) error, with both the debug-mode (hehehehe) account and an iam account.
The returned log is as follows:
2020/10/20 21:52:03.878296 cache.go:71 [INFO] enabledSimpleMetaCache.Get table: 1 key: test
2020/10/20 21:52:03.886296 bucket.go:15 [INFO] GetBucket CacheMiss. bucket: test
2020/10/20 21:52:03.886359 cache.go:88 [ERROR] exec onCacheMiss() err: The specified bucket does not exist
2020/10/20 21:52:03.886424 generic-handlers.go:254 IEUUK4R27P0QNI26 [INFO] BucketName: test, ObjectName: , BucketInfo: , ObjectInfo: , AuthType: 6
2020/10/20 21:52:03.886458 log-handler.go:16 IEUUK4R27P0QNI26 [INFO] Start serving GET 10.30.130.22 /test/?location
20 Url of GetCredential send request to IAM : http://127.0.0.1:8888/api/v1/yig/fetchsecretkey
20 GetCredential to IAM return status 200 OK
2020/10/20 21:52:03.891092 cache.go:71 [INFO] enabledSimpleMetaCache.Get table: 0 key: p-1FxlqtBtUEI7NQup
2020/10/20 21:52:03.893593 log-handler.go:18 IEUUK4R27P0QNI26 [INFO] Completed GET 10.30.130.22 /test/?location
2020/10/20 21:52:03.894021 access-log-handler.go:87 [INFO] Succeed to send message [map[body_bytes_sent:242 bucket_name:test error_code:- host_name:10.30.130.22 http_referer:"-" http_status:200 http_user_agent:"-" http_x_real_ip:- object_name:- object_size:0 operation_name:ListBuckets project_id:- remote_addr:10.30.130.22:41896 request_id:IEUUK4R27P0QNI26 request_length:0 request_time:7 request_uri:GET /test/?location HTTP/1.1 requester_id:p-1FxlqtBtUEI7NQup server_cost:7 time_local:[2020-10-20 21:52:03]]] to message queue.
2020/10/20 21:52:03.898765 cache.go:71 [INFO] enabledSimpleMetaCache.Get table: 1 key: test
2020/10/20 21:52:03.901148 bucket.go:15 [INFO] GetBucket CacheMiss. bucket: test
2020/10/20 21:52:03.901196 cache.go:88 [ERROR] exec onCacheMiss() err: The specified bucket does not exist
2020/10/20 21:52:03.901252 generic-handlers.go:254 Y9QMJK9FH9N6K92X [INFO] BucketName: test, ObjectName: , BucketInfo: , ObjectInfo: , AuthType: 6
2020/10/20 21:52:03.901275 log-handler.go:16 Y9QMJK9FH9N6K92X [INFO] Start serving PUT 10.30.130.22 /test/
2020/10/20 21:52:03.901356 log-handler.go:18 Y9QMJK9FH9N6K92X [INFO] Completed PUT 10.30.130.22 /test/
2020/10/20 21:52:03.901628 access-log-handler.go:87 [INFO] Succeed to send message [map[body_bytes_sent:0 bucket_name:test error_code:- host_name:10.30.130.22 http_referer:"-" http_status:200 http_user_agent:"-" http_x_real_ip:- object_name:- object_size:0 operation_name:- project_id:- remote_addr:10.30.130.22:41896 request_id:Y9QMJK9FH9N6K92X request_length:0 request_time:0 request_uri:PUT /test/ HTTP/1.1 requester_id:p-1FxlqtBtUEI7NQup server_cost:0 time_local:[2020-10-20 21:52:03]]] to message queue.

My config file is as follows:
s3domain = ["s3.test.com"]
region = "cn-bj-1"
log_path = "/var/log/yig/yig.log"
access_log_path = "/var/log/yig/access.log"
access_log_format = "{combined}"
panic_log_path = "/var/log/yig/panic.log"
log_level = "info"
pid_file = "/var/run/yig/yig.pid"
api_listener = "0.0.0.0:80"
admin_listener = "0.0.0.0:9000"
admin_key = "secret"
ssl_key_path = ""
ssl_cert_path = ""
piggyback_update_usage = true

debug_mode = false
enable_pprof = false
pprof_listener = "0.0.0.0:8730"
reserved_origins = "s3.test.com,s3-internal.test.com"

# Meta Config

meta_cache_type = 2
meta_store = "tidb"
tidb_info = "root:@tcp(10.30.130.22:4000)/yig"
keepalive = true
enable_compression = false
enable_usage_push = false
redis_address = "127.0.0.1:6379"
redis_password = "hehehehe"
redis_connection_number = 10
memory_cache_max_entry_count = 100000
enable_data_cache = true
redis_connect_timeout = 1
redis_read_timeout = 1
redis_write_timeout = 1
redis_keepalive = 60
redis_pool_max_idle = 3
redis_pool_idle_timeout = 30

cache_circuit_check_interval = 3
cache_circuit_close_sleep_window = 1
cache_circuit_close_required_count = 3
cache_circuit_open_threshold = 1
cache_circuit_exec_timeout = 5
cache_circuit_exec_max_concurrent = -1

db_max_open_conns = 10240
db_max_idle_conns = 1024
db_conn_max_life_seconds = 300

download_buf_pool_size = 8388608 #8MB
upload_min_chunk_size = 524288 #512KB
upload_max_chunk_size = 8388608 #8MB

# Ceph Config

ceph_config_pattern = "/etc/ceph/*.conf"

# Plugin Config

[plugins.dummy_compression]
path = "/etc/yig/plugins/dummy_compression_plugin.so"
enable = true

[plugins.encryption_vault]
path = "/etc/yig/plugins/vault_plugin.so"
enable = false
[plugins.encryption_vault.args]
endpoint = "http://10.30.130.22:8200"
kms_id = "your_id"
kms_secret = "your_secret"
version = 0
keyName = "yig"

[plugins.encryption_kms]
path = "/etc/yig/plugins/kms_plugin.so"
enable = false
[plugins.encryption_kms.args]
url = "https://10.20.130.22:8011"
keyName = "alias/yig"
accessKey = "UYqO9JUqFFejWmO2"
secretAccessKey = "0TsGg4snfyWMcn7KrfyeYKPZnLEm63"
debug_mode = true
token = "20cd173e-9d5a-431f-a26d-cdb1ad168c84"

[plugins.dummy_encryption_kms]
path = "/etc/yig/plugins/dummy_kms_plugin.so"
enable = true
[plugins.dummy_encryption_kms.args]
url = "KMS"

[plugins.dummy_mq]
path = "/etc/yig/plugins/dummy_mq_plugin.so"
enable = true
[plugins.dummy_mq.args]
topic = "testTopic2"
url = "kafka:29092"

#[plugins.dummy_iam]
#path = "/etc/yig/plugins/dummy_iam_plugin.so"
#enable = true
#[plugins.dummy_iam.args]
#url="s3.test.com"

[plugins.yig_iam]
path = "/etc/yig/plugins/yig_iam_plugin.so"
enable = true
[plugins.yig_iam.args]
EndPoint="http://127.0.0.1:8888/api/v1/yig/fetchsecretkey"
ManageKey="key"
ManageSecret="secret"

[plugins.not_exist]
path = "not_exist_so"
enable = false

Fix Cache-Control in 2 steps

1. Add a new middleware: all css, js, jpg, png, gif, jpeg objects get Cache-Control public with a max-age of one year (see the sketch below).
2. Second, following Amazon's API, allow cache-control to be set at upload time; on Get, if no default Cache-Control is found, generate a new Cache-Control according to the rule above and return it.
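A minimal net/http sketch of step 1 (illustrative; yig's real handler chain differs):

package main

import (
	"net/http"
	"path"
)

var staticExt = map[string]bool{
	".css": true, ".js": true, ".jpg": true,
	".png": true, ".gif": true, ".jpeg": true,
}

// cacheControl sets the step-1 default; a handler that later writes the
// object's own Cache-Control (step 2) simply overwrites this header.
func cacheControl(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if staticExt[path.Ext(r.URL.Path)] {
			w.Header().Set("Cache-Control", "public, max-age=31536000") // one year
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	fs := http.FileServer(http.Dir("."))
	http.ListenAndServe(":8080", cacheControl(fs))
}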

Multiple problems when first using yig

1. After installing yig and starting it, the yig log repeatedly shows:
[yig]2018/05/31 16:54:02 Reading Ceph conf files from [/etc/ceph/ceph.conf]
[yig]2018/05/31 16:54:02 Loading Ceph file /etc/ceph/ceph.conf
[yig]2018/05/31 16:54:02 Ceph Cluster 3502177b-017c-4af6-9a65-c65eb718088b is ready, InstanceId is 24100
[yig]2018/05/31 16:54:02
S3 Object Storage:
[yig]2018/05/31 16:54:02 http://0.0.0.0:9000
[yig]2018/05/31 16:54:02
S3 Object Storage:
[yig]2018/05/31 16:54:02 http://0.0.0.0:80
[yig]2018/05/31 16:54:19 STARTING GET 127.0.0.1/nstatus RequestID:UPUMWJ1XZBV33CP7
[yig]2018/05/31 16:54:19 ServeHTTP nstatus
[yig]2018/05/31 16:54:19 Response status code: 403
[yig]2018/05/31 16:54:19 COMPLETED GET 127.0.0.1/nstatus RequestID:UPUMWJ1XZBV33CP7
[yig]2018/05/31 16:54:19 STARTING GET 127.0.0.1/nstatus RequestID:33T360GLE2DB4Q6S

Where do these requests come from, and how should they be handled?

2. The documentation has no configuration notes for mariadb.
Can "MetaStore": "tidb" only be tidb?

3. Setting EnableCache to false does not take effect; yig still contacts redis at startup.

Get every object while listing objects

Describe the bug
In meta/client/tidbclient/bucket.go, the function ListObjects gets every object after getting the list, and this causes high latency. Could it return directly after getting the full list?

[doc] Request for documentation update

Hello, thank you for your excellent work. Judging from the code, the meta module is now completely based on TiDB, but the deployment and architecture documents have still not been updated.

Multipart upload fails

Using s3cmd for a multipart upload, an error occurs at complete time:

Unable to complete multipart upload.
With error:  Error 1292: Incorrect datetime value: '2019-07-08T07:20:18.000Z' for column `yig`.`objectpart`.`lastmodified` at row
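Error 1292 is MySQL/TiDB rejecting the ISO-8601 string: DATETIME columns expect 'YYYY-MM-DD HH:MM:SS'. A sketch of the conversion in Go, using the value from the error message:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The value from the error message, in the RFC3339 form S3 uses.
	t, err := time.Parse(time.RFC3339, "2019-07-08T07:20:18.000Z")
	if err != nil {
		panic(err)
	}
	// MySQL/TiDB DATETIME accepts this layout.
	fmt.Println(t.Format("2006-01-02 15:04:05")) // 2019-07-08 07:20:18
}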

yig_delete_daemon shows failed deletes

[yig]2017/12/18 11:05:04 success delete : : 18cca480-0fb6-4eb0-a503-459660d4d5f0 : rabbit : 24360:290685
[yig]2017/12/18 11:05:04 success delete : : 18cca480-0fb6-4eb0-a503-459660d4d5f0 : rabbit : 24360:198737
[yig]2017/12/18 11:05:04 success delete : : 18cca480-0fb6-4eb0-a503-459660d4d5f0 : rabbit : 24360:62650
[yig]2017/12/18 11:05:04 success delete : : 18cca480-0fb6-4eb0-a503-459660d4d5f0 : rabbit : 24360:29000
[yig]2017/12/18 11:05:04 failed delete : : 18cca480-0fb6-4eb0-a503-459660d4d5f0 : rabbit : 24360:29000
[yig]2017/12/18 11:05:04 failed delete : : 18cca480-0fb6-4eb0-a503-459660d4d5f0 : rabbit : 24360:29000

rados/conn.go:5:29: fatal error: rados/librados.h: No such file or directory

When I run go get, it reports that librados.h cannot be found. How should I handle this?
[root@yig-185 yig]# go get ./...

github.com/journeymidnight/yig/vendor/github.com/journeymidnight/radoshttpd/rados

vendor/github.com/journeymidnight/radoshttpd/rados/conn.go:5:29: fatal error: rados/librados.h: No such file or directory
// #include <rados/librados.h>
^
compilation terminated.

No debris removed after deleting the bucket

Describe the bug
When I delete a bucket under the user (uploading fragmentation is not cleared), I can find that my bucket has been deleted, but my fragment remains in the database field. At this time, I can't operate on the fragment. , causing my usage to still exist.

Expected behavior
Under normal circumstances, the delete bucket should be deleted along with the fragment generated by the upload, or even if the bucket is deleted, I can delete the corresponding upload fragment.

multipart is not handled in DeleteBucket

Describe the bug
Multipart is not handled in DeleteBucket.
The data will still reside in yig.
And I think the leftover parts will resurface when a user creates a new bucket with the same name.

To Reproduce
[root@localhost ~]# s3cmd listmp s3://zcc-test02/bigfile-multipart 18ef14d221c53c8ad2eba321a6ae579f17e1560ff0b8ba41
LastModified PartNumber ETag Size
2020-04-24T03:13:30.000Z 1 "abcf088715344654386548490bd96ee0" 5242880
...
[root@localhost ~]# s3cmd ls s3://zcc-test02
[root@localhost ~]# s3cmd rb s3://zcc-test02
Bucket 's3://zcc-test02/' removed

Expected behavior
yig should remove the multiparts automatically like AWS S3.

Append won't update object md5

Append won't update object md5 sum.

Steps to reproduce the behavior:

  1. Upload an object
  2. Check its md5 sum
  3. Append new data to the object
  4. Check the md5 sum again; it returns the old md5

Expected behavior
The MD5 sum should be updated
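For context, the expected behavior amounts to keeping a running hash across appends; a minimal standard-library sketch (not yig's actual code path):

package main

import (
	"crypto/md5"
	"fmt"
)

func main() {
	h := md5.New()
	h.Write([]byte("original object data")) // initial upload
	fmt.Printf("after upload: %x\n", h.Sum(nil))

	h.Write([]byte(" plus appended data")) // an append must feed the same hash state
	fmt.Printf("after append: %x\n", h.Sum(nil))
	// Equivalent to md5 of the concatenated bytes, which is what the
	// reporter expects the object's ETag to become.
}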

Fail to remove files during the test

During testing, two files could not be deleted using s3cmd.

[environment]
Testing environment

[procedure]
1. Run the ycsb script to put files into the cluster
2. Try to delete the files in batch

[expect]
All files are deleted

[result]
Several files are left after each batch delete, and one file still cannot be removed so far.

HEAD method returns an error

Accessing bucket/object with the HEAD method returns 405:

* About to connect() to 192.168.17.52 port 80 (#0)
*   Trying 192.168.17.52...
* Connected to 192.168.17.52 (192.168.17.52) port 80 (#0)
> HEAD /bucket1 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 192.168.17.52
> Accept: */*
> Date: Mon, 08 Jul 2019 07:08:50 +0000
> Authorization: AWS 0nEkZOgSVezjpLYXYIC0:NlTtF52e2tT+3cARuDZRBhdqywo=
>
< HTTP/1.1 405 Method Not Allowed
< Accept-Ranges: bytes
< Vary: Origin
< Date: Mon, 08 Jul 2019 07:08:50 GMT

But the access log shows 200; no error is recorded.

Database schema review

@cxt90730 @dahefanteng @thesues

I think the DB schema needs some review, for the following reasons:

  • some columns were designed for HBase and do not fit TiDB/MySQL very well
  • we need to support versioning (business pressure)
  • some tables do not have an explicit primary key (WTF!?)
  • intensive use of null values (ref)
  • intensive use of the JSON type (ref)
  • varchar used for enumeration types
  • mysterious names, e.g. multipartpart, multiparts, objectpart

Feature-complete seaweedfs backend

Features that should be finished before merging to master branch.

  • split large files into lumps
  • append-object upload
  • cluster & pool picking logic
  • test(and maybe fix) seaweedfs rebalance

Take care:

  • open dedicated issues for features above if necessary
  • merge to branch seaweedfs-cannyls first

Relies on #143 (done); part of #70.

yig configuration questions

This is my yig configuration; I have some questions about the options.
Does iam need to be installed separately?
How should S3Domain be set?
I want to test with s3cmd now, but I do not know how to configure it.
{
"S3Domain": "http://10.64.34.200:7480",
"Region": "cn-bj-1",
"IamEndpoint": "http://10.64.34.200:9006",
"IamKey": "key",
"IamSecret": "secret",
"LogPath": "/var/log/yig/yig.log",
"PanicLogPath":"/var/log/yig/panic.log",
"PidFile": "/var/run/yig/yig.pid",
"BindApiAddress": "10.64.34.200:80",
"BindAdminAddress": "10.64.34.200:9000",
"SSLKeyPath": "",
"SSLCertPath": "",
"ZookeeperAddress": "hbase:2181",
"RedisAddress": "redis:6379",
"RedisConnectionNumber": 10,
"InMemoryCacheMaxEntryCount": 100000,
"DebugMode": true,
"AdminKey": "secret",
"MetaCacheType": 0,
"EnableDataCache": false,
"CephConfigPattern": "/etc/ceph/ceph1.conf",
"MetaStore": "tidb",
"TidbInfo":"root:123456@tcp(127.0.0.1:3306)/yig?charset=utf8",
"KeepAlive":true
}

Storage backend as plugin

Goal:

  • to support seaweedfs as a backend
  • our users don't need to install a C lib when building yig

Take care:

  • config, which is also under refactoring
  • keep those great features, like adaptive upload buffer size
  • make the interface as simple as possible
  • merge to branch seaweedfs-cannyls first

part of #70
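A hedged sketch of what "as simple as possible" could look like; the names below are hypothetical, not the actual yig interface:

// Hypothetical minimal backend interface; names are illustrative only.
package backend

import "io"

type ObjectID string

type Backend interface {
	Put(r io.Reader, size int64) (ObjectID, error)               // store a blob, return its id
	Get(id ObjectID, start, length int64) (io.ReadCloser, error) // ranged read
	Delete(id ObjectID) error
}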

Object dispatching rule.

Add a dynamic rule file, such as a .yml, to dynamically dispatch objects to eligible clusters.
In addition, the cluster should be able to reload these files when new rules are added.
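A sketch of what such a rule file's loader might look like in Go; the schema (prefix → cluster) and the file name are assumptions:

package main

import (
	"log"
	"os"

	"gopkg.in/yaml.v2"
)

// Rule maps object attributes to an eligible cluster (hypothetical schema).
type Rule struct {
	Prefix  string `yaml:"prefix"`
	Cluster string `yaml:"cluster"`
}

func loadRules(path string) ([]Rule, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var rules []Rule
	if err := yaml.Unmarshal(data, &rules); err != nil {
		return nil, err
	}
	return rules, nil
}

func main() {
	// Re-running loadRules on fsnotify/SIGHUP would give the hot reload the issue asks for.
	rules, err := loadRules("dispatch.yml")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded %d dispatch rules", len(rules))
}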

Can't copy file from bucket to bucket by Multipart

Describe the bug
I uploaded a file to a bucket by multipart, but I cannot copy it to another bucket or directory. The ETag differs from the calculated MD5; I guess the ETags are chaotic.

To Reproduce
Steps to reproduce the behavior:

  1. upload a big file to a remote bucket with rclone
  2. copy the file to another bucket or directory

Expected behavior
Copy should succeed

User reported error when getting versioning

Describe the bug
A Windows 7 user reported an error when double-clicking a bucket in Cyberduck. (Versioning is not enabled by default.)
After restarting Cyberduck, the error disappeared.
The error seems to be a GET versioning (response) issue.

To Reproduce
Tried on Mac and Win10, but not reproduced.
Not verified on Win7.

Expected behavior
The expected versioning response on S3 is as follows:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> </VersioningConfiguration>

While yig responded as follows, with no xmlns URL as in S3.
It should be the same as S3.

[screenshot: yig's versioning response]

website issues

  1. All 4xx errors should return the error document.
    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ErrorDocument.html
    Currently yig returns the error document only for 404, and that 404 is hard-coded.
    It is expected to handle and return the proper 4xx error codes.
    To Reproduce
    Check ReturnWebsiteErrorDocument().

  2. There is no support for the header x-amz-website-redirect-location.

  3. In DoRedirect(), yig returns 302 StatusFound, while 301 StatusMovedPermanently is expected (see the sketch below).
    https://amazonaws-china.com/cn/premiumsupport/knowledge-center/redirect-domain-route-53/
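Point 3 is a one-constant change in net/http terms; a sketch (yig's actual DoRedirect differs):

package website

import "net/http"

// doRedirect shows what point 3 asks for: 301 instead of 302.
func doRedirect(w http.ResponseWriter, r *http.Request, target string) {
	http.Redirect(w, r, target, http.StatusMovedPermanently)
}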
