vipshop / redis-migrate-tool

A convenient and useful tool for migrating data between redis groups.

License: Apache License 2.0

Languages: C 98.89%, Makefile 0.38%, M4 0.74%
Topics: redis-cluster, rdb, twemproxy, redis-server, aof, redis, migrate

redis-migrate-tool's Introduction

redis-migrate-tool

redis-migrate-tool is a convenient and useful tool for migrating data between redis groups.

Features

  • Fast.
  • Multi-threaded.
  • Based on redis replication.
  • Live migration: while data is being migrated, the source redis can keep serving users.
  • Heterogeneous migration.
  • Twemproxy and redis cluster support. When the target is twemproxy, keys are imported directly into the redis instances behind the twemproxy.
  • Migration status view.
  • Data verification mechanism.

Dependence

Please install automake, libtool, autoconf and bzip2 first.
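
For example, on a Debian/Ubuntu system the build dependencies can usually be installed as follows (package names may differ on other distributions):

$ sudo apt-get install automake libtool autoconf bzip2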

Build

To build redis-migrate-tool:

$ cd redis-migrate-tool
$ autoreconf -fvi
$ ./configure
$ make
$ src/redis-migrate-tool -h

RUN

src/redis-migrate-tool -c rmt.conf -o log -d
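
Here -c names the configuration file, -o writes the log to the given file, and -d runs the tool as a daemon; run src/redis-migrate-tool -h to confirm the exact meaning of each option for your build.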

WARNING

Before running this tool, make sure your source redis machines have enough memory to allow at least one redis instance to generate an rdb file.

If your source machines have enough memory to allow all the redis instances to generate rdb files at the same time, you can set 'source_safe: false' in rmt.conf.

Unsupported redis commands

The following commands are not propagated to the target redis group, because the keys in those commands may cross different target redis nodes:

RENAME,RENAMENX,RPOPLPUSH,BRPOPLPUSH,FLUSHALL,FLUSHDB,BITOP,MOVE,GEORADIUS,GEORADIUSBYMEMBER,EVAL,EVALSHA,SCRIPT,PFMERGE
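
According to the tool's own error messages (quoted in the issues further down), running 'redis-migrate-tool -I' prints the full list of unsupported write commands.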

Configuration

The config file has three parts: source, target and common.

source OR target:

  • type: The redis group type. Possible values are:
  • single
  • twemproxy
  • redis cluster
  • rdb file
  • aof file
  • servers: The list of redis addresses in the group. If the type is twemproxy, this is the same as in the twemproxy config file. If the type is rdb file or aof file, these are the file names.
  • redis_auth: Authenticate to the Redis server on connect.
  • timeout: Timeout in milliseconds for reads and writes with the redis server. Currently only used for the source redis group. Defaults to 120000.
  • hash: The name of the hash function. Only used when type is twemproxy. Possible values are:
  • one_at_a_time
  • md5
  • crc16
  • crc32 (crc32 implementation compatible with libmemcached)
  • crc32a (correct crc32 implementation as per the spec)
  • fnv1_64
  • fnv1a_64
  • fnv1_32
  • fnv1a_32
  • hsieh
  • murmur
  • jenkins
  • hash_tag: A two-character string that specifies the part of the key used for hashing, e.g. "{}" or "$$". Hash tags enable mapping different keys to the same server as long as the part of the key within the tag is the same. Only used when type is twemproxy (see the example after this list).
  • distribution: The key distribution mode. Only used when type is twemproxy. Possible values are:
  • ketama
  • modula
  • random
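
As a hedged illustration of hash tags (the key names here are made up): with hash_tag: "{}", the keys {user1000}.followers and {user1000}.friends are hashed only on the part between the braces (user1000), so twemproxy maps them to the same backend server; without a hash tag the full key names are hashed and the two keys may land on different servers.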

common:

  • listen: The listening address and port (name:port or ip:port). Defaults to 127.0.0.1:8888.
  • max_clients: The maximum number of clients allowed on the listen port. Defaults to 100.
  • threads: The maximum number of threads redis-migrate-tool can use. Defaults to the number of cpu cores.
  • step: The step used for parsing requests. The higher the number, the faster the migration, but the more memory used. Defaults to 1.
  • mbuf_size: Mbuf size for requests. Defaults to 512.
  • noreply: A boolean value that decides whether to check the target group's replies. Defaults to false.
  • source_safe: A boolean value that protects the memory of the source group machines. If true, the tool guarantees that only one redis instance per machine in the source group generates an rdb file at a time. In addition, 'source_safe: true' may use fewer threads than you set. Defaults to true.
  • dir: Work directory, used to store files (such as rdb files). Defaults to the current directory.
  • filter: Only keys matching the pattern are migrated; keys that do not match are filtered out. The pattern is glob-style. Defaults to NULL (no filtering).

The filter option supports the following glob-style patterns:

  • h?llo matches hello, hallo and hxllo

  • h*llo matches hllo and heeeello

  • h[ae]llo matches hello and hallo, but not hillo

  • h[^e]llo matches hallo, hbllo, ... but not hello

  • h[a-b]llo matches hallo and hbllo

Use \ to escape special characters if you want to match them verbatim.
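
Setting filter: abc*, for instance, migrates only keys whose names start with abc (see the cluster-to-cluster example below).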

For example, the configuration file shown below migrates data from single redis instances to twemproxy:

[source]
type: single
servers:
 - 127.0.0.1:6379
 - 127.0.0.1:6380
 - 127.0.0.1:6381
 - 127.0.0.1:6382

[target]
type: twemproxy
hash: fnv1a_64
hash_tag: "{}"
distribution: ketama
servers:
 - 127.0.0.1:6380:1 server1
 - 127.0.0.1:6381:1 server2
 - 127.0.0.1:6382:1 server3
 - 127.0.0.1:6383:1 server4

[common]
listen: 0.0.0.0:8888
threads: 2
step: 1
mbuf_size: 1024
source_safe: true

Migrate data from twemproxy to a redis cluster:

[source]
type: twemproxy
hash: fnv1a_64
hash_tag: "{}"
distribution: ketama
servers:
 - 127.0.0.1:6379
 - 127.0.0.1:6380
 - 127.0.0.1:6381
 - 127.0.0.1:6382

[target]
type: redis cluster
servers:
 - 127.0.0.1:7379

[common]
listen: 0.0.0.0:8888
step: 1
mbuf_size: 512

Migrate data from one redis cluster to another redis cluster with a key filter (keys whose prefix is "abc"):

[source]
type: redis cluster
servers:
 - 127.0.0.1:8379

[target]
type: redis cluster
servers:
 - 127.0.0.1:7379

[common]
listen: 0.0.0.0:8888
filter: abc*

Load data from rdb files into a redis cluster:

[source]
type: rdb file
servers:
 - /data/redis/dump1.rdb
 - /data/redis/dump2.rdb

[target]
type: redis cluster
servers:
 - 127.0.0.1:7379

[common]
listen: 0.0.0.0:8888
step: 2
mbuf_size: 512
source_safe: false

Just save rdb files from a redis cluster:

[source]
type: redis cluster
servers:
 - 127.0.0.1:7379

[target]
type: rdb file

[common]
listen: 0.0.0.0:8888
source_safe: true

Load data from aof files into a redis cluster:

[source]
type: aof file
servers:
 - /data/redis/appendonly1.aof
 - /data/redis/appendonly2.aof

[target]
type: redis cluster
servers:
 - 127.0.0.1:7379

[common]
listen: 0.0.0.0:8888
step: 2

STATUS

You can use redis-cli to connect to redis-migrate-tool. The listening address and port can be set in the common section of the config.

info command

For example, try the info command:

$redis-cli -h 127.0.0.1 -p 8888
127.0.0.1:8888> info
# Server
version:0.1.0
os:Linux 2.6.32-573.12.1.el6.x86_64 x86_64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:9199
tcp_port:8888
uptime_in_seconds:1662
uptime_in_days:0
config_file:/ect/rmt.conf

# Clients
connected_clients:1
max_clients_limit:100
total_connections_received:3

# Memory
mem_allocator:jemalloc-4.0.4

# Group
source_nodes_count:32
target_nodes_count:48

# Stats
all_rdb_received:1
all_rdb_parsed:1
all_aof_loaded:0
rdb_received_count:32
rdb_parsed_count:32
aof_loaded_count:0
total_msgs_recv:7753587
total_msgs_sent:7753587
total_net_input_bytes:234636318
total_net_output_bytes:255384129
total_net_input_bytes_human:223.77M
total_net_output_bytes_human:243.55M
total_mbufs_inqueue:0
total_msgs_outqueue:0
127.0.0.1:8888>

info command response fields:

Server:

  • version: The redis-migrate-tool version number.
  • os: The os uname.
  • multiplexing_api: Multiplexing API.
  • gcc_version: Gcc version.
  • process_id: The process id of the redis-migrate-tool.
  • tcp_port: The tcp port redis-migrate-tool is listening on.
  • uptime_in_seconds: Number of seconds redis-migrate-tool has been running.
  • uptime_in_days: Number of days redis-migrate-tool has been running.
  • config_file: The config file used by redis-migrate-tool.

Clients:

  • connected_clients: The number of currently connected clients.
  • max_clients_limit: The maximum number of clients that can be connected at the same time.
  • total_connections_received: The total number of connections received so far.

Group:

  • source_nodes_count: The number of nodes in the source redis group.
  • target_nodes_count: The number of nodes in the target redis group.

Stats:

  • all_rdb_received: Whether the rdb files from all nodes in the source group have been received.
  • all_rdb_parsed: Whether the rdb files from all nodes in the source group have been parsed.
  • all_aof_loaded: Whether the aof files from all nodes in the source group have been loaded.
  • rdb_received_count: The number of rdb files received from nodes in the source group.
  • rdb_parsed_count: The number of rdb files from the source group that have been parsed.
  • aof_loaded_count: The number of aof files from the source group that have been loaded.
  • total_msgs_recv: The total number of messages received from the source group.
  • total_msgs_sent: The total number of messages sent to the target group for which a response was received.
  • total_net_input_bytes: The total number of bytes received from the source group.
  • total_net_output_bytes: The total number of bytes sent to the target group.
  • total_net_input_bytes_human: Same as total_net_input_bytes, but in human-readable form.
  • total_net_output_bytes_human: Same as total_net_output_bytes, but in human-readable form.
  • total_mbufs_inqueue: Command data from the source group (not including rdb data) currently cached in mbufs.
  • total_msgs_outqueue: Messages waiting to be sent to the target group, plus messages already sent that are waiting for a response.

OTHER COMMANDS

shutdown [seconds|asap]

The command behavior is the following:

  • Stop the replication from the source redis.
  • Try to send the cached data in redis-migrate-tool to the target redis.
  • redis-migrate-tool stops and exits.

Parameters:

  • seconds: The maximum number of seconds redis-migrate-tool may spend sending cached data to the target redis before exiting. Defaults to 10 seconds.
  • asap: Discard the cached data and exit right away.

For example, try the shutdown command:

$redis-cli -h 127.0.0.1 -p 8888
127.0.0.1:8888> shutdown
OK
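
To cap the flush time or skip it entirely, pass one of the parameters documented above (the 30 seconds below is just an illustrative value; replies are omitted):

127.0.0.1:8888> shutdown 30
127.0.0.1:8888> shutdown asap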

CHECK THE DATA

After migrating the data, you can use the redis_check command to check that the data in the source group and the target group is consistent.

Try the redis_check command:

$src/redis-migrate-tool -c rmt.conf -o log -C redis_check
Check job is running...

Checked keys: 1000
Inconsistent value keys: 0
Inconsistent expire keys : 0
Other check error keys: 0
Checked OK keys: 1000

All keys checked OK!
Check job finished, used 1.041s

If you want to check more keys, try the following:

$src/redis-migrate-tool -c rmt.conf -o log -C "redis_check 200000"
Check job is running...

Checked keys: 200000
Inconsistent value keys: 0
Inconsistent expire keys : 0
Other check error keys: 0
Checked OK keys: 200000

All keys checked OK!
Check job finished, used 11.962s

INSERT SOME KEYS JUST FOR TEST

Try the redis_testinsert command:

$src/redis-migrate-tool -c rmt.conf -o log -C "redis_testinsert"
Test insert job is running...

Insert string keys: 200
Insert list keys  : 200
Insert set keys   : 200
Insert zset keys  : 200
Insert hash keys  : 200
Insert total keys : 1000

Correct inserted keys: 1000
Test insert job finished, used 0.525s

If you want to insert more keys, try the following:

$src/redis-migrate-tool -c rmt.conf -o log -C "redis_testinsert 30000"
Test insert job is running...

Insert string keys: 6000
Insert list keys  : 6000
Insert set keys   : 6000
Insert zset keys  : 6000
Insert hash keys  : 6000
Insert total keys : 30000

Correct inserted keys: 30000
Test insert job finished, used 15.486s

If you want to insert only string-type keys, try the following:

$src/redis-migrate-tool -c rmt.conf -o log -C "redis_testinsert string"
Test insert job is running...

Insert string keys: 1000
Insert list keys  : 0
Insert set keys   : 0
Insert zset keys  : 0
Insert hash keys  : 0
Insert total keys : 1000

Correct inserted keys: 1000
Test insert job finished, used 0.024s

If you want to insert only certain key types, try the following:

$src/redis-migrate-tool -c rmt.conf -o log -C "redis_testinsert string|set|list 10000"
Test insert job is running...

Insert string keys: 3336
Insert list keys  : 3336
Insert set keys   : 3328
Insert zset keys  : 0
Insert hash keys  : 0
Insert total keys : 10000

Correct inserted keys: 10000
Test insert job finished, used 5.539s

License

Copyright © 2016 VIPSHOP Inc.

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

redis-migrate-tool's People

Contributors

andyli029, deep011


redis-migrate-tool's Issues

Question about authentication

[source]
type: single

servers:
 - 10.173.32.111:6379
redis_auth:xxxx


[target]
type: single
servers:
 - 10.173.32.114:6379
redis_auth:xxx


[common]
listen: 0.0.0.0:8888

log

[2018-04-09 09:40:36.576] rmt_core.c:2444 Total threads count in fact: 2
[2018-04-09 09:40:36.576] rmt_core.c:2445 Read threads count in fact: 1
[2018-04-09 09:40:36.576] rmt_core.c:2446 Write threads count in fact: 1
[2018-04-09 09:40:36.576] rmt_core.c:2455 read thread(0):
[2018-04-09 09:40:36.576] rmt_core.c:2461 10.173.32.111:6379
[2018-04-09 09:40:36.576] rmt_core.c:2488 write thread(0):
[2018-04-09 09:40:36.576] rmt_core.c:2494 10.173.32.111:6379
[2018-04-09 09:40:36.577] rmt_core.c:2551 migrate job is running...
[2018-04-09 09:40:36.577] rmt_core.c:420 ERROR: password to 10.173.32.114:6379 is wrong
[2018-04-09 09:40:36.582] rmt_redis.c:1706 Start connecting to MASTER[10.173.32.111:6379].
[2018-04-09 09:40:36.585] rmt_redis.c:1740 Master[10.173.32.111:6379] replied to PING, replication can continue...
[2018-04-09 09:40:36.592] rmt_redis.c:1051 Partial resynchronization for MASTER[10.173.32.111:6379] not possible (no cached master).
[2018-04-09 09:40:36.595] rmt_redis.c:1110 Full resync from MASTER[10.173.32.111:6379]: d48a2a3890d782e5f4a34f9d0d463fded99770dc:1175
[2018-04-09 09:40:36.616] rmt_redis.c:1517 MASTER <-> SLAVE sync: receiving 29 bytes from master[10.173.32.111:6379]
[2018-04-09 09:40:36.616] rmt_redis.c:1623 MASTER <-> SLAVE sync: RDB data for node[10.173.32.111:6379] is received, used: 0 s
[2018-04-09 09:40:36.616] rmt_redis.c:1643 rdb file node10.173.32.111:6379-1523238036596262-43059.rdb write complete

How should authentication be configured here? It keeps reporting that the password is wrong.

Can this tool only migrate database 0?

I insert some values into source db 6, but they are migrated to target db 0.
Platform: Debian 8
Source Redis Server Version: 2.8.19
Target Redis Server Version: 2.8.9

OOM command not allowed when used memory > maxmemory

Hi,

I keep getting "-OOM command not allowed when used memory > maxmemory" errors while trying to migrate data from an RDB file to a Redis cluster. The RDB file size is 6GB and some keys could reach over 40MB in size. I do have maxmemory set on the target cluster (500GB).

More from logs:
rmt_redis.c:4455 Response from node[10.120.1.1:6379] for REQ_REDIS_SADD is error.
rmt_redis.c:4455 Response from node[10.120.1.2:6379] for REQ_REDIS_HMSET is error.
rmt_redis.c:4456 msg dump id 0 request 0 len 58 type 128 (err 0) kind 0 result 0 mbuf_count 1 keys_count 0
...
...

Where 10.120.1.X are the IPs of some of the Redis cluster nodes (target).

Thanks for the help.

Sadek

cannot dump rdb from cluster

my source redis cluster version is 3.0.7
client-output-buffer-limit is: slave 4G 2G 60

rmt.conf:
[source]
type: redis cluster
servers :
-99.12.117.155:8888
-99.12.117.155:6379
-99.12.117.151:8888

[target]
type: rdb file

[common]
listen: 0.0.0.0:9999

When I run src/redis-migrate-tool -c rmt.conf -o log, here is the log:
[2017-02-15 06:58:44.558] rmt_core.c:525 Nodes count of source group : 3
[2017-02-15 06:58:44.558] rmt_core.c:526 Total threads count : 4
[2017-02-15 06:58:44.558] rmt_core.c:527 Read threads count assigned: 1
[2017-02-15 06:58:44.558] rmt_core.c:528 Write threads count assigned: 3
[2017-02-15 06:58:44.558] rmt_core.c:836 instances_by_host:
[2017-02-15 06:58:44.558] rmt_core.c:840 99.12.117.155:8888
[2017-02-15 06:58:44.558] rmt_core.c:840 99.12.117.155:6379
[2017-02-15 06:58:44.558] rmt_core.c:842
[2017-02-15 06:58:44.558] rmt_core.c:840 99.12.117.151:8888
[2017-02-15 06:58:44.558] rmt_core.c:842
[2017-02-15 06:58:44.558] rmt_core.c:2417 Total threads count in fact: 3
[2017-02-15 06:58:44.558] rmt_core.c:2418 Read threads count in fact: 1
[2017-02-15 06:58:44.558] rmt_core.c:2419 Write threads count in fact: 2
[2017-02-15 06:58:44.558] rmt_core.c:2428 read thread(0):
[2017-02-15 06:58:44.558] rmt_core.c:2434 99.12.117.155:8888
[2017-02-15 06:58:44.558] rmt_core.c:2434 99.12.117.155:6379
[2017-02-15 06:58:44.558] rmt_core.c:2434 99.12.117.151:8888
[2017-02-15 06:58:44.558] rmt_core.c:2461 write thread(0):
[2017-02-15 06:58:44.558] rmt_core.c:2467 99.12.117.155:8888
[2017-02-15 06:58:44.558] rmt_core.c:2467 99.12.117.155:6379
[2017-02-15 06:58:44.558] rmt_core.c:2461 write thread(1):
[2017-02-15 06:58:44.558] rmt_core.c:2467 99.12.117.151:8888
[2017-02-15 06:58:44.558] rmt_core.c:2524 migrate job is running...
[2017-02-15 06:58:44.560] rmt_redis.c:1706 Start connecting to MASTER[99.12.117.155:8888].
[2017-02-15 06:58:44.560] rmt_redis.c:1740 Master[99.12.117.155:8888] replied to PING, replication can continue...
[2017-02-15 06:58:44.560] rmt_redis.c:1706 Start connecting to MASTER[99.12.117.151:8888].
[2017-02-15 06:58:44.560] rmt_redis.c:1740 Master[99.12.117.151:8888] replied to PING, replication can continue...
[2017-02-15 06:58:44.560] rmt_redis.c:1051 Partial resynchronization for MASTER[99.12.117.155:8888] not possible (no cached master).
[2017-02-15 06:58:44.560] rmt_redis.c:1051 Partial resynchronization for MASTER[99.12.117.151:8888] not possible (no cached master).
[2017-02-15 06:58:50.208] rmt_redis.c:1110 Full resync from MASTER[99.12.117.151:8888]: ddd01a2b6c6d62060d1bb7a61d0ac0f8908d7a7e:1734727795
[2017-02-15 06:58:50.239] rmt_redis.c:1512 MASTER <-> SLAVE sync: receiving streamed RDB from master[99.12.117.151:8888]
[2017-02-15 06:58:50.239] rmt_redis.c:1548 Error: I/O error trying to sync with MASTER[99.12.117.151:8888]: connection lost
[2017-02-15 06:58:50.394] rmt_redis.c:1110 Full resync from MASTER[99.12.117.155:8888]: 8044650035615097308f4183b5125b7a49fa92f2:1743474157
[2017-02-15 06:58:50.437] rmt_redis.c:1512 MASTER <-> SLAVE sync: receiving streamed RDB from master[99.12.117.155:8888]
[2017-02-15 06:58:50.437] rmt_redis.c:1548 Error: I/O error trying to sync with MASTER[99.12.117.155:8888]: connection lost
[2017-02-15 06:58:50.569] rmt_redis.c:2037 srnode->ctx->target_type: 4, srnode->rdb->received: 0
[2017-02-15 06:58:50.569] rmt_redis.c:2044 Reconnect to node[99.12.117.155:8888] for replication
[2017-02-15 06:58:50.569] rmt_redis.c:2037 srnode->ctx->target_type: 4, srnode->rdb->received: 0
[2017-02-15 06:58:50.569] rmt_redis.c:2044 Reconnect to node[99.12.117.151:8888] for replication
[2017-02-15 06:58:50.569] rmt_redis.c:1706 Start connecting to MASTER[99.12.117.155:8888].
[2017-02-15 06:58:50.569] rmt_redis.c:1706 Start connecting to MASTER[99.12.117.151:8888].
[2017-02-15 06:58:50.569] rmt_redis.c:1740 Master[99.12.117.151:8888] replied to PING, replication can continue...
[2017-02-15 06:58:50.569] rmt_redis.c:1740 Master[99.12.117.155:8888] replied to PING, replication can continue...
[2017-02-15 06:58:50.570] rmt_redis.c:1051 Partial resynchronization for MASTER[99.12.117.151:8888] not possible (no cached master).
[2017-02-15 06:58:50.570] rmt_redis.c:1051 Partial resynchronization for MASTER[99.12.117.155:8888] not possible (no cached master).
[2017-02-15 06:58:56.250] rmt_redis.c:1110 Full resync from MASTER[99.12.117.151:8888]: ddd01a2b6c6d62060d1bb7a61d0ac0f8908d7a7e:1734727809
[2017-02-15 06:58:56.278] rmt_redis.c:1512 MASTER <-> SLAVE sync: receiving streamed RDB from master[99.12.117.151:8888]
[2017-02-15 06:58:56.278] rmt_redis.c:1548 Error: I/O error trying to sync with MASTER[99.12.117.151:8888]: connection lost
[2017-02-15 06:58:56.450] rmt_redis.c:1110 Full resync from MASTER[99.12.117.155:8888]: 8044650035615097308f4183b5125b7a49fa92f2:1743474157
[2017-02-15 06:58:56.497] rmt_redis.c:1512 MASTER <-> SLAVE sync: receiving streamed RDB from master[99.12.117.155:8888]
[2017-02-15 06:58:56.497] rmt_redis.c:1548 Error: I/O error trying to sync with MASTER[99.12.117.155:8888]: connection lost
[2017-02-15 06:58:56.582] rmt_redis.c:2037 srnode->ctx->target_type: 4, srnode->rdb->received: 0
[2017-02-15 06:58:56.582] rmt_redis.c:2044 Reconnect to node[99.12.117.155:8888] for replication
[2017-02-15 06:58:56.582] rmt_redis.c:2037 srnode->ctx->target_type: 4, srnode->rdb->received: 0
[2017-02-15 06:58:56.582] rmt_redis.c:2044 Reconnect to node[99.12.117.151:8888] for replication
[2017-02-15 06:58:56.582] rmt_redis.c:1706 Start connecting to MASTER[99.12.117.155:8888].
[2017-02-15 06:58:56.582] rmt_redis.c:1706 Start connecting to MASTER[99.12.117.151:8888].
[2017-02-15 06:58:56.582] rmt_redis.c:1740 Master[99.12.117.151:8888] replied to PING, replication can continue...
[2017-02-15 06:58:56.582] rmt_redis.c:1740 Master[99.12.117.155:8888] replied to PING, replication can continue...
[2017-02-15 06:58:56.583] rmt_redis.c:1051 Partial resynchronization for MASTER[99.12.117.151:8888] not possible (no cached master).
[2017-02-15 06:58:56.583] rmt_redis.c:1051 Partial resynchronization for MASTER[99.12.117.155:8888] not possible (no cached master).
[2017-02-15 06:58:56.594] rmt_redis.c:1110 Full resync from MASTER[99.12.117.155:8888]: 8044650035615097308f4183b5125b7a49fa92f2:1743474157
[2017-02-15 06:58:56.635] rmt_redis.c:1512 MASTER <-> SLAVE sync: receiving streamed RDB from master[99.12.117.155:8888]
[2017-02-15 06:58:56.635] rmt_redis.c:1548 Error: I/O error trying to sync with MASTER[99.12.117.155:8888]: connection lost
[2017-02-15 06:58:57.584] rmt_redis.c:2037 srnode->ctx->target_type: 4, srnode->rdb->received: 0
[2017-02-15 06:58:57.584] rmt_redis.c:2044 Reconnect to node[99.12.117.155:8888] for replication
[2017-02-15 06:58:57.584] rmt_redis.c:1706 Start connecting to MASTER[99.12.117.155:8888].
[2017-02-15 06:58:57.585] rmt_redis.c:1740 Master[99.12.117.155:8888] replied to PING, replication can continue...
[2017-02-15 06:58:57.585] rmt_redis.c:1051 Partial resynchronization for MASTER[99.12.117.155:8888] not possible (no cached master).
[2017-02-15 06:59:02.289] rmt_redis.c:1110 Full resync from MASTER[99.12.117.151:8888]: ddd01a2b6c6d62060d1bb7a61d0ac0f8908d7a7e:1734727809
[2017-02-15 06:59:02.318] rmt_redis.c:1512 MASTER <-> SLAVE sync: receiving streamed RDB from master[99.12.117.151:8888]
[2017-02-15 06:59:02.318] rmt_redis.c:1548 Error: I/O error trying to sync with MASTER[99.12.117.151:8888]: connection lost
[2017-02-15 06:59:02.592] rmt_redis.c:2037 srnode->ctx->target_type: 4, srnode->rdb->received: 0
[2017-02-15 06:59:02.592] rmt_redis.c:2044 Reconnect to node[99.12.117.151:8888] for replication
[2017-02-15 06:59:02.593] rmt_redis.c:1706 Start connecting to MASTER[99.12.117.151:8888].
[2017-02-15 06:59:02.593] rmt_redis.c:1740 Master[99.12.117.151:8888] replied to PING, replication can continue...
[2017-02-15 06:59:02.593] rmt_redis.c:1051 Partial resynchronization for MASTER[99.12.117.151:8888] not possible (no cached master).
[2017-02-15 06:59:03.546] rmt_redis.c:1110 Full resync from MASTER[99.12.117.155:8888]: 8044650035615097308f4183b5125b7a49fa92f2:1743474171
[2017-02-15 06:59:03.594] rmt_redis.c:1512 MASTER <-> SLAVE sync: receiving streamed RDB from master[99.12.117.155:8888]
[2017-02-15 06:59:03.594] rmt_redis.c:1548 Error: I/O error trying to sync with MASTER[99.12.117.155:8888]: connection lost
[2017-02-15 06:59:04.596] rmt_redis.c:2037 srnode->ctx->target_type: 4, srnode->rdb->received: 0
[2017-02-15 06:59:04.596] rmt_redis.c:2044 Reconnect to node[99.12.117.155:8888] for replication
[2017-02-15 06:59:04.596] rmt_redis.c:1706 Start connecting to MASTER[99.12.117.155:8888].
[2017-02-15 06:59:04.597] rmt_redis.c:1740 Master[99.12.117.155:8888] replied to PING, replication can continue...
[2017-02-15 06:59:04.597] rmt_redis.c:1051 Partial resynchronization for MASTER[99.12.117.155:8888] not possible (no cached master).

In the source redis cluster, the log is:
13817:M 15 Feb 06:58:44.561 * Slave 99.12.117.159:43930 asks for synchronization
13817:M 15 Feb 06:58:44.561 * Full resync requested by slave 99.12.117.159:43930
13817:M 15 Feb 06:58:44.561 * Delay next BGSAVE for SYNC
13817:M 15 Feb 06:58:50.208 * Starting BGSAVE for SYNC with target: slaves sockets
13817:M 15 Feb 06:58:50.236 * Background RDB transfer started by pid 10662
13817:M 15 Feb 06:58:50.239 # Connection with slave client id #59608 lost.
13817:M 15 Feb 06:58:50.337 # Background transfer error
13817:M 15 Feb 06:58:50.570 * Slave 99.12.117.159:43932 asks for synchronization
13817:M 15 Feb 06:58:50.570 * Full resync requested by slave 99.12.117.159:43932
13817:M 15 Feb 06:58:50.570 * Delay next BGSAVE for SYNC
13817:M 15 Feb 06:58:56.249 * Starting BGSAVE for SYNC with target: slaves sockets
13817:M 15 Feb 06:58:56.275 * Background RDB transfer started by pid 10663
13817:M 15 Feb 06:58:56.278 # Connection with slave client id #59609 lost.
13817:M 15 Feb 06:58:56.376 # Background transfer error
13817:M 15 Feb 06:58:56.583 * Slave 99.12.117.159:43935 asks for synchronization
13817:M 15 Feb 06:58:56.583 * Full resync requested by slave 99.12.117.159:43935
13817:M 15 Feb 06:58:56.583 * Delay next BGSAVE for SYNC
13817:M 15 Feb 06:59:02.289 * Starting BGSAVE for SYNC with target: slaves sockets
13817:M 15 Feb 06:59:02.315 * Background RDB transfer started by pid 10672
13817:M 15 Feb 06:59:02.318 # Connection with slave client id #59610 lost.
13817:M 15 Feb 06:59:02.515 # Background transfer error
13817:M 15 Feb 06:59:02.593 * Slave 99.12.117.159:43938 asks for synchronization
13817:M 15 Feb 06:59:02.594 * Full resync requested by slave 99.12.117.159:43938
13817:M 15 Feb 06:59:02.594 * Delay next BGSAVE for SYNC
13817:M 15 Feb 06:59:08.328 * Starting BGSAVE for SYNC with target: slaves sockets
13817:M 15 Feb 06:59:08.357 * Background RDB transfer started by pid 10673
13817:M 15 Feb 06:59:08.429 # Connection with slave client id #59611 lost.
13817:M 15 Feb 06:59:08.559 # Background transfer error
13817:M 15 Feb 06:59:08.604 * Slave 99.12.117.159:43940 asks for synchronization
13817:M 15 Feb 06:59:08.605 * Full resync requested by slave 99.12.117.159:43940
13817:M 15 Feb 06:59:08.605 * Delay next BGSAVE for SYNC
13817:M 15 Feb 06:59:14.373 * Starting BGSAVE for SYNC with target: slaves sockets
13817:M 15 Feb 06:59:14.402 * Background RDB transfer started by pid 10682
13817:M 15 Feb 06:59:14.475 # Connection with slave client id #59612 lost.
13817:M 15 Feb 06:59:14.604 # Background transfer error
13817:M 15 Feb 06:59:14.613 * Slave 99.12.117.159:43942 asks for synchronization
13817:M 15 Feb 06:59:14.613 * Full resync requested by slave 99.12.117.159:43942
13817:M 15 Feb 06:59:14.613 * Delay next BGSAVE for SYNC
13817:M 15 Feb 06:59:20.427 * Starting BGSAVE for SYNC with target: slaves sockets

This tool merges db0~dbn data from the source node into the target node

source node: A target node: B

I used redis-migrate-tool to sync data from node A to node B.
The redis-cli keyspace output is:

node A
#Keyspace
db0:keys=1133032,expires=0,avg_ttl=0
db15:keys=1,expires=0,avg_ttl=0

node B
#Keyspace
db0:keys=1133033,expires=0,avg_ttl=0

Maybe the tool forgets to use the SELECT command when syncing data between the two nodes?

autoreconf -fvi failed

configure.ac:8: error: Autoconf version 2.64 or higher is required

autoconf on CentOS 6.5 is 2.63

#rpm -qa |grep autoconf
autoconf-2.63-5.1.el6.noarch

Updating autoconf is easy, but if 2.63 also works, please lower the requirement; otherwise you presumably have a reason for it.

Support of resharding?

Tested migration between single and twemproxy, and single and redis cluster. However, twemproxy-to-twemproxy resharding failed due to a node failure, forcing me to split the twemproxy cluster into several single instances, and some data was inserted several times. (Logically there should not be any data loss, only an efficiency issue.)
Are there any particular configurations for this kind of scenario?

Is key filtering supported?

I found that the README added the filter function, but I could not find the corresponding code commit.

filter: Filter keys if they do not match the pattern. The pattern is Glob-style. Defaults is NULL.

Thanks

Advice on deploying the rmt service

Our current redis cluster has 80 instances (two replicas each, about 13G of memory used per instance), with roughly 400k write QPS and 600k read QPS; the writes are all INCR operations, and the median key length is 50 bytes.
For a deployment of this scale, do you have any recommendations for deploying the rmt service, such as machine hardware configuration?

Can't handle RDB format version 891387912

[2017-10-16 15:39:57.655] rmt_redis.c:6446 ERROR: Can't handle RDB format version 891387912
[2017-10-16 15:39:57.803] rmt_redis.c:6715 ERROR: Rdb file for node[10.3.1.2:6379] parsed failed

The source version is 4.0 and the target is 3.0, and this error is reported. Can migration still be done in this case?

Error reported while syncing between clusters

[2018-01-02 09:44:18.797] rmt_core.c:1660 ERROR: command 'EVAL' will not be propagated to the target redis group. If you want to see all the not supported writing commands, please run the 'redis-migrate-tool -I' command.

This error does not prevent the clusters from syncing, but the log keeps printing it about four times per second, and the log file has already grown to 4 GB.
I can execute EVAL on both of the redis instances being synced.

OS: Linux el7
redis version: 3.2.8
redis-migrate-tool version: 0.1.0

Can eval/evalsha be supported?

Usually the keys involved in eval/evalsha share the same hash tag, for example:

evalsha <sha1hex>  {usr1}ids  {usr1}tweets

As long as the hash tags of the old and new clusters are unchanged, keys that are on the same shard in the old cluster will also be on the same shard in the new cluster.

So could you consider supporting propagation of eval/evalsha as long as the hash tags of the old and new clusters are unchanged?

The following errors are reported when running make

rmt_net.c:187:27: error: use of undeclared identifier 'SOL_TCP'
                status = setsockopt(sd, SOL_TCP, TCP_KEEPIDLE, &keepidle, len);
                                        ^
rmt_net.c:187:36: error: use of undeclared identifier 'TCP_KEEPIDLE'
                status = setsockopt(sd, SOL_TCP, TCP_KEEPIDLE, &keepidle, len);
                                                 ^
rmt_net.c:198:27: error: use of undeclared identifier 'SOL_TCP'
                status = setsockopt(sd, SOL_TCP, TCP_KEEPINTVL, &keepinterval, len);
                                        ^
rmt_net.c:209:27: error: use of undeclared identifier 'SOL_TCP'
                status = setsockopt(sd, SOL_TCP, TCP_KEEPCNT, &keepcount, len);
rmt_net.c:493:21: warning: implicit declaration of function 'inet_ntoa' is invalid in C99 [-Wimplicit-function-declaration]
        strcpy(*ip, inet_ntoa(addr.sin_addr));
                    ^
rmt_net.c:493:21: warning: incompatible integer to pointer conversion passing 'int' to parameter of type 'const char *' [-Wint-conversion]
        strcpy(*ip, inet_ntoa(addr.sin_addr));
                    ^~~~~~~~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/secure/_string.h:83:33: note: expanded from macro 'strcpy'
  __builtin___strcpy_chk (dest, src, __darwin_obsz (dest))
                                ^~~
rmt_net.c:539:33: warning: implicit conversion loses integer precision: 'long' to 'int' [-Wshorten-64-to-32]
        if ((res = poll(wfd, 1, msec)) == -1) {
                   ~~~~         ^~~~
rmt_net.c:754:5: warning: no previous prototype for function 'rmt_tcp_context_connect_old' [-Wmissing-prototypes]
int rmt_tcp_context_connect_old(tcp_context *tc, const char *host, int port,
    ^
4 warnings and 4 errors generated.
make[3]: *** [rmt_net.o] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2
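
These errors come from building on macOS, where SOL_TCP and TCP_KEEPIDLE are Linux-specific (the inet_ntoa warning would also go away by including <arpa/inet.h>). A minimal portability sketch, not the project's actual fix and with a made-up function name, would fall back to IPPROTO_TCP and Darwin's TCP_KEEPALIVE:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Set the TCP keepalive idle time portably: Linux exposes TCP_KEEPIDLE,
   macOS exposes TCP_KEEPALIVE instead, both at the IPPROTO_TCP level. */
static int set_tcp_keepidle(int sd, int idle_secs)
{
#ifdef TCP_KEEPIDLE
    return setsockopt(sd, IPPROTO_TCP, TCP_KEEPIDLE, &idle_secs, sizeof(idle_secs));
#elif defined(TCP_KEEPALIVE)
    return setsockopt(sd, IPPROTO_TCP, TCP_KEEPALIVE, &idle_secs, sizeof(idle_secs));
#else
    (void)sd; (void)idle_secs;
    return 0; /* keepalive idle time not tunable on this platform */
#endif
}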

Why is such a huge amount of memory used when moving data from the sources to the target?

Here is the source's memory info:

used_memory:2001297968
used_memory_human:1.86G
used_memory_rss:2063167488
used_memory_rss_human:1.92G
used_memory_peak:2003647128
used_memory_peak_human:1.87G
total_system_memory:67426734080
total_system_memory_human:62.80G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:21474836480
maxmemory_human:20.00G
maxmemory_policy:volatile-lru

1.86 GB of data is stored per node (in the production environment, the data size may exceed 20 GB), and there are three master nodes in my cluster. When migrating data, rmt asked for up to 8 GB of memory while moving one node, right after it finished dumping the rdb from the master and started replaying commands to the target cluster. So why does it use such a huge amount of memory? Or do you have any recommendations for moving a huge redis cluster?

Thanks a lot.

here is my watchdog for memory:
#!/bin/bash
while true;do
var=$(ps -e -o 'pid,comm,rsz'|grep redis-migrate)
echo $var
sleep 1
echo "========================================="
done;

and here is part of the data collected while migrating:

4315 redis-migrate-t 2299276
4315 redis-migrate-t 2537820
4315 redis-migrate-t 2773684
4315 redis-migrate-t 2977248
4315 redis-migrate-t 3177380
4315 redis-migrate-t 3379796
4315 redis-migrate-t 3582552
4315 redis-migrate-t 3786856
4315 redis-migrate-t 3992220
4315 redis-migrate-t 4198120
4315 redis-migrate-t 4405084
4315 redis-migrate-t 4643704
4315 redis-migrate-t 4868276
4315 redis-migrate-t 5082396
4315 redis-migrate-t 5297076
4315 redis-migrate-t 5513200
4315 redis-migrate-t 5736984
4315 redis-migrate-t 6021188
4315 redis-migrate-t 6274092
4315 redis-migrate-t 6524820
4315 redis-migrate-t 6774952
4315 redis-migrate-t 7023940
4315 redis-migrate-t 7252052
4315 redis-migrate-t 7464704
4315 redis-migrate-t 7675420
4315 redis-migrate-t 7887032
4315 redis-migrate-t 8099404
4315 redis-migrate-t 8312348
4315 redis-migrate-t 8476656
4315 redis-migrate-t 8476656
4315 redis-migrate-t 8476656
4315 redis-migrate-t 8476656
4315 redis-migrate-t 8476656
4315 redis-migrate-t 8476656
4315 redis-migrate-t 8476656
4315 redis-migrate-t 8476656

redis_check records duplicate errors for the same key

./redis-migrate-tool -c ./rmt.conf -C "redis_check 2500000"

Check job is running...
[2017-11-06 14:54:01.527] rmt_check.c:848 ERROR: key checked failed: check key's value error, value is inconsistent. key(len:31, type:string): jx3/weibo/token/weibo/liu566999
[2017-11-06 14:54:09.278] rmt_check.c:848 ERROR: key checked failed: check key's value error, value is inconsistent. key(len:31, type:string): jx3/weibo/token/weibo/liu566999

Checked keys: 2500000
Inconsistent value keys: 2
Other check error keys: 0
Checked OK keys: 2453054

Check job finished, used 67.460s
[root@centos65x64akjqulww85fb4612ab06a674 bin]# exit
exit


As shown above, the value of the key jx3/weibo/token/weibo/liu566999 is indeed wrong, but the check records it as 2 inconsistent keys.
My rmt.conf is configured as follows:
[source]
type: single
servers :
-10.20.78.187:6379

[target]
type: redis cluster
servers:
-10.20.78.186:7000

[common]
listen: 0.0.0.0:8888

Continuing to sync after a cluster master/replica failover

While using the tool, we found that when a node in the cluster being synced goes through a master/replica failover, rmt does not detect the node change. The log shows that the tool keeps connecting to the old master node, syncing cannot continue, and the only option is to restart the tool. For this scenario, could the tool automatically find the replica of a failed node and re-sync that node's data, or is there another feasible approach? Thanks!

rmt deletes all files when backing up a cluster

When I run redis-migrate-tool with a redis cluster as the source and rdb file as the target, to perform a full backup of the cluster, it seems to work at first.

I can see 100 dump files (I have 100 masters) and INFO says:

# Group

source_nodes_count:100
target_nodes_count:0

# Stats

all_rdb_received:1
all_rdb_parsed:0
all_aof_loaded:0
rdb_received_count:100
rdb_parsed_count:0
aof_loaded_count:0
total_msgs_recv:0
total_msgs_sent:0
total_net_input_bytes:7693531722
total_net_output_bytes:0
total_net_input_bytes_human:7.17G
total_net_output_bytes_human:0B
total_mbufs_inqueue:0
total_msgs_outqueue:0

But when I shut down rmt with the shutdown command, all files disappear again. This does not happen when I use shutdown asap instead.

Feature: Stop after dumping to file

I'm planning to use your tool to back up a redis cluster into rdb files. Right now I have to check whether all_rdb_received is set to 1 and then tell it to shut down.

It would be nice to have a config param that would cause your program to exit on its own as soon as the dump is completed.

Not important, but a nice to have.

The target is a redis cluster with auth, and setup fails

#./src/redis-migrate-tool -c rmt.conf

[2016-09-13 15:34:29.865] rmt_core.c:525 Nodes count of source group : 1
[2016-09-13 15:34:29.865] rmt_core.c:526 Total threads count : 24
[2016-09-13 15:34:29.865] rmt_core.c:527 Read threads count assigned: 1
[2016-09-13 15:34:29.865] rmt_core.c:528 Write threads count assigned: 1
[2016-09-13 15:34:29.867] rmt_redis.c:6702 ERROR: read from 10.11.144.111:6771 command CLUSTER NODES failed: Operation now in progress
[2016-09-13 15:34:29.867] rmt_redis.c:6951 ERROR: no valid server address in the target redis cluster
[2016-09-13 15:34:29.867] rmt_redis.c:579 ERROR: Redis cluster init failed
[2016-09-13 15:34:29.867] rmt_core.c:2291 ERROR: Target redis group init from conf file failed
[2016-09-13 15:34:29.867] rmt_core.c:167 ERROR: Target group create failed
[2016-09-13 15:34:29.867] rmt_core.c:2408 Error: assign threads failed.

The reason is that this operation runs

    if (rmt_sync_write(tc->sd,REDIS_COMMAND_CLUSTER_NODES,
        rmt_strlen(REDIS_COMMAND_CLUSTER_NODES),1000) == -1){
        log_error("ERROR: send to %s command %s failed", 
            node->addr, REDIS_COMMAND_CLUSTER_NODES);
        goto error;
    }

before the AUTH command is sent; AUTH needs to be sent first.
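
A rough sketch of the kind of change being asked for, sending AUTH before CLUSTER NODES (the password variable and the hand-built RESP string below are illustrative, not the project's actual code; the real fix would take the configured redis_auth value and check for a +OK reply):

    char auth_cmd[256];
    int auth_len = snprintf(auth_cmd, sizeof(auth_cmd),
        "*2\r\n$4\r\nAUTH\r\n$%zu\r\n%s\r\n",
        strlen(password), password);

    if (rmt_sync_write(tc->sd, auth_cmd, auth_len, 1000) == -1) {
        log_error("ERROR: send to %s command AUTH failed", node->addr);
        goto error;
    }
    /* read the reply here and make sure it is +OK before continuing */

    if (rmt_sync_write(tc->sd, REDIS_COMMAND_CLUSTER_NODES,
        rmt_strlen(REDIS_COMMAND_CLUSTER_NODES), 1000) == -1) {
        log_error("ERROR: send to %s command %s failed",
            node->addr, REDIS_COMMAND_CLUSTER_NODES);
        goto error;
    }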

Migration process dies unexpectedly when loading a 17G rdb file into a redis cluster

When loading a 17G rdb file into the redis cluster, the migration process dies for no obvious reason. Here is the rmt config file; please help me check what the cause is. Is it because the file is too large?
[source]
type: rdb file
servers :
-/data/dump-6689-145.rdb
-/data/dump-6689-146.rdb
-/data/dump-6689-147.rdb
-/data/dump-6689-148.rdb

[target]
type: redis cluster
servers:
-10.14.16.114:6239
-10.14.16.116:6239
-10.14.16.17:6239
-10.14.16.19:6239

[common]
listen: 0.0.0.0:8899
step: 10
mbuf_size: 512
source_safe: true

Protocol not available

rmt_redis.c:6980 ERROR: read from 10.xx.xx.xx:6382 for command 'CLUSTER NODES' failed: Protocol not available
rmt_redis.c:6980 ERROR: read from 10.xx.xx.xx2:6380 for command 'CLUSTER NODES' failed: Protocol not available

[source]
type: single
servers :
-10.xx.xx.11:6380

[target]
type:redis cluster
servers:
-10.Xx:6386
-10.Xx2:6382
-10.XX3:6380
-10.XX4:6380
-10.xx5:6381

[common]
listen: 0.0.0.0:8888
threads:1
step:1
mbuf_size:512
source_safe:true

This error keeps being reported after the data migration completes

rmt_core.c:1660 ERROR: command 'EVAL' will not be propagated to the target redis group. If you want to see all the not supported writing commands, please run the 'redis-migrate-tool -I' command.

rmt cannot exit when syncing redis data to rdb files

First, here is my rmt.conf:

[source]
type: single
servers:
-10.10.10.9:6379
-10.10.10.10:6380

[target]
type: rdb file

[common]
listen: 10.33.80.221:8888
step: 1
mbuf_size: 4096
source_safe: true
dir: data

When rmt completes its job, the rmt process hangs and does not exit. From rmt.log, I noticed that neither the read thread (read_thread_run) nor the write thread (write_thread_run) terminated after the rdb files had been saved.

To use rmt, I cloned a fork and tried to modify it to run with my rmt.conf, but it cannot run if the value of source_safe is false.

I hope you can check my code and fix this bug. Thanks.

ERROR: Can't handle RDB format version -1526615960

rmt_redis.c:6446 ERROR: Can't handle RDB format version -1526615960
rmt_redis.c:6715 ERROR: Rdb file for node[/tmp/dump.rdb] parsed failed

Migrating a standalone redis dump.rdb file to a redis cluster reports this error.
redis version: 4.0.9
Config file:
[source]
type: rdb file
servers:

  • /tmp/dump.rdb

[target]
type: redis cluster
servers:

  • 172.16.xxx:6379

[common]
listen: 0.0.0.0:8888

What could be causing this?

rmt doesn't support distribution modula & random

rmt's redis_twem_backend_idx and redis_twem_backend_node use the ketama distribution
regardless of the config. This will cause serious data corruption if the user uses the modula or random distribution.

redis_twem_backend_idx(redis_group *rgroup, uint8_t *key, uint32_t keylen)
{
    uint32_t idx, hash;
    int distribution = DIST_KETAMA;
    struct continuum *continuum;
    
   ...
}
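
To make the risk concrete, here is a small self-contained illustration (not rmt code, and ignoring server weights) of how twemproxy's modula distribution picks a backend: it is essentially a modulo over the backend list, while ketama walks a consistent-hash continuum, so the two schemes generally disagree about which backend owns a given hash.

#include <stdint.h>
#include <stdio.h>

/* modula-style dispatch: backend index is simply hash % number_of_backends.
   ketama instead searches a continuum of hash points, so a tool that always
   uses ketama can route the same key to a different backend than the source
   proxy did, which is where the corruption comes from. */
static uint32_t modula_idx(uint32_t hash, uint32_t nbackends)
{
    return hash % nbackends;
}

int main(void)
{
    uint32_t hash = 0x9e3779b9u;   /* an arbitrary example hash value */
    printf("modula index over 4 backends: %u\n", modula_idx(hash, 4));
    return 0;
}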

An error occurred while syncing data

[2017-12-28 18:26:01.191] rmt_redis.c:6446 ERROR: Can't handle RDB format version 1984004104
[2017-12-28 18:26:01.726] rmt_redis.c:6715 ERROR: Rdb file for node[10.81.244.19:34500] parsed failed

Support for MULTI/EXEC

Hi,
Running a migration task, I see a lot of messages in the logs like the following:
rmt_redis.c:3143 ERROR: parsed unsupported command 'MULTI'
followed by a hex dump of the commands and data.
Are the transaction commands (http://redis.io/topics/transactions) not supported?
Thanks!

.ibtoolize: AC_CONFIG_MACRO_DIR([m4]) conflicts with ACLOCAL_AMFLAGS=-I m4

$autoreconf -fvi
autoreconf: Entering directory `.'
autoreconf: configure.ac: not using Gettext
autoreconf: running: aclocal --force -I m4
autoreconf: configure.ac: tracing
autoreconf: running: libtoolize --copy --force
.ibtoolize: AC_CONFIG_MACRO_DIR([m4]) conflicts with ACLOCAL_AMFLAGS=-I m4
autoreconf: libtoolize failed with exit status: 1

autoreconf -fvi reports errors during the build

autoreconf: Entering directory `.'
autoreconf: configure.ac: not using Gettext
autoreconf: running: aclocal --force -I m4
autoreconf: configure.ac: tracing
autoreconf: configure.ac: not using Libtool
autoreconf: running: /usr/bin/autoconf --force
configure.ac:36: error: possibly undefined macro: AC_PROG_LIBTOOL
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
autoreconf: /usr/bin/autoconf failed with exit status: 1
