pingcap / tiflash

The analytical engine for TiDB and TiDB Cloud. Try free: https://tidbcloud.com/free-trial

Home Page: https://docs.pingcap.com/tidb/stable/tiflash-overview

License: Apache License 2.0

CMake 0.77% Shell 0.08% Python 0.18% C++ 91.23% C 0.29% Makefile 0.01% Assembly 7.43% Rust 0.01% Dockerfile 0.01%

tiflash's Introduction

TiFlash

(Figure: TiFlash architecture diagram)

TiFlash is a columnar storage component of TiDB and TiDB Cloud, the fully-managed service of TiDB. It mainly plays the role of Analytical Processing (AP) in the Hybrid Transactional/Analytical Processing (HTAP) architecture of TiDB.

TiFlash stores data in columnar format and synchronizes data updates from TiKV in real time via Raft logs, with sub-second latency. Reads in TiFlash are guaranteed to be transactionally consistent at the Snapshot Isolation level. TiFlash uses a Massively Parallel Processing (MPP) computing architecture to accelerate analytical workloads.

The TiFlash repository is based on ClickHouse. We appreciate the excellent work of the ClickHouse team.

Quick Start

Start with TiDB Cloud

Quickly explore TiFlash with a free trial of TiDB Cloud.

See TiDB Cloud Quick Start Guide.

Start with TiDB

See Quick Start with HTAP and Use TiFlash.

Build TiFlash

TiFlash can be built on the following hardware architectures:

  • x86-64 / amd64
  • aarch64

And the following operating systems:

  • Linux
  • MacOS

1. Prepare Prerequisites

The following packages are required:

  • CMake 3.23.0+
  • Clang 17.0.0+ under Linux or AppleClang 14.0.0+ under MacOS
  • Rust
  • Python 3.0+
  • Ninja-Build or GNU Make
  • Ccache (not necessary but highly recommended to reduce rebuild time)

Detailed steps for each platform are listed below.

Ubuntu / Debian
sudo apt update

# Install Rust toolchain, see https://rustup.rs for details
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain none
source $HOME/.cargo/env

# Install LLVM, see https://apt.llvm.org for details
# Clang will be available as /usr/bin/clang++-17
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 17 all

# Install other dependencies
sudo apt install -y cmake ninja-build zlib1g-dev libcurl4-openssl-dev ccache
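
Optionally, you can verify that the installed toolchain matches the prerequisites listed above (a quick sanity check; exact output formats vary by distribution):

cmake --version      # expect 3.23.0 or newer (see the note below for older Ubuntu releases)
clang-17 --version   # expect 17.0.0 or newer
python3 --version    # expect 3.0 or newer
rustup --version
ninja --version
ccache --version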

Note for Ubuntu 18.04 and Ubuntu 20.04:

The cmake installed by default may not be recent enough. You can install a newer cmake from the Kitware APT Repository:

sudo apt install -y software-properties-common lsb-release
wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null
sudo apt-add-repository "deb https://apt.kitware.com/ubuntu/ $(lsb_release -cs) main"
sudo apt update
sudo apt install -y cmake

If you are facing "ld.lld: error: duplicate symbol: ssl3_cbc_digest_record":

This is likely because you have a pre-installed libssl3, while TiFlash prefers libssl1. TiFlash vendors libssl1, so you can simply remove the system one to make compilation work:

sudo apt remove libssl-dev

If this doesn't work, please file an issue.

Archlinux
# Install Rust toolchain, see https://rustup.rs for details
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain none
source $HOME/.cargo/env

# Install compilers and dependencies
sudo pacman -S clang lld libc++ libc++abi compiler-rt openmp lcov cmake ninja curl openssl zlib llvm ccache
CentOS 7

Please refer to release-centos7-llvm/env/prepare-sysroot.sh
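
For example, a hedged sketch of running it from the repository root (the exact usage and required environment may differ; check the script itself for details):

./release-centos7-llvm/env/prepare-sysroot.sh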

MacOS
# Install Rust toolchain, see https://rustup.rs for details
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain none
source $HOME/.cargo/env

# Install compilers
xcode-select --install

# Install other dependencies
brew install ninja cmake [email protected] ccache

If your MacOS version is 13.0 (Ventura) or later, it should work out of the box, because Xcode 14.3 ships Apple Clang 14.0.0 by default. If your MacOS version is earlier than 13.0, you should install LLVM Clang manually.

brew install llvm@17

# check llvm version
clang --version # should be 17.0.0 or higher

2. Checkout Source Code

git clone https://github.com/pingcap/tiflash.git --recursive -j 20
cd tiflash
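
If you have already cloned the repository without --recursive, you can fetch the submodules afterwards:

git submodule update --init --recursive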

3. Build

To build TiFlash for development:

# In the TiFlash repository root:
cmake --workflow --preset dev

Note: On Linux, you usually need to explicitly specify the LLVM compilers to use.

export CC="/usr/bin/clang-17"
export CXX="/usr/bin/clang++-17"

On MacOS, if you installed LLVM Clang manually, you need to explicitly specify it.

Add the following lines to your shell environment, e.g. ~/.bash_profile.

export PATH="/opt/homebrew/opt/llvm/bin:$PATH"
export CC="/opt/homebrew/opt/llvm/bin/clang"
export CXX="/opt/homebrew/opt/llvm/bin/clang++"

Or use CMAKE_C_COMPILER and CMAKE_CXX_COMPILER to specify the compiler, like this:

mkdir cmake-build-debug
cd cmake-build-debug

cmake .. -GNinja -DCMAKE_BUILD_TYPE=DEBUG -DCMAKE_C_COMPILER=/opt/homebrew/opt/llvm/bin/clang -DCMAKE_CXX_COMPILER=/opt/homebrew/opt/llvm/bin/clang++

ninja tiflash

After building, the TiFlash binary is located at dbms/src/Server/tiflash under the cmake-build-debug directory.
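
As a quick sanity check, you can print the build information (a hedged example; it assumes the binary supports the version subcommand):

./cmake-build-debug/dbms/src/Server/tiflash version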

Build Options

TiFlash has several CMake build options to tweak for development purposes. These options SHOULD NOT be changed for production usage, as they may introduce unexpected build errors and unpredictable runtime behaviors.

To tweak options, pass one or multiple -D...=... args when invoking CMake, for example:

cd cmake-build-debug
cmake .. -GNinja -DCMAKE_BUILD_TYPE=DEBUG -DFOO=BAR
                                          ^^^^^^^^^
  • Build Type:

    • -DCMAKE_BUILD_TYPE=RELWITHDEBINFO: Release build with debug info (default)

    • -DCMAKE_BUILD_TYPE=DEBUG: Debug build

    • -DCMAKE_BUILD_TYPE=RELEASE: Release build

    Usually you may want to use different build directories for different build types, e.g. a new build directory named cmake-build-release for the release build, so that the compile unit cache is not invalidated when you switch between build types (a minimal sketch follows this list).

  • Build with Unit Tests:

    • -DENABLE_TESTS=ON: Enable unit tests (enabled by default in debug profile)

    • -DENABLE_TESTS=OFF: Disable unit tests (default in release profile)

  • Build using GNU Make instead of ninja-build:

    Click to expand instructions

    To use GNU Make, simply don't pass -GNinja to cmake:

    cd cmake-build-debug
    cmake .. -DCMAKE_BUILD_TYPE=DEBUG
    make tiflash -j

    NOTE: The -j option controls build parallelism (it defaults to your system CPU core count; you can optionally specify a number). Higher parallelism consumes more memory. If the compiler runs out of memory or hangs, lower the parallelism by passing a smaller number after -j, e.g. half of your system CPU core count or less, depending on the available memory in your system.

  • Build with System Libraries:

    Click to expand instructions

    For local development, it is sometimes handy to use pre-installed third-party libraries in the system, rather than compiling them from the sources of the bundled (internal) submodules.

    Options are supplied to control whether to use internal third-party libraries (bundled in TiFlash) or to try using the pre-installed system ones.

    WARNING: It is NOT guaranteed that TiFlash would still build if any of the system libraries are used. Build errors are very likely to happen, almost all the time.

    You can view these options along with their descriptions by running:

    cd cmake-build-debug
    cmake -LH | grep "USE_INTERNAL" -A3

    All of these options default to ON, which, as the names suggest, means the internal libraries are used and built from source.

    There is another option to append extra paths for CMake to find system libraries:

    • PREBUILT_LIBS_ROOT: Empty by default; multiple paths can be specified, separated by ;
  • Build for AMD64 Architecture:

    Click to expand instructions

    To deploy TiFlash under the Linux AMD64 architecture, the CPU must support the AVX2 instruction set. Ensure that cat /proc/cpuinfo | grep avx2 has output.

    If you need to build TiFlash for the AMD64 architecture without this instruction set, use the CMake option -DNO_AVX_OR_HIGHER=ON.
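
For example, a minimal sketch of configuring the separate release build directory mentioned above (see the Build Type option), with unit tests disabled:

mkdir cmake-build-release
cd cmake-build-release
cmake .. -GNinja -DCMAKE_BUILD_TYPE=RELWITHDEBINFO -DENABLE_TESTS=OFF
ninja tiflash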

Run Unit Tests

Unit tests are automatically enabled in the debug profile. To build these unit tests:

# In the TiFlash repository root:
cmake --workflow --preset unit-tests-all

Then, to run these unit tests:

cd cmake-build-debug
./dbms/gtests_dbms
./libs/libdaemon/src/tests/gtests_libdaemon
./libs/libcommon/src/tests/gtests_libcommon

More usages are available via ./dbms/gtests_dbms --help.
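
The unit test executables are built on googletest, so standard googletest flags should also work; for example, to run only a subset of tests (the filter pattern below is illustrative):

./dbms/gtests_dbms --gtest_filter='DeltaMerge*'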

Run Sanitizer Tests

TiFlash supports testing with thread sanitizer and address sanitizer.

To build unit test executables with sanitizer enabled:

# In the TiFlash repository root:
cmake --workflow --preset asan-tests-all # or tsan-tests-all

There are known false positives reported from leak sanitizer (which is included in address sanitizer). To suppress these errors, set the following environment variables before running the executables:

LSAN_OPTIONS="suppressions=tests/sanitize/asan.suppression" ./dbms/gtests_dbms ...
# or
TSAN_OPTIONS="suppressions=tests/sanitize/tsan.suppression" ./dbms/gtests_dbms ...

Run Integration Tests

  1. Build your own TiFlash binary using the debug profile:

    cd cmake-build-debug
    cmake .. -GNinja -DCMAKE_BUILD_TYPE=DEBUG
    ninja tiflash
  2. Start a local TiDB cluster with your own TiFlash binary using TiUP:

    cd cmake-build-debug
    tiup playground nightly --tiflash.binpath ./dbms/src/Server/tiflash
    
    # Or using a more stable cluster version:
    # tiup playground v6.1.0 --tiflash.binpath ./dbms/src/Server/tiflash

    TiUP is the TiDB component manager. If you don't have it installed, you can install it via:

    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

    If you are not running the cluster using the default port (for example, you run multiple clusters), make sure that the port and build directory in tests/_env.sh are correct.

  3. Run integration tests:

    # In the TiFlash repository root:
    cd tests
    ./run-test.sh
    
    # Or run specific integration test:
    # ./run-test.sh fullstack-test2/ddl

Note: some integration tests (namely, tests under delta-merge-test) require a standalone TiFlash service without a TiDB cluster, otherwise they will fail. To run these integration tests: TBD

Run MicroBenchmark Tests

To build the micro benchmark tests, you need the release profile with tests enabled:

# In the TiFlash repository root:
cmake --workflow --preset benchmarks

Then, to run these micro benchmarks:

cd cmake-build-release
./dbms/bench_dbms

# Or run with filter:
# ./dbms/bench_dbms --benchmark_filter=xxx

More usages are available via ./dbms/bench_dbms --help.

Generate LLVM Coverage Report

To generate a coverage report, run the script under release-centos7-llvm:

cd release-centos7-llvm
./gen_coverage.sh
# Or run with filter:
# FILTER='*DMFile*:*DeltaMerge*:*Segment*' ./gen_coverage.sh

# After the script finishes, it prints the directory of the code coverage report; you can browse the files with a web browser
python3 -m http.server --directory "${REPORT_DIR}" "${REPORT_HTTP_PORT}"

Contributing

Here is an overview of the TiFlash architecture. (Figure: the architecture of TiFlash's distributed storage engine and transaction layer.)

See TiFlash Development Guide and TiFlash Design documents.

Before submitting a pull request, please resolve clang-tidy errors and use format-diff.py to format the source code; otherwise the CI build may fail.

NOTE: clang-format 12.0.0+ is required.

# In the TiFlash repository root:
merge_base=$(git merge-base upstream/master HEAD)
python3 release-centos7-llvm/scripts/run-clang-tidy.py -p cmake-build-debug -j 20 --files `git diff $merge_base --name-only`
# if there are too many errors, you can try to run the script again with `-fix`
python3 format-diff.py --diff_from $merge_base

License

TiFlash is under the Apache 2.0 license. See the LICENSE file for details.


tiflash's Issues

[MPP planner] embed shuffle hash join into current cbo framework.

Consider the 'partition' property for the current CBO framework and generate a shuffled hash join plan as the final physical plan.

Shuffled hash join should have a lower cost for huge tables, but the concurrency is unknown right now, so the cost model is complex and we should discuss it carefully. Maybe we can drive it by hints at first.

Mishandling of changing pk column name in DeltaTree

First, make sure TiDB is started with alter-primary-key = false.
Then execute the following SQL statements through mysql-client.

create table t (a blob(34) null, b mediumint, primary key(b));
alter table t set tiflash replica 1;
alter table t change column b c mediumint;
insert into t values('1', 2);
select * from t;

In TiFlash's log, we can see the following error:

2020.06.28 12:03:38.459711 [ 23 ] <Error> DB::TiFlashApplyRes DB::HandleWriteRaftCmd(const DB::TiFlashServer*, DB::WriteCmdsView, DB::RaftCmdHeader): Code: 10, e.displayText() = DB::Exception: Not found column b in block. There are only columns: a, c, _INTERNAL_VERSION, _INTERNAL_DELMARK, e.what() = DB::Exception, Stack trace:

0. /data1/jaysonhuang/nodes/28/tiflash/tiflash(StackTrace::StackTrace()+0x1c) [0x128a8fd4]
1. /data1/jaysonhuang/nodes/28/tiflash/tiflash(DB::Exception::Exception(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)+0x4b) [0xce292f7]
2. /data1/jaysonhuang/nodes/28/tiflash/tiflash(DB::Block::getPositionByName(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const+0x10a) [0x111078e2]
3. /data1/jaysonhuang/nodes/28/tiflash/tiflash(DB::DM::DeltaMergeStore::write(DB::Context const&, DB::Settings const&, DB::Block const&)+0x226) [0x11f6ae94]
4. /data1/jaysonhuang/nodes/28/tiflash/tiflash(DB::StorageDeltaMerge::write(DB::Block&&, DB::Settings const&)+0x18d) [0x11ebf565]
5. /data1/jaysonhuang/nodes/28/tiflash/tiflash() [0x121c8032]
6. /data1/jaysonhuang/nodes/28/tiflash/tiflash(DB::writeRegionDataToStorage(DB::Context&, std::shared_ptr<DB::Region> const&, std::vector<std::tuple<long, unsigned char, unsigned long, std::shared_ptr<DB::StringObject<false> const> >, std::allocator<std::tuple<long, unsigned char, unsigned long, std::shared_ptr<DB::StringObject<false> const> > > >&, Poco::Logger*)+0xd5) [0x121c857a]
7. /data1/jaysonhuang/nodes/28/tiflash/tiflash(DB::RegionTable::writeBlockByRegion(DB::Context&, std::shared_ptr<DB::Region> const&, std::vector<std::tuple<long, unsigned char, unsigned long, std::shared_ptr<DB::StringObject<false> const> >, std::allocator<std::tuple<long, unsigned char, unsigned long, std::shared_ptr<DB::StringObject<false> const> > > >&, Poco::Logger*, bool)+0x160) [0x121c8fea]
8. /data1/jaysonhuang/nodes/28/tiflash/tiflash(DB::Region::handleWriteRaftCmd(DB::WriteCmdsView const&, unsigned long, unsigned long, DB::TMTContext&)+0x58e) [0x121d930e]
9. /data1/jaysonhuang/nodes/28/tiflash/tiflash(DB::KVStore::handleWriteRaftCmd(DB::WriteCmdsView const&, unsigned long, unsigned long, unsigned long, DB::TMTContext&)+0x1bf) [0x121baf59]
10. /data1/jaysonhuang/nodes/28/tiflash/tiflash(DB::HandleWriteRaftCmd(DB::TiFlashServer const*, DB::WriteCmdsView, DB::RaftCmdHeader)+0x4e) [0x121d2e25]
11. /data1/jaysonhuang/nodes/28/tiflash/libtiflash_proxy.so(+0x11c9543) [0x7f600b76a543]
12. /data1/jaysonhuang/nodes/28/tiflash/libtiflash_proxy.so(+0x11b58d2) [0x7f600b7568d2]
13. /data1/jaysonhuang/nodes/28/tiflash/libtiflash_proxy.so(+0x11b8d24) [0x7f600b759d24]
14. /data1/jaysonhuang/nodes/28/tiflash/libtiflash_proxy.so(+0x350c86) [0x7f600a8f1c86]
15. /data1/jaysonhuang/nodes/28/tiflash/libtiflash_proxy.so(+0x38ba55) [0x7f600a92ca55]
16. /data1/jaysonhuang/nodes/28/tiflash/libtiflash_proxy.so(+0xc714bd) [0x7f600b2124bd]
17. /data1/jaysonhuang/nodes/28/tiflash/libtiflash_proxy.so(+0xc732c7) [0x7f600b2142c7]
18. /usr/lib64/libpthread.so.0(+0x7e64) [0x7f6009428e64]
19. /usr/lib64/libc.so.6(clone+0x6c) [0x7f60089df88c]

And the metadata of that table is

ATTACH TABLE t_58
(
    a Nullable(String),
    c Int32
)
ENGINE = DeltaMerge(b, ... )  <- the pk name(s) in the create statement are not changed.

Tombstoning tables makes the fault-injected drop-table test lose effectiveness

In PR #599, we tombstone the table instead of instantly dropping its data on disk physically.
It means that in tests/fullstack-test/fault-inject/drop-table.test, the failpoint exceptions (exception_between_drop_meta_and_data and exception_drop_table_during_remove_meta) are not triggered.

Those fail points remain enabled without being triggered and may cause trouble in later tests.
For example:

## Simulation of enabling a fail point that is never triggered.
>> DBGInvoke __init_fail_point()
>> DBGInvoke __enable_fail_point(exception_between_drop_meta_and_data)

## Drop database will actually drop table data physically.
mysql> drop table if exists test_new.t2;
mysql> drop database if exists test_new;
mysql> create database if not exists test_new;
mysql> create table test_new.t2(a int, b int)
mysql> alter table test_new.t2 set tiflash replica 1
func> wait_table test_new t2
mysql> insert into test_new.t2 values (1, 1),(1, 2);
mysql> set session tidb_isolation_read_engines='tiflash'; select * from test_new.t2;
+------+------+
| a    | b    |
+------+------+
|    1 |    1 |
|    1 |    2 |
+------+------+

mysql> drop table if exists test_new.t2;
mysql> drop database if exists test_new;

## Refresh schema will trigger fail point and throw an exception
## It makes TiFlash's schema unable to be synced with TiDB
>> DBGInvoke __refresh_schemas()  

Refine the "stable" PageStorage

We introduced a new PageStorage which supports multi-threaded writes in #464. It was a big change to the PageStorage code.
I have added backward compatibility for reading single page files of the older format, but I was not sure it could completely handle all old-format data (including the checkpoint, legacy, and formal page file data). To keep TiFlash stable at that time, I only used the new PageStorage for DeltaTree, while keeping the old code as "stable::PageStorage" for storing raft data.

Having two versions of the PageStorage codebase may cause trouble if we add encryption or other new features to PageStorage. We'd better add some test cases to ensure we can unify them and that the upgrade can be performed smoothly.

[MPP coordinator] implement task generation and schedule.

After the coordinator receives an ExecQueryRequest, it should record it and generate MPP computation tasks. In the first development phase, we assume resources are sufficient and allocate tasks to different TiFlash nodes.

Then the scheduling stage starts: the coordinator sends an ExecTaskRequest to each corresponding node and tracks task status.

[MPP coordinator] implement coordinator server and client.

When the MPP function is enabled, TiDB should start a coordinator leader thread and write its server address to etcd.
When it crashes, another TiDB should become the leader instead.

TiDB and TiFlash should both implement a coordinator client, find the server address from etcd, and keep a heartbeat.

Deadlocks between learner read and raft threads

In the case of #803, we found that one region logged something like "Start to remove region xxx", but the corresponding "Remove region xxx done" could not be found. About one hour later, all queries accessing TiFlash got stuck, and there was no new raft command log during that period.
After restarting TiFlash, that region did not appear in the restored region list. So it is believed that possibly there are deadlocks inside RegionTable::RemoveRegion.
But I can't find the deadlock by reviewing the code. Maybe we need to add more logging first to find out what happens.

[ 8 ] <Information> Region: [region 36252027, applied: term 7 index 7] execute admin command ChangePeer at [term: 7, index: 8]
[ 8 ] <Information> Region: [region 36252027] execute change peer type: RemoveNode
[ 8 ] <Information> KVStore: Start to remove [region 36252027]
...

Consider adding timeout for wait index

In TiDB, users can set max_execution_time to limit how long a statement is permitted to execute before the server terminates it.
If the raft threads happen to deadlock, then coprocessor queries waiting for the index may block forever.

I think we can add a timeout for wait index to mitigate the impact of an infinite wait.

Rare seen proxy core dump and unable to recover when restart

We saw a rare proxy core dump in a previous POC; we had no clue and were not able to reproduce it back then, so we didn't take further action.

We saw a proxy core dump today in another customer's production environment that looks like the one above.

We'll take whatever it costs to investigate this issue, because it is unacceptable to have such a core dump in production.

[MPP worker] implement MPP worker service.

The MPP worker service is driven by TiFlash and serves MPP task requests. After receiving an ExecTaskRequest, it should establish a connection with the upstream node. For the Table Scan plan, we should retry on region errors or resolve-lock errors. If there is any error, we should throw an exception to the coordinator.
