
Ubiquity

License: Apache License 2.0

Go 95.28% Shell 4.33% Dockerfile 0.39%
docker kubernetes containers ibm spectrum-scale storage-container storage docker-swarm

Introduction

Ubiquity Storage Service for Container Ecosystems


The Ubiquity project enables persistent storage for the Kubernetes and Docker container frameworks. It is a pluggable framework that interfaces with different storage systems through their plugins. Different container frameworks can use Ubiquity concurrently, allowing access to different storage systems.

Ubiquity supports the Kubernetes and Docker frameworks through dedicated plugins.

The IBM official solution for Kubernetes, based on the Ubiquity project, is referred to as IBM Storage Enabler for Containers. You can download the installation package and its documentation from IBM Fix Central.

Ubiquity supports the following storage systems:

  • IBM block storage.

    IBM block storage is supported for Kubernetes via IBM Spectrum Connect. Ubiquity communicates with the IBM storage systems through Spectrum Connect. Spectrum Connect creates a storage profile (for example, gold, silver or bronze) and makes it available for Kubernetes.

  • IBM Spectrum Scale

IBM Spectrum Scale file storage is supported for Kubernetes. Ubiquity communicates with the IBM Spectrum Scale system directly via the IBM Spectrum Scale management API v2.

The code is provided as is, without warranty. Any issue will be handled on a best-effort basis.

Solution overview

(Figure: Ubiquity solution overview)

Description of Ubiquity Kubernetes deployment:

  • Ubiquity Kubernetes Dynamic Provisioner (ubiquity-k8s-provisioner) runs as a Kubernetes deployment with replica=1.
  • Ubiquity Kubernetes FlexVolume (ubiquity-k8s-flex) runs as a Kubernetes daemonset on all worker and master nodes.
  • Ubiquity (ubiquity) runs as a Kubernetes deployment with replica=1.
  • Ubiquity database (ubiquity-db) runs as a Kubernetes deployment with replica=1.

Contribution

To contribute, follow the guidelines in the Contribution guide.

Support

For any questions, suggestions, or issues, open an issue on GitHub.

Licensing

Copyright 2016, 2017 IBM Corp.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

ubiquity's People

Contributors

27149chen, alonm, beckmani, danare, deeghuge, engelrob, hfeish, imgbotapp, johnwalicki, kant, midoblgsm, olgashtivelman, ranhrl, shay-berman, tikolsky, tzur-i, tzure, yadaven, yixuan0825


ubiquity's Issues

Attach, Detach a volume to a host

This epic is for implementing Attach and Detach functionality in Ubiquity for IBM Block Storage.
The epic includes tasks such as:

  1. Host-side operations such as rescanning, discovering the multipath device, creating a new filesystem, and mounting/unmounting a device.
  2. Implementing the Ubiquity SCBE backend and mounter interfaces.

At the end of this epic (and assuming the Activation and Provisioning epics are done), we will be able to launch a Docker container with a persistent volume from IBM block storage (e.g. IBM FlashSystem A9000).

Ubiquity logs should be redirected to stdout in order to support kubectl logs on the ubiquity POD

"kubectl logs POD" shows only a few lines of ubiquity initialization; all the logs go to a dedicated file, LOGDIR/ubiquity.log.

So this ticket is to support kubectl logs POD on ubiquity and make sure all the ubiquity logs go to stdout/stderr, so the user sees them when typing kubectl logs POD.

Two options:

  1. Symlink the ubiquity log file to /dev/stdout.
  2. Change the logger in the code to also send output to stdout:
    logger := log.New(io.MultiWriter(logFile), "ubiquity-flexvolume: ", log.Lshortfile|log.LstdFlags)

In addition, we need to remove the creation of the /tmp/ubiquity dir inside the Dockerfile and make sure the ubiquity code creates the log dir on the fly.

Security requirement: sudoers

Hello all,
during the evaluation of ubiquity we see the following configuration request in the systemd units of the ubiquity services; this is too weak if we apply our security rules here. The recommended configuration is:

For non-root users, such as USER, configure the sudoers as follows:
USER ALL= NOPASSWD: /usr/bin/, /bin/
Defaults:%USER !requiretty
Defaults:%USER secure_path = /sbin:/bin:/usr/sbin:/usr/bin

But doesn't this grant the non-root user complete root execution rights?
In the systemd start logs we see the following:
Jul 07 09:47:50 sbdl7001.t7.lan.tuhuk.de systemd[1]: Started ubiquity docker plugin Service.
Jul 07 09:47:50 sbdl7001.t7.lan.tuhuk.de systemd[1]: Starting ubiquity docker plugin Service...
Jul 07 09:47:50 sbdl7001.t7.lan.tuhuk.de ubiquity-docker-plugin[29127]: Starting ubiquity plugin with /etc/ubiquity/ubiquity-clien...file
Jul 07 09:47:50 sbdl7001.t7.lan.tuhuk.de sudo[29134]: svcdo-ubi01 : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/touch /etc/do...y.json
Jul 07 09:47:50 sbdl7001.t7.lan.tuhuk.de sudo[29140]: svcdo-ubi01 : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/chmod -R 777 ...ocker/
Jul 07 09:47:50 sbdl7001.t7.lan.tuhuk.de sudo[29154]: svcdo-ubi01 : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/chown 9999:99...y.json

Are these steps the final implementation?

Add License Headers

Add a license header to all the go files in the repository (except the vendor directory).

Here is the required license format:

/**
 * Copyright 2017 IBM Corp.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

Note: if the file was created in 2016 and was also modified in 2017, the copyright should be "Copyright 2016, 2017 IBM Corp."

The same applies to the ubiquity-k8s and ubiquity-docker-plugin repos.

Make init calls not use the DB

For the next step of changing from sqlite to postgresql, we need to make sure that init calls do not try to connect to the DB because this might fail when postgres is not yet ready.

  • SCBE: no-op, since activate does not do anything.
  • SSc: change the activate code to avoid touching the DB.

Serialize and reduce concurrent rescans and multipath-reload operations

This ticket improves the OS rescan and multipath operations for attaching and detaching Docker volumes in the Ubiquity SCBE block storage backend, to avoid failing containers due to hanging OS rescan or multipath tasks.

A few aspects:

  1. Handle concurrent rescans and reduce rescan calls.
    To support concurrent docker start operations (which lead to volume attach), we need to rescan the devices and multipathing. The OS doesn't handle concurrent rescan-scsi-bus.sh operations well, so we need a locking mechanism: the rescan operations (including rescan-scsi-bus.sh, iscsi --rescan, and multipath reload) should be serialized.
    The locking (sync.RWMutex) should also be smarter: before a rescan takes the lock, we should first check whether the new device was already discovered by the OS; if so, there is no need to trigger the rescan.

  2. Handle concurrent multipath cleanup calls.
    The same serialization is needed for docker stop operations. During this flow we first need to delete the multipath device (multipath -f device). This OS operation also needs to be serialized (sync.RWMutex) to avoid OS collisions.

  3. Improve rescan after detach.
    No need to run multipath reload during the rescan after attach.

  4. Some refactoring to apply global locking for rescan on the mounter side:
    reuse the mounter object in the client instead of generating it every time.

How to test
Test by running many attach/detach operations in parallel; here are two options:

  1. Option one, by CLI:
#Attach
for i in `seq 1 10`; do docker run -itd --name c${i} --volume-driver ubiquity --volume Vol${i}:/data ubuntu bash& done

#Detach
docker stop `docker ps -aq`; sleep 1

  2. Option two, by compose file with real applications:
    docker-compose -f file up
    docker-compose -f file down

Compose files:
a) https://raw.githubusercontent.com/shay-berman/mybox/master/docker-compose-real-6apps.yml
b) https://raw.githubusercontent.com/shay-berman/mybox/master/docker-compose-webapp-and-postgres-with-ubiquity.yml

All the apps should come up quickly, without any stuck operations such as a hanging rescan or multipath -r.

Configure the sqlite path in an env var

As part of deployment, the DB path will move to an env var.
When this happens we need to use it (instead of UBIQUITY_DB_SQLITE_PATH, which is currently defined in main).

Containerize ubiquity

Ubiquity should run inside a container:

  • Dockerfile to build the image
  • Test the usage of the image

Support xfs filesystem (Introduce --opt fstype=xfs during volume creation)

Currently the Ubiquity SCBE backend supports ext4.
This ticket is to extend it to also support the xfs filesystem,
introducing a new option for volume provisioning: --opt fstype=[xfs|ext4].

Default values:

  • If the customer doesn't specify fstype during provisioning, the default is the one set in the ubiquity config file (param DefaultFilesystemType).
  • If no DefaultFilesystemType is defined, the default is ext4.
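The default-resolution rules above can be sketched as a small helper (the function name is illustrative; the real code may structure this differently):

```go
package main

import "fmt"

// resolveFstype applies the default rules: an explicit --opt fstype wins,
// then DefaultFilesystemType from the config file, then ext4.
func resolveFstype(optFstype, configDefault string) string {
	if optFstype != "" {
		return optFstype
	}
	if configDefault != "" {
		return configDefault
	}
	return "ext4"
}

func main() {
	fmt.Println(resolveFstype("xfs", "ext4")) // explicit option wins: xfs
	fmt.Println(resolveFstype("", "xfs"))     // config default: xfs
	fmt.Println(resolveFstype("", ""))        // hard-coded fallback: ext4
}
```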

Implementation details:

  • During volume creation, Ubiquity just keeps the fstype in the volume record in the ubiquity DB. Only when the first attach operation is triggered (e.g. by a docker start command with this volume) does the mounter create a filesystem of that fstype on the device.

  • There is no way to change the fstype after the volume is created.

  • An xfs volume doesn't create a lost+found directory by default. Just a fact that can be useful for a DB that is sensitive to non-empty volumes (e.g. postgres wants a clean volume, even without lost+found).

  • In addition, this ticket also fixes a bug: the default volume size from the configuration file was not taken into account. It should be fixed as part of this ticket.

[BUG] docker create volume for V9K error with "Unmarshal"

When I create a volume with the command "docker volume create --driver ubiquity --name volumeqqqq --opt size=5 --opt profile=gold" and check the storage side, the volume is indeed created, but I still get an error.
There is an error in the ubiquity log below:
(screenshot: create volume error)

Activate new SCBE backend

This epic is the first step to support IBM block storage systems in Ubiquity storage service. This is achieved by adding IBM Spectrum Control Base Edition (SCBE) as a new Ubiquity storage backend, named scbe.

Epic content:

  1. Load and initialize SCBE new backend during the Ubiquity server startup.
  2. Define configuration parameters for SCBE backend, including its default settings.
  3. Implement the Activate interface by logging in to SCBE and making sure that the default service is available.
  4. Handling SCBE token expiration.
  5. Consider additional tasks related to the initialization and activation steps.

Implement Spectrum Scale REST connector for Ubiquity

Implement V2 REST API support for Ubiquity. For this I have implemented the following interfaces:

GetClusterId, IsFilesystemMounted, ListFilesystems, GetFilesystemMountpoint, CreateFileset, DeleteFileset, LinkFileset, UnlinkFileset, ListFileset, ListFilesets, IsFilesetLinked, SetFilesetQuota, ListFilesetQuota, ExportNfs, UnexportNfs

I have tested the interfaces with the ubiquity-docker-plugin and the ubiquity-k8s plugin.

Currently I am writing unit test cases for it using ginkgo/gomega.

Update (14/06/2017):

Fixed certain issues found while running the unit test cases. For example, in the ListFilesets function the change was:

Before:
listFilesetURL := utils.FormatURL(s.endpoint, "scalemgmt/v2/filesystems/%s/filesets",filesystemName)

After Fix:
listFilesetURL := utils.FormatURL(s.endpoint, fmt.Sprintf("scalemgmt/v2/filesystems/%s/filesets",filesystemName))

Support parallel volume attachment requests

Running multiple attachments of different volumes on the same host may hit a race condition in SCBE's "get next available LUN id".

To avoid this race condition, we should take a lock before triggering the SCBE attach (mapping) operation and release it afterwards.

The lock ensures that no other attachment is triggered at the same time on the same host.

How to test

Start 10 containers at the same time with different volumes:
#> for i in `seq 1 10`; do docker run -itd --name c${i} --volume-driver ubiquity --volume automaticVol${i}:/data ubuntu bash& done

If all is good, you should NOT see the following error in the console, and all the containers should be up and running.
The error in the console, in case the serialization is not working, is:

docker: Error response from daemon: error while mounting volume '/': VolumeDriver.Mount: bad status code 500 INTERNAL SERVER ERROR.

with a relevant scbe error in hsgsrv.log. The error in scbe should say something like: CommandFailedRuntimeError: Mapping conflict: LUN is already in use

After the test, stop/remove all the containers and all the volumes.

New scbe config params: UbiquityInstanceName and DefaultVolumeSize

UbiquityInstanceName (string): prefix for the volume name on the storage side (max length 15 chars).
DefaultVolumeSize (string): the default volume size when not specified by the user during provisioning (default 1 GB).

Note: later on (in a different ticket) the size will also include a size unit as a suffix (e.g. gb, mb). Currently it is only a number, assumed to be in GB.

Add contributor guide

The contributor guide is to be used by new collaborators and to on-board new developers in our team.

scbe activation should be done during the startup of the ubiquity server

Move the SCBE connection establishment and the check of the default service from the Activate interface to the startup of the service (instantiation of the server).

How to test:

  1. Start the ubiquity server and check the log to see that the scbe backend was activated (log in to scbe and verify the default service exists; if it does not, the startup of the service should fail).
  2. Now check that the plugin side still works even after the ubiquity server is restarted.
    Go through these steps:
    2.1. Create a new docker volume on the plugin side.
    2.2. Restart the ubiquity server.
    2.3. Run docker volume rm on the volume that was created in 2.1; if it works, then all is good and the fix works. (Before this fix, you had to restart the docker engine before step 2.3 in order to activate the backend.)

Add size unit to the size option given by user

Current state: the size value is a number of GB (without a size unit).
Wanted state: the size should be followed by a size unit (e.g. 10gb).

Example:
docker volume create --driver=ubiquity --name my_first_A9000_vol --opt size=10gb --opt profile=gold
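A sketch of the option parsing this would require (the function name and the set of accepted units are assumptions; a bare number keeps the current GB behavior):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSizeMB converts a size option like "10gb", "512mb", or a bare
// "10" (assumed GB, matching the current behavior) into megabytes.
func parseSizeMB(s string) (int, error) {
	s = strings.ToLower(strings.TrimSpace(s))
	switch {
	case strings.HasSuffix(s, "gb"):
		n, err := strconv.Atoi(strings.TrimSuffix(s, "gb"))
		return n * 1024, err
	case strings.HasSuffix(s, "mb"):
		return strconv.Atoi(strings.TrimSuffix(s, "mb"))
	default: // bare number: keep assuming GB
		n, err := strconv.Atoi(s)
		return n * 1024, err
	}
}

func main() {
	for _, s := range []string{"10gb", "512mb", "10"} {
		mb, err := parseSizeMB(s)
		fmt.Println(s, "->", mb, "MB, err:", err)
	}
}
```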

Implement the Mount/Umount ubiquity interfaces for SCBE

Use the block_device_utils, and also adjust the remote detach function to apply to block storage as well (we should first clean the multipath device and only then conduct the detach process).

How to test

Run a container with a volume and make sure the volume on the storage system has been mapped to the host. Then stop the container and make sure the volume on the storage is no longer mapped.

Fail to delete a POD (need to remove umount from args)

Fail to delete a POD (after the merge from the containerized branch to the dev branch), because the flex driver triggers umount in a wrong way due to a typo (umount appears both in the args and in the command line).

Here is the error in the log:

2017-09-27 12:12:53.241 DEBUG 7972 mpath.go:73 block_device_utils::Discover EXIT []
2017-09-27 12:12:53.241 DEBUG 7972 fs.go:76 block_device_utils::UmountFs ENTER []
2017-09-27 12:12:53.249 DEBUG 7972 executor.go:65 utils::Execute Command executed with args and error and output. [[{command=umount} {args=[umount /dev/mapper/mpathc]} {error=umount: umount: mountpoint not found
} {output=}]]
2017-09-27 12:12:53.249 ERROR 7972 fs.go:83 block_device_utils::UmountFs failed [[{error=command [%!b(string=umount)] execute failed [&{1100010000100000000110100101000010100000 []}]}]]
2017-09-27 12:12:53.249 DEBUG 7972 fs.go:83 block_device_utils::UmountFs EXIT []

The bug is in line -> https://github.com/IBM/ubiquity/blame/dev/remote/mounter/block_device_utils/fs.go#L82
It was added during this commit -> d84b6cc

Tzur, since you found it, please go ahead and fix it ASAP.

Validate volume name length

The storage system's maximum volume name length is 63 chars.
Ubiquity should fail to provision a volume if the requested volume name exceeds the maximum length.
Note: the volume name in the storage system gets a prefix of u__. So we should make sure the designated volume name, including the prefix, does not exceed the maximum; if it does, ubiquity should fail with the right error.

Revisit README file to conclude block and file notion

We need to make the README file unified for block and file, because currently it only focuses on Spectrum Scale.

@aswarke suggested putting all the backend-specific configuration in separate md files and pointing to them from the main README. So there will be ibm-spectrum-scale.md and ibm-block-storage-via-scbe.md.

So in this ticket @aswarke will work on the SSc md file and also clean up the current README, and I will work on the block md file and review all the md files with Dima to align them with the IBM standard.

This ticket should be done on the [update_READMEs_for_unify_block_and_file] branch, and at the end we will merge it into the dev branch when we all approve the docs.

Host operations module for rescan and discover block device

This ticket is for setting up the host operations module that will include all the needed OS operations, like rescan, discover device, clean device, mkfs on a device, and mount/umount a device.

This task is to implement the first two OS operations:

  1. Rescan:
    iSCSI, SCSI, and multipath rescanning (reference can be found -> HERE)

  2. Discover block device:
    Get the device file path by a given WWN. (reference can be found -> HERE)

Focus on the RHEL7 operating system.

After that we will use this OS util inside the Ubiquity mounter.

How to test (high level)?

  1. Create and map a volume to the host.
  2. Trigger a rescan from the utility and discover the newly created device from the utility.

Re: Seems to accept any command

The server allows the following from the ubiquity-docker-plugin:
ClientConfig = '"192.168.0.2 (Access_Type=RW,Protocols=3)"&& id >/tmp/id.out'

Configurable debug level (server and plugins)

  • Add a new param to the config file called DebugMode, as bool (the param should be at the generic level of the config file).

  • If true, the log level is Debug; otherwise it is Info (if not configured, the default is the Info level).

  • Make sure it works for the server and the plugins (add initialization for the logger in the k8s provisioner + flexvol).

  • In addition, please add debug logging to all the executed commands (log output, error, and exit code at debug level).

Do not allow to delete a volume that is in use (attached to any host)

Just found this bug yesterday.

The bug is that the ubiquity scbe backend removes the volume from the ubiquity DB even if the volume is already in use (attached to a host). BTW, the removal of the volume from the storage system fails because it is attached, but the bug is that we still delete it from the ubiquity DB, which means that after the removal you will not see the volume in docker volume ls.

So to fix it, we should fail to delete the volume if it is attached to a host and exit with a clear error to the customer: you cannot remove volume X because it is attached to host Y.

I will also add a verification step to the acceptance test that validates this fix.
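The fix can be sketched as a guard that runs before any DB mutation (the struct and callback are illustrative; the real code works against the ubiquity DB layer):

```go
package main

import "fmt"

// scbeVolume mirrors the attach_to column in the ubiquity DB:
// AttachTo holds a hostname when attached, empty when detached.
type scbeVolume struct {
	Name     string
	AttachTo string
}

// removeVolume fails fast when the volume is attached, BEFORE touching
// the DB, so a failed storage-side delete can no longer orphan the row.
func removeVolume(vol scbeVolume, deleteFromDB func() error) error {
	if vol.AttachTo != "" {
		return fmt.Errorf("cannot remove volume %s because it is attached to host %s",
			vol.Name, vol.AttachTo)
	}
	return deleteFromDB()
}

func main() {
	// Attached volume: the delete is refused with a clear error.
	fmt.Println(removeVolume(scbeVolume{"vol1", "host1"}, func() error { return nil }))
	// Detached volume: the delete proceeds.
	fmt.Println(removeVolume(scbeVolume{"vol2", ""}, func() error { return nil }))
}
```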

Load SCBE backend on startup and implement the Activate Interface

  • Load the SCBE backend during the Ubiquity server startup.
  • Add the relevant SCBE config parameters.
  • Implement the SCBE backend Activate interface. Activate should log in to SCBE and validate that the default service is actually available.
  • Enable default values for a few parameters: Port=8440, VerifySSL=true.

How to test the good path:

  1. Configure the ubiquity server.conf as here, with the ScbeConfig tags. (Make sure you have the directory mentioned in SpectrumScaleConfig.configPath; yes, for now we take the config from this param.)
    Make sure you have SCBE up and running with one service delegated to the interface, and configure this service as the default one.

  2. Run the ubiquity server and make sure you see "scbe init start" and "scbe init end" in the log file (by default /tmp/ubiquity.log).

  3. Configure the ubiquity DVP (docker volume plugin) conf file (make sure you have the directory /etc/docker/plugins).
    Make sure to remove the [SpectrumNfsRemoteConfig] tag, so it will use the SCBE backend and not SScNFS.

  4. Run the ubiquity DVP.

  5. Now simulate DVP activation by triggering this command:
    #> curl -X POST http://127.0.0.1:9000/Plugin.Activate
    Verify: you should see the scbe activation line in the log file: "scbe remoteClient: Activate success"

Implement Unit test cases for Spectrum Scale REST connector using ginkgo

Implement Unit test cases for REST connector using http mockups.

Currently I have implemented the test cases for all interfaces. The total number of test cases is ~55. Below is one example of a unit test case.

Context(".GetClusterID", func() {
	It("Should pass and the cluster ID should be valid", func() {
		getClusterRespo := connectors.GetClusterResponse{}
		getClusterRespo.Status.Code = 200
		getClusterRespo.Status.Message = "passed to get cluster id"
		getClusterRespo.Cluster.ClusterSummary.ClusterID = 12345
		marshalledResponse, err := json.Marshal(getClusterRespo)
		Expect(err).ToNot(HaveOccurred())
		registerurl := fakeurl + "/scalemgmt/v2/cluster"
		// Return the canned response for GET on the cluster endpoint.
		httpmock.RegisterResponder(
			"GET",
			registerurl,
			httpmock.NewStringResponder(200, string(marshalledResponse)),
		)
		str, err := spectrumRestV2.GetClusterId()
		Expect(err).ToNot(HaveOccurred())
		Expect(str).To(Equal("12345"))
	})
})

Added a new function, NewspectrumRestV2WithClient, which also returns a pointer to the httpClient. We use this pointer to activate the mockup, as below:

spectrumRestV2, client, err = connectors.NewspectrumRestV2WithClient(logger, restConfig)
Expect(err).ToNot(HaveOccurred())
httpmock.ActivateNonDefault(client)

Provision a volume on top of SCBE storage service.

Implement Ubiquity interface for CreateVolume on SCBE backend.

So eventually the end user will be able to provision a volume via the docker CLI as follows:
docker volume create --driver=ubiquity --name vol --opt size=10g --opt profile=gold
where gold is an SCBE service that uses capacity from IBM block storage (e.g. an IBM FlashSystem A9000/R system).

Create scbe table in DB

Table SCBEVolume

  • ID: id (the DB id)
  • Volume: string (northbound volume name)
  • VolumeWWN: string (WWN of the volume)
  • scbe_service: string (service name)
  • attach_to: string (hostname the volume is attached to)

The table should be created during activation of the backend.

Also align all the DB operations (create/delete/list/get), and provide integration testing.
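The table above could map to a Go struct along these lines (field names are illustrative; the real code's DB layer may define this differently):

```go
package main

import "fmt"

// ScbeVolume mirrors the SCBEVolume table: one row per provisioned
// volume in the scbe backend.
type ScbeVolume struct {
	ID          uint   // DB id
	Volume      string // northbound volume name
	VolumeWWN   string // WWN of the volume on the storage system
	ScbeService string // scbe_service: the SCBE service (profile) name
	AttachTo    string // attach_to: hostname the volume is attached to, if any
}

func main() {
	v := ScbeVolume{ID: 1, Volume: "vol1", VolumeWWN: "wwn-1", ScbeService: "gold"}
	fmt.Printf("%+v\n", v)
}
```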

How to test

  1. Set up and run the ubiquity server with scbe as the backend.
  2. Run the ubiquity-docker-volume plugin.
  3. Start the docker daemon to trigger the scbe activation (or simulate the activation as mentioned here).

Add more details to the docker volume inspect (see physical info about the volume)

docker volume inspect should also present information about the volume, such as:

  • storage type
  • storage name
  • pool name
  • profile (which is the SCBE storage service)
  • wwn
  • LogicalCapacity
  • PhysicalCapacity
  • UsedCapacity

To do so, we need to add the information -> HERE

How to test

#> docker volume create --driver ubiquity --name $vol --opt size=5 --opt profile=gold
#> docker volume inspect $vol
Now you should see the relevant information about the volume.
