
operations's Introduction

This repository is for collecting issues related to the operations of the NERC. This includes OpenShift & OpenStack clusters and will include more in the future.

operations's People

Contributors

joachimweyl, larsks

Stargazers

/Thor(sten)?/ Schwesig

Watchers

Justin Riley, Jim Culbert, Milstein Munakami, Tzu-Mainn Chen, Kristi Nikolla, Steve Heckman, Christopher Tate, Heidi Picher Dempsey, /Thor(sten)?/ Schwesig

operations's Issues

Debugging external-apps-ingress-controller in Prod cluster

I've done some looking around and I'm seeing what may be a typo in the ingressController yaml file on this line

I'm assuming it's meant to be:

spec:
  domain: apps.openshift.nerc.mghpcc.org

As opposed to:

spec:
  domain: apps.shift.nerc.mghpcc.org

This may not be the only thing that needs changing, seeing as there is also something going on with node scheduling (as seen in the conditions here), but it's a start.

Benchmark NESE-backed MariaDB against local-backed MariaDB

Create some tooling to run database benchmarks comparing MariaDB backed by local storage with MariaDB backed by NESE storage. This is similar to #6 but focuses specifically on database performance, and if we're lucky the benchmark process will give us the reproducer we want for #4.
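As a starting point, here is a minimal sketch of such tooling as a Kubernetes Job running mysqlslap from the stock mariadb image. The target service name (mariadb-nese) and Secret (mariadb-credentials) are placeholders; the same Job would be re-run against a local-storage-backed MariaDB service and the timings compared.

apiVersion: batch/v1
kind: Job
metadata:
  name: mariadb-benchmark-nese
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: mysqlslap
          image: docker.io/library/mariadb:10.6
          # Hypothetical target service backed by NESE storage; repeat with a
          # local-storage-backed MariaDB service and compare the results.
          command:
            - mysqlslap
            - --host=mariadb-nese
            - --user=root
            - --password=$(MARIADB_ROOT_PASSWORD)
            - --concurrency=10
            - --iterations=5
            - --auto-generate-sql
            - --auto-generate-sql-load-type=mixed
            - --number-of-queries=10000
          env:
            - name: MARIADB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-credentials
                  key: root-password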

Re-install nerc-ocp-prod cluster

Now that we've met most of our configuration goals for the nerc-ocp-prod cluster, we'd like to rebuild it for two reasons:

  1. The cluster was inadvertently installed with OCP 4.11 earlier in the year, and
  2. We want to demonstrate that the cluster can be rebuilt (and address any issues that crop up during the attempt).

Problems with external ingress service

I have case https://access.redhat.com/support/cases/#/case/03366346 open on some issues we've run into with the external ingress controller. Routes with the appropriate labels were getting matched by the external ingress controller as expected, but were being assigned a hostname using the domain of the default ingress controller.

From the initial case response:

Unfortunately routes exposed on non-default IngressControllers are exposed using the default cluster domain. We are aware of the issue and have an RFE 1 open to address this IngressController behavior.

For the time being there are some options to work around this:

In case there is only a single public domain needed, please consider the Ingress objects spec.appsDomain 2 parameter. Newly created user routes will be exposed on this domain instead of the cluster default domain.

In case route creating users are aware of the exposed FQDN, please consider the Route's spec.host 3 parameter which allows to specify the FQDN for the route.

In case route creating users are not aware of the exposed FQDN please consider the Route's spec.subdomain 3 parameter, which allows configuring the left-most 'host' part of the FQDN while honoring the non-default IngressController's spec.domain.

I am trying some of the suggested workarounds to see how they work out.
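For reference, rough sketches of the spec.host and spec.subdomain workarounds. The route name, backing service, and selector label are placeholders, and spec.subdomain assumes an OCP release recent enough to support it:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  labels:
    # hypothetical label matched by the external ingresscontroller's routeSelector
    nerc.mghpcc.org/external: "true"
spec:
  # Option 1 (spec.host): request an explicit FQDN in the external
  # ingresscontroller's domain.
  host: my-app.apps.shift.nerc.mghpcc.org
  to:
    kind: Service
    name: my-app
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  labels:
    nerc.mghpcc.org/external: "true"
spec:
  # Option 2 (spec.subdomain): specify only the left-most label; the
  # ingresscontroller appends its own spec.domain.
  subdomain: my-app
  to:
    kind: Service
    name: my-app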

Can't make calls to vault via kubectl exec (vault container not found)

I noticed the vault backup runs have been failing, with the error log in the pod reading (from vault backup pod):

Logging into nerc-vault-0 to retrieve leader
error: unable to upgrade connection: container not found ("vault")

The vault container itself is running (it's the underlying container for the nerc-vault-0-2 pods) and all the vault internals and services are up and running. I haven't seen this type of behavior from vault before; my initial impulse was that it might be some kind of permission error with the backup-job service account, but even if I run a command against vault from my local machine I get the same error:

oc exec -i nerc-vault-0 --as system:admin -- /bin/bash/
error: unable to upgrade connection: container not found ("vault")

So that seems to point back to the vault container itself.
I also want to note that the vault-specific commands weren't working directly in the nerc-vault-0-2 terminals in the OpenShift console until running a vault login on each.

Fix failing metallb operator install

The metallb operator install failed:

$ k -n metallb-system get csv | grep metallb-operator
NAME                                   DISPLAY                            VERSION               REPLACES                           PHASE
metallb-operator.4.10.0-202210211005   MetalLB Operator                   4.10.0-202210211005                                      Failed

Figure out why and fix it.

(Part of #32)

Fix argocd sync problems in nerc-ocp-prod overlay

While argocd is able to successfully sync the nerc-ocp-prod overlay, the application is never "healthy". There are broadly two categories of problems:

Things that are "Progressing"

  • The s3 and noobaa-mgmt Service resources in the openshift-storage namespace are both of type LoadBalancer, and since we have disabled automatic assignment of addresses by MetalLB these resources show up as "Pending":

    noobaa-mgmt                                       LoadBalancer   172.30.213.11    <pending>     80:30482/TCP,443:30017/TCP,8445:32706/TCP,8446:31216/TCP   5d23h
    s3                                                LoadBalancer   172.30.84.107    <pending>     80:31734/TCP,443:30154/TCP,8444:32121/TCP,7004:31076/TCP   5d23h
    

    However, both of these services are also exposed via a Route:

    NAME          HOST/PORT                                                             PATH   SERVICES      PORT         TERMINATION          WILDCARD
    noobaa-mgmt   noobaa-mgmt-openshift-storage.apps.nerc-ocp-prod.rc.fas.harvard.edu          noobaa-mgmt   mgmt-https   reencrypt/Redirect   None
    s3            s3-openshift-storage.apps.nerc-ocp-prod.rc.fas.harvard.edu                   s3            s3-https     reencrypt/Allow      None
    

    Since they are available as a Route, assigning a loadbalancer address is both unnecessary and dangerous: it would expose these services on the public Internet.

    It is possible to configure custom resource health checks for ArgoCD that will report "Healthy" in this situation; a minimal sketch follows below.
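    A minimal sketch of what such a health check could look like, assuming the argocd-cm ConfigMap in the openshift-gitops namespace is managed from the overlay (with the OpenShift GitOps operator this may instead belong in the ArgoCD custom resource):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: openshift-gitops
    data:
      resource.customizations.health.Service: |
        hs = {}
        hs.status = "Healthy"
        hs.message = ""
        -- Treat LoadBalancer services without an assigned address as healthy,
        -- since MetalLB auto-assignment is deliberately disabled.
        if obj.spec.type == "LoadBalancer" and (obj.status.loadBalancer == nil or obj.status.loadBalancer.ingress == nil) then
          hs.message = "Skipping loadbalancer address assignment check"
        end
        return hs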

Things that are "Out of sync"

In some situations, OpenShift makes a copy of an existing resource (e.g., for the custom logo, it copies the ConfigMap from the openshift-config namespace to the openshift-console namespace). ArgoCD uses labels to identify which resources belong to a given ArgoCD application, and these copies inherit all the labels from the original resources. This means that ArgoCD sees resources that appear to belong to an application but do not appear in the source manifests, so it marks them as "requires pruning".

We see this problem with the custom logo (openshift-console/custom-logo) and one of the OAuth configurations (openshift-authentication/v4-0-config-user-idp-0-client-secret).

There should be a way to exclude these from consideration.
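One possible approach, assuming an ArgoCD release recent enough (2.2+) to support it, is to switch resource tracking from labels to an annotation, so the label-inheriting copies no longer look like application resources. A sketch of the argocd-cm change:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: openshift-gitops
data:
  # Track application resources via an ArgoCD-managed annotation instead of
  # the app.kubernetes.io/instance label, so copies such as
  # openshift-console/custom-logo stop showing up as "requires pruning".
  application.resourceTrackingMethod: annotation

This only changes how ArgoCD identifies ownership; it doesn't require touching the copied resources themselves.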

Create wildcard DNS entry for *.apps.shift.nerc.mghpcc.org

After allocating an address for the public ingress controller (#21), we need to create a wildcard DNS entry so that <anything>.apps.shift.nerc.mghpcc.org will map to that address. (I'm not wedded to that domain name if folks prefer something else; that's just what's currently configured in the external ingresscontroller manifest).

Enable local gateway mode on nerc-ocp-prod cluster

In order to properly route connections to the NESE cluster, we need to enable local gateway mode in our OVNKubernetes configuration. We've already done this on the infra cluster, so in theory we just need to include the right resource in the nerc-ocp-prod overlay.
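For reference, a minimal sketch of the cluster network operator configuration that enables local gateway mode; in practice this is usually applied as a patch to the existing cluster Network config rather than a standalone manifest, and it should mirror what is already applied on the infra cluster:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      gatewayConfig:
        # Route pod egress traffic through the host routing table
        # ("local" gateway mode) instead of routing it via OVN.
        routingViaHost: true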

Configure vault to use local storage

We would like to remove the chicken-and-egg dependency between Vault and ODF. We can do this by having Vault use local storage instead of ceph-based PVs. Because of the way Vault has been deployed (a statefulset with one instance on each node, each with its own backing store), this doesn't carry the usual disadvantages of local storage: node-locked storage doesn't cause any problems here.

@jtriley
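A minimal sketch of what the change could look like, assuming the existing deployment uses the HashiCorp Vault helm chart and that a node-local StorageClass (called local-vault here purely as a placeholder) is available, e.g. via the Local Storage Operator:

server:
  ha:
    enabled: true
    raft:
      enabled: true
  dataStorage:
    enabled: true
    size: 10Gi
    # Hypothetical node-local StorageClass; each vault replica gets its own
    # node-locked PV, which is fine because each instance has its own backing store.
    storageClass: local-vault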

Correct failing nmstate configuration

(Part of #32)

After re-installing the nerc-ocp-prod cluster, nmstate is failing to apply our configuration on some nodes:

$ k get nnce
NAME                    STATUS
wrk-10.vlan-2173-nese   Available
wrk-11.vlan-2173-nese   Available
wrk-13.vlan-2173-nese   Available
wrk-14.vlan-2173-nese   Available
wrk-16.vlan-2173-nese   Available
wrk-17.vlan-2173-nese   Available
wrk-18.vlan-2173-nese   Available
wrk-7.vlan-2173-nese    Failing
wrk-8.vlan-2173-nese    Available
wrk-9.vlan-2173-nese    Aborted

We need to figure out why we're seeing these failures.

Monitoring for additional events and metrics.

In order to monitor the cluster for hardware faults, we would like to be able to monitor and alert on the following events:

  • Server power supply failure
  • Internal cooling fan failure
  • Network interface errors
  • Server memory errors - (possibly using RDAC driver)
  • Local disk predictive failure - (not CEPH, using tools like smartctl or raid controller utilities)
  • Local disk filling up - (not CEPH, maybe a script with lsblk and df)
  • Automated operator upgrades.

Could you please let us know whether this information is already being collected, or how we could gain visibility into these events?

It would be most helpful for us to have some guidance on how to implement additional monitoring and metrics collection and integrate it into the monitoring infrastructure you have built. We are always adding checks to monitor a production system based on issues we experience. We would like to be able to collect data both from the operating-system level and from the service processors (iLO, BMC, iDRAC, depending on the hardware vendor).

Thank you!

  • Augustine
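For the "local disk filling up" item above, a minimal sketch of a PrometheusRule built on the standard node_exporter filesystem metrics; the threshold, duration, and namespace are placeholders to adjust:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-local-disk
  namespace: openshift-monitoring
spec:
  groups:
    - name: node-local-disk
      rules:
        - alert: NodeFilesystemAlmostFull
          # Fires when a non-tmpfs filesystem has had less than 10% free
          # space for 30 minutes.
          expr: |
            node_filesystem_avail_bytes{fstype!~"tmpfs|ramfs"}
              / node_filesystem_size_bytes{fstype!~"tmpfs|ramfs"} < 0.10
          for: 30m
          labels:
            severity: warning
          annotations:
            summary: "Filesystem {{ $labels.mountpoint }} on {{ $labels.instance }} is over 90% full"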

Creation of new Slack channel for alerts

Naved, could you please set up a new Slack channel for our alerts? It looks like I will need a webhook URL and the channel name. Here is the documentation for Slack alerts, and a sample config if it helps:

https://prometheus.io/docs/alerting/latest/configuration/#slack_config

global:
  slack_api_url: '<slack_webhook_url>'

route:
  receiver: 'slack-notifications'
  group_by: [alertname, datacenter, app]

receivers:
- name: 'slack-notifications'
  slack_configs:
  - channel: '#alerts'
    text: 'https://internal.myorg.net/wiki/alerts/{{ .GroupLabels.app }}/{{ .GroupLabels.alertname }}'

Configure alerts for ODF in external ceph mode

This is related to #38: it would be nice to have alerting when we get close to the ceph pool quota limit.

However, it appears ODF doesn't configure alerts for external mode ceph:

For internal Mode clusters, various alerts related to the storage metrics services, storage cluster, disk devices, cluster health, cluster capacity, and so on are displayed in the Block and File, and the object dashboards. These alerts are not available for external Mode.

see: https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.10/html/monitoring_openshift_data_foundation/alerts

We might need to push our own ceph metrics via a custom deployment and setup alerts from observability assuming we don't get this for free from ODF.
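If we do go the custom-metrics route, a rough sketch of a quota alert built on the ceph-mgr prometheus module's pool metrics; the metric names are from the upstream module and the threshold is a placeholder:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: external-ceph-pool-quota
spec:
  groups:
    - name: external-ceph-pool-quota
      rules:
        - alert: CephPoolNearQuota
          # The "> 0" guard drops pools without a quota, which would
          # otherwise divide by zero.
          expr: |
            ceph_pool_stored / (ceph_pool_quota_bytes > 0) > 0.85
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "Ceph pool is above 85% of its quota"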

Exclude internal ManagedClusterInfo from argocd sync

When we create a new ManagedCluster resource in ACM, that results in ACM creating an internal ManagedClusterInfo resource. It propagates the labels from the ManagedCluster resource onto this new resource, which causes conflicts in ArgoCD.

We can exclude the ManagedClusterInfo resources from consideration by updating the ArgoCD argocd-cm ConfigMap as shown here.
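For reference, a sketch of that kind of exclusion (the namespace assumes the default OpenShift GitOps install):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: openshift-gitops
data:
  # Tell ArgoCD to ignore ManagedClusterInfo resources entirely so the
  # label-propagated copies no longer cause sync conflicts.
  resource.exclusions: |
    - apiGroups:
        - internal.open-cluster-management.io
      kinds:
        - ManagedClusterInfo
      clusters:
        - "*"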

Observability only working with infra nodes

I noticed that Observability is only reporting the infra cluster nodes and not the prod nodes. It used to work in our old prod cluster that was deleted. It's probably because we don't have a consistent node label and node selector in Observability for all the nodes.

The Observability dashboard only shows the local-cluster, but we would like it to show the nerc-ocp-prod cluster as well.

See the Observability docs about the node selector.

Does anyone know a good node selector we can use for infra and prod?

Vault on nerc-ocp-infra is inoperative

There appear to be problems with the Vault service on nerc-ocp-infra. All externalsecrets on both clusters are failing to sync, and I am unable to log into the web ui.

Create external loadbalancer for nerc-ocp-prod

Since we've been unable to get our metallb solution working correctly (#16), let's set up a front-end proxy to handle public access to the nerc-ocp-prod cluster. This will require running a couple of instances of haproxy and keepalived on a pair of machines (either physical nodes or virtual machines).

The backend will be the external ingress service we've created on the cluster, accessed as a nodeport service on the worker nodes.
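For reference, a sketch of the cluster-side half of this, assuming the external ingresscontroller is published as a NodePort service; the name "external" and the routeSelector label are assumptions and would need to match the existing manifest:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: external
  namespace: openshift-ingress-operator
spec:
  domain: apps.shift.nerc.mghpcc.org
  # Publish the router as a NodePort service on the workers; the haproxy
  # instances then forward public traffic to those node ports.
  endpointPublishingStrategy:
    type: NodePortService
  routeSelector:
    matchLabels:
      nerc.mghpcc.org/external: "true"   # hypothetical selector label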

Fix operator subscription issues on nerc-ocp-infra

The subscriptions for some of the installed operators on nerc-ocp-infra look unhealthy.

$ kubectl get -A subs -o custom-columns='NS:{.metadata.namespace},NAME:{.metadata.name},STATE:{.status.state}'
NS                           NAME                                                                         STATE
external-secrets-operator    external-secrets-operator                                                    AtLatestKnown
grafana                      grafana-operator                                                             AtLatestKnown
group-sync-operator          group-sync-operator                                                          AtLatestKnown
multicluster-engine          multicluster-engine                                                          UpgradePending
open-cluster-management      acm                                                                          AtLatestKnown
openshift-logging            cluster-logging                                                              AtLatestKnown
openshift-nmstate            kubernetes-nmstate-operator                                                  AtLatestKnown
openshift-operators-redhat   elasticsearch-operator                                                       AtLatestKnown
openshift-operators-redhat   loki-operator                                                                AtLatestKnown
openshift-operators          cert-manager                                                                 AtLatestKnown
openshift-operators          openshift-gitops-operator                                                    AtLatestKnown
openshift-operators          openshift-pipelines-operator-rh                                              AtLatestKnown
openshift-storage            mcg-operator-stable-4.10-redhat-operators-openshift-marketplace              UpgradePending
openshift-storage            ocs-operator-stable-4.10-redhat-operators-openshift-marketplace              UpgradePending
openshift-storage            odf-csi-addons-operator-stable-4.10-redhat-operators-openshift-marketplace   UpgradePending
openshift-storage            odf-operator                                                                 UpgradePending
patch-operator               patch-operator                                                               AtLatestKnown

The operators marked UpgradePending are all having some sort of difficulty with updates.

Database connection timeouts/performance issues

@rob-baron has reported persistent problems with database connection timeouts while working with xdmod. In order to resolve these problems we need to better characterize the issue. There are several things we should do:

I'm going to create separate issues for each of the above and assign them to folks. Please feel free to re-assign if you think I've made the wrong choice.

Size NESE allocation correctly for nerc-ocp-infra cluster

We ran out of space today on the 1TB RBD pool currently allocated to the nerc-ocp-infra cluster. @jtriley has temporarily increased the pool size, but we would like to get a better sense of how our utilization has been increasing since we installed the observability tools, and use that to select a more appropriate pool allocation.

Certificate signed by unknown authority

Some operations on the nerc-ocp-infra cluster (kubectl exec, kubectl logs) are failing with:

Error from server: Get "https://10.30.9.9:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-ctl-2/kube-apiserver": x509: certificate signed by unknown authority

Most other API operations are working correctly. I've opened https://access.redhat.com/support/cases/#/case/03308393 on this issue and I'll be investigating the problem while I wait for a support response.

Figure out why keycloak connection isn't working

After @knikolla's OCP-on-NERC/nerc-ocp-config#63 was merged, we are seeing the following in the logs for the authentication-operator pod:

authentication-operator-7449fd4467-j4rbf authentication-operator E1013 13:47:28.510112       1 base_controller.go:272] ConfigObserver reconciliation failed: failed to apply IDP mss-keycloak config: dial tcp 140.247.152.15:443: i/o timeout

This is likely a firewall issue somewhere; @jtriley can probably confirm.

noobaa service does not work on nerc-ocp-prod

Only the RBD storage class is available:

naved@computer oct-28-2022 % oc get sc
NAME                                             PROVISIONER                          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ocs-external-storagecluster-ceph-rbd (default)   openshift-storage.rbd.csi.ceph.com   Delete          Immediate           true                   39m
naved@computer oct-28-2022 % oc get pods |grep -i noobaa
noobaa-core-0                                      0/1     Pending   0             39m
noobaa-db-pg-0                                     0/1     Pending   0             39m
noobaa-operator-5c59665fdb-rdgbn                   1/1     Running   2 (39m ago)   4h38m

The noobaa pods are stuck in the Pending state. I am going to open a ticket with Red Hat.

How to manage Vault moving forward?

I'm opening this issue as an extension of https://github.com/OCP-on-NERC/nerc-ocp-config/issues/35.

Once OCP-on-NERC/nerc-ocp-config#77 is merged we can test that the backup script is working. When we can confirm this we will be able to decide how we'd ultimately like to have vault configured and managed.

We have 2 different paths:

  1. Continue moving forward with the manual vault deployment
     • If we choose this path, there is no external configuration of vault available
     • We'd want to find a way to track the internal vault configuration. The short answer to how Operate First does this is that they don't, because there's no way around manually creating and updating something like a text file or markdown file
     • We'd need to handle auto unseal of the vault. This means either enabling vault's auto unseal feature or creating a script to unseal the vault on our own
  2. Move our integration to the banzaicloud vault operator
     • The issue preventing me from utilizing this in the first place was that it doesn't work out of the box from OperatorHub. With some tweaks I have finally had success deploying the operator (though I still need to test it against all the tooling we are using with vault currently)
     • It would allow for the configuration and management of vault (policies, secret engines, etc.) via gitops through a vault CRD. This would solve our problem of documenting our vault config
     • There are also auto unseal tools that this operator provides

Based on discussion with @jtriley a while back, I think we'd ultimately prefer the operator path, but what do we think? I will continue testing the operator locally to ensure that it plays nice with ESO, and I can explore the auto unseal options if we decide the operator is the way to go.
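To make option 2 more concrete, here is a rough sketch of the bank-vaults Vault custom resource the operator path would give us, with policies, secret engines, and unseal handling expressed declaratively; all names and values here are illustrative, not our current config:

apiVersion: vault.banzaicloud.com/v1alpha1
kind: Vault
metadata:
  name: nerc-vault
  namespace: vault
spec:
  size: 3
  image: vault:1.11.4
  # Auto-unseal: the operator stores the unseal keys in a Kubernetes Secret.
  unsealConfig:
    kubernetes:
      secretNamespace: vault
  # Declarative vault configuration, which is what would let us track
  # policies and secret engines in git.
  externalConfig:
    policies:
      - name: nerc-read
        rules: |
          path "nerc/data/*" {
            capabilities = ["read"]
          }
    secrets:
      - path: nerc
        type: kv
        options:
          version: 2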

External Secrets operator failing to authenticate to Vault

Newly created ExternalSecret resources are failing to sync. The ExternalSecret resources show:

status:
  conditions:
  - lastTransitionTime: "2022-09-09T14:17:19Z"
    message: could not get secret data from provider
    reason: SecretSyncedError
    status: "False"
    type: Ready

Watching the logs in the external-secrets-operator namespace, we see:

cluster-external-secrets-5df9f66474-8ncpb external-secrets {"level":"error","ts":1662733071.9526072,"logger":"controllers.ExternalSecret","msg":"could not get secret data from provider","ExternalSecret":"default/test-secret","SecretStore":"/nerc-cluster-secrets","error":"cannot read secret data from Vault: Error making API request.\n\nURL: GET http://nerc-vault.vault.svc:8200/v1/nerc/data/test-secret\nCode: 403. Errors:\n\n* 1 error occurred:\n\t* permission denied\n\n","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}

This all worked at some point, because most of our ExternalSecret resources are Synced:

| NAMESPACE                             | NAME                                                                       | STATUS            |
| ---------                             | ----                                                                       | ------            |
| default                               | github-client-secret                                                       | SecretSynced      |
| default                               | github-ocp-on-nerc                                                         | SecretSynced      |
| default                               | rook-ceph-external-cluster-details                                         | SecretSynced      |
| default                               | test-secret                                                                | SecretSyncedError |
| dex                                   | dex-clients                                                                | SecretSynced      |
| grafana                               | github-client-secret                                                       | SecretSynced      |
| grafana                               | oauth-client-secret                                                        | SecretSynced      |
| lars-sandbox                          | vault-example                                                              | SecretSynced      |
| loki-namespace                        | loki-thanos-object-storage                                                 | SecretSynced      |
| nerc                                  | jtriley-bmc-creds                                                          | SecretSyncedError |
| open-cluster-management-observability | open-cluster-management-observability-multiclusterhub-operator-pull-secret | SecretSynced      |
| open-cluster-management-observability | open-cluster-management-observability-thanos-object-storage                | SecretSynced      |
| openshift-config                      | default-api-certificate                                                    | SecretSynced      |
| openshift-config                      | github-client-secret                                                       | SecretSynced      |
| openshift-config                      | github-ocp-on-nerc                                                         | SecretSynced      |
| openshift-ingress                     | default-ingress-certificate                                                | SecretSynced      |
| openshift-logging                     | openshift-logging-lokistack-gateway-bearer-token                           | SecretSynced      |
| openshift-operators-redhat            | loki-thanos-object-storage                                                 | SecretSynced      |
| openshift-operators-redhat            | lokistack-config-custom                                                    | SecretSynced      |
| openshift-storage                     | rook-ceph-external-cluster-details                                         | SecretSynced      |
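For reference while debugging, this is roughly what the ClusterSecretStore implied by the log above looks like; the server URL and path come from the error message, the auth role and service account names are placeholders, and the API version depends on the ESO release. The permission-denied error suggests the Vault policy bound to that role does not grant read access on the new paths:

apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: nerc-cluster-secrets
spec:
  provider:
    vault:
      server: http://nerc-vault.vault.svc:8200
      path: nerc
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          # Hypothetical role and service account; the Vault policy attached
          # to this role must allow "read" on nerc/data/*.
          role: external-secrets
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets-operator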

Noobaa on nerc-ocp-infra is down

The object storage service on nerc-ocp-infra is down:

$ k -n openshift-storage get backingstore
NAME                           TYPE      PHASE      AGE
metrics-backing-store          pv-pool   Rejected   32d
noobaa-default-backing-store   pv-pool   Rejected   74d

And:

$ k -n openshift-storage get backingstore noobaa-default-backing-store  -o jsonpath='{.status}' | jq
{
  "conditions": [
    {
      "lastHeartbeatTime": "2022-07-01T02:53:17Z",
      "lastTransitionTime": "2022-09-13T14:29:09Z",
      "message": "BackingStorePhaseRejected",
      "reason": "Backing store mode: ALL_NODES_OFFLINE",
      "status": "Unknown",
      "type": "Available"
    },
    {
      "lastHeartbeatTime": "2022-07-01T02:53:17Z",
      "lastTransitionTime": "2022-09-13T14:29:09Z",
      "message": "BackingStorePhaseRejected",
      "reason": "Backing store mode: ALL_NODES_OFFLINE",
      "status": "False",
      "type": "Progressing"
    },
    {
      "lastHeartbeatTime": "2022-07-01T02:53:17Z",
      "lastTransitionTime": "2022-09-13T14:29:09Z",
      "message": "BackingStorePhaseRejected",
      "reason": "Backing store mode: ALL_NODES_OFFLINE",
      "status": "True",
      "type": "Degraded"
    },
    {
      "lastHeartbeatTime": "2022-07-01T02:53:17Z",
      "lastTransitionTime": "2022-09-13T14:29:09Z",
      "message": "BackingStorePhaseRejected",
      "reason": "Backing store mode: ALL_NODES_OFFLINE",
      "status": "Unknown",
      "type": "Upgradeable"
    }
  ],
  "mode": {
    "modeCode": "ALL_NODES_OFFLINE",
    "timeStamp": "2022-09-13 00:57:58.884840877 +0000 UTC m=+244737.625867123"
  },
  "phase": "Rejected"
}

PVC storage is working just fine.

Fix nerc-ocp-infra certificate issue

nerc-ocp-infra is not presenting a valid certificate, at least for the openshift-gitops app:

$ curl https://openshift-gitops-server-openshift-gitops.apps.nerc-ocp-infra.rc.fas.harvard.edu/
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

Ensure we have valid certificates deployed and that there aren't any configuration issues.

Figure out why image-registry has not yet successfully rolled out

naved@computer ~ % oc get clusterversion version
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0    True        False         83d     Error while reconciling 4.11.0: the cluster operator image-registry has not yet successfully rolled out

Red Hat support wants us to get this to reconcile before proceeding with fixing noobaa. They recommended that I open another ticket for this.

This could be an issue of OCP not being reconciled and rolled out successfully, therefore having a waterfall effect on NooBaa.

@larsks @dystewart do you have any insight into what could be causing this? If not, I'll open another ticket.

Allow nerc-ops users to modify argocd applications

When logged in via GitHub, we have read-only permissions by default and must use impersonation to make changes to the cluster. This works fine from the CLI or within the main openshift console UI, but it is problematic when interacting with argocd: we would like nerc-ops members to be able to manage the "auto sync" setting for applications without requiring elevated privileges.

Investigate and apply the necessary RBAC to allow nerc-ops members to manage argocd application sync policy.

Label nodes for public ingress

The interface configuration for the public network interface will only apply to nodes labelled with nerc.mghpcc.org/external-ingress: "true". We need to re-apply this label to a set of appropriate nodes.

(Part of #32)
