singlestoredb-dev-image's Introduction

SingleStoreDB Dev Image

The SingleStoreDB Dev Image is the fastest way to develop with SingleStore on your laptop or in a CI/CD environment (including GitHub Actions). This Docker image is not supported for production workloads or benchmarks, so please keep that in mind when using it.

If you have any questions or issues, please file an issue on the GitHub repo or our forums.

How to run the Docker image?

This Docker image allows up to 4 leaf nodes to be run on a machine (host) that has up to 32 cores and up to 128GB of RAM without the need for a license.

When running a Docker container, either with or without a free edition license, on a machine with:

  • More than 32 cores, include the --cpus=32 option.
  • More than 128GB of RAM, include the --memory=128g option, as shown in the example below.
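
For example, on a machine with 64 cores and 256GB of RAM, you might start the container with both flags (a sketch based on the options above):

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR SINGLESTORE ROOT PASSWORD" \
    --cpus=32 --memory=128g \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    ghcr.io/singlestore-labs/singlestoredb-dev:latest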

Refer to What are license units and how do they apply to my cluster? for more information on license units and how they are calculated.

Note: We recommend using an explicit image version tag whenever possible; see the list of image tags and/or the changelog.
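
For example, to pull a pinned tag (0.2.21 is one tag referenced elsewhere on this page; check the tags list for the current release):

docker pull ghcr.io/singlestore-labs/singlestoredb-dev:0.2.21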

How to run the Docker image on Linux or Apple Intel?

See the next section for Apple Silicon (M1/M2) instructions.

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR SINGLESTORE ROOT PASSWORD" \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    ghcr.io/singlestore-labs/singlestoredb-dev:latest

If you want to configure a specific license, simply pass it as an environment variable:

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR SINGLESTORE ROOT PASSWORD" \
    -e SINGLESTORE_LICENSE="YOUR SINGLESTORE LICENSE" \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    ghcr.io/singlestore-labs/singlestoredb-dev:latest

How to run the Docker image on Apple Silicon (M1/M2 chips)?

First, make sure you are using Docker Desktop, which supports the latest virtualization technology on Apple Silicon machines.

Second, you must enable "Use Virtualization framework" and "Use Rosetta for x86_64/amd64 emulation on Apple Silicon" in Docker Desktop preferences. If you do not do this, SingleStore will run very slowly and consume a lot of power on your Mac. You can find these settings under Docker Desktop > Preferences > General.

Screenshot of enabling the new Virtualization framework in Docker's preferences

Third, run the following command to start SingleStore. Note the --platform linux/amd64 flag, which is required when running on Apple Silicon.

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR SINGLESTORE ROOT PASSWORD" \
    --platform linux/amd64 \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    ghcr.io/singlestore-labs/singlestoredb-dev:latest

How to run the Docker image on Windows?

Windows PowerShell and Command Prompt (CMD) work best if you run the command on a single line. Alternatively, you can use backticks for line continuation in PowerShell, as shown in the second example below. Either way, the following single-line version of the command will work on Windows.

docker run -d --name singlestoredb-dev -e ROOT_PASSWORD="YOUR SINGLESTORE ROOT PASSWORD" -p 3306:3306 -p 8080:8080 -p 9000:9000 ghcr.io/singlestore-labs/singlestoredb-dev:latest
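
For reference, the equivalent multi-line command in PowerShell uses backtick line continuations:

docker run `
    -d --name singlestoredb-dev `
    -e ROOT_PASSWORD="YOUR SINGLESTORE ROOT PASSWORD" `
    -p 3306:3306 -p 8080:8080 -p 9000:9000 `
    ghcr.io/singlestore-labs/singlestoredb-dev:latest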

How to open a SQL shell?

The image includes a shell which you can run interactively using docker exec like so:

docker exec -it singlestoredb-dev singlestore -p

The above command will prompt you for the root password. You can also provide the root password on the command line immediately after the -p flag, like so:

docker exec -it singlestoredb-dev singlestore -pYOUR_ROOT_PASSWORD

You can also connect to SingleStore from any MySQL-compatible client on your own machine using the following connection details:

Key       Value
Host      127.0.0.1
Port      3306
Username  root
Password  YOUR_ROOT_PASSWORD
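
For example, using the mysql command-line client (assuming it is installed on your machine):

mysql -u root -pYOUR_ROOT_PASSWORD -h 127.0.0.1 -P 3306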

How to access the SingleStore Studio UI?

SingleStore Studio is a convenient way to manage SingleStoreDB and run queries via a browser-based UI. Studio runs by default on port 8080 in the container. Assuming you have forwarded port 8080 to your local machine, you can access Studio at http://localhost:8080.

When opening Studio you will see a login screen. Use the username root and the ROOT_PASSWORD you set when starting the container.

Where can I learn how to use SingleStoreDB?

Now that you have SingleStore running, please check out our official documentation for guides on what to do next.

How to access the Data API?

In addition to supporting the MySQL protocol, SingleStore also has a JSON-over-HTTP protocol called the Data API, which you can access on port 9000 in the container. Assuming you have forwarded port 9000 to your local machine, the following curl command demonstrates how you can use the Data API:

curl -s -XPOST -H "content-type: application/json" -d '{ "sql": "select 1" }' root:YOUR_ROOT_PASSWORD@localhost:9000/api/v1/query/rows
{
  "results": [
    {
      "rows": [
        {
          "1": 1
        }
      ]
    }
  ]
}

Note: For more information on how to use the Data API, please visit the documentation.
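
The Data API also exposes an exec endpoint for statements that do not return rows. A minimal sketch, assuming the same port forwarding as above:

curl -s -XPOST -H "content-type: application/json" \
    -d '{ "sql": "CREATE DATABASE IF NOT EXISTS example" }' \
    root:YOUR_ROOT_PASSWORD@localhost:9000/api/v1/exec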

How to use WebAssembly (Wasm) backed functions?

The Code Engine feature in SingleStoreDB supports creating functions (UDFs and TVFs) using code compiled to WebAssembly (Wasm). This feature supports any language that can compile to the Wasm core specification, which allows you to create UDFs in a language of your choice using existing code libraries and run them in a sandboxed environment for enhanced security. It uses a linear memory model and provides hard memory protection boundaries.

This Docker image has Wasm functions enabled by default. You can learn how to compile and load Wasm UDFs and UDAs into SingleStoreDB in our docs.
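
As a rough sketch of what loading a Wasm UDF looks like from the SQL shell (the function and file names here are illustrative; see the docs for the exact syntax and a full walkthrough):

CREATE FUNCTION power_of AS WASM
    FROM LOCAL INFILE 'power.wasm'
    WITH WIT FROM LOCAL INFILE 'power.wit';

SELECT power_of(2, 10);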

How to use Docker volumes for persistent storage?

Note: If you specify a SingleStore version at runtime, your data will be overwritten each time the container restarts, because the container re-downloads and re-initializes the specified version on every run.

To get around this, build a custom image with your specific version.

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR ROOT PASSWORD" \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    -v my_cool_volume:/data \
    ghcr.io/singlestore-labs/singlestoredb-dev

After creating the container with a volume, you can re-create the container using the same volume to keep your data around. This can be used to upgrade SingleStore to new versions without losing your data. Keep in mind that SingleStoreDB does not support downgrading, so make sure to take a backup of the volume before running the upgrade.
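
For example, one common way to back up a named Docker volume before upgrading (a sketch; adjust the volume name and output path as needed):

docker run --rm \
    -v my_cool_volume:/data:ro \
    -v "$(pwd)":/backup \
    alpine tar czf /backup/singlestore-data-backup.tar.gz -C /data .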

Note: In order to mount a host volume to the /data directory, you will need to chown the volume to UID=999 and GID=998 before mounting it. The volume will be initialized automatically if empty. Host volumes are only supported for the /data directory.
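
For example, using a hypothetical host directory:

mkdir -p /path/to/singlestore-data
sudo chown -R 999:998 /path/to/singlestore-data

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR ROOT PASSWORD" \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    -v /path/to/singlestore-data:/data \
    ghcr.io/singlestore-labs/singlestoredb-dev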

This Docker image has a number of volume mount points in addition to /data. Here is a table outlining each mount point along with a rough description of its contents:

mount path  description
/data       All of the data, config, and cache for the SingleStoreDB cluster.
/logs       All of the tracelog files containing information that can help debug the cluster or observe its current behavior.
/server     The installation directory containing server binaries and other installation state.

How to initialize this container with a SQL file?

When this Docker image starts for the first time, it checks whether an init.sql file exists in its filesystem. The default location is /init.sql, but it can be customized via the INIT_SQL environment variable. If init.sql is found, the container will run it against the database as soon as SingleStoreDB is ready.

One way to do this is to mount an init.sql file from your machine into the container using the -v flag. Here is an example:

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR ROOT PASSWORD" \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    -v ${PWD}/test/init.sql:/init.sql \
    ghcr.io/singlestore-labs/singlestoredb-dev

Replace ${PWD}/test/init.sql with an absolute path to the SQL file you want to initialize SingleStore with.

Note: /init.sql will only be run once. If you want to run it again, you will need to delete the file /data/.init.sql.done and then restart the container.
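
For example:

docker exec singlestoredb-dev rm /data/.init.sql.done
docker restart singlestoredb-dev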

How to set SingleStoreDB Global Variables?

SingleStoreDB can be configured through global variables, which are documented here. These variables can be set via environment variables when running the SingleStoreDB Dev Image, using the prefix SINGLESTORE_SET_GLOBAL_.

For example, if you want to set default_partitions_per_leaf to 1, you would do this:

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR ROOT PASSWORD" \
    -e SINGLESTORE_SET_GLOBAL_DEFAULT_PARTITIONS_PER_LEAF=1 \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    ghcr.io/singlestore-labs/singlestoredb-dev

Multiple environment variables can be specified if you want to configure multiple global variables, as shown below.
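
For example, to set two globals at once (maximum_memory is used here purely for illustration; check the documentation for supported variables and values):

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR ROOT PASSWORD" \
    -e SINGLESTORE_SET_GLOBAL_DEFAULT_PARTITIONS_PER_LEAF=1 \
    -e SINGLESTORE_SET_GLOBAL_MAXIMUM_MEMORY=4096 \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    ghcr.io/singlestore-labs/singlestoredb-dev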

If you specify a variable which is not supported by SingleStoreDB, the image will fail to start. You can see the full error message by inspecting the failed Docker container's logs using docker logs.

How to use a specific SingleStoreDB version?

The SingleStoreDB Dev Image uses the latest SingleStoreDB version available in the managed service by default. If you would prefer to use another SingleStoreDB version, you will need to either build a custom version of this image or specify the version at runtime by following the tutorials below.

What are some known SingleStoreDB versions?

You can use the version numbers in the first column of the following table to run a specific version of SingleStoreDB. If you want to use a particular patch version, specify that version instead.

SINGLESTORE_VERSION  description
8.7                  SingleStoreDB Self-Managed 8.7 (see the changelog for the latest patch version)
8.5                  SingleStoreDB Self-Managed 8.5 (see the changelog for the latest patch version)
8.1                  SingleStoreDB Self-Managed 8.1 (see the changelog for the latest patch version)
8.0                  SingleStoreDB Self-Managed 8.0 (see the changelog for the latest patch version)
7.8                  SingleStoreDB Self-Managed 7.8 (see the changelog for the latest patch version)
7.6                  SingleStoreDB Self-Managed 7.6 (see the changelog for the latest patch version)

How to build a custom version of this Docker Image with a specific SingleStoreDB Version?

The script /scripts/switch-version.sh can be used to build a custom version of this image. The fastest way to do this is with docker build, like so:

cat <<EOF | docker build -f - -t singlestoredb-dev:custom .
FROM ghcr.io/singlestore-labs/singlestoredb-dev
RUN /scripts/switch-version.sh SINGLESTORE_VERSION SINGLESTORE_LICENSE
EOF

Make sure to replace SINGLESTORE_VERSION and SINGLESTORE_LICENSE with the SingleStore version you want to use (see the table above) and your license key. After running this command, you will have a new Docker image called singlestoredb-dev:custom with the specified version of SingleStoreDB installed and ready to use.
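
You can then run the custom image just like the upstream image:

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR ROOT PASSWORD" \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    singlestoredb-dev:custom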

How to specify a SingleStoreDB version at runtime?

To use a specific version of SingleStoreDB at runtime, start the Docker container with the SINGLESTORE_VERSION environment variable set.

Warning: This method will result in the container taking much longer to start (roughly a minute), because it has to download and install SingleStoreDB each time. For this reason, we recommend building a custom version of this Docker image using the instructions above.

Here is an example of using the SINGLESTORE_VERSION environment variable to run SingleStoreDB 8.7:

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR ROOT PASSWORD" \
    -e SINGLESTORE_VERSION="8.7" \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    ghcr.io/singlestore-labs/singlestoredb-dev

Note: You can mount /server into a Docker volume to preserve the installed SingleStoreDB server binaries if you are unable to use the custom image method. This will improve subsequent startup performance at the expense of some complexity.
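
For example, a sketch combining the version pin with /server and /data volumes:

docker run \
    -d --name singlestoredb-dev \
    -e ROOT_PASSWORD="YOUR ROOT PASSWORD" \
    -e SINGLESTORE_VERSION="8.7" \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    -v my_server_volume:/server \
    -v my_data_volume:/data \
    ghcr.io/singlestore-labs/singlestoredb-dev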

How to use this container in a CI/CD environment?

This Docker image defines a healthcheck which runs every 5 seconds. Any CI/CD system or container runtime which respects the healthcheck should automatically wait for SingleStore to be running and healthy.
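
If your environment does not respect healthchecks, you can poll the container's health status yourself. A minimal sketch using docker inspect:

until [ "$(docker inspect -f '{{.State.Health.Status}}' singlestoredb-dev)" = "healthy" ]; do
    sleep 1
done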

How to run SingleStoreDB in GitHub Actions?

Here is an example workflow which runs SingleStore as a container service and queries it from the job.

name: my-workflow
on: push

jobs:
  my-job:
    runs-on: ubuntu-latest

    services:
      singlestoredb:
        image: ghcr.io/singlestore-labs/singlestoredb-dev
        ports:
          - 3306:3306
          - 8080:8080
          - 9000:9000
        env:
          ROOT_PASSWORD: test
          # Optional:
          # SINGLESTORE_LICENSE: ${{ secrets.SINGLESTORE_LICENSE }}

    steps:
      - name: sanity check using mysql client
        run: |
          mysql -u root -ptest -e "SELECT 1" -h 127.0.0.1

How to run SingleStoreDB in GitLab CI/CD?

Here is an example workflow which runs SingleStore as a service and queries it from the job. Unfortunately, GitLab does not support Docker healthchecks for services, so additional logic must be added to wait for SingleStore to be ready. There is a three-year-old issue to address this problem in GitLab, so hopefully this can be simplified eventually.

Note: You can add your SingleStore license key to GitLab secrets under the key SINGLESTORE_LICENSE.

image: debian

stages:
  - test

variables:
  ROOT_PASSWORD: test
  # Optional:
  # SINGLESTORE_LICENSE: $SINGLESTORE_LICENSE

testing:
  stage: test
  services:
    - name: ghcr.io/singlestore-labs/singlestoredb-dev:latest
      alias: singlestoredb-dev
  script:
    - apt update
    - apt install -y mariadb-client curl
    - curl -sI localhost:8080 --retry 30 --retry-connrefused --retry-delay 1
    - mysql -u root -ptest -h singlestoredb-dev -e "create database foo"
    - mysql -u root -ptest -h singlestoredb-dev -e "create table foo.bar (id int)"
    - mysql -u root -ptest -h singlestoredb-dev -e "insert into foo.bar values (1),(2),(3)"
    - mysql -u root -ptest -h singlestoredb-dev -e "select * from foo.bar"

How to upgrade from singlestore/cluster-in-a-box?

Before this image existed, there was another Docker image called singlestore/cluster-in-a-box. The docker run command for the previous image looked something like this:

docker run -i --init \
    --name singlestore-ciab \
    -e LICENSE_KEY=${LICENSE_KEY} \
    -e ROOT_PASSWORD=${ROOT_PASSWORD} \
    -p 3306:3306 -p 8080:8080 \
    singlestore/cluster-in-a-box

The differences between the old image and the new image are the following:

  • The image no longer needs to be initialized before you can use it
  • Startup time is much better - roughly 5 seconds with the new image versus a minute with the old image
  • The Data API and External Functions features are enabled by default
  • Upgrade between versions is supported and tested (downgrade is not supported)
  • The new image is distributed through the GitHub Container Registry rather than Docker Hub

In all cases we recommend using the new image unless you need to run an older version of SingleStore which has not been released in singlestoredb-dev-image.

Apple Silicon (M1/M2 chips) performance notes

In order to support running SingleStoreDB on Apple Silicon many of our performance optimizations are disabled. This can result in unexpectedly bad performance, especially during recovery (restarting SingleStoreDB) and when running queries for the first time.

To tune this performance impact (for better or worse), you can change the number of cores and the amount of RAM allocated to the Docker virtual machine by following the documentation here.

singlestoredb-dev-image's Issues

Unusable SingleStore deployment on M1 Mac

Describe the bug

I recently upgraded my Mac from Monterey 12.5 to Sonoma 14.5, and Docker Desktop from 4.16.1 (engine 20.10.22, compose v2.15.1) to 4.30 (engine 26.1.1, compose v2.27.0-desktop.2).

Prior to upgrading the OS and Docker, the SingleStore database worked as expected, though was a bit on the slow side. I was able to execute integration tests against the database which included various activities like:

  1. Multiple schema creation (targeted at a schema per tenant in a multi-tenant application)
  2. Running of migrations (creation of tables, loading via INSERT statements)
  3. Execution of read and write queries across schemas

After upgrading, the local SingleStore database on Docker is no longer tenable. The database fails in different ways across the three steps above depending on the Docker Desktop configuration:

  • "Use Virtualization framework": off; file sharing: osxfs
    Result: Schema creation and migrations succeed (albeit very slowly); tests fail during execution of read and write queries.
    Error: time="2024-05-28T15:40:23Z" level=error msg="Error 1777 (HY000): Partition xxx:0 has no master instance. This is likely because the node or nodes that hold a copy of the partition are down. Check for offline leaf nodes by running SHOW LEAVES and bring them back online to restore access to the partition"

  • "Use Virtualization framework": off; file sharing: gRPC FUSE
    Result: Schema creation and migrations succeed (albeit very slowly); tests fail during execution of read and write queries.
    Error: time="2024-05-28T16:02:20Z" level=error msg="Error 1777 (HY000): Partition xxx:0 has no master instance. This is likely because the node or nodes that hold a copy of the partition are down. Check for offline leaf nodes by running SHOW LEAVES and bring them back online to restore access to the partition"

  • "Use Virtualization framework": on; file sharing: VirtioFS; Rosetta for x86_64/amd64 emulation: off
    Result: Migrations fail.
    Error:
    286636318 2024-05-28 16:37:28.619 ERROR: Thread 99999 (ntid 342, conn id 29): ShardingAlterTableV6: Alter Table timed out sending PREPARE messages to all the leaves
    286636432 2024-05-28 16:37:28.620 WARN: Thread 99999 (ntid 342, conn id 29): operator(): Alter table on xxx.client has failed, rolling back transaction. Error: 2286: Operation ALTER timed out while waiting for concurrent operation to finish. Use SHOW PROCESSLIST to investigate long running concurrent operation, or consider increasing the value of alter statement's timeout value or the default_distributed_ddl_timeout global variable

  • "Use Virtualization framework": on; file sharing: VirtioFS; Rosetta for x86_64/amd64 emulation: on
    Result: Migrations fail.
    Error:
    ==> /var/lib/memsql/ce0473ab-fc9f-45ae-a5ea-0e1c6c236947/tracelogs/memsql.log <==
    276996548 2024-05-28 20:31:03.363 ERROR: Thread 99999 (ntid 293, conn id 28): ShardingAlterTableV6: Alter Table timed out sending PREPARE messages to all the leaves
    276997002 2024-05-28 20:31:03.364 WARN: Thread 99999 (ntid 293, conn id 28): operator(): Alter table on xxx.client has failed, rolling back transaction. Error: 2286: Operation ALTER timed out while waiting for concurrent operation to finish. Use SHOW PROCESSLIST to investigate long running concurrent operation, or consider increasing the value of alter statement's timeout value or the default_distributed_ddl_timeout global variable

  • "Use Virtualization framework": on; file sharing: osxfs; Rosetta for x86_64/amd64 emulation: on
    Result: Migrations fail.
    Error:
    ==> /var/lib/memsql/b91e2313-f74d-4cdb-847d-1bff188babe5/tracelogs/memsql.log <==
    286541859 2024-05-28 20:39:57.493 ERROR: Thread 99999 (ntid 332, conn id 28): ShardingAlterTableV6: Alter Table timed out sending PREPARE messages to all the leaves
    286542303 2024-05-28 20:39:57.494 WARN: Thread 99999 (ntid 332, conn id 28): operator(): Alter table on labsengpte.contact has failed, rolling back transaction. Error: 2286: Operation ALTER timed out while waiting for concurrent operation to finish. Use SHOW PROCESSLIST to investigate long running concurrent operation, or consider increasing the value of alter statement's timeout value or the default_distributed_ddl_timeout global variable

  • "Use Virtualization framework": on; file sharing: VirtioFS; Rosetta for x86_64/amd64 emulation: on; migration file causing alter failures (deadlock) removed
    Result: Schema creation and migrations succeed (albeit very slowly); tests time out after 20 minutes.

To Reproduce
Steps to reproduce the behavior:

  1. It is hard to share the schemas and migration files causing the issues, as they are for a proprietary application. I'm happy to share privately or work toward a smaller, representative test case.
    1. At a high level, there are 36 migration files applied across three separate schemas (or "databases"). There is a total of 36 tables in each schema (the migration files do not correlate 1-1 with the tables).

Expected behavior

I expect the database to succeed in executing the migration files and the query performance to be reasonable. Prior to the macOS and Docker upgrade, the test suite worked locally, though its runtime was high (> 10 minutes end-to-end).

Desktop (please complete the following information):

  • OS: macOS Sonoma 14.5

  • Chip: Apple M1 Max

  • Docker version: 26.1.1

    • Resources:
      • 10 cores
      • 16GB RAM
      • 1GB swap
      • Virtual Disk Limit: 200GB
      • Filesystem: varies, see above
  • Image tag: ghcr.io/singlestore-labs/singlestoredb-dev:0.2.21

Additional context
The sql-migrate tool is being used for migration execution

Container is not using ROOT_PASSWORD - Unable to log into cluster

Describe the bug
I can't log into the cluster.

When I navigate to http://localhost:8080/connect/localhost and log in with root as the username and admin as the password, I get a "The credentials are incorrect. Please verify them and try again." message.

To Reproduce
Steps to reproduce the behavior:

  1. Run a fresh container
docker run \
    -d --name singlestore \
    -e SINGLESTORE_LICENSE="private of course" \
    -e ROOT_PASSWORD=“admin” \
    -p 3306:3306 -p 8080:8080 -p 9000:9000 \
    -v singlestore_data:/data \
    ghcr.io/singlestore-labs/singlestoredb-dev:latest
  2. Go to http://localhost:8080/connect/localhost
  3. Log in with root as username and admin as password

Expected behavior
The ability to log in.

Desktop (please complete the following information):

  • OS: OSX
  • Docker version: 23.0.6, build ef23cbc
  • Image tag: ghcr.io/singlestore-labs/singlestoredb-dev:latest (22 July if that matters)

Additional context
I'm using OrbStack so I've got no Docker for Mac installed.
When I go into the container and run echo $ROOT_PASSWORD, I do get "admin" as the response.

Environment variable to configure node port(s)

Is your feature request related to a problem? Please describe.
I need to run singlestoredb-dev-image in a Kubernetes Pod alongside other infrastructure (including MySQL) to run tests. I am attempting to upgrade from my customized CIAB image, in which I had overridden the startup script so that PORT could be set in the environment. This would cause the master node to use the specified port, and the leaf node to use PORT + 1.

I attempted a similar customization for the singlestoredb-dev-image. However, it appears that due to the execution of init.sh within the original image, I am unable to simply modify the file within my image to alter the ports.

Here is the script I use to populate the port argument in sdb-admin create-node and sdb-admin add-leaf:

MASTER_PORT="${PORT:-3306}"
LEAF_PORT="$((MASTER_PORT + 1))"

Describe the solution you'd like
Add an environment variable to allow node port(s) to be customized and resolve port conflicts.

Describe alternatives you've considered
Writing my own Dockerfile, or figuring out an additional script to run to update the configurations or recreate the nodes (this is pretty unfavorable).

pull image not found

Hello SingleStore,

On CentOS 7, the following command returns an error:

docker pull ghcr.io/singlestore-labs/singlestoredb-dev:0.2.11
Trying to pull repository ghcr.io/singlestore-labs/singlestoredb-dev ...
Pulling repository ghcr.io/singlestore-labs/singlestoredb-dev
Error: image singlestore-labs/singlestoredb-dev:0.2.11 not found

How to solve it please?

Thanks

RESTORE DATABASE not working

Hello SingleStore,

I'm trying to restore databases from GCS. The command works well in a cluster deployed on a VM; however, the same command hangs in the Docker cluster.

SingleStore 8.1.25
Docker image: latest

Any idea how to solve it please?

Thanks

Create a build matrix and publish packages for each singlestore version.

Instead of the version-change script, it would be great to also have the option of pulling a specific SingleStore engine version image directly - great for CI/CD too.

Describe the solution you'd like
A GitHub Action that, on a version bump, would use a build matrix to build all the currently supported SingleStore engine versions into separate packages and publish them to GHCR or Docker Hub.

Describe alternatives you've considered
We can do this in our organisation already, but it would be nice if it were done by SingleStore themselves.

Additional context
Not sure how the versioning would work with the image version, maybe a naming convention?

For example, version 8.5.7 of the engine at the current release tag of 0.2.19

image: singlestore-db-dev-8-5-7:0.2.19

Or just:

image: singlestore-db-dev-8-5-7:latest

In docker-compose.yml.

RESTORE DATABASE using S3 not working with dev-image

Describe the bug

When trying to restore a full backup from an S3 storage to the dev-image, you get the following error:

"ERROR 2205 ER_SNAPSHOT_SUBPROCESS_EXITED_WITH_NONZERO_RESULT: Leaf Error (127.0.0.1:3307): Backup subprocess nonzero exit value. Failed to Download whole object".

Note 1: Tried with and without persistent storage.

Note 2: Tried on the same host with the same backup using the default installation, and it worked as expected.

Note 3: I didn't try another backup provider.

Steps to reproduce the behavior:

  1. Do a full backup on a normal node to an S3 storage.
  2. Try to restore this backup on the dev-image.
  3. See the error.

Desktop (please complete the following information):

  • OS: Ubuntu 22.04
  • Docker version:
  • Image tag: singlestore-labs/singlestoredb-dev:latest

Improving performance of the SingleStore dev Docker image on Apple M1/M2 machines

Is your feature request related to a problem? Please describe.

This is a feature request for improving the performance of the SingleStore dev Docker image on Apple M1/M2 machines.

The GitHub page states:
"In order to support running SingleStoreDB on Apple Silicon many of our performance optimizations are disabled. This can result in unexpectedly bad performance...
To tune this performance impact (either faster or slower) you can change the number of cores and amount of RAM..."

Unfortunately, the above suggestion is not enough: giving more CPU and RAM to Docker Desktop does not help much, and some operations in SingleStore running on M1/M2 machines inside Docker Desktop still take minutes instead of seconds.

When this Docker image is run on Apple M1/M2 machines in Docker Desktop, qemu emulation is used. The emulation appears to automatically choose a very basic CPU which is missing many of the instruction sets SingleStore can take advantage of.

When running the sdb-report collect-and-check command inside the Docker image shell, you will receive the following notification about missing CPU features:

FAIL SSE4.2 CPU extensions unavailable on 127.0.0.1
WARN AVX2 CPU extensions unavailable on 127.0.0.1

Describe the solution you'd like
As an improvement, I would like the Docker image to run with the CPU feature instructions SingleStore is optimized for.
In recent years many developers have been using Apple M1/M2 machines to work on SingleStore projects, and having an optimized environment is very important.
This could be achieved via better emulation or by compiling SingleStore to run natively on these CPUs.
I think this change would improve the performance of the SingleStore Docker image.

Describe alternatives you've considered
It is possible to control this manually by using, for example, colima instead of Docker Desktop (https://github.com/abiosoft/colima), which allows you to specify qemu parameters without modifying the image.
You can then specify which CPU type qemu should emulate.
However, this workaround requires manual and complex steps to make it work.

Additional context
Version information:
SingleStore 8.1.15 (docker pull ghcr.io/singlestore-labs/singlestoredb-dev:0.2.8)
Docker Desktop 4.22
MacBook M1 Pro (Ventura 13.4.1) with 16GB RAM
8 cores and 10GB allocated to Docker Desktop

Error 2374 due to CPU core quantity

Describe the bug
SingleStore docker container does not start successfully on a laptop/computer that has more than 4 cores.

To Reproduce
Steps to reproduce the behavior:

  1. Run docker compose up
  2. See the following error:
$ docker compose up
[+] Running 3/3
 ⠿ Network traffic-copter_default          Created                                                                                                        0.1s
 ⠿ Volume "traffic-copter_data"            Created                                                                                                        0.0s
 ⠿ Container traffic-copter-singlestore-1  Created                                                                                                        2.8s
Attaching to traffic-copter-singlestore-1
traffic-copter-singlestore-1  | Starting SingleStore nodes...
traffic-copter-singlestore-1  | {}
traffic-copter-singlestore-1  | 
traffic-copter-singlestore-1  | real	0m7.204s
traffic-copter-singlestore-1  | user	0m0.580s
traffic-copter-singlestore-1  | sys	0m0.317s
traffic-copter-singlestore-1  | Configuring SingleStore nodes...
traffic-copter-singlestore-1  | Error 2373: Code generation for new statements is disabled because the total number of license units of capacity used on all leaf nodes is 4, which is above the limit of 2 for the SingleStore free license. To purchase more capacity, contact [email protected]. Or, adjust the size or number of leaf nodes to bring the number of license units used to 2 or less. See https://docs.memsql.com/redir/capacity-limit-error for more information.
traffic-copter-singlestore-1 exited with code 1

Below is the docker-compose.yaml file I am using to test this new image:

version: '3.8'

services:
  singlestore:
    image: ghcr.io/singlestore-labs/singlestoredb-dev:latest
    ports:
      - 3306:3306
      - 8080:8080
    volumes:
      # persist data between runs:
      - data:/data
      - ./init.sql:/init.sql
    environment:
      # use the LICENSE_KEY environment variable set in the terminal:
      - SINGLESTORE_LICENSE=${SINGLESTORE_LICENSE}
      - ROOT_PASSWORD=root

volumes:
  data:

Expected behavior

I expect the SingleStore container to start successfully.

Desktop (please complete the following information):

  • OS: Ubuntu 22.04.1 LTS
  • Docker version: Docker version 20.10.14, build a224086
  • Image tag: ghcr.io/singlestore-labs/singlestoredb-dev:latest which should be the same as ghcr.io/singlestore-labs/singlestoredb-dev:0.0.8
  • Number of CPU cores: 32
  • Memory: 88GB of RAM
$ nproc
32

Additional context

Additionally, the SingleStore GitHub Action has never worked correctly. The code for this article is here.

Instead, I had to use docker-compose on the GitHub Action with the image singlestore/cluster-in-a-box:latest. The developer who wrote that GitHub Action should have included a sanity check for the MySQL connection.

See also SingleStore Error 2374

To finalize, the image singlestore/cluster-in-a-box:latest works fine with docker compose. See the docker compose configuration below:

version: '3.8'

services:
  singlestore:
    image: singlestore/cluster-in-a-box:latest
    ports:
      - 3306:3306
      - 8080:8080
    volumes:
      # persist data between runs:
      - db-data:/var/lib/memsql
      - ./init.sql:/init.sql
    environment:
      # use the LICENSE_KEY environment variable set in the terminal:
      - LICENSE_KEY=${SINGLESTORE_LICENSE}
      - START_AFTER_INIT=Y
      - ROOT_PASSWORD=password_here

volumes:
  db-data:

And docker compose CLI output showing that the container started successfully:

$ docker compose up
[+] Running 1/0
 ⠿ Container tc-singlestore-1  Recreated                                                                                                      0.1s
Attaching to tc-singlestore-1
tc-singlestore-1  | Starting Cluster
tc-singlestore-1  | {}
tc-singlestore-1  | ==> /var/lib/memsql/1b482754-cbe1-4336-bfaf-811ea6789926/tracelogs/memsql.log <==
tc-singlestore-1  | 03651764 2022-09-30 16:01:43.925   INFO: Completed recovery of system databases.  Beginning recovery of user databases.
tc-singlestore-1  | 03656876 2022-09-30 16:01:43.930   INFO: Initializing cluster management.
tc-singlestore-1  | 03656926 2022-09-30 16:01:43.930   INFO: Consensus node started...
tc-singlestore-1  | 03657583 2022-09-30 16:01:43.931   INFO: Thread 115029: Fn: Initializing Replication Management Thread
tc-singlestore-1  | 03657734 2022-09-30 16:01:43.931   INFO: Starting the accept loop.
tc-singlestore-1  | 03670562 2022-09-30 16:01:43.944   INFO: Query information_schema.'SELECT @@memsql_id' submitted for asynchronous compilation
tc-singlestore-1  | 03746448 2022-09-30 16:01:44.019   INFO: Query information_schema.'SELECT node_id FROM INFORMATION_SCHEMA.lmv_nodes' submitted for asynchronous compilation
tc-singlestore-1  | 03771281 2022-09-30 16:01:44.044   INFO: Query information_schema.'SELECT @@memsql_version' submitted for asynchronous compilation
tc-singlestore-1  | 03876453 2022-09-30 16:01:44.149   INFO: Query information_schema.'SELECT @@memsql_id' submitted 215 milliseconds ago, queued for 12 milliseconds, compiled asynchronously in 203 milliseconds
tc-singlestore-1  | 03880903 2022-09-30 16:01:44.154   INFO: Query information_schema.'SELECT @@memsql_version' submitted 130 milliseconds ago, queued for 22 milliseconds, compiled asynchronously in 108 milliseconds
tc-singlestore-1  | 
tc-singlestore-1  | ==> /var/lib/memsql/5b30dc09-100d-4ef3-9fc4-2b5c28919aa5/tracelogs/memsql.log <==
tc-singlestore-1  | 03650171 2022-09-30 16:01:43.926   INFO: Completed recovery of system databases.  Beginning recovery of user databases.
tc-singlestore-1  | 03655365 2022-09-30 16:01:43.931   INFO: Initializing cluster management.
tc-singlestore-1  | 03655413 2022-09-30 16:01:43.931   INFO: Consensus node started...
tc-singlestore-1  | 03655974 2022-09-30 16:01:43.931   INFO: Thread 115029: Fn: Initializing Replication Management Thread
tc-singlestore-1  | 03656188 2022-09-30 16:01:43.932   INFO: Starting the accept loop.
tc-singlestore-1  | 03668121 2022-09-30 16:01:43.944   INFO: Query information_schema.'SELECT @@memsql_id' submitted for asynchronous compilation
tc-singlestore-1  | 03785955 2022-09-30 16:01:44.061   INFO: Query information_schema.'SELECT node_id FROM INFORMATION_SCHEMA.lmv_nodes' submitted for asynchronous compilation
tc-singlestore-1  | 03798763 2022-09-30 16:01:44.074   INFO: Query information_schema.'SELECT @@memsql_version' submitted for asynchronous compilation
tc-singlestore-1  | 03905646 2022-09-30 16:01:44.181   INFO: Query information_schema.'SELECT @@memsql_id' submitted 246 milliseconds ago, queued for 11 milliseconds, compiled asynchronously in 235 milliseconds
tc-singlestore-1  | 03909953 2022-09-30 16:01:44.185   INFO: Query information_schema.'SELECT @@memsql_version' submitted 120 milliseconds ago, queued for 11 milliseconds, compiled asynchronously in 109 milliseconds
tc-singlestore-1  | 
tc-singlestore-1  | ==> /var/lib/singlestoredb-studio/studio.log <==
tc-singlestore-1  | 
tc-singlestore-1  | ==> /var/lib/memsql/memsql_exporter.log <==
tc-singlestore-1  | 
tc-singlestore-1  | ==> /var/lib/memsql/1b482754-cbe1-4336-bfaf-811ea6789926/tracelogs/memsql.log <==
tc-singlestore-1  | 03991436 2022-09-30 16:01:44.264   INFO: Query information_schema.'SELECT node_id FROM INFORMATION_SCHEMA.lmv_nodes' submitted 258 milliseconds ago, queued for 16 milliseconds, compiled asynchronously in 242 milliseconds
tc-singlestore-1  | 
tc-singlestore-1  | ==> /var/lib/memsql/5b30dc09-100d-4ef3-9fc4-2b5c28919aa5/tracelogs/memsql.log <==
tc-singlestore-1  | 04018928 2022-09-30 16:01:44.294   INFO: Query information_schema.'SELECT node_id FROM INFORMATION_SCHEMA.lmv_nodes' submitted 243 milliseconds ago, queued for 13 milliseconds, compiled asynchronously in 230 milliseconds
tc-singlestore-1  | 
tc-singlestore-1  | ==> /var/lib/singlestoredb-studio/studio.log <==
tc-singlestore-1  | 2022/09/30 16:01:44 env.go:90 Log Opened
tc-singlestore-1  | 2022/09/30 16:01:44 server.go:74 Listening on 0.0.0.0:8080
tc-singlestore-1  | 2022/09/30 16:01:44 server.go:93 HTTPS configuration was not detected, serving with HTTP
tc-singlestore-1  | 
tc-singlestore-1  | ==> /var/lib/memsql/memsql_exporter.log <==
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg="Arguments: [@/opt/singlestoredb-server-7.8.13-f8fec5f0db/memsql_exporter/conf/memsql_exporter_ma.args]" source="memsql_exporter.go:621"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg="Starting memsql_exporter (version=7.8.13, branch=, revision=f8fec5f)" source="memsql_exporter.go:636"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg="Build context (go=devel +2a0dd053ec Fri Feb 5 19:53:15 2021 +0000, user=, date=)" source="memsql_exporter.go:637"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg="Read DSN from /etc/memsql/memsql_exporter.cnf" source="memsql_exporter.go:656"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg="Enabled scrapers:" source="memsql_exporter.go:671"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.show_status_extended" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.global_status_gauge_vars" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.show_variables" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.show_workload_management_status" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.distributed.partitions" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.events" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.info_schema.mv_activities" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.info_schema.tables" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.info_schema.processlist" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.info_schema.tablestats" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.memory.counters" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.info_schema.mv_activities_extended_cumulative" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.info_schema.system_info" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.info_schema.pipeline_batches" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.info_schema.mv_nodes" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --collect.global_status_counting_vars" source="memsql_exporter.go:675"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg="Enabled cluster scrapers:" source="memsql_exporter.go:683"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_sysinfo_net" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_global_variables" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_workload_management_status" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_sysinfo_mem" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_sysinfo_disk" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_activities" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.tables" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_processlist" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.tablestats" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.events" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.distributed.partitions" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_global_status" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_sysinfo_cpu" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_activities_extended_cumulative" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.system_info" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.pipeline_batches" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --cluster-collect.info_schema.mv_nodes" source="memsql_exporter.go:687"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg="Enabled samplers:" source="memsql_exporter.go:694"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --sample.activity_extended" source="memsql_exporter.go:698"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --sample.pipeline_batches" source="memsql_exporter.go:698"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --sample.cluster_info" source="memsql_exporter.go:698"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --sample.events" source="memsql_exporter.go:698"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --sample.node" source="memsql_exporter.go:698"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg=" --sample.activity" source="memsql_exporter.go:698"
tc-singlestore-1  | time="2022-09-30T16:01:44Z" level=info msg="Listening on :9104" source="memsql_exporter.go:755"

Laravel Sail Docker Compose License Invalid

Describe the bug

dt-singlestore-1   | {}
dt-singlestore-1   |
dt-singlestore-1   | real       0m5.635s
dt-singlestore-1   | user       0m0.213s
dt-singlestore-1   | sys        0m0.084s
dt-singlestore-1   | Configuring SingleStore nodes...
dt-singlestore-1   | Error 1888: Unable to reload license file: The provided license is invalid
dt-singlestore-1 exited with code 1
The relevant docker-compose.yml section:

  singlestore:
    image: ghcr.io/singlestore-labs/singlestoredb-dev:latest
    volumes:
      - 'sail-singlestore:/data'
    restart: always
    environment:
      - SINGLESTORE_LICENSE="<removed>"
      - ROOT_PASSWORD='password'
    ports:
      - '${FORWARD_SINGLESTORE_PORT:-3306}:3306'
      - '${FORWARD_SINGLESTORE_UI_PORT:-8080}:8080'
      - '${FORWARD_SINGLESTORE_API_PORT:-9000}:9000'
    networks:
      - sail

To Reproduce
Steps to reproduce the behavior:

  1. Run load docker compose section and provide license
  2. Run run sail up
  3. See error

Expected behavior
When using the docker command with the same environment variables it doesn't throw an error, but when trying to use sail it fails.

Desktop (please complete the following information):

  • OS: Windows
  • Docker version: Docker Engine 20.10.21
  • Image tag: ghcr.io/singlestore-labs/singlestoredb-dev:latest

singlestore to singlestoredb for sql shell

With the name change, should the shell prompt now be singlestoredb?

singlestore>
singlestoredb>

As well as the command:

docker exec -it singlestoredb-dev singlestore -p"${DATABASE_PASSWORD}"
docker exec -it singlestoredb-dev singlestoredb -p"${DATABASE_PASSWORD}"

Creating custom singlestoredb-dev with build.sh is broken

Describe the bug
The current build.sh script to create a custom singlestoredb-dev image is broken because the GPG keys for AlmaLinux 8 have changed since the script was last modified. The latest keys need to be imported for package installation to work.

To Reproduce
Steps to reproduce the behavior:

  1. Clone the latest master branch
  2. Set the BOOTSTRAP_LICENSE to a valid license using export
  3. Run ./build.sh to build the local image
  4. Observe the error
 => ERROR [ 2/26] RUN dnf upgrade -y almalinux-release &&     yum -y clean all                                                                                                                                  6.4s
------
 > [ 2/26] RUN dnf upgrade -y almalinux-release &&     yum -y clean all:
1.056 AlmaLinux 8 - BaseOS                            5.9 MB/s | 4.0 MB     00:00
2.317 AlmaLinux 8 - AppStream                          16 MB/s |  11 MB     00:00
4.407 AlmaLinux 8 - Extras                             34 kB/s |  21 kB     00:00
4.920 Dependencies resolved.
4.921 ================================================================================
4.921  Package                  Architecture  Version             Repository     Size
4.921 ================================================================================
4.921 Upgrading:
4.921  almalinux-release        x86_64        8.10-1.el8          baseos         27 k
4.921
4.921 Transaction Summary
4.921 ================================================================================
4.921 Upgrade  1 Package
4.921
4.921 Total download size: 27 k
4.922 Downloading Packages:
5.540 almalinux-release-8.10-1.el8.x86_64.rpm         3.3 MB/s |  27 kB     00:00
5.540 --------------------------------------------------------------------------------
5.540 Total                                            43 kB/s |  27 kB     00:00
5.552 AlmaLinux 8 - BaseOS                            3.3 MB/s | 3.4 kB     00:00
5.621 Importing GPG key 0xC21AD6EA:
5.621  Userid     : "AlmaLinux <[email protected]>"
5.621  Fingerprint: E53C F5EF 91CE B0AD 1812 ECB8 51D6 647E C21A D6EA
5.621  From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-AlmaLinux
5.622 Key imported successfully
5.632 Import of key(s) didn't help, wrong key(s)?
5.632 Public key for almalinux-release-8.10-1.el8.x86_64.rpm is not installed. Failing package is: almalinux-release-8.10-1.el8.x86_64
5.632  GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-AlmaLinux
5.632 The downloaded packages were saved in cache until the next successful transaction.
5.632 You can remove cached packages by executing 'dnf clean packages'.
5.634 Error: GPG check FAILED
------
Dockerfile:5
--------------------
   4 |
   5 | >>> RUN dnf upgrade -y almalinux-release && \
   6 | >>>     yum -y clean all
   7 |
--------------------
ERROR: failed to solve: process "/bin/sh -c dnf upgrade -y almalinux-release &&     yum -y clean all" did not complete successfully: exit code: 1

Expected behavior
I expected the singlestoredb-dev Docker image to be created successfully with the local tag, which didn't happen.

  • OS: Linux
  • Docker version: Docker Engine - Community, Version: 25.0.3
