diy-cromwell-server's Introduction

diy-cromwell-server

A repo containing instructions for running a DIY Cromwell server on an on-prem SLURM cluster at the Fred Hutch. These instructions were generated for an institution-specific configuration of Cromwell, but could serve as a starting point for other institutions customizing a configuration specific to their HPC infrastructure.

Cromwell Resources

Cromwell is a workflow manager developed by the Broad Institute which manages the individual tasks involved in multi-step workflows, tracks job metadata, provides an API and Swagger UI, and allows users to manage multiple workflows simultaneously. Cromwell currently runs workflows written in the Workflow Description Language (WDL). The WDL specification and documentation are curated by the openWDL group.

The Fred Hutch Data Science Lab has developed an R Shiny app you can use to monitor workflows on your own Cromwell server, which is especially useful if you have users who would benefit from a simple UI. That repo is here: shiny-cromwell. There is also a basic R package that wraps the Cromwell API, allowing you to submit, monitor, and kill workflows on a Cromwell server directly from R. The package is rcromwell.

Steps to prepare

If you have questions about these steps, please file an issue! For Fred Hutch users, SciComp and the Data Science Lab have developed a new approach to using Cromwell on Fred Hutch infrastructure that is even simpler than the legacy approach documented here.

Note: This information, while specific to Fred Hutch, is considered the "legacy" way to do this, but we're leaving it here in case this is useful for folks at other institutions in setting up their own configuration. Please see the new Fred Hutch approach called "PROOF" on the Fred Hutch Biomedical Data Science Wiki data science section.

Rhino Access (one time)

Currently, to run your own Cromwell server you'll need to know how to connect to rhino at the Fred Hutch; the rhinos are the login nodes for the 'gizmo' cluster.

Database Setup (one time)

These instructions let you stand up a Cromwell server for 7 days at a time. If you have workflows that run longer than that, or you want to be able to retrieve metadata or restart jobs even after the server goes down, you'll want an external database to keep track of your progress even if your server goes down (for whatever reason). An external database also allows your future workflows to use cached copies of data when the exact task has already been done (and recorded in the database). We have found as well that using a MySQL database makes a Cromwell server run faster and better able to handle simultaneous workflows, while also making all the metadata available to you during and after the run.

We currently suggest you go to DB4Sci and see the Wiki entry for DB4Sci here. There, you will log in using Fred Hutch credentials, choose Create DB Container, and choose the MariaDB option. The default database container values are typically fine, EXCEPT you likely need either weekly or no backups (no backups preferred) for this database. Save the DB/Container Name, DB Username, and DB Password, as you will need them for the configuration step. Once you click submit, a confirmation screen will appear (hopefully), and you'll need to note which Port is specified; currently this is a five-digit number.

Alternatively, if you are able to get onto rhino, you can do this process yourself:

ml MariaDB/10.5.1-foss-2019b
mysql --host mydb --port <Port> --user <username> --password

It will then prompt you to enter the DB password you specified during setup. Once you are at a "mysql>" prompt, you can do the following.

Note, we suggest you name the database inside the container the same as the container, but you cannot include dashes in your database name. In the future, DB4Sci may also set up the database inside the container for you, in which case you would be provided a database name as well during setup.

MariaDB [(none)]> create database <DB Name>;
# It should do its magic
MariaDB [(none)]> exit

Then you're ready to go: you never have to set up the database again, and you can use it to manage all your work over time.
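The port, database name, and credentials you just saved end up in the JDBC connection string Cromwell uses to reach this database. A sketch with hypothetical placeholder values (note the serverTimezone=UTC parameter, which avoids the time-zone error discussed in the issues below):

```shell
# Hypothetical values; substitute the Port and DB Name you saved from DB4Sci.
CROMWELLDBPORT=32222
CROMWELLDBNAME=my_cromwell_db
# This is the shape of the JDBC URL Cromwell's database config will use:
echo "jdbc:mysql://mydb:${CROMWELLDBPORT}/${CROMWELLDBNAME}?rewriteBatchedStatements=true&serverTimezone=UTC"
```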

Server setup instructions

  1. Decide where you want to keep your Cromwell configuration files. This must be a place where rhino can access them, such as in your Home directory, which is typically the default directory when you connect to the rhinos. We suggest you create a cromwell-home folder (or whatever you want to call it) and follow these git instructions to clone it directly.

  2. Next, set up the per-user customizations you want for your server(s) by making user configuration file(s) in your cromwell-home (or wherever you find convenient). You can manage multiple Cromwell profiles this way by maintaining different files of credentials and configuration variables.

To get started, do the following on rhino:

mkdir -p cromwell-home
cd cromwell-home
git clone --branch main https://github.com/FredHutch/diy-cromwell-server.git


cp ./diy-cromwell-server/cromUserConfig.txt .
## When you are first setting up Cromwell, you'll need to put all of your User Customizations into this `cromUserConfig.txt` which can serve as a template.  
## After you've done this once, you just need to keep the path to the file(s) handy for the future.  
  3. Tailor your cromUserConfig.txt file to be specific to your various directories and resources (see notes in the version of the file in this repo).

Note: For this server, you will want multiple cores to allow it to multi-task. Memory is less important when you use an external database. If you notice issues, the particular resource request for the server job itself might be a good place to start adjusting, in conjunction with some guidance from peers if you post in the FH-Data Slack #workflow-managers channel.

  4. Kick off your Cromwell server:
## You'll want to put `cromwell.sh` somewhere handy for future use (and very occasionally you might need to update it); we suggest:
cp ./diy-cromwell-server/cromwell.sh .
chmod +x cromwell.sh

# Then simply start up Cromwell:
./cromwell.sh cromUserConfig.txt
  5. Much like the grabnode command you may have used previously, the script will run and print instructions back to the console once the resources have been provisioned for the server. You should see something like this:
Your configuration details have been found...
Getting an updated copy of Cromwell configs from GitHub...
Setting up all required directories...
Detecting existence of AWS credentials...
Credentials found, setting appropriate configuration...
Requesting resources from SLURM for your server...
Submitted batch job 2224761
Your Cromwell server is attempting to start up at http://gizmok53:46287.  It can take up to 2 minutes prior to the port being open for use by the shiny app at https://cromwellapp.fredhutch.org or via the R package rcromwell. If you encounter errors, you may want to check your server logs at /home/username/cromwell-home/server-logs to see if Cromwell was unable to start up.
Go have fun now.

NOTE: Please write down the URL it specifies here. This is the only place where you will be able to find the particular URL for this instance of your Cromwell server, and you'll need it to send jobs to the Cromwell server. If you forget it, scancel the Cromwell server job and start a new one.

  6. This URL is what you use to submit and monitor workflows with the Shiny app mentioned above. After you click the "Connect to Server" button, you'll put http://gizmok30:20201 (or whatever your URL is) where it says "My Own Cromwell".

  7. Your server will normally stop after 7 days (the default); if you have jobs still running at that point, you can simply restart your server and it will reconnect to the existing jobs/workflows. However, if you need to take down your server before then for whatever reason, you can go to rhino and do:

# Here `username` is your Fred Hutch username
squeue -u username
## Or if you want to get fancy:
squeue -o '%.18i %.9P %j %.8T %.10M %.9l %.6C %R' -u username

## You'll see a jobname "cromwellServer".  Next to that will be a JOBID. In this example the JOBID of the server is 50062886.

scancel 50062886
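If you'd rather not scan the table by eye, the JOBID can be extracted by matching on the job name. A sketch against sample squeue output; in practice pipe the real `squeue -u username` through the same awk:

```shell
# Sample squeue output for illustration; pipe real `squeue -u username` instead.
printf '%s\n' \
  'JOBID PARTITION NAME ST TIME' \
  '50062886 campus-new cromwellServer R 1:23' |
awk '$3 == "cromwellServer" {print $1}'
```

Then pass the printed JOBID to `scancel`.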

NOTE: For those test workflows that use Docker containers, know that the first time you run them, you may notice that jobs aren't being sent very quickly. That is because on our cluster those Docker containers must be converted to something that can be run by Singularity. The first time a Docker container is used, it must be converted, but in the future Cromwell will use the cached version of the Docker container and jobs will be submitted more quickly.
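Under the hood, the image is addressed as a docker:// URI when Singularity builds its cached SIF file (you can also pre-convert an image yourself with `singularity pull docker://<image>` on a node where Singularity is available). A trivial sketch of the mapping; the helper name is hypothetical:

```shell
# Hypothetical helper: the URI scheme Singularity uses for Docker images.
to_singularity_uri() { printf 'docker://%s\n' "$1"; }
to_singularity_uri ubuntu:latest
```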

Cromwell Server Customization

In cromUserConfig.txt there are some variables that allow users to share a similar configuration file while tailoring the particular behavior of their Cromwell server to best suit them. The following text is also in this repo, but these are the customizations you'll need to decide on for your server.

################## WORKING DIRECTORY AND PATH CUSTOMIZATIONS ###################
## Where do you want the working directory to be for Cromwell (note: this process will create a subdirectory here called "cromwell-executions")?  
### Suggestion: /fh/scratch/delete90/pilastname_f/username/
SCRATCHDIR=/fh/scratch/delete90/...

## Where do you want logs about individual workflows (not jobs) to be written?
## Note: this is a default for the server and can be overwritten for a given workflow in workflow-options.
### Suggestion: /fh/fast/pilastname_f/cromwell/workflow-logs
WORKFLOWLOGDIR=~/cromwell-home/workflow-logs

## Where do you want to save Cromwell server logs for troubleshooting Cromwell itself?
### Suggestion: /home/username/cromwell-home/server-logs
SERVERLOGDIR=~/cromwell-home/server-logs

################ DATABASE CUSTOMIZATIONS #################
## DB4Sci MariaDB details (remove < and >, and use unquoted text):

CROMWELLDBPORT=...
CROMWELLDBNAME=...
CROMWELLDBUSERNAME=...
CROMWELLDBPASSWORD=...

## Number of cores for your Cromwell server itself - usually 4 is sufficient.  
### Increase if you want to run many complex workflows simultaneously or notice your server is slowing down.
NCORES=4

## Length of time you want the server to run for.  
### Note: when servers go down, all jobs they'd sent will continue.  When you start up a server the next time
### using the same database, the new server will pick up wherever the previous workflows left off.  "7-0" is 7 days, zero hours.
SERVERTIME="7-0" 
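Because cromUserConfig.txt is plain key=value shell syntax, the server script can simply source it. A minimal sketch of that pattern (the mechanism cromwell.sh actually uses may differ):

```shell
# Sample file for illustration only; use your real cromUserConfig.txt.
cat > cromUserConfig.txt <<'EOF'
NCORES=4
SERVERTIME="7-0"
EOF

set -a                    # export everything we source
. ./cromUserConfig.txt
set +a
echo "Server will run on ${NCORES} cores for ${SERVERTIME}"
```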

Contact Amy Paguirigan about these issues for some advice or file an issue on this repo.

Task Defaults and Runtime Variables available

For the gizmo backend, the following runtime variables are available, customized to our configuration. The values below are the current defaults as written; you can edit them in the config file, or you can specify these variables in the runtime block of each task to override only the ones you want to change for that particular task.

  • cpu = 1
    • An integer number of cpus you want for the task
  • walltime = "18:00:00"
    • A string of date/time that specifies how many hours/days you want to request for the task
  • memory = 2000
    • An integer number of MB of memory you want to use for the task
  • partition = "campus-new"
    • Which partition you want to use; the default is campus-new, but whatever is in the runtime block of your WDL will override this.
  • modules = ""
    • A space-separated list of the environment modules you'd like to have loaded (in that order) prior to running the task.
  • docker = "ubuntu:latest"
    • A specific Docker container to use for the task. For the custom Hutch configuration, docker containers can be specified and the necessary conversions (to Singularity) will be performed by Cromwell (not the user). Note: when docker is used, soft links cannot be used in our filesystem, so workflows using very large datasets may run slightly slower due to the need for Cromwell to copy files rather than link to them.
  • dockerSL = "ubuntu:latest"
    • This is a custom configuration for the Hutch that allows users to use docker and softlinks only to specific locations in Scratch. It is helpful when working with very large files.
  • account = "radich_j"
    • This is a custom configuration for the Hutch that allows users to submit jobs to run under different PI cluster accounts if they have multiple collaborators they run workflows for. Check with SciComp if you have this scenario but do not have multiple PI cluster accounts associated with your username.
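Putting several of these together, a task that overrides the defaults might have a runtime block like the following sketch (the module name is illustrative; substitute one available on your cluster):

```wdl
task align {
  command {
    echo "task work happens here"
  }
  runtime {
    cpu: 4                               # integer CPUs
    memory: 8000                         # MB
    walltime: "4:00:00"
    partition: "campus-new"
    modules: "SAMtools/1.11-GCC-10.2.0"  # illustrative module name
  }
}
```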

diy-cromwell-server's People

Contributors: atombaby, sckott, vortexing
diy-cromwell-server's Issues

Use included file for database connection parameters

The current mechanism puts the database connection string (including username and password) in the process stack, where it is viewable by anyone on the system. This is somewhat undesirable; though I don't know precisely what harm could come of it, general "best practices" suggest we improve this.

I've done a little work and it looks like we may be able to specify database connection parameters in a file separate from the general Cromwell config (e.g. fh-slurm-sing-cromwell.conf) and use HOCON's include directive:

include required(classpath("application"))
include required(file("database.conf"))
###### FH Slurm Backend, with call caching, without docker/singularity
   ....

Where database.conf contains the database section as specified by Cromwell:

database {
  profile = "slick.jdbc.MySQLProfile$"
  db {
    driver = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://mydb:32222/cromdb?rewriteBatchedStatements=true&serverTimezone=UTC"
    user = "username"
    password = "password"
    connectionTimeout = 5000
  }
}

The path above doesn't have to be in the current directory; my understanding is that we could specify any path for that, though I'm not sure whether things like "~" are expanded.
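A sketch of how such a file could be generated from the variables already collected in cromUserConfig.txt, keeping the credentials out of the process arguments (variable names follow cromUserConfig.txt; the values here are placeholders):

```shell
# Placeholder values; in practice these come from cromUserConfig.txt.
CROMWELLDBPORT=32222
CROMWELLDBNAME=cromdb
CROMWELLDBUSERNAME=username
CROMWELLDBPASSWORD=password

# Write the database stanza to a standalone file for HOCON's include directive.
cat > database.conf <<EOF
database {
  profile = "slick.jdbc.MySQLProfile\$"
  db {
    driver = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://mydb:${CROMWELLDBPORT}/${CROMWELLDBNAME}?rewriteBatchedStatements=true&serverTimezone=UTC"
    user = "${CROMWELLDBUSERNAME}"
    password = "${CROMWELLDBPASSWORD}"
    connectionTimeout = 5000
  }
}
EOF
grep -c 'jdbc:mysql://mydb:32222/cromdb' database.conf
```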

Server Time Zone Unrecognized

Caused by: com.mysql.cj.exceptions.InvalidConnectionAttributeException: The server time zone value 'PDT' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the serverTimezone configuration property) to use a more specifc time zone value if you want to utilize time zone support.

Seems to be an interaction between JDBC, Java8, and Mariadb. Easy fix (i.e. not mucking about server-side) involves specifying the time zone in the connection URL serverTimezone=UTC (note extreme case sensitivity)

memory requests not being used in fh-cromwell.conf

hey!

I put this in Slack but realized that this is a better place for it.

I think memory requests made in the runtime{} block of a wdl are being ignored when running on the local cluster.

I'll attach
(1) a demo WDL
(2) an options json file that goes with it, simply to prevent caching
(3) the script.submit generated for the task. The CPU request is honored but the memory request is not part of the sbatch command in script.submit.

Does that make sense?

thanks!

Janet

localBatchFileScatter parse.inputs.json pointing to the wrong sample.batchfile.tsv file

hey Amy,

I'm starting to look more closely at some actual wdl files to understand that end of things.

In testWorkflows/localBatchFileScatter, there is a local sample.batchfile.tsv file, but it's not being used.

The parse.inputs.json instead points to a copy of sample.batchfile.tsv here: /fh/fast/paguirigan_a/pub/ReferenceDataSets/workflow_testing_data/WDL/batchFileScatter/sample.batchfile.tsv, which uses files hosted on S3 rather than on /fh/fast

I think that's not what you intend - I'm guessing this workflow won't work for people who haven't got their AWS credentials set up: does that make sense?

Janet

Separate cromwellParams from X.conf

Let's shift to specifying the config file path when we kick off the server rather than as a Param. Will make testing/adapting easier later.

Glob + env modules= OK; Glob + containers= FAIL.

Put in the readme and in the docs in the Wiki that when using Cromwell right now (v49), our config has a line to create softlinks instead of hardlinks for globs.

Currently that means that you can use globs in outputs when you are using env modules, BUT that will break when you use a docker/singularity container. In that case, in your task where you need to glob the output, change it to this:

cp files-to-glob .
ls <glob pattern> > outputFiles.txt

------

output {
  Array[File] outputGlobFiles = read_lines("outputFiles.txt")
}

confusing comment line in variantCalling-workflow.wdl

hey,

again, I'm using the examples as a starting point for understand wdl syntax.

tiny picky thing: in the variantCalling-workflow.wdl script, the comment on line 68 (" # Get the basename, i.e. strip the filepath and the extension") - I don't think the script is doing that on line 69? Looks like it's just concatenating a couple of the specified inputs.

I'm sure I will want to learn how to do the filepath stripping thing sometime - I'll look elsewhere to find it.

Janet

3 Ls

silly question: do you mean for the dirname testWorkflows/hellloSingularityHostname to have 3 Ls in helllo?

S3 file access could potentially stall jobs without failing

@vortexing

If a user doesn't have read permissions for all files in S3 needed to run a workflow the workflow/job could get stalled.

At this point the Cromwell shiny app will say that the job is running.

However, on SLURM no job appears to be running/being submitted.

(Shiny app and SLURM job list screenshots attached.)

Cromwell Version?

Commit used: 77c318a

  "workflowName": "hello_hostname",
  "workflowProcessingEvents": [
    {
      "cromwellId": "cromid-53866a4",
      "description": "PickedUp",
      "timestamp": "2020-04-07T21:16:47.895Z",
      "cromwellVersion": "47"
    },
    {
      "cromwellId": "cromid-53866a4",
      "description": "Finished",
      "timestamp": "2020-04-07T21:16:47.915Z",
      "cromwellVersion": "47"
    }
...

  "status": "Failed",
  "failures": [
    {
      "causedBy": [],
      "message": "/fh/scratch"
    }

This is what is in the failure message for my workflow using the baseConfig right now. I'm loading version 49, and when I request the version of the running server via the API I get v49, but in the workflow metadata I'm still getting version 47, which was the previous one we were using.

Output and Error information pruned during Singularity exec

It looks like the stderr (and likely stdout) is getting pruned during execution when Singularity is used to run the script:

VERBOSE [U=0,P=3126]       print()                       Set messagelevel to: 5
VERBOSE [U=0,P=3126]       init()                        Starter initialization
DEBUG   [U=0,P=3126]       load_overlay_module()         Trying to load overlay kernel module
DEBUG   [U=0,P=3126]       load_overlay_module()         Overlay seems not supported by the kernel
DEBUG   [U=0,P=3126]       get_pipe_exec_fd()            PIPE_EXEC_FD value: 9
VERBOSE [U=0,P=3126]       is_suid()                     Check if we are running as setuid
VERBOSE [U=0,P=3126]       priv_drop()                   Drop root privileges
DEBUG   [U=34152,P=3126]   init()                        Read engine configuration
tail: shard-1/execution/stderr: file truncated
DEBUG   [U=34152,P=3126]   Master()                      Child exited with exit status 0

Note the "file truncated" message. The lines prior to this is output from singularity -d exec. The resultant stderr file in the execution directory doesn't contain any of the lines above the "file truncated" message.

I believe this happens when the execution script redirects stdout and stderr. The submit script uses Slurm options to save stdout/stderr to files in the execution directory. From a representative output script (script.submit):

   -o /fh/scratch/delete10/_HDC/user/mrg/cromwell-root/cromwell-executions/parseBatchFile/b9e0a739-23e3-485d-8bab-857349697458/call-test/shard-1/execution/stdout \
-e /fh/scratch/delete10/_HDC/user/mrg/cromwell-root/cromwell-executions/parseBatchFile/b9e0a739-23e3-485d-8bab-857349697458/call-test/shard-1/execution/stderr \

When the script is run (script), the following lines truncate any output occurring between the startup of the job on the node and the execution of the script in the container (particularly Singularity startup messages):

tee '/cromwell-executions/parseBatchFile/b9e0a739-23e3-485d-8bab-857349697458/call-test/shard-1/execution/stdout' < "$outb9e0a739" &
tee '/cromwell-executions/parseBatchFile/b9e0a739-23e3-485d-8bab-857349697458/call-test/shard-1/execution/stderr' < "$errb9e0a739" >&2 &

Note that in the container (where script is run) the path /fh/scratch/delete10/_HDC/user/mrg/cromwell-root/cromwell-executions has been mounted on /cromwell-executions. tee (without other options) truncates the file before writing to it, removing any output between job start and script execution.

I suspect that adding -a to the tee command will fix this problem.
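The truncation is easy to demonstrate in isolation (a minimal reproduction, not the actual Cromwell script):

```shell
# Without -a, tee truncates the file, losing anything already written:
echo "early output" > out.log
echo "later output" | tee out.log > /dev/null
cat out.log            # only "later output" remains

# With -a, tee appends, so the earlier output survives:
echo "early output" > out2.log
echo "later output" | tee -a out2.log > /dev/null
cat out2.log           # both lines present
```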

Scatter using Singularity fails creating SIF

When using singularity exec within tasks executed in a scatter, there is a race condition when the Docker/Singularity image isn't in the cache. On NFS-mounted home directories this apparently results in a "stale NFS file handle" error.

To replicate, it's necessary to remove the images from the Singularity cache (~/.singularity). It is also difficult to replicate with simple images (e.g. ubuntu); the Broad's GATK image seems to reproduce this error fairly reliably.

==> shard-0/execution/stderr <==
2020/04/10 07:06:02 debug unpacking entry           path=root/.conda root=/loc/scratch/46802618/rootfs-3371b095-7b34-11ea-ae11-002590e2b58e type=53
2020/04/10 07:06:02 debug unpacking entry           path=root/.conda/pkgs root=/loc/scratch/46802618/rootfs-3371b095-7b34-11ea-ae11-002590e2b58e type=53
2020/04/10 07:06:02 debug unpacking entry           path=root/.conda/pkgs/urls root=/loc/scratch/46802618/rootfs-3371b095-7b34-11ea-ae11-002590e2b58e type=48
2020/04/10 07:06:02 debug unpacking entry           path=root/.conda/pkgs/urls.txt root=/loc/scratch/46802618/rootfs-3371b095-7b34-11ea-ae11-002590e2b58e type=48
2020/04/10 07:06:02 debug unpacking entry           path=root/.gradle root=/loc/scratch/46802618/rootfs-3371b095-7b34-11ea-ae11-002590e2b58e type=53
2020/04/10 07:06:02 debug unpacking entry           path=root/gatk.jar root=/loc/scratch/46802618/rootfs-3371b095-7b34-11ea-ae11-002590e2b58e type=50
2020/04/10 07:06:02 debug unpacking entry           path=root/run_unit_tests.sh root=/loc/scratch/46802618/rootfs-3371b095-7b34-11ea-ae11-002590e2b58e type=48
DEBUG   [U=34152,P=14608]  Full()                        Inserting Metadata
DEBUG   [U=34152,P=14608]  Full()                        Calling assembler
INFO    [U=34152,P=14608]  Assemble()                    Creating SIF file...
DEBUG   [U=34152,P=14608]  cleanUp()                     Cleaning up "/loc/scratch/46802618/rootfs-3371b095-7b34-11ea-ae11-002590e2b58e" and "/loc/scratch/46802618/bundle-temp-539962740"
FATAL   [U=34152,P=14608]  replaceURIWithImage()         Unable to handle docker://broadinstitute/gatk@sha256:0dd5cb7f9321dc5a43e7667ed4682147b1e827d6a3e5f7bf4545313df6d491aa uri: unable to build: while creating SIF: while creating container: writing data object for SIF file: copying data object file to SIF file: write /home/mrg/.singularity/cache/oci-tmp/0dd5cb7f9321dc5a43e7667ed4682147b1e827d6a3e5f7bf4545313df6d491aa/gatk@sha256_0dd5cb7f9321dc5a43e7667ed4682147b1e827d6a3e5f7bf4545313df6d491aa.sif: stale NFS file handle
==> shard-1/execution/stderr <==
2020/04/10 07:06:13 debug unpacking entry           path=root/.conda root=/loc/scratch/46802619/rootfs-4740289f-7b34-11ea-ad57-002590e2b824 type=53
2020/04/10 07:06:13 debug unpacking entry           path=root/.conda/pkgs root=/loc/scratch/46802619/rootfs-4740289f-7b34-11ea-ad57-002590e2b824 type=53
2020/04/10 07:06:13 debug unpacking entry           path=root/.conda/pkgs/urls root=/loc/scratch/46802619/rootfs-4740289f-7b34-11ea-ad57-002590e2b824 type=48
2020/04/10 07:06:13 debug unpacking entry           path=root/.conda/pkgs/urls.txt root=/loc/scratch/46802619/rootfs-4740289f-7b34-11ea-ad57-002590e2b824 type=48
2020/04/10 07:06:13 debug unpacking entry           path=root/.gradle root=/loc/scratch/46802619/rootfs-4740289f-7b34-11ea-ad57-002590e2b824 type=53
2020/04/10 07:06:13 debug unpacking entry           path=root/gatk.jar root=/loc/scratch/46802619/rootfs-4740289f-7b34-11ea-ad57-002590e2b824 type=50
2020/04/10 07:06:13 debug unpacking entry           path=root/run_unit_tests.sh root=/loc/scratch/46802619/rootfs-4740289f-7b34-11ea-ad57-002590e2b824 type=48
DEBUG   [U=34152,P=3126]   Full()                        Inserting Metadata
DEBUG   [U=34152,P=3126]   Full()                        Calling assembler
INFO    [U=34152,P=3126]   Assemble()                    Creating SIF file...
VERBOSE [U=34152,P=3126]   Full()                        Build complete: /home/mrg/.singularity/cache/oci-tmp/0dd5cb7f9321dc5a43e7667ed4682147b1e827d6a3e5f7bf4545313df6d491aa/gatk@sha256_0dd5cb7f9321dc5a43e7667ed4682147b1e827d6a3e5f7bf4545313df6d491aa.sif
DEBUG   [U=34152,P=3126]   cleanUp()                     Cleaning up "/loc/scratch/46802619/rootfs-4740289f-7b34-11ea-ad57-002590e2b824" and "/loc/scratch/46802619/bundle-temp-701530895"
VERBOSE [U=34152,P=3126]   handleOCI()                   Image cached as SIF at /home/mrg/.singularity/cache/oci-tmp/0dd5cb7f9321dc5a43e7667ed4682147b1e827d6a3e5f7bf4545313df6d491aa/gatk@sha256_0dd5cb7f9321dc5a43e7667ed4682147b1e827d6a3e5f7bf4545313df6d491aa.sif
DEBUG   [U=34152,P=3126]   execStarter()                 Checking for encrypted system partition

.... output trimmed- log indicates this shard ran the container....
