philschmid / huggingface-sagemaker-workshop-series

Enterprise Scale NLP with Hugging Face & SageMaker Workshop series

License: MIT License

Jupyter Notebook 91.27% Python 8.73%

huggingface-sagemaker-workshop-series's Introduction

Workshop: Enterprise-Scale NLP with Hugging Face & Amazon SageMaker

Earlier this year we announced a strategic collaboration with Amazon to make it easier for companies to use Hugging Face Transformers in Amazon SageMaker, and ship cutting-edge Machine Learning features faster. We introduced new Hugging Face Deep Learning Containers (DLCs) to train and deploy Hugging Face Transformers in Amazon SageMaker.

In addition to the Hugging Face Inference DLCs, we created a Hugging Face Inference Toolkit for SageMaker. This Inference Toolkit leverages the pipelines from the transformers library to allow zero-code deployments of models, without requiring any code for pre- or post-processing.
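
For illustration, here is a minimal sketch of such a zero-code deployment with the SageMaker Python SDK. The model ID, task, instance type, and container versions below are placeholders and not taken from the workshop material; check the supported DLC versions for your account.

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# The Inference Toolkit reads these environment variables, loads the model from
# the Hugging Face Hub, and builds a transformers pipeline - no inference code needed.
hub = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # example model
    "HF_TASK": "text-classification",
}

huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.17",  # illustrative versions
    pytorch_version="1.10",
    py_version="py38",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)

print(predictor.predict({"inputs": "I love using Hugging Face on SageMaker!"}))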

In October and November, we held a workshop series on “Enterprise-Scale NLP with Hugging Face & Amazon SageMaker”. The series consisted of 3 parts and covered:

  • Getting Started with Amazon SageMaker: Training your first NLP Transformer model with Hugging Face and deploying it
  • Going Production: Deploying, Scaling & Monitoring Hugging Face Transformer models with Amazon SageMaker
  • MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines

We recorded all of the sessions, so you can now do the whole workshop series on your own to enhance your Hugging Face Transformers skills with Amazon SageMaker, or vice versa.

Below you can find all the details of each workshop and how to get started.

🧑🏻‍💻 Github Repository: https://github.com/philschmid/huggingface-sagemaker-workshop-series

📺  Youtube Playlist: https://www.youtube.com/playlist?list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ

Note: The repository contains instructions on how to access a temporary AWS account, which was only available during the workshops. To do the workshop now, you need to use your own or your company's AWS account.

In addition to the workshop, we created fully dedicated documentation for Hugging Face and Amazon SageMaker, which includes all the necessary information. If the workshop is not enough for you, we also have 15 additional getting-started sample notebooks in a GitHub repository, covering topics like distributed training or leveraging Spot Instances.

Workshop 1: Getting Started with Amazon SageMaker: Training your first NLP Transformer model with Hugging Face and deploying it

In Workshop 1 you will learn how to use Amazon SageMaker to train a Hugging Face Transformer model and deploy it afterwards (a minimal code sketch follows the list below). You will:

  • Prepare and upload a test dataset to S3
  • Prepare a fine-tuning script to be used with Amazon SageMaker Training jobs
  • Launch a training job and store the trained model into S3
  • Deploy the model after successful training
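
To give a flavor of what the notebook does, here is a minimal sketch of launching a training job and deploying the result with the SageMaker Python SDK. The script name, S3 paths, hyperparameters, instance types, and container versions are placeholders, not the exact values used in the workshop.

import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

# Fine-tuning script and hyperparameters passed to the training container.
huggingface_estimator = HuggingFace(
    entry_point="train.py",        # your fine-tuning script
    source_dir="./scripts",        # directory that contains train.py
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,
    transformers_version="4.17",   # illustrative versions, check the supported DLCs
    pytorch_version="1.10",
    py_version="py38",
    hyperparameters={"epochs": 1, "train_batch_size": 32, "model_id": "distilbert-base-uncased"},
)

# Start the training job with the dataset previously uploaded to S3.
huggingface_estimator.fit({
    "train": "s3://your-bucket/train",  # placeholder S3 URIs
    "test": "s3://your-bucket/test",
})

# Deploy the trained model to a real-time endpoint and run a prediction.
predictor = huggingface_estimator.deploy(initial_instance_count=1, instance_type="ml.g4dn.xlarge")
print(predictor.predict({"inputs": "This workshop is great!"}))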

🧑🏻‍💻 Code Assets: https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_1_getting_started_with_amazon_sagemaker

📺 Youtube: https://www.youtube.com/watch?v=pYqjCzoyWyo&list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ&index=6&t=5s&ab_channel=HuggingFace

Workshop 2: Going Production: Deploying, Scaling & Monitoring Hugging Face Transformer models with Amazon SageMaker

In Workshop 2 you will learn how to use Amazon SageMaker to deploy, scale, and monitor your Hugging Face Transformer models for production workloads (see the autoscaling sketch after the list below). You will:

  • Run Batch Prediction on JSON files using a Batch Transform
  • Deploy a model from hf.co/models to Amazon SageMaker and run predictions
  • Configure autoscaling for the deployed model
  • Monitor the model to see avg. request time and set up alarms
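
As a flavor of the autoscaling part, here is a minimal sketch of registering a deployed endpoint with Application Auto Scaling and adding a target-tracking policy. The endpoint name, capacities, and target value are placeholders and not taken from the workshop notebooks.

import boto3

endpoint_name = "huggingface-workshop-endpoint"  # placeholder, use your endpoint name
resource_id = f"endpoint/{endpoint_name}/variant/AllTraffic"

autoscaling = boto3.client("application-autoscaling")

# Register the endpoint variant as a scalable target (1 to 4 instances).
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Scale out based on the number of invocations per instance.
autoscaling.put_scaling_policy(
    PolicyName="hf-workshop-invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 200.0,  # target invocations per instance per minute
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)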

🧑🏻‍💻 Code Assets: https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_2_going_production

📺 Youtube: https://www.youtube.com/watch?v=whwlIEITXoY&list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ&index=6&t=61s

Workshop 3: MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines

In Workshop 3 you will learn how to build an end-to-end MLOps pipeline for Hugging Face Transformers, from training to production, using Amazon SageMaker.

We are going to create an automated SageMaker Pipeline (a minimal skeleton is sketched after the list below) which:

  • processes a dataset and uploads it to S3
  • fine-tunes a Hugging Face Transformer model with the processed dataset
  • evaluates the model against an evaluation set
  • deploys the model if it performed better than a certain threshold
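
To make the structure concrete, here is a condensed skeleton of how such a pipeline can be assembled with the SageMaker Python SDK. It is a sketch, not the full workshop code: it assumes step_process, step_train, and step_eval are the processing, training, and evaluation steps defined as in the workshop notebook (with the property file attached to the evaluation step), step_register_and_deploy is a placeholder for the deployment/registration step(s), and the threshold and parameter list are illustrative.

from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo
from sagemaker.workflow.functions import JsonGet

# Evaluation metrics written by the evaluation step (e.g. evaluation.json);
# this PropertyFile must also be passed to that step via property_files=[...].
evaluation_report = PropertyFile(
    name="EvaluationReport",
    output_name="evaluation",
    path="evaluation.json",
)

# Only deploy the model if its accuracy beats the threshold.
cond_gte = ConditionGreaterThanOrEqualTo(
    left=JsonGet(
        step_name=step_eval.name,
        property_file=evaluation_report,
        json_path="eval_accuracy",
    ),
    right=0.8,  # illustrative threshold
)

step_cond = ConditionStep(
    name="CheckAccuracyAndDeploy",
    conditions=[cond_gte],
    if_steps=[step_register_and_deploy],
    else_steps=[],
)

pipeline = Pipeline(
    name="HuggingFaceDemoPipeline",
    parameters=[processing_instance_type, training_instance_type],  # plus the other parameters
    steps=[step_process, step_train, step_eval, step_cond],
    sagemaker_session=sagemaker_session,
)

pipeline.upsert(role_arn=role)  # create or update the pipeline
execution = pipeline.start()    # run it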

🧑🏻‍💻 Code Assets: https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_3_mlops

📺 Youtube: https://www.youtube.com/watch?v=XGyt8gGwbY0&list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ&index=7

Access Workshop AWS Account

For this workshop you’ll get access to a temporary AWS account already pre-configured with Amazon SageMaker Notebook Instances. Follow the steps in this section to log in to your AWS account and download the workshop material.

1. To get started, navigate to https://dashboard.eventengine.run/login and click on Accept Terms & Login.

2. Click on Email One-Time OTP (allow up to 2 minutes to receive the passcode).

3. Provide your email address.

4. Enter your OTP code.

5. Click on AWS Console.

6. Click on Open AWS Console.

7. In the AWS Console, click on Amazon SageMaker.

8. Click on Notebook and then on Notebook instances.

9. Create a new Notebook instance.

10. Configure the Notebook instance:

  • Make sure to increase the volume size of the notebook if you want to work with big models and datasets
  • Add your IAM role with permissions to run your SageMaker training and inference jobs
  • Add the workshop GitHub repository to the notebook to preload the notebooks: https://github.com/philschmid/huggingface-sagemaker-workshop-series.git

11. Open the workshop you want to do (e.g. workshop_1_getting_started_with_amazon_sagemaker/), select the pytorch kernel, and have fun!

huggingface-sagemaker-workshop-series's People

Contributors

merveenoyan, nateraw, philschmid


huggingface-sagemaker-workshop-series's Issues

'ParameterString' object has no attribute 'decode'


AttributeError Traceback (most recent call last)
in
----> 1 pipeline.upsert(role_arn=role)

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/pipeline.py in upsert(self, role_arn, description, tags, parallelism_config)
275 raise ValueError("An AWS IAM role is required to create or update a Pipeline.")
276 try:
--> 277 response = self.create(role_arn, description, tags, parallelism_config)
278 except ClientError as ce:
279 error_code = ce.response["Error"]["Code"]

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/pipeline.py in create(self, role_arn, description, tags, parallelism_config)
144 tags = _append_project_tags(tags)
145 tags = self.sagemaker_session._append_sagemaker_config_tags(tags, PIPELINE_TAGS_PATH)
--> 146 kwargs = self._create_args(role_arn, description, parallelism_config)
147 update_args(
148 kwargs,

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/pipeline.py in _create_args(self, role_arn, description, parallelism_config)
166 A keyword argument dict for calling create_pipeline.
167 """
--> 168 pipeline_definition = self.definition()
169 kwargs = dict(
170 PipelineName=self.name,

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/pipeline.py in definition(self)
352 def definition(self) -> str:
353 """Converts a request structure to string representation for workflow service calls."""
--> 354 request_dict = self.to_request()
355 self._interpolate_step_collection_name_in_depends_on(request_dict["Steps"])
356 request_dict["PipelineExperimentConfig"] = interpolate(

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/pipeline.py in to_request(self)
105 if self.pipeline_experiment_config is not None
106 else None,
--> 107 "Steps": build_steps(self.steps, self.name),
108 }
109

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/utilities.py in build_steps(steps, pipeline_name)
98 pipeline_name, step.name, get_code_hash(step), get_config_hash(step)
99 ):
--> 100 request_dicts.append(step.to_request())
101 return request_dicts
102

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/steps.py in to_request(self)
506 def to_request(self) -> RequestType:
507 """Updates the request dictionary with cache configuration."""
--> 508 request_dict = super().to_request()
509 if self.cache_config:
510 request_dict.update(self.cache_config.config)

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/steps.py in to_request(self)
350 def to_request(self) -> RequestType:
351 """Gets the request structure for ConfigurableRetryStep."""
--> 352 step_dict = super().to_request()
353 if self.retry_policies:
354 step_dict["RetryPolicies"] = self._resolve_retry_policy(self.retry_policies)

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/steps.py in to_request(self)
119 "Name": self.name,
120 "Type": self.step_type.value,
--> 121 "Arguments": self.arguments,
122 }
123 if self.depends_on:

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/steps.py in arguments(self)
485 request_dict = estimator.sagemaker_session.context.args
486 else:
--> 487 self.estimator._prepare_for_training(self.job_name)
488 train_args = _TrainingJob._get_train_args(
489 self.estimator, self.inputs, experiment_config=dict()

/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in _prepare_for_training(self, job_name)
3178 constructor if applicable.
3179 """
-> 3180 super(Framework, self)._prepare_for_training(job_name=job_name)
3181
3182 self._validate_and_set_debugger_configs()

/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in _prepare_for_training(self, job_name)
767 constructor if applicable.
768 """
--> 769 self._current_job_name = self._get_or_create_name(job_name)
770
771 # if output_path was specified we use it otherwise initialize here.

/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in _get_or_create_name(self, name)
741 return name
742
--> 743 self._ensure_base_job_name()
744 return name_from_base(self.base_job_name)
745

/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in _ensure_base_job_name(self)
722 self.base_job_name = (
723 self.base_job_name
--> 724 or get_jumpstart_base_name_if_jumpstart_model(self.source_dir, self.model_uri)
725 or base_name_from_image(
726 self.training_image_uri(), default_base_name=EstimatorBase.JOB_CLASS_NAME

/opt/conda/lib/python3.7/site-packages/sagemaker/jumpstart/utils.py in get_jumpstart_base_name_if_jumpstart_model(*uris)
255 """
256 for uri in uris:
--> 257 if is_jumpstart_model_uri(uri):
258 return constants.JUMPSTART_RESOURCE_BASE_NAME
259 return None

/opt/conda/lib/python3.7/site-packages/sagemaker/jumpstart/utils.py in is_jumpstart_model_uri(uri)
181
182 bucket = None
--> 183 if urlparse(uri).scheme == "s3":
184 bucket, _ = parse_s3_url(uri)
185

/opt/conda/lib/python3.7/urllib/parse.py in urlparse(url, scheme, allow_fragments)
365 Note that we don't break the components up in smaller bits
366 (e.g. netloc is a single string) and we don't expand % escapes."""
--> 367 url, scheme, _coerce_result = _coerce_args(url, scheme)
368 splitresult = urlsplit(url, scheme, allow_fragments)
369 scheme, netloc, url, query, fragment = splitresult

/opt/conda/lib/python3.7/urllib/parse.py in _coerce_args(*args)
121 if str_input:
122 return args + (_noop,)
--> 123 return _decode_args(args) + (_encode_result,)
124
125 # Result objects are more helpful than simple tuples

/opt/conda/lib/python3.7/urllib/parse.py in _decode_args(args, encoding, errors)
105 def _decode_args(args, encoding=_implicit_encoding,
106 errors=_implicit_errors):
--> 107 return tuple(x.decode(encoding, errors) if x else '' for x in args)
108
109 def _coerce_args(*args):

/opt/conda/lib/python3.7/urllib/parse.py in (.0)
105 def _decode_args(args, encoding=_implicit_encoding,
106 errors=_implicit_errors):
--> 107 return tuple(x.decode(encoding, errors) if x else '' for x in args)
108
109 def _coerce_args(*args):

AttributeError: 'ParameterString' object has no attribute 'decode'

AttributeError: 'ParameterString' object has no attribute 'startswith'

I am trying to run this tutorial as it is and always run into AttributeError: 'ParameterString' object has no attribute 'startswith', when json.loads(pipeline.definition()) is executed.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-32-d1b69f80bc6f> in <module>
      1 import json
      2 
----> 3 json.loads(pipeline.definition())

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/pipeline.py in definition(self)
    319     def definition(self) -> str:
    320         """Converts a request structure to string representation for workflow service calls."""
--> 321         request_dict = self.to_request()
    322         self._interpolate_step_collection_name_in_depends_on(request_dict["Steps"])
    323         request_dict["PipelineExperimentConfig"] = interpolate(

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/pipeline.py in to_request(self)
    103             if self.pipeline_experiment_config is not None
    104             else None,
--> 105             "Steps": list_to_request(self.steps),
    106         }
    107 

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/utilities.py in list_to_request(entities)
     51     for entity in entities:
     52         if isinstance(entity, Entity):
---> 53             request_dicts.append(entity.to_request())
     54         elif isinstance(entity, StepCollection):
     55             request_dicts.extend(entity.request_dicts())

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/steps.py in to_request(self)
    497     def to_request(self) -> RequestType:
    498         """Updates the request dictionary with cache configuration."""
--> 499         request_dict = super().to_request()
    500         if self.cache_config:
    501             request_dict.update(self.cache_config.config)

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/steps.py in to_request(self)
    349     def to_request(self) -> RequestType:
    350         """Gets the request structure for `ConfigurableRetryStep`."""
--> 351         step_dict = super().to_request()
    352         if self.retry_policies:
    353             step_dict["RetryPolicies"] = self._resolve_retry_policy(self.retry_policies)

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/steps.py in to_request(self)
    118             "Name": self.name,
    119             "Type": self.step_type.value,
--> 120             "Arguments": self.arguments,
    121         }
    122         if self.depends_on:

/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/steps.py in arguments(self)
    478             self.estimator._prepare_for_training(self.job_name)
    479             train_args = _TrainingJob._get_train_args(
--> 480                 self.estimator, self.inputs, experiment_config=dict()
    481             )
    482             request_dict = self.estimator.sagemaker_session._get_train_request(**train_args)

/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in _get_train_args(cls, estimator, inputs, experiment_config)
   2038         train_args = config.copy()
   2039         train_args["input_mode"] = estimator.input_mode
-> 2040         train_args["job_name"] = estimator._current_job_name
   2041         train_args["hyperparameters"] = hyperparameters
   2042         train_args["tags"] = estimator.tags

/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in training_image_uri(self, region)
   3034         trains the model, calls this method to find the hyperparameters.
   3035 
-> 3036         Returns:
   3037             dict[str, str]: The hyperparameters.
   3038         """

/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in get_training_image_uri(region, framework, framework_version, py_version, image_uri, distribution, compiler_config, tensorflow_version, pytorch_version, instance_type)
    497 
    498     if tensorflow_version is not None or pytorch_version is not None:
--> 499         processor = _processor(instance_type, ["cpu", "gpu"])
    500         is_native_huggingface_gpu = processor == "gpu" and not compiler_config
    501         container_version = "cu110-ubuntu18.04" if is_native_huggingface_gpu else None

/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in _processor(instance_type, available_processors, serverless_inference_config)
    366         )
    367 
--> 368     if instance_type.startswith("local"):
    369         processor = "cpu" if instance_type == "local" else "gpu"
    370     elif instance_type.startswith("neuron"):

AttributeError: 'ParameterString' object has no attribute 'startswith'

Converting the ParameterString values to str, like this,

pipeline = Pipeline(
    name=f"HuggingFaceDemoPipeline",
    parameters=[
        str(model_id),
        str(dataset_name),
        str(processing_instance_type),
        processing_instance_count,
        processing_script,
        training_entry_point,
        training_source_dir,
        training_instance_type,
        training_instance_count,
        evaluation_script,
        threshold_accuracy,
        epochs,
        eval_batch_size,
        train_batch_size,
        learning_rate,
        fp16
    ],
    steps=[step_process, step_train, step_eval, step_cond],
    sagemaker_session=sagemaker_session,
)

results in TypeError: Pipeline variables do not support __str__ operation. Please use `.to_string()` to convert it to string type in execution time or use `.expr` to translate it to Json for display purpose in Python SDK.

Is there a way to solve this?
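
One workaround that has worked in similar setups (an assumption on my side, not an official fix from this repository) is to keep ParameterString only for values that are passed through unchanged (for example hyperparameters), and to use a plain Python string for the training instance type, because the SDK resolves the training image at pipeline-definition time and calls .startswith() on it. Pinning the sagemaker SDK to the version used when the workshop was written may also avoid the issue.

# Hypothetical workaround sketch: plain str instead of ParameterString for the
# instance type, so the training image URI can be resolved when the pipeline is defined.
training_instance_type = "ml.p3.2xlarge"   # was: ParameterString(name="TrainingInstanceType", ...)
training_instance_count = 1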

Error occurred when I modify real-time inference to batch transform

Hello,
Thanks for your amazing work! I followed your code to build end-to-end Hugging Face Transformers with a SageMaker Pipeline. This code uses real-time inference and creates an endpoint, but I want to do batch transform inference instead.

My code looks like below.

from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.s3 import S3Uploader, s3_path_join
from sagemaker.inputs import TransformInput
from sagemaker.workflow.steps import TransformStep
from sagemaker.workflow.parameters import ParameterString

huggingfacemodel = HuggingFaceModel(
    model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,  # path to your trained SageMaker model
    role=role,
    transformers_version=_transformers_version,
    pytorch_version=_pytorch_version,
    py_version=_py_version,
    sagemaker_session=sagemaker_session,
)

output_s3_path = s3_path_join("s3://", 'sagemaker-us-east-1-7348***', "**y-datalab-modela/output")
transformer_instance_type = ParameterString(name="Fp16", default_value="False")
transformer = huggingfacemodel.transformer(
    instance_count=1,
    instance_type="ml.m5.4xlarge",
    output_path=output_s3_path,
    strategy='SingleRecord',
)

batch_data = ParameterString(
    name="BatchData",
    default_value='s3://sagemaker-us-east-1-73483***/**y-datalab-modela/test/stary_data.jsonl',
)
transform_step = TransformStep(
    name="Batch",
    transformer=transformer,
    inputs=TransformInput(
        data=batch_data,
        content_type='application/json',
        split_type="Line",
    )
)

But when I run the code, it returns the error: AttributeError: 'Properties' object has no attribute 'decode'.

Could you please help in this issue or could you please provide some sample code about how to use batch transform pipeline? Thank you!
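
For reference, one possible way to wire a batch transform into the pipeline (a sketch, not code from this repository) is to first create the model in a dedicated step and then build the Transformer from the created model name, so the TrainingStep properties are only resolved at execution time. It reuses step_train, role, output_s3_path, batch_data, and the version variables from the snippet above; instance types are placeholders, and newer SDK versions may prefer ModelStep over CreateModelStep.

from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.inputs import CreateModelInput, TransformInput
from sagemaker.transformer import Transformer
from sagemaker.workflow.steps import CreateModelStep, TransformStep

# Register the trained model as a SageMaker Model inside the pipeline.
huggingface_model = HuggingFaceModel(
    model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
    role=role,
    transformers_version=_transformers_version,
    pytorch_version=_pytorch_version,
    py_version=_py_version,
    sagemaker_session=sagemaker_session,
)

step_create_model = CreateModelStep(
    name="CreateHuggingFaceModel",
    model=huggingface_model,
    inputs=CreateModelInput(instance_type="ml.m5.xlarge"),
)

# Build the transformer from the model name created by the step above.
transformer = Transformer(
    model_name=step_create_model.properties.ModelName,
    instance_count=1,
    instance_type="ml.m5.4xlarge",
    output_path=output_s3_path,
    strategy="SingleRecord",
    sagemaker_session=sagemaker_session,
)

step_transform = TransformStep(
    name="Batch",
    transformer=transformer,
    inputs=TransformInput(
        data=batch_data,
        content_type="application/json",
        split_type="Line",
    ),
)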

KeyError Length during training following workshop MLOps

AlgorithmError: ExecuteUserScriptError: Command "/opt/conda/bin/python3.8 train.py --epochs 1 --eval_batch_size 64 --fp16 True --learning_rate 3e-5 --model_id distilbert-base-uncased --train_batch_size 32" Traceback (most recent call last): File "train.py", line 46, in train_dataset = load_from_disk(args.training_dir)

Error: We couldn't connect to 'https://huggingface.co/' to load this model and it looks like None is not the path to a directory conaining a config.json file. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'."

Hi,

In my training workflow I'm following the exact same process with a few small modifications, such as adding the network configs. If I run the same training code in a SageMaker default notebook, the pipeline runs fine without any issues.

However, if I include the same code base in a proper CI/CD setup, the training workflow fails with:

OSError: We couldn't connect to 'https://huggingface.co/' to load this model and it looks like None is not the path to a directory containing a config.json file. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

for train.py at the lines

model = AutoModelForSequenceClassification.from_pretrained(args.model_name)
tokenizer = AutoTokenizer.from_pretrained(args.model_name)

So my question is: how can I download the model offline, or otherwise make it available, in this situation?

container = sagemaker.image_uris.retrieve(framework="huggingface",
                                          region=boto3.Session().region_name,
                                          version=transformers_version,
                                          py_version=py_version,
                                          base_framework_version=f"pytorch{pytorch_version}",
                                          instance_type="ml.p3.2xlarge",
                                          image_scope="training",
                                          container_version="cu110-ubuntu18.04")

print(f"Image container {container}")
huggingface_estimator = HuggingFace(
    image_uri=container,
    entry_point="train.py",
    source_dir=BASE_DIR,
    base_job_name=base_job_prefix + "/training",
    instance_type=training_instance_type,
    instance_count=training_instance_count,
    role=role,
    transformers_version=transformers_version,
    pytorch_version=pytorch_version,
    py_version=py_version,
    hyperparameters={
        'epochs': epochs,
        'eval_batch_size': eval_batch_size,
        'train_batch_size': train_batch_size,
        'learning_rate': learning_rate,
        'model_id': model_id,
        'fp16': fp16
    },
    sagemaker_session=sagemaker_session,
    subnets=network_config.subnets,
    security_group_ids=network_config.security_group_ids,
    encrypt_inter_container_traffic=True,
    enable_network_isolation=False,
)

step_train = TrainingStep(
    name="TrainHuggingFaceModel",
    estimator=huggingface_estimator,
    inputs={
        "train": TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
                "train"
            ].S3Output.S3Uri
        ),
        "test": TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
                "test"
            ].S3Output.S3Uri
        ),
    },
    cache_config=cache_config,
)

These are my versions

transformers_version = "4.17.0"
pytorch_version = "1.10.2"
py_version = "py38"
model_id_ = "distilbert-base-uncased"
dataset_name_ = "imdb"
datasets[s3] = 1.18.4

Following the error message, I downloaded the model locally beforehand and referenced it in train.py as below:

model = AutoModelForSequenceClassification.from_pretrained(os.path.join(BASE_DIR, "distilbert-base-uncased-model"))
tokenizer = AutoTokenizer.from_pretrained(os.path.join(BASE_DIR, "distilbert-base-uncased-model"))

But I'm still getting the error message "OSError: We couldn't connect to 'https://huggingface.co/' to load this model and it looks like /opt/ml/code/distilbert-base-uncased-model is not the path to a directory containing a config.json file."
Is there something I'm missing? How can I copy the model offline to this path?
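
One approach that may help here (a sketch under assumptions, not verified against this repository) is to save both the model and the tokenizer into a directory inside BASE_DIR before the estimator packages source_dir, and to load them in train.py from a path relative to the script, so that config.json, the weights, and the tokenizer files are all present under /opt/ml/code inside the training container. The directory name is illustrative.

# On the machine that builds the pipeline (it has internet access):
import os
from transformers import AutoModelForSequenceClassification, AutoTokenizer

local_model_dir = os.path.join(BASE_DIR, "distilbert-base-uncased-model")  # ends up inside source_dir
AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased").save_pretrained(local_model_dir)
AutoTokenizer.from_pretrained("distilbert-base-uncased").save_pretrained(local_model_dir)

# In train.py, resolve the path relative to the script instead of using BASE_DIR:
script_dir = os.path.dirname(os.path.abspath(__file__))
model = AutoModelForSequenceClassification.from_pretrained(os.path.join(script_dir, "distilbert-base-uncased-model"))
tokenizer = AutoTokenizer.from_pretrained(os.path.join(script_dir, "distilbert-base-uncased-model"))

If the files are in place and the error persists, inspecting the uploaded sourcedir.tar.gz and setting TRANSFORMERS_OFFLINE=1 in the training job's environment can help narrow down whether the container is still trying to reach the Hub.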
