astronomer / astro-sdk

Astro SDK allows rapid and clean development of {Extract, Load, Transform} workflows using Python and SQL, powered by Apache Airflow.

Home Page: https://astro-sdk-python.rtfd.io/

License: Apache License 2.0

Topics: python, sql, pandas, airflow, sqlite, bigquery, postgres, snowflake, gcs, s3, etl, elt, dags, workflows, data-analysis, data-science, apache-airflow

astro-sdk's Introduction

astro

workflows made easy


The Astro Python SDK is a Python SDK for rapid development of extract, transform, and load workflows in Apache Airflow. It allows you to express your workflows as a set of data dependencies without having to worry about task ordering. The Astro Python SDK is maintained by Astronomer.

Prerequisites

  • Apache Airflow >= 2.1.0.

Install

The Astro Python SDK is available on PyPI. Use the standard Python installation tools.

To install a cloud-agnostic version of the SDK, run:

pip install astro-sdk-python

You can also install dependencies for using the SDK with popular cloud providers:

pip install astro-sdk-python[amazon,google,snowflake,postgres]

Quickstart

  1. Ensure that your Airflow environment is set up correctly by running the following commands:

    export AIRFLOW_HOME=`pwd`
    airflow db init

    Note:

    • AIRFLOW__CORE__ENABLE_XCOM_PICKLING no longer needs to be enabled from astro-sdk-python release 1.2 and above.
    • For Airflow < 2.5 and astro-sdk-python < 1.3, users can either use the custom XCom backend AstroCustomXcomBackend with XCom pickling disabled, or enable XCom pickling.
    • For Airflow >= 2.5 and astro-sdk-python >= 1.3.3, users can either use Airflow's XCom backend with XCom pickling disabled, or enable XCom pickling.

    The data format used by pickle is Python-specific. This has the advantage that there are no restrictions imposed by external standards such as JSON or XDR (which can’t represent pointer sharing); however it means that non-Python programs may not be able to reconstruct pickled Python objects.

    Read more: enable_xcom_pickling and pickle.
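
    For example, on a setup that still relies on pickling (older Airflow and SDK releases), you could toggle these settings with environment variables. This is a hedged sketch; the custom XCom backend module path below is an assumption and should be verified against the docs for your release:

    # Option 1: enable XCom pickling globally.
    export AIRFLOW__CORE__ENABLE_XCOM_PICKLING=True

    # Option 2: keep pickling disabled and point Airflow at the SDK's custom XCom backend.
    # (The module path is an assumption; check the docs for your astro-sdk-python version.)
    export AIRFLOW__CORE__XCOM_BACKEND=astro.custom_backend.astro_custom_backend.AstroCustomXcomBackend
    export AIRFLOW__CORE__ENABLE_XCOM_PICKLING=False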

  2. Create a SQLite database for the example to run with:

    # The sqlite_default connection has a different host on macOS vs. Linux
    export SQL_TABLE_NAME=`airflow connections get sqlite_default -o yaml | grep host | awk '{print $2}'`
    sqlite3 "$SQL_TABLE_NAME" "VACUUM;"
  3. Copy the following workflow into a file named calculate_popular_movies.py and add it to the dags directory of your Airflow project:

    from datetime import datetime

    from airflow import DAG

    from astro import sql as aql
    from astro.files import File
    from astro.sql.table import Table


    @aql.transform()
    def top_five_animations(input_table: Table):
        return """
            SELECT Title, Rating
            FROM {{input_table}}
            WHERE Genre1=='Animation'
            ORDER BY Rating desc
            LIMIT 5;
        """


    with DAG(
        "calculate_popular_movies",
        schedule_interval=None,
        start_date=datetime(2000, 1, 1),
        catchup=False,
    ) as dag:
        imdb_movies = aql.load_file(
            File(
                "https://raw.githubusercontent.com/astronomer/astro-sdk/main/tests/data/imdb.csv"
            ),
            output_table=Table(conn_id="sqlite_default"),
        )

        top_five_animations(
            input_table=imdb_movies,
            output_table=Table(name="top_animation"),
        )
        aql.cleanup()

    Alternatively, you can download calculate_popular_movies.py

     curl -O https://raw.githubusercontent.com/astronomer/astro-sdk/main/example_dags/calculate_popular_movies.py
  4. Run the example DAG:

    airflow dags test calculate_popular_movies `date -Iseconds`
  5. Check the result of your DAG by running:

    sqlite3 "$SQL_TABLE_NAME" "select * from top_animation;" ".exit"

    You should see the following output:

    $ sqlite3 "$SQL_TABLE_NAME" "select * from top_animation;" ".exit"
    Toy Story 3 (2010)|8.3
    Inside Out (2015)|8.2
    How to Train Your Dragon (2010)|8.1
    Zootopia (2016)|8.1
    How to Train Your Dragon 2 (2014)|7.9

Supported technologies

FileLocation
local
http
https
gs
gdrive
s3
wasb
wasbs
azure
sftp
ftp
FileType
csv
json
ndjson
parquet
xls
xlsx
Database
postgres
sqlite
delta
bigquery
snowflake
redshift
mssql
duckdb
mysql
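
As an illustration of how a file location, file type, and database combine, here is a minimal sketch that loads a CSV from S3 into Snowflake. The bucket, path, and connection IDs are placeholders, not part of the SDK:

from datetime import datetime
from airflow import DAG
from astro import sql as aql
from astro.files import File
from astro.sql.table import Table

# Hypothetical connection IDs and S3 path; replace with your own.
with DAG("load_s3_to_snowflake", schedule_interval=None,
         start_date=datetime(2023, 1, 1), catchup=False):
    aql.load_file(
        File("s3://my-bucket/data.csv", conn_id="aws_default"),
        output_table=Table(name="my_table", conn_id="snowflake_default"),
    )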

Available operations

The following are some key functions available in the SDK:

  • load_file: Load a given file into a SQL table
  • transform: Apply a SQL SELECT statement to a source table and save the result to a destination table
  • drop_table: Drop a SQL table
  • run_raw_sql: Run any SQL statement without handling its output
  • append: Insert rows from the source SQL table into the destination SQL table, if there are no conflicts
  • merge: Insert rows from the source SQL table into the destination SQL table, depending on conflicts:
    • ignore: Do not add rows that already exist
    • update: Replace existing rows with new ones
  • export_file: Export SQL table rows into a destination file
  • dataframe: Export a given SQL table into an in-memory Pandas DataFrame
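
A minimal sketch of how a few of these operations fit together; the table names, connection IDs, and keyword arguments are illustrative and may differ slightly between SDK releases:

from astro import sql as aql
from astro.sql.table import Table

@aql.transform()
def filter_orders(input_table: Table):
    # transform: run a SELECT against the input table and save the result to a new table.
    return "SELECT * FROM {{input_table}} WHERE amount > 100"

@aql.dataframe()
def summarize(df):
    # dataframe: the decorated function receives the table as a pandas DataFrame.
    return df.describe()

# Inside a DAG (connection and table names are placeholders):
# orders = Table(name="orders", conn_id="postgres_default")
# filtered = filter_orders(input_table=orders)
# aql.append(source_table=filtered, target_table=Table(name="orders_filtered", conn_id="postgres_default"))
# summary = summarize(filtered)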

For a full list of available operators, see the SDK reference documentation.

Documentation

The documentation is a work in progress; we aim to follow the Diátaxis system:

  • Getting Started Tutorial: A hands-on introduction to the Astro Python SDK
  • How-to guides: Simple step-by-step user guides to accomplish specific tasks
  • Reference guide: Commands, modules, classes and methods
  • Explanation: Clarification and discussion of key decisions when designing the project

Changelog

The Astro Python SDK follows semantic versioning for releases. Check the changelog for the latest changes.

Release management

To learn more about our release philosophy and steps, see Managing Releases.

Contribution guidelines

All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.

Read the Contribution Guideline for a detailed overview on how to contribute.

Contributors and maintainers should abide by the Contributor Code of Conduct.

License

Apache License 2.0

astro-sdk's People

Contributors

ahnsv, bhavaniravi, bolkedebruin, conorbev, deepsource-autofix[bot], dependabot[bot], dimberman, feluelle, fritz-astronomer, github-actions[bot], jlaneve, josh-fell, jwitz, kaxil, lee-w, mikeshwe, pankajastro, pankajkoti, pgzmnk, phanikumv, pre-commit-ci[bot], rajaths010494, scottleechua, sunank200, tatiana, thecodyrich, uranusjr, utkarsharma2, vatsrahul1001, vikramkoka


astro-sdk's Issues

Full support for BigQuery

Add support for the following functions for Google BigQuery:

aql.transform_file
aql.append
aql.merge
All validation checks

Benchmark `load_file` and `save_file`

Understand the current performance of load_file and save_file, using different datasets and databases.

Acceptance criteria:

  • Proposed datasets
  • Scripts that can be used by anyone to reproduce the results and re-run them
  • Current benchmark results

Issue due to removal of region from Snowflake URI

Version: astro==0.4.0

Context
The way we are building Snowflake URIs seems to be prone to errors.

Recently, a user had reported that Snowflake had deprecated the usage of the region in the host URI, and we made this change:
541038c#diff-4498bebc277444fc4d9662514a3a2e2eacdcdfd139e160abd6fdb16b0e43a854L31

Although the user was happy with the change, other users who had previously been able to use Astro successfully started experiencing the error:

snowflake.connector.errors.ForbiddenError: 250001 (08001): None: Failed to connect to DB. Verify the account name is correct: GP123.snowflakecomputing.com:443. 000403: 403: HTTP 403: Forbidden

The goal of this ticket is to resolve the issue so both users are able to use Astro, if possible removing any complexity within Astro.

Additional details

Based on the Snowflake documentation:

For example, if your account locator is xy12345:

    If the account is located in the AWS US West (Oregon) region, no additional segments are required and the URL would be xy12345.snowflakecomputing.com.

    If the account is located in the AWS US East (Ohio) region, additional segments are required and the URL would be xy12345.us-east-2.aws.snowflakecomputing.com.

One approach we can consider taking is to remove altogether the need for the TempSnowflakeHook class and trust the user to set the Snowflake host which works for them.
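
For example, under that approach a user would set the full host on their Airflow connection themselves; the account locator and credentials below are placeholders:

# Account in the default region: no extra segments in the host.
airflow connections add snowflake_default \
    --conn-type snowflake \
    --conn-host xy12345.snowflakecomputing.com \
    --conn-login my_user --conn-password 'my_password'

# Account in another region/cloud: include the extra segments, e.g.
#   --conn-host xy12345.us-east-2.aws.snowflakecomputing.com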

load_file & save_file from GCS/S3 to a Pandas data-frame

Acceptance criteria

  • A user should be able to load a file from GCS/S3 into a dataframe without needing any SQL database connection.
  • Does not need to be tested with large-scale data
  • The user should also be able to save the dataframe directly
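
A hedged sketch of what this could look like with the SDK's operators; the paths and connection IDs are placeholders, and exact parameter names may vary between releases:

from astro import sql as aql
from astro.files import File

# Inside a DAG: with no output_table, load_file is expected to yield a
# pandas DataFrame instead of writing to a SQL table.
raw_df = aql.load_file(File("s3://my-bucket/raw.csv", conn_id="aws_default"))

# Save the DataFrame straight back to object storage.
aql.export_file(
    input_data=raw_df,
    output_file=File("gs://my-bucket/processed.csv", conn_id="google_cloud_default"),
    if_exists="replace",
)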

Unable to load_file using parquet

Version: Astro 0.2.0
Python: 3.8, 3.9

Astro is unable to run the task load_file with a parquet file.

It raises the following exception:

  Traceback (most recent call last):
    File "pyarrow/io.pxi", line 1511, in pyarrow.lib.get_native_file
    File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pyarrow/util.py", line 99, in _stringify_path
      raise TypeError("not a path-like object")
  TypeError: not a path-like object
  
  During handling of the above exception, another exception occurred:
  
  Traceback (most recent call last):
    File "pyarrow/io.pxi", line 1517, in pyarrow.lib.get_native_file
    File "pyarrow/io.pxi", line 729, in pyarrow.lib.PythonFile.__cinit__
  TypeError: binary file expected, got text file
  
    warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))

load_file supports upserting

Concern:

I think that the first step for this ticket would be to propose what the API change would look like. The append and join APIs are already quite complicated, and I wouldn't want to push that complexity into the load_file API (which is a crucial endpoint).

Please comment below with an API proposal @tatiana

Create automated release process for astro packages

Acceptance Criteria

  1. We should have a RELEASES.txt that explains what has changed between each release
  2. We should have the ability to release alpha and beta packages to run tests before creating official releases
  3. GitHub CI should have the ability to automatically push artifacts to PyPI whenever the release version is bumped

Add basic support for Redshift

Acceptance criteria:

aql.load_file
aql.save_file
aql.transform
aql.truncate
astro.dataframe

Not required:
aql.transform_file
aql.append
aql.merge
All validation checks

For this first round, if the user attempts any of the "not required" functions, they will receive a warning that these functions are not yet supported

Users should be able to load, save, and run queries against Redshift using the AQL library. For this first round, we do not need to support validation checks or merge/append functions.

For this ticket, we expect implementation, test coverage, and documentation.

Only install dependencies the user needs

Currently, we make users install a LOT of dependencies (aws, google cloud, snowflake, etc.)

Users should have the ability to install only the dependencies they need,
e.g. "pip install astro[aws, snowflake]"

Acceptance criteria:

  • Should support sub-modules: gcs, aws, snowflake, sqlite (if supported)
  • Should fail gracefully if a user attempts to use a library that isn't installed (potentially including example install)
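
One common pattern for failing gracefully is a lazy import that points the user at the missing extra. A minimal sketch (the provider module and extra name are just examples):

def _get_snowflake_hook():
    """Import the Snowflake integration lazily and fail with an actionable hint."""
    try:
        from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook
    except ImportError as exc:
        raise ImportError(
            "The Snowflake integration is not installed. "
            "Try: pip install 'astro-sdk-python[snowflake]'"
        ) from exc
    return SnowflakeHook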

Add ability to set Role at runtime

Request from Guohui:

Guohui Gao 7:53 AM
yes, currently I have a dag to load data from gcs to snowflake, I use the load_file to load to a stage database and then merge to another database table. I can use the same database or schema so I don’t need to switch roles, just wondering if there is a way to specify the role so I don’t need to change the current dag logic.

Acceptance criteria:

  • User should be able to set "role" at runtime the same way they can set database and schema

Improve load_file and save_file speed for BigQuery

During testing, we found the load speed of dataframe -> postgres to be fairly slow. We believe that there should be a faster native way to load data into postgres.

Acceptance criteria:

  • A notable speed improvement in the postgres load and save numbers for load_file and save_file
  • Detailed instructions for users to add any needed extensions to their database that would enable these features (e.g. s3 support)

Add integrations and standardize Astro backend

In order to support a wide audience, we should add support for BigQuery and Redshift. We should also standardize our backend around SQLAlchemy to make it easier to add future systems.

Create a better package release process

We should fashion our release process after Apache Airflow's, where the CI/CD system can automatically push new packages to PyPI when the release version is bumped. We should also set a standard that releases include release notes, so users know what has changed between releases.

Add basic support for Google BigQuery

Acceptance criteria:

aql.load_file
aql.save_file
aql.transform
aql.truncate
astro.dataframe

Not required:
aql.transform_file
aql.append
aql.merge
All validation checks

For this first round, if the user attempts any of the "not required" functions, they will receive a warning that these functions are not yet supported

Users should be able to load, save, and run queries against BigQuery using the AQL library. For this first round, we do not need to support validation checks or merge/append functions.

For this ticket, we expect implementation, test coverage, and documentation.


load_file & save_file with datasets larger than worker resources

The current system of loading is limited to the size of a single dataframe. This of course will not scale to full production use-cases.

Proposed solution:

  1. By default, we can use smart_open to chunk the input file, and create smaller dataframes to push into the database
  2. Given a cloud data storage system (e.g. BQ or snowflake) we can create specific solutions around how those systems optimally load data

Acceptance criteria:

  • Should be able to load 100GB of data into BigQuery, Snowflake, and Redshift
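
A rough sketch of the chunking idea using pandas; the chunk size, file path, and DSN are placeholders, and this is not the SDK's actual implementation:

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost/mydb")  # placeholder DSN

# Stream the file in fixed-size chunks so memory use stays bounded instead of
# materialising the whole dataset as a single DataFrame.
for chunk in pd.read_csv("s3://my-bucket/huge.csv", chunksize=100_000):
    chunk.to_sql("target_table", engine, if_exists="append", index=False)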

Create E2E DAG examples

Acceptance criteria:

Create a series of working and runnable example DAGs that can introduce users to astro with:

  1. Snowflake
  2. GCP
  3. AWS
  4. Non-task API DAGs (mixing in traditional operators)
  5. With the render function
  6. With the append function

Full support for Redshift

Add support for the following functions for Redshift:

aql.transform_file
aql.append
aql.merge
All validation checks

Support postgres without a password

The current TempPostgresHook raises an error when a user's Postgres connection has no password, because of how the connection string is parsed:

File "/usr/local/lib/python3.9/site-packages/airflow/hooks/dbapi.py", line 118, in get_sqlalchemy_engine
    return create_engine(self.get_uri(), **engine_kwargs)
  File "/usr/local/lib/python3.9/site-packages/astro/sql/operators/temp_hooks.py", line 51, in get_uri
    login = f"{quote_plus(conn.login)}:{quote_plus(conn.password)}@"
  File "/usr/local/lib/python3.9/urllib/parse.py", line 887, in quote_plus
    string = quote(string, safe + space, encoding, errors)
  File "/usr/local/lib/python3.9/urllib/parse.py", line 871, in quote
    return quote_from_bytes(string, safe)
  File "/usr/local/lib/python3.9/urllib/parse.py", line 896, in quote_from_bytes
    raise TypeError("quote_from_bytes() expected bytes")
TypeError: quote_from_bytes() expected bytes

We should check whether the Postgres connection has a password and handle the passwordless case appropriately.
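
A sketch of the kind of handling that could fix this, building the auth segment only from the fields that are actually set (this is not the SDK's actual code):

from urllib.parse import quote_plus

def build_postgres_uri(conn):
    """Build a SQLAlchemy URI that tolerates a missing password (or login)."""
    auth = ""
    if conn.login:
        auth = quote_plus(conn.login)
        if conn.password:
            auth += ":" + quote_plus(conn.password)
        auth += "@"
    port = f":{conn.port}" if conn.port else ""
    return f"postgresql://{auth}{conn.host}{port}/{conn.schema}"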

Issue in `load_file` chunking

Version: astro-projects==0.4.0

At the moment, the load_file chunking feature is not working.

The consequence is that when trying to load a 279MB CSV file containing a GitHub timeline for a period of time, the job used over 40 GB of RAM and was killed:

[2022-02-07 13:02:29,939] {local_task_job.py:154} INFO - Task exited with return code Negsignal.SIGKILL

This PR solves the issue:
improve-chunking

Basic support for SQLite

Acceptance criteria:

aql.load_file
aql.save_file
aql.transform
aql.truncate
astro.dataframe

Not required:
aql.transform_file
aql.append
aql.merge
All validation checks

For this first round, if the user attempts any of the "not required" functions, they will receive a warning that these functions are not yet supported

Users should be able to load, save, and run queries against SQLite using the AQL library. For this first round, we do not need to support validation checks or merge/append functions.

For this ticket, we expect implementation, test coverage, and documentation

Improve the speed of load_file and save_file in Postgres

During testing, we found the load speed of dataframe -> postgres to be fairly slow. We believe that there should be a faster native way to load data into postgres.

Acceptance criteria:

  • A notable speed improvement in the postgres load and save numbers for load_file and save_file
  • Detailed instructions for users to add any needed extensions to their database that would enable these features (e.g. s3 support)

Syntax error with "-" characters

I'm currently running the following @aql.transform method in a DAG with the ID csv-to-postgres:

@aql.transform
def sample_create_table(input_table: Table):
    return "SELECT * FROM {input_table} LIMIT 10"

It seems as if the decorator is trying to use the DAG ID as my input_table and getting a psycopg2 error that rejects the - character:

psycopg2.errors.SyntaxError: syntax error at or near "-"
LINE 1: DROP TABLE IF EXISTS csv-to-postgres-lineage-2_sample_create...

It'd be great to build some resiliency to the - characters, as in my experience they're super common in DAG IDs.
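
One possible mitigation is to normalise the identifiers the SDK derives from DAG and task IDs before using them in SQL; a rough sketch, not the SDK's implementation:

import re

def safe_identifier(raw: str) -> str:
    """Replace characters Postgres rejects in unquoted identifiers with underscores."""
    return re.sub(r"[^A-Za-z0-9_]", "_", raw)

# safe_identifier("csv-to-postgres-lineage-2_sample_create")
# -> "csv_to_postgres_lineage_2_sample_create"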

load_file from multiple files [matching a pattern] in a directory

Acceptance criteria:

  • A user should be able to point to a list of files via a regex pattern, and all of those files can be loaded into a single SQL database
  • We will not handle merge conflicts. If there are multiple entries with the same key, we will surface the error when it is created.
  • Does not require a new API; this should just work if a user supplies a regex pattern to load_file
  • Notify @guohui-gao when this feature is ready

Standardize Astro backend around SQLAlchemy

In order to support a wide audience, we should add support for BigQuery and Redshift. To prepare for these integrations, we should change our backend to use SQLAlchemy, which has support for many databases.

This ticket involves changing the existing integrations to use SQLAlchemy; it does not involve adding new integrations.
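
The appeal of SQLAlchemy here is that one engine-creation code path covers every backend it has a dialect for; a minimal sketch:

from sqlalchemy import create_engine, text

# Only the URL changes per backend ("postgresql://...", "bigquery://..." with
# sqlalchemy-bigquery, "snowflake://..." with snowflake-sqlalchemy, and so on).
engine = create_engine("sqlite:///example.db")
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())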

Issue in `load_file` with some datasets in Snowflake

Version: astro==0.4.0

Problem
At the moment, we are unable to load the following dataset from Tate Gallery into Snowflake: https://github.com/tategallery/collection/blob/master/artwork_data.csv. The operation works using BQ and Postgres. I could not find any particular issue with the original dataset.

Exception:

  File "/home/tati/Code/astro-fresh/src/astro/sql/operators/agnostic_load_file.py", line 80, in execute
    move_dataframe_to_sql(
  File "/home/tati/Code/astro-fresh/src/astro/utils/load_dataframe.py", line 72, in move_dataframe_to_sql
    write_pandas(
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/snowflake/connector/pandas_tools.py", line 146, in write_pandas
    create_stage_sql = (
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pandas/util/_decorators.py", line 207, in wrapper
    return func(*args, **kwargs)
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pandas/core/frame.py", line 2677, in to_parquet
    return to_parquet(
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pandas/io/parquet.py", line 416, in to_parquet
    impl.write(
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pandas/io/parquet.py", line 173, in write
    table = self.api.Table.from_pandas(df, **from_pandas_kwargs)
  File "pyarrow/table.pxi", line 1561, in pyarrow.lib.Table.from_pandas
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 594, in dataframe_to_arrays
    arrays = [convert_column(c, f)
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 594, in <listcomp>
    arrays = [convert_column(c, f)
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 581, in convert_column
    raise e
  File "/home/tati/.virtualenvs/astro-py38/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 575, in convert_column
    result = pa.array(col, type=type_, from_pandas=True, safe=safe)
  File "pyarrow/array.pxi", line 302, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 83, in pyarrow.lib._ndarray_to_array
  File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: ("Could not convert '99' with type str: tried to convert to double", 'Conversion failed for column HEIGHT with type object')

How to reproduce

Download the dataset artwork_data.csv.

Update the tests/benchmark/config.json file to include a dataset similar to:

    {
        "name": "few_mb",
        "size": "24M",
        "path": "<path-to>/artwork_data.csv",
        "rows": 69201
    },

And a database that uses Snowflake.

From within the tests/benchmark folder, run:

./run.py --dataset=few_mb --database=snowflake

Initial analysis

The first step of load_file is to load the CSV into a Pandas data frame; in the case of this particular dataset, Pandas automagically assigns the following types per column:

(Pdb) df.dtypes
id                      int64
accession_number       object
artist                 object
artistRole             object
artistId                int64
title                  object
dateText               object
medium                 object
creditLine             object
year                   object
acquisitionYear       float64
dimensions             object
width                  object
height                 object
depth                 float64
units                  object
inscription            object
thumbnailCopyright     object
thumbnailUrl           object
url                    object
dtype: object

When analyzing the values within height, it is possible to see that there is a mixture of strings, floats, and nan:

(Pdb) len([i for i in df.height if isinstance(i, str)])
31330
(Pdb) len([i for i in df.height if not isinstance(i, str)])
37871

Why doesn't this happen for BQ & Postgres?

Because they are currently using a different strategy to write from the data frame into the table in the database:
https://github.com/astro-projects/astro/blob/4e63302bc5c69401b10568598c4ff738e21563f5/src/astro/utils/load_dataframe.py#L60-L95
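
For reference, one way to sidestep the mixed-type column when reading this dataset is to force an explicit dtype, so the Parquet conversion for the Snowflake stage never sees a str/float mixture. A hedged sketch, with column names taken from the analysis above:

import pandas as pd

# Forcing the problematic columns to strings avoids pyarrow's
# "tried to convert to double" error when the DataFrame is written to Parquet.
df = pd.read_csv("artwork_data.csv", dtype={"height": str, "width": str})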

Fix issue with inheritance in aql.render

There is an issue with aql.render where DAGs show up correctly in the graph view but do not run properly when tasks inherit from each other.

Acceptance criteria:

I should be able to create two sql files that tie to a real database, have one task inherit from another task, and be able to pass the table between those tasks.

Offer local mode that saves dataframes to /tmp directory

In order to allow for easier local development AND support for storing dataframes in NFS, a user should be able to store intermediate values locally

Acceptance criteria:

  • A user that does not supply credentials for S3 or GCS should have the ability to store their intermediate values in a local directory (defaulting to /tmp)

Support for SQLite

To support easier local development and to create simpler quickstart tutorials, we should support SQLite.

Acceptance criteria:

  • All current functions work against a SQLite database

Make Example DAGs work

Acceptance criteria:

  • There should be a CI/CD job that launches an Airflow instance, runs all example DAGs, and only passes if all DAGs succeed. If there is a failure, it should surface the logs for the failed task.
  • There should be at least 4 example DAGs, showcasing a variety of use-cases (Python-only, Python with SQL files, loading from S3, running validation checks).


Allow passing db context via op_kwargs

For queries where users don't want to pass a table object, this feature
will allow users to define context at runtime using op_kwargs.

example:

@aql.transform
def test_astro():
    return "SELECT * FROM actor"

with dag:
    actor_table = test_astro(database="pagile", conn_id="my_postgres_conn")

Unable to load_file when using nested NDJSON

Version: Astro 0.3.3

Astro is currently unable to load_file if the origin NDJSON is nested.

How to reproduce:

    github_nested_table = aql.load_file(
        path="gs://dag-authoring/github/github_nested_000000000007.ndjson",
        task_id="load_ndjson",
        #file_conn_id="google_cloud_default",
        output_table=Table(
            table_name="github_projects_counts_by_language",
            database="postgres",
            conn_id="postgres_conn"
        )
    )

Output:

Traceback (most recent call last):
  File "/home/tati/.virtualenvs/astro-private/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
    self.dialect.do_execute(
  File "/home/tati/.virtualenvs/astro-private/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
    cursor.execute(statement, parameters)
psycopg2.ProgrammingError: can't adapt type 'dict'

The above exception was the direct cause of the following exception:
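
Until nested NDJSON is supported, one workaround is to flatten the records before loading them. A minimal sketch with pandas; the file path is a placeholder:

import pandas as pd

# Read the newline-delimited JSON, then flatten nested objects into
# dot-separated columns so every value is a SQL-friendly scalar.
raw = pd.read_json("github_nested_000000000007.ndjson", lines=True)
flat = pd.json_normalize(raw.to_dict(orient="records"))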
