chetanxpro / nodejs-whisper

Node.js bindings for whisper.cpp, ggerganov's C++ port of OpenAI's Whisper, optimized for CPU inference.

Home Page: https://npmjs.com/nodejs-whisper

License: MIT License

JavaScript 25.16% TypeScript 74.84%
openai speech-recognition speech-to-text timestamp whisper whisper-nodejs nodejs-whisper ai cpp ml

nodejs-whisper's Introduction

nodejs-whisper

Node.js bindings for OpenAI's Whisper model.

MIT License

Features

  • Automatically converts audio to 16 kHz WAV, the sample rate the Whisper model expects
  • Outputs transcripts as .txt, .srt, or .vtt files
  • Optimized for CPU inference (including Apple Silicon ARM)
  • Word-level timestamp precision
  • Optionally splits on words rather than on tokens
  • Optionally translates from the source language to English
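The automatic WAV conversion is roughly equivalent to an ffmpeg invocation like the one built below. This is a sketch only: `buildFfmpegArgs` is an illustrative helper, and the exact flags the library passes internally may differ.

```javascript
// Sketch only: nodejs-whisper performs this conversion internally; the exact
// flags it uses may differ. buildFfmpegArgs is an illustrative helper.
function buildFfmpegArgs(inputPath, outputPath) {
  return [
    '-nostdin',           // never read from stdin
    '-i', inputPath,      // source audio in any format ffmpeg understands
    '-ar', '16000',       // resample to 16 kHz, the rate Whisper expects
    '-ac', '1',           // downmix to mono
    '-c:a', 'pcm_s16le',  // 16-bit signed PCM, the standard WAV codec
    outputPath,
  ];
}

console.log('ffmpeg ' + buildFfmpegArgs('talk.mp3', 'talk.wav').join(' '));
```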

Installation

  1. Install build tools (a C/C++ compiler and make are needed to compile whisper.cpp)

  sudo apt update
  sudo apt install build-essential

  2. Install nodejs-whisper with npm

  npm i nodejs-whisper

  3. Download a Whisper model

  npx nodejs-whisper download

  • NOTE: the make tool must be installed for the download step to compile whisper.cpp

Usage/Examples

import path from 'path'
import { nodewhisper } from 'nodejs-whisper'

// Need to provide exact path to your audio file.
const filePath = path.resolve(__dirname, 'YourAudioFileName')

await nodewhisper(filePath, {
	modelName: 'base.en', // name of a downloaded model
	autoDownloadModelName: 'base.en', // (optional) autodownload the model if it is not present
	verbose: false, // (optional) verbose logging
	removeWavFileAfterTranscription: false, // (optional) remove the intermediate WAV file after transcription
	withCuda: false, // (optional) use CUDA for faster processing
	whisperOptions: {
		outputInText: false, // get output result in txt file
		outputInVtt: false, // get output result in vtt file
		outputInSrt: true, // get output result in srt file
		outputInCsv: false, // get output result in csv file
		translateToEnglish: false, // translate from source language to English
		wordTimestamps: false, // word-level timestamps
		timestamps_length: 20, // amount of dialogue per timestamp pair
		splitOnWord: true, // split on word rather than on token
	},
})

// Model list
const MODELS_LIST = [
	'tiny',
	'tiny.en',
	'base',
	'base.en',
	'small',
	'small.en',
	'medium',
	'medium.en',
	'large-v1',
	'large',
]
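With `outputInSrt: true`, whisper.cpp writes a `.srt` file next to the input WAV. Reading the segments back is straightforward; here is a minimal sketch (`parseSrt` is an illustrative helper, not part of the nodejs-whisper API):

```javascript
// Illustrative helper, not part of the nodejs-whisper API: parse the .srt
// file that whisper.cpp writes next to the input WAV into segment objects.
function parseSrt(srt) {
  return srt
    .trim()
    .split(/\r?\n\r?\n/) // a blank line separates subtitle blocks
    .map(block => {
      const lines = block.split(/\r?\n/);
      // lines[0] is the sequence number, lines[1] the "start --> end" pair
      const [start, end] = lines[1].split(' --> ');
      return { start, end, text: lines.slice(2).join(' ') };
    });
}

const sample = [
  '1',
  '00:00:00,000 --> 00:00:02,500',
  'And so my fellow Americans',
  '',
  '2',
  '00:00:02,500 --> 00:00:04,000',
  'ask not',
].join('\n');

console.log(parseSrt(sample));
```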

Types

 interface IOptions {
	modelName: string
	verbose?: boolean
	removeWavFileAfterTranscription?: boolean
	withCuda?: boolean
	autoDownloadModelName?: string
	whisperOptions?: WhisperOptions
}

 interface WhisperOptions {
	outputInText?: boolean
	outputInVtt?: boolean
	outputInSrt?: boolean
	outputInCsv?: boolean
	translateToEnglish?: boolean
	timestamps_length?: number
	wordTimestamps?: boolean
	splitOnWord?: boolean
}
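Since `modelName` must be one of the names in `MODELS_LIST`, a small guard can fail fast before any transcription work begins. This is an illustrative sketch, not library code:

```javascript
// Illustrative guard: fail fast on a bad modelName before calling
// nodewhisper(). The list mirrors MODELS_LIST from the usage section above.
const VALID_MODELS = [
  'tiny', 'tiny.en', 'base', 'base.en',
  'small', 'small.en', 'medium', 'medium.en',
  'large-v1', 'large',
];

function assertValidModel(name) {
  if (!VALID_MODELS.includes(name)) {
    throw new Error(
      `Unknown model '${name}'. Expected one of: ${VALID_MODELS.join(', ')}`
    );
  }
  return name;
}
```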

Run Locally

Clone the project

  git clone https://github.com/ChetanXpro/nodejs-whisper

Go to the project directory

  cd nodejs-whisper

Install dependencies

  npm install

Start the server

  npm run dev

Build Project

  npm run build


Feedback

If you have any feedback, please reach out to us at [email protected]


nodejs-whisper's People

Contributors

chetanxpro · dependabot[bot] · explosion-scratch


nodejs-whisper's Issues

model download feature

Hello, could you add a way to download models with a single command, like this:

npx nodejs-whisper download small.en

It would be very useful.

Thanks a lot.
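The requested one-shot form could read the model name straight from argv; a hypothetical sketch follows (`parseDownloadArg` and its fallback behavior are assumptions for illustration, not the actual CLI code):

```javascript
// Hypothetical sketch of argv handling for `npx nodejs-whisper download small.en`.
// Not the actual CLI implementation.
function parseDownloadArg(argv) {
  // argv looks like ['node', '/path/to/cli.js', 'download', 'small.en']
  const [command, model] = argv.slice(2);
  if (command !== 'download') return null;
  // With no model argument, fall back to the existing interactive prompt.
  return model ?? 'interactive';
}
```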

[Bug] Error when fulfilled

I am using a 16000 Hz .wav file and tried transcribing it, but I get:

[Nodejs-whisper]  Executing command: ./main   -l auto -m ./models/ggml-small.bin  -f /home/wolf/develop/nodejs/okuuai/src/voice/test/1.wav  


[Nodejs-whisper] Transcribing Done!
/home/wolf/develop/nodejs/okuuai/node_modules/nodejs-whisper/dist/index.js:5
        function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
                                                         ^
Error: Something went wrong while executing the command.
    at /home/wolf/develop/nodejs/okuuai/node_modules/nodejs-whisper/src/index.ts:41:9
    at Generator.next (<anonymous>)
    at fulfilled (/home/wolf/develop/nodejs/okuuai/node_modules/nodejs-whisper/dist/index.js:5:58)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)

I'm using it inside a TS project (running Linux Mint)

Is there anything I could do about this?

Thanks

Model not Found

I am on a Mac and trying to use this in a nextjs project

Code
const filePath = path.join(tempDir, 'out.wav');
console.log(filePath);

// generate the transcript with whisper
const transcript = await nodewhisper(filePath, {
    modelName: 'base.en', // downloaded model name
    autoDownloadModelName: 'base.en', // (optional) autodownload a model if model is not present
    whisperOptions: {
        outputInText: true, // get output result in txt file
        outputInVtt: false, // get output result in vtt file
        outputInSrt: false, // get output result in srt file
        outputInCsv: false, // get output result in csv file
        translateToEnglish: false, // translate from source language to english
        wordTimestamps: false, // Word-level timestamps
        timestamps_length: 20, // amount of dialogue per timestamp pair
        splitOnWord: true, // split on word rather than on token
    },
});

Error:


cd: no such file or directory: /Users/dylanb/Documents/Github/StudyMan/studyapp/.next/server/cpp/whisper.cpp/models
[Nodejs-whisper] Autodownload Model: base


chmod: File not found: /Users/dylanb/Documents/Github/StudyMan/download-ggml-model.sh
node:internal/modules/cjs/loader:1078
  throw err;
  ^

Error: Cannot find module '/Users/dylanb/Documents/Github/StudyMan/studyapp/.next/server/vendor-chunks/exec-child.js'
    at Module._resolveFilename (node:internal/modules/cjs/loader:1075:15)
    at Module._load (node:internal/modules/cjs/loader:920:27)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
    at node:internal/main/run_main_module:23:47 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}

Node.js v18.16.0
[Nodejs-whisper] Attempting to compile model...

node:internal/modules/cjs/loader:1078
  throw err;
  ^

Error: Cannot find module '/Users/dylanb/Documents/Github/StudyMan/studyapp/.next/server/vendor-chunks/exec-child.js'
    at Module._resolveFilename (node:internal/modules/cjs/loader:1075:15)
    at Module._load (node:internal/modules/cjs/loader:920:27)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
    at node:internal/main/run_main_module:23:47 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}

Node.js v18.16.0
[Nodejs-whisper]  Transcribing file: /var/folders/wm/6p8gkm_x6b17rlvy4178hql00000gn/T/out.wav

[Nodejs-whisper] Error: Models do not exist. Please Select a downloaded model.

Error: [Nodejs-whisper] Error: Model not found
    at constructCommand (webpack-internal:///(rsc)/./node_modules/nodejs-whisper/dist/WhisperHelper.js:33:15)
    at eval (webpack-internal:///(rsc)/./node_modules/nodejs-whisper/dist/index.js:53:62)
    at Generator.next (<anonymous>)
    at fulfilled (webpack-internal:///(rsc)/./node_modules/nodejs-whisper/dist/index.js:11:32)

Here is my log of running the model download command

(base) Dylans-MacBook-Air:studyapp dylanb$ npx nodejs-whisper download
[Nodejs-whisper] Models do not exist. Please Select a model to download.


| Model     | Disk   | RAM     |
|-----------|--------|---------|
| tiny      |  75 MB | ~390 MB |
| tiny.en   |  75 MB | ~390 MB |
| base      | 142 MB | ~500 MB |
| base.en   | 142 MB | ~500 MB |
| small     | 466 MB | ~1.0 GB |
| small.en  | 466 MB | ~1.0 GB |
| medium    | 1.5 GB | ~2.6 GB |
| medium.en | 1.5 GB | ~2.6 GB |
| large-v1  | 2.9 GB | ~4.7 GB |
| large     | 2.9 GB | ~4.7 GB |


[Nodejs-whisper] Enter model name (e.g. 'tiny.en') or 'cancel' to exit
(ENTER for tiny.en): base.en
Downloading ggml model base.en from 'https://huggingface.co/ggerganov/whisper.cpp' ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1204  100  1204    0     0   3111      0 --:--:-- --:--:-- --:--:--  3119
100  141M  100  141M    0     0  8405k      0  0:00:17  0:00:17 --:--:-- 9733k
Done! Model 'base.en' saved in 'models/ggml-base.en.bin'
You can now use it like this:

  $ ./main -m models/ggml-base.en.bin -f samples/jfk.wav

[Nodejs-whisper] Attempting to compile model...

sysctl: unknown oid 'hw.optional.arm64'
I whisper.cpp build info: 
I UNAME_S:  Darwin
I UNAME_P:  i386
I UNAME_M:  x86_64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_DARWIN_C_SOURCE -pthread -mf16c -mfma -mavx -mavx2 -DGGML_USE_ACCELERATE
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_DARWIN_C_SOURCE -pthread
I LDFLAGS:   -framework Accelerate
I CC:       Apple clang version 12.0.5 (clang-1205.0.22.11)
I CXX:      Apple clang version 12.0.5 (clang-1205.0.22.11)

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_DARWIN_C_SOURCE -pthread -mf16c -mfma -mavx -mavx2 -DGGML_USE_ACCELERATE   -c ggml.c -o ggml.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_DARWIN_C_SOURCE -pthread -c whisper.cpp -o whisper.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_DARWIN_C_SOURCE -pthread examples/main/main.cpp examples/common.cpp examples/common-ggml.cpp ggml.o whisper.o -o main  -framework Accelerate
./main -h

usage: ./main [options] file0.wav file1.wav ...

options:
  -h,        --help              [default] show this help message and exit
  -t N,      --threads N         [4      ] number of threads to use during computation
  -p N,      --processors N      [1      ] number of processors to use during computation
  -ot N,     --offset-t N        [0      ] time offset in milliseconds
  -on N,     --offset-n N        [0      ] segment index offset
  -d  N,     --duration N        [0      ] duration of audio to process in milliseconds
  -mc N,     --max-context N     [-1     ] maximum number of text context tokens to store
  -ml N,     --max-len N         [0      ] maximum segment length in characters
  -sow,      --split-on-word     [false  ] split on word rather than on token
  -bo N,     --best-of N         [2      ] number of best candidates to keep
  -bs N,     --beam-size N       [-1     ] beam size for beam search
  -wt N,     --word-thold N      [0.01   ] word timestamp probability threshold
  -et N,     --entropy-thold N   [2.40   ] entropy threshold for decoder fail
  -lpt N,    --logprob-thold N   [-1.00  ] log probability threshold for decoder fail
  -su,       --speed-up          [false  ] speed up audio by x2 (reduced accuracy)
  -tr,       --translate         [false  ] translate from source language to english
  -di,       --diarize           [false  ] stereo audio diarization
  -tdrz,     --tinydiarize       [false  ] enable tinydiarize (requires a tdrz model)
  -nf,       --no-fallback       [false  ] do not use temperature fallback while decoding
  -otxt,     --output-txt        [false  ] output result in a text file
  -ovtt,     --output-vtt        [false  ] output result in a vtt file
  -osrt,     --output-srt        [false  ] output result in a srt file
  -olrc,     --output-lrc        [false  ] output result in a lrc file
  -owts,     --output-words      [false  ] output script for generating karaoke video
  -fp,       --font-path         [/System/Library/Fonts/Supplemental/Courier New Bold.ttf] path to a monospace font for karaoke video
  -ocsv,     --output-csv        [false  ] output result in a CSV file
  -oj,       --output-json       [false  ] output result in a JSON file
  -of FNAME, --output-file FNAME [       ] output file path (without file extension)
  -ps,       --print-special     [false  ] print special tokens
  -pc,       --print-colors      [false  ] print colors
  -pp,       --print-progress    [false  ] print progress
  -nt,       --no-timestamps     [false  ] do not print timestamps
  -l LANG,   --language LANG     [en     ] spoken language ('auto' for auto-detect)
  -dl,       --detect-language   [false  ] exit after automatically detecting language
             --prompt PROMPT     [       ] initial prompt
  -m FNAME,  --model FNAME       [models/ggml-base.en.bin] model path
  -f FNAME,  --file FNAME        [       ] input WAV file path
  -oved D,   --ov-e-device DNAME [CPU    ] the OpenVINO device used for encode inference

c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_DARWIN_C_SOURCE -pthread examples/bench/bench.cpp ggml.o whisper.o -o bench  -framework Accelerate
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_DARWIN_C_SOURCE -pthread examples/quantize/quantize.cpp examples/common.cpp examples/common-ggml.cpp ggml.o whisper.o -o quantize  -framework Accelerate

Getting empty result from nodewhisper

I tried speech-to-text with nodejs-whisper; the code below runs successfully, and the log shows the message [Nodejs-whisper] Transcribing Done!

What do I do next to get the text from the speech? My code is below, and the result comes back empty.

const result = await nodewhisper(filePath, {
      modelName: "base.en", //Downloaded models name
      autoDownloadModelName: "base.en", // (optional) autodownload a model if model is not present
      whisperOptions: {
        outputInText: true, // get output result in txt file
        outputInVtt: true, // get output result in vtt file
        outputInSrt: true, // get output result in srt file
        outputInCsv: true, // get output result in csv file
        translateToEnglish: true, //translate from source language to english
        wordTimestamps: true, // Word-level timestamps
        timestamps_length: 20, // amount of dialogue per timestamp pair
        splitOnWord: true, //split on word rather than on token
      },
    });
    // process.chdir(originalDirectory);
    console.log("Transcribing result:", result);

Transcript coming back empty

Transcribing finishes immediately and nothing is returned.
[screenshot]
test.wav is a valid audio file, yet nothing is coming back.
I have downloaded tiny.en.
What am I missing?

Question

Maybe a dumb question, but does this keep the model in memory for continuous calls, or does it need to load it every time?

'make' command failed

> [email protected] start
> npx tsx src/index.ts

[Nodejs-whisper]  Transcribing file: F:\Projects\Whisper\karasmsk.25.mp4

[Nodejs-whisper]  Converting audio to wav File Type...

[Nodejs-whisper] whisper.cpp not initialized. F:\Projects\Whisper\node_modules\nodejs-whisper\dist
[Nodejs-whisper] Attempting to run 'make' command in /whisper directory...
process_begin: CreateProcess(NULL, uname -s, ...) failed.
Makefile:4: pipe: No error
process_begin: CreateProcess(NULL, uname -p, ...) failed.
Makefile:8: pipe: No error
process_begin: CreateProcess(NULL, uname -m, ...) failed.
Makefile:12: pipe: No error
process_begin: CreateProcess(NULL, which nvcc, ...) failed.
Makefile:16: pipe: No error
'cc' is not recognized as an internal or external command,
operable program or batch file.
'g++' is not recognized as an internal or external command,
operable program or batch file.
I whisper.cpp build info:
I UNAME_S:
I UNAME_P:
I UNAME_M:
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -mfma -mf16c -mavx -mavx2
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC
I LDFLAGS:
I CC:
I CXX:

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -mfma -mf16c -mavx -mavx2   -c ggml.c -o ggml.o
process_begin: CreateProcess(NULL, cc -I. -O3 -DNDEBUG -std=c11 -fPIC -mfma -mf16c -mavx -mavx2 -c ggml.c -o ggml.o, ...) failed.
make (e=2): The system cannot find the file specified.
make: *** [Makefile:259: ggml.o] Error 2
 [Nodejs-whisper] 'make' command failed. Please run 'make' command in /whisper.cpp directory. Current shelljs directory:  F:\Projects\Whisper\node_modules\nodejs-whisper\dist
PS F:\Projects\Whisper>

Error when installing on Windows

[screenshot]

main.exe -m C:\Developer\Pruebas\whipser-test\node_modules\nodejs-whisper\cpp\whisper.cpp\models\ggml-tiny.bin -f C:\Developer\Pruebas\whipser-test\node_modules\nodejs-whisper\cpp\whisper.cpp\samples\jfk.wav
[Nodejs-whisper] Attempting to compile model...

'cc' is not recognized as an internal or external command,
operable program or batch file.
'head' is not recognized as an internal or external command,
operable program or batch file.
I whisper.cpp build info:
I UNAME_S:
I UNAME_P:
I UNAME_M:
I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -D_XOPEN_SOURCE=600 -pthread
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -pthread
I LDFLAGS:
I CC:
I CXX:

cc -I. -O3 -DNDEBUG -std=c11 -fPIC -D_XOPEN_SOURCE=600 -pthread -c ggml.c -o ggml.o
process_begin: CreateProcess(NULL, uname -s, ...) failed.
process_begin: CreateProcess(NULL, uname -p, ...) failed.
process_begin: CreateProcess(NULL, uname -m, ...) failed.
process_begin: CreateProcess(NULL, which nvcc, ...) failed.
Makefile:299: recipe for target 'ggml.o' failed
process_begin: CreateProcess(NULL, cc -I. -O3 -DNDEBUG -std=c11 -fPIC -D_XOPEN_SOURCE=600 -pthread -c ggml.c -o ggml.o, ...) failed.
make (e=2): The system cannot find the file specified.
make: *** [ggml.o] Error 2

Current directory remains whisper.cpp after nodewhisper

Thanks for the project! It's very easy to integrate with NodeJS.

I encounter an issue, though: the current directory stays changed even after the nodewhisper call.

Example code:

  await nodewhisper(filePath, {
    modelName: "tiny", //Downloaded models name
    autoDownloadModelName: "tiny", // (optional) autodownload a model if model is not present
    whisperOptions: {
      outputInText: false, // get output result in txt file
      outputInVtt: false, // get output result in vtt file
      outputInSrt: true, // get output result in srt file
      outputInCsv: true, // get output result in csv file
      translateToEnglish: false, //translate from source language to english
      wordTimestamps: false, // Word-level timestamps
      timestamps_length: 60, // amount of dialogue per timestamp pair
      splitOnWord: true, //split on word rather than on token
    },
  });
  console.log(readdirSync("./"));

Expected:
Showing results in the current dir where I executed the JS

What it shows:

Files under node_modules/.pnpm/[email protected]/node_modules/nodejs-whisper/cpp/whisper.cpp/
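Until the library restores the working directory itself, wrapping the call and restoring `process.cwd()` afterwards works around this. A sketch (`withPreservedCwd` is an illustrative helper, not part of nodejs-whisper):

```javascript
// Workaround sketch: preserve the caller's working directory across a call
// that may chdir internally (as nodewhisper does into cpp/whisper.cpp/).
async function withPreservedCwd(fn) {
  const originalCwd = process.cwd();
  try {
    return await fn();
  } finally {
    process.chdir(originalCwd); // restore even if fn throws
  }
}
```

Usage would look like `const result = await withPreservedCwd(() => nodewhisper(filePath, options));`.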

Model download issue

I get this issue on WSL2 when downloading the model:

[Nodejs-whisper] Models do not exist. Please Select a model to download.


| Model     | Disk   | RAM     |
|-----------|--------|---------|
| tiny      |  75 MB | ~390 MB |
| tiny.en   |  75 MB | ~390 MB |
| base      | 142 MB | ~500 MB |
| base.en   | 142 MB | ~500 MB |
| small     | 466 MB | ~1.0 GB |
| small.en  | 466 MB | ~1.0 GB |
| medium    | 1.5 GB | ~2.6 GB |
| medium.en | 1.5 GB | ~2.6 GB |
| large-v1  | 2.9 GB | ~4.7 GB |
| large     | 2.9 GB | ~4.7 GB |

C:\Windows\system32\cmd.exe [16332]: c:\ws\src\node_file.cc:1920: Assertion `(argc) == (5)' failed. 1: 00007FF683072BCF node_api_throw_syntax_error+175519
 2: 00007FF682FF83A6 SSL_get_quiet_shutdown+64006
 3: 00007FF682FF8782 SSL_get_quiet_shutdown+64994
 4: 00007FF682FECA17 SSL_get_quiet_shutdown+16503
 5: 00007FF683A5895D v8::internal::Builtins::code+248237
 6: 00007FF683A58569 v8::internal::Builtins::code+247225
 7: 00007FF683A5882C v8::internal::Builtins::code+247932
 8: 00007FF683A58690 v8::internal::Builtins::code+247520
 9: 00007FF683B3D471 v8::internal::SetupIsolateDelegate::SetupHeap+558449
10: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
11: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
12: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
13: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
14: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
15: 00007FF683AF7D8B v8::internal::SetupIsolateDelegate::SetupHeap+274059
16: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
17: 00007FF683B8C182 v8::internal::SetupIsolateDelegate::SetupHeap+881282
18: 00007FF683ABE740 v8::internal::SetupIsolateDelegate::SetupHeap+38976
19: 00007FF683BDB873 v8::internal::SetupIsolateDelegate::SetupHeap+1206643
20: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
21: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
22: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
23: 00007FF683AF7D8B v8::internal::SetupIsolateDelegate::SetupHeap+274059
24: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
25: 00007FF683B8C182 v8::internal::SetupIsolateDelegate::SetupHeap+881282
26: 00007FF683ABE740 v8::internal::SetupIsolateDelegate::SetupHeap+38976
27: 00007FF683BDB873 v8::internal::SetupIsolateDelegate::SetupHeap+1206643
28: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
29: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
30: 00007FF683AC0D84 v8::internal::SetupIsolateDelegate::SetupHeap+48772
31: 00007FF603CAA55A
[nodemon] app crashed - waiting for file changes before starting...

I investigated the source of the problem; it is probably this line of code in downloadModel.ts, in the askForModel function:

const answer = await readlineSync.question(
		`\n[Nodejs-whisper] Enter model name (e.g. 'tiny.en') or 'cancel' to exit\n(ENTER for tiny.en): `
	)
