
koboldai / koboldai-client


Home Page: https://koboldai.com

License: GNU Affero General Public License v3.0

Python 57.91% Batchfile 0.61% CSS 5.60% JavaScript 11.34% HTML 1.75% Dockerfile 0.07% Shell 0.91% Lua 7.93% Jupyter Notebook 3.25% Haxe 0.30% PowerShell 0.18% Less 3.49% SCSS 3.49% Stylus 3.18%

koboldai-client's People

Contributors

adcar, crataco, db0, ebolam, gouvernathor, henk717, ioncorimenia, javalar, jojorne, koboldai, lightsaveus, marcusllewellyn, mrreplikant, mrseeker, nolialsea, one-some, pi6am, rahulmb, recoveredapparatus, relys, scott-ca, scythe000, smolbleat, uwuplus, vfbd, waffshappen, wbrown, yellowrosecx, zurnaz


koboldai-client's Issues

Editing text in the output screen

The current way of editing text is still a bit clunky. Could you make it so that you don't need to press an Edit button? The mouse would highlight story chunks in normal operation mode too, but instead of opening a separate editor for those chunks, they would be edited directly in the output window.

Moving towards a dedicated multiplayer server implementation

First of all, great work. This project seems to be very promising in building a custom, self-hosted AIDungeon-type game.

I would like to run this as a simple web service for my friends, in order to play D&D during Covid. The following are some things which I believe are more or less necessary for this:

  • Remove the TK UI and instead use command line flags or a configuration file
  • In the same vein, make the server process completely non-interactive / headless
  • Modify the server to listen on 0.0.0.0 instead of localhost only
  • Modify the application Javascript to open the socket on the correct host instead of localhost
  • Add a name / account / session system for multiplayer games
  • Modify the websocket server to broadcast to all connected clients (use socketio.emit instead of emit)
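
A minimal sketch of the broadcast change from the last bullet, using Flask-SocketIO (event and payload names are illustrative):

from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on('message')
def get_message(msg):
    # emit() replies only to the client that sent the event;
    # socketio.emit() with no room argument broadcasts to every
    # connected client, which is what a multiplayer session needs.
    emit('from_server', {'cmd': 'updatescreen', 'data': msg})           # one client
    socketio.emit('from_server', {'cmd': 'updatescreen', 'data': msg})  # all clients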

A few things I also found could be useful:

  • Dedicated WSGI deployment (with a dedicated websocket module, e.g. eventlet)
  • Better multithreading support, currently the AI process hangs the webserver
  • Docker support
  • More UI options for verb selection
  • World info support
  • Proper truncation of responses
  • Code cleanup, maybe split into various files / classes

Currently, I am testing using my fork, but I'd like to upstream any contributions eventually. However, I thought I'd first get your opinion on these ideas; any criticism is welcome.

By the way, most models running on pytorch run on AMD cards just as well using ROCm.

Edit: The 0.16 version addressed the most important points, see comments. I'll leave this issue open however to track further improvement of the standalone server.

Syntax error in aiserver.py

When I try to run play.bat, I get the following error:

A:\Projects\KoboldAI>aiserver.py
  File "A:\Projects\KoboldAI\aiserver.py", line 117
    print("{0}Looking for GPU support...{1}".format(colors.HEADER, colors.ENDC), end="")
                                                                                    ^
SyntaxError: invalid syntax

I'm using Python 3.8.10 on Windows 10.

Shinen model won't launch

Trying to open the Shinen model in Colab gives me:
OSError: /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_global_deps.so: cannot open shared object file: No such file or directory

Bug: "Retry" deletes multiple steps of the story!

My Settings:

  • Model GPT Neo 2.7B
  • Temperature 0.9
  • Top P Sampling 0.8
  • Repetition Penalty 2
  • Amount to Generate 60
  • Max Tokens 2048
  • Gens Per Action 1
  • W Info Depth 5
  • Always Add Prompt YES
  • Trim Incomplete Sentences YES
  • Remove Blank Lines YES
  • Remove Special Characters NO
  • Add Sentence Spacing YES

Steps to Reproduce the Bug:

  1. First use my settings above.
  2. Type "1." in the prompt and submit it (this is just a random starter seed for the story).
  3. Press Submit a few times to generate multiple steps of AI output in the story.
  4. Change the "Gens Per Action" setting to 2 or higher.
  5. Submit again (blank line) to see two suggestions.
  6. Click Retry to re-generate those two suggestions. KoboldAI will now DELETE steps of the story.
  7. Every time you click Retry it deletes more and more steps of the story.
  8. This bug makes the "Gens Per Action" feature unusable at the moment, but it's a very cool feature so I look forward to using it in the future! :-) ❤️

How to clean up the cache to save space

I decided to use the full-precision setting today, and it drastically lowered the amount of space left on my hard drive. I'm assuming it's storing something on my computer somewhere, as I didn't get the space back even after terminating the session. Help would be appreciated in cleaning up the space the full-precision files took up.

Autofill / templating?

How about extending the model with Jinja-style autocompletion, or rather, autofill? By inserting {{templates}} in the text, it would also generate output inside those, not just at the end of the input.

Ability to run without model

I am just using KoboldAI to build a WI index for a scenario, and don't require the AI to be functional. Is there a way to run it without loading a model?

Notebook automatically stops by performing ctrl-c

While attempting to start up the KoboldAI notebook, it ran for 10 minutes and then sent itself a ^C. No keystrokes were typed on the user side.

Welcome to the KoboldAI Client!
Select an AI model to continue:

Welcome to ColabKobold! The easiest way to run KoboldAI! We will now load the AI, once its done you will see a message to refresh the cloudflare page.
Looking for GPU support...FOUND!
Initializing Flask... OK!
Initializing transformers, please wait...
2021-07-21 12:34:24.124339: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
tcmalloc: large alloc 5302616064 bytes == 0x5605c8c30000 @  0x7f917b37cb6b 0x7f917b39c379 0x7f912076a25e 0x7f912076b9d2 0x7f91627858ed 0x7f91733a7280 0x7f9172fe5d39 0x56043d6dfbf8 0x56043d7536f2 0x56043d74e235 0x56043d6e073a 0x56043d74eb0e 0x56043d74e235 0x56043d6e034b 0x56043d6dfe59 0x56043d82725d 0x56043d796c3b 0x56043d6def01 0x56043d7d0c0d 0x56043d7530d8 0x56043d74e235 0x56043d61fe2c 0x56043d750318 0x56043d74dc35 0x56043d6e073a 0x56043d74f93b 0x56043d74dc35 0x56043d6e073a 0x56043d752f40 0x56043d74dc35 0x56043d74d933
^C

Possibly tensorflow/tensorflow#33255
tensorflow/models#7652
huggingface/transformers#4668

ValueError

(base) D:\KoboldAI-Client-main>
  File "K:\python\lib\site-packages\transformers\file_utils.py", line 1420, in cached_path
ValueError: unable to parse D:/KoboldAI-Client-main\config.json as a URL or as a local path

I first tried using the temporary K: disk, then the client folder. Both gave the same error.

Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found

I've followed the setup instructions, and then run through the instructions to enable GPU support. I have a GTX 1080, for reference. I installed CUDA 10.2, which you linked to in your instructions, along with the 2 updates they provided for it. During that installation, I did disable "Geforce Experience" (don't want to use it) and "Graphics Drivers" (I'd rather update those separately) from its installation list, only having it install everything under the CUDA section.
I then got the command line for installing PyTorch with CUDA 10.2 support at that link you provided, which turned out to be:
pip3 install torch==1.8.1+cu102 torchvision==0.9.1+cu102 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
I ran that, which looked to have replaced the existing PyTorch which didn't have CUDA support.

However, I get an error when running KoboldAI which indicates it's not loading CUDA correctly and thus not using my GPU. Example output:

Model #> 3
Looking for GPU support...FOUND!
Use GPU or CPU for generation?: (Default GPU)

1 - GPU
2 - CPU

Mode> 1
Initializing Flask... OK!
Initializing transformers, please wait...
2021-05-18 19:56:30.807734: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-05-18 19:56:30.807837: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
OK! gpt2 pipeline created!
You may now connect with a browser at http://127.0.0.1:5000/

From the error it looks like it's looking for CUDA 11 instead of CUDA 10? Should I just install CUDA 11.1 and then the appropriate version of PyTorch for that, or...?

Authentication error getting access to Google drive storage for colab client

I'm getting the error

Authorization Error
Error 400: policy_enforced
Advanced Protection prevented your Google Account from signing in. This security feature stops most non-Google apps and services from accessing your data to keep your account protected.

I'm looking to see if this can be fixed. This was working on this account a week ago.

Feature: Implement tail free sampling.

Not sure if anyone wants to give it a try at implementing this feature. It sounds like an awesome technique for making story-driven AI stay on topic and generate coherent stories:

https://trentbrick.github.io/Tail-Free-Sampling/

It was invented by someone who wanted their D&D generated stories to be on-topic within an overarching narrative, yet be completely original. The paper is fascinating.
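
For reference, a sketch of what an implementation could look like in PyTorch (a direct reading of the write-up; the threshold and names are illustrative):

import torch

def tail_free_filtering(logits: torch.Tensor, z: float = 0.95) -> torch.Tensor:
    # Sort token probabilities, take the absolute second derivative of the
    # sorted curve, normalize it, and mask out every token past the point
    # where its cumulative sum exceeds z (the "tail").
    sorted_logits, sorted_indices = torch.sort(logits, descending=True)
    probs = torch.softmax(sorted_logits, dim=-1)

    d2 = (probs[:-2] - 2 * probs[1:-1] + probs[2:]).abs()   # second difference
    d2 = d2 / d2.sum()

    remove = torch.cumsum(d2, dim=-1) > z                   # length n - 2
    remove = torch.cat([remove.new_zeros(2), remove])       # always keep the two head tokens

    filtered = logits.clone()
    filtered[sorted_indices[remove]] = -float("inf")        # cut the tail
    return filtered

Sampling then proceeds as usual from torch.softmax(filtered, dim=-1).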

Cannot find module

Traceback (most recent call last):
File "aiserver.py", line 155, in
import torch
ModuleNotFoundError: No module named 'torch'

I got this error after following the instructions to run the program on my GPU. I don't know where to begin to even start fixing this, please help.

Colors in the Windows terminal

You need to call os.system('color') at the beginning of the script for the colors to work in the Windows terminal. Otherwise it will display colors like this:

←[96mWelcome to the KoboldAI Server!
Select an AI model to continue:←[0m

Some people do:

import os

#==================================================================#
# Variables & Storage
#==================================================================#

# System call; on Windows this side effect enables ANSI escape processing
os.system('')

# Terminal tags for colored text (HEADER/ENDC shown for illustration,
# matching the colors.HEADER / colors.ENDC used by aiserver.py)
class colors:
    HEADER = '\033[95m'
    ENDC = '\033[0m'

Multiple Sequence Generation Colab Bug

I've been trying to use the Multiple Sequence Generation feature that was added, and while it does work, it seems like it ends up freezing and not working anymore.

After generating three times or so with the feature on, Colab and KoboldAI both freeze; Colab doesn't appear to receive any info to regenerate, while KoboldAI continues to wait for a response that won't come.

I had the settings for 3 gens per action with 60 tokens, I also had the formatting settings (all of them on), maybe it could be that?
(just checked formatting settings, seems unrelated)

It seems very random when it eventually breaks, I don't know if the colab notebook is bugged or what, I am using the latest notebook for it.

Replication:
Use Colab's latest notebook, have it generate a paragraph and then retry a few times. Set 3 generations per action.

It might be the retrying too many times? Not sure.

Other settings I've had set
0.9 temp - No world info or memory used. - Repetition Penalty was around 1.1 and max tokens 512.

Feature: repetition penalty slope

Let's face it: repetition penalty 1.2 is good, but only for very short texts. Later it starts selecting irrelevant words, because new tokens need to be distinct from everything that came before.
NovelAI has Repetition Penalty Slope, where tokens further from the end of the context don't need to be so distinct (meaning the repetition penalty's effect gradually fades to zero the further the tokens are from the end of the context; the slope regulates the speed of the fading).
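
A sketch of what a sloped penalty could look like (a simple power-law fade for illustration; NovelAI's exact curve isn't documented here):

import torch

def sloped_repetition_penalty(logits, input_ids, penalty=1.2, slope=3.0):
    # logits: (vocab_size,) next-token scores; input_ids: (seq_len,) context ids.
    seq_len = input_ids.shape[0]
    # Weight 0.0 for the oldest context token, 1.0 for the newest; raising it
    # to `slope` makes the penalty fade faster for distant tokens.
    w = torch.linspace(0.0, 1.0, seq_len) ** slope
    tok_penalty = 1.0 + (penalty - 1.0) * w    # fades from no penalty to full

    scores = logits[input_ids]
    # CTRL-style penalty: shrink positive logits, push negative ones lower.
    penalized = torch.where(scores > 0, scores / tok_penalty, scores * tok_penalty)
    out = logits.clone()
    out[input_ids] = penalized    # for repeated tokens, the newest occurrence wins
    return out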

License

Please add a license file.

Autosaving

It would be nice to have autosaving, so as not to lose chunks of the story because the Colab notebook stopped.

Memory requirements

Something I've noticed is that the memory requirements for the same AI model seem higher for KoboldAI than for CloverEdition. My system has 16 GB system memory, and 8 GB onboard video memory (with an additional 8 GB shared memory available).

So, for example, using GPT-Neo 2.7b.
System memory usage for Clover Edition loading GPT-Neo 2.7b climbs gradually up to 11.5 GB memory, before dropping back down to about 1.2 GB. Meanwhile, at the same time, GPU memory usage goes up to an additional 6.5 GB used, before finally dropping down to about 5.5 GB overhead once it finishes loading.
When trying to load GPT-Neo 2.7b in KoboldAI, the system memory usage climbs up fairly rapidly to over 12 GB, while the GPU memory doesn't budge. My computer then hangs, going almost completely unresponsive, even the clock not updating at all, though every once in a while (every 30 seconds or so, maybe?) the mouse cursor will move a tick if I am moving the mouse. I end up being unable to even kill the process due to the unresponsiveness and have to power cycle my computer. Presumably this is due to my system memory just getting overloaded.

When I load GPT-2 in KoboldAI, as per my previous Issue, it does noticeably start loading into GPU memory almost immediately, so I'm not sure why that seems to be different from GPT-Neo 2.7b loading in KoboldAI.

From the text on the Clover Edition readme page, they mention using 16-bit instead of 32-bit; is that maybe something that would help in KoboldAI?
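
For reference, a minimal sketch of 16-bit loading (assuming a transformers version that accepts the torch_dtype argument):

import torch
from transformers import AutoModelForCausalLM

# Loading the checkpoint as fp16 roughly halves memory use versus fp32.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neo-2.7B",
    torch_dtype=torch.float16,
).to("cuda")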

I'll add that I really like the browser interface and additional features of KoboldAI, and really hate the commandline interface of Clover Edition. I'm just hoping that I can end up running GPT-Neo 2.7b on KoboldAI like I can in Clover Edition.

Inferkit prompt not defined, InferKit API Error: 500

The InferKit integration seems to be broken. I get an error on line 877 of aiserver.py (prompt is referenced before assignment).
I tried moving the assignment from 858 to 841, and that sort of fixed it.

But now I'm getting "InferKit API Error: 500 - INVALID_INPUT"

New line on action

I've been playing KoboldAI for some time now, and it's amazing. But one thing that bothers me is that, by default, when you make an input it just gets appended to the last line and the AI continues from there. I'd like to suggest adding a configuration option that adds a new line when inputs are sent.

This way it's configurable, and both people who like input on the same line and people who don't would have a good and easy playthrough \o/

Thanks! Looking forward to scripting!

Feature: Clean up punctuation.

There's some weird text generation sometimes, and the current "remove all strange characters" feature is too heavy-handed. Some sentence cleanup regex would be a much better solution.

Here are some examples of ideas for necessary text transformations:

  • He said that he was telling the truth. (This was false ---> He said that he was telling the truth. (This was false.)
  • "Wherever you want to go ---> "Wherever you want to go."
  • It's a wonderful world" ---> It's a wonderful world. or "It's a wonderful world."
  • It's a wonderful world!" ---> It's a wonderful world! or "It's a wonderful world!"
  • That's nice, ---> That's nice.

I wouldn't be surprised if there are already libraries out there, either for Node.js (for inspiration; it has the largest package repo in the world) or on PyPI for Python, that can do these kinds of sentence cleanup transformations.
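
As a starting point, a sketch of a couple of the suggested rules as Python regexes (illustrative patterns only, not an exhaustive rule set):

import re

CLEANUP_RULES = [
    # "(This was false" at end of text -> "(This was false.)"
    (re.compile(r"\(([^)]*[^.)])$"), r"(\1.)"),
    # '"Wherever you want to go' -> '"Wherever you want to go."'
    (re.compile(r'^"([^"]*[^."])$'), r'"\1."'),
    # "That's nice," -> "That's nice."
    (re.compile(r",\s*$"), "."),
]

def clean_sentence(text: str) -> str:
    for pattern, replacement in CLEANUP_RULES:
        text = pattern.sub(replacement, text)
    return text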

Sanitize AI output

If I ask the AI to generate a simple Hello World program in C, the output in the prompt window is not sanitized:

[screenshot]

Can't run any GPT-J-6B model locally in CPU or GPU+CPU modes

Seems like there's no way to run GPT-J-6B models locally using CPU or CPU+GPU modes. I've tried both transformers versions (original and finetuneanon's) in both modes (CPU and GPU+CPU), but they all fail in one way or another.

First, I'll describe the error that appears when trying to use the gpt-j-6b-adventure-hf model locally in GPU+CPU hybrid mode. In this case KoboldAI raises the following error:

module 'keras.backend' has no attribute 'is_tensor'

Steps to reproduce

I'm testing this on Linux.

  1. Set up everything and start KoboldAI:
git clone https://github.com/KoboldAI/KoboldAI-Client.git kobold-local
cd kobold-local

python3 -m venv ./venv
source venv/bin/activate

pip install -r requirements.txt

mkdir -p models
cd models
wget 'https://api.wandb.ai/files/ve-forbryderne/adventure/carol-data/models/gpt-j-6b-adventure-hf.7z'
7za x gpt-j-6b-adventure-hf.7z
cd ..

python3 aiserver.py
  2. Choose 1 - Custom Neo (GPT-Neo / Converted GPT-J).

  3. Pick models/gpt-j-6b-adventure-hf.

  4. Choose 3 - Both (slower than GPU-only but uses less VRAM).

  5. Choose a number of blocks for the system RAM. In my case it was 24 (but later I used 20).

  6. Enter anything in the web GUI prompt and click Submit.

After some time, the aforementioned error will appear.

I was using the bundled requirements.txt, so finetuneanon's version of transformers was used.

Full output:
❯ python3 aiserver.py
Welcome to the KoboldAI Server!
Select an AI model to continue:

    #   Model                           V/RAM
    =========================================
    1  - Custom Neo (GPT-Neo / Converted GPT-J)
    2  - Custom GPT-2 (eg CloverEdition)
    3  - GPT Neo 1.3B                   8GB
    4  - GPT Neo 2.7B                   16GB
    5  - GPT-2                          1GB
    6  - GPT-2 Med                      2GB
    7  - GPT-2 Large                    4GB
    8  - GPT-2 XL                       8GB
    9  - InferKit API (requires API key)
    10 - Google Colab
    11 - OpenAI API (requires API key)
    12 - Read Only (No AI)

Model #> 1
Please choose the folder where pytorch_model.bin is located:

Looking for GPU support...FOUND!
You're using a model that supports GPU-CPU hybrid generation!
Currently only GPT-Neo models and GPT-J-6B support this feature.
Use GPU or CPU for generation?:  (Default GPU)
    1 - GPU
    2 - CPU
    3 - Both (slower than GPU-only but uses less VRAM)

Mode> 3
Initializing Flask... OK!
Initializing transformers, please wait...

How many layers would you like to put into system RAM?
The more of them you put into system RAM, the slower it will run,
but it will require less VRAM
(roughly proportional to number of layers).
This model has 28 layers.

# of layers> 24
Will commit 24 of 28 layers to system RAM.
OK! NeoCustom pipeline created!
You may now connect with a browser at http://127.0.0.1:5000/
* Serving Flask app "aiserver" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
The WebSocket transport is not available, you must install a WebSocket server that is compatible with your async mode to enable it. See the documentation for details. (further occurrences of this error will be logged with level INFO)
Client connected!
Data received:{'cmd': 'submit', 'actionmode': 0, 'data': 'I see a shining light.'}
Min:7, Max:86, Txt:I see a shining light.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
module 'keras.backend' has no attribute 'is_tensor'

The generic gpt-j-6b model throws the same error.

Other errors

When I try to use finetuneanon's transformers in CPU mode, a different error occurs: "LayerNormKernelImpl" not implemented for 'Half'. This is documented, so it's "ok".

When I try to use the original transformers in GPU+CPU mode I get this error: Input, output and indices must be on the current device.

Full output:
❯ python3 aiserver.py
Welcome to the KoboldAI Server!
Select an AI model to continue:

    #   Model                           V/RAM
    =========================================
    1  - Custom Neo (GPT-Neo / Converted GPT-J)
    2  - Custom GPT-2 (eg CloverEdition)
    3  - GPT Neo 1.3B                   8GB
    4  - GPT Neo 2.7B                   16GB
    5  - GPT-2                          1GB
    6  - GPT-2 Med                      2GB
    7  - GPT-2 Large                    4GB
    8  - GPT-2 XL                       8GB
    9  - InferKit API (requires API key)
    10 - Google Colab
    11 - OpenAI API (requires API key)
    12 - Read Only (No AI)

Model #> 1
Please choose the folder where pytorch_model.bin is located:

Looking for GPU support...FOUND!
You're using a model that supports GPU-CPU hybrid generation!
Currently only GPT-Neo models and GPT-J-6B support this feature.
Use GPU or CPU for generation?:  (Default GPU)
    1 - GPU
    2 - CPU
    3 - Both (slower than GPU-only but uses less VRAM)

Mode> 3
Initializing Flask... OK!
Initializing transformers, please wait...
Some weights of the model checkpoint at /home/user/test/kobold-local/models/gpt-j-6b were not used when initializing GPTNeoForCausalLM: ['lm_head.bias']
- This IS expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPTNeoForCausalLM were not initialized from the model checkpoint at /home/user/test/kobold-local/models/gpt-j-6b and are newly initialized: ['transformer.h.25.ln_2.weight', 'transformer.h.21.ln_2.bias', 'transformer.h.10.ln_2.weight', 'transformer.h.24.attn.attention.out_proj.bias', 'transformer.h.7.ln_2.bias', 'transformer.h.21.attn.attention.out_proj.bias', 'transformer.h.24.ln_2.bias', 'transformer.h.22.attn.attention.out_proj.bias', 'transformer.h.0.attn.attention.out_proj.bias', 'transformer.h.1.ln_2.bias', 'transformer.h.9.ln_2.bias', 'transformer.h.9.attn.attention.out_proj.bias', 'transformer.h.19.ln_2.weight', 'transformer.h.8.ln_2.weight', 'transformer.h.8.attn.attention.out_proj.bias', 'transformer.h.17.ln_2.bias', 'transformer.h.27.ln_2.bias', 'transformer.h.13.ln_2.weight', 'transformer.h.24.ln_2.weight', 'transformer.h.16.ln_2.bias', 'transformer.h.3.attn.attention.out_proj.bias', 'transformer.h.11.ln_2.bias', 'transformer.h.20.ln_2.weight', 'transformer.h.0.ln_2.bias', 'transformer.h.1.attn.attention.out_proj.bias', 'transformer.h.10.attn.attention.out_proj.bias', 'transformer.h.4.ln_2.bias', 'transformer.h.5.ln_2.bias', 'transformer.h.11.attn.attention.out_proj.bias', 'transformer.h.25.ln_2.bias', 'transformer.h.15.ln_2.bias', 'transformer.h.3.ln_2.weight', 'transformer.h.18.ln_2.weight', 'transformer.h.18.attn.attention.out_proj.bias', 'transformer.h.9.ln_2.weight', 'transformer.h.23.ln_2.bias', 'transformer.h.6.attn.attention.out_proj.bias', 'transformer.h.7.attn.attention.out_proj.bias', 'transformer.h.2.attn.attention.out_proj.bias', 'transformer.h.16.ln_2.weight', 'transformer.h.7.ln_2.weight', 'transformer.h.3.ln_2.bias', 'transformer.h.23.attn.attention.out_proj.bias', 'transformer.h.27.ln_2.weight', 'transformer.h.12.ln_2.weight', 'transformer.h.13.attn.attention.out_proj.bias', 'transformer.h.5.ln_2.weight', 'transformer.h.8.ln_2.bias', 'transformer.h.2.ln_2.weight', 'transformer.h.20.attn.attention.out_proj.bias', 'transformer.h.4.ln_2.weight', 'transformer.h.26.ln_2.weight', 'transformer.h.6.ln_2.weight', 'transformer.h.22.ln_2.bias', 'transformer.h.14.attn.attention.out_proj.bias', 'transformer.h.20.ln_2.bias', 'transformer.h.13.ln_2.bias', 'transformer.h.18.ln_2.bias', 'transformer.h.25.attn.attention.out_proj.bias', 'transformer.h.26.attn.attention.out_proj.bias', 'transformer.h.26.ln_2.bias', 'transformer.h.19.ln_2.bias', 'transformer.h.17.ln_2.weight', 'transformer.h.14.ln_2.weight', 'transformer.h.4.attn.attention.out_proj.bias', 'transformer.h.17.attn.attention.out_proj.bias', 'transformer.h.27.attn.attention.out_proj.bias', 'transformer.h.6.ln_2.bias', 'transformer.h.5.attn.attention.out_proj.bias', 'transformer.h.23.ln_2.weight', 'transformer.h.15.ln_2.weight', 'transformer.h.21.ln_2.weight', 'transformer.h.19.attn.attention.out_proj.bias', 'transformer.h.2.ln_2.bias', 'transformer.h.10.ln_2.bias', 'transformer.h.1.ln_2.weight', 'transformer.h.22.ln_2.weight', 'transformer.h.11.ln_2.weight', 'transformer.h.14.ln_2.bias', 'transformer.h.0.ln_2.weight', 'transformer.h.15.attn.attention.out_proj.bias', 'transformer.h.12.attn.attention.out_proj.bias', 'transformer.wpe.weight', 'transformer.h.16.attn.attention.out_proj.bias', 'transformer.h.12.ln_2.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

How many layers would you like to put into system RAM?
The more of them you put into system RAM, the slower it will run,
but it will require less VRAM
(roughly proportional to number of layers).
This model has 28 layers.

# of layers> 20
Will commit 20 of 28 layers to system RAM.
OK! NeoCustom pipeline created!
You may now connect with a browser at http://127.0.0.1:5000/
* Serving Flask app "aiserver" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
The WebSocket transport is not available, you must install a WebSocket server that is compatible with your async mode to enable it. See the documentation for details. (further occurrences of this error will be logged with level INFO)
Client connected!
Client connected!
Client connected!
Data received:{'cmd': 'submit', 'actionmode': 0, 'data': 'I see a shining light.'}
Min:7, Max:86, Txt:I see a shining light.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Client connected!
Input, output and indices must be on the current device

And when I try to use the original transformers in CPU mode there's no error, but the output is garbage. For example, when I input "I see a shining light." it gives me this:

Analog Disk Sellvest Lif medically brightest scalingieuEVURNprefix DISTRICT relay Samson Commission Fold recallAUmaps bumper PB dex Cullen Championships unp HERO Raspberry Ankalse Ness sustained invokevind Pikachu Volks Meth Lect EMP cyan steering Tens LET ENexplet laptops fliesATT InstituteERSON mitochond!

The original transformers also produce some warnings (truncated):

Some weights of the model checkpoint at /home/user/test/kobold-local/models/gpt-j-6b were not used when initializing GPTNeoForCausalLM: ['lm_head.bias']
- This IS expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPTNeoForCausalLM were not initialized from the model checkpoint at /home/user/test/kobold-local/models/gpt-j-6b and are newly initialized: ['transformer.h.7.ln_2.weight', 'transformer.h.25.ln_2.bias', 'transformer.h.26.ln_2.bias', 'transformer.h.5.ln_2.bias', ...]
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Full output:
❯ python3 aiserver.py
Welcome to the KoboldAI Server!
Select an AI model to continue:

    #   Model                           V/RAM
    =========================================
    1  - Custom Neo (GPT-Neo / Converted GPT-J)
    2  - Custom GPT-2 (eg CloverEdition)
    3  - GPT Neo 1.3B                   8GB
    4  - GPT Neo 2.7B                   16GB
    5  - GPT-2                          1GB
    6  - GPT-2 Med                      2GB
    7  - GPT-2 Large                    4GB
    8  - GPT-2 XL                       8GB
    9  - InferKit API (requires API key)
    10 - Google Colab
    11 - OpenAI API (requires API key)
    12 - Read Only (No AI)

Model #> 1
Please choose the folder where pytorch_model.bin is located:

Looking for GPU support...FOUND!
You're using a model that supports GPU-CPU hybrid generation!
Currently only GPT-Neo models and GPT-J-6B support this feature.
Use GPU or CPU for generation?:  (Default GPU)
    1 - GPU
    2 - CPU
    3 - Both (slower than GPU-only but uses less VRAM)

Mode> 2
Initializing Flask... OK!
Initializing transformers, please wait...
Some weights of the model checkpoint at /home/user/test/kobold-local/models/gpt-j-6b were not used when initializing GPTNeoForCausalLM: ['lm_head.bias']
- This IS expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPTNeoForCausalLM were not initialized from the model checkpoint at /home/user/test/kobold-local/models/gpt-j-6b and are newly initialized: ['transformer.h.7.ln_2.weight', 'transformer.h.25.ln_2.bias', 'transformer.h.26.ln_2.bias', 'transformer.h.5.ln_2.bias', 'transformer.h.18.attn.attention.out_proj.bias', 'transformer.h.1.ln_2.weight', 'transformer.h.13.ln_2.weight', 'transformer.h.21.ln_2.bias', 'transformer.h.8.ln_2.bias', 'transformer.h.19.attn.attention.out_proj.bias', 'transformer.h.23.attn.attention.out_proj.bias', 'transformer.h.8.ln_2.weight', 'transformer.h.19.ln_2.bias', 'transformer.h.2.attn.attention.out_proj.bias', 'transformer.h.11.ln_2.bias', 'transformer.h.5.ln_2.weight', 'transformer.h.3.attn.attention.out_proj.bias', 'transformer.h.6.attn.attention.out_proj.bias', 'transformer.h.22.ln_2.bias', 'transformer.h.17.ln_2.bias', 'transformer.h.16.attn.attention.out_proj.bias', 'transformer.h.14.ln_2.bias', 'transformer.h.27.attn.attention.out_proj.bias', 'transformer.h.16.ln_2.bias', 'transformer.h.0.ln_2.bias', 'transformer.h.2.ln_2.bias', 'transformer.h.6.ln_2.bias', 'transformer.h.8.attn.attention.out_proj.bias', 'transformer.h.15.attn.attention.out_proj.bias', 'transformer.h.13.ln_2.bias', 'transformer.h.0.ln_2.weight', 'transformer.h.12.ln_2.weight', 'transformer.h.10.ln_2.bias', 'transformer.h.7.ln_2.bias', 'transformer.h.20.ln_2.bias', 'transformer.h.14.attn.attention.out_proj.bias', 'transformer.h.4.ln_2.weight', 'transformer.h.26.ln_2.weight', 'transformer.h.26.attn.attention.out_proj.bias', 'transformer.h.4.ln_2.bias', 'transformer.h.10.attn.attention.out_proj.bias', 'transformer.wpe.weight', 'transformer.h.1.ln_2.bias', 'transformer.h.6.ln_2.weight', 'transformer.h.24.attn.attention.out_proj.bias', 'transformer.h.11.attn.attention.out_proj.bias', 'transformer.h.22.attn.attention.out_proj.bias', 'transformer.h.3.ln_2.weight', 'transformer.h.3.ln_2.bias', 'transformer.h.23.ln_2.bias', 'transformer.h.25.attn.attention.out_proj.bias', 'transformer.h.27.ln_2.weight', 'transformer.h.23.ln_2.weight', 'transformer.h.9.ln_2.weight', 'transformer.h.0.attn.attention.out_proj.bias', 'transformer.h.1.attn.attention.out_proj.bias', 'transformer.h.9.attn.attention.out_proj.bias', 'transformer.h.13.attn.attention.out_proj.bias', 'transformer.h.24.ln_2.weight', 'transformer.h.17.attn.attention.out_proj.bias', 'transformer.h.12.ln_2.bias', 'transformer.h.24.ln_2.bias', 'transformer.h.2.ln_2.weight', 'transformer.h.25.ln_2.weight', 'transformer.h.18.ln_2.weight', 'transformer.h.19.ln_2.weight', 'transformer.h.21.attn.attention.out_proj.bias', 'transformer.h.7.attn.attention.out_proj.bias', 'transformer.h.16.ln_2.weight', 'transformer.h.27.ln_2.bias', 'transformer.h.20.ln_2.weight', 'transformer.h.15.ln_2.weight', 'transformer.h.10.ln_2.weight', 'transformer.h.9.ln_2.bias', 'transformer.h.18.ln_2.bias', 'transformer.h.12.attn.attention.out_proj.bias', 'transformer.h.5.attn.attention.out_proj.bias', 'transformer.h.22.ln_2.weight', 'transformer.h.11.ln_2.weight', 'transformer.h.20.attn.attention.out_proj.bias', 'transformer.h.4.attn.attention.out_proj.bias', 'transformer.h.15.ln_2.bias', 'transformer.h.14.ln_2.weight', 'transformer.h.17.ln_2.weight', 'transformer.h.21.ln_2.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
OK! NeoCustom pipeline created!
You may now connect with a browser at http://127.0.0.1:5000/
* Serving Flask app "aiserver" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
The WebSocket transport is not available, you must install a WebSocket server that is compatible with your async mode to enable it. See the documentation for details. (further occurrences of this error will be logged with level INFO)
Client connected!
Data received:{'cmd': 'submit', 'actionmode': 0, 'data': 'I see a shining light.'}
Min:7, Max:86, Txt:I see a shining light.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Client connected!
Analog Disk Sellvest Lif medically brightest scalingieuEVURNprefix DISTRICT relay Samson Commission Fold recallAUmaps bumper PB dex Cullen Championships unp HERO Raspberry Ankalse Ness sustained invokevind Pikachu Volks Meth Lect EMP cyan steering Tens LET ENexplet laptops fliesATT InstituteERSON mitochond!=EMP Meng BengEh KakERSON webs purchaser Sitting sunk liquphan%; accompanies lecturer Championships bumperrite sailorsasaki hammşATTarth Bash MAT Pupp

Summary

Mode      Transformers   Error
-------   ------------   ------------------------------------------------------
CPU       original       Garbage output
CPU       finetuneanon   "LayerNormKernelImpl" not implemented for 'Half'
GPU+CPU   original       Input, output and indices must be on the current device
GPU+CPU   finetuneanon   module 'keras.backend' has no attribute 'is_tensor'

If these errors are unfixable, I think they at least need to be documented somewhere.

Other details:

  • gpt-j-6b-adventure-hf and gpt-j-6b models produce the same errors.

  • I've tested 2.7B models and they work fine in CPU and GPU+CPU modes.

  • I can't test 6B models in GPU-only mode (not enough VRAM).

System

  • GeForce GTX 1060 6GB
  • 32 GB RAM (+ pagefile since using CPU-only requires around 45GB)
  • Kubuntu 21.10
  • CUDA 11.3.109

Easier to use predownloaded models

In a fair few AID2 forks there's a "models" directory into which I could symbolically link the directories actually containing the models, so with that scheme there was only a single copy of each model.

It seems KoboldAI has a different system. I tried to understand what the code does, but my best guess is that the transformers library is tasked with locating/downloading the models (unless one of the more "special" options is selected). But I still couldn't figure out where they are placed on my system, so I could bypass downloading the models and instead use the ones already there. Could this be made easier somehow?
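
By default, the transformers library downloads to a shared cache (typically under ~/.cache/huggingface on Linux); a sketch of the two usual ways to reuse models you already have (paths are examples):

import os
os.environ["TRANSFORMERS_CACHE"] = "/mnt/shared/hf-cache"  # must be set before importing transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

# Alternatively, point directly at a local folder that already contains
# config.json and pytorch_model.bin; nothing is downloaded in this case.
model = AutoModelForCausalLM.from_pretrained("/mnt/models/gpt-neo-2.7B")
tokenizer = AutoTokenizer.from_pretrained("/mnt/models/gpt-neo-2.7B")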

Saving/Loading Issue

It seems, for odd reasons, that I'm unable to save any stories; when I go to load them, it finds nothing.
I am currently using the latest version.

Reproduce issue:
Start a new story prompt, run generation a few times, and then try to save. The file will "save" but won't be found by the load system.

Solution:
It turns out that the client isn't properly adding the .json extension to the saved files, so they don't show up at all.

A band-aid fix is to add the .json extension to the extensionless files; they'll then show up and load properly.
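
A tiny sketch of the described fix on the save side (function name illustrative):

def ensure_json_extension(filename: str) -> str:
    # The save routine should append ".json" when the user omits it,
    # so the load screen's *.json listing can find the file again.
    return filename if filename.endswith(".json") else filename + ".json"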

Unable to load AIDCAT Scenarios or Adventures

When trying to load scenarios from an unmodified export JSON from AIDCAT, this traceback is issued:

  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\threading.py", line 954, in _bootstrap_inner
    self.run()
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\site-packages\socketio\server.py", line 688, in _handle_event_internal
    r = server._trigger_event(data[0], namespace, sid, *data[1:])
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\site-packages\socketio\server.py", line 712, in _trigger_event
    return self.handlers[namespace][event](*args)
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\site-packages\flask_socketio\__init__.py", line 283, in _handler
    return self._handle_event(handler, message, namespace, sid,
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\site-packages\flask_socketio\__init__.py", line 751, in _handle_event
    ret = handler(*args)
  File "C:\Users\foxyd\Documents\My Games\KoboldAI\KoboldAI-Client-main\aiserver.py", line 343, in get_message
    importRequest()
  File "C:\Users\foxyd\Documents\My Games\KoboldAI\KoboldAI-Client-main\aiserver.py", line 1230, in importRequest
    ob["acts"]  = len(story["actions"])
KeyError: 'actions'

When trying to load adventures from an unmodified export JSON, this traceback is issued:

  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\threading.py", line 954, in _bootstrap_inner
    self.run()
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\site-packages\socketio\server.py", line 688, in _handle_event_internal
    r = server._trigger_event(data[0], namespace, sid, *data[1:])
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\site-packages\socketio\server.py", line 712, in _trigger_event
    return self.handlers[namespace][event](*args)
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\site-packages\flask_socketio\__init__.py", line 283, in _handler
    return self._handle_event(handler, message, namespace, sid,
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\site-packages\flask_socketio\__init__.py", line 751, in _handle_event
    ret = handler(*args)
  File "C:\Users\foxyd\Documents\My Games\KoboldAI\KoboldAI-Client-main\aiserver.py", line 343, in get_message
    importRequest()
  File "C:\Users\foxyd\Documents\My Games\KoboldAI\KoboldAI-Client-main\aiserver.py", line 1212, in importRequest
    vars.importjs = json.load(file)
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
  File "C:\Users\foxyd\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 144658: character maps to <undefined>

I'm not sure how these particular issues could've happened, considering they're just the raw, exported files.

EDIT: Even the included sample story fails to load.
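
The second traceback shows the file being decoded as cp1252, the Windows default; a minimal sketch of a likely fix, reusing the names from the traceback (importpath stands in for whatever path variable the real code uses):

file = open(importpath, "r", encoding="utf-8")  # force UTF-8 instead of cp1252
vars.importjs = json.load(file)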

Training the AI

How do I feed stories to the AI? I want to train it on worlds such as Pokemon and Five Nights at Freddy's, as well as create custom information for it to base other things on. I read the info but couldn't find anything about it.

Loadsettings Fails on InferKit

The loadsettings function currently throws a key error when starting InferKit with no client.settings file present. For the moment, unzip the attached client settings file into your KoboldAI directory until I can get the bug squashed:
clientsettings.zip
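
A sketch of a defensive loadsettings (file layout and key names are assumptions):

import json, os

defaults = {"apikey": "", "temp": 0.5, "rep_pen": 1.0}   # illustrative keys
settings = dict(defaults)
if os.path.isfile("client.settings"):
    with open("client.settings") as f:
        settings.update(json.load(f))   # missing keys fall back to the defaults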

Accessibility issues

Using a screen reader, I am finding it difficult to edit text after pressing "Edit", as well as to find where to add an author's note. I can't seem to select what to edit when pressing the Edit button, and where to add the author's note isn't clear.
Thanks.

Feature: Allow the creation of a blacklist that prevents the generation of X characters in a row

Right now, some models (currently I only know that this happens with GPT-J-6B) output weird garbage that completely derails the story and can sometimes soft-break generation. The AI will completely ignore the input tokens and get into a loop of repeating things like

***********************The next morning
!!!Her face falls
_______________________________________________________________________________________Chapter 2:

etc. Notice that all of these start with a run of either asterisks, exclamation marks, or underscores. It should be possible to have a group of more than X special characters (like asterisks or underscores) trigger regeneration of the line, which would likely solve the issue.
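
A sketch of the proposed check (the threshold of 10 is arbitrary):

import re

BAD_RUN = re.compile(r"[*_!]{10,}")   # a run of 10+ asterisks, underscores, or exclamation marks

def needs_regeneration(text: str) -> bool:
    # If a generated chunk contains such a run, discard it and regenerate.
    return BAD_RUN.search(text) is not None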

Auto-save option, and undo-history for edits

I just lost a paragraph because it disappeared right after editing for some reason. I went to the terminal to copy back what I had written previously, but muscle memory made me hit Ctrl-C instead of Ctrl-Shift-C, which kills the running program in the terminal instead of copying the selected text. Since I still had the browser window open with the remaining text, I just booted up Kobold again; but when it loaded, it completely wiped what I had written, even in the browser window that was already open from before.

I can still kind of recover the text from the terminal, but the formatting is a bit screwy with all the brackets and escaped characters and stuff; annoying.

It would be great if there were an option to have Kobold automatically save ongoing stories to a temp file or something of the sort, and offer to recover them the next time it boots up if they weren't manually saved before Kobold was closed. Additionally, it would be great to be able to roll back changes, so that if some part of the text goes bad after editing, you can go back to what it was previously.

ps: Do I need to make this two separate entries, or is it ok to have the whole thing here?

Suggestion: Allow selection of a phrase of text to add to memory.

I really love how the Edit button works, allowing a person to highlight and select a section of text right from the main story area.

It might be nice to be able to leverage the ability to select text in the same fashion for the purpose of adding new information to memory as a person's story progresses.

Hangs on back -> retry

On the user's side:

  1. Make any prompt at the start
  2. Press "Back"
  3. Press "Retry

This hangs; that is, the query never completes. I was using the smallest GPT-2 model, in case that matters.

A minor thing that doesn't really need a separate issue: in the server log, "Data recieved:" should probably be "Data received: ".

Feature: play.bat takes command line arguments to allow startup automation

It would be nice for play.bat to be able to take arguments, allowing a single command to skip the startup 'questionnaire'; something like play.bat models/<model> gpu to start a custom model, or play.bat gpt-2 cpu to load a default model. I'm playing around with writing a global command to run KoboldAI, and it would be even better if it could get me right to the model loading and the local server starting without any extra input from me.
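
On the server side, this could be a couple of startup flags in aiserver.py; a sketch (flag names are illustrative, not an existing CLI):

import argparse

parser = argparse.ArgumentParser(description="KoboldAI server")
parser.add_argument("--model", help="model preset or folder, e.g. models/gpt-neo-2.7B")
parser.add_argument("--device", choices=["gpu", "cpu"], default="gpu")
args = parser.parse_args()

# play.bat could then simply forward its arguments:
#   python aiserver.py --model %1 --device %2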

Set model to eval mode for better performance

From some reports, people got lower performance from my finetuned models, but copying over the original model's config seems to fix it. The only significant difference is that gradient checkpointing is enabled for my models, which should only make a difference when training. However, it seems that KoboldAI doesn't set the model to evaluation mode. Adding the following on line 240 of aiserver.py should fix it: model = model.eval()

From what I see in the transformers code, adding the device argument to pipeline should not actually do anything when the model is already instantiated and passed in directly. In that case, calling model.cuda(0) instead should work to load it on the GPU.
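
Taken together, a sketch of the two suggested lines around line 240 of aiserver.py:

model = model.eval()    # inference mode: disables dropout and other training-only behavior
model = model.cuda(0)   # move the already-instantiated model to GPU 0 directly,
                        # since pipeline(..., device=...) is ignored for a passed-in model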

Running on linux

I was trying to get this to run on Pop OS when I encountered an issue.
The installation steps all went fine, but when I first tried to start the game using play-cuda.sh, it didn't work because of this line:

RUN apt update && apt install xorg -y

Resolved that by commenting out the line, because xorg was already installed.
Now when trying to run it I get this error:

Error response from daemon: error gathering device information while adding custom device "/dev/kfd": no such file or directory

Found out this might have something to do with ROCm, which I don't have installed (because I'm trying to run this on a 1080 Ti).
Now I wonder if running this on Linux is even supported, since all the instructions are made for Windows :)
I have to mention, though, that it does run flawlessly in my installation of Windows. No issues at any step of the process. I've only tried the GPT-Neo 2.7B parameter set so far, and it runs fine on 11 GB of VRAM. Thanks for all the work that has already been put into this.

Encoding problem with accented and special characters

Hello! I've run into two issues here. I was testing a model I finetuned to see if it spoke Portuguese well. I imported an AID adventure that had some Portuguese in it, and KAI started throwing errors because of the accented characters. I had to write a script that replaces them with their unaccented variants (i.e., ã became a).

Now, I had a similar problem with my CAT WIs. The triple bar character doesn't seem to work with KAI. When the WI is triggered, the characters get messy.

How the WI got inserted into LMI: Received Data: [ Clavicus Vile description:< name ≡ Clavicus Vile& Vile>/< age ≡ primeval>. Clavicus Vile summary:< appears ≡ male>/< location ≡ The Fields of Regret>/< almost always with his hound Barbas by his side>. Clavicus Vile appearance:< skin ≡ yellow>/< long black horns>/< eyes ≡ red>. Clavicus Vile mental:< jokester& sarcastic& trickster& manipulative>. Clavicus Vile occupation:< Daedric Prince of Trickery and Bargains/ God of Trickery and Bargains>. Clavicus Vile speech:< mocking tone>.] (≡ where it should be )

With the accented characters, the error was something like UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 867: character maps to <undefined>. After removing all of them, the import worked.

Thanks!

AI Dungeon selected adventure wrapping after the 9th entry

Linux Mint, Firefox, first time listener and caller.

Just hooked everything up and set it up with IK to start testing, and selected a story (the 26th in the list), which properly read in the terminal as
Data recieved:{'cmd': 'importselect', 'data': 'import25'}

However, it loaded the sixth adventure in the list instead. Suspecting what was happening, I tried the 19th adventure and got the 9th. The 11th adventure got me the first, import10 becoming import0, it appears. The terminal log of data received matches the story I'm clicking, but the client in-browser is wrapping around after 9. I have no warnings or errors other than the statements about being a development server and GPU support not being found, but I don't think either is at play here. The issue happens all the way through to import68 being read (as far as I can tell) as import8.

Not sure how to get debug logs but as far as I can tell my install was fine.
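
The symptoms (import25 loading the 6th story, import18 the 9th, import10 the 1st) are consistent with only the last digit of the id surviving; a sketch of the suspected bug and fix (shown in Python for illustration, though the culprit may be in the client-side JavaScript):

msg = "import25"
int(msg[-1])    # 5: keeps only the final character, so ids wrap after 9
int(msg[6:])    # 25: parse everything after the "import" prefix instead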

Error: "LayerNormKernelImpl" not implemented for 'Half'

Not entirely sure what this means. I've been practicing running the various models and any time I try to use the neo-horni model, I get this error.

Here's logs:

←[96mWelcome to the KoboldAI Client!
Select an AI model to continue:←[0m

    #   Model                           V/RAM
    =========================================
    1  - GPT Neo 1.3B                   8GB
    2  - GPT Neo 2.7B                   16GB
    3  - GPT-2                          1.2GB
    4  - GPT-2 Med                      2GB
    5  - GPT-2 Large                    16GB
    6  - GPT-2 XL                       16GB
    7  - InferKit API (requires API key)
    8  - Custom Neo   (eg Neo-horni)
    9  - Custom GPT-2 (eg CloverEdition)
    10 - Google Colab
    11 - OpenAI API (requires API key)
    12 - Read Only (No AI)

Model #> 8
←[96mPlease choose the folder where pytorch_model.bin is located:←[0m

Looking for GPU support...NOT FOUND!
Initializing Flask... OK!
Initializing transformers, please wait...
2021-07-05 09:33:21.416244: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
OK! NeoCustom pipeline created!
You may now connect with a browser at http://127.0.0.1:5000/
 * Serving Flask app 'aiserver' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
The WebSocket transport is not available, you must install a WebSocket server that is compatible with your async mode to enable it. See the documentation for details. (further occurrences of this error will be logged with level INFO)
Client connected!
Data recieved:{'cmd': 'submit', 'data': 'Testing'}
Min:2, Max:61, Txt:Testing
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
"LayerNormKernelImpl" not implemented for 'Half'
