
The Learning Interpretability Tool: Interactively analyze ML models to understand their behavior in an extensible and framework agnostic interface.

Home Page: https://pair-code.github.io/lit

License: Apache License 2.0


🔥 Learning Interpretability Tool (LIT)

The Learning Interpretability Tool (🔥LIT, formerly known as the Language Interpretability Tool) is a visual, interactive ML model-understanding tool that supports text, image, and tabular data. It can be run as a standalone server, or inside of notebook environments such as Colab, Jupyter, and Google Cloud Vertex AI notebooks.

LIT is built to answer questions such as:

  • What kind of examples does my model perform poorly on?
  • Why did my model make this prediction? Can this prediction be attributed to adversarial behavior, or to undesirable priors in the training set?
  • Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?

Example of LIT UI

LIT supports a variety of debugging workflows through a browser-based UI. Features include:

  • Local explanations via salience maps and rich visualization of model predictions.
  • Aggregate analysis including custom metrics, slicing and binning, and visualization of embedding spaces.
  • Counterfactual generation via manual edits or generator plug-ins to dynamically create and evaluate new examples.
  • Side-by-side mode to compare two or more models, or one model on a pair of examples.
  • Highly extensible to new model types, including classification, regression, span labeling, seq2seq, and language modeling. Supports multi-head models and multiple input features out of the box.
  • Framework-agnostic and compatible with TensorFlow, PyTorch, and more.

LIT has a website with live demos, tutorials, a setup guide and more.

Stay up to date on LIT by joining the lit-announcements mailing list.

For a broader overview, check out our paper and the user guide.

Documentation

Download and Installation

LIT can be run via container image, installed via pip or built from source. Building from source is necessary if you update any of the front-end or core back-end code.

Build container image

Build the image using docker or podman:

git clone https://github.com/PAIR-code/lit.git && cd lit
docker build --file Dockerfile --tag lit-nlp .

See the advanced guide for detailed instructions on using the default LIT Docker image, running LIT as a containerized web app in different scenarios, and creating your own LIT images.

pip installation

pip install lit-nlp

The pip installation will install all necessary prerequisite packages for use of the core LIT package.

It does not install the prerequisites for the provided demos, so you need to install those yourself. See requirements_examples.txt for the list of packages required to run the demos.

Install from source

Clone the repo:

git clone https://github.com/PAIR-code/lit.git && cd lit

Note: be sure you are running Python 3.9+. If you have a different version on your system, use the conda instructions below to set up a Python 3.9 environment.

Set up a Python environment with venv:

python -m venv .venv
source .venv/bin/activate

Or set up a Python environment using conda:

conda create --name lit-nlp
conda activate lit-nlp
conda install python=3.9
conda install pip

Once you have the environment, install LIT's dependencies:

python -m pip install -r requirements.txt
python -m pip install cudnn cupti  # optional, for GPU support
python -m pip install torch  # optional, for PyTorch

# Build the frontend
(cd lit_nlp; yarn && yarn build)

Note: Use the -r requirements.txt option to install every dependency required for the LIT library, its test suite, and the built-in examples. You can also install subsets of these using the -r requirements_core.txt (core library), -r requirements_test.txt (test suite), -r requirements_examples.txt (examples), and/or any combination thereof.

Note: if you see an error running yarn on Ubuntu/Debian, be sure you have the correct version installed.

Running LIT

Explore a collection of hosted demos on the demos page.

Quick-start: classification and regression

To explore classification and regression models on tasks from the popular GLUE benchmark:

python -m lit_nlp.examples.glue.demo --port=5432 --quickstart

Navigate to http://localhost:5432 to access the LIT UI.

Your default view will be a small BERT-based model fine-tuned on the Stanford Sentiment Treebank, but you can switch to STS-B or MultiNLI using the toolbar or the gear icon in the upper right.


Notebook usage

Colab notebooks showing the use of LIT inside of notebooks can be found at lit_nlp/examples/notebooks.

We provide a simple Colab demo at https://colab.research.google.com/github/PAIR-code/lit/blob/main/lit_nlp/examples/notebooks/LIT_sentiment_classifier.ipynb. Run all the cells to see LIT on an example classification model in the notebook.
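
If you want to embed LIT directly in your own Colab or Jupyter notebook rather than use the prepared demo, a minimal sketch looks like the following. It assumes you have already built models and datasets dictionaries as in the demo scripts; the widget name and arguments follow LIT's notebook API, but check the linked notebooks for the exact usage in your version.

from lit_nlp import notebook

# Render the LIT UI inline in the notebook output cell.
# models and datasets are the same dicts you would pass to dev_server.Server.
widget = notebook.LitWidget(models, datasets, height=800)
widget.render()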

More Examples

See lit_nlp/examples. Most are run similarly to the quickstart example above:

python -m lit_nlp.examples.<example_name>.demo --port=5432 [optional --args]

User Guide

To learn about LIT's features, check out the user guide, or watch this video.

Adding your own models or data

You can easily run LIT with your own model by creating a custom demo.py launcher, similar to those in lit_nlp/examples. The basic steps are:

  • Write a data loader which follows the Dataset API
  • Write a model wrapper which follows the Model API
  • Pass models, datasets, and any additional components to the LIT server class

For a full walkthrough, see adding models and data.
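
The sketch below illustrates these steps end to end. It is a minimal, illustrative example only: ToyDataset and ToyModel are placeholders you would replace with your own data loading and inference code, and the method names (spec, input_spec, output_spec, predict_minibatch) follow the Dataset and Model APIs as used in LIT's bundled examples; check the API documentation for your LIT version.

from absl import app
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ["negative", "positive"]

class ToyDataset(lit_dataset.Dataset):
  """Tiny in-memory dataset; replace with your own loading code."""

  def __init__(self):
    # Each example is a dict whose keys match spec().
    self._examples = [
        {"text": "a great movie", "label": "positive"},
        {"text": "not worth watching", "label": "negative"},
    ]

  def spec(self):
    return {
        "text": lit_types.TextSegment(),
        "label": lit_types.CategoryLabel(vocab=LABELS),
    }

class ToyModel(lit_model.Model):
  """Stub model returning uniform probabilities; wrap your real model here."""

  def input_spec(self):
    return {"text": lit_types.TextSegment()}

  def output_spec(self):
    # 'parent' tells LIT where to find gold labels for metrics.
    return {"probas": lit_types.MulticlassPreds(vocab=LABELS, parent="label")}

  def predict_minibatch(self, inputs):
    # Return one output dict per input example.
    return [{"probas": [0.5, 0.5]} for _ in inputs]

def main(_):
  models = {"toy_model": ToyModel()}
  datasets = {"toy_data": ToyDataset()}
  lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
  return lit_demo.serve()

if __name__ == "__main__":
  app.run(main)

Run it like any other demo, e.g. python -m path.to.your_demo --port=5432, and open the LIT UI in your browser.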

Extending LIT with new components

LIT is easy to extend with new interpretability components, generators, and more, both on the frontend or the backend. See our documentation to get started.
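
As a rough illustration, a toy counterfactual generator might look like the sketch below. This assumes the Generator base class in lit_nlp.api.components with a generate() hook and a generators argument to dev_server.Server; these names are drawn from LIT's component framework but may differ between versions, so treat this as a starting point rather than a definitive recipe.

import copy

from lit_nlp.api import components as lit_components

class UppercaseGenerator(lit_components.Generator):
  """Toy counterfactual generator: uppercases the 'text' field of an example."""

  def generate(self, example, model, dataset, config=None):
    new_example = copy.deepcopy(example)
    new_example["text"] = new_example["text"].upper()
    return [new_example]

# Then register it with the server, e.g.:
# dev_server.Server(models, datasets,
#                   generators={"uppercase": UppercaseGenerator()},
#                   **server_flags.get_flags())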

Pull Request Process

To make code changes to LIT, please work off of the dev branch and create pull requests (PRs) against that branch. The main branch is for stable releases, and it is expected that the dev branch will always be ahead of main.

Draft PRs are encouraged, especially for first-time contributors or contributors working on complex tasks (e.g., Google Summer of Code contributors). Please use these to communicate ideas and implementations with the LIT team, in addition to issues.

Prior to sending your PR or marking a Draft PR as "Ready for Review", please run the Python and TypeScript linters on your code to ensure compliance with Google's Python and TypeScript Style Guides.

# Run Pylint on your code using the following command from the root of this repo
pushd lit_nlp && pylint && popd

# Run ESLint on your code using the following command from the root of this repo
pushd lit_nlp && yarn lint && popd

Citing LIT

If you use LIT as part of your work, please cite our EMNLP paper:

@inproceedings{tenney2020language,
    title={The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for {NLP} Models},
    author={Ian Tenney and James Wexler and Jasmijn Bastings and Tolga Bolukbasi and Andy Coenen and Sebastian Gehrmann and Ellen Jiang and Mahima Pushkarna and Carey Radebaugh and Emily Reif and Ann Yuan},
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    year = "2020",
    publisher = "Association for Computational Linguistics",
    pages = "107--118",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.15",
}

Disclaimer

This is not an official Google product.

LIT is a research project and under active development by a small team. There will be some bugs and rough edges, but we're releasing at an early stage because we think it's pretty useful already. We want LIT to be an open platform, not a walled garden, and we would love your suggestions and feedback - drop us a line in the issues.


lit's Issues

UMAP Error

Earlier there was no error like the one below:

"AttributeError: module 'umap' has no attribute 'UMAP'."

But it started appearing after the recent update made to the repo 3 days ago.

404 error while running as server

I get this error when I run the app, can you please check?

I0820 00:52:43.132686 139656716896064 _internal.py:113] 127.0.0.1 - - [20/Aug/2020 00:52:43] "GET / HTTP/1.1" 404 -
W0820 00:52:44.045591 139656716896064 wsgi_app.py:57] IOError [Errno 2] No such file or directory: './lit_nlp/client/build/static/index.html' on path ./lit_nlp/client/build/static/index.html
I0820 00:52:44.046047 139656716896064 wsgi_app.py:147] path ./lit_nlp/client/build/static/index.html not found, sending 404

When I changed the path to index.html in app.py and ran it again, the HTML page was blank, and the error in the prompt is:

I0820 00:58:09.279273 140288821880640 _internal.py:113] 127.0.0.1 - - [20/Aug/2020 00:58:09] "GET /main.js HTTP/1.1" 404 -
I0820 00:58:11.933842 140288821880640 _internal.py:113] 127.0.0.1 - - [20/Aug/2020 00:58:11] "GET / HTTP/1.1" 200 -
W0820 00:58:12.075612 140288821880640 wsgi_app.py:57] IOError [Errno 2] No such file or directory: './lit_nlp/client/build/main.js' on path ./lit_nlp/client/build/main.js
I0820 00:58:12.075828 140288821880640 wsgi_app.py:147] path ./lit_nlp/client/build/main.js not found, sending 404

error with custom model

Hi, I am trying to implement LIT on a sentiment model based on the IMDB dataset in the classification.py file. I am not getting predictions when running it.
import os

from absl import app
from absl import flags
from absl import logging
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import types as lit_types
import pandas as pd
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.components import word_replacer
from lit_nlp.examples.datasets import classification
from lit_nlp.examples.datasets import glue
from lit_nlp.examples.datasets import lm
from lit_nlp.examples.models import pretrained_lms
from typing import Dict, List, Tuple
from keras.datasets import imdb
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types
from lit_nlp.lib import utils
from keras.preprocessing import sequence
import numpy as np
import tensorflow as tf
from keras.models import load_model

import string
from nltk.tokenize import word_tokenize

FLAGS = flags.FLAGS
punct = list(string.punctuation)

flags.DEFINE_integer("top_k", 10,
                     "Rank to which the output distribution is pruned.")

flags.DEFINE_integer(
    "max_examples", None,
    "Maximum number of examples to load from each evaluation set. Set to None to load the full set."
)

flags.DEFINE_bool(
    "load_bwb", False,
    "If true, will load examples from the Billion Word Benchmark dataset. This may download a lot of data the first time you run it, so disable by default for the quick-start example."
)

FLAGS.set_default("default_layout", "lm")


class IMDbModel(lit_model.Model):
  """Wrapper for a Natural Language Inference model."""

  LABELS = ["0", "1"]

  def __init__(self, model_path, **kw):
    # Load the model into memory so we're ready for interactive use.
    self._model = load_model(model_path, **kw)

  # LIT API implementations.

  def preprocess_1(self, text):
    v = []
    if type(text) == str:
      words = word_tokenize(text)
      for w_ in words:
        if w_ not in punct:
          w1 = w_.split('.')
          for w in w1:
            w = w.replace('-', '')
            w = w.replace('_', '')
            w = w.replace('.', '')
            w = w.replace(',', '')
            w = ''.join([i for i in w if not i.isdigit()])
            if w != '.':
              v.append(w.lower())
    return ' '.join(v)

  def predict_minibatch(self, inputs):
    """Predict on a single minibatch of examples."""
    word_to_id = imdb.get_word_index()
    word_to_id = {k: (v + 3) for k, v in word_to_id.items()}
    word_to_id["<PAD>"] = 0
    word_to_id["<START>"] = 1
    word_to_id["<UNK>"] = 2
    dict = {}
    print(inputs)
    for ex in inputs:
      tmp = []
      for word in self.preprocess_1(ex['text']).split(" "):
        try:
          tmp.append(word_to_id[word])
        except:
          pass
      tmp_padded = sequence.pad_sequences([tmp], maxlen=500)
      output = self._model.predict_classes(np.array(tmp_padded[0]))[0][0]
      dict['text'] = output
    # examples = [self._model.convert_dict_input(d) for d in inputs]  # any custom preprocessing
    return dict  # returns a dict for each input

  def input_spec(self):
    """Describe the inputs to the model."""
    return {
        'text': lit_types.TextSegment(),
        # 'label': lit_types.CategoryLabel(vocab=self.LABELS),
    }

  def output_spec(self):
    """Describe the model outputs."""
    return {
        # The 'parent' keyword tells LIT where to look for gold labels when computing metrics.
        'probas': lit_types.MulticlassPreds(vocab=self.LABELS, parent='label'),
    }


def main(_):
  datasets = {
      "imdb_train": classification.IMDBData("test"),
      # Empty dataset, if you just want to type sentences into the UI.
      "blank": lm.PlaintextSents(""),
  }

  # NLIModel implements the Model API.
  models = {
      'model_imdb': IMDbModel('model-path'),
  }

  lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
  lit_demo.serve()


if __name__ == "__main__":
  app.run(main)

Segment slicers at glue_models

I am not sure the function _segment_slicers in glue_models is doing what is intended, at least for models with just one segment type. It says it aims to remove the [CLS] and [SEP] tokens, but it does not do so currently.

Moreover, I would suggest keeping it this way; I think it is beneficial to see the attention maps for those tokens as well.

Thanks for the awesome tool by the way.

Characters Exceed error

Hi there,
When I try to run the following command, the error below pops up:
python -m lit_nlp.examples.quickstart_sst_demo --port=5432

Traceback (most recent call last):
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\download\extractor.py", line 96, in _sync_extract
_copy(handle, dst_path)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\download\extractor.py", line 120, in _copy
tf.io.gfile.makedirs(os.path.dirname(dest_path))
File "C:\Users\SB00790107\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\lib\io\file_io.py", line 453, in recursive_create_dir_v2
pywrap_tensorflow.RecursivelyCreateDir(compat.as_bytes(path))
tensorflow.python.framework.errors_impl.NotFoundError: Failed to create a directory: C:\Users\SB00790107\tensorflow_datasets\downloads\extracted/ZIP.fire.goog.com_v0_b_mtl-sent-repr.apps.cowOhVrpNUsvqdZqI70Nq3ISu63l9SOhTqYqoz6uEW3-Y.zipalt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8.incomplete_411bf32dcb704a46b9e93a63fb1aae2c/SST-2; No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:~\lit\lit_nlp\examples\pretrained_lm_demo.py", line 102, in
app.run(main)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "C:~\lit\lit_nlp\examples\pretrained_lm_demo.py", line 77, in main
"sst_dev": glue.SST2Data("validation").remap({"sentence": "text"}),
File "C:~\lit\lit_nlp\examples\datasets\glue.py", line 62, in init
for ex in load_tfds('glue/sst2', split=split):
File "C:~\lit\lit_nlp\examples\datasets\glue.py", line 21, in load_tfds
ret = list(tfds.as_numpy(tfds.load(*args, download=True, try_gcs=True, **kw)))
File "C:\Users\SB00790107\AppData\Roaming\Python\Python37\site-packages\wrapt\wrappers.py", line 567, in call
args, kwargs)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\api_utils.py", line 69, in disallow_positional_args_dec
return fn(*args, **kwargs)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\registered.py", line 371, in load
dbuilder.download_and_prepare(**download_and_prepare_kwargs)
File "C:\Users\SB00790107\AppData\Roaming\Python\Python37\site-packages\wrapt\wrappers.py", line 606, in call
args, kwargs)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\api_utils.py", line 69, in disallow_positional_args_dec
return fn(*args, **kwargs)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\dataset_builder.py", line 376, in download_and_prepare
download_config=download_config)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\dataset_builder.py", line 1019, in _download_and_prepare
max_examples_per_split=download_config.max_examples_per_split,
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\dataset_builder.py", line 939, in _download_and_prepare
dl_manager, **split_generators_kwargs):
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\text\glue.py", line 452, in _split_generators
dl_dir = dl_manager.download_and_extract(self.builder_config.data_url)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\download\download_manager.py", line 604, in download_and_extract
return _map_promise(self._download_extract, url_or_urls)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\download\download_manager.py", line 641, in _map_promise
res = tf.nest.map_structure(_wait_on_promise, all_promises)
File "C:\Users\SB00790107\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\util\nest.py", line 535, in map_structure
structure[0], [func(*x) for x in entries],
File "C:\Users\SB00790107\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\util\nest.py", line 535, in
structure[0], [func(*x) for x in entries],
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\download\download_manager.py", line 635, in _wait_on_promise
return p.get()
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\promise\promise.py", line 512, in get
return self._target_settled_value(_raise=True)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\promise\promise.py", line 516, in _target_settled_value
return self._target()._settled_value(_raise)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\promise\promise.py", line 226, in _settled_value
reraise(type(raise_val), raise_val, self._traceback)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\six.py", line 703, in reraise
raise value
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\promise\promise.py", line 844, in handle_future_result
resolve(future.result())
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\concurrent\futures_base.py", line 428, in result
return self.__get_result()
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\concurrent\futures_base.py", line 384, in __get_result
raise self._exception
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\download\extractor.py", line 108, in _sync_extract
raise ExtractError(msg)
tensorflow_datasets.core.download.extractor.ExtractError: Error while extracting C:\Users\SB00790107\tensorflow_datasets\downloads\fire.goog.com_v0_b_mtl-sent-repr.apps.cowOhVrpNUsvqdZqI70Nq3ISu63l9SOhTqYqoz6uEW3-Y.zipalt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8 to C:\Users\SB00790107\tensorflow_datasets\downloads\extracted\ZIP.fire.goog.com_v0_b_mtl-sent-repr.apps.cowOhVrpNUsvqdZqI70Nq3ISu63l9SOhTqYqoz6uEW3-Y.zipalt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8 (file: SST-2\dev.tsv) : Failed to create a directory: C:\Users\SB00790107\tensorflow_datasets\downloads\extracted/ZIP.fire.goog.com_v0_b_mtl-sent-repr.apps.cowOhVrpNUsvqdZqI70Nq3ISu63l9SOhTqYqoz6uEW3-Y.zipalt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8.incomplete_411bf32dcb704a46b9e93a63fb1aae2c/SST-2; No such file or directory
On windows, path lengths greater than 260 characters may result in an error. See the doc to remove the limitation: https://docs.python.org/3/using/windows.html#removing-the-max-path-limitation

Error running the quick start examples

I followed all the instructions to run the quick start examples.
However, for the sentiment classification example, I got the following error:

tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(15, 128), b.shape=(128, 128), m=15, n=128, k=128 [Op:MatMul]

For the language model example, I got a similar error:

tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(15, 768), b.shape=(768, 768), m=15, n=768, k=768 [Op:MatMul]

I am using tensorflow 2.3.0.
Any idea on this? Many thanks!

Shape Mismatch Error in umap calculation when entering a custom datapoint

Hi,
I modified the quickstart_sst_demo.py example file to allow it to run already fine-tuned models from Hugging Face without the need to first train them. I had loaded the model https://huggingface.co/sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english into the tool and was using it. It was working fine until I entered a custom sentence in the Datapoint Editor (i.e. not in the loaded validation set), clicked on Make New Datapoint, and got the following error:

E0822 15:21:16.088476 140224925210432 wsgi_app.py:210] Uncaught error: Incompatible dimension for X and Y matrices: X.shape[1] == 2 while Y.shape[1] == 128

I had earlier also tried the model https://huggingface.co/textattack/bert-base-uncased-SST-2, and it gave the same error.

I went back and checked the same thing using the default script (where google/bert_uncased_L-2_H-128_A-2 is first loaded, fine-tuned for 3 epochs, and then used in LIT); there the issue did not occur.

Logs while loading the distilbert model:

2020-08-22 15:19:36.691398: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-08-22 15:19:36.691460: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I0822 15:19:38.790888 140224925210432 quickstart_sst_demo_huggingface.py:57] Working directory: sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english
I0822 15:19:38.791229 140224925210432 quickstart_sst_demo_huggingface.py:61] train=False, so no finetuning will be done
I0822 15:19:38.870157 140224925210432 configuration_utils.py:265] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english/config.json from cache at /root/.cache/torch/transformers/8ea67766c9c6c502b1e44635404952d1a5685e296fe18144020458c9b07710c5.2677af367f3d00baa869333b4aa545410fb85939ec3a67ddade35baade7f93fe
I0822 15:19:38.870624 140224925210432 configuration_utils.py:301] Model config DistilBertConfig {
  "activation": "gelu",
  "architectures": [
    "DistilBertForSequenceClassification"
  ],
  "attention_dropout": 0.1,
  "dim": 2,
  "dropout": 0.1,
  "finetuning_task": "sst-2",
  "hidden_dim": 2,
  "id2label": {
    "0": "NEGATIVE",
    "1": "POSITIVE"
  },
  "initializer_range": 0.02,
  "label2id": {
    "NEGATIVE": 0,
    "POSITIVE": 1
  },
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 2,
  "n_layers": 2,
  "output_past": true,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "vocab_size": 30522
}

I0822 15:19:38.870761 140224925210432 tokenization_utils.py:938] Model name 'sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english' not found in model shortcut name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). Assuming 'sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english' is a path, a model identifier, or url to a directory containing tokenizer files.
I0822 15:19:39.196162 140224925210432 tokenization_utils.py:1022] loading file https://s3.amazonaws.com/models.huggingface.co/bert/sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english/vocab.txt from cache at /root/.cache/torch/transformers/45a97749e7cd555a7d4a5068597027de77253a260bb784f256b8467034afcbae.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
I0822 15:19:39.196318 140224925210432 tokenization_utils.py:1022] loading file https://s3.amazonaws.com/models.huggingface.co/bert/sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english/added_tokens.json from cache at None
I0822 15:19:39.196394 140224925210432 tokenization_utils.py:1022] loading file https://s3.amazonaws.com/models.huggingface.co/bert/sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english/special_tokens_map.json from cache at /root/.cache/torch/transformers/11de36abdf85710b886b11de7c2a654913de2a5219d1d2c8909e39529244885e.275045728fbf41c11d3dae08b8742c054377e18d92cc7b72b6351152a99b64e4
I0822 15:19:39.196466 140224925210432 tokenization_utils.py:1022] loading file https://s3.amazonaws.com/models.huggingface.co/bert/sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english/tokenizer_config.json from cache at /root/.cache/torch/transformers/a33dd44cc977304f8be92eeaf59c61314eec22b13a4c571a559f2b4d23d66e93.3889713104075cfee9e96090bcdd0dc753733b3db9da20d1dd8b2cd1030536a2
I0822 15:19:39.304257 140224925210432 configuration_utils.py:265] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english/config.json from cache at /root/.cache/torch/transformers/8ea67766c9c6c502b1e44635404952d1a5685e296fe18144020458c9b07710c5.2677af367f3d00baa869333b4aa545410fb85939ec3a67ddade35baade7f93fe
I0822 15:19:39.304666 140224925210432 configuration_utils.py:301] Model config DistilBertConfig {
  "activation": "gelu",
  "architectures": [
    "DistilBertForSequenceClassification"
  ],
  "attention_dropout": 0.1,
  "dim": 2,
  "dropout": 0.1,
  "finetuning_task": "sst-2",
  "hidden_dim": 2,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 2,
  "n_layers": 2,
  "output_attentions": true,
  "output_hidden_states": true,
  "output_past": true,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "vocab_size": 30522
}

I0822 15:19:39.559951 140224925210432 modeling_tf_utils.py:384] loading weights file https://cdn.huggingface.co/sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english/tf_model.h5 from cache at /root/.cache/torch/transformers/c151044f03ea62cb9b121f822633cfe511ec33c640f82a5c36643935d3aa56c3.b4173921ecda9322480ca95ae50a7de71bc1ef95e0ab546577648ccc3d3a6eb9.h5
2020-08-22 15:19:39.580763: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-08-22 15:19:39.580810: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2020-08-22 15:19:39.580853: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (9474283738e4): /proc/driver/nvidia/version does not exist
2020-08-22 15:19:39.581088: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-08-22 15:19:39.588133: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2300050000 Hz
2020-08-22 15:19:39.588746: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5641253f24b0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-08-22 15:19:39.588784: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-08-22 15:19:39.603479: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
I0822 15:19:39.732185 140224925210432 modeling_tf_utils.py:422] Layers of TFDistilBertForSequenceClassification not initialized from pretrained model: ['dropout_7']
I0822 15:19:39.732364 140224925210432 modeling_tf_utils.py:426] Layers from pretrained model not used in TFDistilBertForSequenceClassification: ['dropout_484']
2020-08-22 15:19:39.831450: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Failed precondition: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".
I0822 15:19:40.051693 140224925210432 dataset_info.py:358] Load dataset info from /root/tensorflow_datasets/glue/sst2/1.0.0
I0822 15:19:40.053645 140224925210432 dataset_builder.py:288] Reusing dataset glue (/root/tensorflow_datasets/glue/sst2/1.0.0)
I0822 15:19:40.053833 140224925210432 dataset_builder.py:500] Constructing tf.data.Dataset for split validation, from /root/tensorflow_datasets/glue/sst2/1.0.0
I0822 15:19:40.356983 140224925210432 dev_server.py:79] 
 (    (           
 )\ ) )\ )  *   ) 
(()/((()/(` )  /( 
 /(_))/(_))( )(_))
(_)) (_)) (_(_()) 
| |  |_ _||_   _| 
| |__ | |   | |   
|____|___|  |_|   


I0822 15:19:40.357189 140224925210432 dev_server.py:80] Starting LIT server...
I0822 15:19:40.357330 140224925210432 caching.py:137] CachingModelWrapper 'sst': loading from /tmp/lit_data/sst.cache.pkl
I0822 15:19:40.424300 140224925210432 caching.py:96] Loaded cache (884 entries) from /tmp/lit_data/sst.cache.pkl
I0822 15:19:40.424804 140224925210432 wsgi_serving.py:39] 

Starting Server on port 5634
You can navigate to 0.0.0.0:5634

Error Log ( Traceback ):

I0822 15:20:40.686877 140224925210432 app.py:69] Request received: /get_info?
I0822 15:20:40.687858 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:40] "POST /get_info HTTP/1.1" 200 -
I0822 15:20:41.268086 140224925210432 app.py:69] Request received: /get_dataset?dataset_name=sst_dev
I0822 15:20:41.277388 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:41] "POST /get_dataset?dataset_name=sst_dev HTTP/1.1" 200 -
I0822 15:20:43.192993 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=umap
I0822 15:20:43.701220 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:43.701397 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:43.701481 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:43.701582 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:43.701687 140224925210432 projection.py:207] Projection request: instance key: frozenset({('proj_kw', frozenset({('n_components', 3)})), ('model_name', 'sst'), ('dataset_name', 'sst_dev'), ('field_name', 'cls_emb')})
I0822 15:20:43.709572 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:43.709701 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:43.709782 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:43.709867 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:43.709962 140224925210432 projection.py:184] Creating new projection instance on 872 points
I0822 15:20:43.710053 140224925210432 caching.py:129] CachingModelWrapper 'frozenset({('proj_kw', frozenset({('n_components', 3)})), ('model_name', 'sst'), ('dataset_name', 'sst_dev'), ('field_name', 'cls_emb')})': no cache path specified, not loading.
I0822 15:20:43.712835 140224925210432 umap.py:38] UMAP input x_train: (872, 128)
I0822 15:20:53.559070 140224925210432 caching.py:211] CachingModelWrapper 'frozenset({('proj_kw', frozenset({('n_components', 3)})), ('model_name', 'sst'), ('dataset_name', 'sst_dev'), ('field_name', 'cls_emb')})': misses (dataset=): []
I0822 15:20:53.559272 140224925210432 caching.py:213] CachingModelWrapper 'frozenset({('proj_kw', frozenset({('n_components', 3)})), ('model_name', 'sst'), ('dataset_name', 'sst_dev'), ('field_name', 'cls_emb')})': 0 misses out of 872 inputs
I0822 15:20:53.559361 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:53.559463 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:53.563438 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:53] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=umap HTTP/1.1" 200 -
I0822 15:20:53.564406 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds
I0822 15:20:53.568569 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:53.568676 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:53.568761 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:53.568853 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:53.568954 140224925210432 app.py:183] Requested types: ['MulticlassPreds']
I0822 15:20:53.569080 140224925210432 app.py:193] Will return keys: {'probas'}
I0822 15:20:53.573478 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:53] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds HTTP/1.1" 200 -
I0822 15:20:53.574222 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics
I0822 15:20:53.578125 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:53.578230 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:53.578310 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:53.578406 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:53.588980 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:53] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics HTTP/1.1" 200 -
I0822 15:20:53.589661 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds
I0822 15:20:53.593147 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:53.593254 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:53.593334 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:53.593420 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:53.593515 140224925210432 app.py:183] Requested types: ['MulticlassPreds']
I0822 15:20:53.593613 140224925210432 app.py:193] Will return keys: {'probas'}
I0822 15:20:53.597741 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:53] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds HTTP/1.1" 200 -
I0822 15:20:53.598444 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics
I0822 15:20:53.601932 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:53.602035 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:53.602116 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:53.602200 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:53.612254 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:53] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics HTTP/1.1" 200 -
I0822 15:20:53.612898 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds
I0822 15:20:53.616374 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:53.616477 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:53.616556 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:53.616641 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:53.616734 140224925210432 app.py:183] Requested types: ['MulticlassPreds']
I0822 15:20:53.616831 140224925210432 app.py:193] Will return keys: {'probas'}
I0822 15:20:53.621100 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:53] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds HTTP/1.1" 200 -
I0822 15:20:55.062833 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds
I0822 15:20:55.566349 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:55.566529 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:55.566611 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:55.566708 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:55.566817 140224925210432 app.py:183] Requested types: ['MulticlassPreds']
I0822 15:20:55.566926 140224925210432 app.py:193] Will return keys: {'probas'}
I0822 15:20:55.571765 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:55] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds HTTP/1.1" 200 -
I0822 15:20:55.572703 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=RegressionScore
I0822 15:20:55.822110 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:55.822272 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:55.822356 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:55.822452 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:55.822559 140224925210432 app.py:183] Requested types: ['RegressionScore']
I0822 15:20:55.822671 140224925210432 app.py:193] Will return keys: set()
I0822 15:20:55.824508 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:55] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=RegressionScore HTTP/1.1" 200 -
I0822 15:20:56.057429 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=Scalar
I0822 15:20:56.313553 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:20:56.313664 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 872 inputs
I0822 15:20:56.313746 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:20:56.313837 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:20:56.313934 140224925210432 app.py:183] Requested types: ['Scalar']
I0822 15:20:56.314047 140224925210432 app.py:193] Will return keys: set()
I0822 15:20:56.315773 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:20:56] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=Scalar HTTP/1.1" 200 -
I0822 15:21:08.753991 140224925210432 app.py:69] Request received: /get_datapoint_ids?
I0822 15:21:08.754570 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:08] "POST /get_datapoint_ids HTTP/1.1" 200 -
I0822 15:21:10.000340 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics
I0822 15:21:10.511329 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): ['d70b4885afc1433da87688a15c32cf68']
I0822 15:21:10.511507 140224925210432 caching.py:213] CachingModelWrapper 'sst': 1 misses out of 873 inputs
I0822 15:21:10.511590 140224925210432 caching.py:218] Prepared 1 inputs for model
I0822 15:21:10.552725 140224925210432 caching.py:220] Received 1 predictions from model
I0822 15:21:10.563237 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:10] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics HTTP/1.1" 200 -
I0822 15:21:10.748265 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds
I0822 15:21:11.011022 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:11.011197 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 873 inputs
I0822 15:21:11.011280 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:11.011377 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:11.011486 140224925210432 app.py:183] Requested types: ['MulticlassPreds']
I0822 15:21:11.011596 140224925210432 app.py:193] Will return keys: {'probas'}
I0822 15:21:11.016319 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:11] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds HTTP/1.1" 200 -
I0822 15:21:11.017176 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds
I0822 15:21:11.496760 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:11.496941 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 873 inputs
I0822 15:21:11.497025 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:11.497122 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:11.497264 140224925210432 app.py:183] Requested types: ['MulticlassPreds']
I0822 15:21:11.497374 140224925210432 app.py:193] Will return keys: {'probas'}
I0822 15:21:11.502411 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:11] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds HTTP/1.1" 200 -
I0822 15:21:11.503245 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=RegressionScore
I0822 15:21:11.760120 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:11.760298 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 873 inputs
I0822 15:21:11.760380 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:11.760478 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:11.760585 140224925210432 app.py:183] Requested types: ['RegressionScore']
I0822 15:21:11.760695 140224925210432 app.py:193] Will return keys: set()
I0822 15:21:11.762583 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:11] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=RegressionScore HTTP/1.1" 200 -
I0822 15:21:11.772038 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=Scalar
I0822 15:21:12.243162 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:12.243347 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 873 inputs
I0822 15:21:12.243430 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:12.243527 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:12.243634 140224925210432 app.py:183] Requested types: ['Scalar']
I0822 15:21:12.243743 140224925210432 app.py:193] Will return keys: set()
I0822 15:21:12.245831 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:12] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=Scalar HTTP/1.1" 200 -
I0822 15:21:12.246693 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics
I0822 15:21:12.246981 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:12.247077 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 1 inputs
I0822 15:21:12.247156 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:12.247241 140224925210432 caching.py:220] Received 0 predictions from model
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/numpy/lib/function_base.py:393: RuntimeWarning: Mean of empty slice.
  avg = a.mean(axis)
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/numpy/core/_methods.py:161: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1221: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1221: UndefinedMetricWarning: Recall is ill-defined and being set to 0.0 due to no true samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1465: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use `zero_division` parameter to control this behavior.
  average, "true nor predicted", 'F-score is', len(true_sum)
I0822 15:21:12.252254 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:12] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics HTTP/1.1" 200 -
I0822 15:21:12.498321 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds
I0822 15:21:12.498716 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:12.498813 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 1 inputs
I0822 15:21:12.498892 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:12.498981 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:12.499068 140224925210432 app.py:183] Requested types: ['MulticlassPreds']
I0822 15:21:12.499170 140224925210432 app.py:193] Will return keys: {'probas'}
I0822 15:21:12.499482 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:12] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds HTTP/1.1" 200 -
I0822 15:21:12.996581 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=counterfactual%20explainer
I0822 15:21:12.997049 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:12.997206 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 1 inputs
I0822 15:21:12.997319 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:12.997461 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:12.997652 140224925210432 lemon_explainer.py:87] Found text fields for LEMON attribution: ['sentence']
I0822 15:21:12.997792 140224925210432 lemon_explainer.py:108] Explaining: This won't work
The exact solution is  x = 0                              
I0822 15:21:13.000373 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:13] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=counterfactual%20explainer HTTP/1.1" 200 -
I0822 15:21:13.538210 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds
I0822 15:21:13.538661 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:13.538776 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 1 inputs
I0822 15:21:13.538902 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:13.539038 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:13.539162 140224925210432 app.py:183] Requested types: ['MulticlassPreds']
I0822 15:21:13.539313 140224925210432 app.py:193] Will return keys: {'probas'}
I0822 15:21:13.539734 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:13] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds HTTP/1.1" 200 -
I0822 15:21:13.787961 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds
I0822 15:21:13.788417 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:13.788529 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 1 inputs
I0822 15:21:13.788653 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:13.788795 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:13.788922 140224925210432 app.py:183] Requested types: ['MulticlassPreds']
I0822 15:21:13.789069 140224925210432 app.py:193] Will return keys: {'probas'}
I0822 15:21:13.789474 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:13] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=MulticlassPreds HTTP/1.1" 200 -
I0822 15:21:14.284508 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=grad_norm
I0822 15:21:14.284989 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:14.285106 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 1 inputs
I0822 15:21:14.285258 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:14.285397 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:14.285616 140224925210432 gradient_maps.py:65] Found fields for gradient attribution: ['token_grad_sentence']
/opt/lit/lit_nlp/components/gradient_maps.py:51: RuntimeWarning: invalid value encountered in true_divide
  grad_norm /= np.sum(grad_norm)
I0822 15:21:14.286377 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:14] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=grad_norm HTTP/1.1" 200 -
I0822 15:21:14.824009 140224925210432 app.py:69] Request received: /get_preds?model=sst&dataset_name=sst_dev&requested_types=Tokens,AttentionHeads
I0822 15:21:14.824451 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:14.824565 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 1 inputs
I0822 15:21:14.824690 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:14.824824 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:14.824951 140224925210432 app.py:183] Requested types: ['Tokens', 'AttentionHeads']
I0822 15:21:14.825211 140224925210432 app.py:193] Will return keys: {'layer_1/attention', 'tokens_sentence', 'tokens', 'layer_0/attention'}
I0822 15:21:14.825782 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:14] "POST /get_preds?model=sst&dataset_name=sst_dev&requested_types=Tokens,AttentionHeads HTTP/1.1" 200 -
I0822 15:21:15.072378 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=counterfactual%20explainer
I0822 15:21:15.072764 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:15.072861 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 1 inputs
I0822 15:21:15.072937 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:15.073023 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:15.073159 140224925210432 lemon_explainer.py:87] Found text fields for LEMON attribution: ['sentence']
I0822 15:21:15.073270 140224925210432 lemon_explainer.py:108] Explaining: This won't work
The exact solution is  x = 0                              
I0822 15:21:15.075521 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:15] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=counterfactual%20explainer HTTP/1.1" 200 -
I0822 15:21:15.817224 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=umap
I0822 15:21:16.081932 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:16.082101 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 873 inputs
I0822 15:21:16.082181 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:16.082275 140224925210432 caching.py:220] Received 0 predictions from model
I0822 15:21:16.082371 140224925210432 projection.py:207] Projection request: instance key: frozenset({('proj_kw', frozenset({('n_components', 3)})), ('model_name', 'sst'), ('dataset_name', 'sst_dev'), ('field_name', 'cls_emb')})
I0822 15:21:16.084426 140224925210432 caching.py:211] CachingModelWrapper 'frozenset({('proj_kw', frozenset({('n_components', 3)})), ('model_name', 'sst'), ('dataset_name', 'sst_dev'), ('field_name', 'cls_emb')})': misses (dataset=): ['d70b4885afc1433da87688a15c32cf68']
I0822 15:21:16.084517 140224925210432 caching.py:213] CachingModelWrapper 'frozenset({('proj_kw', frozenset({('n_components', 3)})), ('model_name', 'sst'), ('dataset_name', 'sst_dev'), ('field_name', 'cls_emb')})': 1 misses out of 873 inputs
I0822 15:21:16.084591 140224925210432 caching.py:218] Prepared 1 inputs for model
E0822 15:21:16.088476 140224925210432 wsgi_app.py:210] Uncaught error: Incompatible dimension for X and Y matrices: X.shape[1] == 2 while Y.shape[1] == 128 

 Traceback (most recent call last):
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/umap/umap_.py", line 2056, in transform
    X, self._raw_data, metric=_m, **self._metric_kwds
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/utils/validation.py", line 73, in inner_f
    return f(**kwargs)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/pairwise.py", line 1775, in pairwise_distances
    return _parallel_pairwise(X, Y, func, n_jobs, **kwds)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/pairwise.py", line 1359, in _parallel_pairwise
    return func(X, Y, **kwds)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/pairwise.py", line 1379, in _pairwise_callable
    X, Y = check_pairwise_arrays(X, Y, force_all_finite=force_all_finite)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/utils/validation.py", line 73, in inner_f
    return f(**kwargs)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/pairwise.py", line 160, in check_pairwise_arrays
    X.shape[1], Y.shape[1]))
ValueError: Incompatible dimension for X and Y matrices: X.shape[1] == 2 while Y.shape[1] == 128

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/lit/lit_nlp/lib/wsgi_app.py", line 193, in __call__
    return self._ServeCustomHandler(request, clean_path)(environ,
  File "/opt/lit/lit_nlp/lib/wsgi_app.py", line 178, in _ServeCustomHandler
    return self._handlers[clean_path](self, request)
  File "/opt/lit/lit_nlp/app.py", line 77, in _handler
    outputs = fn(data, **kw)
  File "/opt/lit/lit_nlp/app.py", line 240, in _get_interpretations
    config=data.get('config'))
  File "/opt/lit/lit_nlp/components/projection.py", line 198, in run_with_metadata
    return self._run_with_metadata(*args, **kw)
  File "/opt/lit/lit_nlp/components/projection.py", line 216, in _run_with_metadata
    model_outputs)
  File "/opt/lit/lit_nlp/components/projection.py", line 131, in run_with_metadata
    return self._run(model, indexed_inputs, model_outputs, do_fit=False)
  File "/opt/lit/lit_nlp/components/projection.py", line 121, in _run
    converted_inputs, dataset_name="")
  File "/opt/lit/lit_nlp/lib/caching.py", line 219, in predict_with_metadata
    model_preds = list(self._model.predict_with_metadata(model_inputs))
  File "/opt/lit/lit_nlp/api/model.py", line 177, in <genexpr>
    results = (scrub_numpy_refs(res) for res in results)
  File "/opt/lit/lit_nlp/api/model.py", line 192, in _batched_predict
    yield from self.predict_minibatch(minibatch, **kw)
  File "/opt/lit/lit_nlp/components/umap.py", line 49, in predict_minibatch
    zs = self._umap.transform(x)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/umap/umap_.py", line 2063, in transform
    kwds=self._metric_kwds,
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/umap/distances.py", line 1260, in pairwise_special_metric
    return pairwise_distances(X, Y, metric=_partial_metric)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/utils/validation.py", line 73, in inner_f
    return f(**kwargs)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/pairwise.py", line 1775, in pairwise_distances
    return _parallel_pairwise(X, Y, func, n_jobs, **kwds)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/pairwise.py", line 1359, in _parallel_pairwise
    return func(X, Y, **kwds)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/pairwise.py", line 1379, in _pairwise_callable
    X, Y = check_pairwise_arrays(X, Y, force_all_finite=force_all_finite)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/utils/validation.py", line 73, in inner_f
    return f(**kwargs)
  File "/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/pairwise.py", line 160, in check_pairwise_arrays
    X.shape[1], Y.shape[1]))
ValueError: Incompatible dimension for X and Y matrices: X.shape[1] == 2 while Y.shape[1] == 128

I0822 15:21:16.089119 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:16] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=umap HTTP/1.1" 500 -
I0822 15:21:16.089920 140224925210432 app.py:69] Request received: /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics
I0822 15:21:16.090191 140224925210432 caching.py:211] CachingModelWrapper 'sst': misses (dataset=sst_dev): []
I0822 15:21:16.090284 140224925210432 caching.py:213] CachingModelWrapper 'sst': 0 misses out of 1 inputs
I0822 15:21:16.090358 140224925210432 caching.py:218] Prepared 0 inputs for model
I0822 15:21:16.090440 140224925210432 caching.py:220] Received 0 predictions from model
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/numpy/lib/function_base.py:393: RuntimeWarning: Mean of empty slice.
  avg = a.mean(axis)
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/numpy/core/_methods.py:161: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1221: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1221: UndefinedMetricWarning: Recall is ill-defined and being set to 0.0 due to no true samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
/opt/conda/envs/lit-nlp/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1465: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use `zero_division` parameter to control this behavior.
  average, "true nor predicted", 'F-score is', len(true_sum)
I0822 15:21:16.093940 140224925210432 _internal.py:113] 172.17.0.1 - - [22/Aug/2020 15:21:16] "POST /get_interpretations?model=sst&dataset_name=sst_dev&interpreter=metrics HTTP/1.1" 200 -

Kindly help.

Thanks

Explanations Component

What do the word scores in the explanations component stand for? How should one interpret them?

How to Show the Attentions

Hi,
I ran code like the example in lit_nlp/examples/sst_pytorch_demo.py for a custom model and dataset, and it doesn't show the Attentions box in the explanations part. What should I do to see them?
I return the attentions in predict_minibatch but not the grads.
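For context, the attention module keys off the model's output_spec(), not only off what predict_minibatch returns: attention fields need to be declared with the AttentionHeads type and aligned to a Tokens field. A minimal sketch, assuming the lit_nlp types API (the field names mirror the "layer_0/attention" keys seen in the logs above and are otherwise illustrative; check the exact AttentionHeads constructor arguments against the lit_nlp version you have installed):

from lit_nlp.api import types as lit_types

def output_spec(self):
  return {
      # Tokens that the attention maps align to.
      "tokens": lit_types.Tokens(),
      # One field per layer; predict_minibatch should return a
      # <float>[num_heads, num_tokens, num_tokens] array under each key.
      "layer_0/attention": lit_types.AttentionHeads(align_in="tokens",
                                                    align_out="tokens"),
      "layer_1/attention": lit_types.AttentionHeads(align_in="tokens",
                                                    align_out="tokens"),
  }

predict_minibatch then needs to return arrays under those same keys, alongside the "tokens" list for each example.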

error about Google authentication bearer token

I have got this error several times:
2020-09-01 09:58:52.889600: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Failed precondition: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".
...
2020-09-01 10:11:32.846994: E tensorflow/core/platform/cloud/curl_http_request.cc:598] The transmission of request 0x7fef94f8e440 (URI: https://www.googleapis.com/storage/v1/b/tfds-data/o/datasets%2Fglue?fields=size%2Cgeneration%2Cupdated) has been stuck at 0 of 0 bytes for 61 seconds and will be aborted. CURL timing information: lookup time: 0.067967 (No error), connect time: 0 (No error), pre-transfer time: 0 (No error), start-transfer time: 0 (No error)
tensorflow.python.framework.errors_impl.AbortedError: All 10 retry attempts failed. The last failure: Unavailable: Error executing an HTTP request: libcurl code 42 meaning 'Operation was aborted by an application callback', error details: Callback aborted
when reading metadata of gs://tfds-data/datasets/glue
I found a similar closed issue, but the fix there did not work; it seems the problem may not be about the TensorFlow version but about GOOGLE_APPLICATION_CREDENTIALS.

400 error: Path is not safe

I get this error after running the demo command python -m lit_nlp.examples.quickstart_sst_demo --port=5432. I am able to complete training, but when I open up localhost:5432 I see this:

[screenshot of the error page]

Here is where the error is triggered (defined later in the file too):

if not self._PathIsSafe(path):

can not open the app after run quickstart_sst_demo

I run python -m lit_nlp.examples.quickstart_sst_demo --port=5432. After training a model for 3 epochs, I run into this error: wsgi_app.py:147] path ./lit_nlp/client/build/favicon.ico not found, sending 404. _internal.py:122] 127.0.0.1 - - [18/Aug/2020 10:43:04] "GET / HTTP/1.1" 404.

Saving Lime Predictions

Hello Team,
I see a mechanism for caching and warm start. Can LIME predictions also be cached and stored on disk? Right now the documents I have are big, so LIME takes significant time to render predictions. I was therefore thinking of precomputing the explanations and saving them in the cache.
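The caching and warm-start mechanism mentioned above appears to cover model predictions; for precomputing LIME outputs, a plain pickle cache outside of LIT is one possible workaround. A rough sketch, not a LIT API: run_lime below is a hypothetical stand-in for however you invoke the explainer, and examples are assumed to be dicts with an "id" key.

import os
import pickle

CACHE_PATH = "lime_cache.pkl"

def cached_lime(examples, run_lime):
  """Compute LIME explanations once, then reuse them from disk."""
  if os.path.exists(CACHE_PATH):
    with open(CACHE_PATH, "rb") as f:
      return pickle.load(f)
  # run_lime(example) -> explanation; supplied by the caller.
  explanations = {ex["id"]: run_lime(ex) for ex in examples}
  with open(CACHE_PATH, "wb") as f:
    pickle.dump(explanations, f)
  return explanations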

[Question] TF Serving Support

Hi,

I was curious if seamless integration with TF Serving is on the roadmap?

I see that there is support for remote models in remote_model.py, but is there anything beyond that we can look forward to?

We usually have a model server running that serves our models, so it would be great to reuse the models and resources allocated to the model server.

Thanks!

SyntaxError: invalid syntax mspec: lit_model.ModelSpec = m.spec()

$ python3 -m lit_nlp.examples.quickstart_sst_demo --port=5432                                                                                                                           
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "~/lit/lit_nlp/examples/quickstart_sst_demo.py", line 20, in <module>
    from lit_nlp import dev_server
  File "~/lit/lit_nlp/dev_server.py", line 21, in <module>
    from lit_nlp import app as lit_app
  File "~/lit/lit_nlp/app.py", line 91
    mspec: lit_model.ModelSpec = m.spec()
         ^
SyntaxError: invalid syntax

Python Version

$ python3                                                    
Python 3.5.2 (default, Jul 17 2020, 14:04:10) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.

Is this related to the Python version?
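Yes, this is the Python version: mspec: lit_model.ModelSpec = m.spec() is a PEP 526 variable annotation, which is only valid syntax from Python 3.6 on, so Python 3.5.2 fails at parse time exactly as shown. A minimal illustration:

# Valid on Python 3.6+ (PEP 526 variable annotation):
x: int = 5

# Python 3.5 raises SyntaxError on the line above while parsing the file,
# which is why app.py fails before any LIT code runs.
# A 3.5-compatible spelling would use a type comment instead:
y = 5  # type: int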

TypeError: cached_path() got an unexpected keyword argument 'extract_compressed_file'

python -m lit_nlp.examples.glue_demo --port=5432 --quickstart
gives the following error:

I0113 20:59:53.176898 4700048832 glue_demo.py:86] Quick-start mode; overriding --models and --max_examples.
I0113 20:59:53.177102 4700048832 glue_demo.py:96] Loading model 'sst2-tiny' for task 'sst2' from 'https://storage.googleapis.com/what-if-tool-resources/lit-models/sst2_tiny.tar.gz'
Traceback (most recent call last):
  File "/Users/chaowu/Documents/Anaconda/anaconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/chaowu/Documents/Anaconda/anaconda3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/chaowu/Documents/Anaconda/anaconda3/lib/python3.8/site-packages/lit_nlp/examples/glue_demo.py", line 133, in <module>
    app.run(main)
  File "/Users/chaowu/Documents/Anaconda/anaconda3/lib/python3.8/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/Users/chaowu/Documents/Anaconda/anaconda3/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/Users/chaowu/Documents/Anaconda/anaconda3/lib/python3.8/site-packages/lit_nlp/examples/glue_demo.py", line 100, in main
    path = transformers.file_utils.cached_path(
TypeError: cached_path() got an unexpected keyword argument 'extract_compressed_file'

yarn build does not work on Windows

Hi,
Thanks for this nice project.

I tried to run yarn build in the client directory, but I get the following error:

C:\repositories\lit\lit_nlp\client>yarn build
yarn run v1.22.5
warning package.json: License should be a valid SPDX license expression
$ rm -rf ./build && webpack --env.production --config ./webpack/config.js
'rm' is not recognized as an internal or external command,
operable program or batch file.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

I use Windows 10; I first installed yarn using npm, then tried downloading and running the MSI installer.
yarn version: 1.22.5

UMAP visualization in-between Transformer layers

To better understand how the network encodes information, it is interesting to see not only the resulting hidden representations before classification (as shown now in the PCA/UMAP visualization) but also how they change between layers.

I feel it would be interesting to be able to choose which layer to visualize in the PCA/UMAP window, perhaps using a dropdown selection. The models already output these hidden representations so it would be a matter of recomputing the PCA/UMAP when another layer is selected.

I could help implement this in a new branch if you think it would be worth adding.

And thanks again for the tool :)
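For reference, the projection modules operate on Embeddings-typed fields in the model's output spec, so exposing one field per layer would be one way to make the layer selectable. A minimal sketch, assuming the lit_nlp types API (field names and self.num_layers are illustrative):

from lit_nlp.api import types as lit_types

def output_spec(self):
  spec = {"probas": lit_types.MulticlassPreds(vocab=["0", "1"], parent="label")}
  # Expose the [CLS] vector from every layer as its own embeddings field,
  # e.g. "cls_emb_layer_0" ... "cls_emb_layer_11", so each can be chosen
  # for PCA/UMAP projection.
  for i in range(self.num_layers):
    spec["cls_emb_layer_%d" % i] = lit_types.Embeddings()
  return spec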

input and output specs for machine translation

I am having some issues displaying outputs of machine translation models. Is there an example of input_spec and output_spec functions for a machine translation model class (or for sequence to sequence models in general)?
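As a rough sketch (assuming the lit_nlp types API; field names are illustrative), the specs for a translation model could look like the following, with GeneratedText's parent naming the reference field so the generations can be scored against it:

from lit_nlp.api import types as lit_types

def input_spec(self):
  return {
      "source": lit_types.TextSegment(),
      # Reference translation; optional so unlabeled examples still work.
      "target": lit_types.TextSegment(required=False),
  }

def output_spec(self):
  return {
      # `parent` names the reference field, so generation metrics can
      # compare the model output against it.
      "translation": lit_types.GeneratedText(parent="target"),
  }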

Support for T5 model

In the paper introducing LIT, the T5 model is used in the section entitled "debugging text generation". But I did not find support for the T5 model in the pretrained_lm_demo.py file. It seems that only "bert-" and "gpt2" are supported for now. Am I right?

yarn && yarn build

Hi,
I didn't understand this error from yarn:
(lit-nlp) sh-4.2$ yarn build
Error: Could not find or load main class build
any ideas ?

(lit-nlp) sh-4.2$ yarn version
Hadoop 2.7.3.2.6.5.0-292
Subversion [email protected]:hortonworks/hadoop.git -r 3091053c59a62c82d82c9f778c48bde5ef0a89a1
Compiled by jenkins on 2018-05-11T07:53Z
Compiled with protoc 2.5.0
From source with checksum abed71da5bc89062f6f6711179f2058
This command was run using /usr/hdp/2.6.5.0-292/hadoop/hadoop-common-2.7.3.2.6.5.0-292.jar

Examples for using PyTorch models?

The LIT documentation says Framework-agnostic and compatible with TensorFlow, PyTorch, and more, but I'm having a hard time figuring out what exactly I need to do in order to run a PyTorch model in LIT. To be specific, I would like to use PyTorch models written using the TorchText library and/or the HuggingFace Transformers library.

When I looked through the code in https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/models/glue_models.py, I see references to tf.keras and tf.GradientTape, which are obviously for TensorFlow.

Would it be possible for you to please add one or two examples showing how to use PyTorch with LIT?
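The general recipe is framework-agnostic: subclass the LIT Model API and do whatever you like inside predict_minibatch. A minimal sketch (illustrative, not an official example: the checkpoint name and field names are assumptions, and it assumes a recent transformers release whose forward pass returns an object with a .logits attribute):

from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types
import torch
import transformers


class PyTorchSentimentModel(lit_model.Model):
  """Binary sentiment classifier backed by a PyTorch transformer."""

  def __init__(self, model_name="distilbert-base-uncased-finetuned-sst-2-english"):
    self.tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
    self.model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
    self.model.eval()

  def input_spec(self):
    return {"sentence": lit_types.TextSegment()}

  def output_spec(self):
    # `parent` names the gold-label field in the dataset spec.
    return {"probas": lit_types.MulticlassPreds(vocab=["0", "1"], parent="label")}

  def predict_minibatch(self, inputs):
    texts = [ex["sentence"] for ex in inputs]
    batch = self.tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
      logits = self.model(**batch).logits  # shape [batch, 2]
    probas = torch.softmax(logits, dim=-1).numpy()
    return [{"probas": p} for p in probas]

The lit_nlp/examples/sst_pytorch_demo.py file mentioned in another issue above is a fuller PyTorch example along these lines.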

Server aborted the SSL handshake when reading metadata of gs://tfds-data/datasets/glue

Hi,

While running python -m lit_nlp.examples.quickstart_sst_demo --port=5432, the error below shows up. I am wondering if there is a solution to this? Thank you!

I0827 11:42:16.742576 4549361088 quickstart_sst_demo.py:47] Working directory: /var/folders/8s/t30hz5ys1v9_r8drjl_5v1mc0000gn/T/tmpyqqb40ez
2020-08-27 11:42:16.747240: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Failed precondition: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".

tensorflow.python.framework.errors_impl.AbortedError: All 10 retry attempts failed. The last failure: Unavailable: Error executing an HTTP request: libcurl code 35 meaning 'SSL connect error', error details: Server aborted the SSL handshake when reading metadata of gs://tfds-data/datasets/glue

LIT for NER

I have gone through the examples and documentation but was still not able to figure out how to use LIT for a BERT-based NER tagger. I have fine-tuned a BERT model for NER tagging.

Any pointers on how to use it here?
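As a rough sketch (assuming the lit_nlp types API; field and tag names are illustrative), a token-tagging model such as a BERT NER tagger could declare its specs like this, with one tag per token aligned to a Tokens field:

from lit_nlp.api import types as lit_types

def input_spec(self):
  return {"text": lit_types.TextSegment()}

def output_spec(self):
  return {
      # The tokenization actually used by the model.
      "tokens": lit_types.Tokens(parent="text"),
      # One predicted tag (e.g. a BIO label) per token above.
      "tags": lit_types.SequenceTags(align="tokens"),
  }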

Create a Website for this Repo

Hi there.

Is there a website for this repo?

Because if you don't have one, this repo can simply be turned into a website right away, and others will discover the project through that website.

Steps:

  1. Go to Settings and look for GitHub Pages, scroll down. That's almost at the bottom.

  2. You will see Branch: none there; change that to master because you have a README.md file in the master branch. This will be your page. Click Save first.

  3. Then click Choose a theme, you select a predefined theme of your site.

  4. Visit your site now! The URL will be https://PAIR-code.github.io/lit.

If you were amazed by that, simply read the documentation about GitHub Pages.

Web-based demo (eventually)?

I've loved the way the hosted web demos by PAIR make it easy for people (e.g. my students, a couple of whom got to meet Mahima!) to just try them out immediately -- such as the What-If Tool and the Embedding Projector. Currently it seems that LIT is only possible to run by installing it locally. I understand it is still in the "work in progress" stage but wanted to ask: Are there plans to make it available as a web-based demo as well? Thanks.

Seq2seq NMT

I did not find support for seq2seq models (like NMT) in the pretrained_lm_demo.py file. It seems that only "bert-" and "gpt2" are supported for now. Am I right?

Will this work for an OpenNMT model?

Hi, I've been trying to use LIT with a pre-trained OpenNMT model and am having a lot of trouble with creating the Model class. I'm not sure what the closest example you have is, but I haven't had any luck with what I've tried. Do you have an example demo/class for OpenNMT models?

Differences in whether prediction score threshold slider renders

EDIT: I followed this thread and found it was coming from a bug in lit-mobx that @cannoneyed ran into in adobe/lit-mobx#26, and has been fixed upstream in 0.0.4. This is what #54 does.


hi! In looking around, I noticed that the threshold slider in the Prediction Score module doesn't always render. To reproduce, load the quickstart and then click on the predictions tab. Then reload the page. These are the URL params I see: http://localhost:5432/?models=sst&dataset=sst_dev&compare_data_mode=false&layout=default&tab=Predictions

When the page loads again, the threshold slider is missing:
[screenshot: Prediction Score module with the threshold slider missing]

The immediate issue is that when ClassificationService.marginSettings is changed via setMargin, the component doesn't react and rerender. My understanding is that MobxLitElement should be tracking reads of MobX observables and setting up listeners that update the component's HTML as needed. But that doesn't seem to be happening.

I started reading up on the internals of MobX, LitElement, and lit-mobx but couldn't figure out exactly what the issue was. The only way I could resolve the problem was to call requestUpdate manually when allMargins changes (diff).

After that change, the slider rendered as expected:
[screenshot: Prediction Score module with the threshold slider rendered]

After taking a break and coming back to dig further into the internals, I happened to switch from Firefox to Chrome. At first, Chrome didn't have the problem at all and everything worked as expected: I could set breakpoints and see that the call to setMargin caused the element to rerender. So I started digging in more and logging LitElement#update to understand what was happening, and in the process of doing this I started seeing the same bug in Chrome. Once that started, I couldn't get it to work again, regardless of reverting any of the code.

So this seemed a bit insane :) I figured there must be a race condition or some non-determinism in the internals of the observables and callbacks, so I started reading the source of lit-mobx. I happened to do this on GitHub and then noticed that the code was different from what lit is using. Looking at their repo, I ran into adobe/lit-mobx#26, and verified that #54 resolves the issue on both browsers.

Error with running some example files

Hi, thank you for developing this wonderful tool.
I was trying to run glue_demo.py from the examples and got the following error:

  File "C:\Users\afujita\anaconda3\envs\lit-nlp\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\afujita\anaconda3\envs\lit-nlp\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\afujita\~\lit\lit_nlp\examples\glue_demo.py", line 76, in <module>
    app.run(main)
  File "C:\Users\afujita\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\Users\afujita\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "C:\Users\afujita\~\lit\lit_nlp\examples\glue_demo.py", line 47, in main
    os.path.join(FLAGS.models_path, "sst2"))
  File "C:\Users\afujita\anaconda3\envs\lit-nlp\lib\ntpath.py", line 76, in join
    path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not NoneType

I'd appreciate it if you could tell me how to fix this error.
For your information, quick_start_sst.py and pretrained_lm_demo.py both worked okay. Thank you so much.

Missing "import os" in lit_nlp/examples/models/glue_models.py

Running python -m lit_nlp.examples.quickstart_sst_demo --port=5432 from the Quickstart examples gives a NameError: name 'os' is not defined at the end of training:

Epoch 3/3
2104/2104 - 1664s - loss: 0.2828 - accuracy: 0.8850 - val_loss: 0.4176 - val_accuracy: 0.8221
Traceback (most recent call last):
  File "/home/nemo/miniconda3/envs/lit-nlp/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/nemo/miniconda3/envs/lit-nlp/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/nemo/lit/lit_nlp/examples/quickstart_sst_demo.py", line 60, in <module>
    app.run(main)
  File "/home/nemo/miniconda3/envs/lit-nlp/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/nemo/miniconda3/envs/lit-nlp/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/nemo/lit/lit_nlp/examples/quickstart_sst_demo.py", line 48, in main
    run_finetuning(model_path)
  File "/home/nemo/lit/lit_nlp/examples/quickstart_sst_demo.py", line 42, in run_finetuning
    model.save(train_path)
  File "/home/nemo/lit/lit_nlp/examples/models/glue_models.py", line 168, in save
    if not os.path.isdir(path):
NameError: name 'os' is not defined

This is easily fixed by adding import os in lit_nlp/examples/models/glue_models.py.

Multi label support

Thank you for your great work. I'm very excited to incorporate it into my projects. One thing I am trying to understand is how far the library goes with multi-label classification problems, and whether there are any plans to support that in the future.

Unknown error when browsing the tool - Using AWS Sagemaker Notebook instances

Hi, thanks for building the awesome tool!

I encountered a weird (perhaps JavaScript) issue when accessing LIT via MYHOST/5432/ after following the Quick-start: sentiment classifier:

[screenshot of the error in the browser]

The shell where I run the server seems to behave normally when I access the url:

I0910 07:03:53.083244 140169940862784 dev_server.py:78]
 (    (
 )\ ) )\ )  *   )
(()/((()/(` )  /(
 /(_))/(_))( )(_))
(_)) (_)) (_(_())
| |  |_ _||_   _|
| |__ | |   | |
|____|___|  |_|


I0910 07:03:53.083362 140169940862784 dev_server.py:79] Starting LIT server...
I0910 07:03:53.083457 140169940862784 caching.py:129] CachingModelWrapper 'sst': no cache path specified, not loading.
I0910 07:03:53.083776 140169940862784 wsgi_serving.py:38]

Starting Server on port 5432
You can navigate to 127.0.0.1:5432


I0910 07:03:53.084201 140169940862784 _internal.py:113]  * Running on http://127.0.0.1:5432/ (Press CTRL+C to quit)
I0910 07:06:49.207016 140169940862784 _internal.py:113] 127.0.0.1 - - [10/Sep/2020 07:06:49] "GET / HTTP/1.1" 200 -
I0910 07:06:49.395286 140169940862784 _internal.py:113] 127.0.0.1 - - [10/Sep/2020 07:06:49] "GET /main.js HTTP/1.1" 200 -
I0910 07:06:50.111999 140169940862784 _internal.py:113] 127.0.0.1 - - [10/Sep/2020 07:06:50] "GET /static/favicon.png HTTP/

And below is the error message of the browser console, any suggestions on how to fix this would be highly appreciated. Thanks in advance!

https://MYHOST/get_info? 404 (Not Found)
Uncaught (in promise) Error: <!DOCTYPE HTML>
[The error body is the full HTML of the Jupyter Notebook "404 : Not Found" page ("You are requesting a page that does not exist!"): the /get_info request was answered by the Jupyter notebook server's 404 page rather than by the LIT server.]
    at o.<anonymous> (main.js:500)
    at Generator.next (<anonymous>)
    at a (main.js:500)
(anonymous) @ main.js:500
a @ main.js:500
Promise.then (async)
c @ main.js:127
(anonymous) @ main.js:127
r @ main.js:127
initialize @ main.js:127
(anonymous) @ main.js:869
n @ main.js:1
(anonymous) @ main.js:1
(anonymous) @ main.js:1

Issues in Running LIT

Hi There,

I am trying to run LIT Quick-start: sentiment classifier
cd ~/lit
python -m lit_nlp.examples.quickstart_sst_demo --port=5432

The output is:

(lit-nlp) C:~\lit>python -m lit_nlp.examples.quickstart_sst_demo --port=5432
2020-08-20 14:37:27.651045: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
I0820 14:37:27.670744 33968 quickstart_sst_demo.py:47] Working directory: C:\Users\SB0079~1\AppData\Local\Temp\tmp2582r1b0
W0820 14:37:27.926524 33968 dataset_builder.py:575] Found a different version 1.0.0 of dataset glue in data_dir C:\Users\SB00790107\tensorflow_datasets. Using currently defined version 0.0.2.
I0820 14:37:27.926524 33968 dataset_builder.py:184] Overwrite dataset info from restored data version.
I0820 14:37:27.933496 33968 dataset_builder.py:253] Reusing dataset glue (C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2)
I0820 14:37:27.934466 33968 dataset_builder.py:399] Constructing tf.data.Dataset for split train, from C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2
W0820 14:37:27.934466 33968 dataset_builder.py:439] Warning: Setting shuffle_files=True because split=TRAIN and shuffle_files=None. This behavior will be deprecated on 2019-08-06, at which point shuffle_files=False will be the default for all splits.
W0820 14:37:35.189518 33968 dataset_builder.py:575] Found a different version 1.0.0 of dataset glue in data_dir C:\Users\SB00790107\tensorflow_datasets. Using currently defined version 0.0.2.
I0820 14:37:35.190503 33968 dataset_builder.py:184] Overwrite dataset info from restored data version.
I0820 14:37:35.192508 33968 dataset_builder.py:253] Reusing dataset glue (C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2)
I0820 14:37:35.192508 33968 dataset_builder.py:399] Constructing tf.data.Dataset for split validation, from C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2
I0820 14:37:35.302182 33968 tokenization_utils.py:306] Model name 'google/bert_uncased_L-2_H-128_A-2' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). Assuming 'google/bert_uncased_L-2_H-128_A-2' is a path or url to a directory containing tokenizer files.
I0820 14:37:35.302182 33968 tokenization_utils.py:317] Didn't find file google/bert_uncased_L-2_H-128_A-2. We won't load it.
I0820 14:37:35.303180 33968 tokenization_utils.py:335] Didn't find file google/bert_uncased_L-2_H-128_A-2\added_tokens.json. We won't load it.
I0820 14:37:35.303180 33968 tokenization_utils.py:335] Didn't find file google/bert_uncased_L-2_H-128_A-2\special_tokens_map.json. We won't load it.
I0820 14:37:35.303180 33968 tokenization_utils.py:335] Didn't find file google/bert_uncased_L-2_H-128_A-2\tokenizer_config.json. We won't load it.
Traceback (most recent call last):
  File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:~\lit\lit_nlp\examples\quickstart_sst_demo.py", line 60, in <module>
    app.run(main)
  File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "C:~\lit\lit_nlp\examples\quickstart_sst_demo.py", line 48, in main
    run_finetuning(model_path)
  File "C:~\lit\lit_nlp\examples\quickstart_sst_demo.py", line 40, in run_finetuning
    model = glue_models.SST2Model(FLAGS.encoder_name, for_training=True)
  File "C:~\lit\lit_nlp\examples\models\glue_models.py", line 319, in __init__
    **kw)
  File "C:~\lit\lit_nlp\examples\models\glue_models.py", line 59, in __init__
    model_name_or_path)
  File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\transformers\tokenization_auto.py", line 109, in from_pretrained
    return BertTokenizer.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\transformers\tokenization_utils.py", line 282, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\transformers\tokenization_utils.py", line 346, in _from_pretrained
    list(cls.vocab_files_names.values())))
OSError: Model name 'google/bert_uncased_L-2_H-128_A-2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'google/bert_uncased_L-2_H-128_A-2' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.

For Running Quick start: language modeling

cd ~/lit
python -m lit_nlp.examples.pretrained_lm_demo --models=bert-base-uncased
--port=5432

The error output is
(lit-nlp) C:~\lit>python -m lit_nlp.examples.pretrained_lm_demo --models=bert-base-uncased --port=5432
2020-08-20 14:32:20.119230: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
I0820 14:32:20.634253 32000 tokenization_utils.py:374] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at C:\Users\SB00790107.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
I0820 14:32:21.133054 32000 configuration_utils.py:151] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json from cache at C:\Users\SB00790107.cache\torch\transformers\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517
I0820 14:32:21.143045 32000 configuration_utils.py:168] Model config {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": true,
"output_hidden_states": true,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30522
}

I0820 14:32:21.576282 32000 modeling_tf_utils.py:258] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-tf_model.h5 from cache at C:\Users\SB00790107.cache\torch\transformers\d667df51ec24c20190f01fb4c20a21debc4c4fc12f7e2f5441ac0a99690e3ee9.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5
W0820 14:32:24.903656 32000 dataset_builder.py:575] Found a different version 1.0.0 of dataset glue in data_dir C:\Users\SB00790107\tensorflow_datasets. Using currently defined version 0.0.2.
I0820 14:32:24.904676 32000 dataset_builder.py:187] Load pre-computed datasetinfo (eg: splits) from bucket.
I0820 14:32:25.158797 32000 dataset_info.py:410] Loading info from GCS for glue/sst2/0.0.2
I0820 14:32:26.526896 32000 dataset_builder.py:273] Generating dataset glue (C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2)
Downloading and preparing dataset glue (7.09 MiB) to C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2...
I0820 14:32:26.530886 32000 download_manager.py:241] Downloading https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8 into C:\Users\SB00790107\tensorflow_datasets\downloads\fire.goog.com_v0_b_mtl-sent-repr.apps.cowOhVrpNUsvqdZqI70Nq3ISu63l9SOhTqYqoz6uEW3-Y.zipalt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8.tmp.6f44416196e74a44a10bca183839e172...
C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\urllib3\connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'firebasestorage.googleapis.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
[repeated tqdm progress-bar frames trimmed: Dl Completed 1/1 url (1.40s/url), Dl Size 7/7 MiB, Extraction completed 1/1 file]
I0820 14:32:28.270815 32000 dataset_builder.py:812] Generating split train
I0820 14:32:28.270815 32000 file_format_adapter.py:233] Writing TFRecords
Shuffling...: 0%| | 0/1 [00:00<?, ? shard/s]WARNING:tensorflow:From C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\file_format_adapter.py:209: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
tf.data.TFRecordDataset(path)
W0820 14:32:39.338444 32000 deprecation.py:323] From C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\file_format_adapter.py:209: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
tf.data.TFRecordDataset(path)

[repeated tqdm progress-bar frames trimmed: Reading/Writing the 67349 train examples]
I0820 14:32:40.169348 32000 dataset_builder.py:812] Generating split validation
I0820 14:32:40.170345 32000 file_format_adapter.py:233] Writing TFRecords
Shuffling...: 0%| | 0/1 [00:00<?, ? shard/s]
Reading...: 0 examples [00:00, ? examples/s]

Writing...: 0%| | 0/872 [00:00<?, ? examples/s]
I0820 14:32:40.370083 32000 dataset_builder.py:812] Generating split test
I0820 14:32:40.373092 32000 file_format_adapter.py:233] Writing TFRecords
Shuffling...: 0%| | 0/1 [00:00<?, ? shard/s]
Reading...: 0 examples [00:00, ? examples/s]

Writing...: 0%| | 0/1821 [00:00<?, ? examples/s]
I0820 14:32:40.717523 32000 dataset_builder.py:301] Skipping computing stats for mode ComputeStatsMode.AUTO.
Dataset glue downloaded and prepared to C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2. Subsequent calls will reuse this data.
I0820 14:32:40.735554 32000 dataset_builder.py:399] Constructing tf.data.Dataset for split validation, from C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2
I0820 14:32:41.163142 32000 dataset_builder.py:675] No config specified, defaulting to first: imdb_reviews/plain_text
I0820 14:32:41.164139 32000 dataset_builder.py:187] Load pre-computed datasetinfo (eg: splits) from bucket.
I0820 14:32:41.407350 32000 dataset_info.py:410] Loading info from GCS for imdb_reviews/plain_text/0.1.0
I0820 14:32:42.439117 32000 dataset_builder.py:273] Generating dataset imdb_reviews (C:\Users\SB00790107\tensorflow_datasets\imdb_reviews\plain_text\0.1.0)
Downloading and preparing dataset imdb_reviews (80.23 MiB) to C:\Users\SB00790107\tensorflow_datasets\imdb_reviews\plain_text\0.1.0...
Dl Completed...: 0 url [00:00, ? url/s]
Dl Size...: 0 MiB [00:00, ? MiB/s]I0820 14:32:42.443107 32000 download_manager.py:241] Downloading http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz into C:\Users\SB00790107\tensorflow_datasets\downloads\ai.stanfor.edu_amaas_sentime_aclImdb_v1PaujRp-TxjBWz59jHXsMDm5WiexbxzaFQkEnXc3Tvo8.tar.gz.tmp.69c9ef3d01b84444a160e5ba3160fb45...
[repeated tqdm progress-bar frames for the imdb_reviews download trimmed: Dl Size climbs from 0 to 57/80 MiB]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Size...: 75%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 60/80 [00:08<00:01, 16.74 MiB/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Size...: 79%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 63/80 [00:08<00:00, 17.16 MiB/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Size...: 82%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 66/80 [00:08<00:00, 17.75 MiB/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Size...: 86%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 69/80 [00:08<00:00, 17.19 MiB/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Size...: 89%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 71/80 [00:08<00:00, 17.41 MiB/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:08<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:09<?, ? url/s]
Dl Size...: 92%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 74/80 [00:09<00:00, 16.26 MiB/s]
Dl Completed...: 0%| | 0/1 [00:09<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:09<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:09<?, ? url/s]
Dl Size...: 96%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 77/80 [00:09<00:00, 16.68 MiB/s]
Dl Completed...: 0%| | 0/1 [00:09<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:09<?, ? url/s]
Dl Completed...: 0%| | 0/1 [00:09<?, ? url/s]
Dl Completed...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:09<00:00, 9.58s/ url]
Dl Size...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80/80 [00:09<00:00, 17.05 MiB/s]
Dl Size...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80/80 [00:09<00:00, 8.34 MiB/s]

Dl Completed...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:09<00:00, 9.60s/ url]
I0820 14:32:52.047359 32000 dataset_builder.py:812] Generating split train
I0820 14:32:52.050351 32000 file_format_adapter.py:233] Writing TFRecords
I0820 14:33:03.785629 32000 dataset_builder.py:812] Generating split test
I0820 14:33:03.788612 32000 file_format_adapter.py:233] Writing TFRecords
I0820 14:33:15.452958 32000 dataset_builder.py:812] Generating split unsupervised
I0820 14:33:15.457943 32000 file_format_adapter.py:233] Writing TFRecords
I0820 14:33:32.041668 32000 dataset_builder.py:301] Skipping computing stats for mode ComputeStatsMode.AUTO.
Dataset imdb_reviews downloaded and prepared to C:\Users\SB00790107\tensorflow_datasets\imdb_reviews\plain_text\0.1.0. Subsequent calls will reuse this data.
I0820 14:33:32.053635 32000 dataset_builder.py:399] Constructing tf.data.Dataset for split test, from C:\Users\SB00790107\tensorflow_datasets\imdb_reviews\plain_text\0.1.0
I0820 14:33:34.528547 32000 pretrained_lm_demo.py:92] Dataset: 'sst_dev' with 872 examples
I0820 14:33:34.536590 32000 pretrained_lm_demo.py:92] Dataset: 'imdb_train' with 25000 examples
I0820 14:33:34.536590 32000 pretrained_lm_demo.py:92] Dataset: 'blank' with 0 examples
I0820 14:33:34.536590 32000 dev_server.py:79] (LIT ASCII-art banner)

I0820 14:33:34.536590 32000 dev_server.py:80] Starting LIT server...
Traceback (most recent call last):
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:~\lit\lit_nlp\examples\pretrained_lm_demo.py", line 102, in <module>
app.run(main)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "C:~\lit\lit_nlp\examples\pretrained_lm_demo.py", line 98, in main
lit_demo.serve()
File "C:~\lit\lit_nlp\dev_server.py", line 81, in serve
app = lit_app.LitApp(*self._app_args, **self._app_kw)
File "C:~\lit\lit_nlp\app.py", line 293, in __init__
os.mkdir(data_dir)
FileNotFoundError: [WinError 3] The system cannot find the path specified: '/tmp/lit_data'
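
The failure is that app.py calls os.mkdir on the default data directory '/tmp/lit_data', a POSIX-style path that does not exist on Windows. Below is a minimal sketch of a platform-safe way to pick and create the cache directory; the default_data_dir helper is hypothetical, not LIT's actual API, and only illustrates the kind of change (or the kind of path to pass in explicitly) that avoids the error.

import os
import tempfile

def default_data_dir() -> str:
    """Pick a platform-appropriate cache directory instead of hard-coding /tmp."""
    # tempfile.gettempdir() resolves to /tmp on Linux and %TEMP% on Windows.
    data_dir = os.path.join(tempfile.gettempdir(), "lit_data")
    # makedirs with exist_ok=True tolerates both a missing parent and an existing dir.
    os.makedirs(data_dir, exist_ok=True)
    return data_dir

Pointing the server at an existing Windows path (for example via a data-dir flag, if the demo exposes one) would sidestep the error as well.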

GPT-3 Support

Hi, is GPT-3 support being added? I was thinking of adding a pull request but I'd trust your code over mine 🙂

Can this work with a scikit-learn SVM?

We were wondering if we can save an SVM model and use it here, or does it only work with language models and neural models built with PyTorch/TensorFlow?

Thank you in advance.
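
LIT is framework-agnostic, so in principle a scikit-learn SVM can be exposed by wrapping it in LIT's model interface. Here is a minimal sketch, assuming the lit_nlp.api model/types API with input_spec/output_spec/predict_minibatch and a hypothetical pre-fitted text pipeline (e.g. TfidfVectorizer + SVC(probability=True)); treat the names and label vocabulary as placeholders, not a definitive implementation.

from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

class SklearnSVMModel(lit_model.Model):
  """Wraps a fitted sklearn text-classification pipeline for LIT."""

  LABELS = ["negative", "positive"]  # hypothetical label vocabulary

  def __init__(self, pipeline):
    self._pipeline = pipeline  # must support predict_proba()

  def input_spec(self):
    return {"text": lit_types.TextSegment()}

  def output_spec(self):
    return {"probas": lit_types.MulticlassPreds(vocab=self.LABELS, parent="label")}

  def predict_minibatch(self, inputs):
    texts = [ex["text"] for ex in inputs]
    return [{"probas": p} for p in self._pipeline.predict_proba(texts)]

Gradient-based salience methods would not apply to an SVM, but prediction-based features (metrics, slicing, counterfactual edits) should still work with a wrapper along these lines.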

[Enhancement] Documentation for install - yarn error

I am setting it up on an AWS instance:

  • AWS AMI: Deep Learning AMI (Ubuntu 18.04) Version 32.0 - ami-0dc2264cd927ca9eb
  • OS: 18.04.2-Ubuntu

yarn is not pre-installed, and Ubuntu's command-not-found hint suggests installing it via sudo apt install cmdtest.

However, doing that leads to the following error:

$ yarn && yarn build

ERROR: There are no scenarios; must have at least one

This is because Ubuntu resolves yarn to the unrelated binary shipped by cmdtest rather than the JavaScript package manager. This is similar to the issue raised here.

I had to remove cmdtest using apt autoremove (this also removes its dependencies):
$ sudo apt autoremove cmdtest

and then followed the instructions here to reinstall yarn:
https://classic.yarnpkg.com/en/docs/install#debian-stable

$ curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -

$ echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list

$ sudo apt update && sudo apt install yarn

Submitting this as an issue in case someone else gets stuck on this error.
