
pszemraj / textsum

CLI & Python API to easily summarize text-based files with transformers

License: Apache License 2.0

Python 100.00%
batch-processing inference inference-api pipeline summarization summary text text-to-text-transformer transformer transformers

textsum's People

Contributors

pszemraj

textsum's Issues

textsum web UI error (+ fix)

Just installed with pip install textsum and then pip install textsum[ui] for the web UI.

When running textsum-ui, it errors out with:

    from .pytorch import *  # type: ignore[misc]
  File "/home/user/apps/anaconda3/envs/autogptq/lib/python3.10/site-packages/doctr/models/recognition/predictor/pytorch.py", line 14, in <module>
    from ._utils import remap_preds, split_crops
  File "/home/user/apps/anaconda3/envs/autogptq/lib/python3.10/site-packages/doctr/models/recognition/predictor/_utils.py", line 10, in <module>
    from ..utils import merge_multi_strings
  File "/home/user/apps/anaconda3/envs/autogptq/lib/python3.10/site-packages/doctr/models/recognition/utils.py", line 8, in <module>
    from rapidfuzz.string_metric import levenshtein
ModuleNotFoundError: No module named 'rapidfuzz.string_metric'

Some googling led to this: mindee/doctr#1186 — apparently newer rapidfuzz releases removed rapidfuzz.string_metric, which breaks doctr's import.

Ran the workaround mentioned there, and textsum-ui seems to work now:

pip install rapidfuzz==2.15.1
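
A quick sanity check after pinning (a minimal sketch, not part of textsum) is to confirm that the installed rapidfuzz still exposes the legacy module that doctr imports:

# Hedged sanity check: newer rapidfuzz releases removed rapidfuzz.string_metric,
# which is exactly what doctr tries to import in the traceback above.
import importlib.util

if importlib.util.find_spec("rapidfuzz.string_metric") is None:
    raise RuntimeError(
        "rapidfuzz.string_metric is missing; pin rapidfuzz==2.15.1 until doctr "
        "supports newer rapidfuzz releases"
    )
print("rapidfuzz.string_metric found; textsum-ui should be able to import doctr")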

Summarization results look weird

Dear Author,

I tried to use the pszemraj/led-large-book-summary model for summarization. I was testing it on a wiki page about linear regression, but the output is nothing close to the input, so I'm wondering whether this use case is simply not expected, whether there are plans to improve the model, or whether I'm just not using it the right way.

Please see below for code and env.

Thanks,
DJ

Code:

from textsum.summarize import Summarizer

model_name = "pszemraj/led-large-book-summary"
summarizer = Summarizer(
    model_name_or_path=model_name,  # you can use any Seq2Seq model on the Hub
    token_batch_length=4096,  # tokens to batch summarize at a time, up to 16384
)
lin_reg_txt = """In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression.[1] This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.[2]

In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Such models are called linear models.[3] Most commonly, the conditional mean of the response given the values of the explanatory variables (or predictors) is assumed to be an affine function of those values; less commonly, the conditional median or some other quantile is used. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of the response given the values of the predictors, rather than on the joint probability distribution of all of these variables, which is the domain of multivariate analysis.

Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications.[4] This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine.

Linear regression has many practical uses. Most applications fall into one of the following two broad categories:

If the goal is error reduction in prediction or forecasting, linear regression can be used to fit a predictive model to an observed data set of values of the response and explanatory variables. After developing such a model, if additional values of the explanatory variables are collected without an accompanying response value, the fitted model can be used to make a prediction of the response.
If the goal is to explain variation in the response variable that can be attributed to variation in the explanatory variables, linear regression analysis can be applied to quantify the strength of the relationship between the response and the explanatory variables, and in particular to determine whether some explanatory variables may have no linear relationship with the response at all, or to identify which subsets of explanatory variables may contain redundant information about the response.
Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares cost function as in ridge regression (L2-norm penalty) and lasso (L1-norm penalty). Use of the Mean Squared Error(MSE) as the cost on a dataset that has many large outliers, can result in a model that fits the outliers more than the true data due to the higher importance assigned by MSE to large errors. So a cost functions that are robust to outliers should be used if the dataset has many large outliers. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous."""
out_str = summarizer.summarize_string(lin_reg_txt)
print(f"summary: {out_str}")
Generating Summaries: 100%|██████████████████████| 1/1 [09:59<00:00, 599.52s/it]
summary: In. At the end of this chapter is a good way to sum up these types of fun times list of fun. In statistics, by using a variety of ways to make fun of pop quizzes are another kind of fun we like to have. The first kind of "fun" in terms of making fun of late-term wackiness goes into the second kind of use of oohs and wowsers. The third type of "favorites" we see in this review is the sort of non-joke wizzooka variety of wackos who want to be our friends but can't be because we're not giving any away. The fourth kind of ferry wacko is the kind that brings people back to their homes once they've been made or gone. The fifth kind of jayhawk is the shape of the bird, the size of the hole in the moon, etc. And the last one? Well, it's really hard to say since we only ever see the good kind. We'll leave it for you guys to enjoy.
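
One way to narrow down whether this comes from textsum's chunked generation or from the model itself is to run the same text through a plain transformers summarization pipeline (a minimal sketch; the generation settings below are illustrative assumptions, not textsum's defaults):

# Sketch: compare against a raw transformers pipeline with the same model.
import torch
from transformers import pipeline

pipe = pipeline(
    "summarization",
    model="pszemraj/led-large-book-summary",
    device=0 if torch.cuda.is_available() else -1,
)
baseline = pipe(
    lin_reg_txt,  # the same Wikipedia excerpt defined above
    max_length=256,
    min_length=32,
    no_repeat_ngram_size=3,
    truncation=True,
)
print(baseline[0]["summary_text"])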

pip freeze:

absl-py==1.4.0
accelerate==0.22.0
anyio==4.0.0
appnope @ file:///Users/ec2-user/ci_py311/appnope_1678317440516/work
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens @ file:///opt/conda/conda-bld/asttokens_1646925590279/work
astunparse==1.6.3
async-lru==2.0.4
attrs==23.1.0
Babel==2.12.1
backcall @ file:///home/ktietz/src/ci/backcall_1611930011877/work
beautifulsoup4==4.12.2
bleach==6.0.0
cachetools==5.3.1
certifi==2023.7.22
cffi==1.15.1
charset-normalizer==3.2.0
chex==0.1.82
clean-text==0.6.0
click==8.1.7
comm @ file:///Users/ec2-user/ci_py311/comm_1678317779525/work
debugpy @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_30hp2nowkm/croot/debugpy_1690905056188/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
defusedxml==0.7.1
dm-tree==0.1.8
emoji==1.7.0
etils==1.4.1
executing @ file:///opt/conda/conda-bld/executing_1646925071911/work
fastjsonschema==2.18.0
filelock==3.12.3
fire==0.5.0
flatbuffers==23.5.26
flax==0.7.0
fqdn==1.5.1
fsspec==2023.6.0
ftfy==6.1.1
gast==0.4.0
google-auth==2.22.0
google-auth-oauthlib==1.0.0
google-pasta==0.2.0
grpcio==1.57.0
h5py==3.9.0
huggingface-hub==0.16.4
idna==3.4
importlib-resources==6.0.1
ipykernel @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_03247w06su/croot/ipykernel_1691121639959/work
ipython @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_41_yzn938_/croot/ipython_1691532095563/work
isoduration==20.11.0
jax==0.4.13
jaxlib==0.4.13
jedi @ file:///Users/ec2-user/ci_py311_2/jedi_1679336327335/work
Jinja2==3.1.2
joblib==1.3.2
json5==0.9.14
jsonpointer==2.4
jsonschema==4.19.0
jsonschema-specifications==2023.7.1
jupyter-events==0.7.0
jupyter-lsp==2.2.0
jupyter_client @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_5e4bbpqn9e/croot/jupyter_client_1680171866753/work
jupyter_core @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_b82fz_h369/croot/jupyter_core_1679906581737/work
jupyter_server==2.7.3
jupyter_server_terminals==0.4.4
jupyterlab==4.0.5
jupyterlab-pygments==0.2.2
jupyterlab_server==2.24.0
keras==2.12.0
keras-core==0.1.5
keras-nlp==0.6.1
libclang==16.0.6
Markdown==3.4.4
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib-inline @ file:///Users/ec2-user/ci_py311/matplotlib-inline_1678317544666/work
mdurl==0.1.2
mistune==3.0.1
ml-dtypes==0.2.0
mpmath==1.3.0
msgpack==1.0.5
namex==0.0.7
natsort==8.4.0
nbclient==0.8.0
nbconvert==7.8.0
nbformat==5.9.2
nest-asyncio @ file:///Users/ec2-user/ci_py311/nest-asyncio_1678316288195/work
networkx==3.1
nltk==3.8.1
notebook==7.0.3
notebook_shim==0.2.3
numpy==1.25.2
oauthlib==3.2.2
onnx==1.14.1
onnxconverter-common==1.13.0
opt-einsum==3.3.0
optax==0.1.4
orbax-checkpoint==0.3.5
overrides==7.4.0
packaging @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_2algm5p9lp/croot/packaging_1693575178038/work
pandocfilters==1.5.0
parso @ file:///opt/conda/conda-bld/parso_1641458642106/work
pexpect @ file:///tmp/build/80754af9/pexpect_1605563209008/work
pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work
platformdirs @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_640usff2p5/croot/platformdirs_1692205656061/work
prometheus-client==0.17.1
prompt-toolkit @ file:///Users/ec2-user/ci_py311/prompt-toolkit_1678317707969/work
protobuf==3.20.3
psutil @ file:///Users/ec2-user/ci_py311_2/psutil_1679337242143/work
ptyprocess @ file:///tmp/build/80754af9/ptyprocess_1609355006118/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///opt/conda/conda-bld/pure_eval_1646925070566/work
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycparser==2.21
Pygments @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_cflayrfqsi/croot/pygments_1684279981084/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
python-json-logger==2.0.7
PyYAML==6.0.1
pyzmq @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_e6wiufh44w/croot/pyzmq_1686601381524/work
referencing==0.30.2
regex==2023.8.8
requests==2.31.0
requests-oauthlib==1.3.1
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.5.2
rpds-py==0.10.0
rsa==4.9
safetensors==0.3.3
scipy==1.11.2
Send2Trash==1.8.2
six @ file:///tmp/build/80754af9/six_1644875935023/work
sniffio==1.3.0
soupsieve==2.4.1
stack-data @ file:///opt/conda/conda-bld/stack_data_1646927590127/work
sympy==1.12
tensorboard==2.12.3
tensorboard-data-server==0.7.1
tensorflow==2.12.1
tensorflow-cpu==2.12.1
tensorflow-estimator==2.12.0
tensorflow-hub==0.14.0
tensorflow-io-gcs-filesystem==0.33.0
tensorflow-text==2.12.1
tensorstore==0.1.41
termcolor==2.3.0
terminado==0.17.1
textsum==0.2.0
tf2onnx==1.15.1
tinycss2==1.2.1
tokenizers==0.13.3
toolz==0.12.0
torch==2.0.1
tornado @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_aeotsmw12l/croot/tornado_1690848274212/work
tqdm==4.66.1
traitlets @ file:///Users/ec2-user/ci_py311/traitlets_1678316097905/work
transformers==4.32.1
typing_extensions==4.5.0
uri-template==1.3.0
urllib3==1.26.16
wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work
webcolors==1.13
webencodings==0.5.1
websocket-client==1.6.2
Werkzeug==2.3.7
wrapt==1.14.1
zipp==3.16.2

Python API - use_cuda option

Hello! In Python, when I am initializing a Summarizer with one of your pretrained models:

summarizer = Summarizer(model_name_or_path="pszemraj/long-t5-tglobal-base-16384-booksum-V12", use_cuda=True)

I am seeing the following output message:

INFO Loaded model pszemraj/long-t5-tglobal-base-16384-booksum-V12 to cpu

However, I have an NVIDIA GPU available that doesn't seem to be utilized. As far as I know it is available for use, because I am running the following code to check it:
import GPUtil
print(GPUtil.getAvailable())
print(GPUtil.showUtilization())

and the result I am getting is:

[0]
| ID | GPU | MEM |
------------------
|  0 |  5% | 11% |
None

which, based on the GPUtil documentation, means that the GPU with ID 0 is available for use.

Is there a way to force the Summarizer to run on the GPU instead of the CPU, which is what currently happens by default? The use_cuda option doesn't seem to do the trick.
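
For what it's worth, a quick diagnostic (a minimal sketch; the assumption here is that textsum loads the model through PyTorch, so if torch itself cannot see the GPU then use_cuda has nothing to work with):

# Hedged diagnostic: check whether PyTorch can see the GPU at all.
# A CPU-only torch build reports is_available() == False even on a GPU machine.
import torch

print(torch.__version__)
print(torch.cuda.is_available())   # must be True for the model to load on cuda
print(torch.version.cuda)          # None indicates a CPU-only torch build
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

If is_available() returns False here, the likely fix is installing a CUDA-enabled PyTorch wheel rather than changing any textsum option.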

Thank you in advance!

Summarisation in a different language

Hi,

Thanks for sharing your work here and showing us a path to follow.

I want to summarise Turkish meeting notes. We have some background knowledge (NLD, tokenization, etc.), but we are very new to this AI era.

What would you recommend we follow? Which model is suitable for this?

Best
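
One hedged starting point, as a sketch rather than a recommendation: the Summarizer is described above as accepting any Seq2Seq model from the Hub, so it can be pointed at a multilingual checkpoint such as csebuetnlp/mT5_multilingual_XLSum (the XL-Sum dataset includes Turkish); whether its quality is adequate for meeting notes would need to be evaluated.

# Sketch: point textsum at a multilingual summarization checkpoint.
# The model name is an example from the Hub (XL-Sum covers Turkish);
# quality on Turkish meeting notes is untested here.
from textsum.summarize import Summarizer

summarizer = Summarizer(
    model_name_or_path="csebuetnlp/mT5_multilingual_XLSum",
    token_batch_length=1024,  # assumption: keep batches short for a short-context mT5 model
)
turkish_notes = "Toplantı notları ..."  # replace with the actual meeting notes
print(summarizer.summarize_string(turkish_notes))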
