Just provide a title to the model and it will generate a whole article about it (up to 1024 tokens).
Find the model on HuggingFace
```python
# Install transformers library
!pip install transformers

# Load tokenizer and model
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_name = "Seungjun/articleGeneratorV1.0"
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)
```
```python
# Get the article for a given title
from transformers import pipeline

summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="tf")
summarizer(
    "Steve Jobs",  # title
    min_length=500,
    max_length=1024,
)
```
- As of now, about 99% of the content generated by the model is not factually accurate.
- The model is very slow: it takes a great amount of time (~3 minutes) to generate a single article.
- Fine-tuned the t5-small model on a custom dataset using KerasNLP.
- Created a custom dataset (CSV file) from Wikipedia articles, with 3 columns (id, prompt, article) and about 19K rows.
- The prompt is the title of the article; during training it was given as input and the model was trained to predict the article.
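The dataset layout described above can be sketched with a tiny illustrative CSV (the row values here are taken from the sample table below; the exact training data is not published with this card):

```python
import csv
import io

# One example row in the (id, prompt, article) layout described above.
rows = [
    {
        "id": "7751246",
        "prompt": "Chesterfield Islands",
        "article": "Chesterfield Islands are a group of uninhabited coral islands...",
    },
]

# Write the rows in the same 3-column CSV shape used for fine-tuning.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "prompt", "article"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```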
| ID | Prompt | Article |
|---|---|---|
| 7751246 | Chesterfield Islands | Chesterfield Islands (îles Chesterfield in French) are a group of uninhabited coral islands located in the Coral Sea, northeast of Australia. They are a territory of .... |