Comments (5)
Ok - got it! :) We have replicated the error. The script works fine with the .add_source_document method when the filter is the original query, "base salary" - but if you replace it with "diseases", it returns an empty source, e.g., {'text_batch': [], 'metadata_batch': [], 'batches_count': 0}, since the word "diseases" is not found in the parsed text. When .prompt_with_source hits this empty source, it throws the error, as the intent is that a non-empty context passage will be included in the prompt. Two workarounds:
- In the script, you could add a check for an empty source, and if it is empty, skip the prompt, e.g., print "No evidence found to support the question." If you print out the sources output, you will see the dictionary above, with sources["text_batch"] empty.
- We will take a look at better exception handling of this case in the prompt_with_source method so that it does not break outright. If you have a specific recommendation on the behavior you would prefer, let us know.
Appreciate you raising this issue. Let us know if this works for you.
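The empty-source check described in the first workaround can be sketched with a small helper. This is a minimal sketch: the helper name is hypothetical, and the only assumption is the dictionary shape shown above.

```python
def has_source_text(sources):
    """Return True if the parsed source contains at least one text batch.

    `sources` is the dictionary shape shown in the thread, e.g.
    {'text_batch': [], 'metadata_batch': [], 'batches_count': 0}.
    """
    return bool(sources) and bool(sources.get("text_batch"))

# Empty result: the query term ("diseases") was not found in the parsed text
empty = {'text_batch': [], 'metadata_batch': [], 'batches_count': 0}

if not has_source_text(empty):
    # Skip prompt_with_source entirely rather than let it raise
    print("No evidence found to support the question.")
```

With a check like this in place, prompt_with_source is only called when there is a non-empty context passage to include in the prompt.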
from llmware.
Please try the method .add_source_document, which takes as input a file_path and a file_name, then parses the document and attaches it to the prompt. We will update the exception handling too. Please let us know if that fixes the issue.
from llmware.
Sorry, responded too quickly - let me go through the script too and see if we can replicate the error.
from llmware.
This is something I've noticed as well, and I've implemented a rudimentary check in my script for when no text is found:
# Attempt to attach the document as a source
try:
    prompter.add_source_document(file_path, local_file, query=user_query)
except Exception as e:
    print(f"Error adding source document: {e}")
    return

# Run the inference
try:
    response = prompter.prompt_with_source(prompt=user_prompt)
except Exception as e:
    print(f"Error during prompting: {e}")
    return

# Check if the response is empty or not in the expected format
if not response or 'llm_response' not in response[0] or not response[0]['llm_response'].strip():
    print("No text found, try a different query")
else:
    # Display the response
    response_display = response[0]["llm_response"]
    print(f"- Context: {local_file}\n- Prompt: {user_prompt}\n- LLM Response:\n{response_display}")

# Clear the source materials after use
prompter.clear_source_materials()
This gives you a result like so:
stdout: Starting prompt with sources using model: gpt-4-1106-preview
- Context: CBP-7692.pdf
- Prompt: What is a referendum
- LLM Response:
A referendum is a process or principle of referring an important political question, such as a proposed constitutional change, to be decided by a general vote of the entire electorate; a vote taken by referendum.
stderr:
stdout: Starting prompt with sources using model: gpt-4-1106-preview
No text found, try a different query
stderr: ERROR:root:error: to use prompt_with_source, there must be a loaded source - try '.add_sources' first
Showing the user that no text was found would be preferred.
from llmware.
Improved handling of this error has been added and merged into the main branch. Thanks for your feedback!
from llmware.