Comments (4)
Oh, this is already built: you can use the 'stream' command to enable streaming of responses from ChatGPT (or start the program with the -s parameter).
The markdown renderer can't be used on output while it's streaming, so there's no word wrapping or other markdown formatting, but it's still very usable.
Maybe streaming should be turned on by default.
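As a minimal sketch (not the project's actual implementation) of why streamed output skips the renderer: chunks are written to the terminal the moment they arrive, so there is never a complete document to word-wrap or format until the stream ends.

```python
import sys

def stream_to_terminal(chunks):
    """Write response chunks as they arrive.

    Each chunk is flushed immediately, so the user sees output right
    away -- but because the text is emitted piecemeal, a markdown
    renderer never gets a complete document to word-wrap or format.
    """
    pieces = []
    for chunk in chunks:
        sys.stdout.write(chunk)
        sys.stdout.flush()  # show each piece the moment it arrives
        pieces.append(chunk)
    sys.stdout.write("\n")
    return "".join(pieces)  # full text exists only once the stream ends

# Hypothetical chunk source standing in for the streamed API response.
stream_to_terminal(["Streaming ", "is ", "enabled."])
```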
from llm-workflow-engine.
Nice. Do you happen to know why there's such a long delay before output starts showing up? Is it because the Playwright browser has to wait for the entire response to be generated on ChatGPT before it starts outputting it to the CLI?
When you use the stream parameter, the only delay is the one from OpenAI returning the response. You can see exactly what happens under the hood by changing the following line
chatgpt = ChatGPT(headless=not install_mode, timeout=60, **extra_kwargs)
to
chatgpt = ChatGPT(headless=False, timeout=60, **extra_kwargs)
and watching what happens in the browser.
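The idea behind that one-line change can be sketched as follows; `ChatGPT` here is a hypothetical stand-in for the real backend class, just to show how the headless flag flows through:

```python
# Hypothetical stand-in for the real browser backend class, used only
# to illustrate the headless toggle described above.
class ChatGPT:
    def __init__(self, headless=True, timeout=60, **extra_kwargs):
        self.headless = headless
        self.timeout = timeout
        self.extra_kwargs = extra_kwargs

def make_backend(install_mode=False, debug=False, **extra_kwargs):
    # headless=not install_mode is the original behavior; forcing
    # headless=False (here via a hypothetical debug flag) opens a
    # visible browser window so you can watch what happens.
    headless = not install_mode and not debug
    return ChatGPT(headless=headless, timeout=60, **extra_kwargs)
```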
> The markdown renderer can't be used on output while it's streaming, so there's no word wrapping or other markdown formatting, but it's still very usable.

After the answer has been completed, I use the /chat command to reload the current chat, and then the markdown is properly formatted.
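A stdlib-only sketch of why re-rendering after completion works: once the full text is known, it can be word-wrapped (or handed to a real markdown renderer), which isn't possible mid-stream. `textwrap` stands in for the actual renderer here.

```python
import textwrap

def render_complete(text, width=72):
    """Re-flow a finished response, paragraph by paragraph.

    During streaming the text arrives in fragments and is printed raw;
    only after completion (e.g. via a /chat reload) can the whole
    response be wrapped and formatted.
    """
    paragraphs = text.split("\n\n")
    return "\n\n".join(textwrap.fill(p, width=width) for p in paragraphs)
```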