# LlamaLingo

This project is a chatbot application built around the Llama-3 model. It provides a user-friendly interface for interactive conversations with the Llama-3 chatbot, powered by a locally hosted OpenAI-compatible API.
## Features

- Engage in interactive conversations with the Llama-3 chatbot
- User-friendly interface built with Streamlit
- Locally hosted server using LM Studio for model exposure
- Real-time chat history and message display
## Requirements

- Python 3.8+
- Streamlit
- OpenAI API
- LM Studio
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/llamalingo.git
   cd llamalingo
   ```
2. Create a virtual environment:

   ```bash
   python -m venv env
   ```
3. Activate the virtual environment:

   - On Windows:

     ```bash
     .\env\Scripts\activate
     ```

   - On macOS and Linux:

     ```bash
     source env/bin/activate
     ```
4. Install the required packages:

   ```bash
   pip install -r requirements.txt
   ```
5. Install and set up LM Studio by following the installation instructions on the LM Studio website.
6. Start LM Studio and download the Llama-3 instruct model.
7. Expose the Llama-3 model as an OpenAI-compatible API by starting the server in LM Studio. Ensure it is running at `http://localhost:1234/v1`.
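With the server running, the app can talk to it through the standard OpenAI chat-completions endpoint. The sketch below shows one way to do that using only the Python standard library; the `ask` and `build_payload` helpers, the `local-model` name, and the payload shape are illustrative assumptions rather than the project's actual code (app.py may use the `openai` package instead).

```python
import json
import urllib.request

# LM Studio's local OpenAI-compatible chat endpoint (see step 7 above).
API_URL = "http://localhost:1234/v1/chat/completions"


def build_payload(history, model="local-model"):
    """Assemble an OpenAI-style chat-completion request body.

    LM Studio serves whichever model is currently loaded, so the model
    name here is a placeholder.
    """
    return {"model": model, "messages": history}


def ask(history):
    """POST the running chat history to the local server and return the reply text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(history)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint mirrors the OpenAI API, swapping in the official `openai` client later only requires changing the base URL, not the message format.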
## Usage

1. Run the Streamlit application:

   ```bash
   streamlit run app.py
   ```
2. Open your web browser and go to `http://localhost:8501`.
3. Enter your message in the input box.
4. Click the "Send" button or press Enter.
5. View the chatbot's response in the chat history.
## Contributing

While this is a personal project focused on my own learning and development, constructive feedback, suggestions, and contributions are always welcome. If you'd like to contribute, please feel free to submit a pull request.

## Contact

For any inquiries or collaborations, feel free to reach out via [email protected].