A simple text translation service from English into any target language, built with LangChain, Ollama, and Docker.
Clone the repository:

```bash
git clone https://github.com/vamsikumbuf/local-text-translation.git
cd local-text-translation
```
Create a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate
```
Install the required packages:

```bash
pip install -r requirements.txt
```
Download and set up Ollama:

```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama serve
```
Install a Llama model with Ollama:

```bash
ollama run llama3  # installs the 8B-parameter version
```
- Don't forget to keep the Ollama server running in the background.
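Before starting the application, you can sanity-check that the server is reachable. The snippet below is a minimal sketch (a hypothetical helper, not part of this repo) that queries the Ollama REST API on its default port and lists the locally installed models:

```python
# Hypothetical health check for a local Ollama server; not part of this repo.
import requests

OLLAMA_URL = "http://localhost:11434"

# GET /api/tags lists the models the server has available locally.
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Ollama is up. Installed models:", models)
```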
Start the application:

```bash
python main.py --model_name=llama3 --model_host=localhost
```
Interact with the text translation service:

- The service takes input text in English and outputs the translated text in the target language.
- Test the application in the LangChain playground.
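For reference, a translation chain like the one main.py presumably builds can be put together as shown below. This is a sketch under assumptions (the import paths, model wiring, and prompt wording are not taken from the repo), not the repo's actual implementation:

```python
# Minimal sketch of an English-to-target-language translation chain.
# Assumes the langchain-community package is installed; main.py may differ.
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Point the client at the Ollama server started above.
llm = ChatOllama(model="llama3", base_url="http://localhost:11434")

prompt = ChatPromptTemplate.from_messages([
    ("system", "Translate the user's text from English into {language}."),
    ("human", "{text}"),
])

# LCEL pipeline: prompt -> model -> plain-string output.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"language": "German", "text": "Good morning!"}))
```

If main.py serves the chain with LangServe, the playground is typically reachable under a /playground/ path on port 8000 (the port published by the client container below); the exact route depends on the app.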
Create a common Docker network for the Ollama server and the client:

```bash
docker network create llm_network
```
Run the Ollama Docker image:

```bash
# CPU only
docker run -d -v ollama:/root/.ollama -p 11434:11434 --hostname ollama-container --network llm_network --name ollama ollama/ollama

# With GPUs
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --hostname ollama-container --network llm_network --name ollama ollama/ollama
```
Build the client image:

```bash
# From the repo's directory
docker build -t ubuntu .
```
Run the client container:

```bash
docker run -p 8000:8000 --network llm_network -it ubuntu bash

# Within the container
python main.py --model_name=llama3 --model_host=ollama-container
```
- The Ollama server now runs in its own container, and the client container talks to it over the shared network for text translation tasks.
- If you already have models downloaded on your local machine and want to use the same models inside the Docker container, mount the model directory as the volume when you start the Ollama server:

```bash
docker run -d -v model_dir_path:/root/.ollama -p 11434:11434 --hostname ollama-container --network llm_network --name ollama ollama/ollama
```
- By default, Ollama models are downloaded to the `/usr/share/ollama/.ollama` directory.
- When you run the Python script from the client container, don't forget to use the same hostname you gave the Ollama server when you started it (`ollama-container` in the commands above).
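To illustrate the hostname wiring, here is a hypothetical sketch of how main.py might turn `--model_host` into the Ollama base URL; the real script may do this differently:

```python
# Hypothetical sketch: wiring --model_host into the Ollama base URL.
import argparse

from langchain_community.chat_models import ChatOllama

parser = argparse.ArgumentParser()
parser.add_argument("--model_name", default="llama3")
parser.add_argument("--model_host", default="localhost",
                    help="Hostname of the Ollama server, e.g. "
                         "ollama-container inside the llm_network")
args = parser.parse_args()

# The hostname must resolve on the Docker network, so it has to match the
# --hostname given to the Ollama server container.
llm = ChatOllama(model=args.model_name,
                 base_url=f"http://{args.model_host}:11434")
```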