
Comments (30)

synesthesiam commented on June 23, 2024 (+11)

Piper is definitely still being maintained! As @jmtatsch said, I've just been busy with other stuff. One thing that's held up development is needing to replace the espeak-ng library due to its license.

I think this niche is fairly devoid of development because very few projects leave the demo stage before the authors are on to the next model/paper. I want Piper to be more of a "boring" technology in the sense that it does a job well without always chasing state-of-the-art.

walking-octopus commented on June 23, 2024 (+6)

Bark is rather unstable, slow, and overkill for an assistant. Piper, however, seems fine, and it has Python support.

I also wonder whether the server or the client should be responsible for TTS... Piper is written in C++, so a WASM port is possible, if desired.

tjbck commented on June 23, 2024 (+5)

Hi, thanks for the suggestion. Sounds like an interesting idea; I'll see what I can do about it, but only after I have every previous feature request out of the way. In the meantime, if you could implement a working prototype using Python and provide us with implementation examples, that would be sublime. Thanks.

explorigin commented on June 23, 2024 (+5)

Piper will likely support wasm compilation soon which would allow browser-side generation: rhasspy/piper#352

tjbck commented on June 23, 2024 (+4)

Let's get the ball rolling on this one! Stay tuned!

lee-b commented on June 23, 2024 (+4)

FYI, I made this work with a local openedai-speech (linked above) on my branch, here:

https://github.com/lee-b/open-webui

It currently requires an extra environment variable and uses a custom Dockerfile and runner script to run the thing, but it works. I'll integrate this better if the core team wants to advise on their preferred way to solve the issues that I hacked around with these changes.

UXVirtual commented on June 23, 2024 (+4)

In case this helps anyone running the open-webui Docker container alongside Ollama on the same PC with openedai-speech, you can use the following configuration:

  • API Base URL: http://host.docker.internal:8000/v1
  • API Key: sk-111111111

host.docker.internal is required because openedai-speech is exposed via localhost on your PC, which open-webui cannot normally reach from within its own container.

Note that openedai-speech doesn't need an API key, but setting a dummy one is required because open-webui validates this field.
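For anyone who wants to sanity-check this setup before wiring it into the UI, here is a minimal sketch (not from the thread) that calls the OpenAI-compatible speech route with the values above; the model and voice fields follow OpenAI's request shape, which openedai-speech mirrors:

# Hypothetical connectivity check for the configuration above.
import requests

BASE_URL = "http://host.docker.internal:8000/v1"  # API Base URL from above
API_KEY = "sk-111111111"                          # dummy key; openedai-speech ignores it

response = requests.post(
    f"{BASE_URL}/audio/speech",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "tts-1", "input": "Testing openedai-speech.", "voice": "alloy"},
    timeout=60,
)
response.raise_for_status()

# The synthesized audio comes back in the response body.
with open("speech_test.mp3", "wb") as f:
    f.write(response.content)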

tjbck commented on June 23, 2024 (+3)

I'll be looking into this in the near future! In the meantime, TTS support has already been implemented with the legacy Web Speech API. Thanks!

tjbck commented on June 23, 2024 (+3)

OpenAI TTS support has been added with #656! As for local TTS support, Piper seems promising, so let's wait until they merge the two blocking PRs.

justinh-rahb commented on June 23, 2024 (+3)

I very much agree with that part of the Unix philosophy: do one thing and do it well. Thanks for the status update @synesthesiam 🙏

yeungxh commented on June 23, 2024 (+3)

I deployed a TTS/STT service on my own server with a REST API. How can I integrate my own API into this web UI?

oliverbob commented on June 23, 2024 (+2)

Since we already have the speaker button there, I think we can integrate Piper, since it's lightweight and fast.

The only requirement is that the server has Piper installed via:

pip install piper-tts

Directory structure:

/flask-piper-app
├── app.py
├── static
│   └── welcome.wav
└── templates
    └── index.html

Python:

from flask import Flask, render_template, request, send_file
import subprocess  # used to invoke the piper CLI safely

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/play', methods=['POST'])
def play_text():
    if 'text' in request.form:
        text = request.form['text']

        # Generate the audio file
        generate_audio(text)

        # Return the generated audio file to the client
        return send_file('static/welcome.wav', mimetype='audio/wav', as_attachment=False)

    return render_template('index.html')

def generate_audio(text):
    # Pipe the text to piper on stdin instead of interpolating it into a
    # shell command (an os.system/f-string approach would be vulnerable to
    # shell injection if the text contains quotes or metacharacters).
    subprocess.run(
        ['piper', '--model', 'en_US-lessac-medium.onnx',
         '--output_file', 'static/welcome.wav'],
        input=text.encode('utf-8'),
        check=True,
    )

if __name__ == '__main__':
    app.run(debug=True, port=5000)

Here's the HTML (which you can convert to Svelte):

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Flask Piper App</title>
</head>
<body>
    <h4>Welcome to Piper App</h4>
    <p>Click "Play Audio" to hear the synthesized speech.</p>

    <!-- Display the text inside a div for user reference -->
    <div id="displayText">
        This is the text that will be read aloud. You can customize this paragraph.
    </div>

    <form id="textForm" method="post" action="/play">
        <input type="submit" value="Play Audio">
    </form>

    <hr>

    <!-- Audio player to play the generated audio -->
    <audio id="audioPlayer">
        <source id="audioSource" src="" type="audio/wav">
        Your browser does not support the audio element.
    </audio>

    <script>
        // Update the audio source when the form is submitted
        document.getElementById('textForm').addEventListener('submit', function(event) {
            event.preventDefault();
            var text = document.getElementById('displayText').innerText;

            // Make an asynchronous POST request to the /play route
            fetch('/play', {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/x-www-form-urlencoded',
                },
                body: 'text=' + encodeURIComponent(text),
            })
            .then(response => response.blob())
            .then(blob => {
                // Create a Blob URL for the audio source
                var blobUrl = URL.createObjectURL(blob);
                document.getElementById('audioSource').src = blobUrl;

                // Load and play the audio
                document.getElementById('audioPlayer').load();
                document.getElementById('audioPlayer').play();
            })
            .catch(error => console.error('Error:', error));
        });
    </script>
</body>
</html>

This way, our model responses will not sound like Stephen Hawking.

diblasio commented on June 23, 2024 (+2)

May I also suggest that this feature include an option to use OpenAI TTS as well, considering there's already a place to input your API key in the UI? Their model sounds more natural for those of us attempting to use AI for language learning.
https://platform.openai.com/docs/guides/text-to-speech

jmtatsch commented on June 23, 2024 (+2)

Piper works well on Mac too, if you build from source and make a tiny change to the CMakeLists 🙈

I am pretty sure @synesthesiam will get around to merging those pull requests; Piper seems to be his baby, after all. He is just incredibly busy with all the voice assistant integration for Home Assistant.

I played around with bark.cpp and coqui.ai TTS, and both are far too slow to be useful.

jmtatsch commented on June 23, 2024 (+2)

Can the existing base URL for OpenAI TTS be made configurable?
I found this adapter,
https://github.com/matatonic/openedai-speech
which serves an OpenAI TTS API with either Piper or Coqui TTS as the backend.

jmtatsch commented on June 23, 2024 (+2)

I think it would be best if open webui just let us set a different TTS base URL via an ENV variable like OPENAI_TTS_BASE_URL. That way, users can plug in whatever OpenAI-TTS-compatible server they like, and there are no licensing woes. And it is very little work, as OpenAI TTS is already implemented and works beautifully 😍
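A rough sketch of what this proposal might look like (OPENAI_TTS_BASE_URL is the name suggested above, not an existing setting):

import os
import requests

# Hypothetical: fall back to OpenAI's endpoint unless OPENAI_TTS_BASE_URL is set.
BASE_URL = os.environ.get("OPENAI_TTS_BASE_URL", "https://api.openai.com/v1")

def synthesize(text: str, api_key: str) -> bytes:
    """Call whichever OpenAI-compatible /audio/speech endpoint is configured."""
    response = requests.post(
        f"{BASE_URL.rstrip('/')}/audio/speech",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": "tts-1", "input": text, "voice": "alloy"},
        timeout=60,
    )
    response.raise_for_status()
    return response.content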

jmtatsch commented on June 23, 2024 (+2)

@tjbck, would you be open to the approach taken in https://github.com/lee-b/open-webui, should someone create a pull request?

oliverbob commented on June 23, 2024 (+1)

Piper will likely support wasm compilation soon which would allow browser-side generation: rhasspy/piper#352

I actually made a pull request that integrated Piper, but I deleted it because I recall Timothy saying it is not well supported on his MacBook, or on Mac in general.

If you want, I can build a Piper integration again, but it would require removing the browser speech synthesis default, unless someone is kind enough to add a new "Piper button": the new speaker icon would have to distinguish between the browser default and Piper (I'm not very good at Svelte, but I know quite a lot about JavaScript). The speech would not be generated in the browser (not WASM yet): the prompt response would be sent to the server, and the audio Piper generates there would be served back to the browser.

The only downside is that for longer prompts the rendered audio file gets larger, at least in the most simplified implementation (without a more complex compression scheme).

Let me know, and I can open a new pull request if this would still be helpful. Alternatively, we could create a piper branch in this repo for research purposes, so other developers can review and build on the work. Because, if I'm not mistaken, OpenAI's Whisper server is not free of charge; it's fast, but not free.

Piper is also better than Bark: you need a huge GPU to run Bark, and on smaller GPUs it can take hours before Bark talks back to the user's prompt. With Piper, a message as long as this comment, at medium voice quality, generates roughly 1-5 MB of audio. Piper should be installed where the UI is running; the server then returns the voice in about 10-30 seconds, sometimes longer, and very long texts might take a minute. On a GPU, Piper is lightning fast; the only remaining problem is how to compress the audio after Piper generates the file (see the sketch after this comment). I'm sure plenty of developers here could figure that out on top of the simplest example, because longer texts reach several MB. The voice model (--model WHATEVER-medium.onnx) is also quite large (up to 70 MB), so it shouldn't be included in the pull request, but it can be downloaded after starting the Piper Flask server, or via a bash script (which could also be included in the Ollama WebUI run script).
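As an illustration of that compression step (not from the thread; assumes ffmpeg is installed on the server and reuses the file names from the Flask example earlier in this thread):

import subprocess

# Hypothetical post-processing step: transcode Piper's WAV output to MP3
# with ffmpeg to shrink the multi-MB file before serving it to the browser.
subprocess.run(
    ['ffmpeg', '-y', '-i', 'static/welcome.wav',
     '-codec:a', 'libmp3lame', '-b:a', '64k', 'static/welcome.mp3'],
    check=True,
)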

tjbck commented on June 23, 2024

I'll actively take a look after #216, but Piper doesn't seem to support macOS. If any of you know workarounds for this, please let us know. Thanks.

Encountering this issue: rhasspy/piper#203

tjbck commented on June 23, 2024

Backend Piper integration blockers:

oliverbob commented on June 23, 2024

Thanks Timothy.

tjbck commented on June 23, 2024

The Piper library seems to be unmaintained. Looking for alternatives at the moment; open to suggestions!

justinh-rahb commented on June 23, 2024

I agree; of the big three projects for local TTS, Piper is probably the best hope we've got. I really don't understand how this particular niche is so devoid of development; it's one of the most asked-for features in any local AI project.

lee-b commented on June 23, 2024

Can the existing base URL for OpenAI TTS be made configurable? I found this adapter, https://github.com/matatonic/openedai-speech, which serves an OpenAI TTS API with either Piper or Coqui TTS as the backend.

This looks very promising. The API seems to work well, and it's a similar Docker-based setup to Ollama. I agree: just allowing the OPENAI_BASE_URL to be tweaked for audio would go a long way toward fully local whisper+xtts-v2 with this.

fraschm1998 commented on June 23, 2024

FYI, I made this work with a local openedai-speech (linked above) on my branch, here:

https://github.com/lee-b/open-webui

It currently requires an extra environment variable and uses a custom Dockerfile and runner script to run the thing, but it works. I'll integrate this better if the core team wants to advise on their preferred way to solve the issues that I hacked around with these changes.

Any way to fix this?

Got OPENAI_AUDIO_BASE_URL: http://192.168.10.14:8002/v1
open-webui-two  | ERROR:apps.openai.main:404 Client Error: Not Found for url: http://192.168.10.14:8002/audio/speech
open-webui-two  | Traceback (most recent call last):
open-webui-two  |   File "/app/backend/apps/openai/main.py", line 154, in speech
open-webui-two  |     r.raise_for_status()
open-webui-two  |   File "/usr/local/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
open-webui-two  |     raise HTTPError(http_error_msg, response=self)
open-webui-two  | requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://192.168.10.14:8002/audio/speech
open-webui-two  | INFO:     192.168.10.14:58846 - "POST /openai/api/audio/speech HTTP/1.1" 500 Internal Server Error
open-webui-two  | INFO:     192.168.10.14:53534 - "GET /_app/immutable/nodes/11.76457ae4.js HTTP/1.1" 304 Not Modified

Server is running:

docker logs openedai-speech-server-1 --follow
INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8002 (Press CTRL+C to quit)
 > Using model: xtts
INFO:     172.24.0.1:38734 - "POST /v1/audio/speech HTTP/1.1" 200 OK
INFO:     172.24.0.1:41190 - "POST /audio/speech HTTP/1.1" 404 Not Found
[... the same "POST /audio/speech" 404 line repeats 17 more times ...]

fraschm1998 commented on June 23, 2024

FYI, I made this work with a local openedai-speech (linked above) on my branch, here:

https://github.com/lee-b/open-webui

It currently requires an extra environment variable and uses a custom Dockerfile and runner script to run the thing, but it works. I'll integrate this better if the core team wants to advise on their preferred way to solve the issues that I hacked around with these changes.

Fixed with the following, kudos to ChatGPT:

from urllib.parse import urljoin

if not base_url.endswith("/"):
    base_url += "/"

speech_url = urljoin(base_url, "audio/speech")

The Python urljoin function is used here to combine base_url with the audio/speech path. urljoin resolves its second argument the way a browser resolves a relative link, so trailing slashes matter: if the base URL does not end with a slash, its last path segment (here /v1) is replaced rather than extended, and if the second part begins with a slash, the base path is discarded entirely. Ensuring the trailing slash and joining a relative path keeps the /v1 prefix intact, which is exactly what the failing requests in the logs above were missing.
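A quick demonstration of that behavior with the standard library, using the base URL from the logs above:

from urllib.parse import urljoin

# Without a trailing slash, the last path segment ("/v1") is replaced:
urljoin("http://192.168.10.14:8002/v1", "audio/speech")
# -> 'http://192.168.10.14:8002/audio/speech'   (the 404 in the logs above)

# With a trailing slash, the relative path is appended as intended:
urljoin("http://192.168.10.14:8002/v1/", "audio/speech")
# -> 'http://192.168.10.14:8002/v1/audio/speech'

# A leading slash on the second argument discards the base path entirely:
urljoin("http://192.168.10.14:8002/v1/", "/audio/speech")
# -> 'http://192.168.10.14:8002/audio/speech'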

hxypqr commented on June 23, 2024

Is there a simple way to change the TTS model to my own now? I can't stand the voice of this robot lol.

jmtatsch commented on June 23, 2024

Since cbd18ec you should be able to set your own OpenAI-compatible base URL.

jmtatsch commented on June 23, 2024

Works wonderfully now.
https://github.com/matatonic/openedai-speech wraps piper, xtts_v2, and parler-tts, by the way, so there is a good choice of qualities and latencies.

justinh-rahb commented on June 23, 2024

I'll leave it up to @oliverbob to decide whether to call this issue fixed; otherwise I will close it as such in a few days if we don't hear from them.
