distyapps / visioncrafter

Craft your visions

Batchfile 0.20% Python 99.55% Shell 0.24%
ai animation artificial-intelligence dreambooth gif-animation gif-creator lora mp4 stable-diffusion text2music

visioncrafter's People

Contributors

distyapps


visioncrafter's Issues

[Bug]: No module named 'PySimpleGUI'

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Bug Description

see console log

Step-by-step instructions to reproduce the issue.

Installation error

Expected Behavior

Retried the installation multiple times with all necessary requirements; I still receive this error.

Current Behavior

see console log

Version or Commit where the problem happens

install

What platforms do you use Visioncrafter?

Windows

What Python version are you running on?

Python 3.11

What GPU are you running Visioncrafter on?

RTX 3060

How much GPU VRAM are you running Visioncrafter on?

16 GB

Console logs

Building wheel for diffq (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for diffq (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [27 lines of output]
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build\lib.win-amd64-cpython-311
      creating build\lib.win-amd64-cpython-311\diffq
      copying diffq\base.py -> build\lib.win-amd64-cpython-311\diffq
      copying diffq\diffq.py -> build\lib.win-amd64-cpython-311\diffq
      copying diffq\lsq.py -> build\lib.win-amd64-cpython-311\diffq
      copying diffq\torch_pack.py -> build\lib.win-amd64-cpython-311\diffq
      copying diffq\ts_export.py -> build\lib.win-amd64-cpython-311\diffq
      copying diffq\uniform.py -> build\lib.win-amd64-cpython-311\diffq
      copying diffq\utils.py -> build\lib.win-amd64-cpython-311\diffq
      copying diffq\__init__.py -> build\lib.win-amd64-cpython-311\diffq
      running egg_info
      writing diffq.egg-info\PKG-INFO
      writing dependency_links to diffq.egg-info\dependency_links.txt
      writing requirements to diffq.egg-info\requires.txt
      writing top-level names to diffq.egg-info\top_level.txt
      reading manifest file 'diffq.egg-info\SOURCES.txt'
      reading manifest template 'MANIFEST.in'
      warning: no previously-included files found matching 'examples\cifar\outputs\**'
      adding license file 'LICENSE'
      writing manifest file 'diffq.egg-info\SOURCES.txt'
      running build_ext
      building 'diffq.bitpack' extension
      error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for diffq
Failed to build diffq
ERROR: Could not build wheels for diffq, which is required to install pyproject.toml-based projects
Downloading stable-diffusion-v1-5
Cloning into 'repos\animatediff\models\StableDiffusion\stable-diffusion-v1-5'...
remote: Enumerating objects: 194, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 194 (delta 1), reused 4 (delta 0), pack-reused 187
Receiving objects: 100% (194/194), 540.94 KiB | 1.68 MiB/s, done.
Resolving deltas: 100% (67/67), done.
Filtering content: 100% (4/4), 2.55 GiB | 16.48 MiB/s, done.
Downloading Motion Modules
Cloning into 'repos\animatediff\models\Motion_Module'...
remote: Enumerating objects: 11, done.
remote: Total 11 (delta 0), reused 0 (delta 0), pack-reused 11
Unpacking objects: 100% (11/11), 1.34 KiB | 65.00 KiB/s, done.
Filtering content: 100% (2/2), 3.11 GiB | 15.94 MiB/s, done.
Do you want to download toonyou model? (y/n):n
Skipping toonyou model download
Launching VisionCrafter
Traceback (most recent call last):
  File "C:\Users\Konzr\seait\VisionCrafter\main.py", line 1, in <module>
    import PySimpleGUI as sg
ModuleNotFoundError: No module named 'PySimpleGUI'
Press any key to continue . . .

Additional information

No response
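
One likely reading of this log (an interpretation, not confirmed by the maintainer): pip found no prebuilt diffq wheel for Python 3.11, tried to compile it from source, failed for lack of MSVC Build Tools, and aborted the requirements install before PySimpleGUI was ever installed, so main.py crashed on its first import. A minimal pre-flight check along these lines could surface the mismatch earlier (a sketch with a hypothetical helper, not part of VisionCrafter):

import sys

# Hypothetical pre-flight check: logs elsewhere in this tracker show the pinned
# wheels (torch 2.0.1+cu118, diffq 0.2.4, ...) resolving as cp310 builds, so a
# different interpreter risks a source build that needs MSVC Build Tools.
if sys.version_info[:2] != (3, 10):
    raise SystemExit(
        f"Python {sys.version_info[0]}.{sys.version_info[1]} detected; "
        "the pinned requirements ship prebuilt cp310 wheels - use Python 3.10."
    )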

[Feature Request] Stop/Interrupt Button

Hey there.
Is it possible to include a "Stop" or "Interrupt" button to stop the current generation?

For example, if you find an error in the prompts just after starting a batch process, currently I can only kill the whole program and lose the current prompt settings.
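
A rough sketch of how such a button could be wired with PySimpleGUI (which the logs show VisionCrafter uses); the layout and element keys here are made up for illustration:

import PySimpleGUI as sg

# Sketch: poll the window non-blockingly between batch items so a Stop button
# can interrupt the batch without killing the program or losing settings.
layout = [[sg.Text("Generating...")], [sg.Button("Stop", key="-STOP-")]]
window = sg.Window("VisionCrafter batch", layout, finalize=True)

for prompt in ["prompt one", "prompt two", "prompt three"]:
    event, _ = window.read(timeout=0)      # returns immediately
    if event in (sg.WIN_CLOSED, "-STOP-"):
        print("Batch interrupted; prompt settings stay in memory.")
        break
    print(f"Generating {prompt!r} ...")    # the expensive generation call goes here
window.close()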

[Feature Request or Bug] - Model and Lora Recursive Folder scan/Rescan

Hello there,

I'm currently facing a situation that I'm unsure whether it's a bug or not within the current scope.

Current Situation:
The Model and Lora Changer currently only scans the main folder but ignores subfolders. Since I have a large number of Loras and Models, I've sorted them into a folder structure (e.g., Anime, Realistic, Landscape, etc.). However, whenever there's a change, I'm required to select the folder again (Set Models Folder/Set Lora Folder).

Proposed Improvement:
To make the process more efficient, I suggest implementing a recursive scan that can go through subfolders. Additionally, it would be helpful to have a separate window for a better overview of the files. Moreover, adding a "Rescan" or "Update" button next to the existing "Set" buttons would enable us to rescan the current Model Directory easily.

By making these enhancements, the user experience would be greatly improved, and managing the folders would become much more convenient.
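
A recursive scan could be as simple as the following sketch (the folder layout and file extensions are assumptions, not VisionCrafter's actual code):

from pathlib import Path

# Sketch: collect model/LoRA files from the configured root and all subfolders.
def scan_models(root: str, exts=(".safetensors", ".ckpt", ".pt")) -> list[str]:
    return sorted(
        str(p.relative_to(root))
        for p in Path(root).rglob("*")
        if p.suffix.lower() in exts
    )

# A "Rescan" button would simply call scan_models() again on the saved folder.
print(scan_models("models/lora"))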

[Bug] Problems with audio models

Hi, first of all I want to say that this software is just great despite being an early version. I am absolutely delighted.
But I have encountered a problem: when I add music to my generation I get an error.

OSError: Not found: "C:\Users\Илья/.cache\huggingface\hub\models--t5-base\snapshots\fe6d9bf207cd3337512ca838a8b453f87a9178ef\spiece.model": No such file or directory Error #2

As I understand it, this error is caused by the fact that I have Cyrillic in the username path, but the problem is more likely that the audio generator module stores its files in a different path, not inside the venv as would be reasonable.

Please make it so that all module files are stored inside the installation directory; I'm sure this will solve a lot of problems in the future.
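
For what it's worth, the Hugging Face libraries honor the HF_HOME environment variable, so a sketch like the following (run before any transformers/audiocraft import) would keep the cache inside the install directory; the exact cache folder name is an assumption:

import os
from pathlib import Path

# Sketch: redirect the Hugging Face cache away from C:\Users\<name>\.cache,
# sidestepping non-ASCII user paths. Must run before the libraries are imported.
install_dir = Path(__file__).resolve().parent
os.environ.setdefault("HF_HOME", str(install_dir / "cache" / "huggingface"))

import transformers  # noqa: E402  (imported after the env var is set)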

[Feature Request] File Naming and Sidecar File

Hello There.

I really love this. It's so awesome and easy at the same time! =D

But there are two things missing:

  • Custom naming like [SessionID]-[Seed].mp4 -> makes sorting in image boards/sorters easier than the current long tag name.
  • Creation of a sidecar .txt (for example) -> I notice there is a config.yaml. Could this be reformatted like A1111? (See the sketch below.)
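
A sketch of what the naming plus sidecar could look like (field names and paths are illustrative only, not the project's code):

from pathlib import Path

# Sketch: [SessionID]-[Seed].mp4 naming plus an A1111-style sidecar .txt
# written next to the rendered video.
def save_paths(out_dir: str, session_id: str, seed: int) -> tuple[Path, Path]:
    stem = f"{session_id}-{seed}"
    out = Path(out_dir)
    return out / f"{stem}.mp4", out / f"{stem}.txt"

video, sidecar = save_paths("outputs", "20230801-123456", 1234567890)
# sidecar.write_text("prompt: nightelves, armor\nsteps: 25\nseed: 1234567890\n")
print(video, sidecar, sep="\n")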

[Bug]: Open Folder buttons trigger a segmentation fault program crash

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Bug Description

When trying to click any of the 'Open Folder' options in the UI, the program quits (though the UI remains visible and cannot be closed without shutting down my WSL2 instance), and I get the following error in the terminal:

Traceback (most recent call last):
  File "/home/myname/machinelearning/mediacreation/VisionCrafter/main.py", line 485, in <module>
    main()
  File "/home/myname/machinelearning/mediacreation/VisionCrafter/main.py", line 177, in main
    models_bar_layout.events(event,values,window)
  File "/home/myname/machinelearning/mediacreation/VisionCrafter/layout/models_bar.py", line 119, in events
    os.startfile(os.path.abspath(sg.user_settings_get_entry(MODEL_PATH)))
AttributeError: module 'os' has no attribute 'startfile'

Running in WSL2 Ubuntu 20.04

Step-by-step instructions to reproduce the issue.

1. Run the program with 'python main.py'; the UI opens.
2. Click on any 'Open Folder' button (before or after a generation).
3. The program becomes unresponsive.
4. Check the terminal: the above traceback appears and the program has terminated.

Expected Behavior

Clicking on the 'Open Folder' buttons should open a folder.

Current Behavior

Clicking on the 'Open Folder' buttons crashes the program with the provided error.

Version or Commit where the problem happens

Commit: 96b0582

What platforms do you use Visioncrafter?

Windows, Linux

What Python version are you running on?

3.10.12

What GPU are you running Visioncrafter on?

RTX 4090

How much GPU VRAM are you running Visioncrafter on?

24GB

Console logs

(visioncrafter) myname@MYPCNAME:~/machinelearning/mediacreation/VisionCrafter$ python main.py
Traceback (most recent call last):
  File "/home/myname/machinelearning/mediacreation/VisionCrafter/main.py", line 485, in <module>
    main()
  File "/home/myname/machinelearning/mediacreation/VisionCrafter/main.py", line 177, in main
    models_bar_layout.events(event,values,window)
  File "/home/myname/machinelearning/mediacreation/VisionCrafter/layout/models_bar.py", line 119, in events
    os.startfile(os.path.abspath(sg.user_settings_get_entry(MODEL_PATH)))
AttributeError: module 'os' has no attribute 'startfile'

Additional information

Everything else seems to be working just fine.
I have symlinked all model paths to a previous AnimateDiff install to save space.
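
os.startfile() only exists on Windows, which is exactly what the AttributeError above says. A cross-platform helper could look like this sketch (not the project's code; under WSL2, xdg-open additionally needs something registered to open folders, e.g. wslu's wslview):

import os
import subprocess
import sys

# Sketch: cross-platform replacement for the Windows-only os.startfile().
def open_folder(path: str) -> None:
    path = os.path.abspath(path)
    if sys.platform.startswith("win"):
        os.startfile(path)                    # Windows Explorer
    elif sys.platform == "darwin":
        subprocess.Popen(["open", path])      # macOS Finder
    else:
        subprocess.Popen(["xdg-open", path])  # Linux; needs a handler under WSL2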

[Feature Request] GitHub Improvements

Hey there,

I noticed that you aren't using some of the "nice" features available on GitHub, such as Projects/Boards or Templates for Issues/Feature Requests. Both of these can make things easier, and if you're interested, I'd be happy to assist you with them.

Projects/Boards are helpful for tracking features or issues, providing a simpler overview compared to the Issue Tab.

Templates are great for streamlining the process of submitting requests or issues. Users only need to fill out the required fields. An example is provided below in Markdown Format.

That way, you can better prioritize and organize your time (resulting in less searching and fewer headaches =) ).

Feature Request Template:

# Feature Request

### Summary (Required)
Briefly describe the new feature you would like to request.

### Description (Required)
Provide a detailed description of the feature you are suggesting. Explain how it would work, what problem it would solve, and any potential benefits.

### Screenshots/Visuals (Optional)
If applicable, add screenshots, wireframes, or visual mock-ups to help illustrate the feature.

---

Thank you for taking the time to submit a feature request! Your feedback is valuable in helping us improve our project.

Issue Reporting Template:

## Issue Report

**Description:**
A clear and concise description of the issue you're encountering.

**Expected Behavior:**
A description of what you expected to happen.

**Current Behavior:**
A description of what is currently happening instead.

**Steps to Reproduce:**
1. Step-by-step instructions to reproduce the issue.
2. Be as specific as possible.

**Screenshots / Code Snippets:**
If applicable, provide screenshots, code snippets, or error messages that can help illustrate the issue.

**Environment:**
- Operating System: [e.g. Windows 10, macOS Big Sur, Ubuntu 20.04]
- Browser (if applicable): [e.g. Chrome 92, Firefox 89]
- Software Version: [e.g. Stable Diffusion 1.2.3, Visioncrafter 2.0.1]

**Additional Information:**
Any other relevant information that might help in understanding the issue.

Two questions

Greetings

thank you for this!

I have two questions:

1. Does it work with CPU as well? It would be great if us laptop CPU users could use AudioCraft and AnimateDiff on CPU too.

2. Is there any plan for releasing VisionCrafter as an extract-and-use package in the future? I mean like NMKD, where everything is included so we don't need to install Python or anything, for better portability.

kind regards
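
On question 1: whether VisionCrafter exposes it is up to the maintainer, but the underlying PyTorch pipelines generally accept a device argument, so a CPU fallback usually follows the standard pattern below (a sketch only, and much slower on CPU):

import torch

# Sketch: the usual device-selection pattern for torch-based pipelines.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on {device}")
# pipeline.to(device)  # hypothetical; depends on how VisionCrafter wires its pipelines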

[Bug]: Files don't save when using weights

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Bug Description

If you create a video with a prompt that contains weights, e.g. (dark:1.3), the file doesn't get saved.
As far as I can see, it's a file system error: VC stores the prompt in the save file name, and most systems can't handle special characters like ":" in filenames.

Step-by-step instructions to reproduce the issue.

  1. Enter a prompt, for example: nightelves, armor, (dark:1.3)
  2. Click generate
  3. On finish there is no error message, just a broken 0-byte file.

Expected Behavior

Normal file saving

Current Behavior

File saving fails due to an OSError.

Version or Commit where the problem happens

0.0.7

What platforms do you use Visioncrafter?

Windows

What Python version are you running on?

3.10.6

What GPU are you running Visioncrafter on?

RTX 3080

How much GPU VRAM are you running Visioncrafter on?

24 GB

Console logs

No relevant log files or entries. VC doesn't catch the OSError.

Additional information

File naming on save needs to be changed to support weights.
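
A possible sanitizer sketch (not the project's code) that strips the characters Windows rejects in filenames:

import re

# Sketch: replace characters Windows refuses in filenames (":" among them)
# before using the prompt as a save name.
_INVALID = re.compile(r'[<>:"/\\|?*]')

def safe_filename(prompt: str, max_len: int = 120) -> str:
    return _INVALID.sub("_", prompt).strip()[:max_len]

print(safe_filename("nightelves, armor, (dark:1.3)"))  # nightelves, armor, (dark_1.3)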

Model folder Settings are incorrect

The folders set by the "Set models folder" and "Set lora folder" buttons coincide and cannot be set separately, so the models cannot be loaded normally. Please correct this error.
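
PySimpleGUI's user-settings API supports distinct keys per folder, so a fix could be as simple as this sketch (the key names are made up; VisionCrafter's real keys may differ):

import PySimpleGUI as sg

# Sketch: store the two folders under separate settings keys so they can't collide.
MODEL_PATH_KEY = "-model folder-"
LORA_PATH_KEY = "-lora folder-"

sg.user_settings_set_entry(MODEL_PATH_KEY, "D:/models/checkpoints")
sg.user_settings_set_entry(LORA_PATH_KEY, "D:/models/lora")
print(sg.user_settings_get_entry(MODEL_PATH_KEY, ""))
print(sg.user_settings_get_entry(LORA_PATH_KEY, ""))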

Linux support

(venv) root@autodl-container-a1c3118008-79be1975:~/autodl-tmp/vision/VisionCrafter# python main.py
Traceback (most recent call last):
  File "/root/autodl-tmp/vision/VisionCrafter/main.py", line 910, in <module>
    main()
  File "/root/autodl-tmp/vision/VisionCrafter/main.py", line 304, in main
    window = sg.Window(f'{NAME} - {VER}',layout,finalize=True, resizable=True)
  File "/root/autodl-tmp/vision/VisionCrafter/venv/lib/python3.10/site-packages/PySimpleGUI/PySimpleGUI.py", line 9618, in __init__
    self.Finalize()
  File "/root/autodl-tmp/vision/VisionCrafter/venv/lib/python3.10/site-packages/PySimpleGUI/PySimpleGUI.py", line 10304, in finalize
    self.Read(timeout=1)
  File "/root/autodl-tmp/vision/VisionCrafter/venv/lib/python3.10/site-packages/PySimpleGUI/PySimpleGUI.py", line 10079, in read
    results = self._read(timeout=timeout, timeout_key=timeout_key)
  File "/root/autodl-tmp/vision/VisionCrafter/venv/lib/python3.10/site-packages/PySimpleGUI/PySimpleGUI.py", line 10150, in _read
    self._Show()
  File "/root/autodl-tmp/vision/VisionCrafter/venv/lib/python3.10/site-packages/PySimpleGUI/PySimpleGUI.py", line 9890, in _Show
    StartupTK(self)
  File "/root/autodl-tmp/vision/VisionCrafter/venv/lib/python3.10/site-packages/PySimpleGUI/PySimpleGUI.py", line 16821, in StartupTK
    _get_hidden_master_root()
  File "/root/autodl-tmp/vision/VisionCrafter/venv/lib/python3.10/site-packages/PySimpleGUI/PySimpleGUI.py", line 16708, in _get_hidden_master_root
    Window.hidden_master_root = tk.Tk()
  File "/root/miniconda3/lib/python3.10/tkinter/__init__.py", line 2299, in __init__
    self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
(venv) root@autodl-container-a1c3118008-79be1975:~/autodl-tmp/vision/VisionCrafter#
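
The crash is tkinter (which PySimpleGUI wraps) failing without an X server. A guard like this sketch could at least fail with a readable hint on headless boxes; the actual workaround is running under a desktop session, X forwarding, or a virtual display such as xvfb-run:

import os
import sys

# Sketch: detect a headless Linux session before building the PySimpleGUI window.
if sys.platform.startswith("linux") and not os.environ.get("DISPLAY"):
    raise SystemExit(
        "No $DISPLAY found. Run inside a desktop session, with X forwarding, "
        "or under a virtual display such as xvfb-run."
    )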

Cannot find the path specified


Hi,

I have tried several times to install, and I get an error stating it cannot find the path specified. Any advice on how to resolve this?

Thanks

[Feature Request] API Support

It would be great to have some endpoints we can hit, similar to what we have in A1111 SD, so that we can do batch and other custom generations as per our requirements.
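
FastAPI is already pulled in by the dependency tree (see the install logs elsewhere in this tracker), so an endpoint could look roughly like this sketch; the route, fields, and wiring are hypothetical:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    seed: int = -1
    frames: int = 16

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    # the actual AnimateDiff pipeline call would be queued here
    return {"status": "queued", "prompt": req.prompt, "seed": req.seed}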

[Feature Request]: Easy close UI

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Summary

Currently there seems to be no easy way to close the UI on my system without terminating the WSL2 instance. Even simply quitting the program in the terminal (Ctrl+C, Ctrl+Z) stops the program itself, but the UI remains on the screen until I do a wsl shutdown (though clicking 'Open Folder', as mentioned in my earlier bug report, seems to get rid of the UI most of the time lol).

It would be very nice if there were an easy button somewhere on the UI that I could just click and close it.

Description

Have a button somewhere on the UI that would close the UI, or the entire program. Just an 'x' in the corner, an actual 'Shutdown' button somewhere, whatever you think works the best!

It would solve the problem of being unable to easily close the program.

(haha, sorry if the way I am writing this sounds a little silly - I'm just trying to follow the submission instructions and give as much info as I can).

Additional information

No response
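
In PySimpleGUI this usually amounts to an explicit Exit button plus handling sg.WIN_CLOSED in the event loop, as in this sketch (layout and keys are illustrative only):

import PySimpleGUI as sg

# Sketch: a close path that works via both the titlebar 'x' and an Exit button.
layout = [[sg.Text("VisionCrafter")], [sg.Button("Exit", key="-EXIT-")]]
window = sg.Window("VisionCrafter", layout, resizable=True)

while True:
    event, values = window.read()
    if event in (sg.WIN_CLOSED, "-EXIT-"):
        break
window.close()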

[Feature Request]: Finetuned Motion Modules

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Summary

Implement loading finetuned motion modules.

Description

Implement loading finetuned motion modules from a dropdown, like other models. Now that the trainer code has been released, I am working with other like-minded individuals to train new motion modules.

Additional information

No response
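
Listing finetuned modules alongside the stock ones could reuse the motion-module folder that appears in the install logs, roughly like this sketch (the accepted extensions are assumptions):

from pathlib import Path

MOTION_DIR = Path("repos/animatediff/models/Motion_Module")

# Sketch: enumerate every checkpoint in the folder to feed a dropdown.
def list_motion_modules() -> list[str]:
    return sorted(
        p.name for p in MOTION_DIR.glob("*")
        if p.suffix.lower() in (".ckpt", ".safetensors", ".pth")
    )

print(list_motion_modules())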

Error and quit after end of movie process

In my second experience, after it finished creating the video and before playing it and creating the music, an error occurred and the software closed. Maybe it's a matter of memory. I used a resolution of 768 x 512 and there may not have been enough memory to continue.

I have a computer with 32 GB of memory and an RTX 3090 graphics card.

100%|██████████████████████████████████████████████████████████████████████████████| 120/120 [03:03<00:00, 1.53s/it]
100%|████████████████████████████████████████████████████████████████████████████████| 48/48 [00:07<00:00, 6.43it/s]
[libx264 @ 000002a81579d640] -qscale is ignored, -crf is recommended.
[libx264 @ 0000025ab1c3d5c0] -qscale is ignored, -crf is recommended.
Press any key to continue . . .

[Issue] Python was not found


Just installed and tried to run VisionCrafter and received this error:

The system cannot find the path specified.
Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.
The system cannot find the path specified.
Press any key to continue . . .

Just wanted to flag that the README of this repo does not specify Python as a prerequisite, and the installer doesn't install it.

What is the preferred version of Python to install for this application?

Thanks!

[Feature Request]: Resize UI, change font size, move window, resize window, etc.

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Summary

I'm loving the program so far and very excited to see how it develops.
Unfortunately, the one glaring issue I have with it is that it gives me a rather obtrusive UI that is stuck full screen, with no ability to resize either any of the elements or the window itself.

For this reason alone I have been choosing to use other programs, as it makes working with anything else on the computer a pain in the behind - and it just seems so very odd that it would be made that way.

In this day and age we all multi-task on our PCs, and I think the ability to resize our program windows, so we can dock them in various parts of the screen or shrink them to use the rest of our screen real estate for other purposes, is absolutely essential.

Description

Pretty basic really. If the program is going to remain using the current UI frontend, being able to change various aspects of the UI would be a huge boon to its adoption, I believe.

Specifically, being able to change the font size (or even the font itself), as well as the ability to change the window size with the UI elements dynamically adjusting to it, would be optimal.

Somewhat less necessary, but something that would be really cool, would be the ability to rearrange the UI elements the way we want them - but certainly not a deal breaker!

As far as the font resizing and moving/resizing the window go, though, quite honestly, as much as I love this program, if it remained the way it currently is I don't think I would be able to stay with it very long. I am using a 48" 4K monitor that I sit a good 3-4 feet away from, and many of the elements' text demands that I either scooch myself forward or use the Windows Zoom feature to read everything easily.

I actually put off testing it for some time, in part because having the UI take up my entire screen was more than a little frustrating. :)

Additional information

No response

[Question] A matching Triton is not available, some optimizations will not be enabled.

What does this mean? What should I do? The generation time seems extremely slow: about 30 minutes for a 2-second video at default settings.

My GPU is a 3080 Ti mobile version with 16 GB VRAM, running Stable Diffusion quite smoothly. Windows 11, 32 GB RAM.

command line shows this:
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
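
The Triton warning itself is expected on Windows (the triton package generally ships Linux-only wheels), so it is probably not the cause here. Given the speed, one thing worth ruling out is a silent CPU fallback, e.g. with this quick check:

import torch

# Quick diagnostic: confirm the GPU is visible to PyTorch at all.
print(torch.cuda.is_available())          # should print True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should name the 3080 Ti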

[Bug]: Saving Issue using Special Characters

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Bug Description

It was working perfectly until: FFMPEG STDERR OUTPUT:

Step-by-step instructions to reproduce the issue.

It renders, but does not make the movie.

Expected Behavior

It should make the movie.

Current Behavior

It doesn't make the movie.

Version or Commit where the problem happens

0.07

What platforms do you use Visioncrafter?

No response

What Python version are you running on?

3.10.8

What GPU are you running Visioncrafter on?

RTX 3080 Ti

How much GPU VRAM are you running Visioncrafter on?

16 GB

Console logs

Creating virtual environment inside venv folder...
Activating venv...
Installing dependencies from requirements.txt
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu118
Collecting torch==2.0.1+cu118
  Downloading https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-win_amd64.whl (2619.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.6/2.6 GB 2.5 MB/s eta 0:00:00
Collecting torchaudio==2.0.2+cu118
  Downloading https://download.pytorch.org/whl/cu118/torchaudio-2.0.2%2Bcu118-cp310-cp310-win_amd64.whl (2.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 22.5 MB/s eta 0:00:00
Collecting torchvision==0.15.2+cu118
  Downloading https://download.pytorch.org/whl/cu118/torchvision-0.15.2%2Bcu118-cp310-cp310-win_amd64.whl (4.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 15.0 MB/s eta 0:00:00
Collecting diffusers[torch]==0.11.1
  Downloading diffusers-0.11.1-py3-none-any.whl (524 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 524.9/524.9 kB 16.6 MB/s eta 0:00:00
Collecting transformers==4.30.2
  Downloading transformers-4.30.2-py3-none-any.whl (7.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.2/7.2 MB 18.3 MB/s eta 0:00:00
Collecting av==10.0.0
  Downloading av-10.0.0-cp310-cp310-win_amd64.whl (25.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 25.3/25.3 MB 19.8 MB/s eta 0:00:00
Collecting einops==0.6.1
  Downloading einops-0.6.1-py3-none-any.whl (42 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.2/42.2 kB ? eta 0:00:00
Collecting flashy>=0.0.1
  Downloading flashy-0.0.2.tar.gz (72 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 72.4/72.4 kB ? eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting hydra-core==1.3.2
  Downloading hydra_core-1.3.2-py3-none-any.whl (154 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 154.5/154.5 kB 9.6 MB/s eta 0:00:00
Collecting hydra_colorlog==1.2.0
  Downloading hydra_colorlog-1.2.0-py3-none-any.whl (3.6 kB)
Collecting julius==0.2.7
  Downloading julius-0.2.7.tar.gz (59 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 59.6/59.6 kB ? eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting num2words==0.5.12
  Downloading num2words-0.5.12-py3-none-any.whl (125 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 125.2/125.2 kB ? eta 0:00:00
Collecting numpy==1.24.4
  Downloading numpy-1.24.4-cp310-cp310-win_amd64.whl (14.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.8/14.8 MB 21.1 MB/s eta 0:00:00
Collecting sentencepiece==0.1.99
  Downloading sentencepiece-0.1.99-cp310-cp310-win_amd64.whl (977 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 977.5/977.5 kB 15.4 MB/s eta 0:00:00
Collecting spacy==3.5.2
  Downloading spacy-3.5.2-cp310-cp310-win_amd64.whl (12.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 21.1 MB/s eta 0:00:00
Collecting huggingface_hub
  Downloading huggingface_hub-0.17.3-py3-none-any.whl (295 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 295.0/295.0 kB ? eta 0:00:00
Collecting tqdm
  Downloading tqdm-4.66.1-py3-none-any.whl (78 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.3/78.3 kB ? eta 0:00:00
Collecting xformers==0.0.20
  Downloading xformers-0.0.20-cp310-cp310-win_amd64.whl (97.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.6/97.6 MB 18.2 MB/s eta 0:00:00
Collecting demucs==4.0.0
  Downloading demucs-4.0.0.tar.gz (1.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 25.5 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting librosa==0.10.0.post2
  Downloading librosa-0.10.0.post2-py3-none-any.whl (253 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 253.0/253.0 kB 15.2 MB/s eta 0:00:00
Collecting gradio
  Downloading gradio-3.45.1-py3-none-any.whl (20.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.2/20.2 MB 22.6 MB/s eta 0:00:00
Collecting imageio==2.9.0
  Downloading imageio-2.9.0-py3-none-any.whl (3.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 21.0 MB/s eta 0:00:00
Collecting imageio-ffmpeg==0.4.2
  Downloading imageio_ffmpeg-0.4.2-py3-none-win_amd64.whl (22.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.6/22.6 MB 18.2 MB/s eta 0:00:00
Collecting safetensors
  Downloading safetensors-0.3.3-cp310-cp310-win_amd64.whl (266 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 266.1/266.1 kB 17.1 MB/s eta 0:00:00
Collecting opencv-python==4.7.0.72
  Downloading opencv_python-4.7.0.72-cp37-abi3-win_amd64.whl (38.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 38.2/38.2 MB 18.7 MB/s eta 0:00:00
Collecting psutil==5.9.5
  Downloading psutil-5.9.5-cp36-abi3-win_amd64.whl (255 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 255.1/255.1 kB 7.9 MB/s eta 0:00:00
Collecting PySimpleGUI==4.60.5
  Downloading PySimpleGUI-4.60.5-py3-none-any.whl (512 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 512.7/512.7 kB 31.4 MB/s eta 0:00:00
Collecting pandas==2.0.2
  Downloading pandas-2.0.2-cp310-cp310-win_amd64.whl (10.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.7/10.7 MB 17.7 MB/s eta 0:00:00
Collecting moviepy==1.0.3
  Downloading moviepy-1.0.3.tar.gz (388 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 388.3/388.3 kB 23.6 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting python-vlc==3.0.18122
  Downloading python_vlc-3.0.18122-py3-none-any.whl (79 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 79.9/79.9 kB ? eta 0:00:00
Collecting sympy
  Downloading https://download.pytorch.org/whl/sympy-1.12-py3-none-any.whl (5.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.7/5.7 MB 16.7 MB/s eta 0:00:00
Collecting networkx
  Downloading networkx-3.1-py3-none-any.whl (2.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 16.5 MB/s eta 0:00:00
Collecting typing-extensions
  Downloading typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Collecting jinja2
  Downloading https://download.pytorch.org/whl/Jinja2-3.1.2-py3-none-any.whl (133 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.1/133.1 kB ? eta 0:00:00
Collecting filelock
  Downloading filelock-3.12.4-py3-none-any.whl (11 kB)
Collecting pillow!=8.3.*,>=5.3.0
  Downloading Pillow-10.0.1-cp310-cp310-win_amd64.whl (2.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 17.8 MB/s eta 0:00:00
Collecting requests
  Downloading requests-2.31.0-py3-none-any.whl (62 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.6/62.6 kB ? eta 0:00:00
Collecting importlib-metadata
  Downloading importlib_metadata-6.8.0-py3-none-any.whl (22 kB)
Collecting regex!=2019.12.17
  Downloading regex-2023.8.8-cp310-cp310-win_amd64.whl (268 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 268.3/268.3 kB ? eta 0:00:00
Collecting accelerate>=0.11.0
  Downloading accelerate-0.23.0-py3-none-any.whl (258 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 258.1/258.1 kB 15.5 MB/s eta 0:00:00
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1
  Downloading tokenizers-0.13.3-cp310-cp310-win_amd64.whl (3.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.5/3.5 MB 24.5 MB/s eta 0:00:00
Collecting pyyaml>=5.1
  Downloading PyYAML-6.0.1-cp310-cp310-win_amd64.whl (145 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 145.3/145.3 kB ? eta 0:00:00
Collecting packaging>=20.0
  Downloading packaging-23.1-py3-none-any.whl (48 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.9/48.9 kB ? eta 0:00:00
Collecting antlr4-python3-runtime==4.9.*
  Downloading antlr4-python3-runtime-4.9.3.tar.gz (117 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 117.0/117.0 kB ? eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting omegaconf<2.4,>=2.2
  Downloading omegaconf-2.3.0-py3-none-any.whl (79 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 79.5/79.5 kB ? eta 0:00:00
Collecting colorlog
  Downloading colorlog-6.7.0-py2.py3-none-any.whl (11 kB)
Collecting docopt>=0.6.2
  Downloading docopt-0.6.2.tar.gz (25 kB)
  Preparing metadata (setup.py) ... done
Collecting spacy-legacy<3.1.0,>=3.0.11
  Downloading spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Collecting pydantic!=1.8,!=1.8.1,<1.11.0,>=1.7.4
  Downloading pydantic-1.10.12-cp310-cp310-win_amd64.whl (2.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 19.4 MB/s eta 0:00:00
Collecting catalogue<2.1.0,>=2.0.6
  Downloading catalogue-2.0.10-py3-none-any.whl (17 kB)
Collecting typer<0.8.0,>=0.3.0
  Downloading typer-0.7.0-py3-none-any.whl (38 kB)
Collecting pathy>=0.10.0
  Downloading pathy-0.10.2-py3-none-any.whl (48 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.9/48.9 kB ? eta 0:00:00
Collecting langcodes<4.0.0,>=3.2.0
  Downloading langcodes-3.3.0-py3-none-any.whl (181 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 181.6/181.6 kB ? eta 0:00:00
Collecting thinc<8.2.0,>=8.1.8
  Downloading thinc-8.1.12-cp310-cp310-win_amd64.whl (1.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 18.8 MB/s eta 0:00:00
Collecting smart-open<7.0.0,>=5.2.1
  Downloading smart_open-6.4.0-py3-none-any.whl (57 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.0/57.0 kB ? eta 0:00:00
Collecting wasabi<1.2.0,>=0.9.1
  Downloading wasabi-1.1.2-py3-none-any.whl (27 kB)
Collecting srsly<3.0.0,>=2.4.3
  Downloading srsly-2.4.8-cp310-cp310-win_amd64.whl (481 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 481.9/481.9 kB 31.4 MB/s eta 0:00:00
Requirement already satisfied: setuptools in c:\visioncrafter\venv\lib\site-packages (from spacy==3.5.2->-r requirements.txt (line 16)) (65.5.0)
Collecting cymem<2.1.0,>=2.0.2
  Downloading cymem-2.0.8-cp310-cp310-win_amd64.whl (39 kB)
Collecting murmurhash<1.1.0,>=0.28.0
  Downloading murmurhash-1.0.10-cp310-cp310-win_amd64.whl (25 kB)
Collecting preshed<3.1.0,>=3.0.2
  Downloading preshed-3.0.9-cp310-cp310-win_amd64.whl (122 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 122.2/122.2 kB 3.6 MB/s eta 0:00:00
Collecting spacy-loggers<2.0.0,>=1.0.0
  Downloading spacy_loggers-1.0.5-py3-none-any.whl (22 kB)
Collecting pyre-extensions==0.0.29
  Downloading pyre_extensions-0.0.29-py3-none-any.whl (12 kB)
Collecting dora-search
  Downloading dora_search-0.1.12.tar.gz (87 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.1/87.1 kB ? eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting diffq>=0.2.1
  Downloading diffq-0.2.4-cp310-cp310-win_amd64.whl (91 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 91.8/91.8 kB ? eta 0:00:00
Collecting lameenc>=1.2
  Downloading lameenc-1.6.1-cp310-cp310-win_amd64.whl (148 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 148.4/148.4 kB ? eta 0:00:00
Collecting openunmix
  Downloading openunmix-1.2.1-py3-none-any.whl (46 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.7/46.7 kB ? eta 0:00:00
Collecting soxr>=0.3.2
  Downloading soxr-0.3.6-cp310-cp310-win_amd64.whl (184 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 184.8/184.8 kB ? eta 0:00:00
Collecting msgpack>=1.0
  Downloading msgpack-1.0.6-cp310-cp310-win_amd64.whl (162 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 162.5/162.5 kB 9.5 MB/s eta 0:00:00
Collecting decorator>=4.3.0
  Downloading decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting scipy>=1.2.0
  Downloading scipy-1.11.2-cp310-cp310-win_amd64.whl (44.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 44.0/44.0 MB 17.7 MB/s eta 0:00:00
Collecting joblib>=0.14
  Downloading joblib-1.3.2-py3-none-any.whl (302 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 302.2/302.2 kB 18.2 MB/s eta 0:00:00
Collecting audioread>=2.1.9
  Downloading audioread-3.0.0.tar.gz (377 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 377.0/377.0 kB 22.9 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting soundfile>=0.12.1
  Downloading soundfile-0.12.1-py2.py3-none-win_amd64.whl (1.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 MB 21.3 MB/s eta 0:00:00
Collecting numba>=0.51.0
  Downloading numba-0.58.0-cp310-cp310-win_amd64.whl (2.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.6/2.6 MB 15.1 MB/s eta 0:00:00
Collecting lazy-loader>=0.1
  Downloading lazy_loader-0.3-py3-none-any.whl (9.1 kB)
Collecting pooch<1.7,>=1.0
  Downloading pooch-1.6.0-py3-none-any.whl (56 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.3/56.3 kB ? eta 0:00:00
Collecting scikit-learn>=0.20.0
  Downloading scikit_learn-1.3.1-cp310-cp310-win_amd64.whl (9.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.3/9.3 MB 11.8 MB/s eta 0:00:00
Collecting pytz>=2020.1
  Downloading pytz-2023.3.post1-py2.py3-none-any.whl (502 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 502.5/502.5 kB 10.7 MB/s eta 0:00:00
Collecting tzdata>=2022.1
  Downloading tzdata-2023.3-py2.py3-none-any.whl (341 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 341.8/341.8 kB 20.7 MB/s eta 0:00:00
Collecting python-dateutil>=2.8.2
  Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 247.7/247.7 kB 14.8 MB/s eta 0:00:00
Collecting decorator>=4.3.0
  Downloading decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
Collecting proglog<=1.0.0
  Downloading proglog-0.1.10-py3-none-any.whl (6.1 kB)
Collecting typing-inspect
  Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)
Collecting fsspec
  Downloading fsspec-2023.9.2-py3-none-any.whl (173 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 173.4/173.4 kB ? eta 0:00:00
Collecting colorama
  Downloading https://download.pytorch.org/whl/colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting python-multipart
  Downloading python_multipart-0.0.6-py3-none-any.whl (45 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.7/45.7 kB ? eta 0:00:00
Collecting gradio-client==0.5.2
  Downloading gradio_client-0.5.2-py3-none-any.whl (298 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 298.3/298.3 kB 18.0 MB/s eta 0:00:00
Collecting fastapi
  Downloading fastapi-0.103.1-py3-none-any.whl (66 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.2/66.2 kB ? eta 0:00:00
Collecting matplotlib~=3.0
  Downloading matplotlib-3.8.0-cp310-cp310-win_amd64.whl (7.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.6/7.6 MB 21.2 MB/s eta 0:00:00
Collecting aiofiles<24.0,>=22.0
  Downloading aiofiles-23.2.1-py3-none-any.whl (15 kB)
Collecting pydub
  Downloading pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting semantic-version~=2.0
  Downloading semantic_version-2.10.0-py2.py3-none-any.whl (15 kB)
Collecting uvicorn>=0.14.0
  Downloading uvicorn-0.23.2-py3-none-any.whl (59 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 59.5/59.5 kB ? eta 0:00:00
Collecting altair<6.0,>=4.2.0
  Downloading altair-5.1.1-py3-none-any.whl (520 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 520.6/520.6 kB 15.9 MB/s eta 0:00:00
Collecting websockets<12.0,>=10.0
  Downloading websockets-11.0.3-cp310-cp310-win_amd64.whl (124 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 124.7/124.7 kB ? eta 0:00:00
Collecting orjson~=3.0
  Downloading orjson-3.9.7-cp310-none-win_amd64.whl (134 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 134.8/134.8 kB ? eta 0:00:00
Collecting httpx
  Downloading httpx-0.25.0-py3-none-any.whl (75 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.7/75.7 kB 4.1 MB/s eta 0:00:00
Collecting importlib-resources<7.0,>=1.3
  Downloading importlib_resources-6.1.0-py3-none-any.whl (33 kB)
Collecting ffmpy
  Downloading ffmpy-0.3.1.tar.gz (5.5 kB)
  Preparing metadata (setup.py) ... done
Collecting markupsafe~=2.0
  Downloading MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl (17 kB)
Collecting jsonschema>=3.0
  Downloading jsonschema-4.19.1-py3-none-any.whl (83 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.3/83.3 kB ? eta 0:00:00
Collecting toolz
  Downloading toolz-0.12.0-py3-none-any.whl (55 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 55.8/55.8 kB ? eta 0:00:00
Collecting Cython
  Downloading Cython-3.0.2-cp310-cp310-win_amd64.whl (2.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.8/2.8 MB 17.6 MB/s eta 0:00:00
Collecting fonttools>=4.22.0
  Downloading fonttools-4.42.1-cp310-cp310-win_amd64.whl (2.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 16.8 MB/s eta 0:00:00
Collecting pyparsing>=2.3.1
  Downloading pyparsing-3.1.1-py3-none-any.whl (103 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 103.1/103.1 kB ? eta 0:00:00
Collecting cycler>=0.10
  Downloading cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting kiwisolver>=1.0.1
  Downloading kiwisolver-1.4.5-cp310-cp310-win_amd64.whl (56 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.1/56.1 kB ? eta 0:00:00
Collecting contourpy>=1.0.1
  Downloading contourpy-1.1.1-cp310-cp310-win_amd64.whl (477 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 478.0/478.0 kB 29.2 MB/s eta 0:00:00
Collecting llvmlite<0.42,>=0.41.0dev0
  Downloading llvmlite-0.41.0-cp310-cp310-win_amd64.whl (28.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 28.1/28.1 MB 13.1 MB/s eta 0:00:00
Collecting appdirs>=1.3.0
  Downloading appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting six>=1.5
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting idna<4,>=2.5
  Downloading https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.5/61.5 kB ? eta 0:00:00
Collecting urllib3<3,>=1.21.1
  Downloading urllib3-2.0.5-py3-none-any.whl (123 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 123.8/123.8 kB ? eta 0:00:00
Collecting charset-normalizer<4,>=2
  Downloading charset_normalizer-3.2.0-cp310-cp310-win_amd64.whl (96 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.9/96.9 kB ? eta 0:00:00
Collecting certifi>=2017.4.17
  Downloading certifi-2023.7.22-py3-none-any.whl (158 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 158.3/158.3 kB ? eta 0:00:00
Collecting threadpoolctl>=2.0.0
  Downloading threadpoolctl-3.2.0-py3-none-any.whl (15 kB)
Collecting cffi>=1.0
  Downloading cffi-1.15.1-cp310-cp310-win_amd64.whl (179 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 179.1/179.1 kB ? eta 0:00:00
Collecting confection<1.0.0,>=0.0.1
  Downloading confection-0.1.3-py3-none-any.whl (34 kB)
Collecting blis<0.8.0,>=0.7.8
  Downloading blis-0.7.11-cp310-cp310-win_amd64.whl (6.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.6/6.6 MB 21.1 MB/s eta 0:00:00
Collecting click<9.0.0,>=7.1.1
  Downloading click-8.1.7-py3-none-any.whl (97 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.9/97.9 kB 5.8 MB/s eta 0:00:00
Collecting h11>=0.8
  Downloading h11-0.14.0-py3-none-any.whl (58 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB ? eta 0:00:00
Collecting retrying
  Downloading retrying-1.3.4-py3-none-any.whl (11 kB)
Collecting submitit
  Downloading submitit-1.4.6-py3-none-any.whl (73 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 73.9/73.9 kB ? eta 0:00:00
Collecting treetable
  Downloading treetable-0.2.5.tar.gz (10 kB)
  Preparing metadata (setup.py) ... done
Collecting starlette<0.28.0,>=0.27.0
  Downloading starlette-0.27.0-py3-none-any.whl (66 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 67.0/67.0 kB ? eta 0:00:00
Collecting anyio<4.0.0,>=3.7.1
  Downloading anyio-3.7.1-py3-none-any.whl (80 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 80.9/80.9 kB ? eta 0:00:00
Collecting sniffio
  Downloading sniffio-1.3.0-py3-none-any.whl (10 kB)
Collecting httpcore<0.19.0,>=0.18.0
  Downloading httpcore-0.18.0-py3-none-any.whl (76 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 76.0/76.0 kB ? eta 0:00:00
Collecting zipp>=0.5
  Downloading zipp-3.17.0-py3-none-any.whl (7.4 kB)
Collecting mpmath>=0.19
  Downloading https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 17.0 MB/s eta 0:00:00
Collecting exceptiongroup
  Downloading exceptiongroup-1.1.3-py3-none-any.whl (14 kB)
Collecting pycparser
  Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 118.7/118.7 kB ? eta 0:00:00
Collecting jsonschema-specifications>=2023.03.6
  Downloading jsonschema_specifications-2023.7.1-py3-none-any.whl (17 kB)
Collecting referencing>=0.28.4
  Downloading referencing-0.30.2-py3-none-any.whl (25 kB)
Collecting rpds-py>=0.7.1
  Downloading rpds_py-0.10.3-cp310-none-win_amd64.whl (186 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 186.4/186.4 kB 11.0 MB/s eta 0:00:00
Collecting attrs>=22.2.0
  Downloading attrs-23.1.0-py3-none-any.whl (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.2/61.2 kB ? eta 0:00:00
Collecting cloudpickle>=1.2.1
  Downloading cloudpickle-2.2.1-py3-none-any.whl (25 kB)
Collecting mypy-extensions>=0.3.0
  Downloading mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Building wheels for collected packages: flashy, dora-search
  Building wheel for flashy (pyproject.toml) ... done
  Created wheel for flashy: filename=flashy-0.0.2-py3-none-any.whl size=34568 sha256=49330efd48986b288564c4b8831d9a7471195ea84d60c76b83b4cda37d69bbe4
  Stored in directory: c:\users\franc\appdata\local\pip\cache\wheels\07\bd\3d\16c6bc059203299f37b6014643b739afb7f6d1be13a94fc2f7
  Building wheel for dora-search (pyproject.toml) ... done
  Created wheel for dora-search: filename=dora_search-0.1.12-py3-none-any.whl size=75223 sha256=a7690cc9887ced81980ca8ffecbedce65b39679e3ff783a4d7b6ec95173176fa
  Stored in directory: c:\users\franc\appdata\local\pip\cache\wheels\b1\c2\c0\bea5cc405497284d584b958f293ef32c23bad42ae5e44d973c
Successfully built flashy dora-search
Installing collected packages: tokenizers, sentencepiece, safetensors, pytz, python-vlc, PySimpleGUI, pydub, mpmath, lameenc, ffmpy, docopt, cymem, av, appdirs, antlr4-python3-runtime, zipp, websockets, urllib3, tzdata, typing-extensions, treetable, toolz, threadpoolctl, sympy, spacy-loggers, spacy-legacy, sniffio, smart-open, six, semantic-version, rpds-py, regex, pyyaml, python-multipart, pyparsing, pycparser, psutil, pillow, packaging, orjson, numpy, num2words, networkx, mypy-extensions, murmurhash, msgpack, markupsafe, llvmlite, lazy-loader, langcodes, kiwisolver, joblib, importlib-resources, imageio-ffmpeg, idna, h11, fsspec, fonttools, filelock, exceptiongroup, einops, decorator, Cython, cycler, colorama, cloudpickle, charset-normalizer, certifi, catalogue, audioread, attrs, aiofiles, wasabi, typing-inspect, tqdm, submitit, srsly, soxr, scipy, retrying, requests, referencing, python-dateutil, pydantic, preshed, opencv-python, omegaconf, numba, jinja2, importlib-metadata, imageio, contourpy, colorlog, click, cffi, blis, anyio, uvicorn, typer, torch, starlette, soundfile, scikit-learn, pyre-extensions, proglog, pooch, pandas, matplotlib, jsonschema-specifications, hydra-core, huggingface_hub, httpcore, confection, xformers, transformers, torchvision, torchaudio, thinc, pathy, moviepy, librosa, julius, jsonschema, hydra_colorlog, httpx, fastapi, dora-search, diffusers, diffq, accelerate, spacy, openunmix, gradio-client, flashy, altair, gradio, demucs
  DEPRECATION: ffmpy is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
  Running setup.py install for ffmpy ... done
  DEPRECATION: docopt is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
  Running setup.py install for docopt ... done
  DEPRECATION: antlr4-python3-runtime is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
  Running setup.py install for antlr4-python3-runtime ... done
  DEPRECATION: treetable is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
  Running setup.py install for treetable ... done
  DEPRECATION: audioread is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
  Running setup.py install for audioread ... done
  DEPRECATION: moviepy is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
  Running setup.py install for moviepy ... done
  DEPRECATION: julius is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
  Running setup.py install for julius ... done
  DEPRECATION: demucs is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
  Running setup.py install for demucs ... done
Successfully installed Cython-3.0.2 PySimpleGUI-4.60.5 accelerate-0.23.0 aiofiles-23.2.1 altair-5.1.1 antlr4-python3-runtime-4.9.3 anyio-3.7.1 appdirs-1.4.4 attrs-23.1.0 audioread-3.0.0 av-10.0.0 blis-0.7.11 catalogue-2.0.10 certifi-2023.7.22 cffi-1.15.1 charset-normalizer-3.2.0 click-8.1.7 cloudpickle-2.2.1 colorama-0.4.6 colorlog-6.7.0 confection-0.1.3 contourpy-1.1.1 cycler-0.11.0 cymem-2.0.8 decorator-4.4.2 demucs-4.0.0 diffq-0.2.4 diffusers-0.11.1 docopt-0.6.2 dora-search-0.1.12 einops-0.6.1 exceptiongroup-1.1.3 fastapi-0.103.1 ffmpy-0.3.1 filelock-3.12.4 flashy-0.0.2 fonttools-4.42.1 fsspec-2023.9.2 gradio-3.45.1 gradio-client-0.5.2 h11-0.14.0 httpcore-0.18.0 httpx-0.25.0 huggingface_hub-0.17.3 hydra-core-1.3.2 hydra_colorlog-1.2.0 idna-3.4 imageio-2.9.0 imageio-ffmpeg-0.4.2 importlib-metadata-6.8.0 importlib-resources-6.1.0 jinja2-3.1.2 joblib-1.3.2 jsonschema-4.19.1 jsonschema-specifications-2023.7.1 julius-0.2.7 kiwisolver-1.4.5 lameenc-1.6.1 langcodes-3.3.0 lazy-loader-0.3 librosa-0.10.0.post2 llvmlite-0.41.0 markupsafe-2.1.3 matplotlib-3.8.0 moviepy-1.0.3 mpmath-1.3.0 msgpack-1.0.6 murmurhash-1.0.10 mypy-extensions-1.0.0 networkx-3.1 num2words-0.5.12 numba-0.58.0 numpy-1.24.4 omegaconf-2.3.0 opencv-python-4.7.0.72 openunmix-1.2.1 orjson-3.9.7 packaging-23.1 pandas-2.0.2 pathy-0.10.2 pillow-10.0.1 pooch-1.6.0 preshed-3.0.9 proglog-0.1.10 psutil-5.9.5 pycparser-2.21 pydantic-1.10.12 pydub-0.25.1 pyparsing-3.1.1 pyre-extensions-0.0.29 python-dateutil-2.8.2 python-multipart-0.0.6 python-vlc-3.0.18122 pytz-2023.3.post1 pyyaml-6.0.1 referencing-0.30.2 regex-2023.8.8 requests-2.31.0 retrying-1.3.4 rpds-py-0.10.3 safetensors-0.3.3 scikit-learn-1.3.1 scipy-1.11.2 semantic-version-2.10.0 sentencepiece-0.1.99 six-1.16.0 smart-open-6.4.0 sniffio-1.3.0 soundfile-0.12.1 soxr-0.3.6 spacy-3.5.2 spacy-legacy-3.0.12 spacy-loggers-1.0.5 srsly-2.4.8 starlette-0.27.0 submitit-1.4.6 sympy-1.12 thinc-8.1.12 threadpoolctl-3.2.0 tokenizers-0.13.3 toolz-0.12.0 torch-2.0.1+cu118 torchaudio-2.0.2+cu118 torchvision-0.15.2+cu118 tqdm-4.66.1 transformers-4.30.2 treetable-0.2.5 typer-0.7.0 typing-extensions-4.8.0 typing-inspect-0.9.0 tzdata-2023.3 urllib3-2.0.5 uvicorn-0.23.2 wasabi-1.1.2 websockets-11.0.3 xformers-0.0.20 zipp-3.17.0

[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: python.exe -m pip install --upgrade pip
Downloading stable-diffusion-v1-5
Cloning into 'repos\animatediff\models\StableDiffusion\stable-diffusion-v1-5'...
remote: Enumerating objects: 194, done.
remote: Total 194 (delta 0), reused 0 (delta 0), pack-reused 194
Receiving objects: 100% (194/194), 540.42 KiB | 1.86 MiB/s, done.
Resolving deltas: 100% (69/69), done.
Filtering content: 100% (4/4), 2.55 GiB | 18.00 MiB/s, done.
Downloading Motion Modules
Cloning into 'repos\animatediff\models\Motion_Module'...
remote: Enumerating objects: 24, done.
remote: Counting objects: 100% (13/13), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 24 (delta 1), reused 0 (delta 0), pack-reused 11
Unpacking objects: 100% (24/24), 3.21 KiB | 173.00 KiB/s, done.
Filtering content: 100% (11/11), 5.38 GiB | 8.08 MiB/s, done.
Do you want to download toonyou model? (y/n):n
Skipping toonyou model download
Launching VisionCrafter
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.embeddings.position_ids', ... all remaining vision_model.* parameters ..., 'visual_projection.weight', 'text_projection.weight', 'logit_scale']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Token indices sequence length is longer than the specified maximum sequence length for this model (90 > 77). Running this sequence through the model will result in indexing errors
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['( beksinski ) ) ) ), shoot by canon camera']
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:32<00:00,  7.63s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:02<00:00,  6.08it/s]
C:\VisionCrafter\outputs\result-2023-09-27T13-53-19\results\mp4\0-photorealist-hiperdetailed-surrealist-hairy-nudibranchturtlefroginsectmechanical
eating-a-giant-microbeeyeballsnailcactus(((((girlfacesleeping))))(((oldbooks)))-in-a.mp4: Invalid argument
Exception in thread Thread-2 (animate_main_t):
Traceback (most recent call last):
  File "C:\VisionCrafter\venv\lib\site-packages\imageio_ffmpeg\_io.py", line 479, in write_frames
    p.stdin.write(bb)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\franc\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\franc\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\VisionCrafter\main.py", line 344, in animate_main_t
    animate_main(args,window)
  File "C:\VisionCrafter\repos\animatediff\scripts\animate.py", line 211, in main
    save_videos_grid(sample, f"{savedir}/results/mp4/{sample_idx}-{prompt}.mp4")
  File "C:\VisionCrafter\repos\animatediff\animatediff\utils\util.py", line 32, in save_videos_grid
    imageio.mimsave(path, outputs, fps=fps, codec='h264', quality=10, pixelformat='yuv420p')
  File "C:\VisionCrafter\venv\lib\site-packages\imageio\core\functions.py", line 418, in mimwrite
    writer.append_data(im)
  File "C:\VisionCrafter\venv\lib\site-packages\imageio\core\format.py", line 502, in append_data
    return self._append_data(im, total_meta)
  File "C:\VisionCrafter\venv\lib\site-packages\imageio\plugins\ffmpeg.py", line 574, in _append_data
    self._write_gen.send(im)
  File "C:\VisionCrafter\venv\lib\site-packages\imageio_ffmpeg\_io.py", line 486, in write_frames
    raise IOError(msg)
OSError: [Errno 32] Broken pipe

FFMPEG COMMAND:
C:\VisionCrafter\venv\lib\site-packages\imageio_ffmpeg\binaries\ffmpeg-win64-v4.2.2.exe -y -f rawvideo -vcodec rawvideo -s 512x512 -pix_fmt rgb24 -r 8.00 -i - -an -vcodec h264 -pix_fmt yuv420p -qscale:v 1 -v warning C:\VisionCrafter\outputs\result-2023-09-27T13-53-19\results\mp4\0-photorealist-hiperdetailed-surrealist-hairy-nudibranchturtlefroginsectmechanical
eating-a-giant-microbeeyeballsnailcactus(((((girlfacesleeping))))(((oldbooks)))-in-a.mp4

FFMPEG STDERR OUTPUT:

Additional information

No response
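
The "Invalid argument" and broken pipe above likely stem from the prompt-derived filename: it contains a line break, which Windows rejects in paths, so ffmpeg never opens the output file. A minimal sketch of one possible sanitization step (the helper name and character set are illustrative, not part of VisionCrafter):

    import re

    # Hypothetical helper: sanitize a prompt-derived filename so ffmpeg receives
    # a valid Windows path (no line breaks, no characters Windows forbids).
    def safe_filename(prompt: str, max_len: int = 120) -> str:
        name = prompt.replace("\n", " ").replace("\r", " ")  # newlines break the path
        name = re.sub(r'[<>:"/\\|?*]', "", name)             # reserved on Windows
        name = re.sub(r"\s+", "-", name).strip("-")          # collapse whitespace
        return name[:max_len]                                # keep paths short

    # e.g. save_videos_grid(sample, f"{savedir}/results/mp4/{idx}-{safe_filename(prompt)}.mp4")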

[Issue] Unable to Unload Lora

Hey there,

Once I create something with a LoRA attached, I can't unload it.
Subsequent generations without a LoRA selected still look for the LoRA file.

Exception in thread Thread-16 (animate_main_t):
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\ai\AiMaster\AI\VisionCrafter\main.py", line 294, in animate_main_t
    animate_main(args,window)
  File "D:\ai\AiMaster\AI\VisionCrafter\repos\animatediff\scripts\animate.py", line 140, in main
    with safe_open(model_config.path, framework="pt", device="cpu") as f:
FileNotFoundError: No such file or directory: "D:\ai\AiMaster\AI\NextSD\outputs\text\meguminKonosuba_v10.safetensors"
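
The traceback suggests animate.py reopens whatever LoRA path is still stored in the model config, even after the user deselects the LoRA. A minimal sketch of a guard that skips loading when no path is set (the helper and the optional-path assumption are hypothetical, not VisionCrafter's actual API):

    from safetensors import safe_open

    # Hypothetical guard: only open a LoRA file when a path is actually set, so
    # a previously used LoRA is not reloaded after the user clears the selection.
    def load_lora_state(model_config):
        lora_path = getattr(model_config, "path", None)
        if not lora_path:  # None or "" -> user unloaded the LoRA
            return None
        with safe_open(lora_path, framework="pt", device="cpu") as f:
            return {key: f.get_tensor(key) for key in f.keys()}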

[Feature Request]: Implementing Controlnet Features?

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Summary

It's possible to implement ControlNet support in AnimateDiff to control the animation: https://twitter.com/toyxyz3/status/1695849125663973561?s=20

Description

It would be nice to have a video preprocessor that uses FFmpeg to cut a portion of a video and drive the animation with ControlNet OpenPose; a ControlNet tile-based upscale would be great as well.

https://twitter.com/toyxyz3/status/1695761541226955081?s=20

Additional information

It would even be cool to make longer videos: cut the source into 24-frame chunks, process each chunk, and stitch the results into one very long video or GIF.
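
A minimal sketch of that chunking step, assuming ffmpeg is available on PATH; the filenames, fps, and chunk length are illustrative:

    import subprocess

    # Hypothetical sketch: split a source clip into ~24-frame chunks with
    # ffmpeg's segment muxer, so each chunk can be processed separately and
    # the results stitched back together afterwards.
    def split_into_chunks(src, out_pattern="chunk_%03d.mp4", frames_per_chunk=24, fps=8):
        seconds = frames_per_chunk / fps  # segment length in seconds
        subprocess.run([
            "ffmpeg", "-y", "-i", src,
            "-f", "segment", "-segment_time", str(seconds),
            "-reset_timestamps", "1",
            "-c", "copy",  # stream copy snaps cuts to keyframes; re-encode for exact frame counts
            out_pattern,
        ], check=True)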
