
butzyung / systemanimatoronline

646 stars · 22 watchers · 55 forks · 516.4 MB

XR Animator, AI-based Full Body Motion Capture and Extended Reality (XR) solution, powered by System Animator Online

Home Page: https://sao.animetheme.com/XR_Animator.html

Languages: JavaScript 97.84%, HTML 1.59%, AutoIt 0.03%, CSS 0.08%, C 0.45%
Topics: webgl, electron-app, augmented-reality, mediapipe, motion-capture, tensorflowjs, webxr, threejs, vtuber

systemanimatoronline's People

Contributors: butzyung

systemanimatoronline's Issues

Get VMC protocol data from mediapipe results

I've tried to read some of the code in this repo. MMD_SA.js seems responsible for sending the VMC data, and the data is read from the VRM model. So I guess the whole pipeline of this app is:

  1. Get results from MediaPipe (478 face landmarks, 21×2 hand landmarks, and 33 pose landmarks)
  2. Translate landmarks to control parameters. (I guess this is handled in facemesh_lib.js and mocap_lib_module.js)
  3. Apply the results to the model. (I failed to find the code for this)
  4. Read the model's state and send data to VMC protocol.

Correct me if any step is wrong. I'm trying to figure out how the app controls the 3D model using the landmarks from MediaPipe. Also, is it possible to generate VMC protocol data directly, so that we can skip moving the model?

Request to add virtual camera

Currently I'm using VSeeFace to route all my motion through a virtual camera, but it would be nice to skip the middleman and have a virtual camera built into XR Animator! It would be one less program I'd have to keep open, saving resources while streaming!

Unable to launch XR Animator if parent folder name contains a parenthesis

I am using the Windows version of the application. As the title says, I found a bug where the program is prevented from starting if the characters "(" or ")" appear in the name of the folder it is extracted into. Either character alone, or both together, will stop it from working. The folder I was trying to launch it from was "New folder (2)".

The behaviour is as follows: when launched from inside a folder containing the aforementioned characters, the program opens normally, but when the "start" button is clicked, loading hangs indefinitely at 99% of the "loading thex" stage.

Run the online webkit locally in browser

Hi, this is really an amazing project! I love the quality of the mocap a lot, and I'd like to try it myself. When I try to start it locally, I get several errors such as "System is not defined". All I did was clone the repo and open SystemAnimator_online.html in my browser.

Can you give me some advice about how to run this project in the browser locally? I know there's a compiled version, but I'd like to try to build some tools upon this. Thanks a lot!

Texture not loading correctly?

I'm having this strange issue where my model isn't loading its texture correctly.
image
There are parts that are transparent that clearly shouldn't be, but I don't have this issue with any program other than this one. Is there something automatic when the chroma key is running, such as detecting thick black lines, that could be causing this? I have already checked the camera/lighting.

Coordinate conversion

May I ask how to convert the coordinates obtained from MediaPipe into coordinates in the three.js scene?

[Feature Request] New VMC application mode for VRChat OSC

I fiddled around with your implementation of VMC to send to port 9000 (by editing the config file, because the app only allows 5 digits for... reasons ^^).
VRChat does catch those parameters in its OSC debugger, but on the path /VMC/ext/..., which is not what VRChat expects.

I snooped around and found MMD_SA.js to be handling the connections, but I wasn't skilled enough to modify it accordingly (or at least not without breaking anything else).

Is there support for VRChat planned? This would help many people without VR to express themselves further instead of just standing there... MENACINGLY!

Way to get rid of shadow/point light?

Just due to the type of model I'm using, as well as the kind of setup I have, my character casting a shadow is not desirable. Is there a way to get flat lighting, or just disable the shadows cast by the light? Thanks.

(Also, on some of the default poses the shadows are just broken due to where XR Animator thinks the floor is lol)
image
image

3D Camera lock doesn't seem to work

When I go to UI/Options > Miscellaneous > Hotkey list and options, on the right hand bubble I see "Ctrl + L to toggle the 3D camera lock".

Expected behavior: Toggling the camera lock on will prevent clicking + dragging from rotating my character all around.

Observed behavior: I can still rotate around the character. Pressing Ctrl+L again snaps the character to a slightly different position, but with no discernible pattern.

VMC protocol

How can I change, in the code, the IP address of the device I will be transferring live data to via the VMC protocol? Also, can I send data to multiple IP addresses simultaneously?

MMD model shows broken textures

So, I wanted to use an MMD model, right? Well, when I do, the textures look very broken.
It doesn't look like this in other programs; in others it looks fine. I think this might be a bug in XR Animator.
image

52 Blendshapes

Hi, I am looking for information about sending blendshape data from XR Animator to Unity via the VMC protocol. In the inspector, some values appear to move together with identical values, such as eyeBlinkLeft/eyeBlinkRight and mouthFrownLeft/mouthFrownRight. Does XR Animator send the 52 blendshape values separately, and it gets merged because of the avatar, or do I need to add blendshape clips for all 52 blendshapes for better capture? What would you suggest for accurate capture?
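One way to narrow this down on the receiving side: collect the /VMC/Ext/Blend/Val values as they arrive and compare the suspect left/right pairs. If they are identical over time, the merging happens at the sender or in the avatar's blendshape clips rather than in Unity. A small sketch assuming the OSC messages have already been decoded into a name-to-value map (the OSC decoder itself is omitted):

```javascript
// Compare suspect left/right ARKit blendshape pairs in one decoded frame.
function checkPairs(values) {
  const pairs = [
    ['eyeBlinkLeft', 'eyeBlinkRight'],
    ['mouthFrownLeft', 'mouthFrownRight'],
  ];
  return pairs.map(([l, r]) => ({
    pair: `${l}/${r}`,
    identical: values[l] !== undefined && values[l] === values[r],
  }));
}

// Illustrative decoded frame.
const frame = { eyeBlinkLeft: 0.8, eyeBlinkRight: 0.8,
                mouthFrownLeft: 0.1, mouthFrownRight: 0.3 };
console.log(checkPairs(frame));
```

Logging this over a few seconds while winking or smirking on one side should make it obvious whether the stream itself ever distinguishes the two sides.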

Can't puff my cheeks

I have a model with ARKit blendshapes, but somehow when I puff up my cheeks, it isn't tracked on my model.

How can I make it work?

Option to rebind keys?

I noticed I can't use Ctrl+Z as my undo shortcut while XR Animator is running. Is there any way to rebind this to something else?

Request for more zoom levels!

The program works wonderfully so far! My model is rather abnormal in both size and shape, and I'm kind of stuck between being zoomed too far out and too far in using the scroll wheel to zoom. I'd love more zoom levels for it, or maybe a function to hold ctrl and get finer zoom with the scroll wheel!

Support for Kinect v2?

I have a Kinect v2 and am wondering if the program can utilise the motion tracking functions of the Kinect for better tracking.

Eyelids no longer blink (Version 12)

As the title says.

Version 12 no longer tracks eye blinking, while Version 11 works fine. It does look like you are blinking inside XR Animator, but it doesn't translate into VNyan or VSeeFace (or the like).

2023-07-27 05_14_38-
2023-07-27 05_14_30-Window
2023-07-27 05_14_24-

webm support

Add support for WebM for motion capture. The current software can load and play the video, but it seems to kill everything and turn the application into a video player, requiring a restart if you accidentally load a WebM file.

VRM model loading issue

Loaded my VRM model into the Linux build of this, and found some textures didn't load correctly: most noticeably, blacks were loaded as gray, and some textures looked like they had no anti-aliasing.

Unsure if this is a model issue or how XR Animator loads VRM models, but I haven't had this happen in other VTuber software.

image

Improving beat detection for incoming system audio / live input

Beat detection has never been accurate on my end. No matter what the monitor sensitivity is, what the gain is, whether the time interval is set to 0.1 seconds, whether beat detection is set to beats only, mixed, or frequencies only; the animation will almost never play to the beat of incoming audio. Sometimes it looks like it plays to the beat, then freezes for a couple of seconds then plays off beat. Even when I play sections of a song that only contain, say, a kick drum, the animation will eventually play off beat.

There must be a way of improving beat detection, or make it so that instead of trying to detect every other beat based on volume peaks (like I suspect it currently does), it should sample a couple of seconds of incoming audio, detect the BPM, and play the animation to the detected BPM instead. Or there could be a way of manually setting the BPM, and playing the animation to that BPM every time audio is detected (independent of the incoming audio's BPM).
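The BPM-first suggestion above can be sketched as a naive autocorrelation over an onset-energy envelope: score each candidate tempo by how well the envelope lines up with itself one beat period apart, and keep the best. This is an illustration of the approach, not the app's current detector; the frame rate and BPM range below are arbitrary:

```javascript
// Estimate BPM by autocorrelating an onset-energy envelope with itself
// at each candidate beat period.
function estimateBpm(envelope, framesPerSecond, minBpm = 60, maxBpm = 180) {
  let best = { bpm: 0, score: -Infinity };
  for (let bpm = minBpm; bpm <= maxBpm; bpm++) {
    const lag = Math.round((60 / bpm) * framesPerSecond); // frames per beat
    let score = 0;
    for (let i = lag; i < envelope.length; i++) {
      score += envelope[i] * envelope[i - lag];
    }
    if (score > best.score) best = { bpm, score };
  }
  return best.bpm;
}

// Synthetic check: a pulse every 50 frames at 100 frames/sec is 120 BPM.
const env = Array.from({ length: 1000 }, (_, i) => (i % 50 === 0 ? 1 : 0));
console.log(estimateBpm(env, 100)); // close to 120
```

With an estimate like this in hand, the animation could be driven by a steady clock at the detected tempo (or a manually entered one), rather than reacting to individual volume peaks.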

ARKit 52 blendshapes to Blender

I got a response on a YouTube video saying that VMD export does not support ARKit blendshapes. I am trying to get the ARKit blendshapes into Blender somehow. Is there any method to do this, VMC protocol or anything? Thanks

Export VPD

The quality of traced poses from images is quite good, and it would be a very helpful tool to quickly make pose sets, so it would be nice if we could have a button/keybind to quickly export a MMD pose.

Feature Request: Linux/Mac builds or build instructions

Given that platforms like the Steam Deck and new M2 MacBooks are growing in popularity, it would be fantastic to have the project available on these systems as a desktop tool. I kindly request that you consider providing Linux and Mac builds, or at least including build instructions tailored for these platforms, since Electron is a cross-platform toolkit. This would make it much easier for users like me to get started and contribute to the project. Thanks a lot for your attention; I'm looking forward to seeing XR Animator become even more accessible and inclusive!

support for .bvh or .fbx export?

I noticed that the software has an option for importing FBX files but doesn't offer an export feature. Although I understand it was primarily made for VTubers, to better serve other use cases such as game development or filmmaking, I believe it should have built-in support for exporting results to .fbx format.

This is the most impressive webcam-to-mocap software I've tested so far, so I really hope you consider adding it.

Fails to open!

I double-click electron.exe but nothing happens. I also tried admin mode. When I run start electron.exe in PowerShell, it prints "electron: --openssl-legacy-provider is not allowed in NODE_OPTIONS". Why?

Auto-split VMD

MMD can only import 20,000 keyframes; automatically splitting the VMD file when it reaches 20,000 keyframes would allow longer recordings to be used.
image
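The requested behavior is mostly a chunking problem: cut the recorded keyframe list into slices of at most 20,000 frames, then write each slice out as its own VMD file. Writing the VMD binary format itself is out of scope here; a sketch of just the chunking step:

```javascript
// Split a flat keyframe array into chunks of at most maxPerFile entries.
function splitKeyframes(frames, maxPerFile = 20000) {
  const chunks = [];
  for (let i = 0; i < frames.length; i += maxPerFile) {
    chunks.push(frames.slice(i, i + maxPerFile));
  }
  return chunks;
}

// e.g. a 45,000-keyframe recording becomes three files: 20000 + 20000 + 5000.
const parts = splitKeyframes(new Array(45000).fill(0));
console.log(parts.map(p => p.length)); // → [ 20000, 20000, 5000 ]
```

One wrinkle a real implementation would need to handle: VMD frame numbers are absolute, so each chunk's keyframes would have to be re-based to start near frame 0 for MMD to play the files back to back.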

macOS build?

It is an Electron app, so there is no reason for there not to be a macOS build, especially when you include Linux builds, and Linux is a less widespread desktop OS.

0.17.0 - Linux: 100% single-core usage

When loading an MMD model and leaving it running, a single CPU core constantly runs at 100%. I have tried changing the setting to 30 fps and back to 60; it's as if it runs at unlimited fps.

No more wireframe display?

Thanks for the Version 13 update. Though I notice the wireframe display has been removed along with some other things. I found the wireframe display useful. Any plans to bring it back?

Global Hotkey defaults to On

Every time I load XR Animator, I have to switch the Global Key setting from On to Off. Is it possible for it to save automatically and default to OFF?

Emotion Tracking

I'm running into an issue where, I assume, emotion tracking isn't working as intended. After I do the full calibration on startup, the first line states 'Neutral', but it is never the neutral face that I created through VRoid. Also, that line never switches from Neutral to any other emotion.

My tracking works fine otherwise. For the mouth movements, I wish I could adjust the sensitivity, as I feel I have to open my mouth quite wide to see the model's mouth move.

The angry face, sad face, etc. never display in XR Animator; they do work, for example, as expressions in VSeeFace.

Can't use Ctrl+Z when XR-Animator is running.

I load XR-Animator on Windows; initially tested v0.13.0, and just tested the latest 0.15.1 with the same issue. Load an MMD model and leave it running.

While working in any app that supports undo, Ctrl+Z has no effect. Closing XR-Animator makes Ctrl+Z work again.

Bad Model Positioning

The current way the program saves animations uses the 全ての親 (Mother) bone to set the model's position. Not only does this make it nearly impossible to position a model in a scene, the animation is also saved as if watched from a camera, moving the model in erratic ways. It would be nice to have floor detection and to stabilize the motion against that floor.

image

Cannot run in Unity

First of all, this is really cool stuff. However, I am not able to track facial blendshape values in the Unity editor; the values always remain 0.
I followed these steps:

Downloaded v0.3.1 and added all files in project

opened holistic scene from mediapipe

created game object and added example.cs script

// Look up the index of the "jawOpen" blendshape by name, then read its value.
int jawOpenIndex = Array.IndexOf(PhizServer.blendshapeNames, "jawOpen");
if (jawOpenIndex < 0) return;   // name not present in the server's list
float jawOpenValue = server.blendshapes[jawOpenIndex];
Debug.Log("jawOpen: " + jawOpenValue);

The jawOpenValue is always 0.

Am I missing anything else?

Texture Shifting on Face

I have a tiled facial texture to enable some custom expressions with different facial textures in VSeeFace. However, when using the model, I find that some facial and head motions cause the texture to 'slide' across the face. Viewing the blendshapes in Unity, none have the texture partially offset, so I don't think that's the issue.

Is there a way to prevent this from happening or do I need to remove the tiled texture?

Suggestions: time for a 10-year-old project to be reborn???

Yes, I had GPT's help writing this :P

First and foremost, I would like to express my admiration for your incredible application. After using vseeface for quite some time, I stumbled upon your program, and I must say, I'm thoroughly impressed. I am confident that I will be making the switch, as I believe your application has enormous potential. However, I do have a few suggestions that I believe could further enhance its functionality and overall polish. As I am not well-versed in coding, I am unsure of the technical feasibility of these suggestions, but I hope you find them valuable.

Modularity: I perceive the primary purpose of this application as loading and tracking MMD or VMC files. To achieve a more streamlined experience, I suggest making the application modular. This would involve removing the backpack and any extraneous elements. However, it would still be beneficial to retain the ability to load these elements as optional plugins. By doing so, you would empower others to create their own plugins, expanding the application's capabilities.

Pose Management: It would be fantastic if poses could be treated as a separate component within the application. Additionally, it would be beneficial to expand on pose functionality by enabling the saving of timed snapshots. Furthermore, incorporating a recording feature for future playback at will would be immensely valuable. Imagine being able to record a dance sequence and effortlessly play it back during a live stream or performance. To facilitate easy sharing and loading, it might be worthwhile to develop a specific file format for these poses or dances, allowing users to save, share, and load them effortlessly. This feature would open up exciting possibilities for creating poses or dances derived from images or videos.

Enhanced Menu System: While the current bubble-based menu system is charming, I believe a more conventional menu interface would greatly enhance the overall user experience. A well-designed, traditional menu system would provide a more intuitive and organized way for users to navigate through the application's features and settings.

Customizable Hotkeys: At present, the hotkey system feels a bit disorganized. A simple solution to address this would be to allow users to assign their own hotkeys. By providing the option for users to personalize their keybindings, such as assigning F13-F24 for poses, you would cater to a wider range of user preferences and workflows.

Unified Settings Menu: It can be slightly confusing to have two separate settings menus with different options. Consolidating all the settings into a single, cohesive menu would make it easier for users to manage their preferences and customize the application according to their needs.

Prop Importing: While this suggestion may warrant a separate plugin, the ability to import props would greatly enrich the user experience. For example, being able to sit on a chair or interact with objects within the virtual environment would add a new level of immersion and versatility.

I hope you find these suggestions helpful in further refining and expanding your already remarkable application. Once again, I commend you on your outstanding work and look forward to witnessing the progress and growth of this incredible software.

Thank you for your dedication and commitment to delivering an exceptional user experience.
