vtubestudio's Issues

Does the Api expose face position?

I read your documentation and see that you can request the model's position, with this as a response:

{
	"apiName": "VTubeStudioPublicAPI",
	"apiVersion": "1.0",
	"timestamp": 1625405710728,
	"requestID": "SomeID",
	"messageType": "CurrentModelResponse",
	"data": {
		"modelLoaded": true,
		"modelName": "My Currently Loaded Model",
		"modelID": "UniqueIDToIdentifyThisModelBy",
		"vtsModelName": "Model.vtube.json",
		"vtsModelIconName": "ModelIconPNGorJPG.png",
		"live2DModelName": "Model.model3.json",
		"modelLoadTime": 3021,
		"timeSinceModelLoaded": 419903,
		"numberOfLive2DParameters": 29,
		"numberOfLive2DArtmeshes": 136,
		"hasPhysicsFile": true,
		"numberOfTextures": 2,
		"textureResolution": 4096,
		"modelPosition": {
			"positionX": -0.1,
			"positionY": 0.4,
			"rotation": 9.33,
			"size": -61.9
		}
	}
}

Is it possible to create a custom response, or to get a specific body part? For example, I want to request the position of the mouth. Can I get that specific position through the API?
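For context, the response above is what VTube Studio returns for a CurrentModelRequest. Below is a minimal sketch of building that request payload in Python; sending it over the API websocket and plugin authentication are omitted, and generating the requestID from a fresh UUID is just one reasonable choice:

```python
import json
import time
import uuid

# Build the CurrentModelRequest that produces the response shown above.
# Transport (the API websocket) and authentication are intentionally omitted.
def build_current_model_request():
    return {
        "apiName": "VTubeStudioPublicAPI",
        "apiVersion": "1.0",
        "timestamp": int(time.time() * 1000),  # milliseconds, as in the response
        "requestID": str(uuid.uuid4()),        # any unique string works
        "messageType": "CurrentModelRequest",
    }

payload = json.dumps(build_current_model_request())
```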

API Response is blocking

I am developing a plugin that makes a websocket connection from a mobile device via 5 GHz WiFi. However, the signal is not strong, which makes the connection unreliable.

I mainly use InjectParameterData at 60 FPS, and the VTS program just freezes. I used dnSpy to decompile the assembly (forgive me for reversing your program, but it was urgent) and found that the Update function uses a while loop to dequeue every request.
It seems the processing cost exceeds 16 ms, so it cannot finish within a frame, which causes the freeze. However, a localhost plugin like VBridger doesn't freeze, which makes me think the problem is in the response code.

And in the executor of InjectParameterData, it uses this to send the response:

VTubeStudioAPI.sendToSession<InjectParameterDataResponse>(basicResponse);

Even though the API says it is async, it is not.

After I wrapped this code in an async block and recompiled the assembly, the program never froze again, and my plugin works like a charm.
Please consider moving all response code into truly async functions, rather than relying on the buggy implementation that websocket-sharp provides.

No Error Response to HotkeysInCurrentModelRequest when not providing modelID in data

When sending a "HotkeysInCurrentModelRequest" without "modelID" but including "live2DItemFileName" in the data, no response is given.

Like this:

"data": {
    "live2DItemFileName": "FILENAME"
}

With no error, I couldn't tell what went wrong, since the documentation says:

If both "modelID" and "live2DItemFileName" are provided, only "modelID" is used and the other field will be ignored.

This is false, as my workaround for the issue is to provide an empty "modelID", like this:

"data": {
    "modelID": "",
    "live2DItemFileName": "FILENAME"
}

Please update the documentation, and fix the missing error response when only "live2DItemFileName" is provided. :)
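The workaround above can be captured in a small helper. This is a sketch of my own (the helper name is hypothetical); it just builds the data block with the empty modelID included explicitly:

```python
# Hypothetical helper reflecting the workaround above: always include an
# empty "modelID" alongside "live2DItemFileName" so the request gets a
# response instead of silently hanging.
def build_hotkeys_request_data(live2d_item_file_name):
    return {
        "modelID": "",  # workaround: empty string instead of omitting the key
        "live2DItemFileName": live2d_item_file_name,
    }

data = build_hotkeys_request_data("FILENAME")
```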

BUG: Virtual camera install.bat (regsvr32) path issue

Reference image from VDO.Ninja when selecting VTubeStudioCam as a camera source.

The Issue:

When Steam is used to import an existing game library, some Japanese VTubers have used Japanese characters in the directory name.

Normally, this isn't an issue. Games and software installed through Steam work just fine, and so does VTubeStudio to some extent.

The problem is installing the virtual camera with regsvr32: it does not handle non-ASCII characters in the install path.

When install.bat is run as admin, it forces the registration of the UnityCaptureFilter DLLs and tells the user the install was successful. However, any non-ASCII characters in the install path are left blank in the Windows Registry (in my testing, at least).

Example:
This H:\ゲーム\steamapps\common\VTube Studio\VTube Studio_Data\Install_Webcam\UnityCaptureFilter64bit.dll
will become this H:\ in the Windows Registry, resulting in the camera being listed but not functional.

When install.bat is not run as admin, it fails as expected, but immediately closes the terminal window without showing the user the error.

This won't be an issue if VTubeStudio is installed in the default C:\Program Files (x86)\Steam\steamapps\common directory.

This is only an issue for people who like to move their "games" to a dedicated game drive and give the folder a custom name with Japanese or Chinese characters.


SOLUTION (hopefully you can read or translate English), for those who come across this issue before the devs are able to address it:

  • Install VTubeStudio in the default Steam directory on your C: drive,
  • or recreate your Steam Library folder using only ASCII alphanumeric characters.
    Example: rename ゲーム to GAME

Request: ability to override auto-blink and auto-breathe tracking

In the interest of users needing to modify their parameter configuration as little as possible, I think it would be good to have an auto-breathe setting that only applies its oscillation when no tracking data is being piped to the designated input parameter.

For example, my vts-heartrate plugin provides a parameter that can be used to control Breathing. But if the user modifies their model to have the custom parameter control their Breathing output, then when they are doing a normal stream without the plugin active, their model simply will not breathe.

Conversely, if they have auto-breathing set up for their Breathing output, that will override any tracking data to that output parameter.

As such, it would be nice if one could connect an input parameter to an output, enable auto-breathing to only kick in when no tracking data is provided, and never need to worry about altering the parameter configuration again.
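The requested fallback could be sketched roughly like this (a hypothetical helper, not the VTS implementation): pass tracking data through while it is fresh, and switch to a sine-wave "auto-breathe" oscillation once no data has arrived for a timeout:

```python
import math

# Hypothetical sketch of the requested behavior: use the live tracked value
# while data is fresh, otherwise fall back to a 0..1 breathing oscillation.
# `timeout` and `period` (seconds) are illustrative defaults.
def breathing_value(last_tracked_value, last_update_time, now,
                    timeout=1.0, period=4.0):
    if now - last_update_time < timeout:
        return last_tracked_value            # tracking is active: pass through
    phase = (now % period) / period          # no data: oscillate between 0 and 1
    return 0.5 + 0.5 * math.sin(2 * math.pi * phase)
```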

BUG: iphone face capture has problems with eye tracking

What happened

When I use iPhone face capture, I notice that when I try to squint, the captured parameter jumps abruptly from 0.4 to 0. This prevents me from making fine eye movements.

What I tried

  1. I tested both camera capture mode and Nvidia capture mode, and neither had this problem.
  2. I also turned on prprlive, another facial capture application often used by VTubers. Both applications used the iPhone's capture mode at the same time, but prprlive did not have this problem.
  3. I tried capturing on the iPhone alone, without connecting to the computer. The problem was the same, but milder: the parameter jumped from 0.25 to 0.
  4. I asked friends who also use iPhone face capture, and they have encountered this problem too. This may not be an isolated case, but a common problem.
  5. I have tried modifying the sensitivity and value mapping, but the wrong value seems to come from the face-tracking stage, so my attempts had no effect.

My guess

I suspect this problem is caused by the core algorithm of the iPhone capture mode.

My environment

iPhone: iPhone XR (my friends also use this model, because it is the cheapest model with Face ID support)
VTube Studio: stable version installed from Steam

I have used translation software; if anything is ambiguous, please let me know and I will explain in detail.

Access to Movement Config via Plugin

Excluding the current model screen/absolute position settings, it would be nice to be able to plug data into the current movement config via the API!

How to get an API key?


I have uninstalled and reinstalled the program and performed an integrity check, but the issue persists. How can I obtain a personal API key?

iOS-App 1.26.6 does not ask for Local Network Access (breaking 3rd Party Client support)

[UPDATE] I reinstalled VTubeStudio to make sure I got all the permission prompts, and it never requests Local Network Access. As a result, that permission is also unavailable in the App or Privacy section, and VTS simply doesn't see LAN IPs anymore.
image

[ORIGINAL ISSUE TEXT]

I'm testing this with VTubeStudio 1.26.6 on iOS 16.5.1.

It looks like the current App Store version gets something messed up when enabling '3rd Party PC Clients' support: it binds only and exclusively to my mobile-data IPv4 address. Check this screenshot, taken while my phone is connected to WiFi:

(screenshot)

The 10.x.x.160 is the public IP address my mobile carrier assigns to the phone.
VTubeStudio ignores / doesn't see the WiFi address 192.168.x.x that is also available.

If I disable mobile data it fails to find any available IPv4 address even though my iPhone shows the correct IPv4 in system settings and is reachable via ping / other tools.

(screenshot)

As is, this completely breaks the ability to connect with 3rd-party clients: none of them support connecting via IPv6, and even if I point them at my IPv4, VTubeStudio is simply not listening on that address.

[EDIT] I also tried disabling and re-enabling the "Local Network Access" permission in System Settings but that didn't change anything.

Add functionality to retrieve and change Physics Settings

Would be great if we could work with the Physics Settings to create some interesting tools! (Hotkey functionality would also be great). I noticed that some streamers like to set their physics to maximum for redeems, so I think this feature would see some use!

I could also imagine that people could craft custom Overlays with leaves blowing through the screen in the direction of the "wind" parameter.

cant access virtual camera

I'm trying to access VTube Studio's virtual camera (for personal use) using cv2.VideoCapture(1), but I get:

[ERROR:[email protected]] global obsensor_uvc_stream_channel.cpp:156 cv::obsensor::getStreamChannelGroup Camera index out of range

I tried different cameras (my own built-in at index 0, and OBS's virtual camera at index 2) and they work just fine, but this one returns an error. I read VTube Studio's license agreement in case this was forbidden by VTube Studio or something, and found nothing related to it. Also, I can access the camera from OBS.

I'm on Windows 10, Python 3.10.11, cv2 version 4.8.0 (obtained via print(cv2.__version__)).

Feature : Dynamic update of translation

From my understanding, new translation files are shipped when a new update of VTS is released. This means that, with the current process, translations are systematically one update behind the latest version.

This can be uncomfortable for people who are not fluent in English, making accessibility complicated. One idea would be to fetch a repo at VTS launch to check whether a new language file is available for the selected language, and download it if so.

This would enable us, as translators, to push out translations more quickly between two versions of the software, to facilitate accessibility for everyone.

BUG: InjectParameterDataRequest accepts invalid value data

Explanation

This bug echoes observations we have made about the VTS-Sharp framework: FomTarro/VTS-Sharp#36

A custom parameter for which I injected a value was never updated, but I received a positive response from the API.

On further investigation, it appears that the API endpoint does not check the type of the given value for the keys:

  • value
  • weight

To test this, let's consider a parameter InputTestParameter defined as:

{
  "parameterName": "InputTestParameter",
  "explanation": "This is my new parameter.",
  "min": 0,
  "max": 1,
  "defaultValue": 0.5
}

Exchanged message

Value outside the bounds

Sent

{
  "apiName": "VTubeStudioPublicAPI",
  "apiVersion": "1.0",
  "timestamp": 1701634822409,
  "requestID": "52297ca7-3332-4585-aa31-79e4f8a9525e",
  "messageType": "InjectParameterDataRequest",
  "data": {
    "faceFound": false,
    "mode": "set",
    "parameterValues": [
      {
        "id": "InputTestParameter",
        "value": 10
      }
    ]
  }
}

Received

{
  "apiName": "VTubeStudioPublicAPI",
  "apiVersion": "1.0",
  "timestamp": 1701634822421,
  "messageType": "InjectParameterDataResponse",
  "requestID": "52297ca7-3332-4585-aa31-79e4f8a9525e",
  "data": {}
}

Value is null

Sent

{
  "apiName": "VTubeStudioPublicAPI",
  "apiVersion": "1.0",
  "timestamp": 1701635150890,
  "requestID": "f599ccdf-0bdf-4069-9708-06d0106755c3",
  "messageType": "InjectParameterDataRequest",
  "data": {
    "faceFound": false,
    "mode": "set",
    "parameterValues": [
      {
        "id": "InputTestParameter",
        "value": null
      }
    ]
  }
}

Received

{
  "apiName": "VTubeStudioPublicAPI",
  "apiVersion": "1.0",
  "timestamp": 1701635150899,
  "messageType": "InjectParameterDataResponse",
  "requestID": "f599ccdf-0bdf-4069-9708-06d0106755c3",
  "data": {}
}

Value is string

Sent

{
  "apiName": "VTubeStudioPublicAPI",
  "apiVersion": "1.0",
  "timestamp": 1701635368284,
  "requestID": "40a8f809-f427-49ab-a288-66fc8e3407fc",
  "messageType": "InjectParameterDataRequest",
  "data": {
    "faceFound": false,
    "mode": "set",
    "parameterValues": [
      {
        "id": "InputTestParameter",
        "value": "arbitrary string"
      }
    ]
  }
}

Received

{
  "apiName": "VTubeStudioPublicAPI",
  "apiVersion": "1.0",
  "timestamp": 1701635368294,
  "messageType": "InjectParameterDataResponse",
  "requestID": "40a8f809-f427-49ab-a288-66fc8e3407fc",
  "data": {}
}

Weight is a string (or null)

Sent

{
  "apiName": "VTubeStudioPublicAPI",
  "apiVersion": "1.0",
  "timestamp": 1701635771971,
  "requestID": "a70944b1-f23f-4cf3-9d79-4d716579bcc5",
  "messageType": "InjectParameterDataRequest",
  "data": {
    "faceFound": false,
    "mode": "set",
    "parameterValues": [
      {
        "id": "InputTestParameter",
        "value": 0.9,
        "weight": "my string"
      }
    ]
  }
}

Received

{
  "apiName": "VTubeStudioPublicAPI",
  "apiVersion": "1.0",
  "timestamp": 1701635771982,
  "messageType": "InjectParameterDataResponse",
  "requestID": "a70944b1-f23f-4cf3-9d79-4d716579bcc5",
  "data": {}
}

Suggestion

The current behavior is misleading when using the API, as the parameter is not actually injected.
It would be helpful to return an explicit error when the type or the values of these fields are incorrect.
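Until the API reports such errors, a plugin can guard itself client-side. The sketch below (helper name is my own) rejects the null/string payloads shown above before they are ever sent; note it only checks types, not the min/max bounds:

```python
import numbers

# Client-side guard reflecting the suggestion above: reject non-numeric
# "value"/"weight" entries before sending, since the API currently accepts
# them silently without injecting anything. bool is excluded explicitly
# because bool is a subclass of int (and hence of numbers.Real) in Python.
def validate_parameter_values(parameter_values):
    for entry in parameter_values:
        value = entry.get("value")
        if not isinstance(value, numbers.Real) or isinstance(value, bool):
            raise TypeError(f"'value' for {entry.get('id')!r} must be a number, got {value!r}")
        weight = entry.get("weight", 1.0)  # weight is optional, defaults numeric
        if not isinstance(weight, numbers.Real) or isinstance(weight, bool):
            raise TypeError(f"'weight' for {entry.get('id')!r} must be a number, got {weight!r}")
```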

Request: Cyclic and Temporal Parameters

It would be incredibly useful to be able to create input parameters that automatically increment over time with the possible ability to set a cycle length before they automatically reset. I imagine this could be done via a plugin but it seems like the kind of thing that opens fundamental possibilities like looping effects and time-based events without needing any external programming.

I know at first glance for some this will sound identical to auto breath, but because breath oscillates as opposed to overflowing and resetting, anything dependent on it has to be designed around having the motion reversed, where this wouldn't.

DISPARITY: modelName value is different between Request and Event API.

The modelName value returned via the ModelLoadEvent subscription API differs from the modelName value returned via the GetCurrentModel request API, even though the payloads have the same structure; one would expect them to behave the same.

For example, when this one loads, the event API gives me modelName: 'skeletom_real' while the request API gives me modelName: 'skeletom'. Akari, for example, comes back as akari over request and Akari over event.

image

Please separate `$` from commands in the wiki

In the Linux guide on the wiki, the commands are like so:

$sudo pacman -Sy python39

This makes it easy to accidentally include the $ character when copying commands. Instead, I propose that the commands be separated from the $ character by a space, like so:

$ sudo pacman -Sy python39

This is standard for any guide to Linux commands: separate the prompt from the command, with $ being a regular user and # being root. The prompt in an actual terminal is normally separated from the command, too.

Also, please never use pacman -Sy on its own. Either run pacman -Syu for a full upgrade, or pacman -S to install from the mirrors that correspond to the current state of the system.

Feature Request: Request to see if expressions are set/unset

The ability to see which expressions are set and unset would allow for potentially fluid transitions/swaps between expressions.

Proposal:

One function added to the API, ExpressionStateRequest (this could also potentially be rolled into the hotkey request, since that returns only expressions that can be toggled via the API).

Potential implementation:

REQUEST

{
	"apiName": "VTubeStudioPublicAPI",
	"apiVersion": "1.0",
	"requestID": "SomeID",
	"messageType": "ExpressionStateRequest",
	"data": {
		"expressionID": "Optional_ExpressionFileName",
	}
}

If an ExpressionFileName is provided, the returned array will have a single item, containing only that expression.

It might also make sense to expose expressions via unique IDs.

RESPONSE

{
	"apiName": "VTubeStudioPublicAPI",
	"apiVersion": "1.0",
	"timestamp": 1625405710728,
	"requestID": "SomeID",
	"messageType": "ExpressionStateResponse",
	"data": {
		"modelLoaded": true,
		"modelName": "My Currently Loaded Model",
		"modelID": "UniqueIDOfModel",
		"expressionStates": [
			{
				"file": "myExpression_1.exp3.json",
                                "expressionSet": true
			},
			{
				"file": "myExpression_2.exp3.json",
                                "expressionSet": false
			}
		]
	}
}

Use Case:

In my case my model (an MLP:FiM style unicorn wearing an umbreon kigurumi) has a base model with no mane:

image

and two (relevant) available expressions:

  • "Hood Down" which places a normal mane on the model:
    image
  • and "Hood Up" which puts the hood up:
    image

To swap between the two, I have to make sure to set/unset both, or I get a weird chunk of mane outside the hood:
image

Currently I am using Cazzar's plugin for the Stream Deck and running a multi-action to set/unset both simultaneously. In one direction it looks pretty OK; in the other, it will always flash to the no-mane look for a fraction of a second. However, with the proposed change, I (and others) could set and unset the expressions in whatever order looks best on stream.

Example of how that might work:

bool isHoodUp = <API Request Result>
if(isHoodUp) {
    <SendRequest ToggleHoodDown>
    <SendRequest ToggleHoodUp>
}  else {
    <SendRequest ToggleHoodUp>
    <SendRequest ToggleHoodDown>
}

(note how the order is swapped in each half of the if/else)

BUG: Idle animation wrongfully overwritten by face tracking when camera is OFF

When you launch VTS, the idle animation is playing but all the trackable inputs are not moving.

When you turn on the camera and lose tracking, the idle animation will play properly.

Looking at the documentation, the idle animation should only be overwritten, when there's actual tracking happening, but there's not:
https://github.com/DenchiSoft/VTubeStudio/wiki/Interaction-between-Animations%2C-Tracking%2C-Physics%2C-etc.
image

The same issue happens when you turn the camera off mid-tracking: the idle animation is still overwritten by the now non-existent tracking.

There's a workaround (although very inconvenient):

  • Turn on the camera and wait until it tracks your face
  • lose tracking (by covering the camera or moving out of frame)
  • turn off the camera

[macOS] Window freezes, unable to click

My VTube Studio suddenly became unresponsive to clicks. I know that using fullscreen mode can resolve this issue, but it's not a perfect solution: the high resolution causes my CPU usage to be too high, and not every attempt to open or close the software in fullscreen mode succeeds. I've already tried updating my system and reinstalling the software, but reinstalling the entire system is too costly for me.
I am using a 14-inch MacBook Pro (M3 Max)
macOS 14.2–14.3 beta

Feature: feed in data for any default or custom parameter with a "lock" label

You have to re-send data for a parameter you want to control with your plugin at least once every second. Failure to do so will result in the parameter being considered "lost" and it will go back to the value of whatever was controlling it before.

I don't think this design is elegant. I would prefer a boolean "lock" field in the request, such as:

{
	"apiName": "VTubeStudioPublicAPI",
	"apiVersion": "1.0",
	"requestID": "SomeID",
	"messageType": "InjectParameterDataRequest",
	"data": {
		"parameterValues": [
			{
				"id": "FaceAngleX",
				"value": 12.31
			},
			{
				"id": "MyNewParamName",
				"weight": 0.8,
				"value": 0.7,
				"lock": true
			}
		]
	}
}

Sometimes I want to change my model's clothes for a few minutes, so I send WebSocket requests to VTS in a loop with an interval of 500 ms. But this is not enough. I tested this method, and there is a serious problem: the WebSocket sometimes has a delay of more than 1 second, which I think is caused by the VTS plugin server. This means I cannot keep my parameter value; it sometimes blinks back to its default value!

Think of the character Pio in the game "Poison Maker": she has so many clothes, and if she wanted to be a VTuber, she might need a plugin to help her choose an outfit. That is the kind of need I have. Help, please!
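Until something like a "lock" field exists, one workaround is to re-send the held values on a timer comfortably below the one-second timeout. A sketch (the class and names are my own; `send` is whatever caller-supplied function performs the actual InjectParameterDataRequest):

```python
import threading
import time

# Keep-alive sketch for the current API (no "lock" field): re-send pinned
# parameter values well under the one-second timeout so they are never
# considered "lost" and never snap back to their default values.
class ParameterKeepAlive:
    def __init__(self, send, values, interval=0.25):
        self._send = send            # e.g. a function that sends the websocket request
        self._values = values        # the "parameterValues" list to keep injecting
        self._interval = interval    # well below the 1 s timeout
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self._send(self._values)            # re-inject the held values
            self._stop.wait(self._interval)     # sleep, but wake early on stop()

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

Note the interval is deliberately much shorter than the timeout, so even an occasional delayed request does not let the parameter lapse.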

Broader control over art meshes

Similar to how we can tint art meshes, introducing more effects and parameters would be really fun to play with.

For example, we could have:

  • Grow [art meshes] by <multiplier> over <n> seconds (0 is instant)
  • Shrink [art meshes] by <multiplier> over <n> seconds (0 is instant)
  • Set opacity of [art meshes] to <number 0:1> over <n> seconds (0 is instant)
  • Offset [art meshes] by <offset coordinates> over <n> seconds (0 is instant)

Request: Ability to translate phonemes

Explanation

Phonemes are not universal and depend greatly on the language.

For example, "E" is not pronounced the same in German, English or French.

As a result, this can lead to misunderstandings and errors when setting parameters on a model, because the shape of the mouth does NOT match the right phoneme.

Here's a video for better understanding

Suggestion

A simple workaround would be:

  • to let phonemes be translated for each language
  • to name the current parameters differently, or to change the mapping according to the user's language

Add connect Android via USB Debugging mode

As the title says, I want the "wired connection to Android phone via USB Debugging" functionality.

Partly because I find that Android users get quite annoyed every time the access point malfunctions, but if they want to use a wire, they have to buy an iPhone!

Partly because I prefer Android over the iPhone, because it is open source and I can install a custom Android operating system that I like!

Tracking Lost Switch via API

Currently, when using entirely different tracking from native VTS, the "tracking lost" animation doesn't work. It would be nice to have either a detection (no values moving) to switch the animation, or an API switch that can be triggered externally.

InjectParameterDataRequest

It seems like there are some frame drops when you send the data fast. What is the rule with this API: should we only send every second, or can we send every 16 ms? Thanks!
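For what it's worth, the documentation quoted elsewhere on this page says injected parameters only need re-sending at least once per second, and a single InjectParameterDataRequest can carry multiple entries in parameterValues (as the request examples on this page show). So a plugin running at 60 FPS can at least batch everything into one request per frame instead of one request per parameter. A payload-only sketch (helper name and parameter IDs are illustrative):

```python
import time

# One InjectParameterDataRequest can carry many "parameterValues" entries,
# so a 60 FPS plugin can batch all parameters into a single request per
# frame, keeping the request rate at the frame rate. Payload only; the
# websocket transport and authentication are omitted.
def build_inject_request(values, request_id="SomeID"):
    return {
        "apiName": "VTubeStudioPublicAPI",
        "apiVersion": "1.0",
        "timestamp": int(time.time() * 1000),
        "requestID": request_id,
        "messageType": "InjectParameterDataRequest",
        "data": {
            "parameterValues": [
                {"id": pid, "value": value} for pid, value in values.items()
            ]
        },
    }

frame = build_inject_request({"FaceAngleX": 12.31, "MouthOpen": 0.8})
```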

Question: How to further scale Live2D model

My Live2D model is too large and exceeds the screen, even when I scroll the mouse wheel to the minimum scale. I tried editing the "Scale" key in the vtube.json directly, but it doesn't seem to work; the minimum valid scale value is 0.05.

Help needed, Thanks!

Request: Idle Animation access via API

It would be nice to directly read the current idle animation, read the list of all idle animations, and set the idle animation via the API. For a plugin there is currently no way to temporarily switch to an idle animation, which makes them impossible to use except by toggling different hotkeys, which requires a lot of user setup.

Movement Config uses the FacePosX/Y/Z when using API

While the config can normally be adjusted to exactly how much the user wants it to react, the API input can't be fine-tuned separately from the HeadPosX/Y/Z values. This both leads to jitter (as the raw input cannot be smoothed) and makes it impossible to control how the model moves.

A solution would be to allow people to change what the movement-config inputs are, almost like the parameter boxes, but for the movement config.
This would open up the possibility of using hand location (for example) to control the movement config, for a sock-puppet-like effect, or any number of other controls (including custom ones from plugins)!
A smoothing control would also be beneficial, as base values can get quite jittery and there's no way to help it at the moment.

Overall, in addition to position, a rotation added to the config might be fun (thinking about the sock puppet), giving users a greater degree of control beyond the base rigging of their models.

In summary: add input/smoothing options to the movement config, as well as rotation (the current mouse-wheel control could become an additive modifier on the rotation input, so you still get overall rotation control).

Super rough mockup below:

image

Allow user to remove settings button on a web item



Please allow the user to remove that settings button, or make it appear only on hover. I use the web item as a speech bubble.

My alternative workaround was increasing the item size so that the button goes off-screen, but that is inconvenient.

Preload models with a set zoom/position

I find myself wanting to let users switch my model, but there is visual downtime that could be avoided.
If this gets implemented, I could create a web plugin that allows viewers to use points to preload and swap models. The swap could then appear instantaneous from the viewers' POV.

Even an API option for swapping models with preload would help, e.g. lazySwap(position, zoom). It could load in the background and not unload the current model until the new one is ready to render.

Request: an option to change the priority between value-providers (if possible)

In some cases I would like Expressions to take priority over Physics.

For example, I would make the [hand] parameter be controlled by physics, but have it be overridable by an expression or a one-time animation toggle. Surely there are workarounds to achieve similar results, but if this is possible, it would be great.

Bug: Face Position X/Y/Z Don't Calibrate on iPhone

Calibrating when using a webcam re-calibrates your Face Position X/Y/Z, but the same is not true for iPhone. When using iPhone as your tracking source, re-calibrating from the iPhone does not affect the Face Position X/Y/Z.

Add ability to control scene lighting

I would like the ability to alter the scene lighting through the API.
This would allow me to set the overlay color, brightness and overlay threshold.

With this I could create different effects on my character, like a rainbow effect.

Get the list of used tracking parameters

Is there a way to get the list of tracking parameters that are used (i.e., mapped to a Live2D parameter) in a model? If not, this information could be useful in plugins that manage many parameters.

For example, a query on Live2D parameters in the current model could be great:

{
	...
	"messageType": "Live2DParameterListResponse",
	"data": {
		...
		"parameters": [
			{
				"name": "MyLive2DParameterID1",
				"mappedWith": "MyCustomParameterID1",
				"value": 12.4,
				"min": -30,
				"max": 30,
				"defaultValue": 0
			},
			...
		]
	}
}

custom item that is a html page

Hey there.

I was checking out how items work, and at the moment they do not support anything other than images.

Would it be possible in the future to add something like a browser window as an item (similar to a browser source in OBS)?

I made a program in JavaScript that renders speech bubbles and does TTS, and it fits perfectly on a VTuber, but its position is static at the moment, and I would like to attach it to the face of the model. So I thought about making it work like an item; would this be possible in the future?

Here is an example of my program.

If this is currently not possible, would it be possible to develop a plugin in VTube Studio to integrate Ultralight? It makes it possible to build UI elements with HTML, CSS and JavaScript, and it is compatible with Unity.
I just don't know if your software allows creating plugins inside VTube Studio itself.

Does the API work with the mobile VTS apps?

I want to get tracking data from the VTube Studio app on my phone and send it to an app I have made to run on the desktop. Does the API listed here apply to the mobile apps at all?

I've noticed that there are options to stream to PC via WiFi (I assume this goes to VTube Studio on PC), and there is a third-party integration switch for use with VSeeFace. Is there an API for these that can be used by other apps on the same network?

Any chance to get a non-steam / apple-silicon version for macOS?

It's nice to have VTubeStudio available for Mac users at all, but having to install Steam and all of the bloat services that come with it is really a pain.

If it's about the storefront / distribution, or having the option to offer paid extras, you can get that from the regular App Store on macOS already. It would also probably help people find VTubeStudio on macOS a bit more easily.

If the App Store isn't interesting, then 🙏 please consider releasing at least a non-Steam version for macOS, built as a universal binary / Apple Silicon for better performance.

Request: Allow plugins to load items that are pinned to the model

I'm implementing a plugin in which items are dynamically attached to the model. The UI exposes an option to pin items to the model, and the item list API also lets plugin developers inspect which items are pinned, but it is currently not possible (to my knowledge) to create pinned items from plugins.

API support for adjusting artmesh screen colors

At present a ColorTintRequest can apply multiply colors to ArtMeshes, but no such API exists for adjusting their screen colors. Making it possible to adjust them both, preferably in the same request, would offer plugins more flexibility with color adjustment so they aren't limited to just darkening.
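For context on why screen colors matter: with channel values normalized to 0..1, the standard multiply blend can only darken, while the standard screen blend can only lighten, which is exactly why a multiply-only ColorTintRequest limits plugins to darkening:

```python
# Standard blend formulas on normalized (0..1) channel values.

def multiply_blend(base, tint):
    # result <= base for tint in 0..1: multiply tinting can only darken
    return base * tint

def screen_blend(base, tint):
    # result >= base for tint in 0..1: screen tinting can only lighten
    return 1 - (1 - base) * (1 - tint)
```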
