
exvr

Virtual reality is a nascent field with a significant amount of potential, and it is criminally underserved by its current ecosystem. Highly social, immersive experiences are possible today, but there isn't yet enough of a market for the traditional games industry to develop them.

It doesn't have to be that way. Through extensive reverse engineering and the construction of a shared framework, we can look at bringing VR to flatscreen games in a way that both honours them and makes them so, so much more.

You've always been able to create characters in games and interact with other players. The only thing stopping us from being our characters is execution.

Objective

To build a general-purpose framework for adding VR to existing flatscreen games, with the following high-level goals to guide us (excerpt from xivr readme):

  • We should create a high-quality, comfortable, native experience comparable to an official project.
  • VR and non-VR players should be able to play together, just like with VRChat.
  • It should be possible to play the actual game to completion, even if this is not necessarily the case on day one.
  • To the best of our ability, we should be open-source so that we can accept contributions from anyone.
  • The experience we create should entice flatscreen players to get headsets, and non-players to become players.
  • Additional hardware capabilities should be supported where possible, including facial expression tracking, feet tracking, and more.
  • People should be able to have real social experiences.

This will necessitate a significant amount of work, including per-game reverse engineering, the development of generic VR abstractions, discovering new fields of VR design, external networking for the synchronisation of pose data, infrastructure around servers, and much more. It is not a project I intend to complete alone.

Projects

xivr

The lead project of exvr is xivr, an experimental project to bring Final Fantasy XIV to VR. It does not work, but it could. Only time and effort will tell.

common

Contains the crates that are shared between exvr projects. As xivr code matures, more of it will be moved to common, and xivr will consume common code to provide its functionality.


Issues

Securely authenticate the player with XIVR servers

We don't want players to be able to impersonate other players, nor do we want players to provide sync data when they're not logged in.

To do this, we need a way of determining whether the user has successfully logged into FFXIV and borrowing that credential for ourselves. I believe the game uses some form of OAuth token, but more research is necessary.
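As a starting point, the handshake between the client and an XIVR server might look like the sketch below. The message shapes are pure assumption (nothing here reflects Square Enix's actual login flow), and how the server validates the token is deliberately left opaque:

```rust
use serde::{Deserialize, Serialize};

/// Sent by the client after it has extracted whatever session
/// credential the game received at login. The exact form is
/// unknown; this README only speculates it is an OAuth-style token.
#[derive(Serialize, Deserialize)]
struct AuthRequest {
    /// Opaque credential captured from the running game client.
    session_token: String,
    /// Character the client claims to be playing.
    character_id: u64,
}

/// Returned by the XIVR server once it has validated the token
/// out-of-band. The session key is what subsequent sync packets
/// would be authenticated with, so a stale token can't impersonate
/// a logged-out player.
#[derive(Serialize, Deserialize)]
enum AuthResponse {
    Accepted { xivr_session_key: [u8; 32] },
    Rejected { reason: String },
}
```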

Transmit voice chat from and to players

Players should be able to speak to each other naturally with their desired voice (#16 and #17). To achieve this, their voice needs to be streamed to the server, which then streams it to the other players, preserving input quality at as low a latency as possible.

I don't know how to do this. What are the codecs used for this? What do existing voice chat solutions do? Can one be integrated with ease, or do we have to roll our own?
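Partially answering the codec question: Opus is what most modern VoIP stacks (WebRTC, Mumble, Discord) use. A minimal encode loop with the Rust `opus` crate might look like this; the 48 kHz mono / 20 ms frame configuration is a typical choice, not a requirement:

```rust
use opus::{Application, Channels, Encoder};

fn main() -> Result<(), opus::Error> {
    // 48 kHz mono is the usual configuration for voice. Opus frames
    // are commonly 20 ms, i.e. 960 samples at 48 kHz.
    let mut encoder = Encoder::new(48_000, Channels::Mono, Application::Voip)?;

    // Pretend this is one 20 ms frame of microphone input.
    let pcm_frame = vec![0i16; 960];

    // Encode into a packet small enough to ship over UDP; `encode`
    // returns the number of bytes actually written.
    let mut packet = vec![0u8; 4000];
    let len = encoder.encode(&pcm_frame, &mut packet)?;
    packet.truncate(len);

    // `packet` would then be sent to the sync server for fan-out.
    println!("encoded {} samples into {} bytes", pcm_frame.len(), len);
    Ok(())
}
```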

Generate IK for your own local character

To generate IK, we will need an IK solver, probably the dev branch of https://github.com/TheComet/ik. This will give us a pose for the joints we solve for, which we will then need to apply to the characters (a sketch of the core solve follows the list below).

Potential points of concern:

  • whether or not the selected library is ready for "production" use
  • finding and applying the relevant constraints for a body
  • gracefully degrading when full-body tracking data is not available
  • gracefully degrading when tracking is lost
  • avoiding self-occlusion
  • creating believable results in real-time
  • determining user limb lengths
    • could potentially be handled by an external program (LÖVR?)
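For reference, whichever library is chosen, the core of most real-time solvers resembles FABRIK. Here's a toy unconstrained single-chain pass using `glam`, just to pin down the shape of the problem; joint constraints, per the list above, are exactly what it omits:

```rust
use glam::Vec3;

/// One FABRIK (Forward And Backward Reaching IK) pass over a chain
/// of joint positions, where `lengths[i]` is the bone length between
/// joints i and i+1. Real use would iterate until convergence and
/// apply per-joint constraints; this sketch does a single
/// unconstrained pass over a chain of at least two joints.
fn fabrik_pass(joints: &mut [Vec3], lengths: &[f32], target: Vec3) {
    let root = joints[0];

    // Backward pass: snap the end effector onto the target, then
    // pull each parent joint to the correct bone length behind it.
    *joints.last_mut().unwrap() = target;
    for i in (0..joints.len() - 1).rev() {
        let dir = (joints[i] - joints[i + 1]).normalize();
        joints[i] = joints[i + 1] + dir * lengths[i];
    }

    // Forward pass: re-anchor the root and push the chain back out
    // towards the target, preserving bone lengths.
    joints[0] = root;
    for i in 0..joints.len() - 1 {
        let dir = (joints[i + 1] - joints[i]).normalize();
        joints[i + 1] = joints[i] + dir * lengths[i];
    }
}
```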

Render character model in first-person

When you enter first-person mode in FFXIV, your character model and that of your mount/anything else you're attached to are hidden. We should either find a way to show this, or hijack the third-person camera to render first-person.

I am leaning towards the latter, as the first-person mode seems compromised in several ways (you cannot go first-person while sitting, etc.).

Apply real-time voice conversion to the user's voice

Based on my preliminary research, machine learning is at the stage where real-time voice conversion is possible, but not easy. The largest proof of concept in this area is realtime-yukarin, which appears to work surprisingly well.

Open questions:

  • What's the current state of the art?
  • How easy is it to integrate?
  • Can we produce the full range of voices you'd expect from FFXIV characters?
  • Can the proposed solution handle gender-bending?
  • What is the latency on conversion?

Create a server to synchronise player IK data + voice chat

Each data centre should have a corresponding XIVR server, hosted as close to it as possible, responsible for receiving the additional sync data from XIVR clients and distributing it to other peers (a sketch of the sync payload follows the requirements list).

Requirements:

  • one server, one data centre (for now)
  • each server will accept peers from its associated DC, and all data in and out is restricted to the server the peer belongs to
  • needs to be able to support hundreds of players connected to the same server instance
    • additional servers may be required depending on the demands here
  • peers only receive sync data for the peers they have streamed in, obviously
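To make the data flow concrete, here is a guess at the minimum sync payload each client would send per tick; the field layout is an assumption, not a finalised protocol:

```rust
use serde::{Deserialize, Serialize};

/// One tracked joint: position relative to the character root plus
/// an orientation quaternion (x, y, z, w).
#[derive(Serialize, Deserialize, Clone, Copy)]
struct JointPose {
    position: [f32; 3],
    rotation: [f32; 4],
}

/// Sent by each client every sync tick; the server relays it only
/// to peers that currently have this character streamed in.
#[derive(Serialize, Deserialize)]
struct PoseSync {
    character_id: u64,
    /// Client-side timestamp in milliseconds, used by receivers to
    /// interpolate between updates despite differing latencies.
    timestamp_ms: u64,
    joints: Vec<JointPose>,
}
```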

Apply IK data to characters

Whoo-boy. Assuming IK data has been produced either locally (#11) or from the network, it must then be applied to the characters. Several projects within the FF14 ecosystem already do this, including CM, Anamnesis, and more.

Our primary resource, though, will probably be https://github.com/lmcintyre/posetest (ty perchbird).

Potential concerns:

  • Adapting the user's skeletal pose to that of their character. Somewhere out there is someone who's 2m tall and wants to play as a Lalafell, and more power to them. We will just need to figure out how exactly that works (see the retargeting sketch below)...
  • Resolving IK data across multiple characters, each subject to different latencies. This is a combined networking/animation problem and will require some brainstorming.
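For the proportions problem, the usual starting point is to retarget rotations rather than positions: copy each joint's local rotation from the solved user skeleton onto the character skeleton, and let the character's own bone lengths produce the final world-space positions. A heavily simplified sketch, assuming the two skeletons share a joint order:

```rust
use glam::Quat;

/// Retarget a solved pose by copying local joint rotations onto the
/// target skeleton. Because only rotations are transferred, the
/// character's own bone lengths (Lalafell or otherwise) determine
/// the resulting world-space joint positions.
fn retarget_rotations(user_local_rotations: &[Quat], character_pose: &mut [Quat]) {
    assert_eq!(user_local_rotations.len(), character_pose.len());
    for (dst, src) in character_pose.iter_mut().zip(user_local_rotations) {
        // A production version would remap between differing joint
        // hierarchies and apply per-bone offsets; this sketch
        // assumes a one-to-one joint correspondence.
        *dst = *src;
    }
}
```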

Spatialise incoming voice chat

Voice chat should be spatially localised to the person speaking. You should be able to determine where someone is, relative to you, from the direction of their voice.

Does FFXIV even have 3D audio? What's involved in implementing it? Many mysteries to be solved here.
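As a baseline that sidesteps the game's audio pipeline entirely, we could pan and attenuate incoming voice ourselves from the speaker's position relative to the listener's head. A minimal constant-power panning sketch with `glam` (not HRTF, but enough to convey direction):

```rust
use glam::Vec3;

/// Compute per-ear gains for a voice source given the listener's
/// head position and right vector. Constant-power panning plus
/// inverse-distance attenuation; proper HRTF processing would come
/// later.
fn spatialise(listener_pos: Vec3, listener_right: Vec3, source_pos: Vec3) -> (f32, f32) {
    let to_source = source_pos - listener_pos;
    let distance = to_source.length().max(1.0);

    // -1.0 = fully left, +1.0 = fully right.
    let pan = to_source.normalize().dot(listener_right).clamp(-1.0, 1.0);

    // Constant-power pan law: equal loudness across the sweep.
    let angle = (pan + 1.0) * std::f32::consts::FRAC_PI_4;
    let (left, right) = (angle.cos(), angle.sin());

    // Simple inverse-distance falloff.
    let attenuation = 1.0 / distance;
    (left * attenuation, right * attenuation)
}
```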

Switch to a projected flatscreen for cutscenes and parts of the game where you are not in control

The game frequently has moments where control is removed from you and/or you are no longer in the perspective of your character. We should display a virtual environment in which this cutscene is shown, preferably with the environment and cutscene both being in stereo (as if you are attending a 3D cinema).

To do this, we will need to capture the game (easy) at a different resolution from the VR headset's (very much not easy), and then render our own environment (moderate).

Support DecaMove

Immersion and quality of gameplay go up when your movement orientation is decoupled from both your look and aim orientations. The DecaMove appears to have a reasonable API to support this, but I wonder if OpenXR does as well...
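For what it's worth, the decoupling itself is straightforward once a hip orientation is available from any source (DecaMove, a spare tracker, or an OpenXR extension). A sketch with `glam`, where the Y-up, -Z-forward conventions are assumptions:

```rust
use glam::{Quat, Vec3};

/// Decoupled locomotion: derive the movement basis from a hip
/// tracker's yaw (DecaMove-style) rather than the headset, so the
/// player can look around freely while walking in one direction.
fn movement_direction(hip_orientation: Quat, stick_x: f32, stick_y: f32) -> Vec3 {
    // Project the hip's forward vector onto the horizontal plane so
    // that the tracker pitching doesn't tilt the movement basis.
    let f = hip_orientation * Vec3::NEG_Z;
    let forward = Vec3::new(f.x, 0.0, f.z).normalize_or_zero();
    let right = forward.cross(Vec3::Y);
    (right * stick_x + forward * stick_y).normalize_or_zero()
}
```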

Determine where all camera matrices are submitted for rendering

There are multiple locations where camera matrices are submitted for rendering, and we need to hijack all of them to ensure our model/view/projection matrices, as well as their inverses, are all correct. The sketch after this list shows the per-eye quantities involved.

  • g_CameraParameter
  • g_InstanceParameter.m_WorldViewMatrix (unsure)
  • g_WorldViewMatrix
  • g_WorldViewProjMatrix
  • g_PS_ViewProjectionInverseMatrix
  • probably some others that I've missed
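Whichever hook points these turn out to be, the per-eye quantities we need to substitute are the same. A sketch of the bookkeeping with `glam`, where the eye transforms are stand-ins for whatever the VR runtime (e.g. OpenXR) provides:

```rust
use glam::Mat4;

/// Everything a hooked constant-buffer write needs for one eye. The
/// inverses matter because shaders reading values such as
/// g_PS_ViewProjectionInverseMatrix consume them directly.
struct EyeMatrices {
    view: Mat4,
    proj: Mat4,
    view_proj: Mat4,
    view_proj_inverse: Mat4,
}

/// `eye_from_world` (the view matrix) and `proj` would come from the
/// VR runtime's per-eye poses and FOVs rather than the game camera.
fn eye_matrices(eye_from_world: Mat4, proj: Mat4) -> EyeMatrices {
    let view_proj = proj * eye_from_world;
    EyeMatrices {
        view: eye_from_world,
        proj,
        view_proj,
        view_proj_inverse: view_proj.inverse(),
    }
}
```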

Capture all UI to a separate render texture

At present, all UI rendering is disabled. Instead, it should be redirected to another texture and rendered on top of the scene in 3D. At a later stage, we can hijack rendering of each individual UI element so that we can reposition them in 3D space, but that's a problem for much later, especially as other applications (e.g. Dalamud) are also rendering into UI-space.

The findings here might also be useful for the people struggling with GShade affecting UI.

Switch over to egui from imgui

egui is generally nicer to use, even if all of our existing infrastructure is built around imgui. This may break compatibility with xivr, but that's okay for now.
