thaytan / openhmd

This project forked from openhmd/openhmd


Free and Open Source API and drivers for immersive technology.

License: Boost Software License 1.0

Languages: C 96.57%, CMake 2.52%, Meson 0.91%

openhmd's Introduction

OpenHMD

This project aims to provide a Free and Open Source API and drivers for immersive technology, such as head mounted displays with built in head tracking.

Oculus Rift Development

This repository is primarily for development of support for Oculus Rift CV1 and Rift S headsets.

Oculus Rift CV1

2021-01-19 - Positional tracking support is ongoing in this repo

Development toward full positional tracking is happening here in the https://github.com/thaytan/OpenHMD/tree/rift-kalman-filter branch. This branch has the latest code for acquiring device positions from LED matching and tracking over time.

My current focus is on improving the computer vision to:

  • Improve device acquisition and matching using IMU orientation data, particularly for controllers
  • Reduce jitter in tracking
  • Improve tracking under difficult conditions - when device LEDs are heavily occluded, or start to merge into one tracking blob at a distance or at sharp angles to the camera.
  • More testing of multiple camera setups.

There are also various issues to resolve - spurious startup errors with USB transactions, and occasional watchdog timeouts that lead to a black screen.

Joey made a demo video of current tracking status with a DK2 at https://www.youtube.com/watch?v=3AdmS3vy7ZE

Oculus Rift S

2021-01-19 - For Rift S, upstream OpenHMD has a working 3DOF driver. I plan to work on positional support after CV1 tracking is in a functional state, unless someone else tackles it first.

License

OpenHMD is released under the permissive Boost Software License (see LICENSE for more information), to make sure it can be linked and distributed with both free and non-free software. While the license doesn't require contributions from users, they are still very much appreciated.

Supported Devices

For a full list of supported devices please check https://github.com/OpenHMD/OpenHMD/wiki/Support-List

Supported Platforms

  • Linux
  • Windows
  • OS X
  • Android
  • FreeBSD

Requirements

Language Bindings

Other FOSS HMD Drivers

Compiling and Installing

Using Meson:

With Meson, you can choose which drivers to compile OpenHMD with. The currently available drivers are: rift, deepon, psvr, vive, nolo, wmr, xgvr, vrtek, external, and android. These can be enabled or disabled by adding -Ddrivers=... with a comma-separated list after the meson command (or by using meson configure ./build -Ddrivers=...). By default, all drivers except android are enabled.

meson ./build [-Dexamples=simple,opengl]
ninja -C ./build
sudo ninja -C ./build install
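
For example, to build only the Rift and Vive drivers (an illustrative selection):

meson ./build -Ddrivers=rift,vive
ninja -C ./build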

Using CMake:

With CMake, you can choose which drivers to compile OpenHMD with. The currently available drivers are: OPENHMD_DRIVER_OCULUS_RIFT, OPENHMD_DRIVER_DEEPOON, OPENHMD_DRIVER_PSVR, OPENHMD_DRIVER_HTC_VIVE, OPENHMD_DRIVER_NOLO, OPENHMD_DRIVER_WMR, OPENHMD_DRIVER_XGVR, OPENHMD_DRIVER_VRTEK, OPENHMD_DRIVER_EXTERNAL and OPENHMD_DRIVER_ANDROID. These can be enabled or disabled by adding -DDRIVER_OF_CHOICE=ON after the cmake command (or by using cmake-gui).

mkdir build
cd build
cmake ..
make
sudo make install
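
For example, to also enable the NOLO driver and skip the Rift driver (an illustrative selection):

cmake .. -DOPENHMD_DRIVER_NOLO=ON -DOPENHMD_DRIVER_OCULUS_RIFT=OFF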

Configuring udev on Linux

To avoid having to run your applications as root to access USB devices, you have to add a udev rule (this will be included in .deb packages, etc.).

A full list of known USB devices and instructions on how to add them can be found at: https://github.com/OpenHMD/OpenHMD/wiki/Udev-rules-list
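
As an illustration of what such a rule looks like (the vendor ID below, 2833, is Oculus VR's; take the exact rule text and IDs for your device from the wiki page above), a file like /etc/udev/rules.d/83-hmd.rules could contain:

SUBSYSTEM=="usb", ATTR{idVendor}=="2833", MODE="0666", GROUP="plugdev"

Reload the rules with sudo udevadm control --reload-rules.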

After this you have to unplug your device and plug it back in. You should now be able to access the HMD as a normal user.

Compiling on Windows

CMake has a lot of generators available for IDEs and build systems. The easiest way to find one that fits your system is by checking the supported generators for your CMake version online. Example using Visual Studio 2013:

cmake . -G "Visual Studio 12 2013 Win64"

This will generate a project file for Visual Studio 2013 for 64-bit systems. Open the project file and compile as you usually would.

Cross-compiling for Windows using MinGW

Using CMake:

For MinGW cross-compiling, toolchain files tend to be the best solution. Please check the CMake documentation on how to do this. A starting point might be the CMake wiki: http://www.vtk.org/Wiki/CmakeMingw

Static linking on Windows

If you're linking statically with OpenHMD on Windows/MinGW, you have to make sure the macro OHMD_STATIC is set before including openhmd.h. With GCC this can be done by adding the compiler flag -DOHMD_STATIC, and with MSVC it can be done using /DOHMD_STATIC.

Note that this is only if you're linking statically! If you're using the DLL then you must not define OHMD_STATIC. (If you're not sure then you're probably linking dynamically and won't have to worry about this).
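
As a minimal illustration for the static case:

/* app.c -- linking statically against OpenHMD */
#define OHMD_STATIC /* or pass -DOHMD_STATIC (GCC) / /DOHMD_STATIC (MSVC) */
#include <openhmd.h>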

Pre-built packages

A list of pre-built packages can be found at http://www.openhmd.net/index.php/download/

Using OpenHMD

See the examples/ subdirectory for usage examples. The OpenGL example is not built by default; to build it, use the --enable-openglexample option for the configure script. It requires SDL2, GLEW and OpenGL.

An API reference can be generated using Doxygen and is also available here: http://openhmd.net/doxygen/0.1.0/openhmd_8h.html
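
For reference, a minimal sketch of the core API, along the lines of the bundled simple example (error handling mostly omitted):

#include <stdio.h>
#include <openhmd.h>

int main(void)
{
    ohmd_context* ctx = ohmd_ctx_create();

    /* Probe for devices and open the first one. */
    if (ohmd_ctx_probe(ctx) < 1) {
        printf("no HMD found\n");
        return 1;
    }
    ohmd_device* hmd = ohmd_list_open_device(ctx, 0);

    for (int i = 0; i < 1000; i++) {
        ohmd_ctx_update(ctx);

        float quat[4];
        ohmd_device_getf(hmd, OHMD_ROTATION_QUAT, quat);
        printf("rotation quat: %f %f %f %f\n",
               quat[0], quat[1], quat[2], quat[3]);
    }

    ohmd_ctx_destroy(ctx);
    return 0;
}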

openhmd's People

Contributors

alexandre-janniaux, andrewkww, bastiaanolij, blejdfist, candyangel, chaiyabili, christophhaag, clearlyclaire, da-syntax, der-b, douglas-3glasses, fredz66, hairyfotr, jsarrett, julianeisel, kleinerm, l33tlinuxh4x0r, linkmauve, lubosz, magestik, magwyz, mdnelson8, neuhaus, noname22, ph5, saracenone, thaytan, theonlyjoey, wallbraker, xlava


openhmd's Issues

Does the actual HMD have positional tracking?

I was wondering whether the HMD has positional tracking, because it's definitely not enabled by default. I tried using OpenHMD in SteamVR, but it was barely bearable: the HMD is not being tracked with the sensors, it feels like there is huge latency, and the view was almost always somewhat tilted so I was not looking straight. Is there a fix for this? This issue would probably fit better in thaytan/SteamVR-OpenHMD, but I am pretty sure it is not an issue within that driver itself.

Split camera processing across threads

Doing all the video processing directly in the UVC callback leads to frame misses when performing the expensive RansacPnP processing on tracked LED blobs.

It would be better to split capture and processing into separate threads.

In general, scanning a frame and reading out the blobs seems doable in the callback, so it might be OK to keep the blob tracking in the new-frame callback, but defer any RansacPnP operations to a separate thread.
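
As an illustrative sketch of that split, using a plain pthread queue (the blob_set_t type and solve_pose_ransac_pnp function are hypothetical placeholders, not the driver's actual names):

#include <pthread.h>
#include <stdbool.h>

typedef struct { int num_blobs; /* ... per-frame blob data ... */ } blob_set_t;

void solve_pose_ransac_pnp(blob_set_t* blobs); /* hypothetical expensive step */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static blob_set_t pending;
static bool have_pending = false;

/* Called from the UVC new-frame callback: cheap blob scan only. */
void on_new_frame(blob_set_t blobs)
{
    pthread_mutex_lock(&lock);
    pending = blobs; /* keep only the latest frame; older ones are dropped */
    have_pending = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

/* Worker thread: runs the RANSAC PnP outside the capture path. */
void* pose_worker(void* arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!have_pending)
            pthread_cond_wait(&cond, &lock);
        blob_set_t work = pending;
        have_pending = false;
        pthread_mutex_unlock(&lock);

        solve_pose_ransac_pnp(&work);
    }
    return NULL;
}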

Future Directions: Monado

OpenHMD has long since become unmaintained.

The future for the original Rift CV1, for those who still have one or who wish to keep the old hardware alive and usable, is, in my opinion, to port the code to Monado.

This would be a sizeable investment; it could take months of work just to get a build working.

Though, if you wish to go the extra mile, do it all in Rust for memory safety.


@thaytan Would you like to pin this issue, to direct any prospective developers toward the future?


It seems you have some interest in this as well: https://discord.com/channels/556527313823596604/837433069404160061/1246504735658348666

Remove OpenCV dependency

A few OpenCV-supplied functions are critical to the positional tracking, but not many. The main one is RANSAC PnP + Levenberg-Marquardt pose refinement. That could be rewritten based on the integrated Lambda Twist P3P implementation.

The others that need replacing are for projecting 3D LED positions to 2D, and undistorting/correcting blobs in captured frames to compensate for lens distortion. Both should be fairly easy to extract from OpenCV or reimplement.
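
For the projection part, a minimal pinhole-model sketch (ignoring the distortion term, which OpenCV also handles) shows how little is actually needed:

typedef struct { float x, y, z; } vec3f;

/* Project a camera-space point to pixel coordinates with a pinhole
 * model: fx/fy are focal lengths in pixels, cx/cy the principal point. */
void project_point(vec3f p, float fx, float fy, float cx, float cy,
                   float* u, float* v)
{
    *u = fx * (p.x / p.z) + cx;
    *v = fy * (p.y / p.z) + cy;
}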

No display/tracking without sudo

When running the OpenGL example, I get no tracking and the display stays off (the proximity sensor is covered). However, when running the OpenGL example with sudo, the headset display turns on after both monitors go black for a second, the Rift then displays the desktop with the monitor showing the Rift's view, and tracking works. I have installed the xr-hardware udev rules. I also have extended mode enabled as described here (I'm using an NVIDIA GPU). Disabling extended mode results in the Rift's screen not turning on; running without sudo provides no tracking, while with sudo tracking works but I get the Rift's view in a window (no going black).

Haptics provides no output, sends thousands of log messages

When trying out the rift-kalman-filter branch, the haptics provide no output, and it sends this 10-20 times a second in the console:

vrclient_steamSun Jun 13 2021 21:51:05.187207 - Invalid input type touch for controller (/user/hand/right/input/b)
vrclient_steamSun Jun 13 2021 21:51:05.187310 - Invalid input type touch for controller (/user/hand/left/input/y)
vrclient_steamSun Jun 13 2021 21:51:05.187497 - Item /user/hand/left/output/haptic in "haptics" array missing path invalid device path for haptic output
vrclient_steamSun Jun 13 2021 21:51:05.187528 - Item /user/hand/right/output/haptic in "haptics" array missing path invalid device path for haptic output

Rift/Nvidia HMD standby mode when a game is launched

SteamVR home works great, but whenever I launch a game I'll see it for a couple of seconds in my Rift HMD, and then the headset goes into standby mode. Looking through the SteamVR logs I see the following -

"System] Unknown Transition from 'SteamVRSystemState_Ready' to 'SteamVRSystemState_Standby'."
steamvrlog.txt
Using the latest rift-kalman-filter branch in release mode on Arch with a GTX 1070. Headset and sensors are all connected via USB 3.0.

This isn't urgent, just curious. I'll be updating this issue with relevant info and if I find a fix!

Any suggestions for optimization possibilities for the Raspberry Pi 4B?

Hello!

I am going to use OpenHMD for rotational Rift CV1 tracking in a small project, but I would also love to have positional tracking.

I've managed to get it running on the Rift+RPi before, but it took around 150% CPU (out of 4x100% for the cores) and I got some stuttery results.

Does anyone know the current algorithm and code well enough to advise what I should focus on if I wanted to attempt to speed this up?

Or maybe even just splitting it into more threads, as perhaps it was a saturated core that caused the stutter; I have no idea.

Any ideas welcome!
Cheers

Idea: Use Kalman filter position variance to reduce search area

Something to investigate in the future: when attempting to reacquire tracking lock on a lost object, it might be possible to use the filter's position variance to estimate a bounding box for the search area. This could help reduce the search time and avoid erroneous LED matches.
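
A sketch of the idea, with illustrative names only: take a ~3-sigma bound from the position variance and project it at the predicted depth to get a pixel-space search window.

#include <math.h>

/* var_x/var_y: filter variance of the predicted position (m^2),
 * depth: predicted distance from the camera (m),
 * fx/fy: focal lengths in pixels.
 * Returns the half-extents of a ~3-sigma search window. */
void search_box_from_variance(float var_x, float var_y,
                              float fx, float fy, float depth,
                              float* half_w_px, float* half_h_px)
{
    *half_w_px = fx * 3.0f * sqrtf(var_x) / depth;
    *half_h_px = fy * 3.0f * sqrtf(var_y) / depth;
}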

Fix lens distortion in CV1

I can't attest to the Rift S, but I can say for sure that the image looks warped towards the centre in SteamVR. There is an open issue on the main OpenHMD repo for the same thing outside of SteamVR, so I presume it's not on the SteamVR side.

Pose matching: Add an LED size model

If we use information about the expected size of an LED at a given distance, we can more intelligently constrain the match distance for blobs to LEDs in rift-sensor-pose-helper.c.
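
A sketch of such a model (illustrative only): the expected blob radius in pixels falls off linearly with distance, so the match tolerance could scale with it rather than being a fixed pixel constant.

/* led_r: physical LED radius (m), z: distance from camera (m),
 * fx: focal length (pixels). */
float expected_blob_radius_px(float led_r, float z, float fx)
{
    return fx * led_r / z;
}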

Cross-platform support

In the interests of iterating fast, I've probably used POSIX- or GNU-specific APIs in places. Getting things running on Windows and other Unixes needs testing.

UVC camera access is done directly through libusb, which should work cross-platform but again - needs testing.

API for pose conversions

We need some new API for tracking a 'pose' (position and orientation) as an entity, and for doing transformations on poses (a sketch follows the list):

  • Transform a pose in one coordinate system to another - for example, given the pose of an object relative to a camera, and the pose of the camera in world coordinates, get the pose of the object in world coordinates
  • Pose inversion - given the pose of a camera relative to an object, invert the pose to get the pose of the object relative to the camera
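
A sketch of what such an API could look like, with quaternion math similar to OpenHMD's existing vector/quaternion types (all names here are illustrative):

typedef struct { float x, y, z; } vec3f;
typedef struct { float x, y, z, w; } quatf;
typedef struct { quatf orient; vec3f pos; } posef;

static vec3f vec3f_cross(vec3f a, vec3f b)
{
    return (vec3f){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

/* Rotate v by unit quaternion q: v' = v + 2 u x (u x v + w v) */
static vec3f quatf_rotate(quatf q, vec3f v)
{
    vec3f u = { q.x, q.y, q.z };
    vec3f t = vec3f_cross(u, v);
    t.x += q.w * v.x; t.y += q.w * v.y; t.z += q.w * v.z;
    vec3f t2 = vec3f_cross(u, t);
    return (vec3f){ v.x + 2*t2.x, v.y + 2*t2.y, v.z + 2*t2.z };
}

static quatf quatf_mul(quatf a, quatf b)
{
    return (quatf){
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
}

/* Pose of b expressed in a's parent frame: e.g. a = camera-in-world,
 * b = object-in-camera => result = object-in-world. */
posef pose_apply(posef a, posef b)
{
    vec3f p = quatf_rotate(a.orient, b.pos);
    return (posef){ quatf_mul(a.orient, b.orient),
                    { a.pos.x + p.x, a.pos.y + p.y, a.pos.z + p.z } };
}

/* Invert a pose: camera-relative-to-object => object-relative-to-camera. */
posef pose_invert(posef a)
{
    quatf qi = { -a.orient.x, -a.orient.y, -a.orient.z, a.orient.w };
    vec3f p = quatf_rotate(qi, a.pos);
    return (posef){ qi, { -p.x, -p.y, -p.z } };
}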

Improve pose candidate evaluation

One challenging problem in the optical tracking is in deciding which pose hypothesis is the right one and best matches the observed blobs in a frame.

At the moment, I'm using some heuristics that change pretty often, based around taking the hypothesised pose, back projecting the visible LEDs, and then seeing if they land within a visible blob. That mostly works, but doesn't require that LEDs uniquely map to a blob. It's also not desirable to require that they uniquely map to a blob - because LEDs around the edge of the HMD/device can be collinear / overlapping in the projected view.

The problem with mapping projected LED positions into blobs non-uniquely is that there are degenerate cases where every LED lands inside one large blob and is considered a strong match.

It would be better to reverse the matching - project LEDs and generate a list of x/y positions, then iterate over each blob and count how many map onto any LED, and calculate the reprojection error of the matching blobs.

Beyond that, we need to develop a solid heuristic for evaluating what constitutes the 'best' match among hypotheses, and if possible to reject bad hypotheses as early as possible as a performance optimisation (because full pose projection and checks are expensive).
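
A sketch of the reversed matching described above (types and names illustrative): project the LEDs once, then walk the blobs and score each against its nearest projected LED.

#include <float.h>

typedef struct { float x, y; } point2f;

/* Returns the number of blobs within max_dist of some projected LED and
 * accumulates the total squared reprojection error of those matches. */
int score_hypothesis(const point2f* leds, int n_leds,
                     const point2f* blobs, int n_blobs,
                     float max_dist, float* sq_error)
{
    int matched = 0;
    *sq_error = 0.0f;

    for (int b = 0; b < n_blobs; b++) {
        float best = FLT_MAX;
        for (int l = 0; l < n_leds; l++) {
            float dx = blobs[b].x - leds[l].x;
            float dy = blobs[b].y - leds[l].y;
            float d = dx*dx + dy*dy;
            if (d < best)
                best = d;
        }
        if (best <= max_dist * max_dist) {
            matched++;
            *sq_error += best;
        }
    }
    return matched;
}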

Mapping WMR (HP Reverb 2) controllers to memory buffer changes.

Hi.
I have access to an HP Reverb 2 and I decided to map memory signals to controller functionality. This is not a full list, but I will try to find all the functionality c: buffer[] is the name taken directly from the source code. L means left controller, R means right.

buffer[1] = L{joy_click = 1; win_button = 2; menu_button = 4; rear_button = 8}
buffer[2] = L{joy_x = [0,255]}
buffer[3] = L{joy_y = [0,255]}
buffer[5] = L{throttle_front = [0,255]}
buffer[6] = L{throttle_rear = [0,255]}
buffer[7] = L{y = 1; x = 2}
buffer[8] = {???}
buffer[9] = {???}
buffer[10] = {???}
buffer[11] = R{controller_camera_detection [0-255; 0 - not see; 255 - see]}
buffer[12] = {???}
buffer[14] = both_controllers_camera_detection [0-255; 0 - not see; 255 - both see; if one is seen msg is 0,255,0,255]
buffer[16] = {controllers_are_in_sleep_mode = high [0-255]}
buffer[17] = {??? something with gogles?}
buffer[18] = {???}
buffer[33 - 44] = {if any controller is on = 0; else nothing; shortly before disabling and enabling - value >0; [0-255]}
buffer[45] = {if any controller is on - high value; else nothing; [0-255]};

Hope this will be helpful C:
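
As a sketch of how those mappings might be consumed in a driver (bit values are from the table above; the struct and function names are mine, not OpenHMD's):

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool joy_click, win_button, menu_button, rear_button;
    uint8_t joy_x, joy_y; /* [0,255] */
} wmr_left_controller_t;

void decode_left(const uint8_t* buffer, wmr_left_controller_t* out)
{
    out->joy_click   = buffer[1] & 1;
    out->win_button  = buffer[1] & 2;
    out->menu_button = buffer[1] & 4;
    out->rear_button = buffer[1] & 8;
    out->joy_x = buffer[2];
    out->joy_y = buffer[3];
}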

OpenHMD started failing connecting to Rift using Ubuntu.

Hello!
I'm using OpenHMD to get rotational HMD tracking on an RPi4b + Ubuntu 22.04, and it worked fine a bit earlier, but now I'm getting errors that it can't connect to the device, so I'm curious why that might be.

I have added the udev rules.

The Rift CV1 seems to be found and reported correctly:

OpenHMD version: 0.3.0
num devices: 7
device 0
  vendor:  Oculus VR, Inc.
  product: Rift (CV1)
  path:    1-1.1.1:1.0
  class:   HMD
  flags:   04
    null device:         no
    rotational tracking: yes
    positional tracking: no
    left controller:     no
    right controller:    no

But I'm then getting the error:

[EE] Could not open /dev/bus/usb/001/000.
Check your permissions: https://github.com/OpenHMD/OpenHMD/wiki/Udev-rules-list
[EE] Could not open device with index: 0, check device permissions?

I tried re-plugging the Rift's HDMI and USB.
I tried running the program with 'sudo'.

I saw in your howto that you say sometimes it's just not working and I have to reboot?
Any ideas what might be causing this? Could it be a re-ordering of devices post some init in the ohmd code?

EDIT: I am using a tool/program to send signals to the Rift to keep the screen awake; maybe it has something to do with that. Maybe it's somehow blocking the device for ohmd. I'll have to take a closer look.
EDIT2: I think that's it. If I close my program and quickly launch the main program that's using ohmd, it works. The main program also takes over and keeps the Rift alive.

Cheers!

Restart camera sensor on error.

Sometimes video transfer fails due to USB errors. It might be good to schedule a task to tear down the UVC config and restart after a second or so.

Configuration / calibration cache

In https://github.com/thaytan/OpenHMD/tree/config-store I started some code for a simple config store where drivers could drop private cache / configuration data.

The idea is to store (for example) the touch controller JSON blobs that are currently read out on every startup. Reading those over the low bandwidth radio link takes a measurable amount of time as each controller comes online. It would be possible to instead cache those JSON blobs and only re-read them if the firmware hash changes.

Another useful thing to store is accel/gyro bias correction vectors extracted from the sensor fusion. Having a nearly correct bias setup from the start helps with initial tracking and acquisition. With default 0,0,0 bias vectors, the sensor fusion can take many seconds to converge on correct vectors.

Finally, a configuration store would be useful for room calibration - storing the location of sensor cameras, room origins and boundaries (eventually, once OpenHMD has API for that).

No positional information when building using CMake

When I build and make this project using CMake, I don't receive positional information from my Rift CV1.
openhmd_simple_example output:

rotation quat:           0.064488 0.278859 -0.003097 0.958160
position vec:            0.000000 0.000000 0.000000 
controls state:          0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000

When I use the Meson/Ninja flow, I do get this information:

rotation quat:           0.086947 0.005303 -0.039788 0.995404 
position vec:            -1.170820 0.263225 -0.440830 
controls state:          0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000

System:
Debian 5.8.10-1~bpo10+1 (2020-09-26) x86_64 GNU/Linux
Oculus Rift CV1

Am I missing steps in the CMake process?

Improve blob extraction

The CV1 touch controller LEDs can easily be only a bright pixel or two at around 1.2m distance from the camera. The current blob extraction code filters out such small blobs, which makes them occluded or missing as far as the pose search code is concerned.

Would you be able to do a development sprint to get very rudimentary CV1-SteamVR working?

This project has been steadily improving since I first saw it last December!

I was wondering if you would be able to get the very basics of the CV1 working with SteamVR, as that seems to have garnered the most attention from the large number of people with CV1s wanting to switch fully to Linux.

Being able to promote the progress you've made will definitely gather even more attention. The YouTube channel Linus Tech Tips (11.2 million subscribers) talked about VR in the new Linux gaming video they published a couple of days ago, and the basic OpenHMD drivers that Rifts can use. I'd happily promote this on the Linux gaming subreddit to gather more funding for the project! (And maybe the project will get mentioned in the next Linux video LTT publishes!)

That being said, due to the current situation it is perfectly reasonable to not do this as I do not know about how much more time you can spare.

Room calibration configuration for world coordinate transform.

We need a configuration store that can record information about the room setup and relative positions of the Rift sensor cameras.

After that, we'll need a room setup utility like the official Rift software has, which can find one of the headset or controllers in the view of multiple cameras simultaneously and work backward to compute the relative position of the cameras to each other, and hence camera-to-world-coordinate transforms for each camera.

Find ways to make the sensor fusion more efficient

The UKF fusion filter works fairly well, but is costly on CPU. It has to be compiled with optimisations to run fast enough, and even then it chews a good chunk of a CPU core running at 1000 Hz. A big part of this is the delay slot setup that compensates for video capture and processing latency, which expands the UKF state matrix, and UKF cost unfortunately grows as O(n^3).

Some other avenues to explore are:

  • Don't do full Kalman state updates at the full IMU rate; instead, integrate the IMU readings most of the time and only do Kalman measurements / corrections at a lower rate (see the sketch after this list).
  • Try an implementation of the Square Root UKF to reduce the update cost
  • Look at the implementations projects like ILLIXR have for SLAM, using an EKF-based MSCKF formulation
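
As a sketch of the first idea in the list above (all names are illustrative placeholders, not the actual filter code):

#include <stdint.h>

typedef struct { float gyro[3], accel[3]; uint64_t ts_ns; } imu_sample_t;

void integrate_imu(const imu_sample_t* s);  /* cheap state propagation */
void kalman_measurement_update(void);       /* expensive UKF correction */

#define KALMAN_DECIMATION 8 /* correct at 125 Hz instead of 1000 Hz */

void imu_tick(const imu_sample_t* s)        /* called at 1000 Hz */
{
    static int count = 0;

    integrate_imu(s);

    if (++count >= KALMAN_DECIMATION) {
        count = 0;
        kalman_measurement_update();
    }
}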

Figure out how to extract initial LED correspondences

The current code uses LED blink patterns to assign IDs to LEDs in the image, but blink patterns are finicky to extract when the headset is moving, and controllers don't have them.

We need an algorithm that can analyse video frames and figure out the pose of Headset + Controllers without prior information for the initial startup of tracking. The faster this acquires the tracked objects, the better.

Can't access CV1 cameras without sudo

Which means no SteamVR testing without workarounds... Are there any updated permissions I need to set up to let my user and Steam access the cameras?

Support Oculus CV1 sensor in USB 2.0 / JPEG mode

Hello,
when testing the latest additions on the rift-kalman-filter branch last night, and after spending quite a bit of time scratching my head, I discovered upon running the simple example that the sensor I was trying to use wasn't being detected. The sensor in question is mounted in the corner of my room and connected via a USB 2.0 active extension cable - https://www.amazon.co.uk/dp/B00B2HP3A2/ref=pe_3187911_185740111_TE_item. Upon changing to a non-extended cable, all worked as expected.

I was wondering if this might be due to the extension being USB 2.0 rather than 3.0. I should also stress that this is completely non-urgent, just something to be aware of if feature parity is the end goal, as this setup works fine with the proprietary software.

Make complains a lot when building

Unless I ignore the errors with -i, it fails with the message:

[ 97%] Building C object examples/simple/CMakeFiles/simple.dir/simple.c.o
[100%] Linking C executable simple
/usr/bin/ld: ../../libopenhmd.a(rift.c.o): in function `handle_tracker_sensor_msg':
rift.c:(.text+0xa67): undefined reference to `rift_tracked_device_imu_update'
/usr/bin/ld: rift.c:(.text+0xacf): undefined reference to `rift_tracker_new_exposure'
/usr/bin/ld: ../../libopenhmd.a(rift.c.o): in function `handle_touch_controller_message':
rift.c:(.text+0xc96): undefined reference to `rift_tracker_add_device'
/usr/bin/ld: rift.c:(.text+0x10e1): undefined reference to `rift_tracked_device_imu_update'
/usr/bin/ld: ../../libopenhmd.a(rift.c.o): in function `getf_hmd':
rift.c:(.text+0x1d71): undefined reference to `rift_tracked_device_get_view_pose'
/usr/bin/ld: rift.c:(.text+0x1db3): undefined reference to `rift_tracked_device_get_view_pose'
/usr/bin/ld: ../../libopenhmd.a(rift.c.o): in function `getf_touch_controller':
rift.c:(.text+0x20a7): undefined reference to `rift_tracked_device_get_view_pose'
/usr/bin/ld: rift.c:(.text+0x20e9): undefined reference to `rift_tracked_device_get_view_pose'
/usr/bin/ld: ../../libopenhmd.a(rift.c.o): in function `open_hmd':
rift.c:(.text+0x3bdd): undefined reference to `rift_tracker_new'
/usr/bin/ld: rift.c:(.text+0x3c76): undefined reference to `rift_tracker_add_device'
/usr/bin/ld: ../../libopenhmd.a(rift.c.o): in function `close_hmd':
rift.c:(.text+0x3cf8): undefined reference to `rift_tracker_free'
collect2: error: ld returned 1 exit status
make[2]: *** [examples/simple/CMakeFiles/simple.dir/build.make:86: examples/simple/simple] エラー 1
make[1]: *** [CMakeFiles/Makefile2:123: examples/simple/CMakeFiles/simple.dir/all] エラー 2
make: *** [Makefile:130: all] エラー 

Literature research

Hi there,
first of all, I really enjoy seeing this come along so nicely! Since I do not have a regular Rift but only a Rift S, I have started looking into this code and, simultaneously, papers on tracking, specifically inside-out tracking. I'm not yet too deep into the literature, but I thought that it might be useful to collect links to interesting publications on this here (since I assume that after the Rift the next logical step would be to work on the Rift S).
For example, this appears to give a nice overview but not a lot of technical details:
https://onlinelibrary.wiley.com/doi/abs/10.1002/j.2637-496X.2017.tb00962.x
So, would this be of interest/helpful?

Edit: I also found this, which does relate more closely to inside-out tracking:
https://doi.org/10.1007/978-3-319-48496-9_21
This one is not freely available, though.

position-hack branch fails to build with ninja

Ninja fails to build with the error being:
[1/5] Compiling C++ object 'openhmd@sha/src_drv_oculus_rift_rift-sensor-opencv.cpp.o'.
FAILED: openhmd@sha/src_drv_oculus_rift_rift-sensor-opencv.cpp.o
c++ -Iopenhmd@sha -I. -I.. -I.././include -I/usr/include/hidapi -I/usr/include/libusb-1.0 -I/usr/include/opencv -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wnon-virtual-dtor -g -fPIC -pthread -fvisibility=hidden -DDRIVER_OCULUS_RIFT -DHAVE_OPENCV=1 -MD -MQ 'openhmd@sha/src_drv_oculus_rift_rift-sensor-opencv.cpp.o' -MF 'openhmd@sha/src_drv_oculus_rift_rift-sensor-opencv.cpp.o.d' -o 'openhmd@sha/src_drv_oculus_rift_rift-sensor-opencv.cpp.o' -c ../src/drv_oculus_rift/rift-sensor-opencv.cpp
../src/drv_oculus_rift/rift-sensor-opencv.cpp: In function ‘void refine_pose(double**, rift_led**, int, quatf*, vec3f*, double*)’:
../src/drv_oculus_rift/rift-sensor-opencv.cpp:260:7: error: ‘solvePnPRefineLM’ is not a member of ‘cv’
   cv::solvePnPRefineLM (list_points3d, list_points2d, dummyK, dummyD, rvec, tvec);
../src/drv_oculus_rift/rift-sensor-opencv.cpp:260:7: note: suggested alternative: ‘solvePnPRansac’
ninja: build stopped: subcommand failed.
I might be missing something, but I'm not aware what.

Display all warpy and weird (weird distortion correction)

My headset display is a little weird looking, kinda like how the display mirroring of old Oculus headsets would look in a video or something, but in the headset.

After playing with the distortion correction settings in rift.c (in src/drv_oculus_rift) around line 1233, it looks a little better, to the point where it's usable, but it's not perfect. I have no idea how to measure this, so I can't really make it perfect (not to mention that I barely understand what is going on there).

If anyone else is having this problem, I'd recommend you try this; it will probably give you better results than you had before. If anyone knows how to measure and better configure this stuff, please share it in the comments.

Pose Matching: Match on fewer LEDs when tracking

When we are already successfully tracking a device and have a good idea of its orientation, we can infer a tracking update from as few as 2 LEDs. Doing that would let us keep tracking a device under fast motion and heavy occlusion.

Pose matching: Match the gravity vector of the HMD

Given gravity vectors from the IMU, we can make sure that the extracted poses all align the gravity vector in (approximately) the same direction, to reject obviously wrong poses. The CV code usually manages to match the HMD most strongly, so if we get a really good pose for the HMD, we can use that to sanity-check the controllers.
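
A sketch of such a check (illustrative names): compare the candidate pose's 'down' direction against the IMU's measured gravity direction, and reject poses that disagree by more than a threshold.

#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } vec3f;

/* a and b are unit-length "down" vectors: one derived by rotating the
 * world gravity direction by the candidate pose orientation, one
 * measured by the IMU accelerometer. */
bool gravity_agrees(vec3f a, vec3f b, float max_angle_rad)
{
    float dot = a.x*b.x + a.y*b.y + a.z*b.z;
    /* clamp for acosf() numeric safety */
    dot = fmaxf(-1.0f, fminf(1.0f, dot));
    return acosf(dot) <= max_angle_rad;
}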

Unable to determine if this fork is any better for the DK2 than master.

Hey, I wasn't sure where else to put this, but I'm interested in keeping the Oculus Development Kits alive, for the sake of archival, as well as the secondary bonus of one less piece of electronics going into a landfill. I have several of these HMDs, and have been thinking of giving a few to friends and OpenHMD contributors.

With that out of the way, I have been following your work on YouTube with the DK2/CV1, and I've been wondering if this is the right fork to use if I want at least some rudimentary positional tracking? I realize that all of the Constellation reverse engineering is relatively new, and am aware of the unfinished elements in the tracking loop.

Additionally, I have seen other implementations of this from a very long time ago, but if I'm barking up the right tree with DK2 positional tracking, what is the possibility of using multiple cameras, for 360 degree tracking, or a wider tracking field? I'm sure that with the right jack splitter, it would be technically possible.

Unable to turn on the display on the Oculus CV1

Hi, for some time I have been trying to get my Oculus CV1 to display ANYTHING on its display.
I have followed the guide from the diary https://noraisin.net/diary/?page_id=1048
And I have installed all the equivalent packages for my Arch distro, including xr-hardware 1.1.0.

However, whenever I run SteamVR or monado-service, they seem to start fine, including CV1 tracking, but the display on the headset remains disabled and the LED stays orange.
xrandr does not detect the headset being plugged in at all.
Tracking seems to function as expected.

If I add AllowHMD to my Xorg config, that allows me to see the headset being plugged in, but only when OpenHMD is running. When it is running, the display sometimes appears as "disabled" in the NVIDIA Settings and xrandr. Enabling it and changing any possible settings does not affect the HMD at all; it still displays nothing with an orange LED. However, the PC does seem to detect the display and act like it works, as I'm able to drag windows to it (although I have no idea what it looks like).

I have tried using the oculus-udev package for the DK1/2 and writing my own udev rules, but nothing has worked so far.
I am sure that the HDMI port and the HMD works in windows.

I am using an NVIDIA GTX 1050 GPU with the 520.56.06 driver version on a pure Arch system.

Implement sensor fusion filter

There is an advanced Kalman filter formulation that can handle all the requirements of positional tracking for the Rift:

  1. Extract orientation information from IMU information
  2. Interpolate position from IMU information
  3. Integrate corrections to position and orientation based on sporadic camera observations

Requirement 1 is the trickiest, but it can be met by using lagged covariance matrices as described in https://www.researchgate.net/publication/251994970_Delayed-state_sigma_point_Kalman_filters_for_underwater_navigation

I'm planning to use an Unscented Kalman Filter for the overall structure.

Stuttering, loss of tracking when haptics are activated

On recent versions (in the past month or two, I can't remember exactly when) the system began stuttering whenever haptics were activated. This is particularly obvious in Beat Saber, where both controllers will lose tracking within 20-30 seconds of starting a level. The entire system (OpenHMD, SteamVR, the game) appears to momentarily slow down or lock up whenever haptics are triggered. Games play fine with haptics disabled.

Handle CV1 headband LEDs

Because the headband is articulated, the LEDs at the back of the headset are not where the 3D model says they should be, and matching the pose yields the wrong orientation.

One idea would be to split out and track those LEDs as a separate sub-device from the main headset and extract a relative pose that translates from the main headset LEDs to the headband. Whenever both the headset and the headband are visible simultaneously, the conversion pose could be updated.

A quicker thing to do would be to ignore them entirely so they at least don't make anything worse.

Cannot use features from non-enabled language CXX

Hi,
This is not really an issue, but I wanted to share this info. I don't know if there is any other place to share it.
I was facing an issue with setting up this project using cmake.

[main] Building folder: OpenHMD
[main] Configuring folder: OpenHMD
[cmake] Configuring done
[cmake] CMake Error in CMakeLists.txt:
[cmake]   Cannot use features from non-enabled language CXX
[cms-driver] Error during CMake configure: Error: Failed to compute build system.
I solved this by adding CXX to the project in CMakeLists.txt:
project(openhmd C CXX)

Now I'm able to setup and debug this project in VSCode.
In VSCode I use these extensions:
ms-vscode.cpptools
ms-vscode.cmake-tools

No tracking in Zenith

First of all, thank you for your work; the driver works amazingly, except for a slight delay between head movement and the movement visible inside the headset.
Now to the actual issue: tracking works great everywhere except in Zenith or DeoVR. As soon as the app is done starting, the controllers stop tracking; if I open the SteamVR overlay in-game, they continue to work just fine.

OS: Kubuntu 21.10
Kernel version: 5.17.0-8.1-liquorix-amd64
HMD: Oculus CV1
OpenHMD: 489481c
SteamVR: 1.22.13

Correlate Rift HMD time with frame captures

In order to properly combine data from IMU observations with camera tracking, we need a common set of timestamps.

The code that associates the LED-pattern phase with an incoming camera frame could also pass along the IMU timestamp, to associate that with the video frame.

It will need testing whether the 1 ms granularity of the 1000 Hz IMU clock materially affects the results. If so, some smoothing and interpolation using the system clock could help.
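
One possible smoothing scheme, sketched with illustrative names: track the offset between the IMU clock and the host monotonic clock with an exponential moving average, then map frame capture times through it.

#include <stdint.h>

static double smoothed_offset_ns = 0.0;

void on_imu_timestamp(uint64_t imu_ns, uint64_t host_ns)
{
    double offset = (double)host_ns - (double)imu_ns;
    /* EMA: a small alpha smooths the 1 ms quantisation of the IMU clock */
    const double alpha = 0.01;
    if (smoothed_offset_ns == 0.0)
        smoothed_offset_ns = offset; /* first sample: initialise directly */
    else
        smoothed_offset_ns += alpha * (offset - smoothed_offset_ns);
}

uint64_t frame_time_to_imu_clock(uint64_t host_ns)
{
    return (uint64_t)((double)host_ns - smoothed_offset_ns);
}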

UVC Dropping short frame warnings

I've been trying to get the dev-oculus-position-hack branch to work with my CV1 headset; however, the following is getting constantly spammed in the log:

UVC Dropping short frame: 1220184 < 1228800 (8616 lost)
UVC PTS changed in-frame at 1208996 bytes. Lost 73 ms
UVC Dropping short frame: 1208996 < 1228800 (19804 lost)
UVC PTS changed in-frame at 1211908 bytes. Lost 51 ms

The same will happen in the OpenHMD-RiftPlayground too.
I'm not quite sure why; however, in the OpenGL example my headset will also start drifting away in a random direction, and it may be because of that.

It may be because I'm using an unsupported USB 3 card from VIA Labs, and if I plug my sensors into a USB 2 port I will get the error:

failed to submit iso transfer 0. Error -2
could not start streaming

Thank you in advance!
