
bumblebee-project / bumblebee

1.3K stars, 91 watchers, 144 forks, 800 KB

Bumblebee daemon and client rewritten in C

Home Page: http://www.bumblebee-project.org/

License: GNU General Public License v3.0

C 87.34% Shell 7.10% Makefile 2.71% M4 2.86%

bumblebee's Introduction

Bumblebee Daemon

The Bumblebee daemon is a rewrite of the original Bumblebee service, providing an elegant and stable means of managing Optimus hybrid graphics chipsets. A primary goal of this project is not only to enable use of the discrete GPU for rendering, but also to enable smart power management of the dGPU when it is not in use.

Build Requirements

Source tarballs can be downloaded from https://github.com/Bumblebee-Project/Bumblebee/downloads

The following packages are dependencies for the build process:

  • pkg-config
  • glib-2.0 and development headers
  • libx11 and development headers
  • libbsd and development headers (if pidfile support is enabled, which is the default)
  • help2man (optional, needed for building the manual pages)

If you are building from git, you will also need:

  • autotools (autoconf 2.68+ recommended)

Runtime dependencies

If you want to use optirun for running applications with the discrete nVidia card, you will also need:

  • At least one back-end for optirun, such as VirtualGL (which provides vglrun)
  • A driver for the nvidia graphics card: nouveau or the proprietary nvidia driver. Don't install the latter directly from nvidia.com, as it will break 3D capabilities on the Intel graphics card and therefore affect the display of frames from the nvidia card.

If you want to make use of Power Management, you will need:

  • bbswitch
  • If you're brave and want to try the switcheroo method, install at least the optimus patch (merged in Linux 3.3). Note that suspend is not yet supported by this method.

Building

If you are building from git, you first need to run autoreconf -fi to generate the configure script.

Next, run the configure script to check for dependencies and populate the Makefile:

./configure

To set the default driver to nvidia and adjust the library and module paths for it, use ./configure like:

./configure CONF_DRIVER=nvidia CONF_DRIVER_MODULE_NVIDIA=nvidia-current \
  CONF_LDPATH_NVIDIA=/usr/lib/nvidia-current:/usr/lib32/nvidia-current \
  CONF_MODPATH_NVIDIA=/usr/lib/nvidia-current/xorg,/usr/lib/xorg/modules

For all available options, run:

./configure --help

After configuring, you can build the binaries with:

make

Installing System-wide and Packaging

You can build the binaries and set the system-wide paths at configure time:

./configure --prefix=/usr --sysconfdir=/etc
make

After building the binaries and the bash completion script, they can be installed together with a udev rule (unless --without-udev-rules was passed) using make:

sudo make install

Packagers need to add DESTDIR=$pkgdir:

make install DESTDIR=$pkgdir

Example initscripts are available in the scripts/ directory. Currently, Upstart, systemd and Sys V initscripts are available.
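
For example, on a systemd-based distribution the daemon could be enabled at boot like this (the unit name bumblebeed.service is an assumption; check which name the initscript from scripts/ actually installs):

sudo systemctl enable bumblebeed.service
sudo systemctl start bumblebeed.service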

Usage

The first time you install Bumblebee, the bumblebee group has to be created. Users who are allowed to use Bumblebee need to be added to the group:

sudo groupadd bumblebee
sudo gpasswd -a $USER bumblebee

To run Bumblebee after installing it system-wide, run:

sudo bumblebeed --daemon
optirun -- <application>
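
For example, to run glxgears on the discrete card:

optirun glxgears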

For more information, try --help on either of the two binaries.

Interesting Links

bumblebee's People

Contributors

amonakov, archangegabriel, bluca, kroki, lekensteyn, m42a, ralfjung, samsagax, ssaavedra, thefrozenfire, thulinma, timgates42, z0rc


bumblebee's Issues

Use D-Bus to send messages to desktop applications

In order to get an indicator applet that works well, it would be nice to launch a D-Bus service and send messages through it. This could be helpful for integration into any desktop environment. Messages that could be sent include:

  • Card state: ON or OFF (on every change, and also on request so the indicator can initialize itself at launch)
  • Command lines launched through optirun:
    • the application name or command line
    • the PID of the process
    • whether the process was launched through VGL or run with the fallback (and whether the card state changed)
  • Redirection of some warnings and errors

All of these messages could be surfaced as desktop notifications if the user has enabled the corresponding indicator options. If the state of the card changes, the applet's icon would change as well.

From what I have heard and read about IPC between daemons and the desktop, D-Bus is best suited for this. Many distributions already use D-Bus for communication between desktop applications that need to share data. A minimal sketch of emitting such a signal is shown below.
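
A minimal sketch of what emitting such a signal could look like with GLib's GDBus API (GLib is already a build dependency); the object path, interface and signal names are illustrative assumptions, not an existing Bumblebee interface:

/* gcc dbus-notify.c `pkg-config --cflags --libs gio-2.0` */
#include <gio/gio.h>

int main(void) {
  GError *error = NULL;
  GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &error);
  if (!bus) {
    g_printerr("bus: %s\n", error->message);
    return 1;
  }
  gboolean card_on = TRUE;
  /* Path, interface and signal names are hypothetical, for illustration only */
  g_dbus_connection_emit_signal(bus, NULL, "/org/bumblebee",
                                "org.bumblebee.Daemon", "CardStateChanged",
                                g_variant_new("(b)", card_on), &error);
  g_dbus_connection_flush_sync(bus, NULL, NULL);
  g_object_unref(bus);
  return error ? 1 : 0;
}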

Integration of configurable options in configure script

We provide a great level of customizability through the options in bb; can those be integrated into the ./configure script?

All CONF_* options in configure.ac and Makefile.am need a corresponding configure option. The most important ones (a hypothetical invocation is sketched after this list):

  • CONF_PID_FILE: if empty no pidfile is created
  • CONF_DRIVER
  • CONF_LDPATH: currently only needed for DRIVER=nvidia
  • CONF_MODPATH: currently only needed for DRIVER=nvidia
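
What such an invocation might look like (the --with-* flag names are made up for illustration; none of them exist yet):

./configure --with-pid-file=/var/run/bumblebeed.pid --with-driver=nvidia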

Q: Upgrade path (instructions) from previous Bumblebee

Lekensteyn posted about Bumblebee rewrite being available on the testing PPA and I am very eager to test. I am currently on:

% bumblebee --version
Bumblebee version 2.4.0~git65b382e

% uname -a
Linux stan-inspiron 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 20:28:43 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

so I am wondering: how do I go about installing 3.x? Do I simply add the testing PPA and perform a [dist-]upgrade? Furthermore, once the installation is complete, where are the relevant configuration files in case I want to tinker with Bumblebee/bbswitch?

I am the sort of person who doesn't like legacy files hanging around; would it be best to --purge my current install and then install from the testing PPA (following a reboot)?

% dpkg-query -s virtualgl
Version: 2.3.1-1~oneiricppa3

% dpkg-query -s nvidia-current 
Version: 290.10-0ubuntu1~oneiric~xup1

% dpkg-query -s xserver-xorg-video-nouveau
Package `xserver-xorg-video-nouveau' is not installed and no info is available.

EDIT: Reading through the docs I stumbled upon this:

  • the video drivers must be unloadable. This means that it should not be
    compiled into the kernel. When using dynamically built modules like nvidia,
    be sure that the toolchain used for building the kernel matches the one that
    is used for building the module. This means that Ubuntu mainline kernels do
    not work and will cause crashes.

Does that mean vanilla Ubuntu installs with nvidia-current as a DKMS module will not work?

Upgraded to Bumblebee 3.0 stable via PPA - Fan TOO NOISY

I emailed this issue, including a bug report, as requested on https://github.com/Bumblebee-Project/Bumblebee/wiki/Reporting-Issues
I thought I would post it here too, as the noise of the fan rings in my ears...

Distro:
Linux Mint 12

Kernel:
$ uname -a
Linux XG 3.0.0-12-generic #20-Ubuntu SMP Fri Oct 7 14:56:25 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

Manufacturer and model:
Dell Inspiron 15R (N5110)
note:
the command mentioned here didn't work for me:
https://github.com/Bumblebee-Project/Bumblebee/wiki/Reporting-Issues

$ for keyword in baseboard-manufacturer baseboard-product-name baseboard-version system-manufacturer system-product-name system-version bios-vendor bios-version bios-release-date; do
printf "%-22s: " "$keyword";
sudo dmidecode -s "$keyword";
done


I upgraded to the latest Bumblebee 3.0 stable via PPA through Update Manager. All went fine; then I ran:

$ optirun firefox
Xlib: extension "GLX" missing on display ":8".

I found the clue to the problem here:
https://github.com/Bumblebee-Project/Bumblebee/wiki/Supported-drivers

I needed to do the following to make libGL work:

  1. open /etc/bumblebee/bumblebee.conf as admin
  2. edit line 22 so that it reads
    Driver=nvidia

then do the following, to be sure:

After installation, allow yourself to use Bumblebee (replace $USER by your username):

sudo usermod -a -G bumblebee $USER

Then, reboot.


Tests:

$ glxspheres
Polygons in scene: 62464
Visual ID of window: 0x93
Context is Direct
OpenGL Renderer: Mesa DRI Intel(R) Sandybridge Mobile
56.909574 frames/sec - 52.008522 Mpixels/sec
59.973454 frames/sec - 54.808540 Mpixels/sec
59.971668 frames/sec - 54.806908 Mpixels/sec
59.472910 frames/sec - 54.351103 Mpixels/sec
59.981566 frames/sec - 54.815954 Mpixels/sec
59.975305 frames/sec - 54.810232 Mpixels/sec
59.966931 frames/sec - 54.802578 Mpixels/sec
59.983675 frames/sec - 54.817881 Mpixels/sec
59.978178 frames/sec - 54.812858 Mpixels/sec
59.974598 frames/sec - 54.809585 Mpixels/sec
59.975927 frames/sec - 54.810800 Mpixels/sec
59.976484 frames/sec - 54.811310 Mpixels/sec

$ optirun glxspheres
Polygons in scene: 62464
Visual ID of window: 0x21
Context is Direct
OpenGL Renderer: GeForce GT 525M/PCI/SSE2
83.493122 frames/sec - 76.302695 Mpixels/sec
89.495882 frames/sec - 81.788497 Mpixels/sec
90.311080 frames/sec - 82.533489 Mpixels/sec
92.954501 frames/sec - 84.949260 Mpixels/sec
97.630015 frames/sec - 89.222118 Mpixels/sec
90.991800 frames/sec - 83.155586 Mpixels/sec
91.404266 frames/sec - 83.532530 Mpixels/sec
88.815331 frames/sec - 81.166555 Mpixels/sec
89.083120 frames/sec - 81.411281 Mpixels/sec
89.972485 frames/sec - 82.224055 Mpixels/sec


Now I notice that even though it's running a few degrees cooler, after a while (5 minutes or so, when the temperature drops below 59 degrees) the fan starts to make an intermittent whirring sound, then a constant whirr: very noisy, too noisy. I suspect the power management features...

So I ran, as advised on the bbswitch page:

$ dmesg | grep -C 10 bbswitch:
[ 14.845278] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input16
[ 14.845590] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input17
[ 14.845663] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input18
[ 16.015338] r8169 0000:05:00.0: eth0: link down
[ 16.015345] r8169 0000:05:00.0: eth0: link down
[ 16.015587] ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 16.322042] ppdev: user-space parallel port driver
[ 16.554974] init: failsafe main process (923) killed by TERM signal
[ 16.555357] init: apport pre-start process (1435) terminated with status 1
[ 16.560324] init: apport post-stop process (1464) terminated with status 1
[ 16.754854] bbswitch: Found integrated VGA device 0000:00:02.0: SB.PCI0.GFX0
[ 16.754866] bbswitch: Found discrete VGA device 0000:01:00.0: SB.PCI0.PEG0.PEGP
[ 16.754984] bbswitch: detected an Optimus _DSM function
[ 16.754992] bbswitch: Succesfully loaded. Discrete card 0000:01:00.0 is on
[ 16.818983] vboxdrv: Found 8 processor cores.
[ 16.819385] vboxdrv: fAsync=0 offMin=0x1a6 offMax=0x5421
[ 16.819435] vboxdrv: TSC mode is 'synchronous', kernel timer mode is 'normal'.
[ 16.819438] vboxdrv: Successfully loaded version 4.1.2_Ubuntu (interface 0x00190000).
[ 16.825871] vboxpci: IOMMU not found (not registered)
[ 16.878937] Bluetooth: Core ver 2.16
[ 16.878953] NET: Registered protocol family 31
[ 16.878954] Bluetooth: HCI device and connection manager initialized
[ 16.878956] Bluetooth: HCI socket layer initialized
[ 16.878957] Bluetooth: L2CAP socket layer initialized
[ 16.879090] Bluetooth: SCO socket layer initialized
[ 16.880473] Bluetooth: RFCOMM TTY layer initialized
[ 16.880476] Bluetooth: RFCOMM socket layer initialized
[ 16.880477] Bluetooth: RFCOMM ver 1.11
[ 16.880536] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
[ 16.880538] Bluetooth: BNEP filters: protocol multicast
[ 16.983018] [TTM] Finalizing pool allocator.
[ 16.983050] [TTM] Zone kernel: Used memory at exit: 0 kiB.
[ 16.991403] [drm] Module unloaded
[ 16.992156] bbswitch: disabling discrete graphics
[ 16.992373] bbswitch: Result of Optimus _DSM call: 11000059
[ 16.992401] pci 0000:01:00.0: PCI INT A disabled
[ 17.008283] pci 0000:01:00.0: Refused to change power state, currently in D0
[ 17.008638] pci 0000:01:00.0: power state changed by ACPI to D3
[ 17.076715] Adding 275452k swap on /dev/mapper/cryptswap1. Priority:-1 extents:1 across:275452k
[ 17.592938] r8169 0000:05:00.0: eth0: link up
[ 17.593263] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 18.513234] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro,commit=0
[ 19.071859] init: plymouth-stop pre-start process (1849) terminated with status 1
[ 20.299247] hda-intel: spurious response 0x0:0x0, last cmd=0x4f2d00
[ 20.299256] hda-intel: spurious response 0x0:0x0, last cmd=0x4f2d00


The fan noise problem occurs with or without ppa:ubuntu-x-swat/x-updates.
The machine is too noisy to use...
Any ideas how to fix this?

Bumblebeed -> Bumblebee

To avoid confusion, we should stick with "bumblebee" instead of "bumblebeed". We can keep the filenames (bumblebeed.c) intact, but there are possibly still references to the bumblebeed program which should be renamed to bumblebee. Thoughts?

Can't start bumblebeed daemon

I am trying to start optirun on my Debian machine, but I fail to see where the issue is:

$ sudo /usr/bin/bumblebeed --daemon -vv --driver nvidia --driver-module nvidia -x /etc/bumblebee/xorg.conf.nvidia
$ echo $?
1

OK, so apparently there is an issue somewhere, but some output would be nice. Staring at the messages log I can see:

Jan 3 18:54:27 maester bumblebeed: Skipping switcheroo PM method because it is not explicitly selected in the configuration.
Jan 3 18:54:27 maester bumblebeed: No switching method available. The dedicated card will always be on.
Jan 3 18:54:27 maester bumblebeed: Process modinfo started, PID 7685.

Is there a very, very verbose mode to see what is going on?

Thanks

Tearing when playing videos

Distribution: Arch Linux (3.2.1-1-ARCH)
optirun (Bumblebee) 3.0-2-ge37ae26
Nvidia driver: 290.10
GPU: GeForce GT 555M
baseboard-manufacturer: CLEVO CO.
baseboard-product-name: W150HRM
baseboard-version : N/A
system-manufacturer : CLEVO CO.
system-product-name : W150HRM
system-version : N/A
bios-vendor : American Megatrends Inc.
bios-version : 4.6.4
bios-release-date : 06/28/2011

I get a lot of tearing when playing any video. The problem happens when playing a video using either mplayer or Adobe Flash. It's probably a vsync issue. I've tried to enable and disable vsync from Nvidia settings which didn't help. Without optirun there's still some tearing. With optirun the tearing is so bad videos are almost unwatchable. I've tried different settings in mplayer, like -vsync and -vo gl.

Support driver-specific conffiles?

The nvidia driver needs some tweaks to make it work, namely setting MODULE_PATH, NV_LIBRARY_PATH and possibly DRIVER_NAME. These settings are closely related to the DRIVER setting. If the user decides to switch between drivers, he/she would have to either point the --conf option at a different file or modify all values in the conffile.

To overcome this, we could read additional conffiles, say bumblebee-DRIVER.conf or bumblebee.conf.DRIVER (I prefer the first). A hypothetical example is sketched below. Thoughts?
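
A hypothetical bumblebee-nvidia.conf for illustration (the file does not exist yet; the values are taken from the configure example in the README above):

# /etc/bumblebee/bumblebee-nvidia.conf
DRIVER_NAME=nvidia
MODULE_PATH=/usr/lib/nvidia-current/xorg,/usr/lib/xorg/modules
NV_LIBRARY_PATH=/usr/lib/nvidia-current:/usr/lib32/nvidia-current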

Option for setting the PM method

The driver can be autodetected but can also be configured. Let's do the same for the PM method, which could have the following options (a configuration example follows the list):

  • auto
  • bbswitch
  • switcheroo
  • none
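
For example, in bumblebee.conf this could look like the following (PM_METHOD is the name used in a later issue on this page; treat the exact key as an assumption):

PM_METHOD=auto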

Bumblebee 3.0 doesn't work (Error loading or GLX)

Hello everyone,
Today I did a fresh install of Bumblebee 3.0 (so no uninstall was needed to update). After the installation finished and the user had been added to the bumblebee group, I tried to start optirun, and this is what happens:

"With Driver="
[aliasbody@BodyAsus ~]$ optirun -vv glxgears
[DEBUG]Reading file: /etc/bumblebee/bumblebee.conf
[DEBUG]Process /sbin/modinfo started, PID 3055.
[DEBUG]Hiding stderr for execution of /sbin/modinfo
[DEBUG]SIGCHILD received, but wait failed with No child processes
[DEBUG]Process /sbin/modinfo started, PID 3056.
[DEBUG]Hiding stderr for execution of /sbin/modinfo
[DEBUG]SIGCHILD received, but wait failed with No child processes
[DEBUG]Detected nouveau driver
[DEBUG]Active configuration:
[DEBUG] bumblebeed config file: /etc/bumblebee/bumblebee.conf
[DEBUG] X display: :8
[DEBUG] LD_LIBRARY_PATH: /usr/lib/nvidia-bumblebee:/usr/lib32/nvidia-bumblebee
[DEBUG] Socket path: /var/run/bumblebee.socket
[DEBUG] VGL Compression: proxy
[DEBUG]optirun version 3.0 starting...
[INFO]Response: Yes. X is active.

[INFO]Running application through vglrun.
[DEBUG]Process vglrun started, PID 3065.
Xlib: extension "GLX" missing on display ":8".
Error: couldn't get an RGB, Double-buffered visual
[DEBUG]SIGCHILD received, but wait failed with No child processes
[DEBUG]Socket closed.
[DEBUG]Killing all remaining processes.

"With Driver=nvidia"
[aliasbody@BodyAsus ~]$ optirun -vv glxgears
[DEBUG]Reading file: /etc/bumblebee/bumblebee.conf
[INFO]Configured driver: nvidia
[DEBUG]Skipping auto-detection, using configured driver 'nvidia'
[DEBUG]Process /sbin/modinfo started, PID 3228.
[DEBUG]Hiding stderr for execution of /sbin/modinfo
[DEBUG]SIGCHILD received, but wait failed with No child processes
[DEBUG]Active configuration:
[DEBUG] bumblebeed config file: /etc/bumblebee/bumblebee.conf
[DEBUG] X display: :8
[DEBUG] LD_LIBRARY_PATH: /usr/lib/nvidia-bumblebee:/usr/lib32/nvidia-bumblebee
[DEBUG] Socket path: /var/run/bumblebee.socket
[DEBUG] VGL Compression: proxy
[DEBUG]optirun version 3.0 starting...
[ERROR]The Bumblebee daemon has not been started yet or the socket path /var/run/bumblebee.socket was incorrect.
[DEBUG]Socket closed.
[ERROR]Could not connect to bumblebee daemon - is it running?

"With Driver=nouveau"
[aliasbody@BodyAsus ~]$ optirun -vv glxgears
[DEBUG]Reading file: /etc/bumblebee/bumblebee.conf
[INFO]Configured driver: nouveau
[DEBUG]Skipping auto-detection, using configured driver 'nouveau'
[DEBUG]Active configuration:
[DEBUG] bumblebeed config file: /etc/bumblebee/bumblebee.conf
[DEBUG] X display: :8
[DEBUG] LD_LIBRARY_PATH: /usr/lib/nvidia-bumblebee:/usr/lib32/nvidia-bumblebee
[DEBUG] Socket path: /var/run/bumblebee.socket
[DEBUG] VGL Compression: proxy
[DEBUG]optirun version 3.0 starting...
[INFO]Response: Yes. X is active.

[INFO]Running application through vglrun.
[DEBUG]Process vglrun started, PID 3390.
Xlib: extension "GLX" missing on display ":8".
Error: couldn't get an RGB, Double-buffered visual
[DEBUG]SIGCHILD received, but wait failed with No child processes
[DEBUG]Socket closed.
[DEBUG]Killing all remaining processes.

The package has been installed directly from the AUR:
http://aur.archlinux.org/packages.php?ID=49469

Thanks in Advance,
Luis Da Costa

Use bumblebeed everyday

Hello,

I use bumblebee every day to work on some OpenGL 4.2 programs, but I see that the project has not been very active for the past month.

But I've seen bumblebeed :D
Does it work? And can I use it on Ubuntu or Arch Linux?

Thanks for bumblebee!!

Can't use bumblebee from 32bit subsystem anymore

I am using a 64-bit system with a bundled 32-bit subsystem. I have bumblebee installed in both, but only one 64-bit kernel.
I used to start the daemon on the 64-bit system and run:

optirun 64bit_application
schroot -p -- optirun 32bit_application

for 64-bit and 32-bit applications respectively, and everything worked perfectly up to bumblebee 3.0.

Now the 64-bit optirun is OK, but the 32-bit one no longer works. Whenever I use optirun in 32-bit I receive the following error:

schroot -p -- optirun glxinfo
[ERROR]The Bumblebee daemon has not been started yet or the socket path /var/run/bumblebee.socket was incorrect.
[ERROR]Could not connect to bumblebee daemon - is it running?

I am in the bumblebee group, and even when I hard-link the subsystem's /var/run to the main system's /var/run, so that bumblebee.socket can be seen in the 32-bit subsystem, the error remains the same :(.
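
One way to share the socket with the chroot is a bind mount instead of a hard link; with schroot this is normally configured in its fstab file (the exact path /etc/schroot/default/fstab depends on the chroot profile in use, so treat this as a sketch):

# /etc/schroot/default/fstab
/var/run  /var/run  none  rw,bind  0  0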

Port some parts to Glib

There are some parts that would be worth porting to the GLib library; that would make the code cleaner, shorter and more portable.

Such parts could be:

  • Configuration structure
  • Configuration parsing
  • Socket communication
  • Process spawning

I'll start experimenting with this in a new branch and stage it for 3.1 (or even later). (I won't be available until February.) A sketch of the configuration-parsing part is below.
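
As an illustration of the configuration-parsing item, a minimal sketch using GLib's GKeyFile (the group and key names are assumptions about bumblebee.conf's layout):

#include <glib.h>

int main(void) {
  GError *err = NULL;
  GKeyFile *kf = g_key_file_new();
  if (!g_key_file_load_from_file(kf, "/etc/bumblebee/bumblebee.conf",
                                 G_KEY_FILE_NONE, &err)) {
    g_printerr("config: %s\n", err->message);
    g_error_free(err);
    return 1;
  }
  /* Group/key names assumed for illustration */
  gchar *driver = g_key_file_get_string(kf, "bumblebeed", "Driver", NULL);
  g_print("Driver=%s\n", driver && *driver ? driver : "(auto-detect)");
  g_free(driver);
  g_key_file_free(kf);
  return 0;
}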

vdpau

What about VDPAU? Do you plan to add VDPAU support to bumblebeed? If yes, I'll be happy. :)

Can't use on Samsung qx412

Nvidia 520m

lspci |grep nVidia
01:00.0 3D controller: nVidia Corporation Device 1050 (rev a1)

lsmod|grep nvidia
nvidia 12059207 0

./bumblebeed
[ERROR]No nVidia graphics card found, quitting.

Old version:
/usr/local/bin/optirun glxinfo
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: VirtualGL
server glx version string: 1.4
server glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_SGI_make_current_read, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SUN_get_transparent_index, GLX_ARB_create_context
client glx vendor string: VirtualGL
client glx version string: 1.4
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_SGI_make_current_read, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SUN_get_transparent_index, GLX_ARB_create_context
GLX version: 1.4
GLX extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_SGI_make_current_read, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SUN_get_transparent_index, GLX_ARB_create_context
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GT 520M/PCI/SSE2
OpenGL version string: 4.2.0 NVIDIA 290.10
OpenGL shading language version string: 4.20 NVIDIA via Cg compiler
OpenGL extensions:

installer does not add user to the bumblebee group

I just tried installing bumblebee v3.0-1~oneiricppa2 and it did not add me to the bumblebee group, so I couldn't run anything using optirun except as root.

I fixed it by adding myself manually with "usermod -a -G bumblebee rocko" and logging back in, but didn't bumblebee use to do this upon install?

slow performance in bumblebee?

I just installed bumblebee 3.0-1~oneiricppa2 and tried 'optirun glxspheres'. Some sample performance results I got are:

  • proxy: 88-92 fps
  • jpeg: 114-117 fps
  • yuv: 97-120 fps
  • xv: 65-68 fps

But with ironhide I was getting around 150 fps when using proxy, and I'm pretty sure I was already using the VirtualGL build from the Bumblebee project (because its version is higher). Are there any settings that might explain the lower framerate in bumblebee vs ironhide?
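
For reference, the compression method can be selected per run; if your optirun build supports the -c option (check optirun --help), the back-ends above can be compared like this:

optirun -c proxy glxspheres
optirun -c jpeg glxspheres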

System V init

Is this daemon only for systemd?

What about the default sysvinit?

optirun doesn't work anymore since v3.0

Hi,

I've tried the new version today, but optirun doesn't work for me.
bbswitch seems to work, which is cool.

You can see the result below:
optirun glxspheres
Polygons in scene: 62464
Xlib: extension "GLX" missing on display ":8".
ERROR (596): Could not obtain RGB visual with requested properties

and my relevant syslog:
http://pastebin.com/5jLBJUkN

Bumblebee doesn't work, but nvidia card in use

$  optirun -vv glxspheres
[DEBUG]Reading file: /usr/local/etc/bumblebee/bumblebee.conf
[WARN]Could not open configuration file: /usr/local/etc/bumblebee/bumblebee.conf
[WARN]Using default configuration
[DEBUG]Detected nouveau driver
[DEBUG]Active configuration:
[DEBUG] bumblebeed config file: /usr/local/etc/bumblebee/bumblebee.conf
[DEBUG] X display: :8
[DEBUG] LD_LIBRARY_PATH: 
[DEBUG] Socket path: /var/run/bumblebee.socket
[DEBUG] VGL Compression: proxy
[DEBUG]optirun version  starting...
[INFO]Response: Yes. X is active.

[INFO]Running application through vglrun.
[DEBUG]Process vglrun started, PID 17119.
Polygons in scene: 62464
Visual ID of window: 0x21
Context is Direct
OpenGL Renderer: Gallium 0.4 on llvmpipe (LLVM 0x209)

bbswitch shows the card is being used by the nouveau driver (while it should use nvidia), and I can't turn it off.

[VGL] ERROR: Could not open display :8 if a program forks and exits optirun

Hello,

Today I got bumblebee updated to v3.0 and there seems to be a problem. Although optirun works for simple apps like glxgears and so on, I have trouble running some games with wine: I get the error "[VGL] ERROR: Could not open display :8.". Other minor wine apps work with optirun just fine. Power management also seems to be working now. My laptop is a Dell Inspiron N5110.
Xorg.8.log: http://pastebin.com/CFJLbyFU
syslog: http://pastebin.com/HvUEvp8Q

uname -a: 3.2.0-10-generic-pae #17-Ubuntu SMP Thu Jan 19 21:44:50 UTC 2012 i686 i686 i386 GNU/Linux

Thank you.

Error installing bumblebee on Debian Squeeze

Hello
I'm trying to install bumblebee on Debian Squeeze (amd64), following the instructions on the Debian wiki: http://wiki.debian.org/Bumblebee
I installed VirtualGL and bbswitch, enabled the squeeze-backports repository, and ran
sudo apt-get install -t squeeze-backports nvidia-glx linux-headers-$(uname -r) nvidia-kernel-dkms

But when I use this instruction from the wiki:
sudo update-alternatives --set glx /usr/lib/mesa-diverted

I get
update-alternatives: error: no alternatives for glx.

What am I doing wrong? Am I missing something? I don't get any errors during the installation of the nvidia-glx package...

If I launch glxgears I get:
Xlib: extension "GLX" missing on display ":0.0".
Error: couldn't get an RGB, Double-buffered visual

Thanks for your help

vga_switcheroo detection needs improvement

vga_switcheroo only works with nouveau. Newer Optimus models also need a patch for it to work, and for vga_switcheroo to work after suspend, yet another patch is needed. Adding this up, vga_switcheroo detection needs to be improved in order to overcome these compatibility issues.
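
For context, the kernel exposes vga_switcheroo through debugfs; with debugfs mounted, the state can be inspected and switched manually (as root):

cat /sys/kernel/debug/vgaswitcheroo/switch
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch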

Support for AMD hybrid technology (PowerXpress)

It's confirmed that the concept behind Bumblebee also works for AMD hybrid graphics: http://forums.gentoo.org/viewtopic-t-909802.html (via http://phoronix.com/forums/showthread.php?68327-Bumblebee-Has-Tumbleweed-For-NVIDIA-Optimus-On-Linux&p=247659#post247659)

For supporting AMD hybrids, we need to:

  • modify the PCI Bus ID detection to check for both the NVIDIA and AMD vendor IDs (add an extra field to the bb_status struct)
  • modify documentation, texts, comments to refer to "discrete video card" or "discrete %s video card" instead of "nvidia card". Replace "Optimus" by "Hybrid Graphics" where applicable
  • adjust the switching methods in switch/ (mainly switcheroo) to detect radeon drivers (bbswitch won't detect a card and does not get loaded anyway)
  • separate xorg error log analysis?
  • add xorg.conf for radeon (and fglrx), extend bumblebee.conf
  • Look for ways to extend bbswitch supporting AMD, possibly helpful: http://git.kernel.org/?p=linux/kernel/git/next/linux-next-history.git;a=blob;f=drivers/gpu/drm/radeon/radeon_atpx_handler.c;hb=HEAD
  • (edit if I forgot something)

I don't have AMD hardware to play with, but it shouldn't be difficult to extend bbswitch for AMD PowerXpress. The other changes aren't difficult either. A sketch of the vendor-ID detection is below.
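
A minimal sketch of the first item: detecting discrete-GPU candidates by PCI class and vendor ID via sysfs (0x10de is NVIDIA's vendor ID, 0x1002 is AMD/ATI's; the bb_status struct change itself is not shown):

#include <stdio.h>
#include <dirent.h>

#define PCI_VENDOR_NVIDIA 0x10de
#define PCI_VENDOR_AMD    0x1002

static unsigned read_hex(const char *path) {
  FILE *f = fopen(path, "r");
  unsigned v = 0;
  if (f) { fscanf(f, "%x", &v); fclose(f); }
  return v;
}

int main(void) {
  DIR *d = opendir("/sys/bus/pci/devices");
  struct dirent *e;
  char path[512];
  if (!d) return 1;
  while ((e = readdir(d))) {
    if (e->d_name[0] == '.') continue;
    snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/class", e->d_name);
    unsigned classcode = read_hex(path) >> 8;  /* 0x0300 = VGA, 0x0302 = 3D */
    if (classcode != 0x0300 && classcode != 0x0302) continue;
    snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/vendor", e->d_name);
    unsigned vendor = read_hex(path);
    if (vendor == PCI_VENDOR_NVIDIA || vendor == PCI_VENDOR_AMD)
      printf("%s: discrete candidate, vendor 0x%04x\n", e->d_name, vendor);
  }
  closedir(d);
  return 0;
}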

Create a menu item in Nautilus which uses optirun

Hi,

My idea is to create a new menu item in Nautilus, which makes it possible to open a file/binary directly via optirun.
Example: Right click on the file, select 'Open with NVIDIA Optimus' below the normal 'Open' menu item.

I think it is relatively easy to make and easy to use for the end user.

Good luck!

Kind regards,
Melroy van den Berg

Instructions for installing nvidia blob

Can someone play with the installer from nvidia.com and make a /usr/lib/nvidia-bumblebee directory containing all required files? Possibly helpful are these results of (un)installing nvidia.

This is especially needed for distros like Debian, OpenSUSE and Fedora, which do not have something like Ubuntu's alternatives system for drivers.

What about license?

I see you use the GPL license in bumblebeed and bbswitch (see the beginning of the files).
If you want to use the GPL, shouldn't you have a LICENSE or COPYING file?

I'm not in the project, but I think you should use a more permissive license than the GPL: at least the LGPL, and if possible a Boost, MIT or BSD license. Otherwise, any project that uses bbswitch or bumblebeed code is obliged to adopt the GPL or LGPL as well.

If you use MIT or BSD (I don't know about Boost), a program that uses your code must at least include your name!

Why umask(023)?

bb_chgid() in bumblebeed.c sets a umask of 023. How did you decide on 023? For clarity, this means that newly created files do not get the group write bit, and the world gets neither the write nor the execute bit. A worked example is below.
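
A short standalone illustration of what umask(023) does to typical creation modes (not the daemon's code):

#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>

int main(void) {
  umask(023);  /* clear group-write (020), other-write and other-execute (003) */
  /* open() requests 0666; the kernel applies 0666 & ~023 = 0644 (rw-r--r--) */
  int fd = open("demo.txt", O_CREAT | O_WRONLY, 0666);
  if (fd >= 0) { /* ... */ }
  /* mkdir() requests 0777; the result is 0777 & ~023 = 0754 (rwxr-xr--) */
  mkdir("demo.dir", 0777);
  return 0;
}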

ldpath autodetect

As the ldpath is only needed when not using nouveau, and the path may differ across distros, I suggest the following changes:

  • Only set the ld path when not using nouveau.

  • On start, attempt to autodetect ld path(s), if not set in config AND not using nouveau by checking:

    /usr/lib/nvidia-current
    /usr/lib64/nvidia-current
    /usr/lib32/nvidia-current
    /usr/lib/nvidia
    /usr/lib64/nvidia
    /usr/lib32/nvidia
    /usr/lib/nvidia-bumblebee
    /usr/lib64/nvidia-bumblebee
    /usr/lib32/nvidia-bumblebee

Then concatenate the ones that contain files into the used ldpath.

This should provide out-of-the-box compatibility for most distros, I think. (A sketch of the detection loop is at the end of this issue.)

Thoughts?


Current status:

  • Only set the ld path when not using nouveau. - Done
  • At compile-time, attempt to autodetect ld path(s), if not set in nvidia part of config - Planned for 3.1 if no better solution surfaces
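
A minimal sketch of the proposed start-time detection, under the assumption that "contains files" means a non-empty directory (only a few of the candidate paths are listed to keep it short):

#include <stdio.h>
#include <string.h>
#include <dirent.h>

static const char *candidates[] = {
  "/usr/lib/nvidia-current", "/usr/lib32/nvidia-current",
  "/usr/lib/nvidia",         "/usr/lib/nvidia-bumblebee",
  NULL
};

static int dir_has_files(const char *path) {
  DIR *d = opendir(path);
  struct dirent *e;
  int found = 0;
  if (!d) return 0;
  while ((e = readdir(d)))
    if (e->d_name[0] != '.') { found = 1; break; }
  closedir(d);
  return found;
}

int main(void) {
  char ldpath[1024] = "";
  for (int i = 0; candidates[i]; i++) {
    if (!dir_has_files(candidates[i])) continue;
    if (*ldpath) strcat(ldpath, ":");
    strcat(ldpath, candidates[i]);  /* the candidate list is known to fit */
  }
  printf("LD_LIBRARY_PATH=%s\n", ldpath);
  return 0;
}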

Could not Open Nvidia-Settings

Updated to version 3.0.

With or without bumblebeed running, I cannot open nvidia-settings properly!
On the terminal, if I type optirun nvidia-settings -c :8
I get the message "You do not appear to be using Nvidia drivers...."

Was this disabled in this release?

Improvements for options parsing

Currently, the option parsing goes like this:

  1. parse commandline options
  2. parse config file (if any)
  3. parse commandline options (overriding config files)

This seems a bit excessive; a proposed alternative (see the sketch after this list):

  1. parse commandline options, keeping track of the values set
  2. parse config file, without overriding options set in the commandline
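
A minimal sketch of the bookkeeping this needs: remember which options came from the command line and let the config file fill only the rest (the names are illustrative, not the daemon's actual structures):

#include <stdbool.h>
#include <stdio.h>

struct options {
  char driver[32];
  bool driver_from_cli;  /* set while parsing argv */
};

static void set_driver(struct options *o, const char *v, bool from_cli) {
  if (!from_cli && o->driver_from_cli)
    return;              /* the config file must not override the command line */
  snprintf(o->driver, sizeof o->driver, "%s", v);
  if (from_cli)
    o->driver_from_cli = true;
}

int main(void) {
  struct options o = { "", false };
  set_driver(&o, "nvidia", true);    /* step 1: command line */
  set_driver(&o, "nouveau", false);  /* step 2: config file, ignored */
  printf("driver = %s\n", o.driver); /* prints "nvidia" */
  return 0;
}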

bumblebee3 doesn't enable bbswitch

Hi, first: great work on bb3! Shipping with bbswitch will make everyone happier! :)

I've been using bbswitch for a while now without the nvidia driver, just to disable the card. I grabbed bumblebee3 now and rebooted. bbswitch loads, but doesn't disable the card.

I tried rmmod nvidia and get this error:
ERROR: Module nvidia is in use by [permanent]

I also get this error after shutting down bumblebeed. Any suggestions on how to once again be allowed to turn the card off?

bbswitch on fails - optirun hangs with 100% cpu usage

Hi

I had optirun working yesterday; today, after a clean boot (on a laptop, I ran it out of power yesterday), it won't work.

optirun --debug glxgears hangs with 100% CPU usage with these messages:
[DEBUG]Active configuration:
[DEBUG] bumblebeed config file: /etc/bumblebee/bumblebee.conf
[DEBUG] X display: :8
[DEBUG] LD_LIBRARY_PATH: /usr/lib/nvidia-current:/usr/lib32/nvidia-current
[DEBUG] Socket path: /var/run/bumblebee.socket
[DEBUG] VGL Compression: proxy
[DEBUG]optirun version 3.0 starting...

I figured I'd try doing it manually and executed echo "ON" > /proc/acpi/bbswitch, which hung as well. Here is dmesg | grep bbswitch:
[ 5.298012] bbswitch: Found integrated VGA device 0000:00:02.0: SB.PCI0.GFX0
[ 5.298021] bbswitch: Found discrete VGA device 0000:01:00.0: SB.PCI0.PEG0.PEGP
[ 5.298091] bbswitch: detected an Optimus _DSM function
[ 5.298097] bbswitch: Succesfully loaded. Discrete card 0000:01:00.0 is on
[ 5.404598] bbswitch: disabling discrete graphics
[ 5.404747] bbswitch: Result of Optimus _DSM call: 11000041
[ 91.949351] bbswitch: enabling discrete graphics
[ 531.305904] bbswitch: enabling discrete graphics

Alternative for determining "ConnectedMonitor" for xorg.conf.nvidia

The nvidia blob is so stupid that it cannot always detect the correct monitor setting. Therefore, something like the output below must be used to determine the monitor setting:

$ /usr/lib/nvidia-current/bin/nvidia-xconfig --query-gpu-info --nvidia-cfg-path=/usr/lib/nvidia-current
Number of GPUs: 1

GPU #0:
  Name      : GeForce GT 425M
  PCI BusID : PCI:1:0:0

  Number of Display Devices: 1

  Display Device 0 (CRT-0):
     No EDID information available.

The above output came from a Clevo B7130 which has an external VGA and HDMI connector and of course an internal connector to the LCD display.

At first I thought of some hardcoded table in the nvidia driver, but then it occurred to me that it seeks access to /dev/nvidiactl, which is a kernel interface. Searching for "DFP" in the ACPI Spec v4 pointed me at sect. B.4.2 _DOD (Enumerate All Devices Attached to the Display Adapter).

For the above output (Clevo B7130), the \_SB.PCI0.GFX0._DOD method returns a package containing three values:

  • 0x80010100 (binary 1000 0000 0000 0001 0000 0001 0000 0000)
  • 0x80010300 (binary 1000 0000 0000 0001 0000 0011 0000 0000)
  • 0x80010410 (binary 1000 0000 0000 0001 0000 0100 0001 0000)

An explanation of all bits can be found in table B-2 on page 699 of the ACPI Spec v4. Explanation of the above values (bits are counted from right and zero-based):

  • 0 - 3: display index (which is 0 in this case)
  • 4 - 7: Display Port attachment. In this case, 1 and 0 (this is possibly not useful for us)
  • 8 - 11: Display Type, for my case:
    • 0001 (1): "VGA, CRT or VESA compatible analog monitor" (external VGA?)
    • 0011 (3): "External Digital Monitor (see note 1)" (external HDMI?)
    • 0100 (4): "Internal/Integrated Digital Flat Panel (see note 2)" (internal LCD?)

Notes (taken from the ACPI Spec v4, page 701):

  1. An “External Digital Monitor” is an external display device attachable via a user-accessible connector standard (e.g. DFP* or DVI* compatible monitors).
  2. An “Internal Flat Panel” is a non-detachable fixed-pixel display device, including a backlight, that is internally associated, without user-accessible connectors, with the video chip (e.g. TFT LCD via a TMDS* or LVDS* interface).

Now, how could this be useful? I'll check it later today. (A small decoding sketch for these values follows.)
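
A tiny sketch that decodes the bit fields described above (field boundaries as given; the full meaning of each field is in table B-2 of the spec):

#include <stdio.h>

static void decode_dod(unsigned int v) {
  unsigned int index = v & 0xF;         /* bits 0-3: display index */
  unsigned int port  = (v >> 4) & 0xF;  /* bits 4-7: display port attachment */
  unsigned int type  = (v >> 8) & 0xF;  /* bits 8-11: display type */
  printf("0x%08x: index=%u port=%u type=%u\n", v, index, port, type);
}

int main(void) {
  decode_dod(0x80010100);  /* type 1: VGA/CRT/analog monitor (external VGA?) */
  decode_dod(0x80010300);  /* type 3: external digital monitor (HDMI?) */
  decode_dod(0x80010410);  /* type 4: internal/integrated flat panel (LCD?) */
  return 0;
}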

Problems in power management at boot

I've noticed that if the primary X server is probing devices and the card is switched after the X server has started but before it has completed all detections (i.e. before it gets you to the login screen), it will freeze with a "service unavailable" error.

I wonder if we could get our hands on, and test, Dave Airlie's changes to the X server that support hot-plugging.

I see two workarounds for this issue: wait a certain time after the daemon starts before switching the card, or wait until the display manager kicks in or the X server stops probing all devices.

Sometimes my secondary card won't turn off after I close all optirun instances

Just forwarding the issue filed by @Danger89 in my personal repo.

Hi,

I use bumblebeed daemon and bbswitch to turn off my GT555M nVidia Optimus card. Some of the time when I disable optirun the card won't turn off again. I need to type manually: sudo bumblebeed --daemon to safe power again/disable my card.

Here are some log files for debugging:

kern.log: http://pastebin.com/ZQR4HXPp
syslog: http://pastebin.com/BdfPqkth
dmesg: http://pastebin.com/cqUYhEH4
Xorg.0.log: http://pastebin.com/VzTiMibR

Ask if you need more information.

Good luck!

Kind regards,
Melroy van den Berg

KeepUnusedXServer timeout

I have an idea about stop_secondary: maybe there could be a timeout for which the secondary X server remains running, after which it is stopped if unused. A hypothetical configuration sketch is below.
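
What this might look like in bumblebee.conf (KeepUnusedXServer appears in the issue title; the timeout key is entirely hypothetical):

KeepUnusedXServer=true
# hypothetical: stop the unused secondary X server after this many seconds
XServerIdleTimeout=60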

Exit code of a forked and executed process is not preserved

The functions for executing a process do not preserve the exit code of the process. Example with optirun:

optirun sh -c 'echo 33;exit 22'; echo $?

Expected value:

33
22

Actual value:

33
0

or:

[ERROR]Could not connect to /var/run/bumblebee.socket! Error: No such file or directory
[ERROR]Could not connect to bumblebee daemon - is it running?
33
1

If --failsafe was given and the server was not available, the result is correct. A sketch of propagating the code is below.
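
A minimal sketch of the fix on the process-execution side: wait for the child and propagate its exit status (a standalone illustration, not the actual optirun code):

#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
  pid_t pid = fork();
  if (pid == 0) {
    execlp("sh", "sh", "-c", "echo 33; exit 22", (char *)NULL);
    _exit(127);                     /* exec failed */
  }
  int status;
  if (waitpid(pid, &status, 0) < 0)
    return EXIT_FAILURE;
  if (WIFEXITED(status))
    return WEXITSTATUS(status);     /* exits with 22 for the example above */
  if (WIFSIGNALED(status))
    return 128 + WTERMSIG(status);  /* common shell convention for signals */
  return EXIT_FAILURE;
}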

Parsing order and autodetection

GH-#26 raised some issues regarding driver auto-detection and the parsing of the driver-specific configuration.

If no driver is specified, the driver-specific configuration won't be available, because the driver is not autodetected until the configuration file has been parsed.
If the driver is autodetected, then the configuration for each driver is not parsed.

It's a clear ordering problem, so I'm proposing to split the parsers this way:

  • parse command line options -C and -v
  • autodetect driver (with check_secondary())
  • parse configuration file
  • parse the rest of command line options.

If no one has objections, I'll implement it together with the new GLib parser and close both issues.

Replace VGL with a Compositing Window Manager.

so, this is my idea for a possible v4.0 or something:

I'm sure you guys have heard of and/or know about compositing window managers.
Any application (one at a time, max) can register itself with an X server as such a window manager.
Usually this stuff is used for fancy 3D effects and all that crap we don't really care about; think compiz, for example.

The cool part is that in this "mode", all windows get their own off-screen framebuffer. We can obviously still do this on the nvidia card even though there is no real desktop active.

I want to connect to :8 as such a window manager, and instead of drawing all windows to :8, draw them to :0.

To compare:

  • VirtualGL works by capturing the window first, encoding it (to JPEG, for example), transferring it over a socket, decoding it, and finally drawing it to the screen. Input goes similarly, but in the reverse direction. That means the screen image data is copied at least SIX times: twice in socket buffers, twice for encoding, twice for decoding. More if they didn't bother doing this efficiently; and since it is meant to be used over a network, their biggest bottleneck is probably network speed, not drawing speed.
  • windump works by capturing the window, transferring it over a socket, then drawing it to the screen. No encoding/decoding here, so it's obviously quite a bit faster.
  • My proposed method would not do any capturing (after all, the framebuffer is readily available) and would directly copy the window framebuffer to the other screen. One copy operation. In theory that's at least twice as fast as anything else out there; in practice it will most likely be somewhere around 10-100 times as fast. Proper framerates might even be a possibility here! ;-)

Note: this all sounds great, but it is an amount of work not to be underestimated. We should plan and spread out the tasks properly if we want this to work well. A rough sketch of the XComposite side follows below.

Thoughts?
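
For a feel of the XComposite part, a rough standalone sketch that redirects windows on :8 off-screen, grabs one window's backing pixmap, and pushes it to :0. Note this sketch still does a readback round trip (XGetImage/XPutImage), which the proposal above would have to avoid:

/* gcc bridge.c -lX11 -lXcomposite */
#include <stdio.h>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>

int main(void) {
  Display *src = XOpenDisplay(":8");  /* server on the discrete card */
  Display *dst = XOpenDisplay(":0");  /* the visible desktop */
  int ev, err;
  if (!src || !dst || !XCompositeQueryExtension(src, &ev, &err)) {
    fprintf(stderr, "displays or Composite extension unavailable\n");
    return 1;
  }
  /* Redirect all top-level windows on :8 to off-screen storage */
  XCompositeRedirectSubwindows(src, DefaultRootWindow(src),
                               CompositeRedirectManual);
  Window root, parent, *kids;
  unsigned int nkids;
  XQueryTree(src, DefaultRootWindow(src), &root, &parent, &kids, &nkids);
  if (!nkids) return 0;
  Window w = kids[nkids - 1];         /* topmost client window */
  XWindowAttributes wa;
  XGetWindowAttributes(src, w, &wa);
  Pixmap back = XCompositeNameWindowPixmap(src, w);
  /* Naive copy across connections: read back, then push to :0 */
  XImage *img = XGetImage(src, back, 0, 0, wa.width, wa.height,
                          AllPlanes, ZPixmap);
  Window out = XCreateSimpleWindow(dst, DefaultRootWindow(dst), 0, 0,
                                   wa.width, wa.height, 0, 0, 0);
  XMapWindow(dst, out);
  GC gc = XCreateGC(dst, out, 0, NULL);
  XPutImage(dst, out, gc, img, 0, 0, 0, 0, wa.width, wa.height);
  XFlush(dst);
  sleep(5);                           /* keep the window up briefly */
  return 0;
}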

Make status reporting return an exit code describing the result

optirun without arguments currently returns the status. This should not be the case; instead, --status should report the status. Furthermore, the exit code is not useful for distinguishing the state.

What about the following (sketched as an enum after the list):

  • 0 (EXIT_SUCCESS): the daemon and X are available
  • 1 (EXIT_FAILURE): the daemon is unavailable
  • 2: the daemon is available, but X is not started
  • 3: the daemon is available, but an error has occurred
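
The same proposal expressed as a C enum (the identifier names are made up for illustration):

enum bb_status_code {
  BB_STATUS_OK        = 0,  /* EXIT_SUCCESS: daemon and X available */
  BB_STATUS_NO_DAEMON = 1,  /* EXIT_FAILURE: daemon unavailable */
  BB_STATUS_NO_X      = 2,  /* daemon available, X not started */
  BB_STATUS_ERROR     = 3   /* daemon available, but an error occurred */
};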

Review of log messages

We currently use four log levels:

  • LOG_ERR
  • LOG_WARNING
  • LOG_INFO
  • LOG_DEBUG

(with verbosity = VERB_NONE disabling all of these messages). I was surprised that I still got a lot of noise after adding a single -v option (which increments the verbosity from VERB_WARN to VERB_INFO). For example, why would it be helpful to a user that "the mainloop was entered"? Such a message is more suitable for LOG_DEBUG.

Automatically modprobe bbswitch if available during auto-detection?

If one has bbswitch installed, they likely want to use it, correct? So, what about changing the behavior of PM_METHOD=auto to always modprobe bbswitch whenever possible?
If it cannot be loaded, it won't do any harm. The only possible conflict is if the user previously used vga_switcheroo without turning the card ON again, but this chance is so small that the benefits of autoloading bbswitch during detection outweigh this disadvantage, imo.
