rustaudio / cpal

Cross-platform audio I/O library in pure Rust

License: Apache License 2.0


cpal's Introduction

CPAL - Cross-Platform Audio Library


Low-level library for audio input and output in pure Rust.

This library currently supports the following:

  • Enumerate supported audio hosts.
  • Enumerate all available audio devices.
  • Get the current default input and output devices.
  • Enumerate known supported input and output stream formats for a device.
  • Get the current default input and output stream formats for a device.
  • Build and run input and output PCM streams on a chosen device with a given stream format (a minimal sketch follows this list).
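
The sketch below strings these capabilities together: it picks the default host and output device, queries the default stream config, and builds an output stream whose callback writes silence. It assumes a recent cpal release (around 0.15) and that the default output format is f32; the callback contents are illustrative only.

    use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Default host and default output device.
        let host = cpal::default_host();
        let device = host.default_output_device().ok_or("no output device available")?;

        // The device's default output stream config (assumed here to be f32).
        let config = device.default_output_config()?.config();

        // Build and run an output stream whose callback fills every buffer with silence.
        let stream = device.build_output_stream(
            &config,
            |data: &mut [f32], _: &cpal::OutputCallbackInfo| {
                for sample in data.iter_mut() {
                    *sample = 0.0;
                }
            },
            |err| eprintln!("stream error: {err}"),
            None, // no build timeout
        )?;
        stream.play()?;
        std::thread::sleep(std::time::Duration::from_secs(1));
        Ok(())
    }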

Currently, supported hosts include:

  • Linux (via ALSA or JACK)
  • Windows (via WASAPI by default, see ASIO instructions below)
  • macOS (via CoreAudio)
  • iOS (via CoreAudio)
  • Android (via Oboe)
  • Emscripten

Note that on Linux, the ALSA development files are required. These are provided as part of the libasound2-dev package on Debian and Ubuntu distributions and alsa-lib-devel on Fedora.

Compiling for Web Assembly

If you are interested in using CPAL with WASM, please see this guide in our Wiki which walks through setting up a new project from scratch.

Feature flags for audio backends

Some audio backends are optional and will only be compiled when the corresponding feature flag is enabled (a sketch of selecting such a host at runtime follows the list).

  • JACK (on Linux): jack
  • ASIO (on Windows): asio
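
As an illustration of how an optional backend is selected at runtime, the sketch below prefers JACK and falls back to the default host. It assumes cpal was built with the jack feature; otherwise HostId::Jack does not exist and the function will not compile.

    #[cfg(target_os = "linux")]
    fn pick_host() -> cpal::Host {
        // HostId::Jack is only available when the `jack` feature is enabled.
        cpal::host_from_id(cpal::HostId::Jack).unwrap_or_else(|_| cpal::default_host())
    }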

Oboe can use either a shared or a static runtime. The static runtime is used by default, but activating the oboe-shared-stdcxx feature makes it use the shared runtime, which requires libc++_shared.so from the Android NDK to be present during execution.

ASIO on Windows

ASIO is an audio driver protocol by Steinberg. While it is available on multiple operating systems, it is most commonly used on Windows to work around limitations of WASAPI including access to large numbers of channels and lower-latency audio processing.

CPAL allows for using the ASIO SDK as the audio host on Windows instead of WASAPI.

Locating the ASIO SDK

The location of the ASIO SDK is exposed to CPAL via the CPAL_ASIO_DIR environment variable.

The build script will try to find the ASIO SDK by following these steps in order:

  1. Check whether CPAL_ASIO_DIR is set and, if so, use that path to locate the SDK.
  2. Check whether the ASIO SDK is already installed in the temporary directory; if so, use it and set CPAL_ASIO_DIR to the output of std::env::temp_dir().join("asio_sdk").
  3. If the ASIO SDK is not already installed, download it from https://www.steinberg.net/asiosdk and install it in the temporary directory. CPAL_ASIO_DIR will then be set to the output of std::env::temp_dir().join("asio_sdk").

In an ideal situation you don't need to worry about this step.

Preparing the build environment

  1. bindgen, the library used to generate bindings to the C++ SDK, requires clang. Download and install LLVM from here under the "Pre-Built Binaries" section. The version as of writing this is 17.0.1.

  2. Add the LLVM bin directory to a LIBCLANG_PATH environment variable. If you installed LLVM to the default directory, this should work in the command prompt:

    setx LIBCLANG_PATH "C:\Program Files\LLVM\bin"
    
  3. If you don't have any ASIO devices or drivers available, you can download and install ASIO4ALL. Be sure to enable the "offline" feature during installation despite what the installer says about it being useless.

  4. Our build script assumes that Microsoft Visual Studio is installed when the host OS for compilation is Windows. The script will try to find vcvarsall.bat and execute it with the right host and target machine architecture, regardless of the Microsoft Visual Studio version. If this process fails (which is unlikely), you can locate vcvarsall.bat manually and execute it with your machine architecture as an argument; the script will detect this and skip the step.

    A manually executed command example for 64 bit machines:

    "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" amd64
    

    For more information please refer to the documentation of `vcvarsall.bat`.

  5. Select the ASIO host at the start of your program with the following code:

    let host;
    #[cfg(target_os = "windows")]
    {
       host = cpal::host_from_id(cpal::HostId::Asio).expect("failed to initialise ASIO host");
    }

    If you run into compilation errors produced by asio-sys or bindgen, make sure that CPAL_ASIO_DIR is set correctly and try cargo clean. A sketch that falls back to the default host when ASIO is unavailable follows these steps.

  6. Make sure to enable the asio feature when building CPAL:

    cargo build --features "asio"
    

    or if you are using CPAL as a dependency in a downstream project, enable the feature like this:

    cpal = { version = "*", features = ["asio"] }

Updated as of ASIO version 2.3.3.
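
As a hedged extension of step 5, the sketch below opens the default output device on the ASIO host; it assumes the asio feature is enabled and is not code from the repository.

    #[cfg(target_os = "windows")]
    fn asio_output_device() -> Result<cpal::Device, Box<dyn std::error::Error>> {
        use cpal::traits::HostTrait;

        // Fails with HostUnavailable if the ASIO backend cannot be initialised.
        let host = cpal::host_from_id(cpal::HostId::Asio)?;
        let device = host
            .default_output_device()
            .ok_or("the ASIO host reports no output device")?;
        Ok(device)
    }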

Cross compilation

When Windows is both the host and the target OS, the build script of asio-sys supports all cross-compilation targets that are supported by the MSVC compiler. An exhaustive list of combinations can be found here, with the addition of the undocumented arm64, arm64_x86, arm64_amd64 and arm64_arm targets. (5.11.2023)

It is also possible to compile Windows applications with ASIO support on Linux and macOS.

For both platforms the common way to do this is to use the MinGW-w64 toolchain.

Make sure that you have included the MinGW-w64 include directory in your CPLUS_INCLUDE_PATH environment variable. Also make sure that LLVM is installed and that its include directory is included in your CPLUS_INCLUDE_PATH environment variable.

Example for macOS for the target of x86_64-pc-windows-gnu where mingw-w64 is installed via brew:

export CPLUS_INCLUDE_PATH="$CPLUS_INCLUDE_PATH:/opt/homebrew/Cellar/mingw-w64/11.0.1/toolchain-x86_64/x86_64-w64-mingw32/include"

cpal's People

Contributors

alexmoon, ameknite, artemgr, dependabot[bot], derekdreery, dheijl, enfipy, est31, freesig, generalelectrix, gentoid, hybrideidolon, ishitatsuyuki, james7132, jesnor, joshuabatty, kawogi, luni-4, maxded, mbodmer, michaelhills, mitchmindtree, mockersf, msiglreith, retep998, rfwatson, simlay, tomaka, xmac94x, yamadapc

cpal's Issues

OS X Support

Just wanted to make a cover-all issue for this. Known dependencies include:

#52

Latest coreaudio-rs (>=0.5.0) should have no problems supporting this at this point, but we need to put in the hours to actually fix everything up again :)

Expose raw platform-specific data structures

On Linux, for example, I can use the ALSA fd for async operation with mio, allowing me to have a single thread listening for commands to switch voices etc and also feed samples.

Unable to open slave

ALSA could not open the slave device and the thread panicked. Installing the pulseaudio-alsa package on Arch Linux solved the problem.

latency in playback?

I'm writing a program to run faust dsp code (which compiles to c++) from inside rust. This appears to be working fine, but I'm running into a latency issue.

You can change parameters to the faust dsp code to alter its output. I've got a simple patch that just outputs white noise with a single parameter, Volume.

What I'm seeing is that when I change the volume, it takes several seconds for the alteration to propagate out to the sound that I'm hearing from my laptop speakers. I'm using voice.append_data to hand over my dsp output values to cpal.

I suspect that cpal has a several second buffer internally which is getting filled up; when I change the volume it happens right away but that goes onto the back of the cpal queue and it takes a while for the change to be heard.

If this is the case, I don't really see how to configure it in cpal. Maybe there's something I'm missing? For this DSP stuff I really want to have latency as low as possible.

Running beep.rs example with ALSA only loads data to buffer once

No modifications were made to the code. When I put println! statements in the stream.for_each closure, they are only called once. The issue persists with both version 0.4.0 and 0.4.1 of the crate.

No panics or anything, and the program is properly blocking on the event_loop.run() call. Really not sure what could be going wrong. I can post more specific details of my system if that would help, but I'm fairly new to rust/audio stuff so not sure what would be relevant.

Introduce a `Device` API.

If we do wish to support both input and output as mentioned in #116, it might be a nice idea to replace the current EndPoint API with a Device API. A Device may contain any number of Input and/or Output channels, perhaps offering methods to enumerate them (i.e. device.inputs(), device.outputs()) along with providing methods for checking sample rate and format compatibility.

A side-note: At an even higher level, PortAudio provides a way to enumerate HostAPIs at runtime (i.e. DirectSound/ASIO), however we'll probably do better to just feature-gate different backends for now, as is currently the case. I personally haven't seen any DAWs that select a different audio backend at runtime (or ever come across the need for it), but then again I've almost solely used OS X for audio dev. Would be interesting to get other opinions on this.
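
For illustration, a rough sketch of the shape such a Device API could take; these names are hypothetical and not part of cpal.

    // Hypothetical sketch only; none of these items exist in cpal.
    trait DeviceApi {
        type Input;
        type Output;
        type Format;

        fn inputs(&self) -> Vec<Self::Input>;
        fn outputs(&self) -> Vec<Self::Output>;
        fn supports_format(&self, format: &Self::Format) -> bool;
    }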

thread '<main>' panicked at 'called `Result::unwrap()` on an `Err` value: "PoisonError ...

Greetings,

Just copy and pasting the issue I put forward on rodio:

When trying to use this library, I end up with an error.

Here is the code that's causing the panic, inspired by one of your examples (https://github.com/tomaka/rodio/blob/master/examples/music_wav.rs)...


    use std::io::BufReader;
    let endpoint = rodio::get_default_endpoint().expect("Failed to retrieve default endpoint.");
    let sink = rodio::Sink::new(&endpoint);

    let file = std::fs::File::open(filepath).expect("file missing!");
    let source = rodio::Decoder::new(BufReader::new(file)).unwrap();
    sink.append(source);
    sink.sleep_until_end();


This is currently residing in a trigger system, so it currently occurs when the player steps on a particular tile. When I step on that tile, I get the following error:


thread '<main>' panicked at 'Unknown SubFormat GUID returned by GetMixFormat: GUID { Data1: 1048576, Data2: 128, Data3: 43520, Data4: [0, 56, 155, 113, 164, 120, 36, 176] }', C:\Users\<USERNAME>\.cargo\registry\src\github.com-88ac128001ac3a9a\cpal-0.2.11\src\wasapi/mod.rs:191
note: Run with `RUST_BACKTRACE=1` for a backtrace.
thread '<main>' panicked at 'called `Result::unwrap()` on an `Err` value: "PoisonError { inner: .. }"', ../src/libcore\result.rs:746
stack backtrace:
   0:     0x7ff7be789b5b - std::rt::lang_start::h5b0863080165c75e
   1:     0x7ff7be788f5b - std::rt::lang_start::h5b0863080165c75e
   2:     0x7ff7be77d34f - std::sys_common::unwind::begin_unwind_inner::h39d40f52add53ef7
   3:     0x7ff7be77e33d - std::sys_common::unwind::begin_unwind_fmt::h64c0ff793199cc1b
   4:     0x7ff7be78456b - rust_begin_unwind
   5:     0x7ff7be78f1b5 - core::panicking::panic_fmt::h73bf9d7e8e891a73
   6:     0x7ff7be66ae6f - main
   7:     0x7ff7be6676a6 - main
   8:     0x7ff7be64f47d - __ImageBase
   9:     0x7ffb3afbc6bb - _C_specific_handler
  10:     0x7ffb50859b7c - _chkstk
  11:     0x7ffb507e595b - RtlUnwindEx
  12:     0x7ffb3afbc604 - _C_specific_handler
  13:     0x7ffb50859afc - _chkstk
  14:     0x7ffb507e4fe8 - RtlImageNtHeaderEx
  15:     0x7ffb507e6c93 - RtlRaiseException
  16:     0x7ffb4cf61f27 - RaiseException
  17:     0x7ff7be782517 - std::io::stdio::_print::h03730948b3f63a9b
  18:     0x7ff7be77d4a9 - std::sys_common::unwind::begin_unwind_inner::h39d40f52add53ef7
  19:     0x7ff7be77e33d - std::sys_common::unwind::begin_unwind_fmt::h64c0ff793199cc1b
  20:     0x7ff7be777b3c - cpal::cpal_impl::Endpoint::get_supported_formats_list::h49aba8504a03d688
  21:     0x7ff7be7673b7 - rodio::engine::Engine::start::hb7ef03fed4b01782
  22:     0x7ff7be647c3c - __ImageBase
  23:     0x7ff7be788928 - std::rt::lang_start::h5b0863080165c75e
  24:     0x7ff7be7844d8 - std::sys_common::unwind::inner_try::h9eebd8dc83f388a6
  25:     0x7ff7be7886d7 - std::rt::lang_start::h5b0863080165c75e
  26:     0x7ff7be7af24f - __scrt_common_main_seh
                        at f:\dd\vctools\crt\vcstartup\src\startup\exe_common.inl:255
  27:     0x7ffb4dc08101 - BaseThreadInitThunk
thread panicked while panicking. aborting.
error: Process didn't exit successfully: `target\release\project_zed.exe` (exit code: 3221225477)

This panic is occurring lower than the caller level, so I'm not sure there is anything I can do with it at the user level.

I am running Windows 10 64-bit. If you need anything else, please ask.

Thanks,
Plasticcaz

crates.io libc vs rustc's libc

I just had a discussion with Yurume about some libc issues I was having where conflicts were arising between rustc's libc and the crates.io libc. He mentioned that the recommended crate to use is the crates.io libc.

I noticed that alsa-sys uses both #![feature(libc)] as well as libc = "*", though I'm pretty sure only the latter is necessary. I copied the #![feature(libc)] into my core_audio-sys (coreaudio bindings) crate which caused the issue - after I removed it and added libc = "*" to my toml it seems to work fine though. The issues were subtle, I'd get stuff like this:

 expected `*const libc::types::common::c95::c_void`,          
    found `*const libc::types::common::c95::c_void`           
(expected enum `libc::types::common::c95::c_void`,            
    found a different enum `libc::types::common::c95::c_void`)

Which tempted me to just use mem::transmute instead (which would have been incorrect).

This is mainly just a heads up in case someone finds themselves running into similar issues.

Creating Voice with custom format could trigger panic

When creating a new Voice in alsa, there are a lot of lines that use the expect function.
In one of my projects, rather than enumerating all formats I tried to guess the best format available by constructing one myself. Since Voice::new returns an option, I would expect that, given an invalid Format, I would receive Err(FormatNotSupported) rather than a panic from which I cannot recover.

Shouldn't some of these use try! rather than expect?

ALSA backend call to `libc::poll` does not wakeup when building a new voice.

If EventLoop::run is called (on a different thread) prior to building a new voice, the EventLoop thread becomes blocked here

                let ret = libc::poll(run_context.descriptors.as_mut_ptr(),
                                     run_context.descriptors.len() as libc::nfds_t,
                                     -1 /* infinite */);

and does not wakeup when a new voice is created, despite the build_voice method writing to the file descriptor created for the purpose of waking up poll here

 self.pending_trigger.wakeup();

If anyone has any ideas on what might be going on here that would be greatly appreciated! I'm unfamiliar with the libc::poll function and its usage so I'm learning as I go here.

I wonder if it has something to do with the descriptors buffer being empty, and as a result poll has not been told what "events" it should be waiting for? Currently, a descriptor is only specified if there was some Command to process:

    // process commands
    run_context.descriptors = vec![
        libc::pollfd {
            fd: self.pending_trigger.read_fd(),
            events: libc::POLLIN,
            revents: 0,
        },
    ];

Perhaps a descriptor of some sort should be specified regardless of whether or not some commands were processed? Edit: This does seem to fix the issue, though I'm unsure whether or not this is the "correct" fix. @tomaka are you familiar with the alsa backend at all? If so it would be great to get your thoughts.
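
A sketch of the fix being discussed, assuming the libc crate and a placeholder pending_trigger_fd: the wakeup descriptor is pushed unconditionally rather than only when commands were processed.

    fn poll_descriptors(pending_trigger_fd: libc::c_int, voice_fds: &[libc::pollfd]) -> Vec<libc::pollfd> {
        let mut descriptors = Vec::with_capacity(voice_fds.len() + 1);
        // Always listen on the wakeup fd so build_voice can interrupt the poll.
        descriptors.push(libc::pollfd {
            fd: pending_trigger_fd,
            events: libc::POLLIN,
            revents: 0,
        });
        // Then add the descriptors for the existing voices, if any.
        descriptors.extend_from_slice(voice_fds);
        descriptors
    }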

Allow customisation of the Voice output format

As far as I can tell, all three output backends support multiple output formats. However each one is currently only configured for a single output format. Since different audio sources will have different formats, this means any source not in the hardcoded format for the voice has to be converted. This can actually mean a loss of fidelity in some cases, since it's possible to get formats that have 32-bit integer samples and even 64-bit float samples.

If the backend doesn't support a format, the conversion can be done then. I'd rather have the backend do the conversion if the hardware doesn't support a format than be forced to lose fidelity because the format is hard-coded.

Related to #30

Move `SampleFormat` check out of the main stream loop

Currently CPAL requires matching on an UnknownTypeBuffer enum upon every iteration of the audio loop, despite being able to query compatible sample formats prior to running the loop using Voice::format.

The beep example demonstrates how each format might be handled in this manner:

    loop {
        match channel.append_data(32768) {
            cpal::UnknownTypeBuffer::U16(mut buffer) => {
                for (sample, value) in buffer.chunks_mut(format.channels.len()).zip(&mut data_source) {
                    let value = ((value * 0.5 + 0.5) * std::u16::MAX as f32) as u16;
                    for out in sample.iter_mut() { *out = value; }
                }
            },

            cpal::UnknownTypeBuffer::I16(mut buffer) => {
                for (sample, value) in buffer.chunks_mut(format.channels.len()).zip(&mut data_source) {
                    let value = (value * std::i16::MAX as f32) as i16;
                    for out in sample.iter_mut() { *out = value; }
                }
            },

            cpal::UnknownTypeBuffer::F32(mut buffer) => {
                for (sample, value) in buffer.chunks_mut(format.channels.len()).zip(&mut data_source) {
                    for out in sample.iter_mut() { *out = value; }
                }
            },
        }

        channel.play();
    }

In any serious audio application, the work contained in each branch would likely need to be abstracted into some function, where the application either:

  • Does all its work in one format and then converts the final buffer to the stream's sample format or
  • Remains entirely generic over the sample type, using some Sample trait.

In either case, we likely end up using some generic function within the branching like so:

    loop {
        match channel.append_data(32768) {
            cpal::UnknownTypeBuffer::U16(mut buffer) =>
                fill_buffer_with_data::<u16>(&mut buffer, &mut data_source),
            cpal::UnknownTypeBuffer::I16(mut buffer) =>
                fill_buffer_with_data::<i16>(&mut buffer, &mut data_source),
            cpal::UnknownTypeBuffer::F32(mut buffer) =>
                fill_buffer_with_data::<f32>(&mut buffer, &mut data_source),
        }

        channel.play();
    }

Considering we already know whether or not certain formats are supported before we start the stream, we should be able to move this branching to occur before the stream even begins:

    fn main() {
        let endpoint = cpal::get_default_endpoint().expect("Failed to get default endpoint");

        if run_stream::<f32>(&endpoint, &mut data_source).is_ok() {}
        else if run_stream::<i32>(&endpoint, &mut data_source).is_ok() {}
        else if run_stream::<i16>(&endpoint, &mut data_source).is_ok() {}
        // ... and so on for the remaining sample formats ...
        else {
            panic!("No compatible audio stream formats found for the device");
        }
    }

This could of course be refactored to only try different sample formats if some specific UnsupportedFormat error is returned, or by iterating on the EndPoint's supported formats, matching on them and attempting to run the stream that way.

I'm curious to get your thoughts on this, as the current in-loop matching seems unnecessary and calls into question whether the sample_format field in the Format struct yielded by endpoint.get_supported_formats_list() is useful at all.

Signed/unsigned is reversed

I believe signed and unsigned samples are handled in precisely the opposite way of how they should be handled. For instance, the beep example produces a very sharp sound, whereas it should produce a sine wave of exactly one frequency, which sounds very soft.

On the other hand, I modified the example to produce signed samples instead:

extern crate cpal;

fn main() {
    let mut channel = cpal::Voice::new();

    let amplitude = std::i16::MAX as f32;
    let mut data_source = (0u64..).map(|t| t as f32 * 0.03)
                                  .map(|t| (t.sin() * amplitude) as i16);

    loop {
        {
            let mut buffer: cpal::Buffer<i16> =
                channel.append_data(1, cpal::SamplesRate(44100), 32768);

            for (sample, value) in buffer.iter_mut().zip(&mut data_source) {
                *sample = value;
            }
        }

        channel.play();
    }
}

This again produces the sharp sound that sounds like a square wave. When the last two occurrences of i16 are replaced by u16 (effectively rendering a signed sine wave but casting it to u16), the sound sounds like a sine again. I verified the above code on Windows as well as Linux.

On a side note, when I ran the beep example on Windows I could sometimes hear large fluctuations in the frequency, even though it should be constant. The frequency would jump abruptly after a few seconds.

Rename `Voice` to `Stream`?

In my own experience, the term Voice is normally associated with higher level concepts. I.e. A polyphonic software music instrument might allow the user to set the max number of Voices that it can perform before a new note would override the oldest.

On the other hand, low-level input/outputs are normally referred to as Streams. See cubeb, portaudio, libsound.io, coreaudio; ASIO itself stands for Audio Stream Input/Output. I'm unsure of any examples in support of Voice.

Change volume level of Voice while playing it

So I came here out of a desire to enhance the rodio crate which one of my crates depends on. I'd like to add the missing implementation for set_volume on the Sink struct in Rodio. In order to do that though I need some way to modify the volume of a currently playing Voice in cpal. I'm not very well-versed in the internals of either library but a quick review of the Voice documentation here didn't show any obvious ways to change the volume of a playing Voice. Is there something I'm missing here or is that just not yet implemented in cpal?
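
In the absence of a volume API, one workaround is to scale samples in the code that fills the buffers, sharing the gain with the rest of the program through an atomic. This is a sketch under that assumption, not an existing cpal or rodio API; the names here are hypothetical.

    use std::sync::{
        atomic::{AtomicU32, Ordering},
        Arc,
    };

    /// Returns a shared volume handle and a function that applies it to a buffer.
    fn make_volume_control() -> (Arc<AtomicU32>, impl Fn(&mut [f32])) {
        let volume = Arc::new(AtomicU32::new(1.0f32.to_bits()));
        let for_callback = Arc::clone(&volume);
        let apply = move |buffer: &mut [f32]| {
            let gain = f32::from_bits(for_callback.load(Ordering::Relaxed));
            for sample in buffer.iter_mut() {
                *sample *= gain;
            }
        };
        (volume, apply)
    }

    // Elsewhere (e.g. from a UI thread): volume.store(0.5f32.to_bits(), Ordering::Relaxed);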

Support for Cross-Compilation

Currently cpal can't be cross-compiled for ARM (at least that's how I understand it).
To compile alsa-sys for ARM I had to export two environment variables (after already having Rust and Cargo set up to cross-compile), export PKG_CONFIG_PATH=$MY_ARM_SYSROOT/usr/lib/pkgconfig and export PKG_CONFIG_ALLOW_CROSS=1. After this I can compile alsa-sys, but if I cd out into cpal and try to compile, I get this message:

src/alsa/mod.rs:1:1: 1:31 error: can't find crate for `alsa` [E0463]
src/alsa/mod.rs:1 extern crate alsa_sys as alsa;
                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: aborting due to previous error
Could not compile `cpal`.

With --verbose:

       Fresh pkg-config v0.3.5
       Fresh lazy_static v0.1.15
       Fresh libc v0.1.10
       Fresh gcc v0.3.17
       Fresh winapi v0.2.4
   Compiling cpal v0.2.7 (file://$MY_HOME/Git/cpal)
     Running `rustc src/lib.rs --crate-name cpal --crate-type lib 
-C opt-level=3 
--out-dir $MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release 
--emit=dep-info,link --target arm-unknown-linux-gnueabi 
-C ar=arm-montavista-linux-gnueabi-ar 
-C linker=gcc-sysroot 
-L dependency=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release 
-L dependency=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps 
--extern libc=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps/liblibc-144c435538abd757.rlib 
--extern ole32=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps/libole32-5219fe2002394e46.rlib 
--extern winapi=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps/libwinapi-21b078e9a1931364.rlib 
--extern lazy_static=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps/liblazy_static-f3aa6dfcc7c157cc.rlib`
       Fresh ole32-sys v0.1.0
       Fresh ogg-sys v0.0.9
       Fresh vorbis-sys v0.0.8
       Fresh vorbisfile-sys v0.0.8
       Fresh vorbis v0.0.13
src/alsa/mod.rs:1:1: 1:31 error: can't find crate for `alsa` [E0463]
src/alsa/mod.rs:1 extern crate alsa_sys as alsa;
                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: aborting due to previous error
Could not compile `cpal`.

Caused by:
  Process didn't exit successfully: `rustc src/lib.rs --crate-name cpal --crate-type lib 
-C opt-level=3 --out-dir $MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release 
--emit=dep-info,link --target arm-unknown-linux-gnueabi 
-C ar=arm-montavista-linux-gnueabi-ar -C linker=gcc-sysroot 
-L dependency=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release 
-L dependency=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps 
--extern libc=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps/liblibc-144c435538abd757.rlib 
--extern ole32=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps/libole32-5219fe2002394e46.rlib 
--extern winapi=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps/libwinapi-21b078e9a1931364.rlib 
--extern lazy_static=$MY_HOME/Git/cpal/target/arm-unknown-linux-gnueabi/release/deps/liblazy_static-f3aa6dfcc7c157cc.rlib` (exit code: 101)

Observe that it doesn't seem to link alsa...
Note that compiling any other Rust application that doesn't have bindings to C libraries works just fine.

music example fails on OSX due to AudioUnit frame size

thread '<unnamed>' panicked at 'The number of input frames given differs from the number requested by the AudioUnit: 64 and 512 respectively', [src/coreaudio/mod.rs:103](https://github.com/tomaka/cpal/blob/6ffdcd5343fffabffd7472e60962a72405f52f93/src/coreaudio/mod.rs#L103)

Running Yosemite 10.10.4.

Similarly, the beep example sigsegvs during execution, only playing the beep for a very short period of time.

Allow waiting for playback to finish

I wrote a simple playback program that streams audio from a file to cpal, and then exits. When the number of samples is small enough (this happened to me for 44100 samples, one second of mono audio), all of the samples can be passed to append_data in one step. So the application dumped all of its data into the buffer, called voice.play(), and exited before the audio even had a chance to begin to play.

Therefore, it would be useful to have a method to poll whether playback is complete, or a method that blocks until this is the case.
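
As a stopgap until such an API exists, the program can estimate how long the queued audio lasts and sleep for that duration before exiting. A sketch, assuming the caller tracks how many interleaved samples it wrote:

    use std::{thread, time::Duration};

    /// Sleep long enough for `samples_written` interleaved samples to play out.
    fn wait_for_playback(samples_written: u64, channels: u64, sample_rate: u64) {
        let frames = samples_written / channels;
        let millis = frames * 1_000 / sample_rate;
        // A small extra margin covers the device's own buffering.
        thread::sleep(Duration::from_millis(millis + 100));
    }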

Use memory-mapped sample transfer on ALSA

This is good for reducing latency when generating samples. AFAICT only ALSA supports this natively, but it can be faked pretty easily on everything else by just exposing the &mut to an internal vector.

Example fails when using ALSA through PulseAudio

When ALSA is using PulseAudio, the reported maximum sample rate is garbage (4294967295) and the beep example fails with sample rate could not be set: "Invalid argument". (See e.g. here for the reported values) Setting the sample rate manually works flawlessly.

I'm not sure if this is really fixable or if the example should just use a common sample rate. This problem also makes rodio completely unusable on my system as it also always uses the maximum sample rate.
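
A possible workaround on the application side is to clamp the chosen rate to a common value inside the reported range instead of taking the maximum at face value; min and max here stand for the bounds from the enumerated format and are assumptions, not cpal API.

    fn choose_sample_rate(min: u32, max: u32) -> u32 {
        // Prefer a common rate; fall back to whatever the reported range allows.
        const PREFERRED: u32 = 44_100;
        PREFERRED.clamp(min, max)
    }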

Exposing the output device stream directly.

At the moment the way CPAL interfaces with the audio stream is via .append_data. .append_data takes channels, sample rate and maximum buffer size as arguments while allowing the user to use any sample format they wish. This is a useful, high-level approach, allowing the user to not worry about the underlying stream format if they don't want to and also allowing a dynamic stream format.

This can however require a conversion to take place between the stream format given and the output device's current stream format every time a buffer is requested (if any part of either stream format differs, that is). It is not immediately obvious that this conversion takes place or is even necessary from a user's perspective.

CPAL does not currently offer direct access to the pure device stream. Perhaps before exposing the sort of dynamic interface that is currently implemented, it could be a good idea to first provide the pure audio device stream itself. As CPAL aims to be a cross-platform audio library, it could be beneficial to first provide the direct stream for users who wish to gain as low-level access as possible before providing the dynamic abstraction on top.

I think there are a couple ways we could do this - the following is the most satisfying I could think of:

We could change append_data to provide direct access to the device's stream.

let mut buffer = voice.append_data();
buffer.fill(|samples: &mut[u16], num_channels: usize, sample_rate: u32| {
    // fill samples
});

The dynamic stream format style that is currently in use could then be implemented on top of this:

let mut buffer = voice.append_data().custom(channels, sample_rate, max_elements);
// fill buffer

@tomaka what are your thoughts? I'd be happy to implement this.

Consider not enumerating endpoint formats, and expose an API to set desired parameters instead?

Today I spent some time trying to consume this library from my emu project, a cross-platform set of libraries for writing emulators in Rust. Currently the audio stuff only works on OSX, as it depends on coreaudio-rs. The goal today was to add Windows and Linux support in one go by incorporating this library, as I'm much more in favor of wrapping a full-Rust library like this than a C library with Rust bindings, like portaudio. This would allow projects like my SNES audio unit emulator to be cross-platform, which would be quite rad :)

However, one roadblock I ran into fairly early on was that all of the supported formats for the endpoints enumerated by the WASAPI implementation were either 8000hz, 44100hz, or 48000hz. Since I need an endpoint that can support 32000hz, this wouldn't quite do. So I started digging around to see if I could find any existing audio resampling libraries in Rust, but to no avail.

However, I stumbled upon this discussion on reddit, and in particular, this quote:

The Windows API is more restrictive, so I designed cpal to return the list of supported formats one by one. Hardwares usually support a limited number of rates, channels and datatypes, so I thought that this would be a good design.

It's mentioned that the reason this library is enumerating endpoints at all was basically because of the WASAPI abstraction, which is limited to a few very specific audio formats when used in shared mode. This raises a couple questions for me, such as, why was shared mode used in the first place? Is this because it could potentially be more performant? Could DirectSound have been used instead?

It's also noted that the current scheme also breaks down a bit with ALSA, which can have endpoints that support resampling internally, so they report a very large number of supported formats.

Additionally, had the Windows implementation used DirectSound instead of WASAPI, a similar case to ALSA would've occurred; DirectSound also supports resampling internally, which means the number of supported formats for an endpoint would also potentially be quite large. I believe using WASAPI's exclusive mode would have also exhibited this issue with the current design. And while the Core Audio parts of this library for OSX are in sort of an un-maintained state, the same would certainly hold true for that API as well.

While I agree with the logic that the hardware itself might support a select few sample rates, it appears the OS API's are much less restrictive, and I think we should consider redesigning this API to reflect that.

Back to my particular case; it might not seem too bad implementing a resampler for my project, or even writing my own OS API abstractions to give me the flexibility I need, which includes changing sample rates etc on the fly (another thing the OS API's tend to support), but I imagine many other projects that want to consume this library will run into the same issue. Perhaps this means someone should rather be working on a good audio resampling library for Rust, but I see a shorter path to solve more problems by letting the underlying OS API's handle this instead, as they already can (and can be quite robust and reliable).

I'm not entirely sure how the API would look given such a fundamental change, but I'd like to try to open up the discussion at least and see what you think before I run off and try it and make a PR that may not have been welcome in the first place :)

beep example panics 'Failed to get default endpoint' on macOS release builds

I looked all through the implementation and couldn't figure out why cpal::get_default_endpoint would return None under release builds. The coreaudio backend always returns Some(Endpoint). I confirmed this with a bunch of printlns everywhere. The execution gets all the way down to event_loop.run(), but thread 'main' panicked at 'Failed to get default endpoint' just before it?

This happens on stable and nightly.

SamplesRate struct

@tomaka Just curious, do you have some plans for the SamplesRate struct? From what I can see it seems like it could just be a type alias instead of a struct? Just thought I'd check!

Also I think SampleRate (without the s) is the more stereotypical naming choice (or even SampleHz to be more precise).
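
For comparison, the two options written out; the newtype costs nothing at runtime but stops an unrelated u32 (say, a channel count) from being passed where a sample rate is expected. Names here are illustrative.

    // Option 1: a plain alias.
    type SampleRateAlias = u32;

    // Option 2: a newtype, as the struct is today.
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    struct SampleRate(pub u32);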

Hitting debug_assertion on windows

Running the beep example I get:

thread 'main' panicked at 'assertion failed: !buffer.is_null()', src\wasapi\voice.rs:388

(relevant code)
Sometimes it happens quite early after I start the program but there are also times where I have to wait 1-2 minutes.

Publish new alsa-sys version

This is a reminder. I can't do it right now because there's a verification when uploading and I'm not on linux.

Trouble linking to libvorbis

Having trouble running the examples, as libvorbis-sys seems to be failing to link correctly:

error: linking with `cc` failed: exit code: 1
note: "cc" '"-m64"' '"-L"' '"/usr/local/lib/rustlib/x86_64-apple-darwin/lib"' '"-o"' '"/Users/Mitch/Programming/Rust/cpal/target/examples/music"' '"/Users/Mitch/Programming/Rust/cpal/target/examples/music.o"' '"-Wl,-force_load,/usr/local/
lib/rustlib/x86_64-apple-darwin/lib/libmorestack.a"' '"-Wl,-dead_strip"' '"-nodefaultlibs"' '"/Users/Mitch/Programming/Rust/cpal/target/libcpal-1d70b17cbec23cb5.rlib"' '"/Users/Mitch/Programming/Rust/cpal/target/deps/libvorbis-bc75887c24f
02a58.rlib"' '"/Users/Mitch/Programming/Rust/cpal/target/deps/libvorbisfile-sys-fb2761e2f49190d1.rlib"' '"/Users/Mitch/Programming/Rust/cpal/target/deps/libcore_audio-sys-805984bac33e3a9b.rlib"' '"/Users/Mitch/Programming/Rust/cpal/target
/deps/libvorbis-sys-79f0144fa41bc2d2.rlib"' '"/Users/Mitch/Programming/Rust/cpal/target/deps/libogg-sys-70b8cdec80f70328.rlib"' '"/Users/Mitch/Programming/Rust/cpal/target/deps/liblibc-8d21de95f4de7169.rlib"' '"/usr/local/lib/rustlib/x86_
64-apple-darwin/lib/libstd-4e7c5e5c.rlib"' '"/usr/local/lib/rustlib/x86_64-apple-darwin/lib/libcollections-4e7c5e5c.rlib"' '"/usr/local/lib/rustlib/x86_64-apple-darwin/lib/libunicode-4e7c5e5c.rlib"' '"/usr/local/lib/rustlib/x86_64-apple-d
arwin/lib/librand-4e7c5e5c.rlib"' '"/usr/local/lib/rustlib/x86_64-apple-darwin/lib/liballoc-4e7c5e5c.rlib"' '"/usr/local/lib/rustlib/x86_64-apple-darwin/lib/liblibc-4e7c5e5c.rlib"' '"/usr/local/lib/rustlib/x86_64-apple-darwin/lib/libcore-
4e7c5e5c.rlib"' '"-L"' '"/Users/Mitch/Programming/Rust/cpal/target"' '"-L"' '"/Users/Mitch/Programming/Rust/cpal/target/deps"' '"-L"' '"/usr/local/lib"' '"-L"' '"/Users/Mitch/Programming/Rust/cpal/target/build/vorbis-sys-79f0144fa41bc2d2/
out"' '"-L"' '"/Users/Mitch/Programming/Rust/cpal/target/build/vorbisfile-sys-fb2761e2f49190d1/out"' '"-L"' '"/usr/local/lib/rustlib/x86_64-apple-darwin/lib"' '"-L"' '"/Users/Mitch/Programming/Rust/cpal/.rust/lib/x86_64-apple-darwin"' '"-
L"' '"/Users/Mitch/Programming/Rust/cpal/lib/x86_64-apple-darwin"' '"-L"' '"/Users/Mitch/.rust/lib/x86_64-apple-darwin"' '"-framework"' '"AudioUnit"' '"-framework"' '"AudioUnit"' '"-framework"' '"AudioUnit"' '"-framework"' '"AudioUnit"' '
"-framework"' '"CoreAudio"' '"-framework"' '"CoreAudio"' '"-logg"' '"-lc"' '"-lm"' '"-lSystem"' '"-lpthread"' '"-lc"' '"-lm"' '"-lcompiler-rt"'
note: ld: warning: directory not found for option '-L/Users/Mitch/Programming/Rust/cpal/.rust/lib/x86_64-apple-darwin'
ld: warning: directory not found for option '-L/Users/Mitch/Programming/Rust/cpal/lib/x86_64-apple-darwin'
ld: warning: ignoring file /usr/local/lib/libogg.dylib, missing required architecture x86_64 in file /usr/local/lib/libogg.dylib (2 slices)
Undefined symbols for architecture x86_64:
  "_oggpack_bytes", referenced from:
      __vorbis_unpack_comment in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-info.o)
      _vorbis_staticbook_unpack in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-codebook.o)
  "_oggpack_write", referenced from:
      _vorbis_book_encode in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-codebook.o)
      _floor1_encode in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-floor1.o)
      _floor1_pack in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-floor1.o)
      _mapping0_pack in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-mapping0.o)
      _mapping0_forward in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-mapping0.o)
      _res0_pack in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-res0.o)
  "_oggpack_look", referenced from:
      _decode_packed_entry_number in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-codebook.o)
  "_ogg_stream_packetout", referenced from:
      _ov_raw_seek in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_and_process_packet in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_headers in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __initial_pcmoffset in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_oggpack_read", referenced from:
      _vorbis_synthesis_idheader in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-info.o)
      __v_readstring in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-info.o)
      _vorbis_synthesis_headerin in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-info.o)
      __vorbis_unpack_info in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-info.o)
      __vorbis_unpack_comment in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-info.o)
      __vorbis_unpack_books in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-info.o)
      _vorbis_synthesis in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-synthesis.o)
      ...
"_ogg_page_bos", referenced from:
      _ov_raw_seek in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_and_process_packet in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_headers in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __initial_pcmoffset in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_oggpack_readinit", referenced from:
      _vorbis_synthesis_idheader in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-info.o)
      _vorbis_synthesis_headerin in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-info.o)
      _vorbis_synthesis in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-synthesis.o)
      _vorbis_packet_blocksize in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-synthesis.o)
  "_ogg_sync_init", referenced from:
      __ov_open1 in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_oggpack_writeinit", referenced from:
      _vorbis_block_init in libvorbis-sys-79f0144fa41bc2d2.rlib(lldb-fix-r-vorbis-block.o)
  "_ogg_sync_reset", referenced from:
      __seek_helper in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_ogg_stream_reset", referenced from:
      _ov_raw_seek in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_ogg_sync_clear", referenced from:
      _ov_clear in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_oggpack_writeclear", referenced from:
      _vorbis_block_clear in libvorbis-sys-79f0144fa41bc2d2.rlib(lldb-fix-r-vorbis-block.o)
  "_ogg_page_serialno", referenced from:
      _ov_raw_seek in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_and_process_packet in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_headers in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __lookup_page_serialno in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __add_serialno in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __initial_pcmoffset in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __get_prev_page_serial in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      ...
"_ogg_sync_pageseek", referenced from:
      __get_next_page in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_ogg_stream_reset_serialno", referenced from:
      _ov_raw_seek in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_and_process_packet in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_headers in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_ogg_sync_wrote", referenced from:
      __ov_open1 in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __get_data in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_ogg_stream_pagein", referenced from:
      _ov_raw_seek in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_and_process_packet in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __fetch_headers in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __initial_pcmoffset in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_oggpack_adv", referenced from:
      _decode_packed_entry_number in libvorbis-sys-79f0144fa41bc2d2.rlib(r-vorbis-codebook.o)
  "_ogg_page_granulepos", referenced from:
      __initial_pcmoffset in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __get_prev_page_serial in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_ogg_sync_buffer", referenced from:
      __ov_open1 in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      __get_data in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_ogg_page_eos", referenced from:
      _ov_raw_seek in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_ogg_stream_init", referenced from:
      __ov_open1 in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      _ov_raw_seek in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
  "_ogg_stream_clear", referenced from:
      _ov_clear in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
      _ov_raw_seek in libvorbisfile-sys-fb2761e2f49190d1.rlib(r-vorbisfile-vorbisfile.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

error: aborting due to previous error
Could not compile `cpal`

should I have a C library installed or something?

Support Input, Output and Duplex stream types

Currently CPAL is oriented towards supporting output streams, i.e. Voice -> EndPoint. This is understandable considering CPAL's user base seems to be largely game devs so far.

If we do wish to target a more general audio audience we should also aim to equally support Input and Duplex streams. We might be able to get away with only Input and Output if we provide a nice API for "zipping" or "sync"ing them for cases where they share the same device.

  • Input (implemented #201).
  • Output
  • Duplex? (device synchronised I/O, important for many real-time applications)

CPAL not outputting sound under Mac OS X x86_64.

This is the followup bug to RustAudio/rodio#94.
This is a MacBookPro Retina 15 Inch, Early 2013 with Intel Core i7 running OS X El Capitan.

CPAL does not crash, but the examples do not play a sound under my Mac OS X:

$ cargo run --release --example beep
    Finished release [optimized] target(s) in 0.0 secs
     Running `target/release/examples/beep`
thread 'main' panicked at 'Failed to get default endpoint', /Users/rustbuild/src/rust-buildbot/slave/stable-dist-rustc-mac/build/src/libcore/option.rs:715
stack backtrace:
   1:        0x1084a102a - std::sys::imp::backtrace::tracing::imp::write::hd3b65cdfe843284c
   2:        0x1084a27cf - std::panicking::default_hook::{{closure}}::hf2b7428652613d83
   3:        0x1084a2477 - std::panicking::default_hook::h5da8f27db5582938
   4:        0x1084a2c96 - std::panicking::rust_panic_with_hook::hcef1e67c646c6802
   5:        0x1084a2b34 - std::panicking::begin_panic::hc2e8ca89533cd10d
   6:        0x1084a2a52 - std::panicking::begin_panic_fmt::h60990696c3c3a88d
   7:        0x1084a29b7 - rust_begin_unwind
   8:        0x1084c4c40 - core::panicking::panic_fmt::h10231c789bd0e97d
   9:        0x1084c4cad - core::option::expect_failed::h77d0b34eebcbdfc8
  10:        0x10849808e - beep::main::h7bcc7202bef6bbee
  11:        0x1084a382a - __rust_maybe_catch_panic
  12:        0x1084a2f06 - std::rt::lang_start::h87cb84a8b6cb187e

$ cargo run --release --example enumerate
   Compiling cpal v0.4.4 (file:///private/tmp/cpal)
    Finished release [optimized] target(s) in 0.83 secs
     Running `target/release/examples/enumerate`
Endpoints:
1. Endpoint "Default AudioUnit Endpoint" Audio formats:
1.1. Format { channels: [FrontLeft, FrontRight], samples_rate: SamplesRate(44100), data_type: F32 }

beep example panics on windows

Here's the message:

thread '<main>' panicked at 'not yet implemented', C:\Users\Mariusz Ceier.cargo\registry\src\github.com-0a35038f75765ae4\cpal-0.1.2\src\wasapi/mod.rs:44

"not yet implemented" samples format is 32-bit.

Clarify whether or not a `Voice` should be paused following creation and ensure each backend abides by this.

Currently the beep example calls play after constructing a voice, implying that upon creation it is paused by default which seems totally reasonable.

However, on Linux a Voice immediately begins playing during construction whether or not play is called.

Perhaps we should clarify that voices should be paused upon creation in the build_voice docs and fix this implementation in the ALSA backend.

Remove the conversion system

After some quick conversation on IRC, it would be a good idea to move the samples conversion system out of cpal (in another library, or just ditch it) so that the only thing that cpal handles is managing the various platform-specific APIs.

Feel free to give some feedback about this.

Panic when running example on window 10

Running example/enumerate.rs
I got a panic like this:

thread '' panicked at 'Unknown data format returned by GetMixFormat: 3', C:\Users\Gigih Aji Ibrahim.cargo\registry\src\github.com-1ecc6299db9ec823\cpal-0.2.11\src\wasapi/mod.rs:197

running on windows 10, rustc 1.10.0-nightly (d91f8ab0f 2016-05-07)
