

NAudio


NAudio is an open source .NET audio library written by Mark Heath


Features

  • Play back audio using a variety of APIs
    • WaveOut
    • DirectSound
    • ASIO
    • WASAPI
  • Read audio from many standard file formats
    • WAV
    • AIFF
    • MP3 (using ACM, DMO or MFT)
    • G.711 mu-law and a-law
    • ADPCM, G.722, Speex (using NSpeex)
    • WMA, AAC, MP4 and others using Media Foundation
  • Convert between various forms of uncompressed audio
    • Change the number of channels - Mono to stereo, stereo to mono
    • Modify bit depth (8, 16, 24, 32 integer or 32 bit IEEE float)
    • Resample audio using a choice of resampling algorithms
  • Encode audio using any ACM or Media Foundation codec installed on your computer
    • Create MP3s (Windows 8 and above)
    • Create AAC/MP4 audio (Windows 7 and above)
    • Create WMA files
    • Create WAV files containing G.711, ADPCM, G.722, etc.
  • Mix and manipulate audio streams using a 32-bit floating mixing engine
    • construct signal chains
    • examine sample levels for the purposes of metering or waveform rendering
    • pass blocks of samples through an FFT for metering or DSP
    • delay, loop, or fade audio in and out
    • Perform EQ with a BiQuad filter (allowing low pass, high pass, peaking EQ, etc.)
    • Pitch shifting of audio with a phase vocoder
  • Record audio using a variety of capture APIs
    • WaveIn
    • WASAPI
    • ASIO
  • Record system audio with WASAPI Capture
  • Work with soundcards
    • Enumerate devices
    • Access soundcard controls and metering information
  • Full MIDI event model
    • Read and write MIDI files
    • Respond to received MIDI events
    • Send MIDI events
  • An extensible programming model
    • All base classes easily inherited from for you to add your custom components
  • Support for UWP (preliminary)
    • Create Windows 8 Store apps and Windows Universal apps

Getting Started

The easiest way to install NAudio into your project is to install the latest NAudio NuGet package. Prerelease versions of NAudio are also often made available on NuGet.

NAudio comes with several demo applications which are the quickest way to see how to use the various features of NAudio. You can explore the source code here.
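To give a flavour of the API, here is a minimal playback sketch. It assumes the NAudio NuGet package is referenced; "example.mp3" is a placeholder path.

```csharp
using System.Threading;
using NAudio.Wave;

class PlaybackExample
{
    static void Main()
    {
        // AudioFileReader picks a suitable decoder for the file extension.
        using (var audioFile = new AudioFileReader("example.mp3"))
        using (var outputDevice = new WaveOutEvent())
        {
            outputDevice.Init(audioFile);
            outputDevice.Play();
            // Play is asynchronous; wait until playback finishes.
            while (outputDevice.PlaybackState == PlaybackState.Playing)
            {
                Thread.Sleep(200);
            }
        }
    }
}
```

The demo applications show many more variations on this pattern, including other output devices such as WASAPI and DirectSound.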

Tutorials

Playback

Working with Codecs

Working with audio files

Manipulating audio

Generating audio

Recording

Visualization

MIDI

More...

Additional sources of documentation for NAudio are:

NAudio Training Courses

If you want to get up to speed as quickly as possible with NAudio programming, I recommend you watch these two Pluralsight courses. You will need to be a subscriber to access the content, but there are 10 hours of training material on NAudio, and a subscription also gives you access to their vast training library on other programming topics.

To be successful developing applications that process digital audio, there are some key concepts that you need to understand. To help developers quickly get up to speed with what they need to know before trying to use NAudio, I have created the Digital Audio Fundamentals course, which covers sample rates, bit depths, file formats, codecs, decibels, clipping, aliasing, synthesis, visualisations, effects and much more. In particular, the fourth module on signal chains is vital background information if you are to be effective with NAudio.
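Two of those fundamentals, decibels and clipping, are each a one-liner. A minimal sketch (the helper names are our own, not part of NAudio):

```csharp
using System;

static class AudioMath
{
    // Convert a linear amplitude (0..1) to decibels relative to full scale (dBFS).
    public static double LinearToDecibels(double amplitude) =>
        20.0 * Math.Log10(amplitude);

    // Hard-clip a sample to the legal [-1.0, 1.0] range of 32-bit float audio.
    public static float Clip(float sample) =>
        Math.Max(-1.0f, Math.Min(1.0f, sample));
}
```

Full scale (amplitude 1.0) is 0 dBFS; halving the amplitude drops the level by roughly 6 dB.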

Audio Programming with NAudio is a follow-on course which contains seven hours of training material covering all the major features of NAudio. It is highly recommended that you take this course if you intend to create an application with NAudio.

How do I...?

The best way to learn how to use NAudio is to download the source code and look at the two demo applications - NAudioDemo and NAudioWpfDemo. These demonstrate several of the key capabilities of the NAudio framework. They also have the advantage of being kept up to date, whilst some of the tutorials you will find on the internet refer to old versions of NAudio.

FAQ

What is NAudio?

NAudio is an open source audio API for .NET written in C# by Mark Heath, with contributions from many other developers. It is intended to provide a comprehensive set of useful utility classes from which you can construct your own audio application.

Why NAudio?

NAudio was created because the Framework Class Library that shipped with .NET 1.0 had no support for playing audio. The System.Media namespace introduced in .NET 2.0 provided a small amount of support, and the MediaElement in WPF and Silverlight took that a bit further. The vision behind NAudio is to provide a comprehensive set of audio related classes allowing easy development of utilities that play or record audio, or manipulate audio files in some way.

Can I Use NAudio in my Project?

NAudio is licensed under the MIT license which means that you can use it in whatever project you like including commercial projects. Of course we would love it if you share any bug-fixes or enhancements you made to the original NAudio project files.

Is .NET Performance Good Enough for Audio?

While .NET cannot compete with unmanaged languages for very low latency audio work, it still performs better than many people would expect. On a fairly modest PC, you can quite easily mix multiple WAV files together, pass them through various effects and codecs, and play back glitch-free with a latency of around 50ms.
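As a rough illustration of the floating-point mixing mentioned above (not NAudio's actual engine), summing 32-bit float buffers and clipping the result is only a few lines:

```csharp
using System;

static class Mixer
{
    // Mix several equal-length 32-bit float buffers by summation,
    // then clip the result to the legal [-1, 1] range.
    public static float[] Mix(params float[][] inputs)
    {
        var output = new float[inputs[0].Length];
        foreach (var input in inputs)
            for (int i = 0; i < output.Length; i++)
                output[i] += input[i];
        for (int i = 0; i < output.Length; i++)
            output[i] = Math.Max(-1f, Math.Min(1f, output[i]));
        return output;
    }
}
```

In NAudio itself this role is played by sample providers such as MixingSampleProvider, which mix on demand rather than over whole arrays.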

How can I get help?

There are three main ways to get help. First, you can raise an issue here on GitHub. This is the best option when you've written some code and want to ask why it's not working as you expect. I attempt to answer all questions, but since this is a spare time project, occasionally I get behind.

You can also ask on StackOverflow and tag your question with naudio, if your question is a "how do I..." sort of question. This gives you a better chance of getting a quick answer. Please try to search first to see if your question has already been answered elsewhere.

Finally, I am occasionally able to offer paid support for situations where you need quick advice, bugfixes or new features. Please contact Mark Heath directly if you wish to pursue this option.

How do I submit a patch?

I welcome contributions to NAudio and have accepted many patches, but if you want your code to be included, please familiarise yourself with the following guidelines:

  • Your submission must be your own work, and able to be released under the MIT license.
  • You will need to make sure your code conforms to the layout and naming conventions used elsewhere in NAudio.
  • Remember that there are many existing users of NAudio. A patch that changes the public interface is not likely to be accepted.
  • Try to write "clean code" - avoid long functions and long classes. Try to add a new feature by creating a new class rather than putting loads of extra code inside an existing one.
  • I don't usually accept contributions I can't test, so please write unit tests (using NUnit) if at all possible. If not, give a clear explanation of how your feature can be unit tested and provide test data if appropriate. Tell me what you did to test it yourself, including what operating systems and soundcards you used.
  • If you are adding a new feature, please consider writing a short tutorial on how to use it.
  • Unless your patch is a small bugfix, I will code review it and give you feedback. You will need to be willing to make the recommended changes before it can be integrated into the main code.
  • Patches should be provided using the Pull Request feature of GitHub.
  • Please also bear in mind that when you add a feature to NAudio, that feature will generate future support requests and bug reports. Are you willing to stick around on the forums and help out people using it?


naudio's Issues

Add SilenceProvider.cs useful for WasapiLoopbackCapture

Add SilenceProvider.cs, useful for WasapiLoopbackCapture when we want the silence to be captured as well.

You may begin with:

using System;
using NAudio.Wave;

class SilenceProvider : IWaveProvider
{
    public SilenceProvider(WaveFormat wf) { WaveFormat = wf; }

    public int Read(byte[] buffer, int offset, int count)
    {
        // buffer.Initialize() is a no-op for primitive element types;
        // Array.Clear reliably zeroes the requested region.
        Array.Clear(buffer, offset, count);
        return count;
    }

    public WaveFormat WaveFormat { get; private set; }
}

Connecting Multiple MixingSampleProviders?

I'm trying to connect multiple MixingSampleProviders to take multiple "sub-mixes" and combine them into 1 master mix wav output. If I do this everything works perfectly:

 mixer.AddMixerInput(sampleChannels[i]);
 WaveFileWriter.CreateWaveFile16("c:\\audio\\mixdowntest1.wav", mixer);

But if I do something like this:

 mixer.AddMixerInput(sampleChannels[i]);
 WaveFileWriter.CreateWaveFile16("c:\\audio\\mixdowntest1.wav", mixer);

var mixer2 = new MixingSampleProvider(mixer.WaveFormat);
mixer2.AddMixerInput(mixer);
WaveFileWriter.CreateWaveFile16("c:\\audio\\mixdowntest2.wav", mixer2);

The first wav file is correct, but the 2nd wav file is only 1kb in size. Is it possible to do something like this in NAudio?

Detection of Xing Header

For CBR mp3 files, Lame will set a Xing header with identification string Info instead of Xing. So in XingHeader.LoadXingHeader(Mp3Frame frame) you should also check for this string to get the header.

How to play a Sound on a specific Channel?

Is there any function or easy way to play a sound on a specific channel?
Such as:
play(int DeviceID, int ChannelID);

There are many mono audio streams coming from different devices. What I want to do is play the data on different channels (I'm planning to use USB sound devices).
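NAudio's MultiplexingWaveProvider is worth a look for routing inputs to specific output channels. The underlying idea is interleaving; a standalone sketch of placing a mono signal into one channel of a stereo buffer (the helper name is our own):

```csharp
static class ChannelRouter
{
    // Place a mono float signal into one channel of an interleaved
    // stereo buffer. channel: 0 = left, 1 = right; the other channel
    // stays silent (zero).
    public static float[] MonoToStereoChannel(float[] mono, int channel)
    {
        var stereo = new float[mono.Length * 2];
        for (int i = 0; i < mono.Length; i++)
            stereo[i * 2 + channel] = mono[i];
        return stereo;
    }
}
```

Interleaved samples alternate L, R, L, R...; the same principle extends to devices with more than two output channels.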

Mp3FileReader - resulting file slightly behind?

Hi Mark, I'm using the Mp3FileReader to convert an mp3 to a wav. When I open the resulting wav file in an audio editor, I noticed it's just slightly behind.

using (Mp3FileReader reader = new Mp3FileReader(filePath))
using (WaveStream convertedStream = WaveFormatConversionStream.CreatePcmStream(reader))
{
    WaveFileWriter.CreateWaveFile(newmp3ToWavFile, convertedStream);
}

Here's a comparison of the result in my audio editor:


Both files are starting at the same location. Any ideas how I can account for this offset when converting mp3 to wav?

AudioSessionControl Dispose and UnRegisterEventClient throws COMException

Both AudioSessionControl.Dispose and UnRegisterEventClient throw a COMException under the following circumstances:

  1. Call AudioSessionControl.UnRegisterEventClient
  2. Then call AudioSessionControl.Dispose

Both methods check whether the registered audioSessionEventCallback is null prior to the IAudioSessionControl.UnregisterAudioSessionNotification call, but neither sets the audioSessionEventCallback member to null. Thus if UnRegisterEventClient has already unregistered the event client, Dispose (and the finalizer) call IAudioSessionControl.UnregisterAudioSessionNotification again, which causes the exception.

This happened running on Windows 7 64-bit.
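The fix described above — clearing the callback reference after unregistering so the operation is idempotent — can be sketched with an illustrative class (these are stand-in types, not the real NAudio ones):

```csharp
using System;

// Illustrative sketch: null the callback after unregistering so that
// a second call (or Dispose after UnRegisterEventClient) does nothing.
class SessionEventGuard : IDisposable
{
    private object callback = new object();
    public int UnregisterCalls { get; private set; }

    public void UnRegisterEventClient()
    {
        if (callback != null)
        {
            // Stands in for IAudioSessionControl.UnregisterAudioSessionNotification.
            UnregisterCalls++;
            // The missing step that caused the double-unregister COMException.
            callback = null;
        }
    }

    public void Dispose() => UnRegisterEventClient();
}
```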

"Expecting PCM input" exception appeared when using VolumeWaveProvider

Hi,

After asking in the last issue thread, I've tried using the solution you mentioned before (http://markheath.net/post/fire-and-forget-audio-playback-with) for mixing several WAV files together for playback. In order to set the volume separately, I modified the AudioPlaybackEngine class a bit to meet my needs.

Here is my modified code below:

public void PlaySound(string fileName, float volume = 1.0f)
{
    var input = new AudioFileReader(fileName);
    volumeControl = new VolumeWaveProvider16(input);
    volumeControl.Volume = volume;
    AddMixerInput(new AutoDisposeFileReader(input));
}

After adding the VolumeWaveProvider16 and re-running the code, it threw an exception saying it expects PCM input. But I did use a valid WAV file, so why am I getting this error?

Thanks and regards,
Jackson Hu
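For context: AudioFileReader delivers 32-bit IEEE float samples, while VolumeWaveProvider16 expects 16-bit PCM, which would explain the exception. AudioFileReader has its own Volume property that scales the float samples; conceptually it does no more than this sketch (the helper is illustrative, not NAudio's code):

```csharp
static class VolumeUtil
{
    // Scale 32-bit float samples in place — a volume control applied
    // on the float side of the signal chain.
    public static void ApplyVolume(float[] samples, float volume)
    {
        for (int i = 0; i < samples.Length; i++)
            samples[i] *= volume;
    }
}
```

So in the code above, setting input.Volume instead of wrapping in VolumeWaveProvider16 may avoid the format mismatch.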

Cannot play .ogg or .adts (containing aac)

Hi,
the title says it all.
I'm using MediaFoundationReader and it has worked well so far.

I installed the ogg codec but it still doesn't work.
I always get HRESULT: 0xC00D36C4, which from what I can find means the format is not supported.

Using the https://github.com/naudio/Vorbis project and its reader also doesn't work. It says "cannot determine container format".

What can/should I do?
Should I upload the files here maybe?

Edit: I have a few other "adts" files (all from YouTube) that MediaFoundationReader plays with no problem.
Do I have to install something else besides the ogg codec?

OffsetSampleProvider - Can't "Take" and "LeadOut" in single operation?

I'm trying to take the first 6 seconds or so of an audio clip, and then add a couple minutes of silence after it. When I run this code the resulting wav file is only 6 seconds long instead of 2:06:

offsetSampleProvider.Take = endAudioTimeTimespan.Duration();
offsetSampleProvider.LeadOut = silenceToAddToEnd.Duration();

WaveFileWriter.CreateWaveFile16(newWavFile, offsetSampleProvider);

If I break this into 2 separate AudioFileReader operations (take samples, create wav file, read wav file, add silence) it works as expected.
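The intended behaviour — take the first N samples, then append silence — can be emulated over a raw sample array. A standalone sketch (not using NAudio's types):

```csharp
using System;

static class TakeLeadOut
{
    // Emulate "Take" followed by "LeadOut": keep the first takeCount
    // samples of source, then append leadOutCount samples of silence.
    public static float[] Apply(float[] source, int takeCount, int leadOutCount)
    {
        var output = new float[takeCount + leadOutCount];
        Array.Copy(source, output, Math.Min(takeCount, source.Length));
        return output; // the tail is already zero-filled (silence)
    }
}
```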

Naudio Popping, clicking, skipping, jumping, spluttering

I am using NAudio primarily to play sounds simultaneously. The problem is that NAudio plays the .wav with pops and clicks (randomly; sometimes it doesn't, but that is rare).

Is there any way to fix this?

Public Class NAudioPlayer

Private wfr As WaveFileReader
Private wc As WaveChannel32
Private dso As DirectSoundOut

Public Sub New(ByVal soundFile As String)
    wfr = New WaveFileReader(soundFile)
    wc = New WaveChannel32(wfr)
    dso = New DirectSoundOut
    dso.Init(wc)
End Sub

Public Property Volume() As Double

    Get
        Return wc.Volume
    End Get
    Set(value As Double)
        wc.Volume = value
    End Set

End Property

Public Sub Play(ByVal fromBeginning As Boolean)

    If fromBeginning Then
        wc.Seek(0, SeekOrigin.Begin)
    End If

    dso.Play()
End Sub

Public Sub StopSound()
    dso.Stop()
End Sub

Public Sub Pause()
    dso.Pause()
End Sub

End Class

Exception: Pitch value must be in the range 0 - 0x4000

I am using a Fishman MIDI pickup, and my interface awaits MIDI events as you'd expect. But whilst not playing a note, I've received the above exception at random. I assume this means the Fishman pickup sent a message, picked up by NAudio, in which the value is out of range. If that is the case, I have no control over the behaviour, and I don't feel an exception is appropriate, as I have no way to catch it before the MIDI input received event fires. It would be better to fail silently, perhaps log a message to the DEBUG output, or raise a new type of event that tells the consumer that badly formed messages are being received.
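For background: MIDI pitch bend is a 14-bit value assembled from two 7-bit data bytes, so legal values run 0–16383 (0x3FFF), with 8192 as centre. A sketch of decoding plus non-throwing validation, as the report suggests (illustrative helpers, not NAudio's API):

```csharp
static class PitchBend
{
    // Assemble a 14-bit pitch-bend value from two 7-bit MIDI data bytes
    // (LSB first, as they arrive on the wire).
    public static int Decode(byte lsb, byte msb) =>
        (lsb & 0x7F) | ((msb & 0x7F) << 7);

    // Validate without throwing: a malformed message can be reported
    // or logged rather than crashing the input callback.
    public static bool TryValidate(int value) => value >= 0 && value <= 0x3FFF;
}
```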

VS 2013 Cannot find NAudio.Wave.ASIO

Hello,
I installed NAudio into a project using NuGet, and added:

using NAudio.Wave;
using NAudio.CoreAudioApi;

I get an error when I try:
waveOutDevice = new NAudio.Wave.AsioOut();

Error 1 The type or namespace name 'AsioOut' does not exist in the namespace 'NAudio.Wave' (are you missing an assembly reference?)

Disposable Wave- / SampleProviders

Hi,

I'd like to ask why the ISampleProviders are not disposable. If there is a specific reason for this, I'd like to know the best practice for dealing with sample sources that need to be disposed. It's a little cumbersome to store all sample sources that need to be disposed along the chain and dispose them manually.

Exception when accessing FriendlyName in devices - Failed unit tests

I run the tests and 2 of them fail. Those are in Wasapi project at MMDeviceEnumeratorTests. The tests are CanEnumerateDevicesInVista and CanEnumerateCaptureDevices.

I did some investigation, they both fail for the same reason, the MMDeviceEnumerator returns the devices but the ones with state DeviceState.NotPresent throw exception when FriendlyName is accessed. In the demo applications only Active devices are used so no exceptions there.

I did a fix to write (with Debug.WriteLine as it was) the name only on the ones with state different than NotPresent and the ID for the ones with state NotPresent. Let me know if the tests are valid with this change to submit a pull request.

Current MidiConverter.cs file generates error on build

I just updated my local repo with the latest code, and on solution build the following error appears:

2>C:\...\NAudio\MidiFileConverter\MidiConverter.cs(404,25,404,26): error CS1525: Invalid expression term '.'
2>C:\...\NAudio\MidiFileConverter\MidiConverter.cs(404,26,404,39): error CS1003: Syntax error, ':' expected

On examination, there appears to be a question mark in the code at line 404 in MidiConverter.cs that shouldn't be there:

401  private bool IsEndTrack(MidiEvent midiEvent)
402  {
403      var meta = midiEvent as MetaEvent;
404      return meta?.MetaEventType == MetaEventType.EndTrack;
405  }

Removing the question mark results in a successful build.
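For reference, those errors (CS1525/CS1003) are what a pre-C# 6 compiler reports for the `?.` null-conditional operator, which requires Visual Studio 2015 or later. A self-contained sketch of the pre-C# 6 equivalent (the types here are minimal stand-ins, not NAudio's):

```csharp
enum MetaEventType { EndTrack, Other }

class MetaEvent { public MetaEventType MetaEventType; }

static class MidiCheck
{
    // Pre-C# 6 equivalent of: return meta?.MetaEventType == MetaEventType.EndTrack;
    public static bool IsEndTrack(object midiEvent)
    {
        var meta = midiEvent as MetaEvent;
        return meta != null && meta.MetaEventType == MetaEventType.EndTrack;
    }
}
```

So rather than deleting the question mark (which changes the semantics for null events), upgrading the compiler or using the explicit null check keeps the same behaviour.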

AddSamples memory leak

I have a memory leak that occurs when using AddSamples:

bufferedWaveProvider.AddSamples(buffer, 0, decompressed);

Everything seems to be right: the received MP3 bytes are decompressed into a format readable by the player and added to the buffer. But it is not clear why, if the file is 5 MB, it eats 20 MB of RAM.

var buffer = new byte[16384 * 4];

IMp3FrameDecompressor decompressor = null;
try
{
    using (var responseStream = myResponse.GetResponseStream())
    {
        var readFullyStream = new ReadFullyStream(responseStream);
        Mp3Frame frame;
        do
        {
            Debug.WriteLine(loaded_bytes);
            if (IsBufferNearlyFull)
            {
                Debug.WriteLine("Buffer getting full, taking a break");
                Thread.Sleep(200);
            }
            else
            {
                frame = Mp3Frame.LoadFromStream(readFullyStream);
                if (frame == null) { Debug.WriteLine("NULL"); break; }
                if (decompressor == null)
                {
                    decompressor = new AcmMp3FrameDecompressor(new Mp3WaveFormat(frame.SampleRate, frame.ChannelMode == ChannelMode.Mono ? 1 : 2, frame.FrameLength, frame.BitRate));

                    bufferedWaveProvider = new BufferedWaveProvider(decompressor.OutputFormat);
                    bufferedWaveProvider.BufferDuration = TimeSpan.FromSeconds(500); // allow us to get well ahead of ourselves

                    volumeProvider = new VolumeWaveProvider16(bufferedWaveProvider);
                    volumeProvider.Volume = (float)volume / 100;
                    waveOut.Init(volumeProvider);
                    waveOut.Play();
                }
                int decompressed = decompressor.DecompressFrame(frame, buffer, 0);
                bufferedWaveProvider.AddSamples(buffer, 0, decompressed); // without this call the memory isn't eaten up, but nothing plays either
            }
        } while (true);
        Debug.WriteLine("Exiting");
        decompressor.Dispose();
    }
}
finally
{
    if (decompressor != null)
    {
        decompressor.Dispose();
    }
}

It is all in the method that I am doing in the new thread.

Trying to free memory in the following manner

waveOut.Stop();

bufferedWaveProvider.ClearBuffer(); // my buffer holding the samples; clearing it does not help, the memory is not freed
bufferedWaveProvider = null;
pp.Buffer_indicator.Value = 0;
pp.Play_Button.Background = new ImageBrush { ImageSource = new BitmapImage(new Uri(BaseUriHelper.GetBaseUri(pp), "Images/play.png")) };
if (streamer != null)
    streamer.Abort();

How can I find the cause of the leak? Where is the memory still in use, if it should have been released after the stream was closed?

Playing MP3 fast on other side of Phone

Hi @markheath
I am playing a file by writing it to the serial port of a GSM modem; I can hear it at the recipient's phone. The problem is that it plays too fast, or is sometimes just noise. I have tried many settings but still can't figure it out. Below are my serial port settings and source code:

string voiceportname = "COM17";
this.Talking = new SerialPort();

Talking.PortName = voiceportname;
Talking.BaudRate = 9600;
Talking.Parity = Parity.None;
Talking.DataBits = 8;
Talking.StopBits = StopBits.One;
// Talking.Handshake = Handshake.RequestToSend;
Talking.DtrEnable = true;
Talking.RtsEnable = true;
// comSerial1.NewLine = "\r\n";
// comSerial1 = null;
// this.Talking.Open();
_spManager.StartListening(Talking);

string Filenm = "D:\\06.mp3";
PauseForMilliSeconds(4000);
textBox2.Text = "playing audio\n";
var pcmFormat = new WaveFormat(8000, 16, 1);
// var ulawFormat = WaveFormat.CreateMuLawFormat(8000, 1);
var ulawFormat = WaveFormat.CreateCustomFormat(WaveFormatEncoding.MuLaw, 8000, 1, 8000 * 1, 1, 8);
using (WaveFormatConversionStream pcmStm = new WaveFormatConversionStream(pcmFormat, new Mp3FileReader(Filenm)))
using (WaveFormatConversionStream ulawStm = new WaveFormatConversionStream(ulawFormat, pcmStm))
{
    byte[] buffer = new byte[320];
    int bytesRead = ulawStm.Read(buffer, 0, 320);

    while (bytesRead > 0)
    {
        byte[] sample = new byte[bytesRead];
        Array.Copy(buffer, sample, bytesRead);
        // m_rtpChannel.AddSample(sample);
        _spManager.WriteVoice(sample, 0, sample.Length);
        bytesRead = ulawStm.Read(buffer, 0, 320);
        PauseForMilliSeconds(20);
    }
}


I have also tried BaudRate 28800 and 115200.

Regards

Is it possible to generate piano tones by this library?

Hi all,

I'm planning to make a C# program which can read a text-based script, generate a series of specific piano sounds, and then merge them into a song. Is it possible to use this library to generate piano sounds directly (without using other audio sources)?

Regards,
Jackson Hu
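NAudio's SignalGenerator sample provider can produce sine and other basic waveforms, though a convincing piano timbre needs more than a sine wave (sampled instruments or additive synthesis). The core maths for turning a note into a tone is small; a standalone sketch (helper names are our own):

```csharp
using System;

static class ToneGenerator
{
    // Frequency of a MIDI note number in equal temperament (69 = A4 = 440 Hz).
    public static double NoteToFrequency(int midiNote) =>
        440.0 * Math.Pow(2.0, (midiNote - 69) / 12.0);

    // Generate a sine tone as 32-bit float samples.
    public static float[] SineWave(double frequency, double seconds, int sampleRate = 44100)
    {
        var samples = new float[(int)(seconds * sampleRate)];
        for (int i = 0; i < samples.Length; i++)
            samples[i] = (float)Math.Sin(2.0 * Math.PI * frequency * i / sampleRate);
        return samples;
    }
}
```

Tones generated this way can be fed into a mixer or written out with WaveFileWriter to build up the song.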

Cannot play AIF file

I got an exception playing the file sample.aif from http://www.nch.com.au/acm/formats.html

System.ArgumentOutOfRangeException was caught
HResult=-2146233086
Message=Non-negative number required.
Parameter name: count
Source=mscorlib
ParamName=count
StackTrace:
at System.IO.BinaryReader.ReadBytes(Int32 count)
at NAudio.Wave.AiffFileReader.ReadAiffHeader(Stream stream, WaveFormat& format, Int64& dataChunkPosition, Int32& dataChunkLength, List`1 chunks) in j:\Samples\Media\NAudio-master\NAudio-master\NAudio\Wave\WaveStreams\AiffFileReader.cs:line 108
at NAudio.Wave.AiffFileReader..ctor(Stream inputStream) in j:\Samples\Media\NAudio-master\NAudio-master\NAudio\Wave\WaveStreams\AiffFileReader.cs:line 41
at NAudio.Wave.AiffFileReader..ctor(String aiffFile) in j:\Samples\Media\NAudio-master\NAudio-master\NAudio\Wave\WaveStreams\AiffFileReader.cs:line 28

I got it working by changing the code in AiffFileReader at line 108 to:

else
{
    if (chunks != null)
    {
        chunks.Add(nextChunk);
    }

    if (br.BaseStream.Position + nextChunk.ChunkLength < br.BaseStream.Length)
        br.ReadBytes((int)nextChunk.ChunkLength);
}
    

But I'm not sure this is the right way.

Resampling from arbitrary sample rate to another by ACM Resampler

Hello,
I am using the ACM resampler (WaveFormatConversionStream) for resampling. The source sample rate is 17896Hz, the target sample rate is 44100Hz. When using WaveFormatConversionStream I get poor quality output. If I resample to double the sample rate (35792Hz) the sound is great. I suppose the poor output quality at 44100Hz is the result of aliasing, right? Is there any simple way to improve the quality?

When using MediaFoundationResampler to resample 17896Hz -> 44100Hz the result is perfect.
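For illustration, the simplest possible resampler is linear interpolation; production resamplers (MediaFoundationResampler, or NAudio's WdlResamplingSampleProvider) add proper filtering, which is where the quality difference between resamplers comes from. A standalone sketch:

```csharp
using System;

static class Resampler
{
    // Naive linear-interpolation resampler — fine for illustration,
    // but a filtered (windowed-sinc) resampler gives far better quality,
    // especially at awkward ratios like 17896 -> 44100.
    public static float[] Resample(float[] input, int fromRate, int toRate)
    {
        int outLen = (int)((long)input.Length * toRate / fromRate);
        var output = new float[outLen];
        for (int i = 0; i < outLen; i++)
        {
            double pos = (double)i * fromRate / toRate;
            int idx = (int)pos;
            double frac = pos - idx;
            float a = input[Math.Min(idx, input.Length - 1)];
            float b = input[Math.Min(idx + 1, input.Length - 1)];
            output[i] = (float)(a + (b - a) * frac);
        }
        return output;
    }
}
```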

Get a video stream of MediaFoundationApi instance if available in addition to the audio?

Even though this has very little to do with NAudio ... just leaving this question here:

By using the MediaFoundationApi, I am able to get the audio out of an MP4 file. Do you know if it's easy to also get the video, once I already have that handle to the file?

The use case is that I am currently working on an audio-cutting project, where I have multiple translations (available as WAVE files) that the user wants to cut. I also have a video file that just has to be synchronized with all these wave files within my application. As this would have been the easiest way, and you are already in the system, I thought it worth asking whether there is an easy way to get the video 😉

Another option for me would be to start up VLC and play it using the embedded player, but I guess it would be hard to keep them in sync.

InvalidCastException with async, MediaType & MediaFoundationEncoder.Encode

Hello all

we'd just like to report a problem we encountered. We're not sure if we are doing it wrong or the library should behave differently, but thought to post it here in case other people find the same.

This bit of async method works

public async Task<int> JoinAsync(string outputPathName, IEnumerable<MediaFoundationReader> waveReaders)
{
  await Task.Run(() =>
  {
    // calls MediaFoundationEncoder.SelectMediaType with some constants and enums
    MediaType wmaMediaType = CreateMediaType();

    using (var joiningWaveProvider = new CustomJoiningWaveProvider(waveReaders))
    using (var wmaEncoder = new MediaFoundationEncoder(wmaMediaType))
    {
      wmaEncoder.Encode(outputPathName, joiningWaveProvider);
    }
  });
  // ...
}

CustomJoiningWaveProvider implements IWaveProvider and is able to join many _MediaFoundationReader_s (from Wav files) into a MemoryStream.

If we move the MediaType creation outside of Task.Run (as well as moving out the _using_s too) like in the following:

public async Task<int> JoinAsync(string outputPathName, IEnumerable<MediaFoundationReader> waveReaders)
{
  // calls MediaFoundationEncoder.SelectMediaType with some constants and enums
  MediaType wmaMediaType = CreateMediaType();

  await Task.Run(() =>
  {
    using (var joiningWaveProvider = new CustomJoiningWaveProvider(waveReaders))
    using (var wmaEncoder = new MediaFoundationEncoder(wmaMediaType))
    {
      wmaEncoder.Encode(outputPathName, joiningWaveProvider);
    }
  });
  // ...
}

the wmaEncoder.Encode call causes a System.InvalidCastException exception:

Message: Specified cast is not valid

StackTrace:
in System.StubHelpers.InterfaceMarshaler.ConvertToNative(Object objSrc, IntPtr itfMT, IntPtr classMT, Int32 flags)
in NAudio.MediaFoundation.IMFSinkWriter.AddStream(IMFMediaType pTargetMediaType, Int32& pdwStreamIndex)
in NAudio.Wave.MediaFoundationEncoder.Encode(String outputFile, IWaveProvider inputProvider)
in <--- our async method --->
in System.Threading.Tasks.Task.InnerInvoke()
in System.Threading.Tasks.Task.Execute()

HRESULT: 0x80004002

Maybe that's related to COM interoperability, not sure. Apart from using the working version of code, should we do something more? TA

Pause recording using WasapiCapture

I am working on a simple audio recorder using NAudio and Visual Basic. I'm using .NET 4.0 and Visual Studio 2013 Community. Is it possible to pause audio recording and then resume it?

This is my code to record audio :

Dim micWaveIn As IWaveIn
Dim micFileWriter As WaveFileWriter
Dim tempFolderPath As String = IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "recordings")

Private Sub cmdStartRecord_Click(sender As Object, e As EventArgs) Handles cmdStartRecord.Click
    Dim micDevice = DirectCast(cboInputDevice.SelectedItem, MMDevice)
    micDevice.AudioEndpointVolume.Mute = False
    micWaveIn = New WasapiCapture(micDevice)

    AddHandler micWaveIn.DataAvailable, AddressOf MIC_AudioDataAvailable
    AddHandler micWaveIn.RecordingStopped, AddressOf MIC_AudioStopRecording

    Dim filePath As String = IO.Path.Combine(tempFolderPath, "mic_record.wav")
    micFileWriter = New WaveFileWriter(filePath, micWaveIn.WaveFormat)

    micWaveIn.StartRecording()
End Sub

Private Sub cmdStopRecord_Click(sender As Object, e As EventArgs) Handles cmdStopRecord.Click
    If micWaveIn IsNot Nothing Then
        micWaveIn.StopRecording()

        RemoveHandler micWaveIn.DataAvailable, AddressOf MIC_AudioDataAvailable
        RemoveHandler micWaveIn.RecordingStopped, AddressOf MIC_AudioStopRecording

        micWaveIn.Dispose()
        micWaveIn = Nothing
    End If
    If micFileWriter IsNot Nothing Then
        micFileWriter.Dispose()
        micFileWriter = Nothing
    End If
End Sub

Private Sub MIC_AudioDataAvailable(sender As Object, e As WaveInEventArgs)
    If micFileWriter IsNot Nothing Then
        micFileWriter.Write(e.Buffer, 0, e.BytesRecorded)
    End If
End Sub

Private Sub MIC_AudioStopRecording(sender As Object, e As StoppedEventArgs)
    If e.Exception IsNot Nothing Then
        MsgBox("Error while recording.", vbExclamation, "Error.")
    End If
End Sub

I'm currently thinking of this code:

Private Sub MIC_AudioDataAvailable(sender As Object, e As WaveInEventArgs)
    If micFileWriter IsNot Nothing Then
        If isRecording Then
            micFileWriter.Write(e.Buffer, 0, e.BytesRecorded)
        End If
    End If
End Sub

But the output audio can't be played on my PC. Any suggestions?

How could this be made to support Windows XP?

I tried your demo app on Windows XP, but many of the demos crashed. Could anyone tell me whether this supports Windows XP? I know some functions are OS-dependent, and I don't need every feature to run on Windows XP, but is there at least one API that works on both XP and Windows 7/8/10 for playing and recording?

Thank you in advance.

PlaybackState when using MixingSampleProvider

I use the NAudio framework in my project. I have a MixingSampleProvider to play several cached sounds. I noticed that when using a MixingSampleProvider with an output device, OutputDevice.PlaybackState always stays in the Playing state. How can I detect the end of a sound file?
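Even with the device permanently reporting Playing, the mixer itself can signal when each input finishes. A minimal sketch, assuming NAudio's MixingSampleProvider with ReadFully = true and its MixerInputEnded event (the output format and the file name "ding.wav" are placeholders):

```csharp
using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// One long-lived output device; with ReadFully the mixer emits silence
// between sounds, so PlaybackState stays Playing by design.
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2))
{
    ReadFully = true
};

// Fired whenever one of the mixer inputs reaches its end.
mixer.MixerInputEnded += (s, e) =>
    Console.WriteLine("Finished: " + e.SampleProvider);

var output = new WaveOutEvent();
output.Init(mixer);
output.Play();

// Each cached sound is added as an input; when its Read returns 0,
// the mixer removes it and raises MixerInputEnded.
mixer.AddMixerInput(new AudioFileReader("ding.wav"));
```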

Release DLL is unsigned

I have a signed DLL whose code uses the NAudio library. I noticed it's on NuGet, so I tried to use the latest version from there, but unfortunately it isn't signed.

To work around the problem, I build it myself from the master branch after adding a signing key, which is straightforward, but it needs to be done every time I update the code base.

Many thanks for a fantastic library.

Exception opening an MP4 file with AudioFileReader

I have some code that works on my local dev system but fails on my Windows 2008 Server. I have installed the Desktop Experience on the server, and my code works for MP3 files, but I get the following exception when I execute the AudioFileReader constructor with an MP4 file:

Error: System.Runtime.InteropServices.COMException (0xC00D5212): Exception from HRESULT: 0xC00D5212
   at NAudio.MediaFoundation.IMFSourceReader.SetCurrentMediaType(Int32 dwStreamIndex, IntPtr pdwReserved, IMFMediaType pMediaType)
   at NAudio.Wave.MediaFoundationReader.CreateReader(MediaFoundationReaderSettings settings)
   at NAudio.Wave.MediaFoundationReader..ctor(String file, MediaFoundationReaderSettings settings)
   at NAudio.Wave.AudioFileReader.CreateReaderStream(String fileName)
   at NAudio.Wave.AudioFileReader..ctor(String fileName)

Windows 10 Mobile ole32.dll error when playing

Hello, I suppose you know about this issue; tested on the latest release of Mobile:

Unable to load DLL 'ole32.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)

When playing a file, variant.Clear() calls:

public void Clear()
{
    PropVariantClear(ref this);
}

[DllImport("ole32.dll")]
private static extern int PropVariantClear(ref PropVariant pvar);

which, as I understand it, is not available on Mobile.

Peek method for IWaveProvider

I'm trying to perform an FFT using NAudio in the Unity game engine. It fails for both WASAPI and ASIO. We want to have visualizations in our application for both local and streamed audio content. The common denominator between these is the IWaveProvider interface. It would be very useful if you could implement a Peek() method so that we could grab audio data without consuming it (Read() advances the stream position).
Thank you.

Sometimes waveIn_DataAvailable gets called with BytesRecorded = 0 after recording is started

After such a strange call, waveIn_RecordingStopped is invoked with ev.Exception = NAudio.MmException: WaveStillPlaying calling waveUnprepareHeader.

Hardware used: A4 TECH PK-130MG USB2.0 Web Camera (USB\VID_0AC8&PID_0328&REV_0100&MI_01)
Update: exactly the same problem with Realtek High Definition Audio (HDAUDIO\FUNC_01&VEN_10EC&DEV_0892&SUBSYS_10438613&REV_1003) and an МД-47 microphone (yes, it is really old, but it works).
NAudio.dll: version 1.7.3, 471040 bytes
OS: Windows 7 x64 SP1

Test program:

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Threading;
using NAudio.Wave;

namespace NAudioTest
{
    public class MyWindow : Window
    {
        private TextBox log;
        WaveIn waveIn;

        DispatcherTimer startTimer;
        DispatcherTimer stopTimer;

        public MyWindow()
        {
            Width = 640;
            Height = 480;

            Grid grid = new Grid();
            Content = grid;

            log = new TextBox();
            log.VerticalScrollBarVisibility = ScrollBarVisibility.Auto;
            grid.Children.Add(log);

            Loaded += MyWindow_Loaded;

            startTimer = new DispatcherTimer();
            stopTimer = new DispatcherTimer();
            startTimer.Interval = TimeSpan.FromMilliseconds(200);
            stopTimer.Interval = TimeSpan.FromMilliseconds(300);
            startTimer.Tick += startTimer_Tick;
            stopTimer.Tick += stopTimer_Tick;
        }

        void stopTimer_Tick(object sender, EventArgs e)
        {
            stopTimer.Stop();
            Log("Stop recording");
            waveIn.StopRecording();
        }

        void startTimer_Tick(object sender, EventArgs e)
        {
            startTimer.Stop();
            Log("Start recording");
            waveIn.StartRecording();
            stopTimer.Start();
        }

        void MyWindow_Loaded(object sender, RoutedEventArgs e)
        {
            waveIn = new WaveIn();
            waveIn.WaveFormat = new WaveFormat(44100, 1);
            waveIn.DataAvailable += waveIn_DataAvailable;
            waveIn.RecordingStopped += waveIn_RecordingStopped;
            startTimer.Start();
        }

        void Log(string message)
        {
            log.Text += message + Environment.NewLine;
            log.ScrollToEnd();
        }

        void waveIn_RecordingStopped(object sender, StoppedEventArgs ev)
        {
            if (ev.Exception == null)
            {
                Log("waveIn_RecordingStopped");
                startTimer.Start();
            }
            else
            {
                Log(string.Format("waveIn_RecordingStopped: exception={0}", ev.Exception.ToString()));
                Log("Logging stopped");
            }
        }

        void waveIn_DataAvailable(object sender, WaveInEventArgs e)
        {
            Log(string.Format("waveIn_DataAvailable: BytesRecorded={0}", e.BytesRecorded));
        }

        [STAThread]
        public static void Main()
        {
            Application app = new Application();

            app.Run(new MyWindow());
        }
    }
}

Example logs:
1.

Start recording
waveIn_DataAvailable: BytesRecorded=8820
waveIn_DataAvailable: BytesRecorded=8820
Stop recording
waveIn_DataAvailable: BytesRecorded=7938
waveIn_RecordingStopped
Start recording
waveIn_DataAvailable: BytesRecorded=0
waveIn_RecordingStopped: exception=NAudio.MmException: WaveStillPlaying calling waveUnprepareHeader
   at NAudio.Wave.WaveInBuffer.Reuse()
   at NAudio.Wave.WaveIn.Callback(IntPtr waveInHandle, WaveMessage message, IntPtr userData, WaveHeader waveHeader, IntPtr reserved)
Logging stopped
Stop recording

2.

Start recording
waveIn_DataAvailable: BytesRecorded=8820
waveIn_DataAvailable: BytesRecorded=8820
waveIn_DataAvailable: BytesRecorded=8820
Stop recording
waveIn_DataAvailable: BytesRecorded=882
waveIn_RecordingStopped
Start recording
waveIn_DataAvailable: BytesRecorded=8820
waveIn_DataAvailable: BytesRecorded=8820
Stop recording
waveIn_DataAvailable: BytesRecorded=7938
waveIn_RecordingStopped
Start recording
waveIn_DataAvailable: BytesRecorded=0
waveIn_RecordingStopped: exception=NAudio.MmException: WaveStillPlaying calling waveUnprepareHeader
   at NAudio.MmException.Try(MmResult result, String function)
   at NAudio.Wave.WaveInBuffer.Reuse()
   at NAudio.Wave.WaveIn.Callback(IntPtr waveInHandle, WaveMessage message, IntPtr userData, WaveHeader waveHeader, IntPtr reserved)
Logging stopped
Stop recording

Mixing two microphone inputs to one wav file

Hi,

I am trying to mix two microphone inputs and save them to one WAV file. To this end, I have been experimenting with the RecordingPanel from the NAudio demo, trying to get the microphone input into and out of a MixingWaveProvider32.

I think I am missing a step to write the MixingWaveProvider into the WaveFileWriter I have open. Could someone give me a hand, please?

Please see the code snippets below.

Regards,

Darren

private void CreateWaveInDevice()
{
    if (radioButtonWaveIn.Checked)
    {
        waveIn = new WaveIn();
        waveIn.WaveFormat = new WaveFormat(8000, 1);
    }
    else if (radioButtonWaveInEvent.Checked)
    {
        waveIn = new WaveInEvent();
        waveIn.WaveFormat = new WaveFormat(8000, 1);
    }
    else if (radioButtonWasapi.Checked)
    {
        // can't set WaveFormat as WASAPI doesn't support SRC
        var device = (MMDevice)comboWasapiDevices.SelectedItem;
        waveIn = new WasapiCapture(device);
    }
    else
    {
        // can't set WaveFormat as WASAPI doesn't support SRC
        waveIn = new WasapiLoopbackCapture();
    }

    WaveFormat OutputFormat = new WaveFormat(8000, 16, 1);

    bufferedWaveProvider = new BufferedWaveProvider(OutputFormat);

    MediaFoundationResampler MFR = new MediaFoundationResampler(bufferedWaveProvider, WaveFormat.CreateIeeeFloatWaveFormat(8000, 1));

    mixer = new MixingWaveProvider32();
    mixer.AddInputStream(MFR);

    WFW = new WaveFileWriter("D:\\a\\DJS.wav", waveIn.WaveFormat);

    waveIn.DataAvailable += OnDataAvailable;
    waveIn.RecordingStopped += OnRecordingStopped;
}

void OnDataAvailable(object sender, WaveInEventArgs e)
{
    if (this.InvokeRequired)
    {
        //Debug.WriteLine("Data Available");
        this.BeginInvoke(new EventHandler(OnDataAvailable), sender, e);
    }
    else
    {
        //Debug.WriteLine("Flushing Data Available");
        //writer.Write(e.Buffer, 0, e.BytesRecorded);

        bufferedWaveProvider.AddSamples(e.Buffer, 0, e.BytesRecorded);

        int secondsRecorded = (int)(writer.Length / writer.WaveFormat.AverageBytesPerSecond);
        if (secondsRecorded >= 30)
        {
            StopRecording();
        }
        else
        {
            progressBar1.Value = secondsRecorded;
        }
    }
}

Volume Mixer demo (GUI)

A demonstration of the CoreAudioApi features from #2.
The demo should be similar to the Volume Mixer in Windows Vista (or above).

NullReferenceException in WaveIn.cs RaiseDataAvailable

Sometimes NAudio stops working when I record something, raising a NullReferenceException in WaveIn's RaiseDataAvailable method because it does not check whether the passed argument is null. To fix this, one only needs to add a null check, as I did in the pull request.

Create interfaces for MidiIn and MidiOut

I am currently developing a C# binding for libjack, and have started implementing an IWaveIn and an IWavePlayer for binding it with NAudio.

libjack also has MIDI capabilities, and it would be great if NAudio had MIDI interfaces similar to those for audio, so that programmers could use NAudio and libjack to develop, for example, low-latency MIDI-enabled synthesizers on Windows, Linux and Mac OS X.

WaveOut.PlaybackState is not changing, DirectSoundOut.PlaybackState is not playing

Hello

WaveOut.PlaybackState does not change while playing a WAV file in chunks. I need to change the speed of an MP3, which is why I'm reading the WAV in chunks.

Sample code:

using System.Threading;
using System.Windows;
using NAudio.Wave;

namespace Mp3SlowDowner
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }

        private void ButtonBase_OnClick(object sender, RoutedEventArgs e)
        {
            const string mp3File = @"G:\Музыка\Epic 1.mp3.mp3";
            const string outputFile = @"G:\Музыка\Epic 1 wava.wav";

            ConvertMp3ToWav(mp3File, outputFile);

            PlayByChunks(outputFile);
        }

        private static void PlayByChunks(string outputFile)
        {
            var waveOut = new WaveOut();
            var waveFileReader = new WaveFileReader(outputFile);
            var bufferedWaveProvider = new BufferedWaveProvider(waveFileReader.WaveFormat);

            waveOut.Init(bufferedWaveProvider);

            waveOut.Play();

            byte[] buffer = new byte[1024 * 256];

            while (true)
            {
                int read = waveFileReader.Read(buffer, 0, buffer.Length);
                if (read == 0)
                    break;

                bufferedWaveProvider.AddSamples(buffer, 0, read);

                do
                {
                    Thread.Sleep(1000);
                } while (waveOut.PlaybackState == PlaybackState.Playing);
            }
        }

        private static void ConvertMp3ToWav(string mp3File, string outputWavFile)
        {
            using (Mp3FileReader reader = new Mp3FileReader(mp3File))
            {
                WaveFileWriter.CreateWaveFile(outputWavFile, reader);
            }
        }
    }
}

The state doesn't seem to work with DirectSoundOut either (its state is never Playing, and the buffer overflows):

using System.Threading;
using System.Windows;
using NAudio.Wave;

namespace Mp3SlowDowner
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }

        private void ButtonBase_OnClick(object sender, RoutedEventArgs e)
        {
            const string mp3File = @"G:\Музыка\Epic 1.mp3.mp3";
            const string outputFile = @"G:\Музыка\Epic 1 wava.wav";

            ConvertMp3ToWav(mp3File, outputFile);

            PlayByChunks(outputFile);
        }

        private static void PlayByChunks(string outputFile)
        {
            var waveOut = new DirectSoundOut();
            var waveFileReader = new WaveFileReader(outputFile);
            var bufferedWaveProvider = new BufferedWaveProvider(waveFileReader.WaveFormat);

            waveOut.Init(bufferedWaveProvider);

            waveOut.Play();

            byte[] buffer = new byte[1024 * 256];

            while (true)
            {
                int read = waveFileReader.Read(buffer, 0, buffer.Length);
                if (read == 0)
                    break;

                bufferedWaveProvider.AddSamples(buffer, 0, read);

                do
                {
                    Thread.Sleep(1000);
                } while (waveOut.PlaybackState == PlaybackState.Playing);
            }
        }

        private static void ConvertMp3ToWav(string mp3File, string outputWavFile)
        {
            using (Mp3FileReader reader = new Mp3FileReader(mp3File))
            {
                WaveFileWriter.CreateWaveFile(outputWavFile, reader);
            }
        }
    }
}

Support for NRPN

As far as I can tell from looking at the code, there is no support for NRPN control change messages. Are there any plans to include this support at some point?

Change Amplitude of wave file

Hi Guys,

is it possible to change the amplitude of a WAVE file?

My task:
Convert MP3s to WAVE (PCM, 8 kHz, 16-bit, mono, -6 dB full scale)

So far so good; converting is not the problem, and I already get the WAVE file. Unfortunately, I can't change the amplitude.

Is there a way to do that, or does someone have a code snippet for me?

Thanks,
Greetings MK
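For the task above, the gain can be applied in NAudio's sample-provider chain between decoding and writing. A sketch, assuming AudioFileReader (which decodes to 32-bit float), VolumeSampleProvider, StereoToMonoSampleProvider and WdlResamplingSampleProvider; file names are placeholders, and -6 dBFS corresponds to a linear gain of 10^(-6/20), roughly 0.5:

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

using (var reader = new AudioFileReader("input.mp3")) // decodes MP3 to float samples
{
    // -6 dB full scale: linear gain = 10^(-6/20) ~= 0.501
    var attenuated = new VolumeSampleProvider(reader) { Volume = 0.501f };

    // fold down to mono if the source is stereo
    ISampleProvider mono = reader.WaveFormat.Channels == 2
        ? new StereoToMonoSampleProvider(attenuated) { LeftVolume = 0.5f, RightVolume = 0.5f }
        : attenuated;

    // resample to 8 kHz and write out as 16-bit PCM
    var resampled = new WdlResamplingSampleProvider(mono, 8000);
    WaveFileWriter.CreateWaveFile16("output.wav", resampled);
}
```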

ISampleProvider from WaveFileReader

Hey,

I was wondering if there is a way to get an ISampleProvider from a WaveFileReader whose WaveFormat is WaveFormatEncoding.Extensible? The WAV file itself is 16-bit, 44 kHz, 4 channels, and it seems that SampleProviderConverters can only provide an ISampleProvider for the Pcm and IeeeFloat encodings.

I've also attempted to use WaveFormatConversionStream to convert the file to a Pcm encoding, but I'm getting an 'AcmNotPossible calling acmStreamOpen' exception. I understand this means a codec that can perform the conversion doesn't exist on my machine, but I haven't been able to find one that can convert a 4-channel WAV file among those listed by the NAudio demo app.
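Since WAVE_FORMAT_EXTENSIBLE data is typically still plain PCM underneath, one possible workaround (a sketch, assuming the sub-format really is 16-bit PCM; the file name is a placeholder) is to bypass the encoding check and wrap the reader directly in NAudio's Pcm16BitToSampleProvider, which reads raw 16-bit samples regardless of the declared encoding:

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// A 16-bit extensible WAV can be treated as ordinary 16-bit PCM,
// so construct the converter directly instead of relying on the
// automatic encoding-based selection.
var reader = new WaveFileReader("fourchannel.wav");
ISampleProvider samples = new Pcm16BitToSampleProvider(reader);
```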

Will the Vorbis Codec Be Ported Over?

Hi there! I see that there are Vorbis decoding capabilities, so I'm a bit curious whether Vorbis encoding will eventually be provided, or even an entire port of the Vorbis codec. I think libvorbis is BSD-licensed, so it should be okay, right?

Great library, by the way.

Creating mp4/m4a file

Hi
I'm trying to create an M4A file, but have been unable to find the necessary method. So far I have an AAC file, but I need to wrap it in the M4A container. Is this supported in NAudio, and if so, are there any samples or docs relating to this?

Cheers

Keith
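As far as I know, NAudio does not mux an already-encoded raw AAC stream into an MP4 container, but on Windows 7 and above it can encode straight to AAC inside an MP4/M4A file via Media Foundation. A sketch, assuming NAudio's MediaFoundationEncoder.EncodeToAac; the file names and bitrate are placeholders:

```csharp
using NAudio.MediaFoundation;
using NAudio.Wave;

MediaFoundationApi.Startup();
using (var reader = new AudioFileReader("input.wav"))
{
    // Writes an AAC-encoded audio stream inside an MP4 container;
    // .m4a is simply the audio-only naming convention for .mp4.
    MediaFoundationEncoder.EncodeToAac(reader, "output.m4a", 192000);
}
MediaFoundationApi.Shutdown();
```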
