quefumas / gensound

Pythonic audio processing and generation framework

License: Apache License 2.0
gensound's Introduction

Gensound

The Python way to audio processing & synthesis.

An intuitive, flexible and lightweight library for:

  • Experimenting with audio and signal processing
  • Creating and manipulating sounds
  • Electronic composition

Core features:

  • Platform independent
  • Very intuitive syntax
  • Easy to create new effects or experiments and combine them with existing features
  • Great for learning about audio and signals
  • Multi-channel audio for customizable placement of sound sources
  • Parametrization
  • And more to come!

Setup

  1. Install with pip install gensound. This will also ensure NumPy is installed. For smoother playback, it is recommended to also install any one of simpleaudio, playsound, or PyGame. Installing FFMPEG is recommended as well, as it enables read/export of file formats other than Wave and AIFF.

  2. Run the examples below (or some of the example files in the repository).

Gensound in less than a minute

All audio is a mixture of signals (audio streams), to which we can apply transforms (effects).

  • To apply a transform to a signal we use the syntax: Signal * Transform;
  • To mix two signals together we use addition: Signal + Signal;
  • And to concatenate two signals (play one after the other): Signal | Signal.

Each of these operations results in a new Signal object on which we can perform more of these operations.
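For example, a minimal sketch combining all three operations (WAV, Sine, Gain and test_wav are all introduced in the examples below):

from gensound import WAV, Sine, Gain, test_wav

wav = WAV(test_wav)                   # a Signal
quiet = wav*Gain(-6)                  # Signal * Transform
layered = quiet + Sine(duration=2e3)  # Signal + Signal (mix)
sequence = wav | layered              # Signal | Signal (concatenate)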

Now, let's run some basic examples!

Show Me the Code

  • Load a WAV into a Signal object from a file:
from gensound import WAV, test_wav

wav = WAV(test_wav) # load sample WAV, included with gensound
  • Playback or file export:
wav.play()
wav.export("test.wav")
  • Play file using different sample rate (results in pitch shift):
wav.play(sample_rate=32000) # original sample rate 44.1 kHz
  • Play only the R channel:
wav[1].play() # wav[0] is L channel, wav[1] is R
  • Turn down the volume of L channel:
wav[0] *= 0.5 # amplitude halved; wav[1] amplitude remains the same
wav.play()
  • Same thing, but using dBs:
from gensound import Gain
wav[0] *= Gain(-3) # apply Gain transform to attenuate by 3 dB
  • Mix a Stereo signal (L-R channels) to mono (center channel only):
wav = 0.5*wav[0] + 0.5*wav[1] # sums up L and R channels together, halving the amplitudes
  • Switch L/R channels:
wav[0], wav[1] = wav[1], wav[0]
  • Crop 5 seconds from the beginning (5e3 is short for 5000.0, meaning 5,000 milliseconds or 5 seconds):
wav = wav[5e3:] # since 5e3 is float, gensound knows we are not talking about channels

If we only care about the R channel:

wav = wav[1, 5e3:] # 5 seconds onwards, R channel only

We can decide to slice using sample numbers (ints) instead of absolute time (floats):

wav = wav[:,:1000] # grabs first 1000 samples in both channels; samples are in ints
  • Repeat a signal 5 times:
wav = wav**5
  • Mix a 440Hz (middle A) sine wave to the L channel, 4 seconds after the beginning:
from gensound import Sine

wav[0,4e3:] += Sine(frequency=440, duration=2e3)*Gain(-9)
  • Play a tune (see full syntax here):
s = Sine('D5 C# A F# B G# E# C# F#', duration=0.5e3)
s.play()
  • Reverse the R channel:
from gensound import Reverse

wav[1] *= Reverse()
  • Haas effect - delaying the L channel by several samples makes the sound appear to be coming from the right:
from gensound import Shift

wav[0] *= Shift(80) # try changing the number of samples
# when listening, pay attention to the direction the audio appears to be coming from
  • Stretch effect - slowing down or speeding up the signal by stretching or shrinking it. This affects pitch as well:
from gensound.effects import Stretch

wav *= Stretch(rate=1.5) # plays the Signal 1.5 times as fast
wav *= Stretch(duration=30e3) # alternative syntax: fit the Signal into 30 seconds
  • Advanced: AutoPan both L/R channels with different frequency and depth:
from gensound import Pan # Pan is used below
from gensound.curve import SineCurve

s = WAV(test_wav)[10e3:30e3] # pick 20 seconds of audio

curveL = SineCurve(frequency=0.2, depth=50, baseline=-50, duration=20e3)
# L channel will move in a Sine pattern between -100 (Hard L) and 0 (C)

curveR = SineCurve(frequency=0.12, depth=100, baseline=0, duration=20e3)
# R channel will move in a Sine pattern (different frequency) between -100 and 100
    
t = s[0]*Pan(curveL) + s[1]*Pan(curveR)

Syntax Cheatsheet

Meet the two core classes:

  • Signal - a stream of multi-channeled samples, either raw (e.g. loaded from WAV file) or mathematically computable (e.g. a Sawtooth wave). Behaves very much like a numpy.ndarray.
  • Transform - any process that can be applied to a Signal (for example, reverb, filtering, gain, reverse, slicing).

By combining Signals in various ways and applying Transforms to them, we can generate anything.

Signals are envisioned as mathematical objects, and Gensound relies heavily on overloading arithmetic operations on them, in conjunction with Transforms. All of the following expressions return a new Signal object (a combined example follows the list):

  • amplitude*Signal: change Signal's amplitude (loudness) by a given factor (float)
  • -Signal: inverts the Signal
  • Signal + Signal: mix two Signals together
  • Signal | Signal: concatenate two Signals one after the other
  • Signal**4: repeat the Signal 4 times
  • Signal*Transform: apply Transform to Signal
  • Signal[start_channel:end_channel,start_ms:end_ms]: Signal sliced to a certain range of channels and time (in ms). The first slice expects integers; the second expects floats.
  • Signal[start_channel:end_channel,start_sample:end_sample]: When the second slice finds integers instead of floats, it is interpreted as a range over samples instead of milliseconds. Note that the duration of this Signal changes according to the sample rate.
  • Signal[start_channel:end_channel]: when a single slice of ints is given, it is taken to mean the channels.
  • Signal[start_ms:end_ms]: if the slice is made up of floats, it is interpreted as timestamps, i.e.: Signal[:,start_ms:end_ms].
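Putting several of these together (a short sketch using the bundled test file):

from gensound import WAV, Gain, test_wav

wav = WAV(test_wav)
intro = 0.5*wav[0:2, 0e3:3e3]  # both channels, first 3 seconds, amplitude halved
seq = intro | -intro           # concatenated with its own inversion
clip = (seq*Gain(-3))**2       # attenuate by 3 dB, then repeat twice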

The slice notations may also be used for assignments:

wav[4e3:4.5e3] = Sine(frequency=1e3, duration=0.5e3) # censor beep on seconds 4-4.5
wav[0,6e3:] *= Vibrato(frequency=4, width=0.5) # add vibrato effect to L channel starting from second 6

...and to increase the number of channels implicitly:

wav = WAV("mono_audio.wav") # mono Signal object
wav[1] = -wav[0] # now a stereo Signal with R being a phase inverted version of L

Note the convention that floats represent time as milliseconds, while integers represent number of samples.
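For example, assuming a sample rate of 44.1 kHz, the following two slices cover the same stretch of audio:

a = wav[:, 0e3:1e3]  # floats: the first 1,000 milliseconds
b = wav[:, 0:44100]  # ints: the first 44,100 samples (1 second at 44.1 kHz)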

When performing playback or file export of a Signal, Gensound resolves the Signal tree recursively, combining the various Signals and applying the transforms.

More

I would love to hear about your experience using Gensound - what worked well, what didn't, and what you think is missing. Don't hesitate to drop me a line.

The gradually evolving Wiki is both a tutorial and a reference, and will also provide many fun examples to learn and play with. If you are interested in contributing, check out Contribution.

gensound's People

Contributors

quefumas, zivkaplan

gensound's Issues

Expand documentation

I think the documentation is quite welcoming so far, but it lacks coverage of advanced features and instructions for developers.

In particular, it is missing:

  • How to extend Signal and Transform to implement new effects
  • Crossfades and BiTransforms
  • Curves and parametrization
  • Custom Panning Schemes

My idea is to create a (long) 'Reference' document which lists all available features extensively, with examples where needed, and perhaps even to migrate some content from the README to make it lighter. The developer-oriented material would go into TECHNICAL.MD.

playback.py and audio.py cleanup

Some issues between these two files:

  • Is play_Audio a good name? I think play or playback would be better.
  • Is playback.py a good name? Would IO be better?
  • Should play_Audio and export_WAV be Audio class methods instead? They are implemented in playback.py (as they should be), but is there any reason not to attach them to Audio objects (or any reason for the opposite)?
  • Go over the TODOs in both files.

Sound is choppy on simpleaudio

Can't tell if this issue is because of simpleaudio or gensound. Happens occasionally, doesn't seem dependent on the actual code (can run the same thing twice with different behaviour).

During Audio.mixdown the buffer is built to be contiguous in memory, so in theory this shouldn't happen.

update __init__.py

The current __init__.py doesn't import effects.py and the other modules. Please update __init__.py to make the package easier to use.

Crash when realising ADSR

Not sure what is happening here. It only happens if the sample rate is 22050. 44100 and 11025 both work. Rounding error somewhere?

Code:

from gensound import *

if __name__ == '__main__':
    snd = Sine(duration=0.1e3) * ADSR(attack=0.01e3, decay=0.01e3, sustain=0.25, release=0.01e3)
    snd.realise(sample_rate=22050)

Trace:

Traceback (most recent call last):
  File "test.py", line 5, in <module>
    snd.realise(sample_rate=22050)
  File "venv/lib/python3.8/site-packages/gensound/signals.py", line 47, in realise
    transform.realise(audio=audio)
  File "venv/lib/python3.8/site-packages/gensound/transforms.py", line 550, in realise
    audio.audio[:,:length_start] *= env_start.flatten(audio.sample_rate)
ValueError: operands could not be broadcast together with shapes (1,441) (440,) (1,441) 

Find available IO interfaces when compiled using PyInstaller

As discussed in #28, gensound can't figure out which I/O libraries are available when compiled with PyInstaller.
(This can be verified by running from gensound.io import IO; IO.status(True) and seeing which options are available.)

A good solution for now is to call IO.set_io once at the start of the script, which will make gensound use the provided method even if it thinks it's unavailable. For example: IO.set_io('play', 'pygame') (Read more).
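For example, at the top of the script (both calls are quoted above):

from gensound.io import IO

IO.status(True)              # report which I/O backends gensound detected
IO.set_io('play', 'pygame')  # force pygame for playback even if detection failed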

Maybe we can detect installed libraries better using PyInstaller Hooks. Also, I imagine there are other ways to do this as well, without actually importing everything.

Signal.export does not handle Path object

Hi gensound maintainers,

I just started trying gensound and observed that the filename argument of Signal.export must be a string. It would be great if it could also take a Path object (from pathlib).

One possible solution:

def export(self, filename, sample_rate=44100, **kwargs):
    file = os.fspath(filename)  # add this line to get a string from a Path
    audio = self.realise(sample_rate)
    audio.export(file, **kwargs)

Maybe it would be more appropriate to add the file = os.fspath(filename) somewhere else, so that all functions which export sounds can take Path objects as input (I don't know if this is relevant).

Best regards

Pop in automation example

from gensound.signals import Triangle
from gensound.curve import Logistic, Line, Constant

freq = Line(220, 330, 1e3) | Constant(330, 1e3) | Logistic(330, 220, 1e3)
s = Triangle(freq, 4e3)
s.play()

This example from the automation page creates a pop at 00:02.
Examining the audio, it appears that phase inference took effect but produced the wrong starting phase for the last segment of the curve.

Curves cannot take Curves as input

Hi,

I am playing with Curves, but I observed that they cannot take another Curve object as input. I think this could be really useful. For example, to create a tremolo effect, I can use a SineCurve to modulate the frequency of the signal. However, if I want the tremolo speed itself to vary over time, this is not possible, as the frequency input of SineCurve must be a float.

Here is a solution I found. In SineCurve, the integral method could be replaced by something like:

def integral(self, sample_rate):
    if isinstance(self.frequency, Curve):
        freq = self.frequency.flatten(sample_rate)
    else:
        freq = self.frequency

    # sin(x - pi/2) + 1
    return self.depth*np.sin(2*np.pi*freq*self.sample_times(sample_rate, inclusive=False) - np.pi/2) \
           + self.depth + self.baseline*self.sample_times(sample_rate, inclusive=False)

Note that I had to replace the option inclusive=True with inclusive=False (I don't know why it was set to True).

Maybe it should be done also in the flatten function.

Inefficient processing when using Slice/Combine

Consider the following code (version 0.4):

from gensound import WAV, test_wav
from gensound.filters import SimpleLPF, SimpleHPF

w = WAV(test_wav)
w[0] *= SimpleHPF(1000)
w[1] *= SimpleLPF(1000)

Upon mixdown, Gensound computes the HPF twice (!). Why is that? Consider the following line:

w[1] = Sine(...)

It is clear that upon mixdown, the Sine should be realised, and then pasted into the appropriate place in w.
The last line of the first snippet is taken by Gensound to be equivalent to:

w[1] = w[1] * SimpleLPF(1000)

This seems quite reasonable, but by analogy to the Sine case, it would imply that we should perform mixdown on the right hand side, which means performing the (computation-intensive) HPF a second time! (since we need to perform mixdown on w[1])

It appears to me that if the user insists on writing as above, there is indeed not much we can do; but if they use the syntax at the top, Gensound should take matters into its own hands and ensure a simpler implementation, possibly by passing the audio mixed thus far directly to the Transform on the right-hand side. This means breaking the logic that says x *= 5 is equivalent to x = x*5.
Another option to be considered is to use Combine only to override parts of a Signal (i.e. w[0] = AnySignal), and when applying a transform to a signal slice, the parent Signal can perhaps remember somehow the slice on which it should operate (this requires care with tails, etc...)

The entire Slice/Combine system works but is somewhat of a mess, and will need a lot of care to ensure it works elegantly alongside all the other moving parts.
Luckily I took pains to explain this system in the Technical introduction.

Transforms to clean up

  1. In what module should they appear?
  2. How should they be named?
  3. What are the proper arguments?
  4. At least some superficial testing

These are the problem ones:

  • Fade - enable choosing curves? How wide should the dB range be? Should we enable determining the is_in argument from the multiplication direction?
  • Convolution - test. Also, what does it mean to convolve two stereo signals together?
  • Raw signal - test.
  • Average_samples - what is the proper name? It also overlaps with FIR.
  • AmpFreq - what is the proper name?

Crossfade not working

The Crossfade() is not working.

I spotted two problems.

  1. There is an if that checks whether the duration was passed in kwargs and converts it into args.
    However, there is no check to see whether a duration was indeed specified in args.

    I thought of adding these lines: (note that duration is the first positional argument)

        if "duration" in kwargs:
            args = (kwargs["duration"],) + args # hackish, maybe there's a better way
            del kwargs["duration"]
        elif not args:
            args = (2e3,)
  2. Afterwards there is still an error: AssertionError: can't apply BiTransform, use concat.

(File "/Users/maayansegal/Desktop/Learn Web Development/gensound/gensound/signals.py", line 116, in _apply
assert not isinstance(transform, BiTransform), "can't apply BiTransform, use concat.")

EDIT: I was wrong, and the error message above is irrelevant. My only suggestion is to add validation for duration in args, so that a user can use Crossfade() with no parameters and trust the default values to work without errors (at least as long as the sound file and syntax are right).

Usage as a synth is breaking

I am trying to use the code below to play a sequence of chords, each having a different number of notes.

import numpy as np
import gensound 
from gensound.signals import Sine, Sawtooth, Square, Triangle, WhiteNoise, Silence, PinkNoise, Mix, WAV
from gensound.effects import OneImpulseReverb, Vibrato
from gensound.filters import SimpleBandPass, SimpleHPF, SimpleLPF, SimpleHighShelf, SimpleLowShelf
from gensound.transforms import ADSR, Fade, Amplitude, CrossFade
from gensound.curve import SineCurve, Constant, Line

def midi_to_freq(midi_number):
    return 440 * 2**((midi_number-69)/12)
def shift_freq_by_cents(freq, cents):
        return freq*(10**(cents/(1200*3.322038403)))
def detuned_voices(midi_number, relative_shift, duration, detune_pct, n_voices, wave_type='square'): # TODO: Change pitch to midi number
    # n_voices = how many oscillators in the array
    # detune_range = the difference in cents between the highest and lowest oscillators in the array
    freq = midi_to_freq(midi_number)#self.pitch_to_freq(self.NoteToMidi(pitch))
    # ic(freq)
    detune_range = freq * int(detune_pct)/100
    all_cents = [i*detune_range/n_voices - detune_range/2 for i in range(n_voices)] # how much to detune each signal in the array
    # print(isinstance(pitch, str))
    match wave_type: # REQUIRES PYTHON 3.10
        case 'sine':
            f = Sine
        case 'square':
            f = Square
        case 'triangle':
            f = Triangle
        case 'sawtooth':
            f = Sawtooth
    return gensound.mix([f(shift_freq_by_cents(freq*relative_shift,cents),duration) for cents in all_cents])
def generate_audio(midi, duration):
    osc1 = detuned_voices(midi, 1, duration, 30, 6, 'square')
    osc2 = detuned_voices(midi, 1, duration, 30, 6, 'triangle')
    output = gensound.mix([osc1,osc2])
    output *= SimpleLPF(1000)
    output*= SimpleHPF(100)
    output *= ADSR(0.5, 0.5, 0.5, 0.5)

    return output

chords = np.array([np.array([65,72]), np.array([70,74,77]), np.array([39,46]), np.array([44,48,51,56,63])])
durations = [0.5, 0.6, 0.2, 0.7]
for i in range(len(chords)):
    if i == 0:
        audio = gensound.mix([generate_audio(midi, durations[i]) for midi in chords[i]])
    else:
        audio = audio | gensound.mix([generate_audio(midi, durations[i]) for midi in chords[i]])

This is giving me the below error when trying to play, realise or export the audio created:

File ~/miniconda3/envs/bgm-generation/lib/python3.10/site-packages/gensound/signals.py:54, in Signal.play(self, sample_rate, **kwargs)
     53 def play(self, sample_rate=44100, **kwargs):
---> 54     audio = self.realise(sample_rate)
     55     return audio.play(**kwargs)

File ~/miniconda3/envs/bgm-generation/lib/python3.10/site-packages/gensound/signals.py:41, in Signal.realise(self, sample_rate)
     37 def realise(self, sample_rate):
     38     """ returns Audio instance.
     39     parses the entire signal tree recursively
     40     """
---> 41     audio = self.generate(sample_rate)
     43     if not isinstance(audio, Audio):
     44         audio = Audio(sample_rate).from_array(audio)

File ~/miniconda3/envs/bgm-generation/lib/python3.10/site-packages/gensound/signals.py:386, in Sequence.generate(self, sample_rate)
    383     else:
    384         phase = 0
--> 386     audio.concat(signal.realise(sample_rate))
    387     # TODO assymetric with Mix since we don't overload audio.concat
    389 return audio

File ~/miniconda3/envs/bgm-generation/lib/python3.10/site-packages/gensound/signals.py:41, in Signal.realise(self, sample_rate)
     37 def realise(self, sample_rate):
     38     """ returns Audio instance.
     39     parses the entire signal tree recursively
     40     """
---> 41     audio = self.generate(sample_rate)
     43     if not isinstance(audio, Audio):
     44         audio = Audio(sample_rate).from_array(audio)

File ~/miniconda3/envs/bgm-generation/lib/python3.10/site-packages/gensound/signals.py:360, in Mix.generate(self, sample_rate)
    357 audio = Audio(sample_rate)
    359 for signal in self.signals:
--> 360     audio += signal.realise(sample_rate)
    362 return audio

File ~/miniconda3/envs/bgm-generation/lib/python3.10/site-packages/gensound/signals.py:47, in Signal.realise(self, sample_rate)
     44     audio = Audio(sample_rate).from_array(audio)
     46 for transform in self.transforms:
---> 47     transform.realise(audio=audio)
...
--> 550 audio.audio[:,:length_start] *= env_start.flatten(audio.sample_rate)
    551 audio.audio[:,length_start:-length_end] *= self.sustain
    552 audio.audio[:,-length_end:] *= env_end.flatten(audio.sample_rate)

ValueError: operands could not be broadcast together with shapes (1,22) (44,) (1,22) 

Can you see if there is an issue with my approach, or if there is a bug that needs to be fixed?

TIA!

Validate & expand byte-width support

  1. Ensure that the conversion from numpy.ndarray with type numpy.float64 to actual 1- or 2-byte WAVs is correct. I think it isn't, since playback using byte_width=1 sounds distorted, more like having 4 bits of accuracy (see the sketch after this list). Look in

  2. Enable support for sample width of 3 or more. This may not be trivial, since some of the possible representations are floats and not ints, and even 24-bit integer requires some work as this type does not exist in numpy, which uses C++ style sizes (1,2,4 bytes etc.)
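Regarding point 1, here is a sketch of the usual float-to-PCM conversion (not gensound's actual code). Note that 8-bit WAV is unsigned, which is a classic source of exactly this kind of distortion:

import numpy as np

def float_to_pcm(audio, byte_width):
    """Convert float64 samples in [-1.0, 1.0] to PCM of the given byte width."""
    audio = np.clip(audio, -1.0, 1.0)
    if byte_width == 1:
        # 8-bit WAV is unsigned and centred on 128; writing signed bytes
        # here produces heavy distortion like that described above.
        return (audio*127 + 128).astype(np.uint8)
    if byte_width == 2:
        return (audio*(2**15 - 1)).astype(np.int16)
    raise NotImplementedError("24-bit has no numpy dtype; needs manual byte packing")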

Support more file formats

Task: Support import/export formats other than Wave.

Problem: Ideally, support for various formats would be delegated to other libraries, preferably cross-platform and easy to install using pip (like simpleaudio). Unfortunately, it seems there are very few simple Python libraries which do this task well. Some libraries offer playback functionality only and would require severe tweaking, at minimum, to extract an actual buffer of samples. Others do offer this functionality but are platform-specific or complex to install, sometimes requiring external software.

Suggested solution: Detect dynamically which libraries are available to the user, and determine on an ad-hoc basis which formats can be dealt with. This requires adapting for all the relevant packages commonly used for this purpose. Perhaps this would also require an initial step of conversion to a WAV file which would be dumped in a safe place and reloaded later.
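A minimal sketch of that detection step (the backend names are illustrative):

import importlib.util

# Probe for optional I/O backends without importing them.
BACKENDS = ("simpleaudio", "playsound", "pygame")
available = {name for name in BACKENDS if importlib.util.find_spec(name) is not None}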

Parametric Stretch

The Stretch transform can only receive a constant factor or rate, but it would be really cool to adapt it for parametric stretching that changes over time.
This should not be too difficult; it requires a bit of mathematics and 2-3 lines in Stretch.realise().

Typo in wiki

Hi,

There is a typo in the wiki: gensound.curves must be replaced by gensound.curve (no 's') in the imports.

Best

Metaplan

  • Make public:
    • Consider name
    • Mono Convolution
    • Test syntax features and MD file examples
    • Basic Signals/Transforms documentation
    • Technical explanation of invisible classes (Mix, Sequence, BiTransform, Combine, etc.)
    • Decide on optimal directory structure
    • Improve landing page
    • Passable LPF/HPF
    • Choose license
    • Upload test WAV
    • Cleanup file headers
    • Add CONTRIBUTING.md
  • First Beta release (pip installation)
    • Figure out how to build
    • Figure out how to deploy
    • QA everything
    • Example files
    • Enforce naming conventions
    • Signal/Transform extension tutorial
  • Version 1.0
    • Ensure all possible byte widths are supported
    • Expose to user the frequency naming scheme
    • Reconsider dependence on simpleaudio
    • Figure out Stereo Convolution #9
    • Solve Fade issue #8
  • Version 1.1
    • Add effects from SciPy
    • Basic Chorus/Flanger/etc.
    • Sample rate change
    • Pitch shift
    • Full-fledged basic additive/subtractive/FM/AM Synthesizer capabilities (Synth1 for example)
    • Functioning EQ
    • Functioning Compression

Order undetermined:

  • Fully parametric
  • Comprehensive transform library
  • Comprehensive analysis functionality
  • Use Midi to map Signals
  • Import/export other audio formats
  • Advanced musical features
  • Scala files
  • Warning system and better system feedback and logging

"When we have the time":

  • VST interaction
  • Compile transform into JS/VST code
  • Optimize for speed
  • Consider (im)possibility of real-time

Fade transform

Each of these can be dealt with separately, do not fear:

  1. How are fade curves typically implemented? Exponential or linear change? In amplitude or in dB? If the curve is by dB, how do we reach -inf? IIRC Reaper provides a good selection of fades; we can try to aim for those options.

  2. How should the fade curve be chosen by the user? Do we give a string as an argument, i.e. Fade(curve='linear')? Which one should be the default? Can we have the user replace the default manually?

  3. (Perhaps this should be split into a separate issue in the future.) I'm not entirely pleased with the is_in argument, which indicates whether the user desires a fade in or a fade out. I believe we can find a better solution syntactically. One option that comes to mind is to simply split it into two transforms, FadeIn and FadeOut, and that could be the default solution for now. A more tempting idea is to have the fade direction determined by the order of multiplication with the signal. For example: Fade()*sig is interpreted to mean fade in, while sig*Fade() is taken to mean fade out, implicitly.
    This solution is very nice for the user, but behind the scenes it requires a significant break of the syntax rules, since the Transform has to somehow be notified of the side on which it was applied to the signal. Even if we supply each signal with two transform lists, pre- and post- (as opposed to the single list we have right now), the signal will still have to notify the transforms when realising.
    Do note, however, that there is already somewhat of an exception to this multiplication rule, since we can left-multiply by a float, while other Transforms have to be right-multiplied. Still, this exception is allowed not only because amplitude change is an essential transform, but also because the syntax hides the fact that it is a transform at all, so it is more acceptable.

Stereo Convolution

  1. What does SciPy convolve actually do when convolving two stereo signals?
  2. What would we like it to do?

I've noticed some impulse responses are Stereo, which is cool and all, except that I'm not sure what's supposed to happen when convolving a stereo signal with a stereo impulse response. For example, suppose each of L/R undergoes convolution and outputs a stereo reverb, and then both these stereo outputs are mixed together - won't it cancel the original panning completely? What would we actually want to happen using such an impulse response anyway? Perhaps use Reaper to check.
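Regarding question 1: scipy.signal.convolve on two 2-D arrays performs a full 2-D convolution, smearing across channels as well as time, so channel-wise convolution needs the time axis pinned explicitly. A sketch (this is not what gensound currently does):

import numpy as np
from scipy.signal import fftconvolve

sig = np.random.randn(2, 44100)     # stereo signal: (channels, samples)
ir = np.random.randn(2, 2000)       # stereo impulse response
wet = fftconvolve(sig, ir, axes=1)  # convolve along time only, channel by channel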

Crash when realising sequential Sines controlled by Lines

Code:

from gensound import Sine
from gensound.curve import Line

if __name__ == '__main__':
    x = Sine(Line(220, 440, 1e3)) | Sine(Line(440, 220, 1e3))
    x.play()

Trace:

Traceback (most recent call last):
  File "test.py", line 15, in <module>
    x.play()
  File "python3.8/site-packages/gensound/signals.py", line 54, in play
    audio = self.realise(sample_rate)
  File "python3.8/site-packages/gensound/signals.py", line 41, in realise
    audio = self.generate(sample_rate)
  File "python3.8/site-packages/gensound/signals.py", line 382, in generate
    phase = (phase + signal.end_phase)%(2*np.pi) # phase inference
  File "python3.8/site-packages/gensound/signals.py", line 500, in end_phase
    return (phase + 2*np.pi * self.frequency * self.duration / 1000)%(2*np.pi)
TypeError: unsupported operand type(s) for *: 'float' and 'Line'
