Comments (69)
Glad you like it
from panon.
Documentation for sounddevice is here:
https://python-sounddevice.readthedocs.io/en/latest/
from panon.
I think pythonic means
Exploiting the features of the Python language to produce code that is clear, concise and maintainable.
which does not help solve compatibility problems. But who knows, we can try it.
So would you please check out the issue8 branch and run python test/test_python-sounddevice.py? You are expected to see output like this.
$ python test_python-sounddevice.py
Make sure you are playing music when run this script
0.030090332 -0.031433105 0.06317139
succeeded to catch audio
Will this script work for you in the situation where test_pyaudio.py has failed?
Good to know there is an alternative to pyaudio, thanks.
from panon.
EDIT: Unfortunately it behaves the same. Unless there is something wrong in my system, it seems that the ALSA plug-in cannot work in full duplex. It may be a limitation of PortAudio.
from panon.
According to the comments in this issue, it might be a limitation of ALSA itself:
from panon.
In other words, it is sadly impossible to open one Stream for the first channel and one Stream for the second channel. That said, since you are only using Linux, you might just use pulseaudio directly.
Have you tried glava? https://github.com/jarcode-foss/glava
I think glava uses pulseaudio directly.
from panon.
PyVisualizer uses QtMultimedia as its backend. I don't know what QtMultimedia uses under the hood, I guess a C++ implementation, but it could be an option.
I think QtMultimedia is the way to go because it is much better integrated with KDE/Qt/Pulse. I actually used QtPy (python3-qtpy), only changing the following in the test script:
-from PySide2 import QtMultimedia
+from qtpy import QtMultimedia
That makes the script run perfectly, but unfortunately the result was the same: not even QtMultimedia, based on pulse, was able to capture input and output at once (at least using QtPy; I will check whether PySide2 is any different, which I assume it will not be).
Naively, I think there are two ways to fix this once and for all:
- create a script that captures audio input and sends it to audio output, and then use audio output in panon (the wire.py script might help with doing it)
- or, what I think might be the better option, capture both audio input and audio output with independent pyside2/qtpy processes (in and out), and within panon add the two signals together and generate and display the resulting spectrum.
Do you think there is a way to implement the second option in panon?
EDIT:
(at least using QtPy; I will check if PySide2 would be any different, which I assume will not)
Confirmed: both QtPy and PySide2 behave the same, as expected.
from panon.
Sorry, somehow I am confused. Are we still talking about the same issue as #6? I didn't know we needed to mix two signals; I thought you only wanted to catch audio when the microphone is muted.
How about this idea?
capture both audio input and audio output with independent pyside2/qtpy processes (in and out),
and add the two or even three signals together in a script/program. To connect this script to panon, it only needs to output to a fifo file, like mpd does. And I will build a backend for mpd's /tmp/mpd.fifo, so it will also accept this script's output.
This is how mpd works:
https://wiki.archlinux.org/index.php/ncmpcpp#Enabling_visualization
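The fifo approach is easy to prototype on the reading side too. Below is a rough sketch of what an mpd-style fifo reader could look like; the helper name read_fifo_frames and the fixed-size framing are my own assumptions, not panon's actual backend code:

```python
import numpy as np

def read_fifo_frames(path, frame_bytes):
    """Yield blocks of int16 samples from a fifo file, mpd-style.

    Hypothetical helper: panon's real mpd backend may frame data differently.
    """
    with open(path, 'rb') as f:  # blocks until a writer opens the fifo
        while True:
            chunk = f.read(frame_bytes)
            if len(chunk) < frame_bytes:  # the writer closed its end
                break
            yield np.frombuffer(chunk, dtype='int16')
```

Any script that keeps writing raw int16 samples to the fifo, the way mpd writes to /tmp/mpd.fifo, could then feed such a reader.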
from panon.
Sorry for the confusion. This issue is indeed related to issue #6. However, issue #6 was (only) in the context of pyaudio (ALSA), and here in addition we have tested sounddevice (supposedly an improved ALSA) and QtMultimedia (pulse), in the hope that they would behave differently.
At this point, everything seems to indicate that this is something general, and that we indeed need to combine input and output into a single signal.
Your approach seems great to me. Thank you!
from panon.
You are welcome.
So, this is an ugly example of writing QtMultimedia's output to /tmp/my_program.fifo.
https://gist.github.com/rbn42/44bbd5bfaa88a812b13040800456aba0
This script can work with panon in my system, just like mpd.
If you want to modify the audio data, you need an implementation of QIODevice instead of the QFile I am using at line 33:
source = audio_input.start(_file)
A QFile simply receives data from QtMultimedia and writes it to /tmp/my_program.fifo. I think you need to receive data from 2 different QtMultimedia instances, add the signals together, and then write to the fifo file.
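Adding the two signals together is simple once both chunks are int16 byte buffers of the same length. A minimal sketch (mix_chunks is a name I made up; clipping to the int16 range is just one way of handling overflow):

```python
import numpy as np

def mix_chunks(chunk_a, chunk_b):
    """Sum two equal-length int16 PCM byte chunks, clipping instead of wrapping."""
    a = np.frombuffer(chunk_a, dtype='int16').astype('int32')  # widen to avoid overflow
    b = np.frombuffer(chunk_b, dtype='int16').astype('int32')
    return np.clip(a + b, -32768, 32767).astype('int16').tobytes()
```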
from panon.
Would you please consider adding QtMultimedia as a backend?
Could you actually use a Qt binding shim such as this one:
https://github.com/ros-visualization/rviz/blob/melodic-devel/src/python_bindings/rviz/__init__.py
so that the test (and the backend, when implemented) supports both pyside2 and qtpy?
EDIT: the test works equally fine with both libraries. I tested them by making the change manually.
from panon.
Would you please consider adding QtMultimedia as a backend?
I am not sure, because it seems QtMultimedia requires a Qt main loop (if I am wrong, please tell me), and I am afraid that introducing a Qt main loop into panon may result in other issues. So I prefer not to add it before it is proven to be a better option than pyaudio in some situation, for example if it proves able to catch audio data for you while pyaudio cannot (#6).
Could you actually use a Qt Binding such as this one:
I think it is possible.
I tested them making the change manually
You can send a pull request for the qtpy part; put it in an if block, like
if QT_BINDING == 'qtpy':
    ...
from panon.
it seems QtMultimedia requires a Qt main loop (If I am wrong, tell me please)
I don't really know :/
from panon.
it seems QtMultimedia requires a Qt main loop (If I am wrong, tell me please)
I don't really know :/
It means a QtWidgets.QApplication object must be created, app = QtWidgets.QApplication(sys.argv), and the script must end with app.exec_(), which starts the Qt main loop. Otherwise I see errors which I can't remember now. https://gist.github.com/rbn42/44bbd5bfaa88a812b13040800456aba0
I don't know whether there is a better way to do it without causing errors.
from panon.
Yeah, I don't know if you can do it without one; probably not.
from panon.
Oh no, I just noticed sounddevice is a binding for PortAudio, not a binding for PulseAudio. https://python-sounddevice.readthedocs.io/en/0.3.7/
This Python module provides bindings for the PortAudio library and a few convenience functions to play and record NumPy arrays containing audio signals.
from panon.
And this https://github.com/bastibe/SoundCard seems to be the Python binding for PulseAudio.
from panon.
Oh no, I just noticed sounddevice is a binding for PortAudio, not a binding for PulseAudio. https://python-sounddevice.readthedocs.io/en/0.3.7/
This Python module provides bindings for the PortAudio library and a few convenience functions to play and record NumPy arrays containing audio signals.
Yes, I mentioned it in the first comment above. According to that comment, it's supposed to be better than pyaudio, though...
from panon.
It was my fault; I don't know why I took it for a binding for PulseAudio.
from panon.
No problem. That was why I changed the title later on to QtMultimedia (targeting PulseAudio), but that has the issues you mentioned too, so maybe we should change it again and request SoundCard instead, shouldn't we?
from panon.
If you want to help, there is already a test script for SoundCard.
from panon.
I've just tested it. It works the same as the other backends: by default it displays input (micro) sound only, which can be manually changed in pavucontrol to display output sound.
The difference with respect to the other backends is that it always starts displaying input instead of whatever you configured in pavucontrol (Audio vs. Monitor of Audio), I suppose because the test explicitly requests the micro:
default_mic = sc.default_microphone()
from panon.
As I mentioned in #11, I don't really have a microphone, and this default_mic actually records audio from my default speaker.
Can we try to catch audio from all microphones, to see which one actually works?
"""
Requires https://github.com/bastibe/SoundCard
"""
import soundcard as sc
import numpy as np
default_mic = sc.default_microphone()
print('Make sure you are playing music when run this script')
mics = sc.all_microphones()
for mic in mics:
print(mic)
data = default_mic.record(samplerate=48000, numframes=48000)
_max = np.max(data)
_min = np.min(data)
_sum = np.sum(data)
print(_max, _min, _sum)
if _max > 0:
print('succeeded to catch audio')
else:
print('failed to catch audio')
from panon.
I ran the script.
- with muted micro but music playing:
Make sure you are playing music when run this script
<Microphone Audio intern Estèreo analògic (2 channels)>
0.0 0.0 0.0
failed to catch audio
- with unmuted micro:
Make sure you are playing music when run this script
<Microphone Audio intern Estèreo analògic (2 channels)>
0.07467651 -0.068603516 3.999298
succeeded to catch audio
from panon.
So it means SoundCard provides you with only one "microphone"?
from panon.
I can have up to three micros:
- internal (always enabled, although it can be muted)
- external (only enabled when plugged in as headphones; I don't have any)
- bluetooth headset (I have one of those with a micro; I could test it if you want, but in the test above it was not connected and hence disabled)
from panon.
I mean I saw only one "microphone" in the script's output. It was expected to show all the microphones:
mics = sc.all_microphones()
soundcard.all_microphones(include_loopback=False, exclude_monitors=True)
A list of all connected microphones.
By default, this does not include loopbacks (virtual microphones that record the output of a speaker).
from panon.
Well, so SoundCard doesn't work for us either.
from panon.
BTW, I don't know what "monitor" means here. You don't need to explain it to me if it doesn't matter.
from panon.
I mean I saw only one "microphone" in the script's output. It was expected to show all the microphones
Yes, I know, and I'm telling you why there is only one: because the other two are not connected.
from panon.
Well, so SoundCard doesn't work for us either.
Not sure what the expectation for "work" is anymore :)
from panon.
Not sure what the expectation for "work" is anymore :)
For me, it means the library can catch audio from the default speaker.
from panon.
BTW, I don't know what "monitor" means here. You don't need to explain it to me if it doesn't matter.
I will explain monitor again, because we need to understand each other if we want this to move on.
Monitor appears in pavucontrol (the word may be different in English).
- When capturing Monitor of Audio -> you capture the output (speakers)
- When capturing Audio (directly) -> you capture the input (micro)
And that is basically what happens in all backends. When you use pavucontrol to select the desired configuration, all backends follow it, with the exception of this one, which always goes back to the micro, regardless of whether you redirected to the speakers at an earlier instance (i.e., running the test a second time after manually making the change).
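For reference, pulseaudio exposes each output sink's loopback as a capture source named '<sink>.monitor', which is what the "Monitor of Audio" entry corresponds to. Telling monitors and real inputs apart by name could look like this (split_sources is a hypothetical helper, not part of any backend):

```python
def split_sources(source_names):
    """Partition pulseaudio source names into speaker monitors and real inputs.

    Monitors carry a '.monitor' suffix; everything else is a physical input.
    """
    monitors = [n for n in source_names if n.endswith('.monitor')]
    inputs = [n for n in source_names if not n.endswith('.monitor')]
    return monitors, inputs
```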
from panon.
Not sure what the expectation for "work" is anymore :)
For me, it means the library can catch audio from the default speaker.
As mentioned above, all tested backends (except SoundCard) work fine after adjusting pavucontrol accordingly (from capturing Audio to capturing Monitor of Audio).
from panon.
Now I understand, thank you. I rewrote the script for monitors.
"""
Requires https://github.com/bastibe/SoundCard
"""
import soundcard as sc
import numpy as np
print('Make sure you are playing music when run this script')
mics = sc.all_microphones(exclude_monitors=False)
for mic in mics:
print(mic)
data = default_mic.record(samplerate=48000, numframes=48000)
_max = np.max(data)
_min = np.min(data)
_sum = np.sum(data)
print(_max, _min, _sum)
if _max > 0:
print('succeeded to catch audio')
else:
print('failed to catch audio')
from panon.
Awesome, it's a move in the right direction!
I needed to fix the code:
EDIT: data = mic.record(samplerate=48000, numframes=48000)
"""
Requires https://github.com/bastibe/SoundCard
"""
import soundcard as sc
import numpy as np
print('Make sure you are playing music when run this script')
mics = sc.all_microphones(exclude_monitors=False)
for mic in mics:
print(mic)
data = mic.record(samplerate=48000, numframes=48000)
_max = np.max(data)
_min = np.min(data)
_sum = np.sum(data)
print(_max, _min, _sum)
if _max > 0:
print('succeeded to catch audio')
else:
print('failed to catch audio')
- with music and unmuted micro:
Make sure you are playing music when run this script
<Loopback Monitor of Audio intern Estèreo analògic (2 channels)>
0.6021118 -0.58810425 29.975067
succeeded to catch audio
<Microphone Audio intern Estèreo analògic (2 channels)>
0.048065186 -0.047058105 -4.9709473
succeeded to catch audio
- with music but muted micro, however, something goes wrong:
Make sure you are playing music when run this script
<Loopback Monitor of Audio intern Estèreo analògic (2 channels)>
0.59664917 -0.6100769 -48.50177
succeeded to catch audio
<Microphone Audio intern Estèreo analògic (2 channels)>
Assertion 's' failed at pulse/stream.c:1411, function pa_stream_connect_record(). Aborting.
from panon.
<Loopback Monitor of Audio intern Estèreo analògic (2 channels)>
Well, so this is the device we need, right?
Are you sure this device is not shown in your pyaudio device list? Maybe under a different name? We need the device's id/index, so you can find the corresponding device in pyaudio.
"""
Requires https://github.com/bastibe/SoundCard
"""
import soundcard as sc
import numpy as np
print('Make sure you are playing music when run this script')
mics = sc.all_microphones(exclude_monitors=False)
for mic in mics:
print(mic,mic.id) #show device's id
data = default_mic.record(samplerate=48000, numframes=48000)
_max = np.max(data)
_min = np.min(data)
_sum = np.sum(data)
print(_max, _min, _sum)
if _max > 0:
print('succeeded to catch audio')
else:
print('failed to catch audio')
from panon.
After you get the id with the script above, can you put the id into this pyaudio script?
import numpy as np
import pyaudio

p = pyaudio.PyAudio()
stream = p.open(
    format=pyaudio.paInt16,
    channels=2,
    rate=44100,
    input=True,
    input_device_index=put your device id/index here,
)
data = stream.read(44100)
data = np.frombuffer(data, 'int16')
_max = np.max(data)
_min = np.min(data)
_sum = np.sum(data)
print(_max, _min, _sum)
if _max > 0:
    print('succeeded to catch audio')
else:
    print('failed to catch audio')
from panon.
There were two issues with your code, so I am using this one:
"""
Requires https://github.com/bastibe/SoundCard
"""
import soundcard as sc
import numpy as np
import time
print('Make sure you are playing music when run this script')
mics = sc.all_microphones(exclude_monitors=False)
for mic in mics:
print(mic,mic.id) #show device's id
time.sleep(2)
data = mic.record(samplerate=48000, numframes=48000)
_max = np.max(data)
_min = np.min(data)
_sum = np.sum(data)
print(_max, _min, _sum)
if _max > 0:
print('succeeded to catch audio')
else:
print('failed to catch audio')
- it should call mic instead of default_mic
- without the sleep, it sometimes crashes
from panon.
This is the result of the script (soundcard); I will try pyaudio, which should work equally fine:
❯ python3 test_python-soundcard.py
Make sure you are playing music when run this script
<Loopback Monitor of Audio intern Estèreo analògic (2 channels)> alsa_output.pci-0000_00_1f.3.analog-stereo.monitor
0.58651733 -0.5818176 103.38672
succeeded to catch audio
<Microphone Audio intern Estèreo analògic (2 channels)> alsa_input.pci-0000_00_1f.3.analog-stereo
0.10623169 -0.12210083 36.598785
succeeded to catch audio
from panon.
Oh, I thought the id was a number.
from panon.
Mmm, I wonder if one can specify:
alsa_output.pci-0000_00_1f.3.analog-stereo
instead of:
alsa_output.pci-0000_00_1f.3.analog-stereo.monitor
and get it working.
from panon.
No, you can't. You can only read from output.monitor, and write to output. You can't read from output.
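Since the readable loopback is just the sink's name with a '.monitor' suffix (as in the device list above), deriving one from the other is a one-liner; monitor_of is a hypothetical helper:

```python
def monitor_of(sink_name):
    """Name of the capture source that plays back what a pulseaudio sink outputs."""
    return sink_name + '.monitor'
```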
from panon.
I see.
This is the result; as you said, it crashes, because it expects an integer:
ALSA lib pcm_dmix.c:1052:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_dmix.c:1052:(snd_pcm_dmix_open) unable to open slave
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
Traceback (most recent call last):
File "test_pyaudio_new.py", line 8, in <module>
input_device_index='alsa_output.pci-0000_00_1f.3.analog-stereo.monitor',
File "/usr/lib/python3/dist-packages/pyaudio.py", line 750, in open
stream = Stream(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/pyaudio.py", line 441, in __init__
self._stream = pa.open(**arguments)
ValueError: input_device_index must be integer (or None)
from panon.
It would be easier for me if there were a corresponding output.monitor device in pyaudio and we could find it. Otherwise I have to add SoundCard to panon. Either way, I think it is solved.
from panon.
Yes, I think that would be good.
It seems you are not interested in panon displaying audio from the micro, which I think is fine. I am not requesting this anymore.
There is however a side effect of panon using pyaudio at the moment, which is that it enables the micro for no real reason.
Ideally, panon should be able to configure itself to display music from the output and ignore (not even enable) the input, that is, the micro.
I know about this because the captured-micro icon appears in the system tray whenever I use panon, regardless of what panon displays (output instead of input).
I find this useless and would prefer the icon not to appear, which depends on pyaudio not enabling the micro when the micro is not really being captured.
from panon.
It seems you are not interested in panon displaying audio from the micro, which I think it's fine. Not requesting this anymore.
Sorry, I am not interested. But as I said before, it can be implemented through a fifo file. If I have some free time in the future, I can help you write this script.
There is however a side effect of panon using pyaudio at the moment
I guess I have to add SoundCard to panon, so pyaudio won't bother you anymore.
from panon.
I think the simplest solution is to find a way to directly target the monitor in pyaudio.
The test for soundcard also suffers from the other side issue I mentioned above (the micro icon in the system tray) :/
It's fine, we can forget about this side effect too.
from panon.
This script can help you add 2 signals together.
from soundcard import pulseaudio as sc
import os
import numpy as np

SAMPLE_RATE = 44100  # [Hz]
SAMPLE_SIZE = 16  # [bit]
CHANNEL_COUNT = 2
BUFFER_SIZE = 5000
blocksize = SAMPLE_RATE // 60

mics = sc.all_microphones(exclude_monitors=False)
mic0 = mics[0]  # Replace it with the mic you want
mic1 = mics[1]  # Replace it with the mic you want
stream0 = mic0.recorder(SAMPLE_RATE, CHANNEL_COUNT, blocksize)
stream0.__enter__()
stream1 = mic1.recorder(SAMPLE_RATE, CHANNEL_COUNT, blocksize)
stream1.__enter__()

path = "/tmp/my_program.fifo"
if not os.path.exists(path):
    os.mkfifo(path)
f_fifo = open(path, 'wb')

while True:
    data = stream0.record(blocksize) + stream1.record(blocksize)
    data = np.asarray(data * (2**16), dtype='int16').tobytes()
    f_fifo.write(data)
After starting the script, set the fifo path to /tmp/my_program.fifo.
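One side note on the conversion step: multiplying float samples by 2**16 before casting to int16 can wrap around once the summed signal goes above half scale. A clipping-safe variant might look like this (float_to_int16 is my own sketch, not part of the script above):

```python
import numpy as np

def float_to_int16(block):
    """Convert float samples in [-1, 1] to int16 bytes, clipping out-of-range
    values instead of letting the cast wrap around."""
    return np.clip(block * 32767, -32768, 32767).astype('int16').tobytes()
```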
from panon.
Will give it a try, thank you
from panon.
I tried it. While it works, the fifo introduces a delay of about 5-10 s between the sound and the spectrum display, which makes it a non-suitable option unless it can somehow be sped up substantially.
from panon.
Oh yes, that is an error that I couldn't notice when connecting panon to mpd. So here is the new script, and I will fix the error in panon.
from soundcard import pulseaudio as sc
import os
import numpy as np

SAMPLE_RATE = 44100  # [Hz]
SAMPLE_SIZE = 16  # [bit]
CHANNEL_COUNT = 2
BUFFER_SIZE = 5000
blocksize = SAMPLE_RATE // 60
#blocksize = 256

path = "/tmp/my_program.fifo"
if not os.path.exists(path):
    os.mkfifo(path)
print('waiting')
f_fifo = open(path, 'wb')
print('start')

mics = sc.all_microphones(exclude_monitors=False)
mic0 = mics[0]  # Replace it with the mic you want
stream0 = mic0.recorder(SAMPLE_RATE, CHANNEL_COUNT, blocksize)
stream0.__enter__()
mic1 = mics[1]  # Replace it with the mic you want
stream1 = mic1.recorder(SAMPLE_RATE, CHANNEL_COUNT, blocksize)
stream1.__enter__()

while True:
    data = stream0.record(blocksize) + stream1.record(blocksize)
    data = np.asarray(data * (2**16), dtype='int16').tobytes()
    f_fifo.write(data)
from panon.
It is slightly better with this change, but unfortunately still too slow.
from panon.
Are you sure you checked out the latest master branch and restarted panon? I see no delay on my system.
from panon.
Mmm, yes I did :/
Could it be that the delay is introduced when more than one signal is available?
from panon.
How are you checking the delay? I do it with single tones
from panon.
Maybe, but I can't know; I have only one signal here. But you can test: do you have a delay with only one signal?
How are you checking the delay? I do it with single tones
By playing / pausing music
from panon.
Oh, I see what's going on. I think single tones introduce a delay that is probably due to the appearing/disappearing of panon itself.
You're right: while panon is shown, the delay is very short and pretty much acceptable.
from panon.
I tested it again, without panon auto-hide this time, and unfortunately I can still see the delay, so it's not that.
Testing with continuous music is not the best, because you cannot perceive the delay, right?
I use the sound that plasma makes when changing the audio volume. For that I use this file (in case you also have it):
/usr/share/sounds/freedesktop/stereo/audio-volume-change.oga
from panon.
Do you see a delay in these two GIFs?
https://files.catbox.moe/js00vd.webp
This one shows audio-volume-change
https://files.catbox.moe/567zev.webp
from panon.
I watched the second one and there doesn't seem to be a delay.
I realize that the delays in my system are cumulative; could that be the problem?
Could you keep the fifo going for a while and then test? In my case, the longer I wait to run the test after starting the fifo, the longer the delay.
from panon.
The fifo is still going here, no delay.
I realize that delays in my system are cumulative, could that be the problem?
Try replacing line 64 in panon/source.py with
data = self.stream.read(self.sample_rate // self.fps * self.channel_count * 2 * 8)  # int16 requires 2 bytes
This will help eliminate the accumulated delay, but may introduce other problems.
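The idea behind that change can be sketched as a small helper: request several visual frames' worth of bytes per read, so any backlog in the pipe is drained faster than it grows. The name bytes_per_read and the drain_factor parameter are my own; panon's source.py may differ:

```python
def bytes_per_read(sample_rate, fps, channel_count, sample_bytes=2, drain_factor=8):
    """Bytes to request per read: one visual frame of audio (int16 = 2 bytes
    per sample) times a drain factor that lets the reader catch up on backlog."""
    return sample_rate // fps * channel_count * sample_bytes * drain_factor
```

With sample_rate=44100, fps=60, and 2 channels, this requests 23520 bytes per read instead of the 2940 a single frame would need.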
from panon.
Maybe we are heading in the wrong direction. You can forget about the fifo and modify the SoundCardSource class in source.py directly, adding your 2nd signal to the SoundCardSource class.
If that worked, it would certainly be the best option. Will check.
from panon.
Is there a reason why you use a microphone instead of the speakers in your code?
According to https://pypi.org/project/SoundCard :
import soundcard as sc
# get a list of all speakers:
speakers = sc.all_speakers()
# get the current default speaker on your system:
default_speaker = sc.default_speaker()
# get a list of all microphones:
mics = sc.all_microphones()
# get the current default microphone on your system:
default_mic = sc.default_microphone()
All of these functions return Speaker and Microphone objects, which can be used for playback and recording. All data passed in and out of these objects are frames × channels Numpy arrays.
That should be why panon displays input (micro) instead of output (speakers) by default. And that could be a way to get both at once and add them together within panon for display...
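If panon recorded both sources itself, combining them would be a one-liner on the frames-by-channels arrays that SoundCard's record() returns. A sketch, assuming averaging is acceptable (it keeps the mix within [-1, 1], unlike a plain sum; mix_signals is a hypothetical name):

```python
import numpy as np

def mix_signals(a, b):
    """Average two frames-by-channels float blocks, truncating to the shorter one."""
    n = min(len(a), len(b))
    return (a[:n] + b[:n]) / 2
```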
from panon.
All of these functions return Speaker and Microphone objects, which can be used for playback and recording
It means a Speaker object has the play() function, while a Microphone object has the record() function; a Speaker object does not have a record() function.
from panon.
I mean, I cannot fetch data from a Speaker. SoundCard does not allow me to do that.
from panon.
I mean, I cannot fetch data from a Speaker. SoundCard does not allow me to do that.
Yes, right, you told me that before; sorry, I forgot.
By the way this last commit is amazing! Kudos
from panon.