lightpohl / podcast-dl

A humble CLI for downloading and archiving podcasts.

License: MIT License

JavaScript 100.00%
archiver cli downloader podcast podcast-dl podcasts


podcast-dl's People

Contributors

bbo2adwuff, calebj, dependabot[bot], drmikecrowe, lightpohl, lukengda, mchill, rdwz


podcast-dl's Issues

Alpine Binary Issues

Thanks for publishing this tool! I have tried it out on Ubuntu and it seems to work well.

However, I would like to run this in a container with an Alpine Linux base image. The binary currently does not work there. I have tested it with the following Dockerfile:

FROM alpine:3.11

WORKDIR /usr/local/bin

RUN wget https://github.com/lightpohl/podcast-dl/releases/download/v1.3.1/podcast-dl-linux-x64 \
  && chmod +x podcast-dl-linux-x64 \
  && mv podcast-dl-linux-x64 podcast-dl \
  && mkdir /downloads

WORKDIR /downloads

CMD ["ldd", "/usr/local/bin/podcast-dl"]
#CMD ["podcast-dl", "--help"]
#CMD ["tail","-f","/dev/null"]

which can be built with `docker build --tag podcast-dl:1.3.1 .`
and run with `docker run --name podcast-dl -it podcast-dl:1.3.1`

ldd displays the issues with the binary and I do not know where to go from here.

EDIT: a snippet of the ldd output

Error relocating /usr/local/bin/podcast-dl: _ZNSsC1ERKSsmm: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZnamRKSt9nothrow_t: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZNKSs7compareEPKc: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZNSt11logic_errorD1Ev: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZSt29_Rb_tree_insert_and_rebalancebPSt18_Rb_tree_node_baseS0_RS_: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZdlPvRKSt9nothrow_t: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZNSt9basic_iosIcSt11char_traitsIcEE5clearESt12_Ios_Iostate: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZNSs15_M_replace_safeEmmPKcm: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZNSsD1Ev: symbol not found
Error relocating /usr/local/bin/podcast-dl: backtrace: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZNSsC1EPKcRKSaIcE: symbol not found
Error relocating /usr/local/bin/podcast-dl: getcontext: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZNSt15basic_streambufIcSt11char_traitsIcEE6setbufEPcl: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZNSo9_M_insertIxEERSoT_: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZNSs7reserveEm: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZTISt9exception: symbol not found
Error relocating /usr/local/bin/podcast-dl: _ZTVSt12domain_error: symbol not found

[Feature Request] Validate file size before writing to archive

There have been several instances where a download fails resulting in a zero-byte file, but podcast-dl writes the URL to the archive. The result is that the failed download is not attempted again on subsequent runs.

Would it be possible to validate the downloaded file against the <enclosure> length and/or <itunes:duration> value in the xml file before writing to the archive file? Validating duration may require a ffprobe or similar dependency but file size should be easier to achieve.
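A sketch of the file-size check described above (the function name and the 5% tolerance are illustrative, not podcast-dl's actual API):

```javascript
// Illustrative sketch: decide whether a download looks complete before the
// episode's URL is written to the archive. `enclosureLength` would come from
// the feed's <enclosure length="..."> attribute; the tolerance is a guess,
// since feeds often report slightly inaccurate sizes.
const isDownloadComplete = (bytesOnDisk, enclosureLength) => {
  if (bytesOnDisk === 0) return false; // zero-byte files are always failures
  if (!enclosureLength) return true; // nothing to validate against
  const tolerance = 0.05;
  return Math.abs(bytesOnDisk - enclosureLength) / enclosureLength <= tolerance;
};
```

Only when this returns true would the URL be appended to the archive; otherwise the episode stays eligible for the next run.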

P.S. thanks for your work on this project — it has been incredibly useful :)

getting errors while downloading podcasts

I was downloading a podcast with 100 episodes, but I got an error after downloading 21 of them. This was the error:

22 of 100
(node:10364) UnhandledPromiseRejectionWarning: TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received null
at validateString (internal/validators.js:121:11)
at Object.extname (path.js:752:5)
at getUrlExt (C:\snapshot\podcast-dl\bin\util.js:160:20)
at getIsAudioUrl (C:\snapshot\podcast-dl\bin\util.js:179:15)
at getEpisodeAudioUrlAndExt (C:\snapshot\podcast-dl\bin\util.js:188:20)
at main (C:\snapshot\podcast-dl\bin\bin.js:234:9)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
(Use podcast-dl --trace-warnings ... to show where the warning was created)
(node:10364) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:10364) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Can you please help me fix this? This was the link I was trying to download.

Does not create nested directories

podcast-dl v1.6.1

podcast-dl does not create directories when --out-dir is given a filepath with directories that do not exist.

For example, given --out-dir ~/Downloads/test/{{podcast_title}}, if ~/Downloads/test/ does not exist then the command will fail rather than creating the directory.

I was thinking the expected behavior would be to create the directory tree since it is explicitly given by the user (similar to mkdir -p). Although this could also be a safety feature to prevent downloading to incorrect locations.

Python Integration

Hey, it's me again. I have been trying to build a basic web GUI for podcast-dl (like youtube-dl-server) inside of a Docker container using a Python webserver framework. I am having a lot of trouble calling podcast-dl as a shell command from Python, e.g. podcast-dl --url XXX. Running shell commands from Python is not a great way to go about it anyway...

I googled around a bit and could not find any alternative methods. Do you have any thoughts on better ways to integrate podcast-dl into a Python script?

Add date range examples

I keep getting "bad pattern" errors when I try to specify boundary dates. An example of how to download, say, every episode from 2020 would help me immensely.

Escape all metadata params passed into ffmpeg when using '--add-mp3-metadata'

'amp' is not recognized as an internal or external command,
operable program or batch file.

The 12 Deaths of Christmas - Day 1 | Error downloading episode: Error: Command failed: ffmpeg -loglevel quiet -i "D:\Media\Podcasts\The Amelia Project Patreon Feed\20231225-The 12 Deaths of Christmas - Day 1.mp3" -map_metadata 0 -metadata album="The Amelia Project Patreon Feed" -metadata title="The 12 Deaths of Christmas - Day 1" -metadata comment="<p>\"For the next two weeks this case will take over our lives!\"</p><p><img data-media-id=\"258530827\"></p><p>WISHING YOU THE HAPPIEST OF HOLIDAYS dear patrons!</p><p>We'll be accompanying you over the next two weeks with daily instalments of the bonus miniseries <strong>The 12 Deaths of Christmas.</strong></p><p>It chronicles one of the most bonkers cases The Amelia Project has ever taken on. Tune in daily for the spiralling chaos...</p><p>If you haven't done so yet, do check out the <a href=\"https://www.patreon.com/posts/taking-you-93046568?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=postshare_creator&amp;utm_content=join_link\">patron exclusive Behind the Scenes video for this series! </a></p><p>We will be collecting three pledges for this miniseries, one in December, two in January. Thank you so much for your support and allowing us to keep making this show!</p><p>Happy listening! 
Love &amp; Cocoa!</p><p>The Amelia Team 💝</p><p>Website: <a href=\"https://ameliapodcast.com/\" rel=\"nofollow noopener\" target=\"_blank\">https://ameliapodcast.com/</a></p><p>Transcripts: <a href=\"https://ameliapodcast.com/season-5\" rel=\"nofollow noopener\" target=\"_blank\">https://ameliapodcast.com/season-5</a></p><p>Merch: <a href=\"https://www.teepublic.com/stores/the-amelia-project?ref_id=6148\" rel=\"nofollow noopener\" target=\"_blank\">https://www.teepublic.com/stores/the-amelia-project?ref_id=6148</a></p><p>Instagram: <a href=\"https://www.instagram.com/ameliapodcast/\" rel=\"nofollow noopener\" target=\"_blank\">https://www.instagram.com/ameliapodcast/</a></p><p>Tumblr: <a href=\"https://www.tumblr.com/ameliapodcast\" rel=\"nofollow noopener\" target=\"_blank\">https://www.tumblr.com/ameliapodcast</a></p><p>X: <a href=\"https://twitter.com/amelia_podcast\" rel=\"nofollow noopener\" target=\"_blank\">https://twitter.com/amelia_podcast</a></p><p>Bluesky: <a href=\"https://bsky.app/profile/theameliaproject.bsky.social\" rel=\"nofollow noopener\" target=\"_blank\">https://bsky.app/profile/theameliaproject.bsky.social</a></p>" -metadata track="157" -metadata date="2023-12-25" -codec copy "D:\Media\Podcasts\The Amelia Project Patreon Feed\20231225-The 12 Deaths of Christmas - Day 1.mp3.tmp.mp3"

Feature Request: Download RSS podcast using Local RSS file

As the title says, it would be great to be able to pass an RSS file that you have locally. The feed I want to download sometimes includes broken or bad metadata, and instead of manually editing the metadata of each audio file, I could pass an edited copy of the RSS file (that I downloaded and fixed up) through the CLI.

TimeoutError: Timeout awaiting 'request' for 5000ms

I try to download a podcast of which the RSS address is: https://www.omnycontent.com/d/playlist/6dd8413b-ede6-483a-bf4e-ab80014939de/20f4bf02-d62f-40b2-b532-af10011ba71b/2bdbf0f4-e0ca-4343-9fb2-af10011ba729/podcast.rss

When I run podcast-dl --url "https://www.omnycontent.com/d/playlist/6dd8413b-ede6-483a-bf4e-ab80014939de/20f4bf02-d62f-40b2-b532-af10011ba71b/2bdbf0f4-e0ca-4343-9fb2-af10011ba729/podcast.rss"
I get

English Listening - Goals | Error downloading episode: TimeoutError: Timeout awaiting 'request' for 5000ms

But I can download the metadata (JSON file).

I tried it on both Linux and Windows. The result is the same.

I tried the URL of an episode in the RSS, for example: https://omny.fm/shows/listening-time/english-listening-goals
I can open this page, click the "Download" button, and download the episode.
I suspect this program copies the entire URL from the "<p>" tag: "https://omny.fm/shows/listening-time/english-listening-goals clean English Listening - Board Games English Listening - Board Games", because when I open that URL, the page tells me "The clip does not exist or may be private".

Could you fix this, or is there a workaround?
I'd really appreciate it.


By the way, a template option to add the episode's list number would also be nice.

Archive re-downloading podcast imagery

I'm using podcast-dl to periodically download multiple podcasts, fetching new episodes each week (to be automatically uploaded to the cloud and archived). I've set up the --archive flag to output to a separate JSON file for each podcast.

Each time I run the downloader, even with the --archive flag, it re-downloads all podcast artwork. This happens whether or not there are new episodes to download.

Does the archive retain information on already-downloaded podcast artwork? I'm also using the --include-meta, --include-episode-meta, and --out-dir flags in each command.

Feature request: flag to replace existing files if not in archive

If, while downloading a podcast, the server errors and the file isn't completely downloaded, Podcast-dl will move to the next file without logging the broken file in the archive.

If you then re-download the podcast to fetch new episodes, the existing broken file is skipped and logged in the archive as if it were a successful download, thereby leaving you with an incomplete file.

I think it would be useful if we could tell Podcast-dl to always replace existing files as long as they're not logged in the archive.
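A sketch of the proposed decision logic, with a hypothetical flag name (alwaysReplaceUnarchived) standing in for whatever the real option would be called:

```javascript
// An on-disk file only counts as done if its key is also in the archive;
// with the proposed flag set, unarchived files are re-downloaded (replaced).
const shouldDownload = ({ fileExists, inArchive, alwaysReplaceUnarchived }) => {
  if (inArchive) return false; // archived implies a verified, complete download
  if (fileExists && !alwaysReplaceUnarchived) return false; // current behavior
  return true;
};
```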

feature: add templating support to `--archive` option

Hi,

The --archive parameter does not recognize the variable {{podcast_title}} in the path.

Example

--url https://sternengeschichten.podigee.io/feed/aac --out-dir Z:\Podcasts\{{podcast_title}} --include-episode-meta --include-meta --episode-template "{{release_date}} - {{title}}" --archive Z:\Podcasts\{{podcast_title}}\archive.log

Output

  • a file named {{podcast_title}} is created in "Z:\Podcasts"

thx
mg
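A minimal sketch of the requested templating, reusing the same {{var}} token syntax that --out-dir and --episode-template already support (the helper name is hypothetical):

```javascript
// Replace {{name}} tokens with values; unknown tokens are left untouched so
// a typo stays visible rather than being silently removed.
const fillTemplatePath = (template, vars) =>
  template.replace(/{{\s*(\w+)\s*}}/g, (match, name) =>
    name in vars ? String(vars[name]) : match
  );
```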

Off-by-one error with --reverse option

Using the --reverse option causes the most recent episode to not be downloaded.

i.e.

  • Running podcast-dl-linux-x64 --reverse downloads all but the most recent episode
  • Running podcast-dl-linux-x64 without --reverse correctly downloads all episodes

Handling problematic feeds

I've got one podcast with episode titles that exceed the filename length when .tmp is added by getTempPath() in util.js. Unfortunately I don't control the podcast, so I can't set them straight on the point of a title. I wanted to see how you might want to tackle this first issue, because it would also impact the .meta.json generation and possibly existing downloads.

From what I can tell, filenamify only looks at the last extension, so .meta.json would be cut to .json with an offset of 5, which would cause different prefixes to be generated for templates that render to a length of more than 245. Simply changing the length might cause differences in generated filenames for lengths near the edge, and cause some users to re-download episodes if not using an archive. I don't know what the best solution is here.


I have another podcast--who I have emailed a few times about this issue to no avail--with invalid URIs in the <link> field of some episodes. Since the exception this triggers is unhandled, it crashes the whole run:

TypeError [ERR_INVALID_URL]: Invalid URL
    at new NodeError (node:internal/errors:399:5)
    at new URL (node:internal/url:560:13)
    at getUrlExt (file:///home/caleb/.local/lib/node_modules/podcast-dl/bin/util.js:327:24)
    at getIsAudioUrl (file:///home/caleb/.local/lib/node_modules/podcast-dl/bin/util.js:352:15)
    at getEpisodeAudioUrlAndExt (file:///home/caleb/.local/lib/node_modules/podcast-dl/bin/util.js:362:15)
    at getItemsToDownload (file:///home/caleb/.local/lib/node_modules/podcast-dl/bin/util.js:148:7)
    at main (file:///home/caleb/.local/lib/node_modules/podcast-dl/bin/bin.js:172:23)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  input: 'https://%20',
  code: 'ERR_INVALID_URL'
}

I was thinking of wrapping the initial check in a try/catch and returning false if it fails:

const getIsAudioUrl = (url) => {
  let ext;

  try {
    ext = getUrlExt(url);
  } catch (err) {
    // could log a warning here?
    return false;
  }

  if (!ext) {
    return false;
  }

  return VALID_AUDIO_EXTS.includes(ext);
};

The logic being that if it fails to parse, it can't reliably be checked for a valid audio extension, so it certainly is not an audio URL. I can PR this if you agree.


Finally, I have a third podcast with a CDN that rejects HEAD requests, but only for the tracking URLs. Stripping off the tracking redirect works fine. Rather than an option to disable HEAD, I was thinking of adding an option for a URL filter script that can be called to pre-process the audio links before they are downloaded, sort of like how ripgrep has a preprocessor command option. I have not decided if it would be better to pipe URLs through as lines, or call it once per URL. The experimental tracking bypass option didn't really work, but a custom script would be able to tackle it on a per-feed basis. This probably deserves its own issue.

ERR_INVALID_URL

I get this error when trying to download this podcast. I get the same (or maybe just similar?) error if I do not save metadata. This is on Windows; the podcast-dl version is 8.0.5.

PS T:\Hoard\Podcasts> .\podcast-dl --archive --include-meta --include-episode-meta --include-episode-images --episode-digits 3 --url "https://www.giantbomb.com/feeds/podcast/"
Giant Bombcast
Giant Bomb discusses the latest video game news and new releases, taste-test questionable beverages, and get wildly off-topic in this weekly podcast.


Downloading podcast image...
https://www.giantbomb.com/a/uploads/original/15/154673/3389479-1461983092-33823.jpg | Starting download of 3.01 MB...
https://www.giantbomb.com/a/uploads/original/15/154673/3389479-1461983092-33823.jpg | Download complete!

Saving podcast metadata...
(node:20836) UnhandledPromiseRejectionWarning: TypeError [ERR_INVALID_URL]: Invalid URL:
    at new NodeError (internal/errors.js:322:7)
    at onParseError (internal/url.js:270:9)
    at new URL (internal/url.js:346:5)
    at R (C:\snapshot\dist\podcast-dl-8.0.5.js)
    at j (C:\snapshot\dist\podcast-dl-8.0.5.js)
    at $ (C:\snapshot\dist\podcast-dl-8.0.5.js)
    at C:\snapshot\dist\podcast-dl-8.0.5.js
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
(Use `podcast-dl --trace-warnings ...` to show where the warning was created)
(node:20836) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:20836) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Strange duplication of directory names

This may be more of a "what am I doing wrong" issue so I sincerely apologize if that's the case.

I recently moved from WSL to Raspberry Pi OS and resurrected a script that downloads podcasts to specific directories based on a short name. Here's an example command as output by the script:

npx podcast-dl --out-dir "/hd/podcasts/talkshow" --archive "/hd/podcasts/talkshow/archive.pdl" --threads 4 --include-meta --include-episode-meta --url https://daringfireball.net/thetalkshow/rss

This should download the episodes and all associated files to /hd/podcasts/talkshow (as far as I know). And on WSL, it did. When I migrated the script to Linux, it started downloading to /hd/podcasts/talkshow/hd_podcasts_talkshow, and the archive (in the main talkshow directory) is called hd_podcasts_talkshow.pdl. This hd_podcasts_talkshow name is basically the entire path with each / replaced by _.

I think I can just have the script cd into each directory and not specify a path at all, but I'm wondering what I did wrong. The exact same script worked on WSL with a version of podcast-dl from last year; the only thing I changed was the path (from /mnt/e/podcasts to /hd/podcasts).
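One plausible (unconfirmed) explanation for the symptom: the whole --out-dir path is being run through a filename sanitizer somewhere, which collapses path separators into underscores. This toy function merely reproduces the observed name:

```javascript
// Hypothetical illustration only: sanitizing a full path as if it were a
// single filename turns the separators into underscores, matching the
// hd_podcasts_talkshow directory reported above.
const sanitizeAsFilename = (p) => p.split(/[/\\]+/).filter(Boolean).join("_");
```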

Support ffmpeg for more file types

CLI options that use ffmpeg currently aren't allowed to run for non-mp3 files because of this check. ID3 tags are supported by other file types, so it would be nice to widen that restriction to include a few other common file types found in podcast feeds.

I tested it out locally with a few .m4a files, and writing metadata works fine. I can make the change if you're okay with it.

Allow override for download links without correct file ending

I have some feeds that do not have a file ending in the download file URL.
They look something like https://some.podcast.domain.tld/feed/files/49eab2eab9f1cd6148f211. Could we add a flag to override getIsAudioUrl, or check the Content-Type in the HTTP response header? It is correctly set to Content-Type: audio/mpeg in my case.

I am not quite sure how we could determine a file ending for the downloaded file; Chrome automatically uses .mp3 as the extension if I press Ctrl + S to save the served file.
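A sketch of the Content-Type fallback suggested above (the mapping is illustrative and deliberately small; a real one would need more entries):

```javascript
// Map common podcast MIME types to file extensions when the URL itself has none.
const CONTENT_TYPE_EXTS = {
  "audio/mpeg": ".mp3",
  "audio/mp4": ".m4a",
  "audio/aac": ".aac",
  "audio/ogg": ".ogg",
};

const getExtFromContentType = (contentType) => {
  if (!contentType) return null;
  const mime = contentType.split(";")[0].trim().toLowerCase(); // drop params
  return CONTENT_TYPE_EXTS[mime] || null;
};
```

A HEAD request (or the headers of the GET response before streaming the body) would supply the header value.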

I would be happy to provide a PR if we find a solution for my problem.

Downloading from a radio station's XML feed leads to errors rather than downloads

Hi lightpohl,

thank you so much for this great tool! I really do enjoy every update, so cheers again.

I don't use too many different podcasts to download, but I keep running into issues with this one:

https://www.ndr.de/podcast/podcast4336.xml

I get:

(node:5527) UnhandledPromiseRejectionWarning: TypeError [ERR_INVALID_URL]: Invalid URL: undefined
    at new NodeError (internal/errors.js:322:7)
    at onParseError (internal/url.js:270:9)
    at new URL (internal/url.js:346:5)
    at /snapshot/dist/podcast-dl-9.3.2.js
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
(Use `podcast-dl --trace-warnings ...` to show where the warning was created)
(node:5527) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:5527) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
and --trace-warnings is an "unknown option". Hm.

The problem is most likely the human sitting at my keyboard, but he'd also be very grateful for a hint in which direction to go for the xml-feed to work.

Thanks and all the very best!

Writing to Network Shares fails

Hi,

Writing to network shares does not work.

OS: Windows 10 - 2004
Share: Ubuntu 18.04 - Samba on ZFS

Command line:

Not working:

--out-dir "\\fileserver2.grote.lan\Podcasts\{{podcast_title}}"
--out-dir \\fileserver2.grote.lan\Podcasts\{{podcast_title}}

Working:

--out-dir "Z:\Podcasts\{{podcast_title}}"
--out-dir Z:\Podcasts\{{podcast_title}}

Output

C:\Users\mg\Desktop>podcast-dl-win-x64.exe --url https://sternengeschichten.podigee.io/feed/aac --out-dir "\\fileserver2.grote.lan\Podcasts\{{podcast_title}}" --include-episode-meta --include-meta --episode-template "{{release_date}} - {{title}}" --archive "\\fileserver2.grote.lan\Podcasts\{{podcast_title}}\archive.log"
\\fileserver2.grote.lan\Podcasts\Sternengeschichten does not exist. Creating...
(node:18220) UnhandledPromiseRejectionWarning: Error: UNKNOWN: unknown error, mkdir '\\fileserver2.grote.lan\Podcasts\Sternengeschichten'
    at Object.mkdirSync (fs.js:914:3)
    at main (C:\snapshot\podcast-dl\bin\bin.js:112:8)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
(node:18220) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:18220) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Thx
mg

Breaking Change: remove `--prompt`

  • Adding a prompt for selecting a few episodes of a podcast was a fun experiment, but I don't believe anyone is using it and the effort to keep it up-to-date with the rest of the CLI doesn't seem worth it.

  • Older versions of the app will have it if someone needs that functionality.

A 0-byte archive file crashes podcast-dl

If you try to point podcast-dl at an archive file that exists and is empty, you get a lot of JSON errors:

Downloading podcast image...
Unable to download podcast image
Unexpected end of JSON input
Saving podcast metadata...
Unable to save podcast metadata
Unexpected end of JSON input
undefined:1
SyntaxError: Unexpected end of JSON input
at JSON.parse ()
at getJsonFile (file:///usr/lib/node_modules/podcast-dl/bin/util.js:39:15)
at getArchive (file:///usr/lib/node_modules/podcast-dl/bin/util.js:43:26)
at getItemsToDownload (file:///usr/lib/node_modules/podcast-dl/bin/util.js:112:34)
at main (file:///usr/lib/node_modules/podcast-dl/bin/bin.js:172:23)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Node.js v19.9.0

This is highly avoidable on my end, I just thought it was worth mentioning.
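A sketch of a defensive parse (the real code lives in getJsonFile in bin/util.js; this standalone version is illustrative): treat an empty or unparsable archive as an empty list instead of crashing. Logging a warning before discarding a corrupt archive would be wise.

```javascript
// Empty or invalid archive contents fall back to an empty archive.
const parseArchiveContents = (contents) => {
  if (!contents || contents.trim() === "") return [];
  try {
    return JSON.parse(contents);
  } catch {
    return []; // corrupt archive: start fresh rather than crash
  }
};
```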

[Feature request] Initialize archive.json without downloading all old episodes

I had this thought before but didn't find time to make a pull request.

Say I want to start following a podcast that already has (in my case) 300+ episodes, and I just want the new weekly episodes rather than all the old ones.

I was thinking that something like --archive-initialize could be used to just run these lines without the actual download:

podcast-dl/bin/util.js

Lines 417 to 419 in c246506

if (key && archive && !getIsInArchive({ key, archive })) {
  writeToArchive({ key, archive });
}

If you like this idea I can make a pull request in 2-3 weeks or so.

Plus I just thought maybe a default option for --archive could also be nice (i.e. using basePath/archive.json)?

Error 406 - Logfile

Hi, first of all, thanks for that little tool.

I have a feed (https://feeds.br.de/radiowissen/feed.xml) on which I get Error 406. Is there a way to log the error so I can see what's wrong with it? The feed works in other clients.
The Feed works in other Clients.

Wishes
mg

PS:
Used command: --url https://feeds.br.de/radiowissen/feed.xml --out-dir C:\Users\mg\Desktop\{{podcast_title}} --include-episode-meta --include-meta --episode-template "{{release_date}} - {{title}}"

--episode-regex option does not work when used along with --list

Thanks a lot for creating this awesome tool! It's been really helpful. 💯

I was trying to list episodes while using the episode regex option and it does not seem to filter the episodes. But when used to download (by omitting the list option), the episodes seem to get filtered correctly.

Sample output:

❯ podcast-dl --url "https://feeds.fireside.fm/selfhosted/rss" --episode-regex 'Plex' --list
┌─────────┬───────────────────────────────────────────────────┬───────────────────────────────────┐
│ (index) │                       title                       │              pubDate              │
├─────────┼───────────────────────────────────────────────────┼───────────────────────────────────┤
│    0    │             '50: Perfect Plex Setup'              │ 'Fri, 30 Jul 2021 05:00:00 -0700' │
│    1    │               '49: Update Roulette'               │ 'Fri, 16 Jul 2021 05:00:00 -0700' │
.... snip ....
│   49    │                '1: The First One'                 │ 'Thu, 12 Sep 2019 04:00:00 -0700' │
│   50    │             'Self-Hosted Coming Soon'             │ 'Tue, 27 Aug 2019 04:00:00 -0700' │
└─────────┴───────────────────────────────────────────────────┴───────────────────────────────────┘
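The fix presumably just means applying the same filter in the --list code path; the filtering itself is small (item shape assumed from the table above):

```javascript
// Keep only episodes whose title matches the --episode-regex pattern.
const filterByEpisodeRegex = (items, pattern) => {
  const regex = new RegExp(pattern);
  return items.filter((item) => regex.test(item.title));
};
```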

Add option to cache `Etag`/`LastModified`

Comments taken from #22:

The one extra thing that might make it even better isn't related to logging, but a cache of the RSS with a quick check of the Last-Modified and/or ETag could make subsequent runs much quicker for large feeds (assuming the last run fetched all items it needed). —@calebj

As for caching, it seems like a free optimization to me. If the server provides the headers that indicate cachable content, I don't see why podcast-dl shouldn't take advantage of it. Conversely, if certain values are present for Cache-Control, the client knows that it shouldn't cache anything. It's reasonable to leave it up to the server and to cache what it allows us to, and I don't think doing so changes the category of the program at all. —@calebj

Certainly. I think a good first simple version could exclusively check for the Etag and save a podcast.cache.json for additional runs. I'll spin this conversation out into a separate issue for tracking. —@lightpohl
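A sketch of the conditional-request flow (podcast-dl uses its own HTTP client; this version takes the fetch function as a parameter so the idea is testable without a network, and the cache-file naming comes from the comment above):

```javascript
// Send If-None-Match with the cached Etag; a 304 means the feed is unchanged
// and the previous run's results (e.g. from podcast.cache.json) can be reused.
const fetchFeedIfChanged = async (url, cachedEtag, fetchFn = fetch) => {
  const headers = cachedEtag ? { "If-None-Match": cachedEtag } : {};
  const response = await fetchFn(url, { headers });
  if (response.status === 304) {
    return { changed: false, etag: cachedEtag, body: null };
  }
  return {
    changed: true,
    etag: response.headers.get("etag"),
    body: await response.text(),
  };
};
```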

Feature request: allow printing/exporting list of episodes in JSON format

Thanks again for creating this tool! Currently podcast-dl prints the list of episodes in a table format, but being able to print in JSON format would be really useful.

For example, something like this:

--list                  # prints in table format
--list --format=table   # prints in table format
--list --format=json    # prints in JSON format

The scenarios where this might be potentially useful:

  • saving the list of episodes as JSON metadata alongside podcast/episode metadata
  • pass the output of the list option to a tool like jq and calculate offset/limit based on some logic (eg. time period) to download specific episodes
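A sketch of what the proposed switch could look like internally (the --format flag does not currently exist; the names come from the example above):

```javascript
// Render the episode list either as JSON (machine-readable, jq-friendly) or
// via console.table (the current human-readable behavior).
const printEpisodeList = (items, format = "table") => {
  if (format === "json") {
    const json = JSON.stringify(items, null, 2);
    console.log(json);
    return json;
  }
  console.table(items.map(({ title, pubDate }) => ({ title, pubDate })));
  return null;
};
```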

Feature Request: File naming

Going along the same lines as the youtube-dl output format, it would make sense to enable some file renaming.

This would enable more organized archiving, e.g. by automatically putting episodes in named podcast folders. Assuming other metadata could be extracted, it would be nice to put the release date and duration in the filename, among other things.

youtube-dl uses something like -o '%(podcast_name)s/%(release_date)s - %(title)s.%(ext)s'

Planned breaking changes for v2

  • Change default template for --out-dir to "./{{podcast_title}}"
  • Rename podcast feed image from image.ext to {{podcast_title}}.image.ext
  • Rename podcast meta file from meta.json to {{podcast_title}}.meta.json

[feature] Filter download URLs to skip trackers

My firewall is blocking a few hosts like podtrac and chartable by both IP and DNS, so I've had to add a few whitelist entries for the feeds to work on my regular podcatcher and this archiver. Someday I'll make my own proxy to strip them out of the feed on the fly, but I'm looking for an interim solution to this game of weekly whack-a-mole.

What would be the least obtrusive way to add a custom program (return filtered URL via stdout) or JS function that can take something like these atrocities

https://www.podtrac.com/pts/redirect.mp3/pdst.fm/e/chtbl.com/track/2H910/traffic.megaphone.fm/WWO0123456789.mp3?updated=1635179747
https://anchor.fm/s/6483d2f1/podcast/play/15934174/sponsor/d91uy9c/https%3A%2F%2Fd3cmojq1ktx6ya.cloudfront.net%2Fstaging%2F2021-06-21%2F60112d485e1c79b8d194a31f14d7f9ab.m4a

and reduce them to the megaphone and cloudfront URLs respectively? Since one is URL escaped and one isn't, this (sadly) isn't within the reach of regex. I would also need more samples to ensure a baked-in function covers all the possibilities for redirect URL chains.
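A proof-of-concept that happens to unwrap both examples above: it first looks for a URL-encoded target embedded in the path (the anchor.fm shape), then falls back to rebuilding the URL from the last hostname-looking path segment (the podtrac/chtbl shape). As noted, more samples would be needed before trusting heuristics like these; everything here is illustrative.

```javascript
// Heuristic tracker unwrapping; a real solution likely needs the per-feed
// filter script proposed above.
const unwrapTrackers = (url) => {
  // Shape 1: a fully URL-encoded target inside the path.
  const encoded = url.match(/https?%3A%2F%2F[^&]+/i);
  if (encoded) return decodeURIComponent(encoded[0]);
  // Shape 2: a chain of bare hostnames in the path; rebuild from the last
  // segment that looks like a hostname.
  const u = new URL(url);
  const parts = u.pathname.split("/").filter(Boolean);
  const hostLike = /^[\w-]+(\.[\w-]+)*\.(com|net|org|fm|io)$/i;
  for (let i = parts.length - 1; i >= 0; i--) {
    if (hostLike.test(parts[i])) {
      return u.protocol + "//" + parts.slice(i).join("/") + u.search;
    }
  }
  return url;
};
```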


Tangentially related, with the trove of podcast files I have now, I'm trying to think of ways to efficiently fingerprint and trim ads and other annoying segments/bumpers. That would feed back into the RSS proxy and hopefully work relatively hassle-free. It's way down the road, though.

Promise rejection warning

Hi,

I am getting the following error with "https://feeds.megaphone.fm/richdad":

(node:13184) UnhandledPromiseRejectionWarning: TypeError: Expected a string
at filenamify (C:\snapshot\podcast-dl\node_modules\filenamify\index.js:15:9)
at getSafeName (C:\snapshot\podcast-dl\bin\naming.js:8:10)
at getFolderName (C:\snapshot\podcast-dl\bin\naming.js:45:22)
at main (C:\snapshot\podcast-dl\bin\bin.js:108:5)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
(node:13184) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:13184) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
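The trace points at `filenamify` receiving a non-string, i.e. the feed (or an episode) is missing the title that `getSafeName`/`getFolderName` expect. A defensive sketch of the kind of guard that would avoid the crash (the fallback name and the simplified sanitizer below are assumptions, not podcast-dl's real code):

```javascript
// Sketch: fall back to a placeholder before sanitizing, since filenamify
// throws "Expected a string" on undefined/null titles. The character
// replacement below is a crude stand-in for the real filenamify call.
function getSafeName(name) {
  const base =
    typeof name === "string" && name.trim().length ? name : "untitled";
  return base.replace(/[<>:"/\\|?*]/g, "-").trim();
}
```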

It seems to work with other podcast programs, so I'm not sure if I'm doing something wrong.
(apologies if I have submitted this wrong; it's my first time submitting an issue on GitHub)

Great program btw.

Feature Request: --download-archive option

Similar to the --download-archive option in youtube-dl, this option would

  1. log every file that is downloaded
  2. prevent re-downloading the same files/episodes by checking the log file

youtube-dl describes this option as:
Download only videos not listed in the archive file. Record the IDs of all downloaded videos in it.

This would enable users to schedule podcast-dl to run and have it only download new files/episodes.

TypeError [ERR_INVALID_ARG_VALUE]

Hello again!

Thanks for the new Windows binaries.

Unfortunately, I am getting this error. It works again when I revert back to v6.1.0.

.\podcast-dl-win-x64.exe --url "https://omnycontent.com/d/playlist/e73c998e-6e60-432f-8610-ae210140c5b1/A41BD365-C6BC-4966-9AC2-AE28005326F5/6097C5B8-233A-45BF-8F11-AE2800532703/podcast.rss" --episode-template "{{podcast_title}} - {{title}}"
internal/modules/cjs/loader.js:1089
      throw new ERR_INVALID_ARG_VALUE('filename', filename,
      ^

TypeError [ERR_INVALID_ARG_VALUE]: The argument 'filename' must be a file URL object, file URL string, or absolute path string. Received 'file:///home/joshua/code/podcast-dl/bin/bin.js'
    at createRequire (internal/modules/cjs/loader.js:1089:13)
    at C:\snapshot\dist\podcast-dl.js
    at C:\snapshot\dist\podcast-dl.js
    at Object.<anonymous> (C:\snapshot\dist\podcast-dl.js)
    at Module._compile (pkg/prelude/bootstrap.js:1930:22)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
    at Module.load (internal/modules/cjs/loader.js:863:32)
    at Function.Module._load (internal/modules/cjs/loader.js:708:14)
    at Function.runMain (pkg/prelude/bootstrap.js:1983:12)
    at internal/main/run_main_module.js:17:47 {
  code: 'ERR_INVALID_ARG_VALUE'
}

Feature request: out-dir Templating options

I would like my podcasts to be structured as follows: podcast title folder -> episode title folder -> episode with image.

Ideally because Jellyfin doesn't have podcast library support, but this file structure would present the podcast as the author and each episode as a book.

This lets Jellyfin's book library work great.

This is what I am trying to do:
npx podcast-dl --limit 2 --url https://lexfridman.com/feed/podcast/ --include-episode-images --episode-template "{{title}}" --out-dir "./{{podcast_title}}/{{title}}"

v8 Preparation & Discussion

Hey all!

Staged a few changes for v8, but wanted to leave them in a discussion thread for a bit before releasing in case there are any concerns.

Breaking changes included so far:

  1. BREAKING CHANGE: remove '--filter-url-tracking' experimental flag
    a. I believe it is essentially unused and only works with a small subset of trackers. A standalone tool that does this would be preferred, I think.
  2. BREAKING CHANGE: change 'filenameBase' to 'filename_base' for '--exec' templating
    a. More consistent template key naming
  3. BREAKING CHANGE: include all properties in metadata from rss-parser
    a. The motivation is to allow users to access as much of the data provided by rss-parser as possible.
    b. Prevents me from getting in the way if you need a specific piece of metadata not already included by my original slimmer list.
    c. One con is that there will be duplicate data in the metadata.

[feature request] Enhancements for logging

I've been using this for a while as part of an archiving cronjob, and it's worked great. My one gripe is how chatty the logs are, since the app prints out information for every single episode. Altogether, each run generates over 5MB of output, most of it redundant. I suppose I could diff the logs on each run and store the differences, but it would be great if there was a flag to make podcast-dl only output (and/or act upon) changes in the feed since the last run.

Currently I use the following script to automate the invocation of podcast-dl for each feed:

#!/bin/env python3

PDL_BIN = 'podcast-dl'

from configparser import ConfigParser, ExtendedInterpolation
import subprocess
import sys

import fasteners

cp = ConfigParser(interpolation=ExtendedInterpolation(), delimiters=('=',))

with open('podcast-dl.ini') as f:
    cp.read_file(f)

dl_sections = cp.sections()

for sec in dl_sections:
    print(f"Running for {sec}")
    conf = cp[sec]
    args = [PDL_BIN]
    
    if not conf.get('url'):
        print(f"Missing url for section {sec}")
        continue
    elif not conf.get('outdir'):
        print(f"Missing outdir for section {sec}")
        continue

    args.extend(("--url", conf['url']))
    args.extend(("--out-dir", conf['outdir']))

    if 'archive' in conf:
        args.extend(("--archive", conf['archive']))

    if 'template' in conf:
        args.extend(("--episode-template", conf['template']))

    if 'regex' in conf:
        args.extend(("--episode-regex", conf['regex']))

    if 'extra_options' in conf:
        args.extend(conf['extra_options'].split())        

    lock = fasteners.InterProcessLock(f"{conf['outdir']}/.pdl_lock")

    if not lock.acquire(blocking=False):
        print("lock exists, skipping")
        continue

    # Release explicitly rather than re-entering the already-held lock
    # via "with", which would acquire it a second time.
    try:
        subprocess.run(args)
    finally:
        lock.release()

It's nothing special, but it makes the cron entry very straightforward. All that's missing is some way to monitor what happens in each run in a concise form, without the repeated entries or the fancy progress indicator, so the logs are easier to review.

New environment variable value to avoid animation in download progress

Good afternoon, lightpohl, and many thanks for this downloader program. After trying some others, I find yours much simpler and more practical.

I'm in the process of creating a Python 3 script (which I will later "dockerize") to automate the downloading of my subscriptions, and I have a problem reading the output of your program with subprocess. When I call your script using Python's subprocess module like this:

o_process = subprocess.Popen(plu_cmd,
                             bufsize=0,
                             universal_newlines=True,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)

# METHOD 1
#---------
#o_process.wait()
#u_stdout = o_process.stdout.read()
#u_stderr = o_process.stderr.read()

# METHOD 2
#---------
u_stdout, u_stderr = o_process.communicate()

# --- TEST CODE ---
print('CMD:    %s' % repr(plu_cmd))
print('STDOUT: %s' % repr(u_stdout))
# ------ end ------

where plu_cmd is the shell call to your script, I don't get the full stdout output, and I'm precisely missing the animated lines for each episode, where the downloaded percentage continuously updates.

CMD:    ('/mnt/jupiter/pw_david/projects/coding/docker/podcast_dl/scripts/bin/podcast-dl-linux-x64', '--url', 'https://www.ivoox.com/feed_fg_f133371_filtro_1.xml', '--out-dir', '/mnt/jupiter/pw_david/projects/coding/docker/podcast_dl/test_data/podcasts_private/podcast la hermandad', '--limit', '2', '--episode-template', '{{release_date}}_{{title}}')
STDOUT: 'Starting download of 2 episodes\n\n1 of 2\nTitle: La Hermandad Podcast 10x12: el programa de... algo, pero verano.\nPublish Date: Mon, 02 Aug 2021 12:23:34 +0200\n\n2 of 2\nTitle: La Hermandad Podcast 10x11: veraneando en la guerra del mañana\nPublish Date: Tue, 06 Jul 2021 13:53:26 +0200\n\n'

For that reason, I would like an extra value for the LOG_LEVEL environment variable that prints just a final static line instead.

Regards and many thanks in advance for taking it into consideration.

Automatically retry failed downloads

I'm bulk downloading several episodes, and ~5% of them timed out. Is there a way to re-download the selection while skipping the already downloaded ones or, alternatively, an option to retry failed downloads until completion?
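In the meantime, a generic retry wrapper around each download could paper over transient timeouts. A sketch (not an existing podcast-dl option):

```javascript
// Retry an async operation with exponential backoff; rethrows the last
// error once maxAttempts is exhausted.
async function withRetry(fn, maxAttempts = 3, baseDelayMs = 1000) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1))
      );
    }
  }
}
```

Combined with podcast-dl's existing "download exists locally, skipping" check, simply re-running the same command already acts as a manual retry of only the failed episodes.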

Add OS binaries for v7

Hello, I was wondering if there will be a version for Windows on the release page for version 7.0.0. Thanks!

Escape all metadata params passed into ffmpeg when using '--add-mp3-metadata'

I just tried https://github.com/lightpohl/podcast-dl/releases/tag/v9.2.1 and got the error below.
I can't reopen #60,
so I'm opening a new issue.

ffmpeg -loglevel quiet -i "D:\Media\Podcasts\The Amelia Project Patreon Feed\20210318-Composer’s Corner - Amelia Death Metal for Episode 8.mp3" -map_metadata 0 -metadata album='The Amelia Project Patreon Feed' -metadata title='Composer’s Corner - Amelia Death Metal for Episode 8' -metadata comment='<p>Hello dear patrons,</p><p>Some of you have asked for downloadable versions of the theme tune variations. So over the coming weeks we’ll be sharing them here on Patreon! We’ll also give a bit of context about each one…</p><p>Today: the metal version (still one of our absolute favourites)</p><p>Oystein and I knew from the beginning that we wanted to have fun with the music. We love to riff on different genres when we write episodes (crime, mystery, sci-fi etc) and we thought the music should do the same. Our aim has always been to create a show that keeps listeners on their toes and in which each episode contains a surprise.</p><p>For episode 8 “Luke Dougal” (the one in the Hell Park) we asked Fredrik Baden (the composer) to create a sort of Hammer Horror haunted house score, something with organ, gongs and thunderclaps.</p><p>Freddy went off, ignored all of our directions, and did something completely different instead. Something much better. A heavy metal version!</p><p>This is why we love working with Freddy. 
In the same way we aim to surprise listeners, Freddy surprises <em>us!</em></p><p>I still remember receiving the first draft of this composition, clicking play (unprepared for what was coming, my headphones turned up way too loud) and having Amelia Death Metal blow my head off.</p><p>I really like the fact that the very first time we start an episode with a theme variation, it’s something this big and brash and bold!</p><p>Although I feel I must apologise to the listener who wrote in to say she enjoys the show on her morning runs, and when the death metal theme kicked in she was so surprised she almost ran into a canal…</p><p>If you have any memories of listening to this variation we’d love to know in the comments below!</p><p>Anyway, over to Freddy himself to give you some musical nitty gritty:</p><p><em>Heyo, peeps and peepettes!</em></p><p><em>What Pip said is perfectly true. However, I </em>did<em> voice my concern about using the organ as a main instrument so soon after the «fugal» funeral scene in Steve (which we’ll come back to in a different Composer’s Corner). So, thinking of what else could represent Hell, naturally, death metal felt only natural.</em></p><p><em>I’ve talked about this theme before, but I don’t think I’ve pointed at the two obvious inspirations for this little piece:</em></p><p><em>1 - Dimmu Borgir - Progenies of the Apocalypse</em></p><p><em>My fellow countrymen are world renowned in the death metal genre, but I’ve never been very into it myself. However, the symphonic intro to this song completely blew me away the first time I heard it. Or, </em>saw<em> it actually - a drummer friend of mine in high school had burned me a CD-ROM (yes, I’m that old) with videos of drumming to aspire to. The resolution was comparable to the tiles on my bathroom floor, but I remember the need to head bang to this video! 
When thinking about how to go about making an epic intro for myself, this came back to me all these years later.</em></p><p><a href="https://www.youtube.com/watch?v=AaGbkvPrioU" rel="nofollow noopener" target="_blank"><em>Music video</em></a><em> - Viewer discretion: insane amount of screen shake, goat blood and strobe light. (And tiddies…)</em></p><p><em>2 - Without realising it, I completely ripped off the ending of the Buffy the Vampire Slayer theme for my own ending! We think that my girlfriend was re-binging the whole series around that time, or earlier, so it’s entirely possible that’s why. In any case, it’s a wonderful punk theme and I feel stupid to not have added the church bell at the last chord in my variation…</em></p><p><a href="https://www.youtube.com/watch?v=5-xnvdtuRRo&amp;t=124s" rel="nofollow noopener" target="_blank"><em>Music video</em></a></p><p><em>That’s all for this time. Reach out if you’re wondering about anything :)</em></p><p><em>Freddy</em></p>' -metadata track=75 -metadata date=2021-03-18 -codec copy "D:\Media\Podcasts\The Amelia Project Patreon Feed\20210318-Composer’s Corner - Amelia Death Metal for Episode 8.mp3.tmp.mp3" < was unexpected at this time.

"403 (Forbidden)" errors on Roderick on the Line podcast?

For some reason, the Roderick on the Line podcast feed always gives me 403 (Forbidden) errors when trying to download episodes using podcast-dl.

However, the enclosure links download just fine when pasted into the browser location bar or downloaded with something like wget.

This is the only feed (out of around 25) that I have this issue on.

The podcast feed is here: http://feeds.feedburner.com/RoderickOnTheLine

Example output from podcast-dl 9.3.3 on macOS:

Downloading: Roderick on the Line
Roderick on the Line
Merlin Mann's frank & candid weekly phone call with John Roderick of The Long Winters


Starting download of 517 episodes

Ep. 533: "Busty Trainfails" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 532: "My Umbrella Stand" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 531: "A Perfect Bead" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 530: "Something Else" | Download exists locally. Skipping...
Ep. 529: "Not Everything is Everything" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 528: "The Biggest Indoor Building in the World" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 527: "The Calm Cousins" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 526: "Some Nice of These" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 525: "The Penury of the Scots" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 524: "Dinner's at Five" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 523: "A Grocer You Haven't Met Yet" | Download exists locally. Skipping...
Ep. 522: "Cafeteria Californian" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 521: "Keep the Nickel" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 520: "Seven Minutes in Mexico" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 519: "Even Doves Have Pride" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 518: "Aspics of Me" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 517: "Sundown Acres" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 516: "Gambling Pocket" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 515: "Existential Plasticity" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 514: "Q.R. Crabs" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 513: "Mayor Jabberwocky" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 512: "American Horoscope" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 511: "A Horse of a Different Cat" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 510: "Dr. Pole" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 509: "A Dirty Rectangle on the Wall" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 508: "A Collerary of Bastiks" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 507: "Haunted House of a Mind" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 506: "The Shame Police" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 505: "Steamin' John & the Rebel" | Download exists locally. Skipping...
Ep. 504: "Mt. Florida" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 503: "A Still Period" | Download exists locally. Skipping...
Ep. 502: "Dr. Guy" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 501: "The Doors of Jazz" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 500: "Dr. Labyrinth, I Presume?" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 499: "A Ziggurat of Grievance" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 498: "The Jimmy Carter of Jerusalem" | Error downloading episode: HTTPError: Response code 403 (Forbidden)
Ep. 497: "Where the Grocery Stores Are" | Error downloading episode: HTTPError: Response code 403 (Forbidden)

Download to temp files

Right now the files are downloaded directly to their destination. The downside is that if the process is interrupted, you end up with an incomplete file, but on the next run the download will be skipped because the file exists locally.

Downloading the files to a temp location/with a temp suffix (like some browsers do) and then moving/renaming once complete would fix this.
