rubenkelevra / pacman.store

Pacman Mirror via IPFS for ArchLinux, EndeavourOS, Manjaro plus custom repos ALHP and Chaotic-AUR.
License: GNU General Public License v3.0
because the FUSE mount is too unstable and way too slow for receiving updates on a fast internet connection or when they are already available locally
This idea was based on the discussion at #42 about High I/O usage and the two ways that the cluster has distributed pin instructions to cluster follower nodes up to now:
As I run a cluster node, this affects the utilization of my hardware. I never noticed the high disk space utilization as a problem because of the amount of disk space I have (>10TB), but I have noticed the high disk utilization and have taken steps to mitigate the slowdown due to high disk I/O as it affected other processes I am running (SSD cache of the logical volume the data resides on).
This is an attempt to describe an idea that should have neither the high disk I/O utilization of pinning the root folder hash nor the high disk space utilization of pinning each updated file.
Under this option, the folder structure under /ipns/x86-64.archlinux.pkg.pacman.store/ is not changed from its current state at all. Instead, we create a completely separate directory structure containing the same package files, organized so that cluster members can pin just the new packages without having to check all the other packages and directories in the repo.
As an example, consider an update containing only the packages abiword and go-ipfs. You would create a directory like this:
/2021-01-22-001/
/2021-01-22-001/abiword-3.0.4-4-x86_64.pkg.tar.zst
/2021-01-22-001/go-ipfs-0.7.0-1-x86_64.pkg.tar.zst
in addition to updating /extra/ and /community/, then add the hash of the folder /2021-01-22-001/ to the cluster. This folder would exist only in the cluster, and only for the purpose of having the cluster members pin those two new packages. People not part of the cluster should never see these directories.
If you then got another set of package updates, you would create another folder for only those additional packages:
/2021-01-22-002/
/2021-01-22-002/dbus-broker-26-1-x86_64.pkg.tar.zst
/2021-01-22-002/fftw-3.3.9-1-x86_64.pkg.tar.zst
/2021-01-22-002/xorg-docs-1.7.1-3-any.pkg.tar.zst
/2021-01-22-002/yasm-1.3.0-4-x86_64.pkg.tar.zst
There are a number of ways to decide when to remove these update directories from the cluster:
Looking at rsync2ipfs-cluster/bin/rsync2cluster.sh, I think implementing this idea only requires modifying ipfs_mfs_add_file() to take a third parameter (the update folder path in MFS), adding the file's CID to that update folder, and adding the update folder to the cluster pin set.
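To make that concrete, here is a minimal sketch of how such a helper could look. All names, flags, and the second helper function are assumptions for illustration, not the actual rsync2cluster.sh code:

```shell
# Hypothetical sketch (names assumed; not the real rsync2cluster.sh code):
# add a file to MFS and additionally link its CID into the current update
# folder, so only that folder needs to be pinned by the cluster afterwards.
ipfs_mfs_add_file() {
    _filepath="$1"   # local file to import
    _mfs_dest="$2"   # destination path in MFS
    _update_dir="$3" # NEW third parameter: update folder, e.g. /2021-01-22-001

    _cid="$(ipfs add --quieter --raw-leaves --cid-version 1 "$_filepath")" || return 1
    ipfs files cp "/ipfs/$_cid" "$_mfs_dest" || return 1
    # linking reuses the already-imported blocks; no extra storage is needed
    ipfs files cp "/ipfs/$_cid" "$_update_dir/$(basename "$_filepath")"
}

# after all files of one update were added, pin only the update folder:
pin_update_folder() {
    _update_cid="$(ipfs files stat --hash "$1")" || return 1
    ipfs-cluster-ctl pin add "$_update_cid"
}
```

Since `ipfs files cp` only links existing blocks, the update folder should cost no additional storage on the trusted node.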
IPNS has an experimental feature to publish name updates over pub-sub. [link]
This allows cluster subscribers to get "push notifications" for cluster updates.
From a few ad-hoc observations of my own, resolving "pkg.pacman.store" is the main bottleneck when downloading packages from the cluster; it can sometimes take as long as 40 seconds(!).
IPNS over pubsub could reduce the time needed to resolve "pkg.pacman.store" but it requires cooperation from the name holder.
Please consider enabling this feature on the master node, and update the documentation to note that it is available.
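For reference, a sketch of how that could be enabled, assuming a reasonably recent go-ipfs (older releases used the --enable-namesys-pubsub daemon flag instead; the exact config keys are my assumption and worth double-checking against the config docs):

```shell
# enable pubsub and IPNS-over-pubsub, then restart the daemon
ipfs config --json Pubsub.Enabled true
ipfs config --json Ipns.UsePubsub true
```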
what i just experienced:
Websites prove their identity via certificates. Firefox does not trust this site because it uses a certificate that is not valid for x86-64.archlinux.pkg.pacman.store.ipns.dweb.link. The certificate is only valid for the following names: *.i.ipfs.io, *.ipfs.dweb.link, *.ipfs.io, *.ipns.dweb.link, dweb.link, ipfs.io
Error code: SSL_ERROR_BAD_CERT_DOMAIN
...so i guess the "*" wildcard only covers a single subdomain level, not an "infinite" number of levels.
migrating to something like "x86-64-archlinux-pkg-pacman-store.ipns.dweb.link" would probably solve the issue but i can imagine it's not feasible for some reason i'm not aware of. :-(
...so, what to do now?
The compressed database files need to be fetched in full each time there's a change, even if just a single package was updated.
To avoid this, I'd like to decompress them and apply a rolling chunker with fairly small chunks, so that IPFS can do delta updates on them.
Unlike with packages, this is possible because the database files are not signed; it doesn't matter how they are delivered, as Pacman will always trust them.
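A sketch of that import step, assuming the db is a gzip-compressed archive and using illustrative rabin chunker parameters (the exact sizes would need tuning):

```shell
# Sketch: re-import the decompressed db with a content-defined (rabin)
# chunker so a one-package change only alters a few blocks.
# Assumptions: gzip compression, illustrative chunk sizes (min-avg-max).
import_db() {
    _db="$1"
    gzip -d -c "$_db" > "${_db%.db}.tar" || return 1
    ipfs add --chunker=rabin-2048-4096-8192 --raw-leaves --cid-version 1 \
        --quieter "${_db%.db}.tar"
}
```

With a content-defined chunker, identical regions of consecutive db versions map to identical blocks, so clients only fetch the changed chunks.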
hi, for the last three (or so) days there have been no updates for my system - which seems pretty unusual in arch land. is the ipfs importer stuck?
thanks.
I've talked a while ago about this on the pacman dev mailing list - but it was kind of rejected.
https://lists.archlinux.org/pipermail/pacman-dev/2020-April/024179.html
This would still require the databases, as well as the packages, to be stored centrally in a cluster, but it would be run by an "official" team instead of me - in the best case.
This way the IPFS update push would happen automatically, and the updates would be faster and somewhat more reliable than an rsync-to-ipfs script written by some random guy on the internet.
It would also allow having multiple writing servers on the same cluster, which can do the updates seamlessly - since when they do the same update, the cluster would just merge them as the same change. This means we can completely eliminate any single point of failure.
Originally posted by @RubenKelevra in #40 (comment)
The 100GB space requirement is pretty hard to justify.
The reason I'm on board is that I believe having Arch packages on IPFS can really compete with traditional package mirrors in upgrade performance and decentralization (duh!).
Because of that, all I expect this project to provide is the newest packages, nothing else. Looking at Arch's stats, that shouldn't cross the 40GB line.
I understand why one would want to create an entire IPFS archive dedicated to Arch Linux, providing old packages and ISOs, but those files aren't needed for my daily use and don't match the reason I'm here in the first place.
Therefore, I would like you to split the cluster into multiple interest groups, separating people like myself, who are only interested in the cluster as a source for the newest packages, from people who want to chip in on the historical archiving effort.
Final words: if I really miss the target with my requests, I would prefer to fork into a new cluster I will manage as I see fit, rather than chasing this cluster's tail to do what I want.
Name: go-ipfs-autosetup
This allows a script to cache updates:
#4
Write a script which runs as a service under the user ipfs.
The script should look out for new versions of installed software and pin them locally, so that updates don't need to be downloaded at upgrade time.
Difficulties:
The pkgs are all in one folder; we need to distinguish between unstable and stable packages - how? see #11
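One possible shape for such a service script, as a hedged sketch; the repo path layout under the IPNS name and the use of expac are my assumptions:

```shell
# Hypothetical sketch: pre-pin pending package updates on the local node.
# Assumptions: the /ipns/pkg.pacman.store/arch/$repo/os/x86_64/ layout,
# and that expac is installed to map package names to repo/filename.
prefetch_updates() {
    pacman -Quq | while read -r _pkg; do
        _rf="$(expac -S '%r/%f' "$_pkg")" || continue  # e.g. extra/foo-1-1-x86_64.pkg.tar.zst
        ipfs pin add "/ipns/pkg.pacman.store/arch/${_rf%%/*}/os/x86_64/${_rf#*/}"
    done
}
```

Run periodically (e.g. via a systemd timer as the ipfs user), the packages would already sit in the local blockstore when pacman asks for them.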
As discussed in a couple of other issues, a cool use for this cluster would be pointing pacman at IPFS to download new packages during upgrades.
There's an implementation detail I would like to discuss: how pacman should integrate with IPFS.
By reading your docs and other issues, I understand your preferred way to do this is to mount the cluster repo on pacman's cache dir (/var/cache/pacman/pkg), forcing pacman to get the package from IPFS during the step where it scans its cache dir to see if the package is already present.
The way I envision doing it is modeling the repo tree after the way traditional mirrors are structured, setting Server = http://127.0.0.1:8080/ipns/pkg.pacman.store/$repo/os/$arch in my /etc/pacman.d/mirrorlist, and using pacman as usual.
Some pros and cons:
By keeping the old mirrors in my mirrorlist after my local IPFS gateway, pacman will switch to the old mirrors if the IPFS gateway times out.
One could also point at another cluster peer's gateway: Server = http://localarchclusterpeer/ipns/pkg.pacman.store/$repo/os/$arch
Since pacman still saves each package to /var/cache/pacman/pkg in addition to the IPFS node's own datastore, we find ourselves spending twice the space for each package.
One could even put Server = https://ipfs.io/ipns/pkg.pacman.store/$repo/os/$arch on one's mirrorlist.
with hashsums as filenames like /aur.pacman.store/sources/by-hash/sha512/2398293829382932838
error: failed retrieving file 'core.db' from x86-64.archlinux.pkg.pacman.store.ipns.localhost:8080 : Operation too slow. Less than 1 bytes/sec transferred the last 10 seconds
...on all my (geographically distributed) machines. this is probably not an issue on my side. can you please check?
thanks!
The cluster is already running the latest 0.5.0 software, which has major speed improvements for resolving content addresses as well as for transfer speeds.
The last maintenance on 2020-04-21 UTC changed the CIDs of the mirror to CIDv1.
This keeps the cluster future-proof, even when some packages might not be updated for a long time.
But it breaks compatibility with the current 0.4.23 version.
TL;DR: Update your system to the 0.5.0 version before accessing the IPFS mirror.
hi, i'm on an ipv6-only network and for quite some time now i've been unable to use the ipfs pacman store...
>ipfs --api /ip4/127.0.0.1/tcp/5001 resolve /ipns/x86-64.archlinux.pkg.pacman.store
/ipfs/bafybeidnlmimurjtvwhsji3xwbzjvylgw6kvhf3cclvfyfflvgvvxhdxau
> ipfs --api /ip4/127.0.0.1/tcp/5001 dht findprovs /ipfs/bafybeidnlmimurjtvwhsji3xwbzjvylgw6kvhf3cclvfyfflvgvvxhdxau
QmZJznS5TWZiea2zdV63qJzYkz6P5HBhPksojxhrfNximP
> ipfs --api /ip4/127.0.0.1/tcp/5001 swarm connect /ipfs/QmZJznS5TWZiea2zdV63qJzYkz6P5HBhPksojxhrfNximP
Error: connect QmZJznS5TWZiea2zdV63qJzYkz6P5HBhPksojxhrfNximP failure: failed to dial QmZJznS5TWZiea2zdV63qJzYkz6P5HBhPksojxhrfNximP: all dials failed
* [/ip6/::1/udp/4001/quic] CRYPTO_ERROR (0x12a): peer IDs don't match
* [/ip6/2600:8803:e600:18c:e445:edff:fe20:38d3/tcp/4001] dial tcp6 [::]:4001->[2600:8803:e600:18c:e445:edff:fe20:38d3]:4001: i/o timeout
* [/ip6/fd2c:853:d11f:0:e445:edff:fe20:38d3/tcp/4001] dial tcp6 [::]:4001->[fd2c:853:d11f:0:e445:edff:fe20:38d3]:4001: i/o timeout
* [/ip6/fd2c:853:d11f:0:e445:edff:fe20:38d3/udp/4001/quic] NO_ERROR: Handshake did not complete in time
* [/ip6/2600:8803:e600:18c:e445:edff:fe20:38d3/udp/4001/quic] NO_ERROR: Handshake did not complete in time
> ping 2600:8803:e600:18c:e445:edff:fe20:38d3
PING 2600:8803:e600:18c:e445:edff:fe20:38d3(2600:8803:e600:18c:e445:edff:fe20:38d3) 56 data bytes
^C
--- 2600:8803:e600:18c:e445:edff:fe20:38d3 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 9115ms
There needs to be more documentation on the BloomFilter option in IPFS. This ticket tracks that effort.
@Luflosi wrote
I found the Datastore.BloomFilterSize option, which sounds like it has the potential to speed up pinning operations, but I couldn't find any documentation on what it actually does. Do you know?
Originally posted by @Luflosi in #42 (comment)
Since ipfs-cluster 0.13 added non-recursive pinning (see ipfs-cluster/ipfs-cluster#1009), we can create a folder on the cluster which holds the older versions of the mirror without actually holding any of the data.
Whatever is still saved on some node in the IPFS network can be fetched; other content might be unavailable.
The current approach of recursively pinning the full folder structure on each update requires IPFS to do quite a lot of IO, as @Luflosi mentioned here: #39
There's still the other option, which I implemented in version one of this project: pin each file on its own, as well as each folder non-recursively.
The advantage is a low amount of IO and much faster processing of updates on the cluster nodes, since it's easier for IPFS to handle. But it requires a lot of disk space for the cluster database, since each changed file and each changed folder is an individual transaction. After just a few months the database had grown to 20 GB over roughly 400,000 transactions.
Once ipfs-cluster/ipfs-cluster#1008 and ipfs-cluster/ipfs-cluster#1018 are implemented, we could explore this approach again to reduce the IO load, with one transaction per update but individually pinned files and folders.
hi,
for the last two (or so) days, i can't update some of my systems. it seems some haskell packages are missing (while the rest are ok).
error: failed retrieving file 'haskell-http-client-0.7.0-2-x86_64.pkg.tar.zst' from 127.0.0.1:8080 : The requested URL returned error: 404
...etc. other packages get downloaded fine.
wget http://mirror.kku.ac.th/archlinux/community/os/x86_64/haskell-http-client-0.7.0-2-x86_64.pkg.tar.zst
-> OK
wget http://127.0.0.1:8080/ipns/pkg.pacman.store/arch/community/os/x86_64/haskell-http-client-0.7.0-2-x86_64.pkg.tar.zst
-> 404
wget http://ipfs.io/ipns/pkg.pacman.store/arch/community/os/x86_64/haskell-http-client-0.7.0-2-x86_64.pkg.tar.zst
-> 404
...so i don't think it's just "my local" problem.
hello,
i'm taking the liberty to create this issue to track the progress of the current import issue (as of 2022-01-22). regularly checking the status page and/or matrix proved too cumbersome - at least for me.
that being said, what is the current state, @RubenKelevra ? is there any way i (or anyone else) can help?
thanks!
I feel we should explore pin-update too.
Originally posted by @FireMasterK in #42 (comment)
When something is currently being pinned, ipfs-cluster-follow spams my log with ERROR monitor multicodec did not match every few seconds.
Could this be a problem with my setup or with the cluster?
when the directories are part of the cluster, each cluster member can check whether the root directory is part of its pinset.
If the CID that the IPNS name resolves to is within the local pinset of the ipfs node, the cluster is up to date.
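That check could be sketched like this; the IPNS name and the pin type are assumptions for illustration:

```shell
# Sketch: resolve the IPNS name and test whether the resulting CID is
# already in the local pinset (name and pin type assumed).
check_up_to_date() {
    _cid="$(ipfs resolve -r /ipns/pkg.pacman.store)" || return 2
    _cid="${_cid#/ipfs/}"
    if ipfs pin ls --type=recursive "$_cid" >/dev/null 2>&1; then
        echo "up to date"
    else
        echo "behind"
    fi
}
```

`ipfs pin ls` with an explicit CID exits non-zero when that CID is not pinned, which makes it usable as a cheap local freshness test.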
The import seems to have stopped again. lastsync is from 2021.06.12.
Does anyone else have this problem?
and drop the bindfs dependency
So, as I understood it from following this repo's issues, I think we could use a pacman wrapper to fix some of the shortcomings we currently face.
I wrote a tiny POC (upgrade-only) wrapper for the first point:
#!/bin/sh
# ipfs-pacman-upgrade.sh
A2FLAGS="--conditional-get=true --continue=true --auto-file-renaming=false --remove-control-file=true -x 16"

if [ ! -f /etc/pacman.d/ipfsgateway ]; then
    echo "/etc/pacman.d/ipfsgateway does not exist" >&2
    exit 1
fi
ipfs_gateway=$(cat /etc/pacman.d/ipfsgateway)

# download repo dbs
# horrible hack lol
expac -S "${ipfs_gateway}%r/%r.db" | sort -u | aria2c $A2FLAGS -i - -d /var/lib/pacman/sync || exit $?

if [ ! "$(pacman -Quq)" ]; then
    echo 'nothing to upgrade'
    exit 0
fi

echo "packages to upgrade: $(pacman -Quq | tr '\n' ' ' | fmt --width="$(tput cols)")"
printf "Continue? [Y/n]: "
read -r answer
if [ -n "$answer" ] && [ "$answer" != "y" ] && [ "$answer" != "Y" ]; then
    exit 0
fi

# download packages
pacman -Quq | xargs expac -S "${ipfs_gateway}%r/%f" -- | aria2c $A2FLAGS -i - -d /var/cache/pacman/pkg || exit $?

# install packages
pacman -Su
For the second point, we need you to add that hashes index to the repo, I'd be happy to discuss that!
For a few days now I've been getting, both on my laptops and on my server (which joined the cluster), the error:
ipfs resolve -r /ipns/cluster.pkg.pacman.store: could not resolve name
It seems like the most recent packages aren't available on the cluster yet. Did it crash or so?
With a normal setup and an average of 850 peers, negotiation via a local gateway is too slow for pacman. It times out with "transfer was less than 1b/s for more than 10 seconds". I got around this for the db files by pinning them, but I cannot do this for the entire repo. What is the point of this if pacman just defaults to a different mirror because the negotiation is too slow?
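The db-pinning workaround mentioned above could look roughly like this; the directory layout under the IPNS name is an assumption:

```shell
# Sketch: pre-pin the repo databases so pacman's first request hits local
# blocks instead of waiting on bitswap negotiation (paths assumed).
pin_dbs() {
    for _repo in core extra community; do
        ipfs pin add "/ipns/x86-64.archlinux.pkg.pacman.store/$_repo/$_repo.db" || return 1
    done
}
```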
...since "going offline" just ruined updates on all my machines. :-(
blocker: ipfs-cluster/ipfs-cluster#1006
Chaotic-AUR is an automated build repo for AUR packages. While it's not a distro, it includes prebuilt versions of common AUR packages.
Hi,
I use Manjaro but love IPFS and was wondering if it'd be possible to use this for a Manjaro install.
From my understanding, Manjaro uses the same packages as the Arch repo (plus a couple of special ones like the hardware detection packages).
And since IPFS just addresses each file by its hash, I assume the users hosting the arch mirrors could also be hosting packages for Manjaro users?
Curious to hear what you think of this
Cheers,
Hello,
It's really nice to have these IPFS mirrors for regular Arch Linux.
However, I still pull packages for https://www.blackarch.org from HTTP mirrors. The blackarch repos provide many tools (https://www.blackarch.org/tools.html) in addition to regular arch.
It would be really cool if you could make an IPFS mirror for that as well.
Currently, everything held in MFS is stored forever on a node, regardless of whether it is pinned by the cluster or not.
To avoid overflowing the trusted node that inserts new packages, we need to change old.pkg.pacman.store into an HTML document which just holds links to the CIDs of the snapshots.
In this process the snapshots should be moved from old.pkg.pacman.store to a subdirectory, with a file called default: an HTML document in the style of an IPFS listing which holds each CID as a link.
When they are replaced, the directories need to be pinned by the cluster non-recursively and with a timeout, to allow proper unpinning and garbage collection of the files on the trusted host.
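Sketched with ipfs-cluster-ctl, under the assumption that the non-recursive "direct" pin mode and a pin expiry flag exist roughly as in ipfs-cluster 0.13+ (flag names worth verifying against the ipfs-cluster docs):

```shell
# Sketch: pin a replaced snapshot folder non-recursively with a timeout,
# so the trusted host can garbage-collect its contents afterwards.
# Flag names (--mode, --expire-in) are assumptions to verify.
repin_snapshot() {
    ipfs-cluster-ctl pin add --mode direct --expire-in 720h "$1"
}
```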
See also:
ipfs/kubo#6878
Hi, I was looking for go-pie 1.13.8 on this cluster, but it is really hard to navigate http://old.pkg.pacman.store.ipns.localhost:8080/arch/x86_64/default.old.html. It seems there are just old pinned versions of the whole repository. Maybe you could take some inspiration from the format of https://archive.archlinux.org/packages/. But I assume you already thought about that and it is not so easy to implement. It also seems that this cluster's archive doesn't go that far into the past (which is fine).
Hi there,
Your current setup uses IPNS, which is painfully slow in many areas. I would suggest you use an IPFS address instead of an IPNS one for performance reasons.
This would require modifying your TXT record every time you update the repo, but it would drastically reduce resolution times.
Thanks!
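For illustration, moving from IPNS to a DNSLink record that points directly at an immutable CID would mean rewriting a TXT record like the following on every repo update (the CID here is a placeholder):

```
_dnslink.pkg.pacman.store. 300 IN TXT "dnslink=/ipfs/<current-root-cid>"
```

The short TTL keeps the window between publishing a new root CID and clients seeing it small, at the cost of more DNS queries.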
After reading the post of HalosGhost at 2021-07-27 15:51:34, I don't know whether I may or may not post a continuation to that thread, so I'm posting here.
Arch Wiki
Publishing the IPFS pacman.store mirror on Arch Wiki and official mirror list.
I also wanted to post an article for the sake of productivity and open a discussion on the forum... so I'm posting a link to it here too http://www.pragma-grid.net/images/pragma34/Tipchuen.pdf
I'm running a collaborative cluster follower on my server. I wrote a systemd unit that runs ipfs-cluster-follow pkg.pacman.store run so I don't need to use tmux. It worked OK except for high disk IO, but that's not the problem here.
There seem to have been no new pins added to the cluster since about a week ago. This is the output of journalctl -b -u [email protected] --no-pager since I restarted it yesterday:
Dec 29 17:03:32 ipfs ipfs-cluster-follow[108]: Starting the IPFS Cluster follower peer for "pkg.pacman.store".
Dec 29 17:03:32 ipfs ipfs-cluster-follow[108]: CTRL-C to stop it.
Dec 29 17:03:32 ipfs ipfs-cluster-follow[108]: Checking if IPFS is online (will wait for 2 minutes)...
Dec 29 17:03:32 ipfs ipfs-cluster-follow[108]: waiting for IPFS to become available on /ip4/127.0.0.1/tcp/5001...
Dec 29 17:03:34 ipfs ipfs-cluster-follow[108]: 2020-12-29T17:03:34.430+0100 INFO config config/config.go:361 loading configuration from http://127.0.0.1:8080/ipns/cluster.pkg.pacman.store
Dec 29 17:03:35 ipfs ipfs-cluster-follow[108]: 2020-12-29T17:03:35.935+0100 INFO cluster [email protected]/cluster.go:132 IPFS Cluster v0.13.0 listening on:
Dec 29 17:03:35 ipfs ipfs-cluster-follow[108]: /ip6/::1/tcp/16587/p2p/12D3KooWM2WefGdNzYLkduTBsTMrgtJevP3Wa8HV4tQ9pGfRTVXA
Dec 29 17:03:35 ipfs ipfs-cluster-follow[108]: /ip6/fd42:dd60:65a4:c374:216:3eff:febf:5066/tcp/16587/p2p/12D3KooWM2WefGdNzYLkduTBsTMrgtJevP3Wa8HV4tQ9pGfRTVXA
Dec 29 17:03:35 ipfs ipfs-cluster-follow[108]: /ip4/127.0.0.1/tcp/16587/p2p/12D3KooWM2WefGdNzYLkduTBsTMrgtJevP3Wa8HV4tQ9pGfRTVXA
Dec 29 17:03:35 ipfs ipfs-cluster-follow[108]: /ip4/10.208.171.176/tcp/16587/p2p/12D3KooWM2WefGdNzYLkduTBsTMrgtJevP3Wa8HV4tQ9pGfRTVXA
Dec 29 17:03:53 ipfs ipfs-cluster-follow[108]: 2020-12-29T17:03:53.113+0100 INFO restapi rest/restapi.go:515 REST API (HTTP): /unix//home/ipfs/.ipfs-cluster-follow/pkg.pacman.store/api-socket
Dec 29 17:03:53 ipfs ipfs-cluster-follow[108]: 2020-12-29T17:03:53.133+0100 INFO crdt [email protected]/crdt.go:275 crdt Datastore created. Number of heads: 1. Current max-height: 37761
Dec 29 17:03:57 ipfs ipfs-cluster-follow[108]: 2020-12-29T17:03:57.232+0100 INFO cluster [email protected]/cluster.go:619 Cluster Peers (without including ourselves):
Dec 29 17:03:57 ipfs ipfs-cluster-follow[108]: 2020-12-29T17:03:57.233+0100 INFO cluster [email protected]/cluster.go:626 - 12D3KooWDM4BGmkaxhLtEFbQJekdBHtWHo3ELUL4HE9f4DdNbGZx
Dec 29 17:03:57 ipfs ipfs-cluster-follow[108]: 2020-12-29T17:03:57.233+0100 INFO cluster [email protected]/cluster.go:634 ** IPFS Cluster is READY **
Dec 29 17:04:28 ipfs ipfs-cluster-follow[108]: 2020-12-29T17:04:28.115+0100 ERROR p2p-gorpc [email protected]/call.go:64 failed to dial 12D3KooWEweUswc6ZrQJACgGf13gmVBVVssK6LjCMENs6pu5yHth: all dials failed
Dec 29 17:04:28 ipfs ipfs-cluster-follow[108]: * [/ip4/192.168.2.212/tcp/16587] dial tcp4 192.168.2.212:16587: connect: connection refused
Dec 29 17:04:28 ipfs ipfs-cluster-follow[108]: * [/ip6/2001:470:1f11:90d::1/tcp/16587] dial tcp6 [2001:470:1f11:90d::1]:16587: connect: network is unreachable
Dec 29 17:04:28 ipfs ipfs-cluster-follow[108]: * [/ip6/fd2c:853:d11f:0:e445:edff:fe20:38d3/tcp/16587] dial tcp6 [fd2c:853:d11f:0:e445:edff:fe20:38d3]:16587: connect: network is unreachable
Dec 29 17:04:28 ipfs ipfs-cluster-follow[108]: * [/ip6/2600:8803:e600:18c:e445:edff:fe20:38d3/tcp/16587] dial tcp6 [2600:8803:e600:18c:e445:edff:fe20:38d3]:16587: connect: network is unreachable
Dec 29 17:04:28 ipfs ipfs-cluster-follow[108]: * [/ip6/fc7c:867c:98e9:7667:f32:9091:65a1:3d65/tcp/16587] dial tcp6 [fc7c:867c:98e9:7667:f32:9091:65a1:3d65]:16587: connect: network is unreachable
Dec 29 17:04:28 ipfs ipfs-cluster-follow[108]: * [/ip4/192.168.32.254/tcp/16587] dial tcp4 0.0.0.0:16587->192.168.32.254:16587: i/o timeout
Dec 29 17:04:28 ipfs ipfs-cluster-follow[108]: * [/ip4/10.0.0.2/tcp/16587] dial tcp4 0.0.0.0:16587->10.0.0.2:16587: i/o timeout
Dec 29 17:04:28 ipfs ipfs-cluster-follow[108]: * [/ip4/68.12.168.55/tcp/16587] dial tcp4 0.0.0.0:16587->68.12.168.55:16587: i/o timeout
Dec 29 17:15:59 ipfs ipfs-cluster-follow[108]: 2020-12-29T17:15:59.045+0100 INFO cluster [email protected]/cluster.go:487 reconnected to 12D3KooWK6dvqX7kXvJW8LkFDtT5zzTZLFF8PQAjHmR6Y9ych53C
The last line is then printed again and again a couple times per hour and nothing else.
If I try to ipfs swarm connect to any of the three peers listed in https://github.com/RubenKelevra/pacman.store/blob/master/collab-cluster-config/service.json, it times out. I only have an IPv4 internet connection, so I could only try the three IPv4 addresses.
The time this started happening seems to coincide with the shutdown of loki, at least judging by the commit date of a482b39. Maybe this has something to do with it.
ipfs-cluster-follow pkg.pacman.store list only prints:
pinned bafybeiai2lhrnb6v53jkt7wf5wd7uljaoko342x3oxswe3lqkmxuws45ei x86-64.archlinux.pkg.pacman.store@2020-12-23T21:10:40+00:00
pinned bafykbzacecana3ogwm5n3ung7zit47e4tmjlozfvs7hjalp2tiacgilsbs3sq cluster-service-lowpower.json@64513e3
pinned bafykbzacedqspngi4evtuy7axrvbsvrrhu2kpr4ndabiqhlkf4i5yhq5ner5a cluster-service.json@64513e3
Is this a problem with the cluster or with my setup?