
Comments (14)

RubenKelevra commented on May 27, 2024

Hey @rpodgorny,

Yeah, sorry for not creating a tracking issue earlier. I know some people (myself included) rely on this service, so it would be nice to get it more stable.

However, this is still limited by the pre-production status of go-ipfs.

What happened:

  • I upgraded from 0.11 to 0.12.0-rc1.
  • After the upgrade, my scripts couldn't properly modify the MFS anymore. Since any failure leads to a retry of the whole transaction, this was retried for multiple days without success:
ipfs_api files rm /x86-64.archlinux.pkg.pacman.store/community/haskell-blaze-textual-0.2.2.1-16-x86_64.pkg.tar.zst.sig
return 1

Meanwhile, ipfs_api files stat /x86-64.archlinux.pkg.pacman.store/community/haskell-blaze-textual-0.2.2.1-16-x86_64.pkg.tar.zst.sig reported that the file existed.

So I removed the block storage and started a reimport.

After 3 days (way longer than a full import usually takes) I noticed that the IO had completely stopped, while IPFS was continuously using 1.6-1.8 cores.

The script was waiting several minutes on simple ipfs_api files cp /ipfs/$cid /path/to/file commands.
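
For reference, the per-file step the import script performs looks roughly like this (a minimal sketch, assuming the ipfs_api wrapper simply forwards its arguments to the ipfs CLI; the helper name import_one_file is just illustrative, and the real ipfs_mfs_add_file() function is quoted further down in this thread):

# Sketch of one import step: skip existing files, add, link into the MFS, unpin.
import_one_file() {
	local src="$1" dst="$2" cid

	# already present in the MFS? then there is nothing to do
	ipfs_api files stat --hash "$dst" > /dev/null 2>&1 && return 0

	# add the file (temporarily pinned), link the CID into the MFS, drop the pin
	cid=$(ipfs_api add --chunker "size-65536" --hash "blake2b-256" \
		--cid-version "1" --raw-leaves --quieter "$src") || return 1
	ipfs_api files cp "/ipfs/$cid" "$dst" || return 1
	ipfs_api pin rm "/ipfs/$cid" || return 1
}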

I've sent two dumps to @aschmahmann and tried to find the culprit, so far to no avail.

The dumps are

  • import running: /ipfs/QmPJ1ec2CywWLFeaHFaTeo6g56S5Bqi3g3MEF1a3JrL8zk
  • import stopped again: /ipfs/QmbotJhgzc2SBxuvGA9dsCFLbxd836QBNFYkLhdqTCZwrP

So as long as these performance issues are not fixed, I can't do a reimport on 0.12.0-rc1.


I'm still undecided whether I should roll back and retry, or do a git bisect to find the commit(s) that introduced the issue.

RubenKelevra commented on May 27, 2024

Just for reference, here's the conversation with @aschmahmann yesterday (from my timezone's perspective):

1/21/2022, 8:47:48 PM - @RubenKelevra: @aschmahmann: hey, you're up?

1/21/2022, 8:48:03 PM - @aschmahmann: @RubenKelevra: what's up?

1/21/2022, 8:48:56 PM - @RubenKelevra: @aschmahmann: you've asked for a report on my setup if I run into issues (something like that) with the new version. Yeah. It isn't looking pretty

1/21/2022, 8:50:18 PM - @aschmahmann: btw the error logs you found in the migration should be covered by ipfs/fs-repo-migrations#150. There shouldn't have been any data affected, just some extra database queries used and erroneous log messages.

however, if the MFS issue is related that would definitely be something to look into
1/21/2022, 8:54:04 PM - @RubenKelevra: So I'm running my import, which will just create some files and folders in the MFS. The auto-GC is off and I'm using flatfs on ZFS storage. ZFS has no dedup or other space magic enabled.

IPFS is at 160% CPU usage on 6 pretty fast CPU cores. All experimental features are set to false in the config. iotop reports something around 100 KByte/s total read/write on the disk (which is an SSD). zpool status reports no IO at all, so everything IPFS does IO-wise is served out of the file system cache.

Memory is:

MEM | tot    31.4G |  free    6.6G | cache   1.1G |  dirty   0.1M | buff   48.9M |  slab    7.1G | slrec   3.7G | shmem   2.0M  | shrss   0.0M | vmbal   0.0M  | zfarc  15.6G | hptot   0.0M  | hpuse   0.0M |

1/21/2022, 8:54:18 PM - @RubenKelevra: So somehow it got stuck

1/21/2022, 8:54:52 PM - @RubenKelevra:

go-ipfs version: 0.13.0-dev-2a871ef01
Repo version: 12
System version: amd64/linux
Golang version: go1.17.6

1/21/2022, 8:55:56 PM - @aschmahmann: can you grab an ipfs diag profile

1/21/2022, 8:56:53 PM - @RubenKelevra: yeah just doing this 🙂
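
For reference, collecting and sharing such a profile is roughly this (a sketch; the exact name of the archive that ipfs diag profile writes may differ between versions, so adjust the file name accordingly):

# collect a diagnostic profile; it is written as a zip archive into the current directory
ipfs diag profile

# add the archive to IPFS and print only its CID (adjust the glob to what the command printed)
ipfs add --quieter ipfs-profile*.zip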

1/21/2022, 8:57:54 PM - @aschmahmann: Also, I'm not totally understanding the issue. Is it that you're not sure why you're using a bunch of CPU, but minimal IO?

1/21/2022, 8:58:50 PM - @aschmahmann: Is it that you're running some command which is stalling? If so what's the command/script?

1/21/2022, 9:01:19 PM - @RubenKelevra: Yes, basically no IO, while there are a lot of local files which should just be added to IPFS via a script

1/21/2022, 9:03:48 PM - @aschmahmann: This is running ipfs add -r on some big directory or are you using ipfs files write or something like that?

1/21/2022, 9:07:28 PM - @RubenKelevra: So I'm looping through a directory with bash and checking with an ipfs files stat whether a file already exists ... if not, I do an ipfs add --chunker "size-65536" --hash "blake2b-256" --cid-version "1" --raw-leaves, the CID gets linked into the right location with ipfs files cp /ipfs/$cid /path/in/mfs, and then it's unpinned again.

1/21/2022, 9:08:12 PM - @RubenKelevra: using ipfs files write was too unstable in the past to use directly

1/21/2022, 9:10:06 PM - @RubenKelevra: here's what happened in the past when using ipfs files write - so I thought I'd use the functions which are most commonly used 🙂

ipfs/kubo#6936

1/21/2022, 9:12:58 PM - @aschmahmann: Is the bash script hung? Otherwise, if all the files are there then you won't do much IO, right?

1/21/2022, 9:14:30 PM - @RubenKelevra: No, the bash script is still running

1/21/2022, 9:15:21 PM - @RubenKelevra: I have an output of the files imported counting up. It is progressing over days - so I think in the last 4 days it imported 4000 files

1/21/2022, 9:15:40 PM - @RubenKelevra: But back then there was some IO, now it's basically zero

1/21/2022, 9:15:51 PM - @RubenKelevra: <@aschmahmann:matrix.org "can you grab an `ipfs diag profi..."> https://ipfs.io/ipfs/QmPJ1ec2CywWLFeaHFaTeo6g56S5Bqi3g3MEF1a3JrL8zk

1/21/2022, 9:17:21 PM - @RubenKelevra: <@aschmahmann:matrix.org "Is the bash script hung? Otherwi..."> The bash script's last call to IPFS was: ipfs --api=/ip4/127.0.0.1/tcp/5001 files cp /ipfs/bafk2bzacedbzvu3pxjbpxpkmduxwzbztugfw2oahv3aswkicqjtl64ulwufgy /x86-64.archlinux.pkg.pacman.store/community/cinnamon-screensaver-5.2.0-1-x86_64.pkg.tar.zst.sig

1/21/2022, 9:17:50 PM - @RubenKelevra: this call is 2 minutes old

1/21/2022, 9:19:01 PM - @RubenKelevra: Ah, and since I started with an empty repository, you can basically reproduce my setup by just running my toolset, which fetches the Arch mirror via rsync and imports it into the MFS

1/21/2022, 9:23:31 PM - @aschmahmann: <@RubenKelevra:matrix.org "Ah and since I started with an e..."> So there's something I can run that's slower in v0.12.0-rc1 than v0.11.0?

1/21/2022, 9:24:05 PM - @RubenKelevra: Yeah I think so

1/21/2022, 9:24:19 PM - @RubenKelevra: I might have jumped 0.11 actually

1/21/2022, 9:24:52 PM - @RubenKelevra: https://github.com/RubenKelevra/rsync2ipfs-cluster.git

1/21/2022, 9:25:37 PM - @RubenKelevra: bash bin/rsync2cluster.sh --create --arch-config

1/21/2022, 9:25:54 PM - @aschmahmann: If you skipped v0.11.0, then it's possible that, if you have large directories, something related to the automatic sharding work in v0.11.0 is performing in a surprising way for you.

1/21/2022, 9:26:11 PM - @RubenKelevra: It should test the environment and complain about basically everything

1/21/2022, 9:27:00 PM - @RubenKelevra: I've used sharding in the past, but my directories were small enough that they didn't need to be sharded, so I turned that off.

1/21/2022, 9:27:10 PM - @RubenKelevra: let me look at the update log... one second

1/21/2022, 9:28:32 PM - @aschmahmann: Your data does seem to have sharding:

goroutine 49849232441 [runnable]:
github.com/ipfs/go-unixfs/io.(*HAMTDirectory).sizeBelowThreshold(0xc026ece2a0, {0x55f3480d9ff8, 0xc000d749c0}, 0x0)
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-unixfs@<version>/io/directory.go:510 +0xbf
github.com/ipfs/go-unixfs/io.(*HAMTDirectory).needsToSwitchToBasicDir(0xc026ece2a0, {0x55f3480d9ff8, 0xc000d749c0}, {0xc0146f6c2d, 0x2f}, {0x55f3481029f8, 0xc00c6882c0})
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-unixfs@<version>/io/directory.go:488 +0x16b
github.com/ipfs/go-unixfs/io.(*DynamicDirectory).AddChild(0xc03ae6d190, {0x55f3480d9ff8, 0xc000d749c0}, {0xc0146f6c2d, 0x2f}, {0x55f3481029f8, 0xc00c6882c0})
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-unixfs@<version>/io/directory.go:544 +0x1be
github.com/ipfs/go-mfs.(*Directory).updateChild(0xc02fb9df00, {{0xc0146f6c2d, 0x55f347420186}, {0x55f3481029f8, 0xc00c6882c0}})
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-mfs@<version>/dir.go:131 +0x57
github.com/ipfs/go-mfs.(*Directory).sync(0xc02fb9df00)
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-mfs@<version>/dir.go:379 +0xc5
github.com/ipfs/go-mfs.(*Directory).GetNode(0xc02fb9df00)
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-mfs@<version>/dir.go:411 +0xb3
github.com/ipfs/go-ipfs/core/commands.getNodeFromPath({0x55f3480d9ff8, 0xc023b70fc0}, 0xc0000cc240, {0x55f3481098c0, 0xc003cbe000}, {0xc00ad079e0, 0x2c})
	/home/ruben/.cache/paru/clone/go-ipfs-git/src/go-ipfs/core/commands/files.go:434 +0xaf
github.com/ipfs/go-ipfs/core/commands.glob..func75(0xc00a686e00, {0x55f3480da500, 0xc00a686e70}, {0x55f347f54500, 0xc00142d620})
	/home/ruben/.cache/paru/clone/go-ipfs-git/src/go-ipfs/core/commands/files.go:173 +0x3ef
github.com/ipfs/go-ipfs-cmds.(*Command).call(0xc00b6f1220, 0xc00a686e00, {0x55f3480da500, 0xc00a686e70}, {0x55f347f54500, 0xc00142d620})
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-ipfs-cmds@<version>/command.go:173 +0x1ac
github.com/ipfs/go-ipfs-cmds.(*Command).Call(0xc00142d620, 0x55f3480e1f78, {0x55f3480da500, 0xc00a686e70}, {0x55f347f54500, 0xc00142d620})
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-ipfs-cmds@<version>/command.go:143 +0x4b
github.com/ipfs/go-ipfs-cmds/http.(*handler).ServeHTTP(0xc002c1cda0, {0x7f535c5ef238, 0xc02d5b2730}, 0xc037420f00)
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-ipfs-cmds@<version>/http/handler.go:192 +0xa7e
github.com/ipfs/go-ipfs-cmds/http.prefixHandler.ServeHTTP({{0x55f347893cfc, 0xc00bac3325}, {0x55f3480ae580, 0xc002c1cda0}}, {0x7f535c5ef238, 0xc02d5b2730}, 0xc037420f00)
	/home/ruben/.cache/paru/clone/go-ipfs-git/go/pkg/mod/github.com/ipfs/go-ipfs-cmds@<version>/http/apiprefix.go:24 +0x142

1/21/2022, 9:29:03 PM - @RubenKelevra:

[2021-06-16T13:29:53+0200] [ALPM] upgraded go-ipfs-git (0.9.0rc2.r20.g041de2aed-1 -> 0.9.0rc2.r23.gf05f84a79-1)
[2021-07-06T19:57:13+0200] [ALPM] upgraded go-ipfs-git (0.9.0rc2.r23.gf05f84a79-1 -> 0.9.0.r48.gae306994a-1)
[2021-07-18T13:39:53+0200] [ALPM] upgraded go-ipfs-git (0.9.0.r48.gae306994a-1 -> 0.9.0.r50.g9599ad5d7-1)
[2021-07-23T11:43:14+0200] [ALPM] upgraded go-ipfs-git (0.9.0.r50.g9599ad5d7-1 -> 0.9.1.r81.g461f69181-1)
[2021-07-27T12:47:55+0200] [ALPM] upgraded go-ipfs-git (0.9.1.r81.g461f69181-1 -> 0.9.1.r86.geda8b43a2-1)
[2021-08-14T01:32:31+0200] [ALPM] upgraded go-ipfs-git (0.9.1.r86.geda8b43a2-1 -> 0.9.1.r117.g7c76118b0-1)
[2021-09-03T18:12:00+0200] [ALPM] upgraded go-ipfs-git (0.9.1.r117.g7c76118b0-1 -> 0.10.0rc1.r14.g65d570c6c-1)
[2021-10-25T20:58:22+0200] [ALPM] upgraded go-ipfs-git (0.10.0rc1.r14.g65d570c6c-1 -> 0.10.0.r61.g5a61bedef-1)
[2021-12-11T15:51:52+0100] [ALPM] upgraded go-ipfs-git (0.10.0.r61.g5a61bedef-1 -> 0.11.0.r11.gdeb79a258-1)
[2022-01-09T16:54:29+0100] [ALPM] upgraded go-ipfs-git (0.11.0.r11.gdeb79a258-1 -> 0.12.0rc1.r5.g2a871ef01-1)

1/21/2022, 9:29:19 PM - @RubenKelevra: So I did run 0.11

1/21/2022, 9:30:37 PM - @aschmahmann: Does any of your logic assume that ipfs add --chunker "size-65536" --hash "blake2b-256" --cid-version "1" --raw-leaves followed by ipfs files cp /ipfs/$cid /path/in/mfs will result in the same CID across machines for the directory?

1/21/2022, 9:31:24 PM - @RubenKelevra: no

1/21/2022, 9:33:46 PM - @RubenKelevra: I'm using the CID just in this context to link the file in the mfs, remove the pin and then the function returns.

I will always just ask ipfs for the cid of a filepath rather than storing a CID

1/21/2022, 9:34:00 PM - @RubenKelevra:

function ipfs_mfs_add_file() {
	# expect a filepath
	[ -z "$1" ] && fail "ipfs_mfs_add_file() was called with an empty first argument" 280
	# expect a mfs destination path
	[ -z "$2" ] && fail "ipfs_mfs_add_file() was called with an empty second argument" 281
	[ ! -f "$1" ] && fail "ipfs_mfs_add_file() was called with a path to a file which didn't exist: '$1'" 282
	local _cid=""

	# workaround for https://github.com/ipfs/go-ipfs/issues/7532
	if ! _cid=$(ipfs_api add --chunker "$ipfs_chunker" --hash "$ipfs_hash" --cid-version "$ipfs_cid" --raw-leaves --quieter "$1"); then
		fail "ipfs_mfs_add_file() could not add the file '$1' to ipfs" 283
	elif ! ipfs_api files cp "/ipfs/$_cid" "$2" > /dev/null 2>&1; then
		fail "ipfs_mfs_add_file() could not copy the file '$1' to the mfs location '$2'. CID: '$_cid'" 284
	elif ! ipfs_api pin rm "/ipfs/$_cid" > /dev/null 2>&1; then
		fail "ipfs_mfs_add_file() could not unpin the temporarily pinned file '$1'. CID: '$_cid'" 285
	fi

}

1/21/2022, 9:34:13 PM - @RubenKelevra: that's the function

1/21/2022, 9:37:14 PM - @RubenKelevra: So IPFS is not in a deadlock, it will still respond (after a while). Currently my script has been waiting for 2 minutes on an ipfs --api=/ip4/127.0.0.1/tcp/5001 files stat --hash for a directory 😀

1/21/2022, 9:38:53 PM - @aschmahmann: What happens if you pass --offline to the command?

1/21/2022, 9:40:23 PM - @RubenKelevra: like /usr/sbin/ipfs --api=/ip4/127.0.0.1/tcp/5001 files stat --hash --offline /x86-64.archlinux.pkg.pacman.store/community?

1/21/2022, 9:40:39 PM - @RubenKelevra: well, I'm waiting on the result, but I guess it won't be faster

1/21/2022, 9:41:30 PM - @RubenKelevra: I tried ipfs repo stat --size-only which returns instantly

1/21/2022, 9:41:32 PM - @aschmahmann: also try using --with-local

1/21/2022, 9:42:15 PM - @RubenKelevra:

$ time /usr/sbin/ipfs --api=/ip4/127.0.0.1/tcp/5001 files stat --hash --offline /x86-64.archlinux.pkg.pacman.store/community
bafybeianfwoujqfauris6eci6nclgng72jttdp5xtyeygmkivzyss4xhum

real	0m59.164s
user	0m0.299s
sys	0m0.042s

1/21/2022, 9:42:23 PM - @RubenKelevra: that's suspiciously close to one minute

1/21/2022, 9:43:44 PM - @RubenKelevra: ah, and my script is still running, so it will fire a new command as soon as the old one comes back. So right now I'm running things in parallel, but the whole time before I wasn't doing anything in parallel

1/21/2022, 9:45:01 PM - @RubenKelevra: disk IO has woken up somehow

1/21/2022, 9:45:21 PM - @RubenKelevra: it's around 100 MB/s read and basically no write

1/21/2022, 9:47:08 PM - @aschmahmann: btw if I try running your script how much space do I need on my machine for things to run?

1/21/2022, 9:48:09 PM - @RubenKelevra: I'm currently using around 100 GB

1/21/2022, 9:48:20 PM - @RubenKelevra: it should be around 60-80 GB download size

1/21/2022, 9:50:13 PM - @aschmahmann: k, I probably won't be able to run it until the weekend or early next week.

1/21/2022, 9:53:41 PM - @aschmahmann: Could you try running with --with-local also?

1/21/2022, 9:57:47 PM - @aschmahmann: also, you could try running it with the daemon not running at all (I'm not sure how well --offline is plumbed through the MFS commands).

I'm trying to understand whether you're making any network requests during that command
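
A sketch of that test (the systemd unit name "ipfs" is an assumption here; use whatever the service is actually called on the machine):

# stop the daemon so the CLI opens the repo directly, with no API and no network involved
sudo systemctl stop ipfs

time ipfs files stat --hash /x86-64.archlinux.pkg.pacman.store/community

sudo systemctl start ipfs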

1/21/2022, 9:58:17 PM - @RubenKelevra: <@aschmahmann:matrix.org "Could you try running with `--wi...">

$ time /usr/sbin/ipfs --api=/ip4/127.0.0.1/tcp/5001 files stat --hash --offline --with-local /x86-64.archlinux.pkg.pacman.store/community
bafybeie5kkzcg6ftmppbuauy3tgtx2f4gyp7nhfdfsveca7loopufbijxu
Local: 20 GB of 20 GB (100.00%)

real	4m55.298s
user	0m0.378s
sys	0m0.031s

1/21/2022, 9:58:46 PM - @RubenKelevra: No, no network commands

1/21/2022, 9:59:23 PM - @RubenKelevra: It's connected to the network, obviously, and probably publishing some stuff to the DHT, but I don't rely on the network connection

1/21/2022, 9:59:28 PM - @aschmahmann: well, the API is running. If you're able to shut off the daemon and run the command against the repo directly, that'll definitely show it

1/21/2022, 9:59:29 PM - Jorropo.eth: Can I interrupt this conversation by noting that ipfs is in sbin, and that is making me feel uncomfortable.

That means you run IPFS as root and not as its own user.

1/21/2022, 10:00:06 PM - @RubenKelevra: <@_discord_183309006767521792:ipfs.io "Can I interrupt this conversatio..."> Well, that's incorrect 🙂

1/21/2022, 10:00:40 PM - @RubenKelevra: I'm actually using most of the systemd protection stuff on the service, and it runs as a dedicated user

1/21/2022, 10:00:41 PM - Jorropo.eth: if you are telling me that your IPFS user has sbin in its path, that makes me EVEN MORE UNCOMFORTABLE

1/21/2022, 10:01:19 PM - Jorropo.eth: it's just that sbin is for superuser bins, and IPFS ain't a superuser binary

1/21/2022, 10:01:48 PM - @RubenKelevra: I feel like I might be using the wrong folder, but I actually don't use the PATH at all

1/21/2022, 10:02:01 PM - Jorropo.eth: yeah ok then makes sense

1/21/2022, 10:02:18 PM - @RubenKelevra: I'm running the service with this service file: https://github.com/ipfs/go-ipfs/blob/master/misc/systemd/ipfs-hardened.service

1/21/2022, 10:03:08 PM - @RubenKelevra: the service is quite isolated from the rest of the system

1/21/2022, 10:04:36 PM - @RubenKelevra: and usually the ipfs user can't even open a console. I need superuser rights to override that setting if I want to switch to that user

1/21/2022, 10:05:33 PM - @RubenKelevra: <@aschmahmann:matrix.org "well the API is running. If you'..."> Yeah... I still hope this script might actually finish ... but I think it's time to throw in the towel 😀

1/21/2022, 10:06:25 PM - @aschmahmann: Another thing we could try that should be similar for debugging is if you give me the CID for x86-64....

1/21/2022, 10:06:44 PM - @aschmahmann: then I could run that command and pull from you and see if it takes the same amount of time

1/21/2022, 10:07:22 PM - @RubenKelevra: @aschmahmann: the script processed ~100 files since we started talking, it just printed the next "milestone" message

1/21/2022, 10:08:12 PM - @RubenKelevra: @aschmahmann: sure

1/21/2022, 10:08:43 PM - @RubenKelevra: I'm listing / (might take a while)

1/21/2022, 10:08:44 PM - @RubenKelevra: 😀

1/21/2022, 10:09:31 PM - @aschmahmann: you could just do ipfs files stat /x86-64.archlinux.pkg.pacman.store

1/21/2022, 10:16:23 PM - @RubenKelevra:

$ ipfs files ls --long /
x86-64.archlinux.pkg.pacman.store/	bafybeiakccw5dn5xcpmufkqm7zkxgwazqqqpldlcgawtahnlnwejlevese	0

1/21/2022, 10:16:36 PM - @RubenKelevra: Well, there's just the single folder

1/21/2022, 10:18:39 PM - @RubenKelevra: my import script is NOW off

1/21/2022, 10:19:22 PM - @RubenKelevra: disk is 0% busy but IPFS is still doing 160-180% cpu

1/21/2022, 10:19:33 PM - @RubenKelevra: and it dropped to 30%

1/21/2022, 10:20:59 PM - Jorropo.eth: 30% for IPFS doing nothing doesn't shock me.
Between the DHTserver and serving blocks, my workstation average 35% of background IPFS.

My gateway used to do 400% background but now it's doing 60% (I've changed it to .gateway.nofetch)

1/21/2022, 10:21:53 PM - @aschmahmann: can you do a manual DHT provide of that CID (or just tell me your multiaddr) so I can fetch the content.

1/21/2022, 10:22:21 PM - @aschmahmann: you're not running the accelerated client and I don't want to wait for it to make its way through the queue 😄

1/21/2022, 10:23:10 PM - @RubenKelevra:

	"Addresses": [
		"/ip4/127.0.0.1/tcp/443/p2p/QmVoV4RiGLcxAfhA181GXR867bzVxmRTWwaubvhUyFrBwB",
		"/ip4/45.83.104.156/tcp/443/p2p/QmVoV4RiGLcxAfhA181GXR867bzVxmRTWwaubvhUyFrBwB",
		"/ip6/2a03:4000:46:26e:b42e:2bff:fe07:c3fb/tcp/443/p2p/QmVoV4RiGLcxAfhA181GXR867bzVxmRTWwaubvhUyFrBwB",
		"/ip6/64:ff9b::2d53:689c/tcp/443/p2p/QmVoV4RiGLcxAfhA181GXR867bzVxmRTWwaubvhUyFrBwB",
		"/ip6/::1/tcp/443/p2p/QmVoV4RiGLcxAfhA181GXR867bzVxmRTWwaubvhUyFrBwB"
	],
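
For reference, the two options mentioned above would look roughly like this (a sketch using the root CID and public address quoted in this conversation):

# on the providing node: announce the root CID to the DHT manually
ipfs dht provide bafybeiakccw5dn5xcpmufkqm7zkxgwazqqqpldlcgawtahnlnwejlevese

# or, on the fetching node: skip the DHT and connect to the peer directly
ipfs swarm connect /ip4/45.83.104.156/tcp/443/p2p/QmVoV4RiGLcxAfhA181GXR867bzVxmRTWwaubvhUyFrBwB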

1/21/2022, 10:24:08 PM - @aschmahmann: @RubenKelevra: btw, if you look at the pprof dump you sent, most of the CPU is used walking the HAMT. If you're doing a bunch of that, it might be necessary. Although some of those code paths are newer, so they might be doing more work than necessary.
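
For anyone following along: the CPU profile inside such a diag archive can be inspected with the standard Go pprof tooling (a sketch; the archive and profile file names below are assumptions, so use whatever your archive actually contains):

# unpack the diag archive
unzip ipfs-profile.zip -d profile/

# list the functions consuming the most CPU time
go tool pprof -top profile/cpu.pprof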

1/21/2022, 10:24:47 PM - @RubenKelevra: @aschmahmann: well, right after the CPU load dropped I did another dump. Maybe that's a helpful baseline?

1/21/2022, 10:25:38 PM - @RubenKelevra: @aschmahmann: you know what's interesting? bafybeiakccw5dn5xcpmufkqm7zkxgwazqqqpldlcgawtahnlnwejlevese uses 0x12 = sha2-256

1/21/2022, 10:27:00 PM - @aschmahmann: ❯ Measure-Command {ipfs files stat /ipfs/bafybeiakccw5dn5xcpmufkqm7zkxgwazqqqpldlcgawtahnlnwejlevese/community | Out-Default}
bafybeigdujz6rqffopottgfb7vdzpqe3uwcgmaummmxojfns7scsqy4vo4
Size: 0
CumulativeSize: 19511698019
ChildBlocks: 256
Type: directory

Days : 0
Hours : 0
Minutes : 0
Seconds : 0
Milliseconds : 57
Ticks : 574546
TotalDays : 6.64983796296296E-07
TotalHours : 1.59596111111111E-05
TotalMinutes : 0.000957576666666667
TotalSeconds : 0.0574546
TotalMilliseconds : 57.4546

1/21/2022, 10:27:08 PM - @aschmahmann: forgive the terrible powershell commands and output

1/21/2022, 10:27:17 PM - @RubenKelevra: @aschmahmann: here's the second dump: QmbotJhgzc2SBxuvGA9dsCFLbxd836QBNFYkLhdqTCZwrP

1/21/2022, 10:28:04 PM - @RubenKelevra: Oh great, it only took 6.64983796296296E-07 days 😀

1/21/2022, 10:29:13 PM - @aschmahmann: <@RubenKelevra:matrix.org "@aschmahmann: you know what's int..."> That makes sense if you're creating the folder in MFS and moving the blake2 hashed files into it.

1/21/2022, 10:31:01 PM - @RubenKelevra: ah yeah, I was under the impression I had set an option to use blake2b there as well... but I don't; ipfs files mkdir --cid-version 1 $1 is my command

1/21/2022, 10:33:07 PM - @RubenKelevra: sorry my bad 🙂
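
If the directory nodes were meant to use blake2b as well, the mkdir call could pass the hash function too; a sketch, assuming this go-ipfs version accepts --hash on ipfs files mkdir the same way ipfs files write does:

# -p creates missing parents; --hash here is an assumption, verify with `ipfs files mkdir --help`
ipfs files mkdir --cid-version 1 --hash blake2b-256 -p "$1"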

1/21/2022, 10:35:30 PM - @RubenKelevra: So the last file my server was struggling to move into the mfs is bafykbzacea6kgtg4lzq3gc4hnwqx24zqhygel3nvaewo4bo7fifn5tfd7swfm

1/21/2022, 10:35:53 PM - @RubenKelevra: I did this on a different machine:

$ time ipfs files cp /ipfs/bafykbzacea6kgtg4lzq3gc4hnwqx24zqhygel3nvaewo4bo7fifn5tfd7swfm /a/test

real	0m0,534s
user	0m0,342s
sys	0m0,055s

1/21/2022, 10:36:33 PM - @RubenKelevra: couldn't be better, if you consider the ping of 50 ms

1/21/2022, 10:40:17 PM - @RubenKelevra: <@aschmahmann:matrix.org "@RubenKelevra: btw if you look at..."> I'm not familiar with the go-ipfs code base nor any of the modules, so I won't even try to look at it. I would draw more conclusions from reading an ancient Greek text 😉

1/21/2022, 10:40:27 PM - @aschmahmann: got to head out for now. But perhaps if Jorropo.eth is around he can help you out. I'm not sure why your setup is making it take so long to walk the sharded directories. The same command you ran runs pretty quickly on my machine.

1/21/2022, 10:40:52 PM - @aschmahmann: Otherwise, I can check in next week

1/21/2022, 10:41:17 PM - @RubenKelevra: Yeah I'll just archive the current state of IPFS and wipe it. Then I can try doing it offline. But I doubt there will be any difference

1/21/2022, 10:41:21 PM - @RubenKelevra: thanks for the help!

1/21/2022, 10:41:21 PM - Jorropo.eth: I haven't really followed; what is the issue? MFS is slow?

1/21/2022, 10:42:36 PM - @RubenKelevra: Jorropo.eth: well, I'm basically doing a bunch of
cid=$(ipfs add ...)
ipfs files cp /ipfs/$cid /path/to/file
ipfs pin rm $cid

1/21/2022, 10:42:49 PM - Jorropo.eth: yeah idk

1/21/2022, 10:42:52 PM - Jorropo.eth: I have no idea

RubenKelevra commented on May 27, 2024

@aschmahmann I'm now running the import offline

RubenKelevra commented on May 27, 2024

I've posted the issue with the import now at ipfs/go-ipfs: ipfs/kubo#8694

RubenKelevra commented on May 27, 2024

> @aschmahmann I'm now running the import offline

The offline import has finished. Not sure about the performance though - haven't observed it.

rpodgorny commented on May 27, 2024

> @aschmahmann I'm now running the import offline
>
> The offline import has finished. Not sure about the performance though - haven't observed it.

Does this mean it's supposed to be publicly available by now? I'm unable to even get a listing as of now...

RubenKelevra commented on May 27, 2024

@rpodgorny nope, not yet. While the Archlinux part was imported, the Manjaro portion threw an ipfs error on an ipfs files cp ... command.

I'll retry this one more time, and if this fails I think I'll just roll the version back. 0.12 is just not mature enough. 🤷

RubenKelevra commented on May 27, 2024

I've tried again. It just stops at random locations:

+ ipfs_mfs_add_file /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-3.16.3-6-x86_64.pkg.tar.zst.sig /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-3.16.3-6-x86_64.pkg.tar.zst.sig
+ '[' -z /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-3.16.3-6-x86_64.pkg.tar.zst.sig ']'
+ '[' -z /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-3.16.3-6-x86_64.pkg.tar.zst.sig ']'
+ '[' '!' -f /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-3.16.3-6-x86_64.pkg.tar.zst.sig ']'
+ local _cid=
++ ipfs_api add --chunker size-65536 --hash blake2b-256 --cid-version 1 --raw-leaves --quieter /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-3.16.3-6-x86_64.pkg.tar.zst.sig
++ cmd=('/usr/sbin/ipfs')
++ local -a cmd
++ /usr/sbin/ipfs add --chunker size-65536 --hash blake2b-256 --cid-version 1 --raw-leaves --quieter /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-3.16.3-6-x86_64.pkg.tar.zst.sig
2022-01-27T23:57:33.763+0100    ERROR   provider.queue  queue/queue.go:124      Failed to enqueue cid: leveldb: closed
++ return 0
+ _cid=bafk2bzaced7tcd5m5dk33wiqtqikoqipgkofvgw3gckplykizq5u563yvkpk6
+ ipfs_api files cp /ipfs/bafk2bzaced7tcd5m5dk33wiqtqikoqipgkofvgw3gckplykizq5u563yvkpk6 /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-3.16.3-6-x86_64.pkg.tar.zst.sig
+ ipfs_api pin rm /ipfs/bafk2bzaced7tcd5m5dk33wiqtqikoqipgkofvgw3gckplykizq5u563yvkpk6
+ unset raw_filepath new_filepath mfs_filepath mfs_parent_folder fs_filepath
+ (( no_of_adds % 100 ))
+ (( no_of_adds++ ))
+ IFS=
+ read -r -d '' filename
++ echo ./stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
++ sed 's/^\.\///g'
+ raw_filepath=stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
+ [[ ./stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst == *\/\~* ]]
+ [[ ./stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst == *\/\.* ]]
++ rewrite_log_path stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
++ '[' -z stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst ']'
++ '[' manjaro == arch ']'
++ '[' manjaro == endeavouros ']'
++ '[' manjaro == manjaro ']'
+++ echo stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
++ output=stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
++ echo stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
+ new_filepath=stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
+ mfs_filepath=/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
++ get_path_wo_fn /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
++ '[' -z /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst ']'
++ echo /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
++ rev
++ cut -d/ -f2-
++ rev
+ mfs_parent_folder=/manjaro.pkg.pacman.store/stable/community/x86_64
+ fs_filepath=/var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
+ ipfs_mfs_path_exist /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
+ '[' -z /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst ']'
+ ipfs_api files stat --hash /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
+ return 1
+ ipfs_mfs_path_exist /manjaro.pkg.pacman.store/stable/community/x86_64
+ '[' -z /manjaro.pkg.pacman.store/stable/community/x86_64 ']'
+ ipfs_api files stat --hash /manjaro.pkg.pacman.store/stable/community/x86_64
+ return 0
+ ipfs_mfs_add_file /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
+ '[' -z /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst ']'
+ '[' -z /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst ']'
+ '[' '!' -f /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst ']'
+ local _cid=
++ ipfs_api add --chunker size-65536 --hash blake2b-256 --cid-version 1 --raw-leaves --quieter /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
++ cmd=('/usr/sbin/ipfs')
++ local -a cmd
++ /usr/sbin/ipfs add --chunker size-65536 --hash blake2b-256 --cid-version 1 --raw-leaves --quieter /var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
2022-01-27T23:57:34.513+0100    ERROR   provider.queue  queue/queue.go:124      Failed to enqueue cid: leveldb: closed
++ return 0
+ _cid=bafk2bzacedaj23muot6d7atvyytdllxst742w5g4ttdimjvirpzfp7yqmwyhs
+ ipfs_api files cp /ipfs/bafk2bzacedaj23muot6d7atvyytdllxst742w5g4ttdimjvirpzfp7yqmwyhs /manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst
+ fail 'ipfs_mfs_add_file() could not copy the file '\''/var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst'\'' to the mfs location '\''/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst'\''. CID: '\''bafk2bzacedaj23muot6d7atvyytdllxst742w5g4ttdimjvirpzfp7yqmwyhs'\''' 284
+ '[' -n '' ']'
+ '[' -n '' ']'
+ printf 'Error: %s; Errorcode: %s\n' 'ipfs_mfs_add_file() could not copy the file '\''/var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst'\'' to the mfs location '\''/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst'\''. CID: '\''bafk2bzacedaj23muot6d7atvyytdllxst742w5g4ttdimjvirpzfp7yqmwyhs'\''' 284
Error: ipfs_mfs_add_file() could not copy the file '/var/lib/ipfs/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst' to the mfs location '/manjaro.pkg.pacman.store/stable/community/x86_64/gambas3-gb-net-curl-3.16.3-6-x86_64.pkg.tar.zst'. CID: 'bafk2bzacedaj23muot6d7atvyytdllxst742w5g4ttdimjvirpzfp7yqmwyhs'; Errorcode: 284
+ exit 1

Since repeating this command succeeds, this looks like a race condition to me, @aschmahmann.

So I'll just roll back to 0.11.
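
A possible workaround sketch (not part of the original script): since an immediately repeated files cp of the same CID succeeds, the MFS copy could be retried a few times before failing the whole run. The helper name below is just illustrative:

ipfs_mfs_cp_retry() {
	# retry the MFS copy up to three times, with a short pause in between
	local _cid="$1" _dst="$2" _try
	for _try in 1 2 3; do
		ipfs_api files cp "/ipfs/$_cid" "$_dst" > /dev/null 2>&1 && return 0
		sleep 2
	done
	return 1
}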

RubenKelevra commented on May 27, 2024

0.11 has the same issue.

rpodgorny commented on May 27, 2024

heh, sounds like the ipfs team has some bisecting to do... ;-) ...is there any upstream ticket to track this?

RubenKelevra commented on May 27, 2024

@rpodgorny yeah, have a look:

#62 (comment)

rpodgorny commented on May 27, 2024

> @rpodgorny yeah, have a look:
>
> #62 (comment)

...maybe you meant this one? -> ipfs/kubo#8694

RubenKelevra commented on May 27, 2024

> @rpodgorny yeah, have a look:
> #62 (comment)
>
> ...maybe you meant this one? -> ipfs/go-ipfs#8694

Well, that's what I referenced in my comment ;D

RubenKelevra commented on May 27, 2024

I've installed 0.9.1 on the server and the import worked flawlessly.

So it's up and running again (currently catching up a day or so).

Switching to the IPFS mirror should be safe again in an hour or two; until then you might get "downgraded package" messages or not receive updates. :)
