File recovery for APFS
Hi, I've tried afro for recovering deleted files from a raw APFS image (500 GB), but the result is always that only the "Preboot", "Recovery" and "VM" volumes get recovered, not the volume where the OS is:
ls -l mac001.img.carve_apsb.extracted/
total 0
drwxrwxrwx 4 xx xxx 128 Apr 15 18:50 Preboot
drwxrwxrwx 4 xx xxx 128 Apr 15 18:49 Recovery
drwxrwxrwx 5 xx xxx 160 Apr 15 18:49 VM
The log for carving kept showing errors like:
INFO Found apsb in block 1060460
INFO Found apsb in block 1060469
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1060657
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1060688
INFO Found apsb in block 1060897
INFO 'ApfsSuperblockT' object has no attribute 'btn_flags'
INFO Found apsb in block 1060965
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1061039
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1061457
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1061578
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1061698
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1061742
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1061792
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1062839
INFO 'PointerValT' object has no attribute 'ov_paddr'
INFO Found apsb in block 1062842
INFO Found apsb in block 1063444
INFO 'BtreeNodePhysT' object has no attribute 'om_tree_oid'
INFO Found apsb in block 1063449
INFO 'BtreeNodePhysT' object has no attribute 'om_tree_oid'
INFO Found apsb in block 1063666
INFO 'BtreeNodePhysT' object has no attribute 'om_tree_oid'
INFO Found apsb in block 1063709
INFO Found apsb in block 1063782
INFO 'BtreeNodePhysT' object has no attribute 'om_tree_oid'
INFO Found apsb in block 1064320
INFO 'BtreeNodePhysT' object has no attribute 'om_tree_oid'
INFO Found apsb in block 1066450
INFO 'BtreeNodePhysT' object has no attribute 'om_tree_oid'
INFO Found apsb in block 1068889
INFO 'Obj' object has no attribute 'body'
And I tried the "-m parse" option to see whether the same errors appear:
afro -o 409640 -m parse -l DEBUG -e bodyfile disk0\ Image\ raw.00001
Traceback (most recent call last):
File "/usr/local/bin/afro", line 11, in <module>
load_entry_point('afro==0.2', 'console_scripts', 'afro')()
File "/usr/local/lib/python3.7/site-packages/afro-0.2-py3.7.egg/afro/__init__.py", line 115, in main
File "/usr/local/lib/python3.7/site-packages/afro-0.2-py3.7.egg/afro/__init__.py", line 74, in extract
File "/usr/local/lib/python3.7/site-packages/afro-0.2-py3.7.egg/afro/parse.py", line 74, in parse
File "/usr/local/lib/python3.7/site-packages/afro-0.2-py3.7.egg/afro/parse.py", line 59, in parse_nxsb
File "/usr/local/lib/python3.7/site-packages/afro-0.2-py3.7.egg/afro/parse.py", line 46, in parse_apsb
AttributeError: 'PointerValT' object has no attribute 'ov_paddr'
Would you mind explaining what could be the reason for these "no attribute" errors? (Does parsing the OS volume lead to null objects?) Thanks very much!
Looks like I need to pack the entire partition into a dmg file first or something?
Hello, I want to find some software to view the binary data of the wsdf.dmg file. Do you know of one?
Does this tool support recovery of compressed files e.g. compressed using a tool like https://github.com/RJVB/afsctool?
I took a look at Sleuth Kit -
https://github.com/sleuthkit/sleuthkit/tree/develop/win32/mmcat
It seems to only run on PC.
I'm using a Mac, and it's possible to clone a drive/partition with dd:
sudo dd if=/dev/disk2 of=test.dd bs=1m
I am testing this out. These are the instructions - I was going to update the readme with a link to help others -
https://www.cyberciti.biz/faq/how-to-create-disk-image-on-mac-os-x-with-dd-command/
Presumably this is the same as mmcat.
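Whatever tool creates the image, it is worth verifying it afterwards. A minimal Python sketch using the standard library (the paths in the usage comment are placeholders, and hashing a raw device only matches when the whole device was imaged):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so multi-gigabyte images fit in memory."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            h.update(data)
    return h.hexdigest()

# Usage sketch (hypothetical paths): compare the source device to the image.
# assert sha256_of('/dev/rdisk2') == sha256_of('test.dd')
```

If the hashes disagree, the image is already suspect and there is no point feeding it to afro.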
I have some questions, rather than an issue, that I'd love to get some help with. If this isn't the proper forum for that, please let me know. I have been searching for information regarding APFS data recovery and am having a hard time finding any.
For brevity, I'll start by pasting the output of mmls of disk4, a clone (created with DDRescue) of disk0, the HD portion of a Fusion Drive that, due to catastrophic failure, is now missing its SSD portion.
icemo$ sudo mmls /dev/disk4
GUID Partition Table (EFI)
Offset Sector: 0
Units are in 512-byte sectors
Slot Start End Length Description
000: Meta 0000000000 0000000000 0000000001 Safety Table
001: ------- 0000000000 0000000039 0000000040 Unallocated
002: Meta 0000000001 0000000001 0000000001 GPT Header
003: Meta 0000000002 0000000033 0000000032 Partition Table
004: 000 0000000040 0000409639 0000409600 EFI System Partition
005: 001 0000409640 3907029127 3906619488
006: ------- 3907029128 9767475199 5860446072 Unallocated
The ~2 TB of data from the HD has been cloned to a 5 TB USB external drive. Partition number 005 contains the data that I want to retrieve. In diskutil, the type for 005 is listed as Apple_APFS.
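Given an mmls listing like the one above, one cheap sanity check before running afro is to confirm an APFS container really starts where you think it does: the container superblock carries the magic bytes b'NXSB' 32 bytes into its block, right after the APFS object header (8-byte checksum, oid, xid, then type/subtype). A hedged sketch; the 512-byte sector size comes from the mmls header, and whether afro's -o flag expects sectors or bytes should be checked against its readme:

```python
def looks_like_apfs(image_path, start_sector, sector_size=512):
    """True if the block at start_sector bears the APFS container
    superblock magic b'NXSB' (at byte 32, after the object header)."""
    with open(image_path, 'rb') as f:
        f.seek(start_sector * sector_size)
        block = f.read(4096)
    return block[32:36] == b'NXSB'

# For partition 005 above: looks_like_apfs('disk4.img', 409640)
```

If this returns False at the partition start, the offset (or the sector size) is wrong before any parsing begins.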
I looked at TestDisk, but it doesn't seem capable of dealing with APFS. I did run PhotoRec and was able to recover ~1.3 million files but, of course, they did not have their original file names and there is no file structure anymore. This is helpful, but not optimal.
Then I found and installed Afro (and Sleuthkit) and experimented with the example you provided, wsdf.dmg. That worked well although I have to admit I cannot for the life of me figure out how to output the bodyfile in some sort of human readable form. That's not important, though.
So, my questions are:
If there is more information you need or I have not been clear enough, please, don't hesitate to ask.
Any help you are able to provide with these questions is greatly appreciated and thanks for your time and efforts on this matter.
So I have a failed external USB SSD which was running APFS / macOS Mojave. (I plugged it into the wrong USB port after the USB disconnected, and it died.)
I can easily clone this using Clonezilla to an ISO file, or with dd. It doesn't seem possible to use the drive directly.
Do you have any advice?
While I have been able to dig through the drive to detect certain file types, it's not helping me recover the failed partition / directory structure.
CAPTURE 7/23/2018 3:51:19 PM
Recovery Details
Drive:Physical hard drive 3.64 TB (DISK1:)
# MFT:0
# $Mft:0
# $MftMirr:0
# Index:0
# NT boot sectors:0
# FAT32 boot sectors:2
# FAT16 boot sectors:0
# exFAT boot sectors:0
# HFS+ volume headers:0
# Unused HFS volume headers:0
# HFS node ends:2
# HFS header nodes:0
# APFS NXSBs:134
# APFS APSBs:0
# APFS Blocks:196
# EXT Superblocks:0
# XFS Superblocks:0
# 12-bit FATs:0
# 16-bit FATs:0
# 32-bit FATs:0
# EXFAT FATs:0
# Directory starts:11
# Directory conts:111
# exFAT Directory roots:0
# exFAT Directory starts:0
# exFAT Directory conts:0
Dir tracker list:9 entries
FAT analyse list:8 entries
FAT cache:4 entries
CrissCross:Level 1, 104,264 sectors in 18 chunks
CrossCrossMemo:L1/Br100/Dry0
Sectors starts:63
Cluster starts:356
All clusters:338
Cl sizes:0
File system list:2 items
Selected file system:FAT32 at sector 4,112, cluster size 8 (511 MB) more...
ID:253129999 created 7/23/2018 3:50:30 PM
I spent a day creating an image using dd, e.g.
dd if=/dev/sda of=/mnt/nfs/backup/harddrive.img
https://major.io/2010/12/14/mounting-a-raw-partition-file-made-with-dd-or-dd_rescue-in-linux/
I think it's important that users check that the image is valid before hitting any snags with afro.
I noticed the test folder has some complex building of images; I managed to spit out this one, but the tests reference a data folder which fails, so no files are added, which is a pity. Otherwise flawless.
It would be neat just to have a one-liner to create an image.
file image_2G_4.dmg
image_2G_4.dmg: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x3ff,254,63), end-CHS (0x3ff,254,63), startsector 1, 4194303 sectors, extended partition table (last)
Running the file command will dump diagnostics for the image.
Unfortunately, for my cloned drive it doesn't read this data.
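Beyond file(1), a quick programmatic probe can tell whether an image even carries a usable GPT: the partition table header at LBA 1 starts with the 8-byte signature "EFI PART". A minimal sketch; the 512-byte sector size is an assumption, and 4K-native disks put the header at byte 4096 instead:

```python
def has_gpt_header(path, sector_size=512):
    """True if the image carries the GPT header signature at LBA 1."""
    with open(path, 'rb') as f:
        f.seek(sector_size)
        return f.read(8) == b'EFI PART'
```

A False here means the clone is truncated, misaligned, or the table itself is damaged, which is worth knowing before blaming afro.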
Instead I hit this:
afro -e files parse /Volumes/4TB-WD/1tb-SAM.dmg
Traceback (most recent call last):
File "/usr/local/bin/afro", line 11, in <module>
load_entry_point('afro==0.1', 'console_scripts', 'afro')()
File "/usr/local/lib/python3.6/site-packages/afro-0.1-py3.6.egg/afro/__init__.py", line 139, in main
File "/usr/local/lib/python3.6/site-packages/afro-0.1-py3.6.egg/afro/__init__.py", line 72, in extract
File "/usr/local/lib/python3.6/site-packages/afro-0.1-py3.6.egg/afro/parse.py", line 69, in parse
File "/usr/local/lib/python3.6/site-packages/afro-0.1-py3.6.egg/afro/parse.py", line 52, in parse_nxsb
File "/usr/local/lib/python3.6/site-packages/afro-0.1-py3.6.egg/afro/libapfs/low.py", line 11, in get_nxsb_objects
AttributeError: 'Node' object has no attribute 'root'
I have to go back to the drawing board. The current status of my drive is ERROR -69808.
Reading the article above, it seems the expert deleted the partition and then recreated it exactly.
I'll need to look into this further.
diskutil ap list
APFS Containers (2 found)
|
+-- Container disk1 18635C24-A6E3-4EE4-914C-1477D9C821C8
| ====================================================
| APFS Container Reference: disk1
| Size (Capacity Ceiling): 249485074432 B (249.5 GB)
| Minimum Size: 236677918720 B (236.7 GB)
| Capacity In Use By Volumes: 229208588288 B (229.2 GB) (91.9% used)
| Capacity Not Allocated: 20276486144 B (20.3 GB) (8.1% free)
| |
| +-< Physical Store disk0s2 6F10FF5B-2195-4DF3-B545-FD2A8255CBB5
| | -----------------------------------------------------------
| | APFS Physical Store Disk: disk0s2
| | Size: 249485074432 B (249.5 GB)
| |
| +-> Volume disk1s1 4D6A5824-3A80-3C11-88CB-376C8B035BF6
| | ---------------------------------------------------
| | APFS Volume Disk (Role): disk1s1 (No specific role)
| | Name: Macintosh HD (Case-insensitive)
| | Mount Point: /
| | Capacity Consumed: 226377547776 B (226.4 GB)
| | FileVault: Yes (Unlocked)
| |
| +-> Volume disk1s2 52836323-51FD-46D8-ABB5-B6FE206680F9
| | ---------------------------------------------------
| | APFS Volume Disk (Role): disk1s2 (Preboot)
| | Name: Preboot (Case-insensitive)
| | Mount Point: Not Mounted
| | Capacity Consumed: 22970368 B (23.0 MB)
| | FileVault: No
| |
| +-> Volume disk1s3 F857EBE5-4C37-479D-8332-DD3529AFBB9B
| | ---------------------------------------------------
| | APFS Volume Disk (Role): disk1s3 (Recovery)
| | Name: Recovery (Case-insensitive)
| | Mount Point: Not Mounted
| | Capacity Consumed: 519090176 B (519.1 MB)
| | FileVault: No
| |
| +-> Volume disk1s4 29185EDF-C4B2-47DB-AF0B-141744407A0B
| ---------------------------------------------------
| APFS Volume Disk (Role): disk1s4 (VM)
| Name: VM (Case-insensitive)
| Mount Point: /private/var/vm
| Capacity Consumed: 2150735872 B (2.2 GB)
| FileVault: No
|
+-- Container ERROR -69808
======================
APFS Container Reference: disk4
Size (Capacity Ceiling): ERROR -69620
Capacity In Use By Volumes: ERROR -69524
Capacity Not Allocated: ERROR -69524
|
+-< Physical Store disk3s2 B8843099-2B4F-4D1B-909A-DB5B1B516B9C
| -----------------------------------------------------------
| APFS Physical Store Disk: disk3s2
| Size: 999666946048 B (999.7 GB)
|
+-> No Volumes
sudo gpt -r show disk3
start size index contents
0 244059313
Incidentally, I'm using a cloned backup drive, so I'm happy to run reckless commands and blow stuff up.
Looking into fdisk.
diskutil verifyVolume disk3
Started file system verification on disk3
Verifying storage system
Performing fsck_apfs -n -x /dev/disk2s2
Checking volume
Checking the container superblock
Checking the EFI jumpstart record
Checking the space manager
error: (oid 0x8790) cib: invalid o_xid (0x63f69)
error: failed to read spaceman cib 0x8790
Space manager is invalid
The volume /dev/disk2s2 could not be verified completely
Storage system check exit code is 0
Finished file system verification on disk3
Why are the results different with the same test file? I find different files in the same test dmg file.
Hi there, I am here because a utility deleted some 'work in progress' files and I'd like to have a chance to review them before moving on. I think that there were some useful changes that did not get committed to git before they were deleted.
I've tried some of the commercially available tools but they want a lot of $s for a modest recovery task. Here I am hoping that maybe afro can help me out.
So right now I have a 500g Macintosh HD partition which is my system drive and holds the deleted files in question. How do I proceed?
$ mount
/dev/disk1s1 on / (apfs, NFS exported, local, journaled)
devfs on /dev (devfs, local, nobrowse)
/dev/disk1s4 on /private/var/vm (apfs, local, noexec, journaled, noatime, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
I tried mmls (installed sleuthkit via homebrew) on /dev/disk1s4, /dev/disk1s1 and /dev/disk1 - not surprisingly I get told resource busy.
Do you normally boot into another OS and work on the drive offline? How about the fact that the drive is encrypted? Should I expect to "dd" my unencrypted /dev/disk1s4 to a file and then use afro on that file?
Any pointers are appreciated and I can respond in kind with a completed how-to if you wish.
Carefully looking through this file, it seems there may be some bugs in apfs.fsk:
https://github.com/ydkhatri/APFS_010/blob/master/apfs.010.bt#L455
@cugu I see that you traverse every block to search for the recoverable files, but there are many unused blocks at the back of the volume, so how can we finish the search in advance?
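One plausible shortcut, sketched below (this is not afro's actual code): reject zero-filled blocks before any structure parsing, and stop after a long run of them, on the assumption that the tail of the volume is empty. The 4096-byte block size and the b'APSB' magic at byte 32 follow the APFS on-disk layout; the run-length threshold is a made-up tuning knob:

```python
def scan_apsb(path, block_size=4096, stop_after_empty=100_000):
    """Yield block numbers whose block bears the volume superblock magic
    b'APSB', giving up after a long run of zero-filled blocks."""
    empty_run = 0
    with open(path, 'rb') as f:
        block_no = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            if not block.strip(b'\x00'):
                # Cheap rejection of zeroed blocks; count the run.
                empty_run += 1
                if empty_run >= stop_after_empty:
                    break
            else:
                empty_run = 0
                if block[32:36] == b'APSB':
                    yield block_no
            block_no += 1
```

Caveat: APFS does not guarantee that free blocks are zeroed, so this heuristic can terminate too early on some images; a full pass stays the safe default.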
Do you know what is going wrong here?
I had an apfs volume where I messed around with the gpt and ended up deleting it, thereby losing my whole mac os filesystem and the data on it.
I followed the readme and cloned my ssd into a .dmg file and ran mmls with the following results:
GUID Partition Table (EFI)
Offset Sector: 0
Units are in 4096-byte sectors
Slot Start End Length Description
000: Meta 0000000000 0000000000 0000000001 Safety Table
001: ------- 0000000000 0000000255 0000000256 Unallocated
002: Meta 0000000001 0000000001 0000000001 GPT Header
003: Meta 0000000002 0000000005 0000000004 Partition Table
004: 000 0000000256 0000077055 0000076800 EFI System Partition
005: ------- 0000076800 0122138132 0122061333 Unallocated
006: 002 0000076806 0122138127 0122061322
007: 001 0000077056 0000076799 18446744073709551360 EFI System Partition
Assuming the Unallocated slot is where afro should parse, I ran:
sudo afro -o 0000076806 -e files ssd.dmg
and keep getting this error:
mint@mint:/media/mint/4a980bb4-4e71-413b-bc7a-9af81f72e918$ sudo afro -o 0000076806 -e files ssd.dmg
Traceback (most recent call last):
File "/usr/local/bin/afro", line 11, in <module>
load_entry_point('afro==0.2', 'console_scripts', 'afro')()
File "/usr/local/lib/python3.8/dist-packages/afro-0.2-py3.8.egg/afro/__init__.py", line 117, in main
File "/usr/local/lib/python3.8/dist-packages/afro-0.2-py3.8.egg/afro/__init__.py", line 71, in extract
File "/usr/local/lib/python3.8/dist-packages/afro-0.2-py3.8.egg/afro/libapfs/apfs.py", line 1025, in block_size
AttributeError: 'Obj' object has no attribute 'body'
If anyone can help me with this issue it would be really helpful.
p.s.
I don't know too much about data recovery; what is the difference between the two methods, parse and carve?
From the decoding white paper, it states:
The APFS structure potentially provides the forensic investigator with the possibility to recover earlier container states. After several tests, both manually and programmatically, we have been able to recover the container from previous checkpoints and, by comparing recovered stages, we are able to discover previously existing files and folders.
Using the 010 hex editor, it seems straightforward to me: if I capture the delta in the APFS file internals between the changed checkpoints, I could then attempt to patch the hex values manually on the failed drive, and a flashback to a healthier drive state may unfold. Going to try to use git large file storage to discard changes.
ydkhatri/APFS_010#2
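The checkpoint-diffing idea above has to start somewhere: listing every candidate container superblock together with its transaction id (xid), so older checkpoints can be compared against newer ones. A sketch assuming 4096-byte blocks; in the APFS object header the xid is the 8-byte little-endian field at byte 16 and the NXSB magic sits at byte 32:

```python
import struct

def list_nxsb_checkpoints(path, block_size=4096):
    """Return (block_number, xid) pairs for blocks bearing the NXSB
    magic, newest transaction first."""
    found = []
    with open(path, 'rb') as f:
        block_no = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            if block[32:36] == b'NXSB':
                # o_xid lives at byte 16 of the object header.
                xid = struct.unpack_from('<Q', block, 16)[0]
                found.append((block_no, xid))
            block_no += 1
    return sorted(found, key=lambda pair: pair[1], reverse=True)
```

With that list, two checkpoints can be dumped and diffed in a hex editor to see which structures changed between transactions.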
Back story
So, from testing out different recovery software, I think I reduced my chances of successfully recovering files, so a pointer telling new users to create an image of the failed drive first is, I think, important. I tried using Clonezilla a second time last night to create another backup, but this time it failed. (I was trying to write the output to an NTFS drive as a dmg / dd file directly, rather than cloning the drive.)
I guess a simple dd of the failed drive might be easiest, but reading through this morning - https://medium.com/@peterburkimsher/saving-a-friend-with-apfs-data-recovery-89cffaabdadd - ddrescue, which also provides a log and pointers to failed/bad sectors, will help more than dd out of the box.
Precautions When Cloning with ddrescue
ddrescue is a powerful utility and should only be utilized by experienced Linux users. It can cause damage to a failing hard drive in some circumstances. It can also overwrite data when used incorrectly. If you really need your data and your hard drive is failing, your best bet is a professional data recovery service. https://datarecovery.com/rd/how-to-clone-hard-disks-with-ddrescue/
Using ddrescue will provide a log that will allow you to capture bad sectors.
It can also recover from interruption.
Why this issue? I think there are a lot of people with failed APFS drives, judging from the Apple discussion boards and Stack Overflow. Searching Google yields a few pieces of software (iBoysoft, Disk Drill) and YouTube videos saying run this software and everything will be fixed. After wasting days on this, it would have been a better approach to know the best approach up front. I believe this approach of reading through the APFS structures is superior, but it needs to be performed on a disk image. I just need a pointer on the best way to create one.