
Recovering data from an external USB drive

Almost two weeks ago, I tried to connect one of my external USB hard drives to my desktop, without any success. The files I cared about most were already backed up to a different disk, so I was not expecting any data loss. However, the drive held a set of benchmarks that could be useful in the future. OK, I said, it is not a high priority, but how can I recover my data?

Here are the steps I followed while trying to recover the disk:

1. The disk is a WD Passport USB drive. I noticed that when I moved the USB cable, the drive would disconnect from the PC. So the first obvious step was to try a new USB cable, but no luck. There was a small improvement in transfer speed, but nothing serious (more on that later).

2. The second thought was to open the case and connect the disk directly to the desktop PC. Unfortunately, this disk has the USB controller embedded on its board, so it cannot be attached directly to the PC with a SATA cable.

WD Passport disk controller with USB connector.
A bit more Googling revealed that newer drives also encrypt the data, even if the user never set a password.

3. Then I tried putting the disk in the refrigerator, but it didn't do anything. A word of warning here: this trick may help if the head is stuck somewhere on the platter, but some people warn that it can actually destroy your disk. If your data are not critical, or you are desperate, you can try it and see what happens.

4. The next step is to mount the disk read-only under Linux and use software recovery tools. This requires more experience than the average Windows user has, but you can use a live CD without installing anything. It is important to understand that Linux must be able to see the device, even if you can't see the actual files.

In Linux, storage devices and their partitions appear under /dev/sd*. For example, on my system the devices are:

root@korax-desktop:~# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sda4  /dev/sda5  /dev/sda6  /dev/sdb  /dev/sdb1

Here, /dev/sda is the primary disk; note that I have a dual-boot machine, so I have separate partitions for Windows. /dev/sdb is the external USB disk and /dev/sdb1 is its partition.
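If you are unsure which /dev entry corresponds to the USB disk, lsblk gives a quick overview; the columns below are standard, but the device names it prints will of course differ from mine:

```shell
# List block devices with their size, transport (usb vs sata) and model,
# which makes the external USB disk easy to spot.
lsblk -o NAME,SIZE,TRAN,MODEL
```

The TRAN column shows "usb" for USB-attached devices, which removes any guesswork before you start working on the wrong disk.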

If you can't see the partition, then you can't mount the disk (see step 6). If you can't see the disk at all, you are out of luck. You could try swapping in the controller board from an identical model, but that will not work with WD Passport disks because of the encryption. Another option with external USB disks is to solder the USB cable wires directly to the controller, but that requires some experience with electronics. I'm going to analyze this case in a future article.

All new Linux distributions automatically mount the disk read-write. Even if Linux is able to mount the disk and see the files, I strongly suggest unmounting it and remounting it read-only. To do so, find where it is mounted and force the unmount. For example, as root:

root@korax-desktop:~# mount | grep sdb
/dev/sdb1 on /media/wd type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096)

root@korax-desktop:~# umount --force /media/wd
root@korax-desktop:~# mkdir  /media/wd-recover/
root@korax-desktop:~# mount  -o ro /dev/sdb1 /media/wd-recover/

After the mount, you should be able to see the files, even if at a low speed.
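To double-check that the remount really is read-only, you can inspect the options column in /proc/mounts. Here is a minimal sketch; the helper name is my own, and the second argument exists only so the function is easy to test against a sample file:

```shell
# is_readonly MOUNTPOINT [MOUNTS_FILE]
# Returns 0 if the mount point is listed with the "ro" option.
# MOUNTS_FILE defaults to /proc/mounts.
is_readonly() {
    local mp="$1" tab="${2:-/proc/mounts}"
    awk -v mp="$mp" '
        $2 == mp { n = split($4, o, ","); for (i = 1; i <= n; i++) if (o[i] == "ro") ok = 1 }
        END { exit !ok }' "$tab"
}

# Example: is_readonly /media/wd-recover && echo "read-only, safe to work"
```

Alternatively, `findmnt -o OPTIONS /media/wd-recover` prints the mount options directly.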

5. If you can mount the disk, the next step is to try to recover files from it. There are two well-known tools you should use: ddrescue and PhotoRec (which ships with TestDisk). Since you can mount the disk, use ddrescue to copy the files that resist copying (in this example, file.iso):

Create a directory. If you are using a live CD, connect another external disk and create the recovery directory there:

mkdir recover
cd recover

First, recover as much data as possible without retrying bad sectors. This fast first pass gets the bulk of the data quickly:

ddrescue --no-split -v -c 256 -A /media/wd-recover/file.iso file.iso logfile

Now let it retry previous errors 3 times, using uncached reads. This can be really slow:

ddrescue -v -d -r 3 /media/wd-recover/file.iso file.iso logfile

Now let it retry previous errors 5 times, using uncached reads:

ddrescue -v -d -r 5 -c 1 /media/wd-recover/file.iso file.iso logfile

Now let it retry previous errors 5 times, using uncached reads in reverse:

ddrescue -v -d -r 5 -c 1 -R /media/wd-recover/file.iso file.iso logfile

You can stop the process at any time and resume later; the progress and status are saved in the logfile.
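The four passes above can be wrapped into a single sketch of a script. Because ddrescue records its progress in the logfile, re-running the same commands simply resumes where the last run stopped. The function name and argument layout are my own, not part of ddrescue:

```shell
#!/bin/bash
# recover_file SOURCE DEST LOGFILE
# Runs the ddrescue passes in order: one fast pass without splitting
# bad areas, then increasingly aggressive direct-read retry passes,
# finishing with a reverse pass. Safe to re-run: ddrescue resumes
# from the state saved in LOGFILE.
recover_file() {
    local src="$1" dst="$2" log="$3"
    ddrescue --no-split -v -c 256 -A "$src" "$dst" "$log"
    ddrescue -v -d -r 3              "$src" "$dst" "$log"
    ddrescue -v -d -r 5 -c 1         "$src" "$dst" "$log"
    ddrescue -v -d -r 5 -c 1 -R      "$src" "$dst" "$log"
}

# Example: recover_file /media/wd-recover/file.iso file.iso logfile
```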

6. If you can't see the partition, then you have to create an image of the whole disk and run photorec on it. Follow the same process as in step 5 with ddrescue, but this time copy the entire disk. The downside is that you must have enough free space; in addition, disks of 1 TB or more take a long time to copy. For example:

First, recover as much data as possible without retrying bad sectors:

ddrescue --no-split -v -c 256 -r 1 -A /dev/sdb image.dd logfile

Now let it retry previous errors 3 times, using uncached reads. This can be really slow:

ddrescue -v -d -r 3  /dev/sdb image.dd logfile

Next, use testdisk to restore the partitions or recover deleted files:

testdisk image.dd

If you are able to restore the partition table (check with fdisk -l image.dd), you can mount the image directly on your machine and read the files.

For example, if the output of fdisk is:

Units = sectors of 1 * 512 = 512 bytes

Device Boot      Start         End      Blocks   Id  System
image.dd   *         128     8015999     4007936    b  W95 FAT32

Since the partition starts at sector 128 and each sector is 512 bytes, the offset is 512 × 128 = 65536:

mount -o loop,offset=65536 image.dd /mnt/tmp
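The offset is just the start sector multiplied by the sector size, so you can let the shell do the arithmetic instead of computing it by hand (the sector values here are the ones from the fdisk output above):

```shell
# offset = start sector * sector size (from fdisk: start=128, 512-byte sectors)
SECTOR_SIZE=512
START=128
OFFSET=$((SECTOR_SIZE * START))
echo "$OFFSET"    # 65536

# Then mount with the computed offset (run as root):
# mount -o loop,offset=$OFFSET image.dd /mnt/tmp
```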

7. If you can't restore the partitions, then run photorec directly on the image file:

photorec image.dd 

Follow the instructions to recover the data (I will analyze this in a future article).
