
Help: Yo Linux/*nix users, can I have help with defragmenting and file repairing please?

Discussion in 'Technical Discussion' started by segamaniac, Mar 5, 2022.

  1. segamaniac


    Change was necessary for a truer me. Member
    So, long story short: Windows went kaput on me, a pal of mine backed the data up (albeit mostly corrupted) onto a new SSD, and I'm now running Kubuntu on said SSD. So here's what I'm planning...
    1. Defragment my HDD (I have NEVER defragged it, although it still seems to work pretty well and shows no signs of slowing down; I'm just thinking about doing this as a precaution so I know for sure the next step will work)
    2. Back up the recovered files to my external HDD (not the whole thing, since I'm not too worried about everything and don't have a lot of things anyway)
    3. Run fsck on my SSD (although I don't know exactly how to do it yet, I'm planning on watching some videos and such on it)
    If anyone has some advice, help, or great learning resources so I can get started on the repair process, please let me know, for I am really eager to repair these files. Thank you all in advance!
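    From what I've read so far, step 3 would look roughly like the sketch below, run from a live USB since you can't check the root filesystem while it's mounted (sdXY is just a placeholder for the SSD's actual partition, and note that fsck only repairs filesystem metadata, it can't un-corrupt file contents):

    # List partitions and filesystems to find the right device name.
    lsblk -f

    # The partition must be unmounted (or checked from a live session) first.
    sudo umount /dev/sdXY

    # e2fsck is the ext2/3/4 checker that fsck dispatches to; -f forces a full
    # check even if the filesystem is marked clean.
    sudo e2fsck -f /dev/sdXY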
     
    Last edited: Mar 5, 2022
  2. biggestsonicfan


    Model2wannaB Tech Member
    1. If the drive is formatted NTFS, you will only gain value if it's defragged from Windows. I don't believe Linux can optimize NTFS the way Windows can; I've googled around before and come up empty-handed. Also, if it's now on an SSD, you do not defrag SSDs.
    1a. You never defrag a damaged drive, as I understand it. AFAIK you can't?
    2. I'm assuming the external HDD is NTFS-formatted as well. I don't know how mounting works on Kubuntu, but it should just be plug and play.
    3. I'm 99.5% positive fsck is used to repair file system errors in Linux, and you should use another install of Windows for the repair.

    Keep in mind a fresh install of Win10 isn't just going to stop working if you don't put in a license. What you could do is temporarily install a fresh Win10 on your SSD and use that to assist in the recovery (and the defrag, though at this point I don't understand what you're doing with that, and my personal recommendation is that you don't), and use Windows' CHKDSK /F /R /X instead of fsck, which I don't think will work at all.
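    For reference, that CHKDSK pass would be run from an elevated Command Prompt against whatever drive letter the data volume ends up with (D: below is just a placeholder):

    REM /F fixes filesystem errors, /R locates bad sectors and recovers readable data,
    REM /X forces the volume to dismount first if necessary.
    chkdsk D: /F /R /X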
     
  3. segamaniac


    Change was necessary for a truer me. Member
    I should've clarified a bit: the HDD isn't damaged. I had the old SSD that had the corrupted files on it wiped, and my pal installed Windows on it, if I recall correctly. The corrupted files were backed up and copied to my new SSD, which I am currently using for Kubuntu, so they now live on ext4 rather than NTFS. Also, I should mention they were backed up with EaseUS Data Recovery Wizard (I think), if that's something key to note, and you're right about mounting being plug and play on Kubuntu. Btw, nice to see you again BSF! ;)
    [attached image: upload_2022-3-5_1-2-59.png]
     
  4. Overlord


    Now playable in Smash Bros Ultimate Moderator
    Defragging a disk hasn't been necessary since XP. I have a 7-year-old Windows 7 install that's been on 24/7 in that time frame and has never been manually defragged: it's fine.

    Backing up your files on a modern Linux (which should have FUSE NTFS support out of the box) should be as simple as drag and drop.
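    If the external drive doesn't auto-mount for whatever reason, mounting it by hand is straightforward. The device name and mount point below are placeholders; ntfs-3g is the usual FUSE driver, and newer kernels also ship an in-kernel ntfs3 driver:

    # Find the external drive's partition name.
    lsblk -f

    # Mount it read/write via the NTFS-3G FUSE driver.
    sudo mkdir -p /mnt/external
    sudo mount -t ntfs-3g /dev/sdb1 /mnt/external

    # Unmount cleanly when the copy is done so everything is flushed to the disk.
    sudo umount /mnt/external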
     
  5. biggestsonicfan


    Model2wannaB Tech Member
    I've cloned and upgraded a drive from 2TB to 4TB to 8TB to 10TB, and defragging does almost nothing to help the seek time because there are so many small and large files (a backup of my Blu-ray copy of Hackers had more fragments than half its byte size!). However, I feel that defragging can be useful in some circumstances. It just seems to be the nature of the beast with how NTFS works and allocates data on disk.

    Perhaps an alternative to defragging (and what I plan to do with my 10TB drive) is manually moving all files to another hard drive and back. This rebuilds the file structure and index in such a way that the clusters used by a particular file end up contiguous on the disk.
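    A rough sketch of that copy-off-and-back approach with rsync (paths are placeholders; verify the copy before deleting anything):

    # 1. Copy everything to a staging drive, preserving permissions and timestamps (-a).
    rsync -a --progress /mnt/data/ /mnt/staging/

    # 2. After verifying the copy, clear the original volume.
    rm -rf /mnt/data/*

    # 3. Copy it all back; each file is rewritten into freshly allocated space,
    #    which tends to come back out contiguous.
    rsync -a --progress /mnt/staging/ /mnt/data/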
     
  6. President Zippy


    Zombies rule Belgium! Member
    Overlord makes a good point about defragging being mostly useless in the modern-day implementation of NTFS. NTFS does its best to avoid creating fragmentation in the first place and isn't afraid to atomically move smaller files to a different space on the disk so a large file can grow contiguously.

    With that in mind, optimizing hard disk latency is better done by throwing more RAM at the problem and letting the filesystem cache do its thing, provided you have more money than time to invest in the problem.
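    A quick way to see how much of that RAM the filesystem cache is actually getting is the buff/cache column from free (it's reclaimed automatically whenever applications need the memory):

    # buff/cache = memory currently used for the page cache and kernel buffers.
    free -h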

    I can't say one way or another whether the FUSE driver behaves the same as NTFS on Windows, but a good way to performance-test it is to create a 10GB NTFS partition and write a script that creates and deletes files of varying sizes all over the place. I need to think more carefully about it, but here is a rough draft of such a test in typical Bourne shell. Note that you will need to disable filesystem caching to get an honest view of internal fragmentation for large files.

    FSIZE_1K=1024
    FSIZE_1M=$((1024*1024))
    FSIZE_1G=$((1024*1024*1024))

    # Create 4,000,000 files, each 1K in size. For every 1,000 1K files, create a 1M file.
    # This should fill roughly 8GB, i.e. most of the disk.
    i=4000000
    head -c 1024 /dev/random > dummy1k
    while [ $i -gt 0 ]; do
        head -c $FSIZE_1K /dev/random > dummy1k_$i
        rem=$(($i % 1000))
        if [ $rem -eq 0 ]; then
            head -c $FSIZE_1M /dev/random > dummy1m_$(($i / 1000))
        fi
        i=$(($i - 1))
    done

    # Delete half of the files we just created in such a way that usually maximizes fragmentation.
    i=4000000
    i=$(($i - 1))
    while [ $i -gt 0 ]; do
        rm -f dummy1k_$i
        i=$(($i - 2))
    done

    i=4000
    i=$(($i - 1))
    while [ $i -gt 0 ]; do
        rm -f dummy1m_$i
        i=$(($i - 2))
    done

    # Lastly, try to create a 1GB file and see what happens. Some ancient filesystems can't handle
    # fragmentation and will give up if they cannot find 1GB of contiguous space. Some defragment lazily,
    # like NTFS. Some defragment eagerly, like the ext family of filesystems on most Linux distributions.
    head -c $FSIZE_1G /dev/random > dummy1g
    sts=$?
    if [ $sts -ne 0 ]; then
        echo "Attempt to create 1GB file failed with exit status $sts. This is most likely due to fragmentation,"
        echo " but for more information on the exit status, run 'man 1 head'."
    else
        echo "Successfully created 1GB dummy file, './dummy1g', run 'fsck' to check for external fragmentation."
        echo "To check for internal fragmentation, write a C program that reads the file in 1K increments using"
        echo " read(2) and measure the time of each operation. For accurate performance analysis, please disable"
        echo " filesystem caching first."
    fi
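
    On the caching note: one way to flush the Linux page cache between runs (needs root) is the drop_caches knob, and filefrag from e2fsprogs can report a file's extent count on filesystems that support the FIEMAP ioctl -- whether that works through the NTFS FUSE driver is something I'd have to verify:

    # Flush dirty data, then drop the page cache, dentries, and inodes so the
    # next read actually hits the disk.
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches

    # Inspect how many extents the test file ended up with (FIEMAP-dependent).
    filefrag -v dummy1g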
     
    Last edited: Mar 14, 2022