Linux

Daniel Weschke

January 18, 2019

1 XFCE

1.1 Set up startup configuration file and environment variables

1.1.1 Add a directory to the path variable

On a fresh installation, copy the default configuration file into your home directory

mkdir -p ~/.config/xfce4
cp /etc/xdg/xfce4/xinitrc ~/.config/xfce4

Then edit ~/.config/xfce4/xinitrc and add the following near the top of the file

if [ -d "${HOME}/bin" ] ; then
  PATH="${HOME}/bin:${PATH}"
fi
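The test-and-prepend logic above can be tried in any POSIX shell; a minimal sketch (the mkdir is only there so the demo works even when ~/bin does not exist yet — the xinitrc snippet itself would simply skip the prepend):

```shell
# Demo of the xinitrc snippet: prepend ~/bin to PATH if the directory exists.
mkdir -p "${HOME}/bin"        # demo only; not part of the xinitrc snippet
if [ -d "${HOME}/bin" ] ; then
  PATH="${HOME}/bin:${PATH}"
fi
# The first PATH entry is now ~/bin:
echo "$PATH" | cut -d: -f1
```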

2 Mount RAID1 disk using mdadm

  1. See whether the disk is a Linux RAID member:

    $ sudo fdisk -l /dev/sdb
    Disk /dev/sdb: 1,84 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: EARX-00PASB0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x000908d1
    
    Device     Boot   Start        End    Sectors  Size Id Type
    /dev/sdb1           256    4980735    4980480  2,4G fd Linux raid autodetect
    /dev/sdb2       4980736    9175039    4194304    2G fd Linux raid autodetect
    /dev/sdb3       9437184 3907015007 3897577824  1,8T  f W95 Ext'd (LBA)
    /dev/sdb5       9453280 3907015007 3897561728  1,8T fd Linux raid autodetect
    
  2. A direct mount is not possible

    $ sudo mkdir /mnt/hdd
    $ sudo mount /dev/sdb5 /mnt/hdd
    mount: /mnt/hdd: unknown filesystem type 'linux_raid_member'.
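The on-disk signature can also be read without attempting a mount; a hedged check (the device path is from this example and may be absent on another machine, hence the fallback):

```shell
# Report the signature type without mounting (here it would be linux_raid_member)
dev=/dev/sdb5
lsblk -no FSTYPE "$dev" 2>/dev/null || echo "device $dev not present"
```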
    
  3. Examine with mdadm

    $ sudo mdadm --examine /dev/sdb5
    /dev/sdb5:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : f5504213:137a9a81:b3ca09bd:4cbb4a97
               Name : DiskStation:2
      Creation Time : Fri Apr  6 03:17:12 2012
         Raid Level : raid1
       Raid Devices : 1
    
     Avail Dev Size : 3897559680 (1858.50 GiB 1995.55 GB)
         Array Size : 1948779648 (1858.50 GiB 1995.55 GB)
      Used Dev Size : 3897559296 (1858.50 GiB 1995.55 GB)
        Data Offset : 2048 sectors
       Super Offset : 8 sectors
       Unused Space : before=1968 sectors, after=384 sectors
              State : active
        Device UUID : 29b8b488:b282823a:b93d6f12:cfe9f2c4
    
        Update Time : Thu Apr  9 19:28:06 2020
           Checksum : c5e60bec - correct
             Events : 3
    
    
       Device Role : Active device 0
       Array State : A ('A' == active, '.' == missing, 'R' == replacing)
    
  4. Before creating a virtual device, check which md devices are already in use. Here only /dev/md127 exists; this is a plain md device, and auto-assembled arrays usually appear as /dev/md127.

    $ ls /dev/md*
    /dev/md127
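The kernel's own view of active arrays is in /proc/mdstat; a hedged one-liner that falls back gracefully on machines where the md driver is not loaded:

```shell
# Active md arrays as the kernel sees them; absent when none are assembled
cat /proc/mdstat 2>/dev/null || echo "no md arrays (md driver not loaded)"
```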
    
  5. If a virtual device is present, check whether it is the RAID disk

    $ sudo mdadm --detail /dev/md127
    /dev/md127:
               Version : 1.2
         Creation Time : Fri Apr  6 03:17:12 2012
            Raid Level : raid1
            Array Size : 1948779648 (1858.50 GiB 1995.55 GB)
         Used Dev Size : 1948779648 (1858.50 GiB 1995.55 GB)
          Raid Devices : 1
         Total Devices : 1
           Persistence : Superblock is persistent
    
           Update Time : Thu Apr  9 19:31:42 2020
                 State : clean
        Active Devices : 1
       Working Devices : 1
        Failed Devices : 0
         Spare Devices : 0
    
    Consistency Policy : resync
    
        Number   Major   Minor   RaidDevice State
           0       8        5        0      active sync
    
  6. If the check in point 5 shows the same RAID disk, stop the virtual drive first

    $ sudo mdadm --stop /dev/md127
    

    If stopping fails

    $ sudo mdadm --stop /dev/md127
    mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?
    

    Ensure there’s nothing in the output of vgdisplay

    $ sudo vgdisplay
    

    List the Device Mapper devices named after the volume group

    $ ls /dev/mapper/
    control  vg1000-lv
    

    Remove the volume group's Device Mapper device

    $ sudo dmsetup remove vg1000-lv
    

    Check that it is gone

    $ ls /dev/mapper/
    control
    

    Try stopping again

    $ sudo mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
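When --stop keeps reporting the device as busy, sysfs shows which kernel devices still hold it open; a hedged check (the device name is from this example, hence the fallback):

```shell
# List holders of md127; a dm-* entry here means a Device Mapper device
# (e.g. the LVM volume) must be removed before mdadm --stop can succeed
ls /sys/block/md127/holders 2>/dev/null || echo "md127 not present"
```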
    
  7. If the device in point 5 is not the RAID disk, create a virtual device using mdadm

    $ sudo mdadm -A -R /dev/md11 /dev/sdb5
    mdadm: /dev/md11 has been started with 1 drive.
    
  8. Mount the virtual drive

    $ sudo mount /dev/md127 /mnt/hdd/
    

    If the mount point is busy, see point 6 on removing a Device Mapper device

    $ sudo mount /dev/md127 /mnt/hdd/
    mount: /mnt/hdd: /dev/md127 already mounted or mount point busy.
    
    $ sudo mount /dev/md11 /mnt/hdd/
    mount: /mnt/hdd: unknown filesystem type 'LVM2_member'.
    
    $ sudo lvmdiskscan
      /dev/nvme0n1   [     931,51 GiB]
      /dev/nvme0n1p1 [    <862,73 GiB]
      /dev/nvme0n1p2 [      68,78 GiB]
      /dev/md11      [       1,81 TiB] LVM physical volume
      0 disks
      3 partitions
      0 LVM physical volume whole disks
      1 LVM physical volume
    
    $ sudo lvscan
      inactive          '/dev/vg1000/lv' [1,81 TiB] inherit
    
    $ sudo vgchange -ay
      1 logical volume(s) in volume group "vg1000" now active
    
    $ sudo lvscan
      ACTIVE            '/dev/vg1000/lv' [1,81 TiB] inherit
    
    $ sudo mount /dev/vg1000/lv /mnt/hdd
    

    Alternatively, find the kernel device name (KNAME) dm-0

    $ lsblk --output NAME,KNAME,TYPE,SIZE,MOUNTPOINT /dev/sdb
    NAME            KNAME TYPE   SIZE MOUNTPOINT
    sdb             sdb   disk   1,8T
    ├─sdb1          sdb1  part   2,4G
    ├─sdb2          sdb2  part     2G
    ├─sdb3          sdb3  part     1K
    └─sdb5          sdb5  part   1,8T
      └─md11        md11  raid1  1,8T
        └─vg1000-lv dm-0  lvm    1,8T
    
    $ sudo mount /dev/dm-0 /mnt/hdd
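The whole path of points 7 and 8 can be collected into one helper; a sketch only, with the device and volume names (/dev/md11, vg1000/lv, /mnt/hdd) taken from this example rather than detected:

```shell
# Hedged summary of points 7-8: assemble the RAID1 member, activate LVM, mount.
# Defining the function has no side effects; call it manually when needed.
mount_raid1_lvm() {
  member=$1                             # RAID member partition, e.g. /dev/sdb5
  sudo mdadm -A -R /dev/md11 "$member"  # assemble a one-drive RAID1 array
  sudo vgchange -ay                     # activate volume groups found on it
  sudo mount /dev/vg1000/lv /mnt/hdd    # mount the logical volume
}
```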
    
  9. If removing the VG fails (here vg1000-lv is the same device as /dev/dm-0)

    $ sudo dmsetup remove vg1000-lv
    device-mapper: remove ioctl on vg1000-lv  failed: Device or resource busy
    Command failed.
    
  10. Unmount. If the target is busy, -l detaches lazily (cleanup happens once it is no longer in use) and -f forces the unmount (mainly useful for unreachable NFS mounts)

    $ sudo umount /mnt/hdd
    umount: /mnt/hdd: target is busy.
    
    $ sudo umount -l /mnt/hdd
    $ sudo umount -f /mnt/hdd
    
3 Mount NTFS disk

  1. If the disk mounts read-only

    $ sudo mount /dev/sda1 /mnt/hdd
    The disk contains an unclean file system (0, 0).
    Metadata kept in Windows cache, refused to mount.
    Falling back to read-only mount because the NTFS partition is in an
    unsafe state. Please resume and shutdown Windows fully (no hibernation
    or fast restarting.)
    
    $ sudo mount -o rw /dev/sda1 /mnt/hdd
    
  2. If $MFTMirr does not match $MFT

    $ sudo mount /dev/sda1 /mnt/hdd
    $MFTMirr does not match $MFT (record 3).
    Failed to mount '/dev/sda1': Input/output error
    NTFS is either inconsistent, or there is a hardware fault, or it's a
    SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
    then reboot into Windows twice. The usage of the /f parameter is very
    important! If the device is a SoftRAID/FakeRAID then first activate
    it and mount a different device under the /dev/mapper/ directory, (e.g.
    /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
    for more details.
    
    Fix common NTFS errors (-b clears the list of bad sectors, -d clears the dirty flag)

    $ sudo ntfsfix -b -d /dev/sda1
    
    If that is not enough, analyze and recover the disk with testdisk

    $ sudo testdisk
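After any of these repairs it is worth confirming how the filesystem actually mounted; a hedged check (the mount point is from this example, hence the fallback):

```shell
# Show the mount options in effect; "ro" means the fallback read-only mount
findmnt -no OPTIONS /mnt/hdd 2>/dev/null || echo "/mnt/hdd is not mounted"
```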