XFS Info
Installation
Where it's not already installed, it is available via the configured repositories (CentOS/Fedora/Ubuntu/RHEL7+):
yum install xfsprogs
or:
apt-get update && apt-get install xfsprogs
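To confirm the tools are available, any of the utilities can be asked for its version, e.g.:
xfs_repair -V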
Operational Commands
Creating
# mkfs.xfs /dev/sdx1
# mkfs.xfs -d su=64k,sw=2 /dev/sdx1
Stripe size and width are only useful for local hardware RAID arrays, not VMware vDisks, cloud Xen disks, SAN, etc.
- su = the RAID controller's stripe size in bytes (or kB when suffixed with k)
- sw = number of data disks (don't count parity disks)
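For example, a hypothetical 4-disk RAID5 array (3 data disks plus 1 parity disk) with a 256 kB controller stripe size would be created with:
# mkfs.xfs -d su=256k,sw=3 /dev/sdx1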
Checking
# xfs_repair -v /dev/sdx1
It is safe to use Ctrl+C to cancel xfs_repair.
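To check the filesystem without modifying anything, xfs_repair can first be run in no-modify mode (the filesystem must be unmounted):
# xfs_repair -n /dev/sdx1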
Mounting
- Disable barriers with the nobarrier mount option if a write cache is present.
- With the default 32-bit inodes, XFS places inodes only in the first 1 TB of a disk. Use the inode64 mount option to override this.
/dev/sdx1 /mountpoint xfs inode64,nobarrier 0 0
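The same options can also be used for a one-off manual mount, e.g.:
# mount -o inode64,nobarrier /dev/sdx1 /mountpoint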
Resizing
# xfs_growfs /mountpoint
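xfs_growfs grows a mounted filesystem to fill its underlying device, so the device has to be enlarged first. On LVM, a hypothetical sequence could look like this (volume group and size are placeholders):
# lvextend -L +10G /dev/mapper/vg-data01
# xfs_growfs /mountpoint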
Fragmentation
First check the fragmentation level, then start the defragmentation:
# xfs_db -c frag -r /dev/sdx1
# xfs_fsr /dev/sdx1
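On busy systems xfs_fsr can be limited to run for a fixed amount of time, e.g. two hours (7200 seconds):
# xfs_fsr -t 7200 /dev/sdx1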
Others
- xfs_info - show filesystem geometry
- xfs_admin - change filesystem parameters
- xfs_bmap - print the block map of a file on an XFS filesystem
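Typical invocations (device, mount point and label are placeholders):
# xfs_info /mountpoint
# xfs_admin -L newlabel /dev/sdx1
# xfs_bmap -v /mountpoint/somefile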
LVM Snapshots
Creating
Using the xfs_freeze command before and after the snapshot creation seems to be recommended, though there are articles indicating that some versions of LVM2 hang when doing this.
xfs_freeze -f /some/path
lvcreate -s ...
xfs_freeze -u /some/path
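A complete, hypothetical example for a MySQL data volume (paths, volume group and size are placeholders; the resulting snapshot appears as /dev/mapper/mysql-snap, as used in the mounting examples below):
xfs_freeze -f /var/lib/mysql
lvcreate -s -L 10G -n snap /dev/mysql/data
xfs_freeze -u /var/lib/mysql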
Mounting
XFS filesystems carry a UUID that uniquely identifies them; two filesystems with the same UUID cannot be mounted on the same server. Possible solutions are to mount the snapshot while ignoring the UUID, or to generate a new UUID for it.
No UUID mount:
mount -o nouuid /dev/mapper/mysql-snap /mnt/mysql
Generate a new UUID:
xfs_admin -U generate /dev/mapper/mysql-snap
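To verify, the current UUID can be printed with:
xfs_admin -u /dev/mapper/mysql-snap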
Troubleshooting
Filesystem Repairs
XFS has a journal just like ext3/ext4. With XFS it's important to first try to mount the filesystem so the journal can replay its transactions and ensure data consistency. This is not always possible, however, and an xfs_repair may also fail, even when following the XFS FAQ about this error:
# mount /data01
mount: Structure needs cleaning
# umount /data01
umount: /data01: not mounted
# xfs_repair /dev/mapper/data01
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem
to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem,
then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause
corruption -- please attempt a mount of the filesystem before doing this.
In this case, the only remaining option is to tell xfs_repair to zero out the journal. This can lose data, so it should really be your last resort after trying the above.
# xfs_repair -L /dev/mapper/data01
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
- scan filesystem freespace and inode maps...
bad magic # 0xb10bb320 for agi 27
bad version # 322677957 for agi 27
bad sequence # -517814273 for agi 27
bad length # -1743834476 for agi 27, should be 159793115
reset bad agi for ag 27
[...]
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
error following ag 27 unlinked list
- process known inodes and perform inode discovery...
- agno = 0
2b71c2c29940: Badness in key lookup (length)
bp=(bno 7756256, len 16384 bytes) key=(bno 7756256, len 8192 bytes)
[...]
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
disconnected inode 15512571, moving to lost+found
disconnected inode 51038624, moving to lost+found
Phase 7 - verify and correct link counts...
done
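Since the repair moved disconnected inodes to lost+found, it is worth mounting the filesystem afterwards and reviewing what ended up there, e.g.:
# mount /data01
# ls -l /data01/lost+found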
Stalled Repair
Older versions of xfs_repair may have bugs such as this one - unfortunately no newer versions are packaged for older RHEL5 systems. The way to feel the bug is if it hits Phase 6 and you use something like top and see no activity on the system (xfs_repair looks idle, no I/O wait, etc.). It is safe to hit Ctrl+C and restart the xfs_repair with the additional flags.