Building a Reliable Homelab Storage Solution with Debian 12 (Bookworm) and OpenZFS

Posted on Jun 12, 2023
OpenZFS has been very stable on Linux for a long time, and it entered Debian's contrib repository years ago. Today it can be installed straight from the official Debian archive, bringing advanced filesystem and volume-management features with it. Whether in an enterprise environment or a personal homelab, OpenZFS offers strong data-integrity guarantees plus flexible backup and recovery options, making it an ideal storage solution.

Hardware

Component  Details
Mainboard  Dell Precision 3440 SFF (Intel W480)
CPU        Intel(R) Xeon(R) W-1270P (8C/16T)
RAM        32GB DDR4
NIC        I219-LM (1G), I226-V (2.5G)
SSD        HFM256GDGTNG-87A0A, PLEXTOR PX-512M9PeG * 2
HDD        WDC WD5000LPVX-22V0TT0 * 3

Storage Layout

Pool     Disks                       Interface  Filesystem  RAID    Purpose
nvme0    HFM256GDGTNG-87A0A          nvme       ext4        -       System disk
zpl-dbs  PLEXTOR PX-512M9PeG * 2     nvme       zfs         mirror  Databases / VMs
zpl-shr  WDC WD5000LPVX-22V0TT0 * 3  sata       zfs         raidz1  Shared files / backups

Notes

  1. Configure the hardware and BIOS so that the machine powers back on automatically when power is restored after an outage
  2. Brand-name machines like Dell still keep the tradition of halting at POST when no keyboard is attached; disable that check in the BIOS
  3. Disable any hardware RAID (ZFS software RAID is used here)
  4. Make sure the machine never suspends: sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
  5. To keep unnecessary software off the box, do a minimal Debian install and be sure to select the SSH server task

System Setup

  • Enable sshd: sudo systemctl enable --now ssh (Debian names the OpenSSH unit ssh.service)

  • Disable SELinux if it happens to be installed (Debian does not enable it by default): /etc/selinux/config -> SELINUX=disabled

  • Network setup: bridge all Ethernet ports into brlan0

    /etc/network/interfaces.d/brlan0.iface

    auto brlan0
    iface brlan0 inet static
        address 10.21.0.26/24
        gateway 10.21.0.1
        dns-nameservers 10.21.0.1
        bridge_ports eth0 eth1
        bridge_stp on
        # bridge_ports eth0 eth1 wlan0
        # pre-up wpa_supplicant -B -Dnl80211 -iwlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
        # hostapd
        # pre-up hostapd -B /etc/hostapd/hostapd.conf
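
    The bridge stanza depends on the bridge-utils helper scripts, which a minimal install lacks. A short bring-up sketch, assuming the classic ifupdown networking configured above:

    apt install bridge-utils
    ifup brlan0            # bring the bridge up from the stanza above
    ip addr show brlan0    # confirm 10.21.0.26/24 is assigned
    bridge link            # list the enslaved ports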
    
  • Install commonly used packages:

    apt update && apt upgrade

    apt install apt-utils vim sudo openssh-server sshpass openssh-client wget curl ifupdown nfs-common net-tools tcpdump tree unzip smartmontools

    apt install libssl-dev git samba nginx-full

  • Install OpenZFS:

    apt install zfsutils-linux

    systemctl enable --now zfs-zed.service
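
    If apt cannot find zfsutils-linux, the contrib component is likely missing from the sources; the kernel module is built via DKMS, so matching headers are required as well. A sketch (the mirror URL is only an example):

    # /etc/apt/sources.list should contain something like:
    # deb http://deb.debian.org/debian bookworm main contrib
    apt update
    apt install linux-headers-amd64 zfs-dkms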

Creating the Database Storage Pool

The /dev/nvmeXn1 names are not guaranteed to point at the two PLEXTOR drives, since the system disk is also NVMe; double-check with lsblk, or use stable /dev/disk/by-id paths, so the mirror does not grab the OS disk.

zpool create -f zpl-dbs mirror /dev/nvme0n1 /dev/nvme1n1

zpool status zpl-dbs

NAME          STATE     READ WRITE CKSUM
zpl-dbs       ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    nvme0n1   ONLINE       0     0     0
    nvme1n1   ONLINE       0     0     0
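
A vdev's sector-size alignment (ashift) cannot be changed after creation, so explicitly pinning it for 4K-sector SSDs is commonly recommended; this variant is an addition, not part of the original command:

zpool create -f -o ashift=12 zpl-dbs mirror /dev/nvme0n1 /dev/nvme1n1

zpool get ashift zpl-dbs # verify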

Create the database dataset: zfs create zpl-dbs/database

Snapshots can be taken as needed: zfs snapshot zpl-dbs/database@backup-20230612

Back up the data periodically (crontab -l):

*/10 * * * *    rsync -avz --delete /zpl-dbs/database user@<nasip>:/rsync-dbs/
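
Rsyncing a live database directory can ship an inconsistent state. A more crash-consistent sketch syncs from a fixed snapshot instead (the snapshot name is arbitrary; the .zfs directory is hidden by default but still reachable by path):

zfs destroy zpl-dbs/database@rsync 2>/dev/null || true   # drop the previous marker snapshot, if any
zfs snapshot zpl-dbs/database@rsync                      # freeze a point-in-time view
rsync -avz --delete /zpl-dbs/database/.zfs/snapshot/rsync/ user@<nasip>:/rsync-dbs/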

Shared Filesystem Configuration

Create zpl-shr:

zpool create -f zpl-shr raidz /dev/sda /dev/sdb /dev/sdc

zpool status zpl-shr

NAME                                              STATE     READ WRITE CKSUM
zpl-shr                                           DEGRADED     0     0     0
  raidz1-0                                        DEGRADED     0     0     0
    ata-WDC_WD5000LPVX-22V0TT0_WD-WX11E4434180    ONLINE       0     0     0
    ata-WDC_WD5000LPVX-22V0TT0_WD-WXC1E840FKFT    ONLINE       0     0     0
    ata-WDC_WD5000LPVX-22V0TT0_WD-WX41A7486NSJ    FAULTED      0     0     0

The FAULTED member leaves the raidz1 vdev degraded but still readable; the disk-replacement commands in the ZFS reference below restore it to ONLINE.

Enable lz4 compression:
zfs set compression=lz4 zpl-shr

Check the compression ratio:
zfs get compressratio

NFS server: /etc/exports

/zpl-shr *(rw,no_root_squash,no_subtree_check)

apt install nfs-kernel-server # nfs-common installed earlier is client-side only

systemctl enable --now rpcbind nfs-server
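
Alternatively, ZFS can manage the NFS export itself, which keeps the share options attached to the dataset; the option string below simply mirrors the /etc/exports line above:

zfs set sharenfs='rw,no_root_squash,no_subtree_check' zpl-shr
showmount -e localhost # confirm the export is active
mount -t nfs <serverip>:/zpl-shr /mnt # test from a client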

Samba service: /etc/samba/smb.conf

# For password-less guest access, add the "security" and "map to guest" settings
[global]
    security = user
    map to guest = Bad User
[zpl-shr]
    comment = Work Dir
    path = /zpl-shr/
    writeable = yes
    browseable = yes
    guest ok = yes
[user]
    comment = Work Dir
    path = /home/user
    public = no
    writeable = yes
    browseable = yes

Enable Samba: systemctl enable smbd

Add a Samba user: smbpasswd -a user

Restart Samba: systemctl restart smbd
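
A quick client-side check of the guest share (the server address is a placeholder, and smbclient is its own package):

apt install smbclient
smbclient -L //<serverip> -N # anonymous listing should include zpl-shr
smbclient //<serverip>/zpl-shr -N # open a guest session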

Web server: systemctl enable --now nginx.service

/etc/nginx/sites-enabled/default

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /zpl-shr/www;

        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location / {
                # alias /var/www/html/;
                try_files $uri $uri/ =404;
        }

        location /files {
            alias /zpl-shr/pub;
            autoindex on;
            autoindex_exact_size off;
            autoindex_format html;
            autoindex_localtime on;
        }
}
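
A quick smoke test after editing the config; the mkdir is an assumption in case the document roots do not exist yet:

mkdir -p /zpl-shr/www /zpl-shr/pub
nginx -t && systemctl reload nginx # validate, then apply
curl -s http://localhost/files/ # should print the autoindex listing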

Other Common Services

# cockpit
apt install cockpit cockpit-machines cockpit-podman
systemctl enable --now cockpit.socket

# podman
apt install podman
systemctl enable --now podman.socket # podman is daemonless; the socket serves the optional REST API

# qemu-kvm & libvirt
apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virtinst libvirt-daemon virt-manager qemu-system
systemctl enable --now libvirtd
usermod -aG kvm,libvirt,libvirt-qemu <user>

# shell in a box
apt install shellinabox
systemctl enable --now shellinabox

# vnstat
apt install vnstat
systemctl enable --now vnstat.service

Common ZFS Commands

# show zfs
zpool list # list all zpools
zpool status # show zpool & disk status
zfs list # list all zfs datasets

# zpool
zpool create -f data mirror /dev/sda /dev/sdb
zpool destroy data # delete pool
zpool import -f data # import a pool (takes a pool name, not a dataset path)

# compress
zfs set compression=lz4 data
zfs get compress
zfs get compressratio

# dataset / zvol
zfs create data/tank # create a filesystem dataset
zfs create -V 2G data/vol1 # create a zvol (fixed-size block device)
zfs destroy data/tank # remove

# snapshot
zfs list -t snapshot # show all zfs snapshot
zfs snapshot data/tank@backup1 # create snapshot
zfs destroy data/tank@backup1 # delete snapshot
zfs rollback data/tank@backup1 # rollback snapshot
zfs clone data/tank@backup1 data/tank2 # clone from snapshot
zfs send data/tank@backup1 | zfs receive data2/tank # copy to a new pool
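# incremental replication: send only the delta between two snapshots
# (assumes backup1 has already been received on the target)
zfs send -i data/tank@backup1 data/tank@backup2 | zfs receive data2/tank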

# zfs cache
zpool add data cache <your-ssd-drive> # add an L2ARC cache device

# mount
zfs mount -a # mount all zfs
zfs unmount data/tank # unmount
zfs mount # show all mountpoint
zfs get mountpoint data/tank # get mountpoint
zfs get mounted data/tank # get mounted status
zfs set mountpoint=/my_data data/tank # set mountpoint

# Replace hard drive
ls /dev/disk/by-id/ # show disk by ID
zpool offline data disk-id1
zpool replace -f data disk-id1 disk-id2

# if zpool status reports "Permanent errors have been detected in the following files:",
# scrub to re-verify and repair whatever the redundancy allows
zpool scrub data

# Setting the acltype property to posixacl indicates Posix ACLs should be used.
zfs set acltype=posixacl data
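# with posixacl set, the standard ACL tools work on the dataset (user and path are examples)
zfs set xattr=sa data # commonly paired with posixacl so ACLs are stored efficiently
setfacl -m u:alice:rwx /data/tank
getfacl /data/tank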

ZFS & Disk Health Checks

systemctl enable --now smartmontools.service
smartctl -i /dev/sda
smartctl --all /dev/sda

zpool scrub data # scrub zpool
zpool status -xv data # check zpool status
zpool get all data
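
Periodic checks are easy to automate; a cron sketch (the schedule is arbitrary, and Debian's zfsutils-linux already ships a periodic scrub job in /etc/cron.d/zfsutils-linux):

# /etc/cron.d/storage-health (example)
0 3 1 * * root /usr/sbin/zpool scrub zpl-shr
0 4 * * 0 root /usr/sbin/smartctl -t short /dev/sda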

ZVOL Conversion (qcow2/raw)

qemu-img convert -f qcow2 debian.qcow2 -O raw debian.raw # qcow2 to raw

zfs create -V 12G data/vm-100-debian # create a zvol for the VM disk

dd if=debian.raw of=/dev/zvol/data/vm-100-debian bs=4M conv=fsync status=progress # write the raw image into the zvol

dd if=/dev/zvol/data/vm-100-debian of=debian.raw bs=4M status=progress # dump the zvol back to a raw image
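
With the image on the zvol, the guest can be booted directly through libvirt on the brlan0 bridge configured earlier. A sketch (the name, sizing, and --osinfo value are assumptions):

virt-install --name vm-100-debian --memory 4096 --vcpus 2 \
  --disk path=/dev/zvol/data/vm-100-debian,bus=virtio \
  --network bridge=brlan0,model=virtio \
  --import --osinfo debian11 --noautoconsole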