HostHatch has actually supported this for quite a while; I was just too lazy to get around to it. According to the official docs, they support private networking between VPSes, with two caveats:
- You must first open a support ticket asking them to enable the second NIC;
- DHCP is not provided, so you have to configure the second NIC's network settings by hand.
Since I recently deployed a download server, I figured I'd finally set this up properly.
To begin with, the second NIC should show up in a state like this:
enp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
At first I went with the officially recommended netplan route, editing /etc/netplan/90-private.yaml, but I never managed to get it working:
network:
  version: 2
  ethernets:
    eth1:
      match:
        macaddress: 00:22:xx:xx:xx:xx
      addresses:
        - 192.168.10.1/24
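For what it's worth, a netplan variant keyed on the interface name instead of a MAC match might have fared better; I haven't verified this on HostHatch (I switched to NetworkManager instead), and the address below is only a placeholder:

```yaml
network:
  version: 2
  ethernets:
    enp2s0:
      mtu: 9000
      addresses:
        - 192.168.100.10/24
```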
I ended up switching to NetworkManager instead. First, generate a UUID for the new connection:
root@Nicky:~# uuidgen
6955999c-6df6-440f-bd19-cd826a5b0dc4
root@Nicky:~#
Then create /etc/NetworkManager/system-connections/enp2s0-static.nmconnection with the contents below. While I was at it, I enabled jumbo frames too (mtu=9000):
[connection]
id=enp2s0-static
uuid=6955999c-6df6-440f-bd19-cd826a5b0dc4
type=ethernet
interface-name=enp2s0
mtu=9000
[ipv4]
address1=192.168.100.10/24,192.168.100.1
dns=8.8.8.8;8.8.4.4;
method=manual
[ipv6]
addr-gen-mode=stable-privacy
method=ignore
[ethernet]
mac-address-blacklist=
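One gotcha: NetworkManager ignores keyfiles unless they are owned by root with mode 600, so set the permissions before reloading (the path matches the file created above):

```shell
# NetworkManager refuses to load keyfiles readable by other users
sudo chown root:root /etc/NetworkManager/system-connections/enp2s0-static.nmconnection
sudo chmod 600 /etc/NetworkManager/system-connections/enp2s0-static.nmconnection
```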
Next, reload the configuration and bring the connection up:
sudo nmcli con reload
sudo nmcli con up "enp2s0-static"
Then verify it worked. Output along these lines means the configuration succeeded:
root@Nicky:~# ip link show enp2s0
3: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq state UP mode DEFAULT group default qlen 1000
link/ether 00:22:02:86:ac:1a brd ff:ff:ff:ff:ff:ff
root@Nicky:~#
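To confirm jumbo frames actually work end-to-end (the peer and the virtual switch must support them too), the usual check is a don't-fragment ping sized to fill a 9000-byte MTU: 9000 minus 20 bytes of IPv4 header and 8 bytes of ICMP header leaves 8972 bytes of payload. The peer address is a placeholder:

```shell
# -M do sets Don't-Fragment; if this fails while a plain ping succeeds,
# something along the path is still limited to a smaller MTU
ping -M do -s 8972 -c 3 192.168.100.1
```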
If you hit an error like the one below, you need to fix the file permissions:
** (process:1620): WARNING **: 15:40:42.956: Permissions for /etc/netplan/90-private.yaml are too open. Netplan configuration should NOT be accessible by others.
chmod 600 /etc/netplan/90-private.yaml
# Test the configuration first
sudo netplan try --timeout 30
# Press Enter to accept; without confirmation it rolls back after 30 seconds
# Or apply it directly
sudo netplan apply
Getting this working on Rocky Linux was much more painful, though that may be down to my particular system. It kept reporting Error: unknown connection 'enp2s0-static'. I eventually fixed it with the commands below (courtesy of an AI): the IP came up, but the jumbo-frame setting never took effect.
# 1. Temporarily stop NetworkManager (doesn't drop existing connections)
sudo systemctl stop NetworkManager
# 2. Delete all of the old connection profiles
sudo rm -f /etc/NetworkManager/system-connections/*.nmconnection
# 3. Start NetworkManager again
sudo systemctl start NetworkManager
# 4. Give the service a few seconds to fully start
sleep 3
# 5. Create the connection with nmcli (this generates a correct profile automatically)
sudo nmcli connection add type ethernet \
con-name "enp2s0-internal" \
ifname enp2s0 \
ipv4.method manual \
ipv4.addresses "192.168.100.10/24" \
ipv4.gateway "192.168.100.1" \
ipv4.dns "8.8.8.8,8.8.4.4" \
ipv6.method disabled \
mtu 9000 \
autoconnect yes
# 6. Activate the connection
sudo nmcli connection up "enp2s0-internal"
# 7. Verify
nmcli connection show
ip addr show enp2s0
Afterwards, though, ip link show still reported mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000, so the MTU stayed at 1500.
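When the MTU refuses to stick under NetworkManager, one thing worth trying (my guess, not verified on this box) is setting it explicitly on the profile and bouncing the connection:

```shell
# 802-3-ethernet.mtu is the full property name behind the 'mtu' shorthand
sudo nmcli connection modify "enp2s0-internal" 802-3-ethernet.mtu 9000
sudo nmcli connection down "enp2s0-internal"
sudo nmcli connection up "enp2s0-internal"
# Check whether it took effect this time
ip link show enp2s0 | grep -o 'mtu [0-9]*'
```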
With all of that in place, we can set up the NFS mount.
On the Debian server:
root@Nicky:~# sudo apt install nfs-kernel-server nfs-common -y
root@Nicky:~# sudo mkdir -p /data
root@Nicky:~# sudo chown -R statd:nogroup /data/
root@Nicky:~# sudo chmod -R 777 /data/
root@Nicky:~# sudo vim /etc/exports
Edit the exports file as follows, then continue with the steps below:
# Format: shared_directory client_IP(options)
/data 192.168.100.4(rw,sync,no_subtree_check,no_root_squash,insecure)
# Or allow the whole subnet:
# /data 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,insecure)
# Options:
# rw: read-write access
# sync: synchronous writes (safer; on a private LAN, async improves performance)
# no_subtree_check: disable subtree checking for better performance
# no_root_squash: let the client's root user keep root privileges
# insecure: allow connections from non-privileged ports
# async: asynchronous writes (faster, at the risk of losing data on a crash)
# Restart the NFS service
root@Nicky:~# sudo systemctl restart nfs-kernel-server
# List the exported shares
root@Nicky:~# sudo exportfs -v
# Enable the service at boot
root@Nicky:~# sudo systemctl enable nfs-kernel-server
# Tune the NFS server defaults
sudo vim /etc/default/nfs-kernel-server
# Increase the number of NFS server threads
RPCNFSDCOUNT=16
# Pin the NFS version (v4.2 recommended)
RPCNFSDARGS="-N 2 -N 3 -V 4.2"
# Adjust the rpc.mountd options
RPCMOUNTDOPTS="--manage-gids --no-nfs-version 2 --no-nfs-version 3"
# Then create a sysctl file for network/NFS tuning
sudo nano /etc/sysctl.d/99-nfs-optimization.conf
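The contents of that server-side sysctl file aren't shown above; a sketch that simply mirrors the client-side values used later would be:

```
# Assumed server-side tuning, mirroring the client-side file
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
```

Apply it with sudo sysctl -p /etc/sysctl.d/99-nfs-optimization.conf.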
On Rocky Linux, mount the share with the following steps:
# 1. Install the NFS client
sudo dnf install nfs-utils -y
# 2. Create a local mount point
sudo mkdir -p /mnt/nfs_share
# 3. Check that the server's exports are reachable
sudo showmount -e 192.168.100.5
# 4. Mount temporarily (for testing); the path matches the /data export above
sudo mount -t nfs4 -o rw,hard,intr,rsize=65536,wsize=65536,timeo=600,retrans=2,noatime,nodiratime 192.168.100.5:/data /mnt/nfs_share
# 5. Verify the mount
df -h | grep nfs
mount | grep nfs
ls /mnt/nfs_share
sudo umount /mnt/nfs_share
sudo nano /etc/sysctl.d/99-nfs-client-optimization.conf
# NFS client tuning
# Keep more of the cache resident
vm.vfs_cache_pressure = 50
vm.swappiness = 10
# Network tuning (similar to the server side, with small differences)
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
# Increase NFS client concurrency
sunrpc.tcp_max_slot_table_entries = 64
sudo sysctl -p /etc/sysctl.d/99-nfs-client-optimization.conf
Create a systemd mount unit for an optimized, persistent mount:
sudo nano /etc/systemd/system/mnt-nfs_share.mount
[Unit]
Description=NFS Share from Debian Server
Requires=network-online.target
After=network-online.target
[Mount]
What=192.168.100.5:/data
Where=/mnt/nfs_share
Type=nfs4
Options=rw,hard,intr,rsize=65536,wsize=65536,timeo=600,retrans=2,noatime,nodiratime,vers=4.2,proto=tcp,port=2049
[Install]
WantedBy=multi-user.target
# Note: the Where= path must correspond to the unit file name (mnt-nfs_share.mount for /mnt/nfs_share); otherwise automatic mounting fails with: my-nfs.mount's Where= setting doesn't match unit name. Refusing.
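With the unit in place, reload systemd and enable it; if in doubt about the unit name, systemd-escape will derive it from the mount point:

```shell
# Prints the unit name systemd expects for this mount point: mnt-nfs_share.mount
systemd-escape -p --suffix=mount /mnt/nfs_share
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-nfs_share.mount
systemctl status mnt-nfs_share.mount
```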
The results turned out pretty well. If you're interested, run an iperf test as well; the output is too long to paste here.
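For reference, the iperf run amounts to something like this (assuming iperf3 is installed on both VPSes; addresses are placeholders):

```shell
# On one node, start the server
iperf3 -s
# On the other, push traffic across the private NIC for 10 seconds
iperf3 -c 192.168.100.5 -t 10
```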
=== Network storage performance test ===
Test location: /mnt/data
1. Sequential write:
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 6.34586 s, 338 MB/s
2. Sequential read:
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 10.0722 s, 213 MB/s
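The sequential figures above look like dd summary lines; the commands were presumably along these lines (an assumption on my part, since only the output is shown):

```shell
# Sequential write: 2 GiB, bypassing the page cache
dd if=/dev/zero of=/mnt/nfs_share/testfile bs=1M count=2048 oflag=direct
# Sequential read of the same file
dd if=/mnt/nfs_share/testfile of=/dev/null bs=1M iflag=direct
```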
3. Random 4K read/write:
random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.35
Starting 4 processes
random-write: Laying out IO file (1 file / 1024MiB)
Jobs: 4 (f=4): [w(4)][100.0%][w=9344KiB/s][w=2336 IOPS][eta 00m:00s]
random-write: (groupid=0, jobs=4): err= 0: pid=16853: Sat Jan 10 11:41:02 2026
write: IOPS=2347, BW=9390KiB/s (9616kB/s)(551MiB/60085msec); 0 zone resets
slat (usec): min=2, max=227, avg=10.18, stdev= 7.26
clat (msec): min=3, max=365, avg=54.48, stdev=36.21
lat (msec): min=3, max=365, avg=54.49, stdev=36.21
clat percentiles (msec):
| 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 31],
| 30.00th=[ 35], 40.00th=[ 40], 50.00th=[ 44], 60.00th=[ 50],
| 70.00th=[ 57], 80.00th=[ 67], 90.00th=[ 92], 95.00th=[ 134],
| 99.00th=[ 194], 99.50th=[ 226], 99.90th=[ 300], 99.95th=[ 309],
| 99.99th=[ 321]
bw ( KiB/s): min= 5264, max=13016, per=100.00%, avg=9395.47, stdev=388.88, samples=480
iops : min= 1316, max= 3254, avg=2348.87, stdev=97.22, samples=480
lat (msec) : 4=0.01%, 10=0.03%, 20=0.09%, 50=60.90%, 100=30.24%
lat (msec) : 250=8.33%, 500=0.40%
cpu : usr=0.34%, sys=0.86%, ctx=140671, majf=0, minf=37
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.9%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,141055,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=9390KiB/s (9616kB/s), 9390KiB/s-9390KiB/s (9616kB/s-9616kB/s), io=551MiB (578MB), run=60085-60085msec
4. Latency test:
latency-test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.35
Starting 1 process
latency-test: Laying out IO file (1 file / 256MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=18.0MiB/s][r=4610 IOPS][eta 00m:00s]
latency-test: (groupid=0, jobs=1): err= 0: pid=16890: Sat Jan 10 11:41:33 2026
read: IOPS=4122, BW=16.1MiB/s (16.9MB/s)(483MiB/30001msec)
clat (usec): min=125, max=38940, avg=239.23, stdev=548.55
lat (usec): min=125, max=38940, avg=239.41, stdev=548.55
clat percentiles (usec):
| 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 163],
| 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 196],
| 70.00th=[ 210], 80.00th=[ 237], 90.00th=[ 334], 95.00th=[ 404],
| 99.00th=[ 693], 99.50th=[ 881], 99.90th=[ 9503], 99.95th=[13960],
| 99.99th=[19006]
bw ( KiB/s): min=12688, max=19568, per=99.78%, avg=16454.51, stdev=1711.90, samples=59
iops : min= 3172, max= 4892, avg=4113.63, stdev=427.97, samples=59
lat (usec) : 250=81.81%, 500=15.53%, 750=1.89%, 1000=0.35%
lat (msec) : 2=0.19%, 4=0.05%, 10=0.08%, 20=0.09%, 50=0.01%
cpu : usr=1.38%, sys=5.29%, ctx=130455, majf=0, minf=8
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=123681,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=16.1MiB/s (16.9MB/s), 16.1MiB/s-16.1MiB/s (16.9MB/s-16.9MB/s), io=483MiB (507MB), run=30001-30001msec
References
HostHatch Docs:https://docs.hosthatch.com/networking/#private-networking