

For all its flaws and mess, NFS is still pretty good and used in production.
I still use NFS for file sharing to my VMs because it still significantly outperforms virtiofs, and since the network is just a local bridge, latency is negligible.
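For reference, the setup can be as small as one export line on the host plus a mount in the guest. A minimal sketch, with placeholder paths; 192.168.122.0/24 is libvirt's default NAT bridge subnet, with the host at .1:

# /etc/exports on the host, then run: exportfs -ra
/srv/share 192.168.122.0/24(rw,sync,no_subtree_check)
# in the guest
mount -t nfs 192.168.122.1:/srv/share /mnt/share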
The thing with rsync is that it’s designed to quickly compute the minimum amount of data to transfer when syncing over a remote (possibly high-latency) link. That makes it a natural fit for backups.
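A typical invocation might look like this (host name and paths are placeholders; -H, -A, and -X preserve hardlinks, ACLs, and xattrs on top of what -a already keeps):

rsync -aHAX --delete --partial ~/ backup:/srv/backups/laptop/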
The only cool new alternative I can think of is to use btrfs or ZFS and pipe snapshots over: btrfs/zfs send | ssh backup btrfs/zfs recv.
This is the most efficient and reliable way to back up, because the filesystem knows exactly what changed and can send exactly that set of changes. And all the special attributes are carried over: hardlinks, ACLs, SELinux contexts, etc.
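A minimal ZFS sketch, assuming a dataset pool/home, a previous snapshot already present on both sides, and a host named backup (all placeholder names):

# take a new read-only snapshot
zfs snapshot pool/home@today
# send only the delta since the last common snapshot; receive it unmounted on the backup host
zfs send -i pool/home@yesterday pool/home@today | ssh backup zfs recv -u tank/backups/home

The btrfs equivalent is btrfs send -p with the parent snapshot, piped to btrfs receive on the other end.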
The problem with backups over any kind of network share is that if you’re going to run rsync against the mounted share anyway, every per-file stat goes across the mount one round trip at a time, so the latency makes it take forever. rsync over SSH avoids this because both ends scan their own local filesystem and exchange only the deltas.
Of course you can also mix approaches: rsync the laptop to the server periodically, then mount the server’s backup directory locally so you can easily browse and access older files.
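A minimal sketch of that mix, using an sshfs mount for browsing (an NFS mount works just as well; host name and paths are placeholders):

# periodic push, e.g. from a cron job or systemd timer
rsync -aHAX --delete ~/ backup:/srv/backups/laptop/
# mount the backup tree read-only for browsing
sshfs -o ro backup:/srv/backups /mnt/backups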
You need to set up your PC with that IP address first; the TFTP server doesn’t magically listen on a particular IP, the interface itself has to be configured with it.
# bring the interface up
ip link set eth0 up
# add both addresses on the 10.10.10.0/24 subnet
ip addr add 10.10.10.3/24 dev eth0
ip addr add 10.10.10.1/24 dev eth0
Then you can start the TFTP server on the interface:
dnsmasq -d --port=0 --enable-tftp --tftp-root=/path/to/tftp/root -i eth0
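Note that --port=0 disables dnsmasq’s DNS server, so it acts purely as a TFTP server here. To verify from another machine on the subnet (the file name is a placeholder; the syntax is for the tftp-hpa client):

tftp 10.10.10.1 -c get somefile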