We tried using a Synology box and had corruption issues with incremental backups, and thus with Full VM Restores of those backups. We have what I consider a mid-range model (RS2818RP+, 8x 4TB WD Gold HDDs in RAID 10, 10Gb Intel Ethernet NIC, dual PSUs). We were connecting to the repo with iSCSI, formatted with ReFS. We have since started using an old Dell server with a battery-backed hardware RAID controller and have had zero issues.

The issue we had was that incrementals would get randomly corrupted. We run the backup health check daily, so we catch this issue quickly. When we performed an Active Full, the issue went away, as expected. However, after a number of days of incrementals, one or two VMs would be corrupted again.

A weekly Synthetic Full would not be corrupt, which is quite odd to me considering it is "made from" the incrementals marked as corrupt, but maybe Synthetic Fulls "heal" the data? I have no clue. We confirmed the Synthetic Fulls were valid by restoring the entire VM, which was not possible using a corrupt incremental. Of note, when we formatted the Synology repo with NTFS we did not experience corruption.

Since we now use the Dell server as the repo, we run a Backup Copy job to the Synology box (iSCSI, ReFS) and have yet to experience corruption (we run the backup health check daily here too). Not sure why, but maybe copy jobs have less I/O to possibly get corrupted?

I too wish Synology units were more supported/reliable with Veeam. I have never had a corruption issue with Synology before, so it is hard for me to put all the blame on them. Maybe Veeam checks for corruption better? Maybe I just never noticed the corruption prior?

That backup guy wrote: ↑ 11:19 pm

My questions are:
1. When does this appear?
2. Is it only during a failed restore, or does/would a health check catch this?
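For readers wondering what the daily backup health check mentioned above actually does: conceptually, a storage-level integrity check compares per-block checksums recorded at backup time against what is currently on disk, and flags any block that no longer matches. Below is a minimal Python sketch of that idea; the file names, manifest format, and 1 MiB block size are illustrative assumptions, not Veeam's actual on-disk format.

```python
import hashlib
import json
from pathlib import Path

BLOCK_SIZE = 1024 * 1024  # illustrative 1 MiB blocks


def write_manifest(backup_file: Path, manifest_file: Path) -> None:
    """Record a SHA-256 checksum for every block of the backup file."""
    sums = []
    with backup_file.open("rb") as f:
        while block := f.read(BLOCK_SIZE):
            sums.append(hashlib.sha256(block).hexdigest())
    manifest_file.write_text(json.dumps(sums))


def health_check(backup_file: Path, manifest_file: Path) -> list[int]:
    """Re-hash every block and return the indexes that no longer match."""
    expected = json.loads(manifest_file.read_text())
    bad = []
    with backup_file.open("rb") as f:
        for i, want in enumerate(expected):
            block = f.read(BLOCK_SIZE)
            if hashlib.sha256(block).hexdigest() != want:
                bad.append(i)  # bit rot, truncation, or a bad write
    return bad


if __name__ == "__main__":
    vbk = Path("backup.vbk")            # hypothetical file names
    manifest = Path("backup.manifest")
    write_manifest(vbk, manifest)       # done once, at backup time
    print("corrupt blocks:", health_check(vbk, manifest) or "none")
```

A check like this catches silent corruption long before a restore is attempted, which is the point of running it daily.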
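As for the Synthetic Fulls seeming to "heal" the data: one speculative explanation is that a synthetic full only needs the newest version of each block, so a corrupt block in an older incremental that was later rewritten would never be read during the merge. The toy model below illustrates the idea under that assumption; it is not Veeam's actual merge logic.

```python
# Toy model: the full backup and each incremental map block numbers to
# block contents. A synthetic full keeps the newest version of every block.
Block = bytes
RestorePoint = dict[int, Block]  # block number -> block contents


def build_synthetic_full(full: RestorePoint,
                         incrementals: list[RestorePoint]) -> RestorePoint:
    """Fold the incrementals onto the full; the newest write wins."""
    synthetic = dict(full)
    for inc in incrementals:
        synthetic.update(inc)  # later incrementals overwrite older blocks
    return synthetic


# Block 7 is corrupted in an older incremental but rewritten in a newer one,
# so the synthetic full never reads the corrupt copy.
full = {0: b"base-block-0", 7: b"base-block-7"}
inc1 = {7: b"CORRUPT"}          # the bad copy, later superseded
inc2 = {7: b"good-rewrite"}     # newest version of block 7

synthetic = build_synthetic_full(full, [inc1, inc2])
assert synthetic[7] == b"good-rewrite"  # corruption never surfaces
```

If the corrupt block were still the newest version at some restore point, a restore from that point would fail, which lines up with the Full VM Restore failing from a corrupt incremental but succeeding from the Synthetic Full.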