Blurry/Smudged Training - Newbie trying to learn
I am trying to learn about deepfakes and deep learning in general, and I'm trying to understand what to expect from this process. I attached my training progress to show it... the time is actually at 15 hours. It's REALLY hard for me to get 'Loss' below 0.035 for both A and B. Batch size is set to 64.
A has 71 reference images
B has 250 reference images
(What is a good amount?) For my own footage, I recorded my face moving around and ran extraction on that, so the environment/lighting never changed.
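As a side note, here's a quick sketch I've been using to count the extracted face images per folder so the A/B numbers above stay accurate (the `count_faces` helper and the folder paths are just my own placeholders, not anything from Swapface):

```python
from pathlib import Path

def count_faces(folder):
    """Count extracted face images (png/jpg/jpeg) directly inside a folder."""
    exts = {".png", ".jpg", ".jpeg"}
    return sum(
        1
        for p in Path(folder).iterdir()
        if p.is_file() and p.suffix.lower() in exts
    )

# usage (paths are placeholders for my extraction output folders):
# count_faces("faces/A")
# count_faces("faces/B")
```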
I tried increasing the size to 512; I tried all the sizes, but anything other than 256 makes Swapface crash. I want a sharper image, but right now I just want the iterations to look less blurry/smudged. After a total of 15 hours I feel I should be seeing better results, yes?? Or is that normal, and would I let this run for a week only to see virtually no difference??
When I added more reference photos, I re-ran the extraction to get a new alignments file, pointed the trainer at it, and tried to continue training with the existing model, but it crashed. So I assume that is not an option? Do you need to start the model and training over from scratch, yes??
Any tips/advice/settings for the clearest, sharpest output the AI can yield, or anything I can do to help it learn, would be appreciated.
Oh... also, I assume the Tensor Cores on the RTX cards are NOT being used, correct?
System specs:
Intel Xeon E3-1270 v5, 3.6 GHz (turbo to 4.0 GHz)
32 GB DDR4-2133
M.2 SSD
Nvidia RTX 2080
System performance:
CPU sits at 100% for the entire process
RAM sits at 11 GB used of the 32 GB available
CUDA runs at 50%
VRAM: all 8 GB is 100% utilized
Thank you all!