Hello,
My training is not starting after I updated faceswap.
Every setting is the same; it's just that something happened after I updated faceswap today.
I have attached the crash report.
I have fixed this bug.
However, you should note that the reason it occurred (and why it hasn't come up before) is because your model input size is larger than your model output size. This doesn't really make a lot of sense, and means you are just wasting VRAM.
My word is final
Hello,
Thanks a lot, the update fixed the issue. Also, all this time my training preview was showing blue faces; with the latest update the color issue has been resolved for me as well.
"However, you should note that the reason it occurred (and why it hasn't come up before) is because your model input size is larger than your model output size. This doesn't really make a lot of sense, and means you are just wasting VRAM".
Sorry, I am very new to machine learning and faceswap:
Please can you tell me what error I made?
I can see the output size in my Phaze-A training settings is 256.
But when you say input size, are you referring to the encoder I chose, i.e. efficientnet_v2_b3 (300px) @ 100% scaling?
I read somewhere in the forum where you explained that the encoder setting has nothing to do with input size; it just adjusts/balances the scale of output/input.
Or are you referring to the output size of images in the Extraction tab, where I selected a value of 512?
So all my extracted images are 512 x 512 pixels, while my output setting in Phaze-A is set to 256, and if we include the encoder output it is 300.
So where exactly is the input setting for which I have to make the changes?
Sorry for troubling you further.
Extract size is fine.
The input size to EffNetV2_B3 is 300px (this actually gets scaled down to 288px, because Faceswap always rounds down to the nearest number divisible by 16). There is no benefit to feeding the encoder images of a higher resolution than the output, so you should use the encoder scaling to match the output resolution. If you set the enc_scaling to 86%, this will give you 256px in -> 256px out (300 * 0.86 = 258, which will be rounded down to 256).
You can check whether you have set this correctly by selecting the "summary" option in the train tab, where you can see the input and output sizes of your created model.
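To make the scaling arithmetic concrete, here is a minimal sketch of the rounding rule described above (scale the encoder's native resolution, then round down to the nearest multiple of 16). The function name is hypothetical and not part of the Faceswap codebase:

```python
def effective_encoder_size(native_px: int, scaling_pct: float) -> int:
    """Hypothetical helper: scale the encoder's native input size,
    then round DOWN to the nearest multiple of 16, as Faceswap does."""
    scaled = native_px * scaling_pct / 100
    return int(scaled // 16) * 16

# EffNetV2_B3 at 100% scaling: 300px rounds down to 288px
print(effective_encoder_size(300, 100))  # -> 288
# At 86% scaling: 300 * 0.86 = 258, which rounds down to 256
print(effective_encoder_size(300, 86))   # -> 256
# At 85% scaling: 300 * 0.85 = 255, which would round down to 240
print(effective_encoder_size(300, 85))   # -> 240
```

This also shows why the scaling percentage matters: a value only slightly too low can drop the effective input a full 16px step below the output size.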
Oh, thanks a lot again for the explanation. I understand now.