Hi All,
I've decided to try using different trainers with the same datasets for comparison. So far I've used Original, IAE and Dfaker. I've just tried Realface and I'm running into issues that I'm not sure how best to resolve!
My setup: 2 x GTX 1070s with 8GB of VRAM apiece, an Intel i7-8700 running at 4.3GHz when training, and 16GB of system RAM.
When I try running training, even with the batch size set to 8, I get a CUDA_OUT_OF_MEMORY error. Looking at the GPU monitor, GPU 1 shows 6.7/8GB in use while GPU 2 is not being utilised at all (I have set 2 GPUs in the training options). I get the same error when selecting Optimizer Savings.
Am I doing something wrong? Is there a reason it is not utilising the full 16GB of VRAM across both cards?
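In case it's useful for diagnosing this, here is a minimal sketch of a check I can run to confirm whether the framework actually sees both cards. It assumes faceswap's TensorFlow backend; the snippet isn't part of faceswap itself, just a standalone sanity check:

```python
# Quick sanity check: does the deep-learning backend see both GPUs?
# (Assumes a TensorFlow backend; adjust if your install uses something else.)
try:
    import tensorflow as tf

    # List the physical GPU devices TensorFlow can see.
    gpus = tf.config.list_physical_devices("GPU")
    print(f"TensorFlow sees {len(gpus)} GPU(s):")
    for gpu in gpus:
        print(" ", gpu.name)
except ImportError:
    # TensorFlow isn't installed in this environment.
    gpus = []
    print("TensorFlow is not installed here.")
```

If this reports only one GPU, the problem is driver/CUDA visibility rather than the trainer settings; if it reports two, the question becomes why training only allocates on the first card.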
Any suggestions would be much appreciated!