Hi! I have been training a DFL-SAE model for the past month. Each time I start training, it runs smoothly for about 4-6 hours before giving me an out-of-memory error. Here's the latest output as an example:
Code:
Loading...
Setting Faceswap backend to NVIDIA
07/10/2022 21:19:47 INFO Log level set to: INFO
07/10/2022 21:19:50 INFO Model A Directory: 'C:\Projects\TMR\Deepfake\Smith\v0.4\Workspace\data-dst-v2\training-faces2' (645 images)
07/10/2022 21:19:50 INFO Model B Directory: 'C:\Projects\TMR\Deepfake\Smith\v0.4\Workspace\data-src\training_faces' (4501 images)
07/10/2022 21:19:50 INFO Training data directory: C:\Projects\TMR\Deepfake\Smith\v0.4\Workspace\model\latest
07/10/2022 21:19:50 INFO ===================================================
07/10/2022 21:19:50 INFO Starting
07/10/2022 21:19:50 INFO ===================================================
07/10/2022 21:19:51 INFO Loading data, this may take a while...
07/10/2022 21:19:51 INFO Loading Model from Dfl_Sae plugin...
07/10/2022 21:19:51 INFO Using configuration saved in state file
07/10/2022 21:19:51 INFO Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
07/10/2022 21:19:53 INFO Loaded model from disk: 'C:\Projects\TMR\Deepfake\Smith\v0.4\Workspace\model\latest\dfl_sae.h5'
07/10/2022 21:19:53 INFO Loading Trainer from Original plugin...
07/10/2022 21:20:55 INFO [Saved models] - Average loss since last save: face_a: 0.05369, face_b: 0.10284
07/10/2022 21:24:36 INFO [Saved models] - Average loss since last save: face_a: 0.07755, face_b: 0.09259
07/10/2022 21:28:17 INFO [Saved models] - Average loss since last save: face_a: 0.07797, face_b: 0.09622
07/10/2022 21:31:56 INFO [Saved models] - Average loss since last save: face_a: 0.07773, face_b: 0.09112
07/10/2022 21:35:34 INFO [Saved models] - Average loss since last save: face_a: 0.07760, face_b: 0.09373
07/10/2022 21:39:11 INFO [Saved models] - Average loss since last save: face_a: 0.07949, face_b: 0.09314
07/10/2022 21:42:50 INFO [Saved models] - Average loss since last save: face_a: 0.07961, face_b: 0.09702
07/10/2022 21:46:29 INFO [Saved models] - Average loss since last save: face_a: 0.07759, face_b: 0.09640
07/10/2022 21:50:12 INFO [Saved models] - Average loss since last save: face_a: 0.07960, face_b: 0.09500
07/10/2022 21:53:56 INFO [Saved models] - Average loss since last save: face_a: 0.07620, face_b: 0.09647
07/10/2022 21:57:38 INFO [Saved models] - Average loss since last save: face_a: 0.07826, face_b: 0.09882
07/10/2022 22:01:17 INFO [Saved models] - Average loss since last save: face_a: 0.07747, face_b: 0.09434
07/10/2022 22:04:54 INFO [Saved models] - Average loss since last save: face_a: 0.08103, face_b: 0.09872
07/10/2022 22:08:30 INFO [Saved models] - Average loss since last save: face_a: 0.07650, face_b: 0.09557
07/10/2022 22:12:06 INFO [Saved models] - Average loss since last save: face_a: 0.07962, face_b: 0.09446
07/10/2022 22:15:42 INFO [Saved models] - Average loss since last save: face_a: 0.07678, face_b: 0.09453
07/10/2022 22:19:17 INFO [Saved models] - Average loss since last save: face_a: 0.08229, face_b: 0.09658
07/10/2022 22:22:53 INFO [Saved models] - Average loss since last save: face_a: 0.07912, face_b: 0.09463
07/10/2022 22:26:29 INFO [Saved models] - Average loss since last save: face_a: 0.07907, face_b: 0.09408
07/10/2022 22:30:05 INFO [Saved models] - Average loss since last save: face_a: 0.07969, face_b: 0.09567
07/10/2022 22:33:40 INFO [Saved models] - Average loss since last save: face_a: 0.07814, face_b: 0.09592
07/10/2022 22:37:17 INFO [Saved models] - Average loss since last save: face_a: 0.08148, face_b: 0.09338
07/10/2022 22:40:53 INFO [Saved models] - Average loss since last save: face_a: 0.07844, face_b: 0.09644
07/10/2022 22:44:30 INFO [Saved models] - Average loss since last save: face_a: 0.07917, face_b: 0.09601
07/10/2022 22:48:06 INFO [Saved models] - Average loss since last save: face_a: 0.08062, face_b: 0.09612
07/10/2022 22:51:42 INFO [Saved models] - Average loss since last save: face_a: 0.07801, face_b: 0.09760
07/10/2022 22:55:18 INFO [Saved models] - Average loss since last save: face_a: 0.07940, face_b: 0.09125
07/10/2022 22:58:54 INFO [Saved models] - Average loss since last save: face_a: 0.07934, face_b: 0.09323
07/10/2022 23:02:29 INFO [Saved models] - Average loss since last save: face_a: 0.07820, face_b: 0.09602
07/10/2022 23:06:06 INFO [Saved models] - Average loss since last save: face_a: 0.07865, face_b: 0.09869
07/10/2022 23:09:41 INFO [Saved models] - Average loss since last save: face_a: 0.07935, face_b: 0.09586
07/10/2022 23:13:18 INFO [Saved models] - Average loss since last save: face_a: 0.07851, face_b: 0.09326
07/10/2022 23:16:55 INFO [Saved models] - Average loss since last save: face_a: 0.07774, face_b: 0.09633
07/10/2022 23:20:31 INFO [Saved models] - Average loss since last save: face_a: 0.07908, face_b: 0.09649
07/10/2022 23:24:07 INFO [Saved models] - Average loss since last save: face_a: 0.07925, face_b: 0.09691
07/10/2022 23:27:43 INFO [Saved models] - Average loss since last save: face_a: 0.08005, face_b: 0.09347
07/10/2022 23:31:19 INFO [Saved models] - Average loss since last save: face_a: 0.07891, face_b: 0.09387
07/10/2022 23:34:55 INFO [Saved models] - Average loss since last save: face_a: 0.07824, face_b: 0.09808
07/10/2022 23:38:31 INFO [Saved models] - Average loss since last save: face_a: 0.07809, face_b: 0.09391
07/10/2022 23:42:07 INFO [Saved models] - Average loss since last save: face_a: 0.07788, face_b: 0.09536
07/10/2022 23:45:42 INFO [Saved models] - Average loss since last save: face_a: 0.08059, face_b: 0.09563
07/10/2022 23:49:18 INFO [Saved models] - Average loss since last save: face_a: 0.07899, face_b: 0.09372
07/10/2022 23:52:54 INFO [Saved models] - Average loss since last save: face_a: 0.07613, face_b: 0.09840
07/10/2022 23:56:30 INFO [Saved models] - Average loss since last save: face_a: 0.08088, face_b: 0.09789
07/10/2022 23:56:36 INFO Saved snapshot (1700000 iterations)
07/11/2022 00:00:08 INFO [Saved models] - Average loss since last save: face_a: 0.07852, face_b: 0.09435
07/11/2022 00:03:44 INFO [Saved models] - Average loss since last save: face_a: 0.07968, face_b: 0.09321
07/11/2022 00:07:20 INFO [Saved models] - Average loss since last save: face_a: 0.08029, face_b: 0.09513
07/11/2022 00:10:55 INFO [Saved models] - Average loss since last save: face_a: 0.07831, face_b: 0.09470
07/11/2022 00:14:31 INFO [Saved models] - Average loss since last save: face_a: 0.08060, face_b: 0.09495
07/11/2022 00:16:25 ERROR Caught exception in thread: '_training_0'
07/11/2022 00:16:25 ERROR You do not have enough GPU memory available to train the selected model at the selected settings. You can try a number of things:
07/11/2022 00:16:25 ERROR 1) Close any other application that is using your GPU (web browsers are particularly bad for this).
07/11/2022 00:16:25 ERROR 2) Lower the batchsize (the amount of images fed into the model each iteration).
07/11/2022 00:16:25 ERROR 3) Try enabling 'Mixed Precision' training.
07/11/2022 00:16:25 ERROR 4) Use a more lightweight model, or select the model's 'LowMem' option (in config) if it has one.
Process exited.
I would attach a crash log, but faceswap doesn't write one to its folder for this error.
When the error occurs, I simply click the Train button again and it trains smoothly for another ~6 hours before eventually crashing the same way. Is there a way to make it run without crashing, or a way to have it automatically restart when it runs out of memory? I use this computer strictly for deepfakes, and the only thing running while it trains is Faceswap: no browsers or other memory hogs. Hopefully there is a solution. Here is a summary of the model in case you need it.
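In the meantime, as a stopgap for the auto-restart idea, I've been considering a small watchdog that relaunches training whenever the process dies with a non-zero exit code. This is only a sketch; the faceswap CLI flags and paths below are taken from my setup and the logs above, so adjust them to yours:

```python
# Watchdog sketch: rerun the training command whenever it exits
# non-zero (e.g. after the periodic OOM crash), with a cooldown so
# the GPU driver can release memory between attempts.
import subprocess
import sys
import time


def run_with_restarts(cmd, max_restarts=10, cooldown=30):
    """Run cmd, restarting it on non-zero exit. Returns the restart count."""
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:   # clean exit: user stopped training
            return restarts
        restarts += 1
        if restarts > max_restarts:  # give up after too many crashes
            return restarts
        time.sleep(cooldown)         # let VRAM be reclaimed before retrying


if __name__ == "__main__":
    # Assumed faceswap invocation - substitute your own paths/flags.
    train_cmd = [
        sys.executable, "faceswap.py", "train",
        "-A", r"C:\Projects\TMR\Deepfake\Smith\v0.4\Workspace\data-dst-v2\training-faces2",
        "-B", r"C:\Projects\TMR\Deepfake\Smith\v0.4\Workspace\data-src\training_faces",
        "-m", r"C:\Projects\TMR\Deepfake\Smith\v0.4\Workspace\model\latest",
    ]
    run_with_restarts(train_cmd)
```

No idea if this is the recommended approach, but since the model saves regularly and reloads cleanly from the state file, a blunt relaunch like this shouldn't lose more than the iterations since the last save.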
Code:
Loading...
Setting Faceswap backend to NVIDIA
07/11/2022 10:45:31 INFO Log level set to: INFO
07/11/2022 10:45:34 INFO Loading Model from Dfl_Sae plugin...
07/11/2022 10:45:34 INFO Using configuration saved in state file
07/11/2022 10:45:34 INFO Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
07/11/2022 10:45:36 INFO Loaded model from disk: 'C:\Projects\TMR\Deepfake\Smith\v0.4\Workspace\model\latest\dfl_sae.h5'
Model: "encoder_df"
____________________________________________________________________________________________________
Layer (type) Output Shape Param #
====================================================================================================
input_1 (InputLayer) [(None, 144, 144, 3)] 0
conv_126_0_conv2d (Conv2D) (None, 72, 72, 126) 9576
conv_126_0_leakyrelu (LeakyReLU) (None, 72, 72, 126) 0
conv_252_0_conv2d (Conv2D) (None, 36, 36, 252) 794052
conv_252_0_leakyrelu (LeakyReLU) (None, 36, 36, 252) 0
conv_504_0_conv2d (Conv2D) (None, 18, 18, 504) 3175704
conv_504_0_leakyrelu (LeakyReLU) (None, 18, 18, 504) 0
conv_1008_0_conv2d (Conv2D) (None, 9, 9, 1008) 12701808
conv_1008_0_leakyrelu (LeakyReLU) (None, 9, 9, 1008) 0
flatten (Flatten) (None, 81648) 0
dense (Dense) (None, 512) 41804288
dense_1 (Dense) (None, 41472) 21275136
reshape (Reshape) (None, 9, 9, 512) 0
upscale_512_0_conv2d_conv2d (Conv2D) (None, 9, 9, 2048) 9439232
upscale_512_0_conv2d_leakyrelu (LeakyReLU) (None, 9, 9, 2048) 0
upscale_512_0_pixelshuffler (PixelShuffler) (None, 18, 18, 512) 0
====================================================================================================
Total params: 89,199,796
Trainable params: 89,199,796
Non-trainable params: 0
____________________________________________________________________________________________________
Model: "decoder_a"
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_2 (InputLayer) [(None, 18, 18, 512) 0 []
]
upscale_504_0_conv2d_conv2d (Co (None, 18, 18, 2016) 9291744 ['input_2[0][0]']
nv2D)
upscale_504_0_pixelshuffler (Pi (None, 36, 36, 504) 0 ['upscale_504_0_conv2d_conv2d[0][
xelShuffler) 0]']
leaky_re_lu (LeakyReLU) (None, 36, 36, 504) 0 ['upscale_504_0_pixelshuffler[0][
0]']
residual_504_0_conv2d_0 (Conv2D (None, 36, 36, 504) 2286648 ['leaky_re_lu[0][0]']
)
residual_504_0_leakyrelu_1 (Lea (None, 36, 36, 504) 0 ['residual_504_0_conv2d_0[0][0]']
kyReLU)
residual_504_0_conv2d_1 (Conv2D (None, 36, 36, 504) 2286648 ['residual_504_0_leakyrelu_1[0][0
) ]']
add (Add) (None, 36, 36, 504) 0 ['residual_504_0_conv2d_1[0][0]',
'leaky_re_lu[0][0]']
residual_504_0_leakyrelu_3 (Lea (None, 36, 36, 504) 0 ['add[0][0]']
kyReLU)
residual_504_1_conv2d_0 (Conv2D (None, 36, 36, 504) 2286648 ['residual_504_0_leakyrelu_3[0][0
) ]']
residual_504_1_leakyrelu_1 (Lea (None, 36, 36, 504) 0 ['residual_504_1_conv2d_0[0][0]']
kyReLU)
residual_504_1_conv2d_1 (Conv2D (None, 36, 36, 504) 2286648 ['residual_504_1_leakyrelu_1[0][0
) ]']
add_1 (Add) (None, 36, 36, 504) 0 ['residual_504_1_conv2d_1[0][0]',
'residual_504_0_leakyrelu_3[0][0
]']
residual_504_1_leakyrelu_3 (Lea (None, 36, 36, 504) 0 ['add_1[0][0]']
kyReLU)
upscale_252_0_conv2d_conv2d (Co (None, 36, 36, 1008) 4573296 ['residual_504_1_leakyrelu_3[0][0
nv2D) ]']
upscale_252_0_pixelshuffler (Pi (None, 72, 72, 252) 0 ['upscale_252_0_conv2d_conv2d[0][
xelShuffler) 0]']
leaky_re_lu_1 (LeakyReLU) (None, 72, 72, 252) 0 ['upscale_252_0_pixelshuffler[0][
0]']
residual_252_0_conv2d_0 (Conv2D (None, 72, 72, 252) 571788 ['leaky_re_lu_1[0][0]']
)
residual_252_0_leakyrelu_1 (Lea (None, 72, 72, 252) 0 ['residual_252_0_conv2d_0[0][0]']
kyReLU)
residual_252_0_conv2d_1 (Conv2D (None, 72, 72, 252) 571788 ['residual_252_0_leakyrelu_1[0][0
) ]']
add_2 (Add) (None, 72, 72, 252) 0 ['residual_252_0_conv2d_1[0][0]',
'leaky_re_lu_1[0][0]']
residual_252_0_leakyrelu_3 (Lea (None, 72, 72, 252) 0 ['add_2[0][0]']
kyReLU)
residual_252_1_conv2d_0 (Conv2D (None, 72, 72, 252) 571788 ['residual_252_0_leakyrelu_3[0][0
) ]']
residual_252_1_leakyrelu_1 (Lea (None, 72, 72, 252) 0 ['residual_252_1_conv2d_0[0][0]']
kyReLU)
residual_252_1_conv2d_1 (Conv2D (None, 72, 72, 252) 571788 ['residual_252_1_leakyrelu_1[0][0
) ]']
add_3 (Add) (None, 72, 72, 252) 0 ['residual_252_1_conv2d_1[0][0]',
'residual_252_0_leakyrelu_3[0][0
]']
residual_252_1_leakyrelu_3 (Lea (None, 72, 72, 252) 0 ['add_3[0][0]']
kyReLU)
upscale_126_0_conv2d_conv2d (Co (None, 72, 72, 504) 1143576 ['residual_252_1_leakyrelu_3[0][0
nv2D) ]']
upscale_126_0_pixelshuffler (Pi (None, 144, 144, 126 0 ['upscale_126_0_conv2d_conv2d[0][
xelShuffler) ) 0]']
leaky_re_lu_2 (LeakyReLU) (None, 144, 144, 126 0 ['upscale_126_0_pixelshuffler[0][
) 0]']
residual_126_0_conv2d_0 (Conv2D (None, 144, 144, 126 143010 ['leaky_re_lu_2[0][0]']
) )
residual_126_0_leakyrelu_1 (Lea (None, 144, 144, 126 0 ['residual_126_0_conv2d_0[0][0]']
kyReLU) )
upscale_168_0_conv2d_conv2d (Co (None, 18, 18, 672) 3097248 ['input_2[0][0]']
nv2D)
residual_126_0_conv2d_1 (Conv2D (None, 144, 144, 126 143010 ['residual_126_0_leakyrelu_1[0][0
) ) ]']
upscale_168_0_conv2d_leakyrelu (None, 18, 18, 672) 0 ['upscale_168_0_conv2d_conv2d[0][
(LeakyReLU) 0]']
add_4 (Add) (None, 144, 144, 126 0 ['residual_126_0_conv2d_1[0][0]',
) 'leaky_re_lu_2[0][0]']
upscale_168_0_pixelshuffler (Pi (None, 36, 36, 168) 0 ['upscale_168_0_conv2d_leakyrelu[
xelShuffler) 0][0]']
residual_126_0_leakyrelu_3 (Lea (None, 144, 144, 126 0 ['add_4[0][0]']
kyReLU) )
upscale_84_0_conv2d_conv2d (Con (None, 36, 36, 336) 508368 ['upscale_168_0_pixelshuffler[0][
v2D) 0]']
residual_126_1_conv2d_0 (Conv2D (None, 144, 144, 126 143010 ['residual_126_0_leakyrelu_3[0][0
) ) ]']
upscale_84_0_conv2d_leakyrelu ( (None, 36, 36, 336) 0 ['upscale_84_0_conv2d_conv2d[0][0
LeakyReLU) ]']
residual_126_1_leakyrelu_1 (Lea (None, 144, 144, 126 0 ['residual_126_1_conv2d_0[0][0]']
kyReLU) )
upscale_84_0_pixelshuffler (Pix (None, 72, 72, 84) 0 ['upscale_84_0_conv2d_leakyrelu[0
elShuffler) ][0]']
residual_126_1_conv2d_1 (Conv2D (None, 144, 144, 126 143010 ['residual_126_1_leakyrelu_1[0][0
) ) ]']
upscale_42_0_conv2d_conv2d (Con (None, 72, 72, 168) 127176 ['upscale_84_0_pixelshuffler[0][0
v2D) ]']
add_5 (Add) (None, 144, 144, 126 0 ['residual_126_1_conv2d_1[0][0]',
) 'residual_126_0_leakyrelu_3[0][0
]']
upscale_42_0_conv2d_leakyrelu ( (None, 72, 72, 168) 0 ['upscale_42_0_conv2d_conv2d[0][0
LeakyReLU) ]']
residual_126_1_leakyrelu_3 (Lea (None, 144, 144, 126 0 ['add_5[0][0]']
kyReLU) )
upscale_42_0_pixelshuffler (Pix (None, 144, 144, 42) 0 ['upscale_42_0_conv2d_leakyrelu[0
elShuffler) ][0]']
face_out_32_a_conv2d (Conv2D) (None, 36, 36, 3) 37803 ['residual_504_1_leakyrelu_3[0][0
]']
face_out_64_a_conv2d (Conv2D) (None, 72, 72, 3) 18903 ['residual_252_1_leakyrelu_3[0][0
]']
face_out_128_a_conv2d (Conv2D) (None, 144, 144, 3) 9453 ['residual_126_1_leakyrelu_3[0][0
]']
mask_out_a_conv2d (Conv2D) (None, 144, 144, 1) 1051 ['upscale_42_0_pixelshuffler[0][0
]']
face_out_32_a (Activation) (None, 36, 36, 3) 0 ['face_out_32_a_conv2d[0][0]']
face_out_64_a (Activation) (None, 72, 72, 3) 0 ['face_out_64_a_conv2d[0][0]']
face_out_128_a (Activation) (None, 144, 144, 3) 0 ['face_out_128_a_conv2d[0][0]']
mask_out_a (Activation) (None, 144, 144, 1) 0 ['mask_out_a_conv2d[0][0]']
====================================================================================================
Total params: 30,814,402
Trainable params: 30,814,402
Non-trainable params: 0
____________________________________________________________________________________________________
Model: "decoder_b"
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_3 (InputLayer) [(None, 18, 18, 512) 0 []
]
upscale_504_1_conv2d_conv2d (Co (None, 18, 18, 2016) 9291744 ['input_3[0][0]']
nv2D)
upscale_504_1_pixelshuffler (Pi (None, 36, 36, 504) 0 ['upscale_504_1_conv2d_conv2d[0][
xelShuffler) 0]']
leaky_re_lu_3 (LeakyReLU) (None, 36, 36, 504) 0 ['upscale_504_1_pixelshuffler[0][
0]']
residual_504_2_conv2d_0 (Conv2D (None, 36, 36, 504) 2286648 ['leaky_re_lu_3[0][0]']
)
residual_504_2_leakyrelu_1 (Lea (None, 36, 36, 504) 0 ['residual_504_2_conv2d_0[0][0]']
kyReLU)
residual_504_2_conv2d_1 (Conv2D (None, 36, 36, 504) 2286648 ['residual_504_2_leakyrelu_1[0][0
) ]']
add_6 (Add) (None, 36, 36, 504) 0 ['residual_504_2_conv2d_1[0][0]',
'leaky_re_lu_3[0][0]']
residual_504_2_leakyrelu_3 (Lea (None, 36, 36, 504) 0 ['add_6[0][0]']
kyReLU)
residual_504_3_conv2d_0 (Conv2D (None, 36, 36, 504) 2286648 ['residual_504_2_leakyrelu_3[0][0
) ]']
residual_504_3_leakyrelu_1 (Lea (None, 36, 36, 504) 0 ['residual_504_3_conv2d_0[0][0]']
kyReLU)
residual_504_3_conv2d_1 (Conv2D (None, 36, 36, 504) 2286648 ['residual_504_3_leakyrelu_1[0][0
) ]']
add_7 (Add) (None, 36, 36, 504) 0 ['residual_504_3_conv2d_1[0][0]',
'residual_504_2_leakyrelu_3[0][0
]']
residual_504_3_leakyrelu_3 (Lea (None, 36, 36, 504) 0 ['add_7[0][0]']
kyReLU)
upscale_252_1_conv2d_conv2d (Co (None, 36, 36, 1008) 4573296 ['residual_504_3_leakyrelu_3[0][0
nv2D) ]']
upscale_252_1_pixelshuffler (Pi (None, 72, 72, 252) 0 ['upscale_252_1_conv2d_conv2d[0][
xelShuffler) 0]']
leaky_re_lu_4 (LeakyReLU) (None, 72, 72, 252) 0 ['upscale_252_1_pixelshuffler[0][
0]']
residual_252_2_conv2d_0 (Conv2D (None, 72, 72, 252) 571788 ['leaky_re_lu_4[0][0]']
)
residual_252_2_leakyrelu_1 (Lea (None, 72, 72, 252) 0 ['residual_252_2_conv2d_0[0][0]']
kyReLU)
residual_252_2_conv2d_1 (Conv2D (None, 72, 72, 252) 571788 ['residual_252_2_leakyrelu_1[0][0
) ]']
add_8 (Add) (None, 72, 72, 252) 0 ['residual_252_2_conv2d_1[0][0]',
'leaky_re_lu_4[0][0]']
residual_252_2_leakyrelu_3 (Lea (None, 72, 72, 252) 0 ['add_8[0][0]']
kyReLU)
residual_252_3_conv2d_0 (Conv2D (None, 72, 72, 252) 571788 ['residual_252_2_leakyrelu_3[0][0
) ]']
residual_252_3_leakyrelu_1 (Lea (None, 72, 72, 252) 0 ['residual_252_3_conv2d_0[0][0]']
kyReLU)
residual_252_3_conv2d_1 (Conv2D (None, 72, 72, 252) 571788 ['residual_252_3_leakyrelu_1[0][0
) ]']
add_9 (Add) (None, 72, 72, 252) 0 ['residual_252_3_conv2d_1[0][0]',
'residual_252_2_leakyrelu_3[0][0
]']
residual_252_3_leakyrelu_3 (Lea (None, 72, 72, 252) 0 ['add_9[0][0]']
kyReLU)
upscale_126_1_conv2d_conv2d (Co (None, 72, 72, 504) 1143576 ['residual_252_3_leakyrelu_3[0][0
nv2D) ]']
upscale_126_1_pixelshuffler (Pi (None, 144, 144, 126 0 ['upscale_126_1_conv2d_conv2d[0][
xelShuffler) ) 0]']
leaky_re_lu_5 (LeakyReLU) (None, 144, 144, 126 0 ['upscale_126_1_pixelshuffler[0][
) 0]']
residual_126_2_conv2d_0 (Conv2D (None, 144, 144, 126 143010 ['leaky_re_lu_5[0][0]']
) )
residual_126_2_leakyrelu_1 (Lea (None, 144, 144, 126 0 ['residual_126_2_conv2d_0[0][0]']
kyReLU) )
upscale_168_1_conv2d_conv2d (Co (None, 18, 18, 672) 3097248 ['input_3[0][0]']
nv2D)
residual_126_2_conv2d_1 (Conv2D (None, 144, 144, 126 143010 ['residual_126_2_leakyrelu_1[0][0
) ) ]']
upscale_168_1_conv2d_leakyrelu (None, 18, 18, 672) 0 ['upscale_168_1_conv2d_conv2d[0][
(LeakyReLU) 0]']
add_10 (Add) (None, 144, 144, 126 0 ['residual_126_2_conv2d_1[0][0]',
) 'leaky_re_lu_5[0][0]']
upscale_168_1_pixelshuffler (Pi (None, 36, 36, 168) 0 ['upscale_168_1_conv2d_leakyrelu[
xelShuffler) 0][0]']
residual_126_2_leakyrelu_3 (Lea (None, 144, 144, 126 0 ['add_10[0][0]']
kyReLU) )
upscale_84_1_conv2d_conv2d (Con (None, 36, 36, 336) 508368 ['upscale_168_1_pixelshuffler[0][
v2D) 0]']
residual_126_3_conv2d_0 (Conv2D (None, 144, 144, 126 143010 ['residual_126_2_leakyrelu_3[0][0
) ) ]']
upscale_84_1_conv2d_leakyrelu ( (None, 36, 36, 336) 0 ['upscale_84_1_conv2d_conv2d[0][0
LeakyReLU) ]']
residual_126_3_leakyrelu_1 (Lea (None, 144, 144, 126 0 ['residual_126_3_conv2d_0[0][0]']
kyReLU) )
upscale_84_1_pixelshuffler (Pix (None, 72, 72, 84) 0 ['upscale_84_1_conv2d_leakyrelu[0
elShuffler) ][0]']
residual_126_3_conv2d_1 (Conv2D (None, 144, 144, 126 143010 ['residual_126_3_leakyrelu_1[0][0
) ) ]']
upscale_42_1_conv2d_conv2d (Con (None, 72, 72, 168) 127176 ['upscale_84_1_pixelshuffler[0][0
v2D) ]']
add_11 (Add) (None, 144, 144, 126 0 ['residual_126_3_conv2d_1[0][0]',
) 'residual_126_2_leakyrelu_3[0][0
]']
upscale_42_1_conv2d_leakyrelu ( (None, 72, 72, 168) 0 ['upscale_42_1_conv2d_conv2d[0][0
LeakyReLU) ]']
residual_126_3_leakyrelu_3 (Lea (None, 144, 144, 126 0 ['add_11[0][0]']
kyReLU) )
upscale_42_1_pixelshuffler (Pix (None, 144, 144, 42) 0 ['upscale_42_1_conv2d_leakyrelu[0
elShuffler) ][0]']
face_out_32_b_conv2d (Conv2D) (None, 36, 36, 3) 37803 ['residual_504_3_leakyrelu_3[0][0
]']
face_out_64_b_conv2d (Conv2D) (None, 72, 72, 3) 18903 ['residual_252_3_leakyrelu_3[0][0
]']
face_out_128_b_conv2d (Conv2D) (None, 144, 144, 3) 9453 ['residual_126_3_leakyrelu_3[0][0
]']
mask_out_b_conv2d (Conv2D) (None, 144, 144, 1) 1051 ['upscale_42_1_pixelshuffler[0][0
]']
face_out_32_b (Activation) (None, 36, 36, 3) 0 ['face_out_32_b_conv2d[0][0]']
face_out_64_b (Activation) (None, 72, 72, 3) 0 ['face_out_64_b_conv2d[0][0]']
face_out_128_b (Activation) (None, 144, 144, 3) 0 ['face_out_128_b_conv2d[0][0]']
mask_out_b (Activation) (None, 144, 144, 1) 0 ['mask_out_b_conv2d[0][0]']
====================================================================================================
Total params: 30,814,402
Trainable params: 30,814,402
Non-trainable params: 0
____________________________________________________________________________________________________
Model: "dfl_sae_df"
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
face_in_a (InputLayer) [(None, 144, 144, 3) 0 []
]
face_in_b (InputLayer) [(None, 144, 144, 3) 0 []
]
encoder_df (Functional) (None, 18, 18, 512) 89199796 ['face_in_a[0][0]',
'face_in_b[0][0]']
decoder_a (Functional) [(None, 36, 36, 3), 30814402 ['encoder_df[0][0]']
(None, 72, 72, 3),
(None, 144, 144, 3)
, (None, 144, 144, 1
)]
decoder_b (Functional) [(None, 36, 36, 3), 30814402 ['encoder_df[1][0]']
(None, 72, 72, 3),
(None, 144, 144, 3)
, (None, 144, 144, 1
)]
====================================================================================================
Total params: 150,828,600
Trainable params: 150,828,600
Non-trainable params: 0
____________________________________________________________________________________________________
Process exited.
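For what it's worth, the totals in the summary are internally consistent: the combined dfl_sae_df parameter count is exactly the sum of the encoder and the two decoders, which is what you'd expect since it just wires those three submodels together. A quick check:

```python
# Parameter totals copied from the model summaries above.
encoder_df = 89_199_796
decoder_a = 30_814_402
decoder_b = 30_814_402

total = encoder_df + decoder_a + decoder_b
assert total == 150_828_600  # matches the dfl_sae_df "Total params" line
```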