Thanks. Like many areas of deep learning, most people use grid search for hyperparameter tuning. GANs are particularly hard here because people add extra cost terms to the objective function, and each one brings additional parameters to tune. It just needs a lot of patience.
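As a rough sketch of what that tuning loop looks like: below is a minimal grid search over a few hypothetical GAN knobs (learning rate, Adam's beta1, and the weight of an added penalty term). The `train_and_score` function is a placeholder I made up so the sketch runs; in practice it would be a full training run returning some evaluation score.

```python
from itertools import product

# Hypothetical hyperparameter grid for a GAN. The extra terms people add
# to the objective (e.g. a gradient-penalty weight) each add a knob to tune.
grid = {
    "lr": [1e-4, 2e-4, 5e-4],
    "beta1": [0.0, 0.5],
    "gp_weight": [1.0, 10.0],  # weight of an assumed added penalty term
}

def train_and_score(config):
    """Placeholder for a real training run; returns a score to maximize.
    Faked here so the sketch is self-contained and runnable."""
    return -abs(config["lr"] - 2e-4) - abs(config["beta1"] - 0.5)

def grid_search(grid):
    """Try every combination in the grid; keep the best-scoring config."""
    keys = list(grid)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = train_and_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

The grid grows multiplicatively with each new hyperparameter, which is exactly why those extra cost terms make GAN tuning so slow.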

Two major issues with GANs are diminishing/exploding gradients and mode collapse. For the first, the key research focus has been the cost function, but this is still debated.

There is less work on optimization methods. Many papers use RMSProp or Adam. Unrolled GAN changes how the optimization is done to avoid mode collapse.
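For reference, here is one Adam update for a single scalar parameter, a minimal sketch of the optimizer those papers reach for (real trainers update whole tensors; the beta1 = 0.5 default below is a setting commonly seen in GAN work, not a universal rule):

```python
import math

def adam_step(param, grad, m, v, t, lr=2e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (t is the 1-based step count).
    GAN papers often lower beta1 (e.g. to 0.5) to damp momentum."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```

Note the per-parameter scaling by the second-moment estimate; RMSProp does the same but without the momentum term and bias correction.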

So that may be what I tackle for now. I also plan to write a series on reinforcement learning, but that will take a while.

Thanks again.
