Thanks. As in many DL areas, most people use grid search for hyperparameter tuning. GANs are particularly hard in this respect because people add extra cost terms to the objective function, which introduces additional hyperparameters to tune. It just takes a lot of patience.
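To make the grid-search idea concrete, here is a minimal sketch. The hyperparameter names (`lr`, `beta1`, `gp_weight`) and the `train_and_score` helper are purely illustrative stand-ins for a real training run, not from any specific library:

```python
# Hypothetical grid search over a few GAN hyperparameters.
import itertools

def train_and_score(lr, beta1, gp_weight):
    # Stand-in for a real GAN training run; here it just returns a
    # dummy score that peaks at one configuration.
    return -abs(lr - 2e-4) - abs(beta1 - 0.5) - abs(gp_weight - 10)

grid = {
    "lr": [1e-4, 2e-4, 5e-4],
    "beta1": [0.0, 0.5, 0.9],
    "gp_weight": [1, 10],   # an extra cost term means an extra knob to tune
}

# Evaluate every combination and keep the best-scoring configuration.
best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda cfg: train_and_score(**cfg),
)
print(best)  # → {'lr': 0.0002, 'beta1': 0.5, 'gp_weight': 10}
```

Note how adding one extra cost term (`gp_weight` here) doubles the number of runs, which is why this takes so much patience in practice.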
Two major issues with GANs are diminishing/exploding gradients and mode collapse. For the first, research has focused mainly on the cost function, but the right approach is still debated.
There is less work on optimization methods; many papers simply use RMSProp or Adam. Unrolled GAN changes the optimization procedure itself to avoid mode collapse.
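The core unrolling trick can be sketched on a toy min-max game rather than a real GAN. Below, the "generator" parameter x minimizes f(x, y) = x·y while the "discriminator" parameter y maximizes it, with hand-derived gradients; this is a simplified illustration of the lookahead idea, not the full Unrolled GAN algorithm:

```python
# Toy min-max game f(x, y) = x*y with analytic gradients:
# df/dx = y (generator descends), df/dy = x (discriminator ascends).

def simultaneous_gd(x, y, lr, steps):
    """Plain simultaneous gradient descent/ascent: spirals away from (0, 0)."""
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x
    return x, y

def unrolled_gd(x, y, lr, steps):
    """Generator differentiates through one lookahead discriminator step.
    Surrogate loss: x * (y + lr*x), whose x-gradient is y + 2*lr*x."""
    for _ in range(steps):
        x_new = x - lr * (y + 2 * lr * x)  # gradient through the unrolled step
        y_new = y + lr * x                 # discriminator updates as usual
        x, y = x_new, y_new
    return x, y

def norm(x, y):
    return (x * x + y * y) ** 0.5

x0, y0, lr, steps = 1.0, 1.0, 0.1, 200
print(norm(*simultaneous_gd(x0, y0, lr, steps)))  # grows: plain GD diverges
print(norm(*unrolled_gd(x0, y0, lr, steps)))      # shrinks toward equilibrium
```

The plain updates rotate and expand, while differentiating through the opponent's next step adds a damping term that pulls the iterates toward the equilibrium, which is the intuition behind unrolling the discriminator.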
So that may be what I tackle for now. I also plan to write a series on reinforcement learning, but that will take a while.