Music rhythm games are very popular today, but designing beatmaps is time-consuming, and existing beatmap generation methods have several limitations. In this thesis, we propose a beatmap generation method based on Generative Adversarial Networks (GANs). The audio is first separated into vocal and instrumental parts, which brings the generated beatmaps closer to the design philosophy of human beatmap designers. Our model combines two GAN variants: Conditional Generative Adversarial Nets (CGANs), used to incorporate audio information, and the Improved Wasserstein GAN (WGAN-GP), used to improve training convergence. We compare the results of this method with those obtained under other experimental conditions. We also design a questionnaire for a subjective evaluation of our generated beatmaps against real ones. The generated beatmaps receive scores very close to those of the real beatmaps, indicating that our results closely resemble human-designed beatmaps.
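
As a sketch of how the two GAN concepts named above are typically combined (the notation follows the standard CGAN and WGAN-GP formulations rather than symbols defined in this thesis), the critic loss of a conditional WGAN-GP can be written as

$$
L_D \;=\; \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\!\big[D(\tilde{x}\mid y)\big] \;-\; \mathbb{E}_{x\sim\mathbb{P}_r}\!\big[D(x\mid y)\big] \;+\; \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\!\Big[\big(\lVert\nabla_{\hat{x}} D(\hat{x}\mid y)\rVert_2 - 1\big)^2\Big],
$$

where $y$ is the conditioning audio information, $\tilde{x}=G(z\mid y)$ is a generated beatmap, $\hat{x}$ is sampled uniformly along straight lines between real and generated samples, and $\lambda$ is the gradient-penalty coefficient (set to 10 in the original WGAN-GP paper). Conditioning makes the generated notes follow the music, while the gradient penalty replaces weight clipping to stabilize and speed up training.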