During the pre-production phase, I continued my machine learning experiments from the Yuru Kyara project, since the practical limitations I encountered during testing still needed further exploration before the technique could be incorporated into the film.
Since the original dataset consists of yuru-chara drawn from a frontal position, usually in a T- or A-pose, the machine struggles to interpret poses with overlapping body parts or characters seen from other angles and perspectives. Scenes in the imaginary mascot parade that are viewed from above, or in which characters touch one another or hold a prop, will thus be especially challenging.
I thought this could perhaps be alleviated by building 3D models or 2D cut-out rigs from the StyleGAN or Pix2PixHD outputs, although our primary preference is still 2D hand-drawn animation processed through Pix2PixHD.
Pipelines tested:
- 2D animation (input) > Pix2PixHD yurukyara 3 (output), and StyleGAN yurukyara 1 & 2 > Monster Mash and Animated Drawings
- StyleGAN yurukyara 1 > Monster Mash > Maya
- Pix2PixHD yurukyara 3 > Monster Mash > Maya
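Feeding 2D animation frames through Pix2PixHD means getting them into the folder layout its test script expects. As a minimal sketch, assuming the `test_label` subfolder convention from the public pix2pixHD repository (with `--label_nc 0` so the "labels" are treated as plain RGB drawings), a hypothetical staging helper might look like this:

```python
import shutil
from pathlib import Path

def stage_frames_for_inference(frame_dir: str, dataroot: str) -> list[str]:
    """Copy exported animation frames into <dataroot>/test_label,
    renumbering them so frame order is preserved.

    The folder name and the function itself are assumptions for
    illustration, not part of the pix2pixHD codebase."""
    dest = Path(dataroot) / "test_label"
    dest.mkdir(parents=True, exist_ok=True)
    staged = []
    # Sort so the exported frame order (frame_0001.png, ...) is kept.
    for i, frame in enumerate(sorted(Path(frame_dir).glob("*.png"))):
        target = dest / f"{i:04d}.png"
        shutil.copy(frame, target)
        staged.append(str(target))
    return staged
```

Inference would then be run with something along the lines of `python test.py --name yurukyara3 --dataroot <dataroot> --label_nc 0 --no_instance`; the exact flags are assumed from the pix2pixHD repository's documentation and would need checking against the trained model's options.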
Animated by others:
Since the model is trained on my drawings from the Yurukyara 3 dataset, it's been interesting to see how drawings by other animators are interpreted by the machine.