2020
Tiny Hero — pixel characters with GANs
Dataset + models: DCGAN, conditional DCGAN with angle embeddings, and a deep convolutional autoencoder, all built on Universal LPC sprites (dual-licensed GPL 3.0 / CC-BY-SA 3.0).
TinyHero contains thousands of small RGB tiles: characters were procedurally sampled (body, skin, equipment, four viewing angles) from the Universal LPC spritesheet pipeline, then used to train and compare generative models.
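The procedural sampling step can be sketched as follows. The attribute lists below are hypothetical placeholders (the real LPC pipeline offers many more bodies, skin tones, and equipment pieces); only the four viewing angles are taken from the description above.

```python
import itertools
import random

# Hypothetical attribute lists; the real LPC spritesheet pipeline has many more options.
BODIES = ["male", "female"]
SKINS = ["light", "dark", "tanned"]
EQUIPMENT = ["none", "leather", "plate"]
ANGLES = ["front", "back", "left", "right"]  # the four viewing angles

def sample_characters(n, seed=0):
    """Sample n distinct (body, skin, equipment, angle) combinations to render."""
    rng = random.Random(seed)
    combos = list(itertools.product(BODIES, SKINS, EQUIPMENT, ANGLES))
    return rng.sample(combos, min(n, len(combos)))
```

Each sampled tuple would then be rendered by the spritesheet pipeline into one small RGB tile.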
DCGAN & training tricks
Starting from the official PyTorch DCGAN tutorial, the experiments varied latent and feature-map sizes and tried soft and noisy labels, dropout in both generator and discriminator, and Wasserstein-style objectives to mitigate mode collapse; all runs are documented in the repository notebooks.
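The soft/noisy label trick mentioned above can be sketched like this (a common stabilization recipe; the exact ranges and flip probability used in the repo may differ):

```python
import torch

def soft_noisy_labels(batch_size, real=True, flip_prob=0.05):
    """Soft labels (real ~ [0.9, 1.0), fake ~ [0.0, 0.1)) with random flips.

    Softening keeps the discriminator from becoming overconfident; occasional
    label flips add noise that empirically helps avoid mode collapse.
    """
    if real:
        labels = 0.9 + 0.1 * torch.rand(batch_size)  # soft "real" targets
    else:
        labels = 0.1 * torch.rand(batch_size)        # soft "fake" targets
    flip = torch.rand(batch_size) < flip_prob        # noisy: flip a few labels
    labels[flip] = 1.0 - labels[flip]
    return labels
```

These labels feed straight into `BCELoss` in place of the hard 0/1 targets from the tutorial.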
Conditional DCGAN
A conditional DCGAN generates characters for a requested viewing angle: the discriminator gets an auxiliary classification head so it judges both real/fake and angle, the generator is steered with a learned angle embedding, and training uses asymmetric learning rates and soft labels.
Deep convolutional autoencoder
The autoencoder reuses the same convolutional backbone as the GAN stack, plus a fully connected bottleneck that maps to a chosen latent size. Training is stable, denoising is strong, and the compact embeddings (on the order of tens of dimensions) reconstruct characters well.
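The conv-encoder, FC-bottleneck, conv-decoder shape can be sketched as below, assuming 64x64 RGB tiles and a 32-dim latent (both are illustrative; the repo's exact sizes may differ):

```python
import torch
import torch.nn as nn

LATENT = 32  # "tens of dimensions"; the exact size is a tunable choice

class ConvAutoencoder(nn.Module):
    """Conv encoder -> fully connected bottleneck -> conv decoder."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(True),    # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(True),   # 32 -> 16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(True),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, LATENT),              # FC bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 128 * 8 * 8), nn.ReLU(True),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(True),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),        # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

For denoising, the same model is simply trained to map noise-corrupted tiles back to their clean originals under an MSE loss.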