
Generative AI in 2017 & 2014

After reading Image-to-Image Translation with Conditional Adversarial Networks (2017) https://arxiv.org/abs/1611.07004, Christopher Hesse created a TensorFlow port of pix2pix, covered in his article Image-to-Image Translation in Tensorflow, with Google Chrome pressed into service as the user interface.


The result: edges2cats. The output was simultaneously crude and mind-bending. Doodle just a few lines and a cat would emerge.

[image: edges2cats demo]

All it took was 2,000 stock photos of cats, with edges generated automatically from those photos. Cat-colored objects then entered the world!


And sometimes, nightmarish faces.

[image: edges2cats output]

Other times merely nightmarish:

[image: edges2cats output]

Laughing Squid had its own fun at the poor abominations <3

[image: Laughing Squid's edges2cats results]
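The trick behind that training set is worth spelling out: the model never needed hand-drawn labels, because each (edges, photo) pair was generated automatically by running an edge detector over each photo. As a minimal sketch, here is a toy gradient-threshold edge extractor in NumPy; the real edges2cats pipeline used a proper learned edge detector, and the function names and array shapes here are purely illustrative.

```python
import numpy as np

def edge_map(photo, threshold=0.2):
    """Toy edge extractor: threshold the gradient magnitude.
    (A stand-in for the real edge detector; it only illustrates
    how photo -> edges training pairs can be made automatically.)"""
    gy, gx = np.gradient(photo.astype(float))   # per-axis gradients
    return (np.hypot(gx, gy) > threshold).astype(np.uint8)

# Build (edges, photo) pairs from a stack of photos, as was done
# for the ~2,000 cat images. These 8x8 "photos" are placeholders.
photos = np.zeros((3, 8, 8))
photos[:, 2:6, 2:6] = 1.0                       # a white square "cat"
pairs = [(edge_map(p), p) for p in photos]
```

The conditional GAN then learns the inverse mapping, edges back to photo, which is why a few doodled lines were enough to summon a cat.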

Now, seven years later, the Midjourney v6 alpha generates the following from a prompt as simple as "cat":

[image: Midjourney v6 "cat" output]



DALL·E 3 is happening now:

[image: DALL·E 3 output]

Just stunning. That was only 7 years ago!


Google's Deep Dream? That was 10 years ago, in 2014.

[image: Deep Dream output]

Itself based on an idea first publicly articulated in 1988: Lewis, J.P. (1988). "Creation by refinement: a creativity paradigm for gradient descent learning networks". IEEE International Conference on Neural Networks. pp. 229-233 vol. 2. doi:10.1109/ICNN.1988.23933. ISBN 0-7803-0999-5. A reptiloid acid dream.


Hehe. Wild times.
