Among other things, I am an art lover and an art collector.
The advent of large language models and other transformer-based AI models, able to generate images from plain-English text, is revolutionizing the way humans produce creative work: painting, sculpture, music, video games, films.
More importantly, for the first time ever, these AI models are democratizing access to artistic disciplines by allowing a much broader spectrum of people to become creators rather than mere consumers.
The capabilities of these models are mind-blowing and go beyond what you might have seen online so far.
These artworks don’t exist in real life. Various AIs, released by different companies, have helped me generate these images.
I’m doing this also to resurface an eternal question: What is Art?
Like every novice, I will need time to master this new technique and develop my own style.
At the beginning, I’ll steal, mixing and remixing.
At the moment, Dall-E 2 is accessible via a waiting list only (and the wait can be months long), while Midjourney and Stable Diffusion are both available in beta without a waiting list.
I’ll expand my research to additional models as I get access to them.
For now, I’m not detailing which AI model or which prompt (the request to the AI model) I used to generate each image in the gallery. I want the audience to focus on the aesthetic qualities of the image, not on the process.
Also, I don’t want to trivialize the process itself: each of these artworks requires hours of work to identify a prompt that conveys the idea or emotion I want, and then to select the best image among those the AI model offers.
In the future, every image will list both AI model and prompt.