“legs” is a visual experiment combining stream-of-consciousness word association, AI-generated imagery, and mobile imaging tools.
Starting from a geographic prompt (in this case, the legs on Haight Street in San Francisco), I wrote a list of 50 words, each related either to the legs or to the previous word.
Then, using a text-to-image AI, I generated an image for each of these words.
Using the mobile app Glitché, I further abstracted these images.
Finally, using image interpolation software, I sequenced the images in time based on their visual attributes, in a way I predicted would produce an interesting motion output.
EDIT: at the time I conceived and executed this project, AI-generated imagery was far more primitive and my understanding of its ethical implications was not as advanced. I think I put this together in Fall 2021.