Karaoke, Animal Print and AI: An interview with Dayna McLeod

BY THE ENVIRONMENTAL MEDIA LAB

EML: Tell us what you’re listening to, watching and reading these days; what concepts you’re drawing from to make art. 

DM: So many things! I find comfort in television shows I’ve watched before—they seem safe during these ongoing and never-ending pandemic times. I’m purposely not reading about dream interpretation right now, as I am working on a project about sleep and dreams in an artist residency for The Sociability of Sleep. I am avoiding texts and theories that interpret dreams because I want to produce this work before my subconscious dreaming self gets together with my conscious self to influence my project. I want my conscious and subconscious to collaborate for sure, but for right now, I want them to stay in their lanes. I might get them more thoroughly intertwined and working together in the next leg of this project, but for now, a cigar is just a cigar.

EML: Tell us about your art practice. I’ve (MH) known you for a long time and you’ve always managed to be extremely prolific – even when you were doing your PhD. Tell us what you’re up to these days and how academia and art have come together (and possibly apart) for you. How was your doctoral research-creation process? 

DM: In my practice, I use autoethnographic methods to examine representation, experience, and embodiment from my perspective as a white cisgender nondisabled middle-class passing middle-aging queer woman in performance-based works, both in front of a live audience and in front of the camera for video and moving image installations. The autoethnographic and research-creation methods I use in my practice work on and with representation that is based on and centered in my lived experience. It is from this position that I wrote my research-creation dissertation. How women’s bodies (and here I use this term in the most inclusive way possible) are perceived as public property and other questions about representation are key to this work. When I began my PhD, I was a few months into performing a yearlong durational performance called Cougar for a Year in which I wore nothing but animal print, 24 hours a day/seven days a week to challenge, question, and live the stereotype of a cougar—an over-forty woman who aggressively demonstrates her sexuality. This project grounded my art practice in my PhD work. It is the first chapter of my dissertation, which I have since expanded on for an article in Theatre Research in Canada. Originally, the cougar project was going to be the basis for my entire dissertation, but then a vaginal speaker (targeted to pregnant parents) appeared in my Facebook Newsfeed, and my performance-based research for my dissertation changed.

Intimate Karaoke, Live at Uterine Concert Hall is a sound-based interactive performance installation that asks audiences to imagine my middle-aging queer uterus as a physical place and space to be used for art production, not human reproduction. I have performed this work over the past six years, and used it as a case study for my dissertation to think through some of the ways queerness, gender, sound, vulnerability, empathy, and age work on the body and have the potential to circulate and connect with others as experience and representation. In this piece, audiences are invited to sing their favourite karaoke song into my uterus while other audience members listen via stethoscope through my flesh. Karaoke singers are in one room singing in front of an audience and can hear the musical track in their headphones. There’s also plenty of reverb so the singer sounds amazing to themselves. The trick here is that everyone in that room with them can only hear the voice, not the musical backing track, so it is as though the singer is singing a cappella and is demonstratively vulnerable. Both vocals and music are wired into another room and literally into my body via vaginal speaker, where I am also demonstratively vulnerable as I am penetrated by sound. I’m basically hosting a listening party with and through my body. Audiences can listen to this mixture of music—the karaoke singer, the backing musical track, and any gurgles or pulses my body performs—through a shared stethoscope with several earpieces (for audience members and for me so that I can find the sound) with the bell of the stethoscope trained on the outside of my pelvis. I have published several academic journal articles that examine this performance in relation to autoethnography, sound, and age while considering cisheteropatriarchy, queerness, and expectations of bodies marked female. These articles are extracted from and expand on writing and thinking from my dissertation. This is where art and academia have come together for me during and since my dissertation: thinking through my practice in thorough and nuanced ways complete with references and footnotes.


EML: Tell us about your work with AI…

DM: I was really taken with the AI-generated article The Pleasure Panic that Heliotrope produced, so much so that I made Capitalism Machina, a video essay that uses this text as a script. I used an AI actor to narrate the text and paired this with an excerpt from the 2014 film Ex Machina (directed by Alex Garland), which features Alicia Vikander as the AI robot Ava, who is more self-aware than her creator gives her credit for. I’ve taken the end of this visually rich film when—SPOILER ALERT—Ava is making her nonchalant escape after murdering her creator and locking up Caleb (played by Domhnall Gleeson), a computer programmer who was tasked with testing Ava’s abilities, consciousness, and self-awareness. This section of the film immediately came to mind when I read the essay because it seemed to illustrate the AI writer’s critique of capitalism:

“Treating people as human capital is how we justify the exploitation of bodies for profit, whether those bodies are working in coal mines or laying microchips on an assembly line. Human capital is nothing more than a way to view humans as property. Capitalism is not simply an economic system; it's a logic that permeates every aspect of our lives.”

In making Capitalism Machina, this section seemed to dovetail perfectly with shots of Ava assembling the surface of her body by applying synthetic skin she has taken from other AI robots, then examining herself in the mirror to appreciate and evaluate the results.

As I experimented with different AI voice actors, Kaylee by Replica Studios jumped out at me. The bio for this AI character describes her as “from the sun-soaked shores of California, Kaylee is all about the West Coast Spirit and vibe. A little bit of drama a whole lot of sass, this California girl is a great supporting starlet.” Kaylee features three settings ripe with varying vocal fry: Angry, Distraught, and Sassy (I’ve used Sassy for almost all of the narration and Angry for the conclusion). I was drawn to this voice because, much like Ava, the underestimated AI robot in Ex Machina, I see Kaylee as representative of young women who are underestimated because of how they sound. Kaylee’s voice also reminds me of Shalita Grant’s outstanding portrayal of rookie lawyer Cassidy Diamond in season 3 of Search Party, and how Grant uses vocal fry to charm and disarm those around her, potentially causing people to underestimate her and her abilities.

EML: This is so interesting to me, that the parameters of emotion, or maybe of “emoting” through voice specifically, are set by companies that offer up different registers for us customers to use… from ‘light-hearted’ to ‘wise’ to ‘persuasive’ to ‘distraught’. Replica Studios works with voice actors to train each of the AI voices – and this makes us think of the recent example of the new Netflix documentary The Andy Warhol Diaries, where an AI trained on Andy Warhol’s real voice is used to narrate his diaries. A similar thing was done in a documentary about Anthony Bourdain, Roadrunner. When an AI voice is used for fiction or gaming, it’s one thing, but when it’s done to commemorate someone who has died, it strikes a different chord. In the case of the Bourdain film, it wasn’t disclosed to viewers. Anyway, all of this has us wondering about your art practice, and whether or not you think of voices as belonging to anyone, if it matters, or if art in fact allows for a more risky/nuanced/playful/reckless approach to all of these emergent technologies?

DM: Some AI actors are based on real people: an actor’s voice and/or likeness has been captured with consent and intent to be used as AI. In these cases, there is a contract between an actor and the company that produces the AI. That contract extends those rights and permissions to a client or user, and these processes and agreements are made clear to all parties. This transparency and accountability of use is important.

What I like about working with AI voice and image actors from companies like Synthesia and Colossyan is that they don’t seem to react or emote differently based on the content that I make them say. I like these AI performances precisely because of this disjointed uncanniness, as I am not so interested in an authentic mimicry of humanity or the plausible believability of a deepfake. However, because I can basically make an AI actor say whatever I want, I question my ethical responsibilities to AI and to the actors used in original captures to produce this technology. I consider questions of consent and context despite actor-to-end-user contractual agreements. AI actor casting is also problematic because of the way their embodiment has the potential to extend to and mis/represent specific people and communities in terms of race, gender, sexuality, age, disability, and class. The AI production applications that I am experimenting with pitch themselves to businesses looking to cut down on production expenses for internal and external promotional materials, and they offer a range of diverse AI actors for this purpose without guiding principles related to identity beyond concerns about the ethics of “content authenticity and provenance” and misinformation. This approach to AI seems to make it easy to perform digital blackface under the guise of diversity, equity, and inclusion without acknowledgement, discussion, responsibility, or accountability. This is left up to the client or user at the end of a turnkey line of production.

In an artist residency for The Sociability of Sleep, I started to experiment with AI actors reciting my dreams and dreams that I’ve collected from friends and friends of friends. When I started collecting dreams, I didn’t know how I would use them, but I have since tried translating them in digital collage and with AI actors. I have decided not to use AI actors as talking heads to recite other people’s dreams, as there is an embodied misrepresentation here that I don’t think honours the generosity and trust of someone sharing their dream with me. I have instead embraced digital collage as an aesthetic and technique for interpreting the dreams people have shared with me. I’ve also embraced the failure of this act, since my version of your dream will never accurately reproduce how you experienced the dream when you dreamt it.

EML: In my academic work lately, I’m looking at how data centers generate the idea of ‘the human’ – via AI chatbots, holograms (Houston), voiceovers (Bourdain), robots, etc. I’m wondering if, in your work, you’ve had insights about how we conceive of ‘the human’?

DM: Before I read The Pleasure Panic and only knew that it was AI-generated, I thought it would be really bad and obviously written by a robot, like the AI-generated scripts and novels that have circulated online over the past few years. However, although the essay is not perfect and there are flaws with it, I was disturbed by how not entirely awful it is—by how passable it is as a human-written text. I can’t help but think about high school, college, and university teaching and how over-extended and overworked professors might grade this essay if it were submitted by one of their students—that this paper and others like it will take up the time, labour, and resources of these professors to the detriment of students who are actually doing this work themselves. Is this plagiarism? But then I think I’m being too pessimistic, and about students who might benefit from the writing help that applications like AI Writer™, Sassbook, and Ryte provide when used as a tool. Check out the AI-generated writing Ryte produced from my input for this interview:

DM: What I find interesting in these screenshots is that Ryte seems to be underscoring the text it’s generated with a sales pitch to use AI technology and AI writers. 

EML: I’m so glad you raised this. During one of our EML Reading Groups (Twitter chats), this issue came up a lot! There is so much to be said about the way students learn these days that is drastically different from what we experienced as students and what we expect as a result. I think it’s tough to grow into a world that is tethered to the internet, and where ‘hot takes’ matter a lot, and where reading is less enticing than video and so on. The media landscape has changed not only the way students think, but how they think of themselves in the world. I’m increasingly aware of these differences and always struggle with what it means to teach now, with all of these technologies. My sense is that AI-writers are already being used by students, along with so many other services that can assist in essay writing. I think the question is why students feel the need to turn to these services, on the one hand, and then also what the stakes are when they do. Plagiarism seems to be a growing problem, but that’s even without AI – turning to the internet as a source already exacerbated this issue. You teach as well – what are your thoughts on this? And what are your thoughts on using AI-writers to assist you in your art practice? How would you credit AI in that case? 

DM: Like you’ve said, our experience as students is so incredibly different from what students are experiencing today, and tools like these might help. In terms of assignment citations and credit, these tools are already being used, so let’s not make their use shameful but instead encourage good citational practices and clarify how they could be used while contextualizing and defining plagiarism. I’m a sessional part-time contract teacher at Concordia University and McGill University, so if I’m offered another course to teach, I’ll certainly look to institutional leadership about AI usage and to Department Chairs and full-time tenured profs for guidance.

I have not used AI-writers in my art practice or writing (yet?!), only the free trial of Ryte for this interview, as shown in the above screenshots where, sadly, I have run out of free words. However, your question about how I might credit AI in my practice makes me think of invisibilized labour practices within art-making—established artists who employ assistants in busy studios, assistants who often go uncredited in the creation of works. Is AI like the paint, or is AI like the assistant who lays down the sky in the background of a landscape?

EML: Also, in the screen grabs of the AI writer above, you show that AI is marketing itself through the tool! It’s basically writing that machines will take over and humans will take a back seat. AI also seems to justify students using AI, which is… funny. Is all of this foreshadowing an inevitable turn to AI?

DM: I used to joke that we should be nice to our toasters for when the robots take over the world, but it seems that the toasters have already taken over, and I’m just out of the loop still trying to figure out how to change the clock on the stove.

EML: Can you say more about this feeling of falling behind technology? When did it start and how does it play out? When we are young, being on top of tech seems so important. I think this changes as we age, and that is partly wisdom…

DM: I have this recurring dream that there is a machine that I don’t know how to use, but I’m supposed to be an expert on, and if I can’t get it working, everyone dies.



 

