Artificial Visionaries: Exploring the intersections of machine vision, computation, and our aural and visual cultures
27 November 2024 @ 9:00 am - 28 November 2024 @ 4:45 pm AEST

Artificial Visionaries: exploring the intersections of machine vision, computation, and our aural and visual cultures is a two-day symposium that brings together scholars exploring the intersections between computation and creativity across a broad range of aural and visual cultures.
As artificial intelligence and generative technologies become entangled with our day-to-day creative practices and with industrial forms of cultural production, they prompt critical reflection on the affordances, differences, and points of connection between human perception and machine vision, human labour and machine labour, and human creativity and computational creativity. How are generative technologies being incorporated into our creative practices? How are data and algorithms influencing the way we make, exhibit, distribute, perceive, or consume art?
ChatGPT suggested we call this event “artificial visionaries”, so we did. But who are the visionaries? The hallucinations of the machines, or the creative visions (and hallucinations) of the humans who use them? Whilst the phrase may bring to mind questions of authenticity, authorship, or aesthetic judgement for some cultural studies scholars, we’re sure it will prompt very different ideas for a computational scientist. We feel that the polysemy of a machine-generated term such as this is also representative of the many different approaches scholars are taking toward digital cultural research.

Travel awards (HDR Students)
Interstate applicants: ADM+S research training has earmarked a limited number of travel bursaries to enable our interstate ADM+S students and ECR members to travel and participate in person. These bursaries contribute to return economy airfares and accommodation. Please email m.thomas@uq.edu.au and sally.storey@rmit.edu.au if you would like to apply for a travel bursary to attend.
This event has been organised by Meg Herrmann with the support of the Centre for Digital Cultures & Societies at the University of Queensland and the ARC Centre of Excellence for Automated Decision-Making and Society.
KEYNOTE SPEAKERS

Dr Joel Stern presents ‘Degenerative Music: Listening with and against algorithmic aberrations’
Explore acoustic chicago blues algorave. Make a song that feels how you feel. Write a songbook about automatic music generation. Prompt: choir, replication, disquiet, clone, drone, decompose, female vocalist, rhythmic, LLM poetry, DIY, heavy, absurd. Enter custom mode. Perform live.
“Suno is building a future where anyone can make great music. Whether you’re a shower singer or a charting artist, we break barriers between you and the song you dream of making. No instrument needed, just imagination. From your mind to music.”
“Udio builds AI tools to enable the next generation of music creators. We believe AI has the potential to expand musical horizons and enable anyone to create extraordinary music. With Udio, anyone with a tune, some lyrics, or a funny idea can now express themselves in music.”
Generative AI platforms like Suno and Udio promise a future where “anyone can make great music”, regardless of skill, experience, or knowledge, simply by using a prompt interface. While this notion radically redefines what it means to create music in a conventional sense, it aligns, weirdly and perhaps unintentionally, with certain avant-garde and experimental music traditions, which foreground de-skilling (no instrument needed…) and conceptual purity (…just imagination).
Further, when we listen to AI-generated music in 2024, despite promises to the contrary, we don’t hear seamless genre replication or polished production. Instead, what stands out are aberrations: glitches, artefacts, and strange affectations that we might call sonic disaggregations or degenerations. These imperfections are not merely flaws; they are the defining features of AI music.
Rather than focusing on AI’s ability to faithfully replicate musical conventions, this talk proposes that the medium specificity of AI music lies in its errors and mutations, its absence of human intentionality, and the ‘lack of shame’ that often accompanies creative choices. While these qualities preclude (at least for now) AI-generated music from being seen as “authentic” popular music, they fulfil long-held avant-garde desires to replace aesthetic choices with automated processes, structures, mechanisations and prompts.

Dr Lisa Bode presents ‘Weird by Design: Generative AI and the aesthetics and visual culture of weirdness’
In 2024, new generative AI models for image and video are released every few weeks, and each one seems to promise improved accuracy and unprecedented user control. Often, though, if we consider AI-generated videos such as “Will Smith Eating Spaghetti” (2023), made by Reddit user chaindrop using HuggingFace’s ModelScope text2video, it is the inaccuracy and chaos of AI-generated works that constitutes their viral attraction. This is a rarely examined aesthetic quality we tend to call weird.
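(For readers curious about the tooling: the ModelScope text-to-video checkpoint behind that clip is publicly released and can be run through Hugging Face’s diffusers library. Below is a minimal sketch following the diffusers documentation; the prompt is illustrative, a CUDA GPU is assumed, and in newer diffusers versions the returned frames may need indexing as frames[0] before export.)

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load the public ModelScope text2video checkpoint in half precision.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # offload idle submodules to save GPU memory

# Generate a short clip; .frames holds the video as a sequence of images.
frames = pipe("Will Smith eating spaghetti", num_inference_steps=25).frames
print(export_to_video(frames))  # writes an .mp4 file and returns its path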
In one sense – but not all – AI-generated weirdness is related to what Carolyn Kane has called “the aesthetics of failure” (2019): associated with technological artefacts that are part of development cycles, but slowly disappearing with the training of each new model. It is possible that weirdness is merely a temporary characteristic of AI aesthetics – one that is leant into or emphasised in vernacular and artistic uses of these applications. But weirdness may also be a more persistent feature of generative AI. For, as I argue here, it operates alongside, underneath, and in relation to generative AI’s developmental trajectories, and their corporate framing and branding.

This talk is a brief exploration of the manifestation, experience, and functions of AI weirdness, and of how and why weirdness – at least for now – is a significant part of the shifting aesthetic and cultural frameworks through which we understand, share, categorise, and experience emerging AI applications and the text, images, and video they produce.