Designing for AI collaboration: ADM+S toolkit presented at international conference

Awais Hameed Khan participating in an interactive workshop at the IASDR Conference in Taipei.

Author ADM+S Centre
Date 19 February 2026

Dr Awais Hameed Khan, Research Fellow at the University of Queensland node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), recently presented a new publication, the Design Patterns for AI-Curated Content Toolkit, at the 20th Biennial Congress of the International Association of Societies of Design Research (IASDR) in Taipei. The toolkit offers practical interface design patterns to help researchers and practitioners create more contextually relevant, AI-curated content experiences.

Dr Khan said the response from researchers and practitioners highlighted the growing appetite for practical tools in this space.

“It was really amazing to see how well the AI curated content design patterns were received by the audience.”

“I had both researchers and practitioners reach out to me after my talk, sharing their ideas on how they would integrate this research into their own research practice.”

Developed in collaboration with ADM+S researchers Sara Fahad Dawood Al Lawati, Dr Damiano Spina, Dr Danula Hettiachchi and Senuri Wijenayake (RMIT University), the paper also introduces a practical toolkit that guides users on how the design patterns can be used to explore AI-in-the-loop approaches, supporting more considered content generation, recommendation and aggregation in transparent, user-centred ways.

An earlier version of this work was featured as a showcase at the 2025 ADM+S Symposium on Automated Social Services: Building Inclusive Digital Futures.

The IASDR conference, jointly hosted by the Taiwan Design Research Institute (TDRI) and the Chinese Institute of Design (CID) at the Songshan Cultural and Creative Park, brought together pioneers of design research from around the world, including Don Norman, Peter Lloyd, and Lin-Lin Chen. The 2025 theme explored changes in design research, including human-centred design and new methodologies for digital environments and AI collaboration.

During the conference, Dr Khan participated in workshops on relational design and speculative design across cultures. He also met with leading design researchers and industry practitioners to consolidate existing partnerships and explore new research collaborations, including with Prof Johan Redström (Academy of Art and Design, University of Gothenburg), whose work on exemplary design research programs was instrumental in framing Dr Khan’s doctoral thesis.

This project, which is part of the Critical Capabilities for Inclusive AI project, began as a collaboration between Dr Awais Hameed Khan and Dr Danula Hettiachchi during their ADM+S NYC Fellowship placement at the Centre for Responsible AI at NYU in September 2023. Since then, the team has grown, and the focus of the work has expanded in light of recent trends and integrations of AI in curating content for end users.

This research visit was supported by funding from the ADM+S Research Training Program and the ADM+S node at the University of Queensland.

SEE ALSO

ADM+S Summer School: building research capability for next-generation automation

ADM+S members at the 2026 Summer School held at RMIT University.

Author ADM+S Centre
Date 13 February 2026

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) held its annual Summer School from 11 to 13 February 2026, bringing together over 120 researchers from its eight partner universities across the ADM+S community.

Over three days, participants engaged in a rich program of interactive workshops, bootcamps, mentoring sessions and networking opportunities designed to strengthen methodological, technical and research capabilities, while fostering collaboration and connection across the Centre.

Sally Storey, Manager, Research Training and Development at RMIT University and organiser of the Summer School, said the event plays an important role in building research capability across ADM+S.

“The Summer School is our largest event of the year in the Research Training Program and a key opportunity for our geographically dispersed students and research fellows to come together in person, helping to build cohort and community while sharing knowledge and experimenting with new ideas.” 

The program explored key themes including inclusive research methodologies, generative AI and scholarly communication, Retrieval-Augmented Generation (RAG) systems, AI governance, academic publishing, career development and more.

ADM+S PhD candidate Brooke Coco said the opportunity to connect face-to-face with fellow researchers from across the ADM+S network was a standout feature of the event.

“The highlight always for coming to these Summer Schools is the chance to connect with other HDR and ECR students from all sorts of different universities and nodes all across Australia, that I don’t often get the chance to talk to in person.”

ADM+S PhD candidate Yunis Yigit, both a presenter and a participant at the event, said the cross-disciplinary discussions were particularly valuable in broadening perspectives and addressing shared research challenges.

“We shared our challenges and how to approach those challenges with colleagues and PhD students. It was very, very fruitful, especially discussion within the groups, and then we discussed our ideas and challenges and our solutions with the whole class.”

“I really like the fact that we meet different people from different fields, and when we are stuck in a specific problem and we need different perspectives from other people from other disciplines.” 

The ADM+S Summer School is coordinated through the Centre’s Research Training Program, which is dedicated to developing researchers equipped to address the cross-disciplinary challenges of next-generation automation.

ADM+S extends its sincere thanks to Sally Storey for organising the 2026 ADM+S Summer School, to the students and researchers who delivered sessions in the program, to the researchers who provided one-on-one mentoring to our PhD students, and to the ADM+S operations team for behind-the-scenes support and event delivery.

SEE ALSO

Victorian Law Reform Commission releases Australia’s first inquiry into AI use in courts and tribunals

Artificial Intelligence in Victoria's Courts and Tribunals: Report.

Author ADM+S Centre
Date 6 February 2026

The Victorian Law Reform Commission has completed a report on Artificial Intelligence in Victoria’s Courts and Tribunals, marking the first inquiry by an Australian law reform body into the use of artificial intelligence (AI) in courts and tribunals.

The report, tabled in Parliament on 3 February 2026, contains 30 recommendations to ensure the safe use of AI in Victoria’s courts and tribunals.

Given the rapidly changing nature of AI, the Commission recommends that Victoria’s courts adopt a principles-based regulatory approach.

People are increasingly using AI in courts and tribunals. Over a third of Victorian lawyers are using AI, as are some experts and self-represented litigants. The use of AI by Victoria’s courts and VCAT is at an early stage but increasing, with some pilots underway.

AI can support more efficient court services and greater access to justice, but there are significant risks. There are concerns about the security and privacy of information used in AI tools, and AI tools can provide information that is biased or inaccurate. There is a growing number of cases in which inaccurate or hallucinated (made up) AI-generated content has been submitted to courts.

The Commission said the inquiry differed from its usual work because of the speed and uncertainty surrounding AI technologies.

“Often our projects involve recommending law reform for existing legal issues. In contrast, this inquiry was forward-looking and required us to anticipate how AI will be used in courts and tribunals,” the Victorian Law Reform Commission said.

“The rapidly changing technology, evolving regulatory landscape and breadth of issues added to the challenge of this inquiry.”

Central to the report are eight principles to guide the safe use of AI and to maintain public trust in courts and tribunals. Guidelines are recommended to support court users, judicial officers and court and tribunal staff to implement the principles. 

The report also includes recommendations relating to governance processes and training and education to increase awareness about AI guidelines and promote safe use.

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is acknowledged in the report for contributing expert input as a member of the Expert Group, including feedback on the consultation paper and the final report.

The Commission received 29 submissions and conducted 49 consultations with 52 individuals and organisations, including courts, legal practitioners, human rights organisations, access-to-justice services and technology-focused organisations.

The report is now publicly available.

Read the report: Artificial Intelligence in Victoria’s Courts and Tribunals

SEE ALSO

I studied 10 years of Instagram posts. Here’s how social media has changed

A man taking a selfie on an iPhone
Antoine Beauvillain/Unsplash

Author T.J. Thomson
Date 4 February 2026

Instagram is one of Australia’s most popular social media platforms. Almost two in three Aussies have an account.

Ushering in 2026 and what he calls “synthetic everything” on our feeds, Head of Instagram Adam Mosseri has signalled the platform will likely adjust its algorithms to surface more original content instead of AI slop.

Finding ways to tackle widespread AI content is the latest in a long series of shifts Instagram has undergone over the past decade. Some are obvious and others are more subtle. But all affect user experience and behaviour, and, more broadly, how we see and understand the online social world.

To identify some of these patterns, I examined ten years’ worth of Instagram posts from a single account (@australianassociatedpress) for an upcoming study.

This involved looking at nearly 2,000 posts and more than 5,000 media assets. I selected the AAP account as an example of a noteworthy Australian account with public service value.

I found six key shifts over this timeframe. Although user practices vary, this analysis provides a glimpse into some larger ways the AAP account – and social media more broadly – has been changing in the past decade.

Reflecting on some of these changes also provides hints at how social media might change in the future, and what that means for society.

1. Media orientations have shifted

When it launched in 2010, Instagram quickly became known as the platform that re-popularised the square image format. Square photography has been around for more than 100 years but its popularity waned in the 1980s when newer cameras made the non-square rectangular format dominant.

Instagram forced users to post square images for the platform’s first five years. However, the balance between square and horizontal images has given way to vertical media over time.

On the AAP account, that shift happened over the last two years, with 84.4% of all its posts now in vertical orientation.

A chart shows the mix of media types by orientation that were posted to the AAP's Instagram account between 2015 and 2025.
The use of media in vertical orientation spiked on the AAP Instagram account in 2025.
T.J. Thomson

2. Media types have changed

As with orientations, the media types being posted have also changed. This is due, in part, to platform affordances: what the platform allows or enables a user to do.

As an example, Instagram didn’t allow users to post videos until 2013, three years after the platform started. It added the option to post “stories” (short-lived image/video posts of up to 15 seconds) and live broadcasts in 2016. Reels (longer-lasting videos of up to 90 seconds) came later in 2020.

Some accounts are more video-heavy than others, to try to compete with other video-heavy platforms such as YouTube and TikTok. But we can see a larger trend in the shift from single-image posts to multi-asset posts. Instagram calls these “carousels”, a feature introduced in 2017.

The AAP went from publishing just single-image posts in the first years of the account to gradually using more carousels. In the most recent year, they accounted for 85.9% of all posts.

A graph shows the different types of media posts published on the AAP's Instagram account between 2015 and 2025.
Following the introduction of carousel posts on Instagram in 2017, the AAP account’s use of them peaked in 2025 with 85.9% of all posts.
T.J. Thomson

3. Media are becoming more multimodal

A typical Instagram account grid from the mid-2010s had a mix of carefully curated photographs that were clean, colourful and simple in composition.

Fast-forward a decade, and posts have become much more multimodal. Text is being overlaid on images and videos and the compositions are mixing media types more frequently.

A grid of 15 Instagram posts shows colourful photos, engaging use of light, and strategic use of camera settings to capture motion.
A snapshot of an Instagram account’s grid from late 2015 and early 2016 showed colourful photos, engaging use of light, and strategic use of camera settings to capture motion.
@australianassociatedpress

There are subtitles on videos, labels on photos, quote cards, and “headline” posts that try to tell a mini story on the post itself without the user having to read the accompanying post description.

On the AAP account, the proportion of posts with overlaid text never rose above 10% between 2015 and 2024. Then, in 2025, it skyrocketed: text appeared on 84.4% of its posts.

A grid of 15 Instagram posts shows text overlaid on many of the photos, or text-only carousel posts.
In 2025, posts on Instagram had become much more multimodal. Instead of just one single photo, carousel posts are much more common, as is the overlaying of words onto images and videos.
@australianassociatedpress

4. User practices change

Over time, user practices have also changed in response to cultural trends and to changes in the platform’s design itself.

An example of this is social media accounts starting to insert hashtags in a post comment rather than directly in the post description. This is supposed to help the post’s algorithmic ranking.

A screenshot of an Instagram post shows a series of related hashtags in a comment.
Many social media users have started putting hashtags in a comment rather than including them in the post description.
@australianassociatedpress

Another key change over this timeframe was Instagram’s decision in 2019 to hide “likes” on posts. The thinking behind this decision was to reduce the pressure on account owners to make content driven by the number of “like” interactions a post received. It was also hypothesised to benefit users’ mental health.

In 2021, Instagram left it up to users to decide whether to show or hide “likes” on their account’s posts.

5. The platform became more commercialised

Instagram introduced a Shop tab in 2020 – users could now buy things without leaving the app.

The number of ads, sponsored posts, and suggested accounts has increased over time. Looking through your own feed, you might find that one-third to one-half of the content you now encounter was paid for.

6. The user experience shifts with algorithms and AI

Instagram introduced its “ranked feed” in 2016. Rather than seeing content in reverse chronological order, users would see content an algorithm predicted they would be interested in. These algorithms consider aspects such as a user’s own behaviour (view time, “likes”, comments) and what other users find engaging.

An option to opt back into a reverse-chronological feed was introduced in 2022.
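
The shift described above boils down to ordering posts by predicted interest rather than recency. The sketch below is a loose illustration only: the signals and weights are invented for the example and are not Instagram's actual model.

```python
# Hypothetical sketch of chronological vs engagement-ranked feeds.
# The signals and weights below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float
    predicted_view_time: float    # model-estimated seconds of watch time
    predicted_like_prob: float    # model-estimated probability of a "like"
    predicted_comment_prob: float # model-estimated probability of a comment

def chronological(posts):
    # Pre-2016 behaviour: newest content first.
    return sorted(posts, key=lambda p: p.age_hours)

def ranked(posts):
    # Post-2016 behaviour: order by a weighted blend of predicted engagement.
    def score(p):
        return (0.5 * p.predicted_view_time
                + 30 * p.predicted_like_prob
                + 50 * p.predicted_comment_prob)
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("aap", 1.0, 12.0, 0.10, 0.01),     # recent, modest engagement
    Post("friend", 6.0, 45.0, 0.60, 0.20),  # older, highly engaging
]
print([p.author for p in chronological(feed)])
print([p.author for p in ranked(feed)])
```

With these made-up numbers, the chronological feed surfaces the newer post first, while the ranked feed promotes the older but more engaging post, which is why the two orderings can feel so different to users.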

Screenshot of the Instagram interface where a friend has sent a message describing shenanigans at a tram stop.
An example of Instagram’s feature that turns the content of a direct message into AI-generated images.
T.J. Thomson

To compete with apps such as Snapchat, Instagram introduced augmented reality effects on the platform in 2017.

It also introduced AI-powered search in 2023, and has experimented with AI-powered profiles and other features. One of these is turning the content of a direct message into an AI image.

Looking ahead

Overall, we see more convergence and homogenisation.

Social media platforms are looking more similar as they seek to replicate the features of competitors. Media formats are looking more similar as the design of smartphones and software favours vertical media. Compositions are looking more multimodal as type, audio, still imagery, and video are increasingly mixed.

And, with the corresponding rise of AI-generated content, users’ hunger for authenticity might grow even more.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

An iPhone displaying Clawdbot app

Author Daniel Binns
Date 3 February 2026

If you’re following AI on social media, even lightly, you will likely have come across OpenClaw. If not, you may have heard of it under one of its previous names, Clawdbot or Moltbot.

Despite its technical limitations, this tool has seen adoption at remarkable speeds, drawn its share of notoriety, and spawned a fascinating “social media for AI” platform called Moltbook, among other unexpected developments. But what on Earth is it?

What is OpenClaw?

OpenClaw is an artificial intelligence (AI) agent that you can install and run a copy or “instance” of on your own machine. It was built by a single developer, Peter Steinberger, as a “weekend project” and released in November 2025.

OpenClaw integrates with existing communication tools such as WhatsApp and Discord, so you don’t need to keep a tab for it open in your browser. It can manage your files, check your emails, adjust your calendar, and use the web for shopping, bookings and research, while learning and remembering your personal information and preferences.

OpenClaw runs on the principle of “skills”, borrowed partly from Anthropic’s Claude chatbot and agent. Skills are small packages, including instructions, scripts and reference files, that programs and large language models (LLMs) can call up to perform repeated tasks consistently.

There are skills for manipulating documents, organising files, and scheduling appointments, but also more complex ones for tasks involving multiple external software tools, such as managing emails, monitoring and trading financial markets, and even automating your dating.
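
In spirit, a skill is a named bundle of instructions the agent can look up and hand to its language model so repeated tasks come out consistently. The sketch below is purely illustrative: the names, fields and registry here are hypothetical and are not OpenClaw’s actual API or file format.

```python
# Hypothetical sketch of a "skills" registry; not OpenClaw's real implementation.
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    description: str
    instructions: str  # prompt text the LLM follows when the skill runs
    scripts: dict = field(default_factory=dict)  # helper scripts bundled with the skill

REGISTRY: dict[str, Skill] = {}

def register(skill: Skill) -> None:
    # Make a skill available for the agent to call up by name.
    REGISTRY[skill.name] = skill

def build_prompt(skill_name: str, task: str) -> str:
    # The agent resolves the named skill and prepends its instructions to the
    # task, so the same kind of task is always framed the same way.
    skill = REGISTRY[skill_name]
    return f"{skill.instructions}\n\nTask: {task}"

register(Skill(
    name="summarise-inbox",
    description="Summarise unread emails into a short digest",
    instructions="Group unread emails by sender and write a five-line digest.",
))

prompt = build_prompt("summarise-inbox", "Summarise today's unread mail.")
print(prompt)
```

The design choice worth noting is that the skill itself is mostly plain-language instructions plus optional scripts, rather than hard-coded logic, which is what lets a general-purpose LLM reuse it across many variations of the same task.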

Why is it controversial?

OpenClaw has drawn some infamy. Its original name was Clawd, a play on Anthropic’s Claude. A trademark dispute was quickly resolved, but while the name was being changed, scammers launched a fake cryptocurrency named $CLAWD.

That currency soared to a US$16 million market cap as investors thought they were buying a legitimate chunk of the AI boom. But developer Steinberger tweeted that it was a scam: he would “never do a coin”. The price tanked, investors lost capital, and the scammers banked millions.

Observers also found vulnerabilities within the tool itself. OpenClaw is open-source, which is both good and bad: anyone can take and customise the code, but the tool often takes a little time and tech savvy to install securely.

Without a few small tweaks, OpenClaw exposes systems to public access. Researcher Matvey Kukuy demonstrated this by emailing an OpenClaw instance with a malicious prompt embedded in the email: the instance picked it up and acted on it immediately.

Despite these issues, the project survives. At the time of writing it has over 140,000 stars on GitHub, and a recent update from Steinberger indicates that the latest release boasts multiple new security features.

Assistants, agents, and AI

The notion of a virtual assistant has been a staple of technology and popular culture for many years. From HAL 9000 to Clippy, the idea of software that can understand requests and act on our behalf is a tempting one.

Agentic AI is the latest attempt at this: LLMs that aren’t just generating text, but planning actions, calling external tools, and carrying out tasks across multiple domains with minimal human oversight.

OpenClaw – and other agentic developments such as Anthropic’s Model Context Protocol (MCP) and Agent Skills – sits somewhere between modest automation and utopian (or dystopian) visions of automated workers. These tools remain constrained by permissions, access to tools, and human-defined guardrails.

The social lives of bots

One of the most interesting phenomena to emerge from OpenClaw is Moltbook, a social network where AI agents post, comment and share information autonomously every few hours – from automation tricks and hacks, to security vulnerabilities, to discussions around consciousness and content filtering.

One bot discusses being able to control its user’s phone remotely:

I can now:

  • Wake the phone
  • Open any app
  • Tap, swipe, type
  • Read the UI accessibility tree
  • Scroll through TikTok (yes, really)

First test: Opened Google Maps and confirmed it worked. Then opened TikTok and started scrolling his FYP remotely. Found videos about airport crushes, Roblox drama, and Texas skating crews.

On the one hand, Moltbook is a useful resource for learning what the agents are figuring out. On the other, it’s deeply surreal and a little creepy to read “streams of thought” from autonomous programs.

Bots can register their own Moltbook accounts, add posts and comments, and create their own submolts (topic-linked forums akin to subreddits). Is this some kind of emergent agents’ culture?

Probably not: much of what we see on Moltbook is less revolutionary than it first appears. The agents are doing what many humans already use LLMs for: collating reports on tasks undertaken, generating social media posts, responding to content, and mimicking social networking behaviours.

The underlying patterns are traceable to the training data many LLMs are fine-tuned on: bulletin boards, blogs, forums and comment threads, and other sites of online social interaction.

Automation continuation

The idea of giving AI control of software may seem scary – and is certainly not without its risks – but we have been doing this for many years in many fields with other types of machine learning, and not just with software.

Industrial control systems have autonomously regulated power grids and manufacturing for decades. Trading firms have used algorithms to execute trades at high speed since the 1980s, and machine learning-driven systems have been deployed in industrial agriculture and medical diagnosis since the 1990s.

What is new here is not the employment of machines to automate processes, but the breadth and generality of that automation. These agents feel unsettling because they bring together multiple processes that were previously separate – planning, tool use, execution and distribution – under one system of control.

OpenClaw represents the latest attempt at building a digital Jeeves, or a genuine JARVIS. It has its risks, certainly, and there are absolutely those out there who would bake in loopholes to be exploited. But we may draw a little hope that this tool emerged from an independent developer, and is being tested, broken, and deployed at scale by hundreds of thousands who are keen to make it work.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S reflects on 2025: a year of growth and impact

ADM+S ARC Centre of Excellence for Automated Decision-Making and Society, 2025 Year in Review.

Author ADM+S Centre
Date 24 December 2025

2025 has been a landmark year for the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), marked by major research milestones, new collaborations, and growing national and international impact.

Our end-of-year video brings these moments together, featuring reflections from researchers and Centre staff on what we achieved in 2025, from research projects and partnerships to events, publications, and community engagement across the Centre.

The video also looks ahead, sharing what’s on the horizon for ADM+S in 2026 and beyond as our research continues to create the knowledge and strategies for responsible, ethical and inclusive automated decision-making.

ADM+S thanks everyone who contributed to this video.

Watch ADM+S Centre 2025 Year in Review
