As an innovative engine for digital content generation, AI-Generated Content (AIGC) has drawn increasing attention from both academia and industry. In the area of art creation in particular, AI has demonstrated great potential and gained growing popularity, impressing audiences with AI painting, composing, writing, and design. The emerging technologies of the metaverse provide even more opportunities for AI art. AI has not only exhibited a certain degree of creativity, but has also helped uncover the principles and mechanisms of creativity and imagination from the perspectives of neuroscience, cognitive science, and psychology.
This is the 5th AIART workshop, held in conjunction with ICME 2023 in Brisbane, Australia. It aims to bring forward cutting-edge technologies and the most recent advances in AI art, spanning creation, analysis, understanding, and rendering technologies.
The theme of AIART 2023 is AI for Creative Synergy. We plan to invite 5 keynote speakers to present their insightful perspectives on AI art.
We sincerely invite high-quality papers presenting or addressing issues related to AI art, including but not limited to the following topics:
The authors of selected high-quality papers will be invited to submit an extended version to the Machine Intelligence Research (MIR) journal published by Springer.
Additionally, one Best Paper Award will be given.
AIART 2023 is also launching a demo track for artists to showcase their creative artworks in an in-person or online gallery. The demo track will provide a great opportunity for people to experience interactive artworks and exchange creative ideas. The submission guideline for the demo track follows that of the main ICME conference: https://www.2023.ieeeicme.org/demonstrations.php.
Authors should prepare their manuscripts according to the ICME Guide for Authors, available under Author Information and Submission Instructions: https://www.2023.ieeeicme.org/author-info.php
Submission address: https://cmt3.research.microsoft.com/ICMEW2023
April 9, 2023
July 14, 2023
Brian C. Lovell
Synthesizing Faces for Ethical Face Recognition using Stable Diffusion
8:35 – 9:05, July 14, 2023
We propose a solution to address ethical concerns in face recognition databases by synthesizing faces to replace internet-scraped photographs without consent. Our approach utilizes generative techniques, including StyleGAN and Stable Diffusion. StyleGAN generates diverse and realistic synthetic faces by learning from extensive data, allowing control over facial attributes for demographic alignment. Stable Diffusion models image generation dynamics, producing high-quality, visually coherent synthetic faces. Our solution preserves database functionality while respecting privacy. Through evaluations, we assess visual quality, diversity, and demographic fairness of synthesized faces. Compatibility and effectiveness in face recognition tasks are also evaluated to maintain system accuracy and robustness. Our research highlights the potential of ethical face synthesis for creating privacy-preserving face recognition databases.
Brian C. Lovell was born in Brisbane, Australia in 1960. He received the BE in electrical engineering in 1982, the BSc in computer science in 1983, and the PhD in signal processing in 1991, all from the University of Queensland (UQ). Professor Lovell is Director of the Advanced Surveillance Group in the School of ITEE, UQ. He was President of the International Association for Pattern Recognition (IAPR) [2008-2010], and is a Fellow of the IAPR, a Senior Member of the IEEE, and the voting member for Australia on the Governing Board of the IAPR. He was General Co-Chair of the IEEE International Conference on Image Processing (ICIP) in Melbourne, 2013, Program Co-Chair of the International Conference on Pattern Recognition (ICPR) in Cancun, 2016, and Program Co-Chair of ICPR 2020 in Milan. His interests include Artificial Intelligence, Computer Vision, non-cooperative Face Recognition, Biometrics, and Pattern Recognition.
Perception and Assessment of Image Aesthetics: Recent Advances and New Thinking
10:30 – 11:00, July 14, 2023
With the explosive growth of digital images produced by low-cost built-in cameras, image aesthetics assessment (IAA) has become an increasingly popular research topic in both academia and industry. IAA has wide applications in art design, smart photography, photo editing, etc. This talk introduces the latest research progress on IAA, covering both generic and personalized IAA. Multi-modal IAA will also be introduced in the context of large vision-language models. We will also discuss research challenges and future trends in image aesthetics research.
Leida Li received the B.Sc. and Ph.D. degrees from Xidian University in 2004 and 2009, respectively. From 2014 to 2015, he was a Research Fellow with the Rapid-rich Object SEarch (ROSE) Lab, Nanyang Technological University (NTU), Singapore, where he was a Senior Research Fellow from 2016 to 2017. From 2009 to 2019, he worked as Lecturer, Associate Professor, and Professor in the School of Information and Control Engineering, China University of Mining and Technology, China. Currently, he is a Full Professor with the School of Artificial Intelligence, Xidian University, China. His research interests include image/video quality evaluation, computational aesthetics, and visual emotion analysis. His research is funded by NSFC, OPPO, Huawei, Tencent, etc. He has published more than 100 papers in these areas. He is on the editorial boards of the Journal of Visual Communication and Image Representation (Best Associate Editor Award 2021), the EURASIP Journal on Image and Video Processing, and the Journal of Image and Graphics (Excellent Editor Award 2022). He is a senior member of CCF and CSIG.
Creating a Massive Open Metaverse Course (MOMC)
13:30 – 14:00, July 14, 2023
Much effort has been made to use virtual reality (VR) technology to support massive open online course (MOOC) environments. This talk briefly reviews the latest research on VR/AR/XR applications in education and discusses how immersive virtual educational experiences can be achieved. We then introduce the new concept of the Massive Open Metaverse Course (MOMC), which combines MOOC and Metaverse and utilizes the latest volumetric video technology. We offer our vision of dual-campus online education for HKUST 2.0, with a real case study, the President's First Lecture, under development at the Guangzhou campus. This is the world's first true MOMC environment, providing immersive and realistic virtual and augmented reality experiences to both teachers and learners.
Kang Zhang is Acting Head and Professor of Computational Media and Arts, Information Hub, Hong Kong University of Science and Technology (Guangzhou), Professor of the Division of Emerging Interdisciplinary Areas, HKUST, and Professor Emeritus of Computer Science, The University of Texas at Dallas. He was a Fulbright Distinguished Chair and an ACM Distinguished Speaker, and has held academic positions in China, the UK, Australia, and the USA. Zhang's current research interests include computational aesthetics, visual languages, and generative art and design; he has published 8 books and over 120 journal papers in these areas. He has delivered keynotes at art and design, computer science, and management conferences, and is on the editorial boards of the Journal of Big Data, The Visual Computer, the Journal of Visual Language and Computing, the International Journal of Software Engineering and Knowledge Engineering, the International Journal of Advanced Intelligence, and Visual Computing for Industry, Biomedicine, and Art.
Towards Wellbeing in Space: Measuring and Monitoring the Emotions of Users Immersed in Meaningful Virtual Reality Experiences
15:30 – 16:00, July 14, 2023
As the world advances in space exploration, with current and planned stations in Low Earth Orbit (LEO) and long-duration missions to the Moon and Mars, a more diverse range of people will live and work in space. This increasing diversity necessitates a deeper understanding of cultural context and diverse backgrounds when designing wellbeing solutions for astronauts. NASA identifies five hazards of human spaceflight: radiation, isolation and confinement, distance from Earth, gravity (or lack thereof), and hostile/closed environments. Astronauts report challenges such as loneliness, boredom, disconnection, sensory deprivation, diminished cognitive performance, and stress while in orbit. In this talk we explore these challenges and how integrating extended reality (XR) environments, wearable sensors, and Artificial Intelligence can revolutionize astronauts' daily routines. We discuss how to mitigate environmental challenges and psychological stressors while promoting physical activity, motivation, and overall wellbeing in confined and isolated environments using advanced technologies.
Dr. Bahareh Nakisa is a Lecturer of Applied AI and the course director of Applied AI at the School of Information Technology, Deakin University. She received a B.Sc. degree in Software Engineering from Iran in 2008, a Master of Computer Science from the National University of Malaysia in 2014, and a PhD in Computer Science (Artificial Intelligence) from the Queensland University of Technology (QUT), Australia, in 2019. She worked in industry as an AI scientist and Lead AI scientist before joining the School of Information Technology, Deakin University, as a Lecturer of Applied AI in 2019. Her research interests are in the areas of artificial intelligence (AI), deep learning, affective computing, and time-series data analysis. She is particularly interested in applying AI/DL models to solve real-world problems in applications such as healthcare, transportation, defence, and space. She has published more than 39 papers in top-tier international venues in AI and computer science. She has secured more than AUD 2 million in external research and development funding.
Controllable Image Synthesis with Diffusion Models
16:45 – 17:15, July 14, 2023
Diffusion models have demonstrated an impressive capability to synthesize photorealistic images given a few or even no words. These models may not fully satisfy user needs, as ordinary users and artists often want to control the synthesized images with specific guidance, such as overall layout, color, structure, and object shape. We propose a method to adapt diffusion models for controllable image synthesis. Our method outperforms existing methods and demonstrates multiple applications through its plausible generalization ability and flexible controllability.
Dong Liu received the B.S. and Ph.D. degrees in electrical engineering from the University of Science and Technology of China (USTC), Hefei, China, in 2004 and 2009, respectively. He was a Member of Research Staff with Nokia Research Center, Beijing, China, from 2009 to 2012. He joined USTC as a faculty member in 2012 and became a Professor in 2020. His research interests include image and video processing, coding, analysis, and data mining. He has authored or co-authored more than 200 papers in international journals and conferences, which have been cited more than 12000 times according to Google Scholar (h-index 42). He holds more than 30 granted patents and has had several technical proposals adopted by standardization groups. He received the 2009 IEEE TCSVT Best Paper Award, the VCIP 2016 Best 10% Paper Award, and the ISCAS 2022 Grand Challenge Top Creativity Paper Award. He is a Senior Member of IEEE, CCF, and CSIG, an elected member of the MSA-TC of the IEEE CAS Society, and an elected member of the Multimedia TC of CSIG. He serves or has served as Chair of the IEEE 1857.11 Standard Working Subgroup (also known as the Future Video Coding Study Group), a Guest Editor for IEEE TCSVT, and an Organizing Committee Member for VCIP 2022, ChinaMM 2022, ICME 2021, etc.
Beijing University of Technology
Dr. Luntian Mou is an Associate Professor with the Beijing Institute of Artificial Intelligence (BIAI), the Faculty of Information Technology, Beijing University of Technology. He was a Visiting Scholar with the University of California, Irvine, from 2019 to 2020, and a Postdoctoral Fellow at Peking University from 2012 to 2014. He initiated the IEEE Workshop on Artificial Intelligence for Art Creation (AIART) at MIPR 2019. His current research interests include artificial intelligence, machine learning, brain-like computing, multimedia computing, affective computing, and neuroscience. He also serves as a Co-Chair of the System Subgroup in the AVS Workgroup and the IEEE 1857 Workgroup. He is a Senior Member of IEEE (SA, SPS), and a Member of ACM, CCF, CAAI, CSIG, and MPEG China.
Dr. Feng Gao is an Assistant Professor with the School of Arts, Peking University. He has long conducted research at the intersection of AI and art, especially AI painting, and co-initiated the international AIART workshop. Currently, he is also enthusiastic about virtual humans. He has demonstrated his AI painting system, called Daozi, in several workshops, where it has drawn much attention.
Central Conservatory of Music
Dr. Zijin Li is a Professor with the Department of AI Music and Music Information Technology, Central Conservatory of Music. She was a Visiting Scholar with McGill University. Her current research interests include music acoustics, music creativity, new musical instrument design, and innovation theory of music technology. She has served as committee chair for New Interfaces for Musical Expression (NIME 2021), the IEEE MIPR AI Art Workshop, the China Sound and Music Technology Conference (CSMT), the China Music AI Development Symposium, and the China Musical Instrument Symposium. She has served as a judge for the New Music Device Invention Award of the international "Danny award", the International Electronic Music Competition (IEMC), and the NCDA Awards.
Queen Mary University of London
Dr. Nick Bryan-Kinns is Professor of Interaction Design and Director of the Media and Arts Technology Centre at Queen Mary University of London. He is a Fellow of the Royal Society of Arts, a Turing Fellow at The Alan Turing Institute, a Fellow of the British Computer Society, and a Senior Member of the Association for Computing Machinery. He is Director of International Joint Ventures, a leader of the AI and Music Centre, and leads the Sonic Interaction Design Lab in the Centre for Digital Music. He has published internationally on AI and music, cross-cultural design, participatory design, mutual engagement, interactive art, and tangible interfaces. His research has been exhibited internationally and reported widely, from the New Scientist to the BBC. He chaired the Steering Committee for the ACM Creativity and Cognition conference series, and is a recipient of ACM and BCS Recognition of Service Awards.
Dr. Jiaying Liu is currently an Associate Professor with the Wangxuan Institute of Computer Technology, Peking University. She received the Ph.D. degree (Hons.) in computer science from Peking University, Beijing, China, in 2010. She has authored over 100 technical articles in refereed journals and proceedings, and holds 43 granted patents. Her current research interests include multimedia signal processing, compression, and computer vision. Dr. Liu is a Senior Member of IEEE, CSIG, and CCF. She was a Visiting Scholar with the University of Southern California, Los Angeles, from 2007 to 2008, and a Visiting Researcher with Microsoft Research Asia in 2015, supported by the Star Track Young Faculties Award. She has served as a member of the Membership Services Committee of the IEEE Signal Processing Society; a member of the Multimedia Systems & Applications Technical Committee (MSA TC) and the Visual Signal Processing and Communications Technical Committee (VSPC TC) of the IEEE Circuits and Systems Society; and a member of the Image, Video, and Multimedia (IVM) Technical Committee of APSIPA. She received the IEEE ICME 2020 Best Paper Award and the IEEE MMSP 2015 Top 10% Paper Award. She has also served as an Associate Editor of IEEE Transactions on Image Processing and Elsevier JVCI, Technical Program Chair of IEEE VCIP 2019 and ACM ICMR 2021, Publicity Chair of IEEE ICME 2020 and ICIP 2019, and Area Chair of CVPR 2021, ECCV 2020, and ICCV 2019. She was an APSIPA Distinguished Lecturer (2016-2017).
Tongji University Design Artificial Intelligence Lab
Dr. Ling Fan is a scholar and entrepreneur bridging machine intelligence and creativity. He is the founding chair and professor of the Tongji University Design Artificial Intelligence Lab. Previously, he held teaching positions at the University of California, Berkeley and the China Central Academy of Fine Arts. Dr. Fan co-founded Tezign.com, a leading technology start-up with the mission to build digital infrastructure for creative content. Tezign is backed by top VCs such as Sequoia Capital and Hearst Ventures. Dr. Fan is a World Economic Forum Young Global Leader, an Aspen Institute China Fellow, and a Youth Committee member at the Future Forum. He is also a member of the IEEE Global Council for Extended Intelligence. Dr. Fan received his doctoral degree from Harvard University and his master's degree from Princeton University. He recently published From Universality of Computation to the Universality of Imagination, a book on how machine intelligence will influence human creativity.
Hong Kong University of Science and Technology (Guangzhou)
Dr. Zeyu Wang is an Assistant Professor of Computational Media and Arts (CMA) in the Information Hub at the Hong Kong University of Science and Technology (Guangzhou) and an Affiliate Assistant Professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology. He received a PhD from the Department of Computer Science at Yale University and a BS from the School of Artificial Intelligence at Peking University. He leads the Creative Intelligence and Synergy (CIS) Lab at HKUST(GZ) to study the intersection of Computer Graphics, Human-Computer Interaction, and Artificial Intelligence, with a focus on algorithms and systems for digital content creation. His current research topics include sketching, VR/AR/XR, and generative techniques, with applications in art, design, perception, and cultural heritage. His work has been recognized by an Adobe Research Fellowship, a Franke Interdisciplinary Research Fellowship, a Best Paper Award, and a Best Demo Honorable Mention Award.