Authors are encouraged to submit previously unpublished research papers within the scope of one of the accepted special sessions. Submissions are peer reviewed in a single-blind process. Unless otherwise specified, the same length requirements as for regular papers apply.
The deadlines are the same as for regular papers.
The following special sessions are open for submission:
MSPND: Multimodal Signal processing technologies for Protecting people and environment against Natural Disasters
In recent years, and especially since 2016 with the success of the Paris Climate Agreement, there has been a revitalised awareness of climate change. The evidence gathered in 2021 on the unprecedented number of natural disaster crises across southern Europe has further emphasised the need for additional action towards environmental sustainability. The number of deaths from natural disasters can be highly variable from year to year; some years pass with very few deaths before a large disaster event claims many lives. Over the past decade, on average, approximately 60,000 people worldwide have died from natural disasters each year, representing 0.1% of global deaths. The increasing frequency of natural disasters, ranging from earthquakes, floods, hurricanes, and wildfires to volcanic eruptions, poses a real threat to the livelihoods of many citizens. Addressing these challenges requires a concerted research effort that combines global, macro-scale analysis with on-the-ground technologies deployed to monitor the causes and sources of natural disasters.
Despite several efforts within the disaster resilience community, there remains a lack of concerted effort to bring together interdisciplinary experts to address the problem of early-stage warning of natural disasters. Additionally, there is a critical need to develop technologies that could save lives following the impact of natural disasters. Therefore, the objective of this special session is to bring together leading contributions focused on scientific innovations in the field of multimodal signal processing technologies that can facilitate the protection of people and the environment against natural disasters. Such contributions could range from advances in sensor technologies, computer vision technologies and earth observation data analytics to climate and weather data services, among others.
Topics include (but are not limited to):
- Earth observation data analysis for climate change insights
- Social media analysis for the detection and response coordination for natural disasters
- Computer vision technologies for automated detection of natural disasters
- Multispectral and hyperspectral imaging studies for rescue management
- Analysis of UAV footage for aerial surveillance of regions affected by natural disasters
- GIS visualisation toolkit to map visual concepts related to natural disasters
- Risk assessment methodologies for regions that could be affected by natural disasters
- Rehabilitation impact assessment of natural disasters for community and environment revival
Session Organisers:
- Krishna Chandramouli, Venaka Treleaf GbR, k.chandramouli@venaka.co.uk
- Konstantinos Avgerinakis, Catalink Limited
- Philippe Besson, Pompiers de l’urgence internationale
- Iosif Vourvachis, Hellenic Rescue Team
- Dr. Ilias Gialampoukidis, Information Technologies Institute, Centre for Research and Technology Hellas
- Dr. Stefanos Vrochidis, Information Technologies Institute, Centre for Research and Technology Hellas
Computer-Assisted Clinical Applications
Computer-aided analysis of clinical procedures is becoming ever more prominent in modern medicine. State-of-the-art operating rooms are equipped with multiple cameras, sensors and high-tech equipment enabling precise patient monitoring and thorough treatment. Moreover, hospitals strive for secure patient management as well as sophisticated picture archiving and communication systems (PACS). Within this ensemble of components, data analysis, storage and retrieval play a vital role in the development of viable clinical applications assisting medical as well as administrative staff. Due to the wide variety of modalities across medical domains, such applications are often highly task- and data-specific. Therefore, the corresponding research combines and draws from many traditional multimedia domains, offering numerous opportunities for hand-crafted as well as machine-learning-based solutions.
The interdisciplinary nature of medical multimedia applications potentially requires close collaboration between medical experts, computer scientists, data analysts and electrical engineers. Therefore, much effort needs to be put into establishing common goals and sensible collaboration. Furthermore, the rise of deep learning in multimedia applications is certainly also advantageous for computer-assisted medicine, yet it entails many domain-specific challenges, such as fundamental differences in data, conditions and purpose. Finally, data acquisition, a major part of multimedia research, is distinctly more restricted in medical domains, since patient information is sensitive and personal.
Many of the above-mentioned aspects are actively researched by various communities, yet their practical applicability is often still limited, which encourages research on more general, robust and reliable approaches. This special session targets (but is not limited to) novel research in the following domains:
- Medical image and video analysis
- Medical data exploration and browsing
- Medical multimodal data indexing and retrieval
- Medical data storage optimization
- Security of medical data and systems
- Applications for surgical assistance
- Applications for patient data management
- Applications for robot-assisted surgery
- Augmented and virtual reality for medical applications
- Explainability of medical multimedia analysis algorithms
Session Organisers:
- Andreas Leibetseder (aleibets@itec.aau.at), Institute of Information Technology, Klagenfurt University, Austria
- Klaus Schoeffmann (ks@itec.aau.at), Institute of Information Technology, Klagenfurt University, Austria
Learning from scarce data challenges in the media domain
Deep learning-based algorithms for multimedia content analysis need a large amount of annotated data for effective training; for image classification on the ImageNet dataset, for example, each class comprises several thousand annotated samples. A dataset of insufficient size for training usually leads to a model that is prone to overfitting and performs poorly in practice. However, in many real-world applications of multimedia content analysis, it is not possible or not viable to gather and annotate such a large amount of training data. This may be due to the prohibitive cost of human annotation, ownership/copyright issues of the data, or simply not having enough media content of a certain kind available.
To address this issue, a lot of research has been performed in recent years on learning from scarce or limited data. There are a variety of ways to work around the problem of data scarcity, such as transfer learning, domain transfer, or few-shot learning.
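As a concrete illustration, the sketch below shows one common form of transfer learning when only a small labeled dataset is available: an ImageNet-pretrained backbone is frozen and only a new classification head is trained. It assumes a Python/PyTorch environment with torchvision; the dataset path, number of classes and hyperparameters are hypothetical placeholders, and this frozen-backbone setup is only one of several possible fine-tuning strategies.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision assumed): fine-tune only
# the final classification layer of a pretrained backbone on a small labeled set.
# The path, class count and hyperparameters below are illustrative placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 5          # hypothetical number of target classes
DATA_DIR = "data/train"  # hypothetical folder with one sub-folder per class

# Standard ImageNet preprocessing so inputs match the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
loader = DataLoader(datasets.ImageFolder(DATA_DIR, preprocess),
                    batch_size=16, shuffle=True)

# Load a pretrained backbone and freeze it; only the new head is trained,
# which keeps the number of learnable parameters small for a scarce dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```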
The special session on “Learning from scarce data challenges in the media domain” aims to provide a forum for novel approaches on learning from scarce data for multimedia content analysis, with a focus on the media domain.
The topics of interest include, but are not limited to:
- Transfer learning
- Synthetic data generation
- Domain transfer/adaptation
- Semi-supervised and self-supervised learning, e.g. to take advantage of large amounts of unlabeled media archive content
- Few-shot learning (classification, object detection etc.), which is useful e.g. for adding new object classes to an automatic tagging engine for media archive content
- Benchmarking and evaluation frameworks for content from the media domain
- Open resources, e.g., software tools for learning from scarce data in the media domain
Session Organisers:
- Dr. Giuseppe Amato, CNR-ISTI, Italy
- Prof. Bogdan Ionescu, AI Multimedia Lab, Politehnica University of Bucharest, Romania
- Hannes Fassold, JOANNEUM RESEARCH, Austria
Multimedia Analysis for Digital Twins
A digital twin is a digital replica of an object or a system that can span its lifecycle. Such a digital twin is updated with real-time data and exploits simulation, machine learning and reasoning to assist decision-making, e.g., for repairing a heavily damaged bridge. Enhanced visual representations that rely on processing multimedia data, along with prevention-through-prediction models, provide a concrete solution in various domains and applications where real-time updates are crucial for mitigating hazardous circumstances. In brief, data collected from the physical sensors of the monitored system is forwarded to its virtual representation, where predictive models estimate how its status may evolve over time and propose mitigation actions before undesirable situations arise. Such predictive models also consider the system's physical constraints and configurations. Hence, accurate and efficient analysis of the collected multimedia data is an important topic in this domain.
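To make this data flow more tangible, the following minimal Python sketch mirrors the sensor-to-twin loop just described: readings from the physical system update the virtual representation, a placeholder predictive model extrapolates the future status, and a mitigation action is proposed before a critical threshold is exceeded. All class names, thresholds and values are hypothetical and purely illustrative.

```python
# Purely illustrative digital-twin loop: sensor readings update the virtual
# representation, a placeholder model predicts the next status, and mitigation
# is proposed before a critical limit is reached. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BridgeTwin:
    """Virtual representation of a monitored structure (hypothetical example)."""
    strain_history: List[float] = field(default_factory=list)
    critical_strain: float = 0.8   # illustrative physical constraint

    def ingest(self, strain_reading: float) -> None:
        # Real-time sensor data is forwarded to the virtual representation.
        self.strain_history.append(strain_reading)

    def predict_next(self) -> float:
        # Placeholder predictive model: linear extrapolation of the last two
        # readings; in practice this would be simulation or machine learning.
        if len(self.strain_history) < 2:
            return self.strain_history[-1] if self.strain_history else 0.0
        return 2 * self.strain_history[-1] - self.strain_history[-2]

    def mitigation_needed(self) -> bool:
        # Propose action before the predicted status becomes undesirable.
        return self.predict_next() >= self.critical_strain


twin = BridgeTwin()
for reading in [0.42, 0.55, 0.69]:   # illustrative stream of sensor values
    twin.ingest(reading)
    if twin.mitigation_needed():
        print("Mitigation action proposed before critical strain is reached")
```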
Multidisciplinary expertise combining knowledge from different research domains, as well as improvements to existing models and novel technologies, will be required to increase the exploitation of digital twins in areas such as production lines and other industrial applications. To continuously update the visual representation and its semantic content, it is considered imperative for a digital twin to exploit multiple visual sources, such as mobile cameras and drones, among other inputs. Effectively processing multimedia content and efficiently retrieving information about a monitored system is expected to improve the overall performance and applicability of a digital twin. Although substantial effort has been dedicated over the last decade to incorporating distinct models by both research communities, their combination towards common objectives within the scope of a digital twin has been insufficiently studied.
Hence, this special session targets (but is not limited to) the presentation of novel research in the following domains:
- Multimedia modelling and simulation for digital twins
- Multimedia interconnection and interoperation for digital twins
- Digital twin in multimedia optimization
- Digital twin and multimedia big data
- Multimedia technologies for digital twin implementation
- Visual analysis and multi-view geometry
- Dense reconstruction from multiple visual sensors
- Point cloud extraction
- 3D deep and machine learning
- Image-based modeling and 3D reconstruction
- Visual semantic extraction for improved representations
- Semantic digital twins in various applications
- Multimedia data models for digital twins
- Benchmarks and evaluation protocols for digital twins
Session Organisers:
- Dr. Konstantinos Ioannidis (kioannid@iti.gr), Centre for Research and Technology Hellas, Information Technologies Institute
- Dr. Stefanos Vrochidis (stefanos@iti.gr), Centre for Research and Technology Hellas, Information Technologies Institute
- Dr. Klaus Schoeffmann (ks@itec.aau.at), Institute of Information Technology, Klagenfurt University, Austria
- Prof. Rolando Chacón Flores (rolando.chacon@upc.edu), UPC Universitat Politecnica de Catalunya, Spain