Applications focus on autonomous/self-driving cars, marine vehicles and drones

 

DESCRIPTION

This two-day short course provides an overview and in-depth presentation of the various computer vision and deep learning problems encountered in autonomous systems perception, e.g., in drone imaging or autonomous car vision. It consists of two parts (A and B), each comprising up to eight one-hour lectures.

Part A lectures (6-8 hours) provide an in-depth presentation of autonomous systems imaging and the relevant architectures, as well as a solid background on the necessary topics of computer vision (image acquisition, camera geometry, stereo and multiview imaging, mapping and localization) and machine learning (introduction to neural networks, perceptron, backpropagation, deep neural networks, convolutional NNs).

Part B lectures (6-8 hours) provide in-depth views of the various topics encountered in autonomous systems perception, ranging from vehicle localization and mapping to target detection and tracking, autonomous systems communications and embedded CPU/GPU computing. Part B also contains application-oriented lectures on autonomous drones, cars and marine vessels (e.g., for land/marine surveillance, search & rescue missions, infrastructure/building inspection and modeling, and cinematography).

*The course content and exact lecture topics may vary from the above, depending on recent advances, and will be finalized in consultation with the local organizer.

WHEN?

The course will take place on 17-18 August 2020.

WHERE?

The course will take place at KEDEA, 3is Septemvriou – Panepistimioupoli, 54636, Thessaloniki, Greece.

You can find additional information about the city of Thessaloniki and details on how to get to the city here.

PROGRAM

Time* | 17/08/2020 | 18/08/2020
08:00 – 09:00 | Registration | Registration
09:00 – 10:00 | Introduction to autonomous systems imaging | Localization and mapping
10:00 – 11:00 | Introduction to computer vision | Deep learning for object/target detection
11:00 – 11:30 | Coffee break | Coffee break
11:30 – 12:30 | Image acquisition, camera geometry | Object tracking and 3D localization
12:30 – 13:30 | Stereo and Multiview imaging | Parallel GPU and multicore CPU programming
13:30 – 14:30 | Lunch break | Lunch break
14:30 – 15:30 | Introduction to neural networks. Perceptron, backpropagation | Fast convolution algorithms
15:30 – 16:30 | Deep neural networks. Convolutional NNs | Drone cinematography
16:30 – 17:30 | Introduction to multiple drone imaging | Introduction to car vision
17:30 – 18:30 | Drone mission planning and control | Introduction to autonomous marine vehicles

*Eastern European Summer Time (EEST)

LECTURERS

Prof. Ioannis Pitas (IEEE Fellow, IEEE Distinguished Lecturer, EURASIP fellow) received the Diploma and Ph.D. degree in Electrical Engineering, both from the Aristotle University of Thessaloniki, Greece. Since 1994, he has been a Professor at the Department of Informatics of the same University. He served as a Visiting Professor at several Universities. His current interests are in the areas of image/video processing, machine learning, computer vision, intelligent digital media, human-centered interfaces, affective computing, 3D imaging, and biomedical imaging. He is currently leading the big European H2020 R&D project MULTIDRONE. He is also chair of the Autonomous Systems initiative.

Professor Pitas will deliver 12 lectures on deep learning and computer vision.

 

TOPICS

17/08/2020 – Part A (first day, 8 lectures):

1. Introduction to autonomous systems imaging

Abstract: This lecture will provide an introduction and the general context for this new and emerging topic, presenting the aims of autonomous systems imaging and the many issues to be tackled, especially from an image/video analysis point of view, as well as the limitations imposed by the system’s hardware. Applications to autonomous cars, drones and marine vessels will be overviewed.

2. Introduction to computer vision

Abstract: A detailed introduction to computer vision will be given, mainly focusing on 3D data types as well as color theory. The basics of color theory will be presented, followed by several color coordinate systems; finally, image and video content analysis and sampling will be thoroughly described.
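
To make the color coordinate discussion concrete, here is a minimal sketch (illustrative only, not course code) that converts an RGB image to grayscale using the standard ITU-R BT.601 luma weights, one of the simplest color-space transforms:

```python
import numpy as np

def rgb_to_gray(image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale using the
    ITU-R BT.601 luma weights (a standard color-space transform)."""
    weights = np.array([0.299, 0.587, 0.114])
    return image @ weights

# Example: a pure-red pixel maps to luma 0.299.
pixel = np.array([[[1.0, 0.0, 0.0]]])
print(rgb_to_gray(pixel))  # [[0.299]]
```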

3. Image acquisition, camera geometry

Abstract: After a brief introduction to image acquisition and light reflection, the building blocks of modern cameras will be surveyed, along with geometric camera modeling. Several camera models, such as the pinhole and weak-perspective models, will subsequently be presented, with the most commonly used camera calibration techniques closing the lecture.
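
To make the pinhole model concrete, here is a minimal sketch (with hypothetical intrinsic parameters) that projects 3D points expressed in the camera frame to pixel coordinates through the intrinsic matrix K:

```python
import numpy as np

def project_pinhole(points_3d: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Pinhole projection: homogeneous pixel = K @ [X, Y, Z]^T,
    followed by division by the depth Z."""
    homogeneous = (K @ points_3d.T).T            # N x 3
    return homogeneous[:, :2] / homogeneous[:, 2:3]

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
points = np.array([[0.1, -0.2, 2.0]])            # one point, 2 m in front
print(project_pinhole(points, K))                # [[360. 160.]]
```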

4. Stereo and Multiview imaging

Abstract: The workings of stereoscopic and multiview imaging will be explored in depth, focusing mainly on stereoscopic vision, geometry and camera technologies. Subsequently, the main methods of 3D scene reconstruction from stereoscopic video will be described, along with the basics of multiview imaging.
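
The central relation of rectified stereo can be stated in one line: depth Z = f·B/d, where f is the focal length in pixels, B the camera baseline and d the disparity. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo triangulation: depth Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d

# Hypothetical rig: f = 700 px, B = 0.12 m; a 42 px disparity gives 2 m depth.
print(depth_from_disparity(42.0, 700.0, 0.12))  # 2.0
```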

5. Introduction to neural networks. Perceptron, backpropagation

Abstract: This lecture will cover the basic concepts of neural networks: biological neural models, perceptron, multilayer perceptron, classification, regression, design of neural networks, training neural networks, deployment of neural networks, activation functions, loss types, error backpropagation, regularization, evaluation, generalization.
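
As a minimal illustration of these concepts (a sketch, not course material), the code below trains a single sigmoid neuron on the OR function by gradient descent, using the backpropagated gradient of the cross-entropy loss:

```python
import numpy as np

# Toy data: the OR function, learnable by a single neuron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0

for _ in range(2000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))           # sigmoid activation
    grad_z = p - y                          # dLoss/dz for sigmoid + cross-entropy
    w -= 0.5 * (X.T @ grad_z) / len(y)      # backpropagated weight gradient
    b -= 0.5 * grad_z.mean()                # backpropagated bias gradient

print(np.round(p))  # [0. 1. 1. 1.] once training has converged
```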

6. Deep neural networks. Convolutional NNs

Abstract: From multilayer perceptrons to deep architectures. Fully connected layers. Convolutional layers. Tensors and mathematical formulations. Pooling. Training convolutional NNs. Initialization. Data augmentation. Batch Normalization. Dropout. Deployment on embedded systems. Lightweight deep learning.
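
For a first feel of what a convolutional layer computes, here is a single-filter 2D convolution in plain NumPy (a sketch: real layers add channels, padding, strides and learned filters, and, as in deep learning libraries, the operation below is technically cross-correlation):

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide one kernel over the image ('valid' padding) and
    return the resulting feature map."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_filter = np.array([[1.0, -1.0]])            # horizontal-gradient kernel
image = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))    # vertical edge in the middle
print(conv2d_valid(image, edge_filter))          # responds at the edge column
```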

7. Introduction to multiple drone imaging

Abstract: This lecture will provide the general context for this new and emerging topic, presenting the aims of drone vision, the challenges (especially from an image/video analysis and computer vision point of view), the important issues to be tackled, the limitations imposed by drone hardware, regulations, safety considerations, etc. An overview of the use of multiple drones in media production will be given. The three use scenarios, the challenges to be faced and the adopted methodology will be discussed in the first part of the lecture, followed by scenario-specific, media production and system platform requirements. The multiple-drone platform will be detailed in the second part of the lecture, beginning with a platform hardware overview, issues and requirements, and proceeding to safety and privacy protection issues. Finally, platform integration will be the closing topic of the lecture, elaborating on drone mission planning, object detection and tracking, UAV-based cinematography, target pose estimation, privacy protection, ethical and regulatory issues, potential landing site detection, crowd detection, semantic map annotation and simulations.

8. Drone mission planning and control

Abstract: In this lecture, the audiovisual shooting mission is first formally defined. The introduced audiovisual shooting definitions are encoded in mission planning commands, i.e., a navigation and shooting action vocabulary, and their corresponding parameters. The drone mission commands, as well as the hardware/software architecture required for manual/autonomous mission execution, are described. The software infrastructure includes the planning modules, which assign, monitor and schedule different behaviours/tasks to the drone swarm team according to director and environmental requirements, and the control modules, which execute the planned mission by translating high-level commands into desired drone+camera configurations, producing commands for the autopilot, camera and gimbal of the drone swarm.
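
Purely as an illustration, such a navigation and shooting action vocabulary could be encoded as structured commands; the types and fields below are hypothetical and do not reproduce the actual MULTIDRONE command set:

```python
from dataclasses import dataclass
from enum import Enum

class ShotType(Enum):            # hypothetical shooting-action vocabulary
    ORBIT = "orbit"
    FLYBY = "flyby"
    CHASE = "chase"

@dataclass
class ShootingAction:            # hypothetical command parameters
    shot: ShotType
    target_id: str               # subject to be framed
    altitude_m: float            # commanded flight altitude
    duration_s: float            # planned shot duration

mission = [ShootingAction(ShotType.ORBIT, "cyclist-1", 15.0, 20.0),
           ShootingAction(ShotType.FLYBY, "cyclist-1", 10.0, 8.0)]
for action in mission:           # a planner would schedule these per drone
    print(f"{action.shot.value} on {action.target_id} for {action.duration_s}s")
```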

18/08/2020 – Part B (second day, 8 lectures):

1. Localization and mapping

Abstract: The lecture covers the essential knowledge about how to obtain the 2D and/or 3D maps that robots/drones need, by taking measurements that allow them to perceive their environment with appropriate sensors. Semantic mapping covers how to add semantic annotations to the maps, such as POIs, roads and landing sites. Localization finds the 3D drone or target location based on sensor data, notably using Simultaneous Localization And Mapping (SLAM). Finally, drone localization fusion improves the accuracy of localization and mapping by exploiting the synergies between different sensors.
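
As a toy illustration of such fusion (not the actual SLAM pipeline), the sketch below combines two independent position estimates by inverse-variance weighting, the static special case of a Kalman update; note that the fused variance is smaller than either input:

```python
import numpy as np

def fuse_estimates(x1, var1, x2, var2):
    """Fuse two independent position estimates by inverse-variance weighting."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * np.asarray(x1) + w2 * np.asarray(x2)) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)   # always below min(var1, var2)
    return fused, fused_var

# Hypothetical inputs: a GPS fix (variance 4 m^2) and a visual-SLAM
# estimate (variance 1 m^2) of the same 2D drone position.
print(fuse_estimates([10.0, 5.0], 4.0, [10.8, 5.4], 1.0))
```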

2. Deep learning for object/target detection

Abstract: Recently, Convolutional Neural Networks (CNNs) have been used for object/target (e.g., car, pedestrian, road sign) detection with great results. However, deploying such CNN models on embedded processors with limited computational capabilities for real-time processing is hindered by hardware constraints. To this end, various architectures and settings will be examined that facilitate and accelerate embedded CNN-based object detectors. The following target detection topics will be presented: object detection as a search and classification task; detection as a classification and regression task; modern architectures for target detection (e.g., RCNN, Faster R-CNN, YOLO, SSD); lightweight architectures; data augmentation; deployment; evaluation and benchmarking.
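
On the evaluation side, detectors are conventionally scored by the Intersection-over-Union (IoU) between predicted and ground-truth boxes; a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 region: IoU = 25 / 175 ≈ 0.143.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```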

3. Object tracking and 3D target localization

Abstract: Target tracking is a crucial component of many vision systems. Many approaches regarding person/object detection and tracking in videos have been proposed. In this lecture, video tracking methods using correlation filters or convolutional neural networks are presented, focusing on video trackers that are capable of achieving real-time performance for long-term tracking on embedded computing platforms.
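
The core idea behind correlation-filter tracking can be sketched in a few lines: correlate a target template with the frame in the Fourier domain and take the peak of the response map (real trackers such as MOSSE or KCF add online filter learning and scale handling on top):

```python
import numpy as np

def correlation_peak(frame: np.ndarray, template: np.ndarray):
    """Cross-correlate a template with the frame via the FFT and
    return the location of the strongest response."""
    F = np.fft.fft2(frame)
    H = np.fft.fft2(template, s=frame.shape)       # zero-padded template
    response = np.real(np.fft.ifft2(F * np.conj(H)))
    return np.unravel_index(np.argmax(response), response.shape)

frame = np.zeros((64, 64))
frame[20:24, 30:34] = 1.0                          # synthetic target blob
template = np.ones((4, 4))
print(correlation_peak(frame, template))           # (20, 30)
```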

4. Parallel GPU and multicore CPU programming

Abstract: In this lecture, various GPU and multicore CPU architectures will be reviewed, notably those used in GPU cards and in embedded boards such as the NVIDIA TX1, TX2 and Xavier. The principles of parallelizing various algorithms on GPU and multicore CPU architectures are reviewed. Subsequently, the essentials of GPU programming are presented. Finally, special attention is paid to: a) fast and parallel linear algebra operations (e.g., using cuBLAS) and b) FFT-based convolution algorithms, as both are of particular importance in deep machine learning (CNNs) and in real-time computer vision.
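
One reason fast linear algebra matters so much here is that convolutional layers are routinely lowered to a single matrix multiplication (the im2col + GEMM trick), which BLAS/cuBLAS-style libraries execute very efficiently; a NumPy sketch of the idea:

```python
import numpy as np

def conv2d_as_gemm(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Express a 2D convolution as one matrix product (im2col + GEMM)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    # Gather every kh x kw patch into one row of the 'im2col' matrix.
    cols = np.stack([image[i:i + kh, j:j + kw].ravel()
                     for i in range(oh) for j in range(ow)])
    return (cols @ kernel.ravel()).reshape(oh, ow)  # one BLAS-backed product

image = np.arange(16.0).reshape(4, 4)
kernel = np.ones((2, 2))
print(conv2d_as_gemm(image, kernel))  # each entry is the sum of a 2x2 patch
```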

5. Fast convolution algorithms

Abstract: Two important factors related to deep neural network models are the amount of time spent training such models and the response time during DNN inference. Many vision-related applications of autonomous systems require very low latency during inference. Both are determined by how fast we can compute neural network operations, such as 2D and 3D convolutions. Introducing new, fast ways of performing these operations can boost the computation speed of deep neural networks.
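
A classical example of such a fast algorithm is FFT-based convolution, which replaces the O(n²) direct computation with O(n log n) transforms; the sketch below verifies it against NumPy’s direct implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
x, h = rng.normal(size=256), rng.normal(size=31)   # signal and filter

n = len(x) + len(h) - 1                            # full convolution length
fft_result = np.real(np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)))
direct_result = np.convolve(x, h)                  # O(n^2) reference

print(np.allclose(fft_result, direct_result))      # True
```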

6. Drone cinematography

Abstract: The main building blocks of drone cinematography will be surveyed, especially focusing on UAV shot types (framing and camera motion types). Additionally, the state of the art in autonomous capture of cinematic UAV footage will be described, with emphasis on relevant algorithms, commercial products and tools of the trade. Drones have already made their way into media production, be it cinema movies (Spectre, Captain America: Civil War, etc.) or TV content (e.g., documentaries and news coverage), and they have done so for good reason. Indeed, the versatility provided by camera-carrying drones is expected to revolutionize aerial shooting, allowing faster and more flexible camera positioning and movements (including low-altitude ones or shots close to the subject) than those provided by helicopters, while at the same time reducing cost and increasing safety and ease of operation. Drones are expected to enable film-makers and TV crews to develop a new cinematographic language, especially in combination with techniques that enable automated and intelligent shooting (a topic that has just started to emerge). This part of the tutorial will review characteristic cases of drone usage in cinematography, provide a taxonomy of existing drone cinematography static & dynamic shot types and shot sequencing, delve into the new horizons that open up for the creation of new visual effects and shot types, and discuss technical/research issues and challenges as well as issues related to the viewer’s experience and perceived quality. It will also review recent approaches to automatic drone cinematography and to the “virtual” planning of drone shots. The opportunities and challenges stemming from the use of multiple drones will also be discussed.

7. Introduction to car vision

Abstract: In this lecture, an overview of autonomous car technologies will be presented (structure, HW/SW, perception), focusing on car vision. An example autonomous vehicle will be presented as a case study, including its sensors and algorithms. Then, an overview of computer vision applications in such a vehicle will be given, including visual odometry, lane detection and road segmentation. Finally, the current progress of autonomous driving will be reviewed.
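
As an illustration of classical lane detection only (modern pipelines typically rely on deep segmentation networks instead), the sketch below chains Canny edge detection with a probabilistic Hough transform using OpenCV:

```python
import cv2
import numpy as np

def detect_lane_segments(bgr_frame: np.ndarray):
    """Edge detection + probabilistic Hough transform; real systems add
    region-of-interest masking, lane-model fitting and temporal filtering."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# Usage with a hypothetical file name:
# segments = detect_lane_segments(cv2.imread("road.png"))
```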

8. Introduction to autonomous marine vehicles

Abstract: Autonomous marine vehicles can be categorized into surface vehicles (boats, ships) and underwater ones (submarines). They have many applications in marine transportation and marine/submarine surveillance, and pose many challenges in environment perception/mapping and vehicle control, which will be reviewed in this lecture.

AUDIENCE

Any practicing engineer, scientist or student with some knowledge of computer vision and/or machine learning; notably CS, CSE, ECE and EE students, graduates, or industry professionals with a relevant background.

REGISTRATION

Registration information will be announced soon.

Remote short course participation is allowed.

Lectures will be in English. PDF slides will be available to course attendees.

A certificate of attendance will be provided.

 

WHAT IF I HAVE A QUESTION?

Contact

SAMPLE COURSE MATERIAL & RELATED LITERATURE

1.) C. Regazzoni, I. Pitas, ‘Perspectives in Autonomous Systems Research’, Signal Processing Magazine, September 2019

2.) Artificial neural networks

3.) 3D Shape Reconstruction from 2D Images

4.) Overview of self-driving car technologies

5.) I. Pitas, ‘3D Imaging Science and Technologies’, Amazon CreateSpace preprint, 2019

6.) R. Fan, U. Ozgunalp, B. Hosking, M. Liu, I. Pitas, ‘Pothole Detection Based on Disparity Transformation and Road Surface Modeling’, IEEE Transactions on Image Processing (accepted for publication, 2019)

7.) R. Fan, X. Ai, N. Dahnoun, ‘Road Surface 3D Reconstruction Based on Dense Subpixel Disparity Map Estimation’, IEEE Transactions on Image Processing, vol. 27, no. 6, June 2018

8.) U. Ozgunalp, R. Fan, X. Ai, N. Dahnoun, ‘Multiple Lane Detection Algorithm Based on Novel Dense Vanishing Point Estimation’, IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 3, March 2017

9.) R. Fan, J. Jiao, J. Pan, H. Huang, S. Shen, M. Liu, ‘Real-Time Dense Stereo Embedded in a UAV for Road Inspection’, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019

10.) V. Mygdalis, A. Iosifidis, A. Tefas, I. Pitas, ‘Semi-Supervised Subclass Support Vector Data Description for Image and Video Classification’, Neurocomputing, 2017

11.) N. Tsapanos, A. Tefas, N. Nikolaidis, I. Pitas, ‘Neurons With Paraboloid Decision Boundaries for Improved Neural Network Classification Performance’, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 14 June 2018, pp. 1-11

12.) P. Nousi, E. Patsiouras, A. Tefas, I. Pitas, ‘Convolutional Neural Networks for Visual Information Analysis with Limited Computing Resources’, 2018 IEEE International Conference on Image Processing (ICIP), Athens, Greece, October 7-10, 2018

13.) V. Mygdalis, A. Tefas, I. Pitas, ‘Learning Multi-graph Regularization for SVM Classification’, 2018 IEEE International Conference on Image Processing (ICIP), Athens, Greece, October 7-10, 2018

14.) P. Chriskos, R. Zhelev, V. Mygdalis, I. Pitas, ‘Quality Preserving Face De-Identification Against Deep CNNs’, 2018 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, September 2018

15.) P. Chriskos, O. Zoidi, A. Tefas, I. Pitas, ‘De-identifying Facial Images Using Singular Value Decomposition and Projections’, Multimedia Tools and Applications, 2016

16.) P. Nousi, A. Tefas, I. Pitas, ‘Deep Convolutional Feature Histograms for Visual Object Tracking’, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019

17.) V. Mygdalis, A. Tefas, I. Pitas, ‘Exploiting Multiplex Data Relationships in Support Vector Machines’, Pattern Recognition, vol. 85, pp. 70-77, 2019

 

USEFUL LINKS

• Prof. Ioannis Pitas: https://scholar.google.gr/citations?user=lWmGADwAAAAJ&hl=el

• Department of Computer Science, Aristotle University of Thessaloniki (AUTH): https://www.csd.auth.gr/en/

• Laboratory of Artificial Intelligence and Information Analysis: http://www.aiia.csd.auth.gr/

• Thessaloniki: https://wikitravel.org/en/Thessaloniki