SHORT DESCRIPTION

This three-day short course and workshop provides an in-depth presentation of programming tools and techniques for various computer vision and deep learning problems encountered in drone imaging. Special attention will be paid to drone cinematography, which is one of the main application areas of drone technologies. The same machine learning and computer vision problems occur in other drone applications as well, e.g., land/marine surveillance, search and rescue, infrastructure/building inspection and 3D modeling. The short course consists of three parts (A, B, C), each having lectures and a programming workshop with hands-on lab exercises.

Part A will focus on Deep Learning. The lectures of this part provide a solid background on Deep Neural Network (DNN) topics, notably convolutional NNs (CNNs) and deep learning for object detection. Various DNN programming tools will be presented, e.g., PyTorch, Keras, TensorFlow. The hands-on programming workshop will be on PyTorch basics and target detection with PyTorch.

Part B lectures will focus on computer vision algorithms, namely 2D target tracking and 3D target localization techniques (giving attendees the opportunity to master state-of-the-art video trackers), parallel GPU and multi-core CPU architectures, and GPU programming (CUDA). Two programming workshops will take place. The first will be on CUDA programming, focusing on 2D convolution algorithms. The second will be on how to use OpenCV (the most widely used computer vision library) for target tracking.

As drones execute missions (e.g., AV shooting, inspection), Part C lectures will focus on drone mission planning and control. Before execution, a mission is best simulated first, using drone mission simulation tools. Such simulations will be presented using AirSim. Additionally, a programming workshop on ROS and Gazebo simulations for drones will take place.

Overall, the lectures and programming workshops will provide the programming skills needed for the computer vision and deep learning problems encountered in drone imaging and cinematography, skills that carry over directly to other drone applications, e.g., land/marine surveillance, search and rescue, and building and machine inspection.

Lectures and programming workshops will be in English. PDF files will be available at the end of the course. 40 programming workstation positions will be available on a registration-priority basis. Registration will stop if/when the positions are filled.

If in-depth coverage of computer vision and deep learning is desired, attendees may also want to join the short course on Computer Vision and Deep Learning (26-27/08/2019, Aristotle University of Thessaloniki), which runs back to back with this course.

Part A (8 hours), Deep learning sample topic list

  1. Deep neural networks. Convolutional NNs
  2. Deep learning for target detection
  3. PyTorch basics
  4. Target detection with PyTorch
  5. Object-oriented TensorFlow in Google Colab

Part B (8 hours), Computer vision sample topic list

  1. 2D target tracking and 3D target localization
  2. Parallel GPU and multi-core CPU architectures. GPU programming
  3. CUDA programming
  4. OpenCV programming for object tracking
  5. Drone mission simulations

Part C (8 hours), Drone planning/control sample topic list

  1. Drone mission planning and control
  2. Gimbal control for target tracking
  3. Drones with ROS and Gazebo simulations
  4. Brain-Drone Interaction

WHEN?

The course will take place on 28-30 August 2019.

WHERE?

All lectures and workshops will take place at the School of Informatics lecture halls and labs (mezzanine of the Biology building), Aristotle University of Thessaloniki.

You can find additional information about the city of Thessaloniki and details on how to get to the city here.

PROGRAM

28/08/2019 – Deep Learning

  8:00-8:30    Registration
  8:30-9:00    Lecture: Introduction to drone imaging
  9:00-10:00   Lecture: Deep neural networks – Convolutional NNs
  10:00-11:00  Lecture: Deep learning for target detection
  11:00-11:30  Coffee break
  11:30-13:30  Workshop: PyTorch basics. Object detection, image synthesis and style transfer on images using PyTorch
  13:30-14:30  Lunch break
  14:30-16:30  Workshop: PyTorch: understanding the core functionalities of an object detector; training and deployment
  16:30-18:30  Workshop: Object-oriented TensorFlow in Google Colab
  20:00        Welcome party

29/08/2019 – Computer Vision

  9:00-10:00   Lecture: 2D target tracking
  10:00-11:00  Lecture: Parallel GPU and multi-core CPU architectures – GPU programming
  11:00-11:30  Coffee break
  11:30-13:30  Workshop: CUDA programming
  13:30-14:30  Lunch break
  14:30-16:30  Workshop: OpenCV programming for object tracking
  16:30-18:30  Workshop: Drone mission simulations

30/08/2019 – Drone planning and control

  8:30-9:00    Lecture: Drones, new legislation and applications
  9:00-10:00   Lecture: Drone mission planning and control
  10:00-11:00  Lecture: Gimbal control for target tracking
  11:00-11:30  Coffee break
  11:30-13:30  Workshop: Drones with ROS and Gazebo simulations
  13:30-14:30  Lunch break
  14:30-16:30  Workshop: Drones with ROS and Gazebo simulations (continued)
  16:30-17:00  Brain-Drone Interaction demonstration
  20:00        Goodbye party

All times are in Eastern European Summer Time (EEST).

TOPICS

Part A (first day, 2 lectures, 3 programming workshops) 28/08/2019
Deep Learning

The lectures of Part A provide a solid background on deep neural networks, convolutional NNs and deep learning for object detection. Various DNN programming tools will be presented, e.g., PyTorch, Keras, TensorFlow.

The hands-on programming workshop will be on PyTorch basics and target detection with PyTorch.

1. Deep neural networks. Convolutional NNs:
Abstract: From multi-layer perceptrons to deep architectures. Fully connected layers. Convolutional layers. Tensors and mathematical formulations. Pooling. Training convolutional NNs. Initialization. Data augmentation. Batch normalization. Dropout. Deployment on embedded systems. Lightweight deep learning. DNN programming tools (e.g., PyTorch, Keras, TensorFlow).
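
To make the building blocks above concrete, here is a minimal PyTorch sketch of such a network (layer sizes are illustrative and assume 32x32 RGB inputs, not any specific workshop dataset):

    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        """Two convolutional blocks followed by a fully connected classifier."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
                nn.BatchNorm2d(16),                          # batch normalization
                nn.ReLU(),
                nn.MaxPool2d(2),                             # pooling
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.BatchNorm2d(32),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Dropout(0.5),                             # dropout regularization
                nn.Linear(32 * 8 * 8, num_classes),          # fully connected layer
            )

        def forward(self, x):                                # x: (batch, 3, 32, 32) tensor
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = SmallCNN()
    out = model(torch.randn(4, 3, 32, 32))                   # forward pass on a random batch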

2. Deep learning for target detection:
Abstract: Recently, Convolutional Neural Networks (CNNs) have been used for object/target (e.g., car, pedestrian, road sign) detection with great results. However, using such CNN models on embedded processors for real-time processing is hindered by hardware constraints. Various architectures and settings will therefore be examined, in order to facilitate and accelerate the use of embedded CNN-based object detectors with limited computational capabilities. The following target detection topics will be presented: object detection as a search and classification task; detection as a classification and regression task; modern architectures for target detection (e.g., RCNN, Faster-RCNN, YOLO, SSD); lightweight architectures; data augmentation; deployment; evaluation and benchmarking.
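
As a flavour of the detection-as-classification-and-regression view, the following sketch runs one of the architectures named above, a Faster R-CNN pretrained on COCO, via the torchvision model zoo (a minimal example; the workshop's own models and data may differ):

    import torch
    import torchvision

    # Faster R-CNN with a ResNet-50 FPN backbone, pretrained on COCO.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    image = torch.rand(3, 480, 640)          # stand-in for a real RGB image in [0, 1]
    with torch.no_grad():
        predictions = model([image])         # one dict per input image

    boxes = predictions[0]["boxes"]          # (N, 4) tensor: x1, y1, x2, y2
    labels = predictions[0]["labels"]        # (N,) COCO class indices
    scores = predictions[0]["scores"]        # (N,) detection confidences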

3. PyTorch basics:
Abstract: Introduction to PyTorch and its basic commands: learn how to build an image classifier, generate fake images using GANs and transfer style between images using PyTorch.
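
For orientation, a few of the basic commands the workshop builds on (a minimal sketch, not the workshop's actual notebook):

    import torch

    # Tensors and automatic differentiation in a few lines.
    x = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
    y = (x ** 2).sum()     # scalar built from tensor operations
    y.backward()           # autograd computes dy/dx
    print(x.grad)          # gradient equals 2 * x

    # Move computation to the GPU when one is available.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = x.detach().to(device)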

4. Target detection with PyTorch:
Abstract: Given some parts of the code, build and train an object detector in PyTorch: dataset preparation, data loaders, dealing with an unequal number of boxes per image, and understanding the core functionality of an object detector. Training and deployment.
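
The "unequal number of boxes" issue mentioned above is commonly handled with a custom collate function; a minimal sketch (the dataset name and target field names are illustrative):

    import torch
    from torch.utils.data import DataLoader

    def detection_collate(batch):
        # The default collation would try to stack box tensors of different
        # sizes; instead keep images and targets as per-image lists.
        images = [sample[0] for sample in batch]
        targets = [sample[1] for sample in batch]  # e.g., {"boxes": (N_i, 4), "labels": (N_i,)}
        return images, targets

    # my_detection_dataset is a hypothetical torch.utils.data.Dataset:
    # loader = DataLoader(my_detection_dataset, batch_size=4, shuffle=True,
    #                     collate_fn=detection_collate)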

5. Object oriented Tensorflow in Google Colab:
Abstract: Online source code examples and a hands-on challenge using Google Colaboratory. Introduction to TensorFlow (declare tensors, run sessions, visualization), object-oriented wrappers for tensors (dense and convolutional layer wrappers, neural network inference, load/save model parameters, gradient descent), training a state-of-the-art CNN classifier on CIFAR-10 (data pre-processing and augmentation, batch normalization, advanced activation functions, optimizers, L2 regularization). Latest developments in TensorFlow (TensorFlow 2.0, TensorFlow.js, TensorRT).
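
In the object-oriented spirit described above, a minimal tf.keras sketch (assuming TensorFlow is available, as in Colab; the workshop's own wrapper classes may differ):

    import tensorflow as tf

    class SmallConvNet(tf.keras.Model):
        """Object-oriented model: layers are attributes, inference is a method."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.conv = tf.keras.layers.Conv2D(32, 3, activation="relu")
            self.pool = tf.keras.layers.MaxPooling2D()
            self.flatten = tf.keras.layers.Flatten()
            self.dense = tf.keras.layers.Dense(num_classes, activation="softmax")

        def call(self, x):
            return self.dense(self.flatten(self.pool(self.conv(x))))

    # Train briefly on CIFAR-10 (downloaded automatically by Keras).
    (x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
    model = SmallConvNet()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train / 255.0, y_train, epochs=1, batch_size=64)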

Part B (second day, 2 lectures, 3 programming workshops) 29/08/2019:
Computer Vision

Part B lectures will focus on computer vision algorithms, namely 2D target tracking and 3D target localization techniques (giving attendees the opportunity to see state-of-the-art video trackers), parallel GPU and multi-core CPU architectures, and GPU programming (CUDA). Two programming workshops will take place. The first will be on CUDA programming, focusing on 2D convolution algorithms. The second will be on how to use OpenCV (the most widely used computer vision library) for target tracking.

1. 2D target tracking:
Abstract: Target tracking is a crucial component of many computer vision systems, and many approaches regarding face/object detection and tracking in videos have been proposed. In this lecture, video tracking methods using correlation filters or convolutional neural networks are presented, focusing on video trackers that are capable of achieving real-time performance for long-term tracking on a UAV platform.

2. Parallel GPU and multi-core CPU architectures. GPU programming:
Abstract: The GPU's unique architectural features are emphasized through a CPU-GPU comparison. The GPU's architecture, in terms of ALUs and memory types, is presented in detail in order to introduce the special characteristics of GPU programming. The audience becomes familiar with terms such as grid, block, thread and kernel, and the general layout of a CUDA program is presented. CUDA keywords are explained by presenting simple CUDA programs. Finally, areas where GPU programming achieves outstanding performance are mentioned and 2D convolution algorithm implementations are demonstrated.

3. CUDA programming:
Abstract: 2D and 3D convolutions are very important tools both for computer vision (e.g., target tracking) and for deep learning (convolutional NNs). Learn how to implement a 2D convolution between an image and a mask with CUDA.
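
The workshop itself uses CUDA directly; to keep the examples on this page in Python, here is the same grid/block/thread structure sketched with Numba's CUDA support (assuming numba and a CUDA-capable GPU; the kernel computes the cross-correlation form of 2D convolution common in computer vision):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def conv2d_kernel(image, mask, out):
        # Each thread computes one output pixel; cuda.grid(2) yields its (x, y) index.
        x, y = cuda.grid(2)
        if x < out.shape[0] and y < out.shape[1]:
            acc = 0.0
            for i in range(mask.shape[0]):
                for j in range(mask.shape[1]):
                    acc += image[x + i, y + j] * mask[i, j]
            out[x, y] = acc

    image = np.random.rand(512, 512).astype(np.float32)
    mask = np.ones((3, 3), dtype=np.float32) / 9.0   # averaging filter
    out = np.zeros((510, 510), dtype=np.float32)     # "valid" output size

    threads_per_block = (16, 16)                     # block layout
    blocks_per_grid = (32, 32)                       # grid layout covering the output
    conv2d_kernel[blocks_per_grid, threads_per_block](image, mask, out)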

4. OpenCV programming for object tracking:
Abstract: The first part of this tutorial gives an introduction to the OpenCV library using Python. Students will learn how to perform basic image processing operations, such as reading and displaying an image, extracting ROIs and applying filters. In the second part of the tutorial, the students will learn how to perform visual object tracking in video sequences, with correlation filter-based tracking algorithms and OpenCV.
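
As a taste of the second part, correlation filter trackers such as KCF are available directly in OpenCV; a minimal sketch (the video path is illustrative, and in some OpenCV builds the tracker lives under cv2.legacy):

    import cv2

    cap = cv2.VideoCapture("video.mp4")     # illustrative input video
    ok, frame = cap.read()

    bbox = cv2.selectROI("frame", frame)    # draw the initial target ROI by hand
    tracker = cv2.TrackerKCF_create()       # KCF: a correlation filter tracker
    tracker.init(frame, bbox)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)          # new (x, y, w, h) for the target
        if found:
            x, y, w, h = [int(v) for v in bbox]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break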

5. Drone mission simulations:
Abstract: Machine learning algorithms need large amounts of quality data to be trained efficiently. Gathering and annotating such amounts of data is a time-consuming and error-prone task, which limits scale and quality. Synthetic data generation has become increasingly popular, thanks to fast generation and automatic annotation. Specifically for drones, AirSim is a drone data simulator built on Unreal Engine 4, offering visually realistic simulations and therefore the ability to generate realistic data for tasks such as detection, tracking and pose estimation. The lecture will cover 1) basic world modelling in UE4 using blueprints, 2) use of AirSim for synthetic data generation, and 3) a live demo of cyclist detection and crowd detection using ROS.
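
A minimal sketch of driving AirSim from its Python API, e.g., to fly to a waypoint and grab a synthetic camera frame for dataset generation (assumes the airsim package and a running AirSim/UE4 environment):

    import airsim

    client = airsim.MultirotorClient()
    client.confirmConnection()
    client.enableApiControl(True)
    client.armDisarm(True)

    client.takeoffAsync().join()
    client.moveToPositionAsync(10, 0, -5, 3).join()  # x, y, z in NED (z < 0 is up), speed

    # Request one scene image from camera "0"; with compression on,
    # image_data_uint8 holds PNG bytes ready to be written to disk.
    responses = client.simGetImages([
        airsim.ImageRequest("0", airsim.ImageType.Scene, False, True)
    ])
    png_bytes = responses[0].image_data_uint8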

Part C (third day, 3 lectures, 1 programming workshop) 30/08/2019:
Drone planning and control

As drones execute missions (e.g., AV shooting, inspection), Part C lectures will focus on drone mission planning and control. Before execution, a mission is best simulated first, using drone mission simulation tools. Such simulations will be presented using AirSim. Additionally, a programming workshop on ROS and Gazebo simulations for drones will take place.

1. Drones, new legislation & applications:
Abstract: The rapid evolution of the drone industry means that rules and regulations must change. This year brought the first step in the unification of rules across Europe, which means greater flexibility. Agriculture, aviation, surveillance, photography and air taxis are some of the key changing applications. The drone market is expected to grow from 18 billion USD in 2018 to 43 billion USD by 2024. The future is here.

2. Drone mission planning and control:
Abstract: In this lecture, the audiovisual shooting mission is first formally defined. The introduced audiovisual shooting definitions are encoded in mission planning commands, i.e., a navigation and shooting action vocabulary, and their corresponding parameters. The drone mission commands, as well as the hardware/software architecture required for manual/autonomous mission execution, are described. The software infrastructure includes the planning modules, which assign, monitor and schedule different behaviors/tasks to the drone swarm team according to director and environmental requirements, and the control modules, which execute the planned mission by translating high-level commands into desired drone+camera configurations, producing commands for the autopilot, camera and gimbal of the drone swarm.

3. Gimbal control for Target Tracking:
Abstract: In this lecture, we will describe how to control a 3-axis gimbal so that the camera installed on the gimbal can track a moving target of interest. The controller can be based either on global position measurements, for targets equipped with GNSS, or on local image measurements provided by visual analysis for detection and tracking of the target in the image plane. The resulting control law will be described together with the hardware and software used for implementation.
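
For intuition, the image-based case can be reduced to a simple proportional law that drives the tracked bounding box toward the image centre; a hedged sketch (the gains and the actuation interface are illustrative, not the lecture's actual design):

    # Proportional gimbal control from tracker output.
    K_YAW, K_PITCH = 0.002, 0.002            # illustrative gains (rad/s per pixel)

    def gimbal_rates(bbox, frame_w, frame_h):
        """bbox = (x, y, w, h) from a visual tracker; returns yaw/pitch rate commands."""
        x, y, w, h = bbox
        err_x = (x + w / 2.0) - frame_w / 2.0    # horizontal pixel error
        err_y = (y + h / 2.0) - frame_h / 2.0    # vertical pixel error
        return K_YAW * err_x, -K_PITCH * err_y   # pan/tilt toward the target

    # Target right of centre -> positive yaw rate (pan right):
    print(gimbal_rates((900, 200, 80, 60), 1280, 720))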

4. Drones with ROS and Gazebo simulations:
Abstract: How to control a drone using ROS. A brief introduction to ROS topics and services. Brief introduction to Gazebo: worlds, models and plugins. What is UAL and how to use it. How to simulate one or several drones in Gazebo. Given a sample Gazebo world with one drone and one autonomous vehicle, develop a ROS node in Python or C++ so the drone can follow the vehicle.
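
A minimal sketch of the kind of follower node the exercise asks for, using plain rospy (the topic names, message types and gain are illustrative; the workshop's UAL interface may differ):

    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import PoseStamped, Twist

    K = 0.5  # proportional gain on the position error

    class VehicleFollower:
        def __init__(self):
            self.cmd_pub = rospy.Publisher("/drone/cmd_vel", Twist, queue_size=1)
            self.drone_pose = None
            rospy.Subscriber("/drone/pose", PoseStamped, self.on_drone_pose)
            rospy.Subscriber("/vehicle/pose", PoseStamped, self.on_vehicle_pose)

        def on_drone_pose(self, msg):
            self.drone_pose = msg

        def on_vehicle_pose(self, msg):
            if self.drone_pose is None:
                return
            cmd = Twist()  # velocity set-point proportional to the position error
            cmd.linear.x = K * (msg.pose.position.x - self.drone_pose.pose.position.x)
            cmd.linear.y = K * (msg.pose.position.y - self.drone_pose.pose.position.y)
            self.cmd_pub.publish(cmd)

    if __name__ == "__main__":
        rospy.init_node("vehicle_follower")
        VehicleFollower()
        rospy.spin()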

5. Brain-Drone Interaction:
Abstract: A Brain-Drone Interaction experiment will be demonstrated, where a brain-computer (EEG) interface will be used for drone control (take-off and landing). A Brain-Computer Interface (BCI) is a bidirectional link between a wired brain and a device. It provides great potential for interacting with the environment without involving the peripheral nervous system, by extracting brain waves from electroencephalography recordings, analyzing them and providing feedback through visual stimuli. The signal analysis process of this system will be discussed, as well as BCI applications in gaming, medicine, etc. A live demonstration will be given, based on a NeuroSky MindWave headset and a small drone.

REGISTRATION

!!! We have reached the maximum number of registrations for this year's course. We will be happy to welcome you at future events.


Early registration (till 30/06/2019):

a) Standard: 190 Euros

b) Reduced registration for young professionals (up to 2 years after graduation)*: 90 Euros

c) Undergraduate/MSc/PhD students*: 30 Euros

Late or on-site registration (after 30/06/2019):

a) Standard: 200 Euros

b) Reduced registration for young professionals (up to 2 years after graduation)*: 100 Euros

c) Undergraduate/MSc/PhD students*: 40 Euros


*A student card or proof of employment must be shown during registration.


All lectures and workshops will be in English.

A certificate of attendance will be provided.

Remote course participation is not available.

Cancellation policy:

  • 70% refund for cancellation up to 30/5/2019
  • 50% refund for cancellation up to 19/7/2019
  • 0% refund afterwards

PRESENTATIONS

Presentations and lab notes will be available to the attendees.

LECTURERS & TUTORS

Prof. Ioannis Pitas (IEEE Fellow, IEEE Distinguished Lecturer, EURASIP Fellow) received the Diploma and Ph.D. degrees in Electrical Engineering, both from the Aristotle University of Thessaloniki, Greece. Since 1994, he has been a Professor at the Department of Informatics of the same university. He has served as a Visiting Professor at several universities. His current interests are in the areas of image/video processing, machine learning, computer vision, intelligent digital media, human-centered interfaces, affective computing, 3D imaging and biomedical imaging. He is currently leading the big European H2020 R&D project MULTIDRONE. He is also chair of the Autonomous Systems initiative. (Lecture: Introduction to drone imaging.)

Jesús Capitán is Assistant Professor at the University of Seville. He received his degree in Telecommunication Engineering (2006) from the University of Seville, and a Ph.D. in Robotics (2011) from the same university. In 2005 he joined the Robotics, Vision and Control Research Group. During his Ph.D., he worked as a visiting fellow at the Robotics Institute, Carnegie Mellon University, Pittsburgh, U.S.A.; and the Institute for Systems and Robotics, Instituto Superior Tecnico, Lisboa, Portugal. After his Ph.D., he worked as a senior researcher at the Institute for Systems and Robotics, Instituto Superior Tecnico, Lisboa, Portugal (2011-2012) and the Networked Embedded System Group, University of Duisburg-Essen, Essen, Germany (2012-2013). His research is focused on cooperative multi-robot systems. In particular, he is interested in decentralized decision-making, planning under uncertainty, cooperative active perception and Partially Observable Markov Decision Processes. (Lecture: Drone mission planning and control.)

Arturo Torres-González is a Postdoc Researcher at the University of Seville. He received his degree in Telecommunication Engineering (2011) from the University of Seville, and a Ph.D. in Robotics (2017) from the same university. In 2010 he joined the Robotics, Vision and Control Research Group. During his Ph.D., he worked as a visiting fellow at the Australian Centre for Field Robotics, University of Sydney, Australia. He received the Best Iberian Thesis in Robotics Award 2017 from the Spanish and Portuguese robotics societies SEIDROB and SPR. His research is focused on multi-agent systems, robot-sensor network cooperation and robot localization and mapping. (Programming workshop: Drones with ROS and Gazebo simulations.)

Paraskevi Nousi obtained her B.Sc. in Informatics in 2014 from the Aristotle University of Thessaloniki and is currently pursuing her Ph.D. in Computational Intelligence at the Informatics Department of the same university. Her research is focused on developing effective and efficient deep learning methods for visual analysis tasks, such as visual object tracking and object detection and recognition, and has been influenced by the needs of the H2020 project MULTIDRONE. (Lecture: Deep learning for target detection. Programming workshop: PyTorch: Understand the core functionalities of an object detector. Training and deployment.)

Vasco Sampaio obtained his M.Sc. in Mechanical Engineering in 2017 from the University of Lisbon and has been working as a Research Engineer at the Institute for Systems and Robotics in Lisbon since 2018. He is currently participating in the MULTIDRONE project, working on software and hardware development and implementation. His research is focused on gimbal and drone control. (Lecture: Gimbal Control for Target Tracking.)

Miguel Malaca is currently pursuing his M.Sc. in Electrical Engineering at the University of Lisbon and joined the Institute for Systems and Robotics in 2018. His research is focused on vision-based gimbal control for target tracking. His work has contributed to control-related tasks in the H2020 project MULTIDRONE. (Lecture: Gimbal Control for Target Tracking.)

Iason Karakostas received the Diploma of Electrical Engineering in 2017 and is currently a PhD Student at the Artificial Intelligence and Information Analysis Laboratory (AIIA) in the Department of Informatics of AUTH. He has co-authored 2 papers in international conferences and has participated in a European Union-funded R&D project. His current research interests include machine learning, computer vision, autonomous robotics and intelligent cinematography. (Lecture: 2D target tracking. Programming workshop: OpenCV programming for object tracking.)

Charalampos Symeonidis obtained his B.Sc. in Informatics in 2016 from the Aristotle University of Thessaloniki and is currently pursuing his Ph.D. in Computational Intelligence at the Informatics Department of the same university. His research is focused on machine learning, computer vision and intelligent cinematography, and has been influenced by the needs of the H2020 project MULTIDRONE. (Lecture: Drone mission simulations.)

Pantelis I. Kaplanoglou is a machine learning engineer with an M.Sc. in Web Intelligence and a Ph.D. candidate, with 20+ years of professional experience in the computer industry. He has worked as a team leader in a software R&D department and as an international software development associate for LIDL. He has acquired knowledge of various domains and subjects, spanning from IT security to computer vision, developing in diverse languages from x86 assembly to ASP.NET/C#. (Lecture: Deep neural networks – Convolutional NNs. Programming workshop: Object-oriented TensorFlow in Google Colab.)

WHAT IF I HAVE A QUESTION?

Contact

SPONSORS


If you want to be our sponsor, send us an email: koroniioanna@csd.auth.gr

SAMPLE COURSE MATERIAL. RELATED LITERATURE

Target Detection (pdf)

1) Multidrone Project (MULTIple DRONE platform for media production), funded by the EU (2017-19), within the scope of the H2020 framework, https://multidrone.eu/

2) Semi-Supervised Subclass Support Vector Data Description for image and video classification, V. Mygdalis, A. Iosifidis, A. Tefas, I. Pitas, Neurocomputing, vol. 291, pp. 237-241, 2018

3) Face detection Hindering, P. Chriskos, J. Munro, V. Mygdalis, I. Pitas, Proceedings of the IEEE Global Conference on Signal and Information Processing (GLOBALSIP), Montreal, Canada, 2017

4) 2D visual tracking for sports UAV cinematography applications, O. Zachariadis, V. Mygdalis, I. Mademlis, I. Pitas, Proceedings of the IEEE Global Conference on Signal and Information Processing (GLOBALSIP), Montreal, Canada, 2017

5) Neurons With Paraboloid Decision Boundaries for Improved Neural Network Classification Performance, N. Tsapanos, A. Tefas, N. Nikolaidis and I. Pitas, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), vol. 30, issue 1, pp. 284-294, 2019

6) Convolutional Neural Networks for Visual Information Analysis with Limited Computing Resources, P. Nousi, E. Patsiouras, A. Tefas, I. Pitas, Proceedings of the IEEE International Conference on Image Processing (ICIP), Athens, Greece, 2018

7) Overview of drone cinematography for sports filming, I. Mademlis, V. Mygdalis, C. Raptopoulou, N. Nikolaidis, N. Heise, T. Koch, T. Wagner, A. Messina, F. Negro, S. Metta, I. Pitas, European Conference on Visual Media Production (CVMP), London, UK, 2017

8) Challenges in Autonomous UAV cinematography: An overview, I. Mademlis, V. Mygdalis, N. Nikolaidis, I. Pitas, Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), San Diego, USA, 2018

9) Learning Multi-graph regularization for SVM classification, V. Mygdalis, A. Tefas, I. Pitas, Proceedings of the IEEE International Conference on Image Processing (ICIP), Athens, Greece, 2018

10) UAV Cinematography Constraints Imposed by Visual Target Trackers, I. Karakostas, I. Mademlis, N. Nikolaidis, I. Pitas, Proceedings of the IEEE International Conference on Image Processing (ICIP), Athens, Greece, 2018

11) Efficient camera control using 2D visual information for unmanned aerial vehicle-based cinematography, N. Passalis, A. Tefas, I. Pitas, Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 2018

12) The future of media production through multi-drones’ eyes, A. Messina, S. Metta, M. Montagnuolo, F. Negro, V. Mygdalis, I. Pitas, J. Capitán, A. Torres, S. Boyle, D. Bull, F. Zhang, International Broadcasting Convention (IBC), Amsterdam, Netherlands, 2018

13) Quality Preserving Face De-Identification Against Deep CNNs, P. Chriskos, R. Zhelev, V. Mygdalis, I. Pitas, Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, 2018

14) Improving Face Pose Estimation using Long-Term Temporal Averaging for Stochastic Optimization, N. Passalis, A. Tefas, Proceedings of the International Conference on Engineering Applications of Neural Networks (EANN), Athens, Greece, 2017

15) Discriminatively Trained Autoencoders for Fast and Accurate Face Recognition, P. Nousi, A. Tefas, Proceedings of the International Conference on Engineering Applications of Neural Networks (EANN), Athens, Greece, 2017

16) Concept Detection and Face Pose Estimation Using Lightweight Convolutional Neural Networks for Steering Drone Video Shooting, N. Passalis, A. Tefas, Proceedings of the European Signal Processing Conference (EUSIPCO), Kos, Greece, 2017

17) Human Crowd Detection for Drone Flight Safety Using Convolutional Neural Networks, M. Tzelepi, A. Tefas, Proceedings of the European Signal Processing Conference (EUSIPCO), Kos, Greece, 2017

18) Lightweight Two-Stream Convolutional Face Detection, D. Triantafyllidou, P. Nousi, A. Tefas, Proceedings of the European Signal Processing Conference (EUSIPCO), Kos, Greece, 2017

19) Fast Deep Convolutional Face Detection in the Wild Exploiting Hard Sample Mining, D. Triantafyllidou, P. Nousi, A. Tefas, Big Data Research, Elsevier, vol. 11, pp. 65-76, 2018

20) Learning Bag-of-Features Pooling for Deep Convolutional Neural Networks, N. Passalis, A. Tefas, Proceedings of the International Conference on Computer Vision (ICCV), Venice, Italy, 2017

21) Self-Supervised Auto-encoders for Clustering and Classification, P. Nousi, A. Tefas, Evolving Systems Journal, Springer, pp. 1-14, 2018

22) Unsupervised Knowledge Transfer using Similarity Embeddings, N. Passalis, A. Tefas, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), vol. 30, issue 3, pp. 946-950, 2018

23) Recurrent Attention for Deep Neural Object Detection, G. Symeonidis, A. Tefas, Hellenic Conference on Artificial Intelligence (SETN), Patras, Greece, 2018

24) Neural Network Knowledge Transfer using Unsupervised Similarity Matching, N. Passalis, A. Tefas, Proceedings of the International Conference on Pattern Recognition (ICPR), Beijing, China, 2018

25) Deep reinforcement learning for frontal view person shooting using drones, N. Passalis, A. Tefas, Proceedings of the IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), Rhodes, Greece, 2018

26) A Multidrone Approach for Autonomous Cinematography Planning, A. Torres-Gonzalez, J. Capitan, R. Cunha, A. Ollero and I. Mademlis, Proceedings of the Iberian Robotics Conference (ROBOT), 2017

27) Decentralized safe conflict resolution for multiple robots in dense scenarios, E. Ferrera, J. Capitán, A.R. Castaño and P.J. Marrón, Robotics and Autonomous Systems, vol. 91, pp. 179-193, 2017

28) Cooperative perimeter surveillance using Bluetooth framework under communication constraints, J.M. Aguilar, P. R. Soria, B.C. Arrue and A. Ollero, Proceedings of the Iberian Robotics Conference (ROBOT), 2017

29) Applying Frontier Cells Based Exploration and Lazy Theta* Path Planning over Single Grid-Based World Representation for Autonomous Inspection of Large 3D Structures with an UAS, M. Faria, I. Maza and A. Viguria, Journal of Intelligent & Robotic Systems, Springer, accepted for publication

30) Discriminative Optimization: Theory and Applications to Computer Vision Problems, J. Vongkulbhisal, F. De la Torre, and J. P. Costeira, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 41, issue 4, pp. 829-843, 2018

31) Integrated Visual Servoing Solution to Quadrotor Stabilization and Attitude Estimation Using a Pan and Tilt Camera, D. Cabecinhas, S. Brás, R. Cunha, C. Silvestre, P. Oliveira, IEEE Transactions on Control Systems Technology, vol. 27, issue 1, pp. 14-29, 2017

32) UAL: An Abstraction Layer for Unmanned Aerial Vehicles, F. Real, A. Torres-González, P. Ramón-Soria, J. Capitán and A. Ollero, Proceedings of the International Symposium on Aerial Robotics (ISAR), Philadelphia, PA, USA, 2018

33) Inverse Composition Discriminative Optimization for Point Cloud Registration, J. Vongkulbhisal, B. I. Ugalde, F. De la Torre, J. P. Costeira, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA, 2018

34) De-identifying facial images using singular value decomposition and projections, P. Chriskos, O. Zoidi, A. Tefas and I. Pitas, Multimedia Tools and Applications, Springer, vol. 76, issue 3, pp. 3435-3468, 2017

35) Cooperative Unmanned Aerial Systems for Fire Detection, Monitoring and Extinguishing, L. Merino, J.R. Martinez-de Dios, A. Ollero, in “Handbook of Unmanned Aerial Vehicles”, ISBN 978-90-481-9706-4, Springer, 2015

36) Shot Type Feasibility in Autonomous UAV Cinematography, I. Karakostas, I. Mademlis, N. Nikolaidis, I. Pitas, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 2019

37) High-Level Multiple-UAV Cinematography Tools for Covering Outdoor Events, I. Mademlis, V. Mygdalis, N. Nikolaidis, M. Montagnuolo, F. Negro, A. Messina, I. Pitas, IEEE Transactions on Broadcasting, accepted for publication, 2019

38) Autonomous Unmanned Aerial Vehicles Filming in Dynamic Unstructured Outdoor Environments, I. Mademlis, N. Nikolaidis, A. Tefas, I. Pitas, T. Wagner, A. Messina, IEEE Signal Processing Magazine, vol. 36, issue 1, pp. 147-153, 2019

39) Deep Convolutional Feature Histograms for Visual Object Tracking, P. Nousi, A. Tefas, I. Pitas, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 2019

40) Semantic Map Annotation Through UAV Video Analysis Using Deep Learning Models in ROS, E. Kakaletsis, M. Tzelepi, P.I. Kaplanoglou, C. Symeonidis, N. Nikolaidis, A. Tefas, I. Pitas, Proceedings of the International Conference on Multimedia Modeling (MMM), Thessaloniki, Greece, 2019

41) Exploiting multiplex data relationships in Support Vector Machines, V. Mygdalis, A. Tefas, I. Pitas, Pattern Recognition, Elsevier, vol. 85, pp. 70-77, 2019

42) Deep reinforcement learning for controlling frontal person close-up shooting, N. Passalis, A. Tefas, Neurocomputing, Elsevier, vol. 335, pp. 37-47, 2019

43) Graph Embedded Convolutional Neural Networks in Human Crowd Detection for Drone Flight Safety, M. Tzelepi, A. Tefas, IEEE Transactions on Emerging Topics in Computational Intelligence, accepted for publication, 2019

44) Training Lightweight Deep Convolutional Neural Networks Using Bag-of-Features Pooling, N. Passalis, A. Tefas, IEEE Transactions on Neural Networks and Learning Systems, accepted for publication, 2018

USEFUL LINKS

Prof. Ioannis Pitas: https://scholar.google.gr/citations?user=lWmGADwAAAAJ&hl=el

Multidrone project:  https://multidrone.eu/

Icarus Research Team: http://icarus.csd.auth.gr/

Laboratory of Artificial Intelligence and Information Analysis: http://www.aiia.csd.auth.gr/

Department of Informatics, Aristotle University of Thessaloniki (AUTH): http://www.csd.auth.gr/en/

Thessaloniki: https://wikitravel.org/en/Thessaloniki