Open positions
AI-based adaptive human-computer interfaces

We consider in this project the design of adaptive user interfaces, following an AI-based recommendation approach integrating sequential and active learning techniques.



In general, AI-based recommender systems can benefit significantly from sequential learning (such as multi-armed bandits) or reinforcement learning [2], and from active learning techniques [5], when user interests / profiles are unknown, hard to generalize, or highly dynamic. Such conditions arise in particular in cold-start scenarios, or when addressing a diverse and fast-changing user base. Online and adaptive recommendation algorithms must therefore (re)learn preferences continuously over time, striking a balance between exploiting popular recommendation options and exploring new ones that may improve overall user satisfaction.
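The exploration / exploitation balance described above can be sketched with a minimal epsilon-greedy bandit, a standard baseline among the sequential techniques cited. The recommendation options and their reward probabilities below are purely illustrative, not tied to any particular interface:

```python
import random

def epsilon_greedy(reward_probs, budget=10000, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy bandit: each arm stands for a recommendation
    option with an unknown acceptance probability, (re)learned online."""
    rng = random.Random(seed)
    n = len(reward_probs)
    counts = [0] * n         # how often each option was shown
    estimates = [0.0] * n    # running estimate of each option's reward
    for _ in range(budget):
        if rng.random() < epsilon:                        # explore a random option
            arm = rng.randrange(n)
        else:                                             # exploit the best estimate so far
            arm = max(range(n), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

# three hypothetical interface options with unknown acceptance rates
counts, estimates = epsilon_greedy([0.2, 0.5, 0.8])
```

After enough interactions the best option dominates the pull counts, while the residual exploration keeps the estimates of the other options up to date; UCB or Thompson sampling are the more principled variants studied in the bandit literature cited above.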

When the task at hand is the design of adapted user interfaces (AUIs) [1] for atypical interface users (such as users with certain disabilities), these techniques can gather the feedback necessary to understand the key profile features of the user and adapt to them. In this sense, AUI design using AI techniques becomes a process of incremental recommendation and data completion, at the intersection of recommender systems and active learning.

Data selection methods, such as active learning and core-set selection, are well-established approaches for improving the data efficiency of predictive models [5]. In short, they assist the learning procedure by prioritizing the selection of relevant unlabeled samples for human labeling, with the goal of maximizing model performance at minimal labeling cost.
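As a minimal illustration of this idea, uncertainty sampling (one of the simplest active-learning criteria) ranks unlabeled samples by how unsure the current model is about them; the predicted probabilities below are invented for the example:

```python
def uncertainty_sampling(pool_probs, k=2):
    """Rank unlabeled samples by uncertainty (distance of the predicted
    positive-class probability from 0.5) and return the k samples whose
    labels the model is least sure about -- those go to the human labeler."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: abs(pool_probs[i] - 0.5))
    return ranked[:k]

# predicted P(positive) for five unlabeled samples
probs = [0.95, 0.52, 0.10, 0.45, 0.70]
print(uncertainty_sampling(probs))  # → [1, 3]: the two closest to 0.5
```

Confident samples (0.95, 0.10) are skipped; labeling effort concentrates where it changes the model most.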

More precisely, on the one hand, the learning / recommendation process must request user feedback on interface design alternatives in a principled and optimal way, reminiscent of conversational systems [3] and active learning in recommender systems [4]. On the other hand, since feedback gathering happens in a highly constrained context, interactive mechanisms that collect the most relevant and complementary data at the right moment require approaches that are online by design, i.e., able to integrate the optimization objective with the task of feedback gathering.

Participants: Christophe Jouffrais (DR CNRS, director of IPAL), Bogdan Cautis (IPAL, Professeur des Universités – U. Paris Saclay).

Contact: (replace _AT_ by @)


[1] E. Brulé, G. Bailly, A. M. Brock, F. Valentin, G. Denis, C. Jouffrais. MapSense: Multi-Sensory Interactive Maps for Children Living with Visual Impairments. CHI 2016.

[2] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. A Bradford Book; MIT Press, 1998.

[3] Sun et al. Conversational Recommender System. In ACM SIGIR 2018.

[4] Rubens et al. Active Learning in Recommender Systems. In Recommender Systems Handbook, 2015.

[5] G. Luo. A Review of Automatic Selection Methods for Machine Learning Algorithms and Hyper-Parameter Values. NetMAHIB, 5(1):18, 2016

Internships and PhD positions in the framework of the Descartes Program

All the following positions are open to interns and can be continued as a PhD. All these positions will take place in the exciting scientific environment of the Descartes collaborative program. See here for more on Descartes: Descartes presentation

Starting date

The starting date will be in early 2022. All the master internships can lead to a PhD in France or Singapore. Interns who aim to do a PhD will be preferred.


Send an email to the corresponding supervisors with the following documents: 

  • Complete CV (with possible publications)
  • Letter of motivation
  • Transcripts of records since L1 or Prepa
  • Report of a previous internship

You may apply to more than one proposal. In this case, please send the documents to all supervisors and mention it in your message.

Proposal #1 - Understanding the environment from drones with multiple sensors

Supervisors: Lai-Xing Ng (Contact: and Benoit Cottereau (Contact:

Abstract: Drones, or machines in general, carry a multitude of sensors that provide information about their surroundings. Existing work on drone perception often relies on image-based sensors, such as RGB cameras and depth cameras. Image-based sensors are susceptible to motion blur as well as variation in illumination, and thus do not work well when the drone is fast-moving. For teleoperated drones, the human operator can only rely on the live video feed of a single camera, and the restricted field of view limits the operator's understanding of the drone's environment. In this project, the aim is to utilize the available sensors and give a human operator the perspective of being at the drone's location. The selected candidate will work on developing novel approaches for how distributed sensors can communicate, collaborate (including changing what they sense) and process signals in an energy-efficient way to extract meaningful information from the scene, in response to existing knowledge models (long-term memory) and real-time interaction and decisions from humans, and send the information back to humans for visualization. Research tasks include:

  • Process different types of sensory signals (e.g. data collected from event-based cameras, synchronous cameras, and other sensors) for scene understanding (e.g. object detection and localization) using neuromorphic systems based on artificial neural networks and embedded on a single or multiple drones.
  • Extract meaningful information (3D layout of the scene, objects of interest, threats, etc.) and combine with existing knowledge models.
  • Provide meaningful multimodal feedback to the user based on a wearable device (e.g. smart glasses) that should provide remote (augmented) perception.

Expected skills: The candidate should be willing to work in an international environment involving Singapore and France, have a good level of English and very good programming skills (in Matlab, Python or C++).

Proposal #2 - Drone Piloting from Different Perspectives

Supervisors: Shen ZHAO ( and Christophe JOUFFRAIS (

Abstract: New technologies such as mixed reality, natural or wearable interfaces, as well as Artificial Intelligence are beginning to take hold in production facilities. They promise performance gains but can also improve safety and comfort in interactions between human operators and semi-autonomous systems. For these technologies to be accepted and deployed, human factors must be considered.

In this internship, we will design and evaluate a multisensory interface for drone piloting from different perspectives. The aim of the project will be to define the characteristics of a multimodal interface for the control of semi-autonomous drones. Behavioral experiments and the analysis of the collected data will allow the selection of the most suitable parameters for both the design of the interfaces and the evaluation of the human-system interaction. 

Expected skills: The candidate must have skills in human-computer interaction, cognitive science and/or human factors. They should be willing to work in an international environment involving Singapore and France, have a good level of English and good programming skills.

Proposal #3 - Future Video Prediction using Generative Models

Supervisors: Ying SUN ( and Christophe JOUFFRAIS (

Abstract: Learning to predict the future is an important research problem in machine learning and artificial intelligence. In this project, we focus on the task of predicting future frames in videos, i.e., video prediction, given a sequence of previous frames. Recently, deep-learning-based methods have emerged as a promising approach for video prediction, especially generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs). VAEs can generate various plausible outcomes; however, the predicted frames are blurry and of low quality. While GAN-based models tend to produce higher-quality future frames, adversarial training is unstable and may lead to mode collapse. Therefore, we will explore state-of-the-art generative models for video prediction and develop new strategies to address the limitations of existing methods.

Expected skills: The candidate should be willing to work in an international environment involving Singapore and France, have a good level of English and very good programming skills (in Matlab, Python or C++).

Proposal #4 - Human-in-the-Loop Learning

Supervisors: Lai-Xing Ng (Contact:, Wei-Tsang Ooi (Contact:  and Axel Carlier (Contact:

Abstract: While deep learning has brought important advances in many domains, large labeled datasets are required to ensure good model performance. Several models of collaboration between humans and machine learning have been proposed to overcome this limitation and reduce the need for labeled data. In active learning, the model explicitly chooses data samples for humans to label, which are then fed into the training process in an online fashion.

Unlike learning from a large number of pre-labeled data samples, human inputs in human-in-the-loop learning have a larger impact, or even overriding effects, on machine decisions. Such human-AI collaboration models make it possible for malicious humans to influence the outcome of machine learning models.

In this internship, we plan to build upon existing criteria, such as expected model output change (EMOC), to study the possible trade-off between the impact of (possibly malicious) input and how fast a model can learn from humans.
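To make the concern concrete, here is a toy sketch (purely synthetic data, not from any of the referenced works) of an online learner whose decision at a test point is flipped by a few malicious labels, precisely because each human input is folded into training immediately:

```python
def online_perceptron(stream, lr=1.0):
    """1-D perceptron trained online: each (x, y) pair is a sample with a
    human-provided label in {-1, +1}, folded in as soon as it arrives."""
    w, b = 0.0, 0.0
    for x, y in stream:
        if y * (w * x + b) <= 0:   # mistake on this sample: update immediately
            w += lr * y * x
            b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if w * x + b > 0 else -1

# honest stream: negatives below zero, positives above zero
honest = [(-2, -1), (-1, -1), (1, 1), (2, 1)]
w, b = online_perceptron(honest)

# same stream plus a few malicious labels near the boundary:
# the prediction at x = 1 flips from +1 to -1
poisoned = honest + [(0.5, -1), (0.5, -1), (0.5, -1)]
w2, b2 = online_perceptron(poisoned)
```

Criteria such as EMOC quantify exactly this kind of sensitivity: how much one more labeled sample is expected to change the model's outputs.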

Proposal #5 - Interactive Explainable AI

Supervisors: Christophe Hurter (, Brian Lim (, and Jamie Ng (

Abstract: With AI capabilities, drones can be used to automatically inspect airplanes and buildings to improve the safety of these structures. However, it ultimately depends on the human operator to verify the severity of defects. In this project, we will develop interactive methods to make AI explainable for drone operators and inspectors, so that they can interpret and verify the image predictions. This research will investigate how to support user understanding of AI decisions using interactive visualization and explainable AI.

We are looking for talented candidates to join our multidisciplinary team. The project looks into how robotics, computer vision, artificial intelligence, virtual/augmented reality and human-computer interaction can lead to effective human-AI collaboration. We are looking for candidates with a passion for research and development, as well as for translating R&D technology into industry applications.

Expected Skills:

  • Qualification/Field of Study: Bachelors or Masters
  • Technical skills: Strong programming skills (e.g., Python, Javascript, C++).
  • Experience and Knowledge: Computer Vision and Machine Learning, Virtual Reality, Human-Computer Interaction, etc.
  • Aptitude: Critical thinker, self-motivated, can work both independently and in teams, with good analytical and communication skills.

PhD position - Can GPS guidance lead to cognitive mapping?
Eye-gaze patterns during teaching/learning

Study of the human factors (eye gaze) in task learning. The aim is to identify the eye-gaze patterns of the student during task learning (e.g., when mistakes are likely to happen, when assistance is needed, etc.), which may help the design of future Augmented Human technologies.

Contact: C. Jouffrais christophe.jouffrais_AT_ (replace _AT_ by @)


Previous Internship Positions

2020  Master Internship positions

Multimodal feedback in a virtual scene

Designing non-visual multimodal feedback to help with navigation in a virtual scene. The student will help develop audio and tactile feedback to guide a user navigating through a virtual scene. These two modalities will be integrated into a system that attempts to understand user preferences, obtain their feedback for human-in-the-loop reinforcement learning, and evaluate our approach. The task involves helping to prepare a software library that, given a 3D virtual scene and a route, renders orientations and directions to the user using audio (via text-to-speech) and tactile feedback. Both signals will be sent to an earphone and one or two wrist-worn tactile bands with motors that provide spatial cues.

Date: project start between May and Dec 2020

Contact: C. Jouffrais christophe.jouffrais  (@)  and Shen Zhao dcszs  (@)

Easier Scene Understanding with Deep Learning using Context

In this project, the intern will study the problem of scene understanding from a given image using a deep neural network. Current state-of-the-art methods require a complex and deep network and a large amount of training data. We will explore how prior context information about the scene can simplify the problem, and thus reduce the complexity of the network as well as the amount of training data required.

The intern will assist the researcher in experimenting with different neural network models and with how the context information can be integrated into the training and inference phases of the problem.

Date: project start between May and Dec 2020

Contact: Axel Carlier Axel.Carlier (@) and Wei Tsang Ooi ooiwt (@)

How To Fool a Deep Neural Net with another Deep Neural Net

Deep neural networks have proved successful in computer vision and natural language processing. Nevertheless, the research literature has shown that they can be vulnerable: changing just a few pixels of an image of a dog may cause the model to make a wrong prediction. Such a mutated image is called an adversarial sample for the neural network. This perturbation-based approach looks for adversarial samples from a low-level, detailed perspective. In this research, we investigate a new adversarial sample generation technique based on GANs (Generative Adversarial Networks). We are exploring how to use GANs to generate adversarial samples from a higher-level perspective. More specifically, we aim to generate a face of Bob that has never appeared in the training set but is mistakenly classified as Alice, thereby fooling a face recognition system.
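The perturbation-based approach mentioned above can be illustrated on a toy linear classifier (the weights and "pixels" are invented for illustration; attacks such as FGSM apply the same sign-of-the-gradient step to deep networks, where the gradient is obtained by backpropagation):

```python
def linear_score(w, b, x):
    """Score of a toy linear 'dog vs. not-dog' classifier: positive means 'dog'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_like(w, x, eps):
    """Perturb each input 'pixel' by eps against the correct class: for a
    linear model the gradient of the score w.r.t. x is just w, so stepping
    along -sign(w) lowers the 'dog' score as fast as possible per pixel."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

w, b = [0.5, -0.25, 1.0], 0.0
x = [1.0, 0.2, 0.4]              # classified as 'dog' (score > 0)
x_adv = fgsm_like(w, x, eps=0.5)  # small per-pixel change flips the decision
```

GAN-based generation, in contrast, searches the space of realistic images directly rather than nudging individual pixels.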

Date: project start between May and Dec 2020

Contact: Blaise Genest blaise.genest (@) and Jin Song Dong dcsdjs (@)

Neural Network for Differential Equations

Differential equations are one of the main tools for the modelling, simulation and analysis of complex systems in most domains of science and engineering. Neural networks have recently been shown to solve differential equations effectively and efficiently, and several possible approaches are still under investigation. In this project, the researcher will implement and evaluate several existing and new approaches to represent and solve systems of differential equations with neural networks. The student researcher may also be involved in the development of applications of this work to hydrology, meteorology and climate change.
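A deliberately minimal sketch of one such approach, collocation with a trial solution that builds in the initial condition (in the spirit of physics-informed networks): the test equation y' = -y with y(0) = 1, the tiny network, and the crude numerical-gradient training below are all illustrative choices, not a prescription.

```python
import math, random

def net(params, x, H=4):
    """Tiny one-hidden-layer tanh network N(x) with 3*H + 1 parameters."""
    out = params[3 * H]                       # output bias
    for j in range(H):
        w1, b1, w2 = params[j], params[H + j], params[2 * H + j]
        out += w2 * math.tanh(w1 * x + b1)
    return out

def trial(params, x):
    """Trial solution y(x) = 1 + x * N(x): enforces y(0) = 1 by construction."""
    return 1.0 + x * net(params, x)

def loss(params, pts, h=1e-4):
    """Mean squared collocation residual of y' = -y (y' via central differences)."""
    total = 0.0
    for x in pts:
        dy = (trial(params, x + h) - trial(params, x - h)) / (2 * h)
        total += (dy + trial(params, x)) ** 2
    return total / len(pts)

rng = random.Random(0)
params = [rng.uniform(-0.5, 0.5) for _ in range(13)]
pts = [i / 10 for i in range(11)]             # collocation points on [0, 1]
loss0 = loss(params, pts)
lr, eps = 0.1, 1e-5
for _ in range(300):                          # crude numerical-gradient descent
    grads = []
    for k in range(len(params)):
        p, m = list(params), list(params)
        p[k] += eps; m[k] -= eps
        grads.append((loss(p, pts) - loss(m, pts)) / (2 * eps))
    params = [p - lr * g for p, g in zip(params, grads)]
final_loss = loss(params, pts)

# compare against the exact solution y(x) = exp(-x)
err = max(abs(trial(params, x) - math.exp(-x)) for x in pts)
```

In practice the derivatives and the training gradients are both obtained by automatic differentiation, but the structure of the loss is the same: penalize the equation's residual at the collocation points.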

Date: project start between May and Dec 2020

Contact: Talel Abdessalem Talel.Abdessalem (@) and Stephane Bressan steph (@)


Previous positions:

      2019 Internships:

      • Towards ageing-well through trusted intelligent systems based on AI, IoT and Formal Analysis
      • Android development of urban mobility app using Fitbit and environment APIs (app implementation, analysis, reasoning)
      • Front and back-end dev and data analysis Node.js (Machine learning, IoT for health, on site validation)
      • Software IoT architecture (refactoring, optimization of platform to enhance large-scale deployments)
      • Web-based visualisation of GeoJSON (interactions in WebGIS environment)
      • Continuous and nonintrusive vital sign monitoring using optical fibre sleep mat (machine learning, sleep cycles data analysis) in collaboration with Khoo Teck Puat Hospital (KTPH) and Singapore University of Technology and Design (SUTD)

      Get to know more:
      Get to know more:
      Scientists worldwide are welcome to join our challenges! IPAL provides great opportunities to researchers and students of all nationalities who desire to blossom in an excellent international research laboratory. We are committed to providing a unique platform for candidates to begin research and develop their skills in a top-ranked university, fully supported by distinguished and world-renowned researchers from Singapore and France.
      CNRS and Universities mobility: If you are already a researcher working for the CNRS, we will be very honored to welcome you in our laboratory. Please have a look at the CNRS website for the procedure, do get in touch with us to prepare an ambitious joint project able to boost your career, and do not hesitate to contact us for further assistance: CNRS Mobility website
      Singapore, a high-tech and world-class scientific environment: In a very competitive scientific environment, surrounded by dynamic and talented scientists and supported by one of the best basic and translational research infrastructures worldwide, working in Singapore is a valuable experience. In partnership with the National University of Singapore and the Agency for Science, Technology and Research institutes, world-class scientists from all major scientific centres in the world exchange and share with us all year long, generating a prolific scientific osmosis.
      Open PhD Positions and Regular PhD applications: In order to work with IPAL, you need to get in touch with one of our staff during your first year. Please look at our research goals, axes, projects and publications, and you will quickly understand which competencies we will always welcome. Don't hesitate to contact us if needed. Besides the open position(s) above, a regular submission can be done via the graduates portal of NUS, School of Computing, Computer Science Dpt. or NUS, Faculty of Engineering, depending on your profile. See also the PhD Programme at NUS School of Computing. Another possibility to get an NUS degree at IPAL is to go for the SINGA – Singapore International Graduate Award programme or the ARAP – A*STAR Research Attachment Programme, both funded by A*STAR, with graduation through NUS in the case of IPAL. Last but not least, regular applications can be done via the EDITE doctoral school (Informatics, Telecommunications and Electronics) of the University Pierre and Marie Curie, Paris, France, or the Doctoral School for Computer Sciences, Applied and Pure Mathematics (MSTII) of the University Joseph Fourier, Grenoble 1, France, for a French PhD while working at IPAL in Singapore or in a collaborative way with highly reputable CNRS labs in France. Please get in touch with us to define your project before application in this case.

      Previous positions:

      2017 Support Team

      2017 Master Internship Proposals

      Internships hosted by our partners on joint projects:

      2017 Post-doc fellowship @BII:

      2016 Master Internship Proposals

      Internships hosted by our partners on joint projects:

      2016 PhD positions

      2014 PhD positions

      2015 Master internship positions

      Internships hosted by our partners on joint projects:

      2014 Master internship positions