If you are interested in joining us, don’t hesitate to send your application to this mail address: christophe.jouffrais@cnrs.fr.
PhD offer on Human Augmentation and/or Assistive Technologies for the People with Visual Impairments
Supervisors: Suranga Nanayakkara (NUS) & Christophe Jouffrais (CNRS). Conditions: Monthly stipend (no tax & top-up possible): 2700 SGD net before Qualifying...
Considering joining IPAL for a PhD, but wondering whether NUS is a good university?
The National University of Singapore (NUS) is globally recognized as one of the leading universities in the world, attracting students and faculty from diverse backgrounds and maintaining a reputation for excellence in education, research, and innovation. Here are...
Considering joining IPAL for a PhD, but wondering about Singapore?
Singapore is a vibrant and dynamic city that captivates visitors and residents alike with its unique blend of cultures, advanced infrastructure, and a high standard of living. Here are...
Doing a PhD with IPAL@NUS
The National University of Singapore (NUS) is now accepting applications for fully-subsidized PhD positions for 2025, 2026 and 2027 intake. This is an excellent opportunity for aspiring researchers to join one of the world's leading universities and benefit from...
Post-doc offer on Human Augmentation
New technologies such as mixed reality, natural or wearable interfaces, or Artificial Intelligence are beginning to take hold in production facilities. They promise performance gains but can also improve safety and comfort in interactions between human operators and semi-autonomous systems. For these technologies to be accepted and deployed, human functioning must be considered.
In this project, the postdoc will explore the topics of cognitive load, multimodal representation of information, and the design and evaluation of multisensory interfaces for controlling intelligent and autonomous agents. Urban air mobility is one possible application context, in which the project will define the characteristics of a multimodal mixed reality interface for the control of semi-autonomous drones.
Applicants must minimally have a PhD in Cognitive Science, Computing, Engineering or a related discipline, research experience in human-computer interaction, and a strong publication track record.
Further information & Contact
The salary for the position ranges from 70K to 85K SGD per year, depending on suitability and experience
Workplace: CREATE Tower, Create Way #08-01, Singapore 138602
Interested applicants should email a cover letter to Christophe Jouffrais (christophe.jouffrais@cnrs.fr)
Research fellow to lead and coordinate research efforts on Human-AI collaboration
DesCartes Program
DesCartes is a 5-year program that aims to develop hybrid artificial intelligence systems that combine learning from data, prior knowledge, and logical reasoning for smart city applications, such as digital energy, infrastructure monitoring, and air traffic control. The program brings together 80 permanent researchers from France and Singapore, with the support of large industrial groups, including Thales SG, EDF SG, ESI Group, CETIM Matcor, and ARIA. The research takes place mainly in Singapore, at the premises of CNRS@CREATE on the campus of the National University of Singapore (NUS).
Description
The program has an open position for a postdoctoral research fellow to lead and coordinate research efforts on human-AI collaboration, specifically on how AI can assist humans in decision-making and how humans can complement AI in critical decisions. The research fellow will work with researchers from the French National Centre for Scientific Research (CNRS), the Agency for Science, Technology, and Research (A*STAR), and the National University of Singapore (NUS) to develop state-of-the-art methods in human-in-the-loop machine learning, human-AI interaction, and human-centric AI.
Experience & Qualifications
Applicants must minimally have a PhD in Computing, Engineering or a related discipline, research experience in AI and/or human-computer interaction, and a strong publication track record.
Further information & Contact
The salary for the position ranges from 70K to 85K SGD per year, depending on suitability and experience
Workplace: CREATE Tower, Create Way #08-01, Singapore 138602
Interested applicants should email a cover letter and CV to the lead PIs of Work Package 4 of DesCartes, Christophe Jouffrais (christophe.jouffrais@cnrs.fr) or Wei Tsang Ooi (ooiwt@comp.nus.edu.sg)
Research Assistant in Drone Companion for People with Visual Impairments
Job Description
IPAL invites applications for the position of Research Assistant in the Department of Computer Science, School of Computing (SoC). SoC is strongly committed to research excellence in all its dimensions: Searching for fundamental results and insights, developing novel computational solutions to a wide range of applications, building large-scale experimental systems, and improving the well-being of society. We seek to play an active role both internationally and locally in the core and emerging areas of Computer Science and Information Systems.
The Research Assistant will be responsible for working closely with the Principal Investigator (A/Prof Ooi Wei Tsang) and co-Principal Investigators (A/Prof Suranga Nanayakkara & Professor Christophe Jouffrais) on developing a drone companion for People with Visual Impairments. We envision a lightweight, personal drone, equipped with state-of-the-art computer vision algorithms, that serves as a highly customizable assistive device for a person with visual impairments (PVI).
Job responsibilities would include the development of drone applications, conducting user studies, and documenting the research.
Qualifications
The potential candidate should possess experience or interest in the following areas:
- At least a bachelor’s degree in computer science, cognitive science or a related field, with a focus on software development
- Hands-on experience in full stack development
- Experience in areas such as Human-Computer Interfaces, User-Centred Design, and Computer Vision would be preferred
- Excellent communication skills (written and verbal) and interpersonal skills
How to Apply:
Email a CV and a short statement of interest to Prof Christophe Jouffrais (Christophe.Jouffrais@cnrs.fr) and A/Prof Ooi Wei Tsang (ooiwt@comp.nus.edu.sg)
Postdoctoral fellow in computational biology, bioinformatician, internship opportunities in Research Data Integration Group, Biomedical Datahub Division, Bioinformatics Institute (A*STAR)
Supervisor: Dr. Woo Xing Yi, Senior Principal Investigator and Head of Research Data Integration
Contact: woo_xing_yi@bii.a-star.edu.sg
About us
Data science is an important component of biomedical and translational research, where data of multiple modalities are being constantly generated at unprecedented scale. The Research Data Integration group in the Biomedical Datahub Division of the Bioinformatics Institute (BII), A*STAR, aims to bridge the complexity of computational biology and data science with the needs of biologists and clinicians to drive biological discoveries and predict translational outcomes. One of our immediate challenges is to integrate and analyze multi-omics, imaging and clinical data generated by biomedical institutes, hospitals and national initiatives to improve the usability and interpretability of large-scale multimodal datasets of cancer and other diseases. We seek motivated individuals to join us to push the potential of biomedical data in truly benefitting patients.
Project description
We work closely with clinicians to explore personalized treatment options for cancer patients using multi-omic and spatial profiling, and therapeutic screening in patient-derived models. Data of multiple modalities are generated in the process, and we are developing systematic workflows to integrate and analyze the data to enable clinical decision-making, predict translational outcomes and drive biological discoveries. This project is looking for candidates to develop computational methods, including big-data analytics and AI/ML approaches, to analyze and integrate the multi-modal data (sequencing, imaging, spatial profiling, treatment response and clinical data) so as to deliver translational outcomes to cancer patients. The candidate will have the opportunity to work in a multi-disciplinary team led by a senior Principal Investigator highly experienced in cancer computational biology and clinician-scientists specializing in oncology. The candidate will thus receive training in both computational biology and translational oncology.
The candidate is expected to work on one or more of the following tasks (but not limited to these), depending on the position, experience, field of study and interests.
1. Develop, implement and benchmark executable workflows for variant (SNP, Indels, SV, CNV) calling from WES/WGS data and transcriptome profiling from RNASeq data.
2. Develop image processing workflows for histology images using AI/ML and computer vision methods.
3. Write scripts to output data in a format that can be integrated with publicly available cancer datasets.
4. Organize and analyze publicly available cancer datasets, including sequencing and drug treatment.
5. Develop visualization tools to visualize results in a meaningful way.
6. Organize all data in a structured manner using relational databases.
7. Develop methods to integrate multi-modal data.
8. Curate cancer treatment, biomarker, and patient clinical data.
9. Develop a cancer variant annotation database.
Requirements
• The candidate should have basic programming skills (e.g. Python, R, RStudio, Jupyter Notebook, RShiny, SQL), except for curation tasks.
• Familiarity with Unix/Linux environment or cloud architecture would be an advantage
• Strong analytical and problem-solving skills.
• Excellent oral and written communication and presentation skills.
• Able to work independently, and as part of a team
Suitable field of study
Bioinformatics, Computational biology, Computer science, Data science and analytics, Statistics and biostatistics, Mathematics, Genetics, Life sciences, Biology, Epidemiology, computer vision, any field of Science and Engineering, Pharmacy, Medicine
A Research Fellow (postdoctoral) position is available immediately in the Computational Digital Pathology Lab (CDPL) of the Bioinformatics Institute (BII), A*STAR, Singapore.
Novel imaging analysis methods and the advancement of Artificial Intelligence are making it feasible to extract quantitative information from Digital Pathology images. New knowledge, skills, and approaches are urgently required to develop novel, efficient and reliable computational tools for emerging challenges in biological and biomedical studies. Our CDPL is dedicated to the development of new solutions in this field. Currently, we are expanding our team and seeking enthusiastic scientists to join us!
For a research overview of our unit, please refer to https://www.a-star.edu.sg/bii/research/ciid/cdpl
General requirements:
• Self-motivated scientist (Ph.D. graduate) pursuing a scientific career; independent and passionate about biological image processing projects;
• Good team player. Able to undertake independent research projects under the direction of the PI together with other team members;
• Hold a Ph.D. in computer vision, image processing, machine vision, or another relevant field;
• Good general knowledge and concept of science and engineering;
• Excellent scientific/technical writing skills and communication capability;
• Prior experience in working with image analysis projects (Industrial or academic).
Specific technical requirements:
• Excellent experience, knowledge, and skills in one or two of the following programming languages: Matlab, C/C++, Java or Python;
• Excellent knowledge and skills in digital image processing;
• Deep understanding of AI, machine learning and data science, with hands-on skills and experience;
• Skills in numerical computation algorithms; strong background in mathematical theory;
• Rigorous and logical thinking about scientific problems;
• General understanding of machine learning, pattern recognition, and artificial intelligence is a plus, but not compulsory;
• Basic knowledge of cell biology or pathology is a value-add;
• Willingness to present research achievements at internal/external seminars and conferences
Please email a detailed CV containing a list of publications to Dr. Weimiao YU,
yu_weimiao@bii.a-star.edu.sg
We regret that only shortlisted applicants will be notified.
A Research Associate (M.E. or M.Sc.) position is available immediately at the Computational Digital Pathology Lab (CDPL) of the Bioinformatics Institute (BII), A*STAR, Singapore.
Our CDPL team is dedicated to the development of new computational solutions for digital pathology. Currently, we are expanding our team and seeking enthusiastic young researchers to join us!
For a research overview of our unit, please refer to https://www.a-star.edu.sg/imcb/imcb-research/scientific-programmes/innovative-technologies
General requirements:
• Self-motivated researcher (M.E. or M.Sc. graduate) pursuing a scientific career; independent and passionate about biological/biomedical imaging projects;
• Good team player. Able to undertake independent research projects under the direction of the PI together with other lab members
• Hold a Master’s degree in a relevant field;
• Good general knowledge and concept of science and engineering;
• Excellent scientific/technical writing skills and communication capability
• Prior experience in working with imaging projects, such as IHC, H&E and multiplex immunofluorescence imaging (industrial or academic).
Specific technical requirements:
• Excellent experience, knowledge, and skills in one or two of the following programming languages: Matlab, C/C++, Java or Python;
• Good experience in image quality control;
• Excellent knowledge and skills in medical sample staining and imaging techniques;
• Rigorous and logical thinking about scientific problems;
• General understanding of machine learning, pattern recognition, and artificial intelligence is a plus, but not compulsory;
• Basic knowledge of cell biology or pathology is a value-add;
• Willingness to present research achievements at internal/external seminars and conferences
Please email a detailed CV containing a list of publications to Dr. Weimiao YU, wmyu@imcb.a-star.edu.sg
We regret that only shortlisted applicants will be notified.
Opening for Bioinformaticians in Genomics and Sequence Analysis for the Eisenhaber Group (GIS/BII Singapore)
The Eisenhaber Group, affiliated with the Genome Institute of Singapore (GIS) and the Bioinformatics Institute (BII) of A*STAR, has two openings for researchers/bioinformatics analysts in omics data analysis (with emphasis on protein sequence data) (Equal Opportunity/Affirmative Action/Equal Access Employer). The application of bioinformatics, computational and advanced data analysis techniques and concepts is aimed at discovering biomolecular mechanisms that are relevant for phenotypic effects, disease, natural product research, etc. The successful applicant will also be involved in mentoring interns and students, helping them get started in the field. An introduction to the group’s in-house software suites and local databases, as well as to advanced concepts of protein sequence analysis, will be provided.
Job Duties
• Analyze multi-modal omics datasets using available open software packages and tools and, potentially, your own programs and scripts, with the goal of discovering hints about biomolecular mechanisms. Interpret results in biological terms.
• Reformat data sets so that they can serve as input for different software suites. Map various datasets onto a common basis so that integrated analysis becomes possible. Assess data quality and clean up data sets.
• Analyze in-house data in context with public datasets. The analysis will be carried out jointly with collaborators from experimental life science and/or clinical labs.
• Analyze life science literature (e.g., PMID 30265449)
• Assist with the preparation/writing of scientific manuscripts and grant submissions.
Preferred Qualifications
• A PhD or MSc in Computer Science/Engineering, Bioinformatics, Genetics, Biology, Biostatistics or Computational Biology (an MSc applicant might move towards a PhD)
• Preferably with solid biological background knowledge
• Preferably some ability in programming and/or scripting (e.g. Python/R), some knowledge of standard bioinformatics analysis tools for omics data, pathway analysis, etc. Familiarity with Unix/Linux is desirable.
• New tools appear all the time, so continuous learning is part of the job, though prior experience will be helpful.
If interested, please send your application letter, updated CV, supporting documents/certificates/work examples/thesis/etc. to franke@bii.a-star.edu.sg.
Positions open to both interns and PhDs
All the following positions are open to both interns and PhD candidates. If you want to inquire or apply, please send your CV to the contacts indicated at the end of each proposal.
A regular PhD stipend with IPAL is ~1400 € or ~2700 SGD. Under certain conditions (e.g. nationality), IPAL can allow you to apply for exceptional thesis stipends of 4000 to 5500 SGD. Feel free to inquire.
Previous Internship and PhD Positions
Internships and PhD positions in the framework of the Descartes Program
All the following positions are open to interns and can be continued as a PhD. All these positions will take place in the exciting scientific environment of the DesCartes collaborative program. See here for more on DesCartes: DesCartes presentation
All the master internships can lead to a PhD in France or Singapore. Interns who aim to do a PhD will be preferred.
Application
Send an email to the corresponding supervisors with the following documents:
- Complete CV (with possible publications)
- Letter of motivation
- Transcripts of records since the first year of undergraduate studies (L1) or preparatory classes (prépa)
- Report of a previous internship
You may apply to more than one proposal. In this case, please send the documents to all supervisors and mention it in your message.
Proposal #1 - Understanding the environment from drones with multiple sensors (closed)
Supervisors: Lai-Xing Ng (Contact: ng_lai_xing@i2r.a-star.edu.sg) and Benoit Cottereau (Contact: benoit.cottereau@cnrs.fr)
Abstract: Drones, or machines in general, have a multitude of sensors that provide information about the surroundings. Existing works on drone perception often use image-based sensors, such as RGB cameras and depth cameras. Image-based sensors are susceptible to motion blur as well as variation in illumination and thus do not work well when the drone is fast-moving. For teleoperated drones, the human operator can only rely on the live video feed of a single camera, and the restricted field of view limits the human understanding of the drone’s environment. In this project, the aim is to utilize the available sensors and provide a human operator with the perspective of being at the drone’s location. The selected candidate will work on developing novel approaches to how distributed sensors can communicate, collaborate (including changing what they sense) and process signals in an energy-efficient way to extract meaningful information from the scene, in response to existing knowledge models (long-term memory) and real-time interaction and decisions from humans, and send the information back to humans for visualization. Research tasks include:
- Process different types of sensory signals (e.g. data collected from event-based cameras, synchronous cameras, and other sensors) for scene understanding (e.g. object detection and localization) using neuromorphic systems based on artificial neural networks and embedded on a single or multiple drones.
- Extract meaningful information (3D layout of the scene, objects of interest, threats, etc.) and combine with existing knowledge models.
- Provide meaningful multimodal feedback to the user based on a wearable device (e.g. smart glasses) that should provide remote (augmented) perception.
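For a concrete flavour of the first research task above, the sketch below shows one common way to turn the raw output of an event-based camera into fixed-size frames that a conventional or spiking network can consume. It is only an illustrative baseline, assuming events are already available as NumPy arrays; the sensor resolution, window length and synthetic data are placeholders.

```python
import numpy as np

def events_to_frames(xs, ys, ts, ps, height, width, window_us=10_000):
    """Accumulate (x, y, t, polarity) events into 2-channel frames per time window.

    Channel 0 counts positive-polarity events, channel 1 negative ones.
    This is just one of many possible event representations (assumption).
    """
    t0 = ts.min()
    n_windows = int((ts.max() - t0) // window_us) + 1
    frames = np.zeros((n_windows, 2, height, width), dtype=np.float32)
    win = ((ts - t0) // window_us).astype(int)
    chan = np.where(ps > 0, 0, 1)
    np.add.at(frames, (win, chan, ys, xs), 1.0)  # unbuffered accumulation
    return frames

# Toy usage with random synthetic events (placeholder for real drone recordings).
rng = np.random.default_rng(0)
n = 5000
frames = events_to_frames(
    xs=rng.integers(0, 128, n), ys=rng.integers(0, 96, n),
    ts=np.sort(rng.integers(0, 50_000, n)), ps=rng.choice([-1, 1], n),
    height=96, width=128)
print(frames.shape)  # (n_windows, 2, 96, 128)
```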
Expected skills: The candidate should be willing to work in an international environment involving Singapore and France, have a good level of English, and very good programming skills (in Matlab, Python or C++).
Proposal #2 - Drone Piloting from Different Perspectives (closed)
Supervisors: Shen ZHAO (zhaosd@comp.nus.edu.sg) and Christophe JOUFFRAIS (Christophe.Jouffrais@cnrs.fr)
Abstract: New technologies such as mixed reality, natural or wearable interfaces, as well as Artificial Intelligence are beginning to take hold in production facilities. They promise performance gains but can also improve safety and comfort in interactions between human operators and semi-autonomous systems. For these technologies to be accepted and deployed, human factors must be considered.
In this internship, we will design and evaluate a multisensory interface for drone piloting from different perspectives. The aim of the project will be to define the characteristics of a multimodal interface for the control of semi-autonomous drones. Behavioral experiments and the analysis of the collected data will allow the selection of the most suitable parameters for both the design of the interfaces and the evaluation of the human-system interaction.
Expected skills: The candidate must have skills in human-computer interaction, cognitive science and/or human factors. He or she should be willing to work in an international environment involving Singapore and France, have a good level of English, and good programming skills.
Proposal #3 - Future Video Prediction using Generative Models
Supervisors: Ying SUN (suny@i2r.a-star.edu.sg) and Christophe JOUFFRAIS (Christophe.Jouffrais@cnrs.fr)
Abstract: Learning to predict the future is an important research problem in machine learning and artificial intelligence. In this project, we focus on the task of predicting future frames in videos, i.e., video prediction, given a sequence of previous frames. Recently, deep-learning-based methods have emerged as a promising approach for video prediction, especially generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs). VAEs can generate various plausible outcomes; however, the predicted frames are often blurry and of low quality. GAN-based models tend to produce higher-quality future frames, but adversarial training is unstable and may lead to mode collapse. Therefore, we will explore state-of-the-art generative models for video prediction and develop new strategies to address the limitations of existing methods.
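To make the task setup concrete, here is a minimal deterministic baseline, assuming PyTorch and synthetic data: a small convolutional network that maps k past grayscale frames to the next frame under an MSE loss. The VAE and GAN models discussed above would replace this encoder-decoder, but the data flow (past frames in, future frame out) stays the same.

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Deterministic baseline: stack k past frames along channels, predict frame k+1."""
    def __init__(self, k=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, past):          # past: (B, k, H, W) grayscale frames in [0, 1]
        return self.net(past)         # predicted next frame: (B, 1, H, W)

# Toy training loop on random clips (placeholder for a real video dataset).
model = NextFramePredictor(k=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5):
    clip = torch.rand(8, 5, 64, 64)   # 8 clips of 5 consecutive frames
    past, target = clip[:, :4], clip[:, 4:5]
    loss = nn.functional.mse_loss(model(past), target)
    opt.zero_grad(); loss.backward(); opt.step()
    print(step, loss.item())
```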
Expected skills: The candidate should be willing to work in an international environment involving Singapore and France, have a good level of English, and very good programming skills (in Matlab, Python or C++).
Proposal #4 - Human-in-the-Loop Learning
Supervisors: Lai-Xing Ng (Contact: ng_lai_xing@i2r.a-star.edu.sg), Wei-Tsang Ooi (Contact: ooiwt@comp.nus.edu.sg) and Axel Carlier (Contact: Axel.Carlier@toulouse-inp.fr)
Abstract: While deep learning has brought important advances in many domains, large labeled datasets are required to ensure good model performance. Several models of collaboration between humans and machine learning have been proposed to overcome this limitation and decrease the need for labeled data. In active learning, the model explicitly chooses data samples for humans to label, which are then fed into the training process in an online fashion.
Unlike learning from a large number of pre-labeled data samples, human inputs in human-in-the-loop learning have a larger impact or even overriding effects on machine decisions. Such human-AI collaboration models make it possible for malicious humans to impact the outcome of machine learning models.
In this internship, we plan to build upon existing criteria, such as expected model output change (EMOC), to study the possible trade-off between the impact of (possibly malicious) input and how fast a model can learn from humans.
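As a reference point, the classical active-learning loop the abstract refers to can be sketched in a few lines. The snippet below uses plain uncertainty sampling on synthetic scikit-learn data; criteria such as EMOC, or robustness checks against malicious labels, would replace the selection rule marked in the comments.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
# Small initial labeled pool containing both classes; the rest is the unlabeled pool.
labeled = list(np.where(y == 0)[0][:10]) + list(np.where(y == 1)[0][:10])
pool = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
for round_ in range(10):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Selection rule (uncertainty sampling): query the sample the model is least sure about.
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)        # in a real system, a human oracle provides this label
    pool.remove(query)
    print(round_, "accuracy on all data:", round(model.score(X, y), 3))
```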
Proposal #5 - Interactive Explainable AI
Supervisors: Christophe Hurter (christophe.hurter@enac.fr), Brian Lim (brianlim@comp.nus.edu.sg), and Jamie Ng (jamie@i2r.a-star.edu.sg)
Abstract: With AI capabilities, drones can be used to automatically inspect airplanes and buildings to improve the safety of these structures. However, it ultimately depends on the human operator to verify the severity of defects. In this project, we will develop interactive methods to make AI explainable for drone operators and inspectors to interpret and verify the image predictions. This research will investigate how to support user understanding of AI decisions using interactive visualization and explainable AI.
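One elementary building block for such explanations is a saliency map showing which pixels most influenced a prediction. The sketch below computes a vanilla gradient saliency map with PyTorch; the untrained ResNet and random image are placeholders for the actual defect-detection model and drone imagery, and the project would wrap such outputs in interactive visualizations.

```python
import torch
from torchvision import models

# Placeholder classifier; the project would use the trained defect-detection model.
model = models.resnet18(weights=None).eval()   # torchvision >= 0.13 API

def saliency_map(model, image):
    """Vanilla gradient saliency: |d(score of predicted class) / d(input pixel)|."""
    image = image.clone().requires_grad_(True)
    scores = model(image)
    scores[0, scores.argmax()].backward()
    return image.grad.abs().max(dim=1)[0]      # (1, H, W), max over RGB channels

x = torch.rand(1, 3, 224, 224)                 # placeholder inspection image
heatmap = saliency_map(model, x)
print(heatmap.shape)                           # torch.Size([1, 224, 224])
```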
We are looking for talented candidates to join our multidisciplinary team. The project looks into how robotics, computer vision, artificial intelligence, virtual/augmented reality and human-computer interaction can lead to effective human-AI collaboration. We are looking for candidates with passion in research and development, as well as in translating R&D technology into industry applications.
Expected Skills:
- Qualification/Field of Study: Bachelors or Masters
- Technical skills: Strong programming skills (e.g., Python, Javascript, C++).
- Experience and Knowledge: Computer Vision and Machine Learning, Virtual Reality, Human-Computer Interaction, etc.
- Aptitude: Critical thinker, self-motivated, can work both independently and in teams, with good analytical and communication skills.
Proposal #6 - Natural Language Processing (NLP)
CNRS@CREATE Singapore has two open PhD positions on hybrid strategies for NLP. The candidate will work within the DesCartes program, https://www.cnrsatcreate.cnrs.fr/descartes/, a large research project that aims to develop disruptive hybrid AI to serve the smart city and to enable optimized decision-making in the complex situations encountered in critical urban systems.
We are looking for candidates with:
→ Master’s degree in Computer Science with a solid background in NLP, AI and/or machine learning. A very strong academic record is highly recommended.
→ Good experience in deep learning approaches for NLP
→ Good programming skills in Python
→ Very good English skills (both writing and speaking)
→ Can work collaboratively with other researchers
The candidate will be registered at Paul Sabatier University in Toulouse for 3 years and is expected to spend time in Singapore (https://www.cnrsatcreate.cnrs.fr/about-us/). The thesis will be supervised by Jian Su (A*STAR, sujian@i2r.a-star.edu.sg) and Farah Benamara (IRIT, farah.benamara@irit.fr).
To apply, please send a detailed CV, your grades and a list of publications. The position is open until filled, but the deadline to apply is October 15th.
Feel free to contact us for any questions or comments: Farah Benamara (farah.benamara@irit.fr)
2020 Master Internship positions
Multimodal feedback in a virtual scene
Designing non-visual multimodal feedback to help with navigation in a virtual scene. The student will help develop audio and tactile feedback to guide a user navigating through a virtual scene. These two modalities will be integrated in a system that attempts to understand user preferences, obtain their feedback for human-in-the-loop reinforcement learning, and evaluate our approach. The task involves helping to prepare a software library that, given a 3D virtual scene and a route, renders orientations and directions to the user using audio (via text-to-speech) and tactile feedback. Both signals will be sent to an earphone and to one or two wrist-based tactile bands with motors that provide spatial cues.
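A minimal sketch of the guidance logic such a library might expose, assuming the route is given as a list of 2D waypoints in scene coordinates: compute the turn at each waypoint, then emit a spoken instruction and a left/right tactile cue. The speak and vibrate functions are placeholders for the text-to-speech engine and the wrist-band driver.

```python
import math

def turn_instructions(route):
    """Yield (text, side) guidance cues for a route given as (x, y) waypoints."""
    for i in range(1, len(route) - 1):
        (x0, y0), (x1, y1), (x2, y2) = route[i - 1], route[i], route[i + 1]
        heading_in = math.atan2(y1 - y0, x1 - x0)
        heading_out = math.atan2(y2 - y1, x2 - x1)
        # Signed turn angle in degrees; positive = counter-clockwise (a left turn here).
        turn = math.degrees((heading_out - heading_in + math.pi) % (2 * math.pi) - math.pi)
        if abs(turn) < 20:
            yield "continue straight", None
        else:
            side = "left" if turn > 0 else "right"
            yield f"turn {side} by {abs(turn):.0f} degrees", side

def speak(text):    # placeholder for the text-to-speech call
    print("AUDIO:", text)

def vibrate(side):  # placeholder for the tactile wrist-band driver
    print("TACTILE:", side, "band pulses")

route = [(0, 0), (0, 5), (4, 5), (4, 9)]  # toy route in scene coordinates
for text, side in turn_instructions(route):
    speak(text)
    if side:
        vibrate(side)
```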
Date: project start between May and Dec 2020
Contact: C. Jouffrais christophe.jouffrais (@) ipal.cnrs.fr and Shen Zhao dcszs (@)nus.edu.sg
Easier Scene Understanding with Deep Learning using Context
In this project, the intern will study the problem of scene understanding from a given image using a deep neural network. Current state-of-the-art methods require a complex and deep network and a large amount of training data. We will explore how having prior context information about the scene can simplify the problem, and thus the complexity of the network as well as the amount of training data required.
The intern will assist the researcher in experimenting with different neural network models and with how the context information can be integrated into the training and inference phases of the problem.
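One simple way such context could enter the model, sketched below under the assumption of a PyTorch classifier: embed a discrete scene-context label and concatenate it with the image features before the classification head, so that a smaller backbone and less training data may suffice. All names and sizes are illustrative, not the project's actual design.

```python
import torch
import torch.nn as nn

class ContextConditionedClassifier(nn.Module):
    """Small CNN whose classification head also sees an embedded scene-context label."""
    def __init__(self, n_contexts=5, n_classes=10, ctx_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.ctx_embed = nn.Embedding(n_contexts, ctx_dim)
        self.head = nn.Linear(32 + ctx_dim, n_classes)

    def forward(self, image, context_id):
        feats = self.backbone(image)              # (B, 32) image features
        ctx = self.ctx_embed(context_id)          # (B, ctx_dim) context embedding
        return self.head(torch.cat([feats, ctx], dim=1))

model = ContextConditionedClassifier()
logits = model(torch.rand(4, 3, 64, 64), torch.tensor([0, 1, 1, 3]))
print(logits.shape)                               # torch.Size([4, 10])
```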
Date: project start between May and Dec 2020
Contact: Axel Carlier Axel.Carlier (@) enseeiht.fr and Wei Tsang Ooi ooiwt (@) comp.nus.edu.sg
How To Fool a Deep Neural Net with another Deep Neural Net
Deep neural networks have proved successful in computer vision and natural language processing. Nevertheless, the research literature has shown that they can be vulnerable: changing a few pixels of an image of a dog can cause the model to make a wrong prediction. Such a mutated image is called an adversarial sample for the neural network. Such perturbation-based approaches look for adversarial samples from a low-level, detailed perspective. In this research, we investigate a new adversarial sample generation technique by exploring GANs (Generative Adversarial Networks). We are exploring how to use a GAN to generate adversarial samples from a higher-level perspective. More specifically, we explore generating a face of Bob that has never appeared in the training set but can be mistakenly classified as Alice, in order to fool a face recognition system.
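For reference, the low-level perturbation-based approach that this proposal contrasts with GAN-based generation can be sketched in a few lines using the Fast Gradient Sign Method, assuming a PyTorch classifier; the untrained ResNet and random image below are placeholders for a real model and a real dog photo.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # placeholder for the attacked classifier

def fgsm(model, image, label, eps=0.03):
    """Fast Gradient Sign Method: one gradient-sign step away from the true label."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)                 # stand-in for the dog image
y = torch.tensor([207])                        # stand-in for its true class index
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max().item())          # perturbation bounded by eps
```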
Date: project start between May and Dec 2020
Contact: Blaise Genest blaise.genest (@) irisa.fr and Jin Song Dong dcsdjs (@) nus.edu.sg
Neural Network for Differential Equations
Differential equations are one of the main tools for the modelling, simulation and analysis of complex systems in most domains of science and engineering. Neural networks have recently been shown to be able to effectively and efficiently solve differential equations. In fact, several possible approaches are still under investigation. In this project, the researcher will implement and evaluate several existing and new approaches to represent and solve systems of differential equations with neural networks. The student researcher may also be involved in the development of applications of the work to hydrology, meteorology and climate change.
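A minimal sketch of the idea, assuming PyTorch: train a small network u(t) so that it satisfies the ODE du/dt = -u with u(0) = 1 by penalising the residual computed with automatic differentiation, a physics-informed neural network in its simplest form. The analytic solution exp(-t) serves only as a sanity check.

```python
import torch
import torch.nn as nn

# u(t) is represented by a small fully connected network.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)              # collocation points in [0, 1]
    u = net(t)
    du_dt, = torch.autograd.grad(u.sum(), t, create_graph=True)
    residual = du_dt + u                                     # enforce du/dt = -u
    u0 = net(torch.zeros(1, 1))
    loss = (residual ** 2).mean() + ((u0 - 1.0) ** 2).mean() # ODE residual + initial condition
    opt.zero_grad(); loss.backward(); opt.step()

print(net(torch.tensor([[1.0]])).item())  # should approach exp(-1) ≈ 0.368
```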
Date: project start between May and Dec 2020
Contact: Talel Abdessalem Talel.Abdessalem (@) telecom-paristech.fr and Stephane Bressan steph (@) nus.edu.sg
Previous positions:
2019 Internships:
- Towards ageing-well through trusted intelligent systems based on AI, IoT and Formal Analysis
- Android development of urban mobility app using Fitbit and environment APIs (app implementation, analysis, reasoning)
- Front and back-end dev and data analysis Node.js (Machine learning, IoT for health, on site validation)
- Software IoT architecture (refactoring, optimization of platform to enhance large-scale deployments)
- Web-based visualisation of GeoJSON (interactions in WebGIS environment)
- Continuous and nonintrusive vital sign monitoring using optical fibre sleep mat (machine learning, sleep cycles data analysis) in collaboration with Khoo Teck Puat Hospital (KTPH) and Singapore University of Technology and Design (SUTD)
Get to know more:
Scientists worldwide are welcome to join our challenges! IPAL provides great opportunities to researchers and students of all nationalities who wish to blossom in an excellent international research laboratory. We are committed to providing a unique platform for candidates to begin research and develop their skills in a top-ranked university, fully supported by distinguished and world-renowned researchers from Singapore and France.
CNRS and Universities mobility: If you are already a researcher working for the CNRS, we will be very honored to welcome you to our laboratory. Please have a look at the CNRS website for the procedure, do get in touch with us to prepare an ambitious joint project able to boost your career, and do not hesitate to contact us for further assistance: CNRS Mobility website
Singapore, a high-tech and world-class scientific environment: In a very competitive scientific environment, surrounded by dynamic and talented scientists and supported by one of the best basic and translational research infrastructures worldwide, working in Singapore is a valuable experience. In partnership with the National University of Singapore and the Agency for Science, Technology and Research institutes, world-class scientists from all major scientific centres in the world exchange and share with us all year long, generating a prolific scientific osmosis.
Open PhD Positions and Regular PhD applications: In order to work with IPAL, you need to get in touch with one of our staff during your first year. Please look at our research goals, axes, projects and publications and you will quickly understand which competencies we will always welcome. Don’t hesitate to contact us if needed. Besides the open position(s) above, a regular submission can be done via the graduates portal of NUS, School of Computing, Computer Science Dpt. or NUS, Faculty of Engineering, depending on your profile. See also the PhD Programme at NUS School of Computing. Another possibility to get an NUS degree at IPAL is to go for the SINGA – Singapore International Graduate Award programme or the ARAP – A*STAR Research Attachment Programme, both funded by A*STAR, with graduation through NUS in the case of IPAL. Last but not least, regular applications can be done via the EDITE doctoral school (Informatics, Telecommunications and Electronics) of the University Pierre and Marie Curie, Paris, France, or the Doctoral School for Computer Sciences, Applied and Pure Mathematics (MSTII) of the University Joseph Fourier, Grenoble 1, France, for a French PhD while working at IPAL in Singapore or in collaboration with highly reputable CNRS labs in France. Please get in touch with us to define your project before applying in this case.
Previous positions:
2017 Support Team
2017 Master Internship Proposals
- Optimize daily lives with IoT devices
- 3D segmentation of biomedical data using a super-pixels approach
- Front-End UX/UI (Ambient Assisted Living)
- Interactive Applications for Supporting Social links in Assistive Technologies
- Making Sense of Environmental Data in Real Time to Improve the Service Delivery in Assistive Technology
- Merging Wearables Devices and Open Linked Data to Improve Elderly Assessment in Open Spaces
- Social and Behavioural Sciences applied on Assistive technology for ageing people
Internships hosted by our partners on joint projects:
- Assessing the effects of acute mild exercise on working memory: an EEG-fNIRS approach – based in Malaysia, Ipoh, Universiti Teknologi Petronas
- Deep Learning Techniques for DME Classification on SD-OCT images – based in Malaysia, Ipoh, Universiti Teknologi Petronas
- Integration of new Indoor Sensing Technologies in Ambient Assistive Living Platform – based at LIRMM in Montpellier, France
- Integration of new Outdoor Technologies in Ambient Assistive Living Platform – based at LIRMM in Montpellier, France
2017 Post-doc fellowship @BII:
2016 Master Internship Proposals
- Android App for Context-Aware Interaction in Ambient Assisted Livings
- Embedded Sensors for IoT
- Image Retrieval
- Inertial Sensor Fusion
- Internet of Things Framework for Ambient Intelligence
- Generic Visual Search based on Deep Attention Model
- Holistic Scene Understanding for Indoor/Outdoor Navigation
- Video based Context Awareness for Ambient Assistive Livings
Internships hosted by our partners on joint projects:
- Extending a Platform for Ambient Assistive Living at LIRMM, Montpellier, France
2016 PhD positions
- 3-D structural analysis of neuronal cells in live animals
- Bio-inspired Vision for Mobile Robotics
- Interactive Reconstruction and Navigation in 3D Cities Models
- “Events” detection in videos
- Learning and tracking in 3D+t biomedical data
- Image analysis for Urine Cytology data
- Holistic Scene Understanding for Weak-Sighted Elderly
- Context Continuity in Ambient Assisted Living (connected car, with PSA Peugeot Citroën)
- Event Analytics for Smart Cities
- Video based Context Awareness for Ambient Assistive Livings
2014 PhD positions
- Augmented Visual Perception
- Combining Data Driven & Knowledge Driven Techniques for the Context Awareness in Ambient Assistive Livings
- Continuous and unobtrusive vital signs monitoring with ballistocardiogram sensors for sleep awakening and apnea
- Deep-Learning for Satellite Imagery
- Generic segmentation and classification in biomedical images
- Learning Approaches in Dynamic Data Management
- Modeling of spatial and temporal organization in images and videos
2015 Master internship positions
- Automated machine learning in 3D/4D biological image data
- Automated tracking in 3D+time biological data
- Bio-inspired Vision for Mobile Robotics
- Design of a Home Gateway for Ambient Assistive Livings
- Design of a REST Framework for Ambient Assistive Livings
- Large scale data mining for fraud and intrusion detection
- Querying Probabilistic Data via Tree Decompositions
- Sensing Environment, Health and Hygiene from Social Media
- Sensor development for vital signs monitoring
- Signal Processing for Vital Signs Monitoring
- Spatial Information Extraction in Video
- Video based Context Awareness for Ambient Assistive Livings
- Large Scale Image Classification with Deep Learning
- Object Localization with Deep CNNs
- Deep Learning: Solving the Detection Problem
- Describing Images with Sentences
- Online Deep Learning for Google Glass
Internships hosted by our partners on joint projects:
- Dynamic sensors integration and monitoring within a pervasive framework at LIRMM, Montpellier, France
2014 Master internship positions
- Computer Vision: CRF-based Identification of ROIs on Histopathological Images
- Data Analytics for Context Awareness in Ambient Assistive Livings
- Data Visualization: Supervised Dimensionality-Reduction Methods for Visualization of High-Content Data
- Design of a Home Gateway for Ambient Assistive Livings
- Design of a REST Framework for Ambient Assistive Livings
- Detection and Segmentation of Nuclei and Mitosis on Histological Images
- Machine-Learning : Self-Organizing Maps for Knowledge-Oriented Exploratory Data Analysis
- Medical Image Understanding: Semantic Tools for Computer-Aided Diagnosis of Colon Cancer
- Modeling Spatial Information in 2D Biomedical Images
- Sensing Environment, Health and Hygiene from Social Media