About Me
I am a PhD in computer vision with a strong interest in real-time 3D tracking and mapping using RGB-D cameras. Between 2009 and 2012 I was a postdoc at Carlos III University of Madrid working on object 3D reconstruction and tracking for robotics.
The success of RGBDemo, my open-source software showcasing the possibilities of the Microsoft Kinect, led me to cofound ManCTL in 2011. After some initial R&D projects, we were selected into the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, a real-time 3D scanning software compatible with low-cost RGB-D cameras. Watch our first prototype in action in 2011!
In 2013 we joined forces with Occipital to better explore the possibilities of depth sensing on mobile devices. We participated in the launch of Structure Sensor, the first depth sensor for mobile, which became the #6 most funded project on Kickstarter.
Here are some of the projects I participated in at Occipital:
- Launch of Structure Sensor: worked with our small team on the 3D reconstruction stack and wrote the first version of the Objective-C API for Structure SDK to expose it to developers.
- Kept improving our RGBD tracking, 3D reconstruction and texturing in real-time on iOS.
(:youtube YPLJsYYzFA4:)
- Calibrator, an iOS app to calibrate the iOS color camera against the Structure Sensor, using feature matching between the sensor's IR camera and the color camera.
- Unbounded positional tracking using RGBD for CES 2015.
(:youtube UfQgkzfDwHw:)
- Mixed reality demo for iOS at CES 2016, combining RGBD tracking, 3D reconstruction, and physics via an integration with the SceneKit (and later Unity) game engines.
(:youtube cEnnbCSbijo:)
- Did not get to work on medical apps myself, though ManCTL started with an R&D project for foot orthotics, and I've been very proud to see the many medical uses of the Structure Sensor SDK and our live 3D reconstruction.
(:youtube 9LgmQYkRiSY:)
- Adapted Bridge Engine to launch a VR headset for iPhone. Optimized for latency, and leveraged visual-inertial sensor fusion for pose prediction.
(:youtube qbkwew3bfWU:)
- Launch of Canvas (scan your home), which leveraged our work on real-time unbounded large-scale SLAM for mobile.
(:youtube XA7FMoNAK9M:)
- Positional tracking for AR/VR with a single camera and an IMU, ported from mobile to Windows/PC. A depth sensor is not always required anymore :)
(:youtube aVdWED6kfKc:)
- TapMeasure: led a small team to build this iOS app in a very short time, leveraging ARKit to take 3D measurements.
- Positional tracking for AR/VR extended to stereo, with more room perception, for CES 2018.
(:youtube ra4u5np4HXk:)
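The visual-inertial fusion mentioned above relies on predicting pose from IMU samples between camera frames. As a generic illustration of that idea (a minimal sketch, not Occipital's implementation), here is how gyroscope readings can be integrated into a quaternion orientation:

```python
import math

def quat_multiply(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(q, omega, dt):
    """Advance orientation q by angular velocity omega (rad/s) over dt seconds."""
    wx, wy, wz = omega
    rate = math.sqrt(wx*wx + wy*wy + wz*wz)
    if rate * dt < 1e-12:
        return q
    half = rate * dt / 2.0
    s = math.sin(half) / rate
    # Incremental rotation: angle rate*dt around the axis omega/rate.
    dq = (math.cos(half), wx*s, wy*s, wz*s)
    return quat_multiply(q, dq)

# Rotate at 90 deg/s around the z axis for one second, in 100 IMU steps.
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = integrate_gyro(q, (0.0, 0.0, math.pi / 2), 0.01)
# q now represents a 90-degree rotation about z.
```

Real systems additionally fuse accelerometer data and correct the drift of this dead-reckoning with visual observations; this sketch only shows the prediction half.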
Resume in English (PDF)
During my postdoc at Carlos III University of Madrid I worked on the perception aspects of the Handle FP7 European project on dexterous robotic manipulation, and on the Prosave project studying the suitability of Time-of-Flight cameras for airplane applications, in collaboration with Airbus Military. My main interests were computer vision for robotics, RGB-D cameras, object recognition and statistical approaches to visual event detection.
CV
PhD in Computer Science
Artificial Intelligence and Imaging
E D U C A T I O N
2005 - 2008 | PhD at UPMC (Université Pierre et Marie Curie, Paris 6), hosted at ENSTA (École Nationale Supérieure de Techniques Avancées), under the supervision of Thierry Bernard (ENSTA) and Jean-Michel Jolion (LIRIS - INSA Lyon). A-contrario learning and efficient architecture for the detection of meaningful visual events (more details). |
2004 - 2005 | Research Master (M2) in IAD (Artificial Intelligence and Decision) at Université Pierre et Marie Curie (Paris VI). Image and Sound specialization. Graduated with highest honors. |
2002 - 2004 | Research-track training at LRDE (EPITA's Research and Development Laboratory), directed by Dr. Akim Demaille. |
1999 - 2004 | Preparatory and engineering cycles at EPITA (Ecole Pour l'Informatique et les Techniques Avancées). Graduated with highest honors. |
T E A C H I N G
2007 | ENSTA: supervision of C programming projects (20h) |
2007 | UPMC (Université Pierre et Marie Curie): tutorials and labs on object-oriented programming and design patterns (50h) |
2006 | UPMC: labs on C and Unix shell (75h). |
2005 | ESILV (École Supérieure d'Ingénieurs Léonard de Vinci): Java labs and tutorials (15h), project management labs (10h). |
2005 | CESI (Centre d'Etudes Supérieures Industrielles): set up an introductory module on Linux (20h). |
2005 | Acadomia: tutoring in mathematics and English. |
P R O F E S S I O N A L   E X P E R I E N C E
2009 - 6 months | University of Liège: postdoc. Statistical approaches to monocular human body pose estimation. Application to the visual analysis of sign language. |
2005 - 6 months | ENSTA: Master internship. Statistical detection of salient objects on a programmable artificial retina. Application to the detection of meaningful segments. |
2004 - 7 months | Siemens Corporate Research (Princeton, USA): research internship in the imaging department. Investigated a particle-filter model for brain fiber tracking. Research and implementation (C++, Windows/Linux) of visualization algorithms for diffusion tensor imaging (DT-MRI). Patent |
2002 - 2003 | LRDE: research and development. Developed Evidenz, a generic reasoning engine based on evidence theory (Dempster-Shafer), with applications to image processing. Participated in the design and development of Olena, a generic image processing library in C++ (metaprogramming). |
2002 - 3 months | Radio France: internship in the research and development department. Design and implementation of audio stream monitoring applications in C++ across multiple platforms (Linux, NetBSD, OpenBSD, FreeBSD). |
2001 - 2 months | Snecma Services: automated statistics generation (VBA, Access). |
2000 - 2 months | WebValley: Linux network administration (web / ftp server, monitoring, backups, ...) |
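The evidence theory behind Evidenz fuses belief masses from independent sources via Dempster's rule of combination. A minimal sketch of that rule (purely illustrative, unrelated to the Evidenz codebase; the sensor masses below are made-up numbers):

```python
# Dempster's rule of combination: fuse two basic belief assignments
# (mass functions) defined over subsets of a frame of discernment.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    # Renormalize by the non-conflicting mass.
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two sources giving evidence about whether a pixel is 'edge' or 'flat'.
edge, flat = frozenset({'edge'}), frozenset({'flat'})
either = edge | flat
m1 = {edge: 0.6, either: 0.4}             # source 1: leans 'edge'
m2 = {edge: 0.7, flat: 0.1, either: 0.2}  # source 2: mostly agrees
fused = combine(m1, m2)
print(fused[edge])  # belief mass on 'edge' after fusion
```

Agreement between the two sources reinforces the 'edge' hypothesis, while the small conflicting mass (edge vs flat) is discarded by the renormalization step.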
P U B L I C A T I O N S
M I S C E L L A N E O U S
Languages | English: fluent (7 months living in the United States). Spanish: school-level. |
Activities |
Contributions to various open-source projects. Sports: volleyball, swimming, rollerblading. Travel. |
This document was translated from LATEX by HEVEA.