About Me

AboutMe.Main History


June 07, 2020, at 02:40 PM by 81.164.27.8 -
Changed line 28 from:
  • Did not get to work on medical apps myself (Man CTL started with an R&D project for foot orthotics), but I’ve been very proud to see the many medical uses of the Structure Sensor SDK and our live 3D reconstruction.
to:
  • Did not get to work on medical apps myself (ManCTL started with an R&D project for foot orthotics), but I’ve been very proud to see the many medical uses of the Structure Sensor SDK and our live 3D reconstruction.
June 07, 2020, at 06:25 AM by 81.164.27.8 -
Changed lines 9-10 from:

The success of RGBDemo, my open-source software showcasing the possibilities of the Microsoft Kinect, led me to co-found ManCTL in 2011. After some initial R&D projects, we were selected to become a member of the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, real-time 3D scanning software compatible with low-cost RGB-D cameras. Watch our first prototype in action in 2011!

to:

The success of RGBDemo led me to co-found ManCTL in 2011. After some initial R&D projects, we were selected to become a member of the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, real-time 3D scanning software compatible with low-cost RGB-D cameras. Watch our first prototype in action in 2011!

Deleted lines 16-18:
  • Improved our RGBD tracking, 3D reconstruction and texturing in real-time on iOS.

(:youtube YPLJsYYzFA4:)

Added lines 18-20:
  • Kept improving our RGBD tracking, 3D reconstruction and texturing in real-time on iOS.

(:youtube YPLJsYYzFA4:)

June 07, 2020, at 06:24 AM by 81.164.27.8 -
Changed line 9 from:

The success of RGBDemo, my open-source software showcasing the possibilities of the Microsoft Kinect, led me to co-found ManCTL in 2011. After some initial R&D projects, we were selected to become a member of the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, real-time 3D scanning software compatible with low-cost RGB-D cameras. Watch our first prototype in action in 2011.

to:

The success of RGBDemo, my open-source software showcasing the possibilities of the Microsoft Kinect, led me to co-found ManCTL in 2011. After some initial R&D projects, we were selected to become a member of the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, real-time 3D scanning software compatible with low-cost RGB-D cameras. Watch our first prototype in action in 2011!

June 07, 2020, at 06:18 AM by 81.164.27.8 -
Changed lines 15-17 from:
  • Launch of Structure Sensor; wrote the first version of the Objective-C API for the Structure SDK.
  • RGBD tracking, 3D reconstruction and texturing in real-time on iOS.
to:
  • Launch of Structure Sensor; worked with our small team on the 3D reconstruction stack and wrote the first version of the Objective-C API for the Structure SDK to expose it to developers.
  • Improved our RGBD tracking, 3D reconstruction and texturing in real-time on iOS.
Changed line 28 from:
  • Did not work on actual medical apps myself (Man CTL started with an R&D project for foot orthotics), but I’ve been very proud to see the many medical uses of the Structure Sensor SDK and our live 3D reconstruction.
to:
  • Did not get to work on medical apps myself (Man CTL started with an R&D project for foot orthotics), but I’ve been very proud to see the many medical uses of the Structure Sensor SDK and our live 3D reconstruction.
Changed line 37 from:
  • Positional tracking for AR/VR with a single camera and an IMU. Depth sensor not always required anymore :)
to:
  • Positional tracking for AR/VR with a single camera and an IMU. Ported from mobile to Windows/PC. Depth sensor not always required anymore :)
June 07, 2020, at 06:14 AM by 81.164.27.8 -
Changed line 15 from:
  • Launch of Structure Sensor; wrote the first version of the API (Objective-C) for the Structure SDK.
to:
  • Launch of Structure Sensor; wrote the first version of the Objective-C API for the Structure SDK.
June 07, 2020, at 06:12 AM by 81.164.27.8 -
Changed line 7 from:

I have a PhD in computer vision, with a strong interest in real-time 3D tracking and mapping using RGB-D cameras. Between 2009 and 2012 I was a postdoc at Carlos III University of Madrid, working on object 3D reconstruction and tracking for robotics.

to:

I have a PhD in computer vision, with a strong interest in real-time 3D tracking and mapping. Between 2009 and 2012 I was a postdoc at Carlos III University of Madrid, working on object 3D reconstruction and tracking for robotics.

June 07, 2020, at 06:12 AM by 81.164.27.8 -
Changed lines 9-12 from:

The success of RGBDemo, my open-source software showcasing the possibilities of the Microsoft Kinect, led me to co-found ManCTL in 2011. We were selected to become a member of the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, real-time 3D scanning software compatible with low-cost RGB-D cameras.

In 2013 we joined forces with Occipital to better explore the possibilities of depth sensing on mobile devices. Our Structure Sensor, the first depth sensor for mobile, became the #6 most funded project on Kickstarter.

to:

The success of RGBDemo, my open-source software showcasing the possibilities of the Microsoft Kinect, led me to co-found ManCTL in 2011. After some initial R&D projects, we were selected to become a member of the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, real-time 3D scanning software compatible with low-cost RGB-D cameras. Watch our first prototype in action in 2011.

In 2013 we joined forces with Occipital to better explore the possibilities of depth sensing on mobile devices. We participated in the launch of Structure Sensor, the first depth sensor for mobile, and it became the #6 most funded project on Kickstarter.

Here are some of the projects I’ve participated in at Occipital:

  • Launch of Structure Sensor; wrote the first version of the API (Objective-C) for the Structure SDK.
  • RGBD tracking, 3D reconstruction and texturing in real-time on iOS.

(:youtube YPLJsYYzFA4:)

  • Calibrator, an iOS app to calibrate the iOS color camera with the Structure Sensor, using feature matching between the sensor’s IR camera and the color camera.
  • Unbounded positional tracking using RGBD for CES 2015.

(:youtube UfQgkzfDwHw:)

  • Mixed reality demo for iOS at CES 2016, combining RGBD tracking, 3D reconstruction, and physics via an integration with the SceneKit (and later Unity) game engines.

(:youtube cEnnbCSbijo:)

  • Did not work on actual medical apps myself (Man CTL started with an R&D project for foot orthotics), but I’ve been very proud to see the many medical uses of the Structure Sensor SDK and our live 3D reconstruction.

(:youtube 9LgmQYkRiSY:)

  • We adapted Bridge Engine to launch a VR headset for iPhone. We optimized for latency and leveraged visual-inertial sensor fusion for pose prediction.

(:youtube qbkwew3bfWU:)

  • Launch of Canvas (scan your home), which leveraged our work on real-time unbounded large-scale SLAM for mobile.

(:youtube XA7FMoNAK9M:)

  • Positional tracking for AR/VR with a single camera and an IMU. Depth sensor not always required anymore :)

(:youtube aVdWED6kfKc:)

  • TapMeasure: led a small team to build this iOS app in a very short time, leveraging ARKit to take 3D measurements.
  • Positional tracking for AR/VR extended to stereo, with more room perception, for CES 2018.

(:youtube ra4u5np4HXk:)


May 26, 2014, at 01:59 AM by 208.66.25.130 -
Changed lines 7-11 from:

I have a PhD in computer vision, with a strong interest in real-time 3D tracking and mapping using RGB-D cameras. Between 2009 and 2012 I was a postdoc at Carlos III University of Madrid, working on object 3D reconstruction and tracking for robotics. The success of RGBDemo, my open-source software showcasing the possibilities of the Microsoft Kinect, led me to co-found ManCTL in 2011. We were selected to become a member of the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, real-time 3D scanning software compatible with low-cost RGB-D cameras. In 2013 we joined forces with Occipital to better explore the possibilities of depth sensing on mobile devices. Our Structure Sensor, the first depth sensor for mobile, became the #6 most funded project on Kickstarter.

to:

I have a PhD in computer vision, with a strong interest in real-time 3D tracking and mapping using RGB-D cameras. Between 2009 and 2012 I was a postdoc at Carlos III University of Madrid, working on object 3D reconstruction and tracking for robotics.

The success of RGBDemo, my open-source software showcasing the possibilities of the Microsoft Kinect, led me to co-found ManCTL in 2011. We were selected to become a member of the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, real-time 3D scanning software compatible with low-cost RGB-D cameras.

In 2013 we joined forces with Occipital to better explore the possibilities of depth sensing on mobile devices. Our Structure Sensor, the first depth sensor for mobile, became the #6 most funded project on Kickstarter.

May 26, 2014, at 01:58 AM by 208.66.25.130 -
Changed lines 7-9 from:

Resume in English PDF

I have a PhD in computer vision and since 2009 I have been a postdoctoral scholar at Carlos III University of Madrid. There, I am working on the perception aspects of the Handle FP7 European project on dexterous robotic manipulation. I am also involved in the Prosave project, studying the suitability of Time-of-Flight cameras for airplane applications in collaboration with Airbus Military. My main interests are computer vision for robotics, RGB-D cameras, object recognition, and statistical approaches to visual event detection.

to:

I have a PhD in computer vision, with a strong interest in real-time 3D tracking and mapping using RGB-D cameras. Between 2009 and 2012 I was a postdoc at Carlos III University of Madrid, working on object 3D reconstruction and tracking for robotics. The success of RGBDemo, my open-source software showcasing the possibilities of the Microsoft Kinect, led me to co-found ManCTL in 2011. We were selected to become a member of the Microsoft Kinect Accelerator powered by Techstars in 2012, and developed Skanect, real-time 3D scanning software compatible with low-cost RGB-D cameras. In 2013 we joined forces with Occipital to better explore the possibilities of depth sensing on mobile devices. Our Structure Sensor, the first depth sensor for mobile, became the #6 most funded project on Kickstarter.

April 04, 2011, at 10:01 PM by 92.151.173.114 -
Added line 10:


April 04, 2011, at 10:00 PM by 92.151.173.114 -
Changed line 9 from:

I have a PhD in computer vision and since 2009 I have been a postdoctoral scholar at Carlos III University of Madrid. There, I am working on the perception aspects of the Handle FP7 European project on dexterous robotic manipulation. I am also involved in the Prosave project, studying the suitability of Time-of-Flight cameras for airplane applications in collaboration with Airbus Military. My main interests are computer vision for robotics, RGB-D cameras, object recognition, and statistical approaches to visual event detection.

to:

I have a PhD in computer vision and since 2009 I have been a postdoctoral scholar at Carlos III University of Madrid. There, I am working on the perception aspects of the Handle FP7 European project on dexterous robotic manipulation. I am also involved in the Prosave project, studying the suitability of Time-of-Flight cameras for airplane applications in collaboration with Airbus Military. My main interests are computer vision for robotics, RGB-D cameras, object recognition, and statistical approaches to visual event detection.

April 04, 2011, at 10:00 PM by 92.151.173.114 -
Changed lines 7-121 from:

Resume in English PDF

CV

PhD in Computer Science

Artificial Intelligence and Imaging


E D U C A T I O N


2005 - 2008 PhD at UPMC (Université Pierre et Marie Curie, Paris 6), hosted at ENSTA (École Nationale Supérieure de Techniques Avancées), supervised by Thierry Bernard (ENSTA) and Jean-Michel Jolion (LIRIS - INSA Lyon). A-contrario learning and an efficient architecture for the detection of meaningful visual events (more details).
2004 - 2005 Research Master's (M2) in IAD (Intelligence Artificielle et Décision) at Université Pierre et Marie Curie (Paris VI), with a specialization in Image and Sound. Graduated with highest honors.
2002 - 2004 Research training at the LRDE (Laboratoire de Recherche et de Développement d'EPITA), directed by Dr. Akim Demaille.
1999 - 2004 Preparatory and engineering cycles at EPITA (École Pour l'Informatique et les Techniques Avancées). Graduated with highest honors.

T E A C H I N G


2007 ENSTA: supervision of C programming projects (20h)
2007 UPMC (Université Pierre et Marie Curie): tutorials and lab sessions on object-oriented programming and design patterns (50h)
2006 UPMC: lab sessions on C and the Unix shell (75h).
2005 ESILV (École Supérieure d'Ingénieurs Léonard de Vinci): Java lab sessions and tutorials (15h), project management lab sessions (10h).
2005 CESI (Centre d'Études Supérieures Industrielles): set up an introductory Linux module (20h).
2005 Acadomia: tutoring in mathematics and English.

P R O F E S S I O N A L   E X P E R I E N C E


2009 - 6 months Université de Liège: postdoc.
Statistical approaches for monocular human body pose estimation.
Application to the visual analysis of sign language.
2005 - 6 months ENSTA: Master's internship.
Statistical detection of salient objects on a programmable artificial retina.
Application to the detection of meaningful segments.
2004 - 7 months Siemens Corporate Research (Princeton, United States): research internship in the imaging department.
Investigated a model for applying particle filters to brain fiber tracking.
Research and implementation (C++, Windows/Linux) of visualization algorithms for diffusion tensor images (DT-MRI).
Patent
2002 - 2003 LRDE: research and development.
Development of Evidenz, a generic reasoning engine based on evidence theory (Dempster-Shafer). Application to image processing.
Participation in the design and development of Olena, a generic C++ image processing library (metaprogramming).
2002 - 3 months Radio France: internship in the research and development department. Design and implementation of audio stream monitoring applications in C++ in a multi-platform environment (Linux, NetBSD, OpenBSD, FreeBSD).
2001 - 2 months Snecma Services: automated statistics generation (VBA, Access).
2000 - 2 months WebValley: Linux network administration (web / FTP server, monitoring, backups, …)

P U B L I C A T I O N S



Check the research page

M I S C E L L A N E O U S


Languages: English, fluent (7 months living in the United States).
Spanish: school level.

Activities

Contributions to various open-source software projects.
Sports: volleyball, swimming, rollerblading.
Travel.


to:

Resume in English PDF

I have a PhD in computer vision and since 2009 I have been a postdoctoral scholar at Carlos III University of Madrid. There, I am working on the perception aspects of the Handle FP7 European project on dexterous robotic manipulation. I am also involved in the Prosave project, studying the suitability of Time-of-Flight cameras for airplane applications in collaboration with Airbus Military. My main interests are computer vision for robotics, RGB-D cameras, object recognition, and statistical approaches to visual event detection.

March 31, 2011, at 07:26 PM by 87.217.160.151 -
Deleted lines 4-5:

27 years old

August 12, 2009, at 02:34 PM by 82.123.33.143 -
Changed lines 7-8 from:
to:
August 12, 2009, at 02:28 PM by 82.123.33.143 -
Changed line 72 from:
Université de Liège: postdoc.
Statistical approaches for monocular human pose estimation.
Application to the visual analysis of sign language.
to:
Université de Liège: postdoc.
Statistical approaches for monocular human body pose estimation.
Application to the visual analysis of sign language.
August 12, 2009, at 02:27 PM by 82.123.33.143 -
Changed line 72 from:
Université de Liège: postdoc.
Statistical approaches for human pose tracking.
Application to the analysis of sign language.
to:
Université de Liège: postdoc.
Statistical approaches for monocular human pose estimation.
Application to the visual analysis of sign language.
August 12, 2009, at 02:17 PM by 82.123.33.143 -
Added lines 71-74:
2009 - 6 months Université de Liège: postdoc.
Statistical approaches for human pose tracking.
Application to the analysis of sign language.
Changed line 76 from:
Université de Liège: postdoc.
\textbf{Statistical approaches} for human pose tracking.
Application to the analysis of sign language.
to:
ENSTA: Master's internship.
Statistical detection of salient objects on a programmable artificial retina.
Application to the detection of meaningful segments.
Deleted lines 78-81:
2005 - 6 months ENSTA: Master's internship.
Statistical detection of salient objects on a programmable artificial retina.
Application to the detection of meaningful segments.
August 12, 2009, at 02:17 PM by 82.123.33.143 -
Changed line 72 from:
ENSTA: Master's internship.
Statistical detection of salient objects on a programmable artificial retina.
Application to the detection of meaningful segments.
to:
Université de Liège: postdoc.
\textbf{Statistical approaches} for human pose tracking.
Application to the analysis of sign language.
Added lines 75-78:
2005 - 6 months ENSTA: Master's internship.
Statistical detection of salient objects on a programmable artificial retina.
Application to the detection of meaningful segments.
August 12, 2009, at 02:15 PM by 82.123.33.143 -
Changed lines 9-10 from:

Resume in English PDF

to:

Resume in English PDF

August 12, 2009, at 02:14 PM by 82.123.33.143 -
Changed line 22 from:
PhD at UPMC (Université Pierre et Marie Curie, Paris 6), hosted at ENSTA (École Nationale Supérieure de Techniques Avancées), supervised by Thierry Bernard (ENSTA) and Jean-Michel Jolion (LIRIS - INSA Lyon). A-contrario learning and an efficient architecture for the detection of meaningful visual events (more details).
to:
PhD at UPMC (Université Pierre et Marie Curie, Paris 6), hosted at ENSTA (École Nationale Supérieure de Techniques Avancées), supervised by Thierry Bernard (ENSTA) and Jean-Michel Jolion (LIRIS - INSA Lyon). A-contrario learning and an efficient architecture for the detection of meaningful visual events (more details).