I am currently a robotics research engineer at the Robotics Research Centre of Nanyang Technological University, Singapore (NTU). Before that, I was fortunate to serve as a Research Associate at the University of Hong Kong, where I participated in the DARPA Robotics Challenge (DRC) until the end of December 2015. My research interests include artificial intelligence, whole-body manipulation control, motion planning, robot vision & perception, path planning, and localization.
University of Manchester (UK)
MSc in Advanced Control & System Engineering
University of Leeds (UK)
BEng in Mechatronics and Robotics Engineering
Nanyang Technological University, Singapore (NTU)
Robotics Research Engineer
January 2016 - Present
Design and build a fully autonomous mobile picker robot system for e-commerce warehouses. The robot can autonomously move, identify, locate, and pick-and-place target objects according to purchase orders. My research focuses on, inter alia, motion planning, grasp analysis and planning, path planning, robot perception (object identification and pose estimation), base movement control, localization, and overall system setup and integration with the Robot Operating System (ROS).
The University of Hong Kong (HKU)
July 2013 - December 2015
Team member of HKU, participating in the DARPA Robotics Challenge.
My research focus within the team was whole-body manipulation control, motion planning, robot vision and perception, robot hand design, and motor control on the Robot Operating System (ROS).
I was deeply involved in the manipulation and robot vision & perception areas. I worked on robot kinematics and the associated continuous-mathematics calculations based on the D-H (Denavit-Hartenberg) parameters to improve Atlas's manipulation control. This foundational program enabled Atlas to perform several manipulation tasks such as valve turning, wall cutting, and pick-and-place. In those tasks, I programmed Atlas to identify the objects (valve, wall, drill) and then pick them up, move them, or cut them as required by the challenge organisers. All perception and motion planning were generated automatically by the pre-formatted program.
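The D-H convention mentioned above reduces each joint-to-joint transform to four parameters. As a minimal illustration (not the actual Atlas code, whose kinematic parameters are not given here), the following sketch builds the classic D-H transform and chains it for a hypothetical planar two-link arm:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive link frames,
    using the classic Denavit-Hartenberg convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; returns the end-effector pose."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Hypothetical planar 2-link arm (link lengths 1.0 m and 0.5 m),
# both joints at 0 rad: the end effector sits at x = 1.5 m.
pose = forward_kinematics([(0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 0.5, 0.0)])
print(np.round(pose[:3, 3], 3))
```

The same chaining extends to a full arm by adding one `(theta, d, a, alpha)` row per joint.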
Artificial Intelligence (AI)
Artificial intelligence (AI) is arguably the most exciting field in robotics. It's certainly the most controversial: Everybody agrees that a robot can work in an assembly line, but there's no consensus on whether a robot can ever be intelligent.
Like the term "robot" itself, artificial intelligence is hard to define. Ultimate AI would be a recreation of the human thought process -- a man-made machine with our intellectual abilities. This would include the ability to learn just about anything, the ability to reason, the ability to use language and the ability to formulate original ideas.
Roboticists are nowhere near achieving this level of artificial intelligence, but they have made a lot of progress with more limited AI. Today's AI machines can replicate some specific elements of intellectual ability.
A humanoid robot is a robot with its overall appearance based on that of the human body.
In general humanoid robots have a torso with a head, two arms and two legs, although some forms of humanoid robots may model only part of the body, for example, from the waist up. Some humanoid robots may also have a 'face', with 'eyes' and 'mouth'.
Whole Body Motion Planning
Humanoid service robots performing complex object manipulation tasks need to plan whole-body motions that satisfy a variety of constraints: the robot must keep its balance, self-collisions and collisions with obstacles in the environment must be avoided, and, if applicable, the trajectory of the end-effector must follow the constrained motion of a manipulated object in Cartesian space. These constraints and the high number of degrees of freedom make whole-body motion planning for humanoids a challenging problem.
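One of the constraints named above, static balance, is often approximated by requiring the ground projection of the centre of mass to lie inside the support polygon. This is a simplified sketch of such a check (the vertex data and function name are illustrative, not taken from any particular planner):

```python
import numpy as np

def com_inside_support_polygon(com_xy, polygon_xy):
    """Static-balance test: the ground projection of the centre of
    mass must lie inside the support polygon, whose vertices are
    given in counter-clockwise order. A planner would evaluate such
    a test at every sampled whole-body configuration."""
    n = len(polygon_xy)
    for i in range(n):
        p = np.asarray(polygon_xy[i], dtype=float)
        q = np.asarray(polygon_xy[(i + 1) % n], dtype=float)
        edge = q - p
        # 2-D cross product: negative sign means the point lies to
        # the right of this CCW edge, i.e. outside the polygon.
        if edge[0] * (com_xy[1] - p[1]) - edge[1] * (com_xy[0] - p[0]) < 0:
            return False
    return True

# Illustrative rectangular footprint under both feet (CCW vertices):
feet = [(-0.1, -0.15), (0.1, -0.15), (0.1, 0.15), (-0.1, 0.15)]
print(com_inside_support_polygon((0.0, 0.05), feet))  # balanced
print(com_inside_support_polygon((0.3, 0.0), feet))   # would tip over
```

Real planners combine this with self-collision and obstacle-collision checks and an end-effector path constraint, rejecting any configuration that violates one of them.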
Robot Vision & Perception
Robot vision & perception give a robot the ability to see and interact with its environment. It is an important capability for any smart robot.
Localization involves one question: Where is the robot now? Or, robo-centrically, where am I, keeping in mind that "here" is relative to some landmark (usually the point of origin or the destination) and that you are never lost if you don't care where you are.
Although a simple question, answering it isn't easy, as the answer is different depending on the characteristics of your robot. Localization techniques that work fine for one robot in one environment may not work well or at all in another environment.
For example, localization techniques that work well outdoors may be useless indoors. All localization techniques generally provide two basic pieces of information:
what is the current location of the robot in some environment?
what is the robot's current orientation in that same environment?
The first could be in the form of Cartesian or polar coordinates, or geographic latitude and longitude. The second could be a combination of roll, pitch, and yaw, or a compass heading.
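The coordinate conventions above interconvert with a line or two of trigonometry. This sketch (conventions assumed: bearings measured clockwise from north, yaw counter-clockwise from the x-axis) shows a polar-to-Cartesian fix and a compass-heading-to-yaw conversion:

```python
import math

def polar_to_cartesian(r, theta):
    """Convert a polar fix (range r, angle theta in radians,
    measured counter-clockwise from the x-axis) to Cartesian x, y."""
    return r * math.cos(theta), r * math.sin(theta)

def heading_to_yaw(compass_deg):
    """Convert a compass heading (degrees clockwise from north) to a
    yaw angle in radians (counter-clockwise from the x-axis, taken
    here to point east) -- one common convention, not the only one."""
    return math.radians((90.0 - compass_deg) % 360.0)

x, y = polar_to_cartesian(2.0, math.pi / 2)  # 2 m along the y-axis
print(round(x, 3), round(y, 3))
print(round(heading_to_yaw(90.0), 3))        # due-east heading
```

Whichever pair of conventions a robot uses, position plus orientation together form the pose that a localization system estimates.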
AWARDS & HONOURS
Certified LabVIEW Associate Developer (CLAD)
October 2012 - October 2014