With the recent growth of virtual reality (VR) applications, there is a demand for highly immersive environments in which the avatar that the user embodies reflects every action in the virtual world as precisely as possible. The main action humans use to interact with the world is grasping objects with their hands. In the real world, the human hand and fingers are constrained by the object's shape and its intended use. In a virtual environment, however, where realistic physical contact with the object cannot be sensed by the user, the visual representation of a virtual hand grasping various objects requires tedious manual animation.
Gleechi provides a software solution called VirtualGrasp which makes it possible to animate natural-looking grasping interactions in real time based on the constraints of the virtual world (such as the shape of objects and the kinematics of the hand). This solution is not a hand tracking algorithm but a tool that animates a given hand model.
In VR applications, an important measure of success for such a system is to create hand and finger motions that both satisfy the physical constraints imposed by the object and look natural and realistic to the human eye. The first is easy to measure; the second, however, is difficult to achieve. We believe a data-driven approach exploiting machine learning techniques is a good way to quantify the "realism" and "naturalness" of the grasps. Such an approach also provides a foundation for synthesizing grasps that satisfy the user's intention when interacting in the virtual world.
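To make the "easy to measure" physical side concrete, here is a minimal illustrative sketch (not VirtualGrasp's actual metric; the function name, grid, and sample points are assumptions made up for this example) of one such constraint check: counting hand sample points that penetrate the object's volume.

```python
import numpy as np

def penetration_count(points, occupancy, resolution):
    """Count sample points that fall inside occupied voxels of an object.

    `points` are hand sample positions (e.g. fingertip markers) already
    normalized to the unit cube; `occupancy` is the object's binary
    voxel grid at the given resolution. A physically plausible grasp
    should touch the surface without the fingers entering the volume.
    """
    idx = np.clip((np.asarray(points) * resolution).astype(int),
                  0, resolution - 1)
    return int(occupancy[idx[:, 0], idx[:, 1], idx[:, 2]].sum())

# Toy object: a solid cube occupying the centre of an 8^3 grid.
grid = np.zeros((8, 8, 8), dtype=np.uint8)
grid[2:6, 2:6, 2:6] = 1

# One fingertip inside the object, one well outside it.
fingertips = np.array([[0.5, 0.5, 0.5], [0.0, 0.0, 0.0]])
print(penetration_count(fingertips, grid, 8))  # 1 penetrating point
```

A zero count only rules out interpenetration; judging whether the pose also looks natural is exactly the part this project proposes to learn from data.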
Recently, machine learning techniques that exploit deep neural networks have achieved significant progress on many practical industrial problems. In the context of 3D geometric data, deep neural networks (DNNs) are an active research area with many potential applications, ranging from 3D shape reconstruction and segmentation to recognition and retrieval. The goal of this thesis is to exploit DNNs for object shape representation for the purpose of human hand grasp animation. The previous thesis project at Gleechi [1] has laid a good foundation for this. The scope of the current project is to continue the work of [1], with the goal of applying DNNs to part-based object representation, and of deriving a quantifiable measure to evaluate the quality of generated hand grasp motion on a given object.
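As a concrete illustration of what "object shape representation" for a DNN can mean, the sketch below (an assumption for illustration only, not the representation used in [1] or in this project) converts a point cloud into a fixed-size binary voxel occupancy grid, a common input format for 3D deep learning.

```python
import numpy as np

def voxelize(points, resolution=32):
    """Convert an Nx3 point cloud into a binary occupancy grid.

    The points are first normalized into the unit cube so that every
    object, regardless of scale, maps onto the same fixed-size grid.
    """
    points = np.asarray(points, dtype=np.float64)
    mins = points.min(axis=0)
    extent = np.maximum(points.max(axis=0) - mins, 1e-9)
    normalized = (points - mins) / extent.max()
    # Map each point to a voxel index and mark that voxel occupied.
    idx = np.clip((normalized * resolution).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Example: voxelize the 8 corners of a cube at low resolution.
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
grid = voxelize(corners, resolution=4)
print(grid.sum())  # 8 occupied voxels, one per corner
```

Voxel grids are only one option; multi-view renderings [3] and volumetric part primitives [2] are alternative representations surveyed in the references below.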
Summarize the state of the art in deep learning for modeling and representing 3D object shape, segmenting the shape, and representing hand motion and grasp synthesis.
Collect training database for object-grasp representations.
Implement modeling and training of DNNs, preferably using the Caffe2 deep learning framework, in C++.
Test, optimize and evaluate the implemented process using the database.
Summarize and discuss the findings in a report / thesis.
Supervisor at Gleechi: Dr. Dan Song
[1] Sylvain Potuaud. "Human Grasp Synthesis with Deep Learning". M.Sc. thesis, 2018.
[2] Shubham Tulsiani et al. "Learning Shape Abstractions by Assembling Volumetric Primitives". CVPR, 2017.
[3] Hang Su, Subhransu Maji, Evangelos Kalogerakis, et al. "Multi-view Convolutional Neural Networks for 3D Shape Recognition". 2015.
[4] Introduction slides on 3D deep learning: http://ai.stanford.edu/~haosu/slides/IntroTo3DDL.pdf
We are a Stockholm-based startup with roots in robotics research, and the first in the world to enable natural artificial hand movement and interaction in real time. Our software enables realistic interaction in VR games, improves learning in industry training, and helps stroke patients do rehabilitation in VR.
We're a small but fast-growing team that combines award-winning entrepreneurs, top-ranked robotics researchers and experienced developers. The company was founded in 2014, and since then we have been named Super Startup of the Year by Veckans Affärer, won the European startup competition EIT Digital Idea Challenge, and much more.
We've got a ridiculously exciting time ahead and we'd love to get more awesome people on board!