AUTOLAB: UC Berkeley Automation Lab


UC Berkeley’s Laboratory for Automation Science and Engineering (AUTOLAB), directed by Professor Ken Goldberg, is a world-renowned center for research in robotics and automation sciences.

The AUTOLAB supports over 30 postdocs, PhD students, and undergraduates pursuing projects in Cloud Robotics, Learning from Demonstrations, Robust Robot Grasping and Manipulation for Warehouse Logistics, Home Robotics, Deep Reinforcement Learning, Computer Assisted Surgery, and New Media Artforms.


AUTOLAB Research Papers: http://goldberg.berkeley.edu/pubs/

This research is sponsored in part by NSF, NDSEG, USDA, Siemens, Honda, Autodesk, Google, Toyota Research Institute, Amazon Robotics, Intel, Loccioni, Knapp, Samsung, Cisco, and Intuitive Surgical. The AUTOLAB is located in 2111 Etcheverry Hall and 444 Soda Hall, UC Berkeley.

A Sample of Ongoing Projects:
 
Robust Grasping and Manipulation. Combining analytic mechanics theory with new results in Deep Learning opens the door to highly efficient, robust grasping that generalizes to new objects. Recent results with our Grasp Quality Convolutional Neural Network (GQ-CNN) achieved 99% precision on test objects.
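
As a concrete illustration, here is a minimal sketch (in PyTorch) of a network in this spirit: it maps a depth-image crop centered on a candidate grasp, plus the gripper depth, to a predicted probability of grasp success. The GraspQualityNet name, the layer sizes, and the 32x32 crop size are hypothetical choices for the sketch, not the published GQ-CNN architecture.

import torch
import torch.nn as nn

class GraspQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional trunk over a 32x32 depth crop centered on the grasp.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fuse image features with the scalar gripper depth.
        self.head = nn.Sequential(
            nn.Linear(32 * 8 * 8 + 1, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit of grasp success
        )

    def forward(self, depth_crop, gripper_depth):
        z = self.conv(depth_crop).flatten(start_dim=1)
        z = torch.cat([z, gripper_depth], dim=1)
        return torch.sigmoid(self.head(z))

# Usage: score a batch of candidate grasps and pick the most robust one.
net = GraspQualityNet()
crops = torch.randn(64, 1, 32, 32)   # candidate-centered depth crops
depths = torch.rand(64, 1)           # gripper depth per candidate
scores = net(crops, depths)          # predicted robustness in [0, 1]
best = scores.argmax()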
 

Cloud Robotics. Cloud Computing offers many potential benefits for robots and automation: the ability to access remote data, code, and processing, and to perform distributed policy learning. We are developing new Cloud Robotics algorithms and platforms such as the Dexterity Network. The benefits of large-scale learning with datasets of millions of examples have recently been illustrated in image classification and speech recognition, where Deep Learning has surpassed decades of previous research. This suggests that machine learning of grasps over a large-scale dataset of objects with varied shapes, poses, and environment configurations could yield similarly disruptive improvements. The Dexterity Network (Dex-Net) is a growing dataset of tens of thousands of 3D object models, each labeled with parallel-jaw grasps that cover the object surface and with measures of grasp quality and robustness to imprecision in sensing and actuation; we use it to study how learning to predict successful grasps scales to physical systems.
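
As a toy illustration of the robustness idea, the sketch below scores a nominal parallel-jaw grasp by Monte Carlo sampling over contact-normal noise and counting how often a quality check still passes. The antipodality test is a simplified stand-in for Dex-Net's actual quality metrics, and the function names are hypothetical.

import numpy as np

def grasp_quality(contact_normals, friction_coef=0.5):
    """Toy antipodal check: the grasp succeeds if the two contact
    normals are nearly opposed, within the friction cone half-angle."""
    n1, n2 = contact_normals
    cos_angle = np.dot(n1, -n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return cos_angle >= np.cos(np.arctan(friction_coef))

def robustness(nominal_normals, n_samples=1000, noise_std=0.05, seed=0):
    """Fraction of perturbed grasps that still pass the quality check."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_samples):
        perturbed = [n + rng.normal(0.0, noise_std, size=3)
                     for n in nominal_normals]
        successes += grasp_quality(perturbed)
    return successes / n_samples

# Nominal grasp: perfectly antipodal contacts along the x-axis.
normals = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
print(robustness(normals))  # near 1.0 for this easy grasp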
 

Deep Learning from Demonstrations. By leveraging recent advances in Deep Learning, Active Learning, Augmented Reality, and Learning from Demonstrations, we are developing new methods by which humans can efficiently teach robots to robustly perform tasks such as surgical needle insertion, grasping in clutter, part singulation, rope tying, and assembly. We are investigating optimally concise and relevant representations of the state, sensing, and task dynamics. Initial results suggest that active supervision schemes are not uniformly beneficial, because human supervisors are fallible and active queries can increase sample complexity; this motivates new research into adaptive LfD methods compatible with highly expressive Deep Learning architectures.
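
The sketch below illustrates one widely used interactive scheme of this kind, DAgger-style dataset aggregation: roll out the current policy, have the supervisor relabel the states it visits, aggregate, and retrain. The LineWorld environment and the supervisor here are toy stand-ins, not lab code.

import numpy as np
from sklearn.linear_model import LogisticRegression

def dagger(env, supervisor, n_iters=5, horizon=50):
    """Aggregate supervisor labels on the states the learner visits."""
    states, actions = [], []
    policy = None
    for _ in range(n_iters):
        s = env.reset()
        for _ in range(horizon):
            # The supervisor labels every visited state with the
            # action it would have taken there.
            states.append(s)
            actions.append(supervisor(s))
            # Act with the current policy (supervisor on the first pass).
            a = supervisor(s) if policy is None else policy.predict([s])[0]
            s = env.step(a)
        # Retrain on the aggregated dataset.
        policy = LogisticRegression(max_iter=1000).fit(states, actions)
    return policy

class LineWorld:
    """Toy 1-D world: action 1 moves right by 0.1, action 0 moves left."""
    def reset(self):
        self.x = float(np.random.default_rng(0).uniform(-1.0, 1.0))
        return [self.x]
    def step(self, a):
        self.x += 0.1 if a == 1 else -0.1
        return [self.x]

# Supervisor: always step toward the origin.
policy = dagger(LineWorld(), supervisor=lambda s: int(s[0] < 0))
print(policy.predict([[0.3], [-0.3]]))  # expect [0, 1]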
 

Computer Assisted Surgery. To improve patient care and more accurately target treatment within the human body, we're developing new geometric models and algorithms for automating surgical subtasks such as suturing and debridement using the da Vinci surgical robot. We are also developing models of soft tissue, new methods for dose planning in brachytherapy, and planning algorithms for steering flexible needles.
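
As a toy illustration of the needle-steering problem, the sketch below integrates simplified 2-D bevel-tip kinematics: the needle follows constant-curvature arcs, and rotating the bevel flips the arc direction. The greedy bang-bang controller and all parameter values are illustrative assumptions, not our clinical planning algorithms.

import numpy as np

def simulate_needle(start, heading, target, kappa=0.5, step=0.05, n_steps=200):
    """Steer a bevel-tip needle toward a target by choosing, at each
    step, the curvature sign that reduces the heading error."""
    pos = np.array(start, dtype=float)
    theta = heading
    path = [pos.copy()]
    for _ in range(n_steps):
        dx, dy = target - pos
        err = (np.arctan2(dy, dx) - theta + np.pi) % (2 * np.pi) - np.pi
        u = kappa if err > 0 else -kappa   # bevel left or bevel right
        theta += u * step
        pos += step * np.array([np.cos(theta), np.sin(theta)])
        path.append(pos.copy())
        if np.linalg.norm(target - pos) < step:
            break
    return np.array(path)

path = simulate_needle(start=(0.0, 0.0), heading=0.0,
                       target=np.array([1.0, 0.5]))
print(path[-1])  # tip position near the target if it is reachable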
 

Sequential Windowed Inverse Reinforcement Learning (SWIRL). Inverse Reinforcement Learning (IRL), where the implicit reward function is learned from examples, holds great promise for increasing generalization in robot learning. We are extending IRL to more complex tasks with new methods that automatically segment demonstrations into short linear windows and compute a reward function for each window using Bayesian inference. Initial results suggest that the sequential structure inferred from a small number of demonstrations can accelerate learning, reduce sensitivity to noise, improve policy performance, and enhance generalization. We are also studying how learning the subtask structure of demonstrations can enable the construction of control hierarchies that generalize effectively.
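
A minimal sketch of this two-stage structure (segment, then assign a reward per segment) appears below. The direction-change heuristic and the subgoal-distance rewards are simplified stand-ins for SWIRL's actual segmentation and Bayesian reward inference.

import numpy as np

def segment(demo, threshold=0.5):
    """Split a (T, d) trajectory where the motion direction changes
    sharply, a crude proxy for the end of a locally linear window."""
    boundaries = [0]
    for t in range(1, len(demo) - 1):
        v_prev = demo[t] - demo[t - 1]
        v_next = demo[t + 1] - demo[t]
        cos = np.dot(v_prev, v_next) / (
            np.linalg.norm(v_prev) * np.linalg.norm(v_next) + 1e-9)
        if cos < threshold:
            boundaries.append(t)
    boundaries.append(len(demo) - 1)
    return boundaries

def segment_rewards(demo, boundaries):
    """One reward per window: negative distance to the segment's
    final state, treated as an inferred subgoal."""
    return [lambda s, g=demo[end]: -np.linalg.norm(s - g)
            for end in boundaries[1:]]

# Demo: move right, then turn and move up (an L-shaped trajectory).
demo = np.array([[t, 0.0] for t in range(5)] +
                [[4.0, t] for t in range(1, 5)])
print(segment(demo))  # expect a boundary at the corner, index 4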
 

Energy-Bounded Caging. We are extending caging theory for robot grasping by defining “energy-bounded” cages with respect to an energy field such as gravity. Caging grasps restrict object motion without requiring complete immobilization, providing a practical alternative to force- and form-closure grasps. Completely caging an object is useful but may not be necessary when forces such as gravity are present. We developed EBCA-2D, an algorithm that computes the minimum escape energy of a fixed planar configuration using weighted alpha shapes, and extended it to EBCS-2D, which synthesizes rigid configurations of gripper fingers relative to objects using persistent homology.
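
The grid-based sketch below illustrates the underlying computation as a minimax-path search: the minimum escape energy is the smallest peak energy above the start that the object must reach on any path to the workspace boundary. It is a coarse discretized stand-in for EBCA-2D, which operates on exact geometry with weighted alpha shapes.

import heapq
import numpy as np

def min_escape_energy(free, potential, start):
    """free: (H, W) bool grid, True where the object may be.
    potential: (H, W) energy field (here, gravity grows with height).
    Returns the smallest energy above the start needed to reach the
    grid boundary, or np.inf if the object is completely caged."""
    H, W = free.shape
    base = potential[start]
    best = np.full((H, W), np.inf)
    best[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        e, (r, c) = heapq.heappop(heap)
        if e > best[r, c]:
            continue                 # stale heap entry
        if r in (0, H - 1) or c in (0, W - 1):
            return e                 # reached the boundary: escaped
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if free[nr, nc]:
                # Path cost is the highest climb above the start.
                ne = max(e, potential[nr, nc] - base)
                if ne < best[nr, nc]:
                    best[nr, nc] = ne
                    heapq.heappush(heap, (ne, (nr, nc)))
    return np.inf

# Toy scene: a U-shaped gripper opening upward; escaping over the rim
# requires climbing against gravity.
free = np.ones((20, 20), dtype=bool)
free[5:15, 5] = False    # left finger
free[5:15, 14] = False   # right finger
free[14, 5:15] = False   # palm
potential = np.arange(20)[::-1].reshape(-1, 1) * np.ones((1, 20))
print(min_escape_energy(free, potential, (12, 10)))  # climb over the rim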
New Media Artforms. To discover what can be expressed with new technologies such as networks, robots, digital cameras, and sensors that could not be expressed before, we're designing art installations that explore issues of cultural history, privacy, and "telepistemology," the study of what is knowable at a distance.

CONTACT INFORMATION

Email:

liang.shuai.robotic@gmail.com


Office:

2111 Etcheverry Hall, UC Berkeley
Berkeley, California, United States

