A brief look at the role machine learning has played in robotics

Anyone who watched the first lecture of Andrew Ng's open course on machine learning was likely impressed by his use of reinforcement learning to train and control robots such as helicopters and teach them new skills.

So what are the applications of machine learning in robotics? This article offers a brief overview.

1. Computer Vision

Because "robot vision" does not only involve computer algorithms, some people think that the correct term is machine vision or robot vision. Robotists or engineers must also choose camera hardware that allows the robot to process physical data. Robot vision is closely related to machine vision, which is used to guide robot guidance and automatic detection systems. The slight difference between them may be applied to the kinematics of robot vision, which includes reference frame calibration and the ability of the robot to physically affect its environment.

The influx of big data, that is, the visual information available on the web (including annotated/tagged photos and videos), has propelled advances in computer vision, which in turn has helped further machine-learning-based structured prediction techniques and driven robot vision applications such as object identification and sorting. One offshoot example is anomaly detection through unsupervised learning, such as systems built to find and evaluate defects in silicon chips using convolutional neural networks, designed by researchers at the Biomimetic Robotics and Machine Learning Lab, part of the nonprofit Assistenzrobotik e.V. in Munich. Sensing technologies beyond vision, such as radar, lidar, and ultrasound, have likewise driven the development of 360-degree vision systems for autonomous vehicles and drones.
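
As a rough illustration of CNN-based anomaly detection of the kind mentioned above, and not the Munich lab's actual model, here is a minimal convolutional-autoencoder sketch in PyTorch: images of defect-free parts are reconstructed well, so a high reconstruction error flags a likely defect. All shapes and data here are placeholders.

```python
# Hedged sketch: a convolutional autoencoder for surface-defect anomaly
# detection. Train it on images of good parts only; a high reconstruction
# error on a new image suggests an anomaly.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2),    # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),     # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def defect_score(model, image):
    """Mean squared reconstruction error; higher means more anomalous."""
    with torch.no_grad():
        recon = model(image)
    return torch.mean((recon - image) ** 2).item()

# Usage sketch: after training on defect-free images, flag any test image
# whose score exceeds a threshold chosen on validation data.
model = ConvAutoencoder()
sample = torch.rand(1, 1, 64, 64)      # placeholder grayscale image
print(defect_score(model, sample))
```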


2. Imitation learning

Imitation learning is closely related to observational learning, a behavior exhibited by infants and toddlers. It also falls under the broad umbrella of reinforcement learning, the challenge of getting an agent to act in the world so as to maximize its rewards. Bayesian or probabilistic models are a common feature of this machine learning approach. The question of whether imitation learning could be used for humanoid robots was posed as early as 1999.

Imitation learning has become an integral part of field robotics, where the mobile conditions of industries such as construction, agriculture, search and rescue, and the military make it challenging to program robotic solutions by hand. Examples include inverse optimal control methods, or "programming by demonstration (PbD)," which CMU and other organizations have applied to humanoid robots, legged locomotion, and rough-terrain mobile navigation. Researchers at Arizona State University published a video two years ago demonstrating a humanoid robot that uses imitation learning to acquire different grasping skills.
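
As a minimal sketch of the "programming by demonstration" idea, and not any cited group's method, the following example performs behavioral cloning: it fits a simple policy, by least squares, that maps demonstrated states to the demonstrator's actions. The synthetic data and the linear policy are assumptions chosen for brevity.

```python
# Hedged sketch of behavioral cloning: learn a policy from demonstrations
# by supervised regression from states to the expert's actions.
import numpy as np

rng = np.random.default_rng(0)

# Demonstration data: states (e.g. joint angles, object pose) and the
# actions a human teleoperator took in those states (synthetic here).
demo_states = rng.normal(size=(500, 6))            # 500 steps, 6-D state
true_weights = rng.normal(size=(6, 2))             # unknown "expert" mapping
demo_actions = demo_states @ true_weights + 0.05 * rng.normal(size=(500, 2))

# Behavioral cloning: fit the state-to-action mapping by least squares.
weights, *_ = np.linalg.lstsq(demo_states, demo_actions, rcond=None)

def policy(state):
    """Imitated policy: predict an action for a new state."""
    return state @ weights

print(policy(rng.normal(size=6)))                  # 2-D action for a new state
```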

Bayesian belief networks have also been applied to forward learning models, in which the robot learns about its motor system or external environment with no prior knowledge. An example of this is "motor babbling," as explored by the Language Acquisition and Robotics Group at the University of Illinois at Urbana-Champaign (UIUC) with Bert, an "iCub" humanoid robot.
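
A toy sketch of motor babbling, assuming an invented two-link arm rather than the iCub, looks like this: the robot issues random commands, records the outcomes, and uses that experience alone as a forward model of its own body.

```python
# Hedged sketch of "motor babbling": random motor commands and their observed
# outcomes become a forward model, with no prior knowledge of the body.
import numpy as np

rng = np.random.default_rng(1)

def unknown_body(joint_angles):
    """The robot's 'unknown' body: hand position of a 2-link planar arm."""
    a1, a2 = joint_angles
    return np.array([np.cos(a1) + np.cos(a1 + a2),
                     np.sin(a1) + np.sin(a1 + a2)])

# Babbling phase: try random joint commands, record what the hand did.
commands = rng.uniform(-np.pi, np.pi, size=(2000, 2))
outcomes = np.array([unknown_body(c) for c in commands])

def predict_hand_position(command, k=5):
    """Forward model: average the outcomes of the k most similar commands
    seen during babbling (a simple nearest-neighbour predictor)."""
    dists = np.linalg.norm(commands - command, axis=1)
    nearest = np.argsort(dists)[:k]
    return outcomes[nearest].mean(axis=0)

test_command = rng.uniform(-np.pi, np.pi, size=2)
print("predicted:", predict_hand_position(test_command))
print("actual:   ", unknown_body(test_command))
```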

3. Self-supervised learning

Self-supervised learning approaches enable robots to generate their own training examples in order to improve performance; this includes using a priori training and data captured at close range to interpret "long-range ambiguous sensor data." The approach has been incorporated into robots and optical devices that can detect and reject objects (such as dust and snow), identify vegetation and obstacles in rough terrain, and analyze and model vehicle dynamics in 3D scenes.
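
The core trick, using sensing the robot trusts at close range to label data it sees at long range, can be sketched as follows; the terrain features, labels, and logistic-regression classifier are illustrative assumptions, not any cited system's design.

```python
# Hedged sketch of self-supervised labeling: readings the robot trusts
# (close-range contact data) become labels for training a classifier on
# long-range camera features of the same terrain patches.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Each row: visual features of a terrain patch seen from far away (synthetic).
far_features = rng.normal(size=(800, 4))

# When the robot later drives over the patch, close-range sensing gives a
# trustworthy "traversable?" label; no human annotation is involved.
close_range_labels = (far_features @ np.array([1.0, -0.5, 0.3, 0.0])
                      + 0.2 * rng.normal(size=800)) > 0

# Train on self-generated labels, then classify terrain still far ahead.
clf = LogisticRegression().fit(far_features, close_range_labels)
upcoming_patches = rng.normal(size=(5, 4))
print(clf.predict(upcoming_patches))     # True = predicted traversable
```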

Watch-Bot, created by researchers at Cornell and Stanford, is a concrete example. It uses a 3D sensor (a Kinect), a camera, a laptop, and a laser pointer to detect "normal human activity," a pattern it learns through probabilistic methods. Watch-Bot uses the laser pointer to highlight an object as a reminder (for example, the milk left out of the refrigerator). In initial tests the robot successfully reminded humans 60 percent of the time (it has no conception of what it is doing or why), and the researchers expanded the experiment by letting their robot learn from online videos in a follow-up project called RoboWatch.
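
A stripped-down sketch of the probabilistic idea, assuming invented numbers rather than Watch-Bot's actual model, is to learn what "normal" looks like and flag low-probability deviations:

```python
# Hedged sketch of Watch-Bot-style monitoring: fit a simple model of "normal"
# activity (here, a Gaussian over how long an object is typically left out)
# and flag observations the model finds unlikely.
import numpy as np

# Minutes the milk was observed out of the fridge during normal routines.
normal_durations = np.array([1.5, 2.0, 1.0, 2.5, 3.0, 1.8, 2.2, 1.2])
mu, sigma = normal_durations.mean(), normal_durations.std()

def is_anomalous(duration_minutes, z_threshold=3.0):
    """Flag the activity if it is far outside the learned 'normal' range."""
    z = abs(duration_minutes - mu) / sigma
    return z > z_threshold

print(is_anomalous(2.4))    # False: looks like normal behaviour
print(is_anomalous(45.0))   # True: time to point the laser at the milk
```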

Other examples of self-supervised learning applied to robotics include a road-detection algorithm for forward-facing monocular cameras that combines a road probability distribution model (RPDM) with fuzzy support vector machines (FSVMs), designed at MIT for autonomous vehicles and other on-road mobile robots.

Autonomous learning, a variant of self-supervised learning that involves deep learning and unsupervised methods, has also been applied to robot and control tasks. A team at Imperial College London, working with researchers from the University of Cambridge and the University of Washington, created a new method for speeding up learning that incorporates model uncertainty (a probabilistic model) into long-term planning and controller learning, thereby reducing the effect of model errors when the robot learns a new skill.
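
One simple way to picture "incorporating model uncertainty into planning", though not the cited team's formulation, is to learn an ensemble of dynamics models and score candidate actions by their average predicted cost across the ensemble, as in this toy one-dimensional sketch:

```python
# Hedged sketch of uncertainty-aware control: an ensemble of learned dynamics
# models stands in for a probabilistic model; actions the models disagree on
# are not over-trusted because cost is averaged across the ensemble.
import numpy as np

rng = np.random.default_rng(3)

def true_dynamics(x, u):
    return 0.9 * x + 0.5 * u + 0.05 * rng.normal()

# Collect a small amount of experience (state, action, next state).
X = rng.normal(size=200)
U = rng.uniform(-1, 1, size=200)
X_next = np.array([true_dynamics(x, u) for x, u in zip(X, U)])

# Fit an ensemble of linear models on bootstrap resamples of the data.
models = []
for _ in range(10):
    idx = rng.integers(0, 200, size=200)
    A = np.column_stack([X[idx], U[idx], np.ones(200)])
    w, *_ = np.linalg.lstsq(A, X_next[idx], rcond=None)
    models.append(w)

def expected_cost(x0, u, horizon=5):
    """Average cost of repeating action u, over all ensemble members."""
    costs = []
    for w in models:
        x, cost = x0, 0.0
        for _ in range(horizon):
            x = w[0] * x + w[1] * u + w[2]
            cost += x ** 2                # want the state near zero
        costs.append(cost)
    return np.mean(costs)

candidates = np.linspace(-1, 1, 21)
best_u = min(candidates, key=lambda u: expected_cost(x0=2.0, u=u))
print("chosen action:", best_u)
```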

4. Assistive and medical technology

An assistive robot is a device that can sense, process sensory information, and perform actions that benefit people with disabilities and seniors (although intelligent assistive technologies also exist for the general population, such as driver-assistance tools). Movement therapy robots provide a diagnostic or therapeutic benefit. Both, unfortunately, remain technologies largely confined to the laboratory, because they are still too costly for most hospitals in the US and abroad.

Early examples of assistive technology include the DeVAR, or desktop vocational assistant robot, developed in the early 1990s by Stanford University and the Palo Alto Veterans Affairs Rehabilitation Research and Development center. Newer examples of machine-learning-based robot-assistive technology are under development, combining assistive machines with greater autonomy, such as the MICO robotic arm (developed at Northwestern University), which observes the world through a Kinect sensor. The implications are more complex: smarter assistive robots can adapt more easily to user needs, but they also require partial autonomy (i.e., shared control between the robot and the human).

In the medical world, advances in robot learning methods are arriving quickly, although they are not yet common in most medical facilities. Working through Cal-MR, the Center for Automation and Learning for Medical Robotics, a network of researchers and physicians from multiple universities created the Smart Tissue Autonomous Robot (STAR). Thanks to autonomous learning and innovations in 3D sensing, STAR can suture "pig intestines" (used in place of human tissue) with better accuracy and reliability than the best human surgeons. The researchers and physicians are clear that STAR is not a replacement for surgeons, who for the foreseeable future will remain nearby to handle emergencies, but it offers significant benefits in performing these kinds of delicate surgeries.

5. Multi-agent learning

Coordination and negotiation are key components of multi-agent learning, which involves machine-learning-based robots (or agents; agent techniques are already widely used in games) adapting to a shifting landscape of other robots and agents and finding "equilibrium strategies." Examples of multi-agent learning approaches include no-regret learning tools, which involve weighted algorithms that "boost" learning outcomes in multi-agent planning, and learning in market-based, distributed control systems.
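
For concreteness, the sketch below implements one standard no-regret learner, the multiplicative-weights (Hedge) algorithm, on made-up losses; it illustrates the general idea rather than any of the cited systems.

```python
# Hedged sketch of a no-regret learner (multiplicative weights / Hedge):
# maintain a weight per action, play the normalized weights as a mixed
# strategy, and downweight actions that incurred high loss.
import numpy as np

rng = np.random.default_rng(4)

n_actions, n_rounds, eta = 3, 500, 0.1
weights = np.ones(n_actions)
total_loss, cumulative_per_action = 0.0, np.zeros(n_actions)

for _ in range(n_rounds):
    probs = weights / weights.sum()              # current mixed strategy
    losses = rng.uniform(0, 1, size=n_actions)   # losses revealed this round
    total_loss += probs @ losses                 # expected loss of the learner
    cumulative_per_action += losses              # loss of each fixed action
    weights *= np.exp(-eta * losses)             # downweight costly actions

regret = total_loss - cumulative_per_action.min()
print(f"average regret per round: {regret / n_rounds:.4f}")  # shrinks over time
```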

A more concrete example is an algorithm for distributed agents or robots created by researchers at MIT's Laboratory for Information and Decision Systems in late 2014. The robots collaborate to build a better and more inclusive learning model than a single robot could (smaller chunks of information are processed and then combined), based on the concept of exploring a building and its room layouts and autonomously building a knowledge base.

Each robot builds its own catalog and combines it with the data sets of the other robots, and the distributed algorithm outperforms the standard algorithm in creating this knowledge base. Although not a perfect system, this machine learning approach lets robots compare catalogs or data sets, reinforce one another's observations, and correct omissions or over-generalizations; it will almost certainly play a near-term role in several robotics applications, including fleets of autonomous land and airborne vehicles.
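
A minimal sketch of the catalog-merging idea, with invented rooms and objects rather than the MIT team's data or algorithm, might look like this:

```python
# Hedged sketch of robots merging locally built catalogs into one shared
# knowledge base, so omissions by one robot are filled in by the others.
from collections import defaultdict

# Each robot's catalog: room type -> set of objects it observed there.
robot_a = {"kitchen": {"sink", "fridge"}, "office": {"desk"}}
robot_b = {"kitchen": {"stove"}, "hallway": {"door"}}
robot_c = {"office": {"desk", "chair"}, "kitchen": {"sink"}}

def merge_catalogs(*catalogs):
    """Union the per-robot observations into a shared knowledge base."""
    merged = defaultdict(set)
    for catalog in catalogs:
        for room, objects in catalog.items():
            merged[room] |= objects
    return dict(merged)

shared = merge_catalogs(robot_a, robot_b, robot_c)
print(shared)   # e.g. kitchen now lists sink, fridge and stove
```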
