Mixers, graters, and blenders… Although they are growing in sophistication and increasingly common in our kitchens, today's food processors are little more than improved utensils. However, they may soon be joined by an entirely new generation of "real" cooking robots developed by the Bio-Inspired Robotics Laboratory at the University of Cambridge that make use of neural networks and artificial intelligence to learn and follow recipes. Last year, researchers from the laboratory presented a robot with the surprising ability to assess the saltiness of a dish at different stages of the chewing process. Now their latest project, detailed in a scientific article published in June 2023, aims to create a robotic salad chef that can prepare and mix ingredients, with the autonomous ability to learn new recipes. At this stage, the robot can make eight different salads composed of two or three ingredients selected from a choice of five fruits and vegetables, the idea being to demonstrate the robustness of the underlying technologies rather than to create a machine already capable of accomplishing highly complex cooking tasks.

Recognition of objects and the actions of human chefs

Training for the robot was based on the visual observation of human cooks. In concrete terms, this involved presenting the robot with film of a human member of the research team following recipes, which it analysed frame by frame with real-time computer-vision algorithms. For this purpose, the team made use of two existing neural networks, developed and updated for artificial intelligence research. The first, YOLO (published in 2016), was designed for the detection and recognition of objects and trained on the COCO database of realistic everyday scenes. In the context of the project, this component took charge of the identification of cooking utensils and ingredients as well as their relative positions. A second algorithm, OpenPose (2018), was deployed to recognise and analyse the posture and actions of the cook, notably with regard to the movement and position of their right wrist. "The coordinates of each body part and each object were saved and formed a path when extracted from multiple frames", explains the article. Correlating data on the position of the cook's right hand with object recognition enabled the system to correctly identify the tool being used and to take this information into account in its training.

When analysing each new video, the system is designed to compare actions and objects with data garnered from previous videos, so that it does not have to store information on processes that it can already recognise, which saves time on training. However, if it is presented with new data, it will understand that it is dealing with a new recipe and identify it as such. Once their ability to identify ingredients has been perfected, these robot cooks could make use of sites like YouTube to learn a huge variety of recipes. Rather than making use of real learning scenarios, the researchers favoured a video-based approach in a bid to save time. As doctoral student and co-author of the project Grzegorz Sochacki explains: "I can do demos on one day and then code the rest of the system step by step afterwards."
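The article does not give the researchers' exact matching code, but the idea of correlating the cook's right-hand position (from OpenPose) with object detections (from YOLO) to infer the tool in use can be sketched minimally. Here the detections, labels, coordinates and the distance threshold are all illustrative assumptions, standing in for the real per-frame output of the two networks:

```python
from dataclasses import dataclass
import math

@dataclass
class Detection:
    label: str   # a YOLO-style class label, e.g. "knife" or "bowl"
    x: float     # centre of the bounding box, in pixels
    y: float

def tool_in_use(wrist, detections, max_dist=80.0):
    """Return the label of the detected object closest to the wrist
    keypoint, or None if nothing lies within max_dist pixels."""
    best, best_d = None, max_dist
    for det in detections:
        d = math.hypot(det.x - wrist[0], det.y - wrist[1])
        if d < best_d:
            best, best_d = det.label, d
    return best

# Hypothetical frame: wrist keypoint at (310, 205), two detected objects.
frame = [Detection("knife", 300, 210), Detection("bowl", 520, 400)]
print(tool_in_use((310, 205), frame))  # -> knife
```

Repeating this per frame and saving the winning label alongside the wrist coordinates yields exactly the kind of per-object "path" across frames that the article describes.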
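Likewise, the comparison step — matching a new video against previously learned recipes and flagging it as new when nothing is similar enough — can be illustrated with a toy sketch. The (action, object) step representation, the recipe names, and the similarity threshold are assumptions for illustration, not the paper's actual method:

```python
from difflib import SequenceMatcher

# Hypothetical stored recipes as sequences of recognised (action, object) steps.
known_recipes = {
    "tomato salad": [("slice", "tomato"), ("add", "bowl"), ("mix", "bowl")],
    "carrot salad": [("grate", "carrot"), ("add", "bowl"), ("mix", "bowl")],
}

def classify(observed, known, threshold=0.8):
    """Match an observed step sequence against known recipes; if the best
    similarity score falls below the threshold, declare a new recipe."""
    best_name, best_score = None, 0.0
    for name, steps in known.items():
        score = SequenceMatcher(None, observed, steps).ratio()
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else "new recipe"

print(classify([("slice", "tomato"), ("add", "bowl"), ("mix", "bowl")], known_recipes))
print(classify([("chop", "apple"), ("peel", "apple")], known_recipes))
```

Recognised sequences reuse the stored recipe (no retraining needed), while unfamiliar ones are filed as new — mirroring the behaviour the article attributes to the system.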