Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2024: 22–24 November 2024, NANCY, FRANCE

Enjoy today’s videos!

We’re hoping to get more on this from Boston Dynamics, but if you haven’t seen it yet, here’s electric Atlas doing something productive (and autonomous!). And why not do it in a hot dog costume for Halloween, too?

[ Boston Dynamics ]

Ooh, this is exciting! Aldebaran is getting ready to release the seventh generation of NAO!

[ Aldebaran ]

Okay, I found this actually somewhat scary, but Happy Halloween from ANYbotics!

[ ANYbotics ]

Happy Halloween from Clearpath!

[ Clearpath Robotics Inc. ]

Another genuinely freaky Happy Halloween, from Boston Dynamics!

[ Boston Dynamics ]

This “urban opera” by Compagnie La Machine took place last weekend in Toulouse, featuring some truly enormous fantastical robots.

[ Compagnie La Machine ]

Thanks, Thomas!

Impressive dismount from Deep Robotics’ DR01.

[ Deep Robotics ]

Cobot juggling from Daniel Simu.

[ Daniel Simu ]

Adaptive-morphology multirotors exhibit superior versatility and task-specific performance compared to traditional multirotors, owing to their functional morphological adaptability. However, a notable challenge lies in the contrasting requirements of locking each morphology for flight controllability and efficiency while permitting low-energy reconfiguration. A novel design approach is proposed for reconfigurable multirotors utilizing soft multistable composite laminate airframes.

[ Environmental Robotics Lab paper ]

This is a pitching demonstration of the new Torobo. The new Torobo is lighter than the previous version, enabling faster motion such as throwing a ball. The new model will be available in Japan in March 2025 and overseas from October 2025 onward.

[ Tokyo Robotics ]

I’m not sure what makes this “the world’s best robotic hand for manipulation research,” but it seems solid enough.

[ Robot Era ]

And now, picking a micro cat.

[ RoCogMan Lab ]

When Arvato’s Louisville, Ky., staff wanted a robotics system that could unload freight with greater speed and safety, Boston Dynamics’ Stretch robot stood out. Stretch is a first-of-its-kind mobile robot designed specifically to unload boxes from trailers and shipping containers, freeing up employees to focus on more meaningful tasks in the warehouse. Arvato acquired its first Stretch system this year, and the robot’s impact was immediate.

[ Boston Dynamics ]

NASA’s Perseverance Mars rover used its Mastcam-Z camera to capture the silhouette of Phobos, one of the two Martian moons, as it passed in front of the Sun on Sept. 30, 2024, the 1,285th Martian day, or sol, of the mission.

[ NASA ]

Students from Howard University, Morehouse College, and Berea College joined University of Michigan robotics students in online Robotics 102 courses for the fall ’23 and winter ’24 semesters. The class is part of the distributed teaching collaborative, a co-teaching initiative started in 2020 aimed at providing cutting-edge robotics courses to students who would not normally have access to them at their current university.

[ University of Michigan Robotics ]

Discover the groundbreaking projects and cutting-edge technology at the Robotics and Automation Summer School (RASS) hosted by Los Alamos National Laboratory.
In this exclusive behind-the-scenes video, students from top universities work on advanced robotics in disciplines such as AI, automation, machine learning, and autonomous systems.

[ Los Alamos National Laboratory ]

This week’s Carnegie Mellon University Robotics Institute Seminar is from Princeton University’s Anirudha Majumdar, on “Robots That Know When They Don’t Know.”

Foundation models from machine learning have enabled rapid advances in perception, planning, and natural language understanding for robots. However, current systems lack any rigorous assurances when required to generalize to novel scenarios. For example, perception systems can fail to identify or localize unfamiliar objects, and large language model (LLM)-based planners can hallucinate outputs that lead to unsafe outcomes when executed by robots. How can we rigorously quantify the uncertainty of machine learning components such that robots know when they don’t know and can act accordingly?

[ Carnegie Mellon University Robotics Institute ]