(Nanowerk News) Poems, essays and even books – is there anything that OpenAI's chatbot ChatGPT can't handle? These new AI developments have inspired researchers at TU Delft and the Swiss technical university EPFL to dig deeper: could ChatGPT, for example, also design a robot? And is this a good thing for the design process, or does it carry risks?
The researchers published their findings in Nature Machine Intelligence (“How can LLMs change the robot design process?”).
What are the biggest future challenges for humanity? This is the first question that Cosimo Della Santina, assistant professor, and PhD student Francesco Stella, both from TU Delft, and Josie Hughes from EPFL, asked ChatGPT. “We wanted ChatGPT to design not just robots, but really useful ones,” said Della Santina. In the end, they chose food supply as their challenge, and while chatting with ChatGPT, they came up with the idea of building a robotic tomato harvester.
The researchers followed all of ChatGPT’s design decisions. That input proved especially valuable in the conceptual phase, according to Stella. “ChatGPT extends the designer’s knowledge into other areas of expertise. For example, the chatbot taught us which crop would be the most economical to automate.” But ChatGPT also gave useful advice during the implementation phase: “Make the gripper out of silicone or rubber so you don’t crush the tomatoes” and “Dynamixel motors are the best way to drive the robot”. The result of this human-AI collaboration is a robotic arm that can harvest tomatoes.
ChatGPT as researcher
The researchers found the collaborative design process positive and enriching. “However, we did find that our role as engineers shifted towards performing more technical tasks,” says Stella. In Nature Machine Intelligence, the researchers explore the different degrees of cooperation between humans and Large Language Models (LLMs), of which ChatGPT is one. In the most extreme scenario, the AI provides all the input for the robot design, and the human follows it blindly. In that case, the LLM acts as the researcher and engineer, while the human acts as the manager, tasked only with setting the design objectives.
Such an extreme scenario is not yet possible with current LLMs. And the question is whether it would even be desirable. “In fact, LLM output can be misleading if it is not verified or validated. AI bots are designed to generate the ‘most probable’ answer to a question, so there is a risk of misinformation and bias in the robotics field,” said Della Santina. Working with LLMs also raises other important issues, such as plagiarism, traceability and intellectual property.
Della Santina, Stella and Hughes will continue to use the robotic tomato harvester in their robotics research. They will also continue studying the use of LLMs to design new robots; in particular, they see potential in AI autonomously designing its own robot body. “Ultimately, an open question for the future of our field is how LLMs can be used to assist robot developers without limiting the creativity and innovation that robotics needs to meet the challenges of the 21st century,” Stella concludes.