Stroke survivors feeding themselves by controlling a robotic arm with their thoughts. Patients with impaired upper-limb function using “motor imagery” to drive an exoskeleton for rehabilitation training.
What may sound like science fiction is now being developed at Changzhou University by a research team specializing in brain–computer integration and machine intelligence.
Focusing on two practical needs — independent feeding and active rehabilitation — the team has developed an intelligent self-feeding robotic system and a brain–computer interface (BCI)-based upper-limb rehabilitation system. Both technologies are now transitioning from laboratory prototypes to early-stage commercialization.
“Our intelligent feeding robot is designed primarily for elderly care institutions, community day-care centers and rehabilitation facilities serving people with disabilities,” said Zou Ling, head of the research team. Traditional assisted feeding relies heavily on caregivers and often struggles to balance efficiency and user comfort, she noted.
The system works by capturing electroencephalogram (EEG) signals generated when a user intends to eat. It then decodes those signals and directs a robotic arm to identify food, pick it up and deliver it safely to the user — effectively translating thought into action.
To achieve reliable “mind-controlled feeding,” the team had to overcome three major challenges.
First, weak EEG signals reflecting eating intentions must be stably collected and accurately decoded in complex real-world environments.
Second, machine vision must precisely locate the user’s mouth in real time while coordinating the robotic arm to ensure smooth, safe and spill-free feeding.
Third, the system must incorporate adaptive algorithms capable of adjusting to different users’ physical conditions and personal habits.
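The article does not disclose the team's decoding methods, but the first challenge — detecting a weak eating intention in EEG — typically builds on band-power analysis of motor-related rhythms. As a rough illustration only (the function names, frequency band and threshold below are invented for the sketch, not taken from the Changzhou system):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, lo=8.0, hi=30.0, fs=250.0):
    """Band-pass raw EEG to the mu/beta band (8-30 Hz),
    where movement-related brain rhythms are typically found."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def decode_intent(eeg, threshold=1.0):
    """Toy decoder: compares band power against a fixed threshold.
    A deployed system would instead train a per-user classifier
    on calibration data, which is also how the adaptive behaviour
    described above would be achieved."""
    power = np.mean(bandpass(eeg) ** 2)
    return "eat" if power > threshold else "rest"
```

The sketch captures only the signal-processing skeleton; robust real-world decoding additionally requires artifact rejection and user-specific training, which is precisely why the challenges above are hard.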
The prototype has already met key performance targets: more than 80 percent accuracy in EEG-based food selection, over 90 percent accuracy in visual mouth recognition, and a feeding cycle completed in under five seconds. The team is currently working with elderly care and rehabilitation institutions to pilot the system.
The BCI-based upper-limb rehabilitation system is designed for patients with motor impairment caused by stroke or spinal cord injury. By decoding EEG signals generated during motor imagery — when patients imagine performing a movement — the system drives an exoskeleton to assist with grasping and extension exercises. The approach transforms conventional passive therapy into intention-driven active rehabilitation.
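Motor-imagery BCIs commonly exploit the fact that imagining a hand movement suppresses the mu rhythm over the opposite motor cortex. Purely as an illustration of that principle — the electrode names, band and decision rule here are a textbook simplification, not the team's clinically validated pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mu_power(channel, fs=250.0):
    """Mean power of one EEG channel in the mu band (8-13 Hz)."""
    b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
    return np.mean(filtfilt(b, a, channel) ** 2)

def classify_motor_imagery(c3, c4):
    """Toy lateralization rule: imagined right-hand movement
    suppresses mu power over the left motor cortex (electrode C3),
    and vice versa. Real systems learn per-user classifiers
    (e.g. CSP features with a linear discriminant) from
    calibration sessions."""
    return "right_hand" if mu_power(c3) < mu_power(c4) else "left_hand"
```

The decoded label would then be mapped to an exoskeleton command (e.g. grasp or extend), closing the loop from intention to mechanical execution described above.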
The system’s entire chain — signal acquisition, decoding and mechanical execution — has been independently developed by the team and is currently undergoing clinical validation.
Behind these advances lies more than two decades of interdisciplinary research in neural engineering and machine intelligence at Changzhou University. The core team brings together specialists in microelectronics, computer science, mechanical engineering, medicine and health engineering.
In recent years, leveraging platforms such as the International Joint Laboratory for Human–Machine Intelligence and Interaction, the team has partnered closely with Changzhou No.1 People’s Hospital, Changzhou No.2 People’s Hospital, De’an Hospital, Qianjing Rehabilitation and Neuracle Technology, among others.
The team has also participated in drafting seven national and industry standards, including China’s first group standard for consumer-grade brain–computer interface devices.