
EPFL Develops Innovative Machine Learning Technique for Neural Prostheses

EPFL researchers have developed a machine learning technique to enhance the functionality of retinal implants and other neural prostheses. The new method optimizes sensory encoding, promising to redefine our approach to sensory prostheses.

By Israel Ojoko

Researchers at EPFL (École Polytechnique Fédérale de Lausanne) in Switzerland have made a remarkable stride in the field of neural prostheses with an innovative machine learning technique designed to significantly enhance the functionality of retinal implants and other medical electronic devices. The state-of-the-art AI-based method addresses the long-standing challenge of sensory encoding in neural prostheses: the process of converting environmental information into neural signals that the human nervous system can interpret.


Revolutionizing Downsampling with AI

The team, led by Demetri Psaltis, Christophe Moser, and Diego Ghezzi, concentrated on improving the traditional approach to the critical process of downsampling. Because a prosthesis has only a limited number of electrodes, downsampling is indispensable: the environmental input must be reduced while preserving the quality of the data transmitted to the brain. Traditional downsampling methods such as pixel averaging are purely mathematical and lack a learning component, a gap the new AI-based approach aims to fill.
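To make the baseline concrete: pixel averaging simply replaces each block of pixels with its mean, with no learned component. A minimal numpy sketch (the function name and block size are illustrative, not from the EPFL work):

```python
import numpy as np

def pixel_average_downsample(image, factor):
    """Downsample by averaging non-overlapping factor x factor blocks."""
    h, w = image.shape
    # Trim edges so the image divides evenly into blocks.
    trimmed = image[:h - h % factor, :w - w % factor]
    blocks = trimmed.reshape(trimmed.shape[0] // factor, factor,
                             trimmed.shape[1] // factor, factor)
    # Average over each block's rows and columns.
    return blocks.mean(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
small = pixel_average_downsample(image, 2)
# small is 2x2; each entry is the mean of one 2x2 block of the input
```

Every block is weighted identically, regardless of what the downstream neural code needs, which is precisely the limitation a learned encoder can address.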

Actor-Model Framework: A New Approach


In this novel approach, two neural networks are employed in an actor-model framework to optimize sensory encoding. The model network, acting as a digital twin of the retina, first learns to convert a high-resolution image into a binary neural code akin to that of a biological retina. The actor network then learns to downsample high-resolution images so that the resulting low-resolution stimulus elicits a neural code closely matching the one produced by the biological retina.
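The training loop described above can be sketched in toy form. In this sketch (which is not EPFL's architecture: the networks, dimensions, and weights are all hypothetical stand-ins), two frozen random projections play the roles of the retina's digital twin and of the code evoked by the prosthesis, while a learnable linear "actor" downsampler is trained by gradient descent to make the two codes match:

```python
import numpy as np

rng = np.random.default_rng(0)
HI, LO, CODE, N = 64, 16, 32, 100  # high-res size, low-res size, code length, batch

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Frozen stand-in for the "model" network (digital twin of the retina):
# maps a high-res image to a soft neural code.
W_model = rng.normal(size=(CODE, HI)) / np.sqrt(HI)
# Frozen stand-in for the code evoked by stimulating with the low-res image.
W_enc = rng.normal(size=(CODE, LO)) / np.sqrt(LO)

X = rng.normal(size=(HI, N))          # batch of random high-res "images"
target = sigmoid(W_model @ X)         # codes the retina twin would emit

# Actor: learnable linear downsampler, initialized as plain block averaging.
A = np.kron(np.eye(LO), np.ones(HI // LO) / (HI // LO))

lr = 1.0
losses = []
for step in range(300):
    z = A @ X                         # actor's downsampled stimuli
    c = sigmoid(W_enc @ z)            # codes evoked by those stimuli
    err = c - target
    losses.append(np.mean(err ** 2))
    # Hand-derived gradient of the MSE loss through sigmoid and both linear maps.
    dC = err * c * (1 - c) * (2.0 / (CODE * N))
    A -= lr * (W_enc.T @ dC) @ X.T
```

The point of the sketch is the training signal, not the architecture: the actor is never told what a "good" low-resolution image looks like; it is optimized only so that the evoked code approaches the code of the digital twin, which is the essence of the actor-model framing.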

Unveiling Potential for Advanced Image Compression

The preliminary results have been promising, showing that the unconstrained neural network could learn to mimic aspects of retinal processing. This groundbreaking research not only offers a more precise image compression technique tackling multiple visual dimensions but also opens up possibilities for its potential application in other types of neural prostheses, such as auditory or limb prostheses. Indeed, the implications of this work extend beyond the realm of vision restoration, promising to redefine our approach to sensory prostheses.
