But for these learning signals to solve the credit assignment problem without hitting “pause” on sensory processing, their model needed another key piece. Naud and Richards’ team proposed that neurons have separate compartments at their top and bottom that process the neural code in completely different ways.
“[Our model] shows that you can really have two signals, one going up and one going down, and they can pass each other,” Naud said.
To make this possible, their model posits that the tree-like branches at the top of neurons listen only to bursts — the internal learning signals — in order to tune their connections and reduce error. The tuning happens from the top down, just as in backpropagation, because in their model the neurons at the top regulate the likelihood that neurons lower down will burst. The researchers showed that when a network bursts more, neurons strengthen their connections, whereas when bursting is less frequent, connection strengths decrease. The idea is that the bursting signal tells neurons that they should be active during the task and strengthen their connections, if doing so reduces the error. An absence of bursts tells neurons that they should be inactive and may need to weaken their connections.
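As a rough illustration of that idea, here is a minimal sketch of a burst-dependent weight update in Python. The function name, the baseline burst rate, and the linear update rule are illustrative simplifications for this article, not the equations from the paper.

```python
import numpy as np

def burst_dependent_update(weights, presyn_activity, burst_prob,
                           baseline=0.2, lr=0.05):
    """Toy burst-dependent plasticity rule (illustrative, not the paper's).

    When the postsynaptic burst probability exceeds a baseline, active
    synapses are strengthened; when bursting is rarer than baseline,
    they are weakened. Inactive synapses are left unchanged.
    """
    # Deviation of bursting from its baseline acts as the teaching signal:
    # positive means "strengthen", negative means "weaken".
    delta = burst_prob - baseline
    # Only synapses whose presynaptic inputs were active get updated.
    return weights + lr * delta * presyn_activity

w = np.array([0.5, 0.5])
x = np.array([1.0, 0.0])  # first input active, second silent

w_up = burst_dependent_update(w, x, burst_prob=0.6)    # bursting above baseline
w_down = burst_dependent_update(w, x, burst_prob=0.0)  # no bursts at all
```

With bursting above baseline the active synapse is strengthened; with no bursts it is weakened; the silent synapse stays at its old value in both cases.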
At the same time, the branches at the bottom of the neuron treat bursts as if they were single spikes — the normal, external-world signal — which allows them to keep sending sensory information upward in the circuit without interruption.
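This multiplexing can be sketched with a toy decoder: the same spike train yields an event rate (what the bottom compartments “hear”, with each burst collapsed to one event) and a burst fraction (what the top compartments “hear”). The 16 ms burst threshold and the function name are assumptions made for this sketch.

```python
def decode(spike_times_ms, burst_isi=16.0):
    """Split one spike train into two signals (illustrative sketch).

    Spikes closer together than `burst_isi` milliseconds are grouped
    into a single event; an event with more than one spike is a burst.
    """
    events = []
    for t in spike_times_ms:
        if events and t - events[-1][-1] < burst_isi:
            events[-1].append(t)   # spike continues the current burst
        else:
            events.append([t])     # spike starts a new event
    bursts = sum(1 for e in events if len(e) > 1)
    event_rate = len(events)               # bottom: one count per event
    burst_fraction = bursts / len(events)  # top: how often events burst
    return event_rate, burst_fraction

# Three events; the first and last are bursts.
rate, frac = decode([10, 12, 100, 200, 202, 204])
```

One stream (the event rate) carries the sensory signal onward, while the other (the burst fraction) carries the learning signal, without either blocking the other.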
“In hindsight, the idea presented seems logical, and I think that speaks to its beauty,” said João Sacramento, a computational neuroscientist at the University of Zurich and ETH Zurich. “I think it’s brilliant.”
Others have pursued similar logic in the past. Two decades ago, Konrad Kording of the University of Pennsylvania and Peter König of the University of Osnabrück in Germany proposed a learning framework with two-compartment neurons. But their proposal lacked many of the new model’s biologically relevant details, and it remained only a proposal: they could not show that it would actually solve the credit assignment problem.
“Back then, we lacked the ability to test these ideas,” Kording said. He considers the new paper “extraordinary work” and will follow it up in his own lab.
With today’s computational power, Naud, Richards and their collaborators successfully simulated their model, with bursting neurons playing the role of the learning rule. They showed that it solves the credit assignment problem in a classic task known as XOR, which requires learning to respond when one of two inputs (but not both) is 1. They also showed that a deep neural network built with their burst rule could approximate the performance of the backpropagation algorithm on challenging image-classification tasks. But there is still room for improvement: the backpropagation algorithm was still more accurate, and neither fully matches human capabilities.
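To make the XOR task concrete, here is a minimal sketch that trains a tiny network on it with ordinary backpropagation. It illustrates only the task and the standard algorithm that the burst rule approximates; the network size, learning rate, and iteration count are arbitrary choices for this sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: respond with 1 when exactly one of the two inputs is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer network. XOR is not linearly separable, so credit for the
# output error must be assigned through the hidden layer: the credit
# assignment problem in miniature.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
initial_loss = np.mean((out0 - y) ** 2)

for _ in range(5000):
    h, out = forward(X)
    grad_out = (out - y) * out * (1 - out)    # error at the output layer
    grad_h = (grad_out @ W2.T) * h * (1 - h)  # error pushed down one layer
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

_, out = forward(X)
final_loss = np.mean((out - y) ** 2)
```

The backward pass here (`grad_h`) is exactly the top-down error traffic that the biological model replaces with burst signals traveling through the circuit.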
“We’re missing details that we need, and we have to make the model better,” Naud said. “The main point of the paper is to say that the kind of learning that machines are doing can be approximated by physiological processes.”