New brain-like optical chip can process 2 billion frames per second

Researchers at the University of Pennsylvania have invented a new optical chip capable of processing nearly 2 billion images per second. The device consists of a neural network that processes data in the form of light, without the components, such as memory, that slow down conventional computer chips.

The research was published in the journal Nature.

The new chip is based on a neural network, a system modeled on the way the brain processes information. These networks are made up of nodes that connect like neurons, and they even “learn” the way organic brains do: by being trained on sets of data for tasks such as recognizing objects in photos or recognizing speech. In other words, they get better at these tasks over time.
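To make that concrete, here is a minimal, hypothetical sketch, not the chip’s actual algorithm, of a single software “neuron” learning by gradient descent to tell apart two made-up 3×3-pixel characters (stand-ins for the handwritten characters used in the study):

```python
import numpy as np

# Two toy 3x3 "characters", flattened to 9-pixel vectors:
# an "X" shape and a "+" shape (purely illustrative).
x_char = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=float)
plus_char = np.array([0, 1, 0, 1, 1, 1, 0, 1, 0], dtype=float)

data = np.stack([x_char, plus_char])
labels = np.array([0.0, 1.0])          # class 0 = "X", class 1 = "+"

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=9)      # one "neuron": 9 weights plus a bias
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Learning": repeatedly nudge the weights to reduce the
# classification error on the training examples.
for _ in range(500):
    pred = sigmoid(data @ w + b)
    grad = pred - labels               # error signal per example
    w -= 0.5 * (data.T @ grad) / len(labels)
    b -= 0.5 * grad.mean()

print(np.round(sigmoid(data @ w + b)))  # → [0. 1.]
```

After training, the neuron assigns each pattern to the correct class; real networks, like the one on the chip, stack many such neurons in layers.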

The new chip, as noted earlier, handles information in the form of light rather than electrical signals. Its “neurons” are optical waveguides, layered on top of one another, with each layer specializing in a different type of classification.
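Conceptually, each such neuron performs a linear combination of its inputs (done with light on the chip) followed by a nonlinear activation (done optoelectronically, per the study). A rough numerical analogue of a two-layer forward pass, with made-up weights and pixel values rather than anything from the actual device:

```python
import numpy as np

def pdnn_neuron(inputs, weights, bias=0.0):
    """Numerical analogue of one photonic neuron: a linear weighted
    sum (performed optically on the chip) followed by a nonlinear
    activation (performed optoelectronically on the chip)."""
    linear = np.dot(weights, inputs)   # stands in for optical interference/attenuation
    return max(0.0, linear + bias)     # ReLU stands in for the real activation

# Hypothetical two-layer forward pass over a 4-pixel input.
pixels = np.array([0.2, 0.9, 0.4, 0.1])
layer1_weights = [np.array([0.5, -0.3, 0.8, 0.1]),
                  np.array([-0.2, 0.7, 0.1, 0.4])]
hidden = np.array([pdnn_neuron(pixels, w) for w in layer1_weights])
output = pdnn_neuron(hidden, np.array([1.0, 0.5]))
print(round(output, 3))  # → 0.495
```

On the chip, this whole chain happens as light physically propagates through the layers, which is why no clock or memory access is involved.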

In experiments, the scientists created a chip with an area of 0.01 square inches (9.3 mm²) and used it to classify a sequence of handwritten, letter-like characters. After being trained on relevant data sets, the chip was able to classify the images with 93.8% accuracy for sets containing two types of characters and 89.8% accuracy for sets containing four types.

Most notably, the chip was able to classify each character in 0.57 nanoseconds, allowing it to process 1.75 billion photos per second. The team says this speed comes from the chip’s ability to process information in the form of light, giving it several advantages over existing computer chips.
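As a quick sanity check of the arithmetic (this calculation is ours, not from the study), the quoted throughput follows directly from the per-character latency:

```python
# 0.57 nanoseconds per classified character implies:
classification_time_s = 0.57e-9
photos_per_second = 1 / classification_time_s
print(f"{photos_per_second / 1e9:.2f} billion photos per second")
# → 1.75 billion photos per second
```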

“Our chip processes information through what we call ‘propagation computing,’ meaning that, unlike clock-based systems, computations occur as light propagates through the chip,” said Firooz Aflatouni, lead author of the study. “We also skip the step of converting optical signals to electrical signals, because our chip can read and process optical signals directly, and these two changes make our chip a much faster technology.”

Another advantage is that the data being processed does not need to be stored, which saves time, since data need not be transferred to memory, and saves space, since no memory component is required. According to the researchers, not storing the data is also more secure, as it avoids potential leaks.

The team’s next steps will be to scale up the device and adapt the technology to handle different types of data.

“What’s really interesting about this technology is that it can do so much more than classify images,” Aflatouni said. “We already know how to convert many types of data into the electrical domain – images, audio, speech and many other types of data. Now we can convert different types of data into the optical domain and have them processed almost instantly using this technology.”

Summary of the study:

Deep neural networks with applications ranging from computer vision to medical diagnostics are typically implemented using clock-based processors, in which computation speed is limited mainly by the clock frequency and memory access time. In the optical domain, despite advances in photonic computing, the lack of scalable on-chip optical nonlinearity and the loss of photonic devices limit the scalability of deep optical networks. Here, we report an end-to-end integrated photonic deep neural network (PDNN) that performs sub-nanosecond image classification by directly processing the optical waves striking the on-chip pixel array as they propagate through layers of neurons. In each neuron, the linear computation is performed optically and the nonlinear activation function is performed optoelectronically, allowing a classification time of less than 570 ps, which is comparable to a single clock cycle of state-of-the-art digital platforms. Uniformly distributed supply light provides the same optical output range per neuron, allowing scalability to large-scale PDNNs. Two- and four-class classification of handwritten letters with accuracies greater than 93.8% and 89.8%, respectively, is demonstrated. Direct, clock-free processing of optical data eliminates analog-to-digital conversion and the need for a large memory module, enabling faster, more power-efficient neural networks for next-generation deep learning systems.

