This Simple Circuit Taught Itself To Recognize Flowers
AI is a revolutionary technology with applications in many diverse fields. From autonomous cars to stock trading and beyond, AI has a profound impact on the world around us. One facet of AI is machine learning, in which a computer system learns to perform a task, without explicit instructions, through algorithms and statistical analysis of data sets. This kind of learning typically demands a lot of computational power. However, scientists have recently developed a simple electrical circuit, based loosely on how the human brain functions, that can teach itself to perform AI tasks, such as recognition, without the aid of a computer.
To understand how this circuit works like an AI, we first need a basic picture of a neural network. A neural network can be modeled as a collection of nodes connected by weighted edges, where each weight numerically represents the strength of the connection between two nodes. Several layers of nodes are stacked on top of one another: the first layer takes in the input data, and the last layer produces the output. For example, pixel data from pictures of cars and planes is the input, and an output value of 0 or 1 identifies the picture as a car or a plane, respectively. To teach this system, training data is supplied as input and the weights of the edges are adjusted until the network produces the correct output.
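To make that training loop concrete, here is a minimal sketch of a single-layer network in Python. The three-"pixel" samples, learning rate, and logistic squashing function are all illustrative assumptions for the sake of a runnable toy, not details from the experiment.

```python
import math

def predict(weights, x):
    # Weighted sum of the inputs, squashed into (0, 1) by a logistic function.
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-s))

def train(samples, labels, steps=1000, lr=0.5):
    weights = [0.0] * len(samples[0])
    for _ in range(steps):
        for x, y in zip(samples, labels):
            p = predict(weights, x)
            # Nudge each weight in the direction that reduces the output error.
            for i, xi in enumerate(x):
                weights[i] += lr * (y - p) * xi
    return weights

# Two made-up 3-"pixel" examples: class 0 ("car") and class 1 ("plane").
samples = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
labels = [0, 1]
w = train(samples, labels)
print(round(predict(w, samples[0])), round(predict(w, samples[1])))  # → 0 1
```

The key idea carried over to the circuit below is only the last step: weights get adjusted until the outputs match the training labels.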
[Figure: one of the circuits used in the experiment, with its network structure mapped out.]
Scientists developed an electrical circuit composed of 16 resistors connected in an arbitrary pattern. In this setup, the resistors (and their resistances) represent weighted edges, and the junctions where resistor leads meet represent nodes. By adjusting the values of the resistors, scientists could train the circuit to produce a desired output from given input data. To allow the system to train itself, they built two coupled networks. One was a “clamped” network with fixed inputs and outputs, which we can think of as a fully trained system. The other was a “free” network with fixed inputs but variable outputs. Using a rule that changed the resistance of resistors in both systems based on the voltage difference between corresponding resistors in each, they were able to train both systems to produce the same outputs. In this way the free system learned to perform different AI tasks, such as recognizing species of flowers from their physical measurements.
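Here is a toy numerical sketch of the clamped-versus-free idea on the simplest possible network: a two-resistor voltage divider trained to produce a target output voltage. The specific update rule, nudge size, and learning rate below are simplified assumptions for illustration, not the exact scheme used in the experiment.

```python
# One "edge" from input to output, one from output to ground,
# described by their conductances (1/resistance).
V_IN = 1.0
TARGET = 0.7   # desired output voltage we want the divider to learn
ETA = 0.1      # how strongly the clamped output is nudged toward the target
RATE = 0.05    # learning rate for the conductance updates

k1, k2 = 1.0, 1.0

for _ in range(2000):
    # Free network: the output voltage settles where physics puts it.
    v_free = V_IN * k1 / (k1 + k2)
    # Clamped network: the same circuit with its output nudged toward the target.
    v_clamped = v_free + ETA * (TARGET - v_free)
    # Voltage drop across each resistor in each state.
    drops_free = (V_IN - v_free, v_free)
    drops_clamped = (V_IN - v_clamped, v_clamped)
    # Local rule: each edge adjusts itself using only the voltages
    # across that edge in the two states -- no central computer needed.
    k1 += RATE / ETA * (drops_free[0] ** 2 - drops_clamped[0] ** 2)
    k2 += RATE / ETA * (drops_free[1] ** 2 - drops_clamped[1] ** 2)

print(round(V_IN * k1 / (k1 + k2), 3))  # ≈ 0.7, the target
```

The point of the sketch is that every update is local: each edge compares its own voltage drop in the free and clamped states, which is what lets the physical circuit train itself without external computation.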
This research is important for a few key reasons. Firstly, and most importantly, this circuit can help model how the human brain learns. The circuit was designed with the neuroscience of human intelligence in mind, and different tests on it can provide insight into how we learn. Secondly, the circuit can be taught to perform various functions, even ones it was not originally trained to do. In other words, one circuit can be reused in different computational systems after a period of retraining. Lastly, the circuit can continue to perform its task even if the physical system is damaged, because the network is composed of identical units that individually adjust to changes in the system.
Scientists are not done investigating this circuit. Although the system is relatively small, with only 16 edges, they believe the network is easily scalable: adding edges does not significantly change the time needed to compute the output, and the system is resistant to hardware damage. In addition, the system as it stands could serve as a replacement for neural networks in electrical systems where space is limited. Further research is needed to determine exactly how the network learns to perform tasks, and more sophisticated electrical components could be added to give the system richer functionality.
