Remember when cell phones looked like this? You could call, text, maybe play Snake on it … and it had about 6 megabytes of memory, which was a small miracle at the time. Then phones got faster, and about every two years you probably upgraded from 8 gigs to 16 to 32, and so on and so forth.

This incremental technological progress we’ve all been participating in for years hinges on one key trend: Moore’s Law. Intel co-founder Gordon Moore predicted in 1965 that integrated circuits, or chips, were the path to cheaper electronics. Moore’s Law states that the number of transistors (the tiny switches that control the flow of electrical current) that can fit on an integrated circuit will double every two years, while the cost will halve. Chip power goes up as cost goes down. That exponential growth has brought massive advances in computing power… hence tiny computers in our pockets! A single chip today can contain billions of transistors, and each transistor is about 14 nanometres across. That’s smaller than most human viruses!
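To see why that doubling is so dramatic, here’s a minimal sketch of the arithmetic. The starting figure is the Intel 4004 from 1971, which had roughly 2,300 transistors; the dates and counts are round numbers for illustration, not a precise industry history:

```python
# Moore's Law as plain doubling arithmetic (illustrative figures only).
transistors = 2_300  # roughly the Intel 4004 (1971), used for scale
for year in range(1971, 2016, 2):
    print(f"{year}: ~{transistors:,} transistors per chip")
    transistors *= 2  # Moore's Law: double every two years
```

Twenty-two doublings turn 2,300 transistors into roughly ten billion, which is why a single chip today really can hold billions of them.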
Now, Moore’s Law isn’t a law of physics; it’s just a good hunch that’s driven companies to make better chips. But experts say this trend is slowing down. Granddaddy chip maker Intel recently disclosed that it’s becoming more difficult to roll out smaller transistors on a two-year timeframe while keeping them affordable. So, to power the next wave of electronics, there are a few promising options in the works. One is quantum computing. Another, currently in the lab stage, is neuromorphic computing: computer chips modeled after our own brains! They’re basically capable of learning and remembering at the same time, at an incredibly fast clip.

Let’s break that down, starting with the human brain. Your brain has billions of neurons, each of which forms synapses, or connections, with other neurons. Synaptic activity relies on ion channels, which control the flow of charged atoms like sodium and calcium, and that activity is what lets your brain function and process information.
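A toy model helps here. The sketch below is a classic “leaky integrate-and-fire” neuron, a standard textbook simplification of what ion channels do to a neuron’s charge; it’s illustrative only and isn’t taken from any particular chip’s toolkit:

```python
import random

# A toy leaky integrate-and-fire neuron: charge leaks away over time,
# inputs add to it, and crossing a threshold makes the neuron "fire".
def simulate_neuron(inputs, threshold=1.0, leak=0.95):
    potential = 0.0        # the neuron's electrical charge
    spike_times = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leak, then integrate input
        if potential >= threshold:              # enough charge: fire a spike
            spike_times.append(t)
            potential = 0.0                     # reset after the spike
    return spike_times

random.seed(0)
inputs = [random.uniform(0.0, 0.2) for _ in range(100)]
print(simulate_neuron(inputs))  # time steps at which the neuron spiked
```

Each incoming signal nudges the charge upward, the charge constantly leaks away, and when it crosses the threshold the neuron spikes. That spiking behaviour is the model neuromorphic hardware borrows.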
A neuromorphic chip copies that model with a densely connected web of transistors that mimic the activity of ion channels. Each chip has a network of cores, with inputs and outputs wired to other cores, all operating in conjunction. Because of this connectivity, neuromorphic chips integrate memory, computation, and communication in one place.

These chips are an entirely new computational design. Standard chips today are built on von Neumann architecture, where the processor and memory are separate and data moves between them: a central processing unit runs commands fetched from memory to execute tasks. This design has made computers very good at crunching numbers, but constantly shuttling data between the processor and memory creates a bottleneck, so they’re not as efficient as they could be. Neuromorphic chips, however, completely change that model by keeping both storage and processing together inside these “neurons,” which are all communicating and learning at the same time (there’s a toy sketch of this contrast at the end).

The hope is that neuromorphic chips could transform computers from general-purpose calculators into machines that learn from experience and make decisions. We’d leap to a future where computers wouldn’t just crunch data at breakneck speeds but could do that AND process sensory data in real time. Some future applications might include combat robots that decide how to act in the field, drones that detect changes in the environment, and your car taking you to a drive-through for ice cream after you’ve been dumped… basically, these chips could power our future robot overlords. We don’t have machines with sophisticated, brain-like chips yet, but they’re on the horizon. So get ready for a whole new meaning of the term “brain power.”
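And here’s that promised sketch of the architectural contrast. Both halves are hypothetical toys invented for illustration; neither describes any real chip’s API:

```python
# Von Neumann style: one processor, one separate memory, data shuttles between.
memory = {"a": 3, "b": 4, "result": 0}

def cpu_add():
    x = memory["a"]            # fetch an operand from central memory
    y = memory["b"]            # fetch again: every operand crosses the bus
    memory["result"] = x + y   # write the result back across the bus

# Neuromorphic style: each core keeps its own weight and state locally,
# and only small spike messages travel between cores.
class Core:
    def __init__(self, weight):
        self.weight = weight     # "memory" stored inside the core
        self.potential = 0.0     # working state lives here too
    def receive(self, spike):
        self.potential += self.weight * spike
        if self.potential >= 1.0:   # threshold reached: fire downstream
            self.potential = 0.0
            return 1
        return 0

cpu_add()
print(memory["result"])                     # 7: computed by shuttling data
core = Core(weight=0.6)
print([core.receive(1) for _ in range(4)])  # [0, 1, 0, 1]: computed in place
```

In the first half, every operand makes a round trip to central memory; in the second, each core computes with the state it already holds, which is the efficiency argument for neuromorphic designs.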