The Enchanted Loom
Brain and computer: how long have we been debating their similarities and differences, and where is the state of the art today?
The human brain is a marvel of energy efficiency.
Running on membrane potentials of barely a tenth of a volt and consuming roughly 20 watts, less energy than a light bulb and about ten times less than a computer, it does everything you know.
Even if you remove a third of a seven-year-old’s brain, neuroplasticity can rewire what remains into a great chess player. That is the true story of Tanner Collins, operated on at a Pittsburgh hospital and now a case study at Carnegie Mellon.
And yet, our brain is full of limitations compared with other kinds of hardware and non-carbon-based intelligent systems. We are the product of evolution, Richard Dawkins’ blind watchmaker, who would have done better had he been able to see.
We can hold only about seven items in working memory, and we process them in series, not in parallel. We assume light falls from above, a hard-wired prior that optical illusions exploit. We leap to intuitive patterns that would be unacceptable in a machine. Our eyes have only a sparse population of blue-sensitive cones, and we get hooked on memes by a logic that, in a product, would make you refuse to buy the system.
We have a storage capacity estimated at one petabyte, the equivalent of a thousand one-terabyte disks. No computer will pack that capacity into the same energy and space budget for a very long time.
Well, I had better stop, or this whole article will be spent on the first description.
So, what we need is to know more about how we are made: to study our brains. Here is the problem. Either we see the shape, or we see the results of activity: MRI illuminates where activity occurs, while EEG gives us aggregate, non-individualized signals.
Could Adam Cohen’s work, reading spikes of activity from multiple nearby genetically labeled neurons, be the path to unifying the best of both worlds? Neurons receive numerous inputs, need some time to integrate them, and change their voltage, never by more than a few tens of millivolts. We are very close to seeing it in situ: form and action together.
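To make that picture concrete, here is a minimal leaky integrate-and-fire sketch in Python, a standard textbook simplification rather than a model of Cohen’s actual experiments: the neuron sums its inputs, takes a few milliseconds to integrate them, and its voltage never swings more than a few tens of millivolts.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (textbook simplification).
# All voltages in millivolts; all constants are illustrative, not measured.
V_REST, V_THRESHOLD, V_RESET = -70.0, -55.0, -75.0
TAU_M = 10.0   # membrane time constant (ms): the "needs some time" part
DT = 1.0       # simulation step (ms)

def simulate(input_current, resistance=10.0):
    """Integrate many small inputs over time; emit a spike at threshold."""
    v = V_REST
    spike_times = []
    for t, i_in in enumerate(input_current):
        # The voltage leaks back toward rest and is pushed up by input.
        v += (-(v - V_REST) + resistance * i_in) * (DT / TAU_M)
        if v >= V_THRESHOLD:        # an all-or-nothing spike of tens of mV
            spike_times.append(t)
            v = V_RESET             # then the neuron resets and integrates again
    return spike_times

# A steady drive (arbitrary units) produces a regular train of spikes.
print(simulate(np.full(100, 2.0)))
```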
Exciting times are coming in neuroscience; we may well have to combine multiple techniques. Still, we are on our way to understanding our farthest frontier of knowledge, our own brain, and to testing our theories against it.
I know that you are as tired as I am of comparing the brain with the computer.
We have been debating it for almost 100 years, from the Turing machine to the last sci-fi series you care to remember. I first met the subject through The Enchanted Loom, the 1981 book by Robert Jastrow of Columbia.
Neuroscience teaches us how to build systems; systems teach us how our brains work. They are two very different things that work very differently, and those divergences teach us, so long as we do not apply the wrong logic to either side. If we order ideas so that our brains can understand them, the side over here will help. If we do not fall into the egocentrism of wanting to turn machines into improved humans, the side over there will collaborate.
Our brain, with a small and well-ordered effort, can do things that our computers are still learning. But when we activate the amygdala and the limbic system ahead of time, when emotions enter at the wrong moment, when unchecked beliefs and exclusionary identitarianism take over, we seize up.
Error is easier and more frequent than success.
We learned this by literally transcribing what we found inside the skull into our supposedly intelligent engineering systems. We understood that starting by making systems less dumb was better than trying to make them superintelligent from the start.
Today, neural networks descended from Rosenblatt’s perceptron do what Minsky’s symbolic AI, for all its early superiority, never achieved. Neural networks benefit daily from advances in hardware: running a TensorFlow or PyTorch model on a CPU architecture is not the same as running it on a TPU, and soon we will be on AI-specific processors. These needs are increasing rather than decreasing.
The result is that more developers can do better things with fewer resources.
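As a small illustration of that hardware abstraction (a PyTorch sketch with an arbitrary toy model; TensorFlow is analogous), the same model definition runs unchanged on a CPU or a GPU, and, through add-on packages such as torch_xla, on a TPU: only the device handle changes.

```python
import torch
import torch.nn as nn

# Pick whatever accelerator is available; the model code below is identical
# either way. (TPUs need the separate torch_xla package, not shown here.)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model = model.to(device)                  # move the weights to that backend

x = torch.randn(32, 784, device=device)  # a dummy batch on the same device
logits = model(x)                         # forward pass, CPU or GPU alike
print(logits.shape, device)
```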
Today we have a huge challenge in scaling deep learning. Deep learning networks are as varied as you care to imagine. They became computable realities thanks to gradient descent and backpropagation, plus a great deal of talent and experimentation. But they still need too much data, too much computational power, and too much money. To top it off, they have less common sense than some people you know. Yes, that one, go figure. We need self-supervised learning over the enormous amounts of data required to train them, or to acquire that common sense, the dark matter of today’s AI. We need to prune the networks to decrease the computation. We need to drop that brutal carbon footprint, which may hold up on your screen, but not in the atmosphere.
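Pruning, at least, is already a few lines of mainstream tooling. A minimal sketch with PyTorch’s built-in pruning utilities; the layer size and the 50% ratio are arbitrary choices for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Magnitude pruning: zero out the smallest-magnitude half of the weights,
# so the layer can be stored and computed more sparsely.
layer = nn.Linear(256, 256)
prune.l1_unstructured(layer, name="weight", amount=0.5)
prune.remove(layer, "weight")   # bake the mask into the weight tensor

sparsity = (layer.weight == 0).float().mean().item()
print(f"zeroed weights: {sparsity:.0%}")   # -> roughly 50%
```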
You know that blockchain, as it works today, would be unsustainable alongside any coherent climate policy. You are in one camp or the other; you cannot play both sides. That position cannot stand, even if it survives in the discourse of public figures; do not let it survive in your head. We have too much at stake.
Now that we are getting closer to a much more precise knowledge of our brain, our farthest, though never the last, frontier; now that we can speak with more sense about the reality of neuroscience; let us try, from the humility of science, to understand the great lessons that will allow us to build better cognitive systems in engineering.