“There are things known and there are things unknown and in between are the doors of perception.” — Aldous Huxley
I’m Huxley Westemeier (’26), and welcome to “The Sift,” a weekly opinions column focused on the impacts and implications of new technologies.
______________________________________________________
There have been COUNTLESS announcements related to computing in the last two weeks.
Microsoft claimed it achieved a quantum-computing breakthrough, yet hasn’t released a peer-reviewed paper or scientific research proving its claims are valid. Apple recently unveiled the latest Mac Studio with the M3 Ultra chip (effectively two M3 Max chips fused together), the most powerful desktop processor the company has ever created, with a price tag to match: $14,099 for the highest-spec configuration.
But that’s not what I want to discuss.
A company named Cortical Labs just announced the CL1, dubbed a “body in a box.” It’s a $35,000 machine that combines lab-grown human neurons with conventional computer silicon, creating a hybrid system capable of running customized code and completing other computing tasks.
What does this mean? Let me explain.
Using neurons found in human brains (albeit lab-developed ones) is borderline terrifying and reminds me of something out of a science-fiction film. However, the technology itself is impressive. According to The Independent, the CL1 contains an “internal life support system” that can sustain the neurons for six months, and it aims to be the first computing system that mimics precisely how the human brain thinks and understands concepts, instead of relying on existing machine-learning approaches that require massive amounts of training data. Wow. Instead of a glorified autocorrect system, the CL1 is the real deal. Cortical Labs’ chief scientific officer, Brett Kagan, stated in an interview with The Independent that an earlier version of the CL1, containing under a million neurons, taught itself to play the original Atari game Pong when placed in a simulated environment. Cortical Labs’ official website claims that the neurons are self-programmable and require little energy to run beyond the electricity and resources (oxygen, nutrients) needed to maintain the life support system.
I have serious ethical concerns about this system. Sure, existing AI technologies like ChatGPT or Adobe Photoshop’s Generative Fill bring their own moral concerns to the table. Yet under the hood, those products rely on massive server farms full of graphics processing units crunching complicated mathematical problems every millisecond. Using physical, living neurons (even if they weren’t harvested from a human) raises the question of whether these computers could become conscious or even sentient. The technology is impressive on its own, and the performance gains suggest these devices could be more efficient and intelligent than existing systems. The fine line between alive and artificial is blurrier than ever. Would I view a service like ChatGPT differently if I knew there was a computer-controlled blend of silicon and tissue somewhere answering my request? The only silver lining, for now, is that the technology isn’t yet on the market, though the CL1 is slated to be available for purchase (or at least ‘rentable’ through the cloud) by the end of 2025.
Neuron-controlled computers could quickly turn dystopian. Why bother scaling up lab-grown neuron production when an entire population of neurons is waiting to be harvested from the human race?
Artificial intelligence remains purely artificial. For now.