Nicholas West | Activist Post
Neural Dust – “Smart Dust” – previously entered mainstream discussion via a 2016 article in The Independent, “Tiny implant could connect humans and machines like never before,” which presented it as a new technology that could wirelessly link a human brain to a computer via an implanted device the size of a grain of sand. The concept, however, is much older: it was officially supported in the National Nanotechnology Initiative’s 2011 Strategic Plan.
In 2013, I covered how researchers at Berkeley Engineering discussed moving nanotechnology from environmental sensor applications toward human applications such as brain-computer interfaces. Their paper stated:
A network of tiny implantable sensors could function like an MRI inside the brain, recording data on nearby neurons and transmitting it back out. The smart dust particles would all contain an extremely small CMOS sensor capable of measuring electrical activity in nearby neurons. The researchers envision a piezoelectric material backing the CMOS capable of generating electrical signals from ultrasound waves. The process would also work in reverse, allowing the dust to beam data back via high-frequency sound waves. The neural dust would also be coated with polymer. (Source)
Now scientists believe they have crossed a new threshold toward making smart dust a reality, with a far wider scope than originally envisioned. At a recent IEEE conference, researchers from Brown University, Qualcomm, and the University of California San Diego announced that they are the first to have achieved wireless transfer of information from an implanted neural device to an external computer that interpreted the received data.
The system allows bidirectional communication between the implants and an external device, with an uplink rate of 10 Mb/s and a downlink rate of 1 Mb/s.
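As a rough illustration of what those link rates imply, consider a back-of-the-envelope channel budget. The sampling rate and bit depth below are hypothetical assumptions for the sketch, not figures reported by the researchers; only the 10 Mb/s and 1 Mb/s rates come from the announcement.

```python
# Back-of-the-envelope budget for the reported link rates.
# SAMPLE_RATE_HZ and BITS_PER_SAMPLE are illustrative assumptions,
# not figures from the study.

UPLINK_BPS = 10_000_000   # 10 Mb/s uplink (reported)
DOWNLINK_BPS = 1_000_000  # 1 Mb/s downlink (reported)

SAMPLE_RATE_HZ = 1_000    # assumed samples per second per implant
BITS_PER_SAMPLE = 10      # assumed ADC resolution

bits_per_implant = SAMPLE_RATE_HZ * BITS_PER_SAMPLE  # 10,000 b/s each
max_implants = UPLINK_BPS // bits_per_implant

# Ignoring protocol overhead, the uplink could carry raw samples
# from on the order of a thousand such implants.
print(max_implants)  # 1000
```

Under these assumed parameters, the shared uplink is what makes a network of many grains plausible, rather than a single high-bandwidth implant.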
“We believe that we are the first group to realize wireless power transfer and megabits per second communications” in a neural implant, says Wing Ching (Vincent) Leung, technical director at the Qualcomm Institute Circuits Lab at UC San Diego.
Arto Nurmikko of Brown University calls the 0.25-square-millimeter implants “neurograins.” They each consist of a chip capable of harvesting RF energy; that chip powers an electrode that senses spikes of voltage from individual neurons, as well as the wireless communications. An antenna set outside the skull provides the RF power, transmits to the implants, and receives data from them.
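The spike sensing described above is, at its simplest, threshold detection on the electrode signal. The following is a minimal software sketch of that idea only; the neurograins’ actual on-chip processing is not public in this detail, and the signal values are invented for illustration.

```python
# Minimal threshold-based spike detector -- a software sketch of the
# concept, not the implants' firmware. Values are illustrative.

def detect_spikes(samples, threshold):
    """Return indices where the signal first crosses the threshold upward."""
    spikes = []
    above = False
    for i, v in enumerate(samples):
        if v >= threshold and not above:
            spikes.append(i)   # rising edge counts as one detected spike
            above = True
        elif v < threshold:
            above = False      # re-arm once the signal falls back down
    return spikes

# Example: a noisy baseline with two spike-like excursions
signal = [0.1, 0.0, 0.9, 1.2, 0.3, 0.1, 1.1, 0.2]
print(detect_spikes(signal, threshold=0.8))  # [2, 6]
```

The point of doing this detection on-chip is bandwidth: transmitting only spike events, rather than raw waveforms, is what lets many grains share one wireless link.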
Source: IEEE Spectrum
Researchers believe that deploying thousands of “neurograins” will enable far more complex data collection and transfer than a single implant could provide.
As I’ve highlighted in previous articles on the topic, this type of communication is a two-way street. Some people might feel content, for example, with sending their brain’s information out to a doctor for evaluation, but this sensor network could also transmit data back, as MIT acknowledged in its 2013 article “How Smart Dust Could Spy on Your Brain.”
That’s why Seo and co have chosen ultrasound to send and receive data. They calculate that using electromagnetic waves at this scale would generate a damaging amount of heat, because of the amount of energy the body absorbs and the troubling signal-to-noise ratios involved.
By contrast, ultrasound is much more efficient and should allow the transmission of at least 10 million times more power than electromagnetic waves at the same scale. (emphasis added)
This area of research also continues to interest DARPA as a potential future method of mind control. A smart dust neural network that can communicate externally could form the basis for reading soldiers’ minds and creating thought-controlled weapons.
Back in May, DARPA (Defense Advanced Research Projects Agency) announced that six teams will receive funding under the Next-Generation Nonsurgical Neurotechnology (N3) program. Participants are tasked with developing technology that will provide a two-way channel for rapid and seamless communication between the human brain and machines without requiring surgery.
“Imagine someone who’s operating a drone or someone who might be analyzing a lot of data,” said Jacob Robinson, an assistant professor of bioengineering at Rice University, who is leading one of the teams. “There’s this latency, where if I want to communicate with my machine, I have to send a signal from my brain to move my fingers or move my mouth to make a verbal command, and this limits the speed at which I can interact with either a cyber system or physical system. So the thought is maybe we could improve that speed of interaction.”
The agency is interested in systems that, within four years, can read from and write to 16 independent locations in a chunk of brain the size of a pea with a lag of no more than 50 milliseconds, said Robinson, who is under no illusion about the scale of the challenge.
“When you try to capture brain activity through the skull, it’s hard to know where the signals are coming from and when and where the signals are being generated,” he told Live Science. “So the big challenge is, can we push the absolute limits of our resolution, both in space and time?” -Live Science
Scientists continue to push into territory that demands the strictest examination of its ethical implications. Thus far, however, those discussions have lagged far behind the experimentation taking place.