Elon Musk’s Neuralink startup raises $39 MILLION as it seeks to develop tech that will connect the human brain with computers

James Pero | The Daily Mail

An Elon Musk-backed startup looking to connect human brains to computers has raised most of its $51 million funding target. 

According to a report by Bloomberg, Neuralink has raised $39 million of its planned $51 million funding round as per a filing with the Securities and Exchange Commission (SEC). 

Prior funding dates back two years, when the company raised $27 million of a round that had targeted as much as $100 million. 

While it's unclear what progress Neuralink has made in its technology, if any, the filings come less than a month after the SpaceX and Tesla CEO foreshadowed the startup's endeavors in an ambiguous tweet. 

In a response on Twitter, Musk said Neuralink technology is 'coming soon.' 


In November last year, Musk told Axios that Neuralink technology would involve an 'electrode to neuron interface at a micro level.'

More specifically, it would be 'a chip and a bunch of tiny wires' that's implanted surgically into your skull.

'The long term aspiration with Neuralink would be to achieve a symbiosis with artificial intelligence and to achieve a sort of democratization of intelligence, such that it is not monopolistically held in a purely digital form by governments and large corporations,' he said in the interview with Axios. 

'I believe this can be done...It's probably on the order of a decade.' 

Musk believes that humans will have to explore the cyborg-like technology as artificial intelligence continues to advance and become integrated into the products we use. 

'Essentially, how do we ensure that the future constitutes the sum of the will of humanity?' Musk told Axios. 

'If we have millions of people with a high bandwidth link to the AI extension of themselves it would make everyone hyper smart.'

He likened AI to 'digital intelligence' that could spiral out of control if we don't pay attention. 

'As the algorithms and the hardware improve, that digital intelligence will exceed biological intelligence by a substantial margin,' Musk told Axios.

'...We're like children in a playground...We're not paying attention.'

Ultimately, Musk said, the development of implanted chips could be what keeps the human race from becoming an endangered species.

'...When a species of primate, homo sapiens, became much smarter than other primates, it pushed all the other ones into a very small habitat,' he added. 

'So there are very few mountain gorillas and orangutans and chimpanzees - monkeys in general.

'Even the jungles that they're in are narrowly defined so they were sort of like big cages.

'So, you know, that's one possible outcome for us,' Musk said.  

Musk has long been a critic of artificial intelligence, warning that should it fall into the wrong hands or become too smart, it could wreak havoc on the world.

He launched San Francisco-based Neuralink in 2016 to develop implantable brain-computer interfaces that could upload and download thoughts. 

Musk has envisioned other applications for the technology, in fields including medicine.

One specific use would be reducing memory loss or curing spinal cord injuries by implanting electrodes into the motor cortex of the brain, Musk said.

This would 'bypass the severed section of the spine and have effectively local micro controllers near the muscle groups,' he added.

'It could restore full limb functionality,' Musk told Axios.


While many tech leaders maintain that AI will become invaluable to humanity, others argue it poses a threat to our species.

In November, Tesla and SpaceX CEO Elon Musk said that efforts to make AI safe only have 'a five to 10 per cent chance of success.'

Musk made his comments in a talk to employees at his firm Neuralink, which is working on ways to implant technology into our brains, according to Rolling Stone.

He added that the employees should 'sleep well' after his warning, according to people close to the matter.

The warning came shortly after Musk claimed that regulation of AI is drastically needed because it's a 'fundamental risk to the existence of human civilisation.'


Elon Musk is one of the most prominent names and faces in developing technologies. 

The billionaire entrepreneur heads up SpaceX, Tesla and the Boring company. 

But while he is on the forefront of creating AI technologies, he is also acutely aware of its dangers. 

Here is a comprehensive timeline of all Musk's premonitions, thoughts and warnings about AI, so far.   

August 2014 - 'We need to be super careful with AI. Potentially more dangerous than nukes.' 

October 2014 - 'I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence.'

October 2014 - 'With artificial intelligence we are summoning the demon.' 

June 2016 - 'The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we'd be like a pet, or a house cat.'

July 2017 - 'I think AI is something that is risky at the civilisation level, not merely at the individual risk level, and that's why it really demands a lot of safety research.' 

July 2017 - 'I have exposure to the very most cutting-edge AI and I think people should be really concerned about it.'

July 2017 - 'I keep sounding the alarm bell but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.'

August 2017 -  'If you're not concerned about AI safety, you should be. Vastly more risk than North Korea.'

November 2017 - 'Maybe there's a five to 10 percent chance of success [of making AI safe].'

March 2018 - 'AI is much more dangerous than nukes. So why do we have no regulatory oversight?' 

April 2018 - '[AI is] a very important subject. It's going to affect our lives in ways we can't even imagine right now.'

April 2018 - '[We could create] an immortal dictator from which we would never escape.' 
