Zuckerberg Teases AI ‘Brain Chip’… Google Coming for Your Face

Mark Zuckerberg Teases AI ‘Brain Chip’—But It Will Be Different From Elon Musk’s

If Silicon Valley were to put a team of tech bros together on a project to merge computers and people, the lineup wouldn’t be complete without car and rocket man Elon Musk and the Valley’s most dubious robot, Mark Zuckerberg.

In all fairness, Musk has already proposed a plan to build an actual chip meant to be implanted in human brains. (It will be wireless and discreet, so you “have no wires poking out of your head,” as Musk assured.) Zuckerberg has said he’s interested in the idea of computer-human integration, too, but will approach it differently than Musk.

At a recent internal employee Q&A session, the Facebook CEO hinted that such technology could see promising use cases in Facebook’s future augmented reality and virtual reality products.

“Brain-computer interface is an exciting idea,” Zuckerberg told employees, according to a meeting transcript leaked earlier this month. “The field quickly branches into two approaches: invasive and non-invasive… We’re more focused on—I think completely focused on non-invasive.”

“Non-invasive is like, you wear a band or glasses, you shine an optical light and get a sense of blood flow in certain areas of the brain,” he explained.

Google Is Coming for Your Face

Personal data is routinely harvested from the most vulnerable populations, without transparency, regulation, or principles—and this should concern us all.

Last week, The New York Times reported on the federal government’s plans to collect DNA samples from people in immigration custody, including asylum seekers. This is an infringement of civil rights and privacy, and opens the door to further misuse of data in the long term. People in custody have no meaningful way to consent to this collection of personal data. Nor is there any clarity on the limits on how this data may be used in the future. The DNA samples will go into the FBI’s criminal database, even though requesting asylum is not a crime and entering the country illegally is only a misdemeanor. That makes the practice not only an invasion of privacy in the present but also potentially a way to skew statistics and arguments in debates over immigration in the future.

The collection of immigrant DNA is not an isolated policy. All around the world, personal data is harvested from the most vulnerable populations, without transparency, regulation, or principles. It’s a pattern we should all be concerned about, because it extends all the way to the user agreements we click through again and again.

In February, the World Food Program (WFP) announced a five-year partnership with the data analytics company Palantir Technologies. While the WFP claimed that this partnership would help make emergency assistance to refugees and other food-insecure populations more efficient, it was broadly criticized within the international aid community for potential infringement of privacy. A group of researchers and data-focused organizations, including the Engine Room, the AI Now Institute, and DataKind, sent an open letter to the WFP, expressing their concerns over the lack of transparency in the agreement and the potential for de-anonymization, bias, violation of rights, and undermining of humanitarian principles, among other issues.

Many humanitarian agencies are struggling with how to integrate modern data collection and analysis into their work. Improvements in data technology offer the potential to improve processes and ease the challenges of working in chaotic, largely informal environments (as well as appealing to donors), but they also raise risks in terms of privacy, exposure, and the necessity of partnering with private-sector companies that may wish to profit from access to that data.
