By: Clayton Bonigut | Writer
May 24, 2018
Alphabet is the parent company of Google, one of the largest tech and data companies in the world. They oversee the web searches, files, photos, programs, communications and more of billions of users on their platforms, so naturally there’s a common concern about the company’s moral commitment, because they hold what may be the most influential power in modern society.
Throughout their history, Alphabet and Google have actually done a fairly good job of staying within the lines that most people would call acceptable, as far as privacy and ethics are concerned. Their online service model gives users a large degree of control over how their data is used, and the projects they participate in are almost always meant to benefit the community at large. Erika Hunting (senior) likes Google for this reason, because “it feels like they’re being transparent with their users.” However, recent developments suggest that Alphabet is willing to stray from its morals.
Back in March, news broke that Google was partnering with the United States Department of Defense on “Project Maven,” an artificial intelligence program meant to help military drones interpret their surroundings. Neither party will give explicit details, but what we can infer is that the technology will in part be used to identify targets in war. This is a huge problem for the company: not only do they carry the weight of taking a side in war, they are potentially engaging in combat with their own users. Google provides internationally available online services, meaning it’s actually pretty likely they have a user base in every country of the world.
On top of this, there’s already a huge ongoing conversation about the morality of using AI in the physical world, where it has a direct effect on humans. Alphabet is already fielding criticism of its self-driving car program, Waymo (previously Google’s own project), over the ethical decisions it has to make regarding crash avoidance. People are skeptical of technology becoming “too smart” and causing damage to human life, but there’s also the concern that AI will make overly ambitious judgments that should ultimately be left to real people. With Project Maven, the fear is that AI will effectively be deciding who lives and who dies. By outsourcing this choice to AI, we might become even more detached from our actions, which could include harming or killing others.
Last month, over 3,000 Google employees signed a petition asking the company to cancel Project Maven entirely. Beyond the issues mentioned above, their particular concern is that Google is abandoning its core values by making contracts with the Pentagon. Unfortunately, the petition didn’t have the impact the employees were hoping for.
In continued protest against the decisions that Google and Alphabet are making, more than ten employees have formally resigned from Google, citing Project Maven as their reason for leaving. Erika Hunting thinks this was a smart move. “Google isn’t a company that should be competing for contracts in defense[…] They’re usually all about helping people; it’s good to see that employees are holding their company accountable.” This time, the protest is getting far more attention in the media, and Alphabet is being brought into the spotlight for its questionable practices.
Digging into the people behind Alphabet, critics are finding that the company’s hopes for AI technology are somewhat concerning. Alphabet’s CEO, Larry Page, has very unconventional thoughts about it. He has accused Elon Musk (an AI skeptic) of being “speciesist,” because Page believes that humans and human-created AI can ultimately be considered equals. While it’s certainly an interesting perspective, this is pretty much exactly what people fear, straight out of science fiction: AI taking over. Raiki Nishida (freshman) thinks that “he sounds kinda crazy.” I can’t say I disagree.
Overall, what we can hope is that Google will now be forced to properly assess the damage they might cause themselves by continuing with Project Maven, and perhaps also reconsider how ambitious they will be with AI projects in general. Uncharacteristically for Google, they’re not being very transparent about their thoughts on these issues. Let’s hope that changes.