Google says its AI software can't appear in weapons, "unreasonable" surveillance

Weeks after facing both internal and external blowback for its contract selling AI technology to the Pentagon for drone video analysis, Google on Thursday published a set of principles that explicitly states it will not design or deploy AI for "weapons or other technologies whose principal goal or implementation is to cause or directly facilitate injury to people".

This commitment follows protests from staff over the U.S. military's research into using Google's vision recognition systems to help guide drones.

Google insisted last week that its AI technology is not being used to help drones identify human targets, but told employees that it would not renew its contract after it expires in 2019.


Pichai's insistence that Google will continue to work with the military may be a signal that Google still plans to vie for the Joint Enterprise Defense Infrastructure (JEDI) contract, a 10-year, $10 billion cloud deal with the U.S. military that drew the attention of major tech companies like Amazon and Google. "So today, we're announcing seven principles to guide our work going forward," Pichai wrote.

More than 4,000 employees signed a petition calling for the cancellation of the Project Maven contract, citing Google's history of avoiding military work and worries about autonomous weapons. The company said on Thursday that if the principles had existed earlier, Google would not have bid for Project Maven. Several employees said that they did not think the principles went far enough to hold Google accountable. For instance, Google's AI guidelines include a nod to following "principles of international law" but do not explicitly commit to following international human rights law. To improve upon its principles, Google should commit to an independent and transparent review to ensure that its rules are properly applied, Asaro said.

Google said Thursday that it would not let its artificial intelligence (AI) tools be used for deadly weapons or surveillance.


"This approach is consistent with the values laid out in our original founders' letter back in 2004", Pichai wrote, citing the document in which Larry Page and Sergey Brin set out their vision for the company to "organise the world's information and make it universally accessible and useful". Peter Asaro, a technology professor at The New School, praised Google's ethical principles for their commitment to building socially beneficial AI, avoiding bias, and building in privacy and accountability.

Yet Google's cloud-computing unit, where the company is investing heavily, wants to work with the government and the Department of Defense, which are spending billions of dollars on cloud services. "For example, we will continue to work with government organizations on cybersecurity, productivity tools, healthcare, and other forms of cloud initiatives," Pichai wrote. Employees opposed to the Pentagon work had circulated an internal letter arguing that "Google should not be in the business of war".

