AI is getting cheaper: good or bad?

SAN FRANCISCO: If I told you AI is getting cheaper, what would your reaction be? A Silicon Valley startup recently unveiled a drone that can set a course entirely on its own. A handy smartphone app lets the user tell the airborne drone to follow someone, and once the drone starts tracking, its subject will find it remarkably hard to shake. The drone is meant to be a fun gadget — sort of a flying selfie stick. But it is not unreasonable to find this automated bloodhound a little unnerving.

On Tuesday, a group of artificial intelligence researchers and policymakers from prominent labs and think tanks in both the United States and Britain released a report that described how rapidly evolving and increasingly affordable AI technologies could be used for malicious purposes. They proposed preventive measures, including being careful with how research is shared: Don’t spread it widely until you have a good understanding of its risks.

AI experts and pundits have discussed the threats posed by the technology for years, but this is among the first efforts to tackle the issue head-on. And the little tracking drone helps explain what they are worried about. The drone, made by a company called Skydio and announced this month, costs $2,499. It was built from technological building blocks that are available to anyone: ordinary cameras, open-source software and low-cost computer chips. In time, putting these pieces together — researchers call them dual-use technologies — will become increasingly easy and inexpensive.

“This stuff is getting more available in every sense,” said one of Skydio’s founders, Adam Bry. These same technologies are bringing a new level of autonomy to cars, warehouse robots, security cameras and a wide range of internet services. But at times, new AI systems also exhibit strange and unexpected behaviour because the way they learn from large amounts of data is not entirely understood. That makes them vulnerable to manipulation.

“This becomes a problem as these systems are widely deployed,” said Miles Brundage, a research fellow at the University of Oxford’s Future of Humanity Institute and one of the report’s primary authors.

The report warns against the misuse of drones and other autonomous robots. But there may be bigger concerns in less obvious places, said Paul Scharre, another author of the report.

The rapid evolution of AI is creating new security holes. If a computer-vision system can be fooled into seeing things that are not there, for example, miscreants can circumvent security cameras or compromise a driverless car.
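One widely studied way of fooling a computer-vision system is the adversarial example: a tiny, deliberately chosen change to an image that flips a model's prediction while remaining almost invisible to a person. The sketch below illustrates one such technique, the fast gradient sign method; it is an illustration under stated assumptions, not part of the report itself, and it assumes PyTorch and torchvision are installed. The untrained model and the random "camera frame" are placeholders.

```python
# A minimal sketch of the fast gradient sign method (FGSM), one well-known way
# a computer-vision model can be nudged into a wrong prediction.
# Assumes PyTorch/torchvision; the model and image below are stand-ins only.
import torch
import torch.nn as nn
import torchvision.models as models


def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`.

    `image` is a (1, 3, H, W) float tensor in [0, 1]; `true_label` is a (1,)
    long tensor. The change is small per pixel but can flip the prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()


if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()  # untrained stand-in model
    frame = torch.rand(1, 3, 224, 224)            # placeholder "camera frame"
    label = torch.tensor([0])
    adv = fgsm_perturb(model, frame, label)
    print("max pixel change:", (adv - frame).abs().max().item())
```

In a real attack the perturbation would be computed against the deployed model (or a close surrogate), which is why hiding model details and limiting query access are among the defensive measures researchers discuss.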

Researchers are also developing AI systems that can find and exploit security holes in all sorts of other systems, Scharre said. These systems can be used for both defense and offense.
