Neerja Deodhar | Sep 12, 2019 09:52:14 IST
There’s a fascinating video of a robot folding clothes with precision and accuracy, right out of an Asimovian vision. It was shot 10 years ago at Berkeley, which makes one wonder how much further machine intelligence may have progressed in the decade since. It’s all impressive until you realise that the video is playing at 50x the speed at which it was originally shot. It’s an example of Moravec’s paradox: what is easy for machines is tough for humans, and what is easy for humans is tough for machines (high-level reasoning requires little computation, but low-level sensorimotor skills demand massive computational resources).
Still, the day is not far when machines will be as smart as human beings. Artificial intelligence expert Toby Walsh, who was speaking at the 2019 edition of Mountain Echoes literature festival, narrows down that day to 2062. “The title of my new book, 2062: The World That AI Made, was chosen by 300 of my colleagues. I asked them about when machines will be as smart as us, and they arrived at the year 2062. Through my book, I want to encourage people to think of the implications of such a development,” he said.
Walsh spoke about how the brain – though it consumes more energy than any other organ in the body (still a measly 20 watts) – has its own limitations. “It is terribly conceited to think that we are as smart as can be,” he asserted. He also spoke about how machine intelligence and human intelligence cannot be thought of in the same way. “People speak of intelligence but forget to add the word artificial before it. This is to say that the intelligence of humans and machines is different, in the same vein that birds and airplanes can both fly, using different methods.”
Speaking about the sheer progress made in AI, he showed us a video of a phone app that translates languages in real time. “We have more computing power in this phone than the NASA satellite that went to Mars.” The more pressing concern is the secondary consequences of such technological advancements, he said. For example, most automobile companies say that they will have Level 5 autonomous cars by the year 2025, which means that passengers won’t necessarily need to be awake during their commute, let alone actively drive the cars. This changes our relationship with both vehicles and the city, he explained. It means that people can re-imagine their vehicles as living spaces, and that they needn’t account for long commutes during waking hours – the car could drive them to work while they’re asleep, allowing for long-distance travel.
But the picture isn’t always rosy; consider ‘deep-fakes’ and facial recognition. If recognition software can be used to identify one criminal from among 60,000 others at a concert, it could very well be used to identify individuals who stage protests or dissent against governments.
One of the fears associated with advances in artificial intelligence is that large swathes of the population will lose their jobs. Terming this worry misplaced, Walsh said, “Machines can’t do many things… Moreover, we should celebrate when machines execute dull, repetitive jobs. And new technology is always accompanied by new jobs.”
“There will be plenty of jobs we won’t let machines do… A conscious decision can also be taken to not include some tech, to rely more heavily on human values in some situations. Technology is not destiny,” he said. He did acknowledge, however, that there will be a foreseeable period of disruption where people who have been displaced will have to be supported. “Inevitably, they will have to learn new skills and schools will have to impart these new skills,” he said.
When an audience member raised the question of the blunting of skills that accompanies the adoption of technology (for example, poorer mental math abilities because of greater reliance on calculators), Walsh said that we must measure what we gain in return when a machine takes over. He cited the example of remembering phone numbers – a skill necessary up until the mid-2000s that no one needs anymore – and noted that our communication has not been hindered by this loss. Another example is that of the healthcare sector, where computers have been shown to make more accurate predictions than humans in some diagnostic tasks.
Though advances are being made in facial recognition technology, experts still find that the algorithms suffer from both gender and racial biases. Speaking to Firstpost about this subject, he said, “We were sold a lie by Silicon Valley for years that algorithms are unbiased. Algorithms have all the biases of humans; in fact they’re worse because they can’t be held responsible. There are a number of intentional and unintentional ways that these biases can creep in – the subconscious bias of the developers, who often come from a narrow demographic (white, male), and also through the data sets, which may not be representative.” Biases cannot be entirely eliminated from our digital lives, he said, because a bias is the reason why you are recommended one thing over another, such as products on a shopping website. He added that it is important to recognise biases as an issue and interrogate them at the very start.
Among the many anxieties that increasingly plague newsrooms across the world is reporters being replaced by computers. Walsh said that while this fear is not unsubstantiated, the consequences aren’t as far-ranging as one may imagine them to be. “Unfortunately, there are thousands of articles being written everyday by computer programmes. Many of the financial and sports reports – pieces that require number crunching – are written automatically by computers. They’re only one or two paragraphs long, but they write quite convincing stories quicker, for far cheaper and more accurately than humans do. Longform journalism is very safe, because we’re a long way from having machines do in-depth analysis or obtaining data from different sources,” he explained.
Hours before I interviewed Walsh, a video of people in Hong Kong toppling a facial recognition tower amid ongoing protests went viral. I juxtaposed this with the newly emerging conversation about using technology to clean sewers – a solution to the dehumanising, oppressive caste-based occupation of manual scavenging, which has been declared illegal but remains rampant. On the one hand, governments are using technological progress to thwart rights; on the other, they arrive at equality-driven solutions at a very belated stage. I asked Walsh how AI and tech can be made more people-oriented.
“These are difficult questions and we don’t have perfect answers to them yet… We have given the technology sector too much freedom to act. There’s a slogan at Facebook, and at much of Silicon Valley: ‘Disruption first, fix later.’ That’s brought decades of great innovation. But some of those changes, we’re now discovering, are quite harmful – the way elections are being disrupted, and not necessarily in a democratic way. There was also a misconception that the digital space could not – and should not – be regulated, that we shouldn’t try to enact rules because it would stifle innovation. But we’re discovering now that we can, and we should,” he explained.
Walsh cited three examples of progress, one of which was regulation at the regional level, such as the General Data Protection Regulation in Europe. The second was making tech companies pay taxes to ensure accountability. “They sit on mountains of cash, and if they did pay tax, it would not stifle their growth. A number of countries like the UK and Australia have now instituted Google taxes – if companies fail to pay tax, they’re charged a certain percentage of their global turnover,” he explained. The third example was the Christchurch massacre and the consequent discussions about the live streaming of this terrorist attack. “A special law was passed in Australia immediately, which held the owners of social media platforms responsible if they allowed such content to exist on their platforms,” he said.
The notion of ‘Disruption first, fix later’ seems to be the central conflict in many episodes of the award-winning dystopian sci-fi series Black Mirror – except the ‘fix later’ is usually a post-script. I asked Walsh if there is a way around this approach, to instead try to fully comprehend the consequences of the technology we build, and so prevent dystopian fiction from turning into reality. “There are many efforts around the world that are trying to determine how we can innovate in responsible ways. We can’t keep breaking the world in the way that we have in the last 20 years… A lot of it comes down to education – educating the people who build the technology, using tech while considering risks and limitations. Equally, it is worth being optimistic as well. It’s worth remembering that human beings are terrible at making decisions, and that machines may be better than us in this respect – at making more unbiased, principled decisions – but it’s not easy and it requires a lot of work. It also won’t happen easily,” he concluded.