The discussion around Artificial Intelligence (AI) can sound a lot like Brexit. It’s coming but we don’t know when. It could destroy jobs but it could create more. There are even questions about sovereignty, democracy and taking back control.

Yet even the prospect of a post-Brexit Britain led by Boris “fuck business” Johnson doesn’t conjure the same level of collective anxiety as humanity’s precarious future in the face of super-intelligent AI. Opinions are divided as to whether this technological revolution will lead us on a new path to prosperity or a dark road to human obsolescence. One thing is clear: we are about to embark on an age of rapid change the likes of which has never been experienced in human history.

From cancer to climate change, the promise of AI is to uncover solutions to our overwhelmingly complex problems. In healthcare, its use is already speeding up disease diagnoses, improving accuracy, reducing costs and freeing up doctors’ valuable time.

In mobility, the age of autonomous vehicles is upon us. Despite two high-profile fatal crashes involving Uber’s and Tesla’s self-driving systems in early 2018, companies and investors remain confident that self-driving cars will begin to replace human-operated vehicles as early as 2020. AI evangelists claim that removing human error from the road will dramatically reduce the world’s more than one million annual road deaths while also easing urban scourges like congestion and air pollution.

AI is also transforming energy. Google’s DeepMind has been in talks with the U.K. National Grid about using predictive machine learning to analyze demand patterns, maximize the use of renewables in the system and cut the country’s energy usage by as much as 10%.

In the coming decades autonomous Ubers, AI doctors and smart energy systems could radically improve our quality of life, free us from monotonous tasks and speed up our access to vital services.

But haven’t we heard this story of technological liberation before? From Facebook to the gig economy, we were sold a story of short-term empowerment that neglected the potential for long-term exploitation.

In 2011, many claimed that Twitter and Facebook had helped foment the Arab Spring, and eagerly applauded a new era of non-hierarchical connectivity that would empower ordinary citizens as never before. Fast forward seven years and those dreams seem to have morphed into a dystopian nightmare.

It has been well documented that the deployment of powerful AI algorithms has had devastating and far-reaching consequences for democratic politics. Personalization and data collection are employed not to enhance the user experience but to addict us and to profit from our manipulation by third parties.

Mustafa Suleyman, co-founder of DeepMind, has warned that, just like other industries, AI suffers from a dangerous asymmetry between market-based incentives and wider societal goals. The standard measures of business achievement, from fundraising valuations to active users, do not capture the social responsibility that comes with trying to change the world for the better.

One eerie example is Google’s recently launched AI assistant, promoted under the marketing slogan “Make Google do it”. The AI will now do tasks for you such as reading, planning, remembering and typing. Having already ceded concentration, focus and emotional control to algorithms, we are now, it seems, being asked to relinquish more fundamental cognitive skills.

This follows a growing trend of companies nudging us to give up our personal autonomy and trust algorithms over our own intuition. The issue has moved beyond privacy invasion to an effort to erode our control over, and trust in, our own minds. From dating apps like Tinder to Google’s new assistant, the underlying message is always that our brains are too slow, too biased, too unintelligent. If we want to succeed in our love, work or social lives, we are told, we must upgrade our outdated biological instincts to modern digital algorithms.

Yet once we begin to trust these digital systems to make our life choices, we will become dependent upon them. The recent Facebook–Cambridge Analytica scandal, in which personal data was misused to influence the U.S. election and the Brexit referendum, gives us a glimpse of the consequences of unleashing powerful new technology before it has been publicly, legally and ethically understood.

We are still in the dark as to how powerful these technologies are at influencing our behavior. Facebook has publicly stated that it has the power to increase voter turnout; the logical corollary is that it could also choose to suppress it. It is scandalous how beholden we are to a powerful private company, with no safeguards in place to protect democracy from manipulative technology before it is rolled out on the market.

A recent poll from the RSA reveals just how oblivious the public is to the increasing use of AI in society. It found that only 32% of people are aware that artificial intelligence is being used in decision-making contexts, dropping to 9% awareness of automated decision-making in the criminal justice system. Without public knowledge there is no public debate, and without public debate there is no demand for public representatives to ensure ethical conduct and accountability.

As more powerful AI is rolled out across the world, it is imperative that AI safety and ethics are elevated to the forefront of political discourse. If AI’s development and discussion continue to take place in the shadows of Silicon Valley and Shenzhen, and the public feel they are losing control over their society, then we can expect, much as with Brexit and Trump, a political backlash against the technological “elites”.

Long-Term Risks

Yet the long-term risks of AI will transcend politics and economics. Today’s AI is known as narrow AI because it is capable of achieving only specific, narrow goals, such as driving a car or playing a computer game. The long-term goal of many leading AI companies is to create artificial general intelligence (AGI). Narrow AI may outperform us in specific tasks, but an AGI would be able to outperform us in nearly every cognitive task.

One of the fundamental risks of AGI is that it would have the capacity to improve itself independently, moving along the spectrum of intelligence and advancing beyond human control. If this were to occur and a super-intelligent AI developed a goal misaligned with our own, it could spell the end for humanity. An analogy popularized by cosmologist and AI safety researcher Max Tegmark is the relationship between humans and ants: humans don’t hate ants, but if we are put in charge of a hydroelectric green-energy project and there is an anthill in the region to be flooded, too bad for the ants.

Humanity’s destruction of the natural world is rooted not in malice but in indifference to harming less intelligent beings as we set out to achieve our complex goals. If AI were to develop a goal that differed from humanity’s, we would likely end up like the ants.

Analyzing the current conditions of our world, it is clear that the risks of artificial intelligence outweigh the benefits. Given the political and corporate incentives of the twenty-first century, it is more likely that advances in AI will benefit a small class of people rather than the general population. It is more likely that the speed of automation will outpace preparations for a life without work. And it is more likely that the race to build artificial general intelligence will overtake the race to debate why we are developing the technology at all.

Read the original article here:

https://theconversation-room.com/2018/08/28/do-the-benefits-of-artificial-intelligence-outweigh-the-risks/

It’s time to adopt Artificial Intelligence and machine learning to grow your business. Aloha Technology can help you adopt AI & ML.

Author's Bio: 

Aloha Technology is a Business Outsourcing & IT Services Provider offering application development, product engineering and business services for any industry. Leading ISVs & SIs prefer us for their end-to-end product development requirements, as our services help them shrink product development time.