The five primary misconceptions of AI: Bias-free, self-learning, and more
Artificial intelligence (AI) and machine learning (ML) are becoming increasingly central to enterprise strategies and, in time, to technology implementations. But to get ahead, organisations have to avoid common misconceptions about the technology and its uses.
According to Gartner, organisations need to be wary of bias and ethical issues, as well as avoid falling into the trap of thinking that AI and ML will only replace repetitive, non-creative work. Most intriguingly, Gartner notes that organisations may need an AI strategy sooner than they thought.
Myth #1: AI works in the same way the human brain does
“Some forms of machine learning may have been inspired by the human brain, but they are not equivalent,” argued Alexander Linden, research vice president at Gartner. “Image recognition technology, for example, is more accurate than most humans, but is of no use when it comes to solving a problem.”
This is evident in the periodic human-versus-machine contests staged to show, firstly, how impressive the vendor’s kit is and, secondly, how differently and, hopefully, how quickly the system works through a problem. It doesn’t always go to plan: earlier this week IBM’s AI-powered debating computer lost to a human professional in a debate over whether pre-school should be subsidised.
“The rule with AI today is that it solves one task exceedingly well, but if the conditions of the task change only a bit, it fails,” added Linden.
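Linden’s point can be sketched with a deliberately tiny model. The classifier below is invented for illustration (a one-dimensional threshold rule, not any real system): it is perfect on data drawn under its training conditions, and collapses the moment those conditions shift.

```python
import random

random.seed(0)

# Train a trivial threshold "model" on 1-D data: predict class 1 if x > threshold.
train = [(random.uniform(0, 1), 0) for _ in range(50)] + \
        [(random.uniform(2, 3), 1) for _ in range(50)]

threshold = sum(x for x, _ in train) / len(train)  # midpoint heuristic, ~1.5

def predict(x):
    return 1 if x > threshold else 0

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

# Fresh data drawn under the same conditions as training: the model is perfect.
in_dist = [(random.uniform(0, 1), 0) for _ in range(50)] + \
          [(random.uniform(2, 3), 1) for _ in range(50)]

# Shift the task "only a bit" (every input moved up by 3) and it fails:
shifted = [(x + 3, y) for x, y in in_dist]

print(accuracy(in_dist))   # 1.0
print(accuracy(shifted))   # 0.5 - every shifted point now lands above the threshold
```

The failure mode is exactly the one Linden describes: nothing about the task’s logic changed, only its conditions, yet the system is reduced to guessing.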
Myth #2: Intelligent machines learn on their own
Much is said and written about machine learning algorithms improving through constant learning. Yet this often requires human intervention and direction; algorithms left entirely to their own devices tend not to be the most reliable.
Earlier this month a study argued that autonomous pricing algorithms could collude. The research, using experiments with pricing algorithms powered by AI in a controlled environment, “demonstrated that even relatively simple algorithms systematically learn to play sophisticated collusive strategies. Most worrying is that they learn to collude by trial and error, with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude.”
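The basic ingredient the study describes, agents that set prices purely by trial and error on profit feedback, never communicating, can be sketched in a few lines. This toy is not the study’s setup: its payoffs are invented, and its agents are memoryless, so they settle on the competitive price; the researchers’ agents tracked recent prices, which is what let collusive strategies emerge.

```python
import random

random.seed(1)

PRICES = [1, 2]                 # 1 = competitive price, 2 = collusive price
EPS, ALPHA, ROUNDS = 0.1, 0.1, 20000

def profit(own, rival):
    # Toy duopoly payoffs: undercutting the rival captures the whole market.
    if own < rival:
        return 3                # undercut the rival, win all demand
    if own > rival:
        return 0                # undercut by the rival, sell nothing
    return 1 if own == 1 else 2 # matched prices: split the (bigger) pie

# One Q-value per price per agent; the agents never observe each other directly.
q = [{p: 0.0 for p in PRICES}, {p: 0.0 for p in PRICES}]

def choose(agent):
    if random.random() < EPS:   # occasional exploration
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: q[agent][p])

for _ in range(ROUNDS):
    a, b = choose(0), choose(1)
    for agent, own, rival in ((0, a, b), (1, b, a)):
        # Plain trial-and-error update from profit feedback alone.
        q[agent][own] += ALPHA * (profit(own, rival) - q[agent][own])

preferred = [max(PRICES, key=lambda p: q[i][p]) for i in range(2)]
print(preferred)   # these memoryless agents end up preferring the competitive price
```

The gap between this sketch and the study’s result is precisely Gartner’s point about intervention: what such systems learn depends heavily on how they are designed and what they are allowed to remember, not on any innate tendency to learn the “right” thing.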
Gartner noted the importance of continually updating software to integrate new knowledge and data into further learning cycles.
Myth #3: AI can be free of bias
Those who have seen recent tests in this area will know that, for now at least, this statement is about as authentic as the proverbial nine bob note.
In July, the American Civil Liberties Union (ACLU) ran a test of Amazon’s facial recognition technology and concluded that it erroneously labelled profiles with darker skin colours as criminals. Amazon Web Services (AWS) disputed the methodology. At the start of this year Joy Buolamwini, founder of the Algorithmic Justice League, re-emphasised the importance of fighting algorithmic bias in a major speech.
Ultimately, AI systems can reflect the bias of whoever programmed them. “Today, there is no way to completely banish bias,” said Linden. “However, we have to try and reduce it to a minimum.
“In addition to technological solutions, such as diverse datasets, it is also crucial to ensure diversity in the teams working with the AI, and have team members review each other’s work,” he added. “This simple process can significantly reduce selection and confirmation bias.”
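One simple review along these lines is to measure error rates per group rather than a single overall accuracy. The data below is invented for illustration only; it mimics the shape of the ACLU-style finding, where a respectable headline accuracy hides a much higher error rate for one group:

```python
from collections import defaultdict

# Toy match results as (group, predicted, actual); none of this is real data.
results = [
    ("lighter", 1, 1), ("lighter", 0, 0), ("lighter", 1, 1), ("lighter", 0, 0),
    ("lighter", 1, 1), ("lighter", 0, 0), ("lighter", 1, 1), ("lighter", 0, 0),
    ("darker", 1, 0), ("darker", 1, 1), ("darker", 0, 0), ("darker", 1, 0),
]

def error_rate_by_group(rows):
    errs, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in rows:
        totals[group] += 1
        errs[group] += predicted != actual
    return {g: errs[g] / totals[g] for g in totals}

overall_accuracy = sum(p == a for _, p, a in results) / len(results)
print(overall_accuracy)            # ~0.83 - looks respectable in aggregate
print(error_rate_by_group(results))  # lighter: 0.0, darker: 0.5
```

A check like this doesn’t banish bias, but it makes disparities visible early, which is the precondition for the dataset and team-review fixes Linden recommends.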
Myth #4: AI will only replace repetitive jobs that don’t require advanced degrees
It is an optimistic view: don’t worry, the thinking goes, jobs won’t go, because the work done by AI systems will be restricted to mundane tasks. The first part may be correct, but the second isn’t. And some organisations don’t even grant the first part: note the famous case of Fukoku Mutual Life Insurance, which at the start of 2017 laid off more than 30 staff to replace them with an AI system.
As Gartner noted, plenty of complex tasks are being augmented today. Take imaging AI in healthcare as an example: chest X-ray applications can detect disease more quickly than radiologists, while robo-advisors are being put to work in wealth management and fraud detection. Human involvement is not eliminated; it instead shifts to more complex problems.
Myth #5: Not every business needs an AI strategy
Some organisations may be reluctant, but they need to see AI as simply the next step in automation, and if that is the case they can’t afford to miss out. “Even if the current strategy is ‘no AI’, this should be a conscious decision based on research and consideration,” said Linden. “And, as with every other strategy, it should be periodically revisited and changed according to the organisation’s needs.
“With AI technology making its way into the organisation, it is crucial that business and IT leaders fully understand how AI can create value for their business and where its limitations lie,” Linden added. “AI technologies can only deliver value if they are part of the organisation’s strategy and used in the right way.”