David Tolfree, VP, Micro, Nano and Emerging Technologies Commercialisation Education Foundation (MANCEF)
Artificial Intelligence (AI), which is now ubiquitous, will lead us in new directions. Its increasing use is a hot topic, not only because of the opportunities it brings but also because of the concerns it raises about the disruptive effects it may have on the way we live and work.
AI is now embedded in the lexicon of 21st-century digital technology. Autonomous vehicles (driverless cars, trains and drones), intelligent manufacturing (smart factories), virtual home assistants (Alexa, Siri, etc.), auto diagnostics and robotic surgery are just some examples where AI is being applied. Advances in computer technology and machine learning have brought us to a time when almost every human endeavour depends on their use. They are the key elements of automation.
If we trace back to the time when humans first made machines, a primitive form of AI was already needed to instruct them to carry out physical tasks. A modern computer can be seen as an extension of the human brain, one that can be programmed to provide instructions for machines. This was exemplified when the first electronic computers were built in the 1940s: a machine language had to be created so that a digital computer could understand instructions, a simple binary code organised in patterns of 0s and 1s.
The human brain differs from a digital computer in that it operates in analogue mode. This gives the brain wider scope in analysing information, owing to its cognitive abilities (pattern recognition, awareness and creative thinking). Although present supercomputers do not possess these attributes, they do have greater storage capacity and faster data retrieval. However, a greater understanding of the mechanism of neural processing in the brain will be necessary before AI can match human intelligence. The operational speed of the human brain is estimated at about 2.2 bn megaflops; the fastest supercomputer runs at about 9.3 bn megaflops [1]. For comparison, an iPad manages about 170 megaflops [2].
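For a rough sense of scale, the figures quoted above can be compared directly. The short sketch below simply takes the article's own order-of-magnitude numbers and works out the ratios; it assumes nothing beyond those figures.

```python
# Rough scale comparison using the order-of-magnitude figures quoted in the text above.
# All values are in megaflops (millions of floating-point operations per second).
BRAIN = 2.2e9          # human brain, ~2.2 bn megaflops (estimate)
SUPERCOMPUTER = 9.3e9  # fastest supercomputer, ~9.3 bn megaflops (as quoted)
IPAD = 170.0           # iPad, ~170 megaflops

print(f"Supercomputer vs brain: ~{SUPERCOMPUTER / BRAIN:.1f}x")
print(f"Brain vs iPad:          ~{BRAIN / IPAD:,.0f}x")
```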
Numerous articles have been written about the applications and societal implications of AI, but in the context of this issue's industry focus, I will restrict my narrative to its use in, and impact on, medical diagnostics and healthcare. It is a sector that has benefited greatly during the last decade from developments in micro- and nanotechnologies, and one in which most governments are investing huge sums of money. Industry has responded by creating and manufacturing new types of precision diagnostic and analytical tools and surgical instruments that are now revolutionising medical practice. AI is also the foundation of the advancing use of robotic surgery.
What is AI and what does it mean?
A number of different definitions can be used for AI, but I believe the following is the simplest and most understandable: the ability of a digital computer or computer-controlled machine to efficiently perform tasks commonly associated with human intelligence.
The next advance, already being actively pursued with some success, will extend this term to computer-driven machines with the ability to discover meaning, learn from past experience to make reasoned decisions and, ultimately, re-create themselves. The latter possibility invokes both excitement and fear.
Computers are programmed to acquire, store, process and retrieve data and, if required, access databases held in other computers. They can activate and control machine-based processes and robots. There is, however, a significant dividing line between programmed functions and the ability to make autonomous decisions about what to do with the data.
The big question is how close are we to creating a computer-based machine that has intelligence equal to, or even exceeding, that possessed by the human brain? Before attempting to answer that question, I want to review the status of AI in medical diagnostics and healthcare.
Medical diagnostics
The rapid advance of technologies and operational methods used in medicine, coupled with huge improvements in computing power, data collection and processing, has produced a paradigm shift in diagnostic methods and analytical techniques. A new range of precision instruments and tools endowed with AI is now available. Some are described in this and previous editions of the magazine. Many can be connected via Wi-Fi to medical centres to access records and information.
Computers endowed with AI enable huge amounts of data to be accessed, processed and evaluated almost instantaneously, beyond the capability of the human brain. For example, in 2016, an innovative diagnosis was made at the University of Tokyo's Institute of Medical Science in Japan [3]. A computer was used to scan around 20 mn research papers and, using AI, identified a rare form of leukaemia in a patient in her 60s in just 10 minutes, something specialists could not have done in such a short timescale. The result helped doctors devise the optimal treatment and saved her life. This is one of many examples I found where AI has assisted a physician in making a more informed treatment decision.
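The Tokyo system itself is not described here, but the underlying idea, matching a patient's clinical profile against a very large body of published research, can be sketched crudely. The example below is a hypothetical illustration using TF-IDF similarity over an invented toy corpus (it assumes scikit-learn is available); it is not the method used in the cited case.

```python
# Illustrative only: ranking research abstracts by similarity to a patient profile.
# The corpus and the profile are invented toy data; real systems work at a vastly larger scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    "acute myeloid leukaemia with atypical gene mutations responding to targeted therapy",
    "diabetic retinopathy screening using fundus photography",
    "rare secondary leukaemia presenting with unusual mutation profile in elderly patients",
]
patient_profile = "elderly patient, suspected leukaemia, unusual mutation profile"

vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform(papers + [patient_profile])
scores = cosine_similarity(matrix[len(papers)], matrix[:len(papers)]).ravel()

# Highest-scoring abstracts are the most relevant matches for the clinician to review.
for score, paper in sorted(zip(scores, papers), reverse=True):
    print(f"{score:.2f}  {paper}")
```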
The fast, intelligent evaluation of medical knowledge stored in vast databases worldwide can now be made available to practitioners, enabling them to diagnose and treat disease more accurately. It removes ambiguity, saves lives and reduces the cost of treatment, particularly in the use of drugs. The over-prescription and misuse of antibiotics, for example, has produced bacterial resistance, making treatment less effective.
In the last year, researchers have reported that AI systems can diagnose diabetic eye disease, skin cancer and arrhythmias more effectively than human doctors. Clearly this is going to increase and advance the effectiveness of medical practice.
AI and image evaluation
Pattern recognition has its roots in AI. It is the study of how machines can learn to distinguish patterns and decide which category a new pattern belongs to. It is used in almost every sector, most familiarly in the printed patterns and barcodes used for identification.
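At its simplest, pattern recognition means assigning a new measurement to the category whose known examples it most resembles. The following sketch is a minimal nearest-centroid classifier on invented numbers, intended only to illustrate that idea.

```python
# Minimal nearest-centroid pattern classifier (illustrative toy example).
# Each "pattern" is a list of numeric features; categories are learned from labelled examples.

def train(examples):
    """Compute the mean feature vector (centroid) of each category."""
    grouped = {}
    for label, features in examples:
        grouped.setdefault(label, []).append(features)
    return {
        label: [sum(col) / len(col) for col in zip(*rows)]
        for label, rows in grouped.items()
    }

def classify(centroids, features):
    """Assign the pattern to the category with the nearest centroid (Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: distance(centroids[label], features))

# Invented training data: two feature measurements per pattern.
examples = [("A", [1.0, 1.2]), ("A", [0.9, 1.0]), ("B", [3.1, 2.8]), ("B", [2.9, 3.0])]
model = train(examples)
print(classify(model, [1.1, 1.1]))  # -> "A"
```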
Image-based diagnosis is one of the most important areas being developed for use in medicine. Magnetic resonance imaging (MRI) and computed tomography (CT) brain and body scans, now commonly used in hospitals, still rely mainly on human interpretation. AI can refine and speed up the analysis, helping clinicians to locate diseased sites, which is particularly important in finding cancerous tumours. The next step for AI would be identifying the exact nature of a disease from the data.
Visual pattern recognition software can store and compare tens of thousands of images, but at present, for lack of suitable algorithms, it is estimated to be only marginally more accurate than the average clinician's interpretation. This, together with the need for better access to the large amount of detailed medical knowledge relating to a particular clinical domain, has so far delayed the extensive use of AI in image analysis.
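Much current research on image-based diagnosis uses convolutional neural networks trained on large collections of labelled scans. The fragment below is a deliberately tiny, untrained network written with PyTorch (assumed to be installed), shown only to convey the shape of such a model; clinical systems are far larger and are trained on curated data.

```python
# A deliberately small convolutional network for illustration only.
# Real diagnostic models are much deeper and are trained on large labelled scan datasets.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # single-channel (greyscale) scan in
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),                  # two outputs: e.g. "suspicious" vs "clear"
)

scan = torch.randn(1, 1, 64, 64)                 # one synthetic 64x64 greyscale image
logits = model(scan)
print(logits.shape)                              # torch.Size([1, 2])
```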
AI and healthcare
Healthcare budgets are rising everywhere because new advances in medicine (diagnostics, drugs and surgical techniques) have raised public expectations. Coupled with an ageing population, this means that demand for healthcare will continue to rise.
Increased efficiency and new methodologies are therefore paramount if this demand is to be met. In my local health centre, there has recently been a noticeable improvement in practice systems, with more information being provided to patients at home, often obviating the need to consult a practitioner in person. Clearly AI is playing a role in increasing efficiency.
The building block of AI applications is the algorithm: a specific set of instructions that enables a computer to learn from data and make predictions based on it. This allows faster and more accurate diagnosis of disease and the selection of appropriate treatment.
Take cancer treatment as an example. Using consensus algorithms from experts in the field, along with the data that oncologists enter into a medical record (e.g. a patient's age, genetics, cancer staging and associated medical problems), a computer can review large numbers of established treatment alternatives and recommend the most appropriate combination of chemotherapy drugs for a patient.
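One crude way to picture this is as a scoring exercise: each candidate regimen carries expert-derived suitability rules, and the patient's record is scored against them. The sketch below is entirely hypothetical, with invented regimens, rules and weights, and is nothing like a real oncology decision-support system, but it shows the mechanics of ranking alternatives from structured patient data.

```python
# Hypothetical illustration of ranking treatment alternatives against a patient record.
# The regimens, rules and weights are invented; real systems encode expert consensus
# guidelines and far richer clinical data.

patient = {"age": 63, "stage": 2, "mutation": "HER2", "renal_impairment": False}

regimens = [
    {"name": "Regimen A", "max_age": 75, "stages": {1, 2}, "targets": {"HER2"}, "needs_good_renal": True},
    {"name": "Regimen B", "max_age": 85, "stages": {2, 3}, "targets": set(), "needs_good_renal": False},
    {"name": "Regimen C", "max_age": 70, "stages": {3, 4}, "targets": {"EGFR"}, "needs_good_renal": True},
]

def score(regimen, patient):
    s = 0
    s += 1 if patient["age"] <= regimen["max_age"] else -2
    s += 2 if patient["stage"] in regimen["stages"] else -2
    s += 3 if patient["mutation"] in regimen["targets"] else 0
    s += -3 if regimen["needs_good_renal"] and patient["renal_impairment"] else 0
    return s

# Rank the alternatives; the highest-scoring regimen is the suggestion for the clinician to review.
for regimen in sorted(regimens, key=lambda r: score(r, patient), reverse=True):
    print(regimen["name"], score(regimen, patient))
```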
Current healthcare programmes
All advanced nations have medical and healthcare programmes with various funding mechanisms. Funding for advancing medical diagnosis is included in the grand challenges of the UK government's Industrial Strategy. Up to £50 mn will be invested in creating centres of excellence in digital pathology and/or in-vivo imaging, including radiology and other imaging techniques. A programme entitled 'From data to early diagnosis and precision medicine' will develop AI and machine learning to maximise the use of digital imaging [4]. It is planned to fund up to five centres across the UK.
A recent study found that digital technologies, including AI, created a net total of 80,000 new jobs annually in the UK [5]. One estimate indicates that AI could add £232 bn to the UK economy by 2030.
The future
The growth in the use of AI in diagnostics and healthcare is producing significant benefits in assisting medical practitioners and is generally welcomed. The next big step will be when computers and machines (robots) have real intelligence and can make decisions from a diagnosis and then act upon it.
AI and robotics are fertile subjects for the predictions of futurists. I may be slightly guilty of this myself, but it is all but inevitable that both will be part of our future. It is too late to turn back, since both are deeply embedded in our technology. In general, they provide tangible benefits, some of which I have outlined above for medicine and healthcare. In addition, AI provides a competitive advantage in the global market for insights-driven businesses.
AI is fundamental to the design and operation of new robots. Most of us embrace virtual assistants, such as Amazon's Alexa, and humanoid robots, such as Hanson Robotics' Sophia and Honda's ASIMO. At present, they are seen not as threats but as amusements. They are, however, just the precursors to more seriously intelligent robots currently being developed for domestic use. With the internet of things (IoT) being established to connect home appliances and other services in the community, robots endowed with AI will take over many of the jobs and services currently provided by humans.
The predictions of futurist writers such as Ray Kurzweil, who is also a director of engineering at Google, should be treated with caution. His claims, although thought-provoking, raise concerns and can invoke fear about the future.
I have a copy of Kurzweil's book, The Singularity is Near, in which he shares his belief that machines will be smarter than humans by 2029 [6]. He describes this as the AI singularity point, where machines match human-level intelligence.
Kurzweil states: “Essentially we will use AI to intensify human intelligence—the same way a lens can intensify the power of the sun’s light. Ultimately, this new form of hybrid intelligence will enable us to solve humanity’s greatest challenges.”
Predictions are often based on a mixture of extrapolation and fantasy. They make good stories in books and publications, and they should neither be ignored nor taken too seriously. Whether humans will become machine hybrids in the future may depend primarily on our surviving a global pandemic for which there is no protection, unless advances in medicine can eliminate that possibility. The future for humanity is therefore uncertain, but that is what makes life a challenge.
References
1. The 50th TOP500 list of the fastest supercomputers in the world (November 2017). TOP500.org. Available at: www.top500.org/lists/2017/11/
2. Fischetti, M. (2011). Computers versus brains. Scientific American [online]. Available at: www.scientificamerican.com/article/computers-vs-brains/
3. AI provides doctors with diagnostic advice: how will AI change future medical care? (2017). Fujitsu Journal. Available at: http://journal.jp.fujitsu.com/en/2017/11/29/01/
4. Biddle, M. (2017). Industrial Strategy Challenge Fund: more challenges, more opportunities. Innovate UK. Available at: https://innovateuk.blog.gov.uk/2017/11/30/industrial-strategy-challenge-
5. Artificial intelligence could add £232 bn to UK GDP by 2030: PwC research. (2017). PwC UK. Available at: www.pwc.co.uk/press-room/press-releases/artificial-intelligence-could-add-232bn-to-UK-gdp.html
6. Kurzweil, R. (2005). The Singularity is Near. London: Duckworth Publishers.