By Rajeev Sharma, SVP and global head of enterprise AI solutions & cognitive engineering at AI consultancy firm Pactera EDGE

Around the world, hundreds of millions of people have been tested for Covid-19 by now. As anyone who has taken a test and waited for the results knows, the procedure is crude, time-consuming, and inefficient. Testing for and diagnosing this horrible virus has also put more strain on physicians and hospitals that are already under unimaginable stress. What if we could use artificial intelligence (AI) to perform better and faster testing in the future? It is a reasonable question, and we may arrive at an answer sooner than you think.

The healthcare industry is in the early stages of understanding how to use AI to diagnose medical conditions. But the industry has strong motivation to start unlocking AI’s potential to help physicians break through the laborious and difficult process of diagnostic testing. The cataclysmic pandemic has highlighted the enormous stress that physicians experience treating medical problems – stress that predates the pandemic, too. The more technology can help them diagnose problems faster and more easily, the more they can focus on treating patients with their burden eased. Here are two important ways AI can help:

  • Reducing mistakes and improving care. According to the Institute of Medicine at the National Academies of Sciences, Engineering, and Medicine (NASEM), diagnostic errors contribute to approximately 10 percent of patient deaths. AI can cut down on mistakes and improve care quality by processing vast amounts of healthcare data, ranging from medical images to doctor reports, and detecting patterns that no human being, however talented, could possibly find. For example, machine vision applications can help pathologists more accurately identify diseases in bodily fluids and tissues (a simplified sketch of such an image classifier follows this list). AI can also improve care by diagnosing problems faster than human beings can. Google researchers, for example, recently showed that a well-trained neural network detected lung cancer in medical images faster than trained radiologists, and AI has detected breast cancer with 95 percent accuracy.
  • Rare diseases. One of the most vexing issues physicians face is treating rare diseases: by their nature they are encountered far less often than other ailments and can therefore be more difficult to detect. Children born with rare diseases are misdiagnosed 40 percent of the time. Part of the problem is that physicians have a harder time connecting seemingly disparate symptoms to a rare condition. Health tech nonprofit startup Foundation 29 is developing a tool called Dx29 that uses AI to sift through patient data, interpret genetic tests, and explore possible pathologies faster than physicians can. But the initiative is young and needs more participation from patients willing to share data so the AI can be trained on what to look for.
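
To make the image-analysis idea concrete, here is a minimal sketch of how a team might fine-tune a pretrained convolutional network to flag suspicious findings in medical images. This is an illustration only, not Google's system or a clinical-grade pipeline; the placeholder data, labels, and hyperparameters are assumptions for demonstration.

```python
# A minimal sketch: fine-tuning a pretrained CNN to flag suspicious findings in images.
# The random arrays below stand in for a curated, labeled set of de-identified scans.
import numpy as np
import tensorflow as tf

IMG_SIZE = (224, 224)

images = np.random.rand(32, *IMG_SIZE, 3).astype("float32")   # placeholder scans
labels = np.random.randint(0, 2, size=(32,))                  # 0 = no finding, 1 = suspicious

# Reuse generic visual features learned on ImageNet instead of training from scratch.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(*IMG_SIZE, 3), pooling="avg"
)
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of a suspicious finding
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

model.fit(images, labels, epochs=1, batch_size=8)    # real training needs far more data
print(model.predict(images[:4]))                     # per-image risk scores for clinician review
```

The key design point is that the model's output is a risk score meant to prioritize cases for a clinician's review, not a final diagnosis.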

We have a long way to go on our journey, and many obstacles remain. They include:

  • Data inaccuracy: AI is only as good as the data it uses, and AI systems need vast amounts of data, including images, to be trained properly. Data curation involves collecting and labeling text, images, audio, video, speech, and other data to improve machine learning models. Yet data curation is enormously challenging to manage correctly, and the process is fraught with pitfalls: the quality of those images has a direct impact on how accurately AI can diagnose a problem.
  • Lack of inclusivity: AI data curation can also be hampered by bias and a lack of inclusivity. Google recently shared a preview of an AI-powered dermatology assist tool that helps people understand issues related to their skin, hair, and nails. The tool received criticism for failing to adequately account for people of color. Although the dermatology assistant is not intended to formally diagnose medical issues, the controversy underlines how hard it is to collect data that accounts for the diversity of society. A simple curation audit, sketched below, can surface such gaps early.
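
Here is a minimal sketch of that kind of audit, run on a hypothetical labeled-image manifest before training: it checks for missing labels and for under-represented demographic groups. The field names, skin-type groupings, and threshold are illustrative assumptions, not a prescribed standard.

```python
# A minimal curation audit on a hypothetical dataset manifest: catch missing labels
# and demographic coverage gaps during curation, not after deployment.
from collections import Counter

manifest = [  # stand-in for a real manifest exported from a labeling platform
    {"image": "img_001.png", "label": "eczema",    "skin_type": "I-II"},
    {"image": "img_002.png", "label": "psoriasis", "skin_type": "I-II"},
    {"image": "img_003.png", "label": None,        "skin_type": "V-VI"},
    {"image": "img_004.png", "label": "eczema",    "skin_type": "III-IV"},
]

missing = [r["image"] for r in manifest if not r["label"]]
coverage = Counter(r["skin_type"] for r in manifest)
total = sum(coverage.values())

print(f"Records with missing labels: {missing}")
for group, count in sorted(coverage.items()):
    share = count / total
    flag = "  <-- under-represented" if share < 0.25 else ""
    print(f"Skin type {group}: {count} images ({share:.0%}){flag}")
```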

I believe that one way to overcome both of these obstacles is to apply a human-centered approach to AI. A human-centered approach means combining a strong technology platform that curates data at scale with a diverse pool of humans in the loop for tasks such as labeling and validation. It also means working with under-represented communities to achieve the diversity that data programs need (age groups, genders, geographies, ethnicities, cultures, and languages), to build more inclusive AI-based products, and to reduce bias. It’s not up to the medical profession to figure all this out alone. Only a strong partnership with technology providers will help bring about a breakthrough.
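
As a rough illustration of what "humans in the loop" can look like in practice, the sketch below aggregates labels from several independent annotators, accepts cases with clear consensus, and routes disagreements to an expert reviewer. The case IDs, labels, and consensus threshold are hypothetical.

```python
# A minimal human-in-the-loop sketch: accept consensus labels, escalate disagreements.
from collections import Counter

annotations = {  # case_id -> labels from independent annotators in a diverse pool
    "case_101": ["benign", "benign", "benign"],
    "case_102": ["benign", "suspicious", "suspicious"],
    "case_103": ["suspicious", "benign", "unsure"],
}

CONSENSUS = 2 / 3  # minimum agreement required to accept a label automatically

for case_id, labels in annotations.items():
    label, votes = Counter(labels).most_common(1)[0]
    if votes / len(labels) >= CONSENSUS:
        print(f"{case_id}: accept '{label}' ({votes}/{len(labels)} agreement)")
    else:
        print(f"{case_id}: no consensus -> escalate to expert review")
```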
