Ever since algorithms began recognizing patterns faster and better than humans, computers have been making doctors’ lives easier and diagnoses more accurate. But widely used tools like automated cell counters, which can quickly point to diseases like malaria and leukemia by getting a head count on different kinds of blood cells, are beginning to look quaint next to the deep learning and neural networks coming online. Today, hospitals can outfit their existing computer systems with a $1,000 graphics processor and boost their capacity to 260 million images per day. That’s basically equivalent to all the MRIs, CT scans, and other images that all the radiologists in America look at each day.
Unleashing that kind of AI on the medical world’s mountains of patient data could speed up diagnoses and get patients on the path to recovery much sooner. But it also promises to drastically change the job description for doctors who identify as information specialists—those whose primary tasks involve deciphering diagnoses from images. Doctors who get their MDs in image interpretation, namely pathologists, radiologists, and dermatologists, are the most vulnerable. “These three areas will be the first strike,” says Eric Topol, director of the Scripps Translational Science Institute and a leader in the NIH’s Precision Medicine Initiative. “Then we’ll start to see it across the board for medicine.”
Take skin cancer. Each year, five million American moles, freckles, and skin spots turn out to be malignant, costing the healthcare system $8 billion. Catching deadly cancers like melanoma early makes a huge difference—survival rates drop from 98 percent to as low as 16 percent if the disease progresses to the lymph nodes.
Dermatologists use a variety of magnifying instruments to spot potentially dangerous blemishes, and because the outcomes can be so disastrous, they tend to be a cautious bunch. For every 10 lesions surgically biopsied, only one melanoma gets discovered. That’s a lot of unnecessary knifing.
So doctors are now turning to artificial intelligence to tell the difference between innocuous and potentially fatal blotches. The hope is that computer vision, with its ability to make thousands of tiny measurements, will catch cancers early enough and with enough specificity to cut down on the amount of cutting doctors do. And by initial measures, it’s well on its way. Computer scientists and physicians at Stanford University recently teamed up to train a deep learning algorithm on 130,000 images of 2,000 skin diseases. The result, the subject of a paper out today in Nature, performed as well as 21 board-certified dermatologists in picking out deadly skin lesions.
The researchers started with a Google-developed algorithm primed to differentiate cats from dogs. Then they fed it images from medical databases and the web and taught it to differentiate between a malignant squamous cell carcinoma and a patch of scratchy dry skin. Like an outstanding dermatology resident, the more images it saw, the better it got. “It was definitely an incremental process, but it was exciting to see it slowly be able to actually do better than us at classifying these lesions,” said Roberto Novoa, the Stanford dermatologist who first contacted the school’s AI group about collaborating on skin cancer.
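The transfer-learning recipe described above—reuse a network pretrained on everyday images, then retrain its final layer on medical labels—can be sketched in miniature. The toy sketch below assumes nothing from the Stanford paper: a frozen random projection stands in for the pretrained backbone, synthetic vectors stand in for images, and a new logistic-regression head is trained from scratch on the (invented) benign/malignant labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained backbone: a frozen random projection.
# In real transfer learning this would be a deep network trained on
# everyday images (cats vs. dogs), with its weights left untouched.
def extract_features(images, W_frozen):
    return np.tanh(images @ W_frozen)

# Synthetic "images": two classes separated along a hidden direction.
n, d, k = 400, 64, 16
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)              # 0 = benign, 1 = malignant (toy)
images = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, direction)

W_frozen = rng.normal(size=(d, k)) / np.sqrt(d)  # "pretrained" weights, frozen
X = extract_features(images, W_frozen)

# New classification head, trained from scratch on the medical labels.
w, b = np.zeros(k), 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # sigmoid predictions
    grad_w = X.T @ (p - labels) / n              # logistic-loss gradients
    grad_b = np.mean(p - labels)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean(((X @ w + b) > 0) == labels)
print(f"head-only training accuracy: {accuracy:.2f}")
```

The point of the trick is that the expensive part—learning general visual features—is done once on abundant everyday images, so the medical dataset only has to teach the final layer what those features mean for lesions.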
Stanford’s robo-derm may be pure research at this point, but there are plenty of AI start-ups (more than 100) and software giants (Google, Microsoft, IBM) working to get deep learning into hospitals, clinics, and even smartphones. Last year, a team of Harvard and Beth Israel Deaconess researchers won an international imaging competition with a neural network that could detect metastatic breast cancer just by looking at pathology slide images from lymph nodes. The researchers are now commercializing the technology through a spinoff called PathAI. IBM’s artificial intelligence engine, Watson, has also been working on identifying skin cancers, when it’s not analyzing CT scans for blood clots or watching for wonky heart wall motion in echocardiograms. With 30 billion images and counting, Watson will soon have specialized knowledge in all the big imaging fields—radiology, pathology, and now dermatology—setting it up to be either a doctor’s best friend or biggest nemesis.
The key to avoiding being replaced by computers, Topol says, is for doctors to allow themselves to be displaced instead. “Most doctors in these fields are overtrained to do things like screen images for lung and breast cancers,” he says. “Those tasks are ideal for delegation to artificial intelligence.” When a computer can do the job of a single radiologist, the job of the radiologist expands—perhaps to monitoring multiple AI systems and using the results to make more comprehensive treatment plans. Less time drawing on X-rays, more time talking patients through options.
That’s exactly what cloud-based medical imaging company Arterys is doing for cardiologists, with an application that uses AI to quantify blood flowing through the heart. The algorithm, which is based on about 10 million rules, uses MRI images to produce contours of each of the heart’s four chambers, precisely measuring how much blood they move with each contraction. Today, cardiologists have to draw these contours by hand—especially tricky with the peanut-shaped right ventricle. Doctors usually need 30 to 60 minutes to calculate the volume of blood transported with each pump. But Arterys’s AI comes up with the answer in 15 seconds.
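Once the chamber contours exist—whether drawn by hand over 30 to 60 minutes or produced by software in 15 seconds—the blood-volume arithmetic itself is straightforward. The sketch below assumes nothing about Arterys’s actual implementation; it uses the standard disc-summation idea (each contoured MRI slice contributes area times slice thickness) with invented toy numbers, then derives stroke volume and ejection fraction.

```python
# Disc-summation of ventricular volume from per-slice MRI contours:
# each short-axis slice contributes (contour area) * (slice thickness).
# All areas and thicknesses below are invented toy numbers, not clinical data.

def chamber_volume(slice_areas_cm2, thickness_cm):
    """Sum contoured slice areas into a chamber volume in mL (1 mL = 1 cm^3)."""
    return sum(a * thickness_cm for a in slice_areas_cm2)

# Contour areas for one ventricle at end-diastole (full) and
# end-systole (contracted), in cm^2, over 8 slices spaced 1 cm apart.
ed_areas = [4.0, 9.0, 14.0, 17.0, 17.0, 14.0, 9.0, 4.0]
es_areas = [1.5, 4.0, 6.5, 8.0, 8.0, 6.5, 4.0, 1.5]

edv = chamber_volume(ed_areas, 1.0)   # end-diastolic volume, mL
esv = chamber_volume(es_areas, 1.0)   # end-systolic volume, mL
stroke_volume = edv - esv             # blood moved per contraction, mL
ejection_fraction = stroke_volume / edv

print(f"EDV {edv:.0f} mL, ESV {esv:.0f} mL, "
      f"SV {stroke_volume:.0f} mL, EF {ejection_fraction:.0%}")
```

The hard part that AI takes over is producing the contours in the first place, slice by slice and frame by frame; the volumes and ratios that cardiologists actually act on fall out of a few lines of arithmetic like this.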
Earlier this month the FDA cleared the company to market its product, and with a partnership with GE Healthcare to get the Arterys system in GE MRI machines, doctors could be using it as soon as this year. The decision opens up the path for more applications of deep learning AI to get into the hands of doctors as fast as companies can train them. Whether or not doctors use them will be the first true test of the technology’s potential to improve patient care.