• News

"Demystifying The Black Box That Is Artificial Intelligence" - Ariel Bleicher

  • Scientific American
  • New York, NY
  • (August 09, 2017)

Today’s AI presents a conundrum: the most capable technologies – namely, deep neural networks – are notoriously opaque, offering few clues as to how they arrive at their conclusions. But if consumers are to, say, entrust their safety to AI-driven vehicles or their health to AI-assisted medical care, they will want to know how these systems make critical decisions. Deep learning allows neural nets to create AI models that would be too complicated or too tedious to code by hand. These models can be mind-bogglingly complex, with the largest nearing one trillion parameters. “What’s cool about deep learning is you don’t have to tell the system what to look for,” said Joel Dudley, PhD, director of the Next Generation Healthcare Institute and associate professor of genetics and genomic sciences, and of population health science and policy, at the Icahn School of Medicine at Mount Sinai. This flexibility allows neural nets to outperform other forms of machine learning – which are limited by their relative simplicity – and sometimes even humans. An experimental neural net at Mount Sinai called Deep Patient can forecast whether a patient will receive a particular diagnosis within the next year, months before a doctor would make the call. Dr. Dudley and his colleagues trained the system by feeding it 12 years’ worth of electronic health records, including test results and hospital visits, from 700,000 patients. “We showed it could predict 90 different diseases, ranging from schizophrenia to cancer to diabetes, with very high accuracy – without ever having talked to an expert,” Dr. Dudley added.

- Joel Dudley, PhD, Director, Next Generation Healthcare Institute, Associate Professor, Genetics and Genomic Sciences, Population Health Science and Policy, Icahn School of Medicine at Mount Sinai
