Imagine enduring a painful, expensive and scar-inducing surgery—only to find out afterwards that it wasn’t necessary.
This is the situation for many women with high-risk breast lesions—areas of tissue that appear suspicious on a mammogram and have abnormal but not cancerous cells when tested by needle biopsy. Following surgical removal, 90 percent of these lesions end up being benign.
A change in the standard of care could be on the horizon thanks to researchers at Massachusetts General Hospital and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who have found a more precise and less invasive way to separate harmful lesions from benign ones.
“The decision about whether or not to proceed to surgery is challenging, and the tendency is to aggressively treat these lesions [and remove them],” said Manisha Bahl, MD, Director of the Breast Imaging Fellowship Program at Mass General, in a recent interview.
Bahl, along with a team of researchers, has harnessed the power of artificial intelligence (AI) to develop a more accurate and less invasive screening method for high-risk lesions. When tested on 335 high-risk breast lesions, the model correctly identified 97 percent of the malignancies and reduced the number of benign surgeries by more than 30 percent compared with existing approaches. These results were recently published in Radiology.
The team developed an AI system that uses machine learning to distinguish high-risk lesions that need to be surgically removed from those that can simply be monitored over time. They created this model by feeding it data on over 600 high-risk lesions, including information on the patients' demographics and pathology reports, and then tasked it with identifying patterns among the different data elements.
Through a process called deep learning, the machine uses the data to create an algorithm that can be used to predict which high-risk lesions should be surgically removed. This process differs from traditional software programming in that the researchers did not give the machine the formula for diagnosis, but rather let it analyze the data and identify patterns on its own.
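The researchers' actual model isn't detailed here, but the core idea—letting a program learn a decision rule from labeled examples rather than being handed a formula—can be sketched in a few lines of Python. Everything below is an illustrative assumption: the two features (age, lesion size), the made-up training data, and the simple logistic-regression classifier stand in for the far richer demographic and pathology data the team used.

```python
import math

# Invented toy data: each lesion is (patient age / 100, lesion size in cm),
# with label 1 = turned out to need surgery, 0 = benign.
data = [
    ((0.62, 1.40), 1), ((0.55, 1.10), 1), ((0.68, 1.65), 1), ((0.59, 1.30), 1),
    ((0.41, 0.40), 0), ((0.38, 0.35), 0), ((0.45, 0.50), 0), ((0.50, 0.60), 0),
]

def predict(weights, bias, features):
    """Estimated probability that a lesion needs surgery (logistic function)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Training by gradient descent: the program is never given a diagnostic rule;
# it nudges the weights until its predictions match the labels it was shown.
weights, bias, rate = [0.0, 0.0], 0.0, 0.5
for _ in range(5000):
    for features, label in data:
        error = predict(weights, bias, features) - label
        bias -= rate * error
        weights = [w - rate * error * x for w, x in zip(weights, features)]

# The learned rule can then score a new, unseen lesion.
print(predict(weights, bias, (0.65, 1.50)) > 0.5)  # → True (flagged for surgery)
```

The "formula for diagnosis" here—the final weights—was discovered from the data, not written by the programmer, which is the distinction the paragraph above draws.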
“To our knowledge, this is the first study to apply machine learning to the task of distinguishing high-risk lesions that need surgery from those that don’t,” said collaborator Constance Lehman, MD, PhD, chief of the Breast Imaging Division at Mass General’s Department of Radiology, in a recent interview. “We believe this could support women in making more informed decisions about their treatment, and that we could provide more targeted approaches to health care in general.”
Lehman says Mass General radiologists will begin incorporating the model into their clinical practice over the next year.
If you find yourself tossing and turning all night, or hitting snooze a few too many times each morning, you’re not alone. More than 50 million Americans suffer from sleep disorders, and these sleep issues can get worse in individuals with Parkinson’s and Alzheimer’s disease.
Researchers from MIT and Mass General recently unveiled a wireless, portable system for monitoring individuals during sleep that could provide new insights into sleep disorders and reduce the need for time- and cost-intensive overnight sleep studies in a clinical sleep lab.
Here are five things to know:
Sleep disorders are typically diagnosed by bringing a patient into an overnight sleep lab, hooking them up to electrodes, and monitoring their brain activity while they sleep. While this process is effective, it is also limiting. Individuals with sleep disorders may have even more difficulty sleeping when they are hooked up to wires and in the artificial setting of a sleep lab.
To make it easier to diagnose and study sleep problems at home, researchers at MIT and Mass General have created a new system for measuring sleep that is wireless, portable and powered by artificial intelligence.
The system consists of a laptop-sized device that emits low-frequency radio waves while an individual is sleeping. The device measures changes in those waves caused by shifts in the sleeper's movement and breathing patterns, and an advanced algorithm—powered by artificial intelligence—translates these changes into the different stages of sleep: light, deep and rapid eye movement (REM).
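The real system feeds raw radio measurements to a deep neural network, but the final step—mapping signal-derived features of each short window of sleep to a stage—can be sketched with a deliberately simple stand-in. The per-epoch features below (breaths per minute, a movement score) and their values are invented for illustration; the classifier is a basic nearest-centroid rule learned from a handful of labeled epochs.

```python
import math

# Hypothetical features extracted from the reflected radio signal for
# 30-second epochs whose true stage is known: (breaths/min, movement score).
labeled_epochs = {
    "light": [(14.0, 0.30), (13.5, 0.25), (14.5, 0.35)],
    "deep":  [(11.0, 0.05), (10.5, 0.04), (11.5, 0.06)],
    "rem":   [(16.0, 0.10), (16.5, 0.12), (15.5, 0.09)],
}

# "Training" here is just averaging each stage's examples into a centroid.
centroids = {
    stage: tuple(sum(coord) / len(examples) for coord in zip(*examples))
    for stage, examples in labeled_epochs.items()
}

def classify(epoch):
    """Assign a new 30-second epoch to the stage with the nearest centroid."""
    return min(centroids, key=lambda stage: math.dist(centroids[stage], epoch))

print(classify((10.8, 0.05)))  # → deep
```

Scoring a whole night is then a matter of sliding this classifier over consecutive epochs, which is roughly what a sleep specialist does by eye with EEG traces.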
In a test of 25 healthy volunteers, the new system proved to be 80 percent accurate in identifying sleep stages, which is comparable to the accuracy of a sleep specialist reading EEG measurements, according to the research team.
The team is now planning to use the system to investigate how Parkinson’s disease affects sleep. Future research projects could examine common sleep disorders such as insomnia and sleep apnea, explore how sleep is affected by Alzheimer’s disease, and detect epileptic seizures that occur during sleep.
Researchers involved in this work are Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, Matt Bianchi, chief of the Division of Sleep Medicine at Mass General, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT. Mingmin Zhao, an MIT graduate student, is the paper’s first author, and Shichao Yue, another MIT graduate student, is also a co-author.
Last Wednesday the Mass General Research Institute hosted The Art of Talking Science: Rise of the Machines at the Russell Museum at Massachusetts General Hospital.
As part of HUBweek’s weeklong festival, this science communication competition challenged researchers focused on artificial intelligence, machine learning and digital health to present their science in four minutes or less. Each contestant received feedback from a panel of celebrity judges and, at the end, one presenter was crowned the winner.
Here’s a look back at some of the highlights from the afternoon:
Sue Slaugenhaupt, PhD, Scientific Director of the Research Institute, gave an introduction on the importance of communicating science.
Meet the Judges
Dr. Slaugenhaupt also introduced our panel of judges, who each spoke for a few minutes about what science communication means to them.
Our amazing judges were (from left): Ike Swetlitz, Reporter for STAT News, Rich Hayes, Creative Director/Deputy Director of Communications for the Union of Concerned Scientists, Carey Goldberg, Editor for the WBUR CommonHealth Blog, and Christine Reich, PhD, Vice President of Exhibit Development and Conservation at the Museum of Science, Boston.
Then judge Christine Reich gave a keynote presentation discussing how the Museum of Science empowers their guests through science communication.
After Dr. Reich’s fascinating presentation, the competition began!
Justin Baker, MD, PhD, went first with his presentation, Exploring the Human-Human Interface. Dr. Baker is Scientific Director at the Institute for Technology in Psychiatry and an Assistant Psychiatrist at McLean Hospital.
Kamal Jethwani, MD, MPH, Senior Director of Connected Health Innovation, Partners Connected Health, then gave a slideless presentation entitled, Want to Lose 5 Lbs Fast? Artificial Intelligence Holds the Key.
Our third presenter was Jacob Dal-Bianco, MD, who spoke about preventing rheumatic heart disease. Dr. Dal-Bianco is a cardiologist at Massachusetts General Hospital.
David Gow, PhD, of the Cognitive/Behavioral Neurology Group at Massachusetts General Hospital, then gave his presentation, Using Machine Learning to Help the Brain Understand Itself.
Up next was Lisa Gualtieri, PhD, ScM, who discussed a lending library for fitness trackers. Dr. Gualtieri is the founder of Recycle Health, an Assistant Professor in the Department of Public Health and Community Medicine at Tufts University School of Medicine, and the Director of the Digital Health Communication Certificate Program.
Closing out the program was Roland Carlstead, PhD, of the Developmental Biology Research Program at McLean Hospital. Dr. Carlstead discussed whether treatment works and if the placebo effect is real.
After much deliberation, the judges named Justin Baker as the winner.
Thank you to all our contestants and the judges for their insightful feedback and support of science communication!
There’s so much more to artificial intelligence (AI) than what you’ve seen in sci-fi movies. In fact, advancements in machine learning could provide new opportunities for medical research and diagnosis.
Here are five things to know from a recent interview with Dreyer:
AI is created through a process called machine learning. Unlike traditional computer programming, where the process of moving from point A to point B is entirely mapped out by the programming team before being loaded onto the computer, in machine learning the computer is given a vast repository of data and told what the data indicates. The computer then has to identify the underlying logic that connects the data to the results. Once it has created this algorithm, it can predict answers when given new data in the future.
Although the concept of using machines to create AI has been around for more than 50 years, faster computation speeds and more accurate algorithms are now enticing health care companies to invest in AI.
Researchers have trained computers to develop algorithms that distinguish between millions of simple images such as dogs, cats and beaches. For example, researchers will show a computer many different images of dogs and tell the computer, “these are dogs.” The computer will then have to develop an algorithm that can be used to identify dogs from a new set of images that includes dogs, cats, horses or anything at all.
Now researchers are asking computers to apply that same knowledge to look at millions of MRI, CT, and X-ray images to detect things such as lung cancer, breast cancer or a hemorrhagic stroke. A computer could be shown millions of mammogram images from patients who subsequently developed a certain type of breast cancer. The goal would be to see if there is an underlying pattern that could lead to earlier diagnosis and treatment.
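The classify-by-example idea described above can be made concrete with a toy sketch. Real diagnostic systems train deep networks on millions of full-resolution scans; the stand-in below uses invented four-pixel "images" and a basic nearest-neighbor vote, purely to show how a label for a new image comes from its resemblance to labeled training images rather than from a hand-written rule.

```python
import math
from collections import Counter

# Invented toy "images": 4-pixel brightness vectors with expert-assigned labels.
training = [
    ([0.9, 0.8, 0.9, 0.7], "suspicious"),
    ([0.8, 0.9, 0.8, 0.8], "suspicious"),
    ([0.7, 0.9, 0.9, 0.9], "suspicious"),
    ([0.1, 0.2, 0.1, 0.2], "normal"),
    ([0.2, 0.1, 0.2, 0.1], "normal"),
    ([0.1, 0.1, 0.2, 0.2], "normal"),
]

def classify(image, k=3):
    """Label a new image by majority vote of its k most similar training images."""
    nearest = sorted(training, key=lambda ex: math.dist(ex[0], image))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(classify([0.8, 0.8, 0.9, 0.8]))  # → suspicious
```

Swap the toy vectors for mammogram pixels and the nearest-neighbor vote for a trained neural network, and this is the shape of the screening workflow the paragraph above describes.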
Mass General is poised to be a leader in the field of AI. The hospital has incredibly large amounts of electronic data that can be used to develop new algorithms for screening and diagnosis. Mass General also has a vast community of clinicians and researchers who can work together to develop these tools and integrate them into the delivery of care.