23 August 2018. Engineering researchers developed a machine-learning system that analyzes CT scans to detect tiny early-stage lung tumors at a higher rate than visual inspection by radiologists. A team from the University of Central Florida in Orlando plans to describe its system in September at a meeting of the Medical Image Computing and Computer Assisted Intervention Society in Granada, Spain.
An engineering group in UCF’s Center for Research in Computer Vision, led by imaging scientist Ulas Bagci, aims to improve radiologists’ ability to find lung abnormalities indicating potential tumors in their earliest stages. Lung cancers, both small cell and non-small cell, are the second most common type of cancer, accounting for 14 percent of all new cancer cases; only breast cancer in women and prostate cancer in men have higher rates of occurrence. The American Cancer Society estimates some 234,000 people in the U.S. will be diagnosed with lung cancer in 2018, leading to more than 154,000 deaths.
Lung tumors are diagnosed today with chest X-rays and computerized tomography, or CT, scans that combine a series of X-rays taken at different angles. Analyzing the results of these scans relies on a radiologist’s visual review, and while computer-assisted methods are available, catching tiny, early-stage abnormalities in the lung remains a challenge. Much of the difficulty in detecting these small tumors is the high degree of variation in their texture, shape, and position in the lung, as well as their similarity to surrounding lung tissue.
Bagci and colleagues developed techniques using deep machine learning to enhance and extend radiologists’ abilities for this task. Their system, called S4ND, employs convolutional neural networks, a machine learning method that analyzes images much as the human brain processes visual stimuli. The system recognizes and absorbs more detail in images, adding to its store of knowledge, and organizes the data in three dimensions, much as the brain learns to visualize and understand 3-D images.
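The core operation behind a 3-D convolutional network of the kind described above can be illustrated with a minimal sketch. The kernel size, volume size, and plain-NumPy implementation here are illustrative assumptions, not details from the UCF system:

```python
import numpy as np

def conv3d_single(volume, kernel):
    """Valid 3-D convolution of one volume with one kernel (no padding, stride 1).
    Slides the kernel over the volume and sums the elementwise products."""
    vz, vy, vx = volume.shape
    kz, ky, kx = kernel.shape
    out = np.zeros((vz - kz + 1, vy - ky + 1, vx - kx + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                patch = volume[z:z + kz, y:y + ky, x:x + kx]
                out[z, y, x] = np.sum(patch * kernel)
    return out

# Toy example: a 3x3x3 averaging kernel applied to an 8x8x8 "CT" volume
volume = np.random.rand(8, 8, 8)
kernel = np.ones((3, 3, 3)) / 27.0  # averages each 27-voxel neighborhood
features = conv3d_single(volume, kernel)
print(features.shape)  # (6, 6, 6)
```

A trained network learns many such kernels per layer, so each produces a feature map responding to a different local pattern in the scan volume.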
Rodney LaLonde, a doctoral candidate on the project, describes the process in a university statement. “You know how connections between neurons in the brain strengthen during development and learn?” says LaLonde. “We used that blueprint, if you will, to help our system understand how to look for patterns in the CT scans and teach itself how to find these tiny tumors.” This approach, say the researchers, is similar to facial recognition algorithms that discern patterns from thousands of face images.
In this case, the team trained S4ND on more than 1,000 CT scan images provided by the National Institutes of Health and the Mayo Clinic. The training divided the images into fine-grain grids and constructed 3-D data sets capturing the most descriptive features of each grid block. The result is a deep data architecture for assessing CT scans in a single pass, which the team says offers radiologists an accurate and efficient system.
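Dividing a scan volume into a fine-grain grid of blocks, as described above, can be sketched as follows. The block size and volume shape are invented for illustration, and the sketch assumes each dimension divides evenly by the block size:

```python
import numpy as np

def split_into_blocks(volume, block):
    """Split a 3-D volume into non-overlapping blocks of shape `block`.
    Assumes each volume dimension is evenly divisible by the block size."""
    bz, by, bx = block
    vz, vy, vx = volume.shape
    return (volume
            .reshape(vz // bz, bz, vy // by, by, vx // bx, bx)
            .transpose(0, 2, 4, 1, 3, 5)   # group the block-index axes first
            .reshape(-1, bz, by, bx))

# Toy 16x16x16 "scan" split into 4x4x4 blocks, giving 64 blocks
volume = np.arange(16 ** 3, dtype=float).reshape(16, 16, 16)
blocks = split_into_blocks(volume, (4, 4, 4))
print(blocks.shape)  # (64, 4, 4, 4)
```

A detector of this style can then score every grid cell for the presence of a nodule in one forward pass, rather than examining candidate regions one at a time.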
The researchers tested S4ND against a standard data set of 888 CT scans compiled for improving lung cancer screening and tumor detection. They report S4ND outperforms visual analysis by a wide margin, returning a free-response receiver operating characteristic, or FROC, score, a statistical measure for detecting abnormalities, of 90 percent, while the researchers say visual inspection by radiologists typically finds 65 percent. In addition, processing each image with S4ND took between 11 and 27 seconds on standard workstations.
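FROC analysis measures detection sensitivity against the number of false positives per scan. A simplified sketch of computing one FROC operating point follows; the detection scores and counts here are invented for illustration, not data from the study:

```python
def froc_point(detections, num_true_lesions, num_scans, threshold):
    """Compute (sensitivity, false positives per scan) at a score threshold.
    `detections` is a list of (score, is_true_positive) pairs pooled over scans."""
    kept = [is_tp for score, is_tp in detections if score >= threshold]
    tp = sum(kept)                      # true lesions found above threshold
    fp = len(kept) - tp                 # spurious detections above threshold
    sensitivity = tp / num_true_lesions
    fp_per_scan = fp / num_scans
    return sensitivity, fp_per_scan

# Invented example: 5 candidate detections across 2 scans with 3 true lesions
detections = [(0.95, True), (0.80, True), (0.60, False), (0.55, True), (0.30, False)]
sens, fps = froc_point(detections, num_true_lesions=3, num_scans=2, threshold=0.5)
print(sens, fps)  # 1.0 0.5
```

Sweeping the threshold traces out the full FROC curve; a summary score like the one reported averages sensitivity over a set of false-positive rates.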
The team next plans to evaluate S4ND in clinical settings, and is seeking hospital partners for those tests.