
Photo Algorithms Find Cancerous Melanoma Lesions

Phone photo (tookapic, Pixabay)

17 Feb. 2021. An engineering team designed artificial intelligence and image-analysis techniques to identify early signs of skin cancer in ordinary photographs. Researchers from Massachusetts Institute of Technology and the Wyss Institute for Biologically Inspired Engineering at Harvard University describe their techniques in today’s issue of the journal Science Translational Medicine (paid subscription required).

The team led by Luis Soenksen, a postdoctoral researcher at the Wyss Institute, is seeking simple and reliable ways for general practice physicians to identify melanoma in patients early on, while more treatment options are available. Melanoma is an aggressive type of skin cancer which, while not as common as basal cell and squamous cell skin cancers, is more likely to spread to other parts of the body. If melanoma is caught and treated early, before it spreads or metastasizes, the 5-year survival rate is 99 percent, according to the American Cancer Society. After the cancer spreads to other parts of the body, however, the 5-year survival rate drops to 27 percent.

Soenksen and colleagues sought to evaluate skin discolorations and growths using computerized techniques on photos taken with consumer-grade cameras, like those found on smartphones, and approached the problem much like trained dermatologists. These specialists use what they call the “ugly duckling” technique: scanning a patient’s skin for moles or abnormalities known as suspicious pigmented lesions, then focusing on those with particularly problematic characteristics. The team found that current computerized techniques assess one image at a time, unlike the physicians’ comparative approach.

For their analysis, the researchers built a database of nearly 34,000 images of suspicious pigmented lesions and non-cancerous skin from 133 patients at a hospital in Madrid, Spain, as well as publicly available images. The wide-field photos were taken with consumer-grade cameras and included a variety of backgrounds, such as walls painted in different colors or furniture fabrics, to simulate real-world conditions.

Performing an ugly-duckling analysis

The team used these images to train algorithms for a deep convolutional neural network. These algorithms combine image analysis and machine learning to dissect an image layer by layer, extracting its features. Different aspects of each layer discovered and analyzed by the system are translated into data the algorithm uses to train its understanding of the problem being solved, an understanding that is enhanced and refined as more images and data are encountered.
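As a rough illustration of how such a network is structured, here is a minimal sketch in PyTorch. It is not the authors’ published architecture; the class name LesionCNN, the layer sizes, and the two output classes are assumptions made for this example. Stacked convolution and pooling stages extract progressively higher-level features, and a final linear layer turns them into class scores.

```python
# Minimal sketch of a deep convolutional neural network classifier.
# Assumed architecture for illustration, not the study's actual model.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges, colors
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level textures
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # higher-level shapes
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # pool to one value per channel
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)  # (batch, 64) feature vector per image
        return self.classifier(feats)        # class scores per image

# Example: score a batch of two 128x128 RGB skin crops (random data here).
model = LesionCNN()
scores = model(torch.randn(2, 3, 128, 128))
print(scores.shape)  # torch.Size([2, 2])
```

Each convolutional stage corresponds loosely to the “layers” described above: early stages respond to edges and colors, later stages to larger textures and shapes, and the pooled feature vector summarizes a whole image.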

To perform an ugly-duckling analysis, the researchers trained the neural network to evaluate images with groups of suspicious pigmented lesions and non-cancerous skin growths on patients. The team designed the algorithms to assess the problematic characteristics of a lesion in the context of a patient’s other growths or discolorations, not just individual images on their own.
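One plausible way to express that patient-level context in code is sketched below. The function name ugly_duckling_scores and the distance-from-the-mean heuristic are assumptions for illustration, not the paper’s exact formulation: each lesion’s feature vector (such as the one produced by the CNN sketch above) is compared with the average of the same patient’s other lesions, so the most atypical growth on that patient scores highest.

```python
# Sketch of an ugly-duckling score: our illustration, not the published method.
import torch

def ugly_duckling_scores(embeddings: torch.Tensor) -> torch.Tensor:
    """embeddings: (n_lesions, dim) feature vectors for one patient.

    Assumes the patient has at least two lesions to compare against.
    """
    n = embeddings.size(0)
    scores = torch.empty(n)
    for i in range(n):
        others = torch.cat([embeddings[:i], embeddings[i + 1:]])
        # Distance from this lesion to the mean of the patient's other lesions:
        # the farther away, the more of an "ugly duckling" it is.
        scores[i] = torch.norm(embeddings[i] - others.mean(dim=0))
    return scores

# Example: five lesions from one patient, 64-dim features (random data here).
feats = torch.randn(5, 64)
print(ugly_duckling_scores(feats))  # highest value = most atypical lesion
```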

The researchers validated the techniques by comparing evaluations of 135 photos from 68 patients made by their algorithms with assessments by three board-certified dermatologists. The results show the algorithm’s evaluations matched the dermatologists’ judgments as a group 88 percent of the time, and the physicians’ individual assessments in 86 percent of cases.
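Those agreement figures are simple proportions of matching calls. As an illustration with made-up labels, not the study’s data:

```python
# Illustration only: agreement is the fraction of lesions where the
# algorithm's call matches the dermatologists' call.
algo  = ["suspicious", "benign", "suspicious", "benign"]
derms = ["suspicious", "benign", "benign",     "benign"]
agreement = sum(a == d for a, d in zip(algo, derms)) / len(algo)
print(f"{agreement:.0%}")  # 75%
```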

“We essentially provide a well-defined mathematical proxy for the deep intuition a dermatologist relies on when determining whether a skin lesion is suspicious enough to warrant closer examination,” says Soenksen in a Wyss Institute statement. “This innovation allows photos of patients’ skin to be quickly analyzed to identify lesions that should be evaluated by a dermatologist, allowing effective screening for melanoma at the population level.”

Along with his research at the Wyss Institute, Soenksen is a venture builder at MIT, where he seeks out new business opportunities from campus research on health care and artificial intelligence. In addition, Susan Conover, a co-author of the paper, is CEO of LuminDx, a Cambridge, Massachusetts company developing a technology with artificial intelligence to evaluate patients’ skin conditions from smartphone photos.

*     *     *
