Melanoma, a malignant tumour of the melanocytes, is the deadliest form of skin cancer. The objective of the skin cancer detection project is to develop a framework to analyze and assess the risk of melanoma using dermatological photographs taken with a standard consumer-grade camera.
The skin cancer detection framework consists of novel algorithms to perform the following:
- illumination correction preprocessing
- segmentation of the lesion
- feature extraction
Our data set is provided at the end of the page. This includes images extracted from the public databases DermIS and DermQuest, along with manual segmentations of the lesions.
Illumination Correction Preprocessing
The first step in the proposed framework is a preprocessing step, where the image is corrected for illumination variation (i.e., the presence of shadows and brightly illuminated areas). Our approach to this correction problem is a multi-stage illumination modeling algorithm. We assume a multiplicative illumination-reflectance model, where each pixel in the Value colour channel (of the HSV colour space) can be decomposed into an illumination component and a reflectance component. The goal of the algorithm is to first estimate the illumination component and then recover the reflectance component using that estimate.
In the VIP lab, the proposed multi-stage illumination modeling algorithm uses the following stages to estimate and correct for illumination variation (shown in Fig. 1):
- Initial Monte Carlo illumination estimate
- Final parametric illumination estimate
- Calculate the reflectance component
Fig. 1. Flow chart showing the steps to correct the input image for illumination variation.
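The multiplicative model can be sketched in a few lines. Note this is a simplified illustration only: the published algorithm estimates illumination with a Monte Carlo stage refined by a parametric fit, whereas here a large-scale Gaussian blur of the log-domain Value channel stands in as a crude illumination estimate (an assumption for illustration).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(value_channel, sigma=30.0):
    """Illustrate the multiplicative model V = I * R in the log domain.

    A heavy Gaussian blur of log(V) stands in for the illumination
    estimate; subtracting it leaves (an approximation of) log(R).
    """
    v = value_channel.astype(np.float64) + 1e-6   # avoid log(0)
    log_v = np.log(v)
    log_i = gaussian_filter(log_v, sigma=sigma)   # crude illumination estimate
    log_r = log_v - log_i                         # reflectance in log domain
    r = np.exp(log_r)
    return r / r.max()                            # normalized reflectance
```

In the log domain the multiplicative model becomes additive (log V = log I + log R), which is why the correction reduces to a subtraction.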
Skin Lesion Segmentation
The objective of the skin lesion segmentation step is to find the border of the skin lesion. It is important that this step is performed accurately because many features used to assess the risk of melanoma are derived from the lesion border. Our approach to finding the lesion border is a segmentation algorithm based on texture distinctiveness.
Fig. 2. Flow chart showing the steps to calculate the texture distinctiveness metric.
The first stage of the skin lesion segmentation algorithm is to learn representative texture distributions and calculate the texture distinctiveness metric for each distribution (shown in Fig. 2). A texture vector is extracted for each pixel in the image. Then, a Gaussian mixture model is fitted to the set of texture vectors to learn the texture distributions. Finally, the dissimilarity between each texture distribution and all other texture distributions is measured, and this dissimilarity is quantified as the texture distinctiveness metric.
Fig. 3. Flow chart showing the merging of the texture distinctiveness map and initial regions. Each solid colour in the image of initial regions corresponds to a single region.
In the second stage, the pixels in the image are classified as being part of the normal skin or lesion class (shown in Fig. 3). To do this, the image is divided into a number of regions. These regions are combined with the texture distinctiveness map to find the skin lesion.
In order to use classification techniques, the image must be transformed such that it represents a point in some n-dimensional feature space. The axes in this feature space represent calculations that are relevant to describing the observed phenomenon (e.g., malignancy).
We propose a set of what we call high-level intuitive features (HLIFs). An HLIF is defined as follows (Amelard 2013):
High-Level Intuitive Feature (HLIF): A mathematical model that has been carefully designed to describe some human-observable characteristic, and whose score can be intuited in a natural way.
In standard clinical practice, many dermatologists follow the ABCD rule for identifying melanoma. That is, for a given skin lesion, they attempt to identify Asymmetry (colour/structure), Border irregularity, Colour patterns, and Diameter. Our work involves modeling the ABC components of this analysis. Using the HLIF framework (Amelard 2013), these features are modeled such that intuitive diagnostic rationale can be given back to the doctor.
Fig. 4 shows an example interface. Since the features were designed according to the HLIF framework, the system is able to convey to the user why it detected colour asymmetry.
Fig. 4. Example interface showing how intuitive diagnostic rationale can be given to the user. This increases the user-system trust over a simple classification output.
Melanoma images are from the DermIS (http://www.dermis.net) and DermQuest databases.
References
Amelard, R., J. Glaister, A. Wong, and D. A. Clausi, "Melanoma decision support using lighting-corrected intuitive feature models", Computer Vision Techniques for the Diagnosis of Skin Cancer, pp. 193–219, October 2013.