    Furthermore, the classifier was evaluated on a dataset of 98 full Pap smear images (49 normal and 49 abnormal) that had been prepared and classified, as normal or abnormal, by a cytotechnologist at Mbarara Regional Referral Hospital. Of the 49 normal Pap smears, 45 were correctly classified as normal and four were incorrectly classified as abnormal. Of the 49 abnormal Pap smears, 47 were correctly classified as abnormal and two were incorrectly classified as normal. The overall accuracy, sensitivity and specificity of the classifier on this dataset were 93.88%, 95.92% and 91.84%, respectively. A False Negative Rate (FNR), False Positive Rate (FPR) and classification error of 4.08%, 8.16% and 6.12%, respectively, were obtained.
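    These figures follow directly from the confusion-matrix counts reported above, treating an abnormal smear as the positive class, as the following sketch verifies:

```python
# Confusion-matrix counts from the 98-image evaluation
# (abnormal = positive class).
tp, fn = 47, 2   # abnormal smears: correctly / incorrectly classified
tn, fp = 45, 4   # normal smears: correctly / incorrectly classified
n = tp + fn + tn + fp

accuracy    = (tp + tn) / n    # 92/98 ≈ 93.88%
sensitivity = tp / (tp + fn)   # 47/49 ≈ 95.92%
specificity = tn / (tn + fp)   # 45/49 ≈ 91.84%
fnr   = fn / (fn + tp)         #  2/49 ≈ 4.08%
fpr   = fp / (fp + tn)         #  4/49 ≈ 8.16%
error = (fn + fp) / n          #  6/98 ≈ 6.12%
```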
    Informatics in Medicine Unlocked 14 (2019) 23–33
    The performance of the developed classifier was compared with the results obtained by Martin et al. [65] and Norup et al. [47] on the same single-cell dataset using fuzzy-based algorithms. Table 7 reports the performances of the two methods together with the results achieved by the proposed method. It was found that the proposed approach outperforms many of the existing fuzzy-based classifiers in terms of FNR (0.15%), FPR (2.10%) and classification error (0.65%). Furthermore, as shown in Table 8, the proposed approach was compared with contemporary classification algorithms documented in the relevant literature. Results show that the proposed method outperforms many of the documented algorithms in terms of cell-level classification accuracy (98.88%), specificity (97.47%) and sensitivity (99.28%) when applied to the DTU/Herlev benchmark Pap smear dataset (single-cell dataset).
    3.2. Processing time analysis
    This approach was tested on a computer with an Intel Core i5-6200U CPU @ 2.30 GHz and 8 GB of memory. Twenty randomly selected full Pap smear images were run through the algorithm, and the computational time was measured for both the individual steps and the overall duration. Average processing times for segmentation, debris removal, feature selection and classification were 38, 58, 23 and 42 s, respectively. Debris removal took the longest (58 s), while feature selection was the shortest (23 s). The overall time taken per Pap smear image averaged 161 s, and was 3 min at most, demonstrating the feasibility of real-time diagnosis of the Pap smear.
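    Per-stage measurements of this kind can be collected with a small timing harness along the following lines; the stage names mirror the paper's pipeline, but the stage functions themselves are hypothetical placeholders, not the authors' implementation:

```python
import time

def time_stages(image, stages):
    """Run each named pipeline stage on the image, timing it.

    `stages` is an ordered list of (name, fn) pairs; each fn takes
    the previous stage's output. Returns the final result and a dict
    of per-stage wall-clock times plus their total.
    """
    timings = {}
    result = image
    for name, fn in stages:
        t0 = time.perf_counter()
        result = fn(result)
        timings[name] = time.perf_counter() - t0
    timings["total"] = sum(timings.values())
    return result, timings

# Hypothetical usage, with placeholder stage functions:
# _, t = time_stages(img, [("segmentation", segment),
#                          ("debris removal", remove_debris),
#                          ("feature selection", select_features),
#                          ("classification", classify)])
```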
    4. Discussion
    This paper describes the automated analysis of Pap smear images to facilitate the classification of cervical cancer. Image enhancement using CLAHE makes the output of a processed image more suitable for image analysis. Unlike in many studies where CLAHE is applied to RGB images [78,79], in the work documented here, CLAHE was applied to grayscale images as in Ref. [80]. Trainable Weka Segmentation (TWS) was utilized to provide a cheaper alternative to tools such as CHAMP. TWS produced excellent segmentations for the single-cell images. However, segmentation results from full-slide Pap smear images required more pre-processing before feature extraction. TWS has been used in many studies, and its accuracy is largely dependent on the accuracy of training the pixel-level classifier [81,82]. Increasing the training sample, as reported by Maiora et al. [83], could improve the performance of the classifier. TWS's capability to produce good segmentation is due to its pixel-level classification, where each pixel is assigned to a given class. However, the poor performance when segmenting the whole slide could be attributed to the small dataset used for building the segmentation classifier, as this was a manual process that involved annotation by an experienced cytopathologist.
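    As a rough illustration of CLAHE on a grayscale image, the NumPy sketch below performs tile-wise, clip-limited histogram equalization. It is a simplification: a full CLAHE implementation (e.g. OpenCV's `createCLAHE`) also interpolates bilinearly between neighbouring tile mappings to avoid tile-boundary artefacts, and the `clip_limit` and tile grid here are illustrative, not the settings used in the paper.

```python
import numpy as np

def clahe_tile(tile, clip_limit=0.01, nbins=256):
    # Clip the tile histogram, redistribute the excess uniformly,
    # then histogram-equalize the tile with the clipped CDF.
    hist, _ = np.histogram(tile, bins=nbins, range=(0, 255))
    limit = max(1, int(clip_limit * tile.size))
    excess = np.sum(np.maximum(hist - limit, 0))
    hist = np.minimum(hist, limit) + excess // nbins
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    return cdf[tile.astype(int)].astype(np.uint8)

def simple_clahe(gray, tiles=8, clip_limit=0.01):
    # Apply clipped equalization independently per tile; full CLAHE
    # would additionally blend adjacent tile mappings.
    out = np.empty_like(gray)
    h, w = gray.shape
    ys = np.linspace(0, h, tiles + 1, dtype=int)
    xs = np.linspace(0, w, tiles + 1, dtype=int)
    for y0, y1 in zip(ys[:-1], ys[1:]):
        for x0, x1 in zip(xs[:-1], xs[1:]):
            out[y0:y1, x0:x1] = clahe_tile(gray[y0:y1, x0:x1], clip_limit)
    return out
```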
    Feature selection played an important role in this work, eliminating features that increased error in the classification algorithm. Eighteen of the 29 extracted features were selected for classification purposes. It was noted that most of the features that added noise to the classifier were cytoplasmic features. This could be attributed to the difficulty in separating the cytoplasm from the background, as opposed to the nucleus, which is darker [18]. Increasing the number of clusters during feature selection reduced the fuzziness exponent (Table 2). Similarly, increasing the number of clusters at a fuzziness exponent of 1.0930 reduced the defuzzification fitness error to 6.4210 with 25 clusters and 10-fold cross-validation (Table 3), less than that obtained by Martin et al. [65]. This implies that increasing the number of clusters reduces the defuzzification error computed by the defuzzification method presented in this paper, which uses Bayesian probability to generate a probabilistic model of the membership function for each data point and applies that model to the image to produce the classification information. An optimal number of 25 clusters was attained, and overtraining occurred when too many clusters (above 25) were used. That the optimal value of 25 clusters is lower than 100 clusters could partly be because of the defuzzification method used. Its density measure works