Author details
Author: Majedaldein Almahasneh
Documents available written by this author (1)
MLMT-CNN for object detection and segmentation in multi-layer and multi-spectral images / Majedaldein Almahasneh in Machine Vision and Applications, vol 33 no. 1 (January 2022)
[article]
Title: MLMT-CNN for object detection and segmentation in multi-layer and multi-spectral images
Document type: Article/Communication
Authors: Majedaldein Almahasneh, Author; Adeline Paiement, Author; Xianghua Xie, Author; et al.
Publication year: 2022
Article pages: no. 9
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] supervised learning
[IGN terms] deep learning
[IGN terms] solar atmosphere
[IGN terms] classification by convolutional neural network
[IGN terms] thematic layer
[IGN terms] object detection
[IGN terms] multiband image
[IGN terms] image segmentation
Abstract: (author) Precisely localising solar Active Regions (AR) from multi-spectral images is a challenging but important task in understanding solar activity and its influence on space weather. A main challenge comes from each modality capturing a different location of the 3D objects, as opposed to typical multi-spectral imaging scenarios where all image bands observe the same scene. Thus, we refer to this special multi-spectral scenario as multi-layer. We present a multi-task deep learning framework that exploits the dependencies between image bands to produce 3D AR localisation (segmentation and detection) where different image bands (and physical locations) have their own set of results. Furthermore, to address the difficulty of producing dense AR annotations for training supervised machine learning (ML) algorithms, we adapt a training strategy based on weak labels (i.e. bounding boxes) in a recursive manner. We compare our detection and segmentation stages against baseline approaches for solar image analysis (multi-channel coronal hole detection, SPOCA for ARs) and state-of-the-art deep learning methods (Faster RCNN, U-Net). Additionally, both detection and segmentation stages are quantitatively validated on artificially created data of similar spatial configurations made from annotated multi-modal magnetic resonance images. Our framework achieves an average of 0.72 IoU (segmentation) and 0.90 F1 score (detection) across all modalities, compared to the best-performing baseline methods with scores of 0.53 and 0.58, respectively, on the artificial dataset, and a 0.84 F1 score in the AR detection task compared to a baseline of 0.82. Our segmentation results are qualitatively validated by an expert on real ARs.
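The abstract reports segmentation quality as IoU and detection quality as an F1 score; both are standard metrics, computed per image band here. A minimal sketch of how such scores are obtained, using toy masks and detection counts (not the authors' data or code):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two boolean segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score from detection counts: 2*TP / (2*TP + FP + FN)."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

# Toy 4x4 masks standing in for one band's active-region segmentation
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(iou(pred, truth), 3))  # intersection 4, union 6 -> 0.667
print(f1_score(tp=9, fp=1, fn=1))  # 18 / 20 -> 0.9
```

Averaging these per-band scores over all modalities yields the aggregate figures quoted in the abstract.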
Record number: A2022-089
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00138-021-01261-y
Online publication date: 29/11/2021
Online: https://doi.org/10.1007/s00138-021-01261-y
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99500
in Machine Vision and Applications > vol 33 no. 1 (January 2022) . - no. 9 [article]