Author details
Author: Li Mi
Available documents by this author (2)
Superpixel-enhanced deep neural forest for remote sensing image semantic segmentation / Li Mi in ISPRS Journal of photogrammetry and remote sensing, vol 159 (January 2020)
[article]
Title: Superpixel-enhanced deep neural forest for remote sensing image semantic segmentation
Document type: Article/Communication
Authors: Li Mi, Author; Zhenzhong Chen, Author
Year of publication: 2020
Pages: pp. 140-152
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] SLIC algorithm
[IGN terms] machine learning
[IGN terms] random forest classification
[IGN terms] very high resolution imagery
[IGN terms] stochastic process
[IGN terms] deep neural network
[IGN terms] semantic segmentation
[IGN terms] superpixel
Abstract: (Author) Semantic segmentation plays an important role in remote sensing image understanding. Great progress has been made in this area with the development of Deep Convolutional Neural Networks (DCNNs). However, due to the complexity of ground objects' spectra, DCNNs with a simple classifier have difficulty distinguishing ground object categories even though they represent image features effectively. Additionally, DCNN-based semantic segmentation methods learn to accumulate contextual information over large receptive fields, which blurs object boundaries. In this work, a novel approach named Superpixel-enhanced Deep Neural Forest (SDNF) is proposed to address these problems. To improve classification ability, we introduce the Deep Neural Forest (DNF), in which the representation learning of a deep neural network is guided by a completely differentiable decision forest; better classification accuracy is thus achieved by combining DCNNs with decision forests in an end-to-end manner. In addition, exploiting the homogeneity within superpixels and the heterogeneity between superpixels, a Superpixel-enhanced Region Module (SRM) is proposed to further suppress noise and strengthen the edges of ground objects. Experimental results on the ISPRS 2D semantic labeling benchmark demonstrate that our model significantly outperforms state-of-the-art methods, validating the effectiveness of the proposed SDNF.
Record number: A2020-014
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.11.006
Online publication date: 29/11/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.11.006
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94403
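The "completely differentiable decision forest" mentioned in the abstract can be illustrated with a minimal soft-routing sketch: every internal node routes a sample left with probability sigmoid(w·x) instead of a hard threshold, so the probability of reaching each leaf is a product of sigmoids and the whole tree is differentiable end-to-end with a DCNN backbone. This is only an illustrative NumPy sketch of the general idea; the function names, the fixed complete-tree layout, and the feature/leaf shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_tree_predict(features, split_weights, leaf_dists):
    """Soft decision tree over a complete binary tree.

    features:      (batch, n_features)
    split_weights: (n_inner, n_features), one routing function per internal node
    leaf_dists:    (n_inner + 1, n_classes), class distribution stored at each leaf

    Each sample goes left at node i with probability sigmoid(w_i . x); the
    prediction is the leaf distributions weighted by the probability of
    reaching each leaf. Every operation is differentiable, so the routing
    weights could be trained jointly with a feature-extracting network.
    """
    n_leaves = split_weights.shape[0] + 1
    depth = int(np.log2(n_leaves))
    d = sigmoid(features @ split_weights.T)   # (batch, n_inner): P(go left)
    mu = np.ones((features.shape[0], n_leaves))
    for leaf in range(n_leaves):
        node = 0                              # root; children of i are 2i+1, 2i+2
        for k in range(depth):
            go_left = ((leaf >> (depth - 1 - k)) & 1) == 0
            mu[:, leaf] *= d[:, node] if go_left else (1.0 - d[:, node])
            node = 2 * node + (1 if go_left else 2)
    return mu @ leaf_dists                    # (batch, n_classes)
```

With zero routing weights every split is 50/50, so a depth-2 tree averages its four leaf distributions; the leaf-reaching probabilities always sum to 1, so the output stays a proper class distribution.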
in ISPRS Journal of photogrammetry and remote sensing > vol 159 (January 2020) . - pp. 140-152
[article]
Copies (3)
Barcode | Call number | Type | Location | Section | Availability
081-2020011 | RAB | Journal | Centre de documentation | En réserve L003 | Available
081-2020013 | DEP-RECP | Journal | LASTIG | Dépôt en unité | Not for loan
081-2020012 | DEP-RECF | Journal | Nancy | Dépôt en unité | Not for loan
Scene context-driven vehicle detection in high-resolution aerial images / Chao Tao in IEEE Transactions on geoscience and remote sensing, Vol 57 n° 10 (October 2019)
[article]
Title: Scene context-driven vehicle detection in high-resolution aerial images
Document type: Article/Communication
Authors: Chao Tao, Author; Li Mi, Author; Yansheng Li, Author; et al.
Year of publication: 2019
Pages: pp. 7339-7351
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object-oriented classification
[IGN terms] object detection
[IGN terms] high resolution imagery
[IGN terms] aerial image
[IGN terms] moving object
[IGN terms] motor vehicle
Abstract: (author) As the spatial resolution of remote sensing images improves, "scene-object" collaborative image interpretation becomes feasible. Unfortunately, this idea is not fully exploited in vehicle detection from high-resolution aerial images, and most existing methods could be improved by considering the variability of vehicle spatial distribution across image scenes and treating vehicle detection as a scene-specific task. With this motivation, a scene context-driven vehicle detection method is proposed in this paper. First, we perform scene classification using a deep learning method and then detect vehicles in roads and parking lots separately with different vehicle detectors. Afterward, we further optimize the detection results using different postprocessing rules according to scene type. Experimental results show that the proposed approach outperforms state-of-the-art algorithms, with a higher detection accuracy rate and a lower false alarm rate.
Record number: A2019-535
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2912985
Online publication date: 03/06/2019
Online: http://doi.org/10.1109/TGRS.2019.2912985
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94131
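The pipeline described in the abstract (classify the scene first, then dispatch to a scene-specific detector and scene-specific postprocessing) can be sketched as a small control-flow skeleton. This is a hypothetical illustration only: the classifier and both detectors below are stand-ins for the paper's trained deep models, and the score thresholds are invented for the example.

```python
def classify_scene(tile):
    """Stand-in for the CNN scene classifier: here we simply read a
    label attached to the tile dictionary."""
    return tile["scene"]

def detect_vehicles_road(tile):
    # Placeholder road-specific detector: keep confident candidates only.
    return [box for box in tile["candidates"] if box["score"] > 0.5]

def detect_vehicles_parking(tile):
    # Placeholder parking-lot detector with a lower (invented) threshold:
    # vehicles in lots are densely packed, so recall is favored.
    return [box for box in tile["candidates"] if box["score"] > 0.3]

POSTPROCESS = {
    # Scene-specific rules would go here (e.g. suppress detections far
    # from the road axis, or align boxes to the parking-grid orientation);
    # identity functions in this sketch.
    "road": lambda boxes: boxes,
    "parking_lot": lambda boxes: boxes,
}

def detect(tile):
    """Scene context-driven dispatch: classify, detect, postprocess."""
    scene = classify_scene(tile)
    detector = detect_vehicles_road if scene == "road" else detect_vehicles_parking
    return POSTPROCESS[scene](detector(tile))
```

The point of the structure is that each scene type gets its own detector and its own postprocessing rules, rather than one detector being asked to handle every spatial distribution of vehicles.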
in IEEE Transactions on geoscience and remote sensing > Vol 57 n° 10 (October 2019) . - pp. 7339-7351
[article]