Crater detection and registration of planetary images through marked point processes, multiscale decomposition, and region-based analysis / David Solarna in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
[article]
Title: Crater detection and registration of planetary images through marked point processes, multiscale decomposition, and region-based analysis
Document type: Article/Communication
Authors: David Solarna; Alberto Gotelli; Jacqueline Le Moigne; et al.
Publication year: 2020
Pages: pp 6039-6058
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] crater
[IGN descriptor terms] edge detection
[IGN descriptor terms] Hausdorff distance
[IGN descriptor terms] feature extraction
[IGN descriptor terms] multitemporal image
[IGN descriptor terms] thermal image
[IGN descriptor terms] Mars (planet)
[IGN descriptor terms] wavelet
[IGN descriptor terms] marked point process
[IGN descriptor terms] support vector machine
[IGN descriptor terms] Hough transform
[IGN descriptor terms] region of interest
Abstract: (author) Because of the large variety of planetary sensors and spacecraft already collecting data, and with many new and improved sensors planned for future missions, planetary science needs to integrate numerous multimodal image sources; as a consequence, accurate and robust registration algorithms are required. In this article, we develop a new framework for crater detection based on marked point processes (MPPs) that can be used for planetary image registration. MPPs have been found effective for various object detection tasks in Earth observation, and a new MPP model is proposed here for detecting craters in planetary data. The resulting spatial features are exploited for registration, together with fitness functions based on the MPP energy, on the mean directed Hausdorff distance, and on mutual information. Two different methods, one based on birth-death processes and region-of-interest analysis and the other based on graph cuts and decimated wavelets, are developed within the proposed framework. Experiments with a large set of images, including 13 thermal infrared and visible images of the Mars surface, 20 semisimulated multitemporal pairs of images of the Mars surface, and a real multitemporal image pair of the lunar surface, demonstrate the effectiveness of the proposed framework in terms of crater detection performance as well as subpixel registration accuracy.
Record number: A2020-526
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2970908
Online publication date: 18/03/2020
Online: https://doi.org/10.1109/TGRS.2020.2970908
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95704
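The mean directed Hausdorff distance used as one of the registration fitness functions in this record is easy to illustrate. The sketch below is a generic reimplementation for toy 2-D point sets, not the authors' code; the point sets are hypothetical:

```python
import numpy as np

def mean_directed_hausdorff(a, b):
    """Mean directed Hausdorff distance from point set a to point set b.

    For every point in a, take the distance to its nearest neighbour in b,
    then average. Unlike the classical (max-min) directed Hausdorff
    distance, the mean variant is less sensitive to single outliers.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Two toy "crater centre" sets offset by one pixel along x
src = [(0.0, 0.0), (10.0, 0.0)]
dst = [(1.0, 0.0), (11.0, 0.0)]
print(mean_directed_hausdorff(src, dst))  # 1.0
```

Averaging nearest-neighbour distances rather than taking their maximum gives a smoother fitness surface for the registration search, which is presumably why the mean variant is used alongside the MPP energy and mutual information.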
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 9 (September 2020) . - pp 6039-6058
[article]
CSVM architectures for pixel-wise object detection in high-resolution remote sensing images / Youyou Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
[article]
Title: CSVM architectures for pixel-wise object detection in high-resolution remote sensing images
Document type: Article/Communication
Authors: Youyou Li; Farid Melgani; Binbin He
Publication year: 2020
Pages: pp 6059-6070
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] support vector machine classification
[IGN descriptor terms] object detection
[IGN descriptor terms] training data
[IGN descriptor terms] UAV imagery
[IGN descriptor terms] graphics processing unit
Abstract: (author) Object detection is an increasingly important task in very-high-resolution (VHR) remote sensing imagery analysis. With the development of GPU-computing capability, a growing number of deep convolutional neural networks (CNNs) have been designed to address the object detection challenge. However, compared with CPUs, GPUs are much more costly, so GPU-based methods are less attractive in practical applications. In this article, we propose a CPU-based method built on convolutional support vector machines (CSVMs) to address object detection in VHR images. Experiments are conducted on three VHR and two unmanned aerial vehicle (UAV) data sets with very limited training data. Results show that the proposed CSVM achieves competitive performance compared to U-Net, an efficient CNN-based model designed for small training data sets.
Record number: A2020-527
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2972289
Online publication date: 02/03/2020
Online: https://doi.org/10.1109/TGRS.2020.2972289
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95705
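The convolutional-SVM idea behind CSVM can be sketched as per-pixel features from a filter bank followed by a linear SVM. This is a simplified illustration under stated assumptions: the filter bank is hand-chosen here (the actual CSVM learns its filters), and a tiny Pegasos-style solver stands in for a full SVM implementation:

```python
import numpy as np

def conv_features(img, filters):
    """Per-pixel features: responses of each 3x3 filter (zero padding)."""
    h, w = img.shape
    padded = np.pad(img, 1)
    feats = np.empty((h, w, len(filters)))
    for k, f in enumerate(filters):
        for i in range(h):
            for j in range(w):
                feats[i, j, k] = np.sum(padded[i:i + 3, j:j + 3] * f)
    return feats.reshape(-1, len(filters))

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Tiny Pegasos-style linear SVM; labels y in {-1, +1}."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1 - eta * lam)           # regularization shrinkage
            if margin < 1:                 # hinge-loss subgradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

# Toy image: a bright square 'object' on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
filters = [np.ones((3, 3)) / 9.0,                             # local mean
           np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)]  # Laplacian
X = conv_features(img, filters)
y = np.where(img.reshape(-1) > 0.5, 1, -1)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())  # pixel-wise training accuracy, close to 1.0
```

The design point the abstract makes is that this whole pipeline runs on a CPU; no GPU-bound tensor framework is involved.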
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 9 (September 2020) . - pp 6059-6070
[article]
Heliport detection using artificial neural networks / Emre Baseski in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 9 (September 2020)
[article]
Title: Heliport detection using artificial neural networks
Document type: Article/Communication
Authors: Emre Baseski
Publication year: 2020
Pages: pp 541-546
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] comparative analysis
[IGN descriptor terms] deep learning
[IGN descriptor terms] object detection
[IGN descriptor terms] helicopter
[IGN descriptor terms] high-resolution image
[IGN descriptor terms] artificial neural network
[IGN descriptor terms] military area
Abstract: (author) Automatic image exploitation is a critical technology for quick content analysis of high-resolution remote sensing images. The presence of a heliport in an image usually indicates an important facility, such as a military installation; detecting heliports can therefore reveal critical information about the content of an image. In this article, two learning-based algorithms are presented that use artificial neural networks to detect H-shaped, light-colored heliports. The first algorithm is based on shape analysis of the heliport candidate segments using classical artificial neural networks; the second uses deep-learning techniques. While deep learning can solve difficult problems successfully, classical learning approaches can be tuned easily to obtain fast and reasonable results. Therefore, although the main objective of this article is heliport detection, it also compares a deep-learning-based approach with a classical learning-based approach and discusses the advantages and disadvantages of both techniques.
Record number: A2020-439
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.86.9.541
Online publication date: 01/09/2020
Online: https://doi.org/10.14358/PERS.86.9.541
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95929
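The shape analysis of candidate segments in the classical algorithm can be illustrated with a few hand-picked descriptors. The features below (bounding-box fill ratio and two symmetry scores) are illustrative assumptions, not the paper's actual feature set; in the article, such features feed an artificial neural network classifier:

```python
import numpy as np

def shape_features(mask):
    """Simple shape descriptors of a binary candidate segment:
    fill ratio of its bounding box, plus left-right and top-bottom
    symmetry scores (1.0 = perfectly symmetric)."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(float)
    fill = crop.mean()
    lr_sym = 1.0 - np.abs(crop - crop[:, ::-1]).mean()
    tb_sym = 1.0 - np.abs(crop - crop[::-1, :]).mean()
    return np.array([fill, lr_sym, tb_sym])

# An 'H' candidate: two vertical bars joined by a crossbar
h_mark = np.zeros((7, 5))
h_mark[:, 0] = h_mark[:, 4] = 1
h_mark[3, 1:4] = 1
# A solid blob (a non-heliport candidate such as a rooftop)
blob = np.ones((7, 5))

print(shape_features(h_mark))  # moderate fill, both symmetries = 1.0
print(shape_features(blob))    # fill = 1.0
```

An H-shaped marking is symmetric in both axes but only partially fills its bounding box, while compact blobs fill it almost completely, so even crude descriptors like these already separate the two candidate types.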
in Photogrammetric Engineering & Remote Sensing, PERS > vol 86 n° 9 (September 2020) . - pp 541-546
[article]
A novel deep network and aggregation model for saliency detection / Ye Liang in The Visual Computer, vol 36 n° 9 (September 2020)
[article]
Title: A novel deep network and aggregation model for saliency detection
Document type: Article/Communication
Authors: Ye Liang; Hongzhe Liu; Nan Ma
Publication year: 2020
Pages: pp 1883-1895
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] network architecture
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] deconvolution
[IGN descriptor terms] feature extraction
[IGN descriptor terms] saliency
Abstract: (author) Recent deep-learning-based methods for saliency detection have proved the effectiveness of integrating features at different scales. They usually design complex network architectures, e.g., multiple networks, to explore the multi-scale information of images, which is expensive in computation and memory. Feature maps produced by different subsampling convolutional layers have different spatial resolutions and can therefore serve as multi-scale features that reduce these costs. In this paper, by exploiting the in-network feature hierarchy of convolutional networks, we propose a novel multi-scale network for saliency detection (MSNSD) consisting of three modules: bottom-up feature extraction, top-down feature connection, and multi-scale saliency prediction. Moreover, to further boost the performance of MSNSD, an input-image-aware saliency aggregation method is proposed based on ridge regression, which combines MSNSD with some well-performing handcrafted shallow models. Extensive experiments on several benchmarks show that the proposed MSNSD outperforms state-of-the-art saliency methods with less computational and memory complexity. Meanwhile, our aggregation method is effective and efficient at combining deep and shallow models, making them complementary to each other.
Record number: A2020-601
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-019-01781-9
Online publication date: 09/12/2019
Online: https://doi.org/10.1007/s00371-019-01781-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95952
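The ridge-regression aggregation step can be sketched in closed form: stack the per-pixel outputs of several saliency models as columns and solve for combination weights. A minimal sketch, assuming hypothetical model outputs and no intercept term:

```python
import numpy as np

def ridge_weights(preds, target, lam=1e-3):
    """Closed-form ridge regression: find weights that combine the
    per-pixel predictions of several saliency models (columns of
    `preds`) to best approximate `target` (e.g. a reference map)."""
    X = np.asarray(preds, float)    # shape (n_pixels, n_models)
    y = np.asarray(target, float)   # shape (n_pixels,)
    n = X.shape[1]
    # Solve (X^T X + lam I) w = X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Hypothetical example: model B tracks the reference, model A is noise
rng = np.random.default_rng(0)
truth = rng.random(1000)
model_a = rng.random(1000)                           # uninformative
model_b = truth + 0.05 * rng.standard_normal(1000)   # informative
w = ridge_weights(np.column_stack([model_a, model_b]), truth)
print(w)  # the weight on model_b dominates
```

The appeal of this aggregation scheme is that the weights adapt per input image: a shallow handcrafted model that happens to work well on a given image automatically receives more weight there.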
in The Visual Computer > vol 36 n° 9 (September 2020) . - pp 1883-1895
[article]
Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
[article]
Title: Vehicle detection of multi-source remote sensing data using active fine-tuning network
Document type: Article/Communication
Authors: Xin Wu; Wei Li; Danfeng Hong; et al.
Publication year: 2020
Pages: pp 39-53
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] Germany
[IGN descriptor terms] deep learning
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] object detection
[IGN descriptor terms] multi-source data
[IGN descriptor terms] label
[IGN descriptor terms] aerial image
[IGN descriptor terms] digital surface model
[IGN descriptor terms] stereoscopic model
[IGN descriptor terms] segmentation
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] vehicle
Abstract: (author) Vehicle detection in remote sensing images has attracted increasing interest in recent years. However, detection ability is limited by the lack of well-annotated samples, especially in densely crowded scenes. Furthermore, since a variety of remotely sensed data sources is available, efficiently exploiting useful information from multi-source data for better vehicle detection is challenging. To solve these issues, a multi-source active fine-tuning vehicle detection (Ms-AFt) framework is proposed, which integrates transfer learning, segmentation, and active classification into a unified framework for auto-labeling and detection. The proposed Ms-AFt employs a fine-tuning network to first generate a vehicle training set from an unlabeled dataset. To cope with the diversity of vehicle categories, a multi-source-based segmentation branch is then designed to construct additional candidate object sets. The separation of high-quality vehicle samples is realized by a dedicated attentive classification network. Finally, all three branches are combined to achieve vehicle detection. Extensive experimental results on two open ISPRS benchmark datasets, the Vaihingen village and Potsdam city datasets, demonstrate the superiority and effectiveness of the proposed Ms-AFt for vehicle detection. In addition, the generalization ability of Ms-AFt in dense remote sensing scenes is further verified on stereo aerial imagery of a large camping site.
Record number: A2020-546
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.06.016
Online publication date: 13/07/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.06.016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95772
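The auto-labeling idea at the heart of Ms-AFt, growing the training set from confidently classified unlabeled samples, can be sketched as a simple selection rule. The threshold and the softmax scores below are hypothetical; the actual framework combines this step with transfer learning and segmentation branches:

```python
import numpy as np

def select_pseudo_labels(scores, threshold=0.9):
    """Keep only unlabeled samples the current model scores confidently.

    `scores` holds per-class probabilities, shape (n_samples, n_classes);
    a sample is auto-labelled with its argmax class when that probability
    reaches the threshold. Returns (kept indices, pseudo-labels).
    """
    scores = np.asarray(scores, float)
    conf = scores.max(axis=1)
    keep = np.nonzero(conf >= threshold)[0]
    return keep, scores[keep].argmax(axis=1)

# Hypothetical softmax outputs for four unlabeled image patches
probs = np.array([[0.95, 0.05],   # confident vehicle
                  [0.55, 0.45],   # ambiguous, discarded
                  [0.08, 0.92],   # confident background
                  [0.70, 0.30]])  # ambiguous, discarded
idx, labels = select_pseudo_labels(probs, threshold=0.9)
print(idx, labels)  # [0 2] [0 1]
```

Ambiguous patches are left unlabeled rather than risked as noisy training data; in an active fine-tuning loop they would be revisited once the model has improved.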
in ISPRS Journal of photogrammetry and remote sensing > vol 167 (September 2020) . - pp 39-53
[article]
Towards structureless bundle adjustment with two- and three-view structure approximation / Ewelina Rupnik in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2 (August 2020)
A worldwide 3D GCP database inherited from 20 years of massive multi-satellite observations / Laure Chandelier in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2 (August 2020)
A hybrid deep learning–based model for automatic car extraction from high-resolution airborne imagery / Mehdi Khoshboresh Masouleh in Applied geomatics, vol 12 n° 2 (June 2020)
An integrated approach to registration and fusion of hyperspectral and multispectral images / Yuan Zhou in IEEE Transactions on geoscience and remote sensing, vol 58 n° 5 (May 2020)
Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks / Mahmoud Saeedimoghaddam in International journal of geographical information science IJGIS, vol 34 n° 5 (May 2020)
Automated terrain feature identification from remote sensing imagery: a deep learning approach / Wenwen Li in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020)
Wavelet and non-parametric statistical based approach for long term land cover trend analysis using time series EVI data / Niraj Priyadarshi in Geocarto international, vol 35 n° 5 ([01/04/2020])
Deep learning for geometric and semantic tasks in photogrammetry and remote sensing / Christian Helpke in Geo-spatial Information Science, vol 23 n° 1 (March 2020)
Edge-reinforced convolutional neural network for road detection in very-high-resolution remote sensing imagery / Xiaoyan Lu in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 3 (March 2020)
Heuristic sample learning for complex urban scenes: Application to urban functional-zone mapping with VHR images and POI data / Xiuyuan Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)