Descriptor
Termes IGN > imagerie > image numérique
image numérique (synonym: image en mode maillé)
Documents available in this category (2121)
Tree species classification in tropical forests using visible to shortwave infrared WorldView-3 images and texture analysis / Matheus Pinheiro Ferreira in ISPRS Journal of photogrammetry and remote sensing, vol 149 (March 2019)
[article]
Title: Tree species classification in tropical forests using visible to shortwave infrared WorldView-3 images and texture analysis
Document type: Article/Communication
Authors: Matheus Pinheiro Ferreira; Fabien Hubert Wagner; Luiz E.O.C. Aragão
Year of publication: 2019
Pages: pp 119-131
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse texturale
[Termes IGN] arbre (flore)
[Termes IGN] Brésil
[Termes IGN] canopée
[Termes IGN] classification dirigée
[Termes IGN] espèce végétale
[Termes IGN] forêt tropicale
[Termes IGN] houppier
[Termes IGN] image à très haute résolution
[Termes IGN] image infrarouge
[Termes IGN] image Worldview
[Termes IGN] inventaire forestier (techniques et méthodes)
[Termes IGN] matrice de co-occurrence
[Termes IGN] pansharpening (fusion d'images)
[Termes IGN] variation saisonnière
Abstract: (Author) Tropical forest conservation and management can significantly benefit from information about the spatial distribution of tree species. Very-high resolution (VHR) spaceborne platforms have been hailed as a promising technology for mapping tree species over broad spatial extents. WorldView-3, the most advanced VHR sensor, provides spectral data in 16 bands covering the visible to near-infrared (VNIR, 400–1040 nm) and shortwave-infrared (SWIR, 1210–2365 nm) wavelength ranges. It also collects images at an unprecedented level of detail using a panchromatic band with 0.3 m spatial resolution. However, the potential of WorldView-3 at its full spectral and spatial resolution for tropical tree species classification remains unknown. In this study, we performed a comprehensive assessment of WorldView-3 images acquired in the dry and wet seasons for tree species discrimination in tropical semi-deciduous forests. Classification experiments were performed using VNIR bands individually and combined with SWIR channels. To take advantage of the sub-metric resolution of the panchromatic band for classification, we applied an individual tree crown (ITC)-based approach that employed pan-sharpened VNIR bands and gray level co-occurrence matrix texture features. We determined whether the combination of images from the two annual seasons improves the classification accuracy. Finally, we investigated which plant traits influenced species detection. The new SWIR sensing capabilities of WorldView-3 increased the average producer's accuracy by up to 7.8% by enabling the detection of non-photosynthetic vegetation within ITCs. The combination of VNIR bands from the two annual seasons did not improve the classification results compared with the results obtained using images from each season individually. The use of VNIR bands at their original 1.2-m spatial resolution yielded average producer's accuracies of 43.1 ± 3.1% and 38.8 ± 3% in the wet and dry seasons, respectively. The ITC-based approach improved the accuracy to 70 ± 8% in the wet season and 68.4 ± 7.4% in the dry season. Texture analysis of the panchromatic band enabled the detection of species-specific differences in crown structure, which improved species detection. The combination of texture analysis, pan-sharpening, and ITC delineation is a promising approach for tree species classification in tropical forests with WorldView-3 satellite images.
Record number: A2019-117
Authors' affiliation: non-IGN
Theme: BIODIVERSITE/FORET/IMAGERIE
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.01.019
Online publication date: 28/01/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.01.019
Electronic resource format: URL Article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92444
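The gray level co-occurrence matrix (GLCM) texture features named in the abstract can be sketched in a few lines. The code below is an illustrative sketch, not the authors' implementation: it builds a normalized symmetric GLCM for one pixel offset and derives two common Haralick statistics (contrast and homogeneity), the kind of features the ITC-based approach computes on the panchromatic band.

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Normalized symmetric gray-level co-occurrence matrix.

    img: 2D array of integer gray levels in [0, levels); offset: (row, col) shift."""
    dr, dc = offset
    rows, cols = img.shape
    m = np.zeros((levels, levels))
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            m[img[r, c], img[r + dr, c + dc]] += 1
    m = m + m.T                        # count both directions (symmetric GLCM)
    return m / m.sum()

def texture_features(p):
    """Haralick contrast and homogeneity of a normalized GLCM p."""
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    return contrast, homogeneity
```

A perfectly flat crown patch gives contrast 0 and homogeneity 1; a checkerboard-like crown texture gives high contrast and low homogeneity, which is what makes these statistics discriminative for crown structure.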
in ISPRS Journal of photogrammetry and remote sensing > vol 149 (March 2019). - pp 119-131 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019031 | RAB | Journal | Centre de documentation | En réserve L003 | Available
081-2019033 | DEP-RECP | Journal | LASTIG | Dépôt en unité | Not for loan
081-2019032 | DEP-RECF | Journal | Nancy | Dépôt en unité | Not for loan
Complete 3D scene parsing from an RGBD image / Chuhang Zou in International journal of computer vision, vol 127 n° 2 (February 2019)
[article]
Title: Complete 3D scene parsing from an RGBD image
Document type: Article/Communication
Authors: Chuhang Zou; Ruiqi Guo; Zhizhong Li; Derek Hoiem
Year of publication: 2019
Pages: pp 143-162
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] cohérence géométrique
[Termes IGN] compréhension de l'image
[Termes IGN] image isolée
[Termes IGN] image RVB
[Termes IGN] reconstruction d'objet
[Termes IGN] scène 3D
Abstract: (Author) One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from a single RGBD image. Our representation encodes the layout of orthogonal walls and the extent of objects, modeled with CAD-like 3D shapes. We parse both the visible and occluded portions of the scene and all observable objects, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the well-known challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scenes. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and spatial consistency. We use support inference to aid interpretation and propose a retrieval scheme that uses convolutional neural networks to classify regions and retrieve objects with similar shapes. We demonstrate the performance of our method on our newly annotated NYUd v2 dataset (Silberman et al., Computer Vision – ECCV 2012, pp 746–760) with detailed 3D shapes.
Record number: A2018-598
Authors' affiliation: non-IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1133-z
Online publication date: 21/11/2018
Online: https://doi.org/10.1007/s11263-018-1133-z
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92525
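The retrieval step described in the abstract, classifying regions and retrieving training objects with similar shapes, reduces at inference time to nearest-neighbour search over region descriptors. The sketch below is illustrative only, not the authors' network: descriptors are assumed to come from some CNN, and the gallery features and model ids are hypothetical placeholders.

```python
import numpy as np

def retrieve_models(query_feat, gallery_feats, gallery_models, k=1):
    """Return the 3D model ids of the k training regions whose descriptors
    are most similar (cosine similarity) to the query region descriptor."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    order = np.argsort(-(g @ q))          # best match first
    return [gallery_models[i] for i in order[:k]]
```

In the full pipeline the retrieved 3D model would then be aligned to the observed depth, but the matching itself is this simple descriptor comparison.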
in International journal of computer vision > vol 127 n° 2 (February 2019). - pp 143-162 [article]
Efficiently annotating object images with absolute size information using mobile devices / Martin Hofmann in International journal of computer vision, vol 127 n° 2 (February 2019)
[article]
Title: Efficiently annotating object images with absolute size information using mobile devices
Document type: Article/Communication
Authors: Martin Hofmann; Marco Seeland; Patrick Mäder
Year of publication: 2019
Pages: pp 207-224
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image
[Termes IGN] appareil portable
[Termes IGN] appariement automatique
[Termes IGN] image numérique
[Termes IGN] longueur focale
Abstract: (Author) The projection of a real-world scene onto a planar image sensor entails the loss of information about the 3D structure as well as the absolute dimensions of the scene. For image analysis and object classification tasks, however, absolute size information can make results more accurate. Today, the creation of size-annotated image datasets is effort-intensive and typically requires measurement equipment not available to public image contributors. In this paper, we propose an effective annotation method that uses the camera of a smart mobile device to capture the missing size information along with the image. The approach builds on the fact that, with a camera calibrated to a specific object distance, lengths can be measured in the object's plane. We use the camera's minimum focus distance as the calibration distance and propose an adaptive feature-matching process for precise computation of the scale change between two images, facilitating measurements at larger object distances. Eventually, the measured object is segmented and its size information is annotated for later analysis. A user study showed that humans are able to retrieve the calibration distance with low variance. The proposed approach achieves a measurement accuracy comparable to manual measurement with a ruler and outperforms state-of-the-art methods in terms of accuracy and repeatability. Consequently, the proposed method allows in-situ size annotation of objects in images without the need for additional equipment or an artificial reference object in the scene.
Record number: A2018-600
Authors' affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1093-3
Online publication date: 24/05/2018
Online: https://doi.org/10.1007/s11263-018-1093-3
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92527
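The measurement principle, a pinhole camera calibrated at a known object distance plus a feature-matching estimate of the scale change between two shots, can be sketched as follows. This is a simplified illustration of the idea, not the paper's implementation; the parameter names (focal length, pixel pitch, keypoint arrays) are assumptions.

```python
import numpy as np

def object_length_mm(length_px, focal_mm, pixel_pitch_mm, distance_mm):
    """Pinhole model: one sensor pixel spans pixel_pitch * distance / focal
    in the object plane, so a span of length_px pixels measures this length."""
    return length_px * pixel_pitch_mm * distance_mm / focal_mm

def scale_change(pts_a, pts_b):
    """Scale factor between two images from matched keypoint coordinates:
    the median ratio of corresponding pairwise point distances."""
    da = np.linalg.norm(pts_a[:, None] - pts_a[None, :], axis=-1)
    db = np.linalg.norm(pts_b[:, None] - pts_b[None, :], axis=-1)
    mask = da > 0
    return float(np.median(db[mask] / da[mask]))
```

Calibrating at the minimum focus distance fixes `distance_mm` once per device; `scale_change` then propagates that calibration to images taken farther from the object.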
in International journal of computer vision > vol 127 n° 2 (February 2019). - pp 207-224 [article]
Improving LiDAR classification accuracy by contextual label smoothing in post-processing / Nan Li in ISPRS Journal of photogrammetry and remote sensing, vol 148 (February 2019)
[article]
Title: Improving LiDAR classification accuracy by contextual label smoothing in post-processing
Document type: Article/Communication
Authors: Nan Li; Chun Liu; Norbert Pfeifer
Year of publication: 2019
Pages: pp 13-31
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] graphe
[Termes IGN] lissage de valeur
[Termes IGN] post-traitement
[Termes IGN] précision de la classification
[Termes IGN] régularisation
[Termes IGN] scène urbaine
[Termes IGN] semis de points
Abstract: (Author) We propose a contextual label-smoothing method to improve LiDAR classification accuracy in a post-processing step. Under the framework of global graph-structured regularization, we enhance the effectiveness of label smoothing in two respects. First, each point can collect sufficient label-relevant neighborhood information to verify its label based on an optimal graph. Second, the input label probability set is improved by probabilistic label relaxation to be more consistent with the spatial context. With this optimal graph and a reliable label probability set, the final labels are computed by graph-structured regularization. We demonstrate the contextual label-smoothing approach on two separate urban airborne LiDAR datasets with complex urban scenes. Significant improvements in classification accuracy are achieved without losing small objects (such as façades and cars). The overall accuracy is increased by 7.01% on the Vienna dataset and 6.88% on the Vaihingen dataset. Moreover, most large wrongly labeled regions are corrected by long-range interactions derived from the optimal graph, and misclassified regions that lack neighborhood communication of correct labels are also corrected by the probabilistic label relaxation.
Record number: A2019-069
Authors' affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.11.022
Online publication date: 13/12/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.11.022
Electronic resource format: URL Article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92156
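As a rough illustration of label smoothing over a neighbourhood graph (a deliberately simple stand-in for the paper's graph-structured regularization and probabilistic label relaxation, not the authors' algorithm): each point's class probabilities are iteratively mixed with the mean of its k nearest neighbours, so an isolated wrong label inside a coherent region gets corrected by its context.

```python
import numpy as np

def smooth_labels(points, probs, k=3, n_iter=10, alpha=0.5):
    """Mix each point's class probabilities with the mean of its k nearest
    neighbours for n_iter rounds, then relabel by argmax."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest neighbours, self excluded
    p = probs.astype(float).copy()
    for _ in range(n_iter):
        p = (1 - alpha) * p + alpha * p[nn].mean(axis=1)
    return p.argmax(axis=1)
```

The paper's long-range interactions come from a graph built more carefully than this brute-force k-NN, but the smoothing mechanism is of the same family.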
in ISPRS Journal of photogrammetry and remote sensing > vol 148 (February 2019). - pp 13-31 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019021 | RAB | Journal | Centre de documentation | En réserve L003 | Available
081-2019023 | DEP-RECP | Journal | LASTIG | Dépôt en unité | Not for loan
081-2019022 | DEP-RECF | Journal | Nancy | Dépôt en unité | Not for loan
Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery / Lichao Mou in IEEE Transactions on geoscience and remote sensing, vol 57 n° 2 (February 2019)
[article]
Title: Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery
Document type: Article/Communication
Authors: Lichao Mou; Lorenzo Bruzzone; Xiao Xiang Zhu
Year of publication: 2019
Pages: pp 924-935
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de changement
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] image multibande
[Termes IGN] réseau neuronal récurrent
Abstract: (Author) Change detection is one of the central problems in Earth observation and has been extensively investigated over recent decades. In this paper, we propose a novel recurrent convolutional neural network (ReCNN) architecture, which is trained to learn a joint spectral-spatial-temporal feature representation in a unified framework for change detection in multispectral images. To this end, we bring together a convolutional neural network and a recurrent neural network into one end-to-end network. The former generates rich spectral-spatial feature representations, while the latter effectively analyzes temporal dependence in bitemporal images. In comparison with previous approaches to change detection, the proposed network architecture possesses three distinctive properties: 1) it is end-to-end trainable, in contrast to most existing methods whose components are separately trained or computed; 2) it naturally harnesses spatial information, which has been proven beneficial to the change detection task; and 3) it is capable of adaptively learning the temporal dependence between multitemporal images, unlike most algorithms that use fairly simple operations such as image differencing or stacking. As far as we know, this is the first time that a recurrent convolutional network architecture has been proposed for multitemporal remote sensing image analysis. The proposed network is validated on real multispectral datasets. Both visual and quantitative analyses of the experimental results demonstrate the competitive performance of the proposed model.
Record number: A2019-110
Authors' affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2863224
Online publication date: 20/11/2018
Online: https://doi.org/10.1109/TGRS.2018.2863224
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92449
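The core idea of the abstract, per-date convolutional features fed to a recurrent unit that models the temporal dependence between the two acquisition dates, can be illustrated with a toy numpy sketch. This is not the authors' ReCNN: the kernels, weights, and patch sizes are placeholders, and a vanilla tanh step stands in for the trained recurrent unit.

```python
import numpy as np

def conv_feature(patch, kernel):
    """Valid 2D cross-correlation followed by global average pooling:
    one scalar 'spectral-spatial' feature per kernel."""
    kh, kw = kernel.shape
    h = patch.shape[0] - kh + 1
    w = patch.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out.mean()

def rnn_step(h, x, w_h, w_x, b):
    """One vanilla tanh recurrent step along the temporal axis."""
    return np.tanh(w_h @ h + w_x @ x + b)

# Bitemporal toy input: the same location at two dates, two fixed kernels.
rng = np.random.default_rng(0)
t1, t2 = rng.random((5, 5)), rng.random((5, 5))
kernels = [np.ones((3, 3)) / 9.0, np.eye(3) / 3.0]
x1 = np.array([conv_feature(t1, k) for k in kernels])
x2 = np.array([conv_feature(t2, k) for k in kernels])
w_h, w_x, b = np.eye(2) * 0.5, np.eye(2) * 0.5, np.zeros(2)
h = rnn_step(np.zeros(2), x1, w_h, w_x, b)   # encode date 1
h = rnn_step(h, x2, w_h, w_x, b)             # fuse date 2 with date-1 memory
```

The final state `h` mixes both dates through the recurrent weights rather than through a fixed operation like differencing or stacking, which is the property the abstract highlights.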
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 2 (February 2019). - pp 924-935 [article]
Smart cartographic background symbolization for map mashups in geoportals: A proof of concept by example of landuse representation / Nadia H. Panchaud in Cartographic journal (the), Vol 56 n° 1 (February 2019)
Tree cover mapping using hybrid fuzzy C-means method and multispectral satellite images / Linda Gulbe in Baltic forestry, vol 25 n° 1 ([01/02/2019])
3D radiative transfer modeling over complex vegetation canopies and forest reconstruction from LIDAR measurements / Jianbo Qi (2019)
Ailanthus altissima mapping from multi-temporal very high resolution satellite images / Cristina Tarantino in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019)
Archival aerial photogrammetric surveys, a data source to study land use/cover evolution over the last century: opportunities and issues / Arnaud Le Bris (2019)
Challenges in grassland mowing event detection with multimodal Sentinel images / Anatol Garioud (2019)
Détection et localisation d'objets 3D par apprentissage profond en topologie capteur / Pierre Biasutti (2019)