Descriptor
Documents available in this category (75)
Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space / Min Wu in The Visual Computer, vol 37 n° 7 (July 2021)
[article]
Title: Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space
Document type: Article/Communication
Authors: Min Wu, Author; Xin Jin, Author; Qian Jiang, Author; et al.
Year of publication: 2021
Article pages: pp 1707-1729
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] color contrast
[IGN terms] multi-scale data
[IGN terms] color image
[IGN terms] RGB image
[IGN terms] gray level (image)
[IGN terms] generative adversarial network
Abstract: (author) Image colorization assigns colors to a gray-level or single-channel image, which is a significant and challenging task in image processing, especially for remote sensing images. This paper proposes a new method for colorizing remote sensing images based on a deep convolutional generative adversarial network (DCGAN). The generator is a symmetrical structure built on the auto-encoder principle, into which a specially designed multi-scale convolutional module is introduced. The proposed generator thus enables the whole model to retain more image features during up-sampling and down-sampling. Meanwhile, the discriminator uses a residual neural network (ResNet-18) that can compete with the generator, so that the generator and discriminator effectively optimize each other. In the proposed method, a color space transformation is first applied to convert remote sensing images from RGB to YUV. Then, the Y channel (a gray-level image) is used as the input of the neural network model to predict the U and V channels. Finally, the predicted U and V channels are concatenated with the original Y channel into a complete YUV image, which is transformed back into RGB space to obtain the final color image. Experiments comparing different image colorization methods show that the proposed method performs well in both visual quality and objective indices for the colorization of remote sensing images.
Record number: A2021-540
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-020-01933-2
Online publication date: 28/08/2020
Online: https://doi.org/10.1007/s00371-020-01933-2
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98018
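The color-space stage of the pipeline described in the abstract can be sketched in plain Python. This is a minimal illustration assuming BT.601 conversion coefficients; `predict_uv` is a hypothetical stub standing in for the paper's symmetrical multi-scale DCGAN generator, not its actual implementation:

```python
# RGB -> YUV, keep Y as the gray-level network input, then recombine the
# predicted U/V with the original Y and convert back to RGB.

def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (0..1 floats) to BT.601 YUV."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)          # chrominance (blue projection)
    v = 0.877 * (r - y)          # chrominance (red projection)
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Invert the BT.601 transform."""
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

def predict_uv(y):
    """Stub for the generator: maps a Y value to (U, V).
    In the paper this role is played by the trained DCGAN."""
    return 0.0, 0.0              # a real model returns learned chrominance

def colorize(gray_pixels):
    """Colorize a list of gray-level (Y) values via the YUV route."""
    out = []
    for y in gray_pixels:
        u, v = predict_uv(y)
        out.append(yuv_to_rgb(y, u, v))
    return out
```

Working in YUV lets the network predict only the two chrominance channels while the luminance is preserved exactly, which is the motivation the abstract gives for the transform.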
in The Visual Computer > vol 37 n° 7 (July 2021) . - pp 1707-1729 [article]
Semantic unsupervised change detection of natural land cover with multitemporal object-based analysis on SAR images / Donato Amitrano in IEEE Transactions on geoscience and remote sensing, vol 59 n° 7 (July 2021)
[article]
Title: Semantic unsupervised change detection of natural land cover with multitemporal object-based analysis on SAR images
Document type: Article/Communication
Authors: Donato Amitrano, Author; Raffaella Guida, Author; Pasquale Lervolino, Author
Year of publication: 2021
Article pages: pp 5494-5514
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] object-based image analysis
[IGN terms] forest biomass
[IGN terms] canopy
[IGN terms] land cover change
[IGN terms] fuzzy classification
[IGN terms] unsupervised classification
[IGN terms] deforestation
[IGN terms] change detection
[IGN terms] multi-temporal image
[IGN terms] speckled radar image
[IGN terms] RGB image
[IGN terms] Sentinel-SAR image
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] image segmentation
[IGN terms] image thresholding
[IGN terms] image texture
Abstract: (author) Change detection is one of the most addressed topics in the remote sensing community. When it is performed on synthetic aperture radar images, the most critical issues are: 1) the labeling of the identified changing patterns and 2) the limited robustness of classic pixel-based approaches based on threshold segmentation of an appropriate change index, which tend to fail when multiple changes are present in the study area. In this work, a new methodology for unsupervised change detection in vegetation canopy is presented. It overcomes these limitations by exploiting multitemporal geographical object-based image analysis to let the intrinsic semantics of the data emerge and to direct the processing toward the identification of precise classes of changes through dictionary-based preclassification and fuzzy combination of class-specific information layers. The proposed methodology has been tested in ten experiments covering agriculture and clear-cut deforestation applications. The results, validated against literature methods, highlight the superiority of the proposed approach, quantitatively assessed in terms of standard classification quality parameters. On the agriculture experiments, it increased detection accuracy by about 11% on average with respect to the best-performing literature method, with an increase in the false alarm rate on the order of 0.5%. For deforestation, the registered detection accuracy was comparable to that achieved in the literature, while the most significant benefit was a reduction by more than one-third of the number of detected false deforestation patterns. Overall, the main characteristics of the proposed architecture are its robustness and its lack of any supervision, which make it well suited for operational scenarios.
Record number: A2021-528
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3029841
Online publication date: 22/10/2020
Online: https://doi.org/10.1109/TGRS.2020.3029841
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97978
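The classic pixel-based baseline that the abstract argues against (threshold segmentation of a change index) can be sketched as follows. The log-ratio index is a common choice for SAR intensities; all names here are illustrative, not taken from the paper:

```python
import math

def log_ratio_index(before, after, eps=1e-6):
    """Classic SAR change index: absolute log-ratio of backscatter
    intensities, computed per pixel (flat lists here for simplicity)."""
    return [abs(math.log((a + eps) / (b + eps)))
            for a, b in zip(after, before)]

def threshold_changes(index, tau):
    """Pixel-based decision the abstract criticizes: a single global
    threshold on the change index, with no semantic labeling of changes."""
    return [x > tau for x in index]
```

A single global tau cannot separate several co-occurring change classes, which is exactly the limitation the proposed object-based, dictionary/fuzzy approach is designed to overcome.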
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 7 (July 2021) . - pp 5494-5514 [article]
Assessing forest phenology: A multi-scale comparison of near-surface (UAV, spectral reflectance sensor, PhenoCam) and satellite (MODIS, Sentinel-2) remote sensing / Shangharsha Thapa in Remote sensing, vol 13 n° 8 (April-2 2021)
[article]
Title: Assessing forest phenology: A multi-scale comparison of near-surface (UAV, spectral reflectance sensor, PhenoCam) and satellite (MODIS, Sentinel-2) remote sensing
Document type: Article/Communication
Authors: Shangharsha Thapa, Author; Virginia Garcia Millan, Author; Lars Eklundh, Author
Year of publication: 2021
Article pages: n° 1597
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] multi-scale analysis
[IGN terms] multiband sensor
[IGN terms] UAV-captured image
[IGN terms] RGB image
[IGN terms] Sentinel-MSI image
[IGN terms] Terra-MODIS image
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] phenology
[IGN terms] spectral reflectance
[IGN terms] time series
[IGN terms] Sweden
[IGN terms] forest monitoring
[IGN terms] seasonal variation
Abstract: (author) The monitoring of forest phenology from near-surface sensors such as Unmanned Aerial Vehicles (UAVs), PhenoCams, and Spectral Reflectance Sensors (SRS), rather than from satellite sensors alone, has recently gained significant attention in the fields of remote sensing and vegetation phenology. However, exploring different aspects of forest phenology from these sensors and drawing comparisons between their vegetation index (VI) time series remains a challenge. Accordingly, this research explores the potential of near-surface sensors to track the temporal dynamics of phenology, cross-compares their results against satellite observations (MODIS, Sentinel-2), and validates satellite-derived phenology. Time series of the Normalized Difference Vegetation Index (NDVI), Green Chromatic Coordinate (GCC), and Normalized Difference of Green & Red (VIgreen) indices were extracted from both near-surface and satellite sensor platforms. Regression analysis between the NDVI time series from different sensors shows high Pearson correlation coefficients (r > 0.75). Despite the good correlations, there was a notable offset and significant differences in slope during the green-up and senescence periods. SRS showed the most distinctive NDVI profile, different from the other sensors. PhenoCam GCC tracked canopy green-up better than the other indices, with a well-defined start, end, and peak of season, and was most closely correlated (r > 0.93) with the satellites, while SRS-based VIgreen showed the weakest correlation (r = 0.58) against Sentinel-2. Phenophase transition dates were estimated and validated against visual inspection of the PhenoCam data. The Start of Spring (SOS) and End of Spring (EOS) could be predicted with an accuracy of
Record number: A2021-382
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/rs13081597
Online publication date: 20/04/2021
Online: https://doi.org/10.3390/rs13081597
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97633
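The three vegetation indices compared in the study follow standard definitions, which can be sketched per pixel in plain Python (a minimal illustration; band values are reflectances or digital numbers, and division-by-zero handling is omitted):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def gcc(r, g, b):
    """Green Chromatic Coordinate from an RGB (PhenoCam/UAV) image:
    the green value over the sum of the three bands."""
    return g / (r + g + b)

def vigreen(r, g):
    """Normalized Difference of Green & Red (VIgreen): (G - R) / (G + R)."""
    return (g - r) / (g + r)
```

GCC and VIgreen need only RGB bands, which is why they can be computed from PhenoCam and UAV imagery, whereas NDVI requires a near-infrared band (SRS, MODIS, Sentinel-2).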
in Remote sensing > vol 13 n° 8 (April-2 2021) . - n° 1597 [article]
Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors / Longyu Zhang in ISPRS International journal of geo-information, vol 10 n° 4 (April 2021)
[article]
Title: Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors
Document type: Article/Communication
Authors: Longyu Zhang, Author; Hao Xia, Author; Qingjun Liu, Author; et al.
Year of publication: 2021
Article pages: n° 195
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] k-means clustering
[IGN terms] pose estimation
[IGN terms] feature extraction
[IGN terms] RGB image
[IGN terms] 3D modeling
[IGN terms] indoor positioning
[IGN terms] RANSAC (algorithm)
[IGN terms] indoor scene
[IGN terms] point cloud
[IGN terms] SIFT (algorithm)
[IGN terms] SURF (algorithm)
[IGN terms] smartphone
[IGN terms] computer vision
Abstract: (author) Positioning information has become some of the most important information processed and displayed on smart mobile devices. In this paper, we propose a visual positioning method using RGB-D images on smart mobile devices. First, the pose of each image in the training set is computed through feature extraction and description, image registration, and pose graph optimization. Then, in the image retrieval stage, the training set and the query set are clustered to generate vector of locally aggregated descriptors (VLAD) description vectors. To overcome the loss of image color information in this description vector and to improve retrieval accuracy under different lighting conditions, opponent color information and depth information are added to the description vector used for retrieval. Finally, using the point cloud corresponding to the retrieved image and its pose, the pose of the query image is calculated by the perspective-n-point (PnP) method. Results for indoor scene positioning under different illumination conditions show that the proposed method not only improves positioning accuracy compared with the original VLAD and ORB-SLAM2, but also has high computational efficiency.
Record number: A2021-481
Author affiliation: non-IGN
Theme: IMAGERY/POSITIONING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi10040195
Online publication date: 24/03/2021
Online: https://doi.org/10.3390/ijgi10040195
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97425
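The core VLAD aggregation that the retrieval stage builds on can be sketched in pure Python. This is a sketch of standard VLAD only; the paper's improved variant additionally folds opponent-color and depth information into the vector, which is omitted here:

```python
import math

def vlad(descriptors, centers):
    """Vector of Locally Aggregated Descriptors: for each cluster center,
    sum the residuals (descriptor - center) of the descriptors assigned to
    it by nearest neighbor, concatenate, and L2-normalize."""
    dim = len(centers[0])
    acc = [[0.0] * dim for _ in centers]
    for d in descriptors:
        # hard-assign d to the nearest center (squared Euclidean distance)
        k = min(range(len(centers)),
                key=lambda i: sum((d[j] - centers[i][j]) ** 2
                                  for j in range(dim)))
        for j in range(dim):
            acc[k][j] += d[j] - centers[k][j]
    flat = [x for row in acc for x in row]
    norm = math.sqrt(sum(x * x for x in flat)) or 1.0
    return [x / norm for x in flat]
```

The cluster centers come from k-means over training-set local descriptors (SIFT/SURF in this paper); images are then ranked by the distance between their VLAD vectors.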
in ISPRS International journal of geo-information > vol 10 n° 4 (April 2021) . - n° 195 [article]
Multi-level progressive parallel attention guided salient object detection for RGB-D images / Zhengyi Liu in The Visual Computer, vol 37 n° 3 (March 2021)
[article]
Title: Multi-level progressive parallel attention guided salient object detection for RGB-D images
Document type: Article/Communication
Authors: Zhengyi Liu, Author; Quntao Duan, Author; Song Shi, Author; et al.
Year of publication: 2021
Article pages: pp 529-540
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] hierarchical approach
[IGN terms] automatic detection
[IGN terms] object detection
[IGN terms] feature extraction
[IGN terms] RGB image
[IGN terms] spatial optimization
[IGN terms] depth
[IGN terms] recurrent neural network
[IGN terms] saliency
Abstract: (author) Detecting salient objects in RGB-D images has attracted increasing attention in recent years. It benefits from the widespread use of depth sensors and can be applied to the comprehensive understanding of RGB-D images. Existing models focus on double-stream networks that transfer information from the color stream to the depth stream, but a depth stream with one channel of information cannot learn the same features as a color stream with three channels, even when the HHA representation is adopted. In our work, a four-channel RGB-D input is chosen, and progressive parallel spatial and channel attention mechanisms are applied to improve feature representation. Spatial and channel attention focus on the positions and channels in the image that respond most strongly to salient objects. Both attentive features are optimized by the attentive feature from the higher layer and fed in parallel into a recurrent convolutional layer to generate side-output saliency maps guided by the saliency map from the higher layer. Finally, the multi-level saliency maps are fused from a multi-scale perspective. Experiments on benchmark datasets demonstrate that the parallel attention mechanism and progressive optimization play an important role in improving the accuracy of salient object detection, and that our model outperforms state-of-the-art models on evaluation metrics.
Record number: A2021-340
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-020-01821-9
Online publication date: 18/02/2020
Online: https://doi.org/10.1007/s00371-020-01821-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97578
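The parallel spatial and channel attention the abstract describes can be illustrated with a toy sketch on a C×H×W feature block. This is a simplified assumption-laden illustration in plain Python: real models learn these gates with trained layers rather than deriving them from raw means:

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feat):
    """Weight each channel by a sigmoid gate on its global average
    (a simplified squeeze step; the paper learns this mapping)."""
    gates = []
    for ch in feat:                       # feat: C x H x W nested lists
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gates.append(_sigmoid(mean))
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feat, gates)]

def spatial_attention(feat):
    """Weight each spatial position by a sigmoid gate on its
    cross-channel mean, applied identically to every channel."""
    h, w = len(feat[0]), len(feat[0][0])
    gate = [[_sigmoid(sum(ch[i][j] for ch in feat) / len(feat))
             for j in range(w)] for i in range(h)]
    return [[[ch[i][j] * gate[i][j] for j in range(w)]
             for i in range(h)] for ch in feat]
```

In the paper's architecture the two attentive features are computed in parallel and then fused with guidance from the higher layer; here they are just independent helpers showing what "attending" to channels versus positions means.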
in The Visual Computer > vol 37 n° 3 (March 2021) . - pp 529-540 [article]
Activity recognition in residential spaces with Internet of things devices and thermal imaging / Kshirasagar Naik in Sensors, vol 21 n° 3 (February 2021) Permalink
Aleatoric uncertainty estimation for dense stereo matching via CNN-based cost volume analysis / Max Mehltretter in ISPRS Journal of photogrammetry and remote sensing, vol 171 (January 2021) Permalink
Cartographie dense et compacte par vision RGB-D pour la navigation d’un robot mobile / Bruce Canovas (2021) Permalink
Détection d’ouvertures par segmentation sémantique de nuages de points 3D : apport de l’apprentissage profond / Camille Lhenry (2021) Permalink
Real-time multimodal semantic scene understanding for autonomous UGV navigation / Yifei Zhang (2021) Permalink
The challenge of robust trait estimates with deep learning on high resolution RGB images / Etienne David (2021) Permalink
CNN-based tree species classification using high resolution RGB image data from automated UAV observations / Sebastian Egli in Remote sensing, vol 12 n° 23 (December-2 2020) Permalink
Automatic building footprint extraction from UAV images using neural networks / Zoran Kokeza in Geodetski vestnik, vol 64 n° 4 (December 2020 - February 2021) Permalink
Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery / Teja Kattenborn in Remote sensing in ecology and conservation, vol 6 n° 4 (December 2020) Permalink
Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks / Felix Schiefer in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020) Permalink
Textural classification of remotely sensed images using multiresolution techniques / Rizwan Ahmed Ansari in Geocarto international, vol 35 n° 14 ([15/10/2020]) Permalink
3D hand mesh reconstruction from a monocular RGB image / Hao Peng in The Visual Computer, vol 36 n° 10-12 (October 2020) Permalink
Trajectory drift-compensated solution of a stereo RGB-D mapping system / Shengjun Tang in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 6 (June 2020) Permalink
Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks / Mahmoud Saeedimoghaddam in International journal of geographical information science IJGIS, vol 34 n° 5 (May 2020) Permalink
A review of techniques for 3D reconstruction of indoor environments / Zhizhong Kang in ISPRS International journal of geo-information, vol 9 n° 5 (May 2020) Permalink
Shrub biomass estimates in former burnt areas using Sentinel 2 images processing and classification / Jose Aranha in Forests, vol 11 n° 5 (May 2020) Permalink
Above-ground biomass estimation and yield prediction in potato by using UAV-based RGB and hyperspectral imaging / Bo Li in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020) Permalink
Multichannel Pulse-Coupled Neural Network-Based Hyperspectral Image Visualization / Puhong Duan in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020) Permalink