Descriptor
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > analyse d'image orientée objet
analyse d'image orientée objet
Documents available in this category (245)
Object-based crop classification using multi-temporal SPOT-5 imagery and textural features with a Random Forest classifier / Huanxue Zhang in Geocarto international, vol 33 n° 10 (October 2018)
[article]
Title: Object-based crop classification using multi-temporal SPOT-5 imagery and textural features with a Random Forest classifier
Document type: Article/Communication
Authors: Huanxue Zhang; Qiangzi Li; Jiangui Liu; Taifeng Dong; Heather McNairn
Year of publication: 2018
Pages: pp 1017 - 1035
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] bande spectrale
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] corrélation par régions de niveaux de gris
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image SPOT 5
[Termes IGN] indice de végétation
[Termes IGN] limite de terrain
[Termes IGN] Ontario (Canada)
[Termes IGN] réflectance spectrale
[Termes IGN] segmentation d'image
[Termes IGN] surface cultivée
[Termes IGN] surveillance agricole
[Termes IGN] texture d'image
[Termes IGN] variogramme
Abstract: (author) In this study, an object-based image analysis (OBIA) approach was developed to classify field crops using multi-temporal SPOT-5 images with a random forest (RF) classifier. A wide range of features, including spectral reflectance, vegetation indices (VIs), textural features based on the grey-level co-occurrence matrix (GLCM) and textural features based on the geostatistical semivariogram (GST), were extracted for classification, and their performance was evaluated with the RF variable importance measures. Results showed that the best segmentation quality was achieved using the SPOT image acquired in September, with a scale parameter of 40. The spectral reflectance and the GST contributed more strongly to crop classification than the VIs and GLCM textures. A subset of 60 features was selected using the RF-based feature selection (FS) method; within this subset, the near-infrared reflectance and the image acquired in August (jointing and heading stages) were found to be the best for crop classification.
Record number: A2019-049
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2017.1333533
Published online: 23/06/2017
Online: https://doi.org/10.1080/10106049.2017.1333533
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92063
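The feature-ranking step this abstract describes — Random Forest variable importance used to pick a feature subset for classification — can be sketched as follows. This is an illustrative sketch on synthetic stand-in data, not the authors' code; the feature names and class labels are hypothetical.

```python
# Illustrative sketch: ranking object features by Random Forest variable
# importance and keeping a top subset, as in the RF-based feature selection
# the abstract describes. Data and feature names are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_objects, n_features = 500, 12
feature_names = [f"feat_{i}" for i in range(n_features)]  # stand-ins for NIR reflectance, NDVI, GLCM contrast, GST range...

X = rng.normal(size=(n_objects, n_features))
# Synthetic two-crop labels driven mainly by feat_0 (standing in for NIR reflectance).
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_objects) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1])
top = [name for name, _ in ranking[:5]]  # feature subset kept for classification
```

In the paper's workflow the same importance scores are computed over the full spectral, VI, GLCM and GST feature set, and a 60-feature subset is retained.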
in Geocarto international > vol 33 n° 10 (October 2018) . - pp 1017 - 1035
Copies (1)
Barcode 059-2018041 | Call number RAB | Type Revue | Location Centre de documentation | Section En réserve L003 | Availability Disponible

Stand age estimation of rubber (Hevea brasiliensis) plantations using an integrated pixel- and object-based tree growth model and annual Landsat time series / Gang Chen in ISPRS Journal of photogrammetry and remote sensing, vol 144 (October 2018)
[article]
Title: Stand age estimation of rubber (Hevea brasiliensis) plantations using an integrated pixel- and object-based tree growth model and annual Landsat time series
Document type: Article/Communication
Authors: Gang Chen; Jean-Claude Thill; Sutee Anantsuksomsri; Nij Tontisirin; Ran Tao
Year of publication: 2018
Pages: pp 94 - 104
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] Birmanie
[Termes IGN] Chine
[Termes IGN] croissance des arbres
[Termes IGN] dendrochronologie
[Termes IGN] Hevea brasiliensis
[Termes IGN] image Landsat
[Termes IGN] inventaire forestier (techniques et méthodes)
[Termes IGN] Laos
[Termes IGN] modèle de croissance végétale
[Termes IGN] Normalized Difference Vegetation Index
[Termes IGN] plantation forestière
[Termes IGN] série temporelle
Abstract: (author) Rubber (Hevea brasiliensis) plantations are a rapidly increasing source of land cover change in mainland Southeast Asia. Stand age of rubber plantations obtained at fine scales provides essential baseline data, informing the pace of industrial and smallholder agricultural activities in response to changing global rubber markets and local political and socioeconomic dynamics. In this study, we developed an integrated pixel- and object-based tree growth model using Landsat annual time series to estimate the age of rubber plantations in a 21,115 km2 tri-border region along the junction of China, Myanmar and Laos. We produced a rubber stand age map at 30 m resolution, with an accuracy of 87.00% for identifying rubber plantations and an average error of 1.53 years in age estimation. The integration of pixel- and object-based image analysis showed superior performance in building NDVI yearly time series that reduced spectral noise from background soil and vegetation in open-canopy, young rubber stands. The model parameters remained relatively stable during model sensitivity analysis, resulting in accurate age estimation robust to outliers. Compared to the typically weak statistical relationship between single-date spectral signatures and rubber tree age, Landsat image time series analysis coupled with tree growth modeling presents a viable alternative for fine-scale age estimation of rubber plantations.
Record number: A2018-399
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.07.003
Published online: 13/08/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.07.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90828
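The core idea the abstract relies on — reading stand age off an annual NDVI time series — can be illustrated minimally: clearing shows up as an NDVI minimum, and age is the number of years since that establishment point. This is a simplified sketch under that assumption, not the paper's pixel- and object-based growth model; the series below is synthetic.

```python
# Minimal sketch: estimate stand age from an annual NDVI time series by
# locating the post-clearing NDVI minimum (establishment year), then counting
# years up to the mapping year. Synthetic data, not the paper's model.
import numpy as np

def stand_age(ndvi_series, years, mapping_year):
    """Age = mapping_year minus the year of the NDVI minimum (clearing/planting)."""
    establishment = years[int(np.argmin(ndvi_series))]
    return mapping_year - establishment

years = np.arange(2000, 2016)
ndvi = np.concatenate([
    np.full(6, 0.80),                                    # mature cover, 2000-2005
    [0.25],                                              # clearing, 2006
    0.25 + 0.55 * (1 - np.exp(-0.4 * np.arange(1, 10)))  # rubber regrowth, 2007-2015
])
age = stand_age(ndvi, years, mapping_year=2015)  # 9 years old in 2015
```

The paper fits an actual tree growth model to such series, which is what makes the estimate robust to noisy individual years; the minimum-finding here is only the intuition.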
in ISPRS Journal of photogrammetry and remote sensing > vol 144 (October 2018) . - pp 94 - 104
Copies (3)
Barcode 081-2018101 | Call number RAB | Type Revue | Location Centre de documentation | Section En réserve L003 | Availability Disponible
Barcode 081-2018103 | Call number DEP-EXM | Type Revue | Location LASTIG | Section Dépôt en unité | Availability Exclu du prêt
Barcode 081-2018102 | Call number DEP-EAF | Type Revue | Location Nancy | Section Dépôt en unité | Availability Exclu du prêt

Assessment of Nigeriasat-1 satellite data for urban land use/land cover analysis using object-based image analysis in Abuja, Nigeria / Christopher Ifechukwude Chima in Geocarto international, vol 33 n° 9 (September 2018)
[article]
Title: Assessment of Nigeriasat-1 satellite data for urban land use/land cover analysis using object-based image analysis in Abuja, Nigeria
Document type: Article/Communication
Authors: Christopher Ifechukwude Chima; Nigel Trodd; Matthew Blackett
Year of publication: 2018
Pages: pp 893 - 911
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse comparative
[Termes IGN] analyse d'image orientée objet
[Termes IGN] classification par maximum de vraisemblance
[Termes IGN] image Landsat-ETM+
[Termes IGN] image NigeriaSat
[Termes IGN] image SPOT 5
[Termes IGN] image SPOT-HRG
[Termes IGN] occupation du sol
Abstract: (author) This study assesses the usefulness of Nigeriasat-1 satellite data for urban land cover analysis by comparing it with Landsat and SPOT data. The data-sets for Abuja were classified with pixel- and object-based methods. While the pixel-based method was classified with the spectral properties of the images, the object-based approach included an extra layer of land use cadastre data. The classification accuracy results for OBIA show that Landsat 7 ETM, Nigeriasat-1 SLIM and SPOT 5 HRG had overall accuracies of 92, 89 and 96%, respectively, while the pixel-based classification accuracies were 88% for Landsat 7 ETM, 63% for Nigeriasat-1 SLIM and 89% for SPOT 5 HRG. The results indicate that, given the right classification tools, the analysis of Nigeriasat-1 data can be compared with Landsat and SPOT data, which are widely used for urban land use and land cover analysis.
Record number: A2018-336
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2017.1316778
Published online: 08/05/2017
Online: https://doi.org/10.1080/10106049.2017.1316778
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90550
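The overall-accuracy percentages quoted in this abstract are the standard confusion-matrix metric. As a quick reference (not the study's code; the counts below are hypothetical):

```python
# Overall accuracy = correctly classified samples / all samples,
# i.e. the trace of the confusion matrix over its total.
import numpy as np

def overall_accuracy(confusion):
    """confusion[i, j] = samples of true class i assigned to class j."""
    return np.trace(confusion) / confusion.sum()

cm = np.array([[48, 2],
               [3, 47]])        # hypothetical 2-class validation counts
acc = overall_accuracy(cm)      # (48 + 47) / 100 = 0.95
```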
in Geocarto international > vol 33 n° 9 (September 2018) . - pp 893 - 911

Augmented reality meets computer vision : efficient data generation for urban driving scenes / Hassan Abu Alhaija in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Title: Augmented reality meets computer vision : efficient data generation for urban driving scenes
Document type: Article/Communication
Authors: Hassan Abu Alhaija; Siva Karthik Mustikovela; Lars Mescheder; Andreas Geiger; Carsten Rother
Year of publication: 2018
Pages: pp 961 - 972
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage automatique
[Termes IGN] détection d'objet
[Termes IGN] réalité augmentée
[Termes IGN] scène urbaine
[Termes IGN] vision par ordinateur
Abstract: (author) The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance and a large number of complex object arrangements. Through an extensive set of experiments, we identify the set of parameters that produces augmented data which can maximally enhance the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.
Record number: A2018-417
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1070-x
Published online: 07/03/2018
Online: https://doi.org/10.1007/s11263-018-1070-x
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90900
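The compositing operation at the heart of this augmentation pipeline — blending a rendered object over a real photograph using its silhouette — reduces to per-pixel alpha blending. A minimal sketch, not the paper's rendering pipeline (which also handles lighting and reflections via environment maps); arrays here are toy stand-ins:

```python
# Minimal alpha-compositing sketch: place a rendered virtual object (with a
# per-pixel alpha mask) over a real background image.
import numpy as np

def composite(background, render_rgb, alpha):
    """Blend render_rgb over background; alpha in [0, 1], shape (H, W)."""
    a = alpha[..., None]  # broadcast alpha over the colour channels
    return (a * render_rgb + (1.0 - a) * background).astype(background.dtype)

bg = np.full((4, 4, 3), 100, dtype=np.uint8)    # real background (toy)
car = np.full((4, 4, 3), 200, dtype=np.uint8)   # rendered object colour (toy)
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0   # object silhouette
out = composite(bg, car, mask)                  # object pixels 200, rest 100
```

The paper's contribution is largely about where and how realistically such objects are placed; the blend itself is this simple.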
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 961 - 972

Adaptive correlation filters with long-term and short-term memory for object tracking / Chao Ma in International journal of computer vision, vol 126 n° 8 (August 2018)
[article]
Title: Adaptive correlation filters with long-term and short-term memory for object tracking
Document type: Article/Communication
Authors: Chao Ma; Jia-Bin Huang; Xiaokang Yang; Ming-Hsuan Yang
Year of publication: 2018
Pages: pp 771 - 796
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] détection d'objet
[Termes IGN] filtre adaptatif
[Termes IGN] méthode fondée sur le noyau
[Termes IGN] méthode robuste
[Termes IGN] poursuite de cible
Abstract: (author) Object tracking is challenging as target objects often undergo drastic appearance changes over time. Recently, adaptive correlation filters have been successfully applied to object tracking. However, tracking algorithms relying on highly adaptive correlation filters are prone to drift due to noisy updates. Moreover, as these algorithms do not maintain long-term memory of target appearance, they cannot recover from tracking failures caused by heavy occlusion or target disappearance in the camera view. In this paper, we propose to learn multiple adaptive correlation filters with both long-term and short-term memory of target appearance for robust object tracking. First, we learn a kernelized correlation filter with an aggressive learning rate for locating target objects precisely. We take into account the appropriate size of surrounding context and the feature representations. Second, we learn a correlation filter over a feature pyramid centered at the estimated target position for predicting scale changes. Third, we learn a complementary correlation filter with a conservative learning rate to maintain long-term memory of target appearance. We use the output responses of this long-term filter to determine if tracking failure occurs. In the case of tracking failures, we apply an incrementally learned detector to recover the target position in a sliding window fashion. Extensive experimental results on large-scale benchmark datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods in terms of efficiency, accuracy, and robustness.
Record number: A2018-414
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1076-4
Published online: 16/03/2018
Online: https://doi.org/10.1007/s11263-018-1076-4
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90897
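The basic correlation-filter step underlying trackers like this one is a Fourier-domain cross-correlation between a learned template and a search patch, with the response peak giving the target location and the peak height serving as a confidence cue (the role the abstract assigns to the long-term filter's output responses). A simplified sketch on toy data, not the authors' kernelized multi-filter tracker:

```python
# Sketch of the correlation-filter localisation step: circular
# cross-correlation via FFT, peak location = target shift, peak height
# = a crude tracking-confidence check. Toy data, not the paper's tracker.
import numpy as np

def correlate(template, patch):
    """Circular cross-correlation of patch with template (same shape)."""
    F = np.fft.fft2(patch) * np.conj(np.fft.fft2(template))
    return np.real(np.fft.ifft2(F))

template = np.zeros((32, 32)); template[0:4, 0:4] = 1.0  # target at origin
patch = np.zeros((32, 32)); patch[10:14, 20:24] = 1.0    # target shifted in the patch

resp = correlate(template, patch)
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)   # estimated shift (10, 20)
tracking_ok = resp.max() > 0.5 * template.sum()          # confidence threshold (assumed value)
```

Real trackers in this family learn the filter with a regularized regression in the Fourier domain and update it over time at the short- and long-term rates the abstract describes; this sketch shows only the localisation and confidence mechanics.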
in International journal of computer vision > vol 126 n° 8 (August 2018) . - pp 771 - 796

Further records in this category:
- Detecting newly grown tree leaves from unmanned-aerial-vehicle images using hyperspectral target detection techniques / Chinsu Lin in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
- Label propagation with ensemble of pairwise geometric relations : towards robust large-scale retrieval of object instances / Xiaomeng Wu in International journal of computer vision, vol 126 n° 7 (July 2018)
- A light and faster regional convolutional neural network for object detection in optical remote sensing images / Peng Ding in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
- Predicting foreground object ambiguity and efficiently crowdsourcing the segmentation(s) / Danna Gurari in International journal of computer vision, vol 126 n° 7 (July 2018)
- A review of accuracy assessment for object-based image analysis: from per pixel to per-polygon approaches [review article] / Su Ye in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
- Application of deep learning for object detection / Ajeet Ram Pathak in Procedia Computer Science, vol 132 (2018)
- An object-based approach for mapping forest structural types based on low-density LiDAR and multispectral imagery / Luis Angel Ruiz in Geocarto international, vol 33 n° 5 (May 2018)
- Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification / Tao Liu in ISPRS Journal of photogrammetry and remote sensing, vol 139 (May 2018)
- Generic rule-sets for automated detection of urban tree species from very high-resolution satellite data / Razieh Shojanoori in Geocarto international, vol 33 n° 4 (April 2018)
- Active learning-based optimized training library generation for object-oriented image classification / Rajeswari Balasubramaniam in IEEE Transactions on geoscience and remote sensing, vol 56 n° 1 (January 2018)