Descriptor
IGN terms > natural sciences > physics > image processing > digital image analysis > object-based image analysis
Documents available in this category (294)
Semantic signatures for large-scale visual localization / Li Weng in Multimedia tools and applications, vol 80 n° 15 (June 2021)
[article]
Title: Semantic signatures for large-scale visual localization
Document type: Article/Communication
Authors: Li Weng; Valérie Gouet-Brunet; Bahman Soheilian
Publication year: 2021
Projects: THINGS2D0 / Gouet-Brunet, Valérie
Pages: pp 22347 - 22372
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] semantic matching
[IGN terms] feature extraction
[IGN terms] digital image
[IGN terms] semantic information
[IGN terms] content-based image retrieval
[IGN terms] semantic segmentation
[IGN terms] urban area
Abstract: (authors) Visual localization is a useful alternative to standard localization techniques. It works by utilizing cameras. In a typical scenario, features are extracted from captured images and compared with geo-referenced databases. Location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features. These approaches offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called “semantic signature” is proposed to summarize this information. A semantic signature consists of type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval. They illustrate different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications. This paper is an extended version of a conference paper at CBMI'18. A more efficient retrieval protocol is presented with additional experimental results.
Record number: A2021-787
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: ArXiv version
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11042-020-08992-6
Online publication date: 07/05/2020
Online: https://doi.org/10.1007/s11042-020-08992-6
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95407
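The abstract describes a signature as the type and angle information of objects visible from a location. As a toy illustration (not the paper's exact formulation: the angle-ordered encoding, the type codes, and the edit-distance metric below are all assumptions), a signature can be treated as an angle-ordered sequence of type codes and compared by edit distance:

```python
from itertools import product

# Hypothetical encoding: a "semantic signature" is the sequence of object
# types (e.g. tree, sign, lamp) visible from a location, ordered by their
# bearing angle.  Both the encoding and the metric are illustrative
# assumptions, not the scheme defined in the paper.
def make_signature(objects):
    """objects: iterable of (type_code, bearing_deg) pairs."""
    return tuple(t for t, a in sorted(objects, key=lambda p: p[1]))

def signature_distance(s1, s2):
    """Levenshtein edit distance between two type sequences."""
    m, n = len(s1), len(s2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i, j in product(range(1, m + 1), range(1, n + 1)):
        cost = 0 if s1[i - 1] == s2[j - 1] else 1
        d[i][j] = min(d[i - 1][j] + 1,      # deletion
                      d[i][j - 1] + 1,      # insertion
                      d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

query = make_signature([("tree", 10.0), ("sign", 95.0), ("lamp", 200.0)])
ref   = make_signature([("tree", 12.0), ("lamp", 198.0)])
print(signature_distance(query, ref))  # one missing "sign" -> 1
```

Ordering by bearing keeps the comparison tolerant to small angular noise while staying sensitive to missing or extra objects, one plausible point on the accuracy/complexity trade-off the abstract mentions.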
in Multimedia tools and applications > vol 80 n° 15 (June 2021) . - pp 22347 - 22372 [article]

Structure-aware completion of photogrammetric meshes in urban road environment / Qing Zhu in ISPRS Journal of photogrammetry and remote sensing, vol 175 (May 2021)
[article]
Title: Structure-aware completion of photogrammetric meshes in urban road environment
Document type: Article/Communication
Authors: Qing Zhu; Qisen Shang; Han Hu; et al.
Publication year: 2021
Pages: pp 56 - 70
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] occluded-part detection
[IGN terms] urban space
[IGN terms] oblique aerial image
[IGN terms] mesh
[IGN terms] 3D modelling
[IGN terms] road reconstruction
[IGN terms] road network
[IGN terms] image texture
[IGN terms] motor vehicle
Abstract: (authors) Photogrammetric mesh models obtained from aerial oblique images have been widely used for urban reconstruction. However, photogrammetric meshes suffer from severe texture problems, particularly in typical road areas, owing to occlusion. This paper proposes a structure-aware completion approach to improve mesh quality by seamlessly removing undesired vehicles. Specifically, a discontinuous texture atlas is first integrated into a continuous screen space by rendering through a graphics pipeline. The rendering also records the mapping needed to deintegrate the result back to the original texture atlas after editing. Vehicle regions are masked by a standard object detection approach, namely Faster R-CNN. Subsequently, the masked regions are completed, guided by the linear structures and regularities in the road region; this is implemented based on PatchMatch. Finally, the completed rendered image is deintegrated to the original texture atlas, and the triangles for the vehicles are also flattened so that improved meshes can be obtained. Experimental evaluation and analysis are conducted on three datasets, which were captured with different sensors and ground sample distances. The results demonstrate that the proposed method can produce quite realistic meshes after removing the vehicles. The structure-aware completion approach for road regions outperforms popular image completion methods, and an ablation study further confirms the effectiveness of the linear guidance. It should be noted that the proposed method can also handle tiled mesh models for large-scale scenes. Code and datasets are available at the project website.
Record number: A2021-263
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.02.010
Online publication date: 11/03/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.02.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97312
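The pipeline in this abstract (detect vehicles, mask them, complete the masked region guided by the road's linear structures) relies on Faster R-CNN and PatchMatch, which cannot be reproduced in a few lines. A toy stand-in for the completion step, assuming a single 1-D row of pixel intensities and using linear interpolation as the "linear guidance":

```python
# Masked (vehicle) pixels are filled by a linear ramp between the surrounding
# road pixels, mimicking the idea of guiding completion with the road's
# linear structure.  Only the mask-then-fill control flow is shown; the
# actual method uses Faster R-CNN masks and PatchMatch completion.
def complete_row(values, mask):
    """values: pixel intensities; mask[i] is True where a vehicle was removed."""
    out = list(values)
    n = len(out)
    i = 0
    while i < n:
        if not mask[i]:
            i += 1
            continue
        j = i                      # [i, j) is a contiguous masked run
        while j < n and mask[j]:
            j += 1
        left = out[i - 1] if i > 0 else (out[j] if j < n else None)
        right = out[j] if j < n else left
        if left is None:           # everything masked: nothing to copy from
            return out
        for k in range(i, j):      # linear ramp from left to right
            t = (k - i + 1) / (j - i + 1)
            out[k] = left + t * (right - left)
        i = j
    return out

row = [10, 0, 0, 0, 50]            # zeros mark pixels hidden by a vehicle
print(complete_row(row, [False, True, True, True, False]))  # -> [10, 20.0, 30.0, 40.0, 50]
```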
in ISPRS Journal of photogrammetry and remote sensing > vol 175 (May 2021) . - pp 56 - 70 [article]

Copies (3):
Barcode | Call number | Medium | Location | Section | Availability
081-2021051 | SL | Journal | Centre de documentation | Reading-room journals | Available
081-2021052 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
081-2021053 | DEP-RECP | Journal | Saint-Mandé | Unit deposit | Not for loan

The delineation of tea gardens from high resolution digital orthoimages using mean-shift and supervised machine learning methods / Akhtar Jamil in Geocarto international, vol 36 n° 7 (15/04/2021)
[article]
Title: The delineation of tea gardens from high resolution digital orthoimages using mean-shift and supervised machine learning methods
Document type: Article/Communication
Authors: Akhtar Jamil; Bulent Bayram
Publication year: 2021
Pages: pp 758 - 772
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] mean-shift algorithm
[IGN terms] object-based image analysis
[IGN terms] machine learning
[IGN terms] decision tree
[IGN terms] Camellia sinensis
[IGN terms] supervised classification
[IGN terms] random forest classification
[IGN terms] neural network classification
[IGN terms] support vector machine classification
[IGN terms] farm
[IGN terms] vegetation extraction
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] orthoimage
[IGN terms] hierarchical segmentation
[IGN terms] Turkey
Abstract: (authors) Rize district is an important tea production site in Turkey, known for its high quality tea. Determining temporal changes is crucial for agricultural management and the protection of tea areas. In addition, delineating tea gardens by photogrammetric evaluation of a single orthoimage takes approximately 8 h of labour, which is both costly and time-consuming. To overcome these issues, a method is proposed for the demarcation of tea gardens from high-resolution orthoimages. In this article, hierarchical object-based segmentation using mean-shift (MS) and supervised machine learning (ML) methods is investigated for the delineation of tea gardens. First, the MS algorithm was applied to partition the images into homogeneous segments (objects), and then various spectral, spatial and textural features were extracted from each segment. Finally, the four most widely used supervised ML classifiers, support vector machine (SVM), artificial neural network (ANN), random forest (RF), and decision trees (DT), were selected for classification of objects into tea gardens and other types of trees. Photogrammetrically evaluated tea garden borders were taken as reference data to evaluate the performance of the proposed methods. The experiments showed that all selected supervised classifiers were effective for the delineation of tea gardens from high-resolution images.
Record number: A2021-293
Authors' affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2019.1622597
Online publication date: 19/06/2019
Online: https://doi.org/10.1080/10106049.2019.1622597
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97349
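The first stage of the method described above is mean-shift segmentation. A minimal 1-D mode-seeking sketch with a flat kernel illustrates the core iteration (the paper applies mean-shift to image data in a spectral/spatial feature space; the data and bandwidth below are made up):

```python
# Minimal 1-D mean-shift mode seeking with a flat (top-hat) kernel: each
# point is repeatedly shifted to the mean of its neighbours within
# `bandwidth` until it stabilises at a density mode.
def mean_shift_1d(points, bandwidth, iters=50):
    modes = []
    for x in points:
        for _ in range(iters):
            neigh = [p for p in points if abs(p - x) <= bandwidth]
            new_x = sum(neigh) / len(neigh)
            if abs(new_x - x) < 1e-9:   # converged to a mode
                break
            x = new_x
        modes.append(round(x, 6))
    return modes

data = [1.0, 1.2, 0.8, 8.0, 8.4, 7.6]
print(mean_shift_1d(data, bandwidth=2.0))  # -> [1.0, 1.0, 1.0, 8.0, 8.0, 8.0]
```

Points converging to the same mode form one segment; in the paper's setting, pixels grouped this way become the homogeneous objects from which spectral, spatial and textural features are extracted.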
in Geocarto international > vol 36 n° 7 (15/04/2021) . - pp 758 - 772 [article]

A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Title: A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery
Document type: Article/Communication
Authors: Lucas Prado Osco; Mauro Dos Santos de Arruda; Diogo Nunes Gonçalves; et al.
Publication year: 2021
Pages: pp 1 - 17
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN terms] deep learning
[IGN terms] agricultural map
[IGN terms] Citrus sinensis
[IGN terms] convolutional neural network classification
[IGN terms] counting
[IGN terms] crops
[IGN terms] object detection
[IGN terms] vegetation extraction
[IGN terms] sustainable management
[IGN terms] UAV image
[IGN terms] maize (cereal)
[IGN terms] agricultural yield
Abstract: (authors) Accurately mapping croplands is an important prerequisite for precision farming, since it assists in field management, yield prediction, and environmental management. Crops are sensitive to planting patterns, and some have a limited capacity to compensate for gaps within a row. Optical imaging with sensors mounted on Unmanned Aerial Vehicles (UAV) is a cost-effective option for capturing images covering croplands nowadays. However, visual inspection of such images can be a challenging and biased task, specifically for detecting plants and rows in one step. Thus, developing an architecture capable of simultaneously extracting individual plants and plantation-rows from UAV images is still an important demand to support the management of agricultural systems. In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation-rows while counting their plants, considering highly dense plantation configurations. The experimental setup was evaluated in (a) a cornfield (Zea mays L.) with different growth stages (i.e. recently planted and mature plants) and in (b) a Citrus orchard (Citrus sinensis Pera). The two datasets characterize different plant density scenarios, in different locations, with different types of crops, and from different sensors and dates. This scheme was used to prove the robustness of the proposed approach, allowing a broader discussion of the method. A two-branch architecture was implemented in our CNN method, in which the information obtained within the plantation-row is updated into the plant detection branch and retro-fed to the row branch; both are then refined by a Multi-Stage Refinement method.
In the corn plantation datasets (with both growth phases, young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856 and 0.905, respectively, and an F-measure equal to 0.876. These results were superior to the results from other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated on the same task and dataset. For plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model with a different type of agriculture, we performed the same task on the citrus orchard dataset. It returned an MAE equal to 1.409 citrus-trees per patch, an MRE of 0.0615, precision of 0.922, recall of 0.911, and F-measure of 0.965. For citrus plantation-row detection, our approach resulted in precision, recall, and F-measure scores equal to 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops. The method proposed here may be applied to future decision-making models and could contribute to the sustainable management of agricultural systems.
Record number: A2021-205
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.01.024
Online publication date: 13/02/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.01.024
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97171
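The figures quoted in this abstract (MAE, MRE, precision, recall, F-measure) follow standard definitions; a small sketch of how such numbers are computed, using invented per-patch counts rather than the paper's data:

```python
# Standard count-regression and detection metrics as quoted in the abstract.
# The per-patch counts below are invented for illustration only.
def count_metrics(pred, truth):
    """Mean absolute error and mean relative error over image patches."""
    n = len(pred)
    mae = sum(abs(p - t) for p, t in zip(pred, truth)) / n
    mre = sum(abs(p - t) / t for p, t in zip(pred, truth)) / n
    return mae, mre

def f_measure(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

mae, mre = count_metrics([95, 102], [100, 100])
print(mae)            # 3.5 plants per patch
print(round(mre, 4))  # 0.035
print(round(f_measure(0.856, 0.905), 4))
```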
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 1 - 17 [article]

Copies (3):
Barcode | Call number | Medium | Location | Section | Availability
081-2021041 | SL | Journal | Centre de documentation | Reading-room journals | Available
081-2021043 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2021042 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Title: Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss
Document type: Article/Communication
Authors: Ruoqiao Jiang; Shaohui Mei; Mingyang Ma; et al.
Publication year: 2021
Pages: pp 3326 - 3337
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] training data (machine learning)
[IGN terms] sample
[IGN terms] feature extraction
[IGN terms] very high resolution image
[IGN terms] invariant
[IGN terms] Siamese neural network
[IGN terms] rotation
Abstract: (authors) Rotation-invariant features are of great importance for object detection and image classification in very-high-resolution (VHR) optical remote sensing images. Though the multibranch convolutional neural network (mCNN) has been demonstrated to be very effective for rotation-invariant feature learning, how to effectively train such a network is still an open problem. In this article, a nested Siamese structure (NSS) is proposed for training the mCNN to learn effective rotation-invariant features; it consists of an inner Siamese structure to enhance intraclass cohesion and an outer Siamese structure to enlarge the interclass margin. Moreover, a double center loss (DCL) function, in which training samples from the same class are mapped closer to each other while those from different classes are mapped far away from each other, is proposed to train the NSS even with a small number of training samples. Experimental results over three benchmark data sets demonstrate that the proposed NSS trained with DCL is very effective at handling rotation variations when learning features for image classification and outperforms several state-of-the-art rotation-invariant feature learning algorithms even when only a small number of training samples is available.
Record number: A2021-286
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3021283
Online publication date: 18/07/2020
Online: https://doi.org/10.1109/TGRS.2020.3021283
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97395
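The double center loss described in this abstract combines an intra-class pull toward class centres with an inter-class push between centres. A toy sketch on 2-D features (the hinge form, margin value, and equal weighting below are assumptions for illustration, not the article's precise DCL definition):

```python
# Toy "double center" style loss: an intra-class term pulls each sample
# toward its class centre (cohesion) and an inter-class term penalises
# pairs of centres closer than a margin (separation).
def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def double_center_loss(features, labels, margin=4.0):
    classes = sorted(set(labels))
    centers = {}
    for c in classes:
        pts = [f for f, l in zip(features, labels) if l == c]
        centers[c] = [sum(col) / len(pts) for col in zip(*pts)]
    # intra-class cohesion: squared distance of each sample to its centre
    intra = sum(sq_dist(f, centers[l]) for f, l in zip(features, labels))
    # inter-class separation: hinge on squared distance between centre pairs
    inter = 0.0
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            inter += max(0.0, margin - sq_dist(centers[ci], centers[cj]))
    return intra + inter

feats = [(0, 0), (0, 2), (5, 0), (5, 2)]
print(double_center_loss(feats, [0, 0, 1, 1]))  # -> 4.0 (all intra, centres far apart)
```

Minimising the first term tightens each class around its centre; the second term only activates when two class centres drift within the margin, mirroring the cohesion/margin roles the abstract assigns to the inner and outer Siamese structures.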
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021) . - pp 3326 - 3337 [article]

Further records in this category:
- Assessing land use–land cover change and soil erosion potential using a combined approach through remote sensing, RUSLE and random forest algorithm / Siddhartho Shekhar Paul in Geocarto international, vol 36 n° 4 (01/03/2021)
- Improving the unsupervised mapping of riparian bugweed in commercial forest plantations using hyperspectral data and LiDAR / Kabir Peerbhay in Geocarto international, vol 36 n° 4 (01/03/2021)
- Multi-level progressive parallel attention guided salient object detection for RGB-D images / Zhengyi Liu in The Visual Computer, vol 37 n° 3 (March 2021)
- PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery / Xian Sun in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021)
- An anchor-based graph method for detecting and classifying indoor objects from cluttered 3D point clouds / Fei Su in ISPRS Journal of photogrammetry and remote sensing, vol 172 (February 2021)
- Deep traffic light detection by overlaying synthetic context on arbitrary natural images / Jean Pablo Vieira de Mello in Computers and graphics, vol 94 n° 1 (February 2021)
- Detection of pictorial map objects with convolutional neural networks / Raimund Schnürer in Cartographic journal (the), vol 58 n° 1 (February 2021)
- Semi-supervised joint learning for hand gesture recognition from a single color image / Chi Xu in Sensors, vol 21 n° 3 (February 2021)
- Analyse de la dynamique d’embroussaillement des pelouses calcaires par traitement d’images / Théo Mesure (2021)