Descriptor
IGN terms > natural sciences > physics > image processing > digital image analysis > object-based image analysis > object detection
object detection
Documents available in this category (194)
Reconnaissance automatique d’objets pour le jumeau numérique ferroviaire à partir d’imagerie aérienne / Valentin Desbiolles in XYZ, n° 167 (June 2021)
[article]
Title: Reconnaissance automatique d’objets pour le jumeau numérique ferroviaire à partir d’imagerie aérienne
Document type: Article/Communication
Authors: Valentin Desbiolles, Author
Publication year: 2021
Pages: pp 33-38
General note: Bibliography
Languages: French (fre)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] Autocad Map
[IGN terms] convolutional neural network classification
[IGN terms] computer-aided drafting
[IGN terms] automatic detection
[IGN terms] object detection
[IGN terms] aerial image
[IGN terms] digital twin
[IGN terms] orthoimage
[IGN terms] object recognition
[IGN terms] Hough transform
[IGN terms] railway track
Abstract: (Author) This project presents a study on the automatic insertion, into a CAD plan, of objects needed for the operation of a railway track. These objects are visible on orthophotos acquired by airborne means (drone or helicopter). The solution splits into two main stages: (1) detecting and locating the objects of interest on an orthophoto; (2) inserting them into a CAD plan. This final-year project thus surveys the techniques available for automating the recognition of certain target elements in an image, and concludes with the development of a method for transferring them automatically into a CAD plan.
Record number: A2021-462
Authors' affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueNat
DOI: none
Online publication date: 01/06/2021
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97928
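To make the detection stage above concrete, here is a minimal sketch, assuming OpenCV, of the kind of Hough-transform search for circular track-side objects on an orthophoto that the abstract mentions. This is not the author's code; the file name and all parameter values are illustrative assumptions.

import cv2
import numpy as np

# Load an orthophoto (hypothetical file name) and suppress noise before the Hough step.
ortho = cv2.imread("orthophoto.tif", cv2.IMREAD_GRAYSCALE)
blurred = cv2.medianBlur(ortho, 5)

# Circle Hough transform: each detection is a candidate object to transfer
# later into the CAD plan (after pixel-to-ground georeferencing).
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT,
    dp=1.2,            # inverse ratio of accumulator resolution
    minDist=20,        # minimum spacing between detected centres, in pixels
    param1=100,        # Canny high threshold
    param2=30,         # accumulator threshold: lower finds more (and noisier) circles
    minRadius=5, maxRadius=30,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"candidate object at pixel ({x}, {y}), radius {r} px")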
in XYZ > n° 167 (June 2021) . - pp 33-38
Copies (1):
Barcode | Call no. | Medium | Location | Section | Availability
112-2021021 | RAB | Journal | Documentation centre | In reserve L003 | Available

Structure-aware completion of photogrammetric meshes in urban road environment / Qing Zhu in ISPRS Journal of photogrammetry and remote sensing, vol 175 (May 2021)
[article]
Title: Structure-aware completion of photogrammetric meshes in urban road environment
Document type: Article/Communication
Authors: Qing Zhu, Author; Qisen Shang, Author; Han Hu, Author; et al.
Publication year: 2021
Pages: pp 56-70
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] hidden-part detection
[IGN terms] urban space
[IGN terms] oblique aerial image
[IGN terms] mesh
[IGN terms] 3D modelling
[IGN terms] road reconstruction
[IGN terms] road network
[IGN terms] image texture
[IGN terms] motor vehicle
Abstract: (author) Photogrammetric mesh models obtained from aerial oblique images have been widely used for urban reconstruction. However, photogrammetric meshes suffer from severe texture problems, particularly in typical road areas, owing to occlusion. This paper proposes a structure-aware completion approach to improve mesh quality by seamlessly removing undesired vehicles. Specifically, a discontinuous texture atlas is first integrated into a continuous screen space by rendering through a graphics pipeline. The rendering also records the mapping needed to deintegrate the edited image back to the original texture atlas. Vehicle regions are masked by a standard object detection approach, namely Faster RCNN. Subsequently, the masked regions are completed, guided by the linear structures and regularities in the road region; this is implemented based on PatchMatch. Finally, the completed rendered image is deintegrated to the original texture atlas, and the triangles for the vehicles are also flattened, so that improved meshes can be obtained. Experimental evaluation and analysis are conducted on three datasets, captured with different sensors and ground sample distances. The results demonstrate that the proposed method produces quite realistic meshes after removing the vehicles. The structure-aware completion approach for road regions outperforms popular image completion methods, and an ablation study further confirms the effectiveness of the linear guidance. Notably, the proposed method can also handle tiled mesh models for large-scale scenes. Code and datasets are available at the project website.
Record number: A2021-263
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.02.010
Online publication date: 11/03/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.02.010
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97312
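The pipeline's two key steps, vehicle masking and guided completion, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's code: an off-the-shelf torchvision Faster R-CNN supplies the vehicle masks, and OpenCV's Telea inpainting stands in for the paper's structure-guided PatchMatch completion; the file names and the score threshold are assumptions.

import cv2
import numpy as np
import torch
import torchvision

# Pretrained COCO detector; COCO classes 3/6/8 are car/bus/truck.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

bgr = cv2.imread("rendered_road.png")  # hypothetical screen-space rendering of the texture atlas
rgb = torch.from_numpy(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    det = model([rgb])[0]

# Build a binary mask over detected vehicles.
mask = np.zeros(bgr.shape[:2], dtype=np.uint8)
VEHICLE_LABELS = {3, 6, 8}
for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
    if label.item() in VEHICLE_LABELS and score.item() > 0.7:
        x0, y0, x1, y1 = box.int().tolist()
        mask[y0:y1, x0:x1] = 255

# Fill the masked pixels; the paper instead uses PatchMatch with linear-structure guidance.
completed = cv2.inpaint(bgr, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("completed_road.png", completed)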
in ISPRS Journal of photogrammetry and remote sensing > vol 175 (May 2021) . - pp 56-70
Copies (3):
Barcode | Call no. | Medium | Location | Section | Availability
081-2021051 | SL | Journal | Documentation centre | Journals in reading room | Available
081-2021052 | DEP-RECF | Journal | Nancy | Deposited in unit | Not for loan
081-2021053 | DEP-RECP | Journal | Saint-Mandé | Deposited in unit | Not for loan

A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Title: A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery
Document type: Article/Communication
Authors: Lucas Prado Osco, Author; Mauro Dos Santos de Arruda, Author; Diogo Nunes Gonçalves, Author; et al.
Publication year: 2021
Pages: pp 1-17
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] deep learning
[IGN terms] agricultural map
[IGN terms] Citrus sinensis
[IGN terms] convolutional neural network classification
[IGN terms] counting
[IGN terms] crops
[IGN terms] object detection
[IGN terms] vegetation extraction
[IGN terms] sustainable management
[IGN terms] UAV-captured image
[IGN terms] maize (cereal)
[IGN terms] agricultural yield
Abstract: (author) Accurately mapping croplands is an important prerequisite for precision farming, since it assists in field management, yield prediction, and environmental management. Crops are sensitive to planting patterns, and some have a limited capacity to compensate for gaps within a row. Optical imaging with sensors mounted on Unmanned Aerial Vehicles (UAV) is a cost-effective option for capturing images covering croplands. However, visual inspection of such images can be a challenging and biased task, specifically for detecting plants and rows in one step. Thus, an architecture capable of simultaneously extracting individual plants and plantation-rows from UAV images remains an important need for the management of agricultural systems. In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation-rows while counting their plants, considering highly dense plantation configurations. The experimental setup was evaluated in (a) a cornfield (Zea mays L.) with different growth stages (i.e. recently planted and mature plants) and (b) a citrus orchard (Citrus Sinensis Pera). The two datasets characterize different plant-density scenarios, at different locations, with different types of crops, and from different sensors and dates. This scheme was used to prove the robustness of the proposed approach, allowing a broader discussion of the method. A two-branch architecture was implemented in our CNN method, where information obtained within the plantation-row is updated in the plant detection branch and fed back to the row branch, and both are then refined by a Multi-Stage Refinement method. In the corn plantation datasets (with both growth phases, young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall of 0.856 and 0.905, respectively, and an F-measure of 0.876. These results were superior to those of other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated on the same task and dataset. For plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model with a different type of agriculture, we performed the same task on the citrus orchard dataset. It returned an MAE of 1.409 citrus trees per patch, an MRE of 0.0615, precision of 0.922, recall of 0.911, and an F-measure of 0.965. For citrus plantation-row detection, our approach yielded precision, recall, and F-measure scores of 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops, may be applied in future decision-making models, and could contribute to the sustainable management of agricultural systems.
Record number: A2021-205
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.01.024
Online publication date: 13/02/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.01.024
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97171
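The F-measure figures quoted above follow from precision and recall via the standard harmonic mean. A quick sanity check using the generic formula (not the authors' exact per-patch averaging protocol, which can shift the aggregate slightly):

def f_measure(precision, recall):
    # Harmonic mean of precision and recall (F1).
    return 2 * precision * recall / (precision + recall)

print(f_measure(0.856, 0.905))  # ~0.880, close to the reported 0.876 for corn plant counting
print(f_measure(0.913, 0.941))  # ~0.927, close to the reported 0.925 for corn row detection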
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 1-17
Copies (3):
Barcode | Call no. | Medium | Location | Section | Availability
081-2021041 | SL | Journal | Documentation centre | Journals in reading room | Available
081-2021043 | DEP-RECP | Journal | LASTIG | Deposited in unit | Not for loan
081-2021042 | DEP-RECF | Journal | Nancy | Deposited in unit | Not for loan

Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Title: Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss
Document type: Article/Communication
Authors: Ruoqiao Jiang, Author; Shaohui Mei, Author; Mingyang Ma, Author; et al.
Publication year: 2021
Pages: pp 3326-3337
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] training data (machine learning)
[IGN terms] sample
[IGN terms] feature extraction
[IGN terms] very high resolution image
[IGN terms] invariant
[IGN terms] Siamese neural network
[IGN terms] rotation
Abstract: (author) Rotation-invariant features are of great importance for object detection and image classification in very-high-resolution (VHR) optical remote sensing images. Though the multibranch convolutional neural network (mCNN) has been demonstrated to be very effective for rotation-invariant feature learning, how to train such a network effectively is still an open problem. In this article, a nested Siamese structure (NSS) is proposed for training the mCNN to learn effective rotation-invariant features; it consists of an inner Siamese structure to enhance intraclass cohesion and an outer Siamese structure to enlarge the interclass margin. Moreover, a double center loss (DCL) function, in which training samples from the same class are mapped closer to each other while those from different classes are mapped far apart, is proposed to train the NSS even with a small number of training samples. Experimental results over three benchmark data sets demonstrate that the proposed NSS trained by DCL is very effective at handling rotation variation when learning features for image classification, and that it outperforms several state-of-the-art rotation-invariant feature learning algorithms even when only a small number of training samples is available.
Record number: A2021-286
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3021283
Online publication date: 18/07/2020
Online: https://doi.org/10.1109/TGRS.2020.3021283
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97395
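The intuition behind a double-center-style loss, pulling features toward their own class centre while pushing them at least a margin away from the nearest other centre, can be sketched in PyTorch. This is a generic reconstruction of a centre-based loss, not the authors' exact DCL formulation; the function name and margin value are assumptions.

import torch

def double_center_loss(features, labels, centers, margin=1.0):
    # features: (B, D) embeddings; labels: (B,) class ids; centers: (C, D) learnable class centres.
    own = centers[labels]                                  # each sample's own class centre
    pull = ((features - own) ** 2).sum(dim=1)              # intraclass cohesion term
    d_all = torch.cdist(features, centers)                 # (B, C) distances to every centre
    d_all.scatter_(1, labels.unsqueeze(1), float("inf"))   # exclude the own-class column
    push = torch.clamp(margin - d_all.min(dim=1).values, min=0) ** 2  # interclass margin term
    return (pull + push).mean()

# Toy usage: 8 samples, 64-D features, 10 classes.
feats = torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))
centers = torch.randn(10, 64, requires_grad=True)
loss = double_center_loss(feats, labels, centers)
loss.backward()  # gradients reach the centres (and, in a real setup, the network producing the features)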
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021) . - pp 3326-3337

Improving the unsupervised mapping of riparian bugweed in commercial forest plantations using hyperspectral data and LiDAR / Kabir Peerbhay in Geocarto international, vol 36 n° 4 (01/03/2021)
[article]
Title: Improving the unsupervised mapping of riparian bugweed in commercial forest plantations using hyperspectral data and LiDAR
Document type: Article/Communication
Authors: Kabir Peerbhay, Author; Onisimo Mutanga, Author; Romano Lottering, Author; et al.
Publication year: 2021
Pages: pp 465-480
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] vegetation map
[IGN terms] unsupervised classification
[IGN terms] random forest classification
[IGN terms] object detection
[IGN terms] lidar data
[IGN terms] invasive alien species
[IGN terms] riparian forest
[IGN terms] AISA+ image
[IGN terms] hyperspectral image
[IGN terms] map accuracy
[IGN terms] point set
Abstract: (author) Accurate spatial information on the location of invasive alien plants (IAPs) in riparian environments is critical to a comprehensive weed management regime. This study aimed to automatically map the occurrence of riparian bugweed (Solanum mauritianum) using airborne AISA Eagle hyperspectral data (393 nm-994 nm) in conjunction with LiDAR-derived height. Using an unsupervised random forest (RF) classification approach and Anselin local Moran's I clustering, the results indicate that integrating LiDAR with minimum noise fraction (MNF) components produces the best detection rate (DR) of 88%, the lowest false positive rate (FPR) of 7.14%, and an overall mapping accuracy of 83% for riparian bugweed. In comparison, the original hyperspectral wavebands with and without LiDAR produced lower DRs and higher FPRs, with overall accuracies of 79% and 68% respectively. This research demonstrates the potential of combining spectral information with LiDAR to accurately map IAPs using an automated unsupervised RF anomaly detection framework.
Record number: A2021-163
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2019.1614101
Online publication date: 10/06/2019
Online: https://doi.org/10.1080/10106049.2019.1614101
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97084
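The Anselin local Moran's I clustering step mentioned in the abstract can be sketched with PySAL. This is a minimal illustration on synthetic data, not the authors' workflow; in their framework the per-pixel score would come from unsupervised RF anomaly detection over the hyperspectral/LiDAR features rather than a random generator.

import numpy as np
from libpysal.weights import KNN
from esda.moran import Moran_Local

rng = np.random.default_rng(42)
coords = rng.uniform(0, 100, size=(500, 2))  # stand-in pixel/ground coordinates
score = rng.random(500)                      # stand-in per-pixel anomaly score

w = KNN.from_array(coords, k=8)              # spatial weights from 8 nearest neighbours
lisa = Moran_Local(score, w)                 # local Moran's I with permutation inference

# Significant High-High locations (quadrant 1) flag candidate bugweed clusters.
hotspots = (lisa.q == 1) & (lisa.p_sim < 0.05)
print(f"{hotspots.sum()} significant high-score locations out of {len(score)}")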
in Geocarto international > vol 36 n° 4 (01/03/2021) . - pp 465-480

Other documents in this category:
Multi-level progressive parallel attention guided salient object detection for RGB-D images / Zhengyi Liu in The Visual Computer, vol 37 n° 3 (March 2021)
PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery / Xian Sun in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021)
Deep traffic light detection by overlaying synthetic context on arbitrary natural images / Jean Pablo Vieira de Mello in Computers and graphics, vol 94 n° 1 (February 2021)
Detection of pictorial map objects with convolutional neural networks / Raimund Schnürer in Cartographic journal (the), vol 58 n° 1 (February 2021)
Semi-supervised joint learning for hand gesture recognition from a single color image / Chi Xu in Sensors, vol 21 n° 3 (February 2021)
Automatic object extraction from airborne laser scanning point clouds for digital base map production / Elyta Widyaningrum (2021)
Combining deep learning and mathematical morphology for historical map segmentation / Yizi Chen (2021)
Détection/reconnaissance d'objets urbains à partir de données 3D multicapteurs prises au niveau du sol, en continu / Younes Zegaoui (2021)
Extraction of street pole-like objects based on plane filtering from mobile LiDAR data / Jingming Tu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 1 (January 2021)
Object detection using component-graphs and ConvNets with application to astronomical images / Thanh Xuan Nguyen (2021)
Perception de scène par un système multi-capteurs, application à la navigation dans des environnements d'intérieur structuré / Marwa Chakroun (2021)
Study of an integrated pre-processing architecture for smart-imaging-systems, in the context of lowpower computer vision and embedded object detection / Luis Cubero Montealegre (2021)
Underwater object detection and reconstruction based on active single-pixel imaging and super-resolution convolutional neural network / Mengdi Li in Sensors, vol 21 n° 1 (January 2021)
Understanding the role of individual units in a deep neural network / David Bau in Proceedings of the National Academy of Sciences of the United States of America PNAS, vol 117 n° 48 (1 December 2020)
Bayesian transfer learning for object detection in optical remote sensing images / Changsheng Zhou in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
Application of convolutional and recurrent neural networks for buried threat detection using ground penetrating radar data / Mahdi Moalla in IEEE Transactions on geoscience and remote sensing, vol 58 n° 10 (October 2020)
CSVM architectures for pixel-wise object detection in high-resolution remote sensing images / Youyou Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
Heliport detection using artificial neural networks / Emre Baseski in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 9 (September 2020)
Ship detection in SAR images via local contrast of Fisher vectors / Xueqian Wang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
GeoNat v1.0: A dataset for natural feature mapping with artificial intelligence and supervised learning / Samantha T. Arundel in Transactions in GIS, Vol 24 n° 3 (June 2020)
Photogrammetric determination of 3D crack opening vectors from 3D displacement fields / Frank Liebold in ISPRS Journal of photogrammetry and remote sensing, vol 164 (June 2020)
Traffic signal detection from in-vehicle GPS speed profiles using functional data analysis and machine learning / Yann Méneroux in International Journal of Data Science and Analytics JDSA, vol 10 n° 1 (June 2020)
Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks / Mahmoud Saeedimoghaddam in International journal of geographical information science IJGIS, vol 34 n° 5 (May 2020)
Automated terrain feature identification from remote sensing imagery: a deep learning approach / Wenwen Li in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020)
Geocoding of trees from street addresses and street-level images / Daniel Laumer in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
The application of bidirectional reflectance distribution function data to recognize the spatial heterogeneity of mixed pixels in vegetation remote sensing: a simulation study / Yanan Yan in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 3 (March 2020)
Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving / Edouard Capellier (2020)
Cattle detection and counting in UAV images based on convolutional neural networks / Wen Shao in International Journal of Remote Sensing IJRS, vol 41 n° 1 (01-08 January 2020)
Context-aware convolutional neural network for object detection in VHR remote sensing imagery / Yiping Gong in IEEE Transactions on geoscience and remote sensing, vol 58 n° 1 (January 2020)
Détection et vectorisation automatique d’objets linéaires dans des nuages de points de voirie / Etienne Barçon (2020)
Image processing applications in object detection and graph matching: from Matlab development to GPU framework / Beibei Cui (2020)
Reconnaissance automatique d’objets pour le jumeau numérique ferroviaire à partir d’imagerie aérienne / Valentin Desbiolles (2020)