Descriptor
IGN descriptor terms > natural sciences > physics > image processing > digital image analysis > object-based image analysis > object detection
object detection



A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, Vol 174 (April 2021)
[article]
Title: A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery
Document type: Article/Communication
Authors: Lucas Prado Osco, Author; Mauro Dos Santos de Arruda, Author; Diogo Nunes Gonçalves, Author; et al., Author
Publication year: 2021
Article pages: pp 1 - 17
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN descriptor terms] deep learning
[IGN descriptor terms] agricultural map
[IGN descriptor terms] Citrus sinensis
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] counting
[IGN descriptor terms] crops
[IGN descriptor terms] object detection
[IGN descriptor terms] vegetation extraction
[IGN descriptor terms] sustainable management
[IGN descriptor terms] UAV imagery
[IGN descriptor terms] maize (cereal)
[IGN descriptor terms] agricultural yield
Abstract: (author) Accurately mapping croplands is an important prerequisite for precision farming, since it assists in field management, yield prediction, and environmental management. Crops are sensitive to planting patterns, and some have a limited capacity to compensate for gaps within a row. Optical imaging with sensors mounted on Unmanned Aerial Vehicles (UAV) is a cost-effective option for capturing images covering croplands. However, visual inspection of such images can be a challenging and biased task, specifically for detecting plants and rows in one step. Thus, an architecture capable of simultaneously extracting individual plants and plantation-rows from UAV images is still in demand to support the management of agricultural systems. In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation-rows while counting their plants, considering highly dense plantation configurations. The experimental setup was evaluated in (a) a cornfield (Zea mays L.) with different growth stages (i.e. recently planted and mature plants) and in (b) a citrus orchard (Citrus sinensis Pera). The two datasets characterize different plant-density scenarios, at different locations, with different types of crops, and from different sensors and dates; this scheme was used to prove the robustness of the proposed approach and allows a broader discussion of the method. A two-branch architecture was implemented in our CNN method, where the information obtained within the plantation-row branch is fed into the plant detection branch and fed back to the row branch; both are then refined by a Multi-Stage Refinement method. In the corn plantation datasets (with both growth phases, young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856 and 0.905, respectively, and an F-measure equal to 0.876. These results were superior to those from other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated on the same task and dataset. For plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model with a different type of agriculture, we performed the same task on the citrus orchard dataset. It returned an MAE equal to 1.409 citrus trees per patch, an MRE of 0.0615, a precision of 0.922, a recall of 0.911, and an F-measure of 0.965. For citrus plantation-row detection, our approach resulted in precision, recall, and F-measure scores equal to 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops. The method proposed here may be applied to future decision-making models and could contribute to the sustainable management of agricultural systems.
Record number: A2021-205
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.01.024
Online publication date: 13/02/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.01.024
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97171
in ISPRS Journal of photogrammetry and remote sensing > Vol 174 (April 2021) . - pp 1 - 17
[article]
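The abstract in the record above reports per-patch counting errors (MAE, MRE) and detection scores (precision, recall, F-measure). As a reading aid only, here is a minimal Python sketch of how such metrics are commonly computed; the function names, variables (pred_counts, true_counts, tp, fp, fn), and example numbers are illustrative and do not come from the paper.

```python
# Minimal sketch of the evaluation metrics named in the abstract (not the authors' code).
import numpy as np

def count_errors(pred_counts, true_counts):
    """Mean absolute error and mean relative error of per-patch plant counts."""
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    mae = np.mean(np.abs(pred - true))                           # plants per image patch
    mre = np.mean(np.abs(pred - true) / np.maximum(true, 1.0))   # guard against empty patches
    return mae, mre

def detection_scores(tp, fp, fn):
    """Precision, recall, and F-measure from matched / unmatched detections."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_measure = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f_measure

# Example on three hypothetical image patches and arbitrary detection counts.
print(count_errors([102, 95, 110], [100, 98, 104]))
print(detection_scores(tp=905, fp=152, fn=95))
```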
PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery / Xian Sun in ISPRS Journal of photogrammetry and remote sensing, Vol 173 (March 2021)
[article]
Title: PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery
Document type: Article/Communication
Authors: Xian Sun, Author; Peijin Wang, Author; Cheng Wang, Author; et al., Author
Publication year: 2021
Article pages: pp 50 - 65
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN descriptor terms] contextual analysis
[IGN descriptor terms] deep learning
[IGN descriptor terms] China
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] object detection
[IGN descriptor terms] complex geographic object
[IGN descriptor terms] minimum bounding rectangle
Abstract: (author) In recent years, deep learning-based algorithms have brought great improvements to rigid object detection. In addition to rigid objects, remote sensing images also contain many complex composite objects, such as sewage treatment plants, golf courses, and airports, which have neither a fixed shape nor a fixed size. In this paper, we validate through experiments that the results of existing methods in detecting composite objects are not satisfactory. Therefore, we propose a unified part-based convolutional neural network (PBNet), which is specifically designed for composite object detection in remote sensing imagery. PBNet treats a composite object as a group of parts and incorporates part information into context information to improve composite object detection. Correct part information can guide the prediction of a composite object, thus solving the problems caused by varying shapes and sizes. To generate accurate part information, we design a part localization module to learn the classification and localization of part points using bounding box annotation only. A context refinement module is designed to generate more discriminative features by aggregating local and global context information, which enhances the learning of part information and improves the feature representation. We selected three typical categories of composite objects from a public dataset to conduct experiments verifying the detection performance and generalization ability of our method. Meanwhile, we built a more challenging dataset of a typical kind of complex composite object, namely sewage treatment plants, drawing on relevant information from authorities and experts. This dataset contains sewage treatment plants in seven cities in the Yangtze valley, covering a wide range of regions. Comprehensive experiments on the two datasets show that PBNet surpasses existing detection algorithms and achieves state-of-the-art accuracy.
Record number: A2021-105
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.12.015
Online publication date: 16/01/2021
Online: https://doi.org/10.1016/j.isprsjprs.2020.12.015
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96891
in ISPRS Journal of photogrammetry and remote sensing > Vol 173 (March 2021) . - pp 50 - 65
[article]
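The abstract above names two components: a part localization module that predicts part points from bounding-box supervision only, and a context refinement module that mixes local features with global context. The PyTorch sketch below only illustrates that general idea; it is not the published PBNet architecture, and the class names, layer sizes, and number of parts are assumptions.

```python
# Conceptual sketch (assumed layer sizes, not PBNet itself) of part heatmaps + context refinement.
import torch
import torch.nn as nn

class ContextRefinement(nn.Module):
    """Concatenate local features with a broadcast global-context vector."""
    def __init__(self, channels):
        super().__init__()
        self.global_pool = nn.AdaptiveAvgPool2d(1)                 # global context
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feats):
        g = self.global_pool(feats).expand_as(feats)               # broadcast to every location
        return torch.relu(self.fuse(torch.cat([feats, g], dim=1)))

class PartBasedHead(nn.Module):
    """Part-point heatmaps plus a coarse object-score map over refined features."""
    def __init__(self, channels, num_parts):
        super().__init__()
        self.refine = ContextRefinement(channels)
        self.part_heatmaps = nn.Conv2d(channels, num_parts, kernel_size=1)
        self.obj_score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):
        feats = self.refine(feats)
        return self.part_heatmaps(feats), self.obj_score(feats)

# Example with a dummy backbone feature map (N, C, H, W).
head = PartBasedHead(channels=256, num_parts=4)
parts, score = head(torch.randn(1, 256, 64, 64))
print(parts.shape, score.shape)
```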
Deep traffic light detection by overlaying synthetic context on arbitrary natural images / Jean Pablo Vieira de Mello in Computers and graphics, vol 94 n° 1 (February 2021)
[article]
Title: Deep traffic light detection by overlaying synthetic context on arbitrary natural images
Document type: Article/Communication
Authors: Jean Pablo Vieira de Mello, Author; Lucas Tabelini, Author; Rodrigo F. Berriel, Author
Publication year: 2021
Article pages: pp 76 - 86
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN descriptor terms] object-based image analysis
[IGN descriptor terms] deep learning
[IGN descriptor terms] object detection
[IGN descriptor terms] image sampling
[IGN descriptor terms] traffic light
[IGN descriptor terms] high-resolution image
[IGN descriptor terms] autonomous navigation
[IGN descriptor terms] road signage
[IGN descriptor terms] road traffic
Abstract: (author) Deep neural networks are an effective solution to many problems associated with autonomous driving. By providing real image samples with traffic context to the network, the model learns to detect and classify elements of interest, such as pedestrians, traffic signs, and traffic lights. However, acquiring and annotating real data can be extremely costly in terms of time and effort. In this context, we propose a method to generate artificial traffic-related training data for deep traffic light detectors. This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds that are not related to the traffic domain. Thus, a large amount of training data can be generated without annotation effort. Furthermore, the method also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low number of samples of the yellow state. Experiments show that it is possible to achieve results comparable to those obtained with real training data from the problem domain, yielding an average mAP and an average F1-score that are each nearly 4 p.p. higher than the respective metrics obtained with a real-world reference model.
Record number: A2021-151
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.cag.2020.09.012
Online publication date: 09/10/2020
Online: https://doi.org/10.1016/j.cag.2020.09.012
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97027
in Computers and graphics > vol 94 n° 1 (February 2021) . - pp 76 - 86
[article]
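The core idea in the abstract above is to blend synthetic traffic scenes onto arbitrary backgrounds so that bounding-box annotations come for free. Below is a minimal sketch of that kind of compositing, assuming Pillow; the file names, scale range, and label are placeholders and this is not the authors' generation pipeline.

```python
# Minimal sketch: paste a synthetic RGBA traffic-light sprite onto an arbitrary background
# image at a random position and scale, and keep the resulting bounding box as annotation.
import random
from PIL import Image

def compose_sample(background_path, sprite_path, label):
    bg = Image.open(background_path).convert("RGB")
    sprite = Image.open(sprite_path).convert("RGBA")

    # Random scale and position, kept inside the background.
    scale = random.uniform(0.05, 0.2)
    w = max(1, int(bg.width * scale))
    h = max(1, int(sprite.height * w / sprite.width))
    sprite = sprite.resize((w, h))
    x = random.randint(0, max(0, bg.width - w))
    y = random.randint(0, max(0, bg.height - h))

    bg.paste(sprite, (x, y), sprite)          # alpha channel used as paste mask
    bbox = (x, y, x + w, y + h)               # the annotation comes for free
    return bg, {"label": label, "bbox": bbox}

# Hypothetical file names for illustration only.
image, annotation = compose_sample("background.jpg", "sprite_red.png", label="red")
print(annotation)
```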
Semi-supervised joint learning for hand gesture recognition from a single color image / Chi Xu in Sensors, vol 21 n° 3 (February 2021)
[article]
Title: Semi-supervised joint learning for hand gesture recognition from a single color image
Document type: Article/Communication
Authors: Chi Xu, Author; Yunkai Jiang, Author; Jun Zhou, Author; et al., Author
Publication year: 2021
Article pages: n° 1007
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] semi-supervised learning
[IGN descriptor terms] object detection
[IGN descriptor terms] pose estimation
[IGN descriptor terms] color image
[IGN descriptor terms] dataset
[IGN descriptor terms] gesture recognition
Abstract: (author) Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning based approach which jointly learns an intermediate-level shared feature for these two tasks, so that the hand gesture recognition task can benefit from the hand pose estimation task. In the training process, a semi-supervised training scheme is designed to solve the problem of lacking proper annotation. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that our gesture recognition accuracy is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.
Record number: A2021-160
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/s21031007
Online publication date: 02/02/2021
Online: https://doi.org/10.3390/s21031007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97076
in Sensors > vol 21 n° 3 (February 2021) . - n° 1007
[article]
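The abstract above describes a shared intermediate feature feeding a gesture-classification head and a 3D hand-pose head, trained semi-supervised because pose annotation is missing for part of the data. The PyTorch sketch below shows one common way to express such joint learning by masking the pose loss on unlabeled samples; the network sizes, 21-joint layout, and loss weighting are assumptions, not the authors' model.

```python
# Minimal sketch of joint gesture + pose learning with a masked pose loss (assumed sizes).
import torch
import torch.nn as nn

class JointHandNet(nn.Module):
    def __init__(self, num_gestures=10, num_joints=21):
        super().__init__()
        self.encoder = nn.Sequential(                    # shared intermediate feature
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gesture_head = nn.Linear(64, num_gestures)  # classification branch
        self.pose_head = nn.Linear(64, num_joints * 3)   # 3D pose regression branch

    def forward(self, x):
        feat = self.encoder(x)
        return self.gesture_head(feat), self.pose_head(feat)

def joint_loss(gesture_logits, pose_pred, gesture_gt, pose_gt, has_pose):
    """Gesture loss on every sample; pose loss only where a pose label exists."""
    cls_loss = nn.functional.cross_entropy(gesture_logits, gesture_gt)
    per_sample = ((pose_pred - pose_gt) ** 2).mean(dim=1)          # MSE per sample
    pose_loss = (per_sample * has_pose).sum() / has_pose.sum().clamp(min=1)
    return cls_loss + pose_loss

net = JointHandNet()
logits, pose = net(torch.randn(4, 3, 128, 128))
loss = joint_loss(logits, pose, torch.tensor([0, 1, 2, 3]),
                  torch.zeros(4, 63), torch.tensor([1., 1., 0., 0.]))
print(loss.item())
```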
Combining deep learning and mathematical morphology for historical map segmentation / Yizi Chen (2021)
Title: Combining deep learning and mathematical morphology for historical map segmentation
Document type: Article/Communication
Authors: Yizi Chen, Author; Edwin Carlinet, Author; Joseph Chazalon, Author; Clément Mallet, Author; Bertrand Duménieu, Author; Julien Perret, Author
Publisher: Ithaca [New York - United States]: ArXiv - Cornell University
Publication year: 2021
Projects: SODUCO / Perret, Julien
General note: bibliography; submitted to DGMM 2021
Language: English (eng)
Descriptor: [IGN subject headings] Geomatics
[IGN descriptor terms] diachronic analysis
[IGN descriptor terms] deep learning
[IGN descriptor terms] historical map
[IGN descriptor terms] processing pipeline
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] object detection
[IGN descriptor terms] raster data
[IGN descriptor terms] mathematical morphology
[IGN descriptor terms] vectorization
Abstract: (author) The digitization of historical maps enables the study of ancient, fragile, unique, and hardly accessible information sources. Main map features can be retrieved and tracked through time for subsequent thematic analysis. The goal of this work is the vectorization step, i.e., the extraction of vector shapes of the objects of interest from raster images of maps. We are particularly interested in closed shape detection, such as buildings, building blocks, gardens, rivers, etc., in order to monitor their temporal evolution. Historical map images present significant pattern recognition challenges. The extraction of closed shapes using traditional Mathematical Morphology (MM) is highly challenging due to the overlapping of multiple map features and texts. Moreover, state-of-the-art Convolutional Neural Networks (CNN) are well suited to content image filtering but provide no guarantee about closed shape detection. Also, the lack of textural and color information in historical maps makes it hard for CNNs to detect shapes that are represented only by their boundaries. Our contribution is a pipeline that combines the strengths of CNN (efficient edge detection and filtering) and MM (guaranteed extraction of closed shapes) in order to achieve such a task. The evaluation of our approach on a public dataset shows its effectiveness for extracting the closed boundaries of objects in historical maps.
Record number: P2021-001
Author affiliation: LaSTIG+Ext (2020- )
Other associated URL: to HAL
Theme: GEOMATICS
Nature: Preprint
nature-HAL: Préprint
Online: https://arxiv.org/abs/2101.02144
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96739
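The abstract above combines CNN edge detection and filtering with mathematical morphology to guarantee closed shapes. As an illustration of that general CNN + MM idea only, not the authors' pipeline, the sketch below (assuming scikit-image and SciPy) closes small gaps in an edge-probability map and keeps the enclosed regions; the threshold and structuring-element size are arbitrary illustrative values.

```python
# Minimal sketch: close gaps in a CNN-style edge map with morphology, then label closed regions.
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import binary_closing, disk
from skimage.measure import label, regionprops

def closed_shapes_from_edges(edge_prob, threshold=0.5, gap=2):
    edges = edge_prob > threshold                  # edge probabilities -> binary edge mask
    edges = binary_closing(edges, disk(gap))       # MM: bridge small gaps in boundaries
    filled = binary_fill_holes(edges)              # regions enclosed by edges
    interiors = filled & ~edges                    # drop the boundary pixels themselves
    labels = label(interiors)                      # one label per closed shape
    return [r for r in regionprops(labels) if r.area > 20]

# Example on a synthetic edge map: a square outline with a one-pixel gap that closing repairs.
edge_prob = np.zeros((64, 64))
edge_prob[10, 10:50] = edge_prob[50, 10:50] = 1.0
edge_prob[10:50, 10] = edge_prob[10:51, 50] = 1.0
edge_prob[10, 30] = 0.0                            # the gap
regions = closed_shapes_from_edges(edge_prob)
print(len(regions), regions[0].bbox if regions else None)
```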
Extraction of street pole-like objects based on plane filtering from mobile LiDAR data / Jingming Tu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 1 (January 2021)
Underwater object detection and reconstruction based on active single-pixel imaging and super-resolution convolutional neural network / Mengdi Li in Sensors, vol 21 n° 1 (January 2021)
Bayesian transfer learning for object detection in optical remote sensing images / Changsheng Zhou in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
Application of convolutional and recurrent neural networks for buried threat detection using ground penetrating radar data / Mahdi Moalla in IEEE Transactions on geoscience and remote sensing, vol 58 n° 10 (October 2020)
CSVM architectures for pixel-wise object detection in high-resolution remote sensing images / Youyou Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
Heliport detection using artificial neural networks / Emre Baseski in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 9 (September 2020)
Ship detection in SAR images via local contrast of Fisher vectors / Xueqian Wang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
GeoNat v1.0: A dataset for natural feature mapping with artificial intelligence and supervised learning / Samantha T. Arundel in Transactions in GIS, Vol 24 n° 3 (June 2020)
Traffic signal detection from in-vehicle GPS speed profiles using functional data analysis and machine learning / Yann Méneroux in International Journal of Data Science and Analytics JDSA, vol 10 n° 1 (June 2020)