Descriptor
IGN Terms > natural sciences > physics > image processing > digital image analysis > object-oriented image analysis > object detection
object detection
Documents available in this category (121)
Detecting interchanges in road networks using a graph convolutional network approach / Min Yang in International journal of geographical information science IJGIS, vol 36 n° 6 (June 2022)
[article]
Title : Detecting interchanges in road networks using a graph convolutional network approach
Document type : Article/Communication
Authors : Min Yang ; Chenjun Jiang ; Xiongfeng Yan ; et al.
Publication year : 2022
Pages : pp 1119 - 1139
General note : bibliography
Languages : English (eng)
Descriptor : [IGN subject headings] Spatial analysis
[IGN Terms] cluster analysis
[IGN Terms] vector analysis
[IGN Terms] deep learning
[IGN Terms] convolutional neural network classification
[IGN Terms] semi-supervised classification
[IGN Terms] object detection
[IGN Terms] road interchange
[IGN Terms] feature extraction
[IGN Terms] modelling
[IGN Terms] node
[IGN Terms] Beijing (China)
[IGN Terms] graph neural network
[IGN Terms] road network
[IGN Terms] Wuhan (China)
Abstract : (author) Detecting interchanges in road networks benefits many applications, such as vehicle navigation and map generalization. Traditional approaches use manually defined rules based on geometric properties, topological properties, or both, and thus struggle with structurally complex interchanges. To overcome this drawback, we propose a graph-based deep learning approach for interchange detection. First, we model the road network as a graph in which the nodes represent road segments and the edges represent their connections. The proposed approach computes the shape measures and contextual properties of individual road segments as features characterizing the associated nodes in the graph. Next, a semi-supervised approach uses these features and a limited number of labeled interchanges to train a graph convolutional network that classifies road segments into interchange and non-interchange segments. Finally, an adaptive clustering approach groups the detected interchange segments into interchanges. Our experiments with the road networks of Beijing and Wuhan achieved a classification accuracy above 95% at a label rate of 10%. Moreover, interchange detection precision and recall were 79.6% and 75.7% on the Beijing dataset and 80.6% and 74.8% on the Wuhan dataset, respectively, which were 18.3–36.1% and 17.4–19.4% higher than those of existing approaches based on characteristic node clustering.
Record number : A2022-404
Author affiliation : non-IGN
Theme : GEOMATICS
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1080/13658816.2021.2024195
Online publication date : 11/03/2022
Online : https://doi.org/10.1080/13658816.2021.2024195
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100716
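As a rough illustration of the pipeline this abstract describes (per-segment features propagated through a graph convolution, then classified), here is a minimal one-layer sketch in NumPy; the adjacency matrix, feature values, and weights are hypothetical stand-ins, not the paper's model:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 @ H @ W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy road-segment graph: 4 segments, edges = shared junctions (made up).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))  # shape/context features per segment
W = np.random.default_rng(1).normal(size=(3, 2))  # 2 outputs: interchange / non-interchange
scores = gcn_layer(A, H, W)
pred = scores.argmax(axis=1)                      # per-segment class index
```

A real model would stack several such layers and train W with a semi-supervised loss over the few labeled nodes.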
in International journal of geographical information science IJGIS > vol 36 n° 6 (June 2022) . - pp 1119 - 1139
Invariant structure representation for remote sensing object detection based on graph modeling / Zicong Zhu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
[article]
Title : Invariant structure representation for remote sensing object detection based on graph modeling
Document type : Article/Communication
Authors : Zicong Zhu ; Xian Sun ; Wenhui Diao ; et al.
Publication year : 2022
Pages : n° 5625217
General note : bibliography
Languages : English (eng)
Descriptor : [IGN subject headings] Optical image processing
[IGN Terms] convolutional neural network classification
[IGN Terms] object detection
[IGN Terms] unlabelled training data
[IGN Terms] feature extraction
[IGN Terms] digital image filtering
[IGN Terms] image granularity
[IGN Terms] graph
[IGN Terms] invariant
Abstract : (author) Due to the characteristics of vertical orthophoto imaging, the apparent structural features of objects in remote sensing (RS) images are relatively stable, such as the cross-shaped structure of aircraft and the rectangular structure of vehicles. Compared with traditional visual features, using these features helps improve the accuracy of object detection. However, there are few studies on such characteristics. In this article, we systematically study the invariant structural features of remote sensing objects and propose a graph focusing aggregation network (GFA-Net) to represent them. Because traditional convolutional neural networks (CNNs) are sensitive to changes in rotation, scale, and other factors, which makes it difficult to extract structural features, we propose the graph focusing process (GFP) based on the idea of graph convolution. Analysis and experiments show that a graph structure has significant advantages over the Euclidean feature space of a CNN in expressing such structural features. To realize efficient end-to-end training of the above model, we design a graph aggregation network (GAN) to update node weights. We verify the effectiveness of our method on the proposed multitask aircraft component segmentation dataset (ACSD) and the large-scale Fine-grAined object recognItion in high-Resolution RS imagery (FAIR1M) dataset. Experiments conducted on the object detection datasets DOTA (large-scale Dataset for Object deTection in Aerial images) and HRSC2016 prove that the proposed method is superior to the current state-of-the-art (SOTA) methods.
Record number : A2022-560
Author affiliation : non-IGN
Theme : IMAGERY
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2022.3181686
Online publication date : 09/06/2022
Online : https://doi.org/10.1109/TGRS.2022.3181686
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101186
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 6 (June 2022) . - n° 5625217
Large-scale automatic identification of urban vacant land using semantic segmentation of high-resolution remote sensing images / Lingdong Mao in Landscape and Urban Planning, vol 222 (June 2022)
[article]
Title : Large-scale automatic identification of urban vacant land using semantic segmentation of high-resolution remote sensing images
Document type : Article/Communication
Authors : Lingdong Mao ; Zhe Zheng ; Xiangfeng Meng ; et al.
Publication year : 2022
Pages : n° 104384
General note : bibliography
Languages : English (eng)
Descriptor : [IGN subject headings] Optical image processing
[IGN Terms] deep learning
[IGN Terms] China
[IGN Terms] object detection
[IGN Terms] large scale
[IGN Terms] automatic identification
[IGN Terms] high-resolution image
[IGN Terms] urban environment
[IGN Terms] land cover
[IGN Terms] image segmentation
[IGN Terms] semantic segmentation
Abstract : (author) Urban vacant land is a growing issue worldwide. However, most existing research on urban vacant land has focused on small-scale city areas, while few studies have addressed large-scale national areas. Large-scale identification of urban vacant land is hindered by the high cost and high variability of the conventional manual identification method. Criteria inconsistency in cross-domain identification is also a major challenge. To address these problems, we propose a large-scale automatic identification framework for urban vacant land based on semantic segmentation of high-resolution remote sensing images, and select 36 major cities in China as study areas. The framework uses deep learning techniques to achieve automatic identification and introduces a city stratification method to address the challenge of identification criteria inconsistency. The results of the case study on 36 major Chinese cities support two major conclusions. First, the proposed vacant land identification framework can achieve over 90 percent of the accuracy of professional auditors, with much higher result stability and approximately 15 times higher efficiency than the manual identification method. Second, the framework is robust and maintains high performance across various cities. With these advantages, the proposed framework provides a practical approach to large-scale vacant land identification in countries and regions worldwide, which is of great significance for research on urban vacant land and future urban development.
Record number : A2022-267
Author affiliation : non-IGN
Theme : IMAGERY
Nature : Article
DOI : 10.1016/j.landurbplan.2022.104384
Online publication date : 03/03/2022
Online : https://doi.org/10.1016/j.landurbplan.2022.104384
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100275
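The core operation this record's framework relies on, per-pixel semantic segmentation, reduces at inference time to an argmax over class score maps. A minimal NumPy sketch, where the score values and the two classes (vacant / non-vacant) are invented for illustration and are not the paper's model:

```python
import numpy as np

def segment(scores):
    """Per-pixel class assignment from a (classes, H, W) score volume."""
    return scores.argmax(axis=0)

# Hypothetical 2-class score maps for a 2x3 image patch.
scores = np.array([
    [[0.9, 0.2, 0.4],
     [0.1, 0.8, 0.3]],   # class 0: non-vacant
    [[0.1, 0.8, 0.6],
     [0.9, 0.2, 0.7]],   # class 1: vacant
])
mask = segment(scores)              # (2, 3) label map
vacant_fraction = (mask == 1).mean()  # share of pixels labeled vacant
```

In the paper's setting, the score volume would come from a trained segmentation network run over high-resolution imagery, and the label map would then be aggregated into vacant-land parcels.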
in Landscape and Urban Planning > vol 222 (June 2022) . - n° 104384
Line-based deep learning method for tree branch detection from digital images / Rodrigo L. S. Silva in International journal of applied Earth observation and geoinformation, vol 110 (June 2022)
[article]
Title : Line-based deep learning method for tree branch detection from digital images
Document type : Article/Communication
Authors : Rodrigo L. S. Silva ; José Marcato Junior ; Laisa Almeida ; et al.
Publication year : 2022
Pages : n° 102759
General note : bibliography
Languages : English (eng)
Descriptor : [IGN subject headings] Optical image processing
[IGN Terms] deep learning
[IGN Terms] branch (tree)
[IGN Terms] convolutional neural network classification
[IGN Terms] object detection
[IGN Terms] qualitative data
[IGN Terms] quantitative estimation
[IGN Terms] high-resolution image
[IGN Terms] line (geometry)
[IGN Terms] Hough transform
Abstract : (author) Preventive maintenance of power lines, including cutting and pruning of tree branches, is essential to avoid interruptions in the energy supply. Automatic methods can support this risky, time-consuming task. Here, we propose a method that estimates the orientation and grasping positions of tree branches. The proposed method first predicts the straight line representing the tree branch extension using a convolutional neural network (CNN). Second, a Hough transform is applied to estimate the direction and position of the line. Finally, we estimate the grip point as the pixel with the highest probability of belonging to the line. We generated a dataset from internet searches and annotated 1868 images covering challenging scenarios with different tree branch shapes, capture devices, and environmental conditions. Ten-fold cross-validation was adopted, with 90% for training and 10% for testing. We also assessed the method under corruptions (Gaussian and shot noise) at different severity levels. The experimental analysis showed the effectiveness of the proposed method, which reports an F1-score of 96.78%. Our method outperformed the state-of-the-art Deep Hough Transform (DHT) and Fully Convolutional Line Parsing (F-Clip).
Record number : A2022-550
Author affiliation : non-IGN
Theme : FOREST/IMAGERY
Nature : Article
DOI : 10.1016/j.jag.2022.102759
Online publication date : 09/05/2022
Online : https://doi.org/10.1016/j.jag.2022.102759
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101153
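The Hough transform step this abstract mentions recovers a line's direction and position by letting candidate pixels vote in (theta, rho) parameter space, where each line is rho = x·cos(theta) + y·sin(theta). A minimal voting sketch in NumPy, with made-up "branch" pixels rather than CNN output:

```python
import numpy as np

def hough_line(points, img_diag, n_theta=180):
    """Vote in (theta, rho) space; return the dominant line's (rho, theta_deg)."""
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * img_diag + 1, n_theta), dtype=int)  # rho offset by img_diag
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + img_diag, np.arange(n_theta)] += 1        # one vote per theta
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - img_diag, float(np.rad2deg(thetas[theta_idx]))

# Hypothetical branch pixels along the vertical line x = 5.
points = [(5, y) for y in range(40)]
rho, theta = hough_line(points, img_diag=50)
# Dominant cell corresponds to rho = 5, theta = 0 degrees (the line x = 5).
```

The paper uses this only to regularize a line already predicted by a CNN; the grip point is then chosen along the detected line.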
in International journal of applied Earth observation and geoinformation > vol 110 (June 2022) . - n° 102759
Unsupervised multi-view CNN for salient view selection and 3D interest point detection / Ran Song in International journal of computer vision, vol 130 n° 5 (May 2022)
[article]
Title : Unsupervised multi-view CNN for salient view selection and 3D interest point detection
Document type : Article/Communication
Authors : Ran Song ; Wei Zhang ; Yitian Zhao ; et al.
Publication year : 2022
Pages : pp 1210 - 1227
General note : bibliography
Languages : English (eng)
Descriptor : [IGN subject headings] Optical image processing
[IGN Terms] deep learning
[IGN Terms] unsupervised classification
[IGN Terms] convolutional neural network classification
[IGN Terms] object detection
[IGN Terms] 3D object
[IGN Terms] interest point
[IGN Terms] saliency
Abstract : (author) We present an unsupervised 3D deep learning framework based on a ubiquitously true proposition we call view-object consistency: a 3D object and its projected 2D views always belong to the same object class. To validate its effectiveness, we design a multi-view CNN instantiating it for salient view selection and interest point detection of 3D objects, tasks that essentially cannot be handled by supervised learning due to the difficulty of collecting sufficient and consistent training data. Our unsupervised multi-view CNN, namely UMVCNN, branches into two channels which encode the knowledge within each 2D view and the 3D object respectively, and exploits both intra-view and inter-view knowledge of the object. It ends with a new loss layer which formulates view-object consistency by impelling the two channels to generate consistent classification outcomes. The UMVCNN is then integrated with a global distinction adjustment scheme to incorporate global cues into salient view selection. We evaluate our method for salient view selection both qualitatively and quantitatively, demonstrating its superiority over several state-of-the-art methods. In addition, we show that our method can select salient views of 3D scenes containing multiple objects. We also develop a method based on the UMVCNN for 3D interest point detection and conduct comparative evaluations on a publicly available benchmark, which shows that the UMVCNN is amenable to different 3D shape understanding tasks.
Record number : A2022-415
Author affiliation : non-IGN
Theme : IMAGERY
Nature : Article
DOI : 10.1007/s11263-022-01592-x
Online publication date : 16/03/2022
Online : https://doi.org/10.1007/s11263-022-01592-x
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100771
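A consistency objective of the kind this abstract describes, pushing the view channel and the object channel toward the same class distribution, can be sketched as a symmetric cross-entropy between the two softmax outputs. The logits below are hypothetical stand-ins, not the paper's UMVCNN loss layer:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def consistency_loss(view_logits, object_logits):
    """Symmetric cross-entropy: low when both channels agree on a class."""
    p, q = softmax(view_logits), softmax(object_logits)
    return -0.5 * (np.sum(p * np.log(q)) + np.sum(q * np.log(p)))

# When both channels favor class 0, the loss is smaller than when they disagree.
agree    = consistency_loss(np.array([4.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0]))
disagree = consistency_loss(np.array([4.0, 0.0, 0.0]), np.array([0.0, 5.0, 0.0]))
```

Minimizing such a term over unlabeled data is what lets the two channels supervise each other without ground-truth class labels.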
in International journal of computer vision > vol 130 n° 5 (May 2022) . - pp 1210 - 1227
Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights / Marco Fiorucci in Remote sensing, vol 14 n° 7 (April-1 2022)
Determination of building flood risk maps from LiDAR mobile mapping data / Yu Feng in Computers, Environment and Urban Systems, vol 93 (April 2022)
Research on machine intelligent perception of urban geographic location based on high resolution remote sensing images / Jun Chen in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 4 (April 2022)
Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
Mapping global flying aircraft activities using Landsat 8 and cloud computing / Fen Zhao in ISPRS Journal of photogrammetry and remote sensing, vol 184 (February 2022)
Object recognition algorithm based on optimized nonlinear activation function-global convolutional neural network / Feng-Ping An in The Visual Computer, vol 38 n° 2 (February 2022)
Automatic extraction of damaged houses by earthquake based on improved YOLOv5: A case study in Yangbi / Yafei Jing in Remote sensing, vol 14 n° 2 (January-2 2022)
Deep learning based 2D and 3D object detection and tracking on monocular video in the context of autonomous vehicles / Zhujun Xu (2022)