Descripteur
Termes IGN > informatique > réseautique > architecture de réseau
architecture de réseau
Synonyme(s) : topologie de réseau ; configuration de réseau
Documents disponibles dans cette catégorie (69)
Extracting built-up land area of airports in China using Sentinel-2 imagery through deep learning / Fanxuan Zeng in Geocarto international, vol 37 n° 25 ([01/12/2022])
[article]
Titre : Extracting built-up land area of airports in China using Sentinel-2 imagery through deep learning
Type de document : Article/Communication
Auteurs : Fanxuan Zeng, Auteur ; Xin Wang, Auteur ; Mengqi Zha, Auteur
Année de publication : 2022
Article en page(s) : pp 7753 - 7773
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] aéroport
[Termes IGN] apprentissage profond
[Termes IGN] architecture de réseau
[Termes IGN] Chine
[Termes IGN] détection du bâti
[Termes IGN] image Sentinel-MSI
Résumé : (auteur) In China, airports have a profound impact on people's lives, and understanding their dimensions has great significance for research and development. However, few existing airport databases contain such details, which can be reflected indirectly by the built-up land of the airport. In this study, a deep learning-based method was used to extract the built-up land of airports in China from Sentinel-2 imagery and to estimate its area. A benchmark generation method is introduced by fusing two reference maps and cropping images into patches. Following this, a series of experiments was conducted to evaluate network architectures and to select the Sentinel-2 bands with a positive impact. A well-trained model was used to extract the built-up land of China's airports, and the relationship between this built-up land and the carrying capacity of air transportation was further analysed. Results show that ResUNet-a outperformed U-Net, ResUNet and SegNet, and that the B2, B4, B6, B11 and B12 bands of Sentinel-2 had a positive impact on built-up land extraction. The trained model reached an overall accuracy of 0.9423 and an F1 score of 0.9041, and the built-up land of 434 airports in China was extracted. The four most developed airports are located in Beijing, Shanghai and Guangzhou, which matches China's political and economic development. The area of built-up land influenced passenger throughput and aircraft movements, the total area influenced cargo throughput, and a certain correlation was found among built-up land, carrying capacity and nighttime light.
Numéro de notice : A2022-929
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.1080/10106049.2021.1983034
Date de publication en ligne : 01/10/2021
En ligne : https://doi.org/10.1080/10106049.2021.1983034
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102662
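As an illustration of the band selection and accuracy figures quoted in this abstract, the following Python sketch (not the authors' code; the band names, array shapes and toy data are assumptions) stacks the positively contributing Sentinel-2 bands into a segmentation input and computes the overall accuracy and F1 score of a binary built-up mask.

# Minimal sketch: stack the Sentinel-2 bands reported as informative (B2, B4, B6, B11, B12)
# into a model input patch, and compute overall accuracy / F1 for a binary built-up mask.
# Toy data and shapes are for illustration only.
import numpy as np

SELECTED_BANDS = ["B2", "B4", "B6", "B11", "B12"]

def stack_bands(bands: dict, order=SELECTED_BANDS) -> np.ndarray:
    """Stack selected single-band arrays (H, W) into a (C, H, W) network input."""
    return np.stack([bands[name].astype(np.float32) for name in order], axis=0)

def overall_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    return float((pred == truth).mean())

def f1_score(pred: np.ndarray, truth: np.ndarray) -> float:
    tp = np.logical_and(pred == 1, truth == 1).sum()
    fp = np.logical_and(pred == 1, truth == 0).sum()
    fn = np.logical_and(pred == 0, truth == 1).sum()
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return float(2 * precision * recall / max(precision + recall, 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 64x64 patch: five fake reflectance bands and a fake built-up mask.
    bands = {name: rng.random((64, 64)) for name in SELECTED_BANDS}
    x = stack_bands(bands)                      # (5, 64, 64) input for a segmentation CNN
    truth = (rng.random((64, 64)) > 0.7).astype(np.uint8)
    pred = truth.copy()
    pred[:4, :4] ^= 1                           # perturb a corner to simulate errors
    print(x.shape, overall_accuracy(pred, truth), f1_score(pred, truth))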
in Geocarto international > vol 37 n° 25 [01/12/2022] . - pp 7753 - 7773 [article]
Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images / Hanwen Xu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
[article]
Titre : Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images
Type de document : Article/Communication
Auteurs : Hanwen Xu, Auteur ; Xinming Tang, Auteur ; Bo Ai, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : n° 4411915
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] architecture de réseau
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] entropie
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image à très haute résolution
[Termes IGN] segmentation multi-échelle
[Termes IGN] segmentation sémantique
Résumé : (auteur) Very-high-resolution (VHR) remote sensing images contain various multiscale objects, such as large-scale buildings and small-scale cars. However, these multiscale objects cannot be considered simultaneously in the widely used backbones with a large downsampling factor (e.g., VGG-like and ResNet-like), which has led to various context aggregation approaches, such as fusing low-level features and attention-based modules. To alleviate this problem, we propose a feature-selection high-resolution network (FSHRNet) based on an observation: if the features maintain high resolution throughout the network, a high-precision segmentation result can be obtained with only a 1×1 convolution layer, with no need for complex context aggregation modules. Specifically, the backbone of FSHRNet is a multibranch structure similar to HRNet, where the high-resolution branch is the principal line. A lightweight dynamic weight module, named the feature-selection convolution (FSConv) layer, is then presented to fuse multiresolution features, allowing adaptive feature selection based on the characteristics of objects. Finally, a specially designed 1×1 convolution layer derived from hypersphere embedding is used to produce the segmentation result. Comparisons with other widely used methods show that the proposed FSHRNet obtains competitive performance on the ISPRS Vaihingen dataset, the ISPRS Potsdam dataset, and the iSAID dataset.
Numéro de notice : A2022-559
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2022.3183144
En ligne : https://doi.org/10.1109/TGRS.2022.3183144
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101184
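The fusion idea summarised above, combining multiresolution features with adaptively predicted per-channel weights, can be sketched as follows. This is a minimal illustrative module, not the published FSConv/FSHRNet implementation; the gating branch, channel counts and fusion rule are assumptions.

# Minimal sketch: fuse a high-resolution feature map with an upsampled lower-resolution
# one using channel weights predicted from the features themselves, so the network can
# "select" which resolution contributes per channel. Shapes and layer sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureSelectionFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(channels // reduction, 8)
        # Tiny gating branch: global context -> per-channel weights in [0, 1].
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, high_res: torch.Tensor, low_res: torch.Tensor) -> torch.Tensor:
        # Bring the coarse branch to the resolution of the fine branch.
        low_up = F.interpolate(low_res, size=high_res.shape[-2:],
                               mode="bilinear", align_corners=False)
        weights = self.gate(torch.cat([high_res, low_up], dim=1))  # (N, C, 1, 1)
        # Convex per-channel combination of the two resolutions.
        return weights * high_res + (1.0 - weights) * low_up

if __name__ == "__main__":
    fuse = FeatureSelectionFusion(channels=64)
    fine = torch.randn(2, 64, 128, 128)
    coarse = torch.randn(2, 64, 32, 32)
    print(fuse(fine, coarse).shape)  # torch.Size([2, 64, 128, 128])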
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 6 (June 2022) . - n° 4411915 [article]
Deep image translation with an affinity-based change prior for unsupervised multimodal change detection / Luigi Tommaso Luppino in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)
[article]
Titre : Deep image translation with an affinity-based change prior for unsupervised multimodal change detection
Type de document : Article/Communication
Auteurs : Luigi Tommaso Luppino, Auteur ; Michael Kampffmeyer, Auteur ; Filippo Maria Bianchi, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : n° 4700422
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image mixte
[Termes IGN] analyse comparative
[Termes IGN] architecture de réseau
[Termes IGN] classification non dirigée
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de changement
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] réseau antagoniste génératif
Résumé : (auteur) Image translation with convolutional neural networks has recently been used as an approach to multimodal change detection. Existing approaches train the networks by exploiting supervised information on the change areas, which, however, is not always available. A main challenge in the unsupervised setting is to prevent change pixels from affecting the learning of the translation function. We propose two new network architectures trained with loss functions weighted by priors that reduce the impact of change pixels on the learning objective. The change prior is derived in an unsupervised fashion from relational pixel information captured by domain-specific affinity matrices. Specifically, we use the vertex degrees associated with an absolute affinity difference matrix and demonstrate their utility in combination with cycle consistency and adversarial training. The proposed neural networks are compared with state-of-the-art algorithms, and experiments conducted on three real datasets show the effectiveness of our methodology.
Numéro de notice : A2022-027
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2021.3056196
Date de publication en ligne : 17/02/2021
En ligne : https://doi.org/10.1109/TGRS.2021.3056196
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99263
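The loss weighting described in this abstract, down-weighting pixels that the prior flags as likely changed so they do not drive the learning of the translation network, can be illustrated with a short sketch. This is not the authors' code: the prior below is a toy stand-in for the affinity-based change prior, and the normalisation choice is an assumption.

# Minimal sketch: per-pixel weighted L1 translation loss, where the weight (1 - prior)
# suppresses the contribution of pixels the change prior marks as likely changed.
import torch

def prior_weighted_l1(translated: torch.Tensor,
                      target: torch.Tensor,
                      change_prior: torch.Tensor) -> torch.Tensor:
    """L1 translation loss weighted per pixel by (1 - prior), prior values in [0, 1]."""
    weight = (1.0 - change_prior).unsqueeze(1)          # (N, 1, H, W), broadcast over channels
    per_pixel = (translated - target).abs()             # (N, C, H, W)
    # Normalise by the total weight so heavily changed scenes do not shrink the loss.
    return (weight * per_pixel).sum() / (weight.sum() * per_pixel.shape[1] + 1e-8)

if __name__ == "__main__":
    n, c, h, w = 2, 3, 64, 64
    translated = torch.rand(n, c, h, w)       # output of the X -> Y translation network
    target = torch.rand(n, c, h, w)           # co-registered image in the other modality
    change_prior = torch.rand(n, h, w)        # 1 = almost surely changed, 0 = unchanged
    print(prior_weighted_l1(translated, target, change_prior).item())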
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 1 (January 2022) . - n° 4700422 [article]
Deep learning based 2D and 3D object detection and tracking on monocular video in the context of autonomous vehicles / Zhujun Xu (2022)
Titre : Deep learning based 2D and 3D object detection and tracking on monocular video in the context of autonomous vehicles
Type de document : Thèse/HDR
Auteurs : Zhujun Xu, Auteur ; Eric Chaumette, Directeur de thèse ; Damien Vivet, Directeur de thèse
Editeur : Toulouse : Université de Toulouse
Année de publication : 2022
Importance : 136 p.
Format : 21 x 30 cm
Note générale : bibliographie
Thèse en vue de l'obtention du Doctorat de l'Université de Toulouse, spécialité Informatique et Télécommunications
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] apprentissage semi-dirigé
[Termes IGN] architecture de réseau
[Termes IGN] détection d'objet
[Termes IGN] échantillonnage de données
[Termes IGN] objet 3D
[Termes IGN] segmentation d'image
[Termes IGN] véhicule automobile
[Termes IGN] vidéo
[Termes IGN] vision par ordinateur
Index. décimale : THESE Thèses et HDR
Résumé : (auteur) The objective of this thesis is to develop deep learning based 2D and 3D object detection and tracking methods on monocular video and to apply them in the context of autonomous vehicles. When still-image detectors are applied directly to a video stream, accuracy suffers from the quality of the sampled frames. Moreover, generating 3D annotations is time-consuming and expensive because of the data fusion involved and the large number of frames. We therefore take advantage of temporal information in videos, such as object consistency, to improve performance. The methods should not introduce too much extra computational burden, since autonomous vehicles demand real-time performance. Improvements can be made at different steps, for example data preparation, network architecture and post-processing. First, we propose a post-processing method called heatmap propagation, based on the one-stage detector CenterNet, for video object detection. Our method propagates previous reliable long-term detections to the upcoming frame in the form of a heatmap. Then, to distinguish different objects of the same class, we propose a frame-to-frame network architecture for video instance segmentation using instance sequence queries; tracking of instances is achieved without extra post-processing for data association. Finally, we propose a semi-supervised learning method to generate 3D annotations for a 2D video object tracking dataset, which enriches the training process for 3D object detection. Each of the three methods can be applied individually to extend image detectors to video applications. We also propose two complete network structures to solve 2D and 3D object detection and tracking on monocular video.
Note de contenu :
1- Introduction
2- Video object detection with heatmap propagation
3- Video instance segmentation with instance sequence queries
4- Semi-supervised learning of monocular 3D object detection with 2D video tracking annotations
5- Conclusions and perspectives
Numéro de notice : 24072
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Thèse française
Note de thèse : Thèse de Doctorat : Informatique et Télécommunications : Toulouse : 2022
DOI : sans
En ligne : https://www.theses.fr/2022ESAE0019
Format de la ressource électronique : URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102136
Effective triplet mining improves training of multi-scale pooled CNN for image retrieval / Federico Vaccaro in Machine Vision and Applications, vol 33 n° 1 (January 2022)
[article]
Titre : Effective triplet mining improves training of multi-scale pooled CNN for image retrieval
Type de document : Article/Communication
Auteurs : Federico Vaccaro, Auteur ; Marco Bertini, Auteur ; Tiberio Uricchio, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : n° 16
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] agrégation de données
[Termes IGN] analyse visuelle
[Termes IGN] architecture de réseau
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] exploration de données
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] recherche d'image basée sur le contenu
[Termes IGN] réseau neuronal siamois
[Termes IGN] triplet
Résumé : (auteur) In this paper, we address the problem of content-based image retrieval (CBIR) by learning image representations based on the activations of a Convolutional Neural Network. We propose an end-to-end trainable network architecture that exploits a novel multi-scale local pooling based on the trainable aggregation layer NetVLAD (Arandjelovic et al., in Proceedings of the IEEE conference on computer vision and pattern recognition, CVPR, 2016) and bags of local features obtained by splitting the activations, which reduces the dimensionality of the descriptor and increases retrieval performance. Training is performed using an improved triplet mining procedure that selects samples based on their difficulty, yielding an effective image representation while reducing the risk of overfitting and loss of generalization. Extensive experiments show that our approach, which can be used with different CNN architectures, obtains state-of-the-art results on standard and challenging CBIR datasets.
Numéro de notice : A2022-237
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.1007/s00138-021-01260-z
Date de publication en ligne : 06/01/2022
En ligne : https://doi.org/10.1007/s00138-021-01260-z
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100153
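The difficulty-based triplet mining summarised above can be illustrated by a small in-batch hard-mining sketch. This is not the paper's exact procedure: the hardest-positive/hardest-negative rule, the margin value and the batch layout are assumptions.

# Minimal sketch: inside a batch of L2-normalised embeddings with class labels, each anchor
# keeps its hardest positive (farthest same-class sample) and hardest negative (closest
# other-class sample), and a margin triplet loss is computed on those picks.
import torch
import torch.nn.functional as F

def hard_triplet_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                      margin: float = 0.2) -> torch.Tensor:
    emb = F.normalize(embeddings, dim=1)
    dist = torch.cdist(emb, emb)                         # (B, B) pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)    # (B, B) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool)
    # Hardest positive: farthest sample with the same label (excluding the anchor itself).
    pos_dist = dist.masked_fill(~same | eye, float("-inf")).max(dim=1).values
    # Hardest negative: closest sample with a different label.
    neg_dist = dist.masked_fill(same, float("inf")).min(dim=1).values
    return F.relu(pos_dist - neg_dist + margin).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(8, 128)                # e.g. pooled CNN descriptors for 8 images
    labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
    print(hard_triplet_loss(feats, labels).item())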
in Machine Vision and Applications > vol 33 n° 1 (January 2022) . - n° 16[article]Representing vector geographic information as a tensor for deep learning based map generalisation / Azelle Courtial (2022)PermalinkEfficient image dataset classification difficulty estimation for predicting deep-learning accuracy / Florian Scheidegger in The Visual Computer, vol 37 n° 6 (June 2021)PermalinkPermalinkA novel deep network and aggregation model for saliency detection / Ye Liang in The Visual Computer, vol 36 n° 9 (September 2020)PermalinkCreating a web mapping portal to manage Malta’s underwater cultural heritage / Mélissa Dupuis (2020)PermalinkRecherche multimodale d'images aériennes multi-date à l'aide d'un réseau siamois / Margarita Khokhlova (2020)PermalinkGéomatique webmapping en open source / David Collado (2019)PermalinkSimultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks / Rasha Alshehhi in ISPRS Journal of photogrammetry and remote sensing, vol 130 (August 2017)PermalinkEtude et méthodes d'intégration et d'interaction de données 3D complexes type "nuages de points" vers un web SIG / Victor Lambert (2017)PermalinkRéseaux de neurones convolutifs pour la segmentation sémantique et l'apprentissage d'invariants de couleur / Damien Fourure (2017)Permalink