Descriptor
IGN terms > natural sciences > physics > image processing > digital image analysis > object-based image analysis > object detection
Documents available in this category (105)
Underwater object detection and reconstruction based on active single-pixel imaging and super-resolution convolutional neural network / Mengdi Li in Sensors, vol. 21 no. 1 (January 2021)
[article]
Title: Underwater object detection and reconstruction based on active single-pixel imaging and super-resolution convolutional neural network
Document type: Article/Communication
Authors: Mengdi Li; Anumoi Mathai; Stephen L. H. Lau; et al.
Year of publication: 2021
Pagination: no. 313
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] per-pixel classification
[IGN terms] object detection
[IGN terms] seabed
[IGN terms] signal-to-noise ratio
[IGN terms] image reconstruction
[IGN terms] object reconstruction
Abstract: (author) Due to medium scattering, absorption, and complex light interactions, capturing objects in the underwater environment has always been a difficult task. Single-pixel imaging (SPI) is an efficient imaging approach that can obtain spatial object information under low-light conditions. In this paper, we propose a single-pixel object inspection system for the underwater environment based on a compressive sensing super-resolution convolutional neural network (CS-SRCNN). With the CS-SRCNN algorithm, image reconstruction can be achieved with 30% of the total pixels in the image. We also investigate the impact of the compression ratio on underwater object SPI reconstruction performance, and we analyze the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) to determine the quality of the reconstructed images. Our work is compared to the SPI system and the SRCNN method to demonstrate its efficiency in capturing objects in an underwater environment. The PSNR and SSIM of the proposed method increased to 35.44% and 73.07%, respectively. This work provides new insight into SPI applications and offers a better alternative for good-quality underwater optical object imaging.
Record number: A2021-158
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/s21010313
Online publication date: 05/01/2021
Online: https://doi.org/10.3390/s21010313
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97073
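As a side note on the evaluation metrics named in this abstract: PSNR and SSIM can be sketched in a few lines of numpy. The functions below are an illustrative reimplementation, not code from the indexed article, and the SSIM shown is the simplified single-window form rather than the sliding-window variant usually reported in papers:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio between a reference image and a reconstruction."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=1.0):
    """Simplified global SSIM: one window covering the whole image."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

An identical pair of images gives infinite PSNR and SSIM of 1.0; any degradation lowers both.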
in Sensors > vol. 21 no. 1 (January 2021). - no. 313 [article]

Understanding the role of individual units in a deep neural network / David Bau in Proceedings of the National Academy of Sciences of the United States of America PNAS, vol. 117 no. 48 (1 December 2020)
[article]
Title: Understanding the role of individual units in a deep neural network
Document type: Article/Communication
Authors: David Bau; Jun-Yan Zhu; Hendrik Strobelt; et al.
Year of publication: 2020
Pagination: pp. 30071-30078
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] conceptual framework
[IGN terms] object detection
[IGN terms] generative adversarial network
[IGN terms] convolutional neural network
[IGN terms] scene
Abstract: (author) Deep neural networks excel at finding hierarchical representations that solve complex tasks over large datasets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
Record number: A2020-864
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1073/pnas.1907375117
Online: https://doi.org/10.1073/pnas.1907375117
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99086
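The core scoring step of network dissection described in this abstract — matching a hidden unit to a visual concept — can be illustrated as an intersection-over-union between a unit's thresholded activation map and a binary concept mask. The sketch below is a hypothetical minimal version (function names and the fixed-quantile thresholding are assumptions, not the authors' exact procedure):

```python
import numpy as np

def unit_concept_iou(activation, concept_mask, threshold):
    """IoU between a unit's thresholded activation map and a binary concept mask."""
    unit_mask = activation > threshold
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union else 0.0

def dissect(activations, concept_mask, quantile=0.99):
    """Score every unit against one concept; each unit's threshold is taken
    at a high quantile of its own activation distribution."""
    scores = []
    for act in activations:                # act: one (H, W) activation map
        t = np.quantile(act, quantile)
        scores.append(unit_concept_iou(act, concept_mask, t))
    return np.array(scores)
```

Units whose high-activation regions consistently overlap a concept's mask get IoU near 1 and are labeled with that concept.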
in Proceedings of the National Academy of Sciences of the United States of America PNAS > vol. 117 no. 48 (1 December 2020). - pp. 30071-30078 [article]

Bayesian transfer learning for object detection in optical remote sensing images / Changsheng Zhou in IEEE Transactions on geoscience and remote sensing, vol. 58 no. 11 (November 2020)
[article]
Title: Bayesian transfer learning for object detection in optical remote sensing images
Document type: Article/Communication
Authors: Changsheng Zhou; Jiangshe Zhang; Junmin Liu; et al.
Year of publication: 2020
Pagination: pp. 7705-7719
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] processing pipeline
[IGN terms] object detection
[IGN terms] Fisher distribution
[IGN terms] georeferenced data set
[IGN terms] Bayes' theorem
Abstract: (author) In the literature on object detection in optical remote sensing images, a popular pipeline first modifies an off-the-shelf deep neural network, then initializes the modified network with weights pretrained on a source data set, and finally fine-tunes the network on a target data set. The procedure works well in practice but might not make full use of the underlying knowledge implied by the pretrained weights. In this article, we propose a novel method, referred to as Fisher regularization, for efficient knowledge transfer. Based on Bayes' theorem, the method stores underlying knowledge in a Fisher information matrix and fine-tunes parameters based on that knowledge. The proposed method introduces no extra parameters and is less sensitive to hyperparameters than classical weight decay. Experiments on the NWPU VHR-10 and DOTA data sets show that the proposed method is effective and works well with different object detectors.
Record number: A2020-679
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2983201
Online publication date: 14/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2983201
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96182
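A Fisher-weighted penalty of the kind this abstract describes is commonly written, for a diagonal Fisher approximation, as loss_total = loss_task + (λ/2) Σᵢ Fᵢ (θᵢ − θᵢ*)², where θ* are the pretrained weights. The sketch below assumes that diagonal form (the paper's exact formulation may differ) and is illustrative only:

```python
import numpy as np

def diagonal_fisher(grads):
    """Diagonal Fisher estimate: mean squared per-parameter gradient of the
    log-likelihood, averaged over source-domain samples."""
    g = np.asarray(grads)              # shape (n_samples, n_params)
    return (g ** 2).mean(axis=0)

def fisher_penalty(theta, theta_pre, fisher, lam=1.0):
    """Quadratic penalty anchoring fine-tuned weights to the pretrained ones,
    weighted by how informative each parameter was on the source task."""
    d = theta - theta_pre
    return 0.5 * lam * np.sum(fisher * d * d)
```

Unlike plain weight decay, parameters with near-zero Fisher values are free to move, so only the weights that carried source-domain knowledge are held in place.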
in IEEE Transactions on geoscience and remote sensing > vol. 58 no. 11 (November 2020). - pp. 7705-7719 [article]

Application of convolutional and recurrent neural networks for buried threat detection using ground penetrating radar data / Mahdi Moalla in IEEE Transactions on geoscience and remote sensing, vol. 58 no. 10 (October 2020)
[article]
Title: Application of convolutional and recurrent neural networks for buried threat detection using ground penetrating radar data
Document type: Article/Communication
Authors: Mahdi Moalla; Hichem Frigui; Andrew Karem; et al.
Year of publication: 2020
Pagination: pp. 7022-7034
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] hidden target
[IGN terms] barycentric classification
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] radar data
[IGN terms] moiré radar image
[IGN terms] antipersonnel mine
[IGN terms] ground-penetrating radar (GPR)
[IGN terms] subsoil
Abstract: (author) We propose discrimination algorithms for buried threat detection (BTD) that exploit deep convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to analyze 2-D GPR B-scans in the down-track (DT) and cross-track (CT) directions as well as 3-D GPR volumes. Instead of imposing a specific model or handcrafted features, as in most existing detectors, we use large real GPR data collections and data-driven approaches that learn: 1) features characterizing buried explosive objects (BEOs) in 2-D B-scans, both in the DT and CT directions; 2) the variation of the CNN features learned in a fixed 2-D view across the third dimension; and 3) features characterizing BEOs in the original 3-D space. The proposed algorithms were trained and evaluated using large experimental GPR data covering a surface area of 120,000 m² from 13 different lanes across two U.S. test sites. These data include a diverse set of BEOs of varying shapes, metal content, and burial depths. We provide a qualitative analysis of the proposed algorithms by visually comparing their performance and consistency along different dimensions and visualizing typical features learned by some nodes of the network. We also provide a quantitative analysis that compares the receiver operating characteristics (ROCs) obtained using the proposed algorithms with those obtained using existing approaches based on CNNs as well as traditional learning.
Record number: A2020-586
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2978763
Online publication date: 25/03/2020
Online: https://doi.org/10.1109/TGRS.2020.2978763
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95914
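The first preprocessing step this abstract implies — extracting 2-D B-scans in both horizontal directions from a 3-D GPR volume — is a simple slicing operation. The sketch below assumes a (depth, down-track, cross-track) axis order, which is a convention chosen here for illustration, not taken from the paper:

```python
import numpy as np

def bscan_slices(volume):
    """Split a GPR volume of shape (depth, down_track, cross_track) into
    the 2-D B-scans a CNN would analyze in each horizontal direction."""
    depth, n_dt, n_ct = volume.shape
    dt_scans = [volume[:, :, j] for j in range(n_ct)]  # each (depth, down_track)
    ct_scans = [volume[:, i, :] for i in range(n_dt)]  # each (depth, cross_track)
    return dt_scans, ct_scans
```

A down-track scan fixes one cross-track position and vice versa; the two families of slices give the DT and CT views the detectors are trained on.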
in IEEE Transactions on geoscience and remote sensing > vol. 58 no. 10 (October 2020). - pp. 7022-7034 [article]

CSVM architectures for pixel-wise object detection in high-resolution remote sensing images / Youyou Li in IEEE Transactions on geoscience and remote sensing, vol. 58 no. 9 (September 2020)
[article]
Title: CSVM architectures for pixel-wise object detection in high-resolution remote sensing images
Document type: Article/Communication
Authors: Youyou Li; Farid Melgani; Binbin He
Year of publication: 2020
Pagination: pp. 6059-6070
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] convolutional neural network classification
[IGN terms] support vector machine classification
[IGN terms] object detection
[IGN terms] training data (machine learning)
[IGN terms] UAV imagery
[IGN terms] graphics processing unit
Abstract: (author) Object detection is an increasingly important task in very-high-resolution (VHR) remote sensing imagery analysis. With the growth of GPU computing capability, an increasing number of deep convolutional neural networks (CNNs) have been designed to address the object detection challenge. However, compared with CPUs, GPUs are much more costly, so GPU-based methods are less attractive in practical applications. In this article, we propose a CPU-based method, built on convolutional support vector machines (CSVMs), to address the object detection challenge in VHR images. Experiments are conducted on three VHR and two unmanned aerial vehicle (UAV) data sets with very limited training data. Results show that the proposed CSVM achieves competitive performance compared to U-Net, an efficient CNN-based model designed for small training data sets.
Record number: A2020-527
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2972289
Online publication date: 02/03/2020
Online: https://doi.org/10.1109/TGRS.2020.2972289
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95705
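The convolutional front end of a CSVM-style architecture — a filter bank whose pooled responses feed a linear SVM instead of further network layers — can be sketched with plain numpy. Everything below (function names, ReLU, the 2x2 max-pooling) is an illustrative assumption about one such stage, not the authors' architecture:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 2-D cross-correlation with 'valid' padding."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def csvm_features(img, kernels, pool=2):
    """One convolutional stage: filter bank -> ReLU -> max-pooling,
    flattened into the vector a linear SVM would then classify."""
    feats = []
    for k in kernels:
        fmap = np.maximum(conv2d_valid(img, k), 0.0)        # ReLU
        h = (fmap.shape[0] // pool) * pool
        w = (fmap.shape[1] // pool) * pool
        pooled = fmap[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)
```

Because the classifier on top is a margin-based SVM rather than a deep stack, training stays cheap on a CPU, which is the trade-off the abstract highlights.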
in IEEE Transactions on geoscience and remote sensing > vol. 58 no. 9 (September 2020). - pp. 6059-6070 [article]

Heliport detection using artificial neural networks / Emre Baseski in Photogrammetric Engineering & Remote Sensing, PERS, vol. 86 no. 9 (September 2020) Permalink
Ship detection in SAR images via local contrast of Fisher vectors / Xueqian Wang in IEEE Transactions on geoscience and remote sensing, vol. 58 no. 9 (September 2020) Permalink
Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol. 167 (September 2020) Permalink
GeoNat v1.0: A dataset for natural feature mapping with artificial intelligence and supervised learning / Samantha T. Arundel in Transactions in GIS, vol. 24 no. 3 (June 2020) Permalink
Photogrammetric determination of 3D crack opening vectors from 3D displacement fields / Frank Liebold in ISPRS Journal of photogrammetry and remote sensing, vol. 164 (June 2020) Permalink
Traffic signal detection from in-vehicle GPS speed profiles using functional data analysis and machine learning / Yann Méneroux in International Journal of Data Science and Analytics JDSA, vol. 10 no. 1 (June 2020) Permalink
Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks / Mahmoud Saeedimoghaddam in International journal of geographical information science IJGIS, vol. 34 no. 5 (May 2020) Permalink
Automated terrain feature identification from remote sensing imagery: a deep learning approach / Wenwen Li in International journal of geographical information science IJGIS, vol. 34 no. 4 (April 2020) Permalink
Geocoding of trees from street addresses and street-level images / Daniel Laumer in ISPRS Journal of photogrammetry and remote sensing, vol. 162 (April 2020) Permalink
The application of bidirectional reflectance distribution function data to recognize the spatial heterogeneity of mixed pixels in vegetation remote sensing: a simulation study / Yanan Yan in Photogrammetric Engineering & Remote Sensing, PERS, vol. 86 no. 3 (March 2020) Permalink