Descriptor
IGN terms > natural sciences > physics > image processing > digital image analysis > object-based image analysis > object detection
Documents available in this category (100)
An internal-external optimized convolutional neural network for arbitrary orientated object detection from optical remote sensing images / Sihang Zhang in Geo-spatial Information Science, vol 24 n° 4 (October 2021)
[article]
Title: An internal-external optimized convolutional neural network for arbitrary orientated object detection from optical remote sensing images
Document type: Article/Communication
Authors: Sihang Zhang; Zhenfeng Shao; Xiao Huang; et al.
Year of publication: 2021
Pages: pp 654 - 665
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] optical image
[IGN terms] optimization (mathematics)
Abstract: (author) Due to the bird's-eye view of remote sensing sensors, the orientation of an object is a key factor that has to be considered in object detection. To obtain rotated bounding boxes, existing studies either rely on rotated anchoring schemes or add complex rotated ROI transfer layers, leading to increased computational demand and reduced detection speed. In this study, we propose a novel internal-external optimized convolutional neural network for arbitrarily oriented object detection in optical remote sensing images. For the internal optimization, we designed an anchor-based single-shot head detector that adopts the coarse-to-fine detection concept of two-stage object detection networks. Refined rotated anchors are generated by the coarse detection head module and fed into the refining detection head module through an embedded deformable convolutional layer. For the external optimization, we propose an IOU balanced loss that addresses the regression challenges related to arbitrarily oriented bounding boxes. Experimental results on the DOTA and HRSC2016 benchmark datasets show that our proposed method outperforms selected methods.
Record number: A2021-129
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/10095020.2021.1972772
Online publication date: 27/09/2021
Online: https://doi.org/10.1080/10095020.2021.1972772
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99355
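The abstract does not spell out the IOU balanced loss; a minimal sketch of one plausible formulation, assuming the idea is to reweight each box's smooth-L1 regression loss by its overlap with the target (the exponent `eta` and the normalization are illustrative choices, not values from the paper):

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Elementwise smooth-L1 (Huber-like) penalty used for box regression."""
    x = np.abs(x)
    return np.where(x < beta, 0.5 * x * x / beta, x - 0.5 * beta)

def iou_balanced_loss(pred_deltas, target_deltas, ious, eta=1.5):
    """Hypothetical IoU-balanced regression loss: each box's smooth-L1 loss
    is weighted by its IoU with the target raised to the power eta, so
    well-localized boxes contribute more gradient.  pred_deltas and
    target_deltas are (N, 5) for rotated boxes; ious is (N,) in [0, 1]."""
    per_box = smooth_l1(pred_deltas - target_deltas).sum(axis=1)
    weights = ious ** eta
    # normalize so the overall loss scale is independent of the weighting
    weights = weights * len(ious) / (weights.sum() + 1e-12)
    return float((weights * per_box).mean())

# toy usage: two predicted rotated boxes; the well-aligned one dominates
pred = np.array([[0.1, 0.0, 0.0, 0.0, 0.05], [1.0, 1.0, 0.5, 0.5, 0.8]])
tgt = np.zeros((2, 5))
loss = iou_balanced_loss(pred, tgt, ious=np.array([0.9, 0.2]))
```

Because the low-IoU box is down-weighted, the balanced loss is smaller than a plain unweighted smooth-L1 average over the same boxes.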
in Geo-spatial Information Science > vol 24 n° 4 (October 2021) . - pp 654 - 665 [article]
ComNet: combinational neural network for object detection in UAV-borne thermal images / Minglei Li in IEEE Transactions on geoscience and remote sensing, vol 59 n° 8 (August 2021)
[article]
Title: ComNet: combinational neural network for object detection in UAV-borne thermal images
Document type: Article/Communication
Authors: Minglei Li; Xingke Zhao; Jiasong Li; et al.
Year of publication: 2021
Pages: pp 6662 - 6673
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] UAV-captured image
[IGN terms] thermal image
[IGN terms] pedestrian
[IGN terms] saliency
[IGN terms] vehicle
Abstract: (author) We propose a deep learning-based method for object detection in UAV-borne thermal images, which can capture scenes both day and night. Compared with visible images, thermal images have lower illumination requirements, but they typically have blurred edges and low contrast. Using a boundary-aware salient object detection network, we extract saliency maps of the thermal images to improve their distinguishability. The thermal images are augmented with the corresponding saliency maps through channel replacement and pixel-level weighted fusion. Considering the limited computing power of UAV platforms, a lightweight combinational neural network, ComNet, is used as the core object detection method. A YOLOv3 model trained on the original images is used as a benchmark and compared with the proposed method. In the experiments, we analyze the detection performance of ComNet models with different image fusion schemes. The experimental results show that the average precisions (APs) for pedestrian and vehicle detection improve by 2%~5% over the benchmark without saliency map fusion and MobileNetv2. The detection speed increases by over 50%, while the model size is reduced by 58%. The results demonstrate that the proposed method offers a compromise model with application potential in UAV-borne detection tasks.
Record number: A2021-632
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3029945
Online publication date: 21/10/2020
Online: https://doi.org/10.1109/TGRS.2020.3029945
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98288
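The two fusion schemes named above (channel replacement and pixel-level weighted fusion) can be sketched as follows; the choice of replaced channel and the weight `alpha` are assumptions for illustration, not values from the paper:

```python
import numpy as np

def channel_replacement(thermal_rgb, saliency):
    """Replace one channel of the 3-channel thermal input with the saliency
    map (here the last channel; the paper's actual choice may differ)."""
    fused = thermal_rgb.copy()
    fused[..., 2] = saliency
    return fused

def weighted_fusion(thermal_rgb, saliency, alpha=0.7):
    """Pixel-level weighted fusion: blend the image with the saliency map,
    broadcast across channels.  alpha is an illustrative weight."""
    return alpha * thermal_rgb + (1.0 - alpha) * saliency[..., None]

# toy 4x4 thermal image replicated to 3 channels, values in [0, 1)
thermal = np.random.rand(4, 4, 3)
sal = np.random.rand(4, 4)
a = channel_replacement(thermal, sal)
b = weighted_fusion(thermal, sal)
```

Both variants keep the input shape, so the fused tensor can be fed to the detector unchanged.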
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 8 (August 2021) . - pp 6662 - 6673 [article]
CNN-based RGB-D salient object detection: Learn, select, and fuse / Hao Chen in International journal of computer vision, vol 129 n° 7 (July 2021)
[article]
Title: CNN-based RGB-D salient object detection: Learn, select, and fuse
Document type: Article/Communication
Authors: Hao Chen; Yongjian Deng; Guosheng Lin
Year of publication: 2021
Pages: pp 2076 - 2096
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] hierarchical approach
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] feature extraction
[IGN terms] data fusion
[IGN terms] RGB image
[IGN terms] depth
[IGN terms] saliency
[IGN terms] semantic segmentation
Abstract: (author) The goal of this work is to present a systematic solution for RGB-D salient object detection that addresses three aspects within a unified framework: modal-specific representation learning, complementary cue selection, and cross-modal complement fusion. To learn discriminative modal-specific features, we propose a hierarchical cross-modal distillation scheme in which the progressive predictions from the well-learned source modality supervise feature-hierarchy learning and inference in the new modality. To better select complementary cues, we formulate a residual function that adaptively incorporates complements from the paired modality. Furthermore, a top-down fusion structure is constructed for sufficient cross-modal, cross-level interaction. The experimental results demonstrate the effectiveness of the proposed cross-modal distillation scheme in learning from a new modality, the advantages of the proposed multi-modal fusion pattern in selecting and fusing cross-modal complements, and the generalization of the proposed designs to different tasks.
Record number: A2021-697
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-021-01452-0
Online publication date: 05/05/2021
Online: https://doi.org/10.1007/s11263-021-01452-0
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98532
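The residual complement-selection idea (adaptively adding complements from the paired modality to the current one) might be sketched roughly as below, with a per-channel linear gate standing in for whatever learned layers the paper actually uses:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_complement(f_rgb, f_depth, w, b):
    """Hypothetical residual complement selection.  A learned gate (here a
    per-channel linear map with weights w and bias b, standing in for a
    convolutional layer) decides how much of the depth feature to add:
        f_out = f_rgb + gate * f_depth
    so the depth stream contributes only where it complements RGB."""
    gate = sigmoid(f_rgb * w + b)   # values in (0, 1), adaptively selected
    return f_rgb + gate * f_depth

# toy feature maps of shape (H, W, C)
rgb = np.random.randn(8, 8, 16)
depth = np.random.randn(8, 8, 16)
out = residual_complement(rgb, depth, w=np.ones(16), b=np.zeros(16))
```

With a zero depth feature the residual term vanishes and the RGB feature passes through unchanged, which is the point of the residual formulation.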
in International journal of computer vision > vol 129 n° 7 (July 2021) . - pp 2076 - 2096 [article]
Trajectory and image-based detection and identification of UAV / Yicheng Liu in The Visual Computer, vol 37 n° 7 (July 2021)
[article]
Title: Trajectory and image-based detection and identification of UAV
Document type: Article/Communication
Authors: Yicheng Liu; Luchuan Liao; Hao Wu; et al.
Year of publication: 2021
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image and data acquisition
[IGN terms] Aves
[IGN terms] PTZ surveillance camera
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] drone
[IGN terms] characteristic shape
[IGN terms] interference
[IGN terms] moving object
[IGN terms] pattern recognition
[IGN terms] trajectory (non-space vehicle)
Abstract: (author) The inspection and prevention of unmanned aerial vehicles (UAVs) has attracted increasing attention in the wake of the growing frequency of security incidents. Factors such as interference and the small fuselage of a UAV pose challenges to timely detection. In our work, we present a system capable of automatically detecting, recognizing, and tracking a UAV using a single camera. In our method, a single pan-tilt-zoom (PTZ) camera detects flying objects and extracts their trajectories; a trajectory identified as a UAV then guides the camera to capture a detailed image of the target region. These images are then classified into UAV and interference (e.g., bird) classes by a convolutional neural network classifier trained on our image dataset. For a target recognized as a UAV by this double verification, a radio jammer emits interfering signals to disturb its control radio and GPS. The system can be applied in complex environments where many birds and UAVs appear simultaneously.
Record number: A2021-541
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-020-01937-y
Online publication date: 29/07/2020
Online: https://doi.org/10.1007/s00371-020-01937-y
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98020
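The trajectory stage that separates UAV-like tracks from bird-like ones is not detailed in the abstract; a hypothetical rule-based stand-in using hand-crafted trajectory features (all thresholds illustrative only) could look like:

```python
import numpy as np

def trajectory_features(points):
    """Simple features for a 2-D image trajectory of shape (N, 2): mean
    step speed, speed variability, and straightness (net displacement over
    path length).  Bird tracks tend to be more erratic than UAV tracks."""
    steps = np.diff(points, axis=0)
    speeds = np.linalg.norm(steps, axis=1)
    path_len = speeds.sum()
    net = np.linalg.norm(points[-1] - points[0])
    straightness = net / (path_len + 1e-9)
    return speeds.mean(), speeds.std(), straightness

def looks_like_uav(points, min_straightness=0.8, max_speed_cv=0.3):
    """Hypothetical pre-filter: smooth, near-straight tracks are flagged
    for CNN verification; erratic tracks are treated as interference."""
    mean_v, std_v, straightness = trajectory_features(points)
    cv = std_v / (mean_v + 1e-9)   # coefficient of variation of speed
    return straightness >= min_straightness and cv <= max_speed_cv

# a steady straight track (UAV-like) vs a deterministic zigzag (bird-like)
t = np.linspace(0.0, 1.0, 20)
uav = np.stack([100.0 * t, 50.0 * t], axis=1)
zig = np.zeros_like(uav)
zig[1::2, 1] = 12.0
bird = uav + zig
```

The straight track passes both tests, while the zigzag fails the straightness test, which is enough to route only the smooth track to the image classifier.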
in The Visual Computer > vol 37 n° 7 (July 2021) [article]
PolSAR ship detection based on neighborhood polarimetric covariance matrix / Tao Liu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 6 (June 2021)
[article]
Title: PolSAR ship detection based on neighborhood polarimetric covariance matrix
Document type: Article/Communication
Authors: Tao Liu; Ziyuan Yang; Armando Marino; et al.
Year of publication: 2021
Pages: pp 4874 - 4887
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] object detection
[IGN terms] polarimetric data
[IGN terms] moiré radar image
[IGN terms] Radarsat image
[IGN terms] covariance matrix
[IGN terms] ship
[IGN terms] radar polarimetry
[IGN terms] neighborhood (topological relation)
Abstract: (author) The detection of small ships in polarimetric synthetic aperture radar (PolSAR) images is still a topic for further investigation. Recently, patch-based detection techniques, such as superpixel-level detection, have stimulated wide interest because they can exploit the similarities among neighboring pixels. In this article, we propose a novel neighborhood polarimetric covariance matrix (NPCM) to detect small ships in PolSAR images, leading to a significant improvement in the separability between ship targets and sea clutter. The NPCM utilizes the spatial correlation between neighborhood pixels and maps the representation of a given pixel into a high-dimensional covariance matrix by embedding spatial and polarization information. Using the NPCM formalism, we apply a standard whitening filter, similar to the polarimetric whitening filter (PWF). We show how the inclusion of neighborhood information improves performance compared with the traditional polarimetric covariance matrix, although at the expense of a higher computational cost. The theory is validated on simulated and measured data under different sea states and using different radar platforms.
Record number: A2021-424
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3022181
Online publication date: 22/09/2020
Online: https://doi.org/10.1109/TGRS.2020.3018638
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97780
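The whitening-filter idea the authors build on can be sketched on standard per-pixel polarimetric covariance matrices (the paper's NPCM construction itself is not given in the abstract). One common PWF-style test statistic whitens each pixel's sample covariance by the estimated sea-clutter covariance:

```python
import numpy as np

def pwf_statistic(C, sigma_clutter):
    """Polarimetric-whitening-filter-style test statistic for one pixel:
        y = tr(Sigma_clutter^{-1} C)
    where C is the pixel's Hermitian 3x3 sample covariance (HH, HV, VV
    channels) and Sigma_clutter the clutter covariance.  Large values
    indicate a target brighter than the whitened clutter background."""
    return float(np.real(np.trace(np.linalg.solve(sigma_clutter, C))))

# toy example with identity clutter covariance: a ship pixel carries
# 10x the clutter power, so its statistic is 10x larger
clutter_cov = np.eye(3, dtype=complex)
sea_pixel = 1.0 * np.eye(3, dtype=complex)
ship_pixel = 10.0 * np.eye(3, dtype=complex)
```

Thresholding this statistic over the image (e.g. at a value calibrated to a desired false-alarm rate on clutter-only samples) yields the detection map.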
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 6 (June 2021) . - pp 4874 - 4887 [article]
Reconnaissance automatique d'objets pour le jumeau numérique ferroviaire à partir d'imagerie aérienne / Valentin Desbiolles in XYZ, n° 167 (juin 2021) Permalink
Structure-aware completion of photogrammetric meshes in urban road environment / Qing Zhu in ISPRS Journal of photogrammetry and remote sensing, vol 175 (May 2021) Permalink
A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021) Permalink
Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021) Permalink
Improving the unsupervised mapping of riparian bugweed in commercial forest plantations using hyperspectral data and LiDAR / Kabir Peerbhay in Geocarto international, vol 36 n° 4 ([01/03/2021]) Permalink
Multi-level progressive parallel attention guided salient object detection for RGB-D images / Zhengyi Liu in The Visual Computer, vol 37 n° 3 (March 2021) Permalink
PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery / Xian Sun in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021) Permalink
Deep traffic light detection by overlaying synthetic context on arbitrary natural images / Jean Pablo Vieira de Mello in Computers and graphics, vol 94 n° 1 (February 2021) Permalink
Detection of pictorial map objects with convolutional neural networks / Raimund Schnürer in Cartographic journal (the), vol 58 n° 1 (February 2021) Permalink
Semi-supervised joint learning for hand gesture recognition from a single color image / Chi Xu in Sensors, vol 21 n° 3 (February 2021) Permalink