Descriptor
IGN terms > natural sciences > physics > image processing > digital image analysis > object-based image analysis > object detection
Documents available in this category (140)
Novel fusion approach on automatic object extraction from spatial data: case study Worldview-2 and TOPO5000 / Umut Gunes Sefercik in Geocarto international, vol 33 n° 10 (October 2018)
[article]
Title: Novel fusion approach on automatic object extraction from spatial data: case study Worldview-2 and TOPO5000
Document type: Article/Communication
Authors: Umut Gunes Sefercik; Serkan Karakis; Can Atalay; Ibrahim Yigit; Umit Gokmen
Year of publication: 2018
Pages: pp 1139 - 1154
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] object detection
[IGN terms] building detection
[IGN terms] automatic extraction
[IGN terms] feature extraction
[IGN terms] road network extraction
[IGN terms] Wallis filter
[IGN terms] digital image
[IGN terms] Worldview image
[IGN terms] digital surface model
[IGN terms] Turkey
Abstract: (author) The automatic extraction of information content from remotely sensed data is always challenging. We propose a novel fusion approach to improve the extraction of this information from mono-satellite images. A Worldview-2 (WV-2) pan-sharpened image and a 1/5000-scale topographic vector map (TOPO5000) were used as the sample data. First, buildings and roads were manually extracted from WV-2 to establish the maximum extractable information content. Object-based automatic extractions were then performed. After obtaining the two-dimensional results, a normalized digital surface model (nDSM) was generated from the digital aerial photos underlying TOPO5000, and the automatic extraction was repeated in fusion with the nDSM, including individual object heights as an additional band for classification. The contribution was assessed in terms of precision, completeness and overall quality. The novel fusion technique increased the success of automatic extraction by 7% for the number of buildings and by 23% for the length of roads.
Record number: A2019-047
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2017.1353646
Online publication date: 27/07/2017
Online: https://doi.org/10.1080/10106049.2017.1353646
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92068
in Geocarto international > vol 33 n° 10 (October 2018) . - pp 1139 - 1154[article]
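The fusion step described in the record above — computing a normalized digital surface model (nDSM) and appending per-pixel object heights as an additional band before object-based classification — can be sketched as follows. This is a minimal illustration with synthetic arrays; the clipping, height normalisation and band-stacking conventions are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def build_ndsm(dsm, dtm):
    """Normalised digital surface model: object heights above the terrain."""
    return np.clip(dsm - dtm, 0.0, None)

def stack_height_band(image, ndsm):
    """Append the nDSM as an extra band to a (H, W, B) multispectral image."""
    ndsm_norm = ndsm / max(ndsm.max(), 1e-6)   # scale heights to [0, 1]
    return np.dstack([image, ndsm_norm])

# Synthetic 4-band image and elevation models
image = np.random.rand(64, 64, 4)
dsm = np.full((64, 64), 110.0)
dsm[20:30, 20:30] = 125.0                      # a 15 m "building"
dtm = np.full((64, 64), 110.0)                 # flat terrain

fused = stack_height_band(image, build_ndsm(dsm, dtm))
print(fused.shape)   # (64, 64, 5)
```

The stacked array would then feed the object-based extraction: the height band lets the classifier separate buildings from spectrally similar ground-level surfaces such as paved lots.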
Copies (1)
Barcode: 059-2018041 | Call number: RAB | Type: Journal | Location: Centre de documentation | Section: En réserve L003 | Status: Available
Augmented reality meets computer vision : efficient data generation for urban driving scenes / Hassan Abu Alhaija in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Title: Augmented reality meets computer vision: efficient data generation for urban driving scenes
Document type: Article/Communication
Authors: Hassan Abu Alhaija; Siva Karthik Mustikovela; Lars Mescheder; Andreas Geiger; Carsten Rother
Year of publication: 2018
Pages: pp 961 - 972
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] machine learning
[IGN terms] object detection
[IGN terms] augmented reality
[IGN terms] urban scene
[IGN terms] computer vision
Abstract: (author) The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand-labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360-degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance and a large number of complex object arrangements. Through an extensive set of experiments, we determine the set of parameters that produces augmented data which maximally enhances the performance of instance segmentation models.
Further, we demonstrate the utility of the proposed approach by training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on fully synthetic data or on limited amounts of annotated real data.
Record number: A2018-417
Authors' affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1070-x
Online publication date: 07/03/2018
Online: https://doi.org/10.1007/s11263-018-1070-x
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90900
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 961 - 972[article]
Adaptive correlation filters with long-term and short-term memory for object tracking / Chao Ma in International journal of computer vision, vol 126 n° 8 (August 2018)
[article]
Title: Adaptive correlation filters with long-term and short-term memory for object tracking
Document type: Article/Communication
Authors: Chao Ma; Jia-Bin Huang; Xiaokang Yang; Ming-Hsuan Yang
Year of publication: 2018
Pages: pp 771 - 796
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] object detection
[IGN terms] adaptive filter
[IGN terms] kernel-based method
[IGN terms] robust method
[IGN terms] target tracking
Abstract: (author) Object tracking is challenging because target objects often undergo drastic appearance changes over time. Recently, adaptive correlation filters have been successfully applied to object tracking. However, tracking algorithms relying on highly adaptive correlation filters are prone to drift due to noisy updates. Moreover, as these algorithms do not maintain long-term memory of target appearance, they cannot recover from tracking failures caused by heavy occlusion or target disappearance from the camera view. In this paper, we propose to learn multiple adaptive correlation filters with both long-term and short-term memory of target appearance for robust object tracking. First, we learn a kernelized correlation filter with an aggressive learning rate for locating target objects precisely, taking into account the appropriate size of the surrounding context and the feature representations. Second, we learn a correlation filter over a feature pyramid centered at the estimated target position for predicting scale changes. Third, we learn a complementary correlation filter with a conservative learning rate to maintain long-term memory of target appearance. We use the output responses of this long-term filter to determine whether a tracking failure has occurred; in that case, we apply an incrementally learned detector to recover the target position in a sliding-window fashion. Extensive experimental results on large-scale benchmark datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods in terms of efficiency, accuracy, and robustness.
Record number: A2018-414
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1076-4
Online publication date: 16/03/2018
Online: https://doi.org/10.1007/s11263-018-1076-4
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90897
in International journal of computer vision > vol 126 n° 8 (August 2018) . - pp 771 - 796[article]
A light and faster regional convolutional neural network for object detection in optical remote sensing images / Peng Ding in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
[article]
Title: A light and faster regional convolutional neural network for object detection in optical remote sensing images
Document type: Article/Communication
Authors: Peng Ding; Ye Zhang; Wei-Jian Deng; Ping Jia; Arjan Kuijper
Year of publication: 2018
Pages: pp 208 - 218
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object-based classification
[IGN terms] object detection
[IGN terms] aerial image
[IGN terms] terrestrial image
[IGN terms] multiple representation
[IGN terms] convolutional neural network
Abstract: (author) Detection of objects in satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances, and objects in satellite remote sensing images can now be detected using deep CNNs. In general, optical remote sensing images contain many dense, small objects, and the original Faster Regional CNN framework does not yield a suitably high precision on them. Therefore, after careful analysis, we adopt densely connected networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net and improve precision. We also propose an approach to reduce test-time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments on satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure detects objects in satellite optical remote sensing images more accurately and efficiently.
Record number: A2018-288
Authors' affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.05.005
Online publication date: 14/05/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.05.005
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90403
in ISPRS Journal of photogrammetry and remote sensing > vol 141 (July 2018) . - pp 208 - 218[article]
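The multi-scale representation mentioned in the record above — combining shallow, high-resolution features with upsampled deeper features so that small, dense objects remain resolvable — can be sketched roughly as below. This is a plain-NumPy illustration with invented tensor shapes; the paper's actual detector is a modified VGG16-based Faster R-CNN, not this toy.

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (H, W, C) feature map."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def fuse_scales(shallow, deep):
    """Concatenate a shallow high-res map with an upsampled deep map."""
    return np.concatenate([shallow, upsample2x(deep)], axis=-1)

shallow = np.random.rand(32, 32, 64)    # fine spatial detail, weak semantics
deep = np.random.rand(16, 16, 128)      # coarse detail, strong semantics
fused = fuse_scales(shallow, deep)
print(fused.shape)   # (32, 32, 192)
```

Detection heads running on the fused map see both the fine localisation cues needed for small vehicles and aircraft and the semantic context from the deeper layers.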
Copies (3)
Barcode: 081-2018071 | Call number: RAB | Type: Journal | Location: Centre de documentation | Section: En réserve L003 | Status: Available
Barcode: 081-2018073 | Call number: DEP-EXM | Type: Journal | Location: LASTIG | Section: Dépôt en unité | Status: Not for loan
Barcode: 081-2018072 | Call number: DEP-EAF | Type: Journal | Location: Nancy | Section: Dépôt en unité | Status: Not for loan
Predicting foreground object ambiguity and efficiently crowdsourcing the segmentation(s) / Danna Gurari in International journal of computer vision, vol 126 n° 7 (July 2018)
[article]
Title: Predicting foreground object ambiguity and efficiently crowdsourcing the segmentation(s)
Document type: Article/Communication
Authors: Danna Gurari; Kun He; Bo Xiong; et al.
Year of publication: 2018
Pages: pp 714 - 730
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] object detection
[IGN terms] ground truth
[IGN terms] image segmentation
[IGN terms] 3D salient region
Abstract: (author) We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images that lead multiple annotators to segment different foreground objects (ambiguous) and those showing only minor inter-annotator differences on the same object. Taking images from eight widely used datasets, we crowdsource labeling of the images as "ambiguous" or "not ambiguous" to segment, in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods, both on images from vision benchmarks and on images taken by blind people trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system that achieves cost savings when collecting the diversity of all valid "ground truth" foreground object segmentations, by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of human effort compared to existing crowdsourcing methods, with no loss in capturing the diversity of ground truths.
Record number: A2018-412
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1065-7
Online publication date: 05/02/2018
Online: https://doi.org/10.1007/s11263-018-1065-7
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90887
in International journal of computer vision > vol 126 n° 7 (July 2018) . - pp 714 - 730[article]
Application of deep learning for object detection / Ajeet Ram Pathak in Procedia Computer Science, vol 132 (2018)
Facade repetition detection in a fronto-parallel view with fiducial lines extraction / Hongfei Xiao in Neurocomputing, vol 273 (January 2018)
Deep convolutional neural networks for the detection of small vehicles in aerial imagery [Réseaux de neurones convolutionnels profonds pour la détection de petits véhicules en imagerie aérienne] / Jean Ogier du Terrail (2018)
Using probe vehicles for the detection and localisation of road infrastructure by machine learning [Utilisation de véhicules traceurs pour la détection et la localisation de l'infrastructure routière par apprentissage automatique] / Yann Méneroux (2018)
Occupancy modelling for moving object detection from Lidar point clouds: A comparative study / Wen Xiao in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-2/W4 (September 2017)
Estimating the spatial distribution, extent and potential lignocellulosic biomass supply of Trees Outside Forests in Baden-Wuerttemberg using airborne LiDAR and OpenStreetMap data / Joachim Maack in International journal of applied Earth observation and geoinformation, vol 58 (June 2017)
Semiautomatic detection and classification of materials in historic buildings with low-cost photogrammetric equipment / Javier Sanchez in Journal of Cultural Heritage, vol 25 (May - June 2017)
Applying detection proposals to visual tracking for scale and aspect ratio adaptability / Dafei Huang in International journal of computer vision, vol 122 n° 3 (May 2017)
A classification-segmentation framework for the detection of individual trees in dense MMS point cloud data acquired in urban areas / Martin Weinmann in Remote sensing, vol 9 n° 3 (March 2017)