Descriptor
Documents available in this category (1512)
RoofN3D: a database for 3D building reconstruction with deep learning / Andreas Wichmann in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
[article]
Title: RoofN3D: a database for 3D building reconstruction with deep learning
Document type: Article/Communication
Authors: Andreas Wichmann, Author; Amgad Agoub, Author; Valentina Schmidt, Author
Year of publication: 2019
Pages: pp 435 - 443
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] .Net
[IGN terms] deep learning
[IGN terms] 3D spatial database
[IGN terms] training data (machine learning)
[IGN terms] 3D building reconstruction
[IGN terms] point cloud
[IGN terms] roof
Abstract: (author) Machine learning methods, in particular those based on deep learning, have gained in importance with the latest developments in artificial intelligence and computer hardware. However, directly applying deep learning methods to improve the results of 3D building reconstruction is often not possible, for example because suitable training data are lacking. To address this issue, we present RoofN3D, which provides a three-dimensional (3D) point cloud training dataset that can be used to train machine learning models for different tasks in the context of 3D building reconstruction. This paper describes RoofN3D and the framework developed to derive such training data automatically. Furthermore, we provide an overview of other available 3D point cloud training data and of approaches in the current literature that apply deep learning to 3D point cloud data. Finally, we demonstrate, as an example, how the provided data can be used to classify building roofs with the PointNet framework.
Record number: A2019-248
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.85.6.435
Online: https://doi.org/10.14358/PERS.85.6.435
Electronic resource format: URL
Article permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93004
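The roof classification mentioned in this abstract relies on PointNet's key design idea: a symmetric (order-invariant) pooling over an unordered point set. A minimal, purely illustrative sketch of that idea; the function name and data are made up and do not come from the RoofN3D distribution or the PointNet code:

```python
def global_max_feature(points):
    """Aggregate per-point features into one order-invariant global
    feature by taking the element-wise maximum (PointNet's symmetric
    pooling). `points` is a list of equal-length feature tuples."""
    return tuple(max(coords) for coords in zip(*points))

# The result is identical regardless of point ordering, which is why
# such a network can consume raw, unordered point clouds:
roof_a = [(1.0, 2.0, 0.5), (0.0, 3.0, 1.5), (2.0, 1.0, 1.0)]
roof_b = list(reversed(roof_a))
assert global_max_feature(roof_a) == global_max_feature(roof_b) == (2.0, 3.0, 1.5)
```

In the real PointNet, each point is first lifted to a high-dimensional feature by a shared multi-layer perceptron before this max-pooling step; the sketch keeps only the pooling to show the invariance.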
in Photogrammetric Engineering & Remote Sensing, PERS > vol 85 n° 6 (June 2019) . - pp 435 - 443
[article]
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
105-2019061 | SL | Journal | Documentation centre | Journals room | Available

Integrated relative orientation based on point and line features via Plücker coordinates / Qinghong Sheng in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 4 (April 2019)
[article]
Title: Integrated relative orientation based on point and line features via Plücker coordinates
Document type: Article/Communication
Authors: Qinghong Sheng, Author; Rui Yang, Author; Hui Xiao, Author
Year of publication: 2019
Pages: pp 305 - 311
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] least-squares method
[IGN terms] relative orientation
[IGN terms] coordinate transformation
Abstract: (author) Relative orientation based on point and line features can significantly reduce the problems caused by single flight strip bending and strengthen geometric stability. Nevertheless, the dimensional difference between points and lines and the inconsistency of their representations result in low computational efficiency. In this paper, we propose that three-dimensional points and four-dimensional lines be uniformly represented by Plücker coordinates to achieve an integrated coordinate transformation. According to the geometric conditions that lines intersect lines and planes intersect planes, a point-line relative orientation model via Plücker coordinates (P-LPRO) is established, and the error equation is linearized by the least-squares method. Experimental results show that the integrated adjustment solution of P-LPRO is more accurate than the traditional method and converges faster.
Record number: A2019-164
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.85.4.305
Online publication date: 01/04/2019
Online: https://doi.org/10.14358/PERS.85.4.305
Electronic resource format: URL
Article permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92569
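The Plücker representation used in this record can be illustrated briefly: the line through two 3D points p and q is encoded by its direction d = q - p and its moment m = p x q, which satisfy d . m = 0, and any two points on the same line yield the same (d, m) up to a common scale. The function names below are illustrative, not from the paper:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plucker_line(p, q):
    """Plücker coordinates (d, m) of the 3D line through points p and q:
    direction d = q - p and moment m = p x q. The two vectors are always
    orthogonal (d . m = 0), the classical Plücker constraint."""
    d = tuple(qi - pi for pi, qi in zip(p, q))
    m = cross(p, q)
    return d, m

# Horizontal line through (0, 0, 1) along the x-axis:
d, m = plucker_line((0.0, 0.0, 1.0), (1.0, 0.0, 1.0))
print(d, m)  # → (1.0, 0.0, 0.0) (0.0, 1.0, 0.0)
```

Representing both points and lines in one homogeneous coordinate system is what allows the paper's integrated adjustment of both feature types.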
in Photogrammetric Engineering & Remote Sensing, PERS > vol 85 n° 4 (April 2019) . - pp 305 - 311
[article]
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
105-2019041 | SL | Journal | Documentation centre | Journals room | Available

Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective / Mohammad D. Hossain in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective
Document type: Article/Communication
Authors: Mohammad D. Hossain, Author; Dongmei Chen, Author
Year of publication: 2019
Pages: pp 115 - 134
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] object-based image analysis
[IGN terms] spatial data matching
[IGN terms] machine learning
[IGN terms] hybrid classification
[IGN terms] high-resolution imagery
[IGN terms] geographic object
[IGN terms] image segmentation
[IGN terms] region-based segmentation
[IGN terms] split-and-merge segmentation
Abstract: (author) Image segmentation is a critical and important step in (GEographic) Object-Based Image Analysis (GEOBIA or OBIA). The final feature extraction and classification in OBIA depend strongly on the quality of image segmentation. Segmentation has been used in remote sensing image processing since the advent of the Landsat-1 satellite. However, after the launch of the high-resolution IKONOS satellite in 1999, the paradigm of image analysis moved from pixel-based to object-based, and the purpose of segmentation shifted from supporting pixel labeling to identifying objects. Although several articles have reviewed segmentation algorithms, it remains unclear whether some segmentation algorithms are generally better suited for (GE)OBIA than others. This article presents an extensive state-of-the-art survey of OBIA techniques and discusses different segmentation techniques and their applicability to OBIA. Conceptual details of those techniques are explained, along with their strengths and weaknesses. The available tools and software packages for segmentation are also summarized. The key challenge in image segmentation is to select optimal parameters and algorithms that can generate image objects matching meaningful geographic objects. Recent research indicates an apparent movement towards improved segmentation algorithms, aiming at more accurate, automated, and computationally efficient techniques.
Record number: A2019-138
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.009
Online publication date: 23/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.009
Electronic resource format: URL
Article permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92469
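As a purely illustrative companion to this review, here is a toy region-growing segmentation, one of the classical algorithm families such OBIA surveys cover. Function and parameter names are made up; real OBIA tools use far richer homogeneity criteria than a single grey-value threshold:

```python
from collections import deque

def region_grow(image, threshold):
    """Toy region-growing segmentation: flood-fill 4-connected pixels
    whose value differs from the region seed by at most `threshold`
    into one labelled image object. `image` is a 2D list of numbers;
    returns a label map of the same shape (labels start at 1)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue  # pixel already belongs to an object
            next_label += 1
            seed = image[sy][sx]
            labels[sy][sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and abs(image[ny][nx] - seed) <= threshold):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
    return labels

# Two homogeneous areas become two image objects:
img = [[10, 10, 90, 90],
       [10, 10, 90, 90]]
print(region_grow(img, 5))  # → [[1, 1, 2, 2], [1, 1, 2, 2]]
```

The review's point about parameter selection shows up even here: the result depends entirely on `threshold`, the toy counterpart of the scale and homogeneity parameters of production segmentation software.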
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019) . - pp 115 - 134
[article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019041 | RAB | Journal | Documentation centre | In storage L003 | Available
081-2019043 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019042 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Building detection and regularisation using DSM and imagery information / Yousif A. Mousa in Photogrammetric record, vol 34 n° 165 (March 2019)
[article]
Title: Building detection and regularisation using DSM and imagery information
Document type: Article/Communication
Authors: Yousif A. Mousa, Author; Petra Helmholz, Author; David Belton, Author; Dimitri Bulatov, Author
Year of publication: 2019
Pages: pp 85 - 107
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] building detection
[IGN terms] automatic extraction
[IGN terms] mask
[IGN terms] digital surface model
[IGN terms] polygon
[IGN terms] regularisation
[IGN terms] outline simplification
Abstract: (author) An automatic method for the regularisation of building outlines is presented, utilising a combination of data-driven and model-driven approaches to provide a robust solution. The core of the method is a novel data-driven approach that generates approximate building polygons from a list of given boundary points. The algorithm iteratively calculates and stores likelihood values between an arbitrary starting boundary point and each of the following boundary points, using a function derived from the geometric properties of a building. As a preprocessing step, building segments are identified using a robust algorithm for the extraction of a digital elevation model. Evaluation on a challenging dataset achieved an average correctness of 96.3% for building detection and 95.7% for regularisation.
Record number: A2019-454
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1111/phor.12275
Online publication date: 26/03/2019
Online: https://doi.org/10.1111/phor.12275
Electronic resource format: URL
Article permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92867
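One standard building block for the kind of outline simplification this abstract discusses is Ramer-Douglas-Peucker polyline simplification: keep the endpoints, find the interior point farthest from the chord, and recurse only if it exceeds a tolerance. The sketch below is illustrative only; it is not the authors' likelihood-based regularisation:

```python
def simplify(points, tol):
    """Ramer-Douglas-Peucker simplification of a polyline given as a
    list of (x, y) tuples: drop interior points whose perpendicular
    distance to the chord between the endpoints is at most `tol`."""
    if len(points) < 3:
        return points
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    length = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard degenerate chord
    # Perpendicular distance of each interior point to the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / length
             for x, y in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]
    # Split at the farthest point and simplify both halves.
    return simplify(points[:i + 1], tol)[:-1] + simplify(points[i:], tol)

# A near-collinear jag on a building edge is removed, the corner kept:
outline = [(0, 0), (1, 0.01), (2, 0), (2, 1)]
print(simplify(outline, 0.1))  # → [(0, 0), (2, 0), (2, 1)]
```

Data-driven methods like the paper's go further than this purely geometric criterion by scoring candidate corners with building-specific properties (e.g. preferred right angles).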
in Photogrammetric record > vol 34 n° 165 (March 2019) . - pp 85 - 107
[article]

Learning to segment moving objects / Pavel Tokmakov in International journal of computer vision, vol 127 n° 3 (March 2019)
[article]
Title: Learning to segment moving objects
Document type: Article/Communication
Authors: Pavel Tokmakov, Author; Cordelia Schmid, Author; Karteek Alahari, Author
Year of publication: 2019
Pages: pp 282 - 301
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] temporal consistency
[IGN terms] video frame
[IGN terms] moving object
[IGN terms] object recognition
[IGN terms] convolutional neural network
[IGN terms] image sequence
Abstract: (author) We study the problem of segmenting moving objects in unconstrained videos. Given a video, the task is to segment all the objects that exhibit independent motion in at least one frame. We formulate this as a learning problem and design our framework around three cues: (1) independent object motion between a pair of frames, which complements object recognition, (2) object appearance, which helps to correct errors in motion estimation, and (3) temporal consistency, which imposes additional constraints on the segmentation. The framework is a two-stream neural network with an explicit memory module. The two streams encode appearance and motion cues in a video sequence, respectively, while the memory module captures the evolution of objects over time, exploiting temporal consistency. The motion stream is a convolutional neural network trained on synthetic videos to segment independently moving objects in the optical flow field. The module that builds a “visual memory” of the video, i.e., a joint representation of all the video frames, is realized with a convolutional recurrent unit learned from a small number of training video sequences. For every pixel in a frame of a test video, our approach assigns an object or background label based on the learned spatio-temporal features as well as the “visual memory” specific to the video. We evaluate our method extensively on three benchmarks: DAVIS, the Freiburg-Berkeley motion segmentation dataset, and SegTrack. In addition, we provide an extensive ablation study to investigate both the choice of training data and the influence of each component in the proposed framework.
Record number: A2018-601
Author affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1122-2
Online publication date: 22/09/2018
Online: https://doi.org/10.1007/s11263-018-1122-2
Electronic resource format: URL
Article permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92528
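The temporal-consistency cue in this abstract can be caricatured with a per-pixel running memory of motion masks: blending each new frame's mask into an accumulated state damps single-frame flicker. This is a deliberately simplified stand-in for the paper's learned convolutional recurrent memory; all names and the blending rule are illustrative:

```python
def update_memory(memory, mask, alpha=0.7):
    """Blend the current per-pixel motion mask (values in [0, 1]) into a
    running 'memory' of the same shape, so transient mis-detections in
    single frames are damped rather than flipping the segmentation."""
    return [[alpha * m + (1 - alpha) * x for m, x in zip(mrow, xrow)]
            for mrow, xrow in zip(memory, mask)]

def segment(memory, threshold=0.5):
    """Threshold the memory into a binary object/background labelling."""
    return [[1 if v >= threshold else 0 for v in row] for row in memory]

# A pixel that flickers off for one frame stays segmented as object:
memory = [[1.0]]
for frame_mask in ([[1.0]], [[0.0]], [[1.0]]):
    memory = update_memory(memory, frame_mask)
print(segment(memory))  # → [[1]]
```

In the paper this smoothing is not a fixed exponential average: the recurrent unit learns, from training sequences, how to weight appearance and motion evidence over time.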
in International journal of computer vision > vol 127 n° 3 (March 2019) . - pp 282 - 301
[article]

Semantic understanding of scenes through the ADE20K dataset / Bolei Zhou in International journal of computer vision, vol 127 n° 3 (March 2019)
Efficiently annotating object images with absolute size information using mobile devices / Martin Hofmann in International journal of computer vision, vol 127 n° 2 (February 2019)
Equivalent constraints for two-view geometry: pose solution/pure rotation identification and 3D reconstruction / Qi Cai in International journal of computer vision, vol 127 n° 2 (February 2019)
Seamline network generation based on foreground segmentation for orthoimage mosaicking / Li Li in ISPRS Journal of photogrammetry and remote sensing, vol 148 (February 2019)
Bayesian iterative reconstruction methods for 3D X-ray Computed Tomography / Camille Chapdelaine (2019)
Challenging deep image descriptors for retrieval in heterogeneous iconographic collections / Dimitri Gominski (2019)
DataPink, l'IA au service de l'information géographique / Anonyme in Géomatique expert, n° 126 (January - February 2019)
LU-Net, an efficient network for 3D LiDAR point cloud semantic segmentation based on end-to-end-learned 3D features and U-Net / Pierre Biasutti (2019)
Multimodal scene understanding: algorithms, applications and deep learning, ch. 8. Multimodal localization for embedded systems: a survey / Imane Salhi (2019)