Author details
Author: Lin Chen
Available documents written by this author (5)
Deep learning feature representation for image matching under large viewpoint and viewing direction change / Lin Chen in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
[article]
Title: Deep learning feature representation for image matching under large viewpoint and viewing direction change
Document type: Article/Communication
Authors: Lin Chen, Author; Christian Heipke, Author
Year of publication: 2022
Pages: pp 94-112
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] feature extraction
[IGN terms] oblique aerial image
[IGN terms] image orientation
[IGN terms] pattern recognition
[IGN terms] Siamese neural network
[IGN terms] SIFT (algorithm)
Abstract: (author) Feature based image matching has been a research focus in photogrammetry and computer vision for decades, as it is the basis for many applications where multi-view geometry is needed. A typical feature based image matching algorithm contains five steps: feature detection, affine shape estimation, orientation assignment, description and descriptor matching. This paper contains innovative work in different steps of feature matching based on convolutional neural networks (CNN). For the affine shape estimation and orientation assignment, the main contribution of this paper is twofold. First, we define a canonical shape and orientation for each feature. As a consequence, instead of the usual Siamese CNN, only single branch CNNs need to be employed to learn the affine shape and orientation parameters, which turns the related tasks from supervised into self-supervised learning problems, removing the need for known matching relationships between features. Second, the affine shape and orientation are solved simultaneously. To the best of our knowledge, this is the first time these two modules are reported to have been successfully trained together. In addition, for the descriptor learning part, a new weak match finder is suggested to better explore the intra-variance of the appearance of matched features. For any input feature patch, a transformed patch that lies far from the input feature patch in descriptor space is defined as a weak match feature. A weak match finder network is proposed to actively find these weak match features; they are subsequently used in the standard descriptor learning framework. The proposed modules are integrated into an inference pipeline to form the proposed feature matching algorithm. The algorithm is evaluated on standard benchmarks and is used to solve for the parameters of image orientation of aerial oblique images. It is shown that deep learning feature based image matching leads to more registered images, more reconstructed 3D points and a more stable block geometry than conventional methods. The code is available at https://github.com/Childhoo/Chen_Matcher.git.
Record number: A2022-502
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.06.003
Online publication date: 14/06/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.06.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101000
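The descriptor-learning idea in the abstract, treating a transformed positive that lies far away in descriptor space as a "weak match" and feeding it into an otherwise standard descriptor loss, can be sketched as follows. This is only an illustrative PyTorch sketch under that reading, not the authors' released implementation (which is available at https://github.com/Childhoo/Chen_Matcher.git); the function name, the margin value and the hardest in-batch negative mining are assumptions.

```python
# Minimal sketch of the "weak match" idea for descriptor learning (assumed formulation):
# among several transformed versions of the matching patch, the one lying farthest from
# the anchor in descriptor space is used as the positive in a triplet-margin loss.
import torch
import torch.nn.functional as F

def weak_match_triplet_loss(desc_anchor, desc_pos_variants, margin=1.0):
    """
    desc_anchor:       (B, D) L2-normalised descriptors of the anchor patches.
    desc_pos_variants: (B, K, D) descriptors of K transformed versions of each matching patch.
    Returns a scalar triplet-margin loss with hardest in-batch negatives.
    """
    # 1. Weak match selection: for each anchor, take the transformed positive that is
    #    farthest away in descriptor space (largest intra-pair appearance variation).
    d_pos = torch.cdist(desc_anchor.unsqueeze(1), desc_pos_variants).squeeze(1)  # (B, K)
    pos_dist, weak_idx = d_pos.max(dim=1)                                        # (B,)
    weak_pos = desc_pos_variants[torch.arange(desc_anchor.size(0)), weak_idx]    # (B, D)

    # 2. Hardest in-batch negative: closest descriptor belonging to a different feature.
    d_all = torch.cdist(desc_anchor, weak_pos)                                   # (B, B)
    d_all.fill_diagonal_(float('inf'))                                           # exclude the true match
    neg_dist, _ = d_all.min(dim=1)

    # 3. Standard triplet margin loss.
    return F.relu(margin + pos_dist - neg_dist).mean()
```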
in ISPRS Journal of photogrammetry and remote sensing > vol 190 (August 2022), pp 94-112 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2022081 | SL | Journal | Documentation centre | Journals reading room | Available
081-2022083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2022082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Feature detection and description for image matching: from hand-crafted design to deep learning / Lin Chen in Geo-spatial Information Science, vol 24 n° 1 (March 2021)
[article]
Title: Feature detection and description for image matching: from hand-crafted design to deep learning
Document type: Article/Communication
Authors: Lin Chen, Author; Franz Rottensteiner, Author; Christian Heipke, Author
Year of publication: 2021
Pages: pp 58-74
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] automatic matching
[IGN terms] image matching
[IGN terms] shape matching
[IGN terms] machine learning
[IGN terms] deep learning
[IGN terms] feature extraction
[IGN terms] oblique aerial image
[IGN terms] image orientation
[IGN terms] SIFT (algorithm)
Abstract: (author) In feature based image matching, distinctive features in images are detected and represented by feature descriptors. Matching is then carried out by assessing the similarity of the descriptors of potentially conjugate points. In this paper, we first briefly discuss the general framework. Then, we review feature detection as well as the determination of affine shape and orientation of local features, before analyzing feature description in more detail. In the feature description review, the general framework of local feature description is presented first. Then, the review discusses the evolution from hand-crafted feature descriptors, e.g. SIFT (Scale Invariant Feature Transform), to machine learning and deep learning based descriptors. The machine learning models, the training loss and the respective training data of learning-based algorithms are looked at in more detail; subsequently, the various advantages and challenges of the different approaches are discussed. Finally, we present and assess some current research directions before concluding the paper.
Record number: A2021-297
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1080/10095020.2020.1843376
Online publication date: 17/11/2020
Online: https://doi.org/10.1080/10095020.2020.1843376
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97379
in Geo-spatial Information Science > vol 24 n° 1 (March 2021), pp 58-74 [article]
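As a concrete reminder of the hand-crafted baseline from which the review starts, the classic detect-describe-match pipeline can be run in a few lines with OpenCV's SIFT implementation. This is only a minimal sketch of the generic framework, not code from the paper; the image paths and the 0.8 ratio threshold are placeholders.

```python
# Minimal sketch of the hand-crafted pipeline the review describes:
# detect features, compute SIFT descriptors, then match by descriptor similarity
# with Lowe's ratio test. Uses OpenCV; file names are placeholders.
import cv2

img1 = cv2.imread('left.jpg', cv2.IMREAD_GRAYSCALE)    # placeholder paths
img2 = cv2.imread('right.jpg', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)           # feature detection + description
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)                 # two nearest neighbours per descriptor

# Lowe's ratio test: keep a match only if it is clearly better than the second-best candidate.
good = [m for m, n in knn if m.distance < 0.8 * n.distance]
print(f'{len(good)} putative correspondences')
```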
Title: Deep learning for feature based image matching
Document type: Thesis/HDR
Authors: Lin Chen, Author; Christian Heipke, Thesis supervisor
Publisher: Munich: Bayerische Akademie der Wissenschaften
Year of publication: 2021
Series: DGK - C, ISSN 0065-5325, no. 867
Extent: 159 p.
Format: 21 x 30 cm
General note: bibliography
This work is published in parallel in: Wissenschaftliche Arbeiten der Fachrichtung Geodäsie und Geoinformatik der Leibniz Universität Hannover, ISSN 0174-1454, Nr. 369, Hannover 2021
Language: English (eng)
Descriptors: [IGN subject headings] Digital photogrammetry
[IGN terms] image matching
[IGN terms] processing pipeline
[IGN terms] classification by convolutional neural network
[IGN terms] descriptor
[IGN terms] oblique aerial image
[IGN terms] image orientation
[IGN terms] orthoimage
Abstract: (author) Feature based image matching aims at finding matched features between two or more images. It is one of the most fundamental research topics in photogrammetry and computer vision. The matched features are a prerequisite for applications such as image orientation, Simultaneous Localization and Mapping (SLAM) and robot vision. A typical feature based matching algorithm is composed of five steps: feature detection, affine shape estimation, orientation, description and descriptor matching. Today, the employment of deep neural networks has framed those different steps as machine learning problems, and the matching performance has been improved significantly. One of the main reasons why feature based image matching may still prove difficult is the complex change between different images, including geometric and radiometric transformations. If the change between images exceeds a certain level, it will also exceed the tolerance of the aforementioned separate steps and, in turn, cause feature based image matching to fail.
This thesis focuses on improving feature based image matching against large viewpoint and viewing direction change between images. In order to improve the feature based image matching performance under these circumstances, affine shape estimation, orientation and description are solved with deep learning architectures. In particular, Convolutional Neural Networks (CNN) are used. For the affine shape and orientation learning, the main contribution of this thesis is twofold. First, instead of a Siamese CNN, only one branch is needed and the loss is built on geometric measures calculated from the mean gradient or the second moment matrix. Therefore, for each of the input patches, a global minimum, namely the canonical feature, exists. Second, both the affine shape and orientation are solved simultaneously within one network by combining the losses used for affine shape and orientation learning. To the best of the author's knowledge, this is the first time these two modules are reported to have been successfully trained simultaneously. For the descriptor learning part, a new weak match is defined. For any input feature patch, a slightly transformed patch that lies far from the input feature patch in descriptor space is defined as a weak match feature. A weak match finder network is proposed to actively find these weak match features. In a following step, the found weak matches are used in the standard descriptor learning framework. In this way, the intra-variance of the appearance of matched feature patch pairs is explored in depth and, accordingly, the invariance of the feature descriptors against viewpoint and viewing direction change is improved. The proposed feature based image matching method is evaluated on standard benchmarks and is used to solve for the parameters of image orientation. For the image orientation task, aerial oblique images are taken into account. Through analysis of the experiments conducted on small image blocks, it is shown that deep learning feature based image matching leads to more registered images, more reconstructed 3D points and a more stable block connection.
Contents:
1- Introduction
2- Basics
3- Related work
4- Deep learning feature representation
5- Experiments and results
6- Discussion
7- Conclusion and outlook
Record number: 17673
Author affiliation: non IGN
Theme: IMAGERY
Nature: Foreign thesis
Thesis note: PhD dissertation: Fachrichtung Geodäsie und Geoinformatik: Hannover: 2021
Online: https://dgk.badw.de/fileadmin/user_upload/Files/DGK/docs/c-867.pdf
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97999

Context pyramidal network for stereo matching regularized by disparity gradients / Junhua Kang in ISPRS Journal of photogrammetry and remote sensing, vol 157 (November 2019)
[article]
Title: Context pyramidal network for stereo matching regularized by disparity gradients
Document type: Article/Communication
Authors: Junhua Kang, Author; Lin Chen, Author; Fei Deng, Author; Christian Heipke, Author
Year of publication: 2019
Pages: pp 201-215
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] shape matching
[IGN terms] deep learning
[IGN terms] processing pipeline
[IGN terms] classification by convolutional neural network
[IGN terms] feature extraction
[IGN terms] gradient
[IGN terms] computer vision
[IGN terms] stereoscopic vision
Abstract: (author) Even after many years of research, stereo matching remains a challenging task in photogrammetry and computer vision. Recent work has achieved great progress by formulating dense stereo matching as a pixel-wise learning task to be resolved with a deep convolutional neural network (CNN). However, most estimation methods, including traditional and deep learning approaches, still have difficulty handling real-world challenging scenarios, especially those including large depth discontinuities and low texture areas.
To tackle these problems, we investigate a recently proposed end-to-end disparity learning network, DispNet (Mayer et al., 2015), and improve it to yield better results in these problematic areas. The improvements consist of three major contributions. First, we use dilated convolutions to develop a context pyramidal feature extraction module. A dilated convolution expands the receptive field of view when extracting features, and aggregates more contextual information, which allows our network to be more robust in weakly textured areas. Second, we construct the matching cost volume with patch-based correlation to handle larger disparities. We also modify the basic encoder-decoder module to regress detailed disparity images with full resolution. Third, instead of using post-processing steps to impose smoothness in the presence of depth discontinuities, we incorporate disparity gradient information as a gradient regularizer into the loss function to preserve local structure details in large depth discontinuity areas.
We evaluate our model in terms of end-point error on several challenging stereo datasets including Scene Flow, Sintel and KITTI. Experimental results demonstrate that our model decreases the estimation error compared with DispNet on most datasets (e.g. we obtain an improvement of 46% on Sintel) and estimates better structure-preserving disparity maps. Moreover, our proposal also achieves competitive performance compared to other methods.
Record number: A2019-496
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.09.012
Online publication date: 27/09/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.09.012
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93729
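The gradient regularizer described in the abstract can be read as adding a penalty on the difference between the spatial gradients of the predicted and reference disparity maps. A minimal PyTorch sketch under that reading is given below; the forward-difference gradients, the L1 penalties and the weight lam are assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumed form, not the paper's exact loss): supervise the predicted
# disparity with an L1 term and add a regularizer on the disparity gradients so that
# depth discontinuities in the prediction follow those of the ground truth.
import torch
import torch.nn.functional as F

def spatial_gradients(d):
    """Forward differences of a (B, 1, H, W) disparity map along x and y."""
    gx = d[:, :, :, 1:] - d[:, :, :, :-1]
    gy = d[:, :, 1:, :] - d[:, :, :-1, :]
    return gx, gy

def disparity_loss(pred, gt, lam=0.5):
    """L1 disparity error plus an L1 penalty on the gradient difference (weight lam is a guess)."""
    data_term = F.l1_loss(pred, gt)
    pgx, pgy = spatial_gradients(pred)
    ggx, ggy = spatial_gradients(gt)
    grad_term = F.l1_loss(pgx, ggx) + F.l1_loss(pgy, ggy)
    return data_term + lam * grad_term
```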
in ISPRS Journal of photogrammetry and remote sensing > vol 157 (November 2019), pp 201-215 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019111 | RAB | Journal | Documentation centre | In storage L003 | Available
081-2019113 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019112 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Estimation of forest above-ground biomass by geographically weighted regression and machine learning with Sentinel imagery / Lin Chen in Forests, vol 9 n° 10 (October 2018)
[article]
Title: Estimation of forest above-ground biomass by geographically weighted regression and machine learning with Sentinel imagery
Document type: Article/Communication
Authors: Lin Chen, Author; Chunying Ren, Author; Bai Zhang, Author; Zongming Wang, Author; Yanbiao Xi, Author
Year of publication: 2018
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] deciduous tree
[IGN terms] above-ground biomass
[IGN terms] China
[IGN terms] classification by random forests
[IGN terms] classification by neural network
[IGN terms] classification by support vector machines
[IGN terms] multispectral image
[IGN terms] Sentinel-MSI image
[IGN terms] Sentinel-SAR image
[IGN terms] simulation model
[IGN terms] mountain
[IGN terms] geographically weighted regression
[IGN terms] forest monitoring
[IGN terms] image texture
[IGN terms] biophysical variable (vegetation)
Abstract: (author) Accurate forest above-ground biomass (AGB) is crucial for sustaining forest management and mitigating climate change to support REDD+ (reducing emissions from deforestation and forest degradation, plus the sustainable management of forests, and the conservation and enhancement of forest carbon stocks) processes. Recently launched Sentinel imagery offers a new opportunity for forest AGB mapping and monitoring. In this study, texture characteristics and backscatter coefficients of Sentinel-1, in addition to multispectral bands, vegetation indices, and biophysical variables of Sentinel-2, based on 56 measured AGB samples in the center of the Changbai Mountains, China, were used to develop biomass prediction models through geographically weighted regression (GWR) and machine learning (ML) algorithms, such as the artificial neural network (ANN), support vector machine for regression (SVR), and random forest (RF). The results showed that texture characteristics and vegetation biophysical variables were the most important predictors. SVR was the best method for predicting and mapping the patterns of AGB in the study site with limited samples, whose mean error, mean absolute error, root mean square error, and correlation coefficient were 4 × 10−3, 0.07, 0.08 Mg·ha−1, and 1, respectively. Predicted values of AGB from the four models ranged from 11.80 to 324.12 Mg·ha−1, and those for broadleaved deciduous forests were the most accurate, while those for AGB above 160 Mg·ha−1 were the least accurate. The study demonstrated encouraging results in forest AGB mapping of a normally vegetated area using the freely accessible and high-resolution Sentinel imagery, based on ML techniques.
Record number: A2018-478
Author affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.3390/f9100582
Online publication date: 20/09/2018
Online: https://doi.org/10.3390/f9100582
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91180
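The SVR workflow reported as the best performer could look roughly like the scikit-learn sketch below: scale the Sentinel-1/2 predictors, fit an RBF support vector regressor, and tune it with leave-one-out cross-validation, which suits the 56 field plots. The predictor table, the hyper-parameter grid and the random placeholder data are assumptions for illustration only, not the authors' code.

```python
# Minimal sketch of an SVR-based AGB regression (assumed workflow, not the paper's code):
# predict above-ground biomass from a table of Sentinel-1/2 predictors for the field plots.
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(56, 10))          # placeholder predictors (texture, backscatter, indices, ...)
y = rng.uniform(10, 320, size=56)      # placeholder AGB values in Mg/ha

model = make_pipeline(StandardScaler(), SVR(kernel='rbf'))
grid = GridSearchCV(
    model,
    {'svr__C': [1, 10, 100], 'svr__epsilon': [0.1, 1, 5]},
    cv=LeaveOneOut(),                   # leave-one-out suits the small sample size
    scoring='neg_root_mean_squared_error',
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)
```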
in Forests > vol 9 n° 10 (October 2018) [article]