Activity recognition in residential spaces with Internet of things devices and thermal imaging / Kshirasagar Naik in Sensors, vol 21 n° 3 (February 2021)
[article]
Title: Activity recognition in residential spaces with Internet of things devices and thermal imaging
Document type: Article/Communication
Authors: Kshirasagar Naik, Author; Tejas Pandit, Author; Nitin Naik, Author; et al.
Publication year: 2021
Pages: n° 988
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] image understanding
[IGN descriptor terms] monitoring by remote sensing
[IGN descriptor terms] event detection
[IGN descriptor terms] indoor space
[IGN descriptor terms] RGB image
[IGN descriptor terms] thermal image
[IGN descriptor terms] artificial intelligence
[IGN descriptor terms] internet of things
[IGN descriptor terms] iteration
[IGN descriptor terms] stereoscopic model
[IGN descriptor terms] moving object
[IGN descriptor terms] automatic recognition
[IGN descriptor terms] object recognition
[IGN descriptor terms] 3D scene
Abstract: (author) In this paper, we design algorithms for indoor activity recognition and 3D thermal model generation using thermal images and RGB images captured from external sensors in an internet-of-things setup. Indoor activity recognition deals with two sub-problems: human activity recognition and household activity recognition. Household activity recognition includes the recognition of electrical appliances and their heat radiation with the help of thermal images. A FLIR ONE PRO camera is used to capture RGB-thermal image pairs for a scene. The duration and pattern of activities are also determined, using an iterative algorithm, to explore kitchen safety situations. For more accurate monitoring of hazardous events such as stove gas leakage, a 3D reconstruction approach is proposed to determine the temperature of all points in the 3D space of a scene. The 3D thermal model is obtained using the stereo RGB and thermal images for a particular scene. Accurate results are observed for activity detection, and a significant improvement in temperature estimation is recorded for the 3D thermal model compared to the 2D thermal image. Results from this research can find applications in home automation, heat automation in smart homes, and energy management in residential spaces.
Record number: A2021-159
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/s21030988
Online publication date: 02/02/2021
Online: https://doi.org/10.3390/s21030988
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97075
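The household-activity step rests on the idea that an active appliance stands out as a hot region in a thermal frame. A minimal sketch of that idea, assuming a hypothetical fixed temperature threshold and pixel count (the paper's actual pipeline uses the FLIR ONE PRO's calibrated output and richer recognition models):

```python
import numpy as np

# Hypothetical threshold: pixels hotter than this (°C) are treated as part of
# an active heat source such as a stove burner. The value is illustrative only.
HOT_THRESHOLD_C = 60.0

def appliance_active(thermal_c, threshold=HOT_THRESHOLD_C, min_pixels=4):
    """Flag a frame as containing an active appliance.

    thermal_c : 2-D array of per-pixel temperatures in degrees Celsius.
    Returns True when at least `min_pixels` pixels exceed the threshold,
    which suppresses isolated sensor noise.
    """
    hot_mask = thermal_c > threshold
    return int(hot_mask.sum()) >= min_pixels
```

Tracking this flag over successive frames is one simple way to recover the duration and pattern of an activity, as the abstract describes for kitchen-safety monitoring.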
in Sensors > vol 21 n° 3 (February 2021) . - n° 988 [article]

Determination of the under water position of objects by reflectorless total stations / Štefan Rákay in Survey review, vol 53 n°376 (January 2021)
[article]
Title: Determination of the under water position of objects by reflectorless total stations
Document type: Article/Communication
Authors: Štefan Rákay, Author; Slavomír Labant, Author; Karol Bartoš, Author
Publication year: 2021
Pages: pp 35 - 43
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Bathymetry
[IGN descriptor terms] laser bathymetry
[IGN descriptor terms] geometric error
[IGN descriptor terms] bathymetric survey
[IGN descriptor terms] electronic distance measurement
[IGN descriptor terms] object
[IGN descriptor terms] electromagnetic wave
[IGN descriptor terms] water refraction
[IGN descriptor terms] underwater scene
[IGN descriptor terms] electronic total station
[IGN descriptor terms] velocity
Abstract: (author) When surveying through a water surface, a distortion of several centimetres, caused by refraction and the change in the velocity of electromagnetic waves, can be observed. Therefore, neither the position nor the height of an underwater point (object) that can be seen from above the water surface is measured correctly. The authors point out the magnitude of the geometric errors that arise when measuring to points under water, and show how to compute correct underwater positions of points from measurements taken through a water layer. A practical experiment was performed for a water depth of 0.16 m.
Record number: A2021-048
Author affiliation: non IGN
Theme: POSITIONING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/00396265.2019.1683488
Online publication date: 03/11/2019
Online: https://doi.org/10.1080/00396265.2019.1683488
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96783
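The distortion described in the abstract follows from Snell's law: a point seen through the water surface appears shallower than it is. A small sketch of the depth correction, assuming a near-planar water surface and the standard refractive index of water (the function name and the tangent-ratio approximation for oblique rays are mine, not the authors'):

```python
import math

N_WATER = 1.333  # refractive index of water for visible light (approximate)

def refraction_corrected_depth(apparent_depth_m, incidence_deg=0.0, n=N_WATER):
    """True depth of an underwater point from its apparent depth.

    Snell's law, sin(i) = n * sin(r), bends the sight line at the surface,
    so the point looks shallower than it is. For a vertical sight line the
    true depth is simply n times the apparent depth; for an oblique one the
    usual tangent-ratio approximation d_true = d_app * tan(i) / tan(r) applies.
    """
    i = math.radians(incidence_deg)
    if i == 0.0:
        return n * apparent_depth_m
    r = math.asin(math.sin(i) / n)
    return apparent_depth_m * math.tan(i) / math.tan(r)
```

For the 0.16 m water depth of the experiment, a point observed vertically at an apparent depth of about 0.12 m would in fact lie at roughly 0.16 m, i.e. the several-centimetre distortion the abstract mentions.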
in Survey review > vol 53 n°376 (January 2021) . - pp 35 - 43 [article]

Holographic SAR tomography 3-D reconstruction based on iterative adaptive approach and generalized likelihood ratio test / Dong Feng in IEEE Transactions on geoscience and remote sensing, vol 59 n° 1 (January 2021)
[article]
Title: Holographic SAR tomography 3-D reconstruction based on iterative adaptive approach and generalized likelihood ratio test
Document type: Article/Communication
Authors: Dong Feng, Author; Daoxiang An, Author; Leping Chen, Author; et al.
Publication year: 2021
Pages: pp 305 - 315
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN descriptor terms] azimuth angle
[IGN descriptor terms] holography
[IGN descriptor terms] moiré radar image
[IGN descriptor terms] iteration
[IGN descriptor terms] line of sight
[IGN descriptor terms] 3D reconstruction
[IGN descriptor terms] 3D scene
[IGN descriptor terms] radar tomography
Abstract: (author) Holographic synthetic aperture radar (HoloSAR) tomography is an attractive imaging mode that can retrieve the 3-D scattering information of the observed scene over 360° of azimuth angle variation. To improve the resolution and reduce the sidelobes in elevation, the HoloSAR imaging mode requires many passes in elevation, which decreases its feasibility. In this article, an imaging method based on the iterative adaptive approach (IAA) and the generalized likelihood ratio test (GLRT) is proposed for HoloSAR with limited elevation passes, to achieve super-resolution reconstruction in elevation. For the elevation reconstruction in each range-azimuth cell, the proposed method first adopts the nonparametric IAA to retrieve the elevation profile with improved resolution and suppressed sidelobes. Then, to obtain sparse elevation estimates, the GLRT is used as a model-order selection tool to automatically recognize the most likely number of scatterers and obtain the reflectivities of the detected scatterers inside one range-azimuth cell. The proposed method is a super-resolving method. It does not require averaging in range and azimuth, so it maintains the range-azimuth resolution. In addition, it is free of user parameters, so it does not need fine-tuning of any hyperparameters. The super-resolution power and the estimation accuracy of the proposed method are evaluated using simulated data, and its validity and feasibility are verified by HoloSAR real data processing results.
Record number: A2021-034
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2994201
Online publication date: 22/05/2020
Online: https://doi.org/10.1109/TGRS.2020.2994201
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96736
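The IAA step can be sketched for a single range-azimuth cell. Below is a generic single-snapshot IAA spectral estimator, my own minimal NumPy rendering of the published IAA recursion rather than the authors' code; the GLRT model-order selection stage is omitted, and the steering matrix is left as an input:

```python
import numpy as np

def iaa_spectrum(y, A, n_iter=10):
    """Single-snapshot Iterative Adaptive Approach (IAA) power spectrum.

    y : (N,) complex measurements (one sample per elevation pass)
    A : (N, K) steering matrix, one column per candidate elevation bin
    Returns a (K,) array of estimated bin powers.
    """
    N, K = A.shape
    # Matched-filter initialisation of the bin powers
    p = np.abs(A.conj().T @ y) ** 2 / np.sum(np.abs(A) ** 2, axis=0) ** 2
    for _ in range(n_iter):
        # Model covariance R = A diag(p) A^H, lightly regularised for inversion
        R = (A * p) @ A.conj().T + 1e-9 * np.eye(N)
        Rinv = np.linalg.inv(R)
        num = A.conj().T @ (Rinv @ y)                      # a_k^H R^-1 y
        den = np.einsum("nk,nm,mk->k", A.conj(), Rinv, A)  # a_k^H R^-1 a_k
        p = np.abs(num / den) ** 2                         # updated bin powers
    return p
```

With few passes (small N) this recursion yields a much sharper elevation profile than the matched filter it starts from, which is the property the paper exploits before the GLRT prunes the profile down to a sparse set of scatterers.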
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 1 (January 2021) . - pp 305 - 315 [article]

MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds / Haifeng Luo in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
[article]
Title: MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds
Document type: Article/Communication
Authors: Haifeng Luo, Author; Chongcheng Chen, Author; Lina Fang, Author; et al.
Publication year: 2020
Pages: pp 8301 - 8315
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN descriptor terms] deep learning
[IGN descriptor terms] cognition
[IGN descriptor terms] lidar data
[IGN descriptor terms] feature extraction
[IGN descriptor terms] multiple representation
[IGN descriptor terms] urban scene
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] point cloud
Abstract: (author) Semantic segmentation is one of the fundamental tasks in understanding and applying urban scene point clouds. Recently, deep learning has been introduced to the field of point cloud processing. However, compared to images, which are characterized by their regular data structure, a point cloud is a set of unordered points, which makes semantic segmentation a challenge. Consequently, the existing deep learning methods for semantic segmentation of point clouds achieve less success than those applied to images. In this article, we propose a novel method for urban scene point cloud semantic segmentation using deep learning. First, we use homogeneous supervoxels to reorganize raw point clouds to effectively reduce the computational complexity and improve the nonuniform distribution. Then, we use supervoxels as basic processing units, which can further expand receptive fields to obtain more descriptive contexts. Next, a sparse autoencoder (SAE) is presented for feature embedding representations of the supervoxels. Subsequently, we propose a regional relation feature reasoning module (RRFRM) inspired by relation reasoning networks and design a multiscale regional relation feature segmentation network (MS-RRFSegNet) based on the RRFRM to semantically label supervoxels. Finally, the supervoxel-level inferences are transformed into point-level fine-grained predictions. The proposed framework is evaluated on two open benchmarks (Paris-Lille-3D and Semantic3D). The evaluation results show that the proposed method achieves competitive overall performance and outperforms other related approaches in several object categories. An implementation of our method is available at https://github.com/HiphonL/MS_RRFSegNet.
Record number: A2020-738
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2985695
Online publication date: 28/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2985695
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96363
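The first step of the pipeline, reorganizing the raw point cloud into homogeneous groups, can be illustrated with a plain voxel grid. This is a much cruder grouping than the paper's supervoxels (which also respect geometric homogeneity) and is entirely my own sketch:

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Group 3-D points into voxel cells.

    points : (N, 3) array of x, y, z coordinates (metres)
    Returns a dict mapping an integer cell index (i, j, k) to the list of
    point indices that fall in that cell.
    """
    cells = np.floor(points / voxel_size).astype(int)
    groups = {}
    for idx, cell in enumerate(map(tuple, cells)):
        groups.setdefault(cell, []).append(idx)
    return groups
```

Treating each cell as one processing unit, as the paper does with supervoxels, both cuts the number of units the network must label and evens out the nonuniform point density.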
in IEEE Transactions on geoscience and remote sensing > Vol 58 n° 12 (December 2020) . - pp 8301 - 8315 [article]

Parsing very high resolution urban scene images by learning deep ConvNets with edge-aware loss / Xianwei Zheng in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020)
[article]
Title: Parsing very high resolution urban scene images by learning deep ConvNets with edge-aware loss
Document type: Article/Communication
Authors: Xianwei Zheng, Author; Linxi Huan, Author; Gui-Song Xia, Author; Jianya Gong, Author
Publication year: 2020
Pages: pp 15-28
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] region-based classification
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] contour
[IGN descriptor terms] very high resolution image
[IGN descriptor terms] kernel-based method
[IGN descriptor terms] urban scene
[IGN descriptor terms] semantic segmentation
Abstract: (author) Parsing very high resolution (VHR) urban scene images into regions with semantic meaning, e.g. buildings and cars, is a fundamental task in urban scene understanding. However, due to the huge quantity of details contained in an image and the large variations of objects in scale and appearance, existing semantic segmentation methods often break one object into pieces, or confuse adjacent objects, and thus fail to depict these objects consistently. To address these issues uniformly, we propose a standalone end-to-end edge-aware neural network (EaNet) for urban scene semantic segmentation. To preserve semantic consistency inside objects, the EaNet model incorporates a large kernel pyramid pooling (LKPP) module to capture rich multi-scale context with strong continuous feature relations. To effectively separate confusing objects with sharp contours, a Dice-based edge-aware loss function (EA loss) is devised to guide the EaNet to refine both pixel- and image-level edge information directly from the semantic segmentation prediction. In the proposed EaNet model, the LKPP module and the EA loss couple to enable comprehensive feature learning across an entire semantic object. Extensive experiments on three challenging datasets demonstrate that our method can be readily generalized to multi-scale ground/aerial urban scene images, achieving 81.7% mIoU on the Cityscapes test set and a mean F1-score of 90.8% on the ISPRS Vaihingen 2D test set. Code is available at https://github.com/geovsion/EaNet.
Record number: A2020-703
Author affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.09.019
Online publication date: 14/10/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.09.019
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96228
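The Dice-based edge-aware loss combines a soft Dice term with edge maps derived from the label image. A minimal NumPy sketch of both ingredients; the helper names and the simple neighbour-difference edge extractor are mine, and EaNet applies this to GPU tensors with additional weighting:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary target.

    dice = 2 * |P ∩ T| / (|P| + |T|); the loss is 1 - dice, so a perfect
    prediction scores 0 and total disagreement scores close to 1.
    """
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def edge_map(label):
    """Binary edge map: 1 where a pixel differs from its right or lower neighbour."""
    e = np.zeros(label.shape)
    e[:, :-1] += label[:, :-1] != label[:, 1:]
    e[:-1, :] += label[:-1, :] != label[1:, :]
    return (e > 0).astype(float)
```

Applying the Dice term to `edge_map` outputs of the prediction and the ground truth penalizes blurry or displaced object contours directly, which is the mechanism the abstract credits for separating confusing adjacent objects.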
in ISPRS Journal of photogrammetry and remote sensing > vol 170 (December 2020) . - pp 15-28 [article]

Visualization of 3D property data and assessment of the impact of rendering attributes / Stefan Seipel in Journal of Geovisualization and Spatial Analysis, vol 4 n° 2 (December 2020)
Permalink

Weighted spherical sampling of point clouds for forested scenes / Alex Fafard in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 10 (October 2020)
Permalink

X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data / Danfeng Hong in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
Permalink

Geometric distortion of historical images for 3D visualization / Evelyn Paiz-Reyes in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2 (August 2020)
Permalink

Semi-automatic identification of submarine pipelines with synthetic aperture sonar Images / Victor Hugo Fernandes in Marine geodesy, Vol 43 n° 4 (July 2020)
Permalink

An Illumination Insensitive descriptor combining the CSLBP features for street view images in augmented reality: experimental studies / Zejun Xiang in ISPRS International journal of geo-information, vol 9 n° 6 (June 2020)
Permalink

Heuristic sample learning for complex urban scenes: Application to urban functional-zone mapping with VHR images and POI data / Xiuyuan Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)
Permalink

Object-based incremental registration of terrestrial point clouds in an urban environment / Xuming Ge in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)
Permalink

Creation of inspirational Web Apps that demonstrate the functionalities offered by the ArcGIS API for JavaScript / Arthur Genet (2020)
Permalink

Simplicial complexes reconstruction and generalisation of 3d lidar data in urban scenes / Stéphane Guinard (2020)
Permalink