Recognition: 3 theorem links
· Lean Theorem · Triple Spectral Fusion for Sensor-based Human Activity Recognition
Pith reviewed 2026-05-08 18:28 UTC · model grok-4.3
The pith
A triple spectral fusion framework applies adaptive filtering in Fourier, graph Fourier and wavelet domains to fuse IMU sensor data for human activity recognition.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that adaptive filtering applied successively in the Fourier domain (noise suppression), the graph Fourier domain (merging homogeneous and heterogeneous information from a dynamic IMU node graph), and the wavelet domain (context redundancy reduction) produces fused features that identify daily activities more accurately than prior single-domain or non-spectral approaches.
What carries the argument
The triple spectral fusion process, which organizes each IMU's sensors into posture and motion modality nodes, constructs a dynamic heterogeneous graph, and applies adaptive complementary filtering in the Fourier domain, adaptive filtering in the graph Fourier domain, and adaptive wavelet frequency selection.
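As a rough sketch of the first of these stages, a fixed Fourier-domain low-pass filter is shown below. The paper's filter is adaptive, so the `keep_ratio` cutoff and all names here are illustrative stand-ins, not taken from the released code.

```python
import numpy as np

def fourier_lowpass(signal, keep_ratio=0.25):
    """Suppress high-frequency noise by zeroing the top (1 - keep_ratio)
    fraction of rFFT bins of a 1-D signal (a fixed stand-in for the
    paper's adaptive Fourier-domain filter)."""
    spectrum = np.fft.rfft(signal)
    cutoff = max(1, int(len(spectrum) * keep_ratio))
    spectrum[cutoff:] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A slow sine plus noise: the filtered output tracks the sine more
# closely than the noisy input does.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
clean = np.sin(t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
denoised = fourier_lowpass(noisy)
```

In the actual framework the attenuation would be learned per frequency bin rather than a hard cutoff.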
If this is right
- The framework achieves effective fusion of homogeneous and heterogeneous node information through graph Fourier domain filtering.
- Adaptive wavelet frequency selection shortens feature lengths while preserving long-term context correlations.
- The overall method produces superior recognition results across ten standard benchmark datasets compared with previous techniques.
- Organizing sensors into posture and motion modality nodes enables targeted noise suppression before graph-level merging.
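The wavelet bullet above can be pictured with a plain Haar decomposition that keeps only the coarsest subbands. The band count is fixed in this sketch, whereas the paper selects frequencies adaptively; the function names are hypothetical.

```python
import numpy as np

def haar_step(x):
    """One Haar level: returns (approximation, detail), each half-length."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def shorten_features(x, levels=3, keep_bands=2):
    """Decompose `x` and concatenate only the `keep_bands` coarsest
    subbands, shortening the feature vector while keeping slow,
    long-term structure. Assumes len(x) % 2**levels == 0."""
    bands, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        bands.append(detail)
    bands.append(approx)          # coarsest approximation last
    return np.concatenate(bands[-keep_bands:])

features = shorten_features(np.arange(64.0))  # 64 samples -> 16 coefficients
```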
Where Pith is reading between the lines
- The graph construction step may reveal activity-specific sensor correlation patterns that could be inspected to improve sensor placement guidelines.
- Because feature length is reduced, the approach could support lower-latency inference on wearable devices if computational overhead remains modest.
- The same sequence of domain-specific filters might transfer to other multi-sensor time-series tasks such as gesture recognition or fall detection.
Load-bearing premise
That adaptive filtering across the three spectral domains will merge posture, motion and context signals from IMUs while suppressing noise and redundancy without discarding critical activity information or creating artifacts.
What would settle it
If re-running the experiments on the ten benchmark datasets yields no consistent gains in accuracy or F1 score over strong baseline methods, or if the filtered signals contain visible distortions that alter activity patterns, the claimed benefit of the triple-domain approach would be refuted.
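For reference, the macro-F1 score this falsification test invokes is the unweighted mean of per-class F1 scores over the activity labels:

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores, a standard HAR metric
    for imbalanced activity classes."""
    scores = []
    for c in set(y_true) | set(y_pred):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(scores) / len(scores)
```

A consistent drop in this metric relative to strong baselines across the ten datasets is what the refutation described above would look for.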
Original abstract
The field of sensor-based human activity recognition (HAR) mainly uses posture, motion and context data of Inertial Measurement Units (IMUs) to identify daily activities. Despite the advancements in learning-based methods, it is challenging to perform information fusion from the temporal perspective due to the complexities in fusing heterogeneous sensor data and establishing long-term context correlations. This paper proposes a novel triple spectral fusion framework tailored for HAR. First, we develop an adaptive complementary filtering technique for noise suppression and organize each IMU's sensors into posture and motion modality nodes. Given that IMU nodes form a dynamic heterogeneous graph, we then apply adaptive filtering within the graph Fourier domain to merge both homogeneous and heterogeneous node information. Furthermore, an adaptive wavelet frequency selection approach is implemented to suppress context redundancy and shorten the length of features. This approach enhances both timestamp-based graph aggregation and the correlation of long-term contexts. Our framework uses adaptive filtering in the Fourier, graph Fourier, and wavelet domains, enabling effective multi-sensor fusion and context correlation. Extensive experiments on ten benchmark datasets demonstrate the superior performance of our framework. Project page: https://github.com/crocodilegogogo/TSF-TPAMI2026.
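The abstract's "adaptive complementary filtering" presumably builds on the classic complementary filter for IMU orientation, which blends an integrated gyroscope rate (reliable at high frequency) with an accelerometer tilt estimate (reliable at low frequency). A minimal fixed-weight sketch, with `alpha` standing in for whatever quantity the paper adapts:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope angular rates (rad/s) with accelerometer-derived
    tilt angles (rad). A fixed alpha weights the high-frequency gyro
    path against the low-frequency accelerometer path."""
    angle = accel_angles[0]
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
        estimates.append(angle)
    return estimates
```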
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes a Triple Spectral Fusion (TSF) framework for sensor-based Human Activity Recognition using IMU data. It organizes sensors into posture and motion modality nodes, applies adaptive complementary filtering for noise suppression, performs adaptive filtering in the graph Fourier domain on a dynamic heterogeneous graph to merge homogeneous and heterogeneous information, and uses adaptive wavelet frequency selection to suppress context redundancy while enhancing long-term correlations. The central claim is that this multi-domain spectral approach enables effective multi-sensor fusion and yields superior performance on ten benchmark datasets.
Significance. If the experimental results and implementation details hold, the framework offers a potentially useful spectral-domain perspective on fusing heterogeneous IMU modalities and managing long-term context in HAR, which could complement existing learning-based methods. The explicit provision of a GitHub project page supports reproducibility, a strength that aids verification of the adaptive filtering steps across Fourier, graph Fourier, and wavelet domains.
major comments (2)
- [Abstract] The assertion of 'superior performance' on ten benchmark datasets is presented without quantitative metrics, baseline comparisons, ablation studies, or statistical tests. This absence is load-bearing for the central claim that triple spectral fusion enables effective fusion and context correlation.
- [Abstract] No equations, pseudocode, or parameter definitions are supplied for the adaptive complementary filtering, the graph Fourier filtering on posture/motion nodes, or the wavelet frequency selection. Without these, it is impossible to verify whether the approach avoids signal loss or artifacts, as implicitly assumed at the weakest point of the framework.
minor comments (2)
- [Abstract] The manuscript would benefit from a brief overview of the ten datasets (e.g., names, sensor counts, activity classes) to contextualize the claimed generality.
- [Abstract] The link to the project page is helpful; the manuscript should explicitly state what code and hyperparameters are released to support the reproducibility claim.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address each major comment point-by-point below, providing clarifications from the full text and proposing targeted revisions to the abstract where they strengthen the presentation without misrepresenting the work.
Point-by-point responses
Referee: The assertion of 'superior performance' on ten benchmark datasets is presented without any quantitative metrics, baseline comparisons, ablation studies, or statistical tests. This absence is load-bearing for the central claim that the triple spectral fusion enables effective fusion and context correlation.
Authors: We agree that the abstract would be strengthened by including quantitative support for the performance claim. The full manuscript (Section 4) reports results across all ten datasets with direct comparisons to multiple state-of-the-art baselines, component-wise ablation studies, and statistical tests including paired t-tests for significance. To address the concern, we will revise the abstract to incorporate key aggregate metrics (e.g., average accuracy and F1-score gains over the strongest baseline) while preserving conciseness. revision: yes
Referee: No equations, pseudocode, or parameter definitions are supplied for the adaptive complementary filtering, graph Fourier filtering on posture/motion nodes, or wavelet frequency selection. Without these, it is impossible to verify whether the approach avoids signal loss or artifacts as implicitly assumed in the weakest point of the framework.
Authors: The abstract is intentionally high-level and does not contain equations or pseudocode, which is standard. The complete formulations appear in the manuscript body: adaptive complementary filtering equations and parameters in Section 3.2; graph Fourier adaptive filtering on the dynamic heterogeneous posture/motion graph (including the filtering operator and node merging) in Section 3.3; and adaptive wavelet frequency selection criteria with pseudocode in Section 3.4. The adaptive design explicitly selects frequencies and nodes to retain signal energy and long-term correlations, with this property validated via ablations and visualizations in the experiments. We can add a concise reference to these sections in the revised abstract. revision: partial
Circularity Check
No significant circularity in the derivation chain
full rationale
The paper describes a triple spectral fusion framework using adaptive filtering in Fourier, graph Fourier, and wavelet domains on IMU posture/motion nodes. No equations, self-definitions, fitted parameters renamed as predictions, or self-citation chains are present in the provided abstract or description that reduce any claim to its inputs by construction. Performance is asserted via experiments on ten independent benchmark datasets, providing external validation rather than internal tautology. The framework is presented as an independent construction without load-bearing reductions to ansatz or prior self-work.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- IndisputableMonolith.Cost (J(x) = ½(x + x⁻¹) − 1) · washburn_uniqueness_aczel · tagged unclear
  unclear: relation between the paper passage and the cited Recognition theorem.
  Cited passage: F_L = I + D^{-1/2} A D^{-1/2} = 2I − L, F_H = I − D^{-1/2} A D^{-1/2} = L. ... Ỹ = α_L F_L X + α_H F_H X with α_L + α_H = 1.
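In code, the quoted low- and high-pass graph filters amount to the following, with a hypothetical adjacency matrix `A`, node-feature matrix `X`, and a fixed `alpha_low` standing in for the learned α_L (α_H = 1 − α_L):

```python
import numpy as np

def graph_filter(X, A, alpha_low=0.7):
    """Ytilde = a_L * F_L @ X + a_H * F_H @ X, where
    F_L = I + D^{-1/2} A D^{-1/2} = 2I - L and
    F_H = I - D^{-1/2} A D^{-1/2} = L (normalized Laplacian)."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    S = d_inv_sqrt @ A @ d_inv_sqrt          # D^{-1/2} A D^{-1/2}
    I = np.eye(len(A))
    return alpha_low * ((I + S) @ X) + (1 - alpha_low) * ((I - S) @ X)
```

A quick sanity check on the algebra: with alpha_low = 0.5 the two filters cancel to the identity, so the output equals X.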
What do these tags mean?
- matches: the paper's claim is directly supported by a theorem in the formal canon.
- supports: the theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: the paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: the paper appears to rely on the theorem as machinery.
- contradicts: the paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.