
Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds

Abstract: A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing and swarming technologies promises to transform how we think about collective and collaborative cognitive tasks across domains, ranging from research to entertainment, and from therapeutics to military applications. As these tools continue to improve, we are prompted to monitor how they may affect our society on a broader level, but also how they may reshape our fundamental understanding of agency, responsibility, and other key concepts of our moral landscape.

A micro-Doppler spectrogram denoising algorithm for radar human activity recognition

Abstract: Radar signal recognition based on micro-Doppler spectrograms is widely used in human action recognition tasks. In practical scenarios, however, radar signals inevitably contain noise, which deforms the spectrogram structure to varying degrees and degrades the accuracy of subsequent recognition algorithms. In this paper, we present "ACFL", a novel algorithm for micro-Doppler spectrogram denoising that aims to reduce the impact of noise on human action recognition. ACFL employs amplitude–frequency two-dimensional clustering together with a fuzzy-logic cluster-selection mechanism to remove noise elements from the spectrogram. Moreover, to address noise leakage and target loss under time-varying noise and action conditions, ACFL adopts spectrogram segmentation based on short-term Rényi entropy. By dividing the spectrogram into intervals with different time–frequency distributions, dynamic denoising over time is achieved. Experiments on simulated and measured data demonstrate that the proposed algorithm not only produces a higher-quality denoised spectrogram but also significantly improves the accuracy of human action recognition under noisy conditions.
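The core idea of the entropy-based segmentation step can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual ACFL implementation: the toy spectrogram, the entropy order, and the jump threshold are all assumptions chosen for the example. Short-term Rényi entropy is computed per time frame, and the spectrogram is split wherever the entropy jumps, i.e. where the time–frequency statistics change:

```python
import numpy as np

def renyi_entropy(p, alpha=3.0):
    # Rényi entropy of order alpha for an energy distribution (normalized here)
    p = p / p.sum()
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def segment_spectrogram(spec, alpha=3.0, jump=0.5):
    # short-term entropy: one value per time frame (spectrogram column)
    h = np.array([renyi_entropy(spec[:, t], alpha) for t in range(spec.shape[1])])
    # cut where the entropy jumps, i.e. the time-frequency distribution changes
    cuts = np.where(np.abs(np.diff(h)) > jump)[0] + 1
    return h, np.split(np.arange(spec.shape[1]), cuts)

# Toy spectrogram: 32 tone-like (concentrated) frames, then 32 noise-like frames.
tone = np.full(64, 1e-4)
tone[10] = 1.0
spec = np.column_stack([tone] * 32 + [np.ones(64)] * 32)
h, segments = segment_spectrogram(spec)
```

Concentrated frames yield low entropy and noise-like frames high entropy, so the example splits into two intervals that could then be denoised with different parameters.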

A Study on Radar Target Detection Based on Deep Neural Networks

Abstract: Target detection is one of the radar applications most frequently encountered in practice. It can be regarded as a classification problem: deciding whether the signal under test contains an echo from a target (target present) or corresponds to noise alone (target absent). Deep neural networks (DNNs) are a prominent tool for classification, have been applied successfully in many areas of science, and have recently been brought to a range of radar applications by many researchers. However, there has been little research on applying DNNs directly to target detection in radar. In this paper, we analyze a possible application of DNNs to radar target detection, design DNN-based detectors, and demonstrate their performance by comparison with traditional target detectors.
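The detection-as-classification framing above can be illustrated with a minimal NumPy sketch. A single logistic layer stands in here for the paper's DNN detectors, and the synthetic data (Gaussian noise versus a constant-mean target echo) is an assumption for the example, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic samples: H0 = noise only, H1 = target echo + noise.
n, d = 2000, 16
noise = rng.normal(0.0, 1.0, (n, d))          # target absent
echo = rng.normal(0.0, 1.0, (n, d)) + 1.5     # target present (mean shift)
X = np.vstack([noise, echo])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Train a single logistic layer by gradient descent on the log-loss.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # P(target present | signal)
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# Decision: declare "target present" when the predicted probability exceeds 0.5.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
```

The learned classifier plays the role a threshold test plays in a traditional detector; a deeper network would replace the single layer in the same pipeline.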

Wi-CHAR: A WiFi Sensing Approach with Focus on Both Scenes and Restricted Data

Abstract: Significant strides have been made in WiFi-based human activity recognition, yet recent wireless sensing methods still depend on large amounts of data, and most models lose accuracy when evaluated in unfamiliar domains. To address this challenge, this study introduces Wi-CHAR, a novel few-shot-learning-based cross-domain activity recognition system designed to handle both the intricacies of specific sensing environments and the associated data constraints. First, Wi-CHAR dynamically selects sensing devices to mitigate the weakened sensing capability observed in particular regions of a multi-device WiFi ecosystem, thereby improving the fidelity of the sensing data. The MF-DBSCAN clustering algorithm is then applied iteratively to correct anomalies and improve the quality of subsequent behavior recognition. Finally, the Re-PN module dynamically adjusts feature-prototype weights to enable cross-domain activity sensing with limited sample data, distinguishing accurate from noisy samples and thereby easing the identification of new users and environments. Experimental results show an average accuracy above 93% (five-shot) across various scenarios, with good cross-domain results even when the target domain has few data samples. Notably, evaluation on the publicly available WiAR and Widar 3.0 datasets corroborates Wi-CHAR's robust performance, with accuracy rates of 89.7% and 92.5%, respectively. In summary, Wi-CHAR delivers recognition outcomes on par with state-of-the-art methods while accommodating specific sensing environments and data constraints.
Keywords: WiFi sensing; cross-domain; few-shot learning; human activity recognition
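The few-shot, prototype-based classification underlying systems like this can be sketched as follows. This is a plain prototypical-network-style classifier in NumPy with toy data; the Re-PN module's dynamic prototype weighting and the real CSI features are not reproduced here and all names and data are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

def prototypes(support, labels):
    # class prototype = mean of the few labeled support embeddings per class
    return {c: support[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(query, protos):
    # assign the query to the class with the nearest prototype
    classes = sorted(protos)
    dists = [np.linalg.norm(query - protos[c]) for c in classes]
    return classes[int(np.argmin(dists))]

# 2-way 5-shot toy task: class 0 clustered near the origin, class 1 near (3, ..., 3).
d = 8
support = np.vstack([rng.normal(0.0, 1.0, (5, d)), rng.normal(3.0, 1.0, (5, d))])
labels = np.array([0] * 5 + [1] * 5)
protos = prototypes(support, labels)
```

With only five samples per class, the prototypes already suffice to classify new queries, which is what makes the approach attractive when the target domain has little data.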

Internet of Things Meets Brain–Computer Interface: A Unified Deep Learning Framework for Enabling Human-Thing Cognitive Interactivity

Abstract: A brain–computer interface (BCI) acquires brain signals, analyzes them, and translates them into commands that are relayed to actuation devices to carry out desired actions. With the widespread connectivity of everyday devices brought about by the Internet of Things (IoT), a BCI can empower individuals to control objects such as smart home appliances or assistive robots directly via their thoughts. Realizing this vision, however, faces a number of challenges, the most important being the accurate interpretation of the individual's intent from raw brain signals that are often of low fidelity and subject to noise. Moreover, preprocessing brain signals and the subsequent feature engineering are both time-consuming and highly reliant on human domain expertise. To address these issues, in this paper we propose a unified deep-learning-based framework that enables effective human-thing cognitive interactivity, bridging individuals and IoT objects. We design a reinforcement-learning-based selective attention mechanism (SAM) to discover the distinctive features in the input brain signals, and we propose a modified long short-term memory network to distinguish the interdimensional information forwarded by the SAM. To evaluate the efficiency of the proposed framework, we conduct extensive real-world experiments and demonstrate that our model outperforms a number of competitive state-of-the-art baselines. Two practical real-time human-thing cognitive interaction applications are presented to validate the feasibility of our approach.
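The overall signal-to-command pipeline described above can be sketched in a few lines. A nearest-centroid decoder stands in here for the paper's deep model, and the feature vectors, class layout, and command strings are all hypothetical, chosen only to show the flow from decoded intent to IoT action:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mapping from decoded intent class to an IoT device command.
COMMANDS = {0: "light/on", 1: "light/off", 2: "fan/on"}

# Toy "brain-signal feature vectors": one Gaussian cluster per intent class.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
train = np.vstack([rng.normal(c, 0.5, (20, 2)) for c in centers])
train_y = np.repeat([0, 1, 2], 20)

# Nearest-centroid classifier stands in for the deep intent decoder.
centroids = np.array([train[train_y == k].mean(axis=0) for k in range(3)])

def decode_intent(features):
    # pick the intent class whose centroid is nearest, then emit its command
    k = int(np.argmin(np.linalg.norm(centroids - features, axis=1)))
    return COMMANDS[k]
```

In a real system the classifier would be the learned deep model and the emitted string would be published to the relevant IoT actuator; the sketch only shows the shape of that loop.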
