In the last decade, ambulatory cardiac monitoring has witnessed significant growth, driven by technological breakthroughs.
The miniaturization of cardiac monitoring devices has enhanced their form factor, while optimized battery life enables prolonged continuous monitoring. Improved communication methods facilitate near real-time data transfer to clinical teams.
However, this surge in technology adoption has led to a staggering increase in the volume of ambulatory monitoring data.
This data deluge is not confined to the medical industry; it also emanates from the consumer wearable market, with patients increasingly seeking their cardiologists’ insights on data from wrist wearables.
Consequently, the volume of data has outpaced the clinical review capacity of both ambulatory monitoring services and physician practices.
Despite these technological strides, a persistent problem remains: electrocardiogram (ECG) recordings laden with noise.
These disturbances, at best, result in a high volume of unusable information and, at worst, in software misinterpretation that generates an overwhelming number of unnecessary alerts demanding the attention of overburdened clinical experts.
Recent studies reveal that three main types of signal noise persist: motion artifact (MA), baseline wander (BW), and powerline noise (PN).
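Of these three, baseline wander and powerline noise occupy relatively narrow, well-characterized frequency ranges and can often be attenuated with classical filters, while motion artifact overlaps the ECG band itself and is far harder to remove. The sketch below illustrates this with synthetic stand-ins for the signal and the two filterable noise types; the sampling rate, cutoff frequency, and noise amplitudes are illustrative assumptions, not parameters from any particular device.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 360.0  # assumed sampling rate (Hz), typical for ambulatory ECG
t = np.arange(0, 10, 1 / fs)

# Synthetic stand-ins: a crude 1.2 Hz "cardiac" signal plus two noise types
clean = np.sin(2 * np.pi * 1.2 * t)
bw = 0.8 * np.sin(2 * np.pi * 0.2 * t)   # baseline wander: slow drift below ~0.5 Hz
pn = 0.3 * np.sin(2 * np.pi * 60.0 * t)  # powerline noise: 60 Hz mains interference
noisy = clean + bw + pn

# Baseline wander: high-pass Butterworth filter with a 0.5 Hz cutoff
b_hp, a_hp = butter(2, 0.5, btype="highpass", fs=fs)
stage1 = filtfilt(b_hp, a_hp, noisy)

# Powerline noise: narrow notch filter centered on the mains frequency
b_n, a_n = iirnotch(60.0, Q=30.0, fs=fs)
denoised = filtfilt(b_n, a_n, stage1)

# Residual error versus the clean reference shrinks after both stages
residual = np.sqrt(np.mean((denoised - clean) ** 2))
```

A 50 Hz notch would be used in regions with 50 Hz mains. Motion artifact, by contrast, generally cannot be filtered this way and instead motivates segment-level quality scoring, discussed below.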
Clinicians, relying on third-party assistance for ambulatory monitoring programs, may not immediately perceive the gravity of the noise issue.
These third-party services employ clinical experts to sift through data, passing along only the fraction of data that offers meaningful insights. However, the identification of high-quality data within this vast sea of information demands extensive review hours.
As physicians increasingly review data from consumer health wearables without the filtration of third-party services, the extent to which noisy signals cause false positives is becoming starkly apparent.
Numerous solutions have been proposed to tackle the noise issue, ranging from machine learning and artificial intelligence to neural network approaches.
However, AI-based solutions face two challenges: the continuous need for model retraining, and a lingering concern among clinical experts best summarized as, “Tell me again how it works?”
A promising approach gaining traction involves combining non-AI signal processing with a signal quality score. This method yields a reliable quality score with little or no ongoing model training, allowing clinicians to efficiently triage large volumes of data.
By diagnosing conditions from high-quality data segments, clinicians can save substantial time, paving the way for a more efficient and accurate cardiac monitoring landscape.
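As an illustration of what a non-AI quality score can look like, the sketch below computes a simple spectral ratio: the fraction of a segment's power falling in the band where QRS energy is concentrated. The function name, band limits, and example thresholds here are hypothetical choices for demonstration, not the specific scoring method described above.

```python
import numpy as np
from scipy.signal import welch

def spectral_sqi(segment, fs, qrs_band=(5.0, 15.0)):
    """Illustrative quality score: fraction of spectral power inside the
    QRS-dominant band. Near 1 suggests a clean trace; near 0 suggests a
    noise-dominated one. (A hypothetical index, not any vendor's method.)"""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 1024))
    in_band = (freqs >= qrs_band[0]) & (freqs <= qrs_band[1])
    total = psd.sum()
    return float(psd[in_band].sum() / total) if total > 0 else 0.0

fs = 250.0
t = np.arange(0, 8, 1 / fs)
clean = np.sin(2 * np.pi * 10.0 * t)                   # energy inside the band
noisy = np.random.default_rng(0).normal(size=t.size)   # broadband noise

score_clean = spectral_sqi(clean, fs)
score_noisy = spectral_sqi(noisy, fs)
```

In a triage workflow, segments scoring below some threshold would be withheld or deprioritized, so clinical reviewers spend their limited hours on interpretable data.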
The recent surge in ambulatory monitoring has flooded cardiovascular care with data.
Despite strides in technology, persistent electrocardiogram signal noise remains a challenge, leading to misinterpretations and excessive alerts.
Proposed solutions, such as AI applications, encounter obstacles in ongoing training demands.
A promising avenue involves integrating non-AI signal processing with a robust quality scoring methodology, providing clinicians with an effective tool to sift through data and enhance the precision of cardiac monitoring.
Addressing this fundamental challenge of signal quality, I believe, is pivotal to revolutionizing the landscape of remote cardiac care, ensuring a more accurate and efficient approach to cardiovascular health.