<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.3.4">Jekyll</generator><link href="https://adammj.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://adammj.com/" rel="alternate" type="text/html" /><updated>2025-10-18T09:29:40-07:00</updated><id>https://adammj.com/feed.xml</id><title type="html">Curiosity-Fueled</title><subtitle>Curiosity-Fueled</subtitle><author><name>Adam Jones</name></author><entry><title type="html">Accepted manuscript</title><link href="https://adammj.com/blog/accepted-manuscript/" rel="alternate" type="text/html" title="Accepted manuscript" /><published>2025-08-23T00:00:00-07:00</published><updated>2025-08-23T00:00:00-07:00</updated><id>https://adammj.com/blog/accepted-manuscript</id><content type="html" xml:base="https://adammj.com/blog/accepted-manuscript/"><![CDATA[<p>Now that the 12 month embargo period has passed, I can share the “accepted manuscript” version of the paper. This is identical, except in formatting, to the “published journal article” version of the paper.</p>

<ul>
  <li><a href="/assets/files/accepted_manuscript.pdf">accepted manuscript version</a></li>
</ul>

<p>FYI: The article sharing policies for Elsevier are here: <a href="https://www.elsevier.com/about/policies-and-standards/sharing#1-quick-definitions">Article Sharing</a>.</p>

<p>Originally posted on <a href="https://cardiosomnography.com/blog/">cardiosomnography.com</a>.</p>]]></content><author><name>Adam Jones</name></author><category term="sleep_paper" /><category term="cardiosomnography" /><summary type="html"><![CDATA[Now that the 12 month embargo period has passed, I can share the “accepted manuscript” version of the paper. This is identical, except in formatting, to the “published journal article” version of the paper. accepted manuscript version FYI: The article sharing policies for Elsevier are here: Article Sharing. Originally posted on cardiosomnography.com.]]></summary></entry><entry><title type="html">Clock-aligned results</title><link href="https://adammj.com/blog/clock-time/" rel="alternate" type="text/html" title="Clock-aligned results" /><published>2025-02-09T00:00:00-08:00</published><updated>2025-02-09T00:00:00-08:00</updated><id>https://adammj.com/blog/clock-time</id><content type="html" xml:base="https://adammj.com/blog/clock-time/"><![CDATA[<p>I’ve had a few discussions in the last month on using the model to score data during the daytime for both a follow-up scientific study as well as personal research.</p>

<h3 id="papers-dataset">Paper’s dataset</h3>

<p>As I detailed in the Methods of the paper, the data came from 5 different sleep datasets, with the only “time” requirement being that each recording was 5 to 15 hours long. As a consequence, almost all of the data was recorded during the typical nighttime period.</p>

<p>This bias toward nighttime recordings (across the training/validation/testing sets) is easiest to see in the clock-aligned stage-time summary (figure below, top-right panel). The gray dashed line shows the percent of recordings present at any given period of the night. A handful of recordings extend beyond the window shown, but with only a few recordings remaining in the sample, the stage ratios fluctuate too wildly to be meaningful.</p>

<p>Additionally, in the paper’s Supplementary Information, I showed the performance when stratifying the epochs across time, aligned to the beginning of each recording (figure below, lower-left panel). When the recordings are instead clock-aligned, the performance is nearly identical. Of note, the lower performance for REM at the beginning of the night and for N3 at the end of the night is primarily a function of the lower prevalence of those stages during those periods (see top-right panel). Otherwise, the performance is relatively consistent across time.</p>

<div style="text-align: center;">
<b>stage summary and results</b><br />
<img src="/assets/images/time_of_day_perf.png" alt="Stage summary and clock-aligned results" class="img-fluid" />
<div style="font-size: 90%">
The panels on the left are from the paper (Fig. 1d, and Supplementary Fig. S5a). The panels on the right are the same data, but clock-aligned. The clock-aligned data stops once there are fewer than 5 recordings in a period.
</div>
</div>
<p><br /></p>

<h3 id="future-improvements">Future improvements</h3>

<p>If the model is going to be used outside the normal nighttime window, then it will need to be trained and tested during those periods. To that end, I plan to make some changes in the future.</p>

<ol>
  <li>
    <p><strong>Find more ECG sleep data to “fill out” the clock.</strong></p>

    <ul>
      <li>I won’t be able to change the current dataset, but this could be a second revision to the dataset. The point here would be to have every hour in a 24-hr day covered with some data, even if the majority of the data is still during the nighttime.</li>
    </ul>
  </li>
  <li>
    <p><strong>Adjust the clock input for the model.</strong></p>

    <ul>
      <li>
        <p>The current input for the clock time is a single number representing the signed number of days from the nearest midnight (i.e., the “midnight offset”). This means the values typically range from -0.5 (the preceding noon) to 0.5 (the following noon).</p>
      </li>
      <li>
        <p>To prevent tricky discontinuities, I’m going to change this to two numbers representing a circular encoding of the time. This looks like (cos(2*pi*t), sin(2*pi*t)), where t is still the same “midnight offset” value used previously.</p>
      </li>
    </ul>
  </li>
</ol>
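<p>A minimal sketch of this circular encoding (illustrative only, not the model’s actual input pipeline) could look like:</p>

```python
import math

def encode_clock(t: float) -> tuple[float, float]:
    """Circularly encode a "midnight offset" t, in days from the
    nearest midnight (typically -0.5 to 0.5).

    Unlike the raw scalar offset, the (cos, sin) pair is continuous
    across midnight and repeats exactly every integer day.
    """
    angle = 2 * math.pi * t
    return (math.cos(angle), math.sin(angle))

# Offsets one full day apart map to (numerically) the same encoding,
# whereas the raw offsets would look maximally different:
a = encode_clock(0.25)
b = encode_clock(1.25)
assert all(math.isclose(x, y, abs_tol=1e-9) for x, y in zip(a, b))
```

<p>This also makes the day-periodicity explicit: any two offsets a whole number of days apart receive identical inputs.</p>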

<h3 id="time-manipulation">Time manipulation</h3>

<p>To drive home the need for a better clock-time encoding, I shifted the start times of the recordings in the testing set. In the figure below, you can see that the performance (and the predicted sleep stage ratios) is relatively constant within the expected normal midnight offset range [-0.5, 0.5]. Beyond that, however, it changes drastically. Note that at every integer day forward or backward the performance <strong>should be</strong> the same (since the clock time is again the same).</p>

<div style="text-align: center;">
<b>time shift results</b><br />
<img src="/assets/images/time_shift_results.png" alt="Time shift results" class="img-fluid" />
<div style="font-size: 90%">
The midnight offset for each of the recordings in the testing set was shifted either forward or backward in time. On the left is the performance. On the right is the predicted sleep stage ratio.
</div>
</div>
<p><br /></p>

<p>Originally posted on <a href="https://cardiosomnography.com/blog/">cardiosomnography.com</a>.</p>]]></content><author><name>Adam Jones</name></author><category term="sleep_paper" /><category term="plans" /><category term="sleep_staging_model" /><category term="cardiosomnography" /><summary type="html"><![CDATA[I’ve had a few discussions in the last month on using the model to score data during the daytime for both a follow-up scientific study as well as personal research. Paper’s dataset As I detailed in the Methods of the paper, the data came from 5 different sleep datasets, with the only “time” requirement being that the recording was from 5 to 15hrs long. As a consequence, almost all of the data was recorded during the typical nighttime period. This bias for the nighttime recordings (for the training/validation/testing sets) can more easily be visualized by looking at the clock-aligned stage-time summary (figure below, top-right panel). The gray dashed line shows the percent of recordings for any given period of the night. There are a handful of recordings that extend beyond the window shown, but the stage ratios fluctuate too wildly with only a few recordings remaining in the sample. Additionally, in the paper’s Supplementary Information, I showed the performance when stratifying the epochs across time (when aligned to the beginning of every recording) (figure below, lower-left panel). When the recordings are clock-aligned, we can see that performance is nearly identical. Of note, the lower performance for REM at the beginning of the night and N3 at the end of the night are primarily a function of the lower prevalence of those stages during those periods (see top-right panel). Otherwise, the performance is relatively consistent across time. stage summary and results The panels on the left are from the paper (Fig. 1d, and Supplementary Fig. S5a). The panels on the right are the same data, but clock-aligned. 
The clock-aligned data stops once there are fewer than 5 recordings in a period. Future improvements If the model is going to be used outside the normal nighttime window, then it will need to be trained and tested during those periods. To that end, I plan to make some changes in the future. Find more ECG sleep data to “fill out” the clock. I won’t be able to change the current dataset, but this could be a second revision to the dataset. The point here would be to have every hour in a 24-hr day covered with some data, even if the majority of the data is still during the nighttime. Adjust the clock input for the model. The current input for the clock time is a single number representing the number of days +/- the nearest midnight (i.e, the “midnight offset”). This would typically mean the values range from -0.5 (noon before) to 0.5 (noon after). To prevent tricky discontinuities, I’m going to change this to 2 numbers which represent the circular encoded time. This would look like (cos(2*pi*t), sin(2*pi*t)), where t would be still be the same “midnight offset” value used previously. Time manipulation To drive home the point about the need for a better encoding for the clock time, I shifted the testing set recording start times. In the figure below, you can see that the performance (and predicted sleep stage ratios) is relatively constant in the expected normal midnight offset range [-0.5, 0.5]. However, beyond that, it changes drastically. Note that at every integer day forward or backward the performance should be the same (since the clock time is again the same). time shift results The midnight offset for each of the recordings in the testing set was shifted either forward or backward in time. On the left is the performance. On the right is the predicted sleep stage ratio. 
Originally posted on cardiosomnography.com.]]></summary></entry><entry><title type="html">Intentions for the blog</title><link href="https://adammj.com/blog/intentions_for_the_blog/" rel="alternate" type="text/html" title="Intentions for the blog" /><published>2024-12-10T00:00:00-08:00</published><updated>2024-12-10T00:00:00-08:00</updated><id>https://adammj.com/blog/intentions_for_the_blog</id><content type="html" xml:base="https://adammj.com/blog/intentions_for_the_blog/"><![CDATA[<p>In reflecting on my research journey so far, I realized that, just after my innate curiosity, the thing that has most consistently driven me to figure something out or make something has been the chance to share it. This is one reason why I’ve enjoyed conference presentations: not only do they have a firm deadline, but I also get to polish up everything I’ve been working on and show it to an audience (not that I mind too much showing my works in progress, with their rough edges and wires sticking out). However, there’s a lot of the journey (the trials and errors, the dead ends, and the elegant solutions to some incremental-but-necessary step) that just won’t make it into the final, succinct story.</p>

<p>That’s what I think I want to share here: the incremental steps along the way. Obviously, I won’t be able to share everything I work on. And even for the things I can share, there will be times I’ll have to delay sharing—such as crucial details about a paper I’m currently working on. In those cases, I’m going to try writing posts along the way and making them visible when the time comes, so that others, when reading the paper, can get a more complete story than a “tight” Methods section provides.</p>

<p>Since the cardiosomnography project has been going on for quite a long time, and will continue into the foreseeable future, I will be crossposting everything from <a href="https://cardiosomnography.com">cardiosomnography.com</a> here as well. I’ve also included posts from a short-lived blog about personal knowledge bases (PKBs). It’s a topic I’ve been interested in for quite some time, but I just haven’t found the time to make real progress on the ideas that I think are still lacking in today’s tools.</p>

<p>Finally, I’m still deciding on a post cadence.</p>]]></content><author><name>Adam Jones</name></author><category term="blog" /><category term="plans" /><summary type="html"><![CDATA[In reflecting back on my research journey so far, I realized that just after my innate curiosity, one aspect that has most consistently driven me towards figuring something out or making something was to share it. This is one reason why I’ve enjoyed conference presentations. Not only because they have a firm deadline, but also because I get to polish up everything I’ve been working on and show it to an audience (not that I mind too much showing my works in progress, with their rough edges and wires sticking out). However, there’s a lot of the journey, the trials and errors, the dead ends, and the elegant solutions to some incremental-but-necessary step, that just won’t make it into the final, succinct story. That’s what I think I want to share here: the incremental steps along the way. Obviously, not everything I work on will I be able to share. And, even for the things that I can share, there will be times I’ll have to delay sharing—such as crucial details about a paper I’m currently working on. However, in that case, I’m going to try writing posts along the way, and then make them visible when the time comes. So that others, when reading the paper, can get a more complete story than a “tight” Methods section provides. As a project that’s been going on for quite a long time, and into the foreseeable future, I will be crossposting everything from cardiosomnography.com here as well. I’ve also included posts from a short-lived blog about personal knowledge bases (PKBs). It’s a topic that I’ve been interested in for quite some time, but just haven’t found the time to make real progress on the ideas that I think are still lacking in the tools of today. 
Finally, I’m still deciding on a post cadence.]]></summary></entry><entry><title type="html">History and future plans for CSG</title><link href="https://adammj.com/blog/csg-history-and-future/" rel="alternate" type="text/html" title="History and future plans for CSG" /><published>2024-12-09T00:00:00-08:00</published><updated>2024-12-09T00:00:00-08:00</updated><id>https://adammj.com/blog/csg-history-and-future</id><content type="html" xml:base="https://adammj.com/blog/csg-history-and-future/"><![CDATA[<p>Below is the history of and my future plans for cardiosomnography (CSG). I will try to keep this post evergreen.</p>

<p><br /></p>

<h2 id="2009-2013">2009-2013</h2>

<p>My self-initiated research began in the fall of 2009, and over the next 4 years I developed several devices, software, and an iPhone app.</p>

<div style="font-size: 1.125em">
<b>2009</b></div>

<p>I started meditating and immediately began looking for an objective way to track my progress. In December, I built my first device for amplifying the heartbeat intervals using an off-the-shelf <a href="https://en.wikipedia.org/wiki/Photoplethysmogram">photoplethysmography</a> (PPG) fingertip sensor that was connected to a laptop and analyzed with code I wrote in <a href="https://en.wikipedia.org/wiki/LabVIEW">LabVIEW</a>.</p>

<div style="font-size: 1.125em">
<b>2010</b></div>

<p>Not content to be tethered to my laptop, and annoyed by how noisy PPG is, I soon built a biofeedback device using a microcontroller board (Logomatic v2), a screen (<a href="https://newhavendisplay.com/content/app_notes/ST7565.pdf">ST7565</a>), and a Polar receiver (<a href="https://www.sparkfun.com/datasheets/Wireless/General/RMCM01.pdf">RMCM01</a>) to wirelessly receive heartbeats from a <a href="https://pimage.sport-thieme.com/facebook-open-graph/146-1262">Polar HR T31 strap</a> (which sends a 5 kHz radio pulse on each beat).</p>

<div style="text-align: center;">
<b>"the device" (sorry, never named it)</b><br />
<img src="/assets/images/device_movie.gif" alt="The device in action" class="img-fluid" />
<div style="font-size: 90%">
This short movie shows the device in action while I was wearing the HR strap. This early version of the interface lacked some of the later features.
</div>
</div>
<p><br /></p>

<h3 id="2010-sleep-interest">2010 sleep interest</h3>
<p>That summer, I discovered that when I wore the strap while sleeping, I could see clear transitions and cycles. At this point I knew nothing about sleep stages or the research on them (and sleep wasn’t in the zeitgeist yet). However, this marked the beginning of my interest in sleep stages, and in the idea of measuring them with the heart.</p>

<div style="text-align: center;">
<b>Recording HRV while sleeping</b>
<img src="/assets/images/sleeping_hrv.png" alt="HRV recorded while sleeping" class="img-fluid" />
<div style="font-size: 90%">
This was recorded the night of July 11, 2010. I transformed the RR intervals into the frequency domain, and divided it into slices (Low: 0.01-0.11 Hz; Medium: 0.12-0.20 Hz; High: 0.15-0.40 Hz). You can see periods of time where one slice makes up the plurality of total power, with sharp transitions between these periods.
</div>
</div>
<p><br /></p>
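<p>The slicing described in the caption amounts to computing, per window, the fraction of total spectral power falling in each band. A toy sketch (a naive DFT periodogram on an evenly resampled RR series; the original LabVIEW analysis is not shown here, and the band edges below simply mirror the caption, overlaps included):</p>

```python
import cmath
import math

def band_fractions(rr_resampled, fs, bands):
    """Fraction of total spectral power in each frequency band.

    rr_resampled: RR-interval series resampled to a uniform rate fs (Hz).
    bands: dict of name -> (lo_hz, hi_hz). A naive DFT periodogram is
    used purely for illustration; a real analysis would use Welch's
    method or similar.
    """
    n = len(rr_resampled)
    mean = sum(rr_resampled) / n
    x = [v - mean for v in rr_resampled]  # remove DC so it doesn't dominate
    power = {}
    total = 0.0
    # One-sided periodogram at frequencies k * fs / n
    for k in range(1, n // 2):
        f = k * fs / n
        X = sum(x[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
        p = abs(X) ** 2
        total += p
        for name, (lo, hi) in bands.items():
            if lo <= f < hi:
                power[name] = power.get(name, 0.0) + p
    return {name: power.get(name, 0.0) / total for name in bands}

# Synthetic example: a 0.25 Hz (RSA-like) oscillation riding on a
# 0.9 s mean RR interval, resampled at 4 Hz
fs = 4.0
sig = [0.9 + 0.05 * math.sin(2 * math.pi * 0.25 * i / fs) for i in range(256)]
bands = {"low": (0.01, 0.11), "medium": (0.12, 0.20), "high": (0.15, 0.40)}
fracs = band_fractions(sig, fs, bands)
```

<p>On this synthetic signal, essentially all of the power lands in the “high” slice, mirroring how RSA-dominated periods show up in the figure above.</p>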

<p>I also began recruiting beta testers among my friends and coworkers. I wanted to see what the variations in data looked like, and how robust the device and algorithms were. At the time, I only had a single device, so this meant loaning it out each time.</p>

<p>Not content with the low-resolution monochromatic screen, and the unnecessary bulk of my pocketable (but still too-large) device, I decided to build an iPhone app.</p>
<ul>
  <li>
    <p><strong>Plan A:</strong> To keep things simple, I stuck with the 5 kHz strap+receiver. I designed my first manufactured PCB to attach to the iPhone’s <a href="https://pinoutguide.com/PortableDevices/ipod_pinout.shtml">30-pin dock connector</a>.</p>

    <p>I had thought about using the connector’s audio input to receive the voltage blips from the receiver IC directly. However, I wasn’t yet confident that I wouldn’t miss any of the blips. So, I had a real-time microcontroller act as the middleman.</p>

    <div style="text-align: center;">
<b>iPhone heartbeat receiver dongle</b>
<img src="/assets/images/iphone_dongle.jpg" alt="iPhone heartbeat receiver dongle" class="img-fluid" />
<div style="font-size: 90%">
Two sides of the dongle. The large footprint on the left was for the receiver (<a href="https://www.sparkfun.com/datasheets/Wireless/General/RMCM01.pdf">RMCM01</a>), and the smaller dense footprint was for a little microcontroller (<a href="https://www.nxp.com/docs/en/data-sheet/LPC111X.pdf">LPC1114</a>) for sensing the voltage blips from the receiver and communicating with the iPhone through the 30-pin connector (pads on left edges).
</div>
</div>
    <p><br /></p>

    <p>Unfortunately, when I received the PCBs, I realized they were too thick for the connector’s legs 😭 (which were meant to straddle the top and bottom). However, in the weeks it took between uploading the design files and the finished board arriving, I had set my sights on an even better idea…</p>
  </li>
  <li>
    <p><strong>Plan B:</strong> The <a href="https://en.wikipedia.org/wiki/IPhone_4">iPhone 4</a> (<a href="https://www.wired.com/2010/06/iphone-4-holding-it-wrong/">“you’re holding it wrong”</a>) was released that year and had Bluetooth 2.1. I also stumbled upon one of the first Bluetooth HR straps, the <a href="https://simpleeye.com/wp-content/uploads/2011/10/ZephyrHxM.jpg">Zephyr HxM Bluetooth</a>. The final missing piece was a third-party Bluetooth API for jailbroken phones (I can’t recall the library’s name), since Apple didn’t yet give developers an API to access Bluetooth.</p>

    <p>A year or so later, I found two companies selling headphone dongles that received the 5 kHz pulse directly. So, even though the Bluetooth strap was my preferred input, I designed my app to let the user select either a Bluetooth strap or a “traditional” 5 kHz strap with a separately purchased headphone dongle.</p>
  </li>
</ul>

<div style="font-size: 1.125em">
<b>2011-2013</b></div>

<p>Over the next several years, I quickly learned the ropes of app development and the programming best practices I hadn’t yet picked up (having taught myself <a href="https://en.wikipedia.org/wiki/GW-BASIC">GW-BASIC</a> around 1989 and learned a dozen more languages along the way).</p>

<div style="text-align: center;">
<b>iPhone app, spectrogram view</b>
<img src="/assets/images/iphone_spectrogram.png" alt="iPhone app, spectrogram view" class="img-fluid" />
<div style="font-size: 90%">
From August 2011. I spent a lot of time making this view easy to navigate (with all the fun pan and zoom gestures), while also making sure to prevent any unnecessary calculations for data that wasn't visible.
</div>
</div>
<p><br /></p>

<p>I began attending local iPhone developer meetups and recruiting more beta testers. Now the testers just needed an iPhone (I mailed one to the lone Android user), and I could loan them one of several straps.</p>

<p><br /></p>

<h2 id="2014-2018">2014-2018</h2>

<p>When I began working on a BS in psychology at the <a href="https://www.uh.edu/">University of Houston</a>, this research expanded into a collaboration with <a href="https://www.ece.uh.edu/faculty/sheth">Dr. Bhavin R. Sheth</a>. We decided to tackle scoring sleep using ECG from a small dataset (n=63) from the Veterans Affairs (VA).</p>

<div style="font-size: 1.125em">
<b>2014-2015</b></div>

<p>While I certainly learned a lot of analysis techniques during my BS and MS in mechanical engineering, I now had to teach myself <a href="https://en.wikipedia.org/wiki/Machine_learning">machine learning</a> and <a href="https://en.wikipedia.org/wiki/Robust_statistics">robust statistics</a>.</p>

<p>At the time, I was converting the ECG recordings into RR intervals. This is when I found out that algorithms for doing this are… to be kind, not very robust. Every single algorithm and tool I could get my hands on required manual intervention and annotation on all but the most clean and pristine data; none of them handled noise gracefully. Thus began a side project of building a much more robust heartbeat (<a href="https://en.wikipedia.org/wiki/QRS_complex">R wave</a>) detector.</p>

<p>Developing my new robust heartbeat algorithm took on even more importance when I stumbled upon the National Sleep Research Resource (<a href="https://www.sleepdata.org">NSRR</a>) and the Sleep Heart Health Study (<a href="https://www.sleepdata.org/datasets/shhs">SHHS</a>) dataset, which took us from 63 recordings to over 8,000. There was no way on earth I could babysit even the best of the existing algorithms on every single recording.</p>

<h3 id="2015-sfn--k0280">2015 SfN (k=0.280)</h3>

<p>I gave my first talk at the <a href="https://www.abstractsonline.com/Plan/ViewAbstract.aspx?mID=3744&amp;sKey=99334038-f24c-4261-9e23-573b83a467fe&amp;cKey=d557c8d2-23f1-4a10-b062-c312bbd16172&amp;mKey=d0ff4555-8574-4fbb-b9d4-04eec8ba0c84">2015 Society for Neuroscience’s (SfN) annual meeting in Chicago</a>. At the time, I was using combinations of “traditional” machine learning techniques on tons of hand-crafted features (from the RR itself, the spectrum of the RR, etc.), but the results were not great. The <a href="https://en.wikipedia.org/wiki/Cohen's_kappa">Cohen’s kappa</a> was 0.28—pretty abysmal. I also presented a new parametric non-sinusoidal function that I found better matched the RR intervals during respiratory sinus arrhythmia (<a href="https://en.wikipedia.org/wiki/Vagal_tone#Respiratory_sinus_arrhythmia">RSA</a>) than a simple sine wave.</p>
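<p>For readers unfamiliar with the metric: Cohen’s kappa measures agreement between two scorers after correcting for chance agreement (1.0 is perfect agreement, 0 is chance level). A minimal sketch, with hypothetical toy stagings rather than any real scoring output:</p>

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two label sequences of equal length.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance given
    each rater's label frequencies.
    """
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical epoch-by-epoch stagings (W/N1/N2/N3/REM):
human = ["W", "N1", "N2", "N2", "N3", "N3", "REM", "N2"]
model = ["W", "N2", "N2", "N2", "N3", "N2", "REM", "N2"]
kappa = cohens_kappa(human, model)  # 6/8 raw agreement, kappa = 29/45
```

<p>The values here are toy numbers; the kappas quoted throughout this post are 5-stage scoring results on the actual datasets.</p>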

<div style="font-size: 1.125em">
<b>2016-2017</b></div>

<p>However, a few months after my November 2015 talk, I gave up on RR intervals. And, much to the chagrin of my collaborator, I began throwing <a href="https://en.wikipedia.org/wiki/Neural_network">neural networks</a> (NNs) at the problem. (In 2007, during my MS, I had explored NNs as a way to find nonlinear functions to fit the very nonlinear data from my experiments, long before the deep learning breakthrough in 2012.) Given that this was temporal data, I originally started with LSTM- and GRU-based architectures.</p>

<h3 id="2017-sfn-k0530">2017 SfN (k=0.530)</h3>

<p>I presented these findings at the <a href="http://www.abstractsonline.com/pp8/index.html#!/4376/presentation/28775">2017 SfN meeting in D.C.</a> on a “dynamic” poster (i.e., a large TV). The results were now a quite respectable Cohen’s kappa of 0.530—better than state-of-the-art for 5-stage scoring among “EEG-less” methods (any sleep-staging method that makes no use of brain, i.e., EEG, data). For reference, the state-of-the-art at the time was k=0.510 from <a href="#references">Sady et al., 2013 [1]</a>.</p>

<p>However, I wasn’t content with <strong>just</strong> state-of-the-art for EEG-less methods. My aim was now firmly fixed on matching the performance of human-scored polysomnography (PSG, or a traditional “sleep study”).</p>

<div style="font-size: 1.125em">
<b>2018</b></div>

<p>I needed a bigger (and better) model, and to train that, I also needed more data. The first thing to go was the <a href="https://en.wikipedia.org/wiki/Recurrent_neural_network">RNN</a> architecture. I began experimenting with a new “backbone”, the <a href="https://arxiv.org/abs/1803.01271">Temporal Convolutional Network</a> (TCN), which made the network completely feed-forward and faster to train.</p>

<p>That summer I began lab rotations for my <a href="https://ngp.usc.edu">neuroscience PhD</a> at the <a href="https://www.usc.edu">University of Southern California</a>.</p>

<h3 id="2018-sfn-k0710">2018 SfN (k=0.710)</h3>

<p>I gave my second talk at the <a href="https://www.abstractsonline.com/pp8/#!/4649/presentation/38904">2018 SfN meeting in San Diego</a>, where I demonstrated that we had reached a Cohen’s kappa of 0.710 on 5-stage scoring. We were now significantly better than state-of-the-art, and finally within the range of expert human-scored PSG.</p>

<p>It was then that I remembered my original HRV device and felt I needed to begin experimenting on myself. Finding, once again, that there was no reasonably priced device (<strong>narrator: there was, but it was only pointed out to him years later</strong>), I started designing one. I was going to make it wireless, for comfort, and have it attach to the “bare” Polar strap (the straps have snaps for attaching the wireless modules, so that the strap can be washed).</p>

<p>I ordered the parts, and started experimenting with various techniques of recording a clean signal without a separate ground electrode. However, my required coursework and lab rotations just took up most of my time, and I was feeling impatient. So, I did what I thought was the next-best thing: I ordered the recently-released 2nd generation <a href="https://ouraring.com">Oura Ring</a>, in the hopes that it might be good enough.</p>

<p><br /></p>

<h2 id="2019-2021">2019-2021</h2>

<p>So, the projects (both the model and the hardware) were on hold while I worked on research for my PhD with <a href="http://ilab.usc.edu">Dr. Laurent Itti</a>. I think there was a pandemic somewhere in here, and time lost all meaning. However, every few months I would occasionally experiment with different aspects of the model and training.</p>

<div style="font-size: 1.125em">
<b>2020</b></div>

<p>In 2020, a new published state-of-the-art for EEG-less methods was reported: k=0.585 from <a href="#references">Sun et al. [2]</a>. However, unbeknownst to anyone who didn’t attend my 2018 SfN talk, this was significantly below the k=0.710 I had already presented.</p>

<p><br /></p>

<h2 id="2022-2024">2022-2024</h2>

<p>After a 3-year hiatus, I began working on the research again. My model had been collecting dust—the world none the wiser and unable to use it. So, this time, I had set my sights on publishing the model and releasing the code to the world.</p>

<div style="font-size: 1.125em">
<b>2022</b></div>

<p>For my PhD qualifying exam in January, I switched my final PhD project to “finishing up” the not-yet-published sleep staging model as well as the extensions I had long known were in the pipeline (see <a href="#future-plans">Future Plans</a> below).</p>

<p>In the midst of greatly expanding the training data (making sure to never test on any subject I had ever trained any previous model on), I found a glaring issue with one of the NSRR datasets. So, since I had to replace thousands of recordings, I decided to also target a smoother age distribution.</p>

<div style="font-size: 1.125em">
<b>2023</b></div>

<p>While putting the finishing touches on the first paper, I realized there was a bigger story that could be told. The additional analyses, including those suggested from an invited talk I gave, delayed the paper’s submission by several months.</p>

<p>However, by this time, I had already started to work on the next phase of the research in parallel. Unfortunately, those details will remain under wraps until the next paper or two is published.</p>

<p>In November, we submitted the paper to <a href="https://www.sciencedirect.com/journal/computers-in-biology-and-medicine">Computers in Biology and Medicine</a> (CIBM).</p>

<h3 id="2023-phd-defense">2023 PhD defense</h3>

<p>Those in attendance got to see the main findings of the first paper, as well as the aforementioned second phase that will make up the next paper or two.</p>

<div style="font-size: 1.125em">
<b>2024</b></div>

<h3 id="2024-cibm-k0725">2024 CIBM (k=0.725)</h3>

<p>To address the biggest concern from the initial round of reviews, I taught myself about meta-analyses and non-inferiority testing. And, after another two rounds of reviews, the paper was accepted by CIBM on April 28th.</p>

<p><a id="future-plans"></a></p>

<h2 id="future-plans">Future Plans</h2>

<h3 id="models-and-code">Models and code</h3>

<p>The most immediate plans I have for the research are the following:</p>

<ol>
  <li>To finish converting the data preprocessing pipeline from MATLAB to Python, so that others can more easily (and freely) use it.</li>
  <li>To test two of the commercial sensors (<a href="/blog/testing-sensors/">Blog update</a>: I’ve begun testing them).</li>
  <li>To write and release a simple 100% free iPhone app for recording and saving the data from those sensors.</li>
</ol>

<h3 id="research">Research</h3>

<p>As mentioned above, I’m currently working on the next phase of the research.</p>

<h2 id="call-for-input">Call for Input</h2>

<p>Since the first paper was published, I’ve been privately contacted by numerous researchers and clinicians around the world about using and extending the model to make it even more useful to them and to sleep medicine in general. If you’re involved in sleep medicine, please reach out, as I’m trying to gather as many ideas and perspectives as possible on where to take this research to have the greatest benefit for human wellbeing.</p>

<p><br />
<br /></p>

<p><a id="references"></a></p>

<p><strong>References:</strong></p>

<ul>
  <li>
    <p>[1] <a href="https://doi.org/10.1016/j.compbiomed.2013.04.011">C. C.R. Sady et al., “Automatic sleep staging from ventilator signals in non-invasive ventilation,” Computers in Biology and Medicine, vol. 43, no. 7, pp. 833–839, Aug. 2013, doi: 10.1016/j.compbiomed.2013.04.011.</a></p>
  </li>
  <li>
    <p>[2] <a href="https://doi.org/10.1093/sleep/zsz306">H. Sun et al., “Sleep staging from electrocardiography and respiration with deep learning,” Sleep, vol. 43, no. 7, p. zsz306, Jul. 2020, doi: 10.1093/sleep/zsz306.</a></p>
  </li>
</ul>

<p>Originally posted on <a href="https://cardiosomnography.com/blog/">cardiosomnography.com</a>.</p>]]></content><author><name>Adam Jones</name></author><category term="history" /><category term="plans" /><category term="sleep_paper" /><category term="cardiosomnography" /><summary type="html"><![CDATA[Below is the history of and my future plans for cardiosomnography (CSG). I will try to keep this post evergreen. 2009-2013 My self-initiated research began in the fall of 2009, and over the next 4 years I developed several devices, software, and an iPhone app. 2009 I started meditating and immediately began looking for an objective way to track my progress. In December, I built my first device for amplifying the heartbeat intervals using an off-the-shelf photoplethysmography finger-tip sensor (PPG) that was then connected to a laptop and analyzed with code I wrote in LabVIEW. 2010 Not content to be tethered to my laptop, and annoyed by how noisy PPG is, I soon built a biofeedback device using a microcontroller (Logomatic v2), screen (ST7565), and a Polar receiver (RMCM01) to wirelessly receive heartbeats from a Polar HR T31 strap (which sends a 5 kHz radio pulse each beat). "the device" (sorry, never named it) This is a short movie shows the device in action while I was wearing the HR strap. This early version of the interface lacked some of the later features. 2010 sleep interest That summer, I discovered that if I wore the strap while sleeping, I observed clear transitions and cycles. At this point I knew nothing about sleep stages or the research on them (and sleep wasn’t in the zeitgeist yet). However, this marked the beginning of my interest in sleep stages, and in the idea of measuring them with the heart. Recording HRV while sleeping This was recorded the night of July 11, 2010. I transformed the RR intervals into the frequency domain, and divided it into slices (Low: 0.01-0.11 Hz; Medium: 0.12-0.20 Hz; High: 0.15-0.40 Hz). 
You can see periods of time where one slice makes up the plurality of total power, with sharp transitions between these periods. I also began recruiting beta testers among my friends and coworkers. I wanted to see what the variations in data looked like, and how robust the device and algorithms were. At the time, I only had a single device, so this meant loaning it out each time. Not content with the low-resolution monochromatic screen, and the unnecessary bulk of my pocketable (but still too-large) device, I decided to build an iPhone app. Plan A: To keep things simple, I stuck with the 5 kHz strap+receiver. I designed my first manufactured PCB to attach to the iPhone’s 30-pin dock connector. I had thought about using the connector’s audio input to receive the voltage blips from the receiver IC directly. However, I wasn’t yet confident that I wouldn’t miss any of the blips. So, I had a real-time microcontroller act as the middleman. iPhone heartbeat receiver dongle Two sides of the dongle. The large footprint on the left was for the receiver (RMCM01), and the smaller dense footprint was for a little microcontroller (LPC1114) for sensing the voltage blips from the receiver and communicating with the iPhone through the 30-pin connector (pads on left edges). Unfortunately, when I received the PCBs, I realized they were too thick for the connector’s legs 😭 (which were meant to straddle the top and bottom). However, in the weeks it took between uploading the design files and the finished board arriving, I had set my sights on an even better idea… Plan B: The iPhone 4 (“you’re holding it wrong”) was released that year and had Bluetooth 2.1. I also stumbled upon one of the first Bluetooth HR straps, the Zephyr HxM Bluetooth. The final missing piece was a developer that created a BT API for jailbroken phones (I can’t seem to recall the name of the library right now), since Apple didn’t yet give developers an API to access BT. 
A year or so later I found two companies selling headphone dongles that received the 5 kHz pulse directly. So, even though the BLE strap was my preferred input, I designed my app to allow the user to select either a BLE strap or a “traditional” 5 kHz strap with a separately purchased headphone dongle. 2011-2013 Over the next several years, I quickly learned the ropes of app development and programming best practices that I hadn’t yet learned (having taught myself GW-BASIC around 1989 and learning a dozen more languages along the way). iPhone app, spectrogram view From August, 2011. I spent a lot of time making this view easy to navigate (with all the fun pan and zoom gestures), while also making sure to prevent any unnecessary calculations for data that wasn't visible. I began attending local iPhone developer meetups and recruiting more beta testers. Now the testers just needed an iPhone (or for the one Android user, I mailed one), and I could loan them one of several straps. 2014-2018 When I began working on a BS in psychology at the University of Houston, this research then expanded into a collaboration with Dr. Bhavin R. Sheth. We decided to tackle trying to score sleep using ECG from a small dataset (n=63) from the Veterans’ Affairs (VA). 2014-2015 While I certainly learned a lot of analysis techniques during my BS and MS in mechanical engineering, I now had to teach myself machine learning and robust statistics. At the time, I was converting the ECG recordings into RR intervals. This is when I found out that algorithms for doing this are… to be kind, not very robust. Every single algorithm and tool I could get my hands on required manual intervention and annotation on all but the most clean and pristine data; none of them handled noise gracefully. Thus began a side project of building a much more robust heartbeat (R wave) detector. 
Developing my new robust heartbeat algorithm took on even more importance when I stumbled upon the National Sleep Research Resource (NSSR) and the Sleep Heart Health Study (SHHS) dataset, which took us from 63 recordings to over 8,000. There was no way on earth I could babysit even the best of the existing algorithms on every single recording. 2015 SfN (k=0.280) I gave my first talk at the 2015 Society for Neuroscience’s (SfN) annual meeting in Chicago. At the time, I was using combinations of “traditional” machine learning techniques on tons of hand-crafted features (from the RR itself, the spectrum of the RR, etc.), but the results were not great. The Cohen’s kappa was 0.28—pretty abysmal. I also presented a new parametric non-sinusoidal function that I found better matched the RR intervals during respiratory sinus arrhythmia (RSA) than a simple sine wave. 2016-2017 However, a few months after my November 2015 talk, I gave up on RR intervals. And, much to the chagrin of my collaborator, I began throwing neural networks (NNs) at the problem (In 2007, I had explored using NNs during my MS as a way to find nonlinear functions to fit the very nonlinear data from my experiments, long before the deep learning breakthrough in 2012). Given that this was temporal data, I originally started with LSTM- and GRU-based architectures. 2017 SfN (k=0.530) I presented these findings at the 2017 SfN meeting in D.C. on a “dynamic” poster (i.e., a large TV). The results were now a quite respectable Cohen’s kappa of 0.530—better than state-of-the-art on 5-stage scoring for “EEG-less” methods (any method for sleep staging that makes no use of brain, i.e., EEG data). For reference, the current state-of-the-art on 5-stage scoring was k=0.510 from Sady et al, 2013 [1]. However, I wasn’t content with just state-of-the-art for EEG-less methods. My aim was now firmly fixed on matching the performance of human-scored polysomnography (PSG, or a traditional “sleep study”). 
2018 I needed a bigger (and better) model, and to train that, I also needed more data. The first thing to go was the RNN architecture. I began experimenting with a new “backbone”, the Temporal Convolution Network (TCN), which made the network completely feed-forward and faster to train. That summer I began lab rotations for my neuroscience PhD at the University of Southern California. 2018 SfN (k=0.710) I gave my second talk at the 2018 SfN meeting in San Diego, where I demonstrated that we had reached a Cohen’s kappa of 0.710 on 5-stage scoring. We were now significantly better than state-of-the-art, and finally within the range of expert human-scored PSG. It was then that I remembered my original HRV device, and felt I needed to begin experimenting on myself. Finding, once again, that there was no reasonably-priced device (narrator: there was, but it was only pointed out to him years later), I started designing one. I was going to make it wireless, for comfort, and attach to the “bare” Polar strap (they have snaps for attaching the wireless modules, so that the strap can be washed). I ordered the parts, and started experimenting with various techniques of recording a clean signal without a separate ground electrode. However, my required coursework and lab rotations just took up most of my time, and I was feeling impatient. So, I did what I thought was the next-best thing: I ordered the recently-released 2nd generation Oura Ring, in the hopes that it might be good enough. 2019-2021 So, the projects (both the model and the hardware) were on hold while I worked on research for my PhD with Dr. Laurent Itti. I think there was a pandemic somewhere in here, and time lost all meaning. However, every few months I would occasionally experiment with different aspects of the model and training. 2020 In 2020 a new, published, state-of-the-art threshold was reached for EEG-less methods, k=0.585 from Sun et al. [2]. 
However, unbeknownst to anyone that didn’t attend my 2018 SfN talk, this was significantly below the k=0.710 I had already presented. 2022-2024 After a 3-year hiatus, I began working on the research again. My model had been collecting dust—the world none the wiser and unable to use it. So, this time, I had set my sights on publishing the model and releasing the code to the world. 2022 For my PhD qualifying exam in January, I switched my final PhD project to “finishing up” the not-yet-published sleep staging model as well as the extensions I had long known were in the pipeline (see Future Plans below). In the midst of greatly expanding the training data (making sure to never test on any subject I had ever trained any previous model on), I found a glaring issue with one of the NSRR datasets. So, since I had to replace thousands of recordings, I decided to also target a smoother age distribution. 2023 While putting the finishing touches on the first paper, I realized there was a bigger story that could be told. The additional analyses, including those suggested from an invited talk I gave, delayed the paper’s submission by several months. However, by this time, I had already started to work on the next phase of the research in parallel. Unfortunately, those details will remain under wraps until the next paper or two is published. In November, we submitted the paper to Computers in Biology and Medicine (CIBM). 2023 PhD defense For those in attendance, they got to see the main findings in the first paper, as well as the aforementioned second phase that will make up the next paper or two. 2024 2024 CIBM (k=0.725) To address the biggest concern from the initial round of reviews, I taught myself about meta-analyses and non-inferiority testing. And, after another two rounds of reviews, the paper was accepted by CIBM on April 28th. 
Future Plans Models and code The most immediate plans I have for the research are the following: To finish converting the data preprocessing pipeline from MATLAB to Python, so that others can more easily (and freely) use it. To test two of the commercial sensors (Blog update: I’ve begun testing them). To write and release a simple 100% free iPhone app for recording and saving the data from those sensors. Research As mentioned above, I’m currently working on the next phase of the research. Call for Input Since the first paper was published, I’ve been privately contacted by numerous researchers and clinicians around the world on using and extending the model to make it even more useful to them and sleep medicine in general. If you’re involved in sleep medicine, please reach out to me, as I’m trying to get as many ideas and perspectives as possible on where to extend this research to have the greatest benefit for human wellbeing. References: [1] C. C.R. Sady et al., “Automatic sleep staging from ventilator signals in non-invasive ventilation,” Computers in Biology and Medicine, vol. 43, no. 7, pp. 833–839, Aug. 2013, doi: 10.1016/j.compbiomed.2013.04.011. [2] H. Sun et al., “Sleep staging from electrocardiography and respiration with deep learning,” Sleep, vol. 43, no. 7, p. zsz306, Jul. 2020, doi: 10.1093/sleep/zsz306. Originally posted on cardiosomnography.com.]]></summary></entry><entry><title type="html">Interesting findings</title><link href="https://adammj.com/blog/interesting-findings/" rel="alternate" type="text/html" title="Interesting findings" /><published>2024-10-02T00:00:00-07:00</published><updated>2024-10-02T00:00:00-07:00</updated><id>https://adammj.com/blog/interesting-findings</id><content type="html" xml:base="https://adammj.com/blog/interesting-findings/"><![CDATA[<p>Since I had published the diagram of “roughly” the ECG scaling that should be used for the network input, I have wanted to quantify how much that scaling matters to the network.</p>

<p>One thing that was very obvious by looking at the data from the different <a href="https://sleepdata.org">NSRR</a> studies was that the ECG amplitudes were all over the place, even when using the same equipment.</p>

<p>So, I needed to normalize these amplitudes as best I could. What I settled on was the following: First, I wanted the median value to be 0 (which makes sense, as the isoelectric line should, roughly, represent zero potential). Second, since most of my other inputs were approximately in the range [-1.0, 1.0], I wanted the ECG to be in the same range. This is because neural networks train better when the inputs are all roughly in the same numerical range, with few large excursions. This also led me to decide to clip the ECG range to exactly [-1.0, 1.0], as sometimes electrical artifacts in the recordings are orders of magnitude greater than the heartbeat amplitudes. Third, since I knew my range, I wanted to find a scale that would make the best use of this range. Thus, I decided to have at least the 90th percentile of all heartbeat values contained within [-0.5, 0.5]. This also allowed for the natural biological variation in amplitude, with a low likelihood of any heartbeat itself being clipped.</p>
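<p>To make those three steps concrete, here is a minimal NumPy sketch. This is not the released MATLAB pipeline; the function name and the helper value <code>beat_p90</code> (the 90th percentile of absolute heartbeat amplitudes, measured after centering) are assumptions for illustration only.</p>

```python
import numpy as np

def normalize_ecg(ecg, beat_p90):
    """Rough sketch of the ECG normalization described above.

    1. Center so the median (the isoelectric line) sits at 0.
    2. Scale so the 90th percentile of heartbeat amplitudes falls
       within [-0.5, 0.5].
    3. Clip to exactly [-1.0, 1.0] to tame electrical artifacts that
       can be orders of magnitude larger than the heartbeats.
    """
    centered = ecg - np.median(ecg)
    scaled = centered * (0.5 / beat_p90)
    return np.clip(scaled, -1.0, 1.0)
```

<p>An artifact sample far above the heartbeat amplitudes ends up pinned at ±1.0, while typical beats land near ±0.5.</p>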

<p>The pipeline was built around this, and worked like a charm. However, since the pipeline is still in MATLAB and divided into several processing steps, I wanted to produce a rough guide for those wanting to get started right away. The question then becomes: if someone weren’t using the same pipeline as was published, would that lead to differences in results? I didn’t know.</p>

<p>Therefore, I decided to finally evaluate it on the full testing set, by scaling (and clipping, as appropriate) the ECG by values from 0.125 to 8.0. What I found was that in the range 0.5x to 2.0x, the performance impact is negligible. Beyond that range, the performance does start to be meaningfully impacted. This is great, as it means that the network is pretty tolerant to “improper” scaling.</p>
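<p>The sweep itself is simple to reproduce on an already-normalized recording. A hedged sketch (the factor range matches the post; the model-evaluation step is only indicated in a comment, since it depends on the released code):</p>

```python
import numpy as np

def apply_improper_scale(ecg, factor):
    """Re-scale a normalized ECG by `factor` and re-clip to [-1, 1],
    mimicking an "improperly" scaled network input."""
    return np.clip(np.asarray(ecg) * factor, -1.0, 1.0)

# Doubling steps from 0.125x up to 8.0x, as in the evaluation.
factors = [0.125 * 2**i for i in range(7)]

# for f in factors:                                 # hypothetical loop:
#     evaluate_model(apply_improper_scale(ecg, f))  # score each variant
```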

<p>I should note, I kinda expected this to some extent, as I already designed the network to be insensitive to polarity. This means that the features extracted had to tolerate not just a fractional scaling, but a complete flip in the sign of all of the data.</p>

<p>The second interesting finding concerns running the model on CUDA vs. CPU. I had done all of the training and evaluation on NVIDIA GPUs with the CUDA backend. However, in releasing the model, I realized that I needed to make it accessible to those who don’t have GPUs. Furthermore, for inference alone, a GPU isn’t really necessary on a model of this size. Therefore, when I published the code, I tweaked it slightly to allow for either CUDA or CPU.</p>

<p>For those who have not delved into the deep hole that is floating-point operations and representations, the following will seem strange: your computer makes a lot of compromises to store real numbers. There is a standard, IEEE 754, that most implementations try to follow. However, those constraints can be relaxed to get better performance. Long story short, the PyTorch backends for CUDA and CPU produce slightly different results. For classification, this is less likely to be an issue; however, I have now quantified it. Overall, 98.2% of the 571,141 scored epochs in the testing set have the same prediction whether inference is performed on CUDA or CPU. This means that about 1.8% of the epochs had outputs that were right on the “border” between two stages, such that changing the backend switches the prediction to the “other” stage.</p>
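<p>The 98.2% figure is simply the fraction of epochs whose predicted stage matches across the two backends. A minimal sketch of that comparison, assuming you already have the per-epoch stage predictions from a CUDA run and a CPU run:</p>

```python
import numpy as np

def backend_agreement(preds_cuda, preds_cpu):
    """Fraction of epochs with the identical predicted stage under
    both inference backends."""
    preds_cuda = np.asarray(preds_cuda)
    preds_cpu = np.asarray(preds_cpu)
    return float((preds_cuda == preds_cpu).mean())
```

<p>Applied to all 571,141 testing-set epochs, this is the statistic reported above.</p>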

<p>Originally posted on <a href="https://cardiosomnography.com/blog/">cardiosomnography.com</a>.</p>]]></content><author><name>Adam Jones</name></author><category term="sleep_staging_model" /><category term="cardiosomnography" /><summary type="html"><![CDATA[Since I had published the diagram of “roughly” the ECG scaling that should be used for the network input, I have wanted to quantify how much that scaling matters to the network. One thing that was very obvious by looking at the data from the different NSRR studies was that the ECG amplitudes were all over the place, even when using the same equipment. So, I needed to normalize these amplitudes as best as I could. What I settled on was the following: First, I wanted the median value to be 0 (which makes sense, as the isoelectric line should, roughly, represent zero potential). Second, since most of my other inputs were approximately in the range [-1.0, 1.0], I wanted the ECG to be in the same range. This is because neural networks train better when the inputs are all roughly in the same numerical range, with few large excursions. This also led me to deciding to clip the ECG range to exactly [-1.0, 1.0], as sometimes electrical artifacts in the recordings are orders of magnitude greater than the heartbeat amplitudes. Third, since I knew my range, I wanted to find a scale that would make the best use of this range. Thus, I decided to have at least the 90th percentile of all heartbeat values contained within [-0.5, 0.5]. This also allowed for the natural biological variation in amplitude, with a low likelihood of any heartbeat itself being clipped. The pipeline was built around this, and worked like a charm. However, since the pipeline is still in MATLAB and divided into several processing steps, I wanted to produce a rough guide for those wanting to get started right away. The question then becomes, if they weren’t using the same pipline as was published, would it lead to differences in results? I didn’t know. 
Therefore, I decided to finally evaluate it on the full testing set, by scaling (and clipping, as appropriate) the ECG by values from 0.125 to 8.0. What I found was that in the range 0.5x to 2.0x, the performance impact is negligible. Beyond that range, the performance does start to be meaningfully impacted. This is great, as it means that the network is pretty tolerant to “improper” scaling. I should note, I kinda expected this to some extent, as I already designed the network to be insensitive to polarity. This means that the features extracted had to tolerate not just a fractional scaling, but a complete flip in the sign of all of the data. The second interesting finding is on running the model on CUDA vs CPU. I had done all of the training and evaluation on NVIDIA GPUs with the CUDA backend. However, in releasing the model, I realized that I needed to make it accessible to those that don’t have GPUs. Furthermore, if only performing inference, a GPU isn’t really necessary on a model of this size. Therefore, when I published the code, I tweaked it slightly to allow for CUDA or CPU. For those that have not delved into the deep hole that is floating point operations and representations, the following will seem strange: Your computer makes a lot of compromises to store real numbers. There is a standard, IEEE 754, that most will try to follow. However, you can relax these constraints to get better performance. Long story short, the Pytorch backends for CUDA and CPU produce slightly different results. For classifications, this is less likely to be an issue. However, I have now quantified it. Overall 98.2% of the 571,141 scored epochs in the testing set have the same prediction when inference is performed on either CUDA or CPU. This means that about 1.8% of the epochs had outputs that were right on the “border” between being classified as one stage or the other, and that when changing the backend, the prediction switches to the “other” stage. 
Originally posted on cardiosomnography.com.]]></summary></entry><entry><title type="html">Testing Movesense MD and Polar H10</title><link href="https://adammj.com/blog/testing-sensors/" rel="alternate" type="text/html" title="Testing Movesense MD and Polar H10" /><published>2024-09-16T00:00:00-07:00</published><updated>2024-09-16T00:00:00-07:00</updated><id>https://adammj.com/blog/testing-sensors</id><content type="html" xml:base="https://adammj.com/blog/testing-sensors/"><![CDATA[<p>As I mentioned on the <a href="/more/#equipment">ECG Equipment</a> section of the <a href="/more/">More</a> page, I am currently only aware of two commercial ECG sensors that might work for all-night ECG recordings. I can now begin testing these two, as Movesense recently sent me a Movesense MD sensor (thank you, Movesense) and I purchased a Polar H10 sensor.</p>

<p>Furthermore, since there are currently no (100%) free apps on the (US) Apple App Store that allow users to record and download the data from both of these devices, I plan to write an app to record and export ECG data. The app will be no-frills, but also 100% free—with no strings attached. I don’t yet have a timeline for a release date, but considering I already wrote an app years ago for another Bluetooth sensor, I think this shouldn’t take too long.</p>

<p>Originally posted on <a href="https://cardiosomnography.com/blog/">cardiosomnography.com</a>.</p>]]></content><author><name>Adam Jones</name></author><category term="ecg_sensors" /><category term="ecg_app" /><category term="cardiosomnography" /><summary type="html"><![CDATA[As I mentioned on the ECG Equipment section of the More page, I am currently only aware of two commercial ECG sensors that might work for all-night ECG recordings. I can now begin testing these two, as Movesense recently sent me a Movesense MD sensor (thank you, Movesense) and I purchased a Polar H10 sensor. Furthermore, since there are currently no (100%) free apps that allow users to record and download the data from both of these devices on the (US) Apple App Store, I plan to write an app to record and export ECG data. The app would be no frills, but also 100% free—with no strings attached. I don’t yet have a timeline for a release date, but considering I already wrote an app years ago for the another bluetooth sensor, I think this shouldn’t take too long. Originally posted on cardiosomnography.com.]]></summary></entry><entry><title type="html">Interview for USC Viterbi News</title><link href="https://adammj.com/blog/USC-interview/" rel="alternate" type="text/html" title="Interview for USC Viterbi News" /><published>2024-08-19T00:00:00-07:00</published><updated>2024-08-19T00:00:00-07:00</updated><id>https://adammj.com/blog/USC-interview</id><content type="html" xml:base="https://adammj.com/blog/USC-interview/"><![CDATA[<p>This interview was conducted by the University of Southern California Viterbi News.</p>

<ul>
  <li><a href="https://viterbischool.usc.edu/news/2024/08/heart-data-unlocks-sleep-secrets/">Interview at USC Viterbi News (usc.edu)</a></li>
</ul>

<p>Originally posted on <a href="https://cardiosomnography.com/blog/">cardiosomnography.com</a>.</p>]]></content><author><name>Adam Jones</name></author><category term="interview" /><category term="cardiosomnography" /><summary type="html"><![CDATA[This interview was conducted by the University of Southern California Viterbi News. Interview at USC Viterbi News (usc.edu) Originally posted on cardiosomnography.com.]]></summary></entry><entry><title type="html">New, “no demographics” model</title><link href="https://adammj.com/blog/new-model/" rel="alternate" type="text/html" title="New, “no demographics” model" /><published>2024-08-06T00:00:00-07:00</published><updated>2024-08-06T00:00:00-07:00</updated><id>https://adammj.com/blog/new-model</id><content type="html" xml:base="https://adammj.com/blog/new-model/"><![CDATA[<p>I received some requests to provide a version of the model that does not require demographic information (age and sex). Therefore, I trained a model where that information is not required (and, indeed, is ignored if provided).</p>

<p>The model works the same way as the primary model, except it ignores any demographic information provided.</p>

<p>The Cohen’s kappa for this model on the testing set is 0.718, a slight (&lt;1%) performance impact compared to the primary model.</p>

<p>To reiterate, I just trained this model, so it is not in the paper. Furthermore, there were no structural changes to the neural network. The difference is that the input for the demographics is just drawn from a random distribution.</p>
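<p>A sketch of what “drawn from a random distribution” could look like at the input layer. The shape, range, and function name here are assumptions for illustration, not the released code:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_demographics(batch_size, dim=2):
    """Stand-in for the age/sex input channel: fresh random values on
    every call, so the network cannot extract demographic information
    and any real values supplied at inference are effectively ignored."""
    return rng.uniform(-1.0, 1.0, size=(batch_size, dim))
```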

<p>Originally posted on <a href="https://cardiosomnography.com/blog/">cardiosomnography.com</a>.</p>]]></content><author><name>Adam Jones</name></author><category term="sleep_staging_model" /><category term="cardiosomnography" /><summary type="html"><![CDATA[I received some requests to provide a version of the model that does not require demographic information (age and sex). Therefore, I trained a model where that information is not required (and, indeed, is ignored if provided). The model works the same way as the primary model, except it ignores any demographic information provided. The Cohen’s kappa for this model on the testing set is 0.718, which is a slight (&lt;1%) performance impact as compared to the primary model. To reiterate, I just trained this model, so it is not in the paper. Furthermore, there were no structural changes to the neural network. The difference is that the input for the demographics is just drawn from a random distribution. Originally posted on cardiosomnography.com.]]></summary></entry><entry><title type="html">Interview for Healio</title><link href="https://adammj.com/blog/Healio-interview/" rel="alternate" type="text/html" title="Interview for Healio" /><published>2024-08-05T00:00:00-07:00</published><updated>2024-08-05T00:00:00-07:00</updated><id>https://adammj.com/blog/Healio-interview</id><content type="html" xml:base="https://adammj.com/blog/Healio-interview/"><![CDATA[<p>This interview was conducted by Healio.</p>

<ul>
  <li><a href="https://www.healio.com/news/pulmonology/20240805/qa-electrocardiographybased-sleep-stage-scoring-on-par-with-polysomnography">Interview at Healio (healio.com)</a></li>
</ul>

<p>Originally posted on <a href="https://cardiosomnography.com/blog/">cardiosomnography.com</a>.</p>]]></content><author><name>Adam Jones</name></author><category term="interview" /><category term="cardiosomnography" /><summary type="html"><![CDATA[This interview was conducted by Healio. Interview at Healio (healio.com) Originally posted on cardiosomnography.com.]]></summary></entry><entry><title type="html">Interview for UH</title><link href="https://adammj.com/blog/UH-interview/" rel="alternate" type="text/html" title="Interview for UH" /><published>2024-07-02T00:00:00-07:00</published><updated>2024-07-02T00:00:00-07:00</updated><id>https://adammj.com/blog/UH-interview</id><content type="html" xml:base="https://adammj.com/blog/UH-interview/"><![CDATA[<p>This interview was conducted by the University of Houston Newsroom.</p>

<ul>
  <li><a href="http://uscholars.uh.edu/news-events/stories/2024/july/07022024-sheth-sleep-staging-monitoring.php">Interview at UH Newsroom (uh.edu)</a></li>
</ul>

<p>Originally posted on <a href="https://cardiosomnography.com/blog/">cardiosomnography.com</a>.</p>]]></content><author><name>Adam Jones</name></author><category term="interview" /><category term="cardiosomnography" /><summary type="html"><![CDATA[This interview was conducted by the University of Houston Newsroom. Interview at UH Newsroom (uh.edu) Originally posted on cardiosomnography.com.]]></summary></entry></feed>