We talk to Curzio Vasapollo about the practicalities and challenges of measuring the sleeping brain using EEG (electroencephalography). We also discuss what happens in a sleep lab vs tracking your sleep at home, AI, biotech and loads more.
Prefer to read? Download the full episode transcript here
Episode Highlights:
- 03:52 Description of ZMax platform
- 04:13 ZMax applications – research, lucid dreaming, neuro-feedback
- 07:47 What is EEG?
- 12:01 Misconceptions of ‘brain-waves’
- 15:24 What is a hypnogram?
- 16:58 What is light sleep (N1 sleep)?
- 19:46 N2 sleep, K-complexes and sleep spindles
- 22:28 Slow wave (deep) sleep or N3 sleep
- 26:28 REM detection and lucid dreaming
- 31:15 Why a sleep cycle is not 90 minutes
- 32:33 Contrasting EEG sleep recording at home and in the sleep lab
- 32:33 The limitations of FitBits and movement based sleep trackers
- 35:04 Challenges of building an accurate EEG wearable
- 56:14 Turning raw EEG data into a usable hypnogram with AI
- 65:01 Lucid dreaming research using ZMax
- 69:05 Detecting sleep apnea with a wearable
- 73:05 The future of sleep technology is in biotech, not hardware
Episode Homepage: https://sleepjunkies.com/podcast/measuring-the-sleeping-brain/
This episode’s guest:

Curzio (Kurt) Vasapollo is the inventor of ZMax, an EEG-based sleep acquisition tool for monitoring sleep accurately outside of the sleep lab, with applications for sleep research, lucid dreaming and bio/neuro-feedback.
Links:
Zmax website – https://hypnodynecorp.com/
ResearchGate – https://www.researchgate.net/profile/Curzio_Vasapollo
Overview of EEG and sleep (Medscape) – https://emedicine.medscape.com/article/1140322-overview
Sleep tracking guide – https://sleepjunkies.com/features/the-ultimate-guide-to-sleep-tracking/
Full Transcript:
Jeff: Hi Kurt how you doing?
Kurt: I’m doing fine thank you.
Jeff: I’m going to really try and pick your brains today. Before we get into the questions and answers, I like to give guests the opportunity to do what I call an elevator pitch. Now, I just mentioned you’ve created this platform called ZMax. In a succinct way, because it’s a very sophisticated platform that you’ve created, can you give us a very brief overview of what your platform does?
Description of ZMax platform
Kurt: So, ZMax is a sleep acquisition and analysis system. It consists of both hardware and software. The hardware is a comfortable headband that you put on your head before you go to sleep, and the software is a toolset for displaying the data that is acquired and for analyzing features of sleep.
One of the applications of ZMax is sleep research; sleep researchers across Europe use ZMax to capture data from their participants in their sleep studies.
For example: Stockholm University and the Karolinska Institute, Radboud University in the Netherlands, the Netherlands Institute for Neuroscience in Amsterdam with Dr. Van Someren, the Max Planck Institute of Psychiatry, the University of Freiburg, the University of Essex, the University of Leeds.
Another application that comes to mind is lucid dreaming. ZMax has stimulation capabilities, so for people that are interested in lucid dreaming, ZMax is the most accurate EEG-based sleep cue delivery system that’s available.
It’s also scriptable. Another application is biofeedback and neurofeedback. ZMax captures the amount of oxygen in your blood, your heart rate, movement and the muscle tension in your face. From these things, as you can see on our website as well, you can create relaxation setups and biofeedback software.
[05:22] Jeff: Okay, fantastic. So, Kurt can you tell us a bit about your journey into creating ZMax and how you came up with the initial idea for the platform and how you developed ZMax and where it is now?
Creating an EEG Wearable
Kurt: Well, ZMax started out as a lucid dreaming project. I had tried the NovaDreamer, which was the first lucid dreaming hardware, but unfortunately I lost it, and it was also quite uncomfortable on the face because it was an actual sleep mask. It covered your face, and with any facial twitch it would tickle or scratch and wake you up.
So, I decided to make my own system, and that it would be based on EEG, because the way that you normally detect lucid dreaming is by looking at eye movements, and the NovaDreamer was accomplishing that with an infrared sensor.
But when I got halfway through, I ran into the problem of determining the sleep phase. I thought I would just download some paper which would explain to me what kind of algorithm I could use to look at, say, some seconds of data and figure out what sleep state the person was in.
So I started downloading a bunch of papers from the internet, and it took months to replicate all of their algorithms; I realized these things don’t work. About halfway through, I realized that the challenge of determining whether a person is in the dream state or not was something that wasn’t really solved to the point where I could just download an algorithm and do it.
So, that started the actual project. By now, I would say 90% of the effort went into the data analysis related to sleep data: determining the sleep structure from a sleep recording using the forehead EEG channels.
[07:26] Jeff: Okay. So, I want to move on to the next part of the interview, and this is going to be a bit of an explainer for the listeners, for anyone who’s interested in the actual science of what goes on in the sleeping brain. So, to start off with, can you just give us a brief overview of what EEG is?
EEG and sleep, an explainer
[07:46] Kurt: So, as you said, EEG is a technique that we use to measure what the brain is doing, by looking at a very coarse type of electrical activity that can be measured on the scalp.
When you sleep, your brain undergoes several state transitions and it goes into some states which we were able to identify. And each state has some particular markers as to the electrical activity that it’s producing.
So, we can look at the electrical activity of the brain and figure out because we can see these features; that means that currently the brain of the sleeping person is in this or that particular stage.
Well, let me put it to you this way: the brain has many modules, and they’re all doing different things, especially as they do something complex like a cognitive task.
The pixel analogy
So, all of the electrical impulses get mixed up, and the result is that you don’t see very much. It’s as if you were trying to detect the average color of your TV: you’re not going to see an image; if it was all blurry it would just turn to gray, right?
What happens during sleep is that all the pixels are trying to do the same thing, which we think is cleaning up garbage that got collected within the cells, pruning memories that are not getting used, transferring short-term memory to long-term memory, memory consolidation, all this kind of stuff.
So, as it’s doing that, it’s almost as if you had a TV where every pixel is trying to take on the same color. Once you measure that outside the skull, where everything is blurred out, you can still see some things. Imagine a TV that’s all blurry, but every pixel is trying to turn red and blue, red and blue; then you can still see that activity.
That’s why sleep EEG is much more interesting than wake EEG, where everything is blurred out.
As for the future of EEG: if we’re really going to interface with the human brain, we need to get more resolution. How to do that is a whole different ballpark as far as this discussion goes; but that’s why during wake you don’t have very interesting activity recorded by EEG.
It’s not that there’s no interesting activity, it’s just that we can’t record it. Well, that’s all.
[10:02] Jeff: Great analogy with the pixels.
[10:06] Kurt: And let me tell you something else about the pixel analogy. We follow the AASM manual for sleep scoring, and manuals like that tell you: if you see this feature, then it’s this stage, and if you see that other feature, it’s that other sleep stage.
But routinely, throughout the transition from one state to the next, what you see is both occurring. What I think is going on there is simply that the brain is trying to synchronize; it’s going to eventually end up in the same state, but it’s not a computer, you know.
It’s not like it’s got a central controller that says, okay, now we switch to REM. So it’s perfectly conceivable for one part of the brain to be in slow-wave sleep and turning into REM sleep before the other part.
So, that’s why sometimes you can see these features co-occurring: you might be capturing data from different neuronal clusters that are in different sleep states. Essentially, your brain is not necessarily always all in the same sleep state.
Now, of course, if this superposition continues for very long, then there might be some underlying problem. And this is the kind of stuff that, if you go to a hospital and you have an EEG with 16 channels and you’re getting a human to analyze it, you might discover; they’d say, oh look, this part of the brain is not even sleeping, or whatever.
I don’t know about that because it’s not even my field, but that would be one of the differences between something that you can take home, like ZMax, which has two channels, and going to a hospital, where they’re going to wire you to death and you’re going to have wires all over your body, because that’s got more channels. But for most people, their brain just follows the same sleep structure, and so having two channels at the front is perfectly sufficient.
[11:50] Jeff: Again, fascinating; I mean, this talk is a real education for me. So, we often hear about ‘brain waves’. Now, that’s not strictly the best way of thinking about what EEG is measuring, because the EEG is measuring electrical activity.
Misconception about brainwaves
[12:04] Kurt: Now the great misconception that people have, is that there is such a thing as an alpha wave or a beta wave or a gamma wave. And if you go anywhere on the internet you will read pretty much the same stuff which is that alpha is between 10 and 11 Hertz and beta is between 20 and 30 Hertz or something like that.
But actually, brain activity is not that deterministic, it’s not like looking at some kind of sensor reading from a camera or some microphone; which is the whole complexity of creating wearable technology.
I mean, there’s a whole difference between biological data and digital data, and people tend to think of EEG in terms of digital data: this frequency means this, that frequency means that. But when you start looking at it, what I found out is that every person has a different range of frequencies, these things often coexist, and then there are variations which are known but don’t carry any meaning.
Then there are variations that are known, and they’re not normal but they’re benign, and then there are other variations that possibly indicate some pathology.
And so, it’s a great big mess trying to make sense of all these features and figuring out what’s going on. So, I guess the first thing to know would be the following: if you put on an EEG and you’re not sleeping, you’re not going to see almost anything out of it.
You might be lucky, if you close your eyes, to see some alpha waves; these are ten Hertz oscillations. There’s a lot of literature on them, most of which I don’t even know, but we think that that’s just the basic rhythm of neurons firing when they don’t have any input and they’re not doing something useful.
So, for example, your visual cortex, the part of your brain that interprets images, sits at the back of your head, which is not easy to measure with ZMax, as most people have hair there.
But for people who don’t have hair, if you put ZMax on the back, wearing it back to front, then immediately when you close your eyes you would see alpha waves.
I mean, you’re not seeing something interesting; you’re seeing the lack of something interesting. All the enormous amount of processing that your brain does to get you to see things and translate images to symbols is not happening, and instead of that you have some basic firing of the neurons, which most likely doesn’t mean very much.
You always need to take this with a grain of salt, because we don’t know exactly what’s going on in there for the most part.
[14:29] Jeff: Essentially, you’re saying it looks like noise.
[14:31] Kurt: Well, the way that neurons create noise is not what you would expect noise to be. In digital signals, noise means broadband frequency content, like ‘shh’, right? When the brain is noisy, or isn’t doing anything useful, it seems like it’s got this alpha rhythm instead, just oscillating at around 10 Hertz.
So, that’s why, when you close your eyes and you’re acquiring signal from the back of your head (which is not what you normally do with ZMax; you usually acquire from the front), you can immediately see alpha waves. Some people have alpha waves at the front as well, something like 87% of people. So, that’s the only thing you can see on EEG when you close your eyes; that’s it. Sleep is the interesting part of using EEG.
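Kurt’s point about eyes-closed alpha (a roughly 10 Hz rhythm that shows up when the visual cortex idles) can be made concrete with a quick band-power calculation. This is a hedged sketch, not ZMax code: the sampling rate, the amplitudes and the synthetic signal are all illustrative assumptions.

```python
import numpy as np

fs = 256                       # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)   # 30 s of synthetic "eyes-closed" EEG
rng = np.random.default_rng(0)
# a 10 Hz alpha rhythm buried in broadband noise (amplitudes in volts)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

# simple periodogram via the FFT
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

def band_power(lo, hi):
    """Sum spectral power over the [lo, hi] Hz band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum()

alpha = band_power(8, 12)      # conventional alpha band
total = band_power(0.5, 40)
print(f"relative alpha power: {alpha / total:.2f}")
```

On a real recording you would feed in the measured samples instead of the synthetic mixture; relative alpha power then rises noticeably when the subject closes their eyes.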
[15:16] Jeff: And can you just briefly explain what a hypnogram is? We’ve used this term a couple of times.
What is a hypnogram?
[15:24] Kurt: Generally speaking, the hypnogram is this blocky wave; if you search for hypnograms on Google, you’ll see a lot of them. It’s a graphical representation of the state of the brain throughout sleep.
And it’s used by researchers as well when you get a sleep report for example; if you go to a hospital and do a sleep study. They give you a hypnogram together with a lot of other metrics.
A hypnogram is like a square wave: a sequence of different epochs. There’s N1, N2, N3, wake and REM; we have these five different states into which we break down sleep.
If you have a sequence of these markers, for example wake, wake, wake, then N1, N2, N3, then over the whole night that becomes a hypnogram. And if you look at that, you can already tell a lot about what the brain, and the body generally, did during sleep.
From this hypnogram, if you were to take a sleep study in a hospital, they would give you a sleep report. You can Google ‘sleep report’ and it will show you samples of what it might look like.
But at the top they have dozens of numbers with codes like total sleep time TST, REM latency and all of these things are drawn directly from the hypnogram.
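Those report numbers really are mechanical derivations from the epoch sequence. A minimal sketch (the hypnogram here is a made-up toy, and real reports include many more metrics):

```python
# A hypnogram is just a sequence of 30-second epoch labels.
# Toy example (hypothetical): a short stretch of a night.
EPOCH_SEC = 30
hypnogram = ["W", "W", "N1", "N2", "N2", "N3", "N3", "N2", "REM", "REM", "W"]

# Total Sleep Time (TST): every epoch scored as anything but wake.
tst_min = sum(1 for e in hypnogram if e != "W") * EPOCH_SEC / 60

# Sleep onset: first non-wake epoch; REM latency: sleep onset to first REM.
onset = next(i for i, e in enumerate(hypnogram) if e != "W")
first_rem = next(i for i, e in enumerate(hypnogram) if e == "REM")
rem_latency_min = (first_rem - onset) * EPOCH_SEC / 60

print(f"TST: {tst_min:.1f} min, REM latency: {rem_latency_min:.1f} min")
```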
[16:49] Jeff: Great, thanks. Let’s talk about the different sleep stages themselves and we’ll start off with light sleep or N1 sleep.
Stage 1 or N1 (light sleep) explained
[17:00] Kurt: Right. So, if you look at a hypnogram, the first period is always W; it’s always wake, because you start from when you’re awake. That’s uncontroversial; that’s just when you’re moving around trying to get to sleep, perhaps bathroom breaks, and also at the end of the night, once you wake up again, there’s a big block of wake. Then, after wake, the first thing that happens is that you go into light sleep, or N1 sleep.
N1 is not associated with any visible features on the EEG channels, other than, perhaps, if you’re an alpha producer, the alpha waves disappearing. And it has the peculiarity that if I wake you up during N1 sleep, most people will think they were not sleeping; but actually, you were not fully conscious.
You might have what’s called hypnagogic imagery, which just means you’re not really dreaming but kind of, I don’t know, I guess drunk on those chemicals that your brain is secreting to get you to fall asleep. So, N1 is not considered to be very restful at all; it’s just an interstitial period between wake and actually doing restorative processing in actual sleep. So, that’s N1.
[18:20] Jeff: Okay. So, with regards to our current understanding of N1 light sleep, what would you say is the role or the function of N1?
Does N1 sleep have a function?
[18:32] Kurt: I don’t think N1 actually has a function; I’m sure there are plenty of hypotheses out there, but it looks to me like it’s simply a transition phase.
Because no biological process is binary like a digital one, right? Imagine if you were to fall asleep, boom, and now you’re making delta waves; that would be really strange. We don’t have other things that work like that, right?
So, you eat something and then afterwards you start digesting; everything is gradual in biology, right? So, the transition from wake to sleep is this period called N1. I don’t think that it’s got a particular function.
If you were to get into evolutionary biology, you might say that you’re trying to see if you can actually go to sleep and if there’s a predator that comes at you within five minutes then you’re still awake enough to be able to respond quickly or something like that.
But what it seems to me is that it’s just a transition period, where your level of arousal is still too high to begin creating the features that are characteristic of sleep like the delta waves and stuff like that.
[19:37] Jeff: Okay, great. So, that’s N1, light sleep. What happens after this, in the phase of N2?
Stage 2 (N2) Sleep, K-complexes and Sleep Spindles
[19:45] Kurt: After N1, we have a phase that’s called N2 sleep, and this is when you begin to see two very distinctive features of sleep EEG, which are K-complexes and sleep spindles.
K-complexes are very easy to spot: triangular waves that just happen, bang, like that. They’re not a continuous activity; they’re spot-like, instantaneous activity.
They look like a zigzag wave. Then there’s another feature of N2 sleep, which also occurs during N3 but begins in N2, and these are sleep spindles: fast oscillations that occur generally around 12 Hertz, but there’s a lot of individual variability.
And they’re supposed to say something about the hippocampus communicating with your neocortex; it might have to do with memory consolidation, but in particular with forgetting.
Because active forgetting is an important part of memory, even in AI. If you never prune, you don’t have any negative component to learning; you just saturate the system and you can’t learn anything.
I have one friend, in fact, and his is the only EEG I’ve ever seen that does not have sleep spindles, and he has this very weird memory where he remembers the silliest details from two decades ago. He can recall the exact words; you know, we haven’t studied him or anything.
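From the data-analytics angle Kurt describes later (spindles as ~12 Hz ‘scribbles’ you can segment on), a detector can be as simple as band-limiting the signal and thresholding its envelope. This is a sketch under stated assumptions; the sigma band edges, window length and threshold are illustrative choices, not ZMax’s actual parameters.

```python
import numpy as np

fs = 256
t = np.arange(0, 10, 1 / fs)          # 10 s of synthetic N2-like EEG
rng = np.random.default_rng(1)
eeg = 10e-6 * rng.standard_normal(t.size)          # background activity
burst = (t > 4) & (t < 5)                          # a 1 s, 13 Hz "spindle"
eeg[burst] += 30e-6 * np.sin(2 * np.pi * 13 * t[burst])

# crude sigma-band (11-16 Hz) filter: zero everything else in the spectrum
spec = np.fft.rfft(eeg)
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
spec[(freqs < 11) | (freqs > 16)] = 0
sigma = np.fft.irfft(spec, n=eeg.size)

# moving-RMS envelope, then a threshold relative to the median level
win = int(0.25 * fs)
env = np.sqrt(np.convolve(sigma**2, np.ones(win) / win, mode="same"))
detected = env > 4 * np.median(env)

print(f"spindle between {t[detected].min():.1f}s and {t[detected].max():.1f}s")
```

A real scorer would add duration limits (spindles last roughly 0.5 to 2 s) and per-subject frequency tuning, which is exactly the individual variability Kurt mentions.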
[21:22] Jeff: So, sleep spindles are involved with learning, but not learning as we usually think of it; rather, in forgetting.
[21:29] Kurt: Yeah, you know the thing is when I say, these things I’m telling you my current understanding and the research is you know, every day there’s new papers coming out. So, there are people that research only sleep spindles that’s all they do for a living for 20 years.
So, I’m sure if they heard that statement from me, they could say well actually that’s not completely accurate. To me what they are, they’re little scribbles that appear on the EEG.
And if I can identify scribble versus non-scribble and mark that as N2, I’ve already segmented N2 and N3 from everything else. So, you see how there’s a very different angle on this from a data analytics point of view.
And if you were to get the guy that studies only spindles and you told him, take a recording and tell me where the spindles are, and write software for it, he wouldn’t be able to do that.
So, it’s very broad and very specialized; I want to make sure I don’t make statements that are outside of my realm of expertise.
[22:21] Jeff: Let’s move on from N2; then we hit deep sleep, slow-wave sleep, N3.
Stage 3 (Slow wave) Sleep (N3)
[22:28] Kurt: So, we hit N3, and here I have to talk about my personal grievance with the division between N2 and N3, because it’s one of the most unscientific things about sleep staging. Basically, the most notable feature of this sleep is that you have these delta waves, also called slow waves; that’s why they call it slow-wave sleep, right?
And people have decided for some reason that when you have more than a specific amount of Delta waves within one epoch, which remember is an arbitrary 30 second period; then that’s going to qualify as N3.
The way that my algorithm works internally with ZMax, I don’t really have an N2 and N3; I produce them afterwards to make the researchers happy, and I hope for the best that it’s close enough to what they expect.
But it’s a smooth continuum which is exactly what you see on the electrical activity. In the electrical activity, there is a smooth continuum of increase in slow wave production as you move from N2 to N3.
If I want to be more precise let’s say there are some people and some nights in which throughout this continuous process, there’s a point in which the production of delta waves increases all of a sudden.
So, in this continuous process there’s a bump that you can identify. It’s not to say that there’s no state transition of some sort. But as far as I’m concerned, this determination of N2 versus N3 is one of the most problematic; it’s the one that changes a lot depending on who’s scoring and how much attention they’re paying. Two different human scorers will disagree on the same epochs.
Or, if they redo the same scoring the next week, they will give a different result. One time they will say, yeah, it’s not really exceeding the threshold; the other time they will say, yeah, it reached it. So, you see how error-prone this process is.
And so, for this reason, I think of N2 and N3 as really the same phase, but a phase in which there’s a continuous increase in slow wave activity.
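Kurt’s ‘smooth continuum’ argument is easy to see numerically: compute a per-epoch delta fraction and then watch an arbitrary threshold chop the continuum into ‘N2’ and ‘N3’. This sketch uses relative delta power as a simple proxy; the real AASM criterion is about slow-wave amplitude and the fraction of the epoch they occupy, and the 0.5 threshold here is made up.

```python
import numpy as np

fs = 128
EPOCH = 30 * fs                # one 30-second epoch in samples
rng = np.random.default_rng(2)

def delta_fraction(epoch):
    """Fraction of spectral power in the 0.5-4 Hz delta band (a crude
    proxy; AASM actually scores by slow-wave amplitude and duration)."""
    spec = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(epoch.size, d=1 / fs)
    total = spec[(freqs >= 0.5) & (freqs <= 30)].sum()
    delta = spec[(freqs >= 0.5) & (freqs <= 4)].sum()
    return delta / total

# synthetic night fragment: delta amplitude ramps up smoothly epoch by epoch
t = np.arange(EPOCH) / fs
fractions = []
for k in range(10):
    delta_amp = 10e-6 * k                         # grows with each epoch
    epoch = (delta_amp * np.sin(2 * np.pi * 1.0 * t)
             + 15e-6 * rng.standard_normal(EPOCH))
    fractions.append(delta_fraction(epoch))

# a hard threshold chops the smooth continuum into "N2" vs "N3"
labels = ["N3" if f > 0.5 else "N2" for f in fractions]
print([(round(f, 2), lab) for f, lab in zip(fractions, labels)])
```

The delta fraction rises gradually, yet the labels flip at one epoch, which is the arbitrariness Kurt is complaining about.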
[24:40] Jeff: That’s really interesting, the fact that these states are not, as we might perceive them, deterministic; that there isn’t a hard transition between N2 and N3. So again, what would you say about the function of N3, deep sleep?
What are the functions of slow-wave sleep?
[25:03] Kurt: Well, I think that is something where you should probably ask a researcher. The functions, right? It’s been associated with all sorts of things: with memory consolidation, with repair of the body, and then there’s the idea that there are some metabolic processes in the brain.
So, the brain is not just a computing device, it’s not just a chip, because it’s biological. Those cells are the same stuff that’s inside all the cells in your body: they’ve got ribosomes, they’ve got lysosomes, they produce junk, and they need to get rid of the junk, which is a byproduct of the metabolism.
So, not everything the brain is doing is cognitive. Some of this activity might just be produced by the brain going in and getting rid of junk substances and breaking them down, which could be something that doesn’t really like to take place at the same time as you’re actually using those neurons.
But there are new papers coming out trying to make sense of what exactly the brain is doing depending on the electrical activity. For example, we used to think that dreaming only happens in REM sleep, and now there are people saying that dreams can actually also occur during slow-wave sleep.
[26:18] Jeff: Okay, that’s a nice segue into the next stage of sleep. REM or rapid eye movement sleep.
REM sleep – the actionable phase
[26:26] Kurt: Yes. So, REM is actually, for me, the most important phase, because it’s a phase that’s actionable in some way. As you remember, this started as a lucid dreaming project. Very briefly: lucid dreaming is a state of consciousness in which your brain is asleep but you can still be conscious within the dream, except you don’t have any input, so the brain is generating everything that you feel and see.
And lucid dreaming is a very unstable, very fragile state of consciousness, in which you’re able to remember or notice that you’re in a dream while you’re in a dream. The problem there is that, because your brain is basically already kind of fully working at that point, it doesn’t really like to stay asleep. So, one of the big challenges in lucid dreaming is not so much to lucid dream, but to avoid waking up, because people get excited.
So anyway, the idea is that you wait until REM sleep, and the way you do that is by looking at eye movements. The one part of the ZMax system that’s working the best, that I’m really satisfied with, is the REM detection, which works in real time, unlike the hypnogram production, which works offline: you record the night and afterwards it produces the hypnogram.
But the REM detection is done live and it’s done by looking at the peculiar features of REM sleep, which are eye movements.
Now, I don’t know how much detail you want me to go into, but the problem is that sleep actually has a lot of things that happen that look like eye movements. So, separating the real eye movements from the fake eye movements is where all the difficulty is. But basically, in REM sleep, you’re moving your real, physical eyes in the same way that you’re moving your dream eyes.
And so, you can look at eye movements, which produce a very strong electrical oscillation on the EEG, and say, oh, okay, I see now the guy is moving his eyes, and he doesn’t seem to be awake, and there’s no body movement, and this and that. So, that’s the REM sleep phase.
[28:29] Jeff: Okay, and what about the characteristics of REM when you’re looking at the EEG?
What are the characteristics of REM?
[28:36] Kurt: So, in REM sleep, you don’t have alpha, you don’t have spindles, you don’t have K complexes and you don’t have slow-wave sleep, you don’t have slow waves or delta waves.
All you have is pretty flat electrical activity, with some theta activity, which anyway is present all the way throughout the recording, but it’s very visible in REM because there’s almost nothing else, and you have eye movements.
The eye movement activity is very evident on the EEG signal, even though it’s captured on the forehead. If you do this in a lab with all the electrodes, what they do is they put some electrodes on your forehead and some of them near your eyes, and the ones near the eyes they call EOG, for electrooculography. But the thing is, it’s all superimposed, see: even the EOG channels get a little bit of EEG, and the EEG certainly gets a whole lot of EOG.
Separating EEG and EOG with software
So, what I do with ZMax is use software to separate the two and identify the EOG signals that are superimposed on the EEG. By doing that, I can get away with just having a simple headband, without having extra wires or telling you to connect things near your eyes. So, basically, all you see is theta activity with eye movements.
You also see some other things; remember, ZMax is multi-sensor. The difference between eye movements in REM and eye movements in wake, for example, is that during REM you’re not moving around. The eye movements in REM are also very rounded at the tips compared to the eye movements you see during wake; that’s probably because the paralysis of the body that happens throughout REM sleep is not complete.
So, the paralysis doesn’t fully affect the eyes, but it’s also not zero at the eyes; the eye movements in sleep are more subdued than during wake. And you also see some bursts of EMG activity, which is electromyography: just the facial twitches. If you look at a pet while they’re sleeping, you sometimes see they have small movements of the paws or of the face.
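The real-time REM detection Kurt describes relies on eye movements producing large, slow deflections on the forehead channels, much bigger and slower than background EEG. A toy sketch of that idea (not the ZMax algorithm; the pulse shape, frequency cutoff and threshold are assumptions):

```python
import numpy as np

fs = 128
t = np.arange(0, 20, 1 / fs)          # 20 s of synthetic forehead signal
rng = np.random.default_rng(3)
signal = 10e-6 * rng.standard_normal(t.size)      # background EEG

# hypothetical rapid eye movement: a large, slow deflection around t = 12 s
signal += 100e-6 * np.exp(-((t - 12.0) ** 2) / (2 * 0.15**2))

# keep only the slow (< 3 Hz) components, where EOG dominates over EEG
spec = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
spec[freqs > 3] = 0
slow = np.fft.irfft(spec, n=signal.size)

# flag samples whose slow-band amplitude clearly exceeds the background
threshold = 5 * np.std(slow[t < 5])   # baseline from an event-free stretch
events = np.abs(slow) > threshold
print(f"eye-movement candidate near t = {t[events].mean():.1f} s")
```

A real detector would also check, as Kurt says, that the accelerometer shows no body movement and the EMG shows no waking muscle tone before calling it REM.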
[30:49] Jeff: Yeah. Okay, brilliant so, for anyone who’s unaware, we’ve just described these different states, these different stages of sleep. And during the night we will cycle through these in the order that we’ve just described them; and I believe it’s around 90 minutes for the average length of a sleep cycle and we’ll get five or six different sleep cycles during the night.
Why a sleep cycle is not 90 minutes
[31:16] Kurt: So, there’s this other idea that you have all of these sleep cycles, and that there are five or six sleep cycles throughout the night. First of all, it depends how long you sleep; the more you sleep, the more cycles you’re going to have, clearly. Second, it depends how well you sleep, because if you have an awakening, the cycle is interrupted.
And it’s got to begin again, and so what I often see in a recording is that it’s impossible to determine how many cycles there were, simply because there’s no clear way to determine what a cycle is. Let’s say you go N1, N2, N3, then you wake up, then you go back: N1, N2, N3. How long is that? I don’t know. It might be two-thirds of the length of a regular sleep cycle, and then you have a full one later. Do you call that one cycle, or two-thirds of a cycle? It’s not clear.
So, we just call it a cycle if you were not disrupted in your sleep: the sequence of going from N1 to N2 to N3 and then back up to N1 and REM, we call that a cycle. In practice, though, it’s a lot harder than reading stuff on the internet might have you believe, because there are disturbances and the brain jumps back and forth.
It’s also indicative of some types of disorders if you cannot stay in slow-wave sleep. You might see somebody that goes N1, N2, N3, then back to N2, N3, then N2, then N1, and you can see that it’s not stable.
So, that’s the case in which you should actually go and visit a sleep lab and try to figure out what’s going on; but it’s something that would be revealed easily with ZMax if you try for a night.
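Kurt’s point about cycle counting being ill-defined shows up as soon as you try to write a rule for it. A naive sketch with a hypothetical hypnogram: count a cycle whenever a run of sleep reaches REM before an awakening; interrupted descents simply vanish from the count, which is exactly the ambiguity he describes.

```python
# Toy hypnogram (hypothetical), one label per 30 s epoch.
hyp = (["W"] * 4 + ["N1", "N2", "N3", "N3", "N2", "REM", "REM"]  # a cycle?
       + ["W", "W"]                                              # awakening
       + ["N1", "N2", "N3", "REM"]                               # a cycle?
       + ["W"] * 3)

def count_cycles(hyp):
    """Naive rule: a cycle is any run of sleep that reaches REM before
    the next awakening. Descents that never reach REM are not counted,
    which is exactly where the ambiguity lies."""
    cycles, saw_rem = 0, False
    for e in hyp:
        if e == "REM":
            saw_rem = True
        elif e == "W" and saw_rem:
            cycles += 1
            saw_rem = False
    return cycles

print(count_cycles(hyp))  # counts 2 runs; whether both are "full" cycles is the open question
```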
[32:48] Jeff: Okay, brilliant, thanks Kurt; that leads nicely on to what I want to talk about next. So, we’ve discussed the different sleep stages and their characteristics. Now, I just want to talk a little bit about what actually happens if you were to book yourself in for a sleep study; if you went to a sleep lab, a sleep clinic, and you were to undergo a full clinical-grade sleep study, also known as a PSG, or polysomnogram. And let’s also talk a little bit about the advantages of going to a sleep clinic, but also some of the disadvantages of trying to monitor sleep in those conditions.
Sleep lab vs home EEG monitoring
[33:32] Kurt: Right. Well, the sleep lab test is a lot more comprehensive because it has all these channels; sometimes sixteen, sometimes even as many as 256, which means your head is literally covered by electrodes and you look like some Borg Queen type creature.
As you can imagine, it’s very inconvenient, and it’s not just that. Then you get the pulse oximeter on the finger, so you’ve got stuff tied to your fingertip with tape, and you’ve got stuff across your chest which measures the respiratory effort.
And then you’ve got a thing stuck in your nose as well, called a cannula, which measures your breathing. As you can imagine, this is very accurate in how it measures things. The problem with tests in the sleep lab is that (a) you’re sleeping in an unfamiliar environment and (b) you’ve got all of these contraptions hooked to your body. So, what are you really measuring?
You’re measuring a very disturbed sleep. Now, because this is recognized, what they usually do in a study is have three nights, and the first one they call the habituation night. But I think it’s a little bit preposterous that you can habituate to sleeping like that in a day, with those kinds of things.
[34:48] Jeff: Well, instinctively it makes sense; in new environments we’re not going to sleep as soundly and as restfully as we do in our own bed, and I believe there’s plenty of research to back this up, too.
[35:00] Kurt: Well, there’s this tactile sensation of stuff adhering to your body, which is very unfamiliar, and then it restricts your ability to turn around, because you can feel there’s something there.
So, with ZMax, you only have two electrodes, and you still have a lot of the other sensors, like the movement sensor. You have the heart rate too; instead of using a finger sensor, it's detected directly from your forehead.
There is a breathing sensor that's external, and you can use it if you want to capture your SpO2 level and your breathing. Yes, that also needs to go in the nose, but it's optional.
But for determining the sleep structure, that's not needed. Remember, sleep studies are done both for analyzing breathing problems and for sleeping problems that have other causes, right?
So, if you're talking about breathing it's a very different story. But basically, with ZMax, the reason you can stage sleep even with only two channels is that the brain is not usually doing different things in different locations at different times; there might be a discrepancy of at most 30 seconds in when a particular part of the brain enters a specific stage.
But that doesn't really matter. The only challenge is that when you only have two electrodes you've got to make sure they don't fall off. So, when people try it for the first time they normally don't understand how to put on the headband; they have to try it a couple of times.
Then I give them a few tips. Now I've got a video on YouTube that shows them exactly how to put it on, and once they've resolved that problem, it's basically as good as having 16 electrodes.
For healthy subjects, I have to underscore: ZMax is a tool for healthy subjects to determine sleep structure. And of course, if you're not a healthy subject but didn't know it, you might find out this way. But it's not a system that can analyze the patterns of people with very abnormal brain physiology or electroencephalography during sleep.
So, I guess these two distinctions are important: studying sleep structure versus studying breathing is one, and studying a healthy subject versus studying somebody with recognized pathologies is the other.
The main benefit with ZMax is that it's simple: you don't need to scrape the skin, you don't need to apply gel or glue to your face, you don't need to go to a sleep lab, you don't need to plan when you're going to bed.
Like, if somebody tells me, okay, now we're ready, go to bed, there's no way I can go to sleep, right? So, some people can't even do the PSG, because they just won't fall asleep. With this, you just put it on and then go to sleep whenever you feel like it; it doesn't require preparation. It's still something that you need to put on the head, though.
So, it's not as comfortable as having nothing at all; it's not as comfortable as a wristband or something. But consider the compromise between how much accuracy you get in the EEG data and the sleep structure, versus how little discomfort it actually causes, because it's just one piece and it doesn't have all the wires.
In fact, you know, researchers ask me all the time: why don't you add EMG, why don't you add EOG, why? I'm like, no, I'm not going to add anything. If I start adding stuff to what you already have, then nobody wants to put it on anymore. So, it's got that benefit: it's easy to put on, and you're measuring your actual sleep in your home.
[38:34] Jeff: Okay. So, I wanted to ask a question. Your product is very much an EEG product; but we see a lot of other products that are dedicated to this idea of measuring or quantifying sleep. So, what are your views on products and services that are trying to do this without using EEG?
Can you measure sleep stages without EEG?
[38:55] Kurt: Oh, it's hopeless. If it were me, if I had an actual need to figure out what's going on with my sleep, and the only options were to go to a sleep lab or to buy a wristband or an app or something like that, I would go to the sleep lab.
I mean, I've looked in very close detail at hundreds and hundreds of recordings, and remember, my ZMax is multi-sensor, which means I don't just have the EEG.
Right below the EEG on each individual screen, I have the accelerometer track, which shows me the movement, and then I have the PPG, which is an optical way of measuring the heartbeat, so I have the heart rate below that as well. And while in the aggregate, if you were to take hundreds of these, yes, it's true that there's more heart rate variability during REM sleep than outside it.
And yes, it's true that you have a higher heart rate while awake. But I would never be able to use just movement and heart rate to create a hypnogram for a single person, show it to them in the morning, and tell them it's right. That's just not doable in my view.
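Kurt's objection can be made concrete with a toy sketch. The thresholds below are invented purely for illustration (no real tracker publishes its rules); the point is that any rule of this shape encodes population averages, not an individual's physiology:

```python
# Hypothetical sketch: staging a 30-second epoch from heart rate and
# movement alone, the non-EEG approach Kurt argues cannot work per-person.
# All thresholds are made up for illustration.
def naive_stage(heart_rate_bpm, movement_counts):
    """Classify one epoch using only non-EEG signals."""
    if movement_counts > 10:      # lots of movement: call it wake
        return "wake"
    if heart_rate_bpm > 70:       # elevated heart rate: guess REM
        return "REM"
    return "NREM"                 # everything else lumped together

# Two sleepers, both genuinely in quiet REM sleep:
print(naive_stage(heart_rate_bpm=75, movement_counts=2))  # REM
print(naive_stage(heart_rate_bpm=62, movement_counts=2))  # NREM (wrong):
# a person with a naturally low resting heart rate is misclassified,
# because the cutoff is a population average, not their baseline.
```

The aggregate trend Kurt concedes (higher heart rate variability in REM, higher heart rate awake) is real, but a fixed threshold fails for individuals whose baselines sit on the wrong side of it.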
It's not always the case that the customer makes a purchasing decision based on accuracy; sometimes the decision is based on novelty. Some people are early adopters, they like technology, and they want something that's technological but also effortless to use.
So, a wristband doesn't require you to do anything; something you put on your head is already a little bit more of a commitment.
[40:25] Jeff: Yeah, you're right. I think a lot of people get into sleep technology because they're early adopters and because they want to find out about their sleep. But as you say, the problem is there's no way to validate the data from some of these newer, more advanced sleep wearables that are coming out.
Consumer sleep technology and raw data
[40:50] Kurt: Well, I think the problem is that the average consumer does not know how to interpret raw data, so they're not demanding the raw data; and because they don't demand it, it's not there; and because it's not there, you can claim just absolutely anything. That's it in a nutshell. You know, having something peer-reviewed... I don't know what to say. There's only one test of whether something works or not, and it's not a study made on somebody else with data you couldn't see. It's: you take it, go to bed.
Wear it, get the raw data in the morning, and check that the determination of the sleep phases corresponds to what you see in the EEG. And by the way, check whether the EEG has any features in it, or whether it's just noise, or whatever it is. If you don't do that, you're just taking the company's word for it.
[41:40] Jeff: So, what would you say about things like Moore's law, computing power, miniaturization, cramming more and more technology into these new gadgets that are coming out, and also layering on machine learning and predictive analytics? Would you say that there's ever going to be a situation where we'll be able to monitor sleep without monitoring the sleeping brain?
Can AI help with sleep staging algorithms?
[42:09] Kurt: The thing with AI is that it's only able to recover patterns that were captured. So, take the human visual cortex; that's the best part of our brain, and the brain is the most intelligent thing in the universe. So, look at the visual cortex, because there are some things in humans that really are not very nice.
Like, you breathe, talk and eat through the same hole; that's not very intelligent design. But there are some things that are really optimized, like the visual cortex. And I think that's because we're monkeys; we needed really good 3D vision to navigate the trees and so on and so forth. But take that as an example.
You cannot look at a picture with just static and make things out of it because there’s no signal in the image.
So, the problem with the buzzwords and with AI is this: AI has been able to do a lot that we previously thought couldn't be done, like winning at Go, winning at chess, driving cars. But the problem with the data still remains. There's still the old saying, what is it, garbage in, garbage out.
So, if the thing you're trying to detect isn't in the data, there is no amount of intelligence, not even a futuristic superintelligence the size of a planet, that can go in and give you data that's just not there. So, you'll never get away from the need to acquire the signal before you go and process it.
And here we're talking about accelerometer data and heart rate data, right? Now, if there's something new that we don't know, like, for example, a Fibonacci series encoded in every wake-to-N1 transition in the heart rate that we never figured out because we're too dumb, then okay, maybe in that case. But because it's a biological signal, I doubt that AI can find some kind of slam-dunk thing that we can grab, that we previously missed.
[43:53] Jeff: Okay, so to try and summarize what you're saying: the sleep lab test is a very comprehensive one, and indeed it's the only way to do thorough medical diagnostic testing for sleep disorders. However, there are lots of drawbacks: you need loads of equipment, and you're sleeping in an unfamiliar environment, which means the sleep that's recorded isn't necessarily how you sleep at home.
You need trained professionals operating the equipment, and you need to attach lots of sensors to the body. However, the other option is consumer sleep tracking technology, which you're saying doesn't match up in terms of accuracy. So, with ZMax, you've tried to bridge this gap and bring clinical-grade EEG data into a wearable that you can use at home.
That sounds like a massive challenge. So, can you explain some of the challenges that you've faced in undertaking this project?
The challenges of building a wearable EEG sleep monitor
[45:01] Kurt: Well, with a home device, it really depends what kind of home device you're designing. I started from my own experience, because I'm very sensitive to anything that happens during sleep, including what I have on my head. So, if it's too big I'm not even going to wear it, and if I don't wear it, who's going to wear it?
I mean, it has to be comfortable. And to be comfortable, one of the things that I discovered is that size matters a lot, because you sometimes sleep on your side; for example, I sleep on my side with my face against the mattress or the pillow.
And so, the size of the thing matters if it's going to limit your movements and limit your sleeping positions. Throughout the night you're moving around, and if you hear this thing or feel it pressing against your head, that's not good.
So, the width is the most important dimension, and the thickness as well. I made it so that my nose is going to bump against things before the device does. And when you're trying to make it this small, all the engineering challenges related to any electronic device, especially analog circuitry, become a million times more difficult.
For example, the battery. Most of the space in any consumer device is occupied by the battery, and the battery's maximum capacity is limited by its energy density. And the technology we have today is these lithium polymer batteries; they have the highest energy density, other than going nuclear, which would probably not be very nice.
[46:49] Kurt: I don't think many people would want to go for that solution, because then you'd have a fluorescent forehead, and that would keep you up. But the energy density limit is a real problem; there's so much well-funded research now into making batteries with higher density, because they really are the limit on the miniaturization of devices.
And in the case of ZMax it's the same: the battery occupies most of the space, but the whole thing is still that small, so the battery doesn't have much juice to operate on. So what can you do? Well, for example, you can't afford to just transmit data wirelessly all night.
That's why a lot of gadgets try to do the processing on a single chip on the device: processing is cheap in energy terms, but radio transmission is quite expensive. With ZMax, though, I wanted a real-time system, because I also wanted it to be a lucid dreaming device. So, it has to be able to communicate with the PC, which by the way is wonderful, because then you're not limited by the power of the small microcontrollers on the device as to what kind of processing you can do.
You can have your whole PC do the processing and just use the ZMax as a data collection tool and a stimulator. And if your PC is not enough, you can connect it to the cloud; that's not a problem. So, that's very convenient.
Designing a custom wireless protocol
But when the battery is so small, I couldn't use Bluetooth, for example, because Bluetooth is too energy-intensive. And some people said, what about Bluetooth Low Energy?
They really messed that up, because what they did is impose a maximum data bandwidth. So, you have normal Bluetooth, which is so unbearably energy-intensive that the battery wouldn't even last the whole night.
Then you have Bluetooth Low Energy, which limits the data rate, so I wouldn't be able to stream the raw data. So, in the end I had to make my own radio circuitry and my own radio protocol, my own radio everything, which is like developing a new Bluetooth from scratch. But I had to do it.
And there's another problem with the radio, which is specific to sleep: when you're using a wireless mouse or a wireless remote or anything, you can point it towards the receiver, right? But you can't tell a sleeping person, hey, sleep oriented towards the receiver; they're not going to do that. They might sleep facing left or facing right.
The receiver is usually the PC, which sits on a horizontal plane on one side of the person. Now, if they're facing towards the computer, perfect signal; facing away, now you've got a problem. And it's not only the facing away: the signal is bouncing off the wall, which creates a very quick reflection, so the signal arrives twice within a very small interval, which kills it.
And on the other side, you have the head, which is mostly water, and water is very good at blocking wireless signals. So, a head-worn device is the worst possible case for data transmission. The technical challenge was to get something that would transmit reliably from the bed, regardless of the position.
And all with that little battery, just because of the size. If I could have made it as big as a brick, none of this would have been a problem: just use a big battery, use Bluetooth. I could go into all of the different key parts of the system, and it's the same thing everywhere: making it small creates engineering difficulties.
[50:32] Jeff: So, that's just one small aspect, which you've described as a sort of nightmarish challenge: getting the wireless working. But the hub of it, the crux of it, is actually measuring the electrical activity.
Measuring EEG is the easy part…
[50:48] Kurt: You'd think so, because that's the main feature, right? But actually, measuring EEG itself is not that hard; it's a decades-old technology. Where it becomes hard is when you have to do it this small, and when you have to do it in a way that doesn't have the cables.
So, making it small means I cannot buy an off-the-shelf EEG amplifier; I have to create it from scratch out of individual amplifiers and little resistors. There are like 20-30 components just to build the EEG front end. But at least they're small enough, and they can be positioned well enough, that I can make the whole thing small enough.
[51:28] Jeff: So, we talked before about how you had to go through different types of compounds for the contacts and the adhesives, exploring what worked best.
Choosing electrode materials
[51:41] Kurt: Well, there are a bunch of materials today that are able to capture signal, but not well enough that you can then make sense of it. Like conductive textiles, which is what all of the gizmos you can buy online work with.
They use conductive textiles because it looks cool; it looks like there's nothing to replace. These are dry electrodes, and unfortunately they're very bad for signal acquisition.
There are gadgets on the market that are so naive they just have metallic surfaces coming directly into contact with the skin, or they use conductive rubber. So, I spent six months doing only that: just researching the electrode material.
I got dozens and dozens of different conductive textiles, from the ones that are used for EMI shielding, which is what they're really on the market for, to custom-made gold ones, to the ones they use in nuclear plants for insulation against radiation.
I had a booklet sent by one company with like 30 different types; I still have the pictures. And the conductive rubber wasn't good enough either. There was a company that made aerospace products, and they had this other conductive rubber which wasn't stretchy at all, so that was bad, but it had a very high silver content to make it more conductive.
So, I tried that as well, but they were all very bad, until finally, trying different types of gels, I settled on a gel electrode, and this is the only one that gives me a signal that's good enough to work with. Now, this is the deciding point: if you were a big company with 40 million bucks invested in you, and you've got a board of directors and people wanting ROI...
I couldn't tell them, you know, we're going to use the gel electrodes because the signal is nice. Because then they're going to say, well, how many people are going to buy it then? We want something that looks cool, so we put it on Kickstarter, and all these people will get it because it looks cool, and then, you know, $179 is too cheap to bother returning. You see how the incentives there are different, and that's why I was able to make this choice.
[53:49] Jeff: Okay. So, we've got getting the wireless to work, we've got getting the sensors to work. How about the software side of things? As you said, sometimes you're kind of just staring into the matrix and it's hard to...
Decoding the data – staring into the matrix
[54:08] Kurt: But you are staring at an incomprehensible mess of a matrix, contaminated by sweat artifacts, blinking artifacts, movement artifacts, electrodes-coming-off artifacts. Some of the stuff we were never able to identify at all.
Jeff: Just describe some of those challenges, where what you're looking at just looks like noise, but you're saying there's structure in that noise, a signal you've got to extract from within it.
[54:48] Kurt: So, when I got done with the hardware, I thought, okay, I'm done. Now I just do the sleep staging and finally I get to market. This was five years ago. And in those five years, it may sound arrogant if I say I had to develop the science to write the software, but that's pretty much what it is, simply because...
Okay, I'm not going to make a claim; let's state facts. If you go to a sleep center, a sleep lab where sleep scientists are analyzing sleep, and you ask them, how do you translate the recording you have into a hypnogram? They pay somebody to do it. And how much do they pay him? $2? $5? $10? No, sometimes it's a hundred dollars.
So, it's not a cheap process. Imagine a study with hundreds of participants; that's tens of thousands of dollars paid to these people. So, there is a specific job, the sleep technician, whose job is to put the electrodes on the patient and to give you the hypnogram. They look at the data, and then they give you the hypnogram.
Designing an automatic sleep scoring algorithm
This means that there's no algorithm on the market which you can reliably pay for, feed data into on a computer, and have it tell you: this is this stage, this is that stage.
There are some companies that claim to do it, and they charge you just as much as the technician. And I was able to dig deep enough to figure out that actually it's not fully automatic: there's a guy there, and the algorithm does half of it and the guy does the other half. That's why it's still expensive.
[56:08] Jeff: So, what you're talking about is taking raw data as such, whether it's from ZMax or from any other device.
[56:17] Kurt: Exactly, and that conversion is a very nasty data analysis problem. And as far as I can see, to this day not only has it not been solved, but people are looking the wrong way and making the wrong assumptions, so they're not going to solve it anytime soon, no matter how much they throw at the problem. I could give you the buzzwords and go into excruciating detail, but I won't.
So, I went really deep into trying to make this translation, because the raw data doesn't do you any good; what you want to see is the stages, right? As for the challenges related to that, I think the number one challenge is the variability between people, which, as far as I can see, nobody is discussing, even in the scientific literature.
A nasty data analysis problem
When you read about these things very naively on the internet, they say, okay, alpha is 10 Hertz and beta is 20 Hertz and whatever, and that's just nonsense. It's not a digital signal. And there are really hard cases. For example, the sleep spindles are supposed to be at 12 Hertz and the alpha at 10 Hertz, and that gives you a really nice clean way of segmenting wake from sleep.
However, many people have alpha and spindles that are partly or fully overlapping, and when you feed that kind of person's data into some of these algorithms, like the ones used by, I don't know, the Zeo?
For example, Zeo used to use something like this: a naive type of bandpass filtering, where they would just look at the magnitude of the activity in this frequency band versus that other frequency band, then make a comparison and a determination based on that.
That yields not just a lack of accuracy but a catastrophe, as in, your whole slow wave sleep becomes wake. So, the variability and the overlap of the frequencies create a need to go into the signal and first of all figure out: what type of brain is this? If you don't do that first, then there's no way you can score it.
I mean, you're going to get a nice result in 20% of people, a so-so result in another 30%, and then for the remaining 50% it's going to be a disaster. And this is why companies don't normally show you the raw data: because if they did, you would realize that for half the people it's just a disaster, spouting nonsense.
And so, getting this to work reliably on 99% of people meant that I had to find a way to determine, from the data, first of all: what is really alpha and where is it, where are the spindles, where is the delta, and which ones are the eye movements? It's like a puzzle, so it was tough, to say the least.
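The naive band-power comparison Kurt attributes to Zeo-style algorithms can be sketched on synthetic signals. The band edges, sampling rate, and pure-sine "EEG" below are illustrative assumptions, not any vendor's actual algorithm; only the failure mode (alpha overlapping the spindle band) is taken from what Kurt describes:

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(signal, lo, hi):
    """Power in the [lo, hi] Hz band via a plain FFT periodogram."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

def naive_score(epoch):
    """Zeo-style rule: whichever band wins decides the stage."""
    alpha = band_power(epoch, 8, 12)     # treated as a "wake" marker
    spindle = band_power(epoch, 12, 16)  # treated as an "N2 sleep" marker
    return "wake" if alpha > spindle else "N2"

t = np.arange(FS * 2) / FS  # a 2-second snippet

# Textbook subject: alpha at 10 Hz sits cleanly in the alpha band.
print(naive_score(np.sin(2 * np.pi * 10 * t)))    # wake (correct)

# Subject with fast alpha at 12.5 Hz, inside the "spindle" band: the
# same waking rhythm is now scored as sleep.
print(naive_score(np.sin(2 * np.pi * 12.5 * t)))  # N2 (catastrophically wrong)
```

This is the catastrophe Kurt describes: for brains whose rhythms don't sit at the textbook frequencies, a fixed band comparison doesn't just lose accuracy, it inverts the answer, which is why he argues you must first characterise the individual brain before scoring.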
[58:55] Jeff: Okay. So, it's very clear that it's a big puzzle, a huge complex problem, trying to decipher what's going on in the brain while we're asleep. So, can we go back to a topic we touched on a bit earlier, these buzzwords we hear all the time: artificial intelligence and machine learning. How did that fit into the design and development of ZMax?
AI is not smart enough yet
[59:24] Kurt: Well, a lot of the people that are using AI to do stuff have huge datasets. For example, the vision datasets for self-driving cars are huge. And AI is still pretty stupid. It might seem smart, but the reason it seems smart is that it's got one superhuman faculty, and that is to use a lot of sensory data. So, it's not really a brain yet; what it is, it's like the first layer of your visual cortex, in terms of its lack of intelligence.
It's really still very stupid, but you can do fantastic things with it, because it turns out that even a primitive sensing layer works if you give it an unbelievable amount of data. Think of all the images on Google Images: if you search for 'box' it will actually return you images of boxes. But they have a database that's enormous, and it's also labeled, because they have the keywords.
And that's maybe not perfect labeling, but there is actually perfect labeling out there: there are datasets of millions of annotated images you can bootstrap from. With sleep, you might have a lot of recordings, and they might have a division into epochs, but they're not annotated, meaning that no one has a dataset of a million sleep recordings...
...where every sleep spindle is marked beginning to end and every sweat artifact is marked beginning to end. We don't have that. So, they're applying the same stuff that works on enormous datasets to very limited datasets that are not even annotated. I don't know how in-depth you want me to go with this, but I just had a discussion with somebody who raised his hand while I was giving a speech, when I was saying that AI on sleep still doesn't work.
And he said, I did my PhD thesis using convolutional neural networks, which are, you know, the super-hot topic right now, CNNs. And I said, okay, no problem; I replicated his code, because he's got it on GitHub, and I ran my recordings through it, and just like I told him, all it yielded was a disaster. This is not to diss the guy; he's a smart guy, I appreciate what he's doing, I think it's useful to do that. But when we looked at it, we found the cause.
And the cause is that these classifiers are still looking at each epoch individually, so they don't know the sleep structure. They don't know what the sleep structure is, so an alpha at 12 Hertz and a spindle at 12 Hertz are going to look like the same thing to them, and of course they're going to get confused. Imagine if I tell you to categorize men and women, but then I start putting women's faces on men; it's going to throw you off, right?
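Kurt's point about epoch-by-epoch classification can be illustrated with a toy scorer. The frequency range and the continuity rule below are illustrative inventions, not a real scoring algorithm; the idea, that surrounding sleep structure disambiguates a 12 Hz burst, is his:

```python
# Toy illustration: a 12 Hz burst is ambiguous in isolation (alpha means
# wake, a spindle means N2), but context from the surrounding hypnogram
# resolves it. Labels and rules are illustrative only.

def score_epoch_in_isolation(dominant_freq_hz):
    """What a per-epoch classifier sees: frequency alone."""
    return "ambiguous" if 11 <= dominant_freq_hz <= 13 else "other"

def score_epoch_with_context(dominant_freq_hz, previous_stage):
    """Add the continuity a human scorer uses."""
    label = score_epoch_in_isolation(dominant_freq_hz)
    if label != "ambiguous":
        return label
    # A 12 Hz burst right after sleep is almost certainly a spindle;
    # the same burst following wake is almost certainly alpha.
    return "N2" if previous_stage in ("N1", "N2", "N3") else "wake"

print(score_epoch_in_isolation(12))        # ambiguous
print(score_epoch_with_context(12, "N2"))  # N2   (scored as a spindle)
print(score_epoch_with_context(12, "wake"))# wake (scored as alpha)
```

A real system would use richer features and probabilistic context rather than one hard rule, but the contrast shows why a classifier blind to sleep structure confuses overlapping rhythms.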
So, being blind to what the algorithm is doing is a problem with AI, right? When you use most of these pattern recognizers, one of the problems is that you don't know what they're doing: you train the neural net, and then there's this whole other line of research trying to do the reverse process and interpret it.
[01:02:18] Jeff: So, if we don’t know what’s going on how do we go about training a neural network to figure out what’s going on in the EEG?
[01:02:28] Kurt: Well, you can train it on the features. Theoretically it can be done; you just need much better datasets, and you need a lot of them. Because the recognizer in AI doesn't need to know what it's looking at; all it's doing is looking at features.
Features are represented as partitions in a very high-dimensional space, like points, and if the system can figure out which points belong where, it's done.
The question is, how can you teach it where they are? And the good thing about AI is that it can learn implicitly from examples. That is, you don't need to create an algorithm that says, okay, this is what a spindle looks like.
I mean, it gets really technical, but I think it will be done eventually; it's just not there yet. And to get it to work that way still requires the same amount of work, or maybe more, but it's much more boring work, because you would have to annotate millions of records.
And who knows, research is moving fast; maybe tomorrow we get an algorithm that does everything I'm saying cannot be done. But my question is, is the data in there? Because I'm sure of one thing: if you didn't capture the data, then no matter what AI you use, it's not going to be able to create it out of thin air to match your expectations.
[01:03:44] Jeff: There are huge, massive datasets out there right now. I talk to CEOs of big companies; they've got millions, tens of millions of nights of sleep data, but as you say, it's not annotated.
[01:03:56] Kurt: So, it's like having millions of pictures with nothing written on them. At some level you might be able to discover relationships; it gets into a much subtler conversation about whether you can do automatic extraction of things that are similar. But the point is, you also want to break that down into epochs, and epochs, remember, are a convention; they're kind of artificial.
So, if you let the AI figure out what goes with what, it might come up with something that looks completely different from sleep staging. But it might be more interesting and more accurate, so we should use it if it does do that.
[01:04:32] Jeff: Awesome. We could carry on that conversation for longer, but we have to move on. So, Kurt, just briefly, specific applications of something like ZMax. I mean, we've talked about how it originally came from this idea of lucid dreaming; can you talk about the broad range of applications?
Zmax applications – lucid dreaming
[01:04:55] Kurt: On the broadest level, for consumers that are not sleep technicians or sleep researchers, one of the coolest things that you can do with ZMax is lucid dreaming.
I guarantee you that the other things they're selling cheap on eBay don't work, and they're not scriptable. ZMax is verifiably detecting REM sleep; it's used by lucid dreaming researchers because it's able to actually trigger cues during REM sleep and only REM sleep, and it can show you exactly the stimulation points in the morning.
And you can script it using JavaScript. You can decide: okay, if we're in a REM epoch, then play back this sound, or do this color stimulation, or do whatever you need to do in your lucid dreaming experiment or protocol. So, you can use it for that; that's the most readily packaged thing that you can just buy and get an immediate benefit from.
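Kurt says the real product is scripted in JavaScript; as a rough illustration of the control flow only (sketched here in Python, with a stand-in `play_sound` callback rather than any actual ZMax API), the REM-cue logic he describes amounts to:

```python
# Sketch of the REM-cue loop Kurt describes: fire a cue for every REM
# epoch and keep the stimulation points so they can be reviewed in the
# morning. The play_sound callback is a hypothetical stand-in.
def lucid_cue_loop(epoch_stages, play_sound):
    """Return the indices of the epochs where a cue was delivered."""
    stimulated = []
    for i, stage in enumerate(epoch_stages):
        if stage == "REM":           # cue in REM sleep and only REM sleep
            play_sound("cue.wav")
            stimulated.append(i)
    return stimulated

# Simulated night of scored 30-second epochs:
night = ["N2", "N3", "N2", "REM", "REM", "N2", "REM"]
print(lucid_cue_loop(night, play_sound=lambda f: None))  # [3, 4, 6]
```

In the real system the staging happens live on the streamed EEG rather than on a pre-scored list, which is exactly why reliable real-time REM detection is the hard part.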
Then, if you want to learn more about sleep, it's an amazing educational tool, because, you know, for blood pressure you've got a blood pressure meter you can get at the supermarket nowadays, a scale for your weight, and a mirror to see what you look like.
Sleep is a very important biological function for which people haven't really been able to go out and buy something that works like a scale, something that just shows you what happened.
And sometimes you can discover very interesting things this way. I remember one of the most famous researchers that I'm working with; he started to use ZMax as an early adopter two years ago, while it was still in development. We put it on him and we discovered that he had snoring and sleep apnea, and he didn't know, and he's a sleep researcher, right?
And I was able to show him: look, here's a breathing interruption, boom, here's the desaturation going down; it causes a position change and it disrupts the sleep. So, now we're going to do experiments with him, and what we're going to experiment on is the following. Since it's also a stimulation tool, and since he's the only confirmed sleep apnea patient that I know of...
We're going to do real-time sleep apnea detection. When he stops breathing, there's a delay of about 15 seconds between the breathing stopping and the blood oxygen going down to dangerous levels, which then causes a sensation of choking and causes him to change position or wake up.
So, we're going to detect the interruption in breathing and give a vibratory stimulation immediately, so that he's able to turn around, or in any case come back to a lighter sleep stage, without waking up and without having the apnea. We're going to experiment with how that works.
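The experiment Kurt outlines (detect the breathing pause and stimulate inside the roughly 15-second window before desaturation) could be sketched as follows. The 10-second pause threshold and airflow floor are illustrative assumptions, not ZMax's actual parameters or a validated clinical rule:

```python
# Sketch of real-time apnea-triggered stimulation: watch a 1 Hz airflow
# signal and fire a vibratory cue once breathing has been absent for
# APNEA_SECONDS, well before blood oxygen falls. Thresholds are invented.
APNEA_SECONDS = 10     # pause length that counts as an event
AIRFLOW_FLOOR = 0.1    # below this, treat the signal as "no breathing"

def detect_apnea_onsets(airflow_per_second):
    """Return the second indices at which a vibration would fire."""
    triggers, quiet = [], 0
    for t, flow in enumerate(airflow_per_second):
        quiet = quiet + 1 if abs(flow) < AIRFLOW_FLOOR else 0
        if quiet == APNEA_SECONDS:   # fire exactly once per pause
            triggers.append(t)
    return triggers

# Normal breathing, then a 14-second pause, then recovery:
signal = [1.0] * 20 + [0.0] * 14 + [1.0] * 10
print(detect_apnea_onsets(signal))  # [29]: fires 10 s into the pause,
# roughly 5 s before desaturation would be expected to begin.
```

The design point is the timing budget: because desaturation lags the pause by about 15 seconds, even a simple detector acting at 10 seconds leaves margin to rouse the sleeper to a lighter stage before oxygen drops.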
So, you see that on the whole as a platform where you can try different things, it has a usefulness that perhaps goes a little bit beyond just capturing data.
Biofeedback, neurofeedback, sleep and relaxation
Another completely different vertical is biofeedback, neurofeedback and generally biofeedback is the process by which, you take some data you acquire from the body and turn that into something that you can either see or hear generally.
And so, for example you can get a measure of your movement and play sound based on your movement or you can do with the breathing or with your brain activity and because ZMax streams all of these data channels in real time with very low latency.
You're able to feed them into software that translates these signals into sounds, and what that allows you to do is learn to control, for example, your heart rate, your respiration, or the amount of facial tension – which ideally you want to reduce – or to develop the ability to be perfectly still. This is interesting for people who are trying to meditate or to learn relaxation for insomnia, and generally to be able to get to sleep quickly.
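The core of a biofeedback loop like the one Kurt describes is just a mapping from a streamed body signal to something audible. Here's a minimal sketch of that idea in Python; the signal range, pitch range, and the notion of a normalized sensor sample are illustrative assumptions, not specifics of the ZMax API.

```python
import math

# Hypothetical sketch: map one streamed biosignal sample (e.g. respiration
# or muscle-tension amplitude, normalized by a device SDK) to an audio pitch.
# Ranges below are illustrative assumptions, not ZMax specifics.

SIGNAL_MIN, SIGNAL_MAX = 0.0, 1.0     # assumed normalized sensor range
PITCH_MIN, PITCH_MAX = 220.0, 880.0   # A3..A5, a comfortable audible span

def signal_to_pitch(sample: float) -> float:
    """Map a normalized biosignal sample to a frequency in Hz.

    Uses an exponential mapping so equal signal steps sound like equal
    musical intervals (pitch perception is roughly logarithmic).
    """
    # Clamp so sensor spikes don't produce extreme tones
    x = min(max(sample, SIGNAL_MIN), SIGNAL_MAX)
    t = (x - SIGNAL_MIN) / (SIGNAL_MAX - SIGNAL_MIN)
    return PITCH_MIN * (PITCH_MAX / PITCH_MIN) ** t

# A calm, relaxed signal keeps the tone low; tension raises it, giving the
# listener immediate feedback they can learn to steer.
for sample in (0.0, 0.5, 1.0):
    print(f"signal={sample:.1f} -> {signal_to_pitch(sample):.1f} Hz")
```

In a real setup this function would run per-sample against the device's low-latency stream and drive a tone generator; the learning effect comes from the person hearing the consequence of their own physiology in near real time.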
[01:08:49] Jeff: Yeah, and sleep apnea, briefly. Can you explain what you're working on now? You published some videos about some new features recently.
ZMax and sleep apnea detection
[01:08:59] Kurt: So basically, some people stop breathing while they're sleeping, and in some people it's really dangerous because they're just going to suffocate. People who have this problem already know about it – they know everything about sleep labs and they have a CPAP machine at home.
I don't think I can add anything to that conversation, because there you really need machinery to help you breathe. There are, however, many more people who have the same symptom, but not as serious.
In fact, it might be latent and subtle and not even be something that causes them to wake up. If you wear a ZMax with the nasal sensor for the whole night, you might see that at certain points you stopped breathing, and then you can see what happened after that.
And you can ask: okay, did I wake up or not? Now, let's assume, like the vast majority of people, it's a mild thing. So you had an oxygen desaturation – meaning your blood oxygen level went from, say, 93 to 88 – and then you turned around. And the next epoch was still N2 or N3, so that shows you that you didn't actually wake up.
However, what you should do is wear it while you're not sleeping, try to hold your breath, wait until that number goes from 97 or whatever it is down to 88, and check how that feels. You're going to be suffocating – it's going to be really unpleasant. I think some people are not even going to be able to hold their breath that long.
So that means that while you're trying to be peaceful and sleep like a baby, you're almost getting choked to death – that can't be good for you, right? So the application, since ZMax also does stimulation, is what we're testing now: to intervene after you stop breathing but before you have the desaturation. Because it takes some time – the blood is a very good buffer for oxygen, so you can hold your breath for a while and not feel anything special, but it drops off all of a sudden after about 15 seconds, which mirrors the amount of oxygen that's stored in the blood.
That means we have time to intercept an interruption in your breathing, in the airflow, and before the desaturation – before your blood oxygen actually has time to go down by that much – we can intervene with a sound or vibration, something that allows you to snap out of it, so to speak, whether that means moving around, perhaps without even waking up, but in any case something that stops it.
And if we can stop it, then for people whose problem is not severe enough to wake them up, or to require hospital treatment or a CPAP machine, we can actually stop these apneas from occurring. If we can do that, it may translate into an enormously improved quality of sleep as perceived when the person wakes up.
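The intervention logic Kurt outlines – detect a pause in airflow, then trigger a stimulus before the roughly 15-second oxygen buffer in the blood runs out – can be sketched as a small state machine. This is a hypothetical illustration of that idea, not the ZMax firmware: the thresholds and the idea of a per-sample callback are assumptions.

```python
# Hypothetical sketch of real-time apnea intervention as described above:
# if nasal airflow stays below a threshold long enough, fire a stimulus
# BEFORE the ~15 s blood-oxygen buffer is exhausted. All constants and the
# sampling interface are illustrative assumptions, not the ZMax API.

APNEA_AFTER_S = 10.0    # intervene after 10 s of no airflow (< ~15 s buffer)
FLOW_THRESHOLD = 0.1    # airflow below this counts as "not breathing" (assumed units)

class ApneaWatcher:
    def __init__(self):
        self.pause_start = None   # time when airflow first stopped, or None
        self.events = []          # times at which a stimulus was triggered

    def on_sample(self, t: float, airflow: float) -> bool:
        """Feed one (time, airflow) sample; return True if a stimulus fired."""
        if airflow >= FLOW_THRESHOLD:
            self.pause_start = None          # breathing resumed, reset timer
            return False
        if self.pause_start is None:
            self.pause_start = t             # a pause just began
        if t - self.pause_start >= APNEA_AFTER_S:
            self.events.append(t)
            self.pause_start = None          # don't re-trigger for the same pause
            # a real device would fire a vibration or sound here
            return True
        return False

# Simulate 15 seconds of no airflow, sampled once per second:
watcher = ApneaWatcher()
for t in range(15):
    watcher.on_sample(float(t), airflow=0.0)
print(watcher.events)  # stimulus fired once the pause exceeded 10 s
```

The key design point matches the reasoning in the interview: triggering on the airflow pause rather than on the desaturation itself buys several seconds, which is what makes a gentle, possibly non-waking stimulus feasible.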
To give you a number: this particular person we were looking at had at least 20 to 30 different episodes of nearly choking to death – 88 saturation – throughout the night. I have the file; I can show you if we do video next time. Imagine that we removed that – how much better is he going to feel in the morning if his whole night wasn't spent suffocating?
[01:12:25] Jeff: It's a huge problem, and there's definitely loads and loads of room for improvement, not just in diagnostics but in treatment as well. So it's really interesting.
Well, it's been a fascinating conversation and we've got to bring this to a conclusion, unfortunately. But before you go, Kurt, can I get your thoughts on where you think we might be in 10, 20, 30 years' time with regards to sleep and its relationship with technology?
The future of sleep is in biotechnology, not hardware
[01:12:57] Kurt: Right. So most people underestimate the progress that we're going to make in biotech in the next 10 years. I remember when I first started using computers, from the age of 11, and back then, if you asked the average person what a floppy disk was, how to format it, how to use a computer or a mouse, they didn't know.
So I remember an age in which this stuff just didn't exist, and today you've got grannies on the bus discussing whether 64 gigabytes is big enough or not. That has been the revolution in computing, but the next revolution is in biotech. The state of biotech today is like a tsunami on the horizon – it's like seeing those room-sized computers in the 70s and being able to forecast that by 2018 we'd have the iPhone. That's what's about to happen in biotech. It's going to be one revolution after another – bang! I'm seeing life-changing things coming, and people are going to wonder where the hell it all came from, because they didn't follow the research for the past 10, 20, sometimes 50 years.
And these things are going to be so much more important than computing, because they actually affect your life, your well-being – psychological interventions for depression, and of course sleep, which is one of the most problematic areas; almost everybody's sleep-deprived. We have an urgent need to sleep better with fewer hours.
Of course, it would be better if we could sleep more hours, better – but realistically people are not going to go for that, so we're going to try to sleep fewer hours, better, and get the same amount of restfulness. And so we're going to have pills.
[01:14:41] Jeff: Right, okay. I was going to say, give us some specific things that people can visualize.
[01:14:46] Kurt: So for example, there might be a biomimetic molecule that acts in the same way as the hormones that drive you to fall asleep. The knowledge we have to gain in these fields – which we are gaining slowly, but faster as we get better and better screening methods – is to figure out exactly what the pathway is.
Because maybe in the first iterations you're going to have to take a cocktail of six things – and I know, when I say 'cocktail' it sounds really bad, because you think of a cocktail of drugs.
But progressing in biotech means that you can interface with the human body without all the side effects. So you would be able to have medicines with no side effects, where initially you maybe take a cocktail of six different things that recreate the same exact metabolic processes that lead you to fall asleep naturally and sleep well.
But then maybe the next iteration isn't going to be six anymore, because you found the one precursor that signals the whole system – maybe something in the retina that's triggered by blue light, and now we have an antagonist that can go there and block it. You take it at 6:00 p.m., and by the time it's 9:00 p.m. your body thinks you've been in total darkness for three hours, okay?
So that kind of revolution, I think, is inevitable. I don't think it's necessarily going to be tech, but perhaps tech has a big role in developing the biotech, because we need better and better ways of looking inside the brain, and we need ways of collecting more data from more people – which also means keeping the technology very simple to use.
So a bunch of different techs that used to be really invasive are now moving to be more wearable. If before there was PSG and now there's ZMax, perhaps at some point in the future it's going to be something you put inside a contact lens, and eventually it's going to be one of the different modules you put inside your stable brain implant, where it just downloads to your PC while you're sleeping and doesn't even need electrodes at that point. Well, you'll hear about all of that on your podcast when it happens.
[01:16:52] Jeff: Yes, exactly. Thanks so much. If people are interested in what ZMax can do, whether they're an individual or affiliated with a research institution, how do they get hold of ZMax?
[01:17:02] Kurt: Well, first of all, I think they're going to have a bunch of different questions about the electrodes and about the data, and they can get answers to all of that on my website, which is hypnodynecorp.com – 'hypno', you know, like hypnosis; it actually means sleep in Greek. There you can download the viewer for the ZMax data, and you can also download sample data files.
There are ZMax data files and PSG files from the same person, the same night, acquired concurrently, that you can download, so you can compare for yourself what the ZMax data looks like versus what the PSG data looks like. You can also download data files that include auto-scoring, so you can see the scoring quality.
And generally speaking, for any questions you might have – videos of the software, video tutorials for lucid dreaming and biofeedback – everything is on the site, with links to YouTube. So hopefully that will answer all the questions you might have.
[01:18:00] Jeff: That's great. Thanks so much, I really appreciate your time today.
[01:18:03] Kurt: Thank you, I appreciate it.
How can it be that the sleep stages are so tightly defined if we have no reliable and consistent way of measuring them – if there is disagreement in interpretation between two people looking at the same data?
How thorough is the data on these sleep phases?
And what are the most accurate measurements of sleep quality? I mean, that is the point, right – to improve sleep quality?
Hi Harmen,
Great question. I think there's a bit of a misconception about sleep staging, because we're used to seeing representations of it in hypnogram format – a square, blocky graph showing strictly defined stages of sleep.
However, as Kurt explained, brain activity is not deterministic like this, and it's possible that different regions of the brain are simultaneously experiencing different states of the classically defined sleep stages.
This would explain phenomena such as lucid dreaming, sleep walking, sleep paralysis etc., but it's not possible to represent this in a simplistic hypnogram.
EEG and polysomnography are currently the most widely used and recommended 'gold standard' for estimating or measuring sleep and sleep stages. But as we discussed with another guest, Guy Leschziner, on the podcast recently, other tools like fMRI imaging can give far more detailed insights into the sleeping brain – these techniques are just even more cumbersome, expensive, etc.
So even though two human scorers might come up with two different sleep staging estimations of a PSG recording, for now, at least in clinical settings, we have to overwhelmingly rely on EEG/PSG to diagnose and quantify sleep.
This will likely change in the coming years as AI and machine learning become more integrated into sleep scoring, reducing human error and pushing the research forward.