New Study Aims to Induce Lucid Dreams with Light, Sound, and Vibration

Brian Gilan
4 min read · Sep 8, 2022

Teams across the world are working on making lucid dreaming more accessible using technology. One example is an upcoming lucid dreaming induction study (full details here) that I’ll summarize below.

This study will combine the senses-initiated lucid dream (SSILD) technique with targeted lucidity reactivation (TLR).

SSILD is a technique in which the practitioner sequentially shifts attention between visual, auditory, and tactile sensations. In order, they observe visuals with their eyes closed (e.g. moving dark blobs), listen for sounds either internal (e.g. heartbeat, breath) or external (e.g. a distant motorcycle), and feel bodily sensations (e.g. contact points between the body and the mattress).

TLR associates visual, audio, and tactile stimuli with a lucid mind-state prior to sleep, and then provides the same stimuli during REM sleep. In this study, the sleep lab will train participants to engage in the SSILD technique while they also receive external visual (blinking lights), audio (beeps), and tactile (vibration) cues (i.e. SSILD + TLR == lucid dreams).
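To make the pairing concrete, here is a minimal sketch of what a cue schedule in that spirit might look like. The cue names, round count, and spacing are illustrative assumptions, not the study's actual protocol:

```python
def build_cue_schedule(n_rounds=3, gap_s=5.0, start_s=0.0):
    """Return (time_in_seconds, cue_name) pairs that deliver visual,
    audio, and tactile cues in the same order that SSILD shifts
    attention, so the external cues stay paired with the practice."""
    cues = ["light", "beep", "vibration"]
    schedule = []
    t = start_s
    for _ in range(n_rounds):
        for cue in cues:
            schedule.append((t, cue))
            t += gap_s
    return schedule
```

In a real rig, each entry would trigger the corresponding hardware (an LED, a speaker, a vibration motor) at its scheduled time.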

The study will recruit ~60 participants across three sleep labs in the Netherlands, Canada, and Italy, making it the largest sample size to date for a lucid dreaming induction study that uses physiological measurements.

After an intake session that provides education and training, participants will complete a daily dream diary and a dream lucidity questionnaire (DLQ) to begin practicing dream recall and reporting skills. Participants will also wear a Fitbit Inspire 2 to track their sleep schedule and total sleep time for two weeks.

Then, within the 2-week period, participants will visit a sleep lab twice for morning naps around 7 AM, each with a 2.5-hour sleep opportunity. One nap will be a stimulation session and one a control session. In both sessions, participants will wear an EEG headband (ZMax) to measure brainwaves, plus three EMG chin electrodes to measure muscle tone and help detect REM sleep. Before each session, participants will also be (re)trained to associate the external cues (light, sound, vibration) with the SSILD technique. This training step may prove important for priming participants' minds before the sessions, so they react mindfully when they recognize the cues during REM sleep.

In a stimulation session, REM sleep will be detected in real time by manual scorers and an auto-detection algorithm. Twenty seconds after REM sleep is detected, visual, audio, and tactile cues will be administered to the participant, with the intention of triggering lucidity via the primed association between the cues and the SSILD technique. Cue intensity thresholds will be calibrated for each participant to balance perceptibility against the risk of waking them up. If a participant suspects they are within a dream, they are encouraged to do a reality check and then perform a predefined set of eye movements (left-right-left-right, or LRLR) that can be recorded with sensors to verify lucidity within the dream. Participants are asked to repeat the LRLR eye movements every 30 seconds while lucid and to avoid exciting activities (e.g. flying) that might jolt them out of the lucid dream. Once the REM period ends, participants will be woken to report any subjective experience and complete a questionnaire.
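As a rough illustration of how an LRLR signal can be verified (not the study's actual pipeline), a horizontal eye-movement channel can be scanned for four large, alternating deflections close together in time. The amplitude threshold and timing parameters below are assumptions for the sketch:

```python
import numpy as np

def detect_lrlr(eog, fs, threshold=150.0, max_gap_s=2.0):
    """Detect an LRLR signal: four alternating large horizontal
    eye-movement deflections within a short time window.

    eog: 1-D horizontal eye-movement signal in microvolts
    fs: sampling rate in Hz
    threshold: amplitude (uV) that counts as a deliberate eye movement
    max_gap_s: max allowed time between successive deflections
    """
    # Record the onset sample and direction of each large deflection
    above = np.abs(eog) > threshold
    events = []  # (sample_index, sign) pairs
    prev = False
    for i, flag in enumerate(above):
        if flag and not prev:
            events.append((i, np.sign(eog[i])))
        prev = flag

    # Look for four consecutive deflections that alternate direction
    # (left-right-left-right) and occur close together in time
    max_gap = int(max_gap_s * fs)
    for k in range(len(events) - 3):
        window = events[k:k + 4]
        signs = [s for _, s in window]
        gaps_ok = all(window[j + 1][0] - window[j][0] <= max_gap
                      for j in range(3))
        alternating = all(signs[j] != signs[j + 1] for j in range(3))
        if gaps_ok and alternating:
            return True
    return False
```

Deliberate lucid-signaling saccades are much larger than normal REM eye movements, which is why a simple amplitude-plus-alternation rule like this can work as a first pass.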

The control session will omit the visual, audio, and tactile cues when REM sleep is detected. Participants will be told, however, that cues are possible in both nap sessions, so those who received cues in their first session don't assume their second session is the control.

The first, in-lab phase of this study lays the foundation for a much larger second phase, which will use the learnings from phase one to recruit remote participants who take part from the comfort of their own beds.

Another exciting outcome of this study is the development and validation of the first open-source dream engineering toolbox: Dreamento, a comprehensive software package to monitor, analyze, and modulate sleep in real time. The study will also run a post hoc analysis to investigate the feasibility of fully automating lucid dream induction with Dreamento and a wearable EEG headband.

Tools like Dreamento will help fuel innovation in lucid dreaming technology, and communities like Tech for Dreaming will continue developing projects that use technology to make lucid dreaming more accessible.

Come join us in the Tech for Dreaming Discord server to chat more about how technology can make lucid dreaming more accessible!

