Oct’ 21 - Dec’ 21
Tools Used
Figma, After Effects, Blender, Unreal Engine

A communication device based on brain-computer interface technology that shares your stories as you see them in your mind while you narrate them to your loved ones from afar, empowering you as a storyteller, increasing intimacy & creating shared memories.

My Contribution
I led ideation, prototyping, storyboarding, interaction design, 3D modelling & Unreal Engine development. I played an integral part in directing the product towards its end goal & helped during the research phase.
2 x Designers
1 x Researcher
Background & Research

Home is more than just a physical space

This was a graduate student project where my team was tasked with helping physically separated families build and maintain the close emotional relationships that help them feel at home. To learn more about this space, we conducted background research on “home” and emotions, and identified that:

The concept of “home” can be described as more than a physical space: it is an emotional space composed of our connections to others.

Approximately 14 million people migrated in 2019 for education and job opportunities, leaving their families behind.

People who are far apart keep each other updated about day-to-day activities over mediums like SMS and phone calls.


Young adults’ separation from loved ones often directly impacts their ability to feel at home. As graduate students, we resonated with this. We decided to focus on students and working adults aged 18-35 living away from their families as our target user group.

We interviewed 8 participants

We conducted semi-structured interviews with 8 participants who were students or working adults living away from their families, aged 20 to 32. They had been away from their families for anywhere between 5 months and 5 years and were still finding ways to keep in touch.

What did we learn?
People primarily spend their time connecting with loved ones by sharing their everyday lives and interpersonal interactions.
Young adults are careful about the social identity they present and maintain with their parents.
The true value of their interactions is grounded in the inherent intimacy of spending time in the presence of another person.
Shared images and videos are primary means of communication, but over time people lose deep connection and nuanced conversation.

How might we help families who are physically separated reestablish deep emotional connections that have faded with distance and time, allowing them to feel emotionally close, have nuanced conversations and create shared memories?

Ideate & Downselect

We collectively brought together 120 ideas

Drawing on insights from our research, we split up and individually ideated 30 ideas each using methods such as 8x8 and 2x2, keeping our learnings in mind. We drew quick sketches and storyboards to explain our visions. We had novel physical and digital designs covering a wide range of needs, from smart fridges to family archival instruments.

The team came up with a set of Design Principles to guide our downselection:

Deliver a co-present experience

Build connection through co-present activities and conversations to synchronize experiences.

Promote deep emotional connection

Renew deep emotional bonds within families, including those which have faded with distance and time.

Provide a space of comfort & familiarity

Facilitate comfort and familiarity in our solution as a way to build trust—a foundational aspect of any home.

Finding themes & patterns
We clustered the ideas by similar attributes and discussed them based on their novel, interactive & social aspects.

Downselected 3 ideas

Shareout with peers & professors

From the feedback we received, we understood that all 3 of our ideas aligned with our design principles and research, but each had been conceived around technologies like LEDs, radio-wave tech and augmented reality, rather than around enriching the user’s connection through experience.

Are we building an emotion-first product or a tech-first product? We had unintentionally limited our ideation outcomes and lost the needle in the haystack.


What would it look like to authentically connect intimately with your loved one?

We went back to our transcripts. The single thread tying our research together was that families connect over long distances by sharing ‘episodes’ of their day and updating each other on personal events. In essence, they share stories.

Why are stories important?

We engage with other people by telling the stories of our lives, whether fact or fiction. We are drawn to stories because they are our lived experiences; when we share stories with others, we share ourselves. Sharing stories generates empathy and connection between speakers and listeners.

Insufficient communication mediums

People largely share their stories over audiovisual mediums like FaceTime, text messages and phone calls. All of these are communication hacks to translate and share thoughts, experiences and impressions with another person. In oral communication, the audience hears the story and must imagine it mentally to understand it. This can pose a huge cognitive challenge (Sundmark, cited in Wallin, 2015), making them passive listeners.

Era of Brain Communication Hacks

What if people could transform their oral stories into dynamic visual worlds, just as they see them, in real time?

After another round of ideation, we consolidated our fresh perspectives with user goals & decided to build:

Lucid is a communication device based on brain-computer interface technology that shares your stories as you see them in your mind while you narrate them to your loved ones from afar, empowering you as a storyteller, increasing intimacy & creating shared memories.

Initial storyboarding by me

Lucid as a product in a plausible future rather than in impossible science fiction

It is easy to dismiss the idea of decoding brain signals into 3D projections, so I decided to dive into research and estimate the feasibility of this idea. Perhaps the only action we perform that isn’t driven by muscles is thought. Our brain drives our muscles through electrical signals, which are detectable at the skin through techniques such as EMG and EEG.

We found some interesting studies which helped us place this product in a plausible future.

In a study from Imperial College London, researchers explored a mental image reconstruction method using electroencephalogram (EEG) measurements fed into a generative adversarial network (GAN). After sufficient training, the neural network was able to reconstruct mental images from brain activity recordings.
OpenAI has been working on neural networks like CLIP & DALL-E, which create images from text captions for a wide range of concepts expressible in natural language. The current release is able to create plausible images for a great variety of sentences that explore the compositional structure of language.
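To make the text-to-image matching idea concrete, here is a minimal, hypothetical sketch of the core scoring idea behind CLIP-style models: captions and images are mapped into a shared embedding space and ranked by cosine similarity. The embeddings below are toy vectors, not the output of a real encoder.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_images(text_embedding, image_embeddings):
    # Rank candidate images by how well they match the caption embedding,
    # best match first -- the scoring idea behind CLIP-style models.
    scores = [cosine_similarity(text_embedding, img) for img in image_embeddings]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Toy embeddings standing in for a real text/image encoder's output.
caption = np.array([1.0, 0.0, 0.5])
candidates = [
    np.array([0.9, 0.1, 0.4]),   # visually close to the caption
    np.array([-1.0, 0.2, 0.0]),  # unrelated image
]
ranking = rank_images(caption, candidates)
```

With the toy vectors above, the first candidate ranks highest because its embedding points in nearly the same direction as the caption’s.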
Prototype testing

How do users perceive Lucid?

We ran a series of prototype tests and continued our secondary research to guide us in building Lucid. We tried to address every gap in our understanding and in the overall design of the product through rapid low-fidelity prototyping.



Users are concerned about what will be projected and where the mental boundaries lie.
Lucid will use spoken words from the story as a confirmation mechanism to project visuals.

The stories users share with their loved ones are personal, unique and mostly positive or funny.
Lucid will work with the user’s preferences on when and what to project.

Users have an unsettling feeling about how realistic these artificial visuals will be.
Lucid will work on a suitable visual style that avoids the uncanny valley.

To learn in detail about our prototype tests and learnings, feel free to go through the sections below.

Introducing Lucid
Functional role
How does it work?

Translating human brain activity into 3-dimensional holographic projections

When our brain summons a memory, it retrieves the most salient parts of the story; our imagination fills in the rest. This activity produces electrical signals known as brain waves. When using Lucid, these brain waves are captured by our “receiver” headband and encoded as data signals.

Brain signals to visuals

Once encoded, this data is sent from the receiver to our visualizing element, the “producer” (a secondary device). The producer then reconstructs the necessary color and depth data into visuals.

Spoken word to visuals

As the user tells the story, their words are sent to a text encoder trained to map text to a representation space. We capture the semantic information of the text and generate a visual from it.

Finding the perfect match

The producer runs a quick match between these two sets of visuals, narrows them down to the ones with the maximum match, and converts those into 3D holographic projections that illustrate our memories as we see them. We call this a memento.
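The matching step above can be sketched conceptually as a similarity search between the two candidate sets. Everything below is a hypothetical stand-in that assumes both the brain-signal path and the spoken-word path emit feature vectors; it is not Lucid’s actual model.

```python
import numpy as np

def embed(v):
    # Stand-in for the producer's visual encoder: L2-normalizes a feature
    # vector so plain dot products become cosine similarities.
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def best_memento(brain_candidates, speech_candidates):
    # Compare every brain-derived visual against every speech-derived
    # visual and keep the pair that agrees most -- the "quick match".
    best_pair, best_score = None, -np.inf
    for i, b in enumerate(brain_candidates):
        for j, s in enumerate(speech_candidates):
            score = float(np.dot(embed(b), embed(s)))
            if score > best_score:
                best_pair, best_score = (i, j), score
    return best_pair, best_score
```

The winning pair would then be the candidate rendered as the holographic memento.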

A 1-min video explaining how Lucid works
What will memories look like?

Point clouds are dreamy and realistic at the same time.

Learning from our prototype testing, I decided on “pointillism” as the visual style for projecting memories. It isn’t exactly realistic, which avoids the uncanny valley, while giving a fresh and dreamy aesthetic.

I ran multiple tests in Metashape, Unreal & After Effects before I was able to achieve the intended output.
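As an illustration of the pointillist idea (a toy sketch, not the Metashape/Unreal/After Effects pipeline actually used), an image plus a depth map can be sampled into a sparse colored point cloud:

```python
import numpy as np

def image_to_point_cloud(rgb, depth, n_points=500, seed=0):
    # Randomly sample pixels; each becomes an (x, y, z, r, g, b) point,
    # with z taken from the depth map. Sparse sampling gives the
    # "pointillism" look rather than a solid surface.
    h, w, _ = rgb.shape
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, h, n_points)
    xs = rng.integers(0, w, n_points)
    points = np.column_stack([xs, ys, depth[ys, xs], rgb[ys, xs]])
    return points  # shape: (n_points, 6)
```

Rendering each point as a small glowing dot in a 3D engine reproduces the dreamy, not-quite-realistic look described above.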

Outdoor bridge test
Outdoor landscape test
Indoor object test

Final Visual Output


Lucid caters to both offline memory sharing & synchronous calls.

Our prototype testing helped us narrow down the features we want to include in our product. It also helped us pick what was most important for the users.


Registration & Login

We had quite a few discussions on how the device would be used. Can multiple people sign in on the device? Or is it like a landline tied to a home? For the time being, to keep things simple, we decided to let users log in with existing social media accounts & create their own username and password.

Onboarding Tutorial

For first-time users, we introduce the basic gestures & voice commands for interacting with the device. I researched Oculus & Vive to understand how things were done there and adopted the best practices for Lucid.

Lucid Synchronous Call

A Lucid synchronous call lets users see projections of their memories as they narrate them to their loved ones. We included an option to switch off visuals for when users don’t want their memories projected.

Offline Compose Mode

To let people practice & to let the system learn how the user visualizes, we have an offline compose mode where they can play around. Users can also save these projections & share them later.

Memento Library

Users can share memories from offline mode. They are stored in the memento library.

Connect your media

We understood that people’s personal stories reference their friends and the places in their lives. The system may not always find a match for someone’s “brother” or “bedroom”. I decided to ask users to connect their cloud photos & other social media so the system can build more relevant projections.

We also allow users to choose not to connect their media and to have placeholders or no visuals whenever they narrate anything personal.


Lucid as an intimate, private & futuristic brand.

We developed a set of brand values to set an intentional standard for our product design. I decided on a fluid, soft gradient as the brand style, which both calms and stimulates, putting the user in the right place for introspection and focused insight. Our prototyping results helped us decide on the necessary interactions on the devices.


If you have 12 mins to spare, here is our final presentation video

I worked on everything from storyboarding and asset creation to the final edit of this video. It was well received and appreciated by the teaching team & my peers.

Closing notes

We skipped the complications related to physical design of the receiver and producer

No current technology or research points towards 3D projections in thin air; light always needs a layer to stop it, like smoke, a screen or glass.

Though EEG headsets allow you to detect brain activity through electrodes, I learned that full-brain sensing needs at least 12 specific electrode positions. The receiver I designed did not cover those positions; I prioritized the aesthetic of the form.

We did not solve the existing problems with augmented reality

We immediately ran into issues when we started working out a user flow around augmented reality. We were split on what form of authentication to use: typing in AR is a pain & voice recognition wasn’t the safest option. We discussed alternatives like eye or palm scanning but decided to skip those complications for this project.

We initially discussed letting people edit memories as they narrate their stories, beyond letting the system know through dictation. We wanted to allow users to pick and select specific objects, but later decided that would add too much AR complexity for the first version of this product.

We skipped “Training Lucid AI” section

We wanted to include a training section where all new users start. It would prompt the user through an onboarding session with the AI, helping it learn how the user visualizes as they tell their stories.

I came up with a gamified model the user has to complete before they can start synchronous Lucid calls: a 3-level system of imagining stories the AI narrates & narrating stories for the AI to reconstruct. Users rate the AI’s accuracy at each level. I wanted to hold users back from synchronous calls until they complete the levels and gain confidence working with the AI.

This was a research effort in itself; considering the time we had, we decided to park detailing it out.