A communication device built on brain-computer interface technology that shares your stories as you see them in your mind while you narrate them to your loved ones from afar, empowering you as a storyteller, increasing intimacy & creating shared memories.
This was a graduate student project where my team was tasked with helping physically separated families build and maintain close emotional relationships that help them feel at home. To learn more about this space, we conducted background research on “home” and emotions, and identified that:
The concept of “home” can be described as more than a physical space: it is an emotional space composed of our connections to others.
Approximately 14 million people migrated in 2019 for education and job opportunities, leaving their families behind.
People who are far apart keep each other updated on day-to-day activities through mediums like SMS and phone calls.
Young adults’ separation from loved ones often directly impacts their ability to feel at home. As graduate students, we resonated with this. We decided to focus on students and working adults aged 18-35 living away from their families as our target user group.
We conducted semi-structured interviews with 8 participants who were either students or working adults away from their families, aged 20 to 32. Our participants had been away from their families for between 5 months and 5 years and were still finding ways to keep in touch.
How might we help families who are physically separated reestablish deep emotional connections that have faded with distance and time, and allow them to feel emotionally close, promote nuanced conversations and create shared memories?
Generating insights from our research, we split up and individually ideated 30 ideas each using methods such as 8x8 and 2x2, keeping our learnings in mind. We drew quick sketches and storyboards to explain our vision. We had novel physical and digital designs covering a wide range of needs, from smart fridges to family archival instruments.
The team came up with a set of Design Principles to guide our down-selection:
Build connection through co-present activities and conversations to synchronize experiences.
Renew deep emotional bonds within families, including those which have faded with distance and time.
Facilitate comfort and familiarity in our solution as a way to build trust—a foundational aspect of any home.
Through the feedback we received, we understood that all 3 of our ideas aligned with our design principles and research, but each idea had been conceived around technologies like LEDs, radio-wave tech, and augmented reality, rather than around enriching the user’s connection through experience.
Are we building an emotion-first product or a tech-first product? We had unintentionally limited our ideation outcomes and lost the needle in the haystack.
We went back to our transcripts. The single thread tying our research together was that families connect over long distances by sharing ‘episodes’ of their day and updating each other on personal events. In essence, they share stories.
We engage with other people by telling the stories of our lives: whether fact or fiction. We are drawn to stories because they are our lived experiences; when we share stories with others, we share ourselves. It generates empathy and connection between speakers and listeners.
People largely share their stories over mediums like FaceTime, text messages and phone calls. All of these are communication hacks to translate and share our thoughts, experiences and impressions with another person. In oral communication, the audience hears the story and must imagine it mentally to understand it. This can pose a huge cognitive challenge (Sundmark, cited in Wallin, 2015), making them passive listeners.
What if people could transform their oral stories into dynamic visual worlds like they see it, in real time?
After another round of ideation, we consolidated our fresh perspectives with user goals & decided to build:
It is easy to dismiss the idea of decoding brain signals into 3D projections. I decided to dive into the research and estimate the feasibility of this idea. Perhaps the only action we perform that isn’t driven by muscles is thought. Our brain drives our muscles through electrical signals, which are detectable at the skin through techniques such as EMG and EEG.
We found some interesting studies which helped us place this product in a plausible future.
We ran a series of prototype tests and continued our secondary research to guide us in building Lucid. We tried to address every gap in our understanding, and in the overall design of the product, through rapid low-fidelity prototyping.
Users are concerned about what will be projected and the mental boundaries.
Lucid will use spoken words from the story as a confirmation mechanism before projecting visuals.
The stories users share with their loved ones are personal, unique and mostly positive or funny.
Lucid will work with the user’s preferences on when to project and what to project.
Users feel unsettled about how realistic these artificial visuals might be.
Lucid will work on a suitable visual style that avoids the uncanny valley.
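The two safeguards above, spoken-word confirmation and projection preferences, could be combined in a simple gate. This is only an illustrative sketch: the function name, the concept labels, and the idea of decoding discrete "concepts" from brain signals are all hypothetical stand-ins for whatever the real pipeline would produce.

```python
def confirmed_concepts(decoded, transcript, blocked):
    """Project only concepts that (a) were actually spoken aloud in the
    story, confirming the user's intent, and (b) are not on the user's
    blocked list of things they never want projected."""
    spoken = {word.lower().strip(".,!?") for word in transcript.split()}
    return [c for c in decoded
            if c.lower() in spoken and c.lower() not in blocked]

# Hypothetical output of the brain-signal decoder: "argument" was
# imagined but never spoken, so it is filtered out twice over.
decoded = ["beach", "sunset", "argument"]
transcript = "We watched the sunset at the beach"
blocked = {"argument"}
confirmed = confirmed_concepts(decoded, transcript, blocked)  # ['beach', 'sunset']
```

The key design property is that projection requires both signals to agree: a stray thought that never makes it into the narration is never projected.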
To learn in detail about our prototype tests and learnings, feel free to go through the sections below.
When our brain summons a memory, it retrieves the most salient parts of the story; our imagination fills in the rest. This neural activity produces electrical signals known as brain waves. When using Lucid, these brain waves are captured by our “receiver” headband and encoded as data signals.
Once encoded, this data is sent from the receiver to our visualizing element, the “producer” (secondary device). The producer then constructs the necessary color and depth data into visuals.
As the user tells the story, the words are sent to a text encoder that is trained to map the text to a representation space. We capture the semantic information of the text and generate the visual.
The producer does a quick match between these two visuals, narrows down to the ones with the maximum match, and converts them into 3D holographic projections that illustrate our memories as we see them. We call this a memento.
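The "maximum match" step above can be sketched as embedding comparison. This is a toy illustration, not the actual Lucid pipeline: real text and visual encoders produce high-dimensional representations, and the 3-dimensional vectors and names here are invented for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(text_embedding, visual_embeddings):
    """Return the index of the candidate visual whose embedding best
    matches the narrated text's embedding."""
    scores = [cosine_similarity(text_embedding, v) for v in visual_embeddings]
    return int(np.argmax(scores))

# Toy stand-in embeddings: the second candidate points in nearly the
# same direction as the text embedding, so it wins the match.
text_emb = np.array([0.9, 0.1, 0.2])
candidates = [np.array([0.1, 0.9, 0.0]),   # poor match
              np.array([0.8, 0.2, 0.3])]   # good match
winner = best_match(text_emb, candidates)  # 1
```

Cosine similarity is a common choice here because it compares the direction of embeddings rather than their magnitude, which is what semantic encoders are trained to make meaningful.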
Learning from our prototype testing, I decided to go with “pointillism” as the visual style for projecting memories. It isn’t exactly realistic, thereby avoiding the uncanny valley, & at the same time it gives a fresh and dreamy aesthetic.
I ran multiple tests in Metashape, Unreal & After Effects before I was able to achieve the intended output.
Our prototype testing helped us narrow down the features we want to include in our product. It also helped us pick what was most important for the users.
We had quite a few discussions on how the device would be used. Can multiple people sign in to the device? Or is it like a landline connected to a home? For the time being, to keep things simple, we decided to allow users to log in with existing social media accounts & create their own username and password.
For first-time users, we introduce the basic gestures & voice commands to interact with the device. I researched Oculus & Vive to understand how things were done and adapted the best practices for Lucid.
Lucid’s synchronous call allows users to see projections of their memories as they narrate them to their loved ones. We included an option to switch off visuals for when users don’t want their memories projected.
To let people practice, & for the system to learn how the user visualizes, we have an offline compose mode where they can play around. Users can also save these projections & share them later.
Users can share memories from offline mode. They are stored in the memento library.
We understood that people share personal stories referencing their friends and various places. We might not always find a match for someone’s “brother” or “bedroom”. I decided to ask users to connect their cloud photos & other social media, so the system can build a more relevant projection.
We also allow users to choose not to connect their media, and to have placeholders, or no visuals at all, when they are narrating anything personal.
We developed a set of brand values to set an intentional standard for our product design. I decided to go with a fluid, soft gradient as brand style which both calms and stimulates our bodies, putting the user in the right place for introspection and focused insight. Our prototyping results helped us decide the necessary interactions on the devices.
I worked on storyboarding, asset creation to final editing for this video. It was well received and appreciated by the teaching team & my peers.
No current technology or research pointed us toward 3D projections in thin air. There always needs to be a layer to stop the light, like smoke, a screen or glass.
Though EEG headsets allow you to detect brain activity through electrodes, I understood we need at least 12 specific electrode positions for full-brain sensing. The receiver I designed did not cover those positions; I had prioritized the aesthetics of the form.
We immediately ran into issues when we started working out a user flow around augmented reality. We were split on what form of password to use: typing in AR is a pain, & voice recognition wasn’t the safest option. We discussed other forms like eye or palm scanning but decided to skip those complications for this project.
We initially discussed letting users edit memories as they narrate their stories, beyond letting the system know through dictation. We wanted to allow users to pick and select specific objects, but we later decided that would add too much complication in AR for the first version of this product.
We wanted to include a training section, where all new users start. It prompts the user to follow an onboarding session with AI, to help it understand how the user visualizes as they tell their stories.
I came up with a gamified model that the user has to complete before they can start having synchronous Lucid Calls: a 3-level system of imagining stories the AI narrates & narrating stories to the AI for it to build the user’s stories. The users get to rate the accuracy of the AI. I wanted to restrict users from making synchronous calls until they complete the levels and gain confidence working with the AI.
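The gating logic of this onboarding flow can be sketched as a small state machine. The class name, the 3-level count from the concept, and the 0.8 pass threshold are illustrative assumptions, not a specification.

```python
class LucidOnboarding:
    """Minimal sketch of the gamified onboarding gate: synchronous
    Lucid Calls unlock only after the user has rated the AI's accuracy
    highly enough at every level, in order."""

    LEVELS = 3
    PASS_THRESHOLD = 0.8  # assumed minimum accuracy rating to pass a level

    def __init__(self):
        self.completed = 0  # highest level passed so far

    def rate_level(self, level: int, accuracy: float) -> bool:
        """Record the user's 0-1 accuracy rating for a level.
        Levels must be attempted in order; a level passes only if
        rated at or above the threshold."""
        if level != self.completed + 1:
            raise ValueError("complete levels in order")
        if accuracy >= self.PASS_THRESHOLD:
            self.completed = level
            return True
        return False

    @property
    def can_make_sync_calls(self) -> bool:
        return self.completed >= self.LEVELS


ob = LucidOnboarding()
ob.rate_level(1, 0.9)   # imagining a story the AI narrates
ob.rate_level(2, 0.85)  # narrating a story for the AI to build
ob.rate_level(3, 0.95)  # final accuracy check unlocks sync calls
```

Keeping the gate as explicit state like this makes it easy to surface progress in the UI and to tune the threshold from real accuracy ratings later.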
This was a different research effort in itself; considering the time we had, we decided to park detailing this out.