The Extended Mind
Meditative VR experience
Team Size: 1
Unity, Processing, EEG, Blender
This project centers on the concepts of Distributed Cognition and Extended Mind Theory, combining a portable EEG headset with the Unity engine to create a playful virtual reality experience controlled by a combination of gaze and brainwave activity. It serves as a proof-of-concept tech demo for the viability of brain-computer interface (BCI) technology as a human-computer interaction (HCI) control input, and builds a narrative about how human cognition is distributed across technological systems outside the brain.
Developed using Unity, Processing, and the NeuroSky MindWave Mobile headset.
Design Process
While designing this project, my main guiding goal was to craft a narrative that reflects a speculative future based on extended mind theory, while solving design challenges posed by the EEG device, such as latency and lack of feedback. The resulting proof-of-concept prototype is an interaction model in which gaze and focus level together manipulate the gravity of the virtual world.
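To make the core mechanic concrete, here is a minimal Unity sketch of the attention-to-gravity mapping. The component name and the direction of the mapping (higher focus weakens gravity) are my illustrative assumptions; the project's actual script may differ.

```csharp
using UnityEngine;

// Illustrative sketch: maps an EEG attention reading in [0, 100]
// onto world gravity. Higher focus = weaker gravity is an assumed
// mapping direction for demonstration.
public class AttentionGravity : MonoBehaviour
{
    [Range(0, 100)] public int attention;   // latest attention reading from the EEG
    public float baseGravity = 9.81f;       // full gravity when attention is 0

    void Update()
    {
        // Linearly scale gravity down as focus rises.
        float focus = attention / 100f;
        Physics.gravity = Vector3.down * baseGravity * (1f - focus);
    }
}
```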
Initial Prototype
During the initial prototype, the main focus was to build interactions that serve as a proof of concept for the workflow of using EEG and Unity together, with interactions that are easy to visualize. The outcome was a scene showcasing 2D vision manipulation, where the visible area of the playground is determined by the user's attention level. The proposed narrative for that prototype borrowed mostly from existing games and mechanics: some kind of exploration experience, or a game of chess with the added layer of vision control. The critique I received from peers and instructors for this iteration was that it focused too much on the technological side and needed more work on the narrative.
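One plausible way to realize that vision mechanic is to widen an orthographic 2D camera as attention rises; the original may have used a mask instead, so treat this as a sketch of the idea rather than the project's implementation. All names and the size range are assumptions.

```csharp
using UnityEngine;

// Hypothetical reconstruction of the 2D vision mechanic: the visible
// area of the playground grows with the user's attention level.
public class AttentionVision : MonoBehaviour
{
    public Camera cam;                       // orthographic 2D camera
    public float minSize = 2f;               // view at attention = 0
    public float maxSize = 10f;              // view at attention = 100
    [Range(0, 100)] public int attention;

    void Update()
    {
        // Smoothly follow the target size so noisy readings don't cause jumps.
        float target = Mathf.Lerp(minSize, maxSize, attention / 100f);
        cam.orthographicSize = Mathf.Lerp(cam.orthographicSize, target, Time.deltaTime * 2f);
    }
}
```

Second Prototype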
The final scene, as mentioned above, took form while designing the first iteration of the final prototype, in which I built a scene with three sets of objects that react differently to gaze, with overall gravity controlled by the user's attention level. I started by making three groups of objects: ones that react to gravity when gazed at, ones that react to gravity when not gazed at, and ones that receive an upward force when gazed at. The first two sets worked as intended and created some interesting interactions when combined. The third set turned out to be more about gaze than about attention level, and didn't provide enough feedback on what the current attention level was: both the changing gravity level and the applied force were based on attention level, so they counteracted each other. In the next prototypes, these objects were replaced by blocks unaffected by gaze, to provide an overall visualization of the current gravity level.
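The three object groups could be captured in a single component like the sketch below. This is an illustrative reconstruction, not the project's actual script; the raycast-based gaze check and the force value are assumptions.

```csharp
using UnityEngine;

// Sketch of the three object groups:
// FallWhenGazed   – gravity applies only while the user looks at it
// FallWhenIgnored – gravity applies only while the user looks away
// RiseWhenGazed   – an upward force is added while the user looks at it
public class GazeReactive : MonoBehaviour
{
    public enum Mode { FallWhenGazed, FallWhenIgnored, RiseWhenGazed }
    public Mode mode;
    public float liftForce = 15f;
    Rigidbody rb;

    void Start() => rb = GetComponent<Rigidbody>();

    void FixedUpdate()
    {
        // Gaze approximated as a ray from the center of the VR camera.
        Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        bool gazed = Physics.Raycast(gaze, out RaycastHit hit) && hit.rigidbody == rb;

        switch (mode)
        {
            case Mode.FallWhenGazed:   rb.useGravity = gazed;  break;
            case Mode.FallWhenIgnored: rb.useGravity = !gazed; break;
            case Mode.RiseWhenGazed:
                if (gazed) rb.AddForce(Vector3.up * liftForce);
                break;
        }
    }
}
```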
Third Prototype
For prototype 3, I mainly focused on tackling the performance issues of using the provided Unity asset, as well as the visualization aspect of the work. During the previous prototypes, I had mainly focused on getting the interactions to work on a small set of objects, so performance wasn't taken into account. As I started to incorporate more objects and complex interactions into a bigger scene, connecting the EEG headset directly to Unity caused noticeable lag at runtime and impacted overall performance. To solve this problem, I wrote a Processing sketch that used the Mindset library (Cardoso, 2014) to read the attention level from the EEG, then visualized the readings and converted them into keystrokes 0–9, each number representing the attention level from 0 to 100 in intervals of 10. Based on feedback from the instructor and peers, I also started working on an in-game visualization of the current attention level that gradually turns from green to red as readings go from 0 to 100.
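The Unity side of that keystroke bridge might look like the sketch below: number keys are decoded back into an attention level, which also drives the green-to-red indicator. The class and field names are hypothetical.

```csharp
using UnityEngine;

// Sketch of the Unity-side receiver: the Processing sketch emits
// keystrokes 0–9, and Unity reconstructs an attention level from them
// and tints an indicator from green (0) to red (100).
public class AttentionReceiver : MonoBehaviour
{
    public Renderer indicator;               // block that visualizes the current level
    int attention;                           // reconstructed attention, 0–100

    void Update()
    {
        // Keys 0–9 stand for attention in steps of 10.
        for (int i = 0; i <= 9; i++)
        {
            if (Input.GetKeyDown(KeyCode.Alpha0 + i))
                attention = i * 10;
        }

        // Green at low attention, red at high attention.
        indicator.material.color = Color.Lerp(Color.green, Color.red, attention / 100f);
    }
}
```

Routing the EEG through keystrokes keeps the headset polling out of Unity's main loop, which is what removed the lag in this prototype.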
Reflections
Since I was working with new technologies such as EEG sensors and custom Processing scripts, the most challenging part of this project was problem-solving. Many problems arose as prototyping went on, both design-related and technical, but I managed to solve most of them by the end of the semester, which was satisfying. What I hope to achieve with this project is to visualize a speculative future of distributed cognition in which more human-computer interactions are enabled without controllers, and to offer a possible solution to the problems designers will inevitably encounter when developing BCI interfaces with EEG.
A future direction for this project could be to support the different portable EEG devices available on the market, so that it does not have to exist as an installation and could be distributed as online software. Another possible direction is to modify the headband of the VR headset to be more comfortable when worn together with BCI devices, or even to integrate the devices into the headband, opening up more possibilities for collaboration between BCI and the virtual world. A further direction this project could take is to develop the narrative of the scene. It is possible to create interactive experiences, competitive or casual games, virtual exhibitions, and even physical installations based on this model of interaction. As working with novel technologies is never easy, more difficulties will likely arise during further development, but a good starting point, as the guests suggested during the final critique, is to speculate on the possible forms this could take. A sketch or a Wizard-of-Oz demo could be a solid next step in this direction.