March 11, 2019

How I Got to Where I Am Now

I'll be the first to admit it's a shame I didn't start writing regular blog posts about project Notusia from the beginning. That being said, going forward it makes sense that what I write is current and relevant to the ongoing work happening with Notusia, instead of a constant backlog of things that were done weeks, if not months, ago.

However, there is a certain amount of contextual background that is inherently necessary to understanding what I'm doing now, and as such I shall give a brief overview of where I've been leading up to this point.

Embodied and Spatial Cognition

I first learned about the psychological theories of embodied cognition and spatial cognition (also related to spatial memory) in my sophomore-level Cognitive Science class. I won't get into the details, but in essence, we think using not only our brains but also our bodies and the environments around us. I was fascinated by these concepts and their possible benefits if/when applied to digital products and experiences. Being surrounded by people staring at their phones and laptop screens all day, I had quickly become disenchanted with the idea of applying my Interaction Design skills toward designing yet another app or screen-based digital product. Instead, I was intrigued by the thought of enabling people to take advantage of their own bodies and the spaces around them as a way to increase beneficial cognitive function while simultaneously challenging the paradigm of screen-based experiences.

3D Information Architecture

Information Architecture, Organizational Systems, Data Structures, etc. I love them all. I couldn't tell you why exactly—but I think it has something to do with the comfort and ease of knowing where things are meant to go. As anyone who knows me well will tell you, one of my favorite sayings is:

"A place for everything, and everything in it's place"

In fact, one of my favorite "relaxing" activities (even as a child) was organizing my room. There's just something so satisfying about spreading everything out, sorting through it all, separating things into piles based on type, splitting piles into sub-collections, designating places for certain collections, and then neatly putting everything away. A good organizational system is both logical and intuitive. It's logical in that you can follow a hierarchy to locate an existing item or to find a place for a new item. It's intuitive in that you can learn the organization structure/model easily enough that you no longer have to think about it.

But I digress...

While many computer systems work with relational database systems that conceptually function in multiple dimensions, almost every single user-facing system for organization is two-dimensional—maps, spreadsheets, hierarchy trees, flow diagrams, etc. Furthermore, the world we live in is three-dimensional and, evolutionarily speaking, we humans are adapted to work quite optimally in this physically tangible way.
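To make that contrast a little more concrete, here's a minimal sketch in Python (the item names and columns are made-up examples, not anything from Notusia) of how a single relational table can hold relationships along several dimensions at once, even though the view we usually give people flattens it into one two-dimensional slice:

```python
# A minimal sketch: one collection of items related along several
# "dimensions" at once (hierarchy, topic, time), stored relationally.
# Item names and columns are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE items (
        id INTEGER PRIMARY KEY,
        name TEXT,
        parent_id INTEGER,   -- hierarchical dimension
        topic TEXT,          -- categorical dimension
        created TEXT         -- temporal dimension
    )
""")
db.executemany(
    "INSERT INTO items (id, name, parent_id, topic, created) VALUES (?, ?, ?, ?, ?)",
    [
        (1, "Thesis",      None, "school", "2018-09-01"),
        (2, "Research",    1,    "school", "2018-09-15"),
        (3, "Prototypes",  1,    "making", "2018-11-02"),
        (4, "Paper model", 3,    "making", "2018-11-10"),
    ],
)

# A typical user-facing view picks ONE of those dimensions and flattens
# it into 2D, e.g. an indented hierarchy tree:
def print_tree(parent=None, depth=0):
    rows = db.execute(
        "SELECT id, name FROM items WHERE parent_id IS ?", (parent,)
    )
    for item_id, name in rows:
        print("  " * depth + name)
        print_tree(item_id, depth + 1)

print_tree()
```

The data holds hierarchical, categorical, and temporal relationships all at once; the printed tree is just one flattened cut through it.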

All this, combined with the aforementioned theories of spatial and embodied cognition, led me to think about opportunities that might lie within the overlapping capabilities of digital relational database systems and our human intuition for working with physical things in space.

Augmented/Mixed Reality

I realized the most obvious way of creating a digital, spatial, interactive experience that would allow for integration with a back-end database is augmented/mixed reality. At the time I was taking a Design for Emerging Tech class where we were learning the basics of how to create augmented reality apps for iOS using Unity. However, even that was a challenge, and I quickly came to the realization that I simply wasn't going to be able to build up enough of the necessary skillset in time to build an augmented reality prototype in Unity—let alone build and connect a back-end database.

On the one hand, this created a huge setback for me, as I had this concept I had been so eager to build and test but no way to do it. On the other hand, it created a critical turning point in my thesis work. I ended up testing the concept of 3D information architecture using a simple paper prototype instead, and came to the realization that not only was it a difficult concept for people to grasp, but what I was really interested in was actually less about the output of the system (the display of information, 2D vs. 3D) and more about the input of the system (how people control and interact with information).

Body Language & Dance

In conjunction with my inspiration from the psychological research of embodied and spatial cognition, I also took a step back to my own experiences growing up. From age six up until I left home for college at eighteen, I took weekly classes in classical ballet, contemporary, jazz, and tap dancing. I also grew up doing a lot of music—thank you Mum and Dad for supporting 10 years of violin lessons and singing around the house—and I could write a whole other blog post about my theories on how music, memory, systems, and "flow" are all interconnected, but what really caught my attention for project Notusia was the embodied version of music: dance. At its most fundamental level, dance is a form of innate human expression—even babies can dance. At the same time though, dance has been built up in various forms and practices to be a skill of mastery—a system of movements with names that can be taught, learned, reproduced, and eventually mastered. In essence, dance can (and already has) become a set of learned behaviors, serving as a form of communication both independently and as part of a larger system (e.g. with music, or as two or more people dancing with each other). Sound like anything else we do? Typing on keyboards? Clicking buttons? Scrolling and swiping? These are all learned behaviors. But most of them don't take advantage of our bodies much, besides our fingertips and eyeballs.

So with all this in mind, I dove into research on dance and how it could parallel the work I was trying to do with Notusia. I started taking dance classes again and even discovered the hidden practice of "dance notation," where mapping out the human body and writing a dance down on paper ends up looking more like a series of ancient hieroglyphs than anything else. I became particularly interested in the opportunities that lay within the practice of tap dancing, given its distinct movements, sounds, and rhythms, and the fact that it was conveniently isolated to just one body part: the feet. I used this as a jumping-off point for a series of prototypes/experiments exploring how people felt about using their feet as a method of input and thinking about all the possible combinations of different movements and patterns that can come out of just using feet (toe vs. heel vs. flat-foot, one foot vs. both feet, one movement vs. a pattern of multiple movements, etc.).
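Just to give a rough sense of how quickly that design space grows, here's a tiny back-of-the-envelope sketch in Python (the movement vocabulary below is placeholder naming for illustration, not my actual notation):

```python
# A rough sketch (hypothetical vocabulary, just to size the design space)
# of the foot-input combinations described above: which part of the foot,
# which foot, and how many movements chained into a pattern.
from itertools import product

contacts = ["toe", "heel", "flat"]   # part of the foot that strikes
feet = ["left", "right", "both"]     # which foot (or both together)

single_moves = list(product(feet, contacts))
print(f"{len(single_moves)} distinct single movements")  # 9

# Chaining movements into short patterns grows the space quickly:
for length in (2, 3):
    patterns = list(product(single_moves, repeat=length))
    print(f"{len(patterns)} possible patterns of length {length}")
# 81 patterns of length 2, 729 of length 3
```

Even a vocabulary this small yields 81 two-movement patterns and 729 three-movement patterns, which is part of what makes feet such an interesting input channel.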

Motion Control

I'm still continuing to explore and prototype with feet, but as the final showcase for project Notusia draws closer, I've been looking into expanding my prototypes into a more full-bodied experience. This started out as a series of expert interviews with professionals who have worked in the space of motion data capture and gesture control, and is now taking form in my own process as I experiment with existing technologies like Microsoft's Xbox Kinect and Rebecca Fiebrink's Wekinator.
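For anyone curious how these two tools can talk to each other: Wekinator listens for OSC feature messages (by default at /wek/inputs on port 6448, though that's configurable per project), so a rough sketch of the bridge might look like the Python below. The joint coordinates are placeholder numbers standing in for whatever a Kinect pipeline would actually report each frame, and the python-osc library is just one possible way to send the messages:

```python
# A minimal sketch of streaming skeleton data into Wekinator over OSC.
# Joint values below are placeholders, not real Kinect output.
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port

def send_frame(joints):
    # Flatten each (x, y, z) joint position into one feature vector.
    features = [coord for joint in joints for coord in joint]
    client.send_message("/wek/inputs", features)

# Example frame: left foot, right foot, left knee, right knee positions.
send_frame([
    (0.21, 0.05, 1.80),
    (0.35, 0.04, 1.78),
    (0.23, 0.48, 1.82),
    (0.36, 0.47, 1.79),
])
```

Wekinator then handles the machine-learning side, mapping those incoming feature vectors to whatever outputs it has been trained on.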

March 7, 2019

The Beginning and Why Paper Is Powerful

I started actively working on my senior thesis project, Notusia, almost five months ago in September 2018. But it was actually a few months prior when I got my inspiration for the project. I was in the middle of an incredible internship experience at IDEO and was inspired by the unmistakably tangible style of project collaboration that seemed to imbue the office and its people. Tangentially, I had the opportunity to participate in a live demo of a product by an augmented reality startup called Spatial. I was blown away—my childhood sci-fi fantasies were fast becoming reality. For whatever reason, in that moment, the dots started to connect and I came to the realization that I, as a Designer, could actually participate in the process of building the future I'd always dreamed about.

When I think back to the things that initially inspired Notusia, what I'm doing now—experimenting with motion data, gesture control, body language, etc.—makes sense. But ironically, that's not where I started.

During the first few months of my research and prototyping, I dove head first into the world of information architecture. I've always loved systems and structural organization as a way of sense-making, and combined with the research I had been doing into the psychological theories of embodied and spatial cognition, I thought I had a brilliant idea: 3D Information Architecture.

With the powers of augmented/virtual reality and relational database algorithms, I envisioned a world where we could physically walk through space and interact with any given collection of information. There was only one problem:

As a Designer, I didn't have the hard skills to build out an immersive and interactive augmented reality experience, let alone a backend data structure to support it all.

My solution?

Craft supplies.

In order to start experimenting with even just the concept of three-dimensional nested data structures, I built my first prototype for testing using nothing more than standard copy paper, scotch tape, and some string.

You can see more about that prototype here, and ultimately the takeaway I got from it was that people found it pretty difficult to wrap their heads around organizational hierarchies that span the x, y, and z axes. But I saved myself a lot of time and effort by finding a way to test the core concept using just some paper before getting caught up in all the details of trying to build a 3D interactive environment in Unity and C#.
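If it helps to picture what people were being asked to grasp, here's a toy sketch in Python of the underlying idea (the layout rule and item names are invented purely for illustration): a nested hierarchy where each item ends up with a position along three different axes.

```python
# A toy sketch (invented layout rule) of the idea people struggled with:
# a nested hierarchy laid out so that sibling order, depth, and branching
# each claim their own spatial axis.
hierarchy = {
    "Projects": {
        "Notusia": {"Research": {}, "Prototypes": {}},
        "Coursework": {"Cognitive Science": {}},
    }
}

def place(node, depth=0, x=0, positions=None, path=""):
    """Assign each item an (x, y, z) position:
    x = sibling index (offset by the parent's x),
    y = depth in the hierarchy,
    z = number of direct children."""
    if positions is None:
        positions = {}
    for i, (name, children) in enumerate(node.items()):
        positions[f"{path}/{name}"] = (x + i, depth, len(children))
        place(children, depth + 1, x + i, positions, f"{path}/{name}")
    return positions

for item, (x, y, z) in place(hierarchy).items():
    print(f"{item}: ({x}, {y}, {z})")
```

Following any single branch means moving through all three axes at once, which is roughly the mental gymnastics my test participants found hard to keep track of.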