Octi was ahead of the curve for augmented reality applications on iOS. I helped concept Octi's first product designs, assisting in a $7.5 million raise. Over the following year, I helped design, direct, and launch the product to market. The app was featured in the App Store's "Best New Apps".
Many popular augmented reality apps share one familiar task: plane detection, the step of finding a flat surface so an object can be dropped into a scene. Some plane detection flows have less friction than others, but most require you to point your camera at a flat surface before any features are enabled. When I started designing for AR, there was interest in rethinking this flow by focusing on people first.
We explored a new route by removing plane detection entirely. Dropping objects into an AR scene misses a vital subject: people. If products are going to alter our reality, they should at least attempt to connect us with others in the process. Skipping plane detection enables a new flow that puts people first. However, it also depends on new technology: the ability to detect human movement on a mobile device, and to train a model to react to that movement. That technology, in turn, would need an interface that lets people interact with the information it produces.
The interface for the network commonly shows skeletal tracking as lines, with joints as dots. Since Octi is focused on gestures and culture, I was heavily influenced by Keith Haring. His work is a great example of symbolism, expression, color, and movement. So instead of single lines, we created animated, rounded vectors that varied in color. The opacity of the color was determined by the strength of the recognition.
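The confidence-to-opacity mapping can be sketched in a few lines of Swift. This is a minimal, hypothetical illustration, not Octi's actual implementation: the function name, the minimum-opacity floor, and the linear mapping are all assumptions.

```swift
// Hypothetical sketch: map a joint's recognition confidence (0.0–1.0)
// to the opacity of its rendered vector. The name `opacityForConfidence`
// and the linear mapping are illustrative, not Octi's actual API.
func opacityForConfidence(_ confidence: Double,
                          minimumOpacity: Double = 0.15) -> Double {
    // Clamp out-of-range confidence values into [0, 1].
    let clamped = min(max(confidence, 0.0), 1.0)
    // Weak detections stay faintly visible rather than vanishing outright.
    return minimumOpacity + (1.0 - minimumOpacity) * clamped
}
```

Keeping a small opacity floor means a weakly tracked joint fades rather than flickering in and out, which reads more smoothly in an animated overlay.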
A small team and I had the vision to expand this type of communication by bringing a neural network into a digital product. Our brilliant computer vision team trained the network on gestures that would trigger effects, while we designed and experimented with ways this could be used to communicate, both on iOS and in real life.
Reacting to information in others' movements is nothing new. Body language complements verbal communication in social interaction. Without saying a word, we can recognize when someone is celebrating or distraught. AR should make use of this expression as a new way to interact, so we began focusing on common, popular gestures from our daily lives.
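To make the idea concrete, here is a rule-based sketch of how one such gesture, celebrating with raised arms, could be recognized from 2D joint positions. This is a hypothetical fallback heuristic for illustration only; Octi's production system used a trained neural network, and the `Joint` type and coordinate convention here are assumptions (normalized image coordinates with y increasing downward, as on iOS).

```swift
// Hypothetical sketch of a rule-based gesture check; the real product
// used a trained neural network. `Joint` and the coordinate convention
// (y grows downward, as in iOS image coordinates) are assumptions.
struct Joint {
    let x: Double
    let y: Double
}

// Both wrists above the head reads as arms raised in celebration.
func isCelebrating(leftWrist: Joint, rightWrist: Joint, head: Joint) -> Bool {
    return leftWrist.y < head.y && rightWrist.y < head.y
}
```

A learned classifier replaces brittle rules like this one, but the input is the same: per-frame joint positions from the body tracker.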
With the ability to detect movement, we started adding animated characters that would mirror your movement and exploring new styles of effects.
The goal for Octi: create an exciting video to communicate with friends. We tried various routes to make it as easy as possible to open your camera and create a dynamic video. Running the neural network and plane detection simultaneously required us to time prompts and educate users in real time. After a few rounds of testing, we decided to part ways with ARKit and focus on body detection. This let us jump straight to teaching users how to use a new kind of camera. In addition to showcasing features in onboarding, Octi provides various places in the app where people have an opportunity to learn.
Your page is optimized to be shared anywhere, designed for creating and curating content that can be shared both inside and outside the network. It's an… interesting time for both entertainment and communication. Octi experiments with a new type of expression in augmented reality.