Notes from SCREENS Day One: Yellow API, Convergence and Embedded Devices


Below, guest blogger Patrick Dinnen shares his afternoon session notes from day one at SCREENS.

YellowAPI with Brad Wing
Yellow API is the digital transformation of Yellow Pages. It is, and will remain, a free platform. Google, Yahoo and Bing location-based services in Canada are powered by Yellow. Exclusive deals with the big players ended in the last year, so the platform is now open to everyone. They have 1,500,000 geocoded business listings. They're changing the model with several ways for developers using the API to make money:

  • integrate API data into location search and pay for leads;
  • display local mobile PPC (pay-per-click) ads;
  • provide geocoded coupons and deals sourced from Red Flag Deals.

Exploring Convergence with Peter Nitsch
Peter talked about how Japanese companies disrupted the Swiss watchmakers in the 70s and 80s. The Swiss were focused on the high-end market with its big margins, so they saw no value in the cheap, mass-producible quartz-movement watches the Japanese companies were producing. It wasn't until the Swiss adopted those innovations themselves and came up with the Swatch that the industry's decline was halted.

Teehan+Lax's +labs is working in a similar space: understanding the capabilities of all the modular technologies available to us today and framing them for their clients. T+L are in the business of changing consumer perceptions. It used to be simple when there was a single channel, but now we're in a digital world that is far from simple. In 2002, email or the web were your only options for digital marketing.

Now, in 2011, we have:

  • desktop (we know what this is and it's not going away);
  • mobile;
  • TV (the APIs aren't there yet, but it will be big: 60% of new TVs in 2014 will ship with connectivity);
  • the physical internet, the least mature but the one deserving the most exploration. Smart wristbands, RFID and WiFi-connected objects are all beginnings of what will happen here.

Channels now are far more complex; increasingly integrated and evolving fast. It's crucial for agencies to understand this space.

Moore's law and modularisation are going to accelerate the pace of change. There's a lot of modularisation happening in the tech world. Intel did it with OpenCV, for example: a body of computer vision research became a library, with Intel's sole aim being to sell more chips. T+L decided to set up a group inside the agency to explore these modules, understand them and frame them.

+labs is integrated within the agency but has its own staff who don't work on client projects. Their mandate is to reduce complexity (for the team, and for all staff in the agency, by instilling a culture of tinkering and hacking) and to explore possibilities, showing designers and clients what's possible.

They have a physical space, staff and budget: an investment by the agency.

They ran a one-day Arduino workshop for everyone in the agency. Everyone started from scratch and built real projects, showing how easy it is to mock up hardware prototypes.

Projects are the bread and butter of what they do. The primary function of the core team is to sketch, build fast and re-evaluate regularly. Their process goes modules -> exploration -> capabilities -> experiment.

The team started simple when their capabilities were low. They looked at NFC for passive interaction: a much talked-about technology, but typically all people think about is payment. They wanted to explore its possibilities for interaction in physical space. For example, at Google I/O Peter saw an NFC pillar that let attendees check in on Foursquare with a tap. This area of tangible user interfaces, "interfaces that are not glass", is a theme.

Next the team looked at ideas around big data. They looked at Twitter with the idea of cleaning it up and filtering out some of the noise (lots of people have tried this), trying to replace the user's learning of complex filtering UIs with machine learning. They used Carrot2, an open-source clustering library. They had some success but shelved the results, coming back later to develop a cluster visualization for the FITC ETA event.
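Carrot2 clusters documents into labelled groups. As a toy illustration of the clustering idea only (nothing like Carrot2's actual Lingo/STC algorithms, and a hypothetical function name), grouping short messages under a shared keyword might look like:

```python
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "in", "on", "for"}

def cluster_by_keyword(messages):
    """Group short messages under their first non-stopword term.

    A deliberately naive stand-in for real document clustering:
    each message simply joins the cluster named after its first
    meaningful keyword.
    """
    clusters = defaultdict(list)
    for msg in messages:
        words = [w.lower().strip(".,!?") for w in msg.split()]
        keywords = [w for w in words if w and w not in STOPWORDS]
        label = keywords[0] if keywords else "misc"
        clusters[label].append(msg)
    return dict(clusters)

tweets = [
    "Arduino workshop in the studio today",
    "Arduino is fun for prototyping",
    "Kinect hacking on the Surface table",
]
print(cluster_by_keyword(tweets))
```

Real clusterers weigh terms across the whole corpus rather than trusting word order, which is why a library like Carrot2 produces far better labels than this sketch.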

Touch Vision is a more recent project that uses markers and mobile computer vision to detect the location of multiple screens and interact with them using the mobile device. It takes the elements we've seen in AR demos and puts them together in a different configuration. The idea of crossing multiple target screens is where it becomes compelling, and it was a designer who pointed out they needed that; they never originally intended to do it.

They're open sourcing parts of this and are big believers in that approach, as they're building on others' open-source modules and want to contribute back.

Workspace is a project they're working on now. It starts from the metaphor of a virtual chemistry set. The technology is a projection-mapped table with embedded RFID tag readers. In phase 2 they start moving projectors around, using quad-warping to keep the projection looking right. In phase 3 they plan to add a Kinect to do dynamic projection mapping.
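Quad-warping means pre-distorting the projector's rectangular output so it lands correctly on an arbitrary quadrilateral. As a rough sketch of the idea (real projection-mapping tools typically apply a full projective homography rather than this simple bilinear version):

```python
def quad_warp(u, v, corners):
    """Map normalized coordinates (u, v) in [0, 1] x [0, 1] onto an
    arbitrary quadrilateral by bilinear interpolation.

    corners: four (x, y) points in order top-left, top-right,
    bottom-right, bottom-left. A simplified stand-in for the
    homography-based warp projection-mapping software usually uses.
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # Interpolate along the top and bottom edges, then between them.
    top = (x0 + u * (x1 - x0), y0 + u * (y1 - y0))
    bottom = (x3 + u * (x2 - x3), y3 + u * (y2 - y3))
    return (top[0] + v * (bottom[0] - top[0]),
            top[1] + v * (bottom[1] - top[1]))

# The centre of the unit square maps to the middle of the quad:
print(quad_warp(0.5, 0.5, [(0, 0), (100, 10), (110, 90), (-10, 80)]))
```

Applying this per-vertex (or per-pixel) to the projector image keeps content looking rectangular on the table even when the projector is moved off-axis.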

The idea of "intended unintended mistakes" is one value. As an example, they have a $13k MS Surface coffee table that runs Vista, so it can't even connect to a Kinect, and nobody uses it. But for prototyping the Workspace piece they got openFrameworks running on the Surface. A side effect was that they'd created a presence-sensing table, since connecting a Kinect was the natural next thing to do once OF was running. This sparks conversations among the designers about what they could do with that.

A New Landscape of Embedded Devices for Games with Adam Gutterman
Adam sees five screens ruling our lives for a while: phone, tablet, laptop, desktop, TV. He asks not how we can interact with these screens but how the five screens interact with each other. Scrabble is an early example of iPad and iPhone screens working together: the tile racks live on the phones and the board on the iPad.

The Remote app on the iPhone interacts with the library on the desktop: client to client, no cloud involved. Rowmote on the iPad allows control of apps on the laptop (and from there on to the TV), letting him watch Netflix from bed.
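The pattern in both apps is two devices on the same local network talking directly over a socket, with no cloud relay in between. A minimal sketch of that idea (hypothetical command names and protocol, not what Remote or Rowmote actually speak):

```python
import socket
import threading

def run_media_server(host="127.0.0.1", port=0):
    """Listen for one short command ("play", "pause", ...) from a
    remote-control client. A toy stand-in for the real, proprietary
    protocols: the point is simply that the two devices connect
    directly, no cloud service in the middle."""
    srv = socket.create_server((host, port))
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(64).decode().strip()
            conn.sendall(f"ok: {cmd}\n".encode())  # acknowledge the command
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def send_command(port, cmd, host="127.0.0.1"):
    """The 'phone' side: connect straight to the 'laptop' and send a command."""
    with socket.create_connection((host, port)) as s:
        s.sendall(cmd.encode() + b"\n")
        return s.recv(64).decode().strip()

port = run_media_server()
print(send_command(port, "play"))  # prints "ok: play"
```

On a real network the phone would first discover the laptop's address (e.g. via zeroconf/Bonjour) instead of using a hard-coded localhost address.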

My Empire is a now-discontinued Facebook game with a cut-down version of the experience on iPhone/iPad as a companion. Family Guy Online (all built in Unity) is an upcoming MMO and will also have iOS companion apps.

Do we have screen agnostic games here now? Yes, Words with Friends.

Author once, deploy anywhere is the promise of Unity: PC, web, Mac, iOS, Android, Xbox, Wii... But they don't port to everything, as that is a lot of work. That's where Union comes in: they think about new platforms and sometimes do the port too, if it makes sense.

Marshall McLuhan said "the medium is the message." Angry Birds and Fruit Ninja were the big hit games on iOS. Why? They fit the touchscreen's capabilities so completely that they wouldn't be fun with a mouse; they used the medium of the touchscreen incredibly well. Adam hopes that in the future the medium won't be the message; instead, your content, deployed across platforms, will be what matters.