posted: June 24, 2014 | author: Chris Morabito
At I/O 2013, Google unveiled an overhauled user interface for Google Maps. They broke with the tired side panel and gave us a new full-screen map interface. Redesigning an application like Google Maps, which is used by over 1 billion people, is no easy feat! Three lessons to take away: think big, question everything, and listen to your users.
Google wanted to redefine expectations of online maps. They also wanted to leverage new technologies like HTML5 and WebGL. Existing maps seemed like paper maps with a search box on top; they wanted to rethink maps as dynamic entities. Designers made hundreds of wireframe sketches and, once those were refined, began fleshing them out with colors and details. They created a build system for their design mock-ups (like developers have for code), which enabled other Googlers to see the latest thinking and the path forward.
“If it ain’t broke, don’t fix it” didn’t apply. Map tiles were redesigned at all zoom levels to make them cleaner, more contextual, and less overwhelming. The team developed the idea of a “map of a place”, drawing on how you’d give directions to a friend: highlighting only the most important details. When mapping a place, important roads are brought forward, and less important details sink into the background. Search results were also updated to differentiate markers by their ranking, with the top results appearing larger and including descriptions.
Initial users complained that the map was slow. It genuinely was slow on limited connections, but mostly the interface just felt slow because of the loading animation. To make the map feel faster, engineers displayed the search box immediately, along with an initial set of static tiles. They found that users either want to search right away or look at the map for a few seconds to get their bearings when they first come into the application, so the staged load put the critical features in front of users while the rest loaded. The lesson: make it feel fast, even if it isn’t.
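The staged load described above can be sketched as a small async pipeline. This is only an illustration of the pattern, not Google's actual code; the stage names and timings are made up:

```python
import asyncio

async def load_map():
    """Stage the load so critical UI appears before the heavy map finishes."""
    shown = []

    async def show_search_box():
        shown.append("search box")    # visible almost immediately

    async def show_static_tiles():
        await asyncio.sleep(0.01)     # stand-in for fetching a few static tiles
        shown.append("static tiles")

    async def load_dynamic_map():
        await asyncio.sleep(0.05)     # stand-in for the full dynamic map load
        shown.append("dynamic map")

    # Kick off the heavy load, but don't make the user wait on it.
    heavy = asyncio.create_task(load_dynamic_map())
    await show_search_box()
    await show_static_tiles()
    # At this point the user can already search and get their bearings.
    await heavy
    return shown

print(asyncio.run(load_map()))  # → ['search box', 'static tiles', 'dynamic map']
```

The point of the sketch is the ordering: the critical pieces render first while the expensive load continues in the background, which is what makes the interface *feel* fast.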
Those of us charged with developing map applications know what a challenge it can be to create a functional, intuitive interface. Google has broken the mold, and given us a new model to consider when designing our interfaces and visualisations.
Google Earth Engine allows researchers to perform distributed geospatial analysis on Google’s global store of Landsat satellite imagery. Earth Engine’s purpose is not just to visualize data, but to extract information from it.
Google has archived all 40 years of Landsat data in its full spectral range, from far IR through the visible spectrum: petabytes of data, backed by massive compute resources. Google composites cloud-free images into a global 15 m satellite basemap. After the current imagery was complete, they went back and created historical cloud-free imagery for each year of Landsat’s existence, viewable as a timelapse video: 2 million compute hours across more than 66,000 computers, completed in 1.5 days.
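A common way to build a cloud-free composite is a per-pixel reduction over a stack of acquisitions, skipping cloud-masked samples. Google's actual pipeline is far more sophisticated, but the core idea can be sketched in a few lines (the tiny 2×2 "scenes" below are made-up stand-ins for real Landsat data):

```python
from statistics import median

# A "scene" here is a grid of pixel values; None marks a cloud-masked pixel.
scenes = [
    [[0.30, None], [0.25, 0.40]],
    [[0.32, 0.55], [None, 0.42]],
    [[0.28, 0.53], [0.27, None]],
]

def cloud_free_composite(scenes):
    """Per-pixel median over all scenes, ignoring cloud-masked (None) samples."""
    rows, cols = len(scenes[0]), len(scenes[0][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            samples = [s[r][c] for s in scenes if s[r][c] is not None]
            out[r][c] = median(samples) if samples else None
    return out

print(cloud_free_composite(scenes))
```

Because each output pixel depends only on its own stack of samples, the reduction parallelizes trivially, which is how a job like this can be spread across tens of thousands of machines.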
Earth Engine is backed by public raster and vector data stores (Landsat, MODIS, terrain, land cover, atmospheric), with computation resources exposed through web APIs. Private Maps Engine data can also be analysed in Earth Engine. Algorithms are mapped, in parallel, over entire collections of filterable images for analysis. Through the Earth Engine GUI you can teach Earth Engine to classify pixels by dropping markers and polygons, or write code to perform your own analysis; both run on demand.
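The programming model, filter a collection and then map an algorithm over it in parallel, looks roughly like this sketch. This is plain Python with made-up sample data, not the actual Earth Engine API; NDVI itself, though, is a standard vegetation index computed from Landsat bands:

```python
from concurrent.futures import ThreadPoolExecutor

# Made-up stand-ins for Landsat images: per-image mean band reflectances.
collection = [
    {"year": 1999, "red": 0.12, "nir": 0.45},
    {"year": 2005, "red": 0.20, "nir": 0.40},
    {"year": 2013, "red": 0.15, "nir": 0.50},
]

def ndvi(image):
    """NDVI = (NIR - red) / (NIR + red), a standard vegetation index."""
    return {"year": image["year"],
            "ndvi": (image["nir"] - image["red"]) / (image["nir"] + image["red"])}

# Filter the collection, then map the algorithm over it in parallel.
recent = [img for img in collection if img["year"] >= 2005]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(ndvi, recent))

print(results)
```

In Earth Engine the filter and map run server-side across Google's infrastructure rather than in local threads, but the shape of the computation, a pure function applied independently to each image, is the same.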
The session showed some incredible examples of the information you can extract from satellite imagery with powerful analytics.
This was a fun one. Google’s Santa Tracker allows people around the world to follow Santa as he traverses the globe on Christmas Eve. Googlers built the tracker in their 20% time on the Google developer platform, with tools that are available to anyone outside of Google. Using the latest web technologies, Google Maps, and App Engine, the applications tracked Santa across a variety of platforms. The App Engine service was the authoritative source of Santa’s current location and his stats for the night (presents delivered, etc.).
In addition to the web experience, the team built a native Android app for optimum performance. This also allowed the app to issue notifications when Santa took off. The Maps API for Android showed Santa’s location, complete with 3D buildings. Games were implemented in OpenGL and integrated Google Play Games services.
Chromecast was introduced shortly before the launch of Santa Tracker, and the team wanted to take advantage of the technology. Chromecast ran the web application, controlled from the Android app. Chromecast runs the Chrome browser onboard, but on low-powered hardware.
The team ran dry runs, because the tracker had to perform perfectly on Christmas Eve, and load tested the application on App Engine at up to 10,000 queries per second. Interestingly, Santa Tracker led to an increase in Google traffic, which ordinarily would have decreased on Christmas Day. Traffic was managed by delaying users’ feeds by up to 2 minutes, and by adding options to the service responses that could remotely disable features or render Santa at a higher altitude to reduce the number of Google Earth requests.
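Shedding load by jittering refreshes and shipping kill switches inside the response payload is a common pattern. A hypothetical sketch of the client side (the field names are invented for illustration, not the Santa Tracker's actual protocol):

```python
import random

def apply_server_options(response):
    """Honor load-shedding options the server embeds in each response.

    'max_delay_s' spreads client refreshes out over a window, and
    'disabled_features' lets the server switch expensive features off
    remotely. Both field names are made up for this sketch.
    """
    delay = random.uniform(0, response.get("max_delay_s", 0))
    features = {"games", "3d_view", "feed"} - set(response.get("disabled_features", []))
    return delay, features

# Example: the server asks clients to spread refreshes over 120 s and
# disable the expensive 3D view while under load.
delay, features = apply_server_options(
    {"max_delay_s": 120, "disabled_features": ["3d_view"]})
print(sorted(features))  # "3d_view" removed; delay lands somewhere in [0, 120]
```

Because every knob lives in the server's response, the team could dial features up or down on Christmas Eve without shipping a new client.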
This session was a great way to start the day, very inspirational. This session was presented by Google’s ATAP (Advanced Technology and Projects) group, and highlighted 3 of the 11 projects they’re currently working on: Tango, Ara, and Spotlight.
Project Tango is a project we’ve mentioned previously here on the woolpert_labs blog; it’s a 7-inch tablet device, equipped with a variety of sensors, that maps the environment around it “at human scale”. By fusing cameras, motion, and depth sensors, Tango creates a detailed 3D picture of indoor spaces. Some very exciting demos were shown, the most impressive of which was walking across a room, up 5 flights of stairs, across another room, back down 5 flights of stairs, and returning to the starting point, all with very minimal drift. It will be very exciting to see this technology evolve, and to see the implications for indoor mapping.
Project Ara is a reimagining of how smartphones are constructed. Ara asks: “instead of buying a phone for its camera, what if you could buy a camera for your phone?” Ara devices use a modular structure that will allow users to hot-swap components, say to get a higher-resolution camera or a battery pack with more capacity. This revolutionary technology is still in its early “spiral 1” stages, but a proof-of-concept phone running Android has already been built (though the “demo gods” were not friendly to seeing it on stage today).
In the words of John Lasseter, “technology inspires art, art challenges technology”. And so, ATAP’s Project Spotlight is working on new techniques for storytelling on mobile devices. Spotlight uses your device as a window into another world. Moto X users were treated to the first example of this with “Windy Day”, a short film rendered client-side on their phones. The next experience will be “Duet”, which was the primary focus during the last third of the session. ATAP engineers worked with Glenn Keane, a former Disney animator, to develop a way to translate his frames of pencil sketches into “graphite constellations” which render the story on your phone. The final version will be interactive, but we saw a cinematic rendering today, which was, in a word, beautiful.
The next session I attended detailed the new Android app runtime, ART. ART replaces Dalvik and brings substantial performance gains. With improvements to compilation techniques, garbage collection, and support for 64-bit architectures, the Android team has achieved some truly remarkable leaps that will debut in the “L” release of Android later this year.
At 7:00 this morning Google began handing out our new Android Wear devices: your choice of either Samsung’s Gear Live or LG’s G Watch. The devices are pretty comparable, but I chose the LG. Setting up the watch required installing preview versions of Google Play Services and the Android Wear companion app, which I did at the hotel this morning.
Powering on the device presented its own challenge, as it required connecting the charging cradle to power. There are no power outlets available on the first floor of Moscone West, and they’re not letting us upstairs until 9:00. I was able to power it up by connecting the USB to my laptop instead of the wall adapter. Once powered on, the G Watch required a software update, and from there the setup was very straightforward. The device pairs with your Android phone via Bluetooth–thankfully the device list was sorted by signal strength, or I never would have found mine in the multitude of watches being set up this morning!
This morning after the keynote we all received nondescript cardboard packages, and were told only that a few Googlers had discovered in their 20% time what you can do with cardboard.
As it turns out, it’s a virtual reality headset for your phone! The cardboard folds up to make a viewer. Download an app from the Google Play Store, drop your phone in, and you’re treated to some virtual scenes from Google Earth, Street View, and more!
Google is restructuring many of its first-party web properties (e.g. Gmail) to use Polymer; in fact, its new Material Design language is implemented with it. Polymer is very promising, a forward-looking technology that reimagines how to build websites. The question was even posed: could you restructure the entire HTML standard as an open-source collection of Web Components?
“YouTube should be viewable by everyone on every screen.”
“The web is finally a viable option for distributing immersive experiences across a variety of platforms.”
The YouTube team shared their experience building YouTube as a singular cross-platform experience. To ease maintenance and fragmentation, Google develops the YouTube application using web standards, with much of the code shared across desktop, mobile, game consoles, and smart TVs and Blu-ray players. They work with system-on-a-chip manufacturers to build video codecs into silicon so device makers can take advantage of them in their next generation of hardware. The YouTube team has also built an instrumentation and experimentation engine into their applications, allowing them to quickly deploy changes and get feedback on how they perform in the real world.
It’s lunchtime and I feel like I’m able to take a breath for the first time all day. The line to get into the keynote this morning wrapped around the block and then some. For whatever reason, we didn’t get into the hall until 5 minutes after the keynote started, so I felt behind the ball from the start.
The keynote focused on Android: Android, Android everywhere. Later this year, Google will be releasing Android “L”, and for the first time is giving developers early preview access to develop against. The “L” release of Android brings the biggest user interface overhaul of the mobile operating system since Ice Cream Sandwich; but Google isn’t stopping there. The “Material” design language Google is introducing is meant to extend beyond Android and be cross-platform. Google is bringing Material Design to the web with their Polymer HTML extension library. Material Design elements are meant to have the physics of cardstock, but also size themselves dynamically and respond to user interaction with animations and ripples of color.
Google is bringing Android everywhere: Android on your watch with Android Wear; Android in your car with Android Auto; Android in your home with Android TV and Chromecast; and even Android on your Chromebook with ported apps.
All attendees will receive either an LG or Samsung Android Wear watch, a Motorola Moto 360 watch later this summer, and a mysterious #cardboard package…
As one of the lucky winners of this year’s lottery, I have the privilege to be in San Francisco this week for I/O, Google’s annual developer conference. The agenda is developer-focused, but the keynote always delivers new product announcements and usually includes demonstrations of “moonshot” projects such as Glass and self-driving cars.
I’m checked into the hotel, checked into the conference, and ready for two days of full-time Googling!
Conference swag includes a t-shirt and water bottle (so far…).