The design team behind Oscar started and ended their process fixated on the user experience. Many healthcare providers still send new customers stacks of paperwork for onboarding; Oscar instead jumped wholly into online questionnaires, tutorials, and its app. Here are a few key lessons they learned over the iterative design lifecycle:
As with enterprise apps, healthcare apps should be designed as consumer products (people don’t shed their skin and become mindless patients).
87.8% of people who avoid early care do so because of bureaucracy, insurance issues, and price. Telemedicine is a glimmer of hope – connecting doctors directly with patients.
With healthcare apps, less is truly more. People tend to use healthcare apps rarely and often forget about them in between uses. The app needs to be more intuitive than innovative. Make it SIMPLE.
Test early and often using prototypes to course correct along the way.
The team successfully limited navigation buttons to give users a more guided path through the app (a forcing function).
They added CTAs for calling their doctor throughout the app at key touchpoints. This way, users understood WHEN they should be seeking help.
Getting users to spend LESS time on the app (meaning, they got what they needed and got off) became the goal. They needed to define success differently than other kinds of apps.
Sesame Workshop and IBM announced last year that they would work together on a line of cognitive apps, games, and educational toys. The first of those apps just completed its first pilot trial, introduced to 150 students in Georgia’s Gwinnett County Public Schools. The app incorporates Sesame Workshop’s learnings on early childhood education and is powered by IBM’s Watson, all dressed up with Sesame Street characters.
The app was tested in live classrooms on tablets, and educational videos and word games were used to enhance each student’s vocabulary. Teachers and parents could monitor each student’s development in real time and adjust lessons for each child based on need. The AI also personalized the app for each child using adaptive assessments.
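IBM hasn’t published how Watson’s adaptive assessments work, but the core idea of adapting to a learner can be sketched in a few lines. Everything below (the levels, thresholds, and function name) is an invented illustration, not the actual model:

```typescript
// Toy sketch of adaptive difficulty: step the word-difficulty level up
// when the child's recent accuracy is high, ease off when it's low.
// Levels run 1 (easiest) to 5 (hardest); the thresholds are invented.
function nextDifficulty(current: number, recentCorrectRate: number): number {
  if (recentCorrectRate > 0.8) return Math.min(current + 1, 5); // doing well: harder words
  if (recentCorrectRate < 0.5) return Math.max(current - 1, 1); // struggling: easier words
  return current; // comfortable: stay at this level
}

// A child acing level-3 words (90% correct) moves up to level 4.
const stepUp = nextDifficulty(3, 0.9); // 4
```

The real system layers content selection and reporting on top, but the feedback loop (measure, then adjust per child) is the same shape.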
The first large-scale test was a big success over its two-week run. Students acquired new vocabulary as a result of the app, and it translated to the real world: students called spiders “arachnids” and began recognizing “camouflaged” animals in out-of-app discussions.
Framer, a prototyping tool that started out as a web app, has finally added a design suite of its own. Framer carved out its niche in the battle for the best prototyping solution by letting designers go deeper into animations, transitions, and interactivity. The cost was that designers needed to get down and dirty with CoffeeScript, a tall order for designers who may or may not be familiar with even HTML or CSS. This invariably sparked a conversation about whether designers (UX or otherwise) should learn to code. Framer first started to bridge that gap by offering Sketch/Photoshop integration to handle the design work, along with tons of tutorial videos to get designers started. Now it provides a heavily Sketch-inspired design workspace that lets designers handle the visuals before flipping to Code to define interactivity.
Framer also teased options for designing responsiveness, letting designers set rules for how elements on a page change as the screen size changes. Framer has defined itself by focusing on really in-depth mobile (especially native) prototyping. It’ll be interesting to watch it evolve alongside competitors such as Axure and Craft, and the impending release of Adobe XD.
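Framer hasn’t shipped these responsive options yet, but the rule-based idea it teased (properties keyed to screen size, with the widest matching breakpoint winning) can be sketched generically. The rule shape, breakpoints, and property names here are hypothetical, not Framer’s API:

```typescript
// Generic sketch of breakpoint rules: each rule maps a minimum screen
// width to layout properties; the widest rule that still fits wins.
interface LayoutRule {
  minWidth: number;       // apply when the screen is at least this wide
  columns: number;        // e.g. columns in a card grid
  sidebarVisible: boolean;
}

const rules: LayoutRule[] = [
  { minWidth: 0,    columns: 1, sidebarVisible: false }, // phones
  { minWidth: 768,  columns: 2, sidebarVisible: false }, // tablets
  { minWidth: 1200, columns: 3, sidebarVisible: true  }, // desktops
];

function layoutFor(screenWidth: number): LayoutRule {
  // Keep the matching rule with the largest minWidth.
  return rules
    .filter(r => screenWidth >= r.minWidth)
    .reduce((best, r) => (r.minWidth > best.minWidth ? r : best));
}

const tablet = layoutFor(1024); // 2 columns, no sidebar
```

This is essentially what CSS media queries do declaratively; the draw of doing it in a prototyping tool is previewing those rules against real designs before any production code exists.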
This week at Google I/O 2017, its annual developer conference, Google Assistant stole the show with a huge push into conversational interfaces and a focus on context. Here are some of the cooler takeaways from the announcements:
Google Assistant focuses on a continued conversation with the assistant, picking up context along the way from spoken conversation, typed input, and images picked up by the camera.
Google Assistant will hand off to branded chatbots to complete transactions seamlessly within the app. You can tell the Assistant what you want for lunch, then be greeted by the Panera chatbot, which completes your order. This presents a really cool way for a brand to build personality through its chatbot.
Google Assistant leverages Google Lens (and a lot of Google’s AR learnings) to incorporate the phone’s camera. Users can point the camera at real-world text in a language they don’t understand and get an overlay on their phone translating it. Pointing the camera at a router’s network sticker pulls in the connection details and makes them actionable.
Google Lens also lets users point their cameras at a venue and immediately get information such as overall reviews, price range, and which friends have visited.
Pinterest added a new feature to its mobile app this week that detects and picks apart what is in an image you capture on your phone’s camera. Called Visual Guides, the feature populates a few tags for things that exist in the image (even some more abstract ones like “design”). You can then tap a tag to see related pins.
Pinterest has worked hard to define itself against Twitter, Facebook, and Snap, and this is another feature that helps. Pinterest has set itself up as a way to look at one idea/recipe/thing and then find multiple tangentially related ideas/recipes/things. The less friction users experience in discovering new ideas, the easier it is for them to go deeper into the service. The hope with Visual Guides is to create a new kind of user behavior, letting users spawn ideas from their day-to-day lives or anything that catches their eye.
Google Classroom became available for free to anyone with a personal Google account. Originally, Classroom was only available to users with Google education accounts. Users can easily set up and manage a course with a suite of tools for grading, assignments, etc.
Interest extends to people outside a traditional classroom, such as skill coaches, hobby instructors, and anyone keeping track of progress in a group: anything from Girl Scouts completing robotics badges to Dungeon Masters introducing D&D.
Google Classroom is just one part of the rush toward edtech solutions. As archaic solutions (such as Blackboard) show their age, startups and established companies alike are vying for the education space. Classroom also opens a new source of ad revenue now that it is no longer tied to education accounts (since Google does not serve ads on those accounts).
Elon Musk has been working on Neuralink, a human-computer brain interface project, between his other small ventures, Tesla and SpaceX. Neuralink may be the most ambitious of the three, however, as it aims to create a much more intimate connection between our brains and computers. Musk will of course take on the role of CEO at Neuralink, making him CEO of all three companies.
The hope behind Neuralink is to increase the efficiency of communication between two or more people with the help of cloud-based AI. Verbal or other types of unaided communication are always subject to misinterpretation, or are otherwise “lossy”, says Musk. Neuralink would allow a better transmission of ideas without (hopefully) the limitations of language. It could also be used to look up information much as you might on your phone, except the information is available within your mind’s eye. The technology would likely be refined for better delivery methods, up to the point of not just knowing but understanding an idea instantly. Musk maintains that this is not as big a jump as it seems – people already rely on their phones to search for any kind of information they please, and often feel bad when they are separated from their phone for even a day.
Wait But Why does a deeper dive into Neuralink’s potential, includes cute scribbles, and refers to the tech as a “wizard hat” (which it pretty much is). Musk estimates that the technology will be available in 8-10 years, but the company will focus on therapeutic applications for those with disabilities to start.
“An 11th-grade designer, a 21-year-old startup veteran, and an astrophysics postdoc/NASA Hubble Fellow walk into a pre-seed funding pitch” sounds like the start of a bad joke, but it isn’t for the team behind TagDat.
TagDat is a recently funded and launched app meant to go toe-to-toe with Yelp as a local restaurant-review app. One of the key differentiators is that users can tag restaurants with emojis like “authentic”, “spicy”, and “cool” to give others a snapshot of what a restaurant is all about. It also uses a bright, playful color palette more reminiscent of Foursquare than Yelp.
What’s more interesting than the app is the team itself. The app is the brainchild of 11th-grade designer Buffy Li (cheered on by her cat Bailey, who has 27,400 followers). This is (only?) her second mobile app – her first was Haorizi, a Chinese social network for women. Buffy teamed up with Wilson Li (no relation), a University of California student who brought his previous startup knowledge, and Zheng Cai (31), an astrophysics postdoc and NASA Hubble Fellow who brought his big-data experience.
Why it’s Hot:
Another great example of simplification and a cool new use for emojis.
This week, Amazon debuted AmazonFresh Pickup, “drive-up groceries delivered to your trunk.” The beta launched on Tuesday, allowing testers to order groceries online, set a time slot for pickup, and drive to an AmazonFresh Pickup location where their car is loaded with groceries in a few minutes. Groceries can be picked up as soon as 15 minutes after ordering, and there is no minimum order. The offering comes free with an Amazon Prime subscription. For now the beta is limited to Amazon employees, and only at two Seattle locations.
This announcement comes three months after Amazon Go, the grocery store without a checkout, debuted. Both represent a rethinking and retooling of grocery stores by Amazon, and a push into the brick-and-mortar space more generally. Both seek to save the user time: one by cutting out the checkout process (and lines!), the other by taking out in-store shopping altogether. Amazon Go is also currently in beta.
According to the New York Times, one of Amazon’s next forays may be into stores that sell furniture and home appliances, using AR or VR to show how couches, tables, etc. will look in your home.
Why it’s Hot:
Technology and strategic analysis leveraged to save shoppers time and spare them long lines.
Will probably save you the 1-2 hours per week you’d otherwise spend grocery shopping.
Floyd County Productions, the animation company behind Archer, will be releasing an augmented reality app alongside the release of Archer’s eighth season. The app uses the phone’s camera to pick up on certain images in the show (or real-world objects such as billboards) to give users clues to a mystery in the app that is separate from but related to the show. The goal was to give fans a way to explore more of the show’s world without disrupting the experience for casual viewers with no knowledge of the app. Users are rewarded with secret goodies, creating a nice feedback loop that brings them back next episode.
The app grew out of mixed-media games and puzzles the Archer team has experimented with before: in season 6, a hex code was hidden in an episode which, when decoded, sent users to a microsite exploring the psyche of one of its characters.
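For the curious, decoding a hidden hex string like season 6’s takes only a few lines of code. The string below is a made-up example, not the actual code from the show:

```typescript
// Decode a hexadecimal string into ASCII text, two hex characters per byte.
function decodeHex(hex: string): string {
  const bytes = hex.match(/.{2}/g) ?? []; // split into byte-sized pairs
  return bytes.map(b => String.fromCharCode(parseInt(b, 16))).join("");
}

const hidden = decodeHex("617263686572"); // "archer"
```

In the show’s case, the decoded text pointed fans to a URL; part of the fun of these puzzles is that the decoding step itself is trivial once you spot the hidden string.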
Why it’s Hot:
Uses tech to give fans a new way to interact with a TV show, an otherwise passive medium.
Nice evolution of other games and goodies Floyd County has experimented with before.