The designers observed that parents have so little time nowadays that it might be better for them to retain information through an interactive app than by reading through dozens of parenting books. The designers believe there are two main benefits of getting the app over the book the app is based on: 1) parents can familiarize themselves with parenting advice quickly; 2) it serves as a quick reminder about the book’s tenets when faced with conflict. Parents can even print out cards with tips when they complete a scenario in the app. The team hopes to incorporate feedback from parents who use the app to craft new scenarios in upcoming versions.
Successful design systems need investment of resources. Neglect the system and it quickly becomes out of date (and who wants to use dated code?). Small incremental updates over time keep the system working.
A team should own the system, and be responsible for supporting, developing, evangelizing, and managing the whole thing. This makes it more likely that the system stays relevant.
Continuous communication with designers and developers is crucial. Both should feel heard, although a final decision must be made about what to include and exclude.
People need to want to use the design system. Make it the path of least resistance and show value by recording wins and evangelizing.
Good design systems should scale, so plan the architecture in advance.
Most importantly, if it’s harder for people to use than their current system, people just won’t use it. Just because it might be an internal tool, don’t treat it as an afterthought – simplify until it’s easier than the ad-hoc systems designers and devs are using.
Today, Snapchat begins rolling out its big redesign that CEO Evan Spiegel says separates the ‘social’ from the ‘media.’ While Snapchat opens to the camera as always, a feed devoted to your friends now lives to the left and a Discover feed devoted to exploring professional creator content lives to the right. The Discover feed combines automated analysis of past viewing behavior and human curation to better weed out the kind of click-bait content that has plagued Facebook. As social media companies look inward at the fake news problem, Spiegel believes Snapchat can solve it with a curation board that sifts through everything that appears on Discover.
The new algorithmic redesign makes Snapchat more Instagram-like – rather than highlighting the most recent Stories (which emphasized oversharers), the new Snapchat algorithm puts a spotlight on your friends’ Stories. The Discover feed also includes features for users to see less content from creators they don’t want to see, giving users an easy fix if the mix of algorithm and human curation isn’t jibing with them.
Google announced an extension Thursday for its growing library of 3D objects, the Poly API. Using the API, developers can bring pre-baked 3D objects into their projects, making for a much faster workflow. This represents another step by Google to bring AR and VR developers to its toolset. The Poly API is not limited to games or apps; it can be used by web devs across mobile and desktop experiences. Designers are hungry for ways to quickly prototype AR and VR experiences, and perhaps this brings us one step closer. For the adventurous, here’s the Google API Documentation.
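For a feel of the workflow, here’s a minimal sketch of querying the Poly REST endpoint for assets by keyword; the parameter names follow Google’s published docs, but the API key and search term are illustrative placeholders:

```python
from urllib.parse import urlencode

POLY_ENDPOINT = "https://poly.googleapis.com/v1/assets"

def build_poly_query(api_key, keywords, asset_format="OBJ"):
    """Build a Poly asset-search URL; the response would be JSON
    listing downloadable 3D assets matching the keywords."""
    params = {"key": api_key, "keywords": keywords, "format": asset_format}
    return POLY_ENDPOINT + "?" + urlencode(params)

# A GET request to this URL (with a real key) returns matching assets.
url = build_poly_query("YOUR_API_KEY", "duck")
```

From there a dev would fetch the asset’s OBJ/GLTF files and drop them straight into a Unity or web scene.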
VR has been the subject of a ton of chatter, but seems to have little momentum. A recent Techcrunch article dove into some reasons why VR seems stuck, and some ways to get us to the inflection point and into a whole new interaction paradigm.
Get the phone off your forehead. Today’s phones were not designed to be stuck to your forehead (battery life and weight distribution are off). It’s an easy challenge to overcome in an industry obsessed with the right customer experience, and as VR tech advances, it becomes more affordable. Advances in dedicated VR headsets are also propelling them forward (Tsunami VR reduced the time to replace a pneumatic drill component from a 2-hour process to only 15 minutes).
Create more VR gaming studios. Bungie Studio’s Halo was the reason to own the Xbox, and a tipping point will come when a studio creates the must-have VR game. Unfortunately, translating console games to VR is not going to cut it – dedicated VR gaming studios need to explore the medium and create games that truly take advantage of it.
All roads lead to Hollywood. While there are a handful of VR cinematic experiences, there aren’t enough on the market just yet. It’s hard to get past Hollywood execs who immediately ask “Where are the headsets?”. Ultimately, we need hardware people want to put on their heads as enthusiastically as putting their smartphone in their pocket.
The inability for Americans of all types to come to the table and resolve differences has been a powerful talking point in recent years. Enter Kailo, a web platform for visualizing and vetting arguments. The way it works is this: a user creates an assertion (such as “Eating Meat is Wrong”), and other users submit arguments for and against the assertion, listed as Pros and Cons. Still other users can vote on which arguments are the most powerful, and even create sub-arguments on a particular point.
This would all be rather confusing to follow if it weren’t for the unique way Kailo visualizes the arguments. A “Discussion Topology” is presented to the user in real time, giving them a birds-eye view of the conversation. Users can then click or tap to see which arguments are getting the most votes, and which are branching off. A handy interactive hierarchy sits at the top of each argument page to help clarify the main arguments for and against.
Kailo hosts debates ranging from the political (“Democrats should not cooperate with the Donald Trump Presidency”), to the philosophical (“Human life should be valued above animal life”), and even to entertainment (“The entire GOT cast is a secret Targaryen”). Moderators keep tabs on joke posts to keep the conversation on track. It remains to be seen if Kailo can help solve an issue as seemingly unsolvable as resolving American political differences, but in the meantime maybe it can help us finally figure out whether Tyrion is a secret Targaryen.
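Kailo’s internals aren’t public, but the structure it describes – an assertion with pro/con arguments, votes, and sub-arguments – is essentially a tree. A minimal sketch of that data model (names and examples are illustrative, not Kailo’s actual code):

```python
class Argument:
    """One node in the discussion tree: a pro or con with votes and sub-arguments."""
    def __init__(self, text, stance):
        self.text = text
        self.stance = stance        # "pro" or "con" relative to the parent
        self.votes = 0
        self.children = []          # sub-arguments branching off this point

    def add_child(self, text, stance):
        child = Argument(text, stance)
        self.children.append(child)
        return child

    def strongest(self):
        """Most-voted direct reply - what the topology view would highlight."""
        return max(self.children, key=lambda a: a.votes, default=None)

root = Argument("Eating meat is wrong", "assertion")
pro = root.add_child("Factory farming causes suffering", "pro")
con = root.add_child("Meat is part of a natural diet", "con")
pro.votes, con.votes = 12, 7
```

Rendering that tree with vote counts as node weights is roughly what the “Discussion Topology” gives users at a glance.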
VR as a tool for education, especially medical education, has been talked about for a long time. The new Unity-based Think F.A.S.T. VR app puts us one step closer to VR integration with medical education. The acronym stands for Facial drooping, Arm weakness, Speech difficulties, and Time, which maps to the key areas for diagnosing a stroke. The app features a spiced-up VR doctor’s office with a patient at its center. A little sci-fi magic is added with a floating eye that walks the user through the process. The system can detect when the user is moving around the room and track hand movements, and can even detect voice responses and answer questions.
Admittedly more of a tech demo than a finalized product, Think F.A.S.T. debuted this week at a medical conference.
The Stranger Things iOS game hits the App Store one month before the show returns to Netflix! Made by BonusXP, the media tie-in game is an action-adventure inspired by old Super Nintendo titles. Unlike a lot of media tie-in games, however, this one is getting good reviews. The game is free to play, has no in-app purchases, and collecting all the video cassettes unlocks an exclusive trailer. There are also a few cute references for fans of the series (some NPCs will even say “Justice for Barb!”).
Why it’s hot:
Shows what a great product and attention to detail can bring to a well-loved show.
Giphy Embed is a new plugin that lets users drag and drop gifs onto a live site. The plugin is activated by the website and displays an ‘add gif’ button at the bottom of the user’s viewport. Users can click away to add random gifs to the page, and drag them around to customize the page to their liking. Developers can predefine which gifs to use, even using their own custom sticker packs (so no need for the brand to worry about off-brand imagery on their site). Users can then share their creations.
It’s no secret that Giphy sees the future of communication as visual, and Giphy’s Director of Product explained that they see Giphy Embed as a visual commentary tool that can spur engagement on any site. Try it live on Thought Catalog and Quote Catalog.
Why it’s hot:
Increase user engagement and time on site by letting users add fun stickers that are predefined or customized to the brand.
In the tradition of ‘what is old is new again’, you too can make your favorite website look like your old GeoCities gif page.
Last month, polling app Polly racked up 20 million users and climbed to #13 among social apps in the US. Polly is simple – create a question along with a few predefined answers. The fact that answers are defined by the poll creator solves a problem common to polling apps, as open-ended answers have often led to cyberbullying. Users can send the poll through the web, the mobile app, and Snapchat. Naturally, the last method led to the greatest amount of growth, since users do not need the app to answer questions as long as they answer through Snapchat. So far, Polly has been a hit among teens, who can put a simple poll together quickly in a clean interface and share it with friends. But its key to growth could also be its undoing: if Snapchat or Instagram decides to create its own polling features (or for really any other reason), Polly’s distribution disappears. Currently, Polly has a light feature set, but plans for in-app messaging and other goodies look like they’re on the horizon. We’ll have to see if they can keep the interface simple and clean enough to keep teens happy.
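The anti-cyberbullying design boils down to one constraint: votes can only land on the creator’s predefined answers. A toy sketch of that model (illustrative only, not Polly’s code):

```python
class Poll:
    """A question with creator-defined answers only - no free-text responses."""
    def __init__(self, question, answers):
        self.question = question
        self.counts = {a: 0 for a in answers}

    def vote(self, answer):
        # Open-ended replies are rejected outright, which is the whole point.
        if answer not in self.counts:
            raise ValueError("Answer must be one of the predefined choices")
        self.counts[answer] += 1

poll = Poll("Pizza or tacos?", ["Pizza", "Tacos"])
poll.vote("Tacos")
```

Because there is no free-text field, there is simply nowhere for an abusive reply to go.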
Back in June, Facebook researchers published a paper about an experiment with AI chatbots. The original idea was to see how chatbots would perform in a negotiation game against each other, not a human negotiator. The Atlantic picked up on a strange line in the research noting that the bot-to-bot conversation “led to divergence from human language”. At first pass, it seems the chatbots were spouting gibberish, but on further investigation the researchers found the chatbots had developed their own language for negotiation. Naturally, the project was shut down before the impending robot uprising, and the sensationalist titles practically wrote themselves.
While the reality is not quite as exciting as the titles would lead you to believe, the experiment raises the question of whether chatbots could benefit from a language of their own. While natural language processing helps them better understand our needs, there is a tremendous amount of info in interactive maps, geolocation data, and various web services. Today, there are predefined web services chatbots could use, but the next step seems to be creating a more standard language for bots and services to interact (think API for AI).
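What might that “API for AI” look like? One plausible (entirely hypothetical) sketch: instead of free-form natural language, bots exchange structured messages carrying an intent, its entities, and a confidence score – precise, lossless, and machine-parseable:

```python
import json

def make_bot_message(intent, entities, confidence):
    """Serialize a bot-to-bot request in a shared, structured format
    rather than free-form natural language."""
    return json.dumps({
        "intent": intent,
        "entities": entities,
        "confidence": confidence,
    }, sort_keys=True)

msg = make_bot_message("book_table", {"restaurant": "Panera", "party_size": 2}, 0.93)
parsed = json.loads(msg)
```

A receiving service never has to guess what the sender meant – the ambiguity the Facebook bots were negotiating away is gone by construction.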
In InVision’s latest design article, the writer takes a dive into what started as an unlikely dance venture for people with Parkinson’s disease and is now becoming an AR revolution with Google Glass. Dance for PD started as a therapeutic dance class (yes, dance) for people with Parkinson’s. The class focused on moving through simple dance moves as a way to ease out of the freezing episodes that Parkinson’s brings on. When the group won an award from Google to bring the dance instruction to AR, some really unique design challenges cropped up. Some were technological – Google Glass could only play video for a few minutes before overheating and shutting down. Most notably, the dance instruction AR prototype was far better received by patients than normal PD exercise instruction in the same format. The users ultimately felt more engaged performing dance than engaging in exercises seen as more clinical.
The design team behind Oscar started and ended their process fixated on the user experience. Many healthcare providers still send new customers stacks of paperwork for onboarding, and Oscar jumped wholly into online questionnaires, tutorials, and an app. Over the iterative lifecycle, here are a few key learnings they found:
Like enterprise apps, healthcare apps should be seen as consumer products (people don’t shed their skin and become mindless patients).
87.8% of people who avoid early care do so because of bureaucracy, insurance issues, and price. Telemedicine is a glimmer of hope – connecting doctors directly with patients.
With healthcare apps, less is truly more. People tend to use healthcare apps rarely and often forget about them in between uses. The app needs to be more intuitive than innovative. Make it SIMPLE.
Test early and often using prototypes to course correct along the way.
The team succeeded in limiting navigation buttons to give users a more guided approach (a forcing function).
They added CTAs for calling their doctor throughout the app at key touchpoints. This way, users understood WHEN they should be seeking help.
Getting users to spend LESS time on the app (meaning, they got what they needed and got off) became the goal. They needed to define success differently than other kinds of apps.
Sesame Workshop and IBM announced last year that they would work together on a line of cognitive apps, games, and educational toys. The first in that line of apps just completed its first pilot trial, introducing the app to 150 students in Georgia’s Gwinnett County Public Schools. The app incorporates a lot of the learnings on early child education at Sesame Workshop and is powered by IBM’s Watson, with it all dressed up with Sesame Street characters.
The app was tested in live classrooms on tablets, and educational videos and word games were used to enhance each student’s vocabulary. Teachers and parents could monitor each student’s development in real time and adjust lessons for each child based on need. The AI also personalized the app for each child using adaptive assessments.
The first large-scale test was a big success over its two-week run. Students acquired new vocabulary as a result of the app, which translated to the real world – students called spiders “arachnids” and began recognizing “camouflaged” animals in out-of-app discussions.
Framer, a prototyping tool that started out as a web app, finally adds a design suite of its own. Framer carved out its niche in the battle for the best prototyping solution by allowing designers to get deeper into animations, transitions, and interactivity. The cost was that designers needed to get down and dirty with CoffeeScript, a tall order for designers who may or may not be familiar with HTML or CSS. This also invariably sparked a conversation about whether designers (UX or otherwise) should learn to code. Framer finally started to bridge that gap, first by offering Sketch/Photoshop integration to handle design, along with tons of tutorial videos to get designers started. Now it provides its heavily Sketch-inspired design workspace to let designers handle the visuals before flipping to Code to define interactivity.
Framer also teased options for designing responsiveness, allowing designers to set rules for how elements on a page change as the screen size changes. Framer has defined itself by focusing on really in-depth mobile (especially native) prototyping. It’ll be really interesting to see it evolve further along with competitors such as Axure, Craft, and the impending release of Adobe XD.
This week at Google I/O 2017, their annual developer conference, Google Assistant stole the show with its huge push in conversational interfaces and focus on context. Here are some of the cooler takeaways from the announcement:
Google Assistant focuses on a continued conversation with the assistant, picking up context along the way from verbal conversation, typed input, and images picked up by the camera.
Google Assistant will pass off to branded chat bots to complete transactions seamlessly within the app. You can tell GA what you want for lunch, then be greeted by the Panera chat bot, which then completes your order. This presents a really cool way for a brand to build personality through its chat bot.
Google Assistant leverages Google Lens (and a lot of their AR learnings) to incorporate the phone’s camera. Users can point at text in languages they do not understand in the real world, and get an overlay on their phones translating it. Pointing the camera at a router’s information takes the data and makes it actionable.
Google Assistant also uses Google Lens to allow users to point their cameras at a venue and immediately get information such as overall reviews, expense, and which friends have visited.
Pinterest added a new feature this week to its mobile app which detects and picks apart what is in an image you capture on your phone’s camera. Called Visual Guides, the feature populates a few tags for things that exist in the image (even some more abstract ones like “design”). You can then tap a tag to see related pins.
Pinterest has worked hard to define itself against Twitter, Facebook, and Snap, and this is another feature that helps. Pinterest has set itself up as a way to look at one idea/recipe/thing and then find multiple tangentially related ideas/recipes/things. The less friction users experience in discovering new ideas, the easier it is for them to go deeper into the service. The hope with Visual Guides is to create a new kind of user behavior that lets users spawn ideas from their day-to-day or anything that catches their eye.
Google Classroom became available for free to anyone with a personal Google account. Originally, Classroom was only available to users with Google education accounts. Users can easily set up and manage a course with a suite of tools for grading, assignments, etc.
The interest extends to people outside a traditional classroom, such as skill coaches, hobby instructors, and anyone keeping track of progress in a group. This includes anything from Girl Scouts completing robotics badges to Dungeon Masters introducing DnD.
Google Classroom is just a part of the rush for Edutech solutions. As archaic solutions (such as Blackboard) show their age, startups and established companies alike are vying for the education space. Classroom also opens a new source of ad revenue, now that it is no longer tied to an education account (since Google does not use ads on those accounts).
Elon Musk has been working on Neuralink, a human-computer brain interface project, between his other small ventures, Tesla and SpaceX. Neuralink may be the most ambitious of the three, however, as it aims to create a much more intimate connection between our brains and computers. Musk will of course take on the role of CEO at Neuralink, making him CEO of all three companies.
The hope behind Neuralink is to increase the efficiency of communication between two or more people with the help of cloud-based AI. Verbal or other types of unaided communication are always subject to misinterpretation, or are otherwise “lossy”, says Musk. Neuralink would allow a better transmission of ideas without (hopefully) the limitations of language. It could also be used to look up information similar to how you might on your phone, except the information is available within your mind’s eye. The technology would likely be refined for better delivery methods, up to the point of not just knowing but understanding an idea instantly. Musk maintains that this is not as big a jump as it seems – people already rely on their phones to search for any kind of information they please, and often feel bad when they are separated from their phone for even a day.
Wait But Why does a deeper dive into Neuralink’s potential, includes cute scribbles, and refers to the tech as a “wizard hat” (which it pretty much is). Musk estimates that the technology will be available in 8-10 years, but the company will focus on therapeutic applications for those with disabilities to start.
An 11th-grade designer, a 21-year-old startup veteran, and an astrophysics post-doc/NASA Hubble Fellow walking into a pre-seed funding pitch sounds like the start of a bad joke, but it isn’t for the team behind TagDat.
TagDat is the recently funded and launched app meant to go toe-to-toe with Yelp as a local reviews app for restaurants. One of the key differentiators is that users can tag restaurants with emojis like “authentic”, “spicy”, and “cool” to give others a snapshot of what the restaurant is all about. It also uses a bright and playful color palette more like Foursquare’s than Yelp’s.
What’s more interesting than the app is the team itself. The app is the brainchild of 11th-grade designer Buffy Li (cheered on by her cat with 27,400 followers, Bailey). This is (only?) her second mobile app – her first was Haorizi, a Chinese social network for women. Buffy teamed up with Wilson Li (no relation), a University of California student who brought his previous startup knowledge, and Zheng Cai (31), an astrophysics post-doc and NASA Hubble Fellow who brought his big data experience.
Why it’s hot:
Another great example of simplification and a cool new use for emojis.
This week, Amazon debuted AmazonFresh Pickup, “drive-up groceries delivered to your trunk.” The beta launched on Tuesday, allowing testers to order groceries online, set a time slot for pickup, and drive to an AmazonFresh Pickup location where their car is loaded up with groceries in a few minutes. Groceries can be picked up as soon as 15 minutes after ordering, and there is no minimum order. The new offering comes free with an Amazon Prime subscription. For now, the beta is only available to Amazon employees (and then only at two Seattle locations).
This announcement comes three months after Amazon Go, the grocery store without a checkout, debuted. Both represent a rethinking and retooling of grocery stores by Amazon, and a push into the brick-and-mortar space more generally. Both seek to save the user time: one by cutting out the checkout process (and lines!) and the other by taking in-store shopping out altogether. Amazon Go is also currently in beta.
According to the New York Times, one of Amazon’s next forays may be into stores that sell furniture and home appliances, using AR or VR to see how couches, tables, etc. will look in your home.
Why it’s hot:
Technology and strategic analysis leveraged to save shoppers time and long lines.
Will probably save you the 1-2 hours per week spent grocery shopping.
Floyd County Productions, the animation company behind Archer, will be releasing an augmented reality app alongside the release of Archer’s eighth season. The app uses the phone’s camera to pick up on certain images in the show (or real world objects such as billboards) to give users clues to a mystery in the app that is separate but related to the show. The goal was to give fans a way to explore more of the show’s world without disrupting the experience for casual viewers with no knowledge of the app. Users are rewarded with secret goodies, leading to a nice feedback loop to come back next episode.
The app grew out of mixed-media games and puzzles the Archer team has experimented with before: in season 6, a hex code was hidden in an episode which, when decoded, sent users to a microsite to explore the psyche of one of its characters.
Why it’s hot:
Uses tech to give fans a new way to interact with a TV show, an otherwise passive medium.
Nice evolution of other games and goodies Floyd County has experimented with before.