kit kat gives delayed fliers a proverbial break…

In both a kind gesture and a great demonstration of its stated brand “purpose”, Kit Kat created a vending machine at São Paulo airport in Brazil that reads travelers’ boarding passes and dispenses a Kit Kat if their flight is delayed.
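
Nestlé hasn’t said how the machine works under the hood, but the flow it describes is simple: read the boarding pass, look up the flight’s status, dispense if it’s delayed. A rough sketch of that logic – the barcode parsing, the status lookup, and the delay threshold are all invented here:

```python
# Hypothetical sketch of the vending machine's decision logic.
# parse_boarding_pass() and get_flight_status() stand in for a real
# barcode scanner and a real flight-status feed, neither of which is public.

def parse_boarding_pass(barcode: str) -> str:
    """Pretend the flight designator sits in a known field of the barcode."""
    return barcode.split("|")[1]          # invented delimiter format

def get_flight_status(flight_number: str) -> int:
    """Return the current delay in minutes (stubbed; a real machine would call an API)."""
    return 45

def should_dispense(barcode: str, min_delay_minutes: int = 30) -> bool:
    flight = parse_boarding_pass(barcode)
    return get_flight_status(flight) >= min_delay_minutes

if should_dispense("DOE/JOHN|LA3350|GRU|JFK"):
    print("Flight delayed – have a break.")
```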

Why It’s Hot:
It’s always great to see a brand use its marketing dollars toward something that isn’t completely and utterly self-serving. Instead of an ad, Kit Kat made something that might actually do something for people – a true relationship builder. Plus, it did so to address a notoriously painful experience all of us have had.

[Source]

google’s magical VR doodle…

If you missed it, Google released its first 360-degree video doodle yesterday – an homage to French silent filmmaker and artist Georges Méliès, commemorating his film “The Conquest of the Pole”.

Why It’s Hot:

When even Google Doodles start to show up in 360-degree video, you know the format is bleeding into the mainstream. Storytelling in 2018 isn’t just a passive experience; it’s an interactive one that immerses the viewer in the story. As we approach video projects in the future, we should be designing for the experience, not just a two-dimensional stream.

[Source]

the walls are alive…


The latest innovation to come out of the Disney Research / Carnegie Mellon partnership would basically move sensors into the very structures around us to enable interaction.

As they say, they’ve created Smart Walls that “function as a gigantic trackpad, sensing a user and their movements. Rather than using a camera to locate a user and track their movement, as other systems do, this system relies on a grid of “large electrodes” covered in a layer of water-based paint that conducts electricity. 

The result: a wall so smart, it could play a game of vertical Twister with you, and also tell if you were cheating. It can even sense if you’re holding a hair dryer really close to it through electromagnetic resonance…users could play video games by using different poses to control them, change the channel on their TV with a wave of their arm, or slap the wall directly to turn off the lights, no need for light switches.”
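
Disney and CMU haven’t released code, but the electrode-grid idea maps to a familiar pattern: read a capacitance-like value per electrode, compare it against a calibrated baseline, and treat the cell with the biggest change as the touch (or hover) location. A toy sketch – grid size, values, and threshold are all invented:

```python
# Toy sketch: locate a touch on a wall-sized electrode grid.
# A real system would also filter noise and classify the electromagnetic
# signatures of nearby appliances (the hair-dryer trick).

ROWS, COLS = 8, 12                                   # invented grid size
baseline = [[100.0] * COLS for _ in range(ROWS)]     # calibrated "no touch" values

def locate_touch(readings, threshold=15.0):
    """Return (row, col) of the strongest change vs. baseline, or None."""
    best, best_cell = 0.0, None
    for r in range(ROWS):
        for c in range(COLS):
            delta = readings[r][c] - baseline[r][c]
            if delta > best:
                best, best_cell = delta, (r, c)
    return best_cell if best >= threshold else None

frame = [[100.0] * COLS for _ in range(ROWS)]
frame[3][7] = 132.0           # a hand near electrode (3, 7)
print(locate_touch(frame))    # -> (3, 7)
```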

Why It’s Hot:

Previously, we’ve relied on hardware to do the kind of sensing, responding, and controlling that the Smart Wall concept would. Things like Microsoft Kinect, or controlling our lights through Philips Hue on our smartphone. Having this capability fade into the background could basically allow us to control our spaces as if by magic.

[Futurism]

it’s gotta be the shoes…


And if Nike’s vision evolves, it could, in fact, be the shoes…that are a gateway to exploring more of what we might like.

They’ve already dabbled, and we’ve already discussed connected jerseys, which bring you content specific to the team and player at the tap of an NFC-enabled phone. Nike’s latest “AF1 NikeConnect QS NYC” sneakers “will come with an NFC (near-field communication) chip embedded under a NikeConnect logo on the heel of the sneaker. By using the NikeConnect app on a phone, you can tap your phone on the sensor and gain access to exclusive content and Nike events in New York City, as well as an opportunity to purchase other popular Nike kicks.”
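
Nike hasn’t documented what’s actually on the chip, but NFC tags typically carry a unique ID plus an NDEF record such as a URL, which the app then maps to content. A purely hypothetical version of the phone-side handling (the domain and payload format are made up):

```python
# Hypothetical: what a NikeConnect-style app might do with a scanned tag.
# The URL scheme and tag ID are invented for illustration only.

def handle_scan(tag_uid: str, ndef_uri: str) -> str:
    if not ndef_uri.startswith("https://connect.nike.example/"):
        return "Not a NikeConnect tag"
    # the tag's URI deep-links into the app, carrying the tag's unique ID
    return f"Requesting exclusive AF1 content for tag {tag_uid} via {ndef_uri}"

print(handle_scan("04:A2:2B:1C", "https://connect.nike.example/af1?tag=04A22B1C"))
```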

Why it’s hot:

What Nike is doing is an interesting approach in a world where we’re overwhelmed with stuff and information. By making the things we buy portals to more of what we might like, it seems like an attempt to turn any product into an easy gateway to discovery, circumventing all the noise involved in finding things on our own across the vast and wide internet. And if all NikeConnect products are linked to your personal account, Nike could conceivably provide you with even better inspiration based on the sum total of your “Nike closet”.

[Source]

adidas makes 30,000 highlight reels…


In advance of this year’s Boston Marathon, Adidas says it’s planning to capture and create personal highlight videos for all 30,000 runners in the race.

According to the plan: “Adidas will deliver videos to the 30,000 runners taking part in the marathon within a few hours of them completing the race. Along with the runner’s personal highlights, the Here to Create Legend videos will also feature general race day footage and music.”

How it works: “RFID tags in the runners’ race bibs and street mats that emit ultra-high frequency radio waves will provide Adidas with data on each runner’s performance. Using this technology, the sportswear brand is able to capture all the footage for the videos with just seven cameras and a team of 20 people spread across the 26.2-mile course.”
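
Adidas hasn’t shared its pipeline, but the setup it describes – bib RFID pings at timing mats, a handful of fixed cameras – suggests straightforward matching: when your bib crosses a mat near camera N at time T, cut a few seconds of camera N’s footage around T. A sketch with invented mats, cameras, and timestamps:

```python
from datetime import datetime, timedelta

# Invented data model: each timing mat is paired with the nearest camera.
MAT_TO_CAMERA = {"5K": "cam_1", "halfway": "cam_3", "finish": "cam_7"}

def clip_windows(bib_crossings, pad_seconds=5):
    """bib_crossings: list of (mat_id, timestamp) for one runner.
    Returns (camera, start, end) windows to cut into the highlight reel."""
    windows = []
    for mat_id, ts in bib_crossings:
        cam = MAT_TO_CAMERA[mat_id]
        pad = timedelta(seconds=pad_seconds)
        windows.append((cam, ts - pad, ts + pad))
    return windows

runner = [("5K", datetime(2018, 4, 16, 10, 12, 30)),
          ("finish", datetime(2018, 4, 16, 13, 2, 5))]
for cam, start, end in clip_windows(runner):
    print(f"cut {cam} from {start.time()} to {end.time()}")
```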

Why It’s Hot:

We often talk about the trend of ultra-personalized product or service experiences, but marketing hasn’t necessarily been a major part of that conversation. As personalization becomes people’s baseline expectation of brands, marketing will have to adapt.

Plus, in another time, Adidas would’ve just made a nice commercial touting its 30 years of race sponsorship. But instead, it decided to devote time, money, and effort to adding something memorable to the experience of the athletes running the race it’s sponsoring.

[Source]

AI helps deliver JFK’s words from beyond the grave…

On a fateful day in November of 1963, JFK never got to make his “Trade Mart” speech in Dallas. But thanks to the UK’s The Times and friends, we now have a sense of what that speech would have sounded like that day. Using the speech’s text and AI, The Times:

“Collected 831 analog recordings of the president’s previous speeches and interviews, removing noise and crosstalk through audio processing as well as using spectrum analysis tools to enhance the acoustic environment. Each audio file was then transferred into the AI system, where they used methods such as deep learning to understand the president’s unique tone and quirks expressed in his speech. In the end, the sound engineers took 116,777 sound units from the 831 clips to create the final audio.”
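
What’s described above is essentially unit selection: chop the archive into small sound units, then stitch together the sequence of units that best fits the target speech. The Times’ actual system is far more sophisticated; here’s a deliberately tiny, greedy version with made-up units:

```python
# Toy unit-selection sketch: for each target phoneme, pick the archive unit
# with that phoneme whose acoustic features best match the previous unit.
# Real systems optimize a joint target + concatenation cost over the whole
# utterance; this greedy version just illustrates the idea.

ARCHIVE = [  # (phoneme, pitch) pairs standing in for 116,777 real units
    ("AE", 110), ("AE", 128), ("S", 0), ("K", 0), ("N", 95), ("AA", 120),
]

def synthesize(target_phonemes, prev_pitch=115):
    chosen = []
    for ph in target_phonemes:
        candidates = [u for u in ARCHIVE if u[0] == ph]
        # concatenation cost: prefer a unit whose pitch is close to the last one
        unit = min(candidates, key=lambda u: abs(u[1] - prev_pitch))
        chosen.append(unit)
        prev_pitch = unit[1] or prev_pitch
    return chosen

print(synthesize(["AE", "S", "K"]))   # roughly, "ask"
```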

Why It’s Hot:

It seems we’re creating a world where anyone’s voice could be convincingly imitated. In an instance like this, it’s great – hearing JFK’s words spoken, especially the sentiment in the clip above, is a joy for anyone who cares about history and this country, particularly given the current climate. But what if the likeness weren’t recreated to deliver a speech written by him during his time, but rather something he never actually said or intended to say? Brings a whole new meaning to “fake news”.

[Listen to the full 22 minute version straight from the Source]

gesture control comes to amazon drones…

Amazon has been testing drones for deliveries of 30 minutes or less for a couple of years now. We’ve seen its patents for other drone-related ideas, but the latest is one describing drones that would respond to both gestures and voice commands. In effect, Amazon is trying to make the drones less like mute flying machines and more human-friendly – so if a drone is headed toward the wrong spot, you could wave your hands to flag its error, or tell it where to set your item down for final delivery. As described in the source article:

Depending on a person’s gestures — a welcoming thumbs-up, shouting or frantic arm waving — the drone can adjust its behavior, according to the patent. As described in the patent, the machine could release the package it’s carrying, change its flight path to avoid crashing, ask humans a question or abort the delivery.

Among several illustrations in the design, a person is shown outside a home, flapping his arms in what Amazon describes as an “unwelcoming manner,” to showcase an example of someone shooing away a drone flying overhead. A voice bubble comes out of the man’s mouth, depicting possible voice commands to the incoming machine.

“The human recipient and/or the other humans can communicate with the vehicle using human gestures to aid the vehicle along its path to the delivery location,” Amazon’s patent states.
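
Strip away the patent language and this is a classifier feeding a lookup table: recognize a cue, map it to a drone behavior, and fall back to asking for clarification when unsure. A hypothetical mapping layer (the gesture recognition itself is assumed to happen upstream):

```python
# Hypothetical mapping from a recognized gesture/voice cue to a drone action,
# loosely following the behaviors described in Amazon's patent.

ACTIONS = {
    "thumbs_up":      "proceed_with_delivery",
    "frantic_waving": "abort_delivery",
    "pointing":       "adjust_drop_point_toward_gesture",
    "shouting":       "hover_and_ask_clarifying_question",
}

def respond_to(cue: str, confidence: float) -> str:
    if confidence < 0.8:                      # don't act on uncertain cues
        return "hover_and_ask_clarifying_question"
    return ACTIONS.get(cue, "maintain_course")

print(respond_to("frantic_waving", 0.93))     # -> abort_delivery
```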

Why it’s hot:

This adds a new layer to the basic idea of small aerial robots dropping items you order out of the air. The more Amazon can humanize the robots, the more they mimic actual delivery people. And given the feedback we’ve seen on social about Amazon’s own human delivery service, this could be a major improvement.

[Source]

sell my old clothes, i’m off to the cloud…

The latest episode of life imitating art is a Y Combinator startup whose proposition is essentially uploading your brain to the cloud. Per the source: “Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.”

Why It’s Hot:

What’s not hot is that you have to die in order to do it. What’s interesting is the idea of treating our consciousness almost like iPhone storage – that reincarnation by technology could be possible.

[Source]

stay perfectly hydrated with gatorade gx…

Gatorade introduced a prototype product it’s calling “Gatorade Gx”. It’s a combination of a patch you wear while working out, training, or whatever you call your physical/athletic activity, and a connected water bottle. It monitors how you’re sweating as you train, “capturing fluid, electrolyte, and sodium loss”. Based on this, it lets you know when you should drink more, and whether what you drink should be something specific to your unique needs – that something specific being a “Pod” with a certain formula of the electrolytes or nutrients you’re losing as you sweat (your “electrolyte and carbohydrate needs”).
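
Gatorade hasn’t published the logic, but the flow it describes – measure fluid and sodium loss from the patch, then point you to a pod – boils down to a few thresholds. A sketch with invented numbers and pod names:

```python
# Invented thresholds and pod names, just to illustrate the patch -> pod flow.

def recommend_pod(fluid_loss_l: float, sodium_loss_mg: float, duration_h: float) -> str:
    sweat_rate = fluid_loss_l / duration_h            # litres per hour
    if sodium_loss_mg / duration_h > 800:             # salty sweater
        return "high-sodium electrolyte pod"
    if sweat_rate > 1.2:                              # heavy sweater
        return "high-fluid + carbohydrate pod"
    return "standard electrolyte pod"

print(recommend_pod(fluid_loss_l=2.1, sodium_loss_mg=1900, duration_h=1.5))
```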

Why it’s hot:

As we see more uses of technologies like AI, biometrics, and connected sensors, products and services are becoming ultra personal. This is a personal hydration coach, filling a knowledge gap that otherwise only cues from your body might indicate. We should be keeping an eye on how brands are taking the old idea of “personalization” to its truest form, creating new ways to give people more than just a basic product or service.

[Source]

google AI predicts heart attacks by scanning your eye…

This week, the geniuses at Google and its “health-tech subsidiary” Verily announced AI that, using just a scan of your eye, can predict your risk of a major cardiac event with roughly the same accuracy as the currently accepted method.

They have created an algorithm that analyzes the back of your eye for important predictors of cardiovascular health “including age, blood pressure, and whether or not [you] smoke” to assess your risk.

As explained via The Verge:

“To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.”
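
That “70 percent of the time” figure is a pairwise comparison you can compute yourself: given a risk score per patient, count how often the patient who actually had the event gets the higher score (which is equivalent to the model’s ROC AUC). A minimal sketch with invented scores and labels:

```python
def pairwise_accuracy(scores, had_event):
    """Fraction of (event, non-event) pairs where the event patient scores higher.
    Equivalent to the ROC AUC of the risk model."""
    pos = [s for s, y in zip(scores, had_event) if y]
    neg = [s for s, y in zip(scores, had_event) if not y]
    wins = sum(1 for p in pos for n in neg if p > n)
    ties = sum(0.5 for p in pos for n in neg if p == n)
    return (wins + ties) / (len(pos) * len(neg))

# toy model outputs for 6 patients, 3 of whom had a cardiac event within 5 years
scores    = [0.81, 0.40, 0.65, 0.30, 0.72, 0.55]
had_event = [True, False, True, False, True, False]
print(round(pairwise_accuracy(scores, had_event), 2))   # 1.0 on this toy data
```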

Why It’s Hot:

This type of application of AI can help doctors quickly know what to look into, and shows how AI could help them spend less time diagnosing, and more time treating. It’s a long way from being completely flawless right now, but in the future, we might see an AI-powered robot instead of a nurse before we see the doctor.

[Source]

AR coming to eBay…


Details are a bit scant, but eBay announced this week it will soon be integrating AR functionality into its app.

Per Fortune, “The San Jose, California-based marketplace said it’s working on an AR kit that, for example, will let car enthusiasts see how the images of new wheels would look on their vehicles before making a purchase. Another feature will help sellers select the correct box size for an item by overlaying an image of the box on the merchandise.”
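
eBay hasn’t detailed the box-size feature, but the underlying decision is easy to picture: compare the item’s dimensions against a set of standard boxes and pick the smallest one it fits in – the AR overlay just visualizes that choice. A sketch with invented box sizes:

```python
from itertools import permutations

BOXES = {                      # invented standard boxes, dimensions in cm
    "small":  (20, 15, 10),
    "medium": (35, 25, 15),
    "large":  (50, 40, 30),
}

def fits(item, box):
    """True if the item fits in the box in some orientation."""
    return any(all(i <= b for i, b in zip(orientation, box))
               for orientation in permutations(item))

def smallest_box(item_dims):
    by_volume = sorted(BOXES.items(), key=lambda kv: kv[1][0] * kv[1][1] * kv[1][2])
    for name, dims in by_volume:
        if fits(item_dims, dims):
            return name
    return None

print(smallest_box((33, 12, 14)))   # -> "medium"
```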

Why It’s Hot:

While this isn’t the newest trick on the block (IKEA, for example, has used AR for years), eBay is a massive marketplace where millions of people globally buy and sell things. With physical retail integrating technology to fight back against the convenience of e-commerce, this is an example of e-commerce trying to bring elements of physical retail to the digital world. One of the big disadvantages of e-commerce is that usually you’ll only see a bunch of images of a product, which in eBay’s case may or may not be of the actual item you’re buying. The ability to see what something looks like in virtual three dimensions is a major new advantage.

Also to note this week – eBay hired former Twitter data scientist Jan Pedersen to lead its AI efforts.

[Source]

Ally’s attempt to hijack Super Bowl ads…

Instead of spending money on an ad, Ally Bank created an augmented reality game for this year’s Super Bowl. As explained in the video above, while other brands spent big money on the big game, Ally’s “Big Save” app let users compete to see who could grab the most virtual money after identifying what they were saving money toward. The game allegedly only activated during commercial breaks, and users would tap and drag AR dollar bills into a small AR piggy bank. The user with the highest score / most virtual money saved got a real cash prize to be used toward their real-life savings goal.

Why It’s Hot:

Even without knowing how successful it was, Ally’s different approach to the Super Bowl as a marketing moment is interesting. On one hand, it’s nice to see the bank trying to do some good with its budget, and the game gave it specific insight into what users are saving for – insight it could use later for marketing or to create products addressing those goals. On the other, it seems a very noisy time to try to get people to ignore friends, family, and entertainment to play a game, albeit one with a nice prize. Either way, you can appreciate the attempt to hijack people’s attention during the commercial breaks coveted by other marketers.

[Source]

it’s just an ad…BUT WHY IS IT JUST AN AD?!?!

Amazon revealed its Alexa Super Bowl spot this week, and as you can see above, the premise is – imagine what it would be like if you were speaking to various celebrities instead of what at this point is a borderline monotone, virtually personality-less Alexa. There’s the anthemic 90-second version above, plus 30-second editions focused on specific personalities like you see below.

Why It’s Hot:

In a world where we’ll inevitably rely on speaking to digital assistants, why wouldn’t Amazon, Google, or any others give you the ability to choose your assistant’s voice and personality? And, why didn’t Amazon do it as part of this campaign? We’ve seen it in concept videos, but is this more than just an ad? Having GPS directions read to you by Arnold Schwarzenegger is one thing, but a true assistant you can interact with is a much different scenario. When can we expect this eminently possible future?

[Source]

giving “use your brain” new meaning…

One of the most progressive concepts we saw coming out of CES was Nissan’s “Brain to Vehicle” technology, in which an autonomous car would read your brainwaves to sense what you were thinking and respond. For example, if the driver is manually driving, the car could sense that he or she will be turning, and automatically adjust for the perfect turn as he or she actually turns the vehicle.

Now, this week, Japanese company Cyberdyne got FDA approval for an exoskeleton that helps people who can’t walk to walk, by sensing the brain’s signals telling their legs to move. To quote: “HAL involves sensors that attach to the users’ legs, which detect bioelectric signals sent from the brain to the muscles, triggering the exoskeleton to move. When people use the technology, it is the individual whose nervous system is controlling the exoskeleton, not some independent control. Nonetheless, it is able to take the intention of the users and magnify their strength by a factor of 10 — supporting both its weight and that of the wearer while they move around.”
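
Cyberdyne’s control system is proprietary, but the quoted description – read the bioelectric signal, amplify the wearer’s intent by roughly a factor of 10 – maps to a simple proportional-assist loop. A toy version with invented signal values, gain, and limits:

```python
# Toy proportional-assist loop: read a bioelectric (EMG-like) signal from the
# leg and command assist torque proportional to it, "magnifying" the wearer's
# own effort. All numbers here are invented for illustration.

NOISE_FLOOR = 0.05      # ignore signals below this (no intent to move)
GAIN = 10.0             # "magnify their strength by a factor of 10"
MAX_TORQUE = 80.0       # actuator safety limit (invented)

def assist_torque(bio_signal: float) -> float:
    if bio_signal < NOISE_FLOOR:
        return 0.0
    return min(GAIN * bio_signal, MAX_TORQUE)

for sample in [0.01, 0.2, 1.5, 9.0]:
    print(sample, "->", assist_torque(sample))
```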

Why It’s Hot:

Voice seemingly just emerged as the new standard interface, but already we can see another future possibility. It will be interesting to see what technology can do when it’s able to sense and respond to us without us even needing to take any physical action whatsoever. On the one hand, it makes things effortless, on the other…talk about the dangers if the machines go rogue.

dragon drive: jarvis for your car…

The wave of magical CES 2018 innovations has begun to roll in, and among those already announced is “Dragon Drive” from a company called Nuance Communications – an (extremely) artificially intelligent assistant for your car.

According to Digital Trends:

“By combining conversational artificial intelligence with a number of nonverbal cues, Dragon Drive helps you talk to your car as though you were talking to a person. For example, the AI platform now boasts gaze detection, which allows drivers to get information about and interact with objects and places outside of the car simply by looking at them and asking Dragon Drive for details. If you drive past a restaurant, you can simply focus your gaze at said establishment and say, “Call that restaurant,” or “How is that restaurant rated?” Dragon Drive provides a “meaningful, human-like response.”

Moreover, the platform enables better communication with a whole host of virtual assistants, including smart home devices and other popular AI platforms. In this way, Dragon Drive claims, drivers will be able to manage a host of tasks all from their cars, whether it’s setting their home heating system or transferring money between bank accounts.

Dragon Drive’s AI integration does not only apply to external factors, but to components within the car as well. For instance, if you ask the AI platform to find parking, Dragon Drive will take into consideration whether or not your windshield wipers are on to determine whether it ought to direct you to a covered parking area to avoid the rain. And if you tell Dragon Drive you’re cold, the system will automatically adjust the car’s climate (but only in your area, keeping other passengers comfortable).
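
The interesting bit in that description is the fusion of gaze and voice: “that restaurant” only resolves because the system knows where you’re looking. A hypothetical version of that resolution step, with invented points of interest and bearings:

```python
# Hypothetical gaze + voice fusion: map "that restaurant" to whatever point of
# interest sits closest to the driver's gaze direction. POIs and angles invented.

POIS = [("Luigi's Trattoria", 42.0), ("QuickGas Station", 95.0), ("Pho Corner", 28.0)]

def resolve_gaze(gaze_bearing_deg: float, tolerance_deg: float = 8.0):
    """Return the POI closest to where the driver is looking, if any."""
    diff, name = min((abs(bearing - gaze_bearing_deg), name) for name, bearing in POIS)
    return name if diff <= tolerance_deg else None

def handle_utterance(utterance: str, gaze_bearing_deg: float) -> str:
    if "that restaurant" in utterance.lower():
        target = resolve_gaze(gaze_bearing_deg)
        return f"Looking up rating for {target}" if target else "Which restaurant do you mean?"
    return "Sorry, I didn't catch that."

print(handle_utterance("How is that restaurant rated?", 43.5))  # -> Luigi's Trattoria
```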

Why It’s Hot:

Putting aside the question of how many AI assistants we might have in our connected future, what was really interesting to see was the integration of voice with eye-tracking biometrics. Between using your voice as your key (and to personalize settings for you and your passengers), the car reminding you of memories that happened at locations you’re passing, and identifying stores, buildings, restaurants, and other things along your route with just a gaze, it’s amazing to think what the future holds as all the technologies we’ve only just seen emerging in recent years converge.

[More info]

become a jedi master with AR…


Fortuitously timed, a genius developer has created an app that lets you appear to wield a Star Wars-style lightsaber using augmented reality. Per its creator:

“It’s an iPhone app that turns a rolled up piece of paper into a virtual lightsaber. I think the best thing about it is that it brings a special effect that has typically been reserved for advanced video editors to a mass audience.”

Why It’s Hot:
Augmented reality has of course seen many new uses since becoming a widely available capability on iOS. Some are useful, and some just let you live out childhood fantasies like this. In either case, it’s amazing to consider the digital layer of the world we’re building on top of the physical one we’ve known our entire lives.

[Source]

create connected 3D printed objects…


3D printers helped us make a great leap into autonomous making, with the ability to create our own physical “products”. But in a world where physical objects and products are increasingly connected, it’s frustrating not to be able to create 3D-printed things that can connect to digital devices. Enter researchers from the University of Washington, who have “developed a way to 3D print plastic objects and sensors capable of communicating wirelessly with other smart devices, without the need for batteries or other electronics”.

As they say:

“The key idea behind our design is to communicate by reflections. The way that we do this is by reflecting Wi-Fi signals in the environment, similar to how you can use a mirror to reflect light. We 3D print antennas and switches that allow us to reflect radio signals. Using these components, we can build sensors that can detect mechanical motion, like water flow sensors and wind speed sensors. These sensors can then translate mechanical motion into reflections of Wi-Fi signals. As a result, we can create printable objects that can communicate wirelessly with Wi-Fi-enabled devices.”
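
The clever part is that motion, not electronics, does the transmitting: a printed switch changes how much ambient Wi-Fi the object reflects, and a nearby receiver decodes those reflection changes as bits. A crude sketch of the receiving side, with invented signal values:

```python
# Crude sketch of the receiver: a stream of received signal-strength samples;
# the printed switch toggling reflection shows up as high/low levels, which we
# threshold back into bits (e.g., one bit per tick of a water-flow gear).

samples = [-38, -39, -52, -51, -38, -37, -53, -52, -38, -39]  # dBm, invented

def decode_reflections(samples, threshold=-45, samples_per_bit=2):
    bits = []
    for i in range(0, len(samples), samples_per_bit):
        window = samples[i:i + samples_per_bit]
        level = sum(window) / len(window)
        bits.append(1 if level > threshold else 0)   # reflecting = stronger signal
    return bits

print(decode_reflections(samples))   # -> [1, 0, 1, 0, 1]
```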

Why It’s Hot:
It’s a primitive solution, but at least it’s an attempt to start enabling us to create our own “smart” products. In a world where soon almost all products will be connected, this is a promising step towards a true maker economy.

[Source]

tl;dr officially graduates to nm;dr…

Everything you think you know about content consumption on the internet is true.

Notre Dame researchers recently found that 73% of Redditors who volunteered for their study didn’t actually click through to links they upvoted, 84% clicked on content in less than 50% of their pageloads, and 94% did so in less than 40% of their pageloads.

Why it’s hot:

As people, it’s not. We’ve become a headline society.

As we all know, “fake news” is now a legitimate cultural phenomenon, and the lack of investigation into – and questioning of – the accuracy or legitimacy of content, opinions, ratings, and even social media accounts creates manipulative power that can be, and has been, misused by those with nefarious objectives.

But as marketers, before we make any ad, digital experience, tweet, product, or even business decision, the headline test has never been more important.

A good exercise is to write the positive headlines you hope to see as a result of what you’re thinking of doing, and the potential negative ones. Look at both, then decide the fate and/or form of your effort.

[ieee.org]

On a much lighter note, as a bonus, Google’s Santa Tracker experience is now live with Santa’s Village. Leading up to the holidays, it’s offering “access to games, a learning experience about holiday traditions around the world, and a Code Lab teaching kids basic coding skills” and an advent calendar unlocking a new game or experience each day between now and Christmas.

google maps adds wait times…


Knowing when a local business is busy is helpful; knowing how long you’d have to wait if you went is even better. Enter Google.

Google’s Search and Maps apps, and Google.com, now provide users with estimated wait times for both local restaurants and grocery stores (see above).

Now, you’ll be able to see how long you’d wait if you went right now, or when there’s a shorter wait if that’s what you need. It also lets you know when peak times are, so you can avoid them or prepare yourself for the pain. Here’s how it works:

“Google’s new restaurant wait times also come from the aggregated and anonymized data from users who opted in to Google Location History – the same data that powers popular times, wait times and visit duration.”
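
Conceptually, the estimate is an aggregation of how long opted-in visitors dwell at a place, bucketed by day and hour. A simplified sketch with invented visit records:

```python
from collections import defaultdict
from statistics import median

# Invented, anonymized-style visit records: (weekday, hour_of_arrival, minutes_waited)
visits = [
    ("sat", 19, 35), ("sat", 19, 45), ("sat", 19, 40),
    ("sat", 17, 10), ("tue", 19, 5),  ("sat", 19, 50),
]

def wait_estimates(visits):
    buckets = defaultdict(list)
    for day, hour, wait in visits:
        buckets[(day, hour)].append(wait)
    return {slot: median(waits) for slot, waits in buckets.items()}

estimates = wait_estimates(visits)
print(estimates[("sat", 19)])   # -> 42.5 minutes at Saturday 7pm; maybe go at 5 instead
```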

Why it’s hot:

Forever, one of the first questions when you have to go to the grocery store or a restaurant has been: I wonder how long I’ll have to wait. With one simple new feature, Google has removed this age-old mystery. It isn’t the first to do this for restaurants, but considering how many people use Google to find one, it certainly has the power to reach the most users.

augmented reality helps with autism…


Brain Power is a suite of AR and VR-like apps that work with Google Glass to help people with autism learn crucial social-interaction skills.

A few examples – “Emotional Charades” teaches them to identify emotions in real people’s faces. “Transition Master” helps them get comfortable with new circumstances before entering them. And “Face2Face” teaches them to make eye contact with others.

It makes it all a game, but unlike other teaching moments on an iPad – where really kids are just interacting with a device, not other people – they experience everything without the artificial interference of a screen.

Why it’s hot:
Earlier in the year, we talked a lot about the augmented self – how technology is helping us become almost super-human. But it’s not just that. As Brain Power shows, it’s even helping people learn basic human skills they might otherwise have a hard time with. While this is designed for autistic children learning social skills, there’s no reason any child couldn’t learn through digital technology that feels like real life.

Plus, it’s another example of hardware getting out of the way. The only device needed here is the wearable Google Glass, which makes the experience feel like real life with a digital layer, rather than an artificial, screen-based experience.

 

the camera doesn’t lie, but the algorithm might…

Algorithms fooling algorithms may be one of the most 21st-century things to happen yet. And it just did. Researchers at MIT used an algorithm to 3D print versions of a model object, programming them to be recognized as certain other things by Google’s image recognition technology. In short, they fooled Google’s image recognition into thinking a 3D-printed stuffed turtle was a rifle. They also made a 3D-printed baseball appear to be espresso, and a picture of a cat appear to be guacamole. Technology truly is magic.

Their explanation:

“We do this using a new algorithm for reliably producing adversarial examples that cause targeted misclassification under transformations like blur, rotation, zoom, or translation, and we use it to generate both 2D printouts and 3D models that fool a standard neural network at any angle.

It’s actually not just that they’re avoiding correct categorization — they’re classified as a chosen adversarial class, so we could have turned them into anything else if we had wanted to. The rifle and espresso classes were chosen uniformly at random.”
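
The technique the researchers describe is often called expectation over transformation: optimize the perturbation against many random transformations at once, so it survives rotation, zoom, lighting changes, and so on. A toy PyTorch sketch under big assumptions – `model` is any differentiable classifier, `x` is a normalized image tensor, and the transformations are simplified to translation and brightness:

```python
import torch

def eot_adversarial(model, x, target_class, steps=200, eps=0.05, lr=0.01, n_transforms=8):
    """Toy expectation-over-transformation attack (illustrative only).

    model: a differentiable image classifier returning logits
    x: input image tensor of shape (1, 3, H, W), values in [0, 1]
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_transforms):
            # random brightness and small translations stand in for the
            # pose/lighting changes a physical object would see
            shift = torch.randint(-4, 5, (2,))
            x_t = torch.roll(x + delta, shifts=(int(shift[0]), int(shift[1])), dims=(2, 3))
            x_t = (x_t * (0.8 + 0.4 * torch.rand(1))).clamp(0, 1)
            logits = model(x_t)
            loss = loss + torch.nn.functional.cross_entropy(
                logits, torch.tensor([target_class]))
        opt.zero_grad()
        (loss / n_transforms).backward()   # push toward the chosen target class
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)        # keep the perturbation visually subtle
    return (x + delta).clamp(0, 1).detach()
```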

Why it’s hot:
Clearly there are implications for the practicality of image recognition. If researchers can do this fairly easily in a lab setting, what’s to stop anyone with enough technical savvy from doing it in the real world – perhaps reversing the case and disguising a rifle as a stuffed turtle to get through an AI-driven, image-recognition security checkpoint? Another scary implication mentioned was self-driving cars. It just shows we need much more ethical hacking to plan for and prevent these kinds of security concerns.

your palm is your password…

In the last 12 months, biometric technology seems to have really started to hit the mainstream. We’ve got fingerprint scanning, facial recognition, retina scans, and microchipping. All require either specific technology to read, or – as Redrock Biometrics says of facial recognition – are “easy to fake” using a picture instead of a real face. So Redrock is introducing palm scanning as a new authentication method, which works with any device that has a camera. Take a picture of your palmprint, and it becomes your unique signature – wave it in front of any camera, and you’re in.

The official explanation of how it works:

The PalmID Capture Module uses sophisticated machine vision techniques to convert RGB video of the palm into a template for authentication. The PalmID Matching Module can run server side or locally. In just 10-100 milliseconds, it can match the authentication attempt against the enrollment template, using proprietary algorithms extensively tested against tens of thousands of palm images.

 

To date, no single biometrics technology has been able to satisfy these differing needs and expectations. And, consumers have to enroll their finger, iris, face or palm for every device that uses biometrics for authentication.

 

What if there were a new biometric approach that met the security, convenience and reliability needs for many industries in one solution? And what if consumers had to enroll just once and many devices would immediately recognize them?
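
Proprietary algorithms aside, enroll-and-match biometrics generally reduce to: extract a feature vector from the image, store it as the template, and later compare a fresh vector against it with a similarity threshold. A minimal sketch – the feature extraction here is a stub, not Redrock’s method:

```python
import math

def extract_features(palm_image):
    """Stub: a real system derives features from creases/veins in the RGB video."""
    return palm_image  # pretend the 'image' is already a feature vector

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def authenticate(enrolled_template, new_scan, threshold=0.95) -> bool:
    return cosine_similarity(extract_features(enrolled_template),
                             extract_features(new_scan)) >= threshold

template = [0.12, 0.80, 0.33, 0.55]          # stored once at enrollment
attempt  = [0.11, 0.82, 0.31, 0.56]          # today's scan from any camera
print(authenticate(template, attempt))        # -> True
```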

Why it’s hot:

This raises the question – how will we manage these multiple methods in a future where there might be several ways to authenticate people? This is being pitched as a solution almost anyone can implement, and Redrock says the wave of a hand shows intention in a way a retina scan, for example, doesn’t. But is this just a stopgap until a more sophisticated technology makes retina scanning, or microchipping, or something else altogether ubiquitous?

zero training = zero problem, for AlphaGo Zero…


One of the major milestones in the relatively short history of AI came when Google’s AlphaGo beat the best human Go player in the world in three straight games early last year. In order to prepare AlphaGo for its match, Google trained it using games played by other Go players, so it could observe and learn which moves win and which don’t. It learned, essentially, by watching others.

This week, Google announced AlphaGo Zero, AI that completely taught itself to win at Go. All Google gave it was the rules, and by experimenting with moves on its own, it learned how to play, and beat its predecessor AlphaGo 100 games to zero after just over a month of training.
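
AlphaGo Zero pairs self-play with a deep network and Monte Carlo tree search, but the core “given only the rules, learn from your own games” loop can be shown at toy scale. Here’s tabula-rasa self-play value learning for tic-tac-toe – no human games involved, and obviously nothing like DeepMind’s actual code:

```python
import random
from collections import defaultdict

# Tabula-rasa self-play for tic-tac-toe: the agent starts knowing only the
# rules, plays against itself, and nudges a value table toward observed outcomes.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
value = defaultdict(float)          # board state -> estimated value for player "X"

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def choose(board, player, epsilon=0.2):
    moves = legal_moves(board)
    if random.random() < epsilon:               # keep exploring new moves
        return random.choice(moves)
    sign = 1 if player == "X" else -1           # O prefers states bad for X
    return max(moves, key=lambda m: sign * value[board[:m] + player + board[m+1:]])

def self_play_game():
    board, player, history = "." * 9, "X", []
    while winner(board) is None and legal_moves(board):
        move = choose(board, player)
        board = board[:move] + player + board[move+1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for state in history:                       # pull visited states toward the outcome
        value[state] += 0.1 * (outcome - value[state])

for _ in range(20000):
    self_play_game()
print("learned value of opening in the centre:", round(value["....X...."], 2))
```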

Why It’s Hot:

AI is becoming truly generative with what DeepMind calls “tabula rasa learning”. While a lot of AI we still see on a daily basis is extremely primitive in comparison, the future of AI is a machine’s ability to create things with basic information and a question. And ultimately, learning on its own can lead to better results. As researchers put it, “Even when reliable data sets are available, they may impose a ceiling on the performance of systems trained in this manner…By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.”

nike connected jersey…

Nike added a new layer to clothing recently when it introduced connected NBA jerseys.

To coincide with Nike’s new status as the NBA’s official gear provider, jersey owners can now tap an iPhone 7 running iOS 11 on the jersey’s tag to activate “premium content” via NFC.

Per 9to5Mac:

“Essentially what happens is customers can purchase a jersey for their favorite player and unlock “premium content” about that player via the NikeConnect app. That premium content includes things such as “pregame arrival footage,” highlight reels, music playlists from players, and more. Just so everything comes full circle, the jerseys can unlock boosts for players in NBA 2K18.”
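
On the back end, “unlock premium content” presumably amounts to mapping the scanned jersey’s tag to a player and a set of entitlements. A hypothetical server-side lookup (all IDs and content invented):

```python
# Hypothetical server-side lookup behind a NikeConnect jersey scan.
JERSEY_REGISTRY = {"04:9F:11:7A": {"player": "Kevin Durant", "team": "Warriors"}}

PREMIUM_CONTENT = {
    "Kevin Durant": ["pregame arrival footage", "highlight reel",
                     "player playlist", "NBA 2K18 boost code"],
}

def unlock(tag_uid: str):
    jersey = JERSEY_REGISTRY.get(tag_uid)
    if jersey is None:
        return {"error": "unrecognized jersey"}
    return {"player": jersey["player"],
            "content": PREMIUM_CONTENT.get(jersey["player"], [])}

print(unlock("04:9F:11:7A"))
```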

Why It’s Hot:

Everything is now a platform. With AR, NFC, and QR truly becoming mainstream, and mixed reality and AI presumably not long behind them, we’re interacting with things in a whole new way. This is a relatively light example – less utility, more entertainment – but it shows how technology is integrating into everything to provide a new layer of experience to even the clothes we wear.

buy your next couch online…

Campaign may be to furniture what Casper is to mattresses. Finally, you can get the previously mythical combination of quality furniture that ships using normal delivery methods and requires minimal assembly. It’s also billed as “built for life”, with prices on par with Crate and Barrel or West Elm, and it ships for “free”.

Why It’s Hot:
Great products are designed around removing pain points from the customer experience. The long transit times, costs, and final-delivery coordination that can come with freight shipping, and the frustrating, laborious assembly required by most other furniture bought online, are two major headaches of buying furniture digitally. Campaign solves for both. Meanwhile, IKEA is still trying to figure out how to make a flat-packable couch.

the foul stench of possible identity theft…

Smell of Data from Leanne Wijnsma on Vimeo.

Some obviously creative innovators have recently created the “Smell of Data” to alert you instantly when your personal information is at risk of being compromised while adventuring around the internet.

Per these geniuses –

“Smell of Data aims to give internet users moment-to-moment updates on whether their private information is at risk of being leaked…The Smell of Data is a new scent developed as an alert mechanism for a more instinctive data…Smell data? Beware of data leaks. They can lead to privacy violation, behavior control, and identity theft.”

“To utilize the Smell of Data, a scent dispenser is charged with the specially developed fragrance, and then connected to a smartphone, tablet, or computer via Wi-Fi. The device is able to detect when a paired system attempts to access an unprotected website on an unsecured network and will emit a pungent puff of the Smell of Data as a warning signal.”

Why It’s Pungent Hot:

It’s an interesting take on the old method of playing on people’s senses to condition behavior. While it’s obviously a bit silly, the point is that very often we’re not thinking about how what we do digitally could lead to trouble later. Now there’s a Pavlovian way to get us to stop and think.

OK Google, Am I Depressed?


See gif of how it works here.

As reported by The Verge, yesterday Google rolled out a new mobile feature to help people who think they might be depressed sort it out. Now, when someone searches “depression” on Google from a mobile device (as in the screenshot above), it suggests “check if you’re clinically depressed” – connecting users to a nine-question quiz to help them find out if they need professional help.
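
The questionnaire Google links to is reportedly the PHQ-9, a standard clinically validated screener, and scoring it is simple arithmetic: nine answers rated 0–3, summed, and mapped to a severity band. A sketch using the standard published cutoffs (a screening aid, not a diagnosis):

```python
# PHQ-9 scoring: nine questions, each answered 0 ("not at all") to 3
# ("nearly every day"); the total maps to a standard severity band.
# This is a screening aid only, not a diagnosis.

SEVERITY = [(0, "minimal"), (5, "mild"), (10, "moderate"),
            (15, "moderately severe"), (20, "severe")]

def phq9_severity(answers):
    assert len(answers) == 9 and all(0 <= a <= 3 for a in answers)
    total = sum(answers)
    label = next(lab for cutoff, lab in reversed(SEVERITY) if total >= cutoff)
    return total, label

print(phq9_severity([1, 2, 1, 2, 0, 1, 2, 1, 1]))   # -> (11, 'moderate')
```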

Why It’s Hot:

As usual, Google shows that utility is based on intent – instead of just connecting people to information, they’re connecting information to people. In this case, it could be particularly impactful since “People who have symptoms of depression — such as anxiety, insomnia, or fatigue — wait an average of six to eight years before getting treatment, according to the National Alliance on Mental Illness.” 

disney creates a magical bench…

…you could interact with pretty much anything your mind can dream up.

Disney Research developed a somewhat lo-fi solution for mixed reality that requires no special glasses or singularity-type stuff. Its “Magic Bench” allows people to interact with things that aren’t there, watching the action in third-person view on a screen broadcasting them. It even provides haptic feedback to make it feel like the imaginary character or object truly is on the bench with you.

Why It’s Hot:

1) It’s a great example of technology enabling a physical experience without getting in the way. Historically, augmented/mixed reality required some type of personal technology like glasses/headset, or a phone. This requires nothing from the user but their presence.

2) It shows how Disney is using technology to create experiences that extend its “magical” brand into the digital age.

 

student teacher…

An 11-year-old Tennessee girl recently found a way to instantly detect lead in water, drastically cutting the time it used to take. Previously, you had to take a water sample and send it off to a lab for analysis; now all you need is her contraption and a smartphone. She discovered her solution when she read about a new type of nanotechnology on MIT’s website and imagined a new application for it in a new context.

Here’s how it works:
“Her test device, which she has dubbed “Tethys,” uses a disposable cartridge containing chemically treated carbon nanotube arrays. This connects with an Arduino technology-based signal processor with a Bluetooth attachment. The graphene within the nanotube is highly sensitive to changes in flow of current. By treating the tube with atoms that are sensitive to lead, Rao is able to measure whether potable water is contaminated with lead, beaming the results straight to a Bluetooth-enabled smartphone. When it detects levels higher than 15 parts per million, the device warns that the water is unsafe.”
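
Her firmware isn’t published, but the description maps to a simple loop: read the nanotube sensor, convert the reading to an estimated lead level, and flag the sample when it crosses the warning threshold. A toy sketch – the calibration curve and readings are invented:

```python
# Toy sketch of the Tethys-style loop: sensor reading -> lead estimate ->
# warning over Bluetooth. The calibration and readings here are invented.

WARN_THRESHOLD = 15.0          # the warning level cited in the article

def estimate_lead(sensor_current_uA: float) -> float:
    """Invented linear calibration: more lead bound to the treated nanotubes
    means lower current, which we map to a higher estimated concentration."""
    return max(0.0, (50.0 - sensor_current_uA) * 0.5)

def check_sample(sensor_current_uA: float) -> str:
    level = estimate_lead(sensor_current_uA)
    status = "UNSAFE" if level > WARN_THRESHOLD else "ok"
    return f"estimated lead {level:.1f} (threshold {WARN_THRESHOLD}): {status}"

# pretend these readings arrive over Bluetooth from the Arduino-based processor
for reading in [48.0, 31.0, 12.5]:
    print(check_sample(reading))
```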

Why it’s hot:

1) Never let “can we do this” stop you
2) Never let “how can we do this” stop you
3) Some of the best solutions come when you put two (or more) things together

This offers a good lesson in a few important ingredients for innovation – how much you care, how much you believe, and how creative you can be. When all are high, you can create amazing things. Know what’s possible, believe that anything is, and let nothing stop you. Let’s do it.

the ultimate convenience of the ultimate convenience store…

A company called Wheely’s has created Moby Mart: a 24/7, on-demand, self-driving, drone- and digital-assistant-serviced, all-electric, environmentally friendly, grab-and-go, digital-payment-only convenience store, currently autonomously piloting the streets of Shanghai. Or, as the company puts it on its website – “It is the store that comes to you, instead of you coming to the store.”

Bonus non-product marketing demo video:

Why it’s hot:

It’s interesting to think about what the world could look like when a number of usually separate technologies come together. Moby Mart may just be a primitive attempt at imagining it, but it combines several technologies individually aimed at convenience into something like the ultimate convenience: anything delivered to you, wherever you are, without direct human assistance.