gesture control comes to amazon drones…

Amazon has been testing drones for 30-minutes-or-less deliveries for a couple of years now. We’ve seen its patents for other drone-related ideas, but the latest describes drones that would respond to both gestures and voice commands. In effect, Amazon is trying to make the drones more than unfeeling technological vessels, and more human-friendly: if a drone is headed toward the wrong spot, you could wave your hands to flag its error, or tell it where to set your item down for final delivery. As described in the source article:

Depending on a person’s gestures — a welcoming thumbs-up, shouting or frantic arm waving — the drone can adjust its behavior, according to the patent. As described in the patent, the machine could release the package it’s carrying, change its flight path to avoid crashing, ask humans a question or abort the delivery.

Among several illustrations in the design, a person is shown outside a home, flapping his arms in what Amazon describes as an “unwelcoming manner,” to showcase an example of someone shooing away a drone flying overhead. A voice bubble comes out of the man’s mouth, depicting possible voice commands to the incoming machine.

“The human recipient and/or the other humans can communicate with the vehicle using human gestures to aid the vehicle along its path to the delivery location,” Amazon’s patent states.

Why it’s hot:

This adds a new layer to the basic idea of small aerial robots dropping items you order out of the air. The more they can humanize the robots, the more they mimic actual delivery people. And given the feedback we’ve seen on social about Amazon’s own human delivery service, this could be a major improvement.


sell my old clothes, i’m off to the cloud…

The latest episode of life imitating art is a Y Combinator startup whose proposition is essentially uploading your brain to the cloud. Per the source: “Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.”

Why It’s Hot:

What’s not hot is that you have to die in order to do it. What’s interesting is the idea of treating our consciousness almost like iPhone storage, and that reincarnation by technology could be possible.


stay perfectly hydrated with gatorade gx…

Gatorade introduced a prototype product it’s calling “Gatorade Gx”. It’s a combination of a patch you wear while working out, training, or whatever you call your physical/athletic activity, and a connected water bottle. It monitors how you sweat as you train, “capturing fluid, electrolyte, and sodium loss”. Based on this, it lets you know when you should drink more, and whether you should drink something specific based on your unique needs. That something specific being a “Pod” with a certain formula of the electrolytes or nutrients you’re losing as you sweat (your “electrolyte and carbohydrate needs”).
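A hydration coach of this kind reduces to accumulating the patch’s readings and comparing them against thresholds. A toy sketch with invented numbers (the function name, units, and limits are assumptions, not Gatorade’s actual formulas):

```python
def recommend_drink(readings, sodium_limit_mg=500, fluid_limit_ml=750):
    """Toy hydration coach: sum the patch's readings and suggest what to drink.

    `readings` is a list of (fluid_ml, sodium_mg) loss tuples from the patch.
    Thresholds are illustrative only.
    """
    fluid = sum(r[0] for r in readings)
    sodium = sum(r[1] for r in readings)
    if sodium > sodium_limit_mg:
        return "electrolyte pod"   # replace what you're sweating out
    if fluid > fluid_limit_ml:
        return "water"             # plain rehydration is enough
    return "no drink needed yet"
```

The real product’s value is in the sensing; the recommendation logic on top can stay this simple.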

Why it’s hot:

As we see more uses of technologies like AI, biometrics, and connected sensors, products and services are becoming ultra personal. This is a personal hydration coach, filling a knowledge gap that otherwise only cues from your body might indicate you need. We should be keeping an eye on how brands are taking the old idea of “personalization” to its truest form, creating new ways to give people more than just a basic product or service.


google AI predicts heart attacks by scanning your eye…

This week, the geniuses at Google and its “health-tech subsidiary” Verily announced AI that can predict your risk of a major cardiac event, with roughly the same accuracy as the currently accepted method, using just a scan of your eye.

They have created an algorithm that analyzes the back of your eye for important predictors of cardiovascular health “including age, blood pressure, and whether or not [you] smoke” to assess your risk.

As explained via The Verge:

“To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.”
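The 70 percent figure comes from that pairwise test. A minimal sketch of how such an evaluation would be scored (illustrative only, not Google’s code):

```python
def pairwise_accuracy(pairs):
    """Fraction of patient pairs where the model assigned the higher risk
    score to the patient who actually had the cardiovascular event.

    `pairs` is a list of (event_patient_score, no_event_patient_score) tuples.
    """
    correct = sum(1 for event, no_event in pairs if event > no_event)
    return correct / len(pairs)
```

Scoring this way over many held-out pairs is what produces a single accuracy number like the 70 percent reported.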

Why It’s Hot:

This type of application of AI can help doctors quickly know what to look into, and shows how AI could help them spend less time diagnosing, and more time treating. It’s a long way from being completely flawless right now, but in the future, we might see an AI-powered robot instead of a nurse before we see the doctor.


AR coming to eBay…

Details are a bit scant, but eBay announced this week it will soon be integrating AR functionality into its app.

Per Fortune, “The San Jose, California-based marketplace said it’s working on an AR kit that, for example, will let car enthusiasts see how the images of new wheels would look on their vehicles before making a purchase. Another feature will help sellers select the correct box size for an item by overlaying an image of the box on the merchandise.”

Why It’s Hot:

While not the newest kid on the block (e.g., Ikea has used AR for years), eBay is a massive marketplace where millions of people globally buy and sell things. With physical retail integrating technology to fight back against the convenience of e-commerce, this is an example of e-commerce trying to bring elements of physical retail to the digital world. One of the big disadvantages of e-commerce is that usually you’ll only see a bunch of images of a product, which in eBay’s case may or may not be of the actual product you’re buying. The ability to see what something looks like in virtual three dimensions is a major new advantage.

Also to note this week – eBay hired former Twitter data scientist Jan Pedersen to lead its AI efforts.


Ally’s attempt to hijack Super Bowl ads…

Instead of spending money on an ad, Ally Bank created an Augmented Reality game for this year’s Super Bowl. As explained in the video above, while other brands spent big money on the big game, Ally’s “Big Save” app allowed users to compete to see who could grab the most virtual money, after identifying what they were saving money toward. The game allegedly only activated during commercial breaks, and users would tap and drag AR dollar bills into a small AR piggy bank. The user with the highest score / most virtual money saved got a real cash money prize to be used toward their real-life savings goal.

Why It’s Hot:

Without knowing how successful it was, their different approach to the Super Bowl as a marketing moment is interesting. On one hand, it’s nice to see them trying to do some good with their budget. It also gave them specific insight into the things users were saving for, which they could use later for marketing or to create products addressing those things. On the other, it seems a very noisy time to try to get people to ignore friends, family, and entertainment to play a game, albeit one with a nice prize. Either way, you can appreciate their attempt to hijack people’s attention during the commercial breaks coveted by other marketers.


it’s just an ad…BUT WHY IS IT JUST AN AD?!?!

Amazon revealed its Alexa Super Bowl spot this week, and as you can see above, the premise is – imagine what it would be like if you were speaking to various celebrities instead of what at this point is a borderline monotone, virtually personality-less Alexa. There’s the anthemic 90-second version above, plus 30-second editions focused on specific personalities like you see below.

Why It’s Hot:

In a world where we’ll inevitably rely on speaking to digital assistants, why wouldn’t Amazon, Google, or any others give you the ability to choose your assistant’s voice and personality? And, why didn’t Amazon do it as part of this campaign? We’ve seen it in concept videos, but is this more than just an ad? Having GPS directions read to you by Arnold Schwarzenegger is one thing, but a true assistant you can interact with is a much different scenario. When can we expect this eminently possible future?


giving “use your brain” new meaning…

One of the most progressive concepts we saw coming out of CES was Nissan’s “Brain to Vehicle” technology, in which an autonomous car would read your brainwaves to sense what you were thinking and respond. For example, if the driver is manually driving, the car could sense that he or she will be turning, and automatically adjust for the perfect turn as he or she actually turns the vehicle.

Now, this week, the Japanese company Cyberdyne got FDA approval for an exoskeleton that helps people who can’t walk to walk, by sensing the brain’s signals telling their legs to move. To quote: “HAL involves sensors that attach to the users’ legs, which detect bioelectric signals sent from the brain to the muscles, triggering the exoskeleton to move. When people use the technology, it is the individual whose nervous system is controlling the exoskeleton, not some independent control. Nonetheless, it is able to take the intention of the users and magnify their strength by a factor of 10 — supporting both its weight and that of the wearer while they move around.”
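The trigger idea HAL describes can be sketched very simply: smooth a noisy bioelectric signal and fire when it crosses a threshold. The window size and threshold below are invented for illustration, not Cyberdyne’s actual parameters:

```python
def detect_intent(signal, window=3, threshold=0.5):
    """Toy bioelectric trigger: smooth the raw sensor signal with a
    moving average, and report the first sample where the smoothed
    value crosses the threshold (i.e., the wearer is trying to move).
    Returns the index of the trigger sample, or None.
    """
    for i in range(window - 1, len(signal)):
        avg = sum(signal[i - window + 1:i + 1]) / window
        if avg >= threshold:
            return i
    return None
```

The smoothing matters: a single noisy spike shouldn’t move a person’s legs, but a sustained rise in the signal should.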

Why It’s Hot:

Voice seemingly just emerged as the new standard interface, but already we can see another future possibility. It will be interesting to see what technology can do when it’s able to sense and respond to us without us even needing to take any physical action whatsoever. On the one hand, it makes things effortless, on the other…talk about the dangers if the machines go rogue.

dragon drive: jarvis for your car…

The wave of magical CES 2018 innovations has begun to roll in, and among those already announced is Nuance Communications’ “Dragon Drive” – an (extremely) artificially intelligent assistant for your car.

According to Digital Trends:

“By combining conversational artificial intelligence with a number of nonverbal cues, Dragon Drive helps you talk to your car as though you were talking to a person. For example, the AI platform now boasts gaze detection, which allows drivers to get information about and interact with objects and places outside of the car simply by looking at them and asking Dragon Drive for details. If you drive past a restaurant, you can simply focus your gaze at said establishment and say, “Call that restaurant,” or “How is that restaurant rated?” Dragon Drive provides a “meaningful, human-like response.”

Moreover, the platform enables better communication with a whole host of virtual assistants, including smart home devices and other popular AI platforms. In this way, Dragon Drive claims, drivers will be able to manage a host of tasks all from their cars, whether it’s setting their home heating system or transferring money between bank accounts.

Dragon Drive’s AI integration does not only apply to external factors, but to components within the car as well. For instance, if you ask the AI platform to find parking, Dragon Drive will take into consideration whether or not your windshield wipers are on to determine whether it ought to direct you to a covered parking area to avoid the rain. And if you tell Dragon Drive you’re cold, the system will automatically adjust the car’s climate (but only in your area, keeping other passengers comfortable).”

Why It’s Hot:

Putting aside the question of how many AI assistants we might have in our connected future, what was really interesting to see was the integration of voice with gaze-tracking biometrics. Between using your voice as your key (and to personalize settings for you and your passengers), the car reminding you of memories that happened at places you’re passing, and identifying stores, buildings, restaurants, and anything else along your route with just a gaze, it’s amazing to think what the future holds as the technologies we’ve only just seen emerge in recent years converge.


become a jedi master with AR…

Fortuitously timed, a genius developer has created an app that lets you appear to wield a Star Wars-style lightsaber using Augmented Reality. Per its creator:

“It’s an iPhone app that turns a rolled up piece of paper into a virtual lightsaber. I think the best thing about it is that it brings a special effect that has typically been reserved for advanced video editors to a mass audience.”

Why It’s Hot:
Augmented Reality has of course seen many new uses since becoming a widely available capability on iOS. Some are useful, and some just let you live out childhood fantasies like this one. In either case, it’s amazing to watch the digital layer of the world being built on top of the physical one we’ve known our entire lives.


create connected 3D printed objects…

3D printers helped us make a great leap into autonomous making with the ability to create our own physical “products”. But in a world where physical objects and products are increasingly connected, it’s frustrating not to be able to create 3D-printed things that can connect to digital devices. Enter researchers from the University of Washington, who have “developed a way to 3D print plastic objects and sensors capable of communicating wirelessly with other smart devices, without the need for batteries or other electronics”.

As they say:

“The key idea behind our design is to communicate by reflections. The way that we do this is by reflecting Wi-Fi signals in the environment, similar to how you can use a mirror to reflect light. We 3D print antennas and switches that allow us to reflect radio signals. Using these components, we can build sensors that can detect mechanical motion, like water flow sensors and wind speed sensors. These sensors can then translate mechanical motion into reflections of Wi-Fi signals. As a result, we can create printable objects that can communicate wirelessly with Wi-Fi-enabled devices.”
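“Communicate by reflections” boils down to on/off keying: reflect to signal a 1, absorb for a 0, and let a Wi-Fi receiver watch its signal strength. A toy decoder under that assumption (one sample per symbol; a real system has to handle timing and noise):

```python
def decode_reflections(rssi_samples, baseline):
    """Toy backscatter decoder: when the printed antenna reflects,
    received signal strength (dBm) jumps above the ambient baseline
    (bit 1); when the switch absorbs, it stays at or below it (bit 0).
    """
    return "".join("1" if s > baseline else "0" for s in rssi_samples)
```

The printed switch (say, a spinning water-flow wheel toggling the antenna) does the encoding mechanically; everything else is ordinary Wi-Fi hardware.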

Why It’s Hot:
It’s a primitive solution, but at least it’s an attempt to start enabling us to create our own “smart” products. In a world where soon almost all products will be connected, this is a promising step towards a true maker economy.


tl;dr officially graduates to nm;dr…

Everything you think you know about content consumption on the internet is true.

Notre Dame researchers recently found that 73% of Redditors who volunteered for their study didn’t actually click through to links they upvoted, 84% clicked on content in less than 50% of their pageloads, and 94% did so in less than 40% of their pageloads.

Why it’s hot:

As people, it’s not. We’ve become a headline society.

As we all know, “fake news” is now a legitimate cultural phenomenon, and the failure to investigate or question the accuracy and legitimacy of content, opinions, ratings, and even social media accounts creates a manipulative power that can be, and has been, misused by those with nefarious objectives.

But as marketers, before we make any ad, digital experience, tweet, product, or even business decision, the headline test has never been more important.

A good exercise is to write the positive headlines you hope to see as a result of what you’re thinking of doing, and the potential negative ones. Look at both, then decide the fate and/or form of your effort.


On a much lighter note, as a bonus, Google’s Santa Tracker experience is now live with Santa’s Village. Leading up to the holidays, it’s offering “access to games, a learning experience about holiday traditions around the world, and a Code Lab teaching kids basic coding skills” and an advent calendar unlocking a new game or experience each day between now and Christmas.

google maps adds wait times…

Knowing when a local business is busy is helpful, knowing how long you would have to wait if you went is even better. Enter Google.

Google’s Search and Maps apps now provide users with estimated wait times for both local restaurants and grocery stores (see above).

Now you’ll be able to see how long you would wait if you went right now, or when there’s a shorter wait if that’s what you need. It also lets you know when peak times are, so you can avoid them or prepare yourself for the pain. Here’s how it works:

“Google’s new restaurant wait times also comes from the aggregated and anonymized data from users who opted in to Google Location History – the same data that powers popular times, wait times and visit duration.”
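The aggregation behind a feature like this can be sketched simply: bucket anonymized visit logs by hour and average them. A toy version (the data shape is an assumption, not Google’s schema):

```python
from collections import defaultdict

def wait_times_by_hour(visits):
    """Aggregate anonymized visit logs into an average wait per hour,
    in the spirit of how a 'popular times' histogram is built.
    `visits` is a list of (hour_of_day, wait_minutes) tuples.
    """
    totals = defaultdict(lambda: [0, 0])  # hour -> [sum of waits, count]
    for hour, wait in visits:
        totals[hour][0] += wait
        totals[hour][1] += 1
    return {hour: s / n for hour, (s, n) in totals.items()}
```

With enough opted-in users, per-hour averages like these are what get rendered as the wait-time bars in Search and Maps.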

Why it’s hot:

Forever, one of the first questions when you have to go to the grocery store or a restaurant has been: I wonder how long I’ll have to wait. With one simple new feature, Google has removed this age-old mystery. It isn’t the first to do this for restaurants, but considering how many people use Google to find one, it certainly has the power to reach the most users.

augmented reality helps with autism…

Brain Power is a suite of AR and VR-like apps that work with Google Glass to help people with autism learn crucial social-interaction skills.

A few examples – “Emotional Charades” teaches them to identify emotions in real people’s faces. “Transition Master” helps them get comfortable with new circumstances before entering them. And “Face2Face” teaches them to make eye contact with others.

It makes it all a game, but unlike other teaching moments on an iPad, users experience things without the artificial interference of a device screen, where really they’re just interacting with themselves.

Why it’s hot:
Earlier in the year, we talked a lot about the augmented self – how technology was helping us become almost super-human. But, it’s not just that. As Brain Power shows, it’s even helping learn basic human skills we might have a hard time with otherwise. While this is for autistic children to learn social skills, there’s no reason any child couldn’t learn through digital technology that feels like real life.

Plus, it’s another example of hardware getting out of the way. The only device needed here is the wearable Google Glass, which makes the experience feel like real life with a digital layer, rather than an artificial, screen-based experience.


the camera doesn’t lie, but the algorithm might…

Algorithms fooling algorithms may be one of the most 21st-century things to happen yet. But it did. Researchers at MIT used an algorithm to create 3D printed versions of model objects, engineered to be recognized as entirely different things by Google’s image recognition technology. In short, they fooled Google’s image recognition into thinking a 3D printed stuffed turtle was a rifle. They also made a 3D printed baseball appear to be espresso, and a picture of a cat appear to be guacamole. Technology truly is magic.

Their explanation:

“We do this using a new algorithm for reliably producing adversarial examples that cause targeted misclassification under transformations like blur, rotation, zoom, or translation, and we use it to generate both 2D printouts and 3D models that fool a standard neural network at any angle.

It’s actually not just that they’re avoiding correct categorization — they’re classified as a chosen adversarial class, so we could have turned them into anything else if we had wanted to. The rifle and espresso classes were chosen uniformly at random.”
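The researchers’ actual method (adversarial examples that survive blur, rotation, and viewpoint changes) is far more involved, but the core trick of steering a classifier toward a chosen class can be shown on a toy linear model. Everything here is illustrative, not their algorithm:

```python
def logits(W, x):
    """Scores of a toy linear classifier: one row of weights per class."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

def argmax(v):
    return max(range(len(v)), key=v.__getitem__)

def sign(v):
    return [(g > 0) - (g < 0) for g in v]

def targeted_adversarial(x, W, target, eps=0.5, steps=20):
    """Nudge input x in small signed steps until the classifier
    predicts the chosen `target` class instead of the true one."""
    x_adv = list(x)
    for _ in range(steps):
        best = argmax(logits(W, x_adv))
        if best == target:
            break
        # move along the direction that raises the target's score
        # relative to the currently winning class
        grad = [wt - wb for wt, wb in zip(W[target], W[best])]
        x_adv = [xi + eps * s for xi, s in zip(x_adv, sign(grad))]
    return x_adv
```

A deep network is just a much bigger version of this gradient game, which is why a turtle can be pushed into the “rifle” class with changes humans barely notice.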

Why it’s hot:
Clearly there are implications for the practicality of image recognition. If they can do this fairly easily in a lab setting, what’s to stop anyone with enough technical savvy from doing it in the real world, perhaps reversing the case and disguising a rifle as a stuffed turtle to get through an artificially intelligent, image-recognition-driven security checkpoint? Another scary implication mentioned was self-driving cars. It just shows we need much more ethical hacking to plan for and prevent these kinds of security concerns.

your palm is your password…

In the last 12 months, biometric technology seems to have really started to hit the mainstream. We’ve got fingerprint scanning, facial recognition, retina scans, and microchipping. All require either specific technology to read, or as Redrock Biometrics explains about facial recognition – “it’s easy to fake”, using a picture instead of a real face. So Redrock is introducing palm scanning as a new authentication method, which works with any device that has a camera. Take a picture of your palmprint, and it becomes your unique signature – wave it in front of any camera, and you’re in.

The official explanation of how it works:

The PalmID Capture Module uses sophisticated machine vision techniques to convert RGB video of the palm into a template for authentication. The PalmID Matching Module can run server side or locally. In just 10-100 milliseconds, it can match the authentication attempt against the enrollment template, using proprietary algorithms extensively tested against tens of thousands of palm images.


To date, no single biometrics technology has been able to satisfy these differing needs and expectations. And, consumers have to enroll their finger, iris, face or palm for every device that uses biometrics for authentication.


What if there were a new biometric approach that met the security, convenience and reliability needs for many industries in one solution? And, what if consumers had to enroll just once and many devices would immediately recognize them?
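Setting aside Redrock’s proprietary algorithms, the “match against the enrollment template” step has a simple shape: reduce the palm image to a feature vector and compare it to the single template captured at enrollment. A toy 1:1 verification sketch (the threshold and vectors are made up):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_palm(probe, enrolled_template, threshold=0.9):
    """Toy 1:1 verification: accept only if the probe's feature vector
    is close enough to the template stored at enrollment. Real systems
    use proprietary features; this just shows the shape of the problem.
    """
    return cosine_similarity(probe, enrolled_template) >= threshold
```

The “enroll once, work everywhere” pitch amounts to storing that one template server-side so any camera-equipped device can run the comparison.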

Why it’s hot:

This raises the question: how will we manage these multiple methods in a future where there might be several ways to authenticate people? This is being pitched as a solution almost anyone can implement, and they say the wave of the hand shows intention that a retina scan, for example, doesn’t. But is this just a stopgap until a more sophisticated technology makes retina scanning, or microchipping, or something else altogether ubiquitous?

zero training = zero problem, for AlphaGo Zero…

One of the major milestones in the relatively short history of AI came when Google’s AlphaGo beat the best human Go player in the world in three straight games earlier this year. In order to prepare AlphaGo for its match, Google trained it using games played by other Go players, so it could observe and learn which moves win and which don’t. It learned from essentially watching others.

This week, Google announced AlphaGo Zero, AI that completely taught itself to win at Go. All Google gave it was the rules, and by experimenting with moves on its own, it learned how to play, and beat its predecessor AlphaGo 100 games to zero after just over a month of training.
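AlphaGo Zero combines deep networks with tree search, which is far beyond a blog sketch, but the “given only the rules, learn by playing yourself” idea can be shown on a toy game. Here a tabular agent teaches itself a 21-stone take-away game purely through self-play (all parameters are illustrative, and this is an analogue of tabula rasa learning, not AlphaGo Zero itself):

```python
import random

def train_selfplay(stones=21, episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Self-play learning on a take-away game: players alternate removing
    1-3 stones; whoever takes the last stone wins. The agent is given only
    the rules and learns Q(state, move) values by playing against itself.
    """
    rng = random.Random(seed)
    Q = {(n, k): 0.0 for n in range(1, stones + 1) for k in (1, 2, 3) if k <= n}
    for _ in range(episodes):
        n = stones
        while n > 0:
            moves = [k for k in (1, 2, 3) if k <= n]
            if rng.random() < eps:          # explore
                k = rng.choice(moves)
            else:                           # exploit current knowledge
                k = max(moves, key=lambda m: Q[(n, m)])
            if n - k == 0:
                target = 1.0                # took the last stone: win
            else:
                # the opponent moves next; their best outcome is our loss
                target = -max(Q[(n - k, m)] for m in (1, 2, 3) if m <= n - k)
            Q[(n, k)] += alpha * (target - Q[(n, k)])
            n -= k
    return Q

def greedy_move(Q, n):
    return max((k for k in (1, 2, 3) if k <= n), key=lambda k: Q[(n, k)])
```

After training, the agent has rediscovered the game’s winning strategy (leave your opponent a multiple of four) without ever seeing a human game, which is the whole point of tabula rasa learning.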

Why It’s Hot:

AI is becoming truly generative with what DeepMind calls “tabula rasa learning”. While a lot of AI we still see on a daily basis is extremely primitive in comparison, the future of AI is a machine’s ability to create things with basic information and a question. And ultimately, learning on its own can lead to better results. As researchers put it, “Even when reliable data sets are available, they may impose a ceiling on the performance of systems trained in this manner…By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.”

nike connected jersey…

Nike added a new layer to clothing recently when it introduced connected NBA jerseys.

To coincide with Nike’s new status as the NBA’s official gear provider, jersey owners can now tap an iPhone 7 or later (running iOS 11) on the jersey’s tag to activate “premium content” via NFC.

Per 9to5Mac:

“Essentially what happens is customers can purchase a jersey for their favorite player and unlock “premium content” about that player via the NikeConnect app. That premium content includes things such as “pregame arrival footage,” highlight reels, music playlists from players, and more. Just so everything comes full circle, the jerseys can unlock boosts for players in NBA 2K18.”

Why It’s Hot:

Everything is now a platform. With AR, NFC, and QR truly becoming mainstream, and mixed reality and AI presumably not long behind them, we’re interacting with things in a whole new way. This is a relatively light example – less utility, more entertainment – but it shows how technology is integrating into everything to provide a new layer of experience to even the clothes we wear.

buy your next couch online…

Campaign may be to furniture what Casper is to mattresses. Finally you can get the previously mythical combination of quality furniture that is shippable using normal delivery methods, and that requires minimal assembly. It’s also billed as being “built for life”, with prices on par with Crate and Barrel, or West Elm, and ships for “free”.

Why It’s Hot:
Great products are designed around removing pain points from the customer experience. The long transit times (and coordinating final delivery) that can come with freight shipping (+the cost), and the overly frustrating and laborious assembly required with other furniture purchased digitally are two major headaches when buying furniture online. Campaign solves for both. Meanwhile, IKEA is still trying to figure out how to make a flat-packable couch.

the foul stench of possible identity theft…

Smell of Data from Leanne Wijnsma on Vimeo.

Some obviously creative innovators have recently created the “Smell of Data” to alert you instantly when your personal information is at risk of being compromised while adventuring around the internet.

Per these geniuses –

“Smell of Data aims to give internet users moment-to-moment updates on whether their private information is at risk of being leaked…The Smell of Data is a new scent developed as an alert mechanism for a more instinctive data…Smell data? Beware of data leaks. They can lead to privacy violation, behavior control, and identity theft.”

“To utilize the Smell of Data, a scent dispenser is charged with the specially developed fragrance, and then connected to a smartphone, tablet, or computer via Wi-Fi. The device is able to detect when a paired system attempts to access an unprotected website on an unsecured network and will emit a pungent puff of the Smell of Data as a warning signal.”

Why It’s Pungent Hot:

It’s an interesting twist on an old method: playing on people’s senses to condition behavior. While it’s obviously a bit silly, the point stands – very often we’re not thinking about how what we do digitally could lead to trouble later. Now there’s a Pavlovian way to get us to stop and think.

OK Google, Am I Depressed?

See gif of how it works here.

As reported by The Verge, yesterday Google rolled out a new mobile feature to help people who might think they’re depressed sort it out. Now, when someone searches “depression” on Google from a mobile device (as in the screenshot above), it suggests “check if you’re clinically depressed” – connecting users to a nine-question quiz (the clinically validated PHQ-9) to help them find out if they need professional help.
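The nine-question screener in question is the PHQ-9, whose published scoring is simple enough to sketch (a screening aid, not a diagnosis):

```python
def phq9_severity(answers):
    """Score the PHQ-9 depression screener: nine answers, each 0-3
    ('not at all' through 'nearly every day'), summed to a 0-27 total.
    Severity bands follow the instrument's published scoring guide.
    """
    if len(answers) != 9 or not all(0 <= a <= 3 for a in answers):
        raise ValueError("expected nine answers, each 0-3")
    total = sum(answers)
    for cutoff, label in [(4, "minimal"), (9, "mild"), (14, "moderate"),
                          (19, "moderately severe"), (27, "severe")]:
        if total <= cutoff:
            return total, label
```

The quiz itself can’t diagnose anyone; its job is to route people with elevated scores toward professional help sooner.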

Why It’s Hot:

As usual, Google shows that utility is based on intent – instead of just connecting people to information, they’re connecting information to people. In this case, it could be particularly impactful since “People who have symptoms of depression — such as anxiety, insomnia, or fatigue — wait an average of six to eight years before getting treatment, according to the National Alliance on Mental Illness.” 

disney creates a magical bench…

…you could interact with pretty much anything your mind can dream up.

Disney Research developed a somewhat lo-fi solution for mixed reality that requires no special glasses or singularity-type stuff. Its “Magic Bench” allows people to interact with things that aren’t there, watching the action in a third-person view on a screen broadcasting them. It even provides haptic feedback to make it feel like the imaginary character or object truly is on the bench with you.

Why It’s Hot:

1) It’s a great example of technology enabling a physical experience without getting in the way. Historically, augmented/mixed reality required some type of personal technology like glasses/headset, or a phone. This requires nothing from the user but their presence.

2) It shows how Disney is using technology to create experiences that extend its “magical” brand into the digital age.


student teacher…

An 11-year-old Tennessee girl recently found a way to instantly detect lead in water, drastically cutting the time it used to take. Previously, you had to take a water sample and send it off to a lab for analysis; now all you need is her contraption and a smartphone. She discovered her solution after reading about a new type of nanotechnology on MIT’s website and imagining its application in a new context.

Here’s how it works:
“Her test device, which she has dubbed “Tethys,” uses a disposable cartridge containing chemically treated carbon nanotube arrays. This connects with an Arduino technology-based signal processor with a Bluetooth attachment. The graphene within the nanotube is highly sensitive to changes in flow of current. By treating the tube with atoms that are sensitive to lead, Rao is able to measure whether potable water is contaminated with lead, beaming the results straight to a Bluetooth-enabled smartphone. When it detects levels higher than 15 parts per million, the device warns that the water is unsafe.”
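The chemistry and sensing are the hard parts; the final software step is essentially a threshold check against a lead limit (the EPA’s action level is 15 parts per billion). A toy sketch of that last step, with everything but the limit invented:

```python
def water_status(readings_ppb, limit_ppb=15):
    """Toy version of the device's final step: average the sensor's lead
    readings (parts per billion) and flag the water if the level exceeds
    the action level. The actual sensing (nanotube current changes) is
    the hard part this sketch skips.
    """
    level = sum(readings_ppb) / len(readings_ppb)
    return ("unsafe" if level > limit_ppb else "safe", level)
```

A result like this is what would be beamed over Bluetooth to the companion smartphone app.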

Why it’s hot:

1) Never let “can we do this” stop you
2) Never let “how can we do this” stop you
3) Some of the best solutions come when you put two (or more) things together

This offers a good lesson in a few important ingredients for innovation – how much you care, how much you believe, and how creative you can be. When all are high, you can create amazing things. Know what’s possible, believe that anything is, and let nothing stop you. Let’s do it.

the ultimate convenience of the ultimate convenience store…

A company called Wheely’s has created Moby Mart, a 24/7, on demand, self-driving, drone and digital assistant serviced, all electric, environmentally friendly, grab and go, digital payment only convenience store, currently autonomously piloting the streets of Shanghai. Or, as they put it on their website – “It is the store that comes to you, instead of you coming to the store.”

Bonus non-product marketing demo video:

Why it’s hot:

It’s interesting to think about what the world could look like when a number of often separate technologies come together. This may just be a primitive attempt at imagining it, but imagine the ultimate convenience provided by combining a number of technologies individually aimed at creating convenience for people. Anything could be delivered to you wherever you are, without direct human assistance.

first responder app cuts emergency response time in half…

The European Heart Rhythm Association has developed a “First Responder” app that it claims cuts the time for someone in cardiac arrest to receive CPR by more than half. It sends out an alert to all app users, who are trained to provide CPR; if one is close by, they can answer the call. The app even provides directions to the victim.

Why It’s Hot:

First, it’s almost shocking no one has done this already. The technology required has been available for years. But, for me, it’s the massive impact such a simple solution can have. Every minute shaved off the response time “increases a victim’s chance of survival by ten percent.” Considering heart disease is the #1 cause of death in our own country, I wonder how long until we can adopt a similar model.
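The dispatch step the app performs (alert trained users nearby, route the closest one) can be sketched with a plain distance check. The names and the 2 km radius below are made up:

```python
import math

def nearest_responder(victim, responders, max_km=2.0):
    """Toy dispatch: pick the closest CPR-trained responder within range.
    `victim` is a (lat, lon) pair in degrees; `responders` maps a name
    to a (lat, lon) pair. Returns the name to alert, or None.
    """
    def haversine_km(a, b):
        # great-circle distance between two (lat, lon) points
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))

    in_range = [(haversine_km(victim, pos), name)
                for name, pos in responders.items()
                if haversine_km(victim, pos) <= max_km]
    return min(in_range)[1] if in_range else None
```

Nothing here is exotic, which is exactly the point made above: the pieces have existed for years; the insight was assembling them for this purpose.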

your face is your ticket…

JetBlue is now piloting airport technology that would replace your boarding pass with a scan of your face.

Here’s how it works:

“The process is fairly simple: Passengers step up to a camera to have their picture taken. The picture is then compared with passport photos in the CBP database and to verify flight details. If successful, the passenger is notified that they are cleared to board by an on-screen message at the camera terminal.”
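The match-then-verify flow described in the quote can be sketched abstractly. This is only an illustration of the two-step check (face match against a photo database, then a flight-manifest lookup); the embedding vectors, 0.9 threshold, and function names are assumptions, not CBP’s or JetBlue’s actual system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def cleared_to_board(gate_embedding, passport_db, flight_manifest, threshold=0.9):
    """Match the gate photo against passport embeddings, then check the manifest.

    passport_db: {passenger_id: embedding}; flight_manifest: set of passenger ids.
    Returns the matched passenger id if cleared to board, else None.
    """
    best_id, best_score = None, threshold
    for pid, emb in passport_db.items():
        score = cosine_similarity(gate_embedding, emb)
        if score >= best_score:
            best_id, best_score = pid, score
    # Even a confident face match is not enough: the passenger must be on this flight.
    if best_id is not None and best_id in flight_manifest:
        return best_id
    return None
```

The design point worth noticing is that identity and entitlement are separate checks: looking like someone in the database only clears you if that someone also has a seat on the flight.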

Why it’s hot:

I’m not sure how smooth the experience is at the moment, but the idea of never needing a boarding pass, physical or on your phone, and just being able to walk onto your flight sounds pretty no-nonsense (except you still have to remember what seat you’re in). It makes you think – there are probably many things “outerweb” technology could replace among what we currently do with our phones. What happens to our phones when biometrics and other technologies can do what we’re now doing with our smartphones?


holograms, benjamin…

Some genius developer has boldly chosen to experiment with perhaps the world’s most forgotten voice assistant, Microsoft Cortana, and imagined what interacting with her could be like if you added another dimension to it.

In his words – “It’s basically what I imagined Microsoft’s version of Alexa or Google Home would be like if they were to use the holographic AI sidekick from the Halo franchise.”

As seen in the video above, his prototype makes it feel as if you’re speaking to an actual artificial person, which makes the whole experience more human.

Why it’s hot:

Amazon recently released the Echo Show, which allows skill makers to add a “face” to their interactions, but this makes that look like a kid’s toy. It shows how what started not long ago as primitive voice technology on a phone could quickly turn into virtual assistants that look and act like humans, powered by the same underlying technology. Plus, the 145 million people who apparently have access to Cortana may stop ignoring her in the future.

googler creates AI that creates video using one image…

One of the brilliant minds at Google has developed an algorithm that can create (and has created) video from a single image. The AI does this by predicting each next frame based on the previous one, and in this instance it did so 100,000 times to produce the 56-minute-long video you see above. Per its creator:

“I used videos recorded from trains windows, with landscapes that moves from right to left and trained a Machine Learning (ML) algorithm with it. What you see at the beginning is what the algorithm produced after very little learnings. It learns more and more during the video, that’s why there are more and more realistic details. Learnings is updated every 20s. The results are low resolution, blurry, and not realistic most of the time. But it resonates with the feeling I have when I travel in a train. It means that the algorithm learned the patterns needed to create this feeling. Unlike classical computer generated content, these patterns are not chosen or written by a software engineer.”
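The generation loop itself is simple: feed the model one frame, take its prediction as the next frame, and repeat. The creator’s learned model isn’t public, so the sketch below stands in a toy “predictor” that scrolls each row one pixel left (mimicking scenery passing a train window); only the autoregressive loop reflects the described technique:

```python
def shift_left(frame):
    """Toy stand-in for the learned model: predict the next frame by
    scrolling each row of pixels one position to the left, the way a
    landscape passes a train window from right to left."""
    return [row[1:] + row[:1] for row in frame]

def generate_video(seed_frame, n_frames, predict=shift_left):
    """Autoregressive generation: each new frame is predicted from the
    previous one, starting from a single seed image."""
    frames = [seed_frame]
    for _ in range(n_frames - 1):
        frames.append(predict(frames[-1]))
    return frames
```

With a real predictor in place of `shift_left`, running this loop 100,000 times from one image is exactly the kind of process that would yield the long video described above, errors and all, since each frame inherits the imperfections of the last.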

Why it’s hot:

Creativity and imagination have been among the most inimitable human qualities since forever. And anyone who’s ever created anything remotely artistic will tell you inspiration isn’t as easy as hitting ‘go’. While this demonstration looks more like an art school video project than a timeless social commentary displayed in a museum, it made me wonder – what if bots created art? Would artists compete with them? Would they give up their pursuit because bots can create at the touch of a button? Would this spawn a whole new area of human creativity out of the emotion of having your work held up next to programmatic art? Could artificial intelligence ever create something held up against real human creativity?

repeat after me…

A Canadian company called Lyrebird has created a way to replicate anyone’s voice using AI. After capturing just 60 seconds of someone talking, the machine can reproduce that individual’s way of speaking. They say they’ve already received thousands of ideas on how people could use this new capability:

Some companies, for example, are interested in letting their users choose to have audio books read in the voice of either famous people or family members. The same is true of medical companies, which could allow people with voice disabilities to train their synthetic voices to sound like themselves, if recorded samples of their speaking voices exist. Another interesting idea is for video game companies to offer the ability for in-game characters to speak with the voice of the human player.


But even bigger, they say their technology will allow people to create a unique voice of their own, with the ability to fully control even the emotion with which it speaks.

Why it’s hot:

Besides the fact that it’s another example of life imitating art, we already live in a world where we have quite a bit of control over how we portray ourselves to the world. In the future, could we choose our own voice? Could we have different voices for every situation? How might we ever really be sure we know who we’re speaking to? Does the way someone has chosen to sound change the way we get to know them? And, what if the voices of our friends and family can now be preserved in perpetuity?


the internet of graphene…

[image and subject matter courtesy of digital trends]

Researchers from Trinity College Dublin recently published a concept for printed 2D transistors made of graphene that could instantly turn “dumb” physical objects into connected, “living” products.

According to them…

“You could imagine the possibility of one day having printed circuitry on food packaging, so that rather than having a barcode, you have a circuit that can communicate information to the user… That could mean a carton of milk that sends you a text message when your milk is about to go off.”

Another possible usage, Coleman said, is the concept of paper-thin displays, which could be embedded into newspapers or magazines, or slung up on the wall like a moving poster.


Why it’s hot:

I don’t profess to understand all the history and intricacies of circuits and transistors in the slightest, but I do see how the breakthrough idea of a low-cost material that makes large-scale implementation feasible could have massive implications for the future of products. All of a sudden, everything with a physical surface could become digitally enabled, able to “come alive” in a sense and communicate with us, or even entertain us, opening up a whole new layer in the physical world.