New Ford CEO Jim Farley’s plan for the automaker includes a heavy dose of software and services for its commercial vehicle business as well as new consumer experiences to drive loyalty.
Why It’s Hot // The convergence of always-on connectivity and data commercialization brings a world of new opportunities to marketers and brands seeking to redefine their businesses – while also adding fuel to the fiery debate about the trade-offs between privacy and personalized experiences.
Ford, which is in the middle of a turnaround of its core business, is trying to navigate a shift to electric and autonomous vehicles, as well as an industry that is increasingly about software. Farley takes over for Jim Hackett, who streamlined the automaker over the last three years.
Farley outlined a series of leadership changes and a plan that includes “expanding its commercial vehicle business with a suite of software services that drive loyalty and recurring revenue streams” and “unleashing technology and software in ways that set Ford apart from competitors.”
Ford is also looking for a new CIO as Jeff Lemmer is retiring Jan. 1. His successor will lead Ford’s technology and software platform.
The tech strategy from Farley lands after a Sept. 16 investor presentation by CTO Kenneth Washington, who outlined the connectivity that future smart vehicles will require, including 5G, satellites, and edge, cloud, and fog computing.
Washington added that Ford has hired more than 3,000 advanced computing experts to work on the tech stack and surrounding technologies including things like smart cities, mobility services, edge computing, and analytics.
If you were to tear down a future Ford, say, 10 years from now, the biggest difference you’d see is that the software, compute and sensing services are being serviced by a central compute module. And that’s really important because that’s more like we’re accustomed to seeing with the smartphones and the smart devices that we surround ourselves in our homes with every day. So this design that you would see would enable us to really leverage the power of high bandwidth connectivity that happens around the vehicle.
In the future, vehicle changes will be handled with updates via software and algorithms instead of hardware, said Washington. These updates would start with software, but design of electrical architecture as well as shared memory and power systems for various zones of the vehicle would be critical.
Other key points about Ford’s tech stack include:
Ford uses QNX, Autosar and Linux to develop its operating system and tech stack.
The automaker builds on top of that OS with middleware from its internal software team.
In 2020, Ford began equipping most of its redesigned vehicles with the ability for advanced over-the-air updates.
The data from those updates on vehicles like the F-150 and Bronco will help Ford iterate.
There are 5 million Ford connected vehicles in the field today.
Ford sees opportunities in services that optimize Ford fleets for small business owners.
New cool thing alert! Haven’t brands been disrupted enough in 2020 already?
“Utilizing AI & machine-learning algorithms capable of identifying apparel featured within video content, droppTV enables instant, click-to-buy purchasing, letting viewers shop directly inside the video and also browse artists’ virtual pop-up stores to seamlessly purchase merchandise like limited-edition streetwear.
Currently piloting with music videos, the platform aims to fuse entertainment with retail to create immersive and connected experiences directly linking brands and creators with their audiences. PSFK identified droppTV for research on innovative retail strategies for the disrupted 2020 holiday season—check out more inspiration here.”
Why it’s hot:
Monetizing image-recognition AI has been a long time coming; being able to seamlessly integrate that technology into consumer behavior is a big step in a new direction.
Amazon’s new fitness band adds body fat, movement, sleep and mood to the mountain of data Amazon is amassing. Whether you’re streaming on Amazon Prime, shopping on Amazon.com, or buying groceries at Whole Foods, Amazon is ready to…errrr…help?
Why it’s Hot – The increasing convergence of our digital and analog lives is bringing questions of privacy and data sovereignty to the forefront, while also creating new potential opportunities for marketers (just think about what a partnership between Microsoft and Walmart to buy TikTok could mean).
From The Verge:
Amazon is getting into the health gadget market with a new fitness band and subscription service called Halo. Unlike the Apple Watch or even most basic Fitbits, the Amazon Halo band doesn’t have a screen. The app that goes along with it comes with the usual set of fitness tracking features along with two innovative — and potentially troubling — ideas: using your camera to create 3D scans for body fat and listening for the emotion in your voice.
The Halo band will cost $99.99 and the service (which is required for Halo’s more advanced features) costs $3.99 per month. Amazon is launching it as an invite-only early access program today with an introductory price of $64.99 that includes six months of the service for free. The Halo service is a separate product that isn’t part of Amazon Prime.
The lack of a screen on the Halo band is the first indicator that Amazon is trying to carve out a niche for itself that’s focused a little less on sports and exercise and a little more on lifestyle changes. Alongside cardio, sleep, body fat, and voice tone tracking, a Halo subscription will offer a suite of “labs” developed by partners. They’re short challenges designed to improve your health habits — like meditation, improving your sleep habits, or starting up basic exercise routines.
The Halo band “is not a medical device,” Amazon tells me. As such, it hasn’t submitted the device to the FDA for any sort of approval, including the lighter-touch “FDA clearance” that so many other fitness bands have used.
The Amazon Halo intro video | Source: Amazon
THE HALO BAND HARDWARE
The Halo Band consists of a sensor module and a band that clicks into it on top. It’s a simple concept and one we’ve seen before. The lack of a display means that if you want to check your steps or the time, you’ll need to strap something else to your wrist or just check your phone.
The band lacks increasingly standard options like GPS, Wi-Fi, or a cellular radio, another sign that it’s meant to be a more laid-back kind of tracker. It has an accelerometer, a temperature sensor, a heart rate monitor, two microphones, an LED indicator light, and a button to turn the microphones on or off. The microphones are not for speaking to Alexa, by the way, they’re there for the voice tone feature. There is explicitly no Alexa integration.
It communicates with your phone via Bluetooth, and it should work equally well with both iPhones and Android phones. The three main band colors that will be sold are onyx (black), mineral (light blue), and rose gold (pink-ish).
There will of course be a series of optional bands so you can choose one to match your style — and all of them bear no small resemblance to popular Apple Watch bands. The fabric bands will cost $19.99 and the sport bands will be $15.99.
Amazon intends for users to leave the Halo Band on all the time: the battery should last a full week and the sensor is water resistant up to 5ATM. Amazon calls it “swimproof.”
But where the Halo service really differentiates itself is in two new features, called Body and Tone. The former uses your smartphone camera to capture a 3D scan of your body and then calculate your body fat, and the latter uses a microphone on the Halo Band to listen to the tone of your voice and report back on your emotional state throughout the day.
Body scans work with just your smartphone’s camera. The app instructs you to wear tight-fitting clothing (ideally just your underwear) and then stand back six feet or so from your camera. Then it takes four photos (front, back, and both sides) and uploads them to Amazon’s servers where they’re combined into a 3D scan of your body that’s sent back to your phone. The data is then deleted from Amazon’s servers.
Once you have the 3D scan, Amazon uses machine learning to analyze it and calculate your body fat percentage. Amazon argues that body fat percentage is a more reliable indicator of health than either weight or body mass index. Amazon also claims that smart scales that try to measure body fat using bioelectrical impedance are not as accurate as its scan. Amazon says it did an internal study to back up those claims and may begin submitting papers to peer-reviewed medical journals in the future.
Finally, once you have your scan, the app will give you a little slider you can drag your finger on to have it show what you would look like with more or less body fat.
That feature is meant to be educational and motivational, but it could also be literally dangerous for people with body dysmorphic disorder, anorexia, or other self-image issues. I asked Amazon about this directly and the company says that it has put in what it hopes are a few safeguards: the app recommends you only scan yourself every two weeks, it won’t allow the slider to show dangerously low levels of body fat, and it has information about how low body fat can increase your risk for certain health problems. Finally, although anybody 13 years of age and up can use the Halo Band, the body scan feature will only be allowed for people 18 or older.
TRACKING THE TONE OF YOUR VOICE
The microphone on the Amazon Halo band isn’t meant for voice commands; instead it listens to your voice and reports back on what it believes your emotional state was throughout the day. If you don’t opt in, the microphone on the Band doesn’t do anything at all.
Once you opt in, the Halo app will have you read some text back to it so that it can train a model on your voice, allowing the Halo band to only key in on your tone and not those around you. After that, the band will intermittently listen to your voice and judge it on metrics like positivity and energy.
It’s a passive and intermittent system, meaning that you can’t actively ask it to read your tone, and it’s not listening all of the time. You can also mute the mic at any time by pressing the button until a red blinking LED briefly appears to show you it’s muted.
Amazon is quick to note that your voice is never uploaded to any servers and never heard by any humans. Instead, the band sends its audio snippets to your phone via Bluetooth, and they’re analyzed there. Amazon says that the Halo app immediately deletes the voice samples after analyzing them for your emotional state.
It picks up on the pitch, intensity, rhythm, and tempo of your voice and then categorizes them into “notable moments” that you can go back and review throughout the day. Some of the emotional states include words like hopeful, elated, hesitant, bored, apologetic, happy, worried, confused, and affectionate.
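Amazon hasn’t published how Tone maps those acoustic features to labels, so as a purely illustrative sketch (all thresholds, scales, and the label rule below are invented), feature-based tone labeling might look something like this:

```python
# Toy sketch of feature-based tone labeling. Halo's real model is proprietary;
# this only illustrates the idea of mapping acoustic features (pitch,
# intensity, tempo) onto qualitative "notable moment" labels.
# All thresholds and scales here are hypothetical.

def label_tone(pitch_hz: float, intensity_db: float, tempo_wpm: float) -> dict:
    """Map raw acoustic features to coarse 'energy' and 'positivity' scores."""
    # Energy: louder, faster speech reads as more energetic.
    energy = min(1.0, (intensity_db / 80.0) * 0.5 + (tempo_wpm / 200.0) * 0.5)
    # Positivity: crude stand-in using pitch alone.
    positivity = min(1.0, pitch_hz / 300.0)
    if energy > 0.7 and positivity > 0.7:
        label = "elated"
    elif energy < 0.3:
        label = "bored"
    elif positivity < 0.3:
        label = "hesitant"
    else:
        label = "happy"
    return {"energy": energy, "positivity": positivity, "label": label}

print(label_tone(pitch_hz=250, intensity_db=72, tempo_wpm=180))
```

A real system would of course extract these features from audio with signal processing and use a trained model rather than hand-set thresholds.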
We asked Amazon whether this Tone feature was tested across differing accents, gender, and cultures. A spokesperson says that it “has been a top priority for our team” but that “if you have an accent you can use Tone but your results will likely be less accurate. Tone was modeled on American English but it’s only day one and Tone will continue to improve.”
Both the Body and Tone features are innovative uses of applied AI, but they are likely to set off any number of privacy alarm bells. Amazon says that it is being incredibly careful with user data. The company will post a document detailing every type of data, where it’s stored, and how to delete it.
Every feature is opt-in, easy to turn off, and it’s easy to delete data. For example, there’s no requirement you create a body scan and even if you do, human reviewers will never see those images. Amazon says the most sensitive data like body scans and Tone data are only stored locally (though photos do need to temporarily be uploaded so Amazon’s servers can build the 3D model). Amazon isn’t even allowing Halo to integrate with other fitness apps like Apple Health at launch.
Some of the key points include:
Your Halo profile is distinct from your Amazon account — and will need to be individually activated with a second factor like a text message so that anybody else that might share your Amazon Prime can’t get to it.
You can download and delete any data that’s stored in the cloud at any time, or reset your account to zero.
Body scans and tone data can be individually deleted separately from the rest of your health data.
Body scans are only briefly uploaded to Amazon’s servers then deleted “within 12 hours” and scan images are never shared to other apps like the photo gallery unless you explicitly export an image.
Voice recordings are analyzed locally on your phone and then deleted. “Speech samples are processed locally and never sent to the cloud,” Amazon says, adding that “Tone data won’t be used for training purposes.”
Data can be shared with third parties, including some partners like WW (formerly Weight Watchers). Data generated by the “labs” feature is only shared as anonymous aggregate info.
ACTIVITY AND SLEEP TRACKING
The body scanning and tone features might be the most flashy (or, depending on your perspective, most creepy) parts of Halo, but the thing you’ll likely spend the most time watching is your activity score.
Amazon’s Halo app tracks your cardio fitness on a weekly basis instead of daily — allowing for rest days. It does count steps, but on a top level what you get is an abstracted score (and, of course, a ring to complete) that’s more holistic. Just as Google did in 2018, Amazon has worked with the American Heart Association to develop the abstracted Activity score.
The Halo band uses its heart monitor to distinguish between intense, moderate, and light activity. The app combines those to ensure you’re hitting a weekly target. Instead of the Apple Watch’s hourly “stand” prompts, the Halo app tracks how long you have been “sedentary.” If you go for more than 8 hours without doing much (not counting sleep), the app will begin to deduct from your weekly activity score.
The Halo band can automatically detect activities like walking and running, but literally every other type of exercise will need to be manually entered into the app. The whole system feels less designed for workout min-maxers and more for people who just want to start being more active in the first place.
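Amazon hasn’t disclosed the exact point formula behind the Activity score, but the mechanics described above — weighted intensity tiers, a weekly target, and deductions for long sedentary stretches — could be sketched like this (all point values and the penalty rule are hypothetical):

```python
# Toy sketch of a Halo-style weekly activity score.
# Point weights and the sedentary penalty are invented for illustration;
# Amazon has not published its actual formula.

def weekly_activity_score(intense_min, moderate_min, light_min, sedentary_hours):
    # Weight activity minutes by intensity.
    points = intense_min * 2 + moderate_min * 1 + light_min * 0.5
    # Deduct for sedentary time beyond a budget of 8 hours/day
    # (excluding sleep), aggregated over the week.
    excess_hours = max(0, sedentary_hours - 8 * 7)
    return max(0, points - excess_hours)

# 30 intense + 90 moderate + 120 light minutes, 60 sedentary hours this week:
print(weekly_activity_score(30, 90, 120, 60))  # → 206.0
```

The weekly (rather than daily) window is what allows for rest days: a hard workout early in the week keeps the score up even if later days are quiet.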
Speaking of heart tracking, the Halo band doesn’t proactively alert you to heart conditions like a-fib, nor does it do fall detection.
The Halo band’s sleep tracking similarly tries to create an abstracted score, though you can dig in and view details on your REM sleep and other metrics. One small innovation that the Halo band shares with the new Fitbit is temperature monitoring. It uses a three-day baseline when you are sleeping and from there can show a chart of your average body temperature when you wake up.
HALO LABS, PARTNERSHIPS, AND THE SUBSCRIPTION
Finally, Amazon has partnered with several third parties to create services and studies to go along with the Halo service. For example, if your health care provider’s system is compatible with Cerner, you can choose to share your body fat percentage with your provider’s electronic medical records system. Amazon says it will also be a fully subsidized option for the John Hancock Vitality wellness program.
The flagship partnership is with WW, which syncs up data from Halo into WW’s own FitPoints system. WW will also be promoting the Halo Band itself to people who sign up for its service.
There are dozens of lower-profile partnerships, which will surface in the Halo app as “Labs.” Many of the labs will surface as four-week “challenges” designed to get you to change your health habits. Partners creating Labs range from Mayo Clinic, Exhale, Aaptiv, Lifesum, Headspace, and more. So there might be a lab encouraging you to give yoga a try, or a set of advice on sleeping better like kicking your pet out of your bedroom.
Amazon says each Lab needs to be developed with “scientific evidence” of its effectiveness and Amazon will audit them. Data created from these challenges will be shared with those partners, but only in an aggregated, anonymous way.
Virtually all the features discussed here are part of the $3.99/month Halo subscription. If you choose to let it lapse, the Halo band will still do basic activity and sleep tracking.
In charging a monthly subscription, Amazon is out on a limb compared to most of its competitors. Companies like Fitbit and Withings offer some of the same features you can get out of the Halo system, including sleep tracking and suggestions for improving your fitness. They also have more full-featured bands with displays and other functionality. And of course there’s the Apple Watch, which will have deeper and better integrations with the iPhone than will ever be possible for the Halo band.
Overall, Halo is a curious mix. Its hardware is intentionally less intrusive and less feature-rich than competitors, and its pricing strategy puts Amazon on the hook for creating new, regular content to keep people subscribed (exercise videos seem like a natural next step). Meanwhile, the body scanning feature goes much further than other apps in directly digitizing your self-image — which is either appealing or disturbing depending on your relationship to your self image. And the emotion tracking with Tone is completely new and more than a little weird.
The mix is so eclectic that I can’t possibly guess who it might appeal to. People who are more serious about exercise and fitness will surely want more than what’s on offer in the hardware itself, and people who just sort of want to be a little more active may balk at the subscription price. And since the Halo band doesn’t offer the same health alerts like fall detection or abnormal heart rate detection, using it as a more passive health monitor isn’t really an option either.
That doesn’t mean the Halo system can’t succeed. Amazon’s vision of a more holistic health gadget is appealing, and some of its choices in how it aggregates and presents health data is genuinely better than simple step counting or ring completion.
We won’t really know how well the Halo system does for some time, either. Amazon’s opening it up as an early access program for now, which means you need to request to join rather than just signing up and buying it.
If you popped into Twitter this week you probably came across GPT-3. It was created by research lab OpenAI, and is a new AI language model that can do some truly incredible things, from writing poetry to composing business memos to generating functioning code.
The company launched the service in beta last month and has gradually widened access.
The AI has basically been “trained” on an archive of the internet called the Common Crawl, which contains nearly one trillion words of data. That’s why it can demonstrate so many different capabilities, from creating a website similar to Google to finishing a VC’s blog post.
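GPT-3’s transformer architecture is far beyond anything sketchable here, but the core idea — learn from a huge text corpus which words tend to follow which, then sample to generate text — can be shown with a toy bigram model (the corpus and output are illustrative only):

```python
# Toy next-word model: the simplest possible ancestor of what GPT-3 does.
# GPT-3 uses a transformer over ~a trillion words; this uses bigram counts
# over one made-up sentence, purely to illustrate the training objective.

import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows which.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Sample a continuation word-by-word from the learned follow counts."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = next_words.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the", 5))
```

Scale the corpus up to the Common Crawl and swap the bigram table for a 175-billion-parameter network, and you get the qualitative jump in capability the tweets were about.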
Super interesting but still has some bugs to work out too.
This post is one of the best GPT-3 evaluations I’ve seen. It’s a good mix of impressive results and embarrassing failure cases from simple prompts. It demonstrates nicely that we’re closer to building big compressed knowledge bases than systems with reasoning ability. https://t.co/a5Nq006dMD
Why it’s hot: It’s a huge leap forward in terms of AI, which can enable a lot of different applications. That said, it still has some kinks to work out, but we can only imagine what something like GPT-4 will be like. If you think about it, the first iPhone has come a looooooong way and it’ll be interesting to see where this AI goes in the future.
Stitch Fix Is Attracting Loyal Customers Without a Loyalty Program
As its customer base has grown in recent years, so too has the revenue it generates from each active customer. Even amid the pain the apparel industry has been experiencing over the last few months of the coronavirus pandemic, Stitch Fix has managed to weather the storm with only a slight revenue decline – mostly due to the decision to close warehouses for a period.
WHY IT’S HOT: In a world where “loyalty” tends to cost businesses and marketers money, in the form of deals and discounts, Stitch Fix is a testament to the power of data to drive true personalization across the customer experience.
From The Motley Fool:
A personal stylist armed with a powerful data-driven selection algorithm creates a great customer experience.
In the highly competitive clothing industry, loyal customers are worth their weight in gold. Stores go to great lengths to attract repeat customers with programs that provide rewards, discounts, or exclusive offers for loyal members. But even with these programs, customers are hard to keep. A 2019 survey by Criteo found that 72% of apparel shoppers were open to considering other brands, which is why what Stitch Fix (NASDAQ: SFIX) has done to create loyal clients without a loyalty program is so special.
Let’s look at this personalized online clothing retailer’s loyal customers, how data science is helping build loyalty into the process, and what management is doing to further capitalize on the company’s momentum.
Loyal customers spend more
Clothing stores have seen a significant drop in spending in the past few months, but Stitch Fix’s most recent quarterly revenue only declined by 9% year over year. Impressively, this decline was not due to a drop in demand, but because the company chose to close its warehouses for part of the quarter as it put safety measures in place for its staff. This strong result against a backdrop of abysmal retail clothing spending was powered in part by the company’s auto-ship customers.
In the most recent earnings call, CEO Katrina Lake indicated that customers who sign up to receive “Fixes” (shipments of clothes) automatically and on a regular basis “achieved the strongest levels of ownership retention in the last three years.” She added that “this large contingent of loyal and highly engaged clients” are “very valuable.” Having a stable base of repeat clients helps the company better predict demand trends, shape inventory purchases, and forecast appropriate staffing levels.
Additional benefits from Stitch Fix’s loyal customers show up in the revenue-per-active-client metric. At the end of the day, consumers vote with their wallets. And impressively, this number has increased for the last eight quarters in a row. It’s clear Stitch Fix clients love the service as they are willing to spend more over time.
Possibly the biggest reason clients are spending more is that they are better matched with items they love.
Data science helps improve the customer experience
Making great clothing selections is key to the client experience for Stitch Fix. The job of keeping this recommendation engine humming and improving it over time belongs to the company’s data science team. This group is over 100 strong, and many of its members have Ph.D.s in data science or related fields. The team received a patent on its Smart Fix Algorithm and has other patents pending. You can see the amazing detail that goes into this process on the Algorithms Tour section of the Stitch Fix website.
This algorithm is also driving selections for the direct buy offering, which allows clients to purchase clothing without the commitment of the five-item fix. This new service is taking off and its low return rates show that clients love it. Lake shared that “people keeping things that they love is ultimately like the true Northstar of our business and that’s really where we’re orienting a lot of our efforts again.” One of these new efforts is focused on pushing the envelope of how stylists engage with clients.
Doubling down on personalized service
On the last earnings call, Stitch Fix President Elizabeth Spaulding discussed a pilot program that “provide[s] clients with increased stylist engagement and the opportunity to select items in their fixes.” This program, currently being tested in the U.S. and the U.K., connects the client on a video call with a stylist while their fix is being created. This allows the client direct input into their selections and enables the stylist to become better acquainted with the client’s clothing choices.
This innovative approach plays to the company’s strengths and could further build its loyal client following. Spaulding indicated that more would be shared in upcoming calls, but said that “We believe this enhanced styling experience will appeal to an even broader set of clients as consumers seek high-touch engagement while not going into stores.”
Yesterday a smart person named Thomas Dimson, who formerly wrote “the algorithm” at Instagram, launched a site that uses Transformer-based Natural Language Processing (NLP) models, including OpenAI’s infamous GPT-2 AI-powered text generator, to generate and define new English words, and use them in a sentence.
A disclaimer at the bottom of the site reads: Words are not reviewed and may reflect bias in the training set.
You can also write your own neologism and the AI will define it for you. It’s a fun diversion, but does it have any use? Probably not in this form. But it speaks to how AI may be used in the fun-and-games side of life, but also how it may ultimately shape the foundations of how we communicate.
Why it’s hot:
It’s fun to participate in the creation of something new (without having to work too hard), and language is the perfect playground for experimentation.
As AI becomes more influential in our daily lives, it’s interesting (and perhaps a little disturbing) to imagine the ways in which it may take part in creating the very words we use to communicate. What else might AI give us that we have heretofore considered to be the exclusive domain of humans?
The phrase Zoom meeting has been uttered countless times over the past few weeks, as businesses around the world have turned to the video conferencing app to connect for meetings. Indeed, a number of folks we’ve queried in our #WFH Diaries series have reported being on Zoom essentially all day long.
Throw in some Zoom happy hours, and Zoom wine nights, and Zoom card games, and it can be a bit much. Matt Reed, a creative technologist at Redpepper in Nashville, was certainly feeling the strain, anyway.
“My number of Zoom meetings has gone through the mesosphere and is currently on Mars,” Reed writes on his agency’s blog. “There’s barely even time for bio-breaks, Reddit, or actually getting work done. It’s as if Zoom has turned into the Oasis from Ready Player One, where everyone spends every waking hour of their day inside.”
So, Reed flexed his creative tech chops and came up with an amusing solution. He built a digital A.I.-powered twin of himself, named Zoombot, and had the clone show up for the Zoom meetings in his place.
Zoombot uses advanced A.I. speech recognition and text-to-speech tools to actually respond to other people in the meetings. Also, Reed didn’t warn his colleagues he was doing this—and their reactions in the video are priceless.
Why it’s hot: Way to break the endless monotony of video calls using your digital twin. And the best part is that Reed is spending all the free time “making that coffee whip stuff everybody is making,” he reveals. “Stuff is delicious.”
Google subsidiary DeepMind has unveiled an AI called Agent57 that can beat the average human at 57 classic Atari games.
The system achieved this feat using deep reinforcement learning, a machine learning technique that helps an AI improve its decisions by trying out different approaches and learning from its mistakes.
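Agent57 itself uses deep networks, distributed actors, and sophisticated exploration bonuses, but the trial-and-error update rule at the heart of reinforcement learning can be illustrated with a minimal tabular Q-learning sketch on a toy environment (the corridor environment and all hyperparameters below are invented for illustration):

```python
# Minimal tabular Q-learning on a 5-state corridor: the agent learns that
# moving right reaches the goal. This is the simple ancestor of the deep RL
# used in Agent57; only the core learn-from-mistakes update is shared.

import random

N, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}  # state-action values
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):  # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge toward reward + discounted best future value.
        best_next = max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should move right (+1) from every state.
policy = {s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Agent57 replaces the lookup table with neural networks and adds machinery (like intrinsic curiosity rewards) to handle sparse-reward games such as Montezuma’s Revenge, but the update loop above is the conceptual kernel.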
In its blog post announcing the release, DeepMind trumpets Agent57 as the most general Atari57 agent since the benchmark’s inception, and the first to obtain above-human-level performance not only on the easy games but also across the most demanding ones.
Why it’s hot:
By learning how to play these complex games, machines gain the capability to think and act strategically. DeepMind’s general-purpose learning algorithms allow the machine to learn through gamification, working to acquire human-like intelligence and behavior.
Sales of voice control devices are expected to experience a boom in growth, thanks to people being locked down and working from home. This is also expected to fuel growth in the broader ecosystem of smart home devices – as instructions to minimize contact with objects that haven’t been disinfected, make things like connected light switches, thermostats and door locks more appealing than ever.
Why It’s Hot: A critical mass of device penetration and usage will undoubtedly make this a more meaningful platform for brands and marketers to connect and engage with consumers.
With so many millions of people working from home, the value of voice control during the pandemic will ensure that this year, voice control device shipments will grow globally by close to 30% over 2019–despite the key China market being impacted during the first quarter of 2020, according to global tech market advisory firm, ABI Research.
Last year, 141 million voice control smart home devices shipped worldwide, the firm said. Heeding the advice to minimize COVID-19 transmission from shared surfaces, even within a home, will help cement the benefits of smart home voice control for millions of consumers, ABI Research said.
“A smarter home can be a safer home,” said Jonathan Collins, ABI research director, in a statement. “Key among the recommendations regarding COVID-19 protection in the home is to clean and disinfect high-touch surfaces daily in household common areas,” such as tables, hard-backed chairs, doorknobs, light switches, remotes, handles, desks, toilets, and sinks.
Voice has already made significant inroads into the smart home space, Collins said. Using voice control means people can avoid commonly touched surfaces around the home from smartphones, to TV remotes, light switches, thermostats, door handles, and more. Voice can also be leveraged for online shopping and information gathering, he said.
When used in conjunction with other smart home devices, voice brings greater benefits, Collins said.
“Voice can be leveraged to control and monitor smart locks to enable deliveries to be placed in the home or another secure location directly or monitored securely on the doorstep until the resident can bring them in,” he said.
Similarly, smart doorbells/video cameras can also ensure deliveries are received securely without the need for face-to-face interaction or exposure, he added. “Such delivery capabilities are especially valuable for those already in home quarantine or for those receiving home testing kits,” Collins said.
He believes that over the long term, “voice control will continue to be the Trojan horse of smart home adoption.” Right now, the pandemic is part of the additional motivation and incentive for voice control in the home to help drive awareness and adoption for a range of additional smart home devices and applications, Collins said.
“Greater emphasis and understanding, and above all, a change of habit and experience in moving away from physical actuation toward using voice in the home will support greater smart home expansion throughout individual homes,” he said. “A greater emphasis on online shopping and delivery will also drive smart home device adoption to ensure those deliveries are securely delivered.”
The legacy of COVID-19 will be that the precautions being taken now will continue for millions of people who are bringing new routines into their daily lives in and around their homes and will for a long time to come, Collins said.
“Smart home vendors and system providers can certainly emphasize the role of voice and other smart home implementations to improve the day-to-day routines within a home and the ability to minimize contact with shared surfaces, as well as securing and automating home deliveries.”
Additionally, he said there is value in integrating smart home monitoring and remote health monitoring with a range of features, such as collecting personal health data points like temperature, activity, and heart rate, alongside environmental data such as air quality and occupancy. This can “help in the wider response and engagement for smart city health management,” Collins said.
As machine learning and artificial intelligence usage proliferates in everyday products, there have been many attempts to make it easier to understand. The latest explainer comes from Google and the Oxford Internet Institute with “The A to Z of AI.”
At launch, “The A to Z of AI” covers 26 topics, including bias, how AI is used in climate science, ethics, machine learning, human-in-the-loop systems, and generative adversarial networks (GANs).
The AI explainer from Google and Oxford will be “refreshed periodically, as new technologies come into play and existing technologies evolve.”
Why it’s hot:
AI is informing just about every facet of society. But AI is a thorny subject, fraught with complex terminology, contradictory information, and general confusion about what it is at its most fundamental level.
As Coronavirus fears spread and hand sanitizer and face masks fly off the shelves, the question is: how do we prevent and mitigate?
Researchers are looking to AI for the solution. “John Brownstein, chief innovation officer at Boston Children’s Hospital and a professor at Harvard Medical School, and his team built a tool called Healthmap after SARS killed 774 people around the world in the mid-2000s. Healthmap scrapes information about new outbreaks from online news reports, chatrooms and more. It then organizes that previously disparate data, generating visualizations that show how and where communicable diseases like the coronavirus are spreading. Healthmap’s output supplements more traditional data-gathering techniques used by organizations like the U.S. Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO). The project’s data is being used by clinicians, researchers and governments.”
Google has decided it wants to avoid potential gender bias in its AI system for identifying images, so it’s choosing to simply use the designator “person” instead.
From The Verge:
The company emailed developers today about the change to its widely used Cloud Vision API tool, which uses AI to analyze images and identify faces, landmarks, explicit content, and other recognizable features. Instead of using “man” or “woman” to identify images, Google will tag such images with labels like “person,” as part of its larger effort to avoid instilling AI algorithms with human bias.
In the email to developers announcing the change, Google cited its own AI guidelines, Business Insider reports: “Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias.”
Why it’s hot:
It’s interesting to see AI companies grapple with the reality of human social life, and navigate the shifting waters of public mores.
Avoiding bias is a major issue in society, and it’s very important that the companies building AI don’t build their human bias into it. But with any new technology, there can be unintended and unpredictable consequences down the line, from even seemingly innocuous or universally accepted ideas.
A skincare startup is tackling the complexity consumers face when navigating the category to select the best products for their skincare needs. Rather than adding to the clutter of products, ingredients and “proprietary formulas”, or attempting to educate consumers through exposure to research + science, Proven Skincare simply prescribes personalized solutions for each individual.
After collecting customer input based around 40 key factors, Proven Skincare’s AI combs through a comprehensive database of research, testimonials and dermatology expertise, to identify the best mix of ingredients for each person’s situation.
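As a rough illustration of that kind of profile-to-ingredient matching, a minimal scoring sketch in Python might look like the following. The factor names, ingredients, and evidence weights are all invented for illustration; Proven Skincare's actual model and database are proprietary.

```python
def score_ingredient(user_profile, ingredient_evidence):
    """Sum the evidence weights for every user factor the ingredient addresses."""
    return sum(
        weight
        for factor, weight in ingredient_evidence.items()
        if user_profile.get(factor)
    )

def recommend(user_profile, evidence_db, top_n=3):
    """Rank ingredients by how well their evidence base matches the profile."""
    ranked = sorted(
        evidence_db,
        key=lambda ing: score_ingredient(user_profile, evidence_db[ing]),
        reverse=True,
    )
    return ranked[:top_n]

# Invented example data: per-ingredient evidence weights for skin factors.
evidence_db = {
    "niacinamide": {"oily_skin": 0.8, "redness": 0.6},
    "hyaluronic_acid": {"dry_skin": 0.9},
    "salicylic_acid": {"oily_skin": 0.7, "acne": 0.9},
}
profile = {"oily_skin": True, "acne": True}
print(recommend(profile, evidence_db))  # salicylic_acid ranks first
```

A real system would weigh dermatology research and testimonial data rather than a hand-built table, but the shape of the problem — score candidates against a questionnaire profile, then rank — is the same.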
“The paradox of choice, the confusion that causes this frustrating cycle of trial and error, is too much for most people to bear,” says Zhao on the latest edition of Ad Age’s Marketer’s Brief podcast. “There’s a lot of cycles of buying expensive product, only for it to then sit on somebody’s vanity shelf for months to come.”
As the human body’s largest organ, skin should be properly cared for—using products and ingredients that have been proven to work for specific individuals. That’s the core mission behind Proven Skincare, a new beauty company that has tapped technology to research the best skincare regimen for consumers.
Why It’s Hot: In a world where the benefits of things like AI and big data are not often apparent to the “average” person, this is an example of technology that solves a real human problem, while remaining invisible (i.e. it’s not about the tech).
In its first-ever keynote at CES, Delta announced a new AI-driven system that will help it make smarter decisions when the weather turns tough and its finely tuned operations get out of whack. In a first for the passenger airline industry, the company built a full-scale digital simulation of its operations that its new system can then use to suggest the best way to handle a given situation with the fewest possible disruptions for passengers.
It’s no secret that the logistics of running an airline are incredibly complex, even on the best of days. On days with bad weather, that means airline staff must figure out how to swap airplanes between routes to keep schedules on track, ensure that flight crews are available and within their FAA duty time regulations and that passengers can make their connections.
“Our customers expect us to get them to their destinations safely and on time, in good weather and bad,” said Erik Snell, Delta’s senior vice president of its Operations & Customer Center. “That’s why we’re adding a machine learning platform to our array of behind-the-scenes tools so that the more than 80,000 people of Delta can even more quickly and effectively solve problems, even in the most challenging situations.”
The new platform will go online in the spring of this year, the company says, and, like most of today’s AI systems, will get smarter over time as it is fed more real-world data. Thanks to the included simulation of Delta’s operations, it’ll also include a post-mortem tool to help staff look at which decisions could have resulted in better outcomes.
Delivering best-in-class CX in the airline industry is a beast, and Delta has consistently tried to win here (as previously covered by the Forrester CX Index and the like). While lacking in the super-cool-tech factor, widespread use of AI in the airline industry makes a ton of sense.
28-year-old architect Nashin Mahtani’s website, PetaBencana.id, uses artificial intelligence and chat-bots to monitor and respond to social posts on Twitter, Facebook, and Telegram by communities in Indonesia hit by floods. The information is then displayed on a real-time map that is monitored by emergency services.
“Jakarta is the Twitter capital of the world, generating 2% of the world’s tweets, and our team noticed that during a flood, people were tweeting in real-time with an incredible frequency, even while standing in flood waters,” said Mahtani, a graduate of Canada’s University of Waterloo. Jakarta residents often share information with each other online about road blockages, rising waters and infrastructure failures.
Unlike other relief systems that mine data on social media, PetaBencana.id adopts AI-assisted “humanitarian chat-bots” to engage in conversations with residents and confirm flooding incidents. “This allows us to gather confirmed situational updates from street level, in a manner that removes the need for expensive and time-consuming data processing,” Mahtani said.
In early 2020, the project will go nationwide to serve 250 million people and include additional disasters such as forest fires, haze, earthquakes and volcanoes.
Why It’s Hot
Aggregating social data in real-time on a map allows for easy flow of information between residents in need and emergency services who can help them. In a situation when every second counts to help as many people as possible, this use of technology is truly life-saving.
The creator of the famous voice assistant dreams of a world where Alexa is everywhere, anticipating your every need.
Speaking with MIT Technology Review, Rohit Prasad, Alexa’s head scientist, revealed further details about where Alexa is headed next. The crux of the plan is for the voice assistant to move from passive to proactive interactions. Rather than wait for and respond to requests, Alexa will anticipate what the user might want. The idea is to turn Alexa into an omnipresent companion that actively shapes and orchestrates your life. This will require Alexa to get to know you better than ever before.
In June at the re:Mars conference, he demoed [view from 53:54] a feature called Alexa Conversations, showing how it might be used to help you plan a night out. Instead of manually initiating a new request for every part of the evening, you would need only to begin the conversation—for example, by asking to book movie tickets. Alexa would then follow up to ask whether you also wanted to make a restaurant reservation or call an Uber.
A more intelligent Alexa
Here’s how Alexa’s software updates will come together to execute the night-out planning scenario. In order to follow up on a movie ticket request with prompts for dinner and an Uber, a neural network learns—through billions of user interactions a week—to recognize which skills are commonly used with one another. This is how intelligent prediction comes into play. When enough users book a dinner after a movie, Alexa will package the skills together and recommend them in conjunction.
But reasoning is required to know what time to book the Uber. Taking into account your and the theater’s location, the start time of your movie, and the expected traffic, Alexa figures out when the car should pick you up to get you there on time.
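The scheduling step Prasad describes is a simple back-calculation from the showtime. A minimal sketch, assuming a traffic multiplier and a fixed safety buffer (both invented parameters, not Alexa's actual logic):

```python
from datetime import datetime, timedelta

def pickup_time(movie_start, travel_minutes, traffic_factor=1.0, buffer_minutes=10):
    """Work backwards from the showtime: subtract traffic-adjusted travel
    time plus a safety buffer. traffic_factor > 1 means heavier traffic."""
    travel = timedelta(minutes=travel_minutes * traffic_factor)
    buffer = timedelta(minutes=buffer_minutes)
    return movie_start - travel - buffer

# A 7:30pm movie, 20 minutes away, with 50% extra time for traffic:
start = datetime(2020, 1, 17, 19, 30)
print(pickup_time(start, travel_minutes=20, traffic_factor=1.5))
# 19:30 minus 30 minutes of travel minus a 10-minute buffer = 18:50
```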
Prasad imagines many other scenarios that might require more complex reasoning. You could imagine a skill, for example, that would allow you to ask your Echo Buds where the tomatoes are while you’re standing in Whole Foods. The Buds will need to register that you’re in the Whole Foods, access a map of its floor plan, and then tell you the tomatoes are in aisle seven.
In another scenario, you might ask Alexa through your communal home Echo to send you a notification if your flight is delayed. When it’s time to do so, perhaps you are already driving. Alexa needs to realize (by identifying your voice in your initial request) that you, not a roommate or family member, need the notification—and, based on the last Echo-enabled device you interacted with, that you are now in your car. Therefore, the notification should go to your car rather than your home.
This level of prediction and reasoning will also need to account for video data as more and more Alexa-compatible products include cameras. Let’s say you’re not home, Prasad muses, and a Girl Scout knocks on your door selling cookies. The Alexa on your Amazon Ring, a camera-equipped doorbell, should register (through video and audio input) who is at your door and why, know that you are not home, send you a note on a nearby Alexa device asking how many cookies you want, and order them on your behalf.
To make this possible, Prasad’s team is now testing a new software architecture for processing user commands. It involves filtering audio and visual information through many more layers. First Alexa needs to register which skill the user is trying to access among the roughly 100,000 available. Next it will have to understand the command in the context of who the user is, what device that person is using, and where. Finally it will need to refine the response on the basis of the user’s previously expressed preferences.
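A toy version of that three-stage flow — skill resolution, then context, then preference-based refinement — might look like the sketch below. The keyword matching, skill catalog, and preference store are all invented stand-ins for Alexa's real architecture.

```python
def resolve_skill(utterance, skills):
    """Stage 1: pick the skill whose trigger keyword appears in the utterance
    (a real system searches ~100,000 skills with learned models)."""
    for keyword, skill in skills.items():
        if keyword in utterance:
            return skill
    raise ValueError("no matching skill")

def refine(response, prefs):
    """Stage 3: adjust the answer using the user's stored preferences."""
    if prefs.get("units") == "celsius" and response["type"] == "weather":
        response["temp"] = round((response["temp"] - 32) * 5 / 9)
    return response

def handle_request(utterance, user, device, skills, preferences):
    skill = resolve_skill(utterance, skills)            # stage 1: which skill?
    context = {"user": user, "device": device}          # stage 2: who, on what
    response = skill(utterance, context)
    return refine(response, preferences.get(user, {}))  # stage 3: personalize

# Invented example: one weather skill, one user who prefers Celsius.
skills = {"weather": lambda u, ctx: {"type": "weather", "temp": 68}}
prefs = {"alice": {"units": "celsius"}}
print(handle_request("what's the weather", "alice", "kitchen-echo", skills, prefs))
# {'type': 'weather', 'temp': 20}
```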
Why It’s Hot:
“This is what I believe the next few years will be about: reasoning and making it more personal, with more context,” says Prasad. “It’s like bringing everything together to make these massive decisions.”
Adobe has previewed an AI tool that analyzes the pixels of a image to determine the probability that it’s been manipulated and the areas in which it thinks the manipulation has taken place, shown as a heat map.
It’s fitting that the company that made sophisticated photo manipulation possible would also create a tool to help combat its nefarious use. While it’s not live in Adobe applications yet, it could be integrated into them, such that users can quickly know whether what they’re looking at is “real” or not.
Up next: the inevitable headline about someone creating a tool that can trick the Adobe AI into thinking a manipulated photo is real.
Why it’s hot:
Fake news is a big problem, and this might help us get to the truth of some matters of consequence.
But … not everything can be solved with AI. This might help people convince others that something they saw is in fact fake, but it doesn’t overcome the deeper problem of people’s basic gullibility, lack of critical thinking, and strong desire to justify their already entrenched beliefs.
Google said on Wednesday that it had achieved a long-sought breakthrough called “quantum supremacy,” which could allow new kinds of computers to do calculations at speeds that are inconceivable with today’s technology.
The Silicon Valley giant’s research lab in Santa Barbara, Calif., reached a milestone that scientists had been working toward since the 1980s: Its quantum computer performed a task that isn’t possible with traditional computers, according to a paper published in the science journal Nature.
A quantum machine could one day drive big advances in areas like artificial intelligence and make even the most powerful supercomputers look like toys. The Google device did in 3 minutes 20 seconds a mathematical calculation that supercomputers could not complete in under 10,000 years, the company said in its paper.
Scientists likened Google’s announcement to the Wright brothers’ first plane flight in 1903 — proof that something is really possible even though it may be years before it can fulfill its potential.
Still, some researchers cautioned against getting too excited about Google’s achievement since so much more work needs to be done before quantum computers can migrate out of the research lab. Right now, a single quantum machine costs millions of dollars to build.
Many of the tech industry’s biggest names, including Microsoft, Intel and IBM as well as Google, are jockeying for a position in quantum computing. And venture capitalists have invested more than $450 million into start-ups exploring the technology, according to a recent study.
China is spending $400 million on a national quantum lab and has filed almost twice as many quantum patents as the United States in recent years. The Trump administration followed suit this year with its own National Quantum Initiative, promising to spend $1.2 billion on quantum research, including computers.
A quantum machine, the result of more than a century’s worth of research into a type of physics called quantum mechanics, operates in a completely different manner from regular computers. It relies on the mind-bending ways some objects act at the subatomic level or when exposed to extreme cold, like the metal chilled to nearly 460 degrees below zero inside Google’s machine.
“We have built a new kind of computer based on some of the unusual capabilities of quantum mechanics,” said John Martinis, who oversaw the team that managed the hardware for Google’s quantum supremacy experiment. Noting the computational power, he added, “We are now at the stage of trying to make use of that power.”
On Monday, IBM fired a pre-emptive shot with a blog post disputing Google’s claim that its quantum calculation could not be performed by a traditional computer. The calculation, IBM argued, could theoretically be run on a current computer in less than two and a half days — not 10,000 years.
“This is not about final and absolute dominance over classical computers,” said Dario Gil, who heads the IBM research lab in Yorktown Heights, N.Y., where the company is building its own quantum computers.
Other researchers dismissed the milestone because the calculation was notably esoteric. It generated random numbers using a quantum experiment that can’t necessarily be applied to other things.
As its paper was published, Google responded to IBM’s claims that its quantum calculation could be performed on a classical computer. “We’ve already peeled away from classical computers, onto a totally different trajectory,” a Google spokesman said in a statement. “We welcome proposals to advance simulation techniques, though it’s crucial to test them on an actual supercomputer, as we have.”
“When you take a digital photo, you’re not actually shooting a photo anymore.
‘Most photos you take these days are not a photo where you click the photo and get one shot,’ said Ren Ng, a computer science professor at the University of California, Berkeley. ‘These days it takes a burst of images and computes all of that data into a final photograph.’
Computational photography has been around for years. One of the earliest forms was HDR, for high dynamic range, which involved taking a burst of photos at different exposures and blending the best parts of them into one optimal image.
Over the last few years, more sophisticated computational photography has rapidly improved the photos taken on our phones.”
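A toy version of this kind of exposure blending can be sketched in a few lines. This assumes grayscale pixel values in 0–255 and uses a Gaussian "well-exposedness" weight, a heavy simplification of real exposure-fusion and HDR pipelines:

```python
import math

def well_exposed_weight(p, mid=128, sigma=50.0):
    """Weight pixels near mid-gray highest; crushed shadows and blown
    highlights contribute little to the blend."""
    return math.exp(-((p - mid) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Blend a burst of bracketed exposures pixel-by-pixel, favoring the
    best-exposed value at each position (toy exposure fusion)."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposed_weight(p) for p in pixels]
        total = sum(weights) or 1.0
        fused.append(sum(p * w for p, w in zip(pixels, weights)) / total)
    return fused

dark = [10, 40, 90]       # underexposed frame: shadows crushed
bright = [120, 200, 250]  # overexposed frame: highlights blown
print([round(v) for v in fuse([dark, bright])])
```

Each output pixel leans toward whichever frame captured that region closest to mid-gray, which is the basic intuition behind blending "the best parts" of a bracket into one image.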
This technology is evident in Google’s Night Sight, which is capable of capturing low-light photos without a flash.
Why it’s hot:
In a world where the veracity of photographs and videos is coming into question because of digital manipulation, it’s interesting that alteration is now baked in.
Tencent Shows The Future Of Ads; Will Add Ads In Existing Movies, TV Shows
One of China’s largest online video platforms is setting out to use technology to integrate branded content into movies and TV shows from any place or era.
(Yes, a Starbucks on Tatooine…or Nike branded footwear for the first moonwalk.)
Why It’s Hot:
Potentially exponential expansion of available ad inventory
Increased targetability by interest, plus top-spin of borrowed interest
Additional revenue streams for content makers
New questions of the sanctity of creative vision, narrative intent and historical truth
Advertising is an integral part of any business and, with increasing competition, it’s more important than ever to be visible. Mirriad, a computer-vision and AI-powered platform company, recently announced its partnership with Tencent, which is about to change the advertising game. If you didn’t know, Tencent is one of the largest online video platforms in China. So how does it change the advertising game, you ask?
Mirriad’s technology enables advertisers to reach their target audience by integrating branded content (or ads) directly into movies and TV series. So, for instance, if an actor is holding just a regular cup of joe in a movie, this new API will enable Tencent to change that cup of coffee into a branded cup of coffee. Matthew Brennan, a speaker and a writer who specialises in analysing Tencent & WeChat shared a glimpse of how this tech works.
While we’re not sure if these ads will be clickable, it’ll still have a significant subconscious impact, if not direct. Marketers have long talked of mood marketing that builds a personal connection between the brand and the targeted user. So, with the ability to insert ads in crucial scenes and moments, advertisers will now be able to engage with their target users in a way that wasn’t possible before.
Mirriad currently has a 2-year contract with Tencent where they’ll trial exclusively on the latter’s video platform. But if trials are successful in that they don’t offer a jarring viewing experience, we can soon expect this tech to go mainstream.
Would be hard to summarize this in-depth article/expose from NYT, but…
A.I. Is Learning From Humans. Many Humans.
Artificial intelligence is being taught by thousands of office workers around the world. It is not exactly futuristic work.
A.I., most people in the tech industry would tell you, is the future of their industry, and it is improving fast thanks to something called machine learning. But tech executives rarely discuss the labor-intensive process that goes into its creation. A.I. is learning from humans. Lots and lots of humans.
Tech companies keep quiet about this work. And they face growing concerns from privacy activists over the large amounts of personal data they are storing and sharing with outside businesses.
Tens of thousands more workers, independent contractors usually working in their homes, also annotate data through crowdsourcing services like Amazon Mechanical Turk, which lets anyone distribute digital tasks to independent workers in the United States and other countries. The workers earn a few pennies for each label.
Based in India, iMerit labels data for many of the biggest names in the technology and automobile industries. It declined to name these clients publicly, citing confidentiality agreements. But it recently revealed that its more than 2,000 workers in nine offices around the world are contributing to an online data-labeling service from Amazon called SageMaker Ground Truth. Previously, it listed Microsoft as a client.
One day, who knows when, artificial intelligence could hollow out the job market. But for now, it is generating relatively low-paying jobs. The market for data labeling passed $500 million in 2018 and it will reach $1.2 billion by 2023, according to the research firm Cognilytica. This kind of work, the study showed, accounted for 80 percent of the time spent building A.I. technology.
This work can be so upsetting to workers that iMerit tries to limit how much of it they see. Pornography and violence are mixed with more innocuous images, and those labeling the grisly images are sequestered in separate rooms to shield other workers, said Liz O’Sullivan, who oversaw data annotation at an A.I. start-up called Clarifai and has worked closely with iMerit on such projects. “I would not be surprised if this causes post-traumatic stress disorder — or worse. It is hard to find a company that is not ethically deplorable that will take this on,” she said. “You have to pad the porn and violence with other work, so the workers don’t have to look at porn, porn, porn, beheading, beheading, beheading.”
Rising suicide rates in the US are disproportionately affecting 10-24 year-olds, with suicide as the second leading cause of death after unintentional injuries. It’s a complex and multifaceted topic, and one that leaves those whose lives are impacted wondering what they could have done differently, to recognize the signs and intervene.
Researchers are fast at work figuring out whether a machine learning algorithm might be able to use data from an individual’s mobile device to assess risk and predict an imminent suicide attempt – before there may even be any outward signs. This work is part of the Mobile Assessment for the Prediction of Suicide (MAPS) study, involving 50 teenagers in New York and Pennsylvania. If successful, the effort could lead to a viable solution to an increasingly troubling societal problem.
Why It’s Hot
We’re just scratching the surface of the treasure trove of insights that might be buried in the mountains of data we’re all generating every day. Our ability to understand people more deeply, without relying on “new” sources of data, will have implications for the experiences brands and marketers deliver.
Keeping an eye on subtle changes in common health risks is not an easy task for the average person. Yet, by the time real symptoms are obvious, it’s often too late to take the kind of action that would prevent a problem from snowballing.
Researchers at the University of Toronto have developed an app that appears capable of turning a 30-second selfie into a diagnostic tool for quantifying a range of health risks.
“Anura promises an impressively thorough physical examination for just half a minute of your time. Simply based on a person’s facial features, captured through the latest deep learning technology, it can assess heart rate, breathing, stress, skin age, vascular age, body mass index (yes, from your face!), cardiovascular disease, heart attack and stroke risk, cardiac workload, vascular capacity, blood pressure, and more.”
It’s easy to be skeptical about the accuracy of results possible from simply looking at a face for 30 seconds, but the researchers have demonstrated up to 96% accuracy in measuring blood pressure – and when the objective is to give people a way of realizing when it might be time to take action, that level of accuracy may actually be more than enough.
Why It’s Hot
For marketers looking to better identify the times, places and people for whom their products and services are likely to be most relevant, the convergence of biometrics with advanced algorithms and AI – all in a device most people carry around with them every day – could be a game-changer.
(This also brings up perennial issues of privacy & personal information, and trade-offs we need to make for the benefits emerging tech provides.)
Last summer, Australia began testing drones at their beaches to help spot distressed swimmers – acting as overhead lifeguards. Now the same company that created that technology, Ripper Group, is creating an algorithm for their drones to spot crocodiles.
While not frequent, crocodile attacks have gone up in recent years. And crocodiles are not easily identified when they spend up to 45 minutes under murky water. So the Ripper Group is using machine learning to train drones to distinguish crocodiles from 16 other marine animals, boats, and humans through a large database of images.
The drones also include warning sirens and flotation devices for up to four people, to assist in emergency rescue when danger is spotted.
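The alerting step downstream of the classifier could be sketched roughly as follows. The labels, confidence threshold, and human-review tier are invented for illustration, not Ripper Group's actual logic:

```python
def should_alert(scores, threshold=0.85):
    """Given a detector's per-class confidence scores for one frame,
    fire the warning siren only on a confident crocodile detection;
    a borderline hit gets flagged for human review instead."""
    croc = scores.get("crocodile", 0.0)
    if croc >= threshold:
        return "siren"
    if croc >= 0.5:
        return "review"
    return "ignore"

# Invented detector outputs (a real model distinguishes crocodiles from
# 16 other marine animals, boats, and humans):
print(should_alert({"crocodile": 0.92, "boat": 0.05}))  # siren
print(should_alert({"crocodile": 0.60, "shark": 0.30}))  # review
```

The two-tier threshold reflects the asymmetric cost here: a missed crocodile is far worse than a false siren, but constant false alarms would erode trust in the system.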
Why It’s Hot
Lifeguards are limited in what they can see and how quickly they can act. With the assistance of drones, beachgoers can stay carefree.
British data science company DataSparQ has developed facial recognition-based AI technology to prevent entitled bros from cutting the line at bars. This “technology puts customers in an ‘intelligently virtual’ queue, letting bar staff know who really was next” and who’s cutting the line.
“The system works by displaying a live video of everyone queuing on a screen above the bar. A number appears above each customer’s head — which represents their place in the queue — and gives them an estimated wait time until they get served. Bar staff will know exactly who’s next, helping bars and pubs to maximise their ordering efficiency and to keep the drinks flowing.”
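A minimal sketch of such a virtual queue, assuming the vision system hands us stable per-face IDs, and using an invented average service time (DataSparQ's actual wait estimation is not public):

```python
from collections import OrderedDict

class VirtualQueue:
    """Toy 'intelligently virtual' queue: customers are ordered by when the
    camera first saw them; estimated wait is position times average service time."""

    def __init__(self, avg_service_secs=90):
        self.avg = avg_service_secs
        self.order = OrderedDict()  # face_id -> first-seen timestamp

    def seen(self, face_id, timestamp):
        # setdefault keeps the first sighting; re-detections don't reorder.
        self.order.setdefault(face_id, timestamp)

    def position(self, face_id):
        return list(self.order).index(face_id) + 1

    def estimated_wait(self, face_id):
        return (self.position(face_id) - 1) * self.avg

    def serve_next(self):
        face_id, _ = self.order.popitem(last=False)  # FIFO: pop the front
        return face_id

q = VirtualQueue()
q.seen("A", 100); q.seen("B", 105); q.seen("A", 110)  # A's re-detection ignored
print(q.position("B"), q.estimated_wait("B"))  # 2 90
print(q.serve_next())  # A
```

The interesting engineering is all upstream of this sketch — reliably re-identifying the same face across frames in a crowded bar — but the queueing logic itself reduces to first-seen-first-served.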
Using AI to help solve these types of trifling irritations is better than having to tolerate other people’s sense of entitlement, though it also highlights the need to police rude behavior through something other than raising your kids well.
In what now seems inevitable, an online fashion retailer in India owned by an e-commerce startup that’s backed by Walmart is doing research with Deep Neural Networks to predict which items a buyer will return before they buy the item.
With this knowledge, they’ll be better able to predict their returns costs, but more interestingly, they’ll be able to incentivize shoppers to NOT return as much, using both loss and gain offers related to items in one’s cart.
The nuts and bolts of it is: the AI will assign a score to you based on what it determines your risk of returning a specific item to be. This data could be from your returns history, as well as less obvious data points, such as your search/shopping patterns elsewhere online, your credit score, and predictions about your size and fit based on aggregated data on other people.
Then it will treat you differently based on that assessment. If you’re put in a high risk category, you may pay more for shipping, or you may be offered a discount in order to accept a no-returns policy tailored just for you. It’s like car insurance for those under 25, but on hyper-drive. If you fit a certain demo, you may start paying more for everything.
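A hypothetical sketch of that scoring-and-offer logic follows. All weights, thresholds, and offer amounts are invented for illustration; the retailer's actual model is a deep neural network trained on far richer signals.

```python
def return_risk(history_return_rate, size_uncertainty, category_return_rate):
    """Toy risk score in [0, 1]: a weighted blend of hypothetical signals."""
    return min(1.0, 0.5 * history_return_rate
                    + 0.3 * size_uncertainty
                    + 0.2 * category_return_rate)

def checkout_offer(risk):
    """Translate risk into the loss/gain offers the article describes."""
    if risk >= 0.7:
        return {"shipping_surcharge": 4.99}        # loss framing: pay more
    if risk >= 0.4:
        return {"no_returns_discount_pct": 10}     # gain framing: discount
    return {}                                      # standard terms

# A frequent returner buying an item with uncertain fit:
risk = return_risk(0.8, 0.9, 0.5)
print(risk, checkout_offer(risk))
```

The branching is where the ethical questions live: the same score could fund a surcharge, a discount, or simply better size guidance, and those are very different customer experiences.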
Preliminary tests have shown promise in reducing return rates.
So many questions:
Is this a good idea from a brand perspective? If this becomes a trend, will retailers with cheap capital that can afford high-returns volume smear this practice as a way to gain market share?
Will this drive more people to better protect their data and “hide” themselves online? We might be OK with being fed targeted ads based on our data, but what happens when your data footprint and demo makes that jacket you wanted cost more?
Will this encourage more people to shop at brick and mortar stores to sidestep retail’s big brother? Or will brick and mortar stores find a way to follow suit?
How much might this information flow back up the supply chain, to product design, even?
Why it’s hot
Returns are expensive for retailers. They’re also bad for the environment, as many returns are just sent to the landfill, not to mention the carbon emissions from sending it back.
So, many retailers are scrambling to find the balance between reducing friction in the buying process by offering easy returns, on the one hand, and reducing the amount of actual returns, on the other.
There’s been talk of Amazon using predictive models to ship you stuff without you ever “buying” it. You return what you don’t want and it eventually learns what you want to the point where you just receive a box of stuff at intervals, and money is extracted from your bank account. This also might reduce fossil fuels.
How precise can these predictive models get? And how might people be able to thwart them? Is there a non-dystopian way to reduce returns?
Neuralink, the Elon Musk-led startup that the multi-entrepreneur founded in 2017, is working on technology that’s based around “threads,” which it says can be implanted in human brains with much less potential impact to the surrounding brain tissue versus what’s currently used for today’s brain-computer interfaces. “Most people don’t realize, we can solve that with a chip,” Musk said to kick off Neuralink’s event, talking about some of the brain disorders and issues the company hopes to solve.
Musk also said that, long-term, Neuralink really is about figuring out a way to “achieve a sort of symbiosis with artificial intelligence.” He went on to say, “This is not a mandatory thing. This is something you can choose to have if you want.”
For now, however, the aim is medical, and the plan is to use a robot that Neuralink has created that operates somewhat like a “sewing machine” to implant these threads, which are incredibly thin (between 4 and 6 μm, about one-third the diameter of the thinnest human hair), deep within a person’s brain tissue, where they will be capable of performing both read and write operations at very high data volume.
These probes are incredibly fine, and far too small to insert by human hand. Neuralink has developed a robot that can stitch the probes in through an incision. It’s initially cut to two millimeters, then dilated to eight millimeters, placed in and then glued shut. The surgery can take less than an hour.
No wires poking out of your head
“It uses an iPhone app to interface with the neural link, using a simple interface to train people how to use the link. It basically bluetooths to your phone,” Musk said.
Is there going to be a brain app store? Will we have ads in our brain? “Conceivably there could be some kind of app store thing in the future,” Musk said. While ads on phones are mildly annoying, ads in the brain could be a disaster waiting to happen.
Why It’s Hot
A.I.: you won’t be able to beat it, so join it. Interfacing our brains with machines may save us from an artificial intelligence doomsday scenario. According to Elon Musk, if we want to avoid becoming the equivalent of primates in an AI-dominated world, connecting our minds to computing capabilities is a solution that needs to be explored.
“This is going to sound pretty weird, but [we want to] achieve a symbiosis with artificial intelligence,” Musk said. “This is not a mandatory thing! This is a thing that you can choose to have if you want. I think this is going to be something really important at a civilization-scale level. I’ve said a lot about A.I. over the years, but I think even in a benign A.I. scenario we will be left behind.”
Think about the kind of “straight from the brain” data we would have at our disposal, and how we might use it.
Amazon is rolling out StyleSnap, an AI-enabled shopping feature that helps you shop from a photo. Consumers upload images to the Amazon app, which considers factors like brand, price, and reviews to recommend similar items.
Amazon has been able to leverage data from brands sold on its site to develop products that are good enough or close enough to the originals, usually at lower price points, and thereby gain an edge, but it’s still only a destination for basics like T-shirts and socks. With StyleSnap, Amazon is hoping to further crack online fashion retail.
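Amazon hasn’t published how StyleSnap works under the hood, but photo-based recommendation is commonly built on image embeddings plus a similarity search, with metadata like ratings used to re-rank results. A minimal sketch of that pattern, with an entirely made-up catalog and toy embedding vectors:

```python
import numpy as np

# Hypothetical catalog: each item has a precomputed image embedding plus
# metadata used for re-ranking (names, prices, and ratings are invented).
CATALOG = {
    "floral-dress": {"vec": np.array([0.9, 0.1, 0.0]), "rating": 4.6},
    "denim-jacket": {"vec": np.array([0.1, 0.9, 0.1]), "rating": 4.2},
    "summer-dress": {"vec": np.array([0.8, 0.2, 0.1]), "rating": 4.8},
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(query_vec, top_k=2, rating_weight=0.1):
    """Rank catalog items by visual similarity, nudged by review score."""
    scored = []
    for name, item in CATALOG.items():
        score = cosine(query_vec, item["vec"]) + rating_weight * (item["rating"] / 5.0)
        scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]

# A query photo that embeds close to the two dresses:
print(recommend(np.array([0.85, 0.15, 0.05])))
```

In a real system the embeddings would come from a neural network trained on fashion imagery, and the re-ranking would fold in price, brand, and availability as well.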
Why It’s Hot
Snapping and sharing are already part of retail culture, and now Amazon is creating a simple, seamless way of adding shopping and purchasing to this ubiquitous habit. The combination of AI and user reviews in its algorithm could change the way we shop: recommendations would be based not only on the look of an item, but also on how customers experience it.
Interest in Artificial Intelligence (AI) has increased dramatically in recent years, and AI has been successfully applied to societal challenges. It has great potential to deliver tremendous social good in the future.
Real-life examples of AI are already being applied in about one-third of these use cases, albeit in relatively small tests. They range from diagnosing cancer to helping blind people navigate their surroundings, identifying victims of online sexual exploitation, and aiding disaster-relief efforts.
AI has a broad potential across a range of social domains.
These include maximizing student achievement and improving teachers’ productivity. For example, adaptive-learning technology could recommend content to students based on their past success and engagement with the material.
Public and Social Sector
With an emphasis on currently vulnerable populations, these domains involve opening access to economic resources and opportunities, including jobs, the development of skills, and market information. For example, AI can be used to detect plant damage early through low-altitude sensors, including smartphones and drones, to improve yields for small farms.
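The detection method isn’t specified, but a common technique for spotting crop stress from low-altitude imagery is a vegetation index such as NDVI, computed per pixel from near-infrared and red reflectance; healthy vegetation scores high, damaged plants score low. A sketch with synthetic pixel values and an arbitrary threshold:

```python
import numpy as np

# Synthetic 2x2 near-infrared and red reflectance bands from a low-altitude
# camera (real sensors produce large rasters; these values are made up).
nir = np.array([[0.8, 0.7], [0.6, 0.2]])
red = np.array([[0.1, 0.2], [0.3, 0.3]])

# NDVI = (NIR - Red) / (NIR + Red), per pixel.
ndvi = (nir - red) / (nir + red)

# Flag likely-damaged pixels; the 0.3 threshold is illustrative only.
damaged = ndvi < 0.3
print(ndvi.round(2))
print(damaged)
```

Flagged regions could then be surfaced to a farmer through a smartphone app so treatment is targeted rather than field-wide.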
Sustaining biodiversity and combating the depletion of natural resources, pollution, and climate change are challenges in this domain.
Some of the issues we currently face with social data
Data needed for social-impact uses may not be easily accessible
Much of the data essential or useful for social-good applications are in private hands or in public institutions that might not be willing to share their data. These data owners include telecommunications and satellite companies; social-media platforms; financial institutions (for details such as credit histories); hospitals, doctors, and other health providers (medical information); and governments (including tax information for private individuals).
The expert AI talent needed to develop and train AI models is in short supply
The complexity of problems increases significantly when use cases require several AI capabilities to work together cohesively, as well as multiple different data-type inputs. Progress in developing solutions for these cases will thus require high-level talent, for which demand far outstrips supply and competition is fierce.
‘Last-mile’ implementation challenges are also a significant bottleneck for AI deployment for social good
Organizations may also have difficulty interpreting the results of an AI model. Even if a model achieves a desired level of accuracy on test data, new or unanticipated failure cases often appear in real-life scenarios.
Next time you pull up to a McDonald’s drive-thru, you might see exactly what you’re craving front and center. Menus will be personalized based on factors like weather, local events, restaurant traffic, and trending items.
This new technology will be powered by McDonald’s acquisition of personalization company Dynamic Yield. The menu can be programmed against triggers, with scenarios such as offering ice cream and iced coffee when the temperature rises above 80 degrees, or pushing hot chocolate when it starts to rain.
Once a person starts ordering, the menu will offer add-ons based on the previous selections made. For example, a person ordering a salad may be offered a smoothie instead of fries.
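Dynamic Yield’s actual engine is proprietary, but the scenarios described above read like simple rules over context signals. A minimal sketch of that trigger logic (the function name and rule encoding are hypothetical; the thresholds and items come from the scenarios above):

```python
def featured_items(temp_f, raining, order_so_far=()):
    """Pick menu promotions from context signals and the order in progress."""
    promos = []
    # Weather-driven triggers.
    if temp_f > 80:
        promos += ["ice cream", "iced coffee"]
    if raining:
        promos.append("hot chocolate")
    # Add-ons keyed off selections already made.
    if "salad" in order_so_far:
        promos.append("smoothie")
    return promos

print(featured_items(temp_f=85, raining=False, order_so_far=("salad",)))
```

In practice these rules would also weigh local events, restaurant traffic, and trending items, as the post notes, and would be tuned over time against what actually lifts order size.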
Why It’s Hot
McDonald’s already builds off of customers’ cravings. Now that these cravings can be predicted, personalized, and optimized over time, there’s a high likelihood that customers will be ordering more at the drive-thru window.