Activate This ‘Bracelet of Silence,’ and Alexa Can’t Eavesdrop

Last year, Ben Zhao decided to buy an Alexa-enabled Echo speaker for his Chicago home. Mr. Zhao just wanted a digital assistant to play music, but his wife, Heather Zheng, was not enthused. “She freaked out,” he said.

Ms. Zheng characterized her reaction differently. First she objected to having the device in their house, she said. Then, when Mr. Zhao put the Echo in a work space they shared, she made her position perfectly clear: “I said, ‘I don’t want that in the office. Please unplug it. I know the microphone is constantly on.’”

Mr. Zhao and Ms. Zheng are computer science professors at the University of Chicago, and they decided to channel their disagreement into something productive. With the help of an assistant professor, Pedro Lopes, they designed a piece of digital armor: a “bracelet of silence” that will jam the Echo or any other microphones in the vicinity from listening in on the wearer’s conversations.

The bracelet is like an anti-smartwatch, both in its cyberpunk aesthetic and in its purpose of defeating technology. A large, somewhat ungainly white cuff with spiky transducers, the bracelet has 24 speakers that emit ultrasonic signals when the wearer turns it on. The sound is imperceptible to most ears, with the possible exception of young people and dogs, but nearby microphones will detect the high-frequency sound instead of other noises.
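The jamming idea can be sketched in a few lines: sum ultrasonic tones with randomized phases, one per transducer, so nearby microphones record the high-frequency signal on top of any speech. This is a toy illustration, not the researchers' actual signal design; the frequency, sample rate, and phase scheme below are all assumptions.

```python
import math
import random

def jammer_signal(n_speakers=24, freq_hz=25_000, sample_rate=96_000,
                  duration_s=0.001, seed=0):
    """Sum of ultrasonic tones with randomized phases, one per speaker.

    Every parameter here is an illustrative guess; the article does not
    specify the bracelet's actual frequencies or modulation scheme.
    """
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n_speakers)]
    n = int(sample_rate * duration_s)
    return [
        sum(math.sin(2 * math.pi * freq_hz * t / sample_rate + p)
            for p in phases)
        for t in range(n)
    ]

sig = jammer_signal()
# The tone sits above ~20 kHz, so most humans can't hear it, but a
# microphone sampling fast enough captures it along with any speech.
print(len(sig), max(abs(s) for s in sig) > 0)
```

Randomizing the phases keeps the combined signal from forming a predictable pattern that a recording could simply filter out.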

“It’s so easy to record these days,” Mr. Lopes said. “This is a useful defense. When you have something private to say, you can activate it in real time. When they play back the recording, the sound is going to be gone.”

During a phone interview, Mr. Lopes turned on the bracelet, resulting in static-like white noise for the listener on the other end.

As American homes are steadily outfitted with recording equipment, the surveillance state has taken on an air of domesticity. Google and Amazon have sold millions of Nest and Ring security cameras, while an estimated one in five American adults now owns a smart speaker. Knocking on someone’s door or chatting in someone’s kitchen now involves the distinct possibility of being recorded.

It all presents new questions of etiquette about whether and how to warn guests that their faces and words could end up on a tech company’s servers, or even in the hands of strangers.

By design, smart speakers have microphones that are always on, listening for so-called wake words like “Alexa,” “Hey, Siri,” or “O.K., Google.” Only after hearing that cue are they supposed to start recording. But contractors hired by device makers to review recordings for quality reasons report hearing clips that were most likely captured unintentionally, including drug deals and sex.

Two Northeastern University researchers, David Choffnes and Daniel Dubois, recently played 120 hours of television for an audience of smart speakers to see what activates the devices. They found that the machines woke up dozens of times and started recording after hearing phrases similar to their wake words.
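The false-trigger finding is easy to demonstrate with a crude stand-in for the device's acoustic matcher: any matcher that accepts near-misses on the wake word will fire on similar-sounding phrases. The string-similarity method and threshold below are illustrative assumptions, not how any real assistant works.

```python
from difflib import SequenceMatcher

WAKE_WORDS = ["alexa", "hey siri", "ok google"]

def sounds_like_wake_word(phrase, threshold=0.7):
    """Crude text-similarity stand-in for acoustic wake-word matching.
    The threshold and method are illustrative only."""
    phrase = phrase.lower()
    return any(
        SequenceMatcher(None, phrase, w).ratio() >= threshold
        for w in WAKE_WORDS
    )

# A phrase from a TV show that merely resembles "alexa" crosses the
# threshold and "wakes" the matcher, mirroring the Northeastern findings.
print(sounds_like_wake_word("alexi"))
print(sounds_like_wake_word("good morning"))
```

The tension is the same one device makers face: a looser threshold catches the wake word reliably but also fires on innocent dialogue.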

“People fear that these devices are constantly listening and recording you. They’re not,” Mr. Choffnes said. “But they do wake up and record you at times when they shouldn’t.”

Rick Osterloh, Google’s head of hardware, recently said homeowners should disclose the presence of smart speakers to their guests. “I would, and do, when someone enters into my home, and it’s probably something that the products themselves should try to indicate,” he told the BBC last year.

The “bracelet of silence” is not the first device invented by researchers to stuff up digital assistants’ ears. In 2018, two designers created Project Alias, an appendage that can be placed over a smart speaker to deafen it. But Ms. Zheng argues that a jammer should be portable to protect people as they move through different environments, given that you don’t always know where a microphone is lurking.

At this point, the bracelet is just a prototype. The researchers say that they could manufacture it for as little as $20, and that a handful of investors have asked them about commercializing it.

Source: NY Times

Why It’s Hot

Voice tech spawns voice protection tech. We can assume innovation to protect us from innovation is a trend worth following.

Mozilla’s holiday shopping guide rates creepiness of connected products with animated emoji

“Be Smart. Shop Safe.” That’s the tagline for Mozilla’s initiative to spread awareness about the privacy status and risks of new connected products — and promote its brand as a privacy leader.

The privacy of physical connected products is new for many people, so getting people to consider privacy before impulsively slamming the BUY button is a big deal for an organization focused on privacy. Mozilla needed to make their report interesting to grab people’s attention.

Smart but simple UX and strong copy make this happen.

A privacy-focused shopping guide lets you see which products meet Mozilla’s minimum privacy standards.

An animated emoji shows how “creepy” users have said various products are, regardless of their privacy rating.

Why it’s hot:

Is this the beginning of, if not a backlash, at least a recalibration of the excitement about smart IoT products?

Mozilla frames itself as the authority on the growing concern of privacy, and getting into the product-rating game drives a new kind of awareness of physical products, which many people have not had to consider until now.

Gathering data on creepiness sentiment is an interesting (and fun) approach to consumer metrics. Users can vote on the creepiness scale, but you have to give your email to see the results.

Source: Mozilla

Inside Amazon’s plan for Alexa to run your entire life

The creator of the famous voice assistant dreams of a world where Alexa is everywhere, anticipating your every need.

Speaking with MIT Technology Review, Rohit Prasad, Alexa’s head scientist, revealed further details about where Alexa is headed next. The crux of the plan is for the voice assistant to move from passive to proactive interactions. Rather than wait for and respond to requests, Alexa will anticipate what the user might want. The idea is to turn Alexa into an omnipresent companion that actively shapes and orchestrates your life. This will require Alexa to get to know you better than ever before.

In June at the re:Mars conference, he demoed [view from 53:54] a feature called Alexa Conversations, showing how it might be used to help you plan a night out. Instead of manually initiating a new request for every part of the evening, you would need only to begin the conversation—for example, by asking to book movie tickets. Alexa would then follow up to ask whether you also wanted to make a restaurant reservation or call an Uber.

A more intelligent Alexa

Here’s how Alexa’s software updates will come together to execute the night-out planning scenario. In order to follow up on a movie ticket request with prompts for dinner and an Uber, a neural network learns—through billions of user interactions a week—to recognize which skills are commonly used with one another. This is how intelligent prediction comes into play. When enough users book a dinner after a movie, Alexa will package the skills together and recommend them in conjunction.
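The co-occurrence learning described above can be sketched with simple counting: tally how often two skills appear in the same session and keep the frequent pairs. This is a toy stand-in for the neural model the article describes; Alexa learns from billions of interactions, not a counter, and the session format here is invented.

```python
from collections import Counter
from itertools import combinations

def companion_skills(sessions, min_count=2):
    """Count how often two skills co-occur within a session and return
    the pairs seen at least `min_count` times."""
    pairs = Counter()
    for session in sessions:
        for a, b in combinations(sorted(set(session)), 2):
            pairs[(a, b)] += 1
    return {pair for pair, n in pairs.items() if n >= min_count}

# Hypothetical interaction logs: each list is one user session.
sessions = [
    ["movie_tickets", "restaurant", "ride_share"],
    ["movie_tickets", "restaurant"],
    ["weather"],
    ["movie_tickets", "ride_share"],
]
print(companion_skills(sessions))
```

Once a pair like movie tickets and restaurant booking clears the threshold, the assistant can proactively suggest the second skill after the first is used.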

But reasoning is required to know what time to book the Uber. Taking into account your and the theater’s location, the start time of your movie, and the expected traffic, Alexa figures out when the car should pick you up to get you there on time.
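At its core, the Uber-timing reasoning is working backwards from a deadline. A minimal sketch, assuming a fixed traffic-adjusted travel estimate and a safety buffer (both values are invented for illustration):

```python
from datetime import datetime, timedelta

def pickup_time(movie_start, travel_minutes, buffer_minutes=10):
    """Work backwards from the showtime: subtract expected travel time
    plus a buffer. The buffer and inputs are illustrative; the article
    doesn't say how Alexa actually weights these factors."""
    return movie_start - timedelta(minutes=travel_minutes + buffer_minutes)

start = datetime(2019, 11, 8, 19, 30)  # a 7:30 pm showing
print(pickup_time(start, travel_minutes=25))
```

A real system would replace the fixed `travel_minutes` with a live traffic estimate between the user's location and the theater.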

Prasad imagines many other scenarios that might require more complex reasoning. You could imagine a skill, for example, that would allow you to ask your Echo Buds where the tomatoes are while you’re standing in Whole Foods. The Buds will need to register that you’re in the Whole Foods, access a map of its floor plan, and then tell you the tomatoes are in aisle seven.

In another scenario, you might ask Alexa through your communal home Echo to send you a notification if your flight is delayed. When it’s time to do so, perhaps you are already driving. Alexa needs to realize (by identifying your voice in your initial request) that you, not a roommate or family member, need the notification—and, based on the last Echo-enabled device you interacted with, that you are now in your car. Therefore, the notification should go to your car rather than your home.

This level of prediction and reasoning will also need to account for video data as more and more Alexa-compatible products include cameras. Let’s say you’re not home, Prasad muses, and a Girl Scout knocks on your door selling cookies. The Alexa on your Amazon Ring, a camera-equipped doorbell, should register (through video and audio input) who is at your door and why, know that you are not home, send you a note on a nearby Alexa device asking how many cookies you want, and order them on your behalf.

To make this possible, Prasad’s team is now testing a new software architecture for processing user commands. It involves filtering audio and visual information through many more layers. First Alexa needs to register which skill the user is trying to access among the roughly 100,000 available. Next it will have to understand the command in the context of who the user is, what device that person is using, and where. Finally it will need to refine the response on the basis of the user’s previously expressed preferences.
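The three layers described above could be sketched as a simple pipeline. Every name, field, and data shape here is hypothetical; this only illustrates the order of the filtering steps.

```python
def route_request(utterance, user, device, skills):
    """Sketch of the layered architecture: (1) pick a candidate skill,
    (2) apply user/device context, (3) refine with stored preferences."""
    # Layer 1: skill selection (here, a naive keyword match standing in
    # for selection among ~100,000 skills)
    candidates = [s for s in skills if s["keyword"] in utterance]
    if not candidates:
        return None
    skill = candidates[0]
    # Layer 2: context -- who is asking, and on which device
    response = {"skill": skill["name"], "user": user["name"],
                "target_device": device}
    # Layer 3: refine using the user's previously expressed preferences
    response.update(user.get("preferences", {}).get(skill["name"], {}))
    return response

skills = [{"name": "flight_status", "keyword": "flight"}]
user = {"name": "ana",
        "preferences": {"flight_status": {"units": "local_time"}}}
print(route_request("is my flight delayed", user, "car_echo", skills))
```

In the flight-delay scenario from the article, layer 2 is what would swap `target_device` from the home Echo to the car.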

Why It’s Hot:  “This is what I believe the next few years will be about: reasoning and making it more personal, with more context,” says Prasad. “It’s like bringing everything together to make these massive decisions.”

Firefox founder launches privacy-first browser that rewards users for allowing brands access to them

The beta version has been out for a while, but “Today marks the official launch of Brave 1.0, a free open-source browser. The beta version has already drawn 8 million monthly users, but now, the full stable release is available for Windows, macOS, Linux, Android, and iOS.

Brave promises to prioritize security by blocking third-party ads, trackers, and autoplay videos automatically. So you don’t need to go into your settings to ensure greater privacy, though you can adjust those settings if you want to.” (The Verge)

Internet heavy hitter Brendan Eich (creator of JavaScript/co-founder of Firefox/Mozilla) just launched the stable version of new privacy-focused Brave browser, employing the idea of a Basic Attention Token (BAT), which allows users to be paid in crypto-currency tokens for allowing brands access to their eyeballs. Eich calls it “a new system for properly valuing user attention.”


Why it’s hot:

1. As tech giants increasingly impinge on privacy and gobble up every imaginable byte of data about everyone in exchange for “a better user experience,” Brave is claiming to have found a non-zero-sum game that everyone (users, advertisers, and publishers) can benefit from:

  • Users get lots more control over the ads they see and get rewarded with tokens for allowing ads.
  • Advertisers get more precise and engaged audiences, so in theory, better ROAS.
  • Content creators get more control over their publishing and their income. And users can tip content creators on a subscription-style basis not unlike Patreon.

That’s the idea, at least.

2. Its look and feel is very similar to Chrome, so migrating to Brave may be smooth enough to encourage more people to abandon the surveillance-state-as-a-service (SSaaS) that Google is verging on.
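The attention-based reward idea in point 1 can be sketched as a proportional split: divide a pool of tokens among sites according to the attention each received. Brave's real accounting (monthly cycles, minimum-visit thresholds) is more involved; this is a toy model with invented site names.

```python
def split_rewards(total_bat, attention_seconds):
    """Divide a BAT grant among creators in proportion to the attention
    each received -- the basic idea behind Brave Rewards, simplified."""
    total = sum(attention_seconds.values())
    return {site: total_bat * t / total
            for site, t in attention_seconds.items()}

print(split_rewards(10.0, {"news.example": 600,
                           "blog.example": 300,
                           "video.example": 100}))
```

A site that held 60% of your attention gets 60% of the monthly grant, which is what makes the token a proxy for "properly valuing user attention."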

Source: The Verge

Orwellabama? Crimson Tide Track Locations to Keep Students at Game

Coach Nick Saban gets peeved at students leaving routs early. An app ties sticking around to playoff tickets, but also prompts concern from students and privacy watchdogs.

The Alabama football coach has long been peeved that the student section at Bryant-Denny Stadium empties early. So this season, the university is rewarding students who attend games — and stay until the fourth quarter — with an alluring prize: improved access to tickets to the SEC championship game and to the College Football Playoff semifinals and championship game, which Alabama is trying to reach for the fifth consecutive season.

But to do this, Alabama is taking an extraordinary, Orwellian step: using location-tracking technology from students’ phones to see who skips out and who stays. “It’s kind of like Big Brother,” said Allison Isidore, a graduate student in religious studies from Montclair, N.J.

It also seems inevitable in an age when tech behemoths like Facebook, Google and Amazon harvest data from phones, knowing where users walk, what they watch and how they shop. Alabama isn’t the only college tapping into student data; the University of North Carolina uses location-tracking technology to see whether its football players and other athletes are in class.

Greg Byrne, Alabama’s athletic director, said privacy concerns rarely came up when the program was being discussed with other departments and student groups. Students who download the Tide Loyalty Points app will be tracked only inside the stadium, he said, and they can close the app — or delete it — once they leave the stadium. “If anybody has a phone, unless you’re in airplane mode or have it off, the cellular companies know where you are,” he said.

But Adam Schwartz, a lawyer for the Electronic Frontier Foundation, a privacy watchdog, said it was “very alarming” that a public university — an arm of the government — was tracking its students’ whereabouts.

“Why should packing the stadium in the fourth quarter be the last time the government wants to know where students are?” Schwartz said, adding that it was “inappropriate” to offer an incentive for students to give up their privacy. “A public university is a teacher, telling students what is proper in a democratic society.”

The creator of the app, FanMaker, runs apps for 40 colleges, including Clemson, Louisiana State and Southern California, which typically reward fans with gifts like T-shirts. The app it created for Alabama is the only one that tracks the locations of its students. That Alabama would want it is an example of how even a powerhouse program like the Crimson Tide is not sheltered from college football’s decline in attendance, which sank to a 22-year low last season.

The Tide Loyalty Points program works like this: Students, who typically pay about $10 for home tickets, download the app and earn 100 points for attending a home game and an additional 250 for staying until the fourth quarter. Those points augment ones they garner mostly from progress they have made toward their degrees — 100 points per credit hour. (A regular load would be 15 credits per semester, or 1,500 points.)
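The points arithmetic described above is simple enough to sketch; the figures come straight from the article:

```python
def tide_loyalty_points(credit_hours, games_attended, stayed_fourth_quarter):
    """Points as described: 100 per credit hour earned toward a degree,
    100 per home game attended, plus 250 for each game stayed through
    the fourth quarter."""
    return (100 * credit_hours
            + 100 * games_attended
            + 250 * stayed_fourth_quarter)

# A student with a regular 15-credit load who attends 3 games and
# stays until the fourth quarter at 2 of them:
print(tide_loyalty_points(15, 3, 2))  # 2300
```

The weighting shows the incentive design: staying late at a single game (250 points) is worth more than attending two extra games and leaving early.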

The students themselves had no shortage of proposed solutions.

“Sell beer; that would keep us here,” said Harrison Powell, a sophomore engineering major from Naples, Fla.

“Don’t schedule cupcakes,” said Garrett Foster, a senior management major from Birmingham, referring to Alabama’s ritually soft non-conference home schedule, which this year includes Western Carolina, Southern Mississippi and New Mexico State. (Byrne has set about beefing it up, scheduling home-and-home series with Texas, Wisconsin, Oklahoma and Notre Dame, but those don’t start until 2022.)

In the meantime, there is also time for students to solve their own problems, which is, after all, the point of going to college. An Alabama official figured it would not be long before pledges are conscripted to hold caches of phones until the fourth quarter so their fraternity brothers could leave early.

“Without a doubt,” said Wolf, the student from Philadelphia. “I haven’t seen it yet, but it’s the first game. There will be workarounds for sure.”

Whether the app, with its privacy concerns, early bugs and potential loopholes, would do its job well enough to please Saban was not a subject he was willing to entertain as the sun began to set on Saturday. He was looking ahead to the next opponent: South Carolina.


Why It’s Hot:  

Another example of a brand/institution using gamification to influence behavior, this takes it a step further – pushing towards the edge of the privacy conversation, and perhaps leading us all to consider what might be an acceptable “exchange rate” for personal information.

Phone a Friend: a mobile app for predicting teen suicide attempts

Rising suicide rates in the US are disproportionately affecting 10-24 year-olds, with suicide as the second leading cause of death after unintentional injuries. It’s a complex and multifaceted topic, and one that leaves those whose lives are impacted wondering what they could have done differently, to recognize the signs and intervene.

Researchers are fast at work figuring out whether a machine learning algorithm might be able to use data from an individual’s mobile device to assess risk and predict an imminent suicide attempt – before there may even be any outward signs. This work is part of the Mobile Assessment for the Prediction of Suicide (MAPS) study, involving 50 teenagers in New York and Pennsylvania. If successful, the effort could lead to a viable solution to an increasingly troubling societal problem.

Why It’s Hot

We’re just scratching the surface of the treasure trove of insights that might be buried in the mountains of data we’re all generating every day. Our ability to understand people more deeply, without relying on “new” sources of data, will have implications for the experiences brands and marketers deliver.

Smart TVs track more than you think

There is growing concern over online data and user privacy, but people’s data is also being collected from their living rooms via their televisions.

Samba TV is a company that tracks viewer information to make personalized show recommendations. Samba TV’s software has been implemented by a dozen TV brands, including Sony, Sharp, TCL and Philips. When people set up their Smart TVs, a screen urges them to enable a service called Samba Interaction TV, saying it recommends shows and provides special offers. It doesn’t, however, tell the consumer how much of their data it’s collecting.

Samba TV can track everything that appears on TV on a second-by-second basis. It offers advertisers the ability to base their targeting on whether people watch conservative or liberal media outlets or which party’s presidential debate they watched. The big draw is that Samba TV can identify other devices in the household that share the TV’s internet connection. In turn, advertisers can pay the company to direct ads to other gadgets in a home after their TV commercials play.
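The cross-device matching described above boils down, in its simplest form, to grouping devices seen behind the same public IP address. This sketch uses invented device names and documentation IP addresses; Samba TV's actual matching is certainly more sophisticated.

```python
from collections import defaultdict

def household_devices(observations):
    """Group devices observed behind the same public IP -- the simplest
    version of household-level cross-device matching. Input is a list
    of (device_id, public_ip) sightings."""
    by_ip = defaultdict(set)
    for device, ip in observations:
        by_ip[ip].add(device)
    return dict(by_ip)

obs = [("smart_tv", "203.0.113.7"), ("phone", "203.0.113.7"),
       ("laptop", "203.0.113.7"), ("other_tv", "198.51.100.2")]
print(household_devices(obs))
```

Once the TV and the phone are known to share a connection, an ad seen on the TV can be followed up with an ad targeted to the phone.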

Why it’s hot:

Most people think their data is only at risk while surfing the web. But that couldn’t be further from the truth. By agreeing to Samba TV’s terms, most people are deceived into thinking they’re only agreeing to a minor recommendation feature, when they’re actually giving the company access to all their devices.


Amazon is taking a photo of your front door to show you where your package is

Amazon is delivering more than just your packages these days — it is also delivering photos. As part of the company’s efforts to make it even easier for you to receive your online orders, Amazon has taken to taking photos of your doorway to show exactly where your packages are being deposited. This will hopefully cut down on customer confusion, and also serves as photographic evidence of the successful delivery of your precious cargo.

Amazon’s new picture-taking practice might also allow delivery folks to leave packages in more inconspicuous spots, like behind a bush or in a flower pot, as USA Today notes. Given the rise in package stealers, having a safe and somewhat surprising place to put your packages may not be such a bad idea, and being able to document where that place is makes things easier.

The new service is called Amazon Logistics Photo on Delivery and, according to a company spokesperson, is “one of many delivery innovations we’re working on to improve convenience for customers.” Amazon Logistics in and of itself is one of those delivery innovations — it’s an Amazon-owned delivery network that is completely separate from other delivery services like FedEx or UPS. And while the Photo on Delivery program has been rolling out in batches for the last six months, it’s becoming more widespread. Now, folks who receive packages in the Seattle, San Francisco, and northern Virginia metro areas will likely be receiving photographic notifications of their delivery’s safe arrival.

 Of course, if the thought of someone taking a photo of your property doesn’t really sit all that well with you, don’t worry — Amazon is giving you a way to opt out of the feature, too. Simply head over to the Amazon website and navigate to the help and customer service tab. From there, you should be able to tell Amazon folks not to take an unapproved photo (assuming the photo-taking option is even available to you). But if you’re interested in seeing exactly where your packages are at the end of the day, Photo on Delivery may be the feature you have been waiting for.
Why It’s Hot
Could this be data collection disguised as innovation? Or a way to cut down on false claims of lost packages and package stealing? In any case, I have always wanted my packages to be more inconspicuously placed, and now they can be. Plus, why not gamify it? “Alexa, where’s my package?”

90 terabytes of facial recognition


China is building the world’s most powerful facial recognition system with the power to identify any one of its 1.3 billion citizens within three seconds. The government states the system is being developed for security and official uses such as tracking wanted suspects and public administration and that commercial application using information sourced from the database will not be allowed under current regulations.

“[But] a policy can change due to the development of the economy and increasing demand from society,” said Chen Jiansheng, an associate professor at the department of electrical engineering at Tsinghua University and a member of the ministry’s Committee of Standardisation overseeing technical developments in police forces.

Chinese companies are already taking the commercial application of facial recognition technology to new heights. Students can now enter their university halls, travellers can board planes without using a boarding pass and diners can pay for a meal at KFC. Some other restaurants have even offered discounts to customers based on a machine that ranks their looks according to an algorithm. Customers with “beautiful” characteristics – such as symmetrical features – get better scores than those with noses that are “too big” or “too small” and those that get better scores will get cheaper meals.

More at South China Morning Post and ABS-CBN.

Why It’s Hot
Another weekly installment of balancing convenience and claims of safety with privacy and ethics. China is pushing us faster than most other countries to address this question sooner rather than later.

Pay for Art Using Your… Screenshots?

Customers gave up personal data for art pieces to take home at this London based pop-up store.

“London based street artist Ben Eine recently opened a pop-up shop for his work; instead of paying with cash, however, customers at this the Data Dollar Store have to give up their personal information. The shop, opened in collaboration with cybersecurity company Kaspersky Lab, is meant to make customers reevaluate the information they often freely give away by the social media channels they are using. “I’m concerned about how that information is used and why are we not rewarded for giving this information away,” Eine told Cnet. “Companies use that information and target us to sell products, to feed us information that we wouldn’t necessarily look at.”

The store was set up in a temporary space at the Old Street Underground station in London. When visitors entered the store, an employee showed them the purchasing options:

  • Giving up three photos or screenshots of your recent text or email conversations for a mug
  • Giving up the last three photos on your Camera Roll, or the last three text messages sent, for a T-shirt
  • Handing your phone over to the assistant to select any five photos to keep, publicly displayed on a large TV screen in the store for the next two days (you can attempt to barter for which photos they’ll pick, but the choice is up to them).

The store is meant to raise awareness about the sometimes risky informational exchanges that are happening all the time online. According to Kaspersky Lab, while 74 percent of survey respondents were unconcerned about data security, 41 percent were unprotected from potential threats and 20 percent have been affected by cybercrime.

Source: PSFK

The Winners and Losers of YouTube’s Brand Safety Crisis

The ripple of the video giant’s woes has grown so great that some have predicted the impact from major brands could cost YouTube $750 million. Some, it seems, are happy when such a chink in the armor is exposed, but there are myriad stakeholders, each with their own perspective. With that amount of money — as well as brand reputation and confidence — at stake, there are going to be some winners and losers. Here they are:


Old-fashioned publishers

These are the classic media players who started losing their lunch the second Google started owning the internet. One could imagine publishers grinning ear to ear, thinking, “Told ya so. Quality content isn’t so easy.” They can make a more convincing case that knowing the content and the audience actually is still important.

This issue could revive a shift to high-quality, direct-bought content, where brands have the most control but, in some cases, pay a premium for it.


Streaming platforms

Anyone selling streaming ads is in a good position — Sling, Dish, Hulu, Roku, the TV networks and anyone else with a digital video platform will be showing off their highly curated content. These shows and programming will look pretty good to anyone with a heightened interest in knowing exactly where their messages will appear.

Tech tools & 3rd Party Verification Partners

Brands have called for digital platforms like Facebook and Google to clean up the media supply chain and to be more transparent with data. The brand safety issue on YouTube is yet another bit of leverage to force more cooperation.

Independent third parties like Integral Ad Science, Double Verify, Moat and others will find more brands at their doorsteps looking for ways to ensure their ads appear near quality content.

The agency

One of the most important roles for agencies was helping brands make sure their ads didn’t show up in the wrong place by intimately knowing the targeting, brand safety protections and best practices of each channel. Well, now those services are increasingly valuable.


Net neutrality

When the Trump administration makes further moves to undo net neutrality, as many anticipate based on current momentum repealing FCC consumer protections, Google’s ability to defend it in idealistic terms could be undermined by all the talk about serving ads on terrorist video.


Programmatic advertising

It took a long time for programmatic to stop being a dirty word. Programmatic advertising was once considered the least controlled, lowest-quality ad inventory at the lowest price. Now brands have fresh reason to pull back from blind, untargeted buying that lacks transparency.


YouTube has said that part of its solution is to implement stricter community standards, which could mean more bans and more ads blocked from creators’ videos, impacting their earnings. YouTube could be quicker to cut a channel at the smallest offense now that brands are watching closely.

Advertisers still on YouTube — this is a tricky one to classify, and it’s too early to say. We’ll have to see how the video platform reacts over time to increasing pressure to allow verification partners and data trackers access within the garden’s walls.

Why It’s (Still) Hot:

This topic will continue to be important to the brands we represent, aim to represent, and even those far from us that face the same decision to either stay the course or sit it out. There is a lot of money moving around on media plans, a lot of POVs being routed, and a lot of reps working overtime to reassure teams of buyers and planners that they are taking brand safety very seriously. Often it’s not the crisis that defines a company, but what it does in the aftermath. Some are hopeful that this is a definitive crack in the “walled garden” — but even if it is not, we’re all hoping for a better, safer platform at the end of this tunnel: a world where once again clients can be irked by their premium pre-rolls prefacing water-skiing squirrels and dancing cat videos instead of terrorist rhetoric.

Our Next Item Up for Bid: Your Personal Data


The broadband privacy rules created by the FCC last year and vigorously debated last night are in grave danger after the Senate voted to repeal them this morning.

The rules, which forced internet service providers to get permission before selling your data, were overturned using the little-used Congressional Review Act (CRA). This is now being called “the single biggest step backwards in online privacy in many years” by those that spoke out against the repeal as well as the co-creator of the 1996 Telecommunications Act, Sen. Ed Markey.

This is a big deal and a pretty bad idea for anyone even remotely concerned with privacy and limiting the already questionable practices of telecoms and ISPs.

Assuming that this resolution passes through the House, which seems likely at this point, your broadband and wireless internet service provider will have free rein to collect personal data and sell it along to third parties. That information may include (but is not limited to) location, financial, healthcare and browsing data scraped from customers. As a result of the ruling, you can expect ISPs to begin collecting this data by default.

To play the devil’s advocate for a second, let’s assume there is some upside for companies in this deregulation: “You want the entrepreneurial spirit to thrive, but you have to be able to say ‘no, I don’t want you in my living room.’ Yes, we’re capitalists, but we’re capitalists with a conscience.” states Sen Ed Markey. But with the wireless and cable industries both operating as powerful oligopolies, consumers will be left with zero protection against price-gouging, no advocate for net neutrality, and as today demonstrates, far less control over their own data.

The broadband privacy rule, among other things, expanded an existing rule by defining a few extra items as personal information, such as browsing history. This information joins medical records, credit card numbers and so on as information that your ISP is obligated to handle differently, asking if it can collect it and use it.

There is Nothing Hot about this.

You can see the utility of the rule right away; browsing records are very personal indeed, and ISPs are in a unique position to collect pretty much all of it if you’re not actively obscuring it. Facebook and Google see a lot, sure, but ISPs see a lot too, and a very different set of data.

Why should they be able to aggregate it and sell it without your permission? Perhaps to gain competitive advantage or profit or to cull other aggregators, in order to better target ads or build a profile of you. The FCC thought not, and proposed the rule, part of which was rescinded by the new FCC leadership before it even took effect. *Sigh*

If consumers continue to lose trust in the platforms we employ to market our brands and begin to widely question their safety, security and data usage, we are in big trouble. We’re already challenged by a litany of brand safety concerns — bots, fraud, hackers, malware, viewability — and solutions that mitigate those concerns but limit marketing effectiveness (e.g., ad blocking) continue to gain momentum. While some of this is healthy digital evolution (flashback to needing a pop-up blocker just to endure an average online session), a lack of consumer trust quickly erodes into a lack of brand trust, and soon those left behind, willingly (or unknowingly) allowing their data to be sold on the open market, might not be the ones worth reaching.

ISPs can now sell your browsing history without permission, thanks to the Senate

Senate votes to allow ISPs to collect personal data without permission

IoT Oh My!

Sex toy maker We-Vibe has agreed to pay customers up to C$10,000 (£6,120) each after shipping a “smart vibrator” which tracked owners’ use without their knowledge.


The We-Vibe 4 Plus is a £90 Bluetooth-connected vibrator that can be controlled through an app. Its app-enabled controls can be activated remotely, allowing, for instance, a partner on the other end of a video call to operate it.

But the app came with a number of security and privacy vulnerabilities, allowing anyone within Bluetooth range to seize control of the device.

In addition, data is collected and sent back to Standard Innovation, letting the company know about the temperature of the device and the vibration intensity – which, combined, reveal intimate information about the user’s sexual habits.

Why It’s Hot
As we continue to talk about our bodies as the ultimate database – along with the opportunities and risks that entails – few examples seem as illustrative as this!

If a picture paints a thousand words…

Experts who want to pierce North Korea’s extreme secrecy have to be creative. One surprisingly rich resource: the country’s own propaganda, like the photo below.

Thanks to high-tech forensics, analysts and intelligence agencies are using photos to track North Korea’s internal politics and expanding weapons programs with stunning granularity.

The bomb, Kim’s outfit, the entourage and the missile have all been thoroughly dissected

Analysts believe some details may have been deliberately revealed to demonstrate the country’s growing capabilities. While most such images are doctored, if only to improve Mr. Kim’s appearance, they noticed that this one was conspicuously unretouched – perhaps a message to the foreign intelligence agencies that conduct such analyses.

Analysts had long doubted the country’s grandiose claims; as one put it, “2016 was about showing us all the capabilities that we had mocked.”

Why It’s Hot
As countries around the world debate security and the measures appropriate to ensure it, we must remain vigilant against unintended consequences to areas like privacy and data protection.

IoT and Health…magic answer for health or privacy hell?

The Internet of Things is here. While it seems inevitable and welcome, in many ways it is a nightmare waiting to happen. In an article and a report from Goldman Sachs, the authors outline the positive impact of IoT and big data on transportation and healthcare. One author, Bruce Horsham, takes a utopian (and rather naïve) view of the IoT: “Far from being a gimmicky way to collect more data on customers – and sell them more things – IoT technology has the power to transform industries, manage costs and deliver quality outcomes to up and down the supply chain. Here’s how two industries are doing just that.”

While the advances IoT is driving in transportation are noteworthy, the risk/reward equation for healthcare is still to be determined. Consider this: Goldman Sachs recently estimated that Internet of Things (IoT) technology can save billions of dollars in asthma care. With a third of all healthcare spending going to disease management, this is also the place where IoT can have the biggest, most immediate impact on cost and quality of care. The chart below shows how Goldman views it:

[Chart: Goldman Sachs’ estimated IoT savings in healthcare]

Why is this hot? Are we focused on the wrong thing? With 80 million wearable devices already in use, the flood of data is here; only the lack of interconnectivity among healthcare systems prevents its use. While everyone may rush to say such data will deliver savings and better health outcomes, the bottom line is that we live in an era of data flood and data theft. At this time, there is NO widely accepted privacy policy or hack-proof software that protects us, our parents, our families. Why are privacy and theft an issue? Because of scale!

Example: Excellus, a health insurer, was hacked, exposing roughly 10 million records. If each individual health record is valued at $50, the theft is worth $500 million.

Example: if true connectivity occurred, millions of users with implanted monitoring devices, remote insulin pumps or remote heart monitoring could be hacked and killed. It is that ugly and that simple.

This is a fast-evolving topic…the Goldman Sachs IoT infographic is one we should all marvel at and hope that someone is watching our back.

[Infographic: Goldman Sachs on the Internet of Things]


Man caught sunbathing atop a wind turbine

You know privacy is taking a back seat to technology when you can’t sunbathe on top of a wind turbine without being discovered. Kevin Miller, vacationing in Rhode Island, was taking his drone for a spin in the countryside when he maneuvered it to the top of a 200-foot-tall wind turbine. Hoping to capture some cool shots of the turbine against the blue sky, the drone instead spied upon a man sunning himself. The man took it all in stride, sitting up and waving to the drone.

Why It’s Hot

Privacy is becoming a quaint notion, going the way of the horse and buggy, handwritten notes and family dinner time. Soon there will be nowhere to hide, with drones everywhere, street cameras watching every move, and smartphones tracking our location. Creepy perhaps, but it seems to be becoming more accepted in society. Now you can’t even sunbathe on a wind turbine.

Protecting Yourself in the Facial-Recognition Era

Facial recognition might make tagging photos a lot easier on Facebook, but let’s face it: there are a lot of unresolved long-term implications to this increasingly integrated technology. Rather than putting themselves at risk, some people are choosing to opt out altogether.

How do you go dark when your data is collected without your knowledge or consent?

It’s not a catch-all, but start with protective eyewear. The Privacy Visor, first unveiled in 2012, is a pair of glasses that reflect light in a way that confuses facial-detection software. More specifically, the glasses disrupt the patterns of light and dark spots around the eyes and nose that allow computer algorithms to recognize that a face is even present in the frame. Although the glasses don’t guarantee complete privacy, project lead Isao Echizen says tests have shown them to work over 90% of the time.
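To make the mechanism concrete: classic face detectors of the Viola-Jones family key on Haar-like contrast features, such as the eye region being darker than the cheek region below it. The toy sketch below, a simplification with invented intensity values rather than Echizen’s actual system, shows how brightening the eye area, as the visor’s scattered light does, can flip such a feature and make the “is there a face here?” check fail.

```python
# Toy sketch (not the Privacy Visor's real algorithm): Viola-Jones-style
# detectors use Haar-like features, e.g. "the eye band is darker than the
# cheek band below it." Light scattered over the eyes inverts that contrast.

def haar_eye_feature(eye_band, cheek_band):
    """Return average(cheek) - average(eye); positive means 'face-like'."""
    avg = lambda pixels: sum(pixels) / len(pixels)
    return avg(cheek_band) - avg(eye_band)

def looks_like_face(eye_band, cheek_band, threshold=20):
    """Pass the check only if the eye band is clearly darker than the cheeks."""
    return haar_eye_feature(eye_band, cheek_band) > threshold

# Hypothetical grayscale intensities (0 = black, 255 = white).
normal_eyes = [60, 55, 70, 65]       # eyes and brows are normally dark
cheeks      = [150, 155, 148, 152]
lit_eyes    = [210, 220, 205, 215]   # visor light scattered over the eyes

print(looks_like_face(normal_eyes, cheeks))  # True: face detected
print(looks_like_face(lit_eyes, cheeks))     # False: contrast inverted
```

Real detectors combine thousands of such features, which is why the visor disrupts several regions at once rather than just one.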


As camera technology has adapted to more closely mimic human sight, The Privacy Visor has been updated to better shield identities even as the scanning technology advances. Echizen is pushing forward with the concept’s commercialization, targeting mid-next year to begin production of more fashionable and effective versions.

Why It’s Hot

The Privacy Visor is a product that exists to treat the symptom of a much larger problem: maintaining control of one’s own identity. It might be good for the paranoid among us, and for those who very legitimately want to maintain control of the information they create about themselves, online and out in the world. In fact, some privacy advocates go as far as saying the Privacy Visor is a detriment to the cause, because its very existence in effect “gives in” to the enormous pressure to amass databases of personally identifiable information. So the bigger question becomes less about the specifics of this product and more about the broader culture of acceptance we’re creating.

Via FastCompany

Hackers Remotely Shut Down a Jeep at 70 MPH

There are connected cars and then there are cars that can be ‘disconnected,’ at least from the driver.

In a somewhat on-the-edge experiment published this week, two well-known hackers tapped into a moving Jeep and remotely operated the radio, air conditioning and windshield wipers.

Then, while the Jeep was moving at 70 miles an hour, its acceleration was remotely cut, leaving the volunteer driver, a writer from Wired, slowing to a crawl on a busy highway.

Aerial view of a freeway interchange, LA

The two hackers, Charlie Miller and Chris Valasek, have been conducting car-hacking research to determine whether an attacker could gain wireless control of vehicles via the internet.

They created software code that could send commands through the Jeep’s entertainment system to its dashboard functions, brakes, steering and transmission from a laptop many miles away, according to the first-person account in Wired.

Why It’s Hot:

The timing of the widely reported Jeep experiment is interesting in light of a new privacy bill in Congress that would stop carmakers from using data collected from vehicles for advertising or marketing. More about the bill can be found here.

Drivers shouldn’t have to choose between being connected and being protected. Controlled demonstrations show how frightening it would be to have a hacker take over the controls of a car. We need clear rules of the road that protect cars from hackers and American families from data trackers. When consumers feel safe with technology it flourishes; when the opposite is true, it can become an epic failure.

But the Jeep episode shows that sending unwanted ads to drivers on the move may be the least of the issues. The silver lining in all of this is that the vulnerabilities in the coming interconnected world of things are being identified and highlighted.

As you might imagine, Fiat Chrysler has issued a software fix that Jeep owners can either download and install themselves or ask a dealer to install. There will be more bumps in the road; as they are identified, they can be addressed.

Read the full story here

Google Eavesdrops On Its Users


Privacy campaigners and open-source developers are up in arms over the secret installation of Google listening software. The software, Chrome’s “OK, Google” hotword detection, is meant to detect the words that make the computer respond when you talk to it. However, many users have complained that it was activated automatically, without their permission, in Chromium, the open-source browser on which Google Chrome is based. This effectively means their computers were stealth-configured to send what was being said in the room to somebody else – a private company in another country – without their consent or knowledge.

Open-source advocates argued that Google was downloading a “black box” onto their machines that was not open source and therefore could not be verified to be doing only what it claimed. Google has since pulled its listening software from the open-source Chromium browser.


Why is it hot?

Voice search functions have become an accepted feature of modern smartphones, smart TVs and now browsers, but they have raised privacy concerns. While most services require users to opt in, many have questioned whether their use, which requires sending voice recordings to company servers, potentially exposes private conversations held within the home.

Facebook Private Conversation Insights Now Available via DataSift

Yesterday’s DataSift Facebook webinar revealed a huge breakthrough for healthcare marketers. Through their partnership, Facebook is providing aggregated topic data to marketers on DataSift’s API. This represents the first time we’ve been able to pull back the curtain and see what Facebook users are talking about in their private conversations.



Now, before the Facebook Privacy conspiracy theorists get wound up, the data coming through DataSift’s API is:

  • Anonymized with no personal data available
  • Limited to adults 18+
  • Only available on topics with 100+ posts in any query
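The 100-post floor in the last bullet is essentially a minimum-count (k-anonymity-style) threshold: aggregates backed by too few posts are suppressed rather than released, so no individual can be singled out. A minimal sketch of that idea, with invented topic counts and no relation to DataSift’s actual implementation:

```python
# Sketch of a minimum-count release rule: any aggregate with fewer than
# k supporting posts is withheld. Topic counts below are hypothetical.

MIN_POSTS = 100

def releasable_topics(topic_counts, k=MIN_POSTS):
    """Return only the aggregates backed by at least k posts."""
    return {topic: n for topic, n in topic_counts.items() if n >= k}

counts = {"BMW": 1250, "Ford": 430, "rare-model-x": 12}
print(releasable_topics(counts))  # {'BMW': 1250, 'Ford': 430}
```

The small topic disappears entirely, which is the point: the marketer sees volumes, never individuals.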

Until now, the only way to get insight into what was happening on Facebook was to engage in the channel. Even then, marketers only got insight into what happened on and around their own pages – of course, a very limited and self-selected audience.


Now, marketers will be able to get insight into volumes of conversation on Facebook by using four different criteria:

  • Category (the example used was automotive)
  • Brand (for example, BMW, Ford, and Honda)
  • Media (defined by URLs for properties like BBC or CNN)
  • Feature / topic data (as defined by Boolean queries)

However, DataSift shared the image below on Facebook with the post:

“And #topicdata is all about #privacyfirst”


KLICK: “…it looks like social listening may be on the cusp of removing one of the most troublesome blinders we’ve had to live with to date.”

Why It’s HOT: This tool will help marketers see what is happening on the world’s #1 social channel, Facebook.

Source: Klick Health

Mattel’s New Barbie is Listening to Your Kids

Mattel’s new “Hello Barbie” hits store shelves later this year and has more tricks up her sleeve than just saying hello. With the press of a button, Barbie’s embedded microphone turns on and records the voice of the child playing with her. The recordings are then uploaded to a cloud server, where voice detection technology helps the doll make sense of the data. The result? An inquisitive Barbie who remembers your dog’s name and brings up your favorite hobbies in your next chitchat.

Why It’s Hot: 
The overall concept is interesting: a toy that listens, learns your child’s preferences and adapts accordingly … a true demonstration of personalization. The obvious downside is that nobody really knows what is being done with all that data (collected from kids!), which puts a new level of accountability on the company. For example, children confide in their toys – so what if a child admits that they get hit by a parent? Should Mattel be on the hook to report it?

Maintain Your Privacy by Wearing this New Accessory

With each new technology seems to come new concerns about privacy. New “privacy glasses” by software firm AVG seek to minimize worries over facial recognition on social media.

The glasses use infrared LEDs positioned around the eyes and nose to essentially “mess with” smartphone cameras. By distorting the light around the eyes and nose, the glasses prevent the facial recognition technology built into most social media platforms from identifying people.

Read more about the privacy glasses on Engadget.

Why It’s Hot | While the glasses are only in the prototyping phase, and still look like a novelty party accessory, it’s not surprising that companies are finding ways to defeat facial recognition technology. With each new innovation come more worries about “big brother” watching over us, and these glasses are a step in the direction of privacy over convenience. It’ll be interesting to watch which end of that struggle wins out over time.

Twitter Tracks Your Apps to Better Personalize Your Experience

As Twitter strives to provide an even more personalized experience for their users, the social media platform recently announced they will be tracking what apps mobile users have downloaded. With this new capability, Twitter can promise better targeting within their advertising as well. They can now enable advertisers to target based on whether users have installed their app, registered after downloading, etc.

For those of us who don’t want personal information handed to advertisers, even in exchange for more relevant ads and Twitter recommendations, these privacy settings are adjustable. Users can opt out of Twitter’s tracking capability. Read more via VentureBeat, and adjust your settings in the Twitter app.

Why It’s Hot | With the world of social media increasing the amount of personal data available online, the privacy discussion is more popular than ever. Beyond using the information users give, apps now include mechanisms that track your mobile behavior. While research shows that (in general) Millennials don’t mind tracking if it means their experience will be more personalized or help them find products more easily (eMarketer), Millennials are not the only mobile consumers. And so the privacy vs. personalization debate continues…

Google Says, “You’ve Got A Friend In Me”

Have you ever casually agreed to plans with a friend – “Sure, let’s grab drinks next Thursday after work” – and then totally forgotten about it? Google is here to the rescue, aiming to prevent this from ever happening again (and to make us better humans) with its updated mobile app. The catch? The tech giant will read your email.

The new feature of Google Now will automatically keep track of informal conversations in users’ Gmail inboxes so that they aren’t missed. The app will continuously scan through the inbox searching for “half-baked plans” that users haven’t added to their calendars and will ask the user about them (as in the photo above).

Simply put, if you get an email asking if you’d like to meet for drinks next Thursday at 7pm, Google Now will recognize it as an invitation. You’ll then be prompted to add it to your calendar.

Read more via Huffington Post.

Why It’s Hot: We’re constantly talking about new apps and gadgets that essentially do our thinking for us and can help us remember things we’ll likely forget. It seems that we may soon be able to literally tap into our memory via some new device.

Google’s new feature is just one of many updates that are designed to anticipate the user’s needs and handle tedious mobile tasks without the user having to swipe a finger.

On the flip side of convenience is, of course, the issue of privacy. As with so many helpful, anticipatory and perhaps slightly invasive apps, this is a major concern. As Huffington Post reports, Google has been criticized for its increasing “‘surveillance’ capabilities.” We’re seeing more and more that the convenience of having one’s device make life easier comes at the price of privacy.

#AlexFromTarget–The dark side of fame

One day you’re Alex, a 16-year-old working at Target. The next day, you’re a trending hashtag with 730,000 Twitter followers and 2.3 million Instagram followers. His fame literally happened overnight, after a girl posted to her Twitter account a photo of Alex that she had seen on Tumblr. She added the caption, “YOOOOOOOOOO.”


Alex was quickly overwhelmed with screaming girls looking to take selfies with him, a visit to the Ellen DeGeneres show, calls from modeling and talent agents and paparazzi waiting outside his door.  But he was also quickly targeted with tweets demanding he be fired from Target, insults denigrating his looks (e.g. “Alex from target is so damn ugly”) and even death threats (“Alex from target, I’ll find you and I will kill you”).  Social security numbers, checking accounts and phone records were leaked, and the family has reached out to local police, the school, and Target to put together security plans for Alex’s safety.  The family has even hired the founder of a teenage-centric, selfie-sharing app to help manage the online assaults.

Why It’s Hot

In today’s social environment, fame can happen literally overnight, even when you are not looking for it.  It can be fun, satisfying, exhilarating and a boost to the ego.  But it can also be dangerous and threatening.  Alex wasn’t seeking fame, but it came to him. At issue is how we protect the privacy of people who don’t want to be celebrities when all it takes is one tweet to make you a star.

Social toys let babies Instagram from their cot

Thanks to social media, privacy is something that will pretty soon belong in a museum – especially as technology evolves, becoming part of people’s routines and everyday life, and moral barriers get forgotten in the process.

Think about this: Those born today will not only have an intimate record of their day-to-day existence preserved online, but they’ll also grow up in a world where shedding their privacy is not only the norm, but expected.

Enter: New Born Fame, a set of interactive plush toys that let babies take selfies and automatically upload them to their Facebook, Twitter and other profiles, straight from the cradle.

If you think this is crazy, wait for the out-of-focus selfies, and disgusting shots of poop, in real time.

Developed by Laura Cornet, a Netherlands-based graduate student, this social interface takes the form of soft toys that hang down over the cot and interact with the baby’s social profiles – set up by their parent(s) and synced beforehand.

One of the toys features a camera and a GPS locator. Pull the small bird, and the baby sends out a randomly generated tweet. Pull the Facebook logo, and they automatically update their status along with their current location.

Pull the camera toy, and a video is taken and uploaded to Instagram. The ball-shaped toy also lets babies take a selfie and upload it to all platforms.

Even though Cornet developed the project more as an investigation into the ethics of social media and to get parents thinking about the information they post about their children online without their consent, she admits there is also probably a commercial demand for toys like this.


Why this is NOT hot

To name just one, the data will remain available for anyone — even, potentially, employers — to see long after they’ve grown up, and will probably come back to haunt them.



I know where your cat lives

Between hashtags and video posts, the lives and actions of thousands (purrrrhaps millions) of cats have been documented online. And because photos uploaded for posting usually include metadata with geotags recording longitude and latitude, it’s possible to know where those photos were taken and where those millions of cats live. But perhaps more ominously, it’s also a way to identify the addresses of the owners of all those cats.

To show how people and businesses can use this available data to further erode privacy, a Florida State University art professor, Owen Mundy, has taken the aggregated data of photos uploaded to Flickr, Instagram and other platforms and used a supercomputer to map the locations of about 1 million cats. (Each photo had been tagged with the word “cat.”) He put this information on a website, along with photos and maps. Time called it a Tinder for cats, with a seemingly endless stream of kitten photos for cat fans who stalk the Internet looking for cute kitty pictures. Although specific street addresses are not shown, the displayed maps hint at each cat’s location.
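For the curious, the geotag math itself is trivial. EXIF GPS metadata stores latitude and longitude as degree/minute/second values plus a hemisphere reference, and a few lines of code convert them into the decimal coordinates plotted on such a map. A minimal sketch with hypothetical coordinates (libraries such as Pillow or exiftool expose the raw tags):

```python
# Minimal sketch of the geotag math: EXIF GPS coordinates are stored as
# degrees/minutes/seconds plus a hemisphere reference ("N"/"S"/"E"/"W").
# Converting them yields the decimal lat/lon of wherever the photo
# (cat included) was taken. The coordinates below are hypothetical.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style DMS coordinates to signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative by convention.
    return -decimal if ref in ("S", "W") else decimal

# e.g. a photo tagged 40°26'46.8"N, 79°58'55.2"W
lat = dms_to_decimal(40, 26, 46.8, "N")
lon = dms_to_decimal(79, 58, 55.2, "W")
print(round(lat, 4), round(lon, 4))  # 40.4463 -79.982
```

Stripping this metadata before uploading (most photo tools have an option for it) is the simplest defense.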

Why It’s Hot

Aside from being cute, all uploaded photos (and your device) contain a lot of private data that can be used to track individuals and give away other information. Many people don’t realize how easily privacy is breached, often not in obvious ways. And regulations often don’t evolve as quickly as new technologies. So it’s consumer beware, even when doing something as harmless as posting cat photos.

Sneaky Facebook Study on Users’ Emotions


In 2012, Facebook conducted a study with Cornell University and the University of California, San Francisco, that manipulated users’ News Feeds in order to test their emotional reactions. Participants were not notified that they were part of a study. The study found that people who saw fewer positive posts were more likely to write negative posts, and the converse was true when they were exposed to more positive posts. Information gathering on Facebook is not illegal, and users sign over many privacy rights when agreeing to use Facebook. Despite its apparent legality, Facebook users have expressed discontent with Facebook’s “sneaky research.”

Why It’s Hot…

The issue of privacy and ethics on social networks continues to be highly controversial. Facebook manipulates users’ News Feeds all the time, and the underlying issue is with Facebook, not this particular study. Facebook’s 1.2 billion users appear to be OK with the “interest-level data collection that comes with the advertising that makes the site free for people to utilize. But manipulating their emotions appears to go too far.” Integrating emotions on Facebook raises a larger issue that could involve targeting advertisements based on a person’s mood. Many users are angry because no one wants to feel manipulated, and the overarching issue of privacy on the internet, and specifically on social media, is a recurring one for Facebook. Companies like Facebook, Google and Yahoo have access to immense personal data and face difficult decisions about whether to use or manipulate this information.

NSA Spying Hurts Tech, Communication, and Social Media Business

The NSA’s vast spying program negatively affects tech, communication, social media and other industries where the NSA has unconstitutionally acquired back-door access and information. Many individuals, companies and governments are wary of doing business and sharing their personal information with companies such as Microsoft, Google, Yahoo, Hewlett-Packard, Facebook, Verizon and AT&T, to name a few. The latest US company to be hurt by the NSA is Verizon (our client), dropped by the German government as a service provider over worries about U.S. spying.



Why It’s Hot: As more revelations come out about the NSA’s no privacy policy, industries who service privacy-valuing individuals, businesses, and governments will suffer business and trust unless they take a stand and ensure privacy. 





Snapchat Settles Charges With F.T.C. That It Deceived Users

The Federal Trade Commission on Thursday said Snapchat had agreed to settle charges that the company was deceiving users about the ephemeral nature of the photos and video messages sent through its service. The messages were significantly less private than the company had said, the commission said.

In marketing the service, Snapchat had said that its messages “disappear forever.” But in its complaint, the commission said the messages, often called snaps, can be saved in several ways: users can save a message by using a third-party app, for example, or employ simple workarounds to take a screenshot of messages without detection.

“While we were focused on building, some things didn’t get the attention they could have,” the company said in a statement.

Read more here.


Why It’s Hot

There’s another startup, app, technology, or platform every week, and they’re all developed and headed by ever-younger CEOs. Can we be surprised that basic functions are being overlooked in the rush to market the hot new thing? Startups need to be held accountable to their consumers for their products and services from day one.