With third-party cookies slowly but surely going the way of the dodo, the drive for marketers to develop data strategies that accelerate first-party data growth and utilization is fast becoming an existential imperative.
WHY IT’S HOT: Relationships and Relevance will matter more than ever, as marketers of all shapes and sizes strive to survive and thrive in a fundamentally changed world. (From “nice-to-have” to “mission-critical.”)
‘Re-architecting the entire process’: How Vice is preparing for life after the third-party cookie
Vice Media Group pulls in 57.5 million global unique visitors a month, according to Comscore; Vice itself says it has a global audience of “more than 350 million individuals.” But only a minority of those users are logged in at any time. With third-party cookies soon to be obsolete and Apple clamping down on the free-for-all sharing of mobile IDs, Vice’s first-party data strategy aims to improve its registration process and double down on contextual ads.
In the latest example of bolstering its first-party data offering for advertisers, Vice Media Group is using a new tool from consumer reporting agency Experian and data platform InfoSum.
That tool, Experian Match, those companies say, offers publishers more insights on their audiences without needing to use third-party cookies or requiring users to log in. In turn, they can offer advertisers more precision targeting options.
“What interests me the most is that there’s so much bias within data — for example, proxies to get into the definition [of a target audience on an advertiser brief],” said Ryan Simone, Vice Media director of global audience solutions. “We are looking to eliminate bias in every instance. If a client says ‘this specific … group is what we are looking for,’ we can say on Vice — not through the proxies of third-party data or other interpretation — that product A [should target] this content, this audience, [and that’s] different from product B. It’s a much more sophisticated strategy and re-architecting the entire process.”
Publishers provide a first-party ID, IP address and timestamp data, which is matched with Experian’s own IP address and household-level socio-demographic data. This initial match is used to create the Experian Match mapping file, which is then stored in a decentralized data “bunker.” From here, all matching takes place using InfoSum’s decentralized marketing infrastructure, with publishers creating their own private and secure ”bunkers” and advertisers doing likewise, so individual personal customer data is never shared between publishers and advertisers.
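One common way to match records without moving raw personal data between parties is to compare salted hashes of a shared key, such as the IP address. Here is a toy sketch of that idea; the shared salt, the record shapes, and the use of SHA-256 are illustrative assumptions, not details of Experian Match or InfoSum's actual protocol:

```python
import hashlib

def pseudonymize(identifier: str, salt: str) -> str:
    """Hash an identifier with a shared salt so raw PII never leaves a party's 'bunker'."""
    return hashlib.sha256((salt + identifier.lower()).encode()).hexdigest()

SALT = "shared-secret-salt"  # hypothetical; agreed between parties out of band

# Publisher side: first-party user IDs observed against an IP address
publisher_records = {"198.51.100.7": "vice-user-123"}

# Data partner side: household-level attributes keyed by the same IP addresses
partner_records = {"198.51.100.7": {"segment": "fashion-enthusiast"}}

# Each party hashes its own keys locally, inside its own environment...
pub_hashed = {pseudonymize(ip, SALT): uid for ip, uid in publisher_records.items()}
par_hashed = {pseudonymize(ip, SALT): attrs for ip, attrs in partner_records.items()}

# ...and only the hashed keys are compared to build the mapping file,
# so neither side ever sees the other's raw customer data.
match_file = {h: (pub_hashed[h], par_hashed[h]) for h in pub_hashed.keys() & par_hashed.keys()}
print(match_file)
```

The point of the sketch is that the intersection is computed over pseudonyms, not identifiers; only records present on both sides produce a match.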
Privacy and security were important considerations before committing to use the product, said Paul Davison, Vice Media Group vice president of agency development for international, in a statement. But, he added, “Those concerns are solved instantly as no data has to be moved between companies.”
As for login data, Vice’s user registration process is fairly basic and doesn’t offer users much explanation about the benefits they will receive if they do so. Updating that is a work in progress, said Simone.
“There will be a lot more front-facing strategy” for encouraging sign-ups, he said. “We are looking to create greater value … for our users.” (The company also collects first-party data through newsletters and experiential events, such as those held — pre-Covid, at least — by Refinery29.)
Vice has worked with contextual intelligence platform Grapeshot since long before it was acquired by Oracle in 2018. Beyond offering advertisers large audiences around marquee segments like “fashion” or “music,” Vice has more recently begun opening up more prescriptive subsegments — like “jewelry,” for example.
“People are scared to send out smaller audiences — but I’d rather provide something that’s exact. Opening that up provides greater insights,” especially when layered with first-party data sets gleaned through partnerships like the one with Experian and InfoSum, said Simone. Vice might not have a wealth of content around high fashion, for example, but consumers of a particular fashion house might still visit the site to read about politics or tech.
“Contextual has evolved and with the absence of the third-party cookie it’s all the more significant,” said Simone.
Publishers’ biggest differentiating features for advertisers are their audiences and the context within which their ads will sit, said Alessandro De Zanche, founder of media consultancy ADZ Strategies.
“If they really want to progress and be more in control, publishers need to go back to the basics: rebuilding trust with the audience, being transparent, educating the audience on why they should give you consent — that’s the very first — then building on top of that,” De Zanche said.
“With all the technical changes and privacy regulations, if a publisher doesn’t rebuild the relationship and interaction with its audience, it will just be like trying to Sellotape their way forward.”
Amazon’s new fitness band adds body fat, movement, sleep and mood to the mountain of data Amazon is amassing. Whether you’re streaming on Amazon Prime, shopping on Amazon.com, or buying groceries at Whole Foods, Amazon is ready to…errrr…help?
Why it’s Hot – The increasing convergence of our digital and analog lives is bringing the questions of privacy and data sovereignty to the forefront, while also creating new potential opportunities for marketers (just think about what a partnership between Microsoft and Walmart to buy TikTok could mean).
From The Verge:
Amazon is getting into the health gadget market with a new fitness band and subscription service called Halo. Unlike the Apple Watch or even most basic Fitbits, the Amazon Halo band doesn’t have a screen. The app that goes along with it comes with the usual set of fitness tracking features along with two innovative — and potentially troubling — ideas: using your camera to create 3D scans for body fat and listening for the emotion in your voice.
The Halo band will cost $99.99 and the service (which is required for Halo’s more advanced features) costs $3.99 per month. Amazon is launching it as an invite-only early access program today with an introductory price of $64.99 that includes six months of the service for free. The Halo service is a separate product that isn’t part of Amazon Prime.
The lack of a screen on the Halo band is the first indicator that Amazon is trying to carve out a niche for itself that’s focused a little less on sports and exercise and a little more on lifestyle changes. Alongside cardio, sleep, body fat, and voice tone tracking, a Halo subscription will offer a suite of “labs” developed by partners. They’re short challenges designed to improve your health habits — like meditation, improving your sleep habits, or starting up basic exercise routines.
The Halo band “is not a medical device,” Amazon tells me. As such, it hasn’t submitted the device to the FDA for any sort of approval, including the lighter-touch “FDA clearance” that so many other fitness bands have used.
The Amazon Halo intro video | Source: Amazon
THE HALO BAND HARDWARE
The Halo Band consists of a sensor module and a band that clicks into it on top. It’s a simple concept and one we’ve seen before. The lack of a display means that if you want to check your steps or the time, you’ll need to strap something else to your wrist or just check your phone.
The band lacks increasingly standard options like GPS, Wi-Fi, or a cellular radio, another sign that it’s meant to be a more laid-back kind of tracker. It has an accelerometer, a temperature sensor, a heart rate monitor, two microphones, an LED indicator light, and a button to turn the microphones on or off. The microphones are not for speaking to Alexa, by the way, they’re there for the voice tone feature. There is explicitly no Alexa integration.
It communicates with your phone via Bluetooth, and it should work equally well with both iPhones and Android phones. The three main band colors that will be sold are onyx (black), mineral (light blue), and rose gold (pink-ish).
There will of course be a series of optional bands so you can choose one to match your style — and all of them bear no small resemblance to popular Apple Watch bands. The fabric bands will cost $19.99 and the sport bands will be $15.99.
Amazon intends for users to leave the Halo Band on all the time: the battery should last a full week and the sensor is water resistant up to 5ATM. Amazon calls it “swimproof.”
But where the Halo service really differentiates itself is in two new features, called Body and Tone. The former uses your smartphone camera to capture a 3D scan of your body and then calculate your body fat, and the latter uses a microphone on the Halo Band to listen to the tone of your voice and report back on your emotional state throughout the day.
Body scans work with just your smartphone’s camera. The app instructs you to wear tight-fitting clothing (ideally just your underwear) and then stand back six feet or so from your camera. Then it takes four photos (front, back, and both sides) and uploads them to Amazon’s servers where they’re combined into a 3D scan of your body that’s sent back to your phone. The data is then deleted from Amazon’s servers.
Once you have the 3D scan, Amazon uses machine learning to analyze it and calculate your body fat percentage. Amazon argues that body fat percentage is a more reliable indicator of health than either weight or body mass index. Amazon also claims that smart scales that try to measure body fat using bioelectrical impedance are not as accurate as its scan. Amazon says it did an internal study to back up those claims and may begin submitting papers to peer-reviewed medical journals in the future.
Finally, once you have your scan, the app will give you a little slider you can drag your finger on to have it show what you would look like with more or less body fat.
That feature is meant to be educational and motivational, but it could also be literally dangerous for people with body dysmorphic disorder, anorexia, or other self-image issues. I asked Amazon about this directly and the company says that it has put in what it hopes are a few safeguards: the app recommends you only scan yourself every two weeks, it won’t allow the slider to show dangerously low levels of body fat, and it has information about how low body fat can increase your risk for certain health problems. Finally, although anybody 13 years of age and up can use the Halo Band, the body scan feature will only be allowed for people 18 or older.
TRACKING THE TONE OF YOUR VOICE
The microphone on the Amazon Halo band isn’t meant for voice commands; instead it listens to your voice and reports back on what it believes your emotional state was throughout the day. If you don’t opt in, the microphone on the Band doesn’t do anything at all.
Once you opt in, the Halo app will have you read some text back to it so that it can train a model on your voice, allowing the Halo band to key in only on your tone and not the voices of those around you. After that, the band will intermittently listen to your voice and judge it on metrics like positivity and energy.
It’s a passive and intermittent system, meaning that you can’t actively ask it to read your tone, and it’s not listening all of the time. You can also mute the mic at any time by pressing the button until a red blinking LED briefly appears to show you it’s muted.
Amazon is quick to note that your voice is never uploaded to any servers and never heard by any humans. Instead, the band sends its audio snippets to your phone via Bluetooth, where they’re analyzed. Amazon says that the Halo app immediately deletes the voice samples after analyzing them for your emotional state.
It picks up on the pitch, intensity, rhythm, and tempo of your voice and then categorizes them into “notable moments” that you can go back and review throughout the day. Some of the emotional states include words like hopeful, elated, hesitant, bored, apologetic, happy, worried, confused, and affectionate.
We asked Amazon whether this Tone feature was tested across differing accents, gender, and cultures. A spokesperson says that it “has been a top priority for our team” but that “if you have an accent you can use Tone but your results will likely be less accurate. Tone was modeled on American English but it’s only day one and Tone will continue to improve.”
Both the Body and Tone features are innovative uses of applied AI, but they are likely to set off any number of privacy alarm bells. Amazon says that it is being incredibly careful with user data. The company will post a document detailing every type of data, where it’s stored, and how to delete it.
Every feature is opt-in, easy to turn off, and it’s easy to delete data. For example, there’s no requirement that you create a body scan, and even if you do, human reviewers will never see those images. Amazon says the most sensitive data, like body scans and Tone data, are only stored locally (though photos do need to be temporarily uploaded so Amazon’s servers can build the 3D model). Amazon isn’t even allowing Halo to integrate with other fitness apps like Apple Health at launch.
Some of the key points include:
Your Halo profile is distinct from your Amazon account — and will need to be individually activated with a second factor like a text message so that anybody else that might share your Amazon Prime can’t get to it.
You can download and delete any data that’s stored in the cloud at any time, or reset your account to zero.
Body scans and tone data can be individually deleted separately from the rest of your health data.
Body scans are only briefly uploaded to Amazon’s servers then deleted “within 12 hours” and scan images are never shared to other apps like the photo gallery unless you explicitly export an image.
Voice recordings are analyzed locally on your phone and then deleted. “Speech samples are processed locally and never sent to the cloud,” Amazon says, adding that “Tone data won’t be used for training purposes.”
Data can be shared with third parties, including some partners like WW (formerly Weight Watchers). Data generated by the “labs” feature is only shared as anonymous aggregate info.
ACTIVITY AND SLEEP TRACKING
The body scanning and tone features might be the flashiest (or, depending on your perspective, the creepiest) parts of Halo, but the thing you’ll likely spend the most time watching is your activity score.
Amazon’s Halo app tracks your cardio fitness on a weekly basis instead of daily — allowing for rest days. It does count steps, but on a top level what you get is an abstracted score (and, of course, a ring to complete) that’s more holistic. Just as Google did in 2018, Amazon has worked with the American Heart Association to develop the abstracted Activity score.
The Halo band uses its heart monitor to distinguish between intense, moderate, and light activity. The app combines those to ensure you’re hitting a weekly target. Instead of the Apple Watch’s hourly “stand” prompts, the Halo app tracks how long you have been “sedentary.” If you go for more than 8 hours without doing much (not counting sleep), the app will begin to deduct from your weekly activity score.
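A hypothetical version of that weekly scoring logic might look like the following. The intensity weights and the size of the sedentary penalty are invented for illustration; only the 8-hour sedentary threshold and the intense/moderate/light split come from the article, and Amazon has not published its exact formula:

```python
def weekly_activity_score(intense_min, moderate_min, light_min, sedentary_hours):
    """Toy weekly score: intensity-weighted activity minutes, minus a penalty
    for sedentary time beyond 8 hours (all weights here are made up)."""
    score = intense_min * 2 + moderate_min * 1 + light_min * 0.5
    if sedentary_hours > 8:
        score -= (sedentary_hours - 8) * 5  # deduct for each hour beyond the threshold
    return score

# A week with 30 intense, 90 moderate, and 60 light minutes, plus a 10-hour sedentary day:
print(weekly_activity_score(intense_min=30, moderate_min=90, light_min=60, sedentary_hours=10))  # 170.0
```

The design idea the article describes is that a single abstracted number (with a weekly rather than daily horizon) is friendlier to casual users than raw step counts.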
The Halo band can automatically detect activities like walking and running, but literally every other type of exercise will need to be manually entered into the app. The whole system feels less designed for workout min-maxers and more for people who just want to start being more active in the first place.
Speaking of heart tracking, the Halo band doesn’t proactively alert you to heart conditions like a-fib, nor does it do fall detection.
The Halo band’s sleep tracking similarly tries to create an abstracted score, though you can dig in and view details on your REM sleep and other metrics. One small innovation that the Halo band shares with the new Fitbit is temperature monitoring. It uses a three-day baseline when you are sleeping and from there can show a chart of your average body temperature when you wake up.
HALO LABS, PARTNERSHIPS, AND THE SUBSCRIPTION
Finally, Amazon has partnered with several third parties to create services and studies to go along with the Halo service. For example, if your health care provider’s system is compatible with Cerner, you can choose to share your body fat percentage with your provider’s electronic medical records system. Amazon says it will also be a fully subsidized option for the John Hancock Vitality wellness program.
The flagship partnership is with WW, which syncs up data from Halo into WW’s own FitPoints system. WW will also be promoting the Halo Band itself to people who sign up for its service.
There are dozens of lower-profile partnerships, which will surface in the Halo app as “Labs.” Many of the labs will surface as four-week “challenges” designed to get you to change your health habits. Partners creating Labs include the Mayo Clinic, Exhale, Aaptiv, Lifesum, Headspace, and more. So there might be a lab encouraging you to give yoga a try, or a set of advice on sleeping better, like kicking your pet out of your bedroom.
Amazon says each Lab needs to be developed with “scientific evidence” of its effectiveness and Amazon will audit them. Data created from these challenges will be shared with those partners, but only in an aggregated, anonymous way.
Virtually all the features discussed here are part of the $3.99/month Halo subscription. If you choose to let it lapse, the Halo band will still do basic activity and sleep tracking.
In charging a monthly subscription, Amazon is out on a limb compared to most of its competitors. Companies like Fitbit and Withings offer some of the same features you can get out of the Halo system, including sleep tracking and suggestions for improving your fitness. They also have more full-featured bands with displays and other functionality. And of course there’s the Apple Watch, which will have deeper and better integrations with the iPhone than will ever be possible for the Halo band.
Overall, Halo is a curious mix. Its hardware is intentionally less intrusive and less feature-rich than competitors, and its pricing strategy puts Amazon on the hook for creating new, regular content to keep people subscribed (exercise videos seem like a natural next step). Meanwhile, the body scanning feature goes much further than other apps in directly digitizing your self-image — which is either appealing or disturbing depending on your relationship to your self image. And the emotion tracking with Tone is completely new and more than a little weird.
The mix is so eclectic that I can’t possibly guess who it might appeal to. People who are more serious about exercise and fitness will surely want more than what’s on offer in the hardware itself, and people who just sort of want to be a little more active may balk at the subscription price. And since the Halo band doesn’t offer health alerts such as fall detection or abnormal heart rate detection, using it as a more passive health monitor isn’t really an option either.
That doesn’t mean the Halo system can’t succeed. Amazon’s vision of a more holistic health gadget is appealing, and some of its choices in how it aggregates and presents health data are genuinely better than simple step counting or ring completion.
We won’t really know how well the Halo system does for some time, either. Amazon’s opening it up as an early access program for now, which means you need to request to join rather than just signing up and buying it.
Last year, Ben Zhao decided to buy an Alexa-enabled Echo speaker for his Chicago home. Mr. Zhao just wanted a digital assistant to play music, but his wife, Heather Zheng, was not enthused. “She freaked out,” he said.
Ms. Zheng characterized her reaction differently. First she objected to having the device in their house, she said. Then, when Mr. Zhao put the Echo in a work space they shared, she made her position perfectly clear: “I said, ‘I don’t want that in the office. Please unplug it. I know the microphone is constantly on.’”
Mr. Zhao and Ms. Zheng are computer science professors at the University of Chicago, and they decided to channel their disagreement into something productive. With the help of an assistant professor, Pedro Lopes, they designed a piece of digital armor: a “bracelet of silence” that will jam the Echo or any other microphones in the vicinity from listening in on the wearer’s conversations.
The bracelet is like an anti-smartwatch, both in its cyberpunk aesthetic and in its purpose of defeating technology. A large, somewhat ungainly white cuff with spiky transducers, the bracelet has 24 speakers that emit ultrasonic signals when the wearer turns it on. The sound is imperceptible to most ears, with the possible exception of young people and dogs, but nearby microphones will detect the high-frequency sound instead of other noises.
“It’s so easy to record these days,” Mr. Lopes said. “This is a useful defense. When you have something private to say, you can activate it in real time. When they play back the recording, the sound is going to be gone.”
During a phone interview, Mr. Lopes turned on the bracelet, resulting in static-like white noise for the listener on the other end.
As American homes are steadily outfitted with recording equipment, the surveillance state has taken on an air of domesticity. Google and Amazon have sold millions of Nest and Ring security cameras, while an estimated one in five American adults now owns a smart speaker. Knocking on someone’s door or chatting in someone’s kitchen now involves the distinct possibility of being recorded.
It all presents new questions of etiquette about whether and how to warn guests that their faces and words could end up on a tech company’s servers, or even in the hands of strangers.
By design, smart speakers have microphones that are always on, listening for so-called wake words like “Alexa,” “Hey, Siri,” or “O.K., Google.” Only after hearing that cue are they supposed to start recording. But contractors hired by device makers to review recordings for quality reasons report hearing clips that were most likely captured unintentionally, including drug deals and sex.
Two Northeastern University researchers, David Choffnes and Daniel Dubois, recently played 120 hours of television for an audience of smart speakers to see what activates the devices. They found that the machines woke up dozens of times and started recording after hearing phrases similar to their wake words.
“People fear that these devices are constantly listening and recording you. They’re not,” Mr. Choffnes said. “But they do wake up and record you at times when they shouldn’t.”
Rick Osterloh, Google’s head of hardware, recently said homeowners should disclose the presence of smart speakers to their guests. “I would, and do, when someone enters into my home, and it’s probably something that the products themselves should try to indicate,” he told the BBC last year.
The “bracelet of silence” is not the first device invented by researchers to stuff up digital assistants’ ears. In 2018, two designers created Project Alias, an appendage that can be placed over a smart speaker to deafen it. But Ms. Zheng argues that a jammer should be portable to protect people as they move through different environments, given that you don’t always know where a microphone is lurking.
At this point, the bracelet is just a prototype. The researchers say that they could manufacture it for as little as $20, and that a handful of investors have asked them about commercializing it.
“Be Smart. Shop Safe.” That’s the tag line for Mozilla’s initiative to spread awareness about the privacy status and risks of new connected products — and promote their brand as a privacy leader.
The privacy of physical connected products is new for many people, so getting people to consider privacy before impulsively slamming the BUY button is a big deal for an organization focused on privacy. Mozilla needed to make their report interesting to grab people’s attention.
Smart but simple UX and strong copy make this happen.
A privacy focused shopping guide allows you to see which products meet Mozilla’s minimum privacy standards.
An animated emoji shows how “creepy” users have said various products are, regardless of their privacy rating.
Why it’s hot:
Is this the beginning of, if not a backlash, at least a recalibration of the excitement about smart IoT products?
Mozilla frames itself as the authority on the growing concern over privacy, and getting into the product-rating game drives a new kind of awareness of physical products that many people have not had to consider in privacy terms before.
Gathering data on creepiness sentiment is an interesting (and fun) approach to consumer metrics. Users can vote on the creepiness scale, but you have to give your email to see the results.
The creator of the famous voice assistant dreams of a world where Alexa is everywhere, anticipating your every need.
Speaking with MIT Technology Review, Rohit Prasad, Alexa’s head scientist, revealed further details about where Alexa is headed next. The crux of the plan is for the voice assistant to move from passive to proactive interactions. Rather than wait for and respond to requests, Alexa will anticipate what the user might want. The idea is to turn Alexa into an omnipresent companion that actively shapes and orchestrates your life. This will require Alexa to get to know you better than ever before.
In June at the re:Mars conference, he demoed [view from 53:54] a feature called Alexa Conversations, showing how it might be used to help you plan a night out. Instead of manually initiating a new request for every part of the evening, you would need only to begin the conversation—for example, by asking to book movie tickets. Alexa would then follow up to ask whether you also wanted to make a restaurant reservation or call an Uber.
A more intelligent Alexa
Here’s how Alexa’s software updates will come together to execute the night-out planning scenario. In order to follow up on a movie ticket request with prompts for dinner and an Uber, a neural network learns—through billions of user interactions a week—to recognize which skills are commonly used with one another. This is how intelligent prediction comes into play. When enough users book a dinner after a movie, Alexa will package the skills together and recommend them in conjunction.
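At a far smaller scale than Alexa's neural model, the co-usage idea can be illustrated with plain pair counting over session logs: skills that appear together often enough get recommended in conjunction. The session data, skill names, and support threshold below are all invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Hypothetical session logs: which skills each user invoked in one sitting.
sessions = [
    ["movie_tickets", "restaurant_reservation", "ride_hail"],
    ["movie_tickets", "ride_hail"],
    ["movie_tickets", "restaurant_reservation"],
    ["weather"],
]

# Count how often each pair of skills is used together.
pair_counts = Counter()
for s in sessions:
    pair_counts.update(combinations(sorted(set(s)), 2))

def suggest_followups(skill, min_support=2):
    """Skills that co-occur with `skill` at least `min_support` times."""
    return [other
            for (a, b), n in pair_counts.items()
            if n >= min_support
            for other in ((b,) if a == skill else (a,) if b == skill else ())]

print(sorted(suggest_followups("movie_tickets")))  # ['restaurant_reservation', 'ride_hail']
```

A production system would learn these associations from billions of interactions rather than counting pairs, but the packaging logic (movie tickets implying dinner and a ride) is the same.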
But reasoning is required to know what time to book the Uber. Taking into account your and the theater’s location, the start time of your movie, and the expected traffic, Alexa figures out when the car should pick you up to get you there on time.
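The Uber-timing step is back-of-the-envelope reasoning: work backwards from the showtime. A minimal sketch, where the travel time, traffic buffer, and early-arrival margin are hypothetical inputs rather than anything Amazon has described:

```python
from datetime import datetime, timedelta

def pickup_time(movie_start, travel_min, traffic_buffer_min=10, arrive_early_min=15):
    """Subtract travel time, a traffic buffer, and an early-arrival margin
    from the showtime to get when the car should arrive."""
    lead = timedelta(minutes=travel_min + traffic_buffer_min + arrive_early_min)
    return movie_start - lead

showtime = datetime(2021, 6, 4, 19, 30)
print(pickup_time(showtime, travel_min=20))  # 2021-06-04 18:45:00
```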
Prasad imagines many other scenarios that might require more complex reasoning. You could imagine a skill, for example, that would allow you to ask your Echo Buds where the tomatoes are while you’re standing in Whole Foods. The Buds will need to register that you’re in the Whole Foods, access a map of its floor plan, and then tell you the tomatoes are in aisle seven.
In another scenario, you might ask Alexa through your communal home Echo to send you a notification if your flight is delayed. When it’s time to do so, perhaps you are already driving. Alexa needs to realize (by identifying your voice in your initial request) that you, not a roommate or family member, need the notification—and, based on the last Echo-enabled device you interacted with, that you are now in your car. Therefore, the notification should go to your car rather than your home.
This level of prediction and reasoning will also need to account for video data as more and more Alexa-compatible products include cameras. Let’s say you’re not home, Prasad muses, and a Girl Scout knocks on your door selling cookies. The Alexa on your Amazon Ring, a camera-equipped doorbell, should register (through video and audio input) who is at your door and why, know that you are not home, send you a note on a nearby Alexa device asking how many cookies you want, and order them on your behalf.
To make this possible, Prasad’s team is now testing a new software architecture for processing user commands. It involves filtering audio and visual information through many more layers. First Alexa needs to register which skill the user is trying to access among the roughly 100,000 available. Next it will have to understand the command in the context of who the user is, what device that person is using, and where. Finally it will need to refine the response on the basis of the user’s previously expressed preferences.
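The three layers Prasad describes, skill resolution, context, and personal preference, can be mocked as a simple pipeline. Everything below, from the two-entry skill catalogue to the preference keys, is a hypothetical stand-in for systems Amazon has not detailed:

```python
# Layer 1 input: a tiny stand-in for Alexa's catalogue of roughly 100,000 skills.
SKILLS = {"book movie tickets": "movie_booking", "play music": "music"}

def resolve_skill(utterance):
    """Layer 1: figure out which skill the user is trying to access."""
    for phrase, skill in SKILLS.items():
        if phrase in utterance:
            return skill
    return "fallback"

def contextualize(skill, user, device):
    """Layer 2: interpret the command in context of who is asking and on what device."""
    return {"skill": skill, "user": user["name"], "device": device}

def personalize(intent, preferences):
    """Layer 3: refine the response using previously expressed preferences."""
    intent["cinema"] = preferences.get("favorite_cinema", "nearest")
    return intent

user = {"name": "alex", "preferences": {"favorite_cinema": "Downtown IMAX"}}
intent = contextualize(resolve_skill("please book movie tickets"), user, "echo-show")
print(personalize(intent, user["preferences"]))
```

The layered structure matters because each stage narrows the problem: a catalogue lookup first, then context, then preference, rather than one monolithic model deciding everything at once.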
Why It’s Hot: “This is what I believe the next few years will be about: reasoning and making it more personal, with more context,” says Prasad. “It’s like bringing everything together to make these massive decisions.”
“Today marks the official launch of Brave 1.0, a free open-source browser. The beta version has already drawn 8 million monthly users, but now the full stable release is available for Windows, macOS, Linux, Android, and iOS.
Brave promises to prioritize security by blocking third-party ads, trackers, and autoplay videos automatically. So you don’t need to go into your settings to ensure greater privacy, though you can adjust those settings if you want to.” (The Verge)
Why it’s hot:
1. As tech giants increasingly impinge on privacy and gobble up every imaginable byte of data about everyone in exchange for “a better user experience,” Brave is claiming to have found a non-zero-sum game that everyone (users, advertisers, and publishers) can benefit from:
Users get lots more control over the ads they see and get rewarded with tokens for allowing ads.
Advertisers get more precise and engaged audiences, so in theory, better ROAS.
Content creators get more control over their publishing and their income. And users can tip content creators on a subscription-style basis not unlike Patreon.
That’s the idea, at least.
2. Its look and feel is very similar to Chrome, so migrating to Brave may be smooth enough to encourage more people to abandon the surveillance-state-as-a-service (SSaaS) that Google is verging on.
Nick Saban, the Alabama football coach, has long been peeved that the student section at Bryant-Denny Stadium empties early. So this season, the university is rewarding students who attend games — and stay until the fourth quarter — with an alluring prize: improved access to tickets to the SEC championship game and to the College Football Playoff semifinals and championship game, which Alabama is trying to reach for the fifth consecutive season.
But to do this, Alabama is taking an extraordinary, Orwellian step: using location-tracking technology from students’ phones to see who skips out and who stays. “It’s kind of like Big Brother,” said Allison Isidore, a graduate student in religious studies from Montclair, N.J.
It also seems inevitable in an age when tech behemoths like Facebook, Google and Amazon harvest data from phones, knowing where users walk, what they watch and how they shop. Alabama isn’t the only college tapping into student data; the University of North Carolina uses location-tracking technology to see whether its football players and other athletes are in class.
Greg Byrne, Alabama’s athletic director, said privacy concerns rarely came up when the program was being discussed with other departments and student groups. Students who download the Tide Loyalty Points app will be tracked only inside the stadium, he said, and they can close the app — or delete it — once they leave the stadium. “If anybody has a phone, unless you’re in airplane mode or have it off, the cellular companies know where you are,” he said.
But Adam Schwartz, a lawyer for the Electronic Frontier Foundation, a privacy watchdog, said it was “very alarming” that a public university — an arm of the government — was tracking its students’ whereabouts.
“Why should packing the stadium in the fourth quarter be the last time the government wants to know where students are?” Schwartz said, adding that it was “inappropriate” to offer an incentive for students to give up their privacy. “A public university is a teacher, telling students what is proper in a democratic society.”
The creator of the app, FanMaker, runs apps for 40 colleges, including Clemson, Louisiana State and Southern California, which typically reward fans with gifts like T-shirts. The app it created for Alabama is the only one that tracks the locations of its students. That Alabama would want it is an example of how even a powerhouse program like the Crimson Tide is not sheltered from college football’s decline in attendance, which sank to a 22-year low last season.
The Tide Loyalty Points program works like this: Students, who typically pay about $10 for home tickets, download the app and earn 100 points for attending a home game and an additional 250 for staying until the fourth quarter. Those points augment ones they garner mostly from progress they have made toward their degrees — 100 points per credit hour. (A regular load would be 15 credits per semester, or 1,500 points.)
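The point math described above can be sketched in a few lines. This is an illustrative reconstruction of the figures in the article only; the real app's scoring rules may differ.

```python
# Sketch of the Tide Loyalty Points math described above (illustrative only;
# the actual app's scoring rules may differ).
ATTEND_POINTS = 100         # earned for attending a home game
FOURTH_QUARTER_BONUS = 250  # earned for staying until the fourth quarter
POINTS_PER_CREDIT_HOUR = 100

def game_points(attended: bool, stayed_through_q4: bool) -> int:
    """Points earned at a single home game."""
    if not attended:
        return 0
    return ATTEND_POINTS + (FOURTH_QUARTER_BONUS if stayed_through_q4 else 0)

def semester_points(credit_hours: int, games: list[tuple[bool, bool]]) -> int:
    """Academic-progress points plus points from (attended, stayed) game records."""
    academic = credit_hours * POINTS_PER_CREDIT_HOUR
    return academic + sum(game_points(a, s) for a, s in games)

# A student on a regular 15-credit load who attends 3 games and stays for 2:
total = semester_points(15, [(True, True), (True, True), (True, False)])
print(total)  # 15*100 + 2*350 + 1*100 = 2300
```

Note how heavily academic progress dominates: a full semester of credits is worth more than four games attended to the final whistle.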
The students themselves had no shortage of proposed solutions.
“Sell beer; that would keep us here,” said Harrison Powell, a sophomore engineering major from Naples, Fla.
“Don’t schedule cupcakes,” said Garrett Foster, a senior management major from Birmingham, referring to Alabama’s ritually soft non-conference home schedule, which this year includes Western Carolina, Southern Mississippi and New Mexico State. (Byrne has set about beefing it up, scheduling home-and-home series with Texas, Wisconsin, Oklahoma and Notre Dame, but those don’t start until 2022.)
In the meantime, there is also time for students to solve their own problems, which is, after all, the point of going to college. An Alabama official figured it would not be long before pledges are conscripted to hold caches of phones until the fourth quarter so their fraternity brothers could leave early.
“Without a doubt,” said Wolf, the student from Philadelphia. “I haven’t seen it yet, but it’s the first game. There will be workarounds for sure.”
Whether the app, with its privacy concerns, early bugs and potential loopholes, will do its job well enough to please Saban was not a subject the coach was willing to entertain as the sun began to set on Saturday. He was looking ahead to the next opponent: South Carolina.
Why It’s Hot:
Another example of a brand or institution using gamification to influence behavior, this one takes it a step further, pushing toward the edge of the privacy conversation and perhaps leading us all to consider what an acceptable "exchange rate" for personal information might be.
Rising suicide rates in the US are disproportionately affecting 10-to-24-year-olds, for whom suicide is the second leading cause of death after unintentional injuries. It's a complex and multifaceted topic, and one that leaves those whose lives are impacted wondering what they could have done differently to recognize the signs and intervene.
Researchers are fast at work figuring out whether a machine learning algorithm might be able to use data from an individual’s mobile device to assess risk and predict an imminent suicide attempt – before there may even be any outward signs. This work is part of the Mobile Assessment for the Prediction of Suicide (MAPS) study, involving 50 teenagers in New York and Pennsylvania. If successful, the effort could lead to a viable solution to an increasingly troubling societal problem.
Why It’s Hot
We’re just scratching the surface of the treasure trove of insights that might be buried in the mountains of data we’re all generating every day. Our ability to understand people more deeply, without relying on “new” sources of data, will have implications for the experiences brands and marketers deliver.
There is growing concern over online data and user privacy, but people’s data is also being collected from their living rooms via their televisions.
Samba TV is a company that tracks viewer information to make personalized show recommendations. Samba TV’s software has been implemented by a dozen TV brands, including Sony, Sharp, TCL and Philips. When people set up their Smart TVs, a screen urges them to enable a service called Samba Interactive TV, saying it recommends shows and provides special offers. It doesn’t, however, tell the consumer how much of their data it’s collecting.
Samba TV can track everything that appears on TV on a second-by-second basis. It offers advertisers the ability to base their targeting on whether people watch conservative or liberal media outlets or which party’s presidential debate they watched. The big draw is that Samba TV can identify other devices in the household that share the TV’s internet connection. In turn, advertisers can pay the company to direct ads to other gadgets in a home after their TV commercials play.
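The cross-device piece is the key commercial claim above: devices sharing the TV's internet connection get grouped into a household so ads can follow viewers onto their phones and laptops. Here is a hypothetical sketch of that kind of household grouping; the field names, data and logic are all illustrative assumptions, not Samba TV's actual implementation.

```python
# Hypothetical sketch of cross-device household matching: devices seen on the
# same household IP as the TV become retargeting candidates. Field names and
# logic are illustrative assumptions only.
from collections import defaultdict

devices = [
    {"id": "tv-001",     "type": "smart_tv", "household_ip": "203.0.113.7"},
    {"id": "phone-123",  "type": "phone",    "household_ip": "203.0.113.7"},
    {"id": "laptop-456", "type": "laptop",   "household_ip": "203.0.113.7"},
    {"id": "tv-002",     "type": "smart_tv", "household_ip": "198.51.100.9"},
]

def household_graph(devices):
    """Group device IDs by the IP address they connect from."""
    graph = defaultdict(list)
    for d in devices:
        graph[d["household_ip"]].append(d["id"])
    return dict(graph)

def retarget_candidates(devices, tv_id):
    """Non-TV devices sharing a connection with the given TV."""
    by_id = {d["id"]: d for d in devices}
    ip = by_id[tv_id]["household_ip"]
    return [i for i in household_graph(devices)[ip] if i != tv_id]

print(retarget_candidates(devices, "tv-001"))  # ['phone-123', 'laptop-456']
```

Even this toy version makes the privacy stakes plain: one opt-in screen on the TV implicates every device behind the router.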
Why it’s hot:
Most people think their data is only at risk while surfing the web, but that couldn’t be further from the truth. By agreeing to Samba TV’s terms, most people are deceived into thinking they’re only agreeing to a minor recommendation feature, when they’re actually giving the company access to all their devices.
Amazon is delivering more than just your packages these days – it is also delivering photos. As part of the company’s efforts to make it even easier for you to receive your online orders, Amazon has taken to photographing your doorway to show exactly where your packages are being deposited. This should cut down on customer confusion, and it also serves as photographic evidence of the successful delivery of your precious cargo.

The new picture-taking practice might also allow delivery folks to leave packages in more inconspicuous spots, like behind a bush or in a flower pot, as USA Today notes. Given the rise in package stealing, having a safe and somewhat surprising place to put your packages may not be such a bad idea, and being able to document where that place is makes things easier.

The new service is called Amazon Logistics Photo on Delivery and, according to a company spokesperson, is “one of many delivery innovations we’re working on to improve convenience for customers.” Amazon Logistics is itself one of those innovations: an Amazon-owned delivery network completely separate from carriers like FedEx or UPS. And while the Photo on Delivery program has been rolling out in batches for the last six months, it’s becoming more widespread. Folks who receive packages in the Seattle, San Francisco and northern Virginia metro areas will likely start receiving photographic notifications of their delivery’s safe arrival.
Of course, if the thought of someone taking a photo of your property doesn’t really sit all that well with you, don’t worry — Amazon is giving you a way to opt out of the feature, too. Simply head over to the Amazon website and navigate to the help and customer service tab. From there, you should be able to tell Amazon folks not to take an unapproved photo (assuming the photo-taking option is even available to you). But if you’re interested in seeing exactly where your packages are at the end of the day, Photo on Delivery may be the feature you have been waiting for.
Could this be data collection disguised as innovation? Or a way to cut down on false claims of lost packages and on package stealing? In any case, I have always wanted my packages placed more inconspicuously, and now they can be. Plus, why not gamify it? “Alexa, where’s my package?”
China is building the world’s most powerful facial recognition system with the power to identify any one of its 1.3 billion citizens within three seconds. The government states the system is being developed for security and official uses such as tracking wanted suspects and public administration and that commercial application using information sourced from the database will not be allowed under current regulations.
“[But] a policy can change due to the development of the economy and increasing demand from society,” said Chen Jiansheng, an associate professor at the department of electrical engineering at Tsinghua University and a member of the ministry’s Committee of Standardisation overseeing technical developments in police forces.
Chinese companies are already taking the commercial application of facial recognition technology to new heights. Students can now enter their university halls, travellers can board planes without using a boarding pass and diners can pay for a meal at KFC. Some restaurants have even offered discounts based on a machine that ranks customers’ looks according to an algorithm: customers with “beautiful” characteristics, such as symmetrical features, get better scores – and cheaper meals – than those with noses deemed “too big” or “too small.”
Why It’s Hot
Another weekly installment of balancing convenience and claims of safety with privacy and ethics. China is pushing us faster than most other countries to address this question sooner rather than later.
London-based street artist Ben Eine recently opened a pop-up shop for his work; instead of paying with cash, however, customers at the Data Dollar Store have to give up their personal information. The shop, opened in collaboration with cybersecurity company Kaspersky Lab, is meant to make customers reevaluate the information they often freely give away through the social media channels they use. “I’m concerned about how that information is used and why are we not rewarded for giving this information away,” Eine told Cnet. “Companies use that information and target us to sell products, to feed us information that we wouldn’t necessarily look at.”
The store was set up in a temporary space at the Old Street Underground station in London. When visitors entered the store, an employee showed them the purchasing options:
Giving up three photos or screenshots of your recent text or email conversations for a mug
Giving up the last three photos on your camera roll, or the last three text messages sent, for a T-shirt
Handing your phone over to an assistant, who selects any five photos to be publicly displayed on a large TV screen in the store for the next two days (you can attempt to barter over which photos they’ll pick, but the choice is up to them).
The store is meant to raise awareness about the sometimes risky informational exchanges that are happening all the time online. According to Kaspersky Lab, while 74 percent of survey respondents were unconcerned about data security, 41 percent were unprotected from potential threats and 20 percent have been affected by cybercrime.
The ripple of the video giant’s woes has grown so great that some predict the impact from major brands pulling spend could cost YouTube $750 million. Some seem happy when such a chink in the armor is exposed, but there are myriad stakeholders, each with their own perspective. With that amount of money – as well as brand reputation and confidence – at stake, there are going to be some winners and losers, and here they are:
These are the classic media players who started losing their lunch the second Google started owning the internet. One could imagine publishers grinning ear to ear, thinking, “Told ya so. Quality content isn’t so easy.” They can make a more convincing case that knowing the content and the audience actually is still important.
This issue could spur a shift back to high-quality, direct-bought content, where brands have the most control but in some cases pay a premium for it.
Anyone selling streaming ads is in a good position – Sling, Dish, Hulu, Roku and even the TV networks. Anyone with a digital video platform will be showing off their highly curated content, and that programming will look pretty good to anyone with a heightened interest in knowing exactly where their messages will appear.
Tech tools & 3rd Party Verification Partners
Brands have called for digital platforms like Facebook and Google to clean up the media supply chain and to be more transparent with data. The brand safety issue on YouTube is yet another bit of leverage to force more cooperation.
Independent third parties like Integral Ad Science, Double Verify, Moat and others will find more brands at their doorsteps looking for ways to ensure their ads appear near quality content.
One of the most important roles for agencies was helping brands make sure their ads didn’t show up in the wrong place by intimately knowing the targeting, brand safety protections and best practices of each channel. Well, now those services are increasingly valuable.
When the Trump administration makes further moves to undo net neutrality, as many anticipate based on current momentum repealing FCC consumer protections, Google’s ability to defend it in idealistic terms could be undermined by all the talk about serving ads on terrorist video.
It took a long time for programmatic to stop being a dirty word. Programmatic advertising was once considered the least controlled, lowest-quality ad inventory at the lowest price. In part, brands could start to pull back from blind, untargeted buying that lacks transparency.
YouTube has said that part of its solution is to implement stricter community standards, which could mean more bans and more ads blocked from videos, hurting creators’ earnings. The platform could be quicker to cut a channel at the smallest offense now that brands are watching closely.
Advertisers still on YouTube – this is a tricky one to classify and it’s too early to say. We’ll have to see how the video platform reacts over time to increasing pressures to allow verification partners and data trackers access within the garden’s walls.
Why It’s (Still) Hot:
This topic will continue to be important to the brands we represent, aim to represent and even those far from us that are faced with the same decision to either stay the course or sit it out. There is a lot of money moving around on media plans, a lot of POVs being routed and a lot of reps working overtime to reassure teams of buyers and planners that they are taking brand safety very seriously. Often it’s not the crisis that defines a company, but what it does in the aftermath. Some are hopeful that this is a definitive crack in the ‘walled garden’ – but even if it is not, we’re all hoping for a better, safer platform at the end of this tunnel: a world where once again clients can be irked by their premium pre-rolls prefacing water-skiing squirrels and dancing cat videos instead of terrorist rhetoric.
The rules, which forced internet service providers to get permission before selling your data, were overturned using the little-used Congressional Review Act (CRA). This is now being called “the single biggest step backwards in online privacy in many years” by those that spoke out against the repeal as well as the co-creator of the 1996 Telecommunications Act, Sen. Ed Markey.
This is a big deal and a pretty bad idea for anyone even remotely concerned with privacy and limiting the already questionable practices of telecoms and ISPs.
Assuming that this resolution passes through the House, which seems likely at this point, your broadband and wireless internet service provider will have free rein to collect personal data and sell it along to third parties. That information may include (but is not limited to) location, financial, healthcare and browsing data scraped from customers. As a result of the ruling, you can expect ISPs to begin collecting this data by default.
To play devil’s advocate for a second, let’s assume there is some upside for companies in this deregulation: “You want the entrepreneurial spirit to thrive, but you have to be able to say ‘no, I don’t want you in my living room.’ Yes, we’re capitalists, but we’re capitalists with a conscience,” says Sen. Ed Markey. But with the wireless and cable industries both operating as powerful oligopolies, consumers will be left with zero protection against price-gouging, no advocate for net neutrality and, as today demonstrates, far less control over their own data.
The broadband privacy rule, among other things, expanded an existing rule by defining a few extra items as personal information, such as browsing history. This information joins medical records, credit card numbers and so on as information that your ISP is obligated to handle differently, asking if it can collect it and use it.
There is Nothing Hot about this.
You can see the utility of the rule right away; browsing records are very personal indeed, and ISPs are in a unique position to collect pretty much all of it if you’re not actively obscuring it. Facebook and Google see a lot, sure, but ISPs see a lot too, and a very different set of data.
Why should they be able to aggregate it and sell it without your permission – to gain a competitive advantage, to profit by selling to other aggregators, to better target ads or to build a profile of you? The FCC thought not, and proposed the rule, part of which was rescinded by the new FCC leadership before it even took effect. *Sigh*
If consumers continue to lose trust in the platforms we employ to market our brands and begin to widely question their safety, security and data usage, we are in big trouble. We’re already challenged by a litany of brand safety concerns – bots, fraud, hackers, malware, viewability – and solutions aimed to mitigate yet limit marketing effectiveness (ex. ad blocking) continue to gain momentum. While some of this is good digital evolution (flashback to needing a pop-up blocker just to endure an average online session), the lack of consumer trust quickly erodes to lack of brand trust and soon those left behind willingly (or unknowingly) allowing their data to be sold on the open market might not be the ones worth reaching.
Sex toy maker We-Vibe has agreed to pay customers up to C$10,000 (£6,120) each after shipping a “smart vibrator” which tracked owners’ use without their knowledge.
The We-Vibe 4 Plus is a £90 Bluetooth-connected vibrator, which can be controlled through an app. Its app-enabled controls can be activated remotely, allowing, for instance, a partner on the other end of a video call to interact.
But the app came with a number of security and privacy vulnerabilities, allowing anyone within Bluetooth range to seize control of the device.
In addition, data is collected and sent back to Standard Innovation, letting the company know about the temperature of the device and the vibration intensity – which, combined, reveal intimate information about the user’s sexual habits.
Experts who want to pierce North Korea’s extreme secrecy have to be creative. One surprisingly rich resource: the country’s own propaganda, like the photo below.
Thanks to high-tech forensics, analysts and intelligence agencies are using photos to track North Korea’s internal politics and expanding weapons programs with stunning granularity.
The bomb, Kim’s outfit, the entourage and the missile have all been thoroughly dissected.
Analysts believe some details may have been deliberately revealed to demonstrate the country’s growing capabilities. Whereas most such images are doctored, if only to improve Mr. Kim’s appearance, they noticed that this was conspicuously unretouched — perhaps a message to the foreign intelligence agencies who conduct such analyses.
Analysts had long doubted the country’s grandiose claims, but as one put it, “2016 was about showing us all the capabilities that we had mocked.”
The Internet of Things is here. While it seems so inevitable and welcome, in many ways it is a nightmare waiting to happen. In an article on CIO.com and a report from Goldman Sachs, the authors outline the positive impact of IoT/Big Data on transportation and healthcare. One author, Bruce Horsham, offers a utopian – and rather naïve – view of the IoT: “Far from being a gimmicky way to collect more data on customers – and sell them more things – IoT technology has the power to transform industries, manage costs and deliver quality outcomes up and down the supply chain. Here’s how two industries are doing just that.”
While the advances IoT is driving in transportation are noteworthy, the risk/reward equation for healthcare is still to be determined. Consider this: Goldman Sachs recently estimated that Internet of Things (IoT) technology can save billions of dollars in asthma care alone. If a third of all healthcare spending goes to disease management, this is also where IoT can have the biggest, most immediate impact on cost and quality of care. The chart below shows how Goldman views it:
Example: Excellus, a health data provider, got hacked. If each individual health record is valued at $50, the theft is worth $500 million.
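A quick back-of-the-envelope check on that valuation: at $50 per record, a $500 million theft implies roughly 10 million exposed records. The per-record dollar value is the article's assumption, not a market price.

```python
# Back-of-the-envelope check on the Excellus figure above: at $50 per
# health record, a $500 million theft implies ~10 million records exposed.
value_per_record = 50          # assumed dollar value of one health record
total_value = 500_000_000      # stated value of the theft
records_implied = total_value // value_per_record
print(records_implied)  # 10000000
```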
Example: If true connectivity occurred, millions of users with implanted monitoring devices, remote insulin devices or remote heart monitoring could be hacked and killed. It is that ugly and that simple.
You know privacy is taking a back seat to technology when you can’t sunbathe on top of a wind turbine without being discovered. Kevin Miller, vacationing in Rhode Island, was taking his drone for a spin in the countryside when he maneuvered it to the top of a 200-foot tall wind turbine. Hoping to capture some cool sights of the turbine against the blue sky, the drone instead spied upon a man sunning himself. The man took it all in good stride, sitting up and waving to the drone.
Why It’s Hot
Privacy is becoming a quaint notion, going the way of bygone eras like the horse and buggy, handwritten notes and family dinner time. Soon there will be nowhere to hide, with drones everywhere, street cameras watching every move, and smartphones tracking our every movement. Creepy perhaps, but it seems like it’s becoming more accepted in society. Now you can’t even sunbathe on a wind turbine.
Facial recognition might make tagging photos a lot easier on Facebook, but let’s face it: there are a lot of unresolved long-term implications to these increasingly integrated technologies. Rather than put themselves at risk, some people are choosing to opt out altogether.
How do you go dark when your data is collected without your knowledge or consent?
It’s not a catch-all, but start with protective eyewear. The Privacy Visor, first unveiled in 2012, is a pair of glasses that reflect light in a way that confuses facial-detection software. More specifically, the glasses disrupt the patterns of light and dark spots around the eyes and nose that allow computer algorithms to recognize that a face is even present in the frame. Although the glasses don’t guarantee complete privacy, project lead Isao Echizen says tests have shown them to work over 90% of the time.
As camera technology has adapted to more closely mimic human sight, The Privacy Visor has been updated to better shield identities even as the scanning technology advances. Echizen is pushing forward with the concept’s commercialization, targeting mid-next year to begin production of more fashionable and effective versions.
Why It’s Hot
The Privacy Visor is a product that exists to treat the symptom of a much larger problem: maintaining control of one’s own identity. It might appeal to those of us who are paranoid, and to those who quite legitimately want to control the information they create about themselves, online and out in the world. In fact, some privacy advocates go so far as to say the Privacy Visor is a detriment to the cause, because its very existence in effect “gives in” to the enormous pressure to amass databases of personally identifiable information. So the bigger question becomes less about the specifics of this product, and more about the broader culture of acceptance we’re creating.
There are connected cars and then there are cars that can be ‘disconnected,’ at least from the driver.
In a somewhat on-the-edge experiment published this week, two well-known hackers tapped into a moving Jeep and remotely operated the radio, air conditioning and windshield wipers.
Then, while the car was moving at 70 miles an hour, the car’s acceleration was remotely shut down, leaving the volunteer driver, a writer from Wired, slowing on a busy highway.
Aerial view of a freeway interchange, LA
The two hackers, Charlie Miller and Chris Valasek, have been conducting car-hacking research to determine whether an attacker could gain wireless control of vehicles via the internet.
They created software code that could send commands through the Jeep’s entertainment system to its dashboard functions, brakes, steering and transmission from a laptop many miles away, according to the first-person account in Wired.
Why It’s Hot:
The timing of the widely reported Jeep experiment is interesting in light of a new privacy bill in Congress that would stop carmakers from using data collected from vehicles for advertising or marketing.
Drivers shouldn’t have to choose between being connected and being protected. Controlled demonstrations show how frightening it would be to have a hacker take over controls of a car. We need clear rules of the road that protect cars from hackers and American families from data trackers. When consumers feel safe with technology it flourishes but when the opposite is true it can become an epic failure.
But the Jeep episode shows that sending unwanted ads to drivers on the move may be the least of the issues. The silver lining in all of this is that the vulnerabilities in the coming interconnected world of things are being identified and highlighted.
As you might imagine, Fiat Chrysler has issued a software fix that Jeep owners can download and install themselves or ask a dealer to install. There will be more bumps in the road; as the vulnerabilities get identified, they can be addressed.
Privacy campaigners and open-source developers are up in arms over the secret installation of Google listening software. The software, Chrome’s “OK, Google” hotword detection, is meant to detect the words that make the computer respond when you talk to it. However, many users have complained about its automatic activation without their permission in Chromium, the open-source browser on which Google Chrome is based. This effectively means their computers were stealthily configured to send what was being said in the room to somebody else, a private company in another country, without their consent or knowledge.
Open-source advocates argued that Google was downloading a “black box” onto their machines that was not open source and therefore could not be verified to do only what it claimed. Google has since pulled its listening software from the open-source Chromium browser.
Why is it hot?
Voice search functions have become an accepted feature of modern smartphones, smart TVs and now browsers, but they have raised privacy concerns. While most services require users to opt in, many have questioned whether their use, which requires sending voice recordings to remote servers, potentially exposes private conversations held within the home.
Yesterday’s DataSift Facebook webinar revealed a huge breakthrough for healthcare marketers. Through their partnership, Facebook is providing aggregated topic data to marketers on DataSift’s API. This represents the first time we’ve been able to pull back the curtain and see what Facebook users are talking about in their private conversations.
Now, before the Facebook Privacy conspiracy theorists get wound up, the data coming through DataSift’s API is:
Anonymized with no personal data available
Limited to adults 18+
Only available on topics with 100+ posts in any query
Until now, the only way to get insight into what was happening on Facebook was to engage in the channel. Even then, marketers only got insight into what happened on and around their own pages, so of course that was a very limited and self-selected audience.
Now, marketers will be able to get insight into volumes of conversation on Facebook using four different criteria:
Category (the example used was automotive)
Brand (for example, BMW, Ford, and Honda)
Media (defined by URLs for properties like BBC or CNN)
Feature / topic data (as defined by Boolean queries)
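The privacy guardrails listed earlier (anonymized, 18+, 100-post minimum) amount to a volume threshold applied before any aggregated counts are released. Here is a hypothetical sketch of that gating step; the data, function names and thresholds-as-code are assumptions, and DataSift's actual API works differently.

```python
# Illustrative sketch only: how the stated minimum-volume constraint might
# gate an aggregated topic-data query. Data and names are assumptions;
# DataSift's real API is not shown here.
MIN_POSTS = 100  # per the article, queries need 100+ posts to return data

# Hypothetical anonymized, aggregated counts by (category, brand) for one query.
aggregated = {
    ("automotive", "BMW"):   1240,
    ("automotive", "Ford"):  3310,
    ("automotive", "Honda"):   42,   # below threshold - must be withheld
}

def releasable(aggregated, min_posts=MIN_POSTS):
    """Return only topic counts that clear the minimum-volume threshold."""
    return {k: v for k, v in aggregated.items() if v >= min_posts}

print(releasable(aggregated))
# {('automotive', 'BMW'): 1240, ('automotive', 'Ford'): 3310}
```

Suppressing low-volume results is a standard anonymization tactic: with too few posts behind a number, individual users could be re-identified from the aggregate.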
Mattel’s new “Hello Barbie” hits store shelves later this year and has more tricks up her sleeve than just saying hello. With the press of a button, Barbie’s embedded microphone turns on and records the voice of the child playing with her. The recordings are then uploaded to a cloud server, where voice detection technology helps the doll make sense of the data. The result? An inquisitive Barbie who remembers your dog’s name and brings up your favorite hobbies in your next chitchat.
Why It’s Hot:
The overall concept is interesting — a toy that listens, learns your child’s preferences and adapts accordingly, a true demonstration of personalization. The obvious downside is that nobody really knows what is being done with all that data (collected from kids!), which places a serious level of accountability on the company. For example, children confide in their toys, so what if a child admits to being hit by a parent? Should Mattel be on the hook to report it?
With each new technology seems to come new concerns about privacy. New “privacy glasses” by software firm AVG seek to minimize worries over facial recognition on social media.
The glasses use infrared LEDs surrounding your eyes and nose to essentially “mess with” smartphone cameras when taking pictures. By distorting the light around one’s eyes and nose, it prevents facial recognition technology that is built into most social media platforms from identifying people.
Why It’s Hot | While the glasses are only in prototyping phase, and still just look like a novelty party accessory, it’s not surprising that companies are finding ways to prevent facial recognition technology. With each new innovation comes more worries about “big brother” watching over us, and these glasses are just a step in the direction of privacy over convenience. It’ll be interesting to watch which end of that struggle wins out over time.
As Twitter strives to provide an even more personalized experience for their users, the social media platform recently announced they will be tracking what apps mobile users have downloaded. With this new capability, Twitter can promise better targeting within their advertising as well. They can now enable advertisers to target based on whether users have installed their app, registered after downloading, etc.
For those of us who don’t want personal information handed to advertisers for hyper-relevant advertisements and Twitter recommendations, these privacy settings are adjustable. Users can opt out of Twitter’s tracking. Read more via VentureBeat, and adjust your settings in the Twitter app.
Why It’s Hot | With the world of social media increasing the amount of personal data available online, the privacy discussion is more popular than ever. Beyond just using the information users give, now apps are including tracking mechanisms that track your mobile behavior. While research shows that (in general) Millennials don’t mind tracking if it means their experience will be more personalized or help them to find products more easily (eMarketer), Millennials are not the only mobile consumers. And so the privacy vs. personalization debate continues…
Have you ever casually agreed to plans with a friend – “Sure, let’s grab drinks next Thursday after work” – and then totally forgotten about it? Google is coming to the rescue to prevent this from ever happening again (and to make us better humans) with its updated mobile app. The catch? The tech giant will read your email.
The new feature of Google Now will automatically keep track of informal conversations in users’ Gmail inboxes so that they aren’t missed. The app will continuously scan through the inbox searching for “half-baked plans” that users haven’t added to their calendars and will ask the user about them (as in the photo above).
Simply put, if you get an email asking if you’d like to meet for drinks next Thursday at 7pm, Google Now will recognize it as an invitation. You’ll then be prompted to add it to your calendar.
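At its simplest, that kind of detection amounts to spotting an invitation phrase plus a day/time expression in an email body. Here is a toy, stdlib-only sketch of the idea; it is purely hypothetical and nothing like Google’s actual implementation, which presumably relies on far more sophisticated natural-language processing:

```python
import re

# Hypothetical sketch: flag an email as a "half-baked plan" if it
# contains an invitation verb plus a day (and optionally a time).
INVITE_RE = re.compile(
    r"\b(meet|grab|drinks|lunch|dinner|coffee)\b.*?"
    r"\b(next\s+)?(monday|tuesday|wednesday|thursday|friday|saturday|sunday)\b"
    r"(\s+at\s+\d{1,2}(:\d{2})?\s*(am|pm)?)?",
    re.IGNORECASE | re.DOTALL,
)

def looks_like_invitation(email_body: str) -> bool:
    """Return True if the text resembles an informal invitation."""
    return INVITE_RE.search(email_body) is not None

print(looks_like_invitation("Want to meet for drinks next Thursday at 7pm?"))  # True
print(looks_like_invitation("Here is the quarterly report."))                  # False
```

A real assistant would then extract the date and time and offer to create a calendar event, which is exactly the prompt Google Now surfaces.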
Why It’s Hot: We’re constantly talking about new apps and gadgets that essentially do our thinking for us and can help us remember things we’ll likely forget. It seems that we may soon be able to literally tap into our memory via some new device.
Google’s new feature is just one of many updates that are designed to anticipate the user’s needs and handle tedious mobile tasks without the user having to swipe a finger.
On the flip side of convenience is, of course, the issue of privacy. As with so many suggestive, helpful, and perhaps slightly invasive apps, this is a major concern. As Huffington Post reports, Google has been criticized for its increasing “‘surveillance’ capabilities.” We’re seeing more and more that the convenience of having one’s device make life easier comes at the price of privacy.
One day you’re Alex, a 16-year-old working at Target. The next day, you’re a trending hashtag with 730,000 Twitter followers and 2.3 million Instagram followers. His fame happened literally overnight, after a girl posted to her Twitter account a photo of Alex that she had seen on Tumblr, adding the caption, “YOOOOOOOOOO.”
Alex was quickly overwhelmed with screaming girls looking to take selfies with him, a visit to the Ellen DeGeneres show, calls from modeling and talent agents and paparazzi waiting outside his door. But he was also quickly targeted with tweets demanding he be fired from Target, insults denigrating his looks (e.g. “Alex from target is so damn ugly”) and even death threats (“Alex from target, I’ll find you and I will kill you”). Social security numbers, checking accounts and phone records were leaked, and the family has reached out to local police, the school, and Target to put together security plans for Alex’s safety. The family has even hired the founder of a teenage-centric, selfie-sharing app to help manage the online assaults.
Why It’s Hot
In today’s social environment, fame can happen literally overnight, even when you are not looking for it. It can be fun, satisfying, exhilarating and a boost to the ego. But it can also be dangerous and threatening. Alex wasn’t seeking fame, but it came to him. At issue is how we protect the privacy of people who don’t want to be celebrities when all it takes is one tweet to make you a star.
Thanks to social media, privacy may soon belong in a museum, especially as technology weaves itself into people’s routines and daily lives, and ethical boundaries are forgotten along the way.
Think about this: Those born today will not only have an intimate record of their day-to-day existence preserved online, but they’ll also grow up in a world where shedding their privacy is not only the norm, but expected.
Enter: New Born Fame, a set of interactive plush toys that let babies take selfies and automatically upload them to their Facebook, Twitter and other profiles, straight from the cradle.
If you think this is crazy, just wait for the out-of-focus selfies and real-time shots of poop.
Developed by Laura Cornet, a Netherlands-based graduate student, this social interface takes the form of soft toys that hang down over the cot and interact with the baby’s social profiles, which are set up by the parent(s) and synced beforehand.
One of the toys features a camera and a GPS locator. Pull the small bird, and the baby sends out a randomly generated tweet. Pull the Facebook logo, and they automatically update their status along with their current location.
Pull the camera toy, and a video is recorded and uploaded to Instagram. The ball-shaped toy also lets babies take a selfie and upload it to all platforms.
Although Cornet developed the project more as an investigation into the ethics of social media, and to get parents thinking about the information they post about their children online without their consent, she admits there is probably also a commercial demand for toys like this.
Why It’s NOT Hot
To name just one reason: the data will remain available for anyone (even, potentially, employers) to see long after these babies have grown up, and will probably come back to haunt them.
Between hashtags and video posts, the lives and actions of thousands (purrrrhaps millions) of cats have been documented online. And because uploaded photos usually include metadata with geotags giving longitude and latitude, it’s possible to know where those photos were taken and where those millions of cats live. But perhaps more ominously, it’s also a way to identify the addresses of the owners of all those cats.
To show how people and businesses can use this available data to further erode privacy, Florida State University art professor Owen Mundy aggregated the metadata of photos uploaded to Flickr, Instagram and other platforms and used a supercomputer to map the locations of about 1 million cats. (Each photo had been tagged with the word “cat.”) He put this information on a website, along with photos and maps. Time called it a Tinder for cats, with a seemingly endless stream of kitten photos for cat fans who stalk the Internet looking for cute kitty pictures. Although specific street addresses are not shown, the displayed maps hint at each cat’s location.
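The geotag itself is ordinary EXIF metadata: the camera records latitude and longitude as degree/minute/second fractions, and converting those to the decimal coordinates a mapping tool plots is simple arithmetic. A minimal, stdlib-only sketch (hypothetical helper name, not the project’s actual code):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style (degrees, minutes, seconds) GPS value
    to a signed decimal coordinate. `ref` is 'N'/'S' or 'E'/'W'."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # South and West hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Example: a geotag as it might appear in an uploaded photo's EXIF data
lat = dms_to_decimal(40, 26, 46.8, "N")
lon = dms_to_decimal(79, 58, 55.2, "W")
print(round(lat, 4), round(lon, 4))  # 40.4463 -79.982
```

Once every photo tagged “cat” yields a coordinate like this, plotting a million of them on a map is the easy part, which is exactly the point the project makes about how casually this data leaks.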
Why It’s Hot
Aside from being cute, every uploaded photo (plus your device) contains a lot of private data that can be used to track individuals and give away other information. Many people don’t realize how easily privacy is breached, often in non-obvious ways. And regulations often don’t evolve as quickly as new technologies. So it’s beware to the consumer, even when doing something as harmless as posting cat photos.
In 2012, Facebook conducted a study with Cornell University and the University of California, San Francisco that manipulated users’ News Feeds in order to test their emotional reactions. Participants were not notified that they were part of a study. The study found that people who saw fewer positive posts were more likely to write negative posts, and the converse was true when they were exposed to more positive posts. Information-gathering on Facebook is not illegal, and users sign away many privacy rights when agreeing to use the service. Despite its apparent legality, Facebook users have expressed discontent with the company’s “sneaky research.”
Why It’s Hot…
The issue of privacy and ethics on social networks continues to be highly controversial. Facebook manipulates users’ News Feeds all the time, and the underlying issue lies with Facebook, not this particular study. Facebook’s 1.2 billion users appear to be OK with the interest-level data collection that comes with the advertising that makes the site free to use, but manipulating their emotions appears to go too far. Bringing emotions into the equation raises a larger issue: targeted advertisements based on a person’s mood. Many users are angry because no one wants to feel manipulated, and the overarching question of privacy on the internet, and on social media specifically, is a recurring one for Facebook. Companies like Facebook, Google and Yahoo have access to immense amounts of personal data and face difficult decisions about whether or not to use, or manipulate, that information.