Some teens today, though born in 2000 or shortly after, long to go back in time and experience an era just out of their grasp. Like so many young people before them, they feel they missed the best time by just a decade or two.
To relive the Y2K era, they run Instagram accounts like @2000sluv, @2000sjournals, @00sfreak, and @y2ktrashy, all stocked with content culled from the early ’00s: vacuumed-up paparazzi pics, screen grabs, magazine pullouts, and catalog clips. They worship early-2000s style and maintain an encyclopedic knowledge of 2000s pop culture. On YouTube, they vlog themselves “living in the 2000s” for a day or a weekend.
Why it’s hot:
“The early 2000s were all about having fun and expressing your individuality. In today’s world, everything and everyone is taken so seriously.”
It’s interesting how these Y2K fans, who were babies at the time or not even born yet, use their perception of the 2000s to cope with the anxiety of being a teenager today.
Hard-workers is a digital community for working-class Americans to express their professional selves, learn from each other, and, most importantly, take full pride in the hard work they do.
Through the app, people can:
– Easily post work-related photos, videos, and text
– Join their professional community and discuss relevant topics
– Ask for and get advice when needed
– Check out what is happening in adjacent occupations

The app also features:
– Interviews with real people, a.k.a. other hard-workers
– News about things that impact hard-workers
– Advice on relevant questions, such as how to save money, how to find a side gig, how to prepare for a new job, and where to go for education
The news: Google has signed a deal with Ascension, the second-largest hospital system in the US, to collect and analyze millions of Americans’ personal health data, according to the Wall Street Journal. Ascension operates 150 hospitals in 21 states.
“Project Nightingale”: Eventually, data from all of Ascension’s patients (birth dates, lab results, diagnoses, and hospitalization records, for example) could be uploaded to Google’s cloud computing systems, with a view to using artificial intelligence to scan electronic records and diagnose or identify medical conditions. The project, code-named Project Nightingale, began in secret last year, the WSJ reports. Neither patients nor doctors have been notified.
A touchy topic: Inevitably, there are worries. The company took control of the health division of its AI unit, DeepMind, back in November 2018, and people at the time warned it could pave the way for Google to access people’s private, identifiable health data. Ascension employees have raised concerns about how the data will be collected and shared, both technologically and ethically, the WSJ reports.
Eduardo Conrado, EVP of Strategy and Innovations at Ascension, released a statement challenging the reporting, claiming the work has been anything but secret:
Ascension’s work with Google has been anything but secret. In fact, Google first announced its work with us in July, on its Q2 earnings call. Acute care administrative and clinical leaders across Ascension have been informed of the work, enterprise-wide webinars have been held, and the clinical leaders of our employed physician group have been informed in detail about the project. In our deployment sites, front-line nurses and clinicians have not only been informed but have actively participated in the project.
A whistleblower who worked on the project, however, tells a different story:

After a while I reached a point that I suspect is familiar to most whistleblowers, where what I was witnessing was too important for me to remain silent. Two simple questions kept hounding me: did patients know about the transfer of their data to the tech giant? Should they be informed and given a chance to opt in or out?
The answer to the first question quickly became apparent: no. The answer to the second I became increasingly convinced about: yes. Put the two together, and how could I say nothing?
Why it’s hot: This isn’t the first time Google has been in hot water over health data. Back in 2017, Google DeepMind received 1.6 million identifiable personal medical records on an “inappropriate legal basis,” according to a letter written by Fiona Caldicott, the UK’s National Data Guardian.
As brands like Google, Amazon, Apple (and even Uber) move into healthcare, new questions arise around data and confidentiality. It also forces us to rethink: “Who do we have an actual relationship with: my doctor, my insurance company, or a cloud service provider?”
For years, sports at the Olympic, professional, and collegiate levels have been growing more data-driven, with decisions ranging from recruitment and training to strategy and in-game tactics relying on statistics and a dynamic set of variables including personnel, game conditions, and scenarios.
For teams, AI brings the promise of big operational improvements:
AI can improve the value of cross-training by team role/position by 9% to 32%.
Up to 65% of long-term cognitive dysfunction due to concussions is preventable through the use of AI.
AI in sports can improve individual and team performance by an average of 17% and 28% respectively.
Amazon and The Cloud Wars

The enterprise cloud is without question the foundation for digital business; companies in every industry and in every region of the world are betting their futures on the ability of cloud providers to help them remake themselves as digital powerhouses that can move at the speed of their customers, innovate on the fly, and deliver world-class customer experiences.
AI and real-time insights are key battlegrounds in what journalists have called “The Cloud Wars” (spoiler: Microsoft is winning).
The Fan Experience

As competition heats up, cloud providers like IBM, Microsoft, and AWS are stepping up their game in another area: the fan experience. They are pursuing deals with major sports and entertainment institutions like the NFL, Wimbledon, and The Grammys to give fans an insider perspective on their favorite events.
“Insights Powered by AWS” Platform
Arguably, AWS is doing a better job of positioning itself as the standard in AI. While Microsoft and IBM provide inessential applications, like Surface tablets for the NFL or Watson-analyzed highlight footage for Wimbledon, AWS communicates more clearly how teams are using AWS and machine learning to transform how sports are analyzed, played, coached, and experienced.
Why it’s hot: As the drama between Microsoft and Amazon continues to unfold, tangible, relevant, and well-told use cases are key to cementing top-of-mind awareness, driving relevance, and, ultimately, consideration.
On a cold, sunny October day on the outskirts of Copenhagen, Denmark, a group of men dressed in black gathers outside Brondby Stadium to shoot off a couple of rockets, raise their fists and shout about how the home team will soon beat — and beat up — the visiting archnemesis, FC Copenhagen.
Police are out in force, riot helmets at the ready. Brondby-Copenhagen matches have a history of leading to vandalism, arrests and general mayhem.
Once the men’s chant is over, the group moves toward the stadium’s entrance, where the men — along with 21,000 other fans — are asked to remove masks, hats and glasses so a computer can scan their faces. The scans will be compared against a list of roughly 50 banned troublemakers and will be used to determine whether the spectators will be allowed in.
No one is stopped on this day. But since the system’s launch in July, it has caught four people on the blacklist, who were then turned over to police.
GDPR and Facial Recognition
The use of facial recognition is best known in China, but it is also used in countries including Israel, the U.S. and the United Arab Emirates. In Europe, biometric data is protected under the General Data Protection Regulation, arguably the world’s most comprehensive privacy law, which went into full effect in mid-2018. Several institutions that had been using facial recognition technology prior to that — including a school in Sweden and a police force in Wales — have since been challenged, with differing results.
What’s unique about the use of facial recognition in Brondby is that, according to experts, it appears to be one of Europe’s first large-scale private systems created and vetted in the era of GDPR.
Implementation and oversight
According to the Brondby soccer club’s security chief, Mickel Lauritsen, getting this system approved was a long process. It started almost five years ago, when the team kept getting fined for lax stadium security but felt hamstrung in its attempt to make improvements.
At first, stadium stewards were allowed to see only descriptions of the troublemakers they were expected to pick out of the crowd. Then they won approval to use photos. With that approval in place, the team launched its request for a facial recognition system and began nearly three years of negotiation involving the Danish Data Protection Agency, team lawyers, fan input and system developer Panasonic.
With the system in use now, Lauritsen says, he’s very careful to stay within its prescribed boundaries. That means pictures of those on the watchlist are entered into the system on game day and are deleted again at the end of the day. The system is not connected to the Internet. There’s a cross-check to avoid false positives.
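As a thought experiment, here is how the constraints Lauritsen describes might look in code. This is a minimal sketch with hypothetical names throughout (`encode_face`, `capture_entry_photos`, `similarity`, `notify_steward`), not Panasonic’s actual system:

```python
# Sketch of the game-day watchlist flow described above: photos are loaded
# only on match day, every match is cross-checked by a human steward, and
# all data is purged when the day ends. The system stays offline by design.

from dataclasses import dataclass

MATCH_THRESHOLD = 0.85  # assumed similarity cutoff; the real value is not public

@dataclass
class WatchlistEntry:
    name: str
    embedding: list[float]  # face embedding computed from the banned person's photo

def run_match_day(watchlist_photos, capture_entry_photos, encode_face,
                  similarity, notify_steward):
    # 1. Load the watchlist only on game day; it is not stored between matches.
    watchlist = [WatchlistEntry(name, encode_face(photo))
                 for name, photo in watchlist_photos]
    try:
        # 2. Scan each spectator at the gate.
        for photo in capture_entry_photos():
            embedding = encode_face(photo)
            for entry in watchlist:
                if similarity(embedding, entry.embedding) >= MATCH_THRESHOLD:
                    # 3. Cross-check: a human confirms before anyone is
                    # stopped, to avoid acting on a false positive.
                    notify_steward(entry.name, photo)
    finally:
        # 4. Delete everything at the end of the day, even if the run fails.
        watchlist.clear()
```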
Lauritsen says that at one point, the police asked him to enter a suspect’s picture into the system to help with an investigation. He said no.
“I know if I misuse the system, I’m not allowed to do anything with it going forward, and then we’ll be restricted in what we can do even further than we are now,” he explains.
Why it’s hot:
As facial recognition technology improves and scales across the private and public sectors, it is important to start having conversations about what these systems can and can’t do. Where is the line between protecting citizens’ privacy and using large-scale systems to improve the way we live?
In the U.S., no federal laws explicitly regulate facial recognition technology (yet), so discussion has been happening primarily at the state and local levels and usually focuses on the public sector. San Francisco, for example, recently banned the use of automated facial recognition by the police and city agencies. Some advocacy groups are even turning to fashion, such as clothing designed to confuse the algorithms, as a way to reclaim privacy.
Back in July, Instagram rolled out a new AI-powered feature that notifies people when their comment may be considered offensive, before it’s posted. This intervention gives people a chance to reflect and undo their comment, and prevents the recipient from ever receiving a notification about it.
From early tests of this feature, Instagram found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.
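Conceptually, the intervention is a simple gate in the comment-submission path. Here is a minimal sketch, where `looks_offensive`, `ask_author_to_reconsider`, and `deliver` are hypothetical stand-ins for Instagram’s classifier and UI:

```python
# Sketch of the nudge flow: the comment is screened before posting, and the
# author gets a chance to reconsider before it is ever delivered.

def submit_comment(text, looks_offensive, ask_author_to_reconsider, deliver):
    if looks_offensive(text):
        # Intervene before the recipient is ever notified.
        revised = ask_author_to_reconsider(text)  # edited text, or None
        if revised is None:
            return  # the author withdrew the comment entirely
        text = revised
    deliver(text)
```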
Taking it up a notch.
Teenagers also told Instagram researchers at the time that they often didn’t block bullies on the platform, for two reasons:
First, blocking a bully — something that the bully is aware of — can actually escalate the situation and result in more abuse on the platform or elsewhere.
Second, when you block a bully, you render yourself invisible, but at the same time you give up your ability to see what the bully is doing. To counter abuse, you often have to know what is happening.
Instagram Restrict addresses these concerns by taking a more nuanced approach. If someone is bullying you on the platform—posting mean comments on your photos or sending you offensive messages—the new feature allows you to restrict the person’s actions:
Once you’ve restricted a user, comments on your posts from that person require your approval. You can see the comment and so can the bully, but unless you choose to release it, no one else can.
Messages from the restricted user will be sent to a separate inbox, and you can choose whether or not you want to read them. If you do, the bully won’t receive a read receipt.
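One way to picture the mechanics: a restricted user’s comment enters a pending state visible only to the two parties involved, so the bully never learns anything changed. The sketch below is a guess at the shape of that logic, not Instagram’s implementation:

```python
# Sketch of Restrict's comment gating: a restricted user's comment is held in
# a pending state that only the commenter and the account owner can see,
# until the owner chooses to release it.

from enum import Enum, auto

class Visibility(Enum):
    PENDING = auto()  # visible only to the commenter and the account owner
    PUBLIC = auto()   # visible to everyone

class User:
    def __init__(self, name):
        self.name = name
        self.restricted = set()

class Comment:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.visibility = Visibility.PUBLIC

class Post:
    def __init__(self, owner):
        self.owner = owner
        self.comments = []

    def add_comment(self, comment):
        # Comments from restricted users start out pending approval.
        if comment.author in self.owner.restricted:
            comment.visibility = Visibility.PENDING
        self.comments.append(comment)

    def visible_to(self, viewer):
        # A pending comment is shown only to the bully and the target, so the
        # restricted user never learns they have been restricted.
        return [c for c in self.comments
                if c.visibility is Visibility.PUBLIC
                or viewer in (c.author, self.owner)]

alice, bob = User("alice"), User("bob")
alice.restricted.add(bob)
post = Post(owner=alice)
post.add_comment(Comment(bob, "mean comment"))
assert post.visible_to(User("carol")) == []  # hidden from everyone else
```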
“You need, as a target of bullying, to see what the actor is actually doing; Restrict gives a target of bullying a bit more power over the experience.”
— Adam Mosseri, Head of Instagram
Why it’s hot:
What if your bully doesn’t know that they’re bullying you?
By age 5, the average kid has 1,500 photos of themselves online.
90% of American children have a social media presence by age 2.
By 2030, parents sharing about their kids online will account for two-thirds of identity fraud.
Follower is a service that grants you a real-life Follower for a day: a no-hassle, unseen companion. Someone who watches, someone who sees you, someone who cares.
In order to be followed, you answer two questions:
Why do you want to be followed? Why should someone follow you?
If you are selected, you are given an app to download, and you wait. You don’t know when it will happen. The following lasts one day. At the end, you are left with one photo of you, taken by your Follower. The Follower stays just out of sight, but within your consciousness.
What is the relationship between attention and surveillance? There are sites where you can buy online followers; $10 can get you 1,000. We have this intense desire to be seen, to feel connected. But is that desire really fulfilled by watching your follower count tick upward? Could a real-life follower provide something more meaningful or satisfying?
Why it’s hot:
There is little doubt we live in a surveillance state. And yet we continue to engage with faceless followers to assuage an intense desire to be seen. How do we reconcile these competing concepts? Follower flips the interface of app and user to renegotiate these relationships and find optimism in what cultural pundits have deemed a doomsday scenario.
The policy update, which will also be applied to Facebook, includes the removal of any content that makes a “miraculous” claim about a diet or weight-loss product while linking to a commercial offer such as a discount code or affiliate link.
As influencer culture has crept its way into almost every business model, it has unsurprisingly increased the promotion of dieting products, including “appetite-suppressing” meal supplements. Inevitably, this has raised concerns about the impact that diet and detox content can have on young people, especially on their mental health and body image.
In 2017, a study by the Royal Society for Public Health (RSPH) titled “#StatusOfMind” found that Instagram rated as the worst social media platform for its impact on young people’s mental health.
Why it’s hot: It’s only now that we’re admitting the detrimental effects toxic posts like weight-loss promotions can have on people’s physical and mental health. It’s reassuring to see Instagram and Facebook take responsibility for how their platforms have been exploited. But it’s yet another reminder that platforms need to be continually responsive to changes in society and media.
AGNES is a suit worn by students, product developers, designers, engineers, marketers, planners, architects, packaging engineers, and others to better understand the physical challenges associated with aging. Developed by AgeLab researchers and students, AGNES has been calibrated to approximate the motor skills, vision, flexibility, dexterity, and strength of a person in their mid-70s. AGNES has been used in retail, public transportation, home, community, automobile, workplace, and other environments.
The MIT AgeLab has homed in on one such paradox in particular: the profound mismatch between products built for older people and the products they actually want.
Only 20% of people who could benefit from hearing aids seek them out. Just 2% of those over 65 seek out personal emergency response technologies—the sorts of wearable devices that can call 911 with the push of a button—and many (perhaps even most) of those who do have them refuse to press the call button even after suffering a serious fall.
In every example, product designers thought they understood the demands of the older market, but underestimated how older consumers would flee any product giving off a whiff of “oldness.” After all, there can be no doubt that personal emergency response pendants are for “old people,” and as Pew has reported, only 35% of people 75 or older consider themselves “old.”
Why it’s hot: Tools like AGNES help us truly think differently about the needs of different people.
“Ranch is a rising iconic flavor in food and culture today,” says Jacquie Klein, director of the brand studio that oversees Hidden Valley marketing.
“It’s found on more than half of restaurant menus and in 75 percent of homes in the U.S. It’s really embedded in our culture. We have more than 5 million Twitter conversations a year. We always love to see Hidden Valley Ranch fountains at weddings and mini-kegs at backyard barbecues.”
Hidden Valley has come out with some kind of oddball holiday collection the past two years, generating 3 billion social and other media impressions in total, Klein says. The brand also got some traction on Twitter after the Instagram account Pop Tart A Day posted an image of a ranch-flavored Pop-Tart, USA Today reported.
Why it’s hot:
Who knew a brand like Hidden Valley could keep its ear to the ground and pay attention to under-the-radar cultural shifts among young consumers? It’ll be interesting to see how far Hidden Valley can play this out in the coming years.
Via, a leading provider and developer of on-demand public mobility, was selected by the New York City Department of Education to provide a school bus management system for the nation’s largest school district.
The NYC Department of Education (DOE) transports approximately 150,000 students on 9,000 bus routes every day, getting them safely to and from school across the city.
“Via for Schools” will be the first integrated, automated school bus routing, tracking, and communication platform in the world.
Via for Schools will utilize a flexible algorithm, which allows for both stop-to-school and home-to-school pickups, accommodating students regardless of their learning style, mobility constraints, or where they live.
Parents and students will have the ability to track, in real-time, their bus’ whereabouts and receive frequent and reliable communications in the event of service changes, improving safety and bringing important peace of mind to all users of the system. By utilizing Via’s best-in-class algorithms to optimize school bus routing, the Department of Education will be able to achieve operational efficiencies and reduce transportation costs.
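To make the routing idea concrete, here is a toy Python sketch, not Via’s proprietary algorithm, of the two jobs such a router performs: assigning each student to a walkable stop (falling back to a home pickup, the “home-to-school” case) and ordering a bus’s pickups with a greedy nearest-neighbor heuristic. All names, coordinates, and the heuristic itself are illustrative.

```python
# Toy school-bus routing sketch. Locations are simplified to 2D points.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign_students(students, stops, max_walk):
    """Map each student to the closest stop; fall back to a home pickup
    (the 'home-to-school' case) when every stop is too far to walk."""
    assignments = {}
    for student, home in students.items():
        nearest = min(stops, key=lambda s: dist(home, s))
        assignments[student] = nearest if dist(home, nearest) <= max_walk else home
    return assignments

def order_route(depot, pickups):
    """Greedy nearest-neighbor ordering of pickup points: a cheap
    approximation of the underlying traveling-salesman problem."""
    route, current, remaining = [], depot, set(pickups)
    while remaining:
        nxt = min(remaining, key=lambda p: dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

students = {"ana": (0, 1), "ben": (5, 5), "cai": (9, 2)}
stops = [(0, 0), (8, 2)]
pickups = set(assign_students(students, stops, max_walk=1.5).values())
print(order_route((0, 0), pickups))  # bus visits stops/homes in greedy order
```

A production system would optimize across many buses at once and respect time windows and capacity; the sketch only shows why per-student assignment and route ordering are separable problems.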
Why it’s hot:
NYC has been a testing ground for partnerships with brands to improve life in one of the most densely populated cities in the world. This partnership is a slight variation on that model: rather than putting Via cars on the city’s streets, Via is providing the city with the technology behind its service.
Volvo has built its brand around making “the safest cars on the road.”
For years, the company has invested in research and processes that led to numerous vehicle safety innovations and established its credibility.
At Volvo, the Accident Research Team has compiled real-world data since the 1970s to better understand what happens during a collision.
At the industry level, women tend to be underrepresented in crash data (male crash-test dummies are the standard). Not Volvo: the company has incorporated men, women, and children in its data over 40 years of research.
More importantly, newer car manufacturers (specifically Tesla) are stepping into the safety arena and looking to take on Volvo.
So what do you do? Do you protect your moat with yet another ad saying that Volvo offers “a different kind of safety”?
Well, yes, and… (if you’re Volvo) you build a library of all your safety knowledge and make it publicly available.
Why it’s hot:
Looking beyond the obvious advertising play, what this Campaign (Big C) is helping to solve is the core brand challenge: moving from being simply a provider of cars to offering mobility on a broader scale.
The execution is flawless, told with a gripping narrative, and it maintains Volvo’s thought leadership in safety.
Old-school video cameras have long watched over stores and gas stations. Now, a new wave of technology, once too expensive and complex to be used by anyone but the police, is making its way into mom-and-pop shops, front porches and residential streets.
Video cameras that flag unusual movements and recognize faces are being stuffed into popular “smart” doorbells that constantly send footage to the cloud.
AI-powered “video analytics” can identify specific actions like smoking, and search thousands of hours of archived footage for one person. It’s popping up in public schools, like in Broward County, Fla., which includes Parkland.
License-plate readers are now guarding the entrances of wealthy neighborhoods, tracking every vehicle that passes and automatically flagging blacklisted cars.
Forget doorbell cams — some Denver-area neighborhoods are now equipping their streets with cameras that will photograph your car and scan your license plate. Such license plate readers stand ever vigilant in 10 neighborhoods in Denver, Lone Tree, Sheridan and Aurora, according to Flock Safety, the company that sells them.
Images of the vehicle are then uploaded to the company’s Amazon Web Services cloud server, and the data collected from the image becomes part of a searchable database. HOA members with access can then search the database by time or vehicle description. They can also give police access to the footage if a crime is reported.
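For illustration, the flow described above might be modeled like this minimal sketch; it is not Flock Safety’s actual code, and `recognize_plate` and `describe_vehicle` are hypothetical stand-ins for the vendor’s computer-vision steps.

```python
# Sketch of the plate-reader pipeline: a capture is turned into a record in a
# searchable store, flagged if blacklisted, and queryable by time or description.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sighting:
    plate: str
    description: str   # e.g. "white pickup"
    timestamp: datetime
    image_url: str     # where the uploaded capture lives in cloud storage

class PlateDatabase:
    def __init__(self):
        self.records = []
        self.blacklist = set()

    def ingest(self, image_url, recognize_plate, describe_vehicle, now):
        """Upload path: extract plate and description, store the record,
        and flag it immediately if the plate is blacklisted."""
        sighting = Sighting(recognize_plate(image_url),
                            describe_vehicle(image_url), now, image_url)
        self.records.append(sighting)
        return sighting.plate in self.blacklist  # True -> alert the HOA

    def search(self, start=None, end=None, description=None):
        """Query path: HOA members filter by time window or vehicle description."""
        hits = self.records
        if start:
            hits = [s for s in hits if s.timestamp >= start]
        if end:
            hits = [s for s in hits if s.timestamp <= end]
        if description:
            hits = [s for s in hits if description.lower() in s.description.lower()]
        return hits
```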
Companies like Flock Safety charge about $2,000 a year for the installation, maintenance and data storage for each solar-powered camera. Most customers are homeowner associations or neighborhood groups that pay for the cameras collectively.
Why it’s hot:
People’s perceptions of crime often don’t align with reality: crime rates have dropped precipitously since the 1990s, yet fear of crime tends to be based more on media representations and anecdotal evidence than on statistics. The growth of private security technology like license plate readers seems inevitable, showing that people are still willing to trade privacy for safety, at least for now.
In a movement reminiscent of the “virginity pledge,” a vogue in the late ’90s in which young people promised to wait until marriage to have sex, groups of parents are banding together and making public promises to withhold smartphones from their children until eighth grade.
My favorite reason why this is hot: the gap between rich and poor is now measured by the lack of tech. The rich are banning screens from their schools, while public schools even offer digital-only preschools.
What does a world look like where those who get ahead are the “have-nots” rather than the “haves”?