China is building the world’s most powerful facial recognition system with the power to identify any one of its 1.3 billion citizens within three seconds. The government states the system is being developed for security and official uses such as tracking wanted suspects and public administration and that commercial application using information sourced from the database will not be allowed under current regulations.
“[But] a policy can change due to the development of the economy and increasing demand from society,” said Chen Jiansheng, an associate professor at the department of electrical engineering at Tsinghua University and a member of the ministry’s Committee of Standardisation overseeing technical developments in police forces.
Chinese companies are already taking the commercial application of facial recognition technology to new heights. Students can now enter their university halls, travellers can board planes without a boarding pass and diners can pay for a meal at KFC. Some restaurants have even offered discounts based on a machine that ranks customers’ looks according to an algorithm: customers with “beautiful” characteristics – such as symmetrical features – score higher than those whose noses are deemed “too big” or “too small”, and higher scores earn cheaper meals.
Why It’s Hot
Another weekly installment of balancing convenience and claims of safety with privacy and ethics. China is pushing us faster than most other countries to address this question sooner rather than later.
Customers gave up personal data in exchange for art pieces to take home at this London-based pop-up store.
London-based street artist Ben Eine recently opened a pop-up shop for his work; instead of paying with cash, however, customers at the Data Dollar Store have to give up their personal information. The shop, opened in collaboration with cybersecurity company Kaspersky Lab, is meant to make customers re-evaluate the information they so freely give away to the social media channels they use. “I’m concerned about how that information is used and why are we not rewarded for giving this information away,” Eine told Cnet. “Companies use that information and target us to sell products, to feed us information that we wouldn’t necessarily look at.”
The store was set up in a temporary space at the Old Street Underground station in London. When visitors entered the store, an employee showed them the purchasing options:
- Giving up three photos or screenshots of your recent text or email conversations for a mug
- Giving up the last three photos on your Camera Roll, or the last three text messages sent, for a T-shirt
- Handing your phone over to the assistant to select any five photos to keep, publicly displayed on a large TV screen in the store for the next two days (you can attempt to barter for which photos they’ll pick, but the choice is up to them).
The store is meant to raise awareness about the sometimes risky informational exchanges that are happening all the time online. According to Kaspersky Lab, while 74 percent of survey respondents were unconcerned about data security, 41 percent were unprotected from potential threats and 20 percent have been affected by cybercrime.
The ripple of the video giant’s woes has grown so great that some have predicted the impact from major brands could cost YouTube $750 million. Seemingly, some are happy when such a chink in the armor is exposed, but there are myriad stakeholders, each with their own perspective. With that amount of money – as well as brand reputation and confidence – at stake, there are going to be some winners and losers, and here they are:
These are the classic media players whose lunch Google started eating the second it began owning the internet. One could imagine publishers grinning ear to ear, thinking, “Told ya so. Quality content isn’t so easy.” They can make a more convincing case that knowing the content and the audience still matters.
This issue could spur a shift to high-quality, direct-bought content, where brands have the most control but, in some cases, pay a premium for it.
Anyone selling streaming ads is in a good position – including Sling, Dish and even the TV networks. Hulu, Roku and anyone else with a digital video platform will be showing off their highly curated content. These shows and programming will look pretty good to anyone with a heightened interest in knowing exactly where their messages will appear.
Tech tools & 3rd Party Verification Partners
Brands have called for digital platforms like Facebook and Google to clean up the media supply chain and to be more transparent with data. The brand safety issue on YouTube is yet another bit of leverage to force more cooperation.
One of the most important roles for agencies was helping brands make sure their ads didn’t show up in the wrong place by intimately knowing the targeting, brand safety protections and best practices of each channel. Well, now those services are increasingly valuable.
When the Trump administration makes further moves to undo net neutrality, as many anticipate based on current momentum repealing FCC consumer protections, Google’s ability to defend it in idealistic terms could be undermined by all the talk about serving ads on terrorist video.
It took a long time for programmatic to stop being a dirty word. Programmatic advertising was once considered the least controlled, lowest-quality ad inventory at the lowest price. This episode gives brands one more reason to pull back from blind, untargeted buying that lacks transparency.
YouTube has said that part of its solution is to implement stricter community standards, which could mean more bans and more ads blocked from creators’ videos, impacting their earnings. YouTube could be quicker to cut a channel at the smallest offense now that brands are watching closely.
Advertisers still on YouTube – this is a tricky one to classify and it’s too early to say. We’ll have to see how the video platform reacts over time to increasing pressures to allow verification partners and data trackers access within the garden’s walls.
Why It’s (Still) Hot:
This topic will continue to be important to the brands we represent, the brands we aim to represent, and even those far from us that face the same decision to either stay the course or sit it out. There is a lot of money moving around on media plans, a lot of POVs being routed and a lot of reps working overtime to reassure teams of buyers/planners that they are taking brand safety very seriously. Often it’s not the crisis that defines a company but what it does in the aftermath. Some are hopeful that this is a definitive crack in the ‘walled garden’ – but even if it is not, we’re all hoping for a better, safer platform at the end of this tunnel… a world where once again clients can be irked by their premium pre-rolls prefacing water-skiing squirrels and dancing cat videos instead of terrorist rhetoric.
The rules, which forced internet service providers to get permission before selling your data, were overturned using the little-used Congressional Review Act (CRA). This is now being called “the single biggest step backwards in online privacy in many years” by those that spoke out against the repeal as well as the co-creator of the 1996 Telecommunications Act, Sen. Ed Markey.
This is a big deal and a pretty bad idea for anyone even remotely concerned with privacy and limiting the already questionable practices of telecoms and ISPs.
Assuming this resolution passes the House, which seems likely at this point, your broadband and wireless internet service provider will have free rein to collect personal data and sell it to third parties. That information may include (but is not limited to) location, financial, healthcare and browsing data scraped from customers. As a result of the ruling, you can expect ISPs to begin collecting this data by default.
To play devil’s advocate for a second, let’s assume there is some upside for companies in this deregulation. “You want the entrepreneurial spirit to thrive, but you have to be able to say ‘no, I don’t want you in my living room.’ Yes, we’re capitalists, but we’re capitalists with a conscience,” said Sen. Ed Markey. But with the wireless and cable industries both operating as powerful oligopolies, consumers will be left with zero protection against price-gouging, no advocate for net neutrality, and, as today demonstrates, far less control over their own data.
The broadband privacy rule, among other things, expanded an existing rule by defining a few extra items as personal information, such as browsing history. This information joins medical records, credit card numbers and so on as information that your ISP is obligated to handle differently, asking if it can collect it and use it.
There is Nothing Hot about this.
You can see the utility of the rule right away; browsing records are very personal indeed, and ISPs are in a unique position to collect pretty much all of it if you’re not actively obscuring it. Facebook and Google see a lot, sure, but ISPs see a lot too, and a very different set of data.
If consumers continue to lose trust in the platforms we employ to market our brands and begin to widely question their safety, security and data usage, we are in big trouble. We’re already challenged by a litany of brand safety concerns – bots, fraud, hackers, malware, viewability – and solutions that mitigate those threats while limiting marketing effectiveness (e.g., ad blocking) continue to gain momentum. While some of this is good digital evolution (flashback to needing a pop-up blocker just to endure an average online session), lack of consumer trust quickly erodes into lack of brand trust, and soon those left behind willingly (or unknowingly) allowing their data to be sold on the open market might not be the ones worth reaching.
Sex toy maker We-Vibe has agreed to pay customers up to C$10,000 (£6,120) each after shipping a “smart vibrator” which tracked owners’ use without their knowledge.
The We-Vibe 4 Plus is a £90 bluetooth connected vibrator, which can be controlled through an app. Its app-enabled controls can be activated remotely, allowing, for instance, a partner on the other end of a video call to interact.
But the app came with a number of security and privacy vulnerabilities, allowing anyone within bluetooth range to seize control of the device.
In addition, data is collected and sent back to Standard Innovation, letting the company know about the temperature of the device and the vibration intensity – which, combined, reveal intimate information about the user’s sexual habits.
Why It’s Hot
As we continue to talk about our bodies as the ultimate database – along with the opportunities and risks that entails – few examples seem as illustrative as this!
Experts who want to pierce North Korea’s extreme secrecy have to be creative. One surprisingly rich resource: the country’s own propaganda, like the photo below.
Thanks to high-tech forensics, analysts and intelligence agencies are using photos to track North Korea’s internal politics and expanding weapons programs with stunning granularity.
Analysts believe some details may have been deliberately revealed to demonstrate the country’s growing capabilities. While most such images are doctored, if only to improve Mr. Kim’s appearance, they noticed that this one was conspicuously unretouched — perhaps a message to the foreign intelligence agencies that conduct such analyses.
Analysts had long doubted the country’s grandiose claims, but as one put it, “2016 was about showing us all the capabilities that we had mocked.”
Why It’s Hot
As countries around the world debate security and the measures appropriate to ensure it, we must remain vigilant against unintended consequences to areas like privacy and data protection.
The Internet of Things is here. While it seems so inevitable and welcome, in many ways it is a nightmare waiting to happen. An article on CIO.com and a report from Goldman Sachs outline the positive impact of IoT/Big Data on transportation and healthcare. One author, Bruce Horsham, offers a utopian — and rather naïve — view of the IoT: “Far from being a gimmicky way to collect more data on customers – and sell them more things – IoT technology has the power to transform industries, manage costs and deliver quality outcomes up and down the supply chain. Here’s how two industries are doing just that.”
While the advances IoT is driving in transportation are noteworthy, the risk/reward equation for healthcare is still to be determined. Consider this: Goldman Sachs recently estimated that Internet of Things (IoT) technology could save billions of dollars in asthma care. If one-third of all healthcare spending goes to disease management, this is also where IoT can have the biggest, most immediate impact on cost and quality of care. The chart below shows how Goldman views it:
Example: Excellus, a health insurer, was hacked, exposing roughly 10 million records. If each individual health record is valued at $50, the theft is worth $500 million.
Example: if true connectivity occurred, millions of users with implanted monitoring devices, remote insulin pumps or remote heart monitors could be hacked and killed. It is that ugly and that simple.
This is a fast-evolving topic… the Goldman Sachs IoT infographic is one we should all marvel at while hoping that someone is watching our back. http://www.goldmansachs.com/our-thinking/pages/iot-infographic.html
You know privacy is taking a back seat to technology when you can’t sunbathe on top of a wind turbine without being discovered. Kevin Miller, vacationing in Rhode Island, was taking his drone for a spin in the countryside when he maneuvered it to the top of a 200-foot tall wind turbine. Hoping to capture some cool sights of the turbine against the blue sky, the drone instead spied upon a man sunning himself. The man took it all in good stride, sitting up and waving to the drone.
Why It’s Hot
Privacy is becoming a quaint notion, going the way of bygone eras like the horse and buggy, handwritten notes and family dinner time. Soon there will be nowhere to hide, with drones everywhere, street cameras watching every move, and smartphones tracking our every movement. Creepy perhaps, but it seems like it’s becoming more accepted in society. Now you can’t even sunbathe on a wind turbine.
Facial recognition might make tagging photos a lot easier on Facebook, but let’s face it: there are a lot of unresolved long-term implications to these increasingly integrated technologies. Rather than put themselves at risk, some people are choosing to opt out altogether.
How do you go dark when your data is collected without your knowledge or consent?
It’s not a catch-all, but start with protective eyewear. The Privacy Visor, first unveiled in 2012, is a pair of glasses that reflects light in a way that confuses facial-detection software. More specifically, the glasses disrupt the patterns of light and dark spots around the eyes and nose that allow computer algorithms to recognize that a face is even present in the frame. Although the glasses don’t guarantee complete privacy, project lead Isao Echizen says tests have shown them to work over 90% of the time.
As camera technology has adapted to more closely mimic human sight, The Privacy Visor has been updated to better shield identities even as the scanning technology advances. Echizen is pushing forward with the concept’s commercialization, targeting mid-next year to begin production of more fashionable and effective versions.
Why It’s Hot
The Privacy Visor is a product that exists to treat the symptom of a much larger problem: maintaining control of one’s own identity. It might be good for those of us who are paranoid, and for those of us who very legitimately want to maintain control of the information we create about ourselves… online and out in the world. In fact, some privacy advocates go as far as saying the Privacy Visor is a detriment to the cause, because its very existence in effect “gives in” to these huge pressures to amass databases of personally identifiable information. So the bigger question becomes less about the specifics of this product and more about the broader culture of acceptance we’re creating.
There are connected cars and then there are cars that can be ‘disconnected,’ at least from the driver.
In a somewhat on-the-edge experiment published this week, two well-known hackers tapped into a moving Jeep and remotely operated the radio, air conditioning and windshield wipers.
Then, while the car was moving at 70 miles an hour, its acceleration was remotely shut down, leaving the volunteer driver, a writer from Wired, slowing to a crawl on a busy highway.
The two hackers, Charlie Miller and Chris Valasek, have been conducting car-hacking research to determine whether an attacker could gain wireless control of vehicles via the internet.
They created software code that could send commands through the Jeep’s entertainment system to its dashboard functions, brakes, steering and transmission from a laptop many miles away, according to the first-person account in Wired.
Why It’s Hot:
The timing of the widely reported Jeep experiment is interesting in light of a new privacy bill in congress that would stop car makers from using data collected from vehicles for advertising or marketing. More about the bill can be found here.
Drivers shouldn’t have to choose between being connected and being protected. Controlled demonstrations show how frightening it would be to have a hacker take over controls of a car. We need clear rules of the road that protect cars from hackers and American families from data trackers. When consumers feel safe with technology it flourishes but when the opposite is true it can become an epic failure.
But the Jeep episode shows that sending unwanted ads to drivers on the move may be the least of the issues. The silver lining in all of this is that the vulnerabilities in the coming interconnected world of things are being identified and highlighted.
As you might imagine, Fiat Chrysler has issued a software fix that Jeep owners can either download and install themselves or ask a dealer to install. There will be more bumps in the road that get identified. Then they can be addressed.
Privacy campaigners and open-source developers are up in arms over the secret installation of Google listening software. The software, Chrome’s “OK, Google” hotword detection, is meant to detect words that make the computer respond when you talk to it. However, many users have complained that it was activated automatically without their permission in Chromium, the open-source browser on which Google Chrome is based. This effectively means their computers were stealth-configured to send what was being said in the room to somebody else – a private company in another country – without their consent or knowledge.
Open-source advocates argued that Google was downloading a “black box” onto their machines that was not open source and therefore could not be verified to be doing only what it claimed. Google has since pulled its listening software from the open-source Chromium browser.
Why is it hot?
Voice search functions have become an accepted feature of modern smartphones, smart TVs and now browsers, but they have raised concerns over privacy. While most services require users to opt in, many question whether their use – which requires sending voice recordings to remote servers – potentially exposes private conversations held within the home.
Yesterday’s DataSift Facebook webinar revealed a huge breakthrough for healthcare marketers. Through their partnership, Facebook is providing aggregated topic data to marketers on DataSift’s API. This represents the first time we’ve been able to pull back the curtain and see what Facebook users are talking about in their private conversations.
Now, before the Facebook Privacy conspiracy theorists get wound up, the data coming through DataSift’s API is:
- Anonymized with no personal data available
- Limited to adults 18+
- Only available on topics with 100+ posts in any query
Until now, the only way to get insight into what was happening on Facebook was to engage in the channel. Even then, marketers only got insight into what happened on and around their own pages – a very limited and self-selected audience.
Now, marketers will be able to get insight into volumes of conversation on Facebook using four different criteria:
- Category (the example used was automotive)
- Brand (for example, BMW, Ford, and Honda)
- Media (defined by URLs for properties like BBC or CNN)
- Feature / topic data (as defined by Boolean queries)
However, DataSift shared the image below on Facebook with the post:
“And #topicdata is all about #privacyfirst”
KLICK: “…it looks like social listening may be on the cusp of removing one of the most troublesome blinders we’ve had to live with to date.”
Why It’s HOT: This tool will help marketers see what is happening on the world’s #1 social channel, Facebook.
Source: Klick Health
Mattel’s new “Hello Barbie” hits store shelves later this year and has more tricks up her sleeve than just saying hello. With the press of a button, Barbie’s embedded microphone turns on and records the voice of the child playing with her. The recordings are then uploaded to a cloud server, where voice detection technology helps the doll make sense of the data. The result? An inquisitive Barbie who remembers your dog’s name and brings up your favorite hobbies in your next chitchat.
Why It’s Hot:
The overall concept is interesting — a toy that listens, learns your child’s preferences and adapts accordingly… a true demonstration of personalization. The obvious downside is that nobody really knows what is being done with all that data (collected from kids!), which puts a level of accountability on the company. For example, children confide in their toys – so what if a child admits that they get hit by a parent? Should Mattel be on the hook to report it?
With each new technology seems to come new concerns about privacy. New “privacy glasses” by software firm AVG seek to minimize worries over facial recognition on social media.
The glasses use infrared LEDs surrounding your eyes and nose to essentially “mess with” smartphone cameras. By distorting the light around the wearer’s eyes and nose, they prevent the facial recognition technology built into most social media platforms from identifying people.
Read more about the privacy glasses on Engadget.
Why It’s Hot | While the glasses are only in prototyping phase, and still just look like a novelty party accessory, it’s not surprising that companies are finding ways to prevent facial recognition technology. With each new innovation comes more worries about “big brother” watching over us, and these glasses are just a step in the direction of privacy over convenience. It’ll be interesting to watch which end of that struggle wins out over time.
As Twitter strives to provide an even more personalized experience for their users, the social media platform recently announced they will be tracking what apps mobile users have downloaded. With this new capability, Twitter can promise better targeting within their advertising as well. They can now enable advertisers to target based on whether users have installed their app, registered after downloading, etc.
For those of us who don’t want personal information given to advertisers to make advertisements and Twitter recommendations quite that relevant, these privacy settings are adjustable. Users can opt out of Twitter’s tracking capability. Read more via VentureBeat, and adjust your settings in the Twitter app.
Why It’s Hot | With the world of social media increasing the amount of personal data available online, the privacy discussion is more popular than ever. Beyond just using the information users give, now apps are including tracking mechanisms that track your mobile behavior. While research shows that (in general) Millennials don’t mind tracking if it means their experience will be more personalized or help them to find products more easily (eMarketer), Millennials are not the only mobile consumers. And so the privacy vs. personalization debate continues…
Have you ever casually agreed to plans with a friend – “Sure, let’s grab drinks next Thursday after work” – and then totally forgot about it? Google is here to the rescue to help prevent this from ever happening again (and to make us better humans) with its updated mobile app. The catch? The tech giant will read your email.
The new feature of Google Now will automatically keep track of informal conversations in users’ Gmail inboxes so that they aren’t missed. The app will continuously scan through the inbox searching for “half-baked plans” that users haven’t added to their calendars and will ask the user about them (as in the photo above).
Simply put, if you get an email asking if you’d like to meet for drinks next Thursday at 7pm, Google Now will recognize it as an invitation. You’ll then be prompted to add it to your calendar.
Read more via Huffington Post.
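The detection described above can be approximated, very crudely, with pattern matching. Here is a toy Python sketch of the idea; Google’s actual system is proprietary and almost certainly relies on machine-learned language models rather than hand-written rules, so every pattern below is purely illustrative:

```python
import re

# Hypothetical patterns for spotting a casual, "half-baked" plan in an
# email body: an activity word, a relative day, and an optional time.
DAY = r"(?:monday|tuesday|wednesday|thursday|friday|saturday|sunday)"
TIME = r"\d{1,2}(?::\d{2})?\s*(?:am|pm)"
INVITE = re.compile(
    rf"\b(?:meet|grab|drinks|lunch|dinner)\b.*\b(?:next|this)\s+{DAY}\b"
    rf"(?:.*\bat\s+{TIME})?",
    re.IGNORECASE,
)

def looks_like_invitation(text: str) -> bool:
    """Return True if the text resembles a casual plan-making message."""
    return bool(INVITE.search(text))

print(looks_like_invitation("Sure, let's grab drinks next Thursday at 7pm"))  # True
print(looks_like_invitation("Attached is the Q3 report you asked for."))      # False
```

Even this crude version makes the privacy trade-off concrete: to flag the first message and ignore the second, the software has to read both.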
Why It’s Hot: We’re constantly talking about new apps and gadgets that essentially do our thinking for us and can help us remember things we’ll likely forget. It seems that we may soon be able to literally tap into our memory via some new device.
Google’s new feature is just one of many updates that are designed to anticipate the user’s needs and handle tedious mobile tasks without the user having to swipe a finger.
On the flip side of convenience is, of course, the issue of privacy. As with so many suggestive, helpful, and perhaps slightly invasive apps, this is a major concern. As Huffington Post reports, Google has been criticized for its increasing “‘surveillance’ capabilities.” We’re seeing more and more that the convenience of having one’s device help make their life easier comes at the price of privacy.
One day you’re Alex, a 16-year-old teenager working at Target. The next day, you’re a trending hashtag with 730,000 Twitter followers and 2.3 million Instagram followers. Alex’s fame literally happened overnight, after a girl posted to her Twitter account a photo of him that she had seen on Tumblr, adding the caption, YOOOOOOOOOO.
Alex was quickly overwhelmed with screaming girls looking to take selfies with him, a visit to the Ellen DeGeneres show, calls from modeling and talent agents and paparazzi waiting outside his door. But he was also quickly targeted with tweets demanding he be fired from Target, insults denigrating his looks (e.g. “Alex from target is so damn ugly”) and even death threats (“Alex from target, I’ll find you and I will kill you”). Social security numbers, checking accounts and phone records were leaked, and the family has reached out to local police, the school, and Target to put together security plans for Alex’s safety. The family has even hired the founder of a teenage-centric, selfie-sharing app to help manage the online assaults.
Why It’s Hot
In today’s social environment, fame can happen literally overnight, even when you are not looking for it. It can be fun, satisfying, exhilarating and a boost to the ego. But it can also be dangerous and threatening. Alex wasn’t seeking fame, but it came to him. At issue is how we protect the privacy of people who don’t want to be celebrities when all it takes is one tweet to make you a star.
Thanks to social media, privacy is something that pretty soon will belong in a museum – especially as technology evolves into part of people’s routines and normal life, and moral barriers get forgotten in the process.
Think about this: Those born today will not only have an intimate record of their day-to-day existence preserved online, but they’ll also grow up in a world where shedding their privacy is not only the norm, but expected.
Enter: New Born Fame, a set of interactive plush toys that let babies take selfies and automatically upload them to their Facebook, Twitter and other profiles, straight from the cradle.
If you think this is crazy, wait for the out-of-focus selfies, and disgusting shots of poop, in real time.
Even though Cornet, the toys’ designer, developed the project more as an investigation into the ethics of social media – and to get parents thinking about the information they post about their children online without their consent – she admits there is probably also a commercial demand for toys like this.
Why is this NOT hot?
To name just one reason: the data will remain available for anyone — even, potentially, employers — to see long after these children have grown up, and will probably come back to haunt them.
Between hashtags and video posts, the lives and actions of thousands (purrrrhaps millions) of cats have been documented online. And because uploaded photos usually include metadata with longitude and latitude geotags, it’s possible to know where those photos were taken and where those millions of cats live. But perhaps more ominously, it’s also a way to identify the addresses of the cats’ owners.
To show how people and businesses can use this available data to further erode privacy, a Florida State University art professor, Owen Mundy, took the aggregated data of photos uploaded to Flickr, Instagram and other platforms and used a supercomputer to map the locations of about 1 million cats. (Each photo had been tagged with the word “cat.”) He put this information on a website, along with photos and maps. Time called it a Tinder for cats, with a seemingly endless stream of kitten photos for cat fans who stalk the Internet looking for cute kitty photos. Although specific street addresses are not shown, the displayed maps hint at each cat’s location.
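The geotag mechanics are simple: EXIF metadata stores GPS coordinates as degrees, minutes and seconds plus a hemisphere reference, and turning them into a map point is a few lines of arithmetic. A minimal Python sketch of the conversion any such mapping project would need; the coordinates below are hypothetical, not taken from a real photo:

```python
# EXIF GPS tags record latitude/longitude as degree/minute/second
# values plus a hemisphere reference ('N'/'S' or 'E'/'W').

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style DMS coordinates to signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # South and West hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# A photo tagged at 40°44'55"N, 73°59'11"W resolves to a point in
# midtown Manhattan:
lat = dms_to_decimal(40, 44, 55, "N")
lon = dms_to_decimal(73, 59, 11, "W")
print(round(lat, 4), round(lon, 4))  # → 40.7486 -73.9864
```

That is the whole trick: a photo-sharing site that keeps EXIF data intact is effectively publishing the photographer’s location to this precision.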
Why It’s Hot
Cute as they are, all uploaded photos (plus your device) contain a lot of private data that can be used to track individuals and give away other information. Many people don’t realize how easily privacy is breached, often in non-obvious ways. And regulations often don’t evolve as quickly as new technologies. So it’s consumer beware, even when doing something as harmless as posting cat photos.
In 2012, Facebook conducted a study with Cornell University and the University of California, San Francisco that manipulated users’ News Feeds in order to test their emotional reactions. Participants were not notified that they were part of a study. The study found that people who saw fewer positive posts were more likely to write more negative posts, and the converse was true when they were exposed to more positive posts. Information-gathering on Facebook is not illegal, and users sign over many privacy rights when agreeing to use Facebook. Despite apparent legality, Facebook users have expressed discontent with Facebook’s “sneaky research.”
Why It’s Hot…
The issue of privacy and ethics on social networks continues to be highly controversial. Facebook manipulates users’ News Feeds all the time, and the underlying issue is with Facebook, not this particular study. 1.2 billion Facebook users appear to be “OK” with “interest-level data collection that comes with the advertising that makes the site free for people to utilize. But manipulating their emotions appears to go too far.” Integrating emotions on Facebook raises a larger issue that could involve targeted advertisements based on a person’s mood. Many users are angry because no one wants to feel manipulated, and the overarching issue of privacy on the internet – and social media specifically – is a recurring one for Facebook. Companies like Facebook, Google and Yahoo have access to immense personal data and face difficult decisions about whether or not to use or manipulate this information.
The NSA’s vast spying program negatively affects the tech, communication, social media and other industries, where the NSA has unconstitutionally acquired back-door access and information. Many individuals, companies and governments are wary of doing business and sharing their personal information with companies such as Microsoft, Google, Yahoo, Hewlett-Packard, Facebook, Verizon and AT&T, to name a few. The latest US company hurt by the NSA is Verizon (our client), dropped by the German government as a service provider over worries about U.S. spying.
The Federal Trade Commission on Thursday said Snapchat had agreed to settle charges that the company was deceiving users about the ephemeral nature of the photos and video messages sent through its service. The messages were significantly less private than the company had said, the commission said.
In marketing the service, Snapchat has said that its messages “disappear forever.” But in its complaint, the commission said the messages, often called snaps, can be saved in several ways. The commission said that users can save a message by using a third-party app, for example, or employ simple workarounds that allow users to take a screenshot of messages without detection.
“While we were focused on building, some things didn’t get the attention they could have,” the company said in a statement.
Read more here.
Why It’s Hot
There’s another startup, app, technology or platform every week, and they’re all developed and headed by ever-younger CEOs. Can we be surprised that basic functions are being overlooked in the rush to market the hot new thing? Startups need to be held accountable to their consumers for their products and services from day one.
Boeing, the giant aircraft, rocket and fighter jet manufacturer, has unveiled its new Black phone. The phone is based on Google’s Android operating system and built into a black, tamper-proof handset capable of accessing multiple cell networks instead of a single network like a normal cellphone. In addition to encrypting calls, any attempt to open the casing of the Boeing Black smartphone deletes all data and renders the device inoperable. Separately, Silent Circle and Geeksphone have partnered to provide the first integrated smartphone built for privacy, targeting select consumers at a starting price of $629.
Play video here.
Why it’s hot
While privacy concerns are clearly gaining momentum in the minds of consumers, traditionally they have not shown much interest in paying for privacy protection, services or products. But with 55% of Americans saying they judge one another by the mobile device they own, could privacy be the next must-have for mobile phone users?