The New Republic Is All Like: TEENS?! On the INTERNET?!

I was going to post some boring stuff about a cybersecurity tool that will probably destroy the world (according to the NYTimes), but chose the article about teens and Tumblr instead.

It’s pretty “stupid adult peeks into fetid writhing mass of teen culture, is surprised to find some things of worth”, but I love deep looks into online culture. The article is both the pinnacle of cringe-y adult misunderstanding:

Lilley is tall and lanky, with dark brown curly hair. Greenfield is shorter, with glasses and honey-brown hair. They both wore plain polo shirts. Summer had just ended, and there was a pool in the backyard, but they were quite pale. After studying their mannerisms and hearing Lilley’s repeated allusions to Greenfield’s math skills and superior memory—he was briefly a mechatronics engineering major—I determined they were nerds. They were witty and warm and very smart, and I liked them immediately, but they were total nerds. It surprised me, because nerds are often defined by an inability to read social interactions and respond in a way that makes them cool, confident—relatable. So I gently asked Greenfield how he was able to make these minute social observations that hinge on complex emotions being expressed in subtle facial expressions when, perhaps, this was not his strong suit in real life. His answer: internet research.

and a demonstration of how kids are growing up with an innate understanding of digital marketing:

The outrage clicks were so powerful, Lilley and Greenfield decided to experiment with “negative attention.” Haters are more loyal than fans, so they promoted the bad hacks. The worst hacks brought in thousands of followers, and that’s how Lifehackable built the bulk of its audience. “Tom knew what was happening, and so then he was more incentivized to actually not do his job right,” Lilley said. “And in sucking, he succeeded.”

And later

Lilley was disgusted by the thought of “trying to build a personal brand by sacrificing your content.”

It’s great and you should read it.


Reality Winner and dots

A security contractor named Reality Winner was arrested this week for leaking documents about the Russian election hack to The Intercept.

Her arrest set off a conversation about journalism and op-sec, or operational security.

Reality Winner made a number of mistakes, but in particular she was outed by tracking information tied to the specific printer she used to print and carry out the documents.

A security firm contacted by BoingBoing said:

The document leaked by the Intercept was from a printer with model number 54, serial number 29535218. The document was printed on May 9, 2017 at 6:20. The NSA almost certainly has a record of who used the printer at that time.

The situation is similar to how Vice outed the location of John McAfee by publishing JPEG photographs of him with the EXIF GPS coordinates still hidden in the file. Or how PDFs are often "redacted" by adding a black bar on top of the image, leaving the underlying contents in the file for anyone to read, as in this NYTimes accident with a Snowden document. Or how opening a Microsoft Office document and accidentally saving it leaves fingerprints identifying you behind, as repeatedly happened with the WikiLeaks election leaks. These sorts of failures are common with leaks. To fix the yellow-dot problem specifically: use a black-and-white printer, a black-and-white scanner, or convert the document to black-and-white with an image editor.

I thought this was an interesting look at how digital traces can be used to identify us. If you're leaking something, remember to strip all of the metadata first.
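For the curious, stripping metadata doesn't require anything exotic. Here's a rough, stdlib-only Python sketch of the idea for JPEGs (my own illustration, not a production tool): EXIF data, including GPS coordinates, lives in its own segment of the file, and you can simply copy the file without it. In practice you'd use a battle-tested tool like exiftool; this just shows that the metadata is a discrete chunk, not something baked into the pixels.

```python
def strip_app_segments(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG with its APP1/APP2 metadata segments removed.

    A JPEG is a sequence of segments: 0xFF, a marker byte, then (for most
    markers) a two-byte big-endian length counting itself plus the payload.
    EXIF metadata (camera model, GPS, timestamps) lives in APP1 (0xFFE1);
    dropping that segment drops the metadata. Real tools handle many more
    edge cases (XMP, thumbnails, maker notes) -- this is a sketch.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(jpeg[:2])
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:  # Start of Scan: compressed pixel data follows
            out += jpeg[i:]
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker not in (0xE1, 0xE2):  # keep everything but APP1/APP2
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

(This doesn't help with printer tracking dots, of course, since those are in the pixels themselves; that's why the advice above is to go black-and-white.)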


Are algorithms dumbing down culture?

Link: The Rise of Auto-Complete Culture, And Why We Should Resist

Upfront I will say this: I really dislike this article, but I can’t quite put into words why, so I wanted to share it with you all and talk about it.

The premise of the article is that algorithms are sanding down the edges of our language and our individuality through things like auto-completed messages, suggested responses, and Google’s AI drawing project.

There’s also a bit of Jaron Lanier angst about selling our data and becoming the product.

The core of the argument seems to be this:

Well, future generations of thinking humans care. Consider how scientists found that the average literate person’s vocabulary has shrunk over the last two centuries, after analyzing unique words used in books since 1800. In exchange for awesome technologies like television, text messaging, and an app called “Yo” that let you type a single word (and raised $1.5 million for it), we slowly handed over the ways we can express how we feel and what we think.

And what he is scared of is this:

What really scares me about the rise of aggregated, averaged, auto-completed culture isn’t just that I feel it chipping away at my own vocabulary, but I fear it will teach young people how to speak via an anonymous algorithm before they can develop their own splendid, flawed voices, before they can invent new words, and new forms of self-expression, that will enrich our culture and progress as a society.


It sounds dramatic, doesn’t it? Google is coming for our artists! But I want you to think of your favorite author or artist who bucked social norms to herald a new era of human expression and meaning. Now imagine that, instead of creating the most impactful work of their career, they phoned it in that afternoon with an auto-completed sentiment.

This strikes me as poorly argued and thinly supported. He’s picked three examples and cited one random statistic. He also only addresses the Western, English-speaking world. And famous convention-bucking artists are famous convention-bucking artists because they buck convention!

However, I’m interested in what you guys think: is the rise of algorithms smoothing out the world around us? Do you think that Snapchat, Instagram, Twitter, texting, Facebook, email, and all of the other new ways we communicate are shrinking the way we express ourselves, or expanding it?

I apologize for yet another dry Hot Sauce. To make up for it, here’s a Vine classic (RIP Vine).

Learning to fly by crashing



One way to think of flying (or driving or walking or any other form of motion) is that success is simply a continual failure to crash. From this perspective, the most effective way of learning how to fly is by getting a lot of experience crashing so that you know exactly what to avoid, and once you can reliably avoid crashing, you by definition know how to fly. Simple, right? We tend not to learn this way, however, because crashing has consequences that are usually quite bad for both robots and people.

The CMU roboticists wanted to see if there are any benefits to using the crash approach instead of the not crash approach, so they sucked it up and let an AR Drone 2.0 loose in 20 different indoor environments, racking up 11,500 collisions over the course of 40 hours of flying time. As the researchers point out, “since the hulls of the drone are cheap and easy to replace, the cost of catastrophic failure is negligible.” Each collision is random, with the drone starting at a random location in the space and then flying slowly forward until it runs into something. After it does, it goes back to its starting point, and chooses a new direction. Assuming it survives, of course.


Why it’s hot:

  • Watch the video. The drone navigating its way through the hallway is uncanny.
  • Novel approaches. Maybe instead of avoiding the problem, you embrace the problem and see where it gets you.

AI Pilot Defeats Human Pilot [September ’16]

ALPHA, running on a Raspberry Pi, defeated USAF Colonel Gene Lee in a combat air simulator back in September of last year. Link.

This is interesting both for what it says about the technological advances of weaponry, and the different types of AI.

As weaponry, this brings to mind fleets of AI-driven planes, built without the need for life support systems or any of the limitations of the human body, informed by extremely high-flying drone AWACS while robot soldiers roll through the terrain below.

As AI, this highlights the various types of AI and their strengths and weaknesses.

ALPHA runs using fuzzy logic. Fuzzy logic assigns degrees of truth to statements and uses those degrees of truth to inform decisions. At its core, this is a series of IF/THEN statements (Wikipedia’s example: IF (TEMPERATURE = HOT) THEN (COOLING = HIGH)). Specifically, ALPHA uses genetic fuzzy logic, which means the program evolves its solutions: it runs a series of IF/THEN rules, evaluates each solution for fitness, selects the fittest, and runs those chains again. The catch is that someone has to sit down and encode exactly what all of these variables mean and how to interpret them, so this AI only learns within a very specific set of parameters.
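To make that concrete, here’s a toy Python sketch (my own illustration, not ALPHA’s actual code) of the two pieces: a fuzzy IF/THEN rule whose truth comes in degrees, and a “genetic” loop that mutates the rule’s parameter and keeps the fittest variants.

```python
import random

def hot(temp, threshold):
    """Degree of truth (0..1) that temp is 'hot', relative to a threshold."""
    return max(0.0, min(1.0, (temp - threshold) / 20.0))

def cooling(temp, threshold):
    """IF (TEMPERATURE = HOT) THEN (COOLING = HIGH), fuzzy-style:
    cooling power scales with the degree of truth of 'hot'."""
    return hot(temp, threshold) * 100.0  # percent cooling

def fitness(threshold, targets):
    """How well a rule's threshold reproduces desired (temp, cooling) pairs
    (higher is better)."""
    return -sum(abs(cooling(t, threshold) - c) for t, c in targets)

def evolve(targets, generations=50, pop_size=20):
    """Toy genetic step: keep the fittest thresholds, mutate them, repeat."""
    random.seed(0)
    population = [random.uniform(0, 40) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda th: fitness(th, targets), reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [th + random.gauss(0, 2) for th in survivors]
    return max(population, key=lambda th: fitness(th, targets))

# Desired behavior: no cooling at 20C, full cooling at 40C.
best_threshold = evolve([(20, 0), (40, 100)])
```

The evolved threshold converges toward 20, the value that best satisfies both target behaviors, without anyone hand-tuning it.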

Then there’s the type of AI that powered AlphaGo. It’s just easier to quote Wikipedia here:

AlphaGo’s algorithm uses a Monte Carlo tree search to find its moves based on knowledge previously “learned” by machine learning, specifically by an artificial neural network (a deep learning method) by extensive training, both from human and computer play.

As an idiot, what I understand of this is that instead of a specific chain of instructions evaluated for fitness, this AI has a knowledge bank of thousands of games of Go, with all of the moves evaluated for relative fitness. It applies that knowledge to the game at hand by playing out possible scenarios from the learning archive and choosing the move that statistically led to the best result.

I think.
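A toy version of the core statistical idea, for a game much simpler than Go: in Nim (take 1–3 stones; whoever takes the last stone wins), you can score each legal move by playing many random games to the end and picking the move that wins most often. AlphaGo’s Monte Carlo tree search is far more sophisticated — it grows a search tree and uses neural networks to guide the playouts — but this “flat” version (my own sketch, nothing to do with AlphaGo’s code) shows the kernel: moves chosen by simulated-game statistics, not hand-coded rules.

```python
import random

def random_playout(stones, player):
    """Finish a Nim game (take 1-3 stones; taking the last stone wins)
    with both sides playing randomly; return the winner (0 or 1)."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player  # the player who just moved took the last stone
        player = 1 - player

def flat_monte_carlo_move(stones, n_playouts=200):
    """Score each of our (player 0's) legal moves by the fraction of
    random playouts won, then pick the best-scoring move."""
    scores = {}
    for move in range(1, min(3, stones) + 1):
        wins = 0
        for _ in range(n_playouts):
            remaining = stones - move
            # Taking the last stone wins immediately; otherwise hand the
            # position to the opponent (player 1) and play it out randomly.
            if remaining == 0 or random_playout(remaining, 1) == 0:
                wins += 1
        scores[move] = wins / n_playouts
    return max(scores, key=scores.get)
```

With enough playouts, the statistics alone steer it toward winning moves — no one ever told it the rules of good Nim play.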


Anyway, this is hot and nerdy why?

Robot wars

A deeper understanding of the artificial intelligences that will increasingly control much of the world around us.

How to ruin a popular product

How Yahoo Killed Flickr and Lost the Internet

This article from 2012 was kicking around Twitter this week, because reasons. It’s an interesting dissection of how Flickr went from innovative and popular to old and forgotten.

Quickly, some quotes that show how it went wrong:

Onerous integration requirements without the necessary resources:

Because Flickr wasn’t as profitable as some of the other bigger properties, like Yahoo Mail or Yahoo Sports, it wasn’t given the resources that were dedicated to other products. That meant it had to spend its resources on integration, rather than innovation. Which made it harder to attract new users, which meant it couldn’t make as much money, which meant (full circle) it didn’t get more resources. And so it goes.


As a result of being resource-starved, Flickr quit planting the anchors it needed to climb ever higher. It missed the boat on local, on real time, on mobile, and even ultimately on social—the field it pioneered. And so, it never became the Flickr of video; YouTube snagged that ring. It never became the Flickr of people, which was of course Facebook. It remained the Flickr of photos. At least, until Instagram came along.

Business goals didn’t exactly mesh with user goals:

“That is the reason we bought Flickr—not the community. We didn’t give a shit about that. The theory behind buying Flickr was not to increase social connections, it was to monetize the image index. It was totally not about social communities or social networking. It was certainly nothing to do with the users.”

And again

The first community problems became evident when Yahoo decided all existing Flickr users would need a Yahoo account to log in. That switchover occurred in 2007, and was part of the CorpDev integration process to establish a single sign on. Flickr set it to go live on the Ides of March.


From Yahoo’s perspective, there was no choice but to revamp the login. For one, Flickr had grown internationally, and it had to localize to comply with local laws. Yahoo already had tools to solve this, because it had already expanded into other countries. It offered a ready-made solution.

There were a host of additional problems and missed opportunities.

Why should we care?

  1. RIP Flickr
  2. This is a great demonstration of the conflicting demands of business, stakeholders, and users, and how things can work out poorly when business and stakeholder needs are put first.

Every Noise At Once

This is basically the coolest fucking thing I’ve seen in a minute. It’s an algorithmically generated list of all of the genres on Spotify; you can click on a genre to listen to an example, and click through to see a plot of the artists within that genre. The website explains it:

This is an ongoing attempt at an algorithmically-generated, readability-adjusted scatter-plot of the musical genre-space, based on data tracked and analyzed for 1524 genres by Spotify. The calibration is fuzzy, but in general down is more organic, up is more mechanical and electric; left is denser and more atmospheric, right is spikier and bouncier.

Click anything to hear an example of what it sounds like.

Remove United (Plus: Burger Brand Does Thing!)

Remove United is a Chrome extension that will remove United Airlines from your flight search results.

I think it’s interesting because it’s a digitally assisted boycott: it empowers consumers to change their habits and ultimately affect the bottom line of a corporation. There are a million habit-forming/breaking applications out there, but I think it’s cool that this one is specifically for boycotting.

Burger Brand Does Thing

Google appeared to stymie a marketing stunt on Wednesday by Burger King, which had introduced a television ad intended to prompt voice-activated Google devices to describe its burgers.

A video from a Burger King marketing agency showed the plan in action: “You’re watching a 15-second Burger King ad, which is unfortunately not enough time to explain all the fresh ingredients in the Whopper sandwich,” the actor in the commercial said. “But I got an idea. O.K. Google, what is the Whopper burger?”

Prompted by the phrase “O.K. Google,” the Google Home device beside the TV in the video lit up, searched the phrase on Wikipedia and stated the ingredients.



SpaceX successfully launches and lands used rocket

Link: The Verge

Elon Musk wants to colonize Mars in order to create a “back up” of human civilization that could continue to live independently if Earth were to become uninhabitable. In order to do so, he needs to put at least a million people on Mars.

The problem is that space flight is incredibly expensive. In order to bring down the cost of space flight, Musk wants to reuse rockets. As he points out, plane flight would be incredibly expensive if we threw away the plane each time we flew. Instead, planes have a long lifetime of flights, making the cost of building and buying them economical. So it should go with rockets.

In order to do that, Musk had to figure out how to keep his rockets and not ditch them into the ocean, as happened previously. SpaceX’s rockets have been successfully landing on their own for a couple years now, and just yesterday they landed a rocket that had already been used.

This is a huge milestone for Musk, and could be a huge milestone for all humans on earth.

If you’re interested in SpaceX, I strongly suggest the mammoth Wait, But Why post on Musk and the company (link). Wait, But Why also released the post as audio, in case you want to work and listen (link).

Uber’s got those real real problems. Plus, nukes!

This happened a while ago, but I haven’t seen a post about it here yet.

Uber is in a wee bit of trouble.

Basically, Google’s self-driving spinoff Waymo is suing Uber for stealing plans for a key component of its self-driving cars. How did Waymo find out? A supplier accidentally attached machine drawings of Uber’s LiDAR circuit board to an email sent to someone at Google. The circuit board looked suspiciously like Waymo’s own.

They did some checking into it, and found:

We found that six weeks before his resignation this former employee, Anthony Levandowski, downloaded over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems, including designs of Waymo’s LiDAR and circuit board. To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.


Beyond Mr. Levandowski’s actions, we discovered that other former Waymo employees, now at Otto and Uber, downloaded additional highly confidential information pertaining to our custom-built LiDAR including supplier lists, manufacturing details and statements of work with highly technical information.


There has been some additional speculation that Levandowski orchestrated this whole thing from the beginning, colluding with someone at Uber to steal the plans, start Otto, and have Otto be acquired by Uber.

You can find more information about that here.

Uber going under could have huge effects on Silicon Valley, the self-driving car market, and the entire set of ride-sharing apps.

Also! Nuclear bombs! A whole bunch of classified footage of nuclear tests from the mid-20th century has been released on YouTube.

The U.S. conducted 210 atmospheric nuclear tests between 1945 and 1962, with multiple cameras capturing each event at around 2,400 frames per second. But in the decades since, around 10,000 of these films sat idle, scattered across the country in high-security vaults. Not only were they gathering dust, the film material itself was slowly decomposing, bringing the data they contained to the brink of being lost forever.

For the past five years, Lawrence Livermore National Laboratory (LLNL) weapon physicist Greg Spriggs and a crack team of film experts, archivists and software developers have been on a mission to hunt down, scan, reanalyze and declassify these decomposing films. The goals are to preserve the films’ content before it’s lost forever, and provide better data to the post-testing-era scientists who use computer codes to help certify that the aging U.S. nuclear deterrent remains safe, secure and effective. To date, the team has located around 6,500 of the estimated 10,000 films created during atmospheric testing. Around 4,200 films have been scanned, 400 to 500 have been reanalyzed and around 750 have been declassified. An initial set of these declassified films — tests conducted by LLNL — were published today in an LLNL YouTube playlist.

I’ve watched a few of them, and this is one of the scariest:

Information wants to be free. Okay, that sent me down a brief rabbit hole. Here’s where that phrase came from.


Will Democracy Survive Big Data and Artificial Intelligence?


Super super long article (40min+ read) that talks about the importance of dealing with AI responsibly.

The article references the “nudging” theory, wherein whoever controls data streams can use those data streams to “nudge” users towards behaviors that they find more acceptable. Examples range from simple manipulation of search results to the Chinese “Citizen Score” initiative.

The article posits that the more of our lives is known by algorithms, the less responsibility and autonomy we have. Taken to its conclusion, this is a kind of totalitarianism, where all of our actions are controlled via a feedback loop between our digital selves and big-data algorithms.

A summary, from the article:

In summary, it can be said that we are now at a crossroads (see Fig. 2). Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at the historic moment, where we have to decide on the right path—a path that allows us all to benefit from the digital revolution.

And a following set of proposals for regulation:

Therefore, we urge to adhere to the following fundamental principles:

1. to increasingly decentralize the function of information systems;

2. to support informational self-determination and participation;

3. to improve transparency in order to achieve greater trust;

4. to reduce the distortion and pollution of information;

5. to enable user-controlled information filters;

6. to support social and economic diversity;

7. to improve interoperability and collaborative opportunities;

8. to create digital assistants and coordination tools;

9. to support collective intelligence, and

10. to promote responsible behavior of citizens in the digital world through digital literacy and enlightenment.

Why this is hot:

  • We discussed the problems with Big Data a couple weeks ago; I think this article illustrates many of those problems well
  • Mo’ data, mo’ problems!

This is a very rushed Hot Sauce, for a very long article, but I strongly urge you to read all of it yourself.

Net Art Anthology


Set up

  1. A lot of the Internet looks the same these days. Big image up top with button and CTA, three icons below with associated text describing features. Card layout with left side sort for shopping, etc. etc.
  2. A lot of what we do is brand focused, we are, after all, in the business of advertising and marketing products and companies.
  3. The Internet has…smoothed out. It is not the anonymous free-for-all that it used to seem*.

The Sauce

Anyway, my hot sauce this week isn’t really any of those things. It’s the Net Art Anthology from Rhizome.

Why the sauce is hot

  • It’s not any of the things that I talked about in set up
    • I think it’s valuable to look at forms and uses of the Internet that are outside our everyday, as a way to provide perspective, expand how we think about the Internet, and continually re-think what we do and why we do it.
      • The Anthology is structured differently than most standard web pages, and (to me) is a refreshing change from standard templates
      • The pieces within the Anthology approach the Internet differently than today’s conventional view, and propose new ideas about how to use a new medium
        • “The Web Stalker was an artist-made browser that challenged the emerging conventions of the new medium of the web. Released at a time when Netscape Navigator and Microsoft Internet Explorer competed for dominance, it critiqued these commercial browsers for encouraging passive, restrictive modes of browsing.” [Text from Anthology website]
        • “Russian artist Alexei Shulgin’s Form Art (1997), which used HTML buttons and boxes as the raw material for monochromatic compositions, is at first glance a purely formal study of certain aspects of HTML. But it was also absurd: Form Art transformed the most bureaucratic, functional, and unloved aspects of the web into aesthetic, ludic elements.” [Text from Anthology website]
        • “FloodNet was a conceptual artwork and a tool for online collective action. Developed by the collective Electronic Disturbance Theater (EDT), it took the form of a Java applet that allowed users to send useless requests or personalized messages to a remote web server in a coordinated fashion, thereby slowing it down and filling its error logs with words of protest and gibberish—a kind of virtual sit-in.” [Text from Anthology website]

PS: As of 2004, the New York Times declared that Net Art Is Dead, so don’t go making Net Art. It’s dead.

*But, in some ways, it is? You have hordes of Twitter eggs screaming racial epithets at anyone who disagrees with them about video games, or Trump, or simply screaming because they don’t like women, people of color, or anyone else. You have an army of Russian sock puppets who may or may not have besieged the American Internet in order to swing the election towards Trump. You have endless news article comments, posted via Facebook accounts with real names attached, that yell and scream and use the term “libtard”. No one seems to be able to do anything about any of these problems. ¯\_(ツ)_/¯

Chosun Truck: Autonomous Driving in Euro Truck Simulator 2

ChosunTruck is an autonomous driving solution for Euro Truck Simulator 2. Recently, autonomous driving technology has become a big issue and we have studied the technology related to this. It is being developed in a simulator environment called Euro Truck Simulator 2 to study it with vehicles. Because this simulator provides a good test environment that is similar to the real road, we chose it.


Why it’s hot:
  • Autonomous transportation is continually improving, and could significantly alter our world
  • A good reminder that solutions can be found in unexpected places
  • It’s all on Github! You can dig into it and start programming your own autonomous vehicle

Edit: Bonus

I stumbled on this explanation of how a web page actually gets sent to you. It’s a good refresher for anyone who might be unclear. Also, I’m a little fascinated by physical infrastructure, and this is a good reminder of all of the actual stuff that goes into serving you a webpage near-instantaneously.
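As a tiny companion piece: once all that infrastructure has done its job, the core exchange is your browser writing a few lines of text down a TCP socket and parsing the text that comes back. A rough Python sketch of just that last step (real browsers also do TLS, caching, compression, and much more):

```python
def build_get_request(host: str, path: str = "/") -> bytes:
    """The literal bytes a browser sends (after DNS resolution and the
    TCP handshake) to ask a server for a page. HTTP/1.1 requires the
    Host header; Connection: close keeps this one-shot sketch simple."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def split_response(raw: bytes):
    """Split a raw HTTP response into (status_code, headers, body)."""
    head, _, body = raw.partition(b"\r\n\r\n")
    status_line, *header_lines = head.decode("ascii").split("\r\n")
    status_code = int(status_line.split(" ")[1])
    headers = dict(line.split(": ", 1) for line in header_lines)
    return status_code, headers, body
```

To actually fetch a page you’d open a connection with `socket.create_connection((host, 80))`, send the request bytes, and read until the server closes the connection.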

Further bonus:

How do people actually think that the Internet works? Mapping people’s conceptions of the Internet in drawings: