Bots Against Bias…

Research late last year revealed that sources quoted in Financial Times articles were roughly 80% men and only 20% women. To address the imbalance, the FT recently unveiled a new bot aimed at balancing those numbers, calling it “She Said, He Said”.

According to its press release, “She Said, He Said” “uses pronouns and first names [in an article] to determine whether a source is male or a female”, and will then “integrate prompts into the CMS to highlight any gender imbalance prior to publication and remind editors to think about sourcing at the commissioning stage”.

This follows the FT’s previously revealed “JanetBot”, which “tracks the number of women featured in images on the home page”, giving editors real-time feedback as they change what’s featured over the course of each day. It’s all part of a broader strategy the FT is using to try to balance its appeal across genders.

Why It’s Hot

There’s obviously plenty of room for technology to surface bias in news reporting, and it’s great to see one of the world’s most prominent daily outlets using it to do just that. It’s another example of how technology can help us see things we otherwise might not, and allow us to correct them – effectively complementing our human capabilities.

[Source]

Your Kid’s Computer Has Dinner Covered.

Neural networks are machine learning algorithms that mimic the interconnected neurons of a living brain, managing astonishing feats of image classification, speech recognition, and music generation by forming connections between simulated neurons.
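To make “simulated neurons forming connections” a little more concrete, here’s a minimal sketch of a single artificial neuron in plain Python – the weights and inputs are invented for illustration, not taken from any real trained network:

```python
import math

def neuron(inputs, weights, bias):
    """One simulated neuron: a weighted sum of its inputs,
    squashed into the range (0, 1) by a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two neurons "connected": the output of one feeds into the next.
hidden = neuron([0.5, 0.8], weights=[1.2, -0.7], bias=0.1)
output = neuron([hidden], weights=[2.0], bias=-1.0)
```

A real network has thousands of these, and training consists of nudging the weights until the outputs become useful – but each individual piece really is this simple.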

I’m not a neural network expert, so I had to look that one up when I heard that a grad student had loaded neural network code onto her 2010 MacBook Pro and started training it on a collection of recipes and cocktails.

Here are a few recipes the network has generated:

Pears Or To Garnestmeam

meats

¼ lb bones or fresh bread; optional
½ cup flour
1 teaspoon vinegar
¼ teaspoon lime juice
2  eggs

Brown salmon in oil. Add creamed meat and another deep mixture.

Discard filets. Discard head and turn into a nonstick spice. Pour 4 eggs onto clean a thin fat to sink halves.

Brush each with roast and refrigerate.  Lay tart in deep baking dish in chipec sweet body; cut oof with crosswise and onions.  Remove peas and place in a 4-dgg serving. Cover lightly with plastic wrap.  Chill in refrigerator until casseroles are tender and ridges done.  Serve immediately in sugar may be added 2 handles overginger or with boiling water until very cracker pudding is hot.

Yield: 4 servings

This is from a network that’s been trained for a relatively long time – starting from complete unawareness of whether it’s looking at prose or code, English or Spanish, etc., it’s already got a lot of the vocabulary and structure worked out.

This is particularly impressive given that it has the memory of a goldfish – it can only analyze 65 characters at a time, so by the time it begins the instructions, the recipe title has already passed out of its memory, and it has to guess what it’s making. It knows, though, to start by browning meat, to cover with plastic wrap before chilling in the refrigerator, and to finish by serving the dish.

Compare that to a recipe generated by a much earlier version of the network:

Immediately Cares, Heavy Mim

upe, chips

3  dill loasted substetcant
1  cubed chopped  whipped cream
3  unpreased, stock; prepared; in season
1  oil
3 cup milk
1 ½ cup mOyzanel chopped
½ teaspoon lemon juice
1 ¼ teaspoon chili powder
2 tablespoon dijon stem – minced
30  dates afrester beater remaining

Bake until juice. Brush from the potato sauce: Lightly butter into the viscin. Cook combine water. Source: 0 25 seconds; transfer a madiun in orenge cinnamon with electres if the based, make drained off tala whili; or chicken to well. Sprinkle over skin greased with a boiling bowl.  Toast the bread spritkries.

Yield: 6 servings

which bakes first, has the source in the middle of the recipe directions, mixes sweet and savory, and doesn’t yet know that you can’t cube or chop whipped cream.

An even earlier version of the network hasn’t yet figured out how long an ingredients list should be; it just generates ingredients for pages and pages:

Tued Bick Car

apies

2 1/5 cup tomato whene intte
1 cup with (17 g cas pans or
½ cup simmer powder in patsorwe ½ tablespoon chansed in
1 ½ cup nunabes baste flour fite (115 leclic
2 tablespown bread to
¼ cup 12″. oz mice
1  egg barte, chopped shrild end
2 cup olasto hote
¼ cup fite saucepon; peppen; cut defold
12 cup mestsentoly speeded boilly,, ( Hone
1  Live breseed
1  22 ozcugarlic
1 cup from woth a soup
4 teaspoon vinegar
2 9/2 tablespoon pepper garlic
2 tablespoon deatt

And here’s where it started out after only a few tens of iterations:

ooi eb d1ec Nahelrs  egv eael
ns   hi  es itmyer
aceneyom aelse aatrol a
ho i nr  do base
e2
o cm raipre l1o/r Sp degeedB
twis  e ee s vh nean  ios  iwr vp  e
sase
pt e
i2h8
ePst   e na drea d epaesop
ee4seea .n anlp
o s1c1p  ,  e   tlsd
4upeehe
lwcc   eeta  p ri  bgl as eumilrt

Even this shows some progress compared to the random ASCII characters it started with – it’s already figured out that lower case letters predominate, and that there are lots of line breaks. Pretty impressive!
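What the network has absorbed at this early stage is essentially the character statistics of its training text. A quick way to check that lowercase letters really do predominate in output like the sample above (the snippet below reuses its first few lines):

```python
from collections import Counter

sample = ("ooi eb d1ec Nahelrs  egv eael\n"
          "ns   hi  es itmyer\n"
          "aceneyom aelse aatrol a")

counts = Counter(sample)
lowercase = sum(v for ch, v in counts.items() if ch.islower())
uppercase = sum(v for ch, v in counts.items() if ch.isupper())
# Lowercase letters vastly outnumber uppercase ones, and line
# breaks appear regularly -- the same rough shape as real recipes.
```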

Why It’s Hot

Progress, progress, progress. Sometimes we take for granted how long and arduous the road to further our convenience is, or how capably technology actually gets us from point A to point B. We don’t always need to look under that hood, but we should be happy someone does, and technology such as machine learning neural networks continues to evolve to make our lives easier – or at least more entertaining until they get something right. As the ability to learn from the tons of content mankind has already created continues to improve, there really are some scary (don’t-)DIY frontiers on the horizon. Forget about wondering if your kid lifted their essay content from an online wiki source; worry instead if he loaded some code and taught his Mac to ingest thousands of volumes of American history and spit out a dissertation on the significance of the Lincoln-Douglas debates without penning a word. Then don’t punish that kid; get him a job making me new cocktails.

A Bot That Cares

The Internet offers many means of managing stress, just as it offers a steady stream of triggers to increase it. Since November, Twitter users have been experiencing the effects of what appears to be an endless flow of news, mostly bad, often confusing. That news can break at any time, often at night, leads many Twitter users to refresh their streams repeatedly. Something might happen.

Follow TinyCareBot on Twitter, and receive an immediate response with a piece of advice to help you manage your day. Developed a few days after the Election by playwright and author Jonathan Sun (@jonnysun), TinyCareBot has gathered almost 70,000 followers in three months.
Why It’s Hot

Twitter bots are economical and effective. Programmable with only a modicum of coding experience, reactive Twitter bots are a way to humanize a brand’s social media presence through simple, simulated conversation.
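TinyCareBot’s actual code isn’t public, but the core logic of a bot like this can be remarkably small. A hypothetical sketch – the reminder messages are invented in the spirit of the bot, and the actual Twitter API call to post the reply is omitted:

```python
import random

# Invented reminders in the spirit of TinyCareBot's advice.
REMINDERS = [
    "remember to drink some water",
    "look away from the screen for a minute",
    "take a deep breath",
    "stretch your shoulders",
]

def care_reply(seed=None):
    """Pick one reminder. A real bot would post this as a reply
    to whoever tweeted at it; the seed makes testing repeatable."""
    rng = random.Random(seed)
    return rng.choice(REMINDERS)
```

Everything else – listening for mentions, rate limits, posting – is plumbing around a function this simple.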

The Bots Have Eyes

Though fraudulent bots have existed for years, advertisers are only now truly beginning to feel their impact as more and more brands increase their ad spend on websites and social media channels. A study from the Association of National Advertisers reports that online “bots” that mimic human behavior with false clicks on ads could cost the industry more than $7 billion in wasted spending.

Using technology to determine whether online ads were viewed by real humans or by bots, fraud detection company White Ops estimated that the average company lost between $250K and $42M in ad spending to bots.

The report also found that ads purchased to optimize impressions, or those targeted at very specific demographics, get the most exposure to bots. The reason is that the demand for views on these ads is greater than the number of humans who will actually see them.

As brands increase their KPIs for impressions and simultaneously implement specific targeting capabilities online, bots are the ones filling in the holes to make sure advertisers seemingly meet their goals. It’s a classic problem of supply and demand, with bots falsely inflating the supply curve to reach a deceptive “equilibrium.” Some publishers will even pay third-party vendors to “source traffic” to their sites, increasing both views and the percentage of views that come from fake users.

“The problem is bots often behave better than humans. That is to say, if a campaign is trying to reach the goal of clicking an ad or spending time on a page, the bots will click more often and stay longer. The result: Agencies are left optimizing for bot behavior, not human actions.” –Digiday

But there is hope. A White Ops case study found that brand survey results on inventory with bot-blocking technology showed 22% higher brand engagement and 9% higher view rates than inventory without the blocker. Thus, bot-blockers can both save money and improve campaign results.

Why It’s Hot

Ad fraud is a huge problem that will continue to misrepresent the success of online campaigns and cost brands millions of dollars if not combated. Though agencies are beginning to certify publishers in the buying space to help filter out “sourced traffic,” the issue of ad fraud is still not top-of-mind in the industry.

Source: http://mashable.com/2016/01/21/advertising-fraud-study/#EAFtED3ZHgqQ