
Fake Instagram ad for a Fake Russian Group
Reporting by the Associated Press released on Thursday reveals the extent of Russian internet ‘active measures,’ with detailed forensic evidence of worldwide operations between March 2015 and May 2016 aimed at “a master list of individuals whom Russia would like to spy on, embarrass, discredit or silence.” The Wall Street Journal says the Justice Department has identified “more than six members of the Russian government” who were allegedly behind the hack of the Democratic National Committee.
Executives from companies with a major presence on the internet have testified before House and Senate Committees this week. It’s safe to summarize their positions as: We’re sorry about the Russians exploiting our services, it’s not that big a deal, and we alone know how to fix this.
The Russia story, however, is just the tip of the iceberg. The declining cost of technologies involved in manipulating social media content is making these sorts of activities affordable to non-state entities. In Poland, bots have already been used to spread propaganda on behalf of pharmaceutical and natural resource companies.
***
Thanks to a misstep by the hacking entity known to US Intelligence as Fancy Bear, a researcher working with the SecureWorks cybersecurity firm was able to effectively “look over the shoulder” of the hackers. The details of the thousands of emails targeting individuals and corporations were recorded, revealing a much broader set of targets than the previous reporting indicated.

A breakdown of targets outside Russia, via SecureWorks
The AP obtained the data recently, boiling it down to 4,700 individual email addresses, and then connecting roughly half to account holders. The AP validated the list by running it against a sample of phishing emails obtained from people targeted and comparing it to similar rosters gathered independently by other cybersecurity companies, such as Tokyo-based Trend Micro and the Slovakian firm ESET.
The SecureWorks data allowed reporters to determine that more than 95 percent of the malicious links were generated during Moscow office hours — between 9 a.m. and 6 p.m. Monday to Friday….
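The office-hours finding boils down to a simple timestamp analysis: convert each link's creation time to Moscow time and count how many fall on a weekday between 9 a.m. and 6 p.m. A minimal sketch in Python — the function name and the sample timestamps are hypothetical, not the AP's or SecureWorks' actual code:

```python
from datetime import datetime, timedelta, timezone

# Moscow time is UTC+3 year-round (Russia dropped DST changes in 2014)
MSK = timezone(timedelta(hours=3))

def office_hours_fraction(timestamps_utc):
    """Fraction of timestamps falling Mon-Fri, 9:00-18:00 Moscow time."""
    hits = 0
    for ts in timestamps_utc:
        local = ts.astimezone(MSK)
        # weekday() returns 0-4 for Monday-Friday
        if local.weekday() < 5 and 9 <= local.hour < 18:
            hits += 1
    return hits / len(timestamps_utc)

# Hypothetical sample: one link made on a Moscow weekday morning,
# one on a Saturday night
sample = [
    datetime(2016, 3, 14, 7, 30, tzinfo=timezone.utc),  # 10:30 MSK, Monday
    datetime(2016, 3, 12, 20, 0, tzinfo=timezone.utc),  # 23:00 MSK, Saturday
]
print(office_hours_fraction(sample))  # 0.5
```

Run against thousands of real link-generation times, a fraction above 95 percent is what pointed investigators toward a professional, salaried operation rather than hobbyist hackers.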
…The list skewed toward workers for defense contractors such as Boeing, Raytheon and Lockheed Martin or senior intelligence figures, prominent Russia watchers and — especially — Democrats. More than 130 party workers, campaign staffers and supporters of the party were targeted, including Podesta and other members of Clinton’s inner circle.
***
The Senate and House hearings largely consisted of testimony from lawyers employed by Facebook, Twitter, and Google. [The running internet joke was they would have invited somebody from Microsoft’s Bing, but couldn’t find them in search results.]
Senator Dianne Feinstein did some serious finger-wagging.
Via the Los Angeles Times:
Sen. Dianne Feinstein (D-San Francisco), a member of the Senate panel, warned the California-based companies that they needed to be more aggressive at stopping secret foreign use of their technology — or Congress would step in.
“You bear this responsibility. You created these platforms and now they are being misused,” she said. “You have to be the ones to do something about it or we will.”
Liz Posner at AlterNet, which, along with other independent outlets, has seen declines in traffic due to platform manipulation, summed up the day nicely:
For what it’s worth, the tech titans have been relatively cooperative with the process. Twitter announced last week that it would ban all ads from RT and Sputnik, two news sites with ties to the Russian government. And all three built sympathetic lines into their testimonies on Capitol Hill. “The foreign interference we saw was reprehensible,” Facebook’s Stretch told senators. But their promises are weak-willed. As Tim Wu, a professor of law at Columbia University, told the New York Times, “I like that they are contrite, but these issues are existential and they aren’t taking any structural changes. These are Band-Aids.”
Therein lies the key issue: only structural changes—to the way these companies make vast sums of ad revenue, in particular—will stop political interference like what we saw boil over in the 2016 election. It would take a complete structural overhaul to stop this kind of infiltration. Advertisers both large- and small-scale have been flocking to both Google and Facebook ads for years. The advertising process is automated and difficult to monitor. And boosting the visibility of viral content is built into the very heart of companies’ business models: on Facebook, you’re more likely to see ads that your friends “liked.” Google rewards websites that adhere to its vast and ever-changing SEO rules and arbitrarily punishes independent voices like AlterNet at a whim, and on Google Ads, a competitive marketplace for advertisers, companies need only bid a few dollars higher than their competitors in order to push their ads up to the top of a search result page.
The big news coming out of these hearings was copies of Facebook ads placed by Russian entities and a lengthy list of Twitter accounts used to disseminate and amplify messages. As has been reported previously, these ads weren’t just partisan; they were designed to exploit fear and sow division.
A post at Daily Kos reviews a PBS Frontline story called Putin’s Revenge, delving into the underlying motivations and the extent of Russian active measures relating to the election.
What’s important in this overview of the 2016 election’s perfect storm is that it was influenced at some key moments, such as the intersection of WikiLeaks, the Trump grabber tapes, and the news of Russian election interference delivered by the Obama administration.
Combined with the still unfolding investigative story of targeting ads and voting in the actual election, we are suffering the tragedy of 45* and hoping democracy will survive…
…More importantly the program should make viewers more sensitive to how the strategy of tension still exists in the 21st Century and that while there is some trust in US ability to counter Russian active measures, we also are increasingly aware of Trumpian complicity in crippling the federal government.
At TV Worth Watching, Alex Strachan gets into a significant bit of history:
There’s a telling story — skirted over in the Frontline documentary but telling just the same — that Putin, at KGB headquarters for Eastern Europe in Dresden, was literally the last person to turn out the lights, while, outside on the street, a mob went wild, intoxicated by freedom and hopped up on booze. The story is that Putin locked the door behind him, looked at the mob with disgust and resolved then-and-there that the new Russia, his Russia, would never again be reduced by such squalor and disorder.
***
In researching today’s column, I stumbled across a rather amazing essay born of a Google/Jigsaw-funded June 2017 convening organized and led by Samuel Woolley, Research Director of the new DigIntel Lab at the Institute for the Future.
As we’ve learned over the past couple of years, real social media users are influenced by bots, botnets, and sockpuppet accounts to the point that they willingly share false or inflammatory content with their networks.
The bottom line here is that the process of manufacturing consent, as described by Noam Chomsky, is moving beyond simple print and visual formats and becoming more powerful as a result.
When thousands, or tens of thousands of sockpuppet and automated accounts are operated by a single user or group, they can create the impression that many thousands of people believe the same thing. In the same way that mass media like television and radio were once used to manufacture consent, bots can be used to manufacture social consensus.
The essay sounds a somber warning about computational propaganda expanding beyond the political realm:
Until now, computational propaganda has been limited largely to politics. Operating the thousands of sockpuppets and tens or hundreds of thousands of bots necessary to run a large-scale information operation is costly. However, as with every new technology, the cost of operating botnets is decreasing. The software used to wield this type of automation is more accessible than ever before.
Increasingly, bots are being used to target issue-based groups, brands, and corporations. Hollywood films like last year’s Ghostbusters reboot and celebrities like Shia LaBeouf have been targeted by troll armies organized on Reddit and 4chan. It probably can’t be long before attackers running a group of sockpuppet accounts in the US seed fake news about a corporation—say, a massive data breach—amplify that messaging with 10,000 bots until the story is trending on Twitter, and wait until the story is picked up by mainstream media before making a large stock purchase, or simply claiming victory.
According to Woolley, there are defenses the dominant social media and search platforms could mount, including using technology to label automated messaging, sharing data with outside researchers on the dynamics of how bot messages spread, and applying algorithmic “shadow bans” to problematic accounts, making them invisible without actually removing them from the platform.
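The shadow-ban idea is simpler than it sounds: the flagged account’s posts stay in the system and remain visible to their author, but everyone else’s feed silently omits them. A toy sketch in Python — all names and data here are hypothetical, and real platforms implement this at vastly greater scale:

```python
def visible_posts(posts, viewer, shadow_banned):
    """Return the posts a viewer's feed should show.

    A shadow-banned author still sees their own posts, so the ban is
    invisible to them; everyone else's feed quietly filters them out.
    """
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]

posts = [
    {"author": "bot_4312", "text": "AMPLIFY THIS"},
    {"author": "alice", "text": "lunch thoughts"},
]
banned = {"bot_4312"}

print(len(visible_posts(posts, "alice", banned)))     # 1 — alice never sees the bot
print(len(visible_posts(posts, "bot_4312", banned)))  # 2 — the bot notices nothing
```

The appeal, from a platform’s perspective, is that the bot operator gets no signal that the account was caught, so there is nothing to evade; the ethical objection, of course, is that the same mechanism can silence anyone without notice.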
There are huge ethical and political problems associated with reining in activities on social media platforms, and I don’t want to minimize them. But I will point out that the advent of the automobile necessitated state and social intervention to make roads safer. People still break traffic laws, but there are consequences when they are caught. Meanwhile, life is safer and saner for the rest of us.
Looking for some action? Check out the Weekly Progressive Calendar, published every Friday in this space, featuring Demonstrations, Rallies, Teach-ins, Meet Ups and other opportunities to get your activism on.
Did you enjoy this article? Subscribe to “The Starting Line” and get an email every time a new article in this series is posted!
I read the Daily Fishwrap(s) so you don’t have to… Catch “the Starting Line” Monday thru Friday right here at San Diego Free Press (dot) org. Send your hate mail and ideas to DougPorter@SanDiegoFreePress.Org Check us out on Facebook and Twitter.
I confess to relative ignorance in this discussion of high-technology cyberspace events, engineering, and how it is used to influence others. There is one thing that does creep into my mind, though, and it just might be that investigating crimes for most of my adult life created this instinct.
How many others are actually using these technologies and techniques to influence others? Where does this operation rate compared to them, in terms of resources spent and the people involved? How many people are capable of executing this type of operation? How does it compare to the spam attacks we tolerate every day, and does its influence look similar to spam’s? And the biggest question I have: how accurate were the experts from the major cybersecurity companies worldwide when they stated, more or less, that “attribution can be difficult if not impossible because so much of this can be faked”? I think the show was Zero Hour.
Those questions stand out because they were asked but never answered. No response from our experts at the NSA or in other agencies with high-tech capabilities. And the longer they remain unanswered, the more I suspect it’s because they don’t want us to know the answer. Or they don’t know for sure. Or, even worse, they know because they are the ones faking it. I haven’t seen those questions answered yet. I could have missed the answers, but I haven’t seen them.