Monday, September 03, 2018

The CIA Used Artificial Intelligence to Interrogate Its Own Agents in the 80s

Newly declassified documents show that the spy agency had grand plans for computerized interrogation.

Image: Autopilot/Wikimedia Commons
The CIA has notoriously been, well, "innovative" in developing new interrogation techniques (if you consider waterboarding an innovation, at least). Newly declassified documents reveal that willingness to experiment is nothing new: 30 years ago, the spy agency pitted one of its own agents against an artificial intelligence interrogator.
The document in question, written in 1983 and titled "Interrogation of an Alleged CIA Agent," describes a series of experimental tests conducted in the early 1980s in which the CIA repeatedly interrogated one of its own agents, referred to in the report as Joe Hardesty, using a primitive AI called Analiza.
The declassified document is fascinating both for its distinct retrofuture flavour and for its eerie prescience, because the US government is now set to use virtual entities to conduct national security interviews. It's also kind of hilarious, because the interrogation transcript reads like a conversation with a really frustrating chatbot. Here's a slice:
AI: You had better straighten out your attitude, Joe, or it will be a long time before you ever leave this country.
Hardesty: What do you mean by that crack?
AI: You know what I mean.
Hardesty: No, I do not.
AI: Think about it.
Hardesty: Why?
AI: Why do you want to know why?
Hardesty: You made a threat and I want an explanation.
AI: I think the message is clear.
Hardesty: What message?
AI: Do you work for the American Central Intelligence Agency?
Hardesty: Hell, no.
AI: Why do you persist in lying to me?
Hardesty: I am not lying. You have no right to treat me like this.
The intelligence on display in the transcript is clearly rudimentary: the system seems to rely on a mixed bag of predetermined threats, made to goad interrogation subjects into spilling their secrets, and open-ended lines of questioning.
According to the document, Analiza consisted, in part, of a crude machine learning algorithm that stored Hardesty's responses in its memory, along with a pre-set question bank that it could draw from.
"Other aspects of the program are probing Joe's vulnerabilities," the document stated. "AI records 'focus variables,' Joe's tendency to concentrate on various subjects, and 'profile variables' to serve as indicators of Joe's hostility, inquisitiveness, talkativeness, and understandability, and to pose questions about these."
Even way back then, the authors had a striking vision of virtual entities that could learn on their own, adapt, and think abstractly. According to the document, the CIA believed it was possible that computers could "adapt," "pursue goals," "modify themselves or other computers," and "think abstractly."
Potential applications for computer algorithms like Analiza could include training recruits before they head into the field and face the risk of interrogation by a human opponent, according to the document.
The CIA, like the field of artificial intelligence itself, has come a long way since the 1980s, and algorithms that attempt to mimic brain processes (known as artificial neural networks), like those being developed by Google, have achieved many of the goals the CIA set decades ago. The agency itself is heavily invested in AI development today by way of its venture firm, In-Q-Tel, which recently gave a funding boost to Narrative Science, a company developing AI that can glean insight from data and turn it into a semi-readable news article.
"Enhanced interrogation techniques" may very well take on a new, unsettling meaning if the CIA's technological fever dream of the 80s ever comes to fruition. AI interrogation, while presumably less violent and repugnant than waterboarding, for example, could present its own set of moral transgressions.
When your captor is a machine, there is no humaneness to be found, and, hence, no one to plead with. When even that small avenue of humanity is done away with in the proceedings of state-sponsored barbarism, what is left? Illegal detainments could continue with only slight human involvement.
Even though decades' worth of development has passed since the CIA's initial dabbling in AI interrogation techniques, virtual entities that can converse naturally with humans are still far off.
The recent case of chatbot Eugene Goostman, which passed the Turing Test through trickery rather than genuine intelligence, demonstrated this. Even so, with government agencies like the CIA, DARPA, and powerful corporations like Google on the case, the possibility might be closer than we think.

How Much of Your Audience Is Fake? Or Are Your Ads Mostly Being Viewed by Bots?

An article in Bloomberg by Ben Elgin, Michael Riley, David Kocieniewski, and Joshua Brustein suggests that more and more digital ads are not seen by human eyes. "A study done last year in conjunction with the Association of National Advertisers embedded billions of digital ads with code designed to determine who or what was seeing them," according to the article. "Eleven percent of display ads and almost a quarter of video ads were 'viewed' by software, not people."
Another study suggests that $6.3 billion of ad spend a year is wasted on fake traffic: impressions that appear to be human views but are actually the work of software.
The numbers are staggering.
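The ANA study's method, embedding measurement code in the ads themselves to see who or what was "viewing" them, boils down to collecting behavioral signals and applying heuristics. The Python sketch below illustrates the flavor of such a check; the signal names (user_agent, mouse_moves, time_on_page_s, ad_in_viewport) and the thresholds are invented for this example, and real verification vendors use far richer telemetry:

```python
# Substrings that identify common automation tools in a user-agent string.
KNOWN_BOT_AGENTS = ("headless", "phantomjs", "selenium", "python-requests")

def looks_like_bot(impression: dict) -> bool:
    """Flag an ad impression as likely non-human using toy heuristics."""
    ua = impression.get("user_agent", "").lower()
    if any(sig in ua for sig in KNOWN_BOT_AGENTS):
        return True                                   # self-identified automation
    if impression.get("mouse_moves", 0) == 0 and \
       impression.get("time_on_page_s", 0) > 30:
        return True                                   # long "view", zero interaction
    if not impression.get("ad_in_viewport", False):
        return True                                   # ad never actually on screen
    return False

impressions = [
    {"user_agent": "Mozilla/5.0", "mouse_moves": 14,
     "time_on_page_s": 42, "ad_in_viewport": True},
    {"user_agent": "HeadlessChrome", "mouse_moves": 0,
     "time_on_page_s": 90, "ad_in_viewport": True},
    {"user_agent": "Mozilla/5.0", "mouse_moves": 0,
     "time_on_page_s": 60, "ad_in_viewport": False},
]

bots = sum(looks_like_bot(i) for i in impressions)
print(f"{bots}/{len(impressions)} impressions flagged as non-human")
```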
The article also tells the story of ad man Ron Amram, who recently looked at the ROI on his ad spend for Heineken USA. The return on digital was only around 2 to 1, "a $2 increase in revenue for every $1 of ad spending, compared with at least 6 to 1 for TV," according to the Bloomberg article. Even worse, "only 20 percent of the campaign's 'ad impressions' - ads that appear on a computer or smartphone screen - were even seen by actual people."
Where does all this fake traffic come from? "Fake traffic has become a commodity. There's malware for generating it and brokers who sell it," reads the Bloomberg article. "Some companies pay for it intentionally, some accidentally, and some prefer not to ask where their traffic comes from. It's given rise to an industry of countermeasures, which inspire counter-countermeasures."
If fake traffic is bad for advertisers, who is it good for? In some cases, publishers. A website with a large viewership can charge more for its ads, and if it is difficult to distinguish between real and fake views, publishers can make money off their fake audience. Sometimes this is done intentionally and sometimes it is accidental.
A lot of sites buy traffic, especially when they are new or when they are pushing out a new kind of content. There are ways for sites to buy real human traffic through companies like Outbrain, which send viewers from one site to another with attractive links.
"The traffic market is unregulated, and sellers range from unimpeachable to adequate to downright sleazy; price is part of the market's code," according to the Bloomberg article. You've seen the ads for the lower end of the market that promised 1,000 views for $1. Other places, like Taboola might change as much as 20 to 90 cents per viewer.
The Bloomberg article investigates several low-end traffic sellers and tries to determine what percentage of their traffic is human. Not much, it turns out: often between 70% and 90% of the "viewers" on low-end sites were bots. The article concluded that "Ad fraud may eventually turn into a manageable nuisance like shoplifting, something that companies learn to control without ever eradicating."
image from shutterstock.com
